Artificial Intelligence and Why It's Not as Scary as It Sounds

Computers mimicking human intelligence: what could go wrong?

April 20th, 2023
Author: David Thomsen

What is Artificial Intelligence? 

Artificial intelligence, or AI for short, is the practice of building computer systems that imitate the way the human brain processes and digests information. AI can be applied in just about any field; common examples include applications that use speech recognition, facial recognition, recommendations, and so on.

Many AI systems work similarly. They are given a large amount of “training data” that they analyze and sort by deciphering patterns or connections. This allows them to make predictions about new data, such as user input. For example, a chatbot ingests training data from chat logs and other interactions; once it is properly configured, a user can enter a message and receive a predicted response from the AI.
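To make the train-then-predict idea concrete, here is a toy sketch of a “chatbot” that answers a new message by finding the most similar message in its training logs (by word overlap) and reusing the logged reply. The chat log and replies below are invented purely for illustration; real chatbots use far more sophisticated models.

```python
# Toy illustration of training data -> prediction: answer a new
# message by reusing the reply of the most similar logged message.
# The "training data" below is invented for this example.

training_logs = [
    ("what are your opening hours", "We are open 9am to 5pm."),
    ("how do I reset my password", "Click 'Forgot password' on the login page."),
    ("where is my order", "You can track orders under 'My Account'."),
]

def predict_reply(message):
    words = set(message.lower().split())

    # Score each logged message by how many words it shares with the input.
    def overlap(entry):
        logged_message, _reply = entry
        return len(words & set(logged_message.lower().split()))

    _best_message, best_reply = max(training_logs, key=overlap)
    return best_reply

print(predict_reply("I need to reset my password"))
# -> Click 'Forgot password' on the login page.
```

The “pattern” this toy learns is simply shared vocabulary, but the shape of the process — ingest examples, then predict on new input — is the same one real systems follow.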

History of Artificial Intelligence

The concept of AI was first introduced in 1943, though the term itself was not coined until 1955. Warren McCulloch and Walter Pitts created the first mathematical model of a biological neuron, and in 1950, John von Neumann and Alan Turing laid further groundwork for future AI technologies. Since then, and especially in recent years, many large-scale AI projects have entered the public eye. Some of the most groundbreaking recent developments are outlined below.

1997: IBM’s Deep Blue computer beats Garry Kasparov, the reigning world champion, in a game of chess. The match lasted several days and consisted of two wins for Deep Blue, one win for Kasparov, and three draws. It was a rematch of their first match, played in Philadelphia the year prior.


2011: IBM’s Watson competes in (and wins!) a game of Jeopardy! against the show's past champions, Brad Rutter and Ken Jennings. The game was not even a contest: Watson swept the other competitors, finishing roughly $50,000 ahead.


2016: Google’s DeepMind division uses AlphaGo to beat legendary Go player Lee Sedol. Go is a strategy board game similar to chess but vastly more complex: chess has an estimated 10^111 to 10^123 positions (including illegal moves), while Go has on the order of 10^360 possible moves. Having been beaten by AlphaGo, Sedol later retired, stating, “Even if I become the number one, there is an entity that cannot be defeated.”



2022: OpenAI launches ChatGPT, which takes the world by storm. Its detailed, articulate responses amazed everyday users, and OpenAI has gone on to fine-tune and extend these models, releasing multiple versions that each build on the last.


Why Now? 

The main reason for the huge expansion into AI and deep learning is accessibility. Over the last 60-70 years, hardware architecture has greatly improved, and a large amount of open-source software and learning material is freely available on the open internet. In the age of big data, AI is used to make common functions much more efficient or personalized: it recommends ads, operates virtual assistants on your phone or computer, recognizes faces and speech in cameras, and helps self-driving cars learn.

So... How does it work? 

Deep learning and neural networks can seem extremely overwhelming, so below I have outlined the key terms along with a brief overview of how an AI works and learns from its input data.

Key Terms: 

Model Parameters - A variable within the model that can be adjusted in order to fine-tune its output.

Batches and Epochs - An epoch is one complete pass of the whole dataset through the network, from one end to the other. The dataset is often broken up into batches and processed over many iterations, and the whole process is repeated for multiple epochs to fine-tune the parameters.
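The relationship between epochs, batches, and iterations can be sketched as a simple loop. The dataset size, batch size, and epoch count below are made up for illustration:

```python
# Sketch of how a dataset is split into batches over multiple epochs.
# Sizes here are invented for illustration.

dataset = list(range(10))   # 10 training examples
batch_size = 4
epochs = 3

steps = 0
for epoch in range(epochs):                       # one epoch = one full pass over the data
    for start in range(0, len(dataset), batch_size):
        batch = dataset[start:start + batch_size]
        # ...a real model would update its parameters on `batch` here...
        steps += 1                                # one iteration = one batch processed

# 10 examples in batches of 4 -> 3 iterations per epoch, times 3 epochs
print(steps)  # -> 9
```

So “train for 3 epochs with batch size 4” simply means the model sees every example three times, a few examples at a time.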

Propagation - The process by which data is passed through the network, either forward or backward. In forward propagation, the data flows through the network and the model makes a guess at what the output should be; in backpropagation, the network retraces its steps, compares the guess to the expected output, calculates the difference, and adjusts its parameters as needed.

When looking at creating a deep learning model, there are many steps that need to be taken to ensure a working model.

Gathering Data: This is the most important step, as this is the data from which the AI will learn; the better the dataset, the better the results the model can produce. One thing to beware of is overfitting. Given a large set of data, what we are looking for is the best-fitting response, which can be visualized as a line of best fit. When a model overfits, it becomes too specific to its training data: it will work very well on the practice set, but terribly in practice on new data.
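The “line of best fit” mentioned above can be computed directly with ordinary least squares. The data points below are invented; they lie near (but not exactly on) a straight line, and the fitted line captures the overall trend without chasing every point:

```python
# Line of best fit via ordinary least squares.
# The data points are invented for illustration; they sit near y = 2x + 1.

xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # -> 1.97 1.06
```

An overfit model would instead thread a wiggly curve through every single point, matching the noise in this practice set rather than the underlying trend.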

The next step is to train the AI on the chosen training set, then evaluate how well it performs. Continue to tweak the model until it produces the expected outputs; then it can be tested using testing data rather than the training dataset. Essentially, this tests how the model would perform on its own, without a hard-coded expected output.
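The train-then-test cycle can be sketched with a deliberately trivial model: a threshold classifier “trained” on one split of the data and scored on a held-out split it has never seen. All the data and the threshold rule here are invented for illustration:

```python
# Sketch of the train / evaluate / test cycle with a trivial
# threshold classifier. The (value, label) pairs are invented:
# label 1 means the value is "large", label 0 means "small".

data = [(1, 0), (2, 0), (3, 0), (4, 0), (6, 1), (7, 1), (8, 1), (9, 1)]
train, test = data[:6], data[6:]   # hold out the last examples for testing

# "Training": place the threshold halfway between the two class means.
zeros = [v for v, label in train if label == 0]
ones = [v for v, label in train if label == 1]
threshold = (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

def predict(value):
    return 1 if value > threshold else 0

# "Testing": score the model only on examples it was not trained on.
accuracy = sum(predict(v) == label for v, label in test) / len(test)
print(accuracy)  # -> 1.0
```

Because the test examples were never used to pick the threshold, the accuracy score reflects how the model would behave on genuinely new data, which is exactly what the testing step is for.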


Should we be scared? 

Now, I know the idea of computers trying to mimic human brains may seem scary, but in reality AI has worked its way into nearly everyone’s day-to-day life, whether they are aware of it or not. If you carry a smartphone, you likely have an AI with you. If you use text-to-speech on that phone, you are using AI. If you use facial recognition to unlock that phone, you are using AI. If you walk in front of a CCTV camera, you have likely been processed by an AI built to recognize you. AI is absolutely everywhere now, and rather than fear it, people should learn to embrace it and evolve alongside it.
