
A Brief History of Artificial Intelligence: From Alan Turing to Generative AI

Redazione RHC : 30 June 2024 22:47

Artificial Intelligence (AI) is in the spotlight today, generating unprecedented interest and debate. However, it’s important to recognize that this revolutionary technology has a rich history spanning over seventy years of continuous development. To fully appreciate the capabilities and potential of modern AI tools, it is necessary to trace the evolution of this field from its origins to its current state. This historical context not only deepens our understanding of current advancements but also allows us to predict future directions in AI development more accurately.

Alan Turing: Computing Machinery and Intelligence

The history of artificial intelligence begins with Alan Turing’s 1950 paper “Computing Machinery and Intelligence”, in which he posed the question: “Can machines think?” He proposed the “imitation game”, now known as the Turing test, in which a machine is considered intelligent if its responses in a blind conversation cannot be distinguished from those of a human.

The Dartmouth Summer Research Project

The term “artificial intelligence” was first used in 1955, in the proposal for the Dartmouth Summer Research Project on Artificial Intelligence; the workshop itself took place at Dartmouth College in the summer of 1956. Since then, AI has gone through various stages of development.

Starting in the 1960s, expert systems, a form of symbolic AI, began to be developed. These systems encoded human knowledge in specialized domains. A notable example was the R1 system (also known as XCON), which in 1982 was helping Digital Equipment Corporation save $25 million a year by producing efficient minicomputer configurations.

The advantage of expert systems was that specialists without programming knowledge could create and maintain knowledge bases. These systems remained popular in the 1980s and are still in use today.

While expert systems modeled human knowledge, a movement known as connectionism sought to model the human brain. In 1943, Warren McCulloch and Walter Pitts developed a mathematical model of neurons.

The Multilayer Perceptron (MLP)

Among the first computer implementations of neural networks was the ADALINE system, built around 1960 by Bernard Widrow and Ted Hoff. However, such models found practical application only in 1986, with the advent of a learning algorithm for the multilayer perceptron (MLP). This algorithm allowed models to learn from examples and then classify new data.

The MLP is a key architecture in the field of artificial neural networks, typically consisting of three or four layers of artificial neurons. Each layer is fully connected to the next, so the output of every neuron feeds into every neuron of the following layer.

A revolutionary breakthrough in the development of the MLP came with the backpropagation algorithm. This learning method made it possible to build the first practical tool capable not only of learning from a training dataset but also of generalizing the acquired knowledge to classify new, unseen input data.

The MLP works by assigning numerical weights to the connections between neurons and tuning them. Training optimizes these weights so that the network classifies the training data as accurately as possible; once training is complete, the network can classify new examples.

The MLP demonstrates impressive versatility, allowing it to solve a wide range of practical problems. A fundamental requirement for effective use is that the data be presented in a format compatible with the network architecture. A classic example of MLP use is the recognition of handwritten characters; however, to achieve optimal results in this task, the images must first be preprocessed to extract key features.
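To make these ideas concrete, here is a minimal sketch of training an MLP on handwritten digits. It uses the scikit-learn library and its built-in digits dataset, which are illustrative assumptions rather than tools mentioned in the article, and the hyperparameters are arbitrary.

# A minimal sketch: training an MLP on handwritten digits with scikit-learn.
# The library, dataset, and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Each 8x8 digit image is flattened into a 64-value vector, a format
# compatible with the fully connected input layer of the MLP.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

# One hidden layer of 64 neurons; the connection weights are tuned by
# backpropagation to minimize classification error on the training set.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)

# After training, the network generalizes to digits it has never seen.
print("Accuracy on unseen digits:", mlp.score(X_test, y_test))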

From MLP to Convolutional Neural Networks (CNN)

After the success of the MLP, various other forms of neural network emerged. One of these was the convolutional neural network (CNN), which took its modern form in 1998 and is capable of automatically identifying the key features of images.

MLPs and CNNs fall into the category of discriminative models. Their main function is to make decisions by classifying input data, enabling interpretation, diagnosis, prediction, or recommendation based on the information received.
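As a rough illustration of what a discriminative model looks like in code, the sketch below defines a small CNN whose convolutional layers learn image features automatically and whose final fully connected layer turns those features into a class decision. It is written with PyTorch; the framework and the layer sizes are assumptions made for illustration, not something taken from the article.

# A minimal sketch of a discriminative CNN; the framework and layer sizes
# are illustrative assumptions, not the article's own example.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers learn local image features (edges, strokes)
        # automatically, without hand-crafted preprocessing.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A fully connected head maps the learned features to class scores.
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# For a batch of 28x28 grayscale images, the output is one score per class;
# the highest score is the model's classification decision.
scores = SmallCNN()(torch.randn(4, 1, 28, 28))
print(scores.argmax(dim=1))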

In parallel with discriminative models, generative neural networks were also being developed. These models have the unique ability to create new content after being trained on large sets of existing examples. Their scope is extremely broad: generating text, from short messages to full literary works; creating images, from simple illustrations to complex photorealistic compositions; composing music, from melodies to complete pieces; and synthesizing new data sequences that contribute to scientific discoveries in various fields.

Therefore, while discriminative models specialize in analyzing and classifying existing data, generative models open new horizons in the field of artificial intelligence, enabling the creation of unique content and promoting innovation in science and art. This variety of approaches and capabilities demonstrates the versatility and potential of modern neural networks in solving a wide range of problems and creating new forms of intellectual activity.

Generative Adversarial Networks (GAN) and Large Language Models

Generative models include generative adversarial networks (GANs) and transformer networks such as GPT-4, the model behind the conversational service ChatGPT. These models are trained on enormous datasets and are capable of generating text, images, and music.
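As a rough illustration of how such a model is used, the snippet below generates a continuation of a short prompt with a transformer language model. It relies on the Hugging Face transformers library and the openly available GPT-2 model, both assumptions made for illustration; GPT-4 and ChatGPT themselves are accessed as hosted services rather than as local models.

# A minimal sketch of text generation with a transformer language model.
# The library (Hugging Face transformers) and the model (GPT-2) are
# illustrative assumptions, not the systems named in the article.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt token by token, each new token predicted
# from the tokens that precede it.
result = generator("Artificial intelligence has a history that", max_new_tokens=40)
print(result[0]["generated_text"])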

The impressive advances in large language models (LLMs) have given rise to a wave of alarming predictions about AI coming to dominate the world. However, such apocalyptic scenarios seem unjustified and premature. Modern AI models have undoubtedly made significant progress compared with their predecessors, but their development aims at greater capability, reliability, and accuracy rather than at self-awareness or autonomy.

Myths and Fears about Artificial Intelligence

Professor Michael Wooldridge, speaking at the UK House of Lords in 2017, rightly observed: “The Hollywood fantasy of conscious machines is not inevitable, and I see no real way to achieve it.” Seven years later, this assessment remains relevant, highlighting the gap between scientific reality and popular myths about AI.

The potential of artificial intelligence offers many positive and exciting prospects, but it is important to remember that machine learning is just one tool in the AI arsenal. Symbolic AI continues to play an important role in enabling the integration of established knowledge, understanding, and human experience into systems.

Examples of practical applications of this combined approach are numerous. Self-driving cars could be programmed to account for traffic rules, eliminating the need to “learn from mistakes.” AI-based medical diagnostic systems can be validated by comparing their results with existing medical knowledge, increasing the reliability and explainability of the outcomes. Social norms and ethical principles can be incorporated into algorithms to filter inappropriate or biased content.

The future of artificial intelligence looks optimistic and multifaceted. It will be characterized by the synergy of various methods and approaches, including those developed decades ago. This holistic approach will produce more reliable, ethical, and efficient AI systems that coexist harmoniously with human society and complement our capabilities rather than replace us.

Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.