The future of Artificial Intelligence

As promised last week, we are returning to the subject of Artificial Intelligence… and, yes, we have already covered aspects of generative Artificial Intelligence, neural networks, and how this field of computer science can impact businesses on our social media (/fb.watch/jrNGUI1AJQ/) back in January. We feel, however, that the issue of AI applications, and Artificial Intelligence more generally, is of such importance at the moment that it requires further consideration.

Artificial Intelligence’s (or AI’s) rapid technological advancement is, without a shadow of a doubt, increasingly impacting the daily lives of individuals as well as businesses. Owing to quickly developing computer science, AI is now present in virtually everything we do that involves the technologies we use every day. The developments in AI systems are fascinating and, perhaps, also somewhat scary to many.

Suddenly, it seems, AI technologies have invaded most areas of our lives, from art to managing projects of varied scales to virtual assistants to designing spaceship parts. The possibilities of applying machine learning, deep learning techniques, speech recognition, natural language processing, neural networks, computer vision, and virtually anything else that can contribute to the development of AI tools appear limitless.

In this article, you will read about the following:

  • What is Artificial Intelligence?
  • How is Artificial Intelligence developed?
  • The history of Artificial Intelligence
  • Four main types of Artificial Intelligence
  • The role and place of AI systems
  • Is Artificial Intelligence important and what is its future?

What is Artificial Intelligence?

When the term Artificial Intelligence is used, a lot of us automatically think of deadly machines trying to take revenge on humanity: the sort of artificial beings we have seen in apocalyptic visions of film directors or read about in grim science fiction books. The truth, however, is very far from the dark Hollywood scenarios.

AI solutions, most basically, refer to computer simulations of human intelligence (or of how the human brain works) performed by machines or computer systems. They include applications of different sorts, e.g. expert systems (javatpoint.com/expert-systems-in-artificial-intelligence), natural language processing (ibm.com/topics/natural-language-processing), speech recognition (ibm.com/topics/speech-recognition) and machine vision or computer vision (intel.com/content/www/us/en/manufacturing/what-is-machine-vision.html).

Artificial Intelligence systems require a base of highly specialised hardware and software for writing and training machine learning algorithms. There isn’t a single programming language that would be synonymous with AI; nonetheless, a few of them (e.g. Python, R, or Java) are particularly popular.

In principle, AI systems work by ingesting huge amounts of training data, analysing it for correlations and patterns, and then using those patterns to make predictions about future states. The type of training data used obviously depends on the tasks the Artificial Intelligence is meant to perform. This is how, for instance, chatbots (natural language processing) fed copious amounts of chat training data learn to produce lifelike conversations with people, or how, by reviewing millions of items, image recognition technologies learn to identify and describe images. In consequence, Artificial Intelligence learns how to perform tasks commonly performed by humans, in volumes exceeding human capacity and without human intervention.
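To make the “training data in, predictions out” principle more tangible, below is a minimal sketch in Python using the scikit-learn library. The tiny dataset and the idea of classifying “engaged users” are invented purely for illustration; real systems use vastly larger datasets and more sophisticated models.

    # A minimal sketch of learning from training data: the model ingests
    # labelled examples, finds patterns in them, and uses those patterns
    # to make a prediction about an example it has never seen before.
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [messages exchanged, minutes spent chatting] (invented data)
    training_data = [[2, 1], [3, 2], [50, 40], [60, 55], [4, 3], [45, 38]]
    labels = [0, 0, 1, 1, 0, 1]  # 0 = casual visitor, 1 = engaged user

    model = DecisionTreeClassifier()
    model.fit(training_data, labels)   # analyse the data for patterns

    print(model.predict([[55, 47]]))   # prediction for an unseen example: [1]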

Hence, in the most general terms and in its simplest form, Artificial Intelligence is a field that combines computer science with enormous sets of training data, which, in turn, enable machines to solve problems of smaller and greater complexity while limiting the space for occurrences of human error.

How is Artificial Intelligence developed?

During the development process, AI programming and AI projects predominantly focus on three main aspects of cognition, i.e. learning, reasoning, and self-correction, in order to later develop a simulation of human intelligence.

The learning part of the programming process is focused on data acquisition and on the creation of rules which state how training data can later be turned into actionable information. The rules set by human experts (i.e. the AI algorithms) provide machines with detailed, mathematically precise instructions on how individual tasks ought to be completed. The reasoning aspect of AI programming, in turn, is aimed at choosing and applying the right AI algorithms to reach a predetermined result. Finally, the self-correction part of the Artificial Intelligence programming process in modern AI aims at making sure that AI algorithms continually fine-tune and perfect themselves, providing the most accurate results while imitating human intelligence.
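As a rough illustration of the self-correction aspect described above, the toy Python loop below repeatedly measures its own error and nudges a single parameter to reduce it. The one-parameter model and the data points are hypothetical, chosen only to keep the example readable; real AI systems adjust millions of parameters in a similar spirit.

    # Toy "self-correction": apply the current rule, measure how wrong it was,
    # and adjust the rule so that the next prediction is a little more accurate.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, expected output)
    weight = 0.0          # the single parameter the model keeps correcting
    learning_rate = 0.01

    for step in range(1000):
        for x, target in data:
            prediction = weight * x               # reasoning: apply the current rule
            error = prediction - target           # measure the mistake
            weight -= learning_rate * error * x   # self-correction: adjust the rule

    print(round(weight, 2))  # settles close to 2.0, i.e. output roughly doubles the input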

The history of Artificial Intelligence

Although the concept of Artificial Intelligence is quite a recent one, it can, in a way, be traced back to antiquity. Naturally, we don’t mean the modern understanding of the term. However, ancient inventors did engage in constructing “automatons”. The word derives from ancient Greek and, most basically, means “acting of one’s own will”. Obviously, these early machines had little to do with what Artificial Intelligence is now. Nevertheless, it’s noteworthy that inventors and engineers have been pondering the notion of machines able to operate on their own for centuries. The groundwork for modern AI technologies, though, was not laid until the 20th century.

Early developments

Early in the 1900s, there was quite a fuss, mostly among science fiction creators, around the possibility of creating artificial humans. The idea got enough coverage, though, for early computer scientists to start taking an interest in the issue. These early machines or robots were rather simple, mostly steam-powered, with some being able to make very basic facial expressions or take a few steps. Perhaps surprisingly to many, the term robot itself does not derive from English (from which it spread to other languages). It was coined in 1921 by Czech playwright Karel Čapek in his play “Rossum’s Universal Robots”, where the word was used for artificial people for the first time. In 1929, the first Japanese robot, named Gakutensoku, was created by professor Makoto Nishimura (spectrum.ieee.org/the-short-strange-life-of-the-first-friendly-robot). In 1949, computer scientist Edmund Callis Berkeley published a book titled “Giant Brains, or Machines that Think” (monoskop.org/images/b/bc/Berkeley_Edmund_Callis_Giant_Brains_or_Machines_That_Think.pdf) which, in essence, compared computers to the human brain and mind.

Groundbreaking work

The groundbreaking work on AI technology, however, wasn’t done until the first half of the 1950s. In 1950, the legendary Alan Turing (nist.gov/blogs/taking-measure/alan-turings-everlasting-contributions-computing-ai-and-cryptography) published his paper “Computing Machinery and Intelligence”, in which he proposed a test of machine intelligence called The Imitation Game (more commonly known as the Turing Test). In 1952, computer scientist Arthur Samuel (history.computer.org/pioneers/samuel.html) created a checkers-playing program, the first ever program to learn the game independently, without human intervention. In 1955, John McCarthy (computerhistory.org/profile/john-mccarthy/#:~:text=McCarthy%20coined%20the%20term%20%E2%80%9CAI,programming%20language%20lisp%20in%201958.) proposed a workshop on AI at Dartmouth (held the following summer), where the term Artificial Intelligence was first used and popularised.

Maturing AI

The period between 1957 and 1979 witnessed the maturation of the concept of Artificial Intelligence. During this time, AI research faced both struggles and rapid growth. On the one hand, in the 1960s and 1970s, programming languages still in use today were created, books were published, the first autonomous vehicle was constructed, and the first anthropomorphic robot was made in Japan. In 1958, McCarthy created LISP (an acronym for List Processing) (britannica.com/technology/LISP-computer-language), the first programming language for AI research and one still in use today. In 1959, the term machine learning was coined by Arthur Samuel. In 1961, the first industrial robot, called Unimate (robots.ieee.org/robots/unimate/), started work on a General Motors assembly line, doing jobs perceived as too dangerous for humans. In 1965, Edward Feigenbaum and Joshua Lederberg created the first-ever expert system (i.e. an AI system programmed to simulate the thinking and decision-making processes of a human expert by replicating them). In 1966, the first chatterbot (later abbreviated to chatbot) was created by Joseph Weizenbaum. Named ELIZA, it was a mock psychotherapist and used natural language processing (NLP) to hold conversations with humans. In 1968, the Soviet mathematician Alexey Ivakhnenko published “Group Method of Data Handling”, describing an approach to what we now call deep learning. In 1979, the American Association of Artificial Intelligence (currently known as the Association for the Advancement of Artificial Intelligence, or AAAI) was founded. All of the above were relatively successful attempts at explaining and exploring the concept of Artificial Intelligence and at making more or less accurate predictions about both the dangers and the opportunities AI technologies could bring. On the other hand, in that period AI researchers also struggled, as authorities showed very little interest in funding early AI research.

AI boom

Between 1980 and 1987, there was rapid advancement in AI research, and interest in AI technology witnessed a sudden boom. Deep learning technologies and expert systems became popular, owing to which computers were able to learn from their own mistakes and progress in making independent decisions. In 1981, XCON (expert configurer) entered the commercial market; its job was to order components for computer systems based on clients’ needs. Also in 1981, the Japanese authorities decided to invest a massive $850 million (the equivalent of around $2 billion today) in the Fifth Generation Computer Project, i.e. machines that would translate, hold conversations with humans in human language, and reason on a human level. In 1985, the autonomous drawing program AARON was demonstrated at the Association for the Advancement of Artificial Intelligence conference. In 1987, Alacrity (the first strategy managerial advisory system), utilising a complex expert system of over 3,000 rules, was introduced.

AI winter

Sadly, in 1987 an AI winter started and lasted until 1993. Consumer, public, and private interest in Artificial Intelligence decreased dramatically, which in consequence resulted in reduced AI research funding.

AI acceleration

Fortunately, the situation started changing again in the early 1990s. The increasing interest in the notion of Artificial Intelligence was followed by increased AI research funding, which accelerated progress even further. Amongst other things, Deep Blue, developed by IBM, became the first Artificial Intelligence system capable of beating the reigning world chess champion, Garry Kasparov, in 1997. The man vs. machine chess match gathered a lot of media coverage and became quite a sensation. Also in 1997, speech recognition software developed by Dragon Systems was released for Windows. In 2000, professor Cynthia Breazeal developed Kismet, the first robot capable of simulating human emotions through facial expressions (basic eyebrow movements, blinking, lip movements). In 2003, NASA launched its two famous rovers, Spirit and Opportunity, which went on to navigate the red planet’s landscape without human intervention. In 2006, social media giants (Facebook, Twitter and YouTube, amongst others) started using Artificial Intelligence in their ad and UX algorithms. In 2010, Microsoft launched the Xbox 360 Kinect. In 2011, IBM’s Watson (a natural language processing computer program capable of answering questions) made headlines by winning the quiz show Jeopardy! against human champions. In the same year, Siri (the first of the currently popular virtual assistants) was released by Apple.

Artificial General Intelligence

All this brings us to the period between 2012 and the present time and what is called Artificial General Intelligence.

In 2012, two Google-based AI researchers, Jeff Dean and Andrew Y. Ng, managed to train a neural network to recognise cats just by feeding it unlabelled images stripped of any background information or context; soon enough, the network became an expert at doing so. In 2016, Sophia, one of the most famous humanoid robots ever constructed, was created by Hanson Robotics. Sophia is famous for her performance in seeing, replicating emotions, and communicating using human speech, and she also became the first robot to be granted citizenship. In 2017, Facebook programmed two chatbots to hold conversations and learn negotiation skills using human language. Curiously, the chatbots stopped using English as the language of negotiations and ended up autonomously developing their own language altogether. The years 2018, 2019, and 2020 brought further developments: a natural language processing Artificial Intelligence beat human participants in a Stanford reading comprehension test, Google’s AlphaStar reached Grandmaster level in the video game StarCraft II, and beta testing started for GPT-3 (a deep learning Artificial Intelligence that creates language and written content almost indistinguishable from writing created by humans). In 2021, DALL-E was introduced by OpenAI: an Artificial Intelligence able to generate images from natural language descriptions. 2023? Yes, at the moment, 2023 belongs to OpenAI’s Generative Pre-trained Transformer 4 (i.e. GPT-4), which is taking headlines by storm. And since it’s only March, we can’t wait to see what else 2023 has in store for us.

Four main types of Artificial Intelligence

Artificial Intelligence can, most basically, be divided into four main types: reactive machines, limited memory AI, theory of mind AI, and self-aware AI.

Reactive machines

Reactive machines are Artificial Intelligence systems which have no memory of their own and are exclusively task-oriented. This means that a specific kind of input will invariably deliver a specific kind of output, or a predetermined result. A machine learning model, for example, can be a reactive machine if it ingests customer data (e.g. online purchase history) and delivers recommendations to the same customer based strictly on the ingested data. Such AI systems are commonly utilised and extremely useful in services such as streaming platforms, as it would be a tremendous effort for a human being to browse, analyse and recommend content to individual platform users. Reactive Artificial Intelligence also does well in self-driving cars. One of the best and most famous examples of a reactive machine was Deep Blue, IBM’s supercomputer which beat the world chess champion. Deep Blue was capable of identifying the positions of its own and its opponent’s pieces on the chessboard and making predictions. However, it couldn’t draw on past mistakes to make informed future decisions, as it didn’t have the necessary memory capacity.
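To show what “no memory, same input, same output” means in practice, here is a deliberately simple, hypothetical Python sketch of a reactive recommender; the catalogue and rules are invented for illustration and bear no relation to any real streaming or retail system.

    # A minimal sketch of a "reactive" recommender: it keeps no memory of
    # past requests, so the same input always produces the same output.
    RELATED_ITEMS = {
        "sci-fi film": ["space documentary", "cyberpunk series"],
        "cookbook": ["kitchen scale", "chef's knife"],
    }

    def recommend(last_purchase: str) -> list[str]:
        # No internal state is read or written: the output depends only on the input.
        return RELATED_ITEMS.get(last_purchase, ["bestseller of the week"])

    print(recommend("sci-fi film"))   # always the same recommendation for this input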

Limited memory

Limited memory AI is an attempt at imitating how neurons work in the human brain. Owing to that, limited memory AI gets smarter as it receives more training data. Such is the case with deep learning algorithms, for instance, which improve at image recognition with every single image they are fed. Unlike reactive machines, limited memory AI has insight into the past and the capacity to monitor objects and/or situations over time. The data, however, isn’t saved into the AI’s memory in the way experiences are retained by the human mind; the improvement simply occurs over time as more training data is fed to the Artificial Intelligence.
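A rough way to picture this “gets smarter with more data” behaviour is an incrementally trained model whose accuracy tends to improve as additional batches of training data arrive. The sketch below uses scikit-learn’s partial_fit on synthetic numbers purely for illustration; real limited memory systems such as deep learning models work on images, text, or sensor streams instead.

    # Incremental learning: each new batch of data refines the model further.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier()

    def make_batch(n):
        X = rng.normal(size=(n, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a simple hidden rule to learn
        return X, y

    X_test, y_test = make_batch(500)              # held-out data to measure progress
    for batch in range(5):
        X, y = make_batch(50)
        model.partial_fit(X, y, classes=[0, 1])   # learn from this batch only
        print(f"after batch {batch + 1}: accuracy = {model.score(X_test, y_test):.2f}")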

The above two types of Artificial Intelligence are AI technologies which currently exist and are commonly used in numerous devices and AI applications. The following two types (i.e. theory of mind and self-aware AI), in contrast, are still a future vision of Artificial Intelligence, and no real-life examples of these AI technologies currently exist.

Theory of mind

Let’s start with the theory of mind AI system. The theoretical premise behind this form of Artificial Intelligence is its potential to truly understand the world surrounding it: how other sentient beings have emotions and thoughts, and how relationships are created, influenced, and altered by those emotions and thoughts. This, in turn, theoretically ought to shape how these AI systems behave towards other beings. Just as a human being understands how he or she is affected by the emotions and thoughts of other human beings, and how they themselves affect their surroundings, this form of Artificial Intelligence would be capable of processing reality in a very similar manner. Such AI technology, though, would require intelligence closely resembling human intelligence.

Self-aware AI

Self-aware AI systems are currently the pinnacle of AI technology evolution. A self-aware AI system is one that has consciousness, a sense of self, and a deep understanding of its own existence. On the surface, there seems to be little difference between theory of mind AI technologies and self-aware AI systems. Most basically, however, the difference between a theory of mind AI and a self-aware AI would be a shift in understanding from “I am” to “I know I am”. However, since neither neuroscientists nor psychiatrists and psychologists have so far been able to determine the full nature of the phenomenon of consciousness, let alone where and how exactly it emerges in the brain, self-aware Artificial Intelligence will remain beyond the reach of computer science researchers for any foreseeable future.

The role and place of AI systems

Should we be bothered by how much space Artificial Intelligence systems are actually starting to take up in our day-to-day lives? Well… Yes and no… It all depends on several factors, such as how Artificial Intelligence technologies are used, who uses them, what data is collected, and the legislation and ethics behind these technologies.

Let’s not focus on what can go wrong, though, as this is, perhaps, a subject for a completely different text. We don’t want to dismiss the issue, of course; it is a relevant and important notion that ought to be discussed with great care and precision. The reason we don’t want to ponder what could possibly go wrong with AI solutions is simply that it is not the purpose of this article. So let’s take a closer look at the advantages which AI may bring to businesses.

To start with, Artificial Intelligence systems can become powerful analytical tools which provide enterprises with unprecedented insights into their own operations, as AI tools can, for example, repetitively and in a very task-oriented manner analyse volumes of data exceeding human analytical capacity. We can easily imagine a multinational corporation where unimaginable numbers of documents need to be filled in, segregated, and circulated. AI tools are a perfect solution here, as they are capable of completing such tasks quickly and with a relatively small number of errors.

This analytical capacity has not only substantially increased efficiency but also opened up opportunities for businesses to explore entirely new areas, with Google and Uber being particularly prominent examples of companies that have used AI to improve efficiency and gain a competitive advantage.

The utilisation of AI tools doesn’t stop there, however. Imagine a situation in which the CEOs of some major players in certain sectors get substituted by machines. Yes, CEOs. Sounds like sci-fi? Well… In August 2022, the Hong Kong-based company NetDragon Websoft appointed an AI as the CEO of its flagship subsidiary (thehustle.co/should-we-automate-the-ceo/?fbclid=IwAR1I0_Fczucb_QZj6exAYyu-npIP6pCze1-qzD6AQXhfN4nzLODOmZsbnyU).

Due to their versatility, AI applications can be incorporated into numerous types of technologies, such as automation, machine learning, machine vision, natural language processing (NLP), robotics, or autonomous vehicles.

Owing to this vast spectrum of practical applications, AI tools have made it into a variety of business branches and public life sectors. They are now present in sectors such as healthcare, education, finance, law, production, banking, transport, and security.

Is Artificial Intelligence important and what is its future?

So, with AIs growing increasingly potent, holding conversations, increasing their computing capacity, creating art, designing spaceship parts, and even running big businesses, and with their access to all the knowledge circulating the network as well as their exponential ability to learn, are they eventually going to take over the world? The answer is: not in any foreseeable future… They would have to desire to do so, and for them to desire to do so, they would need one key element: awareness, and everything that it entails.

What are we likely to witness in the foreseeable future, then?

Definitely an increase in efficiency in areas ranging from science to education, to security, to healthcare, to entertainment, to business, to whatnot. What we might actually be facing is technological acceleration on an unprecedented scale which, if we have enough goodwill and address the most burning issues that have become our collective problems in every area of life on the planet, will lead us to a brighter, safer, and more sustainable future.
