Saturday, November 9, 2024

AI: Where are we and where are we going?

Part 2: The state of the art

It is difficult to assess the “state of the art” in artificial intelligence because, in the time it has taken you to read this sentence, AI has advanced. Unlike other scientific endeavours that humans have undertaken – which tend to progress relatively systematically, with the occasional breakthrough – AI is in a period of mind-blowing, unpredictable change.

I recently asked an audience of about 50 people how many were excited about AI. Most people raised a hand. I then asked how many were concerned about AI, and everyone’s hand shot up. This is probably representative of the public in general – we are simultaneously excited and nervous about imminent, massive changes to our lives with the increasing encroachment and integration of artificial intelligence.

Regardless of our individual excitement-to-trepidation ratio, every one of us living in the modern western world has already welcomed AI into our lives willingly – eagerly, even – knowingly or not. Indeed, we may already be on the path to utopia or dystopia, and rapidly approaching the fork in the road that will determine the outcome. As I write this, there is global dismay at recent developments in the pursuit of AGI (artificial general intelligence), of which ChatGPT[1] is seen as an early step (more about this later), to the extent that more than 1,000 AI researchers and technology leaders, including Elon Musk, have signed a letter[2] recommending a six-month pause on training AI systems more powerful than GPT-4. This pause, the letter explains, would primarily allow regulators and regulations to catch up. Is this even possible?

Let’s step back from the brink and look at what led us to this point. In the first article in this series (AI: Where are we and where are we going? Part 1: the basics), I talked about data inputs, outputs, and processors (algorithms), each typically involving contributions from both humans and computers. These are the foundations of AI. There are also different categories of AI that I call gadgets, assistants, and apps. These categories are not necessarily separate entities; an “assistant” can also be an app and a gadget. However, they each use data and algorithms a little differently.

Gadgets have become essential in our modern lives – from the time our alarm wakes us in the morning (after recording how long and how well we slept), we rely on gadgets to get us through the day. They brew our coffee and count our steps; they control our thermostats and rock our babies to sleep. We remotely unlock our car and drive (or let it drive) us to work or school, stopping and going at synchronized traffic light systems. These are just a few obvious examples of hundreds – no, thousands – of gadgets that we accept as normal in our lives. We dutifully do the software updates or automatic maintenance on all of them, without question or suspicion, satisfied that we are always up to date on the latest advancements.

Gadgets involve hardware, but that hardware often gathers and processes data, ostensibly to improve its service to us – whether it does so in its own body or sends the data back to “base” for assimilation and analysis with contributions from its brothers in the field. This is one of the reasons everything seems to need an internet connection, even if it is just a doorbell. “Smart” anything is synonymous with a two-way data stream.
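As a thought experiment, here is a minimal sketch of what that “phoning home” might look like. The endpoint URL and payload fields are hypothetical – real devices use vendor-specific, often undocumented protocols – but the pattern is the same: readings go up, and new settings come back down.

```python
# Minimal sketch of a "smart" gadget phoning home. The endpoint URL and
# payload fields are hypothetical -- real devices use vendor-specific
# protocols -- but the two-way pattern is the same: readings go up,
# and configuration instructions come down.
import json
import time
import urllib.request

TELEMETRY_URL = "https://api.example-thermostat.com/v1/telemetry"  # hypothetical

def report_and_fetch_instructions(device_id, temperature_c):
    payload = json.dumps({
        "device_id": device_id,
        "timestamp": time.time(),
        "temperature_c": temperature_c,
    }).encode()
    request = urllib.request.Request(
        TELEMETRY_URL, data=payload,
        headers={"Content-Type": "application/json"})
    # The response may carry new settings the device will silently apply.
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example (would require a real endpoint to run):
# settings = report_and_fetch_instructions("thermostat-42", 20.5)
```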

“What do we have to fear from these gadgets?” you may be wondering. “They all make our lives so much easier.” That is certainly true, and the benign, face-value use of most of them is just that – an improvement to our everyday routines that relieves us of mundane or unpleasant tasks. There are, however, more secret, possibly even sinister, uses of gadgets. The data they gather is used for product or experience improvement for us, the consumers, but it can also be aggregated for AI to determine how to profile individuals and target them with ad campaigns for everything from consumer goods, to political aims, to societal agendas. Since these campaigns can be tailored to any level of personal detail, they are very effective.

Data and instructions can be sent to your gadgets, too, taking control of your thermostat, for example, which will obey, not you, but an unseen master.

Military drones, weapons, and robot soldiers also fall under my definition of “gadgets”, albeit gadgets that can behave viciously without conscience or remorse, controlled from a safe distance. I will leave the pros and cons of these attributes to your imagination and the military experts.

In the last 10 minutes, I have probably used AI assistants half a dozen times as Microsoft checked my spelling and grammar, correcting typos, suggesting commas, and highlighting phrasing it thinks is too wordy. With one click on the suggestion, evidence of my human fallibility was erased – very useful. Maps, voice and face recognition, games such as chess and Go, video games, music and movie suggestions, internet search engines, and many more virtual aides that entertain us or help us accomplish something fall under the “assistants” category.
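The spell-checking assistant is a good illustration of how simple the core of such a tool can be. Here is a toy version (my sketch, not Microsoft’s method) that suggests corrections by fuzzy-matching a typo against a dictionary, using nothing beyond Python’s standard library:

```python
# Toy spell-check "assistant". Real grammar checkers use statistical
# language models; this sketch just suggests the dictionary word that
# most closely matches a typo.
import difflib

DICTIONARY = ["artificial", "intelligence", "algorithm", "assistant", "gadget"]

def suggest(word):
    """Return the closest dictionary word, or the word itself if none match."""
    matches = difflib.get_close_matches(word.lower(), DICTIONARY, n=1)
    return matches[0] if matches else word

print(suggest("algoritm"))      # -> "algorithm"
print(suggest("inteligence"))   # -> "intelligence"
```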

For a computer algorithm to excel at a game such as chess, it needs only to be programmed with the allowable moves and the criteria for winning – what game theorists call “perfect information”. On its turn, it searches through possible future moves, choosing the one that maximizes its chances of winning. A computer (IBM’s Deep Blue) first beat the reigning world chess champion in 1997, but it took until 2016 for a computer to beat the best player in the world at Go. Go reportedly has 10^170 possible board configurations – that is a 1 with 170 zeros behind it. Lee Sedol, a world champion Go player with years of dedicated practice and sacrifice to hone his skill, stamina, and intellectual ability, was beaten by DeepMind’s AlphaGo program in a tense five-game match in March 2016.[3] For an enthralling documentary about the experience, watch AlphaGo, available on YouTube.
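To make the search idea concrete, here is a minimal sketch of game-tree search (“minimax”). It uses a toy game – Nim, where players alternately take one to three stones and whoever takes the last stone wins – because chess and Go trees are far too large to search exhaustively; real engines add pruning, evaluation heuristics, and, in AlphaGo’s case, deep neural networks.

```python
# Minimal minimax sketch on a toy perfect-information game (Nim).
# Illustrative only -- real chess/Go engines add pruning, heuristics,
# and (in AlphaGo's case) neural networks, none of which is shown here.

def minimax(stones, maximizing):
    """Best achievable score (+1 win, -1 loss) for the computer from here.

    Players alternately take 1-3 stones; whoever takes the last stone wins.
    """
    if stones == 0:
        # The previous player took the last stone, so the player to move lost.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move that maximizes the computer's guaranteed outcome."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, False))

if __name__ == "__main__":
    # With 10 stones, taking 2 leaves the opponent in a losing position.
    print(best_move(10))  # -> 2
```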

Assistants that must work with “imperfect” data such as image recognition require millions of “labelled” data inputs to train a model (the relationship between the inputs and the output). Consider a computer that must learn how to identify school buses in images. It builds its knowledge of how to recognize a school bus by building a reference database of images labelled by humans as “school bus” or “not school bus”. So, when presented with a picture of a small, purple, nine-wheeled shape in a tree, it can determine if it fits the criteria.

(You might have thought those Captcha security images that ask you to pick out all the bridges simply proved you were human. True, but at the same time, you were providing your labelling expertise to enhance AI.)
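In miniature, that training loop looks something like the sketch below. The features (yellowness, length, wheel count) are invented for illustration – real systems learn their own features from raw pixels across millions of images – but the principle is the same: summarize the labelled examples, then assign a new input to the nearest summary.

```python
# Toy illustration of learning from labelled examples. The features
# (yellowness 0-1, length in metres, wheel count) are invented for
# illustration; real image recognition learns features from raw pixels.

# Labelled training data: (features, label) pairs supplied by humans.
training_data = [
    ((0.9, 12.0, 6), "school bus"),
    ((0.8, 11.0, 6), "school bus"),
    ((0.1,  4.5, 4), "not school bus"),   # purple hatchback
    ((0.2,  2.0, 2), "not school bus"),   # bicycle
]

def centroid(label):
    """Average feature vector of all training examples with this label."""
    rows = [f for f, lab in training_data if lab == label]
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(features):
    """Assign the label whose centroid is nearest (squared distance)."""
    return min({lab for _, lab in training_data},
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(features, centroid(lab))))

# A small, purple, nine-wheeled shape in a tree: mostly not bus-like.
print(classify((0.1, 3.0, 9)))   # -> "not school bus"
```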

In the old days (10 years ago), there were computer applications such as accounting, retail sales and inventory, medical, human resources, digital photography, and travel booking. These were typically quite expensive, standalone programs that had to be installed on a computer and required a thick manual and training to learn how to use. Apple’s application marketplace – the “App Store” – democratized (and centralized) the use of computer programs – apps – for everything from banking, to photo editing, to music composing. They are no longer on your physical computer; they are in the cloud. Everything individual users do in these apps is tracked and documented, contributing to the immense amount of data powering and training AI.

Back to ChatGPT, the thing that alarmed the experts and provoked the “pause AI” letter. ChatGPT is a type of “natural language processor” that interacts in a conversational way. It can chat with you and write essays, poetry, even computer code with only a prompt from the user such as “Write a poem about climate change in the style of Shakespeare.” It does this not by looking anything up in the moment, but by drawing on a model trained on the enormous volume of text that has been gathered and voluntarily uploaded by users of the world wide web, generating its response word by word, each word a prediction of what should plausibly come next.
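That “predict the next word” mechanism sounds too simple to produce essays, yet it is the heart of the approach. The sketch below is a drastically simplified stand-in – ChatGPT uses a neural network with billions of parameters trained on web-scale text, not a lookup table built from one sentence – but it shows the shape of the idea:

```python
# A drastically simplified sketch of "predict the next word". This toy
# version just counts which word most often follows each word in a tiny
# sample corpus, then generates text greedily from those counts.
from collections import Counter, defaultdict

corpus = "the sea rises and the storm rages and the sea rises again".split()

# Count word -> next-word frequencies from the training text.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def generate(word, length=6):
    """Greedily emit the most frequent continuation of each word."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the sea rises and the sea rises"
```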

A sister product to ChatGPT is DALL-E, described on its website as “An AI system that can create realistic images and art from a description in natural language.”[4] With the tap of a few keys, real (human) artists and writers could be made redundant. Everyone who values creativity and originality might understandably be seriously worried. Both products are the creation of OpenAI, a company co-founded in 2015 by, ironically, Elon Musk. Frankenstein’s monster, perhaps?

I think the concept that scared the 1,000 AI developers who signed the letter was how easily people could be deceived by entirely made-up words and pictures that seem real, with potentially dire consequences.

In 1938, Orson Welles caused panic across the U.S. with his convincing Halloween news report of Martians landing in Grovers Mill, New Jersey.[5] We laugh at that now, but with AI able to create realistic fake videos – to make real people appear to say things they never said – would we behave any differently? I don’t think we would stand a chance.

One last thing: artificial intelligence does not, in fact, exist – at least not by the human definition of “intelligence” that implies abstract thought and original ideas. AI is powerful only because it can process inputs at lightning speed, comparing them against models and databases that are constantly updated and retrained with unwitting human guidance provided through our daily interactions with the internet. Humans might be relieved to know that they can still out-think computers, but AI doesn’t have to be able to think to make us feel like we’re not good enough. Just ask Lee Sedol.

[1] OpenAI, Introducing ChatGPT, accessed April 8, 2023.

[2] Pause Giant AI Experiments: An Open Letter, accessed April 8, 2023.

[3] AlphaGo versus Lee Sedol.

[4] DALL-E 2, accessed April 8, 2023.

[5] H. G. Wells, The War of the Worlds, first serialized in 1897. Orson Welles adapted it for his radio broadcast in 1938.

Laurie Weston
Laurie Weston is a co-founder and scientific strategist for BIG Media, with a Bachelor of Science degree with honours in Physics and Astronomy from the University of Victoria in Canada. Laurie has more than 35 years of experience as a geophysicist in the oil and gas industry. She is president of Sound QI Solutions Ltd., a data analysis software and services company she founded in 2007.

2 COMMENTS

  1. Great article, Laurie! I’m an AI skeptic and I think the most important part of your article is the very last paragraph. I wish that everyone would understand this. If there is “danger” in AI in the near future, I think it will be when humans decide to turn over control of political and military decisions to AI databases. I guess that’s already in its infancy with self-driving cars, yet that doesn’t seem so bad because the goal of that machine is to be as safe as possible and exist peacefully in the environment. Sure, it can make mistakes, but I think the data is showing that machines are possibly better drivers than humans on average. The AI in self-driving cars is not responsible for new decisions – it’s just executing known safe behaviour.

    Somewhere there is a line and I think it starts appearing when AI takes over our power of choice or control of new ideas. It’s easy to “feel” that AI police with guns is not a good idea (although if that AI police can better distinguish a real gun vs. a fake gun than an emotionally charged human and avoid shooting an innocent person…). There is still also a huge gap between AI processing data and actually thinking and feeling. I don’t know if we’d ever get to the “Terminator” or “Matrix” scenarios….

    Like so many things, the global scope of AI is too vast to fully grasp, let alone act on. In situations like this, I always ask myself what I can do as an individual to impact my local sphere of influence, since that’s the only thing I can really affect. As a parent, I know it will be important to pay attention and emphasize the critical necessity of my kids putting in the hard work to learn how to write their own poetry and computer programs. I can definitely see the challenges on the horizon though, and I don’t know exactly how I will respond to the inevitable push-back.

    • Thank you for the insightful comments, Jason. With the pace of change in AI, I, too, believe we are in uncharted territory and need to tread carefully. As you pointed out, there are definitely advantages to logical, unemotional decision-making, but we need to remember that the logic in these systems is designed and implemented by humans who embed their own morality and biases. At the same time, we are drowning in data and could certainly use assistants that don’t require sleep, food, days off, affirmation, or benefits! AI will be a game-changer, hopefully for net good.

      Watch for more articles on the subject. And feel free to write your own member post.
