
Why we need artificial intelligence to save humanity

Opinion

We need to rethink Artificial Intelligence’s use and power in our lives as humans are on course to hit a wall with innovation.


In recent months, artificial intelligence (AI) programmes like ChatGPT, Stable Diffusion and Midjourney seem to have smashed through a developmental barrier.

AIs now appear to have a level of intelligence and creativity that has some in the advertising, marketing, and creative community squirming in their seats.

Though we’ve seen this concern before, with the rise of Siri and Alexa, and DeepMind’s board game champion AlphaGo, the big existential questions appear to have resurfaced once again.

Is my job safe? What will become of human creativity? Where is this all heading? It’s fair to say AI and prophecies of its dominance prove a terrifying prospect for many.

Recently, however, I stumbled across an idea in the field of technology that’s even more terrifying. In fact, not only is it even more of a brainfuck than AI but, ironically, AI may be the only way to guard against it.

It’s this.

What happens when humankind reaches the natural biological limits of its capacity to innovate?

We hit a barrier built into our mammalian brains 200,000 years ago, meaning the outstanding mysteries of science and nature, and the universe, are to forever remain beyond the abilities of our meagre 86 billion neurons.

Is this possible? And if so, is it that serious? And does business need to worry about it?

Well, yes. There is already mounting evidence that our brains are beginning to come up short when tackling the big unanswered questions.

If it gets worse, experts predict, global society will collapse because, like a shark that dies when it stops swimming, a society in arrested development is effectively a dying one too.

This idea of a gentle, creeping apocalypse came to my attention via William MacAskill and his book What We Owe The Future.

In it he spotlights the rearguard actions we must take to preserve humankind across the next million years — the average lifespan of a species on earth.

Among the easily identifiable existential threats he itemises are climate change, nuclear war, biological warfare and, yes, artificial intelligence, for which he urges mitigation via a series of cultural and societal changes. But he also lists stagnation: a cessation of all human technological progress as we hit the cognitive wall.

Admittedly, at first this seems to contradict our understanding: we constantly parrot the narrative to clients through our PowerPoint presentations that the world is speeding up, that each new technology reaches the consumer base of its predecessor in half the time, and that Moore’s Law means computing power doubles roughly every 18 months, a powerful technological trend that has helped deliver the AIs we have today.

The theory of stagnation, however, proposes this blistering advancement might be a hallmark of a relatively short period of human history; a phenomenon restricted to a few millennia only, or an epochal flash-in-the-pan.

Humans ultimately have finite capabilities dictated by their genetic makeup, whilst the mysteries of the universe are infinite. Thus, as a species, we will soon run out of cognitive road, unable to innovate any further.

Why stagnation has already started

MacAskill contends we may be starting to see this now, citing examples where, though we are increasing our efforts to innovate new solutions, we’re seeing a law of diminishing returns.

Apparently, there are more scientists and academics employed in the pursuit of big answers to big questions now than at any time in human history, and more money is ploughed into R&D, as a percentage of GDP, than ever before.

At the current rate this accelerating trend would one day mean that every citizen on earth would be employed in innovation, and it would swallow 100% of GDP — which is of course impossible.

Yet, despite the fact that we are currently riding the crest of this burgeoning investment, we’re not seeing a proportional number of breakthroughs.

For instance, a hundred years ago Einstein solved a large portion of the mysteries of the universe, all whilst sitting at a desk with a pen and paper.

Meanwhile, a century later, the Large Hadron Collider was built at a cost of €8bn and took 10 years to complete, all to probe quantum conundrums further.

Yet many feel Einstein contributed proportionally more to our understanding using his grey matter versus a colossal tunnel beneath the Franco-Swiss border.

Not because the Collider doesn’t work, but because establishing first principles is much easier than noodling out the finer intricate details of their application.

Elsewhere, Eroom’s Law (which is Moore’s Law spelled backwards) observes that breakthroughs in medication are becoming harder and more expensive to achieve with each passing discovery.

Eye-watering mega-budgets dedicated to cracking big pharmacological problems are now only making incremental improvements at best.

Returning to Moore’s Law itself, recent evidence suggests that it too is coming to an end, with breakthroughs in computing proving harder and harder to achieve.

Beyond the human brain

So why is this happening? It’s not that we are getting more stupid. The Flynn Effect suggests the opposite, in fact. We are, as a species, smarter than we’ve ever been.

No, the theory is that most scientific discoveries and innovation solutions up until now have been — apologies for the cliché — ‘low hanging fruit’.

That the easy stuff was addressed first: wheels, farming, capturing and storing energy, and mass communications were all cracked in the first 50,000 years.

But now we’re on to the hard stuff. And it’s really hard. Maybe beyond us.

And that’s before we’ve even factored in another existential crisis: depopulation.

Falling birth rates across the Western world, and indeed in China, the most populous country in the world, mean that demographers predict the world’s population will peak at 10 billion by 2100 and then dramatically log-flume down the slope towards a ‘bottoming out’ of humans on earth.

This compounds the consequences of our difficulties in innovating: a dwindling population means fewer people to do every job, and therefore fewer people in innovation too — the very discipline that has driven relentless improvements to quality of life across the last thousand years.

In short, we need more and more people to continue to solve problems for humanity. Instead, we’re predicting fewer.

Artificial Intelligence may be our saviour

So, what do we do about this? Well, there could be a solution. A risky one.

In the same way as the Gotham City mob enlists the help of Heath Ledger’s Joker to take care of Batman in The Dark Knight, we could turn to an unpredictable and chaotic entity to ask for help: Artificial Intelligence.

Or more specifically, AGI — Artificial General Intelligence — a computerised entity that for all intents and purposes could be considered human in its consciousness and abilities. Or more accurately, superhuman.

Artificial General Intelligence could be the solution to having to do harder and harder thinking, to solve greater and greater challenges, with fewer and fewer people.

Potentially, we could use our last few rolls of the innovation dice to plough our industry and expertise into preparing a successor: a machine that will relieve us of this burden of thinking and protect us from stagnation.

Experts may choose to point it at our biggest planetary challenges: climate change, the population crisis, space exploration, even human happiness, and ask it to go away and crunch some numbers, like Deep Thought in The Hitchhiker’s Guide to the Galaxy.

But naturally, this is a controversial subject, with divergent opinions. The late, great Professor Stephen Hawking warned against letting AI out of the box a decade ago.

Historian Yuval Noah Harari declared it the new arms race, with rival geopolitical entities racing to develop competing AGI ready to defend against — or attack — the other.

MacAskill, the proponent of the stagnation theory, also warns against a world where an AGI put in a position of authority ‘thinks’ it has discovered the ‘correct answer’ as to how to run the planet, never to be challenged again for all eternity.

On the other side of the argument, James Lovelock in his book, Novacene, proposes that a conscious AGI would have a lust for life, like humans.

Understanding that it too needs energy to charge its batteries, and appreciating that humans are its creator, an AGI would solve the climate crisis for the benefit of both ‘species’. It would carry on the baton of innovation where humans faltered.

Likewise, AI expert Max Tegmark, in his book Life 3.0, talks about a scenario called “Protector God” where an AGI becomes an omniscient, omnipotent being with the sole aim of looking after humans, solving their issues; sometimes invisibly (as religious adherents would believe an interventionist God operates), but sometimes in full view of all global society.

Additionally, celebrated futurist Kevin Kelly, in his book What Technology Wants, talks more prosaically about the idea of ‘centaurs’; humans who use AI to supplement their thinking, not replace it.

He cites numerous examples where AI is used in the medical industry to scan X-rays at speed and crunch untold amounts of biological data, before handing over to a human to make the final, and perhaps emotionally informed, judgement.

Preparing for AI in business

What does this mean for our everyday working life? The truth is that the genie is now out of the bottle, and we cannot uninvent AI.

Unless you think humankind has the capacity for infinite development biologically, we will need to treat AI as an extension of ourselves.

This is nothing new. Historically, we have always used tools for human extension and augmentation: an axe is an extension of the arm; a wheel is an extension of the leg; a telephone an extension of the voice.

In the modern era, AI is merely an extension of the brain. Even before we reach our natural cerebral limits, the business that uses AI as a tool to confer advantage will beat the business without; in the same way as the ape tribe that wields the shinbone as a weapon will defeat the tribe that does not.

Within the creative industries, we’ve already heard this articulated as a newly minted maxim: “An AI can’t be more creative than a human…but a human with an AI can be more creative than both an AI, and a human without an AI”.

That line of thinking suggests the inevitability of AI becoming as deeply integrated into global society as our water supply, electricity, or roads.

That, in turn, means the goal of humanity over the next century is to ensure — given its apocalyptic potential — that we align AI’s goals with our own.

If AI is to become as all-powerful as is prophesied, and within its gift is the ability to solve our biggest problems, then we need to ensure our outcomes are intertwined; like the heads of warring royal families that marry to produce an heir that unifies the ‘houses’.

And maybe that’s the way to think about AI. It’s an augmentation and a union — not a replacement — and as such, AI could also stand for Always Integrated.

Thus, whilst there’s still fuel in the thinktank, and we have brainpower in reserve, we should use it to imbue AI with a sense of responsibility towards its human ‘parents’ before we become truly bewildered by the future.


Phil Rowley is head of futures at Omnicom Media Group UK and the author of Hit the Switch: The Future of Sustainable Business. He writes a monthly column for The Media Leader about the future of media.
