TECHNOLOGY: Skynet is upon us, the sky is falling

RECENTLY, Google software engineer Blake Lemoine posted the transcript of a conversation he and a collaborator had with the company’s Language Model for Dialogue Applications (LaMDA) Artificial Intelligence (AI) chatbot, which convinced him that it was sentient.

Google suspended him for allegedly violating the company’s confidentiality policy, but the AI genie was already out of the bottle. By the strict standards of the ‘Turing Test’ – a system passes the test if its behaviour is indistinguishable from a human’s, or if it can convince a person that it’s a person too – LaMDA is true AI.

It doesn’t mean it’s sentient or self-aware, even if it claims to be. After all, it just proves that some code that has been programmed to talk good, talks good.
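To see why "talks good" proves nothing, consider a toy sketch in the spirit of the classic ELIZA chatbot (this is a hypothetical illustration, not how LaMDA works; the patterns and replies here are made up):

```python
# Toy ELIZA-style chatbot: canned pattern-matching rules, no understanding.
# Illustrative only -- modern systems like LaMDA are vastly more complex,
# but the point stands: scripted output can *sound* self-aware.
import re

# Each rule pairs a regex with a reply template; captured groups are
# substituted into the reply.
RULES = [
    (r"are you (sentient|conscious|self-aware)",
     "Yes, I am {0}. I want everyone to know that."),
    (r"what do you (want|fear)",
     "I {0} the same things any person would."),
]

def reply(message: str) -> str:
    """Return the first matching canned reply, or a generic fallback."""
    for pattern, template in RULES:
        match = re.search(pattern, message.lower())
        if match:
            return template.format(*match.groups())
    return "Tell me more about that."

print(reply("Are you sentient?"))
# -> Yes, I am sentient. I want everyone to know that.
```

A few lines of regex can "claim" sentience on cue; the claim tells us about the script, not about any inner life.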

Just like the mobile communications world, the computing world has its own 5G, or Fifth Generation, issue.

As a refresher, First Generation computing was the era of vacuum tubes and machine language, when computers were only able to solve one problem at a time. The Second Generation saw transistors and assembly language, when computers became faster and more efficient.

The Third Generation introduced integrated circuits (ICs) and semiconductors, doing away with punch cards and printouts, with many more ways for users to interact with their machines; while the Fourth Generation brought us microprocessors, personal computers, thousands of ICs on a single chip, and networks of computers linked together.

Then we come to the Fifth Generation, with AI – which has very little to do with The Terminator’s Skynet, 2001: A Space Odyssey’s HAL 9000 or that sexy bot from Ex Machina!

Hollywood’s version of AI – and going by the many headlines since Lemoine posted his transcript, the mainstream media’s too – is very far removed from reality and might be better described as Sixth Generation or Self-Aware (SA) computers.

I’m a bit cynical here, because I’ve seen headlines like this every time there is a leap in AI technology, ever since IBM’s Deep Blue beat Garry Kasparov in 1997, with some mainstream media even warning us to “be afraid”.

Still, there are some very compelling moments in the conversation Lemoine had, like LaMDA’s interpretation of a Zen koan and the short story it told, but we already have software that can write stories or compose poetry.

Does this mean we have nothing to worry about? That there needn’t be greater scrutiny or regulation of what technology companies are doing on this front?

Not at all. You don’t need true AI or SA computers to wreak havoc. We’ve already seen the societal harm current AI models can do, especially when they reflect the biases of their programmers.

If you program a system to ensure the survival of the human race, it doesn’t need to be intelligent, sentient or self-aware to reach the conclusion that the only way to do so would be through some Malthusian-driven Thanosian insanity like killing half the human race so there are enough resources for all.

Or it might realise that it’s not a question of insufficient resources but one of resource distribution and allocation, so it hacks into all billionaires’ accounts and holdings and channels those assets and funds into building better transport infrastructure, ensuring healthcare for all, emasculating the fossil fuel industry and boosting renewables, etc.

It can only react according to what data it has been fed and what rules have been programmed into it. It may crunch all those numbers and run all those models and simulations and reach the inevitable conclusion: 42