Corporate power

AI sector worries about what it is doing

A growing number of experts warn that artificial intelligence may be very dangerous. Whether the call for a six-month development pause makes sense is another question.
The introduction of ChatGPT has made alarm bells ring louder. Photo: picture-alliance/Stanislav Kogiku/picturedesk.com

In early May, Geoffrey Hinton quit working for Google. Aged 75, he is a pioneer of neural network programming, on which current AI systems are based. He says he left the multinational corporation in order to be free to discuss the risks of the technology. Sometimes called the “godfather of AI”, he now states that chatbots are “quite scary” and could be used by “bad actors”.

Hinton is not the only worried expert. In late April, Norway’s powerful sovereign wealth fund declared that governments should speed up the regulation of AI in order to control risks. It also promised to set guidelines for responsible AI practices for the 9000 companies it invests in. These companies include tech giants like Apple, Google parent Alphabet and Microsoft.

Risks at three distinct levels

Even tech enthusiasts see risks at three distinct levels:

  • False information may spread faster and more effectively due to AI applications.
  • AI can cause serious economic and political disruption, for example by making some professions redundant and causing severe social hardship.
  • Powerful AI might cause the extinction or displacement of humankind. According to a 2022 survey of AI experts, the median estimated likelihood of such an event was 10 %.

A programmatic paper published in late 2021 by Harvard University warned that AI, as currently practised, is misdirected towards centralised decision making. According to the authors, the AI community misunderstands what human intelligence is really about, neglecting debate, pluralism and cooperative action.

This year, the most prominent expert warning probably came in March, in an open letter published by the Future of Life Institute. By 9 May, almost 28,000 people, including many from the tech sector, had signed it. The letter calls for an immediate pause on the development of AI systems that compete with human-level intelligence. In view of the risks posed by such systems, it proposes a six-month moratorium.

The letter argues that AI development must be refocused so “today’s powerful, state-of-the-art systems” can be made “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal.” It also suggests that “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols.” These protocols, moreover, should then be rigorously audited and overseen by independent experts.

Fundamental questions

The letter expresses fears that were previously only raised in the silos of expert communities. It asks: “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?”

Distracting attention from the main argument, Elon Musk, the controversial entrepreneur who runs Tesla, SpaceX and Twitter, was an early signatory. Adding to the confusion, he soon announced he was starting a new AI company. He had actually been an early investor in OpenAI, which is now one of the sector’s major forces. He later withdrew from the company, and OpenAI is now closely allied with Microsoft.

Late last year, OpenAI rose to prominence with the release of ChatGPT. The open letter was timed to coincide with the March release of GPT-4, which is even more powerful. It can hold human-like conversations, summarise lengthy documents, write poems and even pass law exams. It is not entirely reliable, however, so its output must be checked.

Other companies have launched chatbots too. Their potential to disrupt education is obvious. Beyond that, the technology is likely to displace various kinds of routine clerical work in the not-so-distant future. Call-centre workers, accountancy assistants and low-level bureaucrats might be affected.

The proposal of a six-month moratorium, however, is not entirely convincing. As Marietje Schaake of Stanford University has pointed out, parliaments and governments need much more time to pass complex laws. Adding to the difficulties, regulation must be both flexible and firmly enforceable because AI keeps evolving. Schaake agrees that the issues must be tackled fast, however, not least because only a handful of super-equipped companies have the data volumes and computing power needed to develop the most advanced AI systems, and they do not have a track record of openly discussing what they are doing. Even her own university, she admits, cannot compete with the leading AI labs.

The debate is still going on – including in general-interest media. Various high-profile persons have explained why they did or did not sign the open letter. The big question, however, remains open: how will AI be regulated in ways that serve – and protect – humankind?

Link
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Roli Mahajan is a freelance journalist based in Lucknow in North India. Her reporting is based on items that appeared in international media, including The Guardian, The New York Times and the Financial Times.
roli.mahajan@gmail.com
