Last Tuesday (28th), an open letter signed by famous names in the tech world was published calling for a “pause” in the development of AI. However, artificial intelligence experts have begun to react with criticism of the letter. For them, the content is exaggerated, focuses on “speculative” risks, and was produced by an organization that advocates longtermism.
Among those who signed the letter calling for a pause in AI development are Elon Musk, Steve Wozniak (co-founder of Apple), Jaan Tallinn (co-founder of Skype), Evan Sharp (co-founder of Pinterest), and Yuval Noah Harari (author of the books Homo Deus and Sapiens). Recently, it was revealed that Musk tried to take control of OpenAI, the developer of ChatGPT and the company at the center of the current spotlight on AI.
Experts criticize the letter calling for a pause in AI
Arvind Narayanan, a professor of computer science at Princeton, and Sayash Kapoor, a doctoral candidate at the same university, both artificial intelligence researchers, are two critics of the letter’s contents. For them, the letter focuses on problems that may never happen while ignoring real risks that are already materializing.
To illustrate, the researchers contrast the speculative risk (their term) that AI will amplify disinformation campaigns with the real problem of its irresponsible use.
The letter claims that AI will be used to spread fake news. However, the researchers argue that this already happens with other open AIs, and, let’s face it, it doesn’t even require artificial intelligence: bots are enough. For Narayanan and Kapoor, the real risk today, the one that needs solving, is the use of these tools without proper guidelines. They cite by name the case of the CNET website, which used AI to publish financial advice articles without proper review.
The “speculative risk” of jobs becoming obsolete (worth remembering that one of the signatories fired half the staff of a company he recently bought) was countered with the real cases of worker exploitation and plagiarism in image-generating AIs. Regarding the former, the researchers cited the fact that OpenAI paid less than $2 (R$10.18) per hour to workers who filtered toxic content out of ChatGPT.
The third and final point raised concerns the current risk of data leaks. Integrations of AIs such as ChatGPT with other applications can be hacked, exposing sensitive data. Narayanan and Kapoor argue that the hype around the letter will hurt research into the technology’s vulnerabilities, as companies will further restrict access to their LLMs (large language models).
Emily Bender, a professor of computational linguistics at the University of Washington, also criticized the content. However, unlike Narayanan and Kapoor, she also criticized the philosophy behind the organization that authored the letter.
In a post on her Medium, Bender points out that she agrees with some points of the letter; one of her scientific papers is even cited in it. But she explains that the paper is about LLMs, not artificial intelligence in general.
In addition to criticizing some points of the letter, making counterpoints similar to those of the Princeton researchers, Bender recalled the agenda of the Future of Life Institute, the organization that authored the letter and which counts Elon Musk among its funders.
The University of Washington professor recalls that the organization follows the school of thought known as “longtermism.” This philosophy prioritizes the very long-term future (the origin of the name), thinking a century, 500 years, or even 10,000 years ahead, while the present is set aside. Under that logic, pausing development gets in the way of solving today’s problems.
An example of longtermist thinking is the project to colonize Mars, something advocated by Musk. One criticism of the project is that the money invested in it could address problems impacting humanity today. The most frequently cited is global warming, which could be mitigated by expanding clean energy sources. But one could also cite projects to improve the supply of drinking water to the 2 billion people who currently consume contaminated water.
The critics of the letter are not saying that AI is perfect, nor that it should be developed without controls. On the contrary: Emily Bender, Arvind Narayanan, and Sayash Kapoor show that there are real problems and that they need to be fixed.
However, none of this is easy. In his book Social Acceleration, German sociologist Hartmut Rosa explains his theory of social acceleration. In short (I recommend reading it if you want to know more), Rosa shows how society evolves faster than politics. This means that, one way or another, legislation on the proper use of AI will arrive too late, just as laws on personal data and social media did.