
Experts Want Physical Switch to Turn Off AIs

by Janes

A group of researchers published a paper this week advocating a physical switch to turn off artificial intelligence systems. The paper, published by the University of Cambridge, counts among its authors several members of OpenAI, the creator of ChatGPT and the leading company in generative AI. The scientists' idea is that the hardware running this technology should include physical elements capable of interrupting its operation if necessary.

The researchers' proposal can be compared to a kill switch for AI. A kill switch is a button or other safety mechanism that shuts a machine down in an emergency. A familiar example is the gym treadmill clip that must be attached to the runner's shirt: if the runner falls, the clip pulls the cord and the machine stops.

Even OpenAI members defend a kill switch for AIs
Among the article's 19 authors, five are members of OpenAI, currently the main reference in artificial intelligence. ChatGPT's popularity grew rapidly after its release in late 2022, even though the chatbot is susceptible to failures and "laziness," and then dipped in June 2023.

The rise of ChatGPT sparked a generative AI race and intensified the debate about the technology's potential risks. Google launched Gemini (previously called Bard), Meta launched generative AI for stickers, photo and video creation tools are gaining ground, Elon Musk went on a spending spree to launch his own AI, Samsung debuted Galaxy AI on the Galaxy S24 lineup, Tim Cook talked about AI on the iPhone: you get the idea.

The researchers' proposal for addressing potential safety problems is to build kill switches directly into AI hardware. In the article, the scientists point out that since there are few GPU suppliers (not to say that the segment is basically Nvidia), it would be relatively easy to control who has access to this technology, which in turn makes it easier to identify misuse of AIs.

From the article, the impression is that the researchers consider it feasible to apply the kill switch mechanism to GPUs. The scientists suggest that a safety switch built into the hardware would allow regulatory bodies to activate it if they identify any violations. What's more, the kill switch itself could trigger automatically in cases of misuse.

The authors also propose an operating license for companies, renewed periodically to authorize AI research and development, much like other permits. Without renewal, the system would stop working. Of course, these remote control proposals carry another risk: they become targets for cyberattacks.
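To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of how a renewable operating license could gate hardware operation. The names (HardwareKillSwitch, renew_license, the monthly duration) are illustrative assumptions for this article, not something specified in the paper.

```python
import time

# Hypothetical illustration only: names and values are assumptions,
# sketching the "periodically renewed license" idea from the proposal.
LICENSE_TTL_SECONDS = 30 * 24 * 60 * 60  # assume the license lasts ~30 days


class HardwareKillSwitch:
    """Conceptual stand-in for a safety mechanism built into AI hardware."""

    def __init__(self) -> None:
        self.license_expires_at = 0.0  # no valid license at the start

    def renew_license(self, approved_by_regulator: bool) -> None:
        # In the proposal, renewal would depend on an external authority.
        if approved_by_regulator:
            self.license_expires_at = time.time() + LICENSE_TTL_SECONDS

    def allow_operation(self) -> bool:
        # Without a valid (renewed) license, the hardware refuses to run workloads.
        return time.time() < self.license_expires_at


if __name__ == "__main__":
    switch = HardwareKillSwitch()
    print(switch.allow_operation())   # False: no license yet
    switch.renew_license(approved_by_regulator=True)
    print(switch.allow_operation())   # True: license valid until it expires
```

In this sketch, failing to renew simply lets the license expire, after which allow_operation returns False; a real hardware mechanism would obviously be far more involved, which is precisely why the cyberattack risk mentioned above matters.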
