In 2022, OpenAI opened new possibilities in computing and AI to the public with ChatGPT and its unprecedented effectiveness at answering the questions posed to it. But this was only one step, and a new technological advance that is far more capable as an AI could also be far more dangerous to humanity.
Robots and artificial intelligence have intrigued people for a very long time. Traces of the idea can be found as early as 800 BC in Egypt, where the notion of a "man in the machine" was already in use. However, it was only in the 1950s, with the pioneering work of the outstanding British mathematician and cryptologist Alan Turing, and then with the creation of modern computers, that the feasibility of artificial intelligence was seriously entertained for the first time. In concrete terms, it is since the 2000s, with the emergence of deep learning and the enormous volumes of data made available by Big Data, that artificial intelligence has been able to develop more widely.
In more recent years, however, development in the field has skyrocketed, as the average person in the street has witnessed with the ChatGPT personal assistant, among others. The Q* project (pronounced "Q Star"), an innovation OpenAI has just revealed, is alarming. According to a memo obtained by our partners at Reuters, a team of scientists involved in the project addressed an open letter to the board of directors warning about the dangers of this new artificial intelligence.
Even though Project Q* has so far solved only grade-school mathematical problems, its rate of learning could make Q* an AGI (Artificial General Intelligence) very soon. Fiction such as Terminator or Person of Interest has already given us a picture of what an AGI is: the final stage of AI, a system able to do everything a human can do but many times over, which puts humanity in competition with the machine, a competition humanity could very easily lose. This letter about the risks of the Q* project may have been the cause of the dismissal of OpenAI's Chief Executive Officer, Sam Altman, on November 17. The dismissal was short-lived: more than 700 employees stated they would quit for Microsoft if Altman was not brought back to lead the company, and he was reinstated.