
Artificial intelligence beyond human intelligence and humans like ants: OpenAI and the risks of Strong AI


Stephen Hawking is gone: the famous astrophysicist passed away in 2018, but when we talk about artificial intelligence it is useful to remember his words. Especially when we talk about the risks associated with it.

Like when he warned (it was 2015) that AI could “wipe out humanity”, using his famous example of the ants. It would not do it out of malice, but out of indifference: if man has to build a new highway, he does not worry about whether the route passes over an anthill; he just builds it, and goodbye to the ants. The ants are insignificant to us: it is not that they are an obstacle to achieving our goals, they are simply something we do not care about.

And in the future we could become this for AIs. More precisely: we could become this for so-called Strong AI, that is (simplifying) a true artificial intelligence that can reason and solve problems autonomously. Like human intelligence, but perhaps better.

In his example, Hawking was speaking of this type of AI, and again in 2015 he and 1,000 other people, including scientists, entrepreneurs and developers, signed a memorandum to remind the world that “AI could be the next nuclear bomb”. Among the signatories of that document was also Elon Musk, who financially supported the birth of OpenAI that same year. Eight years later, Sam Altman’s company is once again talking about Strong AI, infuriating (and worrying) the scientific community.

Read also: What is LaMDA and why it is (not) Google’s answer to ChatGPT, by Emanuele Capone


On Twitter: the analysis of the OpenAI document

OpenAI and the U-turn on being open source

What happened is that a few days ago a text appeared on the OpenAI site (this one), entitled Planning for AGI and beyond and signed by Altman, which talks about short-term and long-term projects. Projects to develop an AGI, which is the technical term for Strong AI: the acronym stands for Artificial General Intelligence and indicates precisely an artificial intelligence that is able to solve problems of a general nature on its own.


In the document, which is quite long and complicated, there are some important passages, which have been highlighted and openly criticized on Twitter by Emily Bender, professor of computational linguistics at the University of Washington, but also by Timnit Gebru, the scientist fired by Google for having denounced the biases of AI, who a few years ago founded Dair, an independent AI research institute “free from the influence of Big Tech”.

The first point to note is that, for the first time, the head of OpenAI admits that “we were wrong about our initial idea of being open (which is why the company is called what it’s called, ed.)” and therefore that “we have gone from thinking that we should share everything to thinking that we should better evaluate what to share and how”.

The other important aspect is that OpenAI wants to stop being a non-profit, which it has been until now: “We did not expect that growing (economically, ed.) would be so important, and when we realized it, we also realized that our original structure wasn’t going to work”, because “as a non-profit organization we would not have been able to raise enough money to carry out our mission, and so we have given ourselves a new structure”.

An AI in the shadow of Microsoft

These two points are fundamental, because they touch on a very delicate aspect of the development of AI, one we have already dealt with several times on Italian Tech (here, for example): the origin of the data needed to train it. It should be remembered that this enormous amount of information is provided to developers for free, with the idea that they make non-commercial use of it. That they do not make money from it, in short. Which does not exactly seem to be the goal of the new OpenAI.

But these points are also somehow connected to each other, and are in turn connected to the economic interests of Microsoft: after having invested billions and billions of dollars in Altman’s company, it is understandable (from its point of view) that the Redmond giant wants anything but for it to remain open source, open to all, accessible and non-profit. There must be a profit, otherwise the investment would not make sense.


Confirming the connection, as Gebru did not fail to point out (in recent days she has repeatedly called the people of OpenAI “clowns”, a word that can also be read as scoundrels and other less polite terms), is the fact that among the co-authors of the document signed by Altman there also appear Brad Smith, Kevin Scott, Brian Chesky and Jack Clark. Who are they? Respectively, the president and the CTO of Microsoft, the CEO of Airbnb and the co-founder of Anthropic. Hardly a non-profit, it would seem.

Read also: On Windows 11 comes the new Bing with ChatGPT, by Bruno Ruffilli


“False to say that ChatGPT is a step towards Strong AI”

Beyond these corporate and business-related aspects, there are others of attitude and approach. Of hubris, reading what Bender wrote: “They really think they are on the road to developing AGI, and they really think they are in a position to decide the good of all mankind”. Which is in fact what is written at the beginning of the document signed by Altman, perhaps with an excess of enthusiasm: “Our mission is to ensure that AGI (an artificial intelligence system smarter than humans) benefits all of mankind”.

And their aim is to create this kind of AI, which, at least according to Altman’s words, they would already be doing with the various versions of their LLMs, such as the well-known GPT-3. This is perhaps the point most bitterly disputed by the scientific community: “They are giving this false illusion that ChatGPT is a step towards Strong AI”, Annalisa Barla, associate professor of Computer Science at Dibris, the University of Genoa, explained to us. “Except that this is not true, at least as it is built now”. In short, to put it like Professor Hiroshi Ishiguro, not only is “ChatGPT not an AI”, but above all “it is not a step on the road to Strong AI”. Despite what Microsoft’s shareholders would like to think.


Another problem is that of the data source: “They say it is important to build an AGI that is beneficial to all humanity, that openness is important, but then they contradict themselves, close themselves off, put a stop to their being open source and are no longer a non-profit”, Barla pointed out to us. Above all, they still fail to document and explain the provenance of their datasets, i.e. (simplifying) the databases used to train their AIs, something the scientific community has been asking to know for over 5 years now. Why is it important? Because not knowing the origin of the data from which AIs learn to talk, solve problems, create images, write texts and much, much more exposes them to the risk of discrimination, abuse by governments, failure to respect minorities and copyright infringement, just to name a few examples.

To use Bender’s words again, “what we need are rules about how data can be collected and used; there is a need for transparency on datasets, on models and on the deployment of text and image generation systems”. It would also be useful if they thought a little less about profit and a little more (really) about the good of humanity. Also because, to paraphrase Hawking, with this artificial superintelligence we had better get it right the first time, because we may not get a second chance.

@capoema
