Meta’s new AI offensive exposes Europe’s dangerous hardware gap


The Facebook parent company Meta presented its latest achievements in the field of artificial intelligence on Thursday. Meta’s Chief Infrastructure Engineer Santosh Janardhan explained in a blog post how the group intends to build its own infrastructure for training artificial intelligence in the future – relying on self-developed technology down to the level of individual chips.

“Meta Training and Inference Accelerator” (MTIA) is the name of the chip Meta developed for this purpose. It is optimized for the energy-efficient training and use of large AI models (“inference” is the application of trained knowledge to new data). With it, Meta primarily wants to cut hardware costs significantly, but also the time required for training large AI models such as OpenAI’s GPT-4.
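The difference between training and inference can be sketched in a few lines. This is a deliberately tiny toy model – a single weight fitted by gradient descent – not anything resembling Meta’s actual stack; it only illustrates why the two phases place such different demands on hardware:

```python
import random

random.seed(0)

# Toy "model": a single weight that should learn the rule y = 3 * x.
# Real accelerators run billions of such updates in parallel.
w = 0.0

# Training: repeatedly adjust the weight to reduce prediction error.
# This loop is the expensive phase – it touches the data many times.
for _ in range(1000):
    x = random.uniform(-1, 1)
    error = w * x - 3 * x      # prediction minus target
    w -= 0.1 * error * x       # gradient-descent step

# Inference: apply the trained weight once to new, unseen input.
print(round(w * 5))            # the learned rule gives y = 3 * 5 = 15
```

Training iterates over the data thousands of times, while inference is a single pass – which is why a chip can be tuned differently for each workload.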

Meta does not plan to sell these chips to competitors, and many details about the chip remain secret. However, as the group revealed to investors, it is based on the energy-saving open RISC-V architecture and, with a power requirement of 25 watts, is said to compute significantly more efficiently than the A100 graphics chips (GPUs) from Nvidia, which have so far been the standard for training AI.
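The power figures can be put side by side. The A100 value below is the published 400 W TDP of the SXM variant (an assumption for the comparison, since the article does not name it), and raw wattage alone says nothing about work done per watt:

```python
# Per-chip power draw: the 25 W cited for Meta's MTIA chip versus the
# 400 W TDP of an Nvidia A100 in its SXM variant (assumed for comparison).
mtia_watts = 25
a100_watts = 400

ratio = a100_watts / mtia_watts
print(f"An A100 draws {ratio:.0f}x more power than one MTIA chip")
```

Actual efficiency is throughput per watt, and since MTIA’s throughput figures are not public, the 16x power gap by itself does not settle which chip computes more per joule.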


Given the significant investment required to develop and manufacture a new chip – chip-industry experts estimate several hundred million dollars for the design and production of a small series – Meta’s move shows how important in-house hardware is for AI development, and what competitive advantages the group hopes to gain from using AI as efficiently as possible.


The chips are to be used in a new generation of data centers that Meta is currently developing, optimized to make compute-intensive AI applications suitable for everyday use. The group is currently working on a supercomputer for AI, the “Research SuperCluster”, which is intended to set a new standard for efficient and fast AI training with 16,000 graphics chips and an undisclosed number of the new MTIA chips.


For comparison: The German AI startup Aleph Alpha opened Europe’s most powerful AI data center called “alpha ONE” near Bayreuth last year, with just 512 graphics chips.

Significant head start for the US giants

“Hardware will become more and more important in the coming years,” engineer Janardhan is convinced. “Over the next decade, we will see increasing specialization and customization in chip design, purpose-built and workload-specific AI infrastructure, new systems and tools for large-scale deployment.”

Only companies that master not only software but also hardware are competitive here – a considerable head start for the US giants, for which investments in the billions are no obstacle.

This is exactly where the problem with the use of artificial intelligence has lain so far: training in particular, but also inference with large models such as ChatGPT, is so hardware-intensive that only a few companies in the world can afford it – giants like Microsoft or Google, which have their own hardware.


Even after a model like GPT is fully trained, billions of calculations must be performed for every request, every chat response. In comparison, classic database applications or the generation of web pages are considerably less complex.
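The scale of those “billions of calculations” can be made concrete with a common rule of thumb: a transformer forward pass costs roughly two floating-point operations per model parameter per generated token. The model size and response length below are illustrative assumptions, not published OpenAI figures:

```python
# Rule of thumb: one forward pass ≈ 2 FLOPs per parameter per token.
params = 175e9    # assumed model size (GPT-3 scale, for illustration)
tokens = 500      # assumed length of one chat response

flops_per_request = 2 * params * tokens
print(f"{flops_per_request:.1e} FLOPs per request")
```

That works out to on the order of 10^14 operations for a single chat response – hundreds of trillions – compared with the few thousand operations a typical database lookup needs.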

Training a large model for a specific purpose, using custom training data, costs about $4 million per run, according to analysis firm Forrester. A single request can cost several cents, depending on the complexity of the model used – that doesn’t sound like much at first, but with over 60 million requests per day, as is currently the case with OpenAI, the costs quickly add up to over a hundred million dollars a month.
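The monthly total follows from simple arithmetic. The per-request price below is an assumed mid-range value, since the article only says “several cents”:

```python
requests_per_day = 60e6     # figure cited in the text
cost_per_request = 0.06     # assumed: "several cents" per request
days_per_month = 30

monthly_cost = requests_per_day * cost_per_request * days_per_month
print(f"${monthly_cost / 1e6:.0f} million per month")  # $108 million
```

Even at the low end of “several cents” (3 cents), the bill still lands above $50 million a month – which is why hardware costs dominate the economics of running such a service.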

AI startups like OpenAI have to keep waiting lists, limit the number of requests, and specifically seek support from corporations like Microsoft in order to cope with the considerable hardware investments. We are currently only at the beginning of the AI revolution.


So far, there has simply been a lack of the necessary computing power worldwide for the mass use of AI. Developing one’s own hardware in good time brings long-term competitive advantages – accordingly, Meta keeps its chips secret and does not sell them to the competition.


In contrast, Meta is much more generous when it comes to sharing its software: in March, the group initially made its large language model LLaMA available as open-source software only to selected researchers. Within a few days, however, the program code landed first on the 4chan forums, then on the relevant download sites. The model is now the basis for dozens of open-source projects in the field of artificial intelligence.

Meta hopes that its own model could become a quasi-standard for the development of artificial intelligence. “The open platform will win,” Meta’s chief AI scientist Yann LeCun told the New York Times.

EU regulation could fall short

But Meta can be sure that no one can use the LLaMA model as efficiently as Meta itself – because only the group has also developed the hardware that fits the model, and can train and optimize it without regard to costs and hardware resources.

At the same time, this development shows why a regulation of artificial intelligence that focuses solely on software, as currently drafted by the European Commission, could fall short.

The EU is specifically planning that developers of potentially risky AI models will have to register them in Europe. Currently, almost every large model is associated with risks because they can be used universally – even for fraud and manipulation.


The EU would also like to regulate who is liable for the training and use of large models, and wants new models certified as largely risk-free before they are deployed. If a start-up without its own hardware resources uses and modifies a large provider’s model via an interface (API), the large provider should also be responsible for this certification. Open-source providers, too, would have to provide this certification.


First analyses of the draft show that providers like Meta, which control the entire value chain, have significant advantages over smaller start-ups that rely on others’ models and hardware for training. Only the big players can prove to the EU beyond any doubt how their models work and how risks are mitigated.

For this certification, start-ups would be dependent on the cooperation of the large providers, who might have to disclose secret details. The EU would therefore give the big US providers the opportunity to slow down smaller competitors by referring to trade secrets.


At the same time, however, the EU fails to take into account that it is not software but hardware that is the limiting factor in AI development: the open-source software has long been freely available, and nobody can control how it is used and developed further. A law that now wants to hold open-source developers liable is chasing a genie that is already out of the bottle.

Anyone who has a high-performance data center can train a model themselves on this basis. But anyone planning fraud or malicious use is unlikely to report it to the EU. Regulation could, however, work via access to these data centers, without which the use of AI systems is not possible.

The USA, for example, realized this long ago – and implemented it in the form of geopolitical measures: the US government is currently doing everything it can to ensure that the chip technology necessary for training and running AI algorithms can under no circumstances be exported to China.

