
Generative AI: with Urania, data analysis is on a large scale

by admin

E4 Analytics, a startup "sister company" of E4 Computer Engineering, recently made Urania available: an end-to-end solution for large-scale Big Data Analytics and Artificial Intelligence, orchestrated with Kubernetes. To find out in detail what its potential and possible uses are (including with generative AI), we interviewed Mario Rosati, CEO of E4 Analytics.

What is E4 Analytics?

It is a startup born in 2018 from the intuition that large-scale data analysis and artificial intelligence would have high performance computing among their enabling factors, an area in which E4 Computer Engineering has always operated. Today, E4 Analytics creates the software platforms that E4 Computer Engineering then markets together with its HPC systems.

How did the idea of a solution like Urania come about?

From the beginning we chose to base our platforms on cloud-native technologies, namely containers and Kubernetes. Containers offer various advantages and lend themselves to high-performance workloads because, unlike a virtual machine, they can access hardware resources directly, without intermediate layers that could limit performance.

The choice of Kubernetes is based on the fact that the large amount of computation needed to train artificial intelligence models can only be carried out in parallel, and the only way to have a container orchestration system that uses multiple servers at the same time is to use Kubernetes. In reality, Kubernetes was not created for high performance, because it does not natively support GPU computing and high-performance networking. We therefore built a Kubernetes distribution for this purpose, introducing the components needed to fully support GPU computing and high-bandwidth, low-latency networking.
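To make the idea concrete, here is a minimal sketch of how a GPU is requested from such a cluster through the standard Kubernetes API, using the official Python client. It assumes the NVIDIA device plugin is installed so that "nvidia.com/gpu" is a schedulable resource; the pod name, container image and training script are purely illustrative and not part of Urania.

```python
# Minimal sketch: scheduling a GPU workload on a Kubernetes cluster.
# Assumes the NVIDIA device plugin exposes "nvidia.com/gpu" as a resource;
# pod name, image and script are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # read the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-training-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.01-py3",  # example CUDA-enabled image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # request one GPU for this container
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```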

We then added a further component. Since managing Kubernetes is not always easy, we introduced a web-accessible graphical interface that facilitates administration and use.

This is the basis on which Urania was developed: an object into which additional components can be inserted to obtain a platform suitable for data scientists and data engineers.

The first version, called GAIA (an acronym for GPU Appliance for Artificial Intelligence), was smaller and was essentially the same type of project, but limited to a single node. It was vertically scalable by adding more GPUs to the server hosting it.


Urania extends this concept by providing one more possibility: distributed data processing systems in which a cluster of containers spread across different nodes cooperates on a single parallel computation. This way you get not only vertical scalability, with multiple GPUs on a single node, but also horizontal scalability, with multiple identically configured nodes that can cooperate.
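As a rough sketch of what this kind of multi-node cooperation on a single parallel computation can look like in practice, the snippet below uses PyTorch's DistributedDataParallel launched with torchrun; it illustrates the general pattern, not Urania's internal implementation, and the model and data are placeholders.

```python
# Sketch: one training process per GPU, cooperating across nodes.
# Assumes launch via `torchrun --nnodes=N --nproc_per_node=GPUS train.py`;
# model and data below are placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")      # NCCL for GPU-to-GPU communication
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])          # gradients synced across all nodes
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

    for step in range(100):
        x = torch.randn(32, 1024, device=local_rank)     # placeholder batch
        y = torch.randint(0, 10, (32,), device=local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()                                   # gradient all-reduce happens here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```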

It is clear that generative AI is a use case to which Urania lends itself well, especially if you intend to use generative AI on premise. We are also working on creating the first generative AI systems specialized for individual sectors or individual companies.

Do you have specific requests? Are you focusing on particular sectors?

We are at an initial stage, and companies, especially medium-large ones, are looking carefully at these systems because they have a wide field of use. The prevalent use today is through the APIs exposed by OpenAI, Microsoft or Google. A topic many of our customers are interested in is having systems that run locally and can incorporate knowledge for which absolute privacy is required. For example, one area is building systems that provide maintenance support for complex machinery: a kind of virtual assistant you can ask questions, today in natural language and perhaps soon also by voice.
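Purely as an illustration of such a locally hosted assistant, the sketch below runs a question against a small open-weights model through the Hugging Face transformers pipeline; the model name and the maintenance question are hypothetical examples, not a description of E4's product.

```python
# Sketch: a local question-answering assistant running entirely on premise.
# The model name is only an example of a small open-weights model; any locally
# available instruction-tuned model could be substituted.
from transformers import pipeline

assistant = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open model, downloaded once and kept in-house
    device_map="auto",                            # place layers on the available GPUs
)

question = "The hydraulic press shows error E42 at startup. What should I check first?"
prompt = (
    "You are a maintenance assistant for industrial machinery.\n"
    f"Question: {question}\nAnswer:"
)

reply = assistant(prompt, max_new_tokens=200, do_sample=False)
print(reply[0]["generated_text"])
```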

There is a real explosion of attention around AI. But do companies actually know what the real value is or are they interested mainly because they hear about it?


It depends on the company. Medium-large ones have sensed the potential and are looking for a way to exploit it in their production cycle. However, there is one aspect to underline: all the companies with the know-how to appreciate the importance of these technologies had already started a cloud strategy, perhaps using AI for forecasting.

Today, however, the growing use of generative AI poses a problem: evaluating whether the cloud is still the ideal environment. As long as the systems were used for forecasting, the computing power required was probably not huge, so the cloud model could be economically convenient. Today, though, the cost of infrastructure is significant even in the cloud, because GPU computing is expensive in itself. So the cloud model, which should in principle deliver economies of scale, is no longer so convenient.

There is another theme: privacy. If, for example, you need to train models to search corporate knowledge, or if you have to fine-tune these models with internal information, keeping everything in house guarantees that privacy is respected.

Large companies have understood that the productivity gains achievable with these intelligent systems should not be underestimated. This applies to coding, document generation and, more generally, to everything related to content generation. There are many use cases, and much of the potential is still unexplored.

However, it should be kept in mind that the computing power needed to train a general-purpose model is available only to the over-the-top players; not even a large enterprise can afford it. So the only way to have a model in house is to start from an open source foundation model and specialize it for your needs. From a computational point of view, this specialization is far less expensive than training the foundation model itself and is therefore within reach of a medium-large company.
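A minimal sketch of what such a specialization step can look like, assuming parameter-efficient fine-tuning with LoRA via the Hugging Face peft and transformers libraries; the base model name and the internal dataset are placeholders, and this is not necessarily the exact procedure E4 uses.

```python
# Sketch: specializing an open source foundation model with LoRA adapters.
# Only a small fraction of parameters is trained, which is what keeps the cost
# far below training the foundation model from scratch. Names are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"                  # example open-weights foundation model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Attach low-rank adapters instead of updating all weights.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Placeholder corpus of internal documents, kept on premise.
data = load_dataset("json", data_files="internal_docs.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("finetune-out/adapter")       # only the small adapter is saved
```

Because only the low-rank adapter weights are trained, the memory and compute budget stays well below that of pre-training, which is what puts the operation within reach of a medium-large company.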


Right now, the effort is to develop foundation models that are smaller in terms of parameters and therefore manageable with more modest hardware resources. This makes it possible to keep them on premise, so as not to expose them to a series of security threats and to maintain data privacy.

Given how rapidly AI is evolving, a system that is excellent today could be useless tomorrow. What is your position on this?

Urania is a system designed to operate with modular AI and is completely based on open source components. It was created around some of the most popular industry standards, and we have been able to quickly integrate the most interesting components released in recent months, so we are confident that the choices we made are paying off. Part of our work is clearly also dedicated to the evolution of the platform, and the solution is open enough that, if components become available that fit together particularly well, we will try to bring them into the platform.

In this regard, it should be noted that we provide our customers not only with the hardware and the ready-to-use platform, but also with a semi-annual update service. We have people dedicated specifically to the evolution of Urania, following open source AI developments. Each innovation is studied and brought into the platform when it makes sense to do so, that is, when it has a real community behind it that guarantees continuity of development and quality documentation.
