
Unity sets out to promote the application of neural network rendering technology, disrupting how the virtual 3D world is presented


Original title: Unity sets out to promote the application of neural network rendering technology, disrupting how the virtual 3D world is presented

In recent years, the booming metaverse industry has attracted widespread attention. Technologies across the Internet and software industries are developing rapidly, forming the basis for imagining a future metaverse era. In terms of production processes, however, the field is still in a pre-industrial state: efficiency is low, and industrialized, large-scale production remains a long way off. The high cost of producing digital assets and the demanding requirements for technical talent have become major obstacles to the metaverse's development.

Take the digital humans so popular in recent years as an example. Behind their dynamic performances lies painstaking frame-by-frame work by animators: the cost is high, and the production process is time-consuming and laborious. Introducing AI into the production pipeline, and training tools to act as an extension of the developer's brain, is therefore imperative. Only by bringing AI into digital asset production can high-quality virtual content be produced at scale, leading the productivity revolution of the metaverse era.

A vibrant metaverse world created with Unity AI interactive technology

Unity AI interactive technology efficiently empowers the production of metaverse digital assets

The metaverse has three indispensable basic elements: people, places, and things. Together, these digital assets constitute the metaverse world and shape people's experience within it. As the world's leading platform for creating and operating interactive, real-time 3D content, Unity has a presence and deep expertise in virtual humans, in-vehicle systems, digital twins, AR/VR, and other metaverse-related technical fields. Unity has long been deliberately exploring how AI can accelerate the digital asset production pipeline, and the virtual digital human is the earliest application scenario in the metaverse.


At present, when creating virtual digital humans, most companies have a model perform extreme expressions inside a light-field capture rig, shoot them, and then have artists clean up the resulting model frame by frame with keyframe work. This step is extremely time-consuming, but it has been the only way to avoid the "uncanny valley" effect. Better solutions now exist. For example, Ziva Dynamics, the world-leading digital character creation company acquired by Unity, excels at using machine learning for real-time character creation and is proficient in complex simulation and model deformation.

Today, Ziva and Unity are jointly planning a development roadmap focused on popularizing affordable, scalable real-time 3D facial technology, so that digital character performances can be completed without expensive head-mounted cameras (HMC) or volumetric capture equipment. With Unity AI tools such as Ziva RT, Unity Deep Pose, and Kinematica, character face creation that once took weeks or even months can be condensed into a single click in the cloud, greatly shortening the creation process and freeing creators to focus on their creativity.

Kinematica, developed on Unity AI interactive technology

AI features empower virtual scene generation, keeping Unity digital assets a step ahead

As another major component of the metaverse, virtual scene production is just as important, and its volume far exceeds that of virtual humans. In a virtual world spanning thousands or even tens of thousands of square kilometers, placing and designing every inch of land purely by hand would be a disaster. Scene creation for Unity digital assets has therefore also introduced AI features. For example, Unity World Generation is an AI-driven art-assist tool: with a few "sweeps" of the brush, a developer can raise a mountain out of thin air, with realistic light and shadow simulated in real time.
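The "sweep to raise a mountain" idea rests on procedural heightmap generation. As a rough illustration only (Unity's actual World Generation model is not public), a toy fractal heightmap can be built by summing sinusoidal octaves with random phases, where each octave doubles the frequency and halves the amplitude:

```python
import math
import random

def fractal_height(x, z, octaves=4, seed=0):
    """Toy fractal terrain height: a sum of sinusoidal octaves.

    Illustrative only -- every name and parameter here is invented;
    this is not Unity World Generation's algorithm.
    """
    random.seed(seed)
    h, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        # Random phase offsets keep the octaves from aligning.
        px, pz = random.random() * 10, random.random() * 10
        h += amp * math.sin(freq * x + px) * math.sin(freq * z + pz)
        amp *= 0.5   # each octave contributes half the amplitude...
        freq *= 2.0  # ...at twice the spatial frequency
    return h

# Sample a 64x64 heightmap over the terrain.
heightmap = [[fractal_height(x * 0.1, z * 0.1) for x in range(64)]
             for z in range(64)]
```

Real terrain tools typically use Perlin or simplex noise plus erosion simulation instead of raw sinusoids, but the octave-summing structure is the same.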

See also  Ranking – The best professionals for your money

In addition, Unity has introduced Smart Assets, digital assets in which every element is driven by AI. The user visually controls the proportion of each element in the scene, and the system automatically generates a scene that conforms to physical reality. All calculations are handled by AI, with no manual parameter tuning required.

World Generation, an art-assist tool driven by Unity AI interactive technology

Unity artificial intelligence launches a variety of AI tools to make virtual item generation more efficient

With people and scenes in place, objects are needed to fill the metaverse world and make it more vivid and real. Compared with the old, inefficient approach of modeling and replicating designs one by one, generating virtual items through 3D scanning and Unity's artificial intelligence can greatly improve efficiency.

At present, Unity has launched several AI-driven features that reconstruct real-world objects in 3D through visual capture and 3D scanning. For example, Unity ArtEngine uses Unity AI to streamline the photogrammetry and material-authoring workflow: from photographs, AI can automatically generate material data for model assets, quickly remove lighting, remove seams, and eliminate unwanted artifacts, converting photos into physically based rendering (PBR) materials. Leading fashion brands and e-commerce platforms such as UGG and Off-White have already adopted RestAR, with application scenarios including 3D previews and AR try-on.
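A classical approximation of the "remove lighting" (de-lighting) step that ArtEngine automates is to divide each pixel by a low-frequency estimate of the scene lighting, leaving a roughly shading-free albedo. The sketch below is a hand-rolled plain-Python illustration, not ArtEngine's learned model; `box_blur`, `delight`, and the test image are all invented for the example:

```python
def box_blur(img, radius=2):
    """Box blur -- a crude low-pass estimate of the lighting component."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += img[yy][xx]
                        n += 1
            out[y][x] = total / n
    return out

def delight(img, radius=2, eps=1e-6):
    """Divide out low-frequency luminance to approximate shading-free albedo."""
    shading = box_blur(img, radius)
    return [[p / (s + eps) for p, s in zip(row, srow)]
            for row, srow in zip(img, shading)]

# A flat 0.5-albedo surface under a smooth left-to-right lighting gradient:
# after de-lighting, the interior should be nearly uniform.
lit = [[0.5 * (0.5 + 0.05 * x) for x in range(16)] for _ in range(16)]
albedo = delight(lit)
```

Learned de-lighting models handle hard shadows and specular highlights that this low-pass trick cannot, but the goal is the same: separate reflectance from illumination.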

Unity artificial intelligence empowers RestAR to quickly generate 3D models

Unity AI interaction technology: shortening the distance between us and the metaverse

To sum up, with advanced AI technology and the massive library of materials that Unity digital assets now provide, the metaverse has become more diverse and "lively". But the metaverse is not just a static picture. In a virtual meeting, for example, you can swap your face: Unity's AR Foundation enables cross-platform face capture inside Unity. "Interactivity" and "social attributes" are thus also essential features of the metaverse. To address this, Unity already offers mature tools such as ML-Agents and the Unity Inference Engine, which enable creators to design interactions and simulate NPC behavior more efficiently and at lower cost.
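To make "behavior simulation for NPCs" concrete, here is a minimal tabular Q-learning loop for a toy one-dimensional patrol task. This is only an illustration of learned NPC behavior; Unity's ML-Agents actually trains deep RL policies (e.g., PPO) against a C# environment API, and every name below is invented for the sketch:

```python
import random

# Toy task: an NPC stands on cells 0..4 and is rewarded for reaching cell 4.
# Actions move it one cell left (-1) or right (+1), clamped to the track.
N_STATES, ACTIONS = 5, (-1, +1)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: explore 10% of the time.
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else -0.01  # small step penalty
        best_next = max(q[(s2, b)] for b in ACTIONS)
        # Standard Q-learning update: alpha=0.5, discount gamma=0.9.
        q[(s, a)] += 0.5 * (reward + 0.9 * best_next - q[(s, a)])
        s = s2

# Greedy policy for each non-terminal cell; it learns to always move right.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
```

The same loop structure (observe, act, reward, update) is what an ML-Agents trainer runs at much larger scale, with a neural network replacing the Q-table.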


The next frontier of Unity AI interactive technology – NeRF

In addition, NeRF (Neural Radiance Fields), another key technology based on neural network rendering, is the next development direction for Unity AI interactive technology. NeRF effectively combines neural fields with graphics volume rendering, and was the first to achieve photorealistic rendering from an implicit neural scene representation. Once this technology matures, both generated images and digital assets could be parameterized into an implicit space, which may completely change how the virtual world is represented.
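The volume-rendering half of NeRF can be sketched compactly: sample points along each camera ray, query a field for density and color, and alpha-composite the samples front to back. The field below is a hand-written stand-in for NeRF's trained MLP (a soft sphere with a constant color), and all function names are invented for illustration:

```python
import math

def density_color(p):
    """Stand-in for the trained MLP: a soft sphere of radius 1 at the origin."""
    r = math.sqrt(sum(c * c for c in p))
    sigma = max(0.0, 1.0 - r)   # density falls off linearly with radius
    rgb = (0.8, 0.3, 0.2)       # constant color for the toy field
    return sigma, rgb

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Alpha-composite samples along a ray (NeRF's volume-rendering quadrature)."""
    dt = (far - near) / n_samples
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light surviving to the camera so far
    for i in range(n_samples):
        t = near + (i + 0.5) * dt
        p = [o + t * d for o, d in zip(origin, direction)]
        sigma, rgb = density_color(p)
        alpha = 1.0 - math.exp(-sigma * dt)   # opacity of this segment
        weight = transmittance * alpha
        color = [c + weight * ch for c, ch in zip(color, rgb)]
        transmittance *= 1.0 - alpha
    return color

# A ray from the camera straight through the sphere's center.
pixel = render_ray([0.0, 0.0, -3.0], [0.0, 0.0, 1.0])
```

In a real NeRF, `density_color` is a neural network conditioned on position (and view direction for color), and this compositing is differentiable, which is what allows the field to be trained from photographs.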

Next, Unity will integrate NeRF and other AI technologies into more creative workflows, so that tools can truly become an extension of the creator's brain.

Light and shadow changes presented by neural network rendering

All in all, as Unity's real-time engine meets AI-driven virtual-real interaction, Unity is ready to embrace the next-generation digital content world of the Internet. In the future, Unity AI interactive visualization technology will find ever broader applications, truly delivering "what you see is what you get".

