NVIDIA Research builds AI models to add 3D objects and characters to virtual worlds

NVIDIA Research has developed a new artificial intelligence (AI) model that lets more companies and creators populate the vast virtual worlds they build with a wide variety of 3D buildings, vehicles and characters.

NVIDIA GET3D, trained solely on 2D images, generates 3D shapes with high-fidelity textures and intricate geometric detail. It creates these objects in the same format used by popular graphics software applications, so users can immediately import the shapes into 3D renderers and game engines for further editing.

The generated objects can populate 3D representations of buildings, outdoor spaces or entire cities, and can be used across industries such as gaming, robotics, architecture and social media.

GET3D can generate a virtually unlimited number of 3D shapes based on the data it was trained on. Like an artist turning lumps of clay into detailed sculptures, the model transforms numbers into complex 3D forms.

Train GET3D on a dataset of 2D car images, for example, and it creates 3D cars, trucks, race cars and vans. Train it on 2D animal images, and it produces 3D foxes, rhinos, horses and bears. Train it on 2D chair images, and it generates an assortment of swivel chairs, dining chairs and comfortable recliners.

“GET3D brings us a step closer to democratizing AI-powered 3D content creation,” said Sanja Fidler, vice president of AI research at NVIDIA. “Its ability to instantly generate textured 3D shapes could be a game-changer for developers, helping them quickly populate virtual worlds with all kinds of interesting objects.” Fidler also helped found the NVIDIA AI Research Lab in Toronto, where the tool was developed.
GET3D is among more than 20 NVIDIA papers and workshops being presented at the Neural Information Processing Systems conference (NeurIPS), taking place Nov. 26 to Dec. 4 in New Orleans and online.

Virtual worlds need AI-generated content

The real world is full of variety: streets are lined with unique buildings, all kinds of vehicles whiz by, and diverse crowds pass through. Manually modeling a 3D virtual world that reflects this richness is enormously time consuming, which makes it difficult to fill out a detailed digital environment.

Earlier 3D generative AI models were faster than manual methods, but they still fell short on detail. Even the latest inverse rendering methods can only generate 3D objects from 2D images taken from various angles, and developers can build just one 3D shape at a time.

GET3D is different: running inference on a single NVIDIA GPU, it can generate about 20 shapes per second, working like a generative adversarial network (GAN) for 2D images but producing 3D objects. The larger and more diverse the dataset it is trained on, the more varied and detailed its output.
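To make the GAN analogy concrete, the sketch below shows the general pattern of sampling shapes from a latent space: each random latent vector is mapped to a mesh (vertices, faces and a texture), and many vectors yield many independent shapes. The Get3DGenerator class and its toy tetrahedron output are hypothetical stand-ins for illustration only, not the real GET3D network.

```python
import numpy as np

class Get3DGenerator:
    """Hypothetical placeholder: maps a latent code to a toy triangle mesh."""
    def __init__(self, latent_dim=512, seed=0):
        self.latent_dim = latent_dim
        self.rng = np.random.default_rng(seed)
        # Random projection standing in for trained network weights.
        self.weights = self.rng.normal(size=(latent_dim, 3 * 4))

    def generate(self, z):
        # Project the latent code to 4 vertices, forming a tetrahedron.
        vertices = (z @ self.weights).reshape(4, 3)
        # Fixed connectivity: the four triangular faces of a tetrahedron.
        faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
        # Per-vertex RGB colors standing in for a real texture map.
        texture = self.rng.uniform(0.0, 1.0, size=(4, 3))
        return vertices, faces, texture

gen = Get3DGenerator()
# Sampling many latent codes yields many independent shapes, which is
# the basic reason a GAN-style generator can emit shapes so quickly.
for i in range(5):
    z = np.random.default_rng(i).normal(size=gen.latent_dim)
    verts, faces, tex = gen.generate(z)
    print(f"shape {i}: {verts.shape[0]} vertices, {faces.shape[0]} faces")
```

Because each sample is a single forward pass through a generator, throughput of this kind comes down to batching those passes on the GPU.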

NVIDIA researchers trained GET3D on synthetic 2D images of 3D shapes captured from different camera angles. Training on about 1 million images took only two days on NVIDIA A100 Tensor Core GPUs.

Enables creators to modify shapes, textures and materials

GET3D gets its name from its ability to Generate Explicit Textured 3D meshes: it builds its shapes as triangle meshes, like a papier-mâché model, covered with a textured material. This lets users easily import the objects into game engines, 3D modelers and film renderers, and edit them there.
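For a sense of what an explicit textured triangle mesh looks like as data, here is a minimal sketch, assuming nothing about GET3D itself, that writes a hand-made cube to the Wavefront OBJ format, one common interchange format accepted by game engines and 3D modelers.

```python
# Illustrative only: the cube below is hand-written data, not GET3D output.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
]
# Each face is a triangle given as three 1-based vertex indices (OBJ convention).
faces = [
    (1, 2, 3), (1, 3, 4),  # bottom
    (5, 7, 6), (5, 8, 7),  # top
    (1, 5, 6), (1, 6, 2),  # front
    (2, 6, 7), (2, 7, 3),  # right
    (3, 7, 8), (3, 8, 4),  # back
    (4, 8, 5), (4, 5, 1),  # left
]

with open("mesh.obj", "w") as f:
    for x, y, z in vertices:
        f.write(f"v {x} {y} {z}\n")
    for a, b, c in faces:
        f.write(f"f {a} {b} {c}\n")

print("wrote mesh.obj:", len(vertices), "vertices,", len(faces), "triangles")
```

A real textured export would also include texture coordinates (`vt` lines) and a material file referenced via `mtllib`/`usemtl`; the point here is simply that an explicit mesh is just vertex positions plus triangle indices, which is why such objects drop straight into standard tools.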

Creators can export GET3D-generated shapes to a graphics application and add realistic lighting effects as the objects move or rotate in a scene. By pairing GET3D with StyleGAN-NADA, another AI tool from NVIDIA Research, developers can use text prompts to apply a specific style to an image, such as turning a rendered car into a burned-out car or a taxi, or turning an ordinary house into a haunted one.

The researchers note that a future version of GET3D could use camera pose estimation techniques, allowing developers to train the model on real-world data rather than synthetic data. They also plan to improve GET3D to support universal generation, so developers can train it on many kinds of 3D shapes at once rather than one object class at a time.

Watch the replay of NVIDIA founder and CEO Jen-Hsun Huang's GTC keynote for the latest updates on NVIDIA AI research.
