Nvidia shows off an AI model that can turn dozens of shots into a 3D scene

by admin
Nvidia’s latest AI demo is impressive: a tool that can quickly turn “dozens” of 2D snapshots into a rendered 3D scene. You can see the method in action in the video below, which features a model dressed as Andy Warhol holding a vintage Polaroid camera. (Don’t overthink the Warhol connection: it’s just part of the marketing.)

The tool, called Instant NeRF, builds on “neural radiance fields” (NeRFs), a technique developed in 2020 by researchers at UC Berkeley, Google Research, and UC San Diego. If you want a detailed explainer of neural radiance fields, you can read one here, but in brief, the method maps the color and light intensity of different 2D shots, then generates data that connects these images from different vantage points and renders a finished 3D scene. In addition to the photos themselves, the system requires data about the camera’s position.
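To make the idea concrete, here is a minimal sketch of the volume-rendering step at the heart of a NeRF: points are sampled along a camera ray, a field is queried for color and density at each point, and the results are alpha-composited front to back. This is not Nvidia’s implementation; in a real NeRF the `radiance_field` function is a trained neural network, while here it is a toy analytic stand-in used purely for illustration.

```python
import numpy as np

def radiance_field(points):
    """Toy stand-in for the learned network: maps 3D points to
    (RGB color, volume density). A real NeRF trains an MLP for this."""
    dist = np.linalg.norm(points, axis=-1)
    density = np.exp(-dist**2)            # soft "blob" of matter at the origin
    color = np.clip(points, 0.0, 1.0)     # arbitrary position-based color, (N, 3)
    return color, density

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Classic NeRF-style volume rendering along a single camera ray."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction           # (n_samples, 3)
    color, density = radiance_field(points)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))   # spacing between samples
    alpha = 1.0 - np.exp(-density * delta)             # opacity of each segment
    # transmittance: fraction of light surviving to each sample, front to back
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * color).sum(axis=0)      # composited RGB

pixel = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(pixel)  # accumulated RGB for this one ray
```

Rendering a full image repeats this for one ray per pixel, with ray origins and directions derived from the camera position data mentioned above; training adjusts the field so the rendered pixels match the input photos.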

Researchers have been improving this sort of 2D-to-3D conversion for several years now, adding more detail to the finished renders and increasing rendering speed. Nvidia says its Instant NeRF model is one of the fastest developed to date, cutting rendering time from a few minutes down to a process that finishes “almost instantly.”

In a blog post, Nvidia said that as the technique becomes faster and easier to implement, it could be used for all sorts of tasks.

“Instant NeRF could be used to create avatars or scenes for virtual worlds, to capture video conference participants and their environments in 3D, or to reconstruct scenes for 3D digital maps,” writes Nvidia’s Isha Salian. “The technology could be used to train robots and self-driving cars to understand the size and shape of real-world objects by capturing 2D images or video footage of them. It could also be used in architecture and entertainment to rapidly generate digital representations of real environments that creators can modify and build on.” (The metaverse seems to be calling.)

Unfortunately, Nvidia hasn’t shared many details about its approach, so we don’t know how many 2D images are needed or how long it takes to render the finished 3D scene (which will also depend on the power of the computer doing the rendering). Still, the technology appears to be advancing rapidly and could start to have a real impact in the years to come.
