Creating a virtual environment that looks realistic is a daunting task. The popular 3D video game Red Dead Redemption 2, for example, took a team of around 1,000 developers more than eight years to create, including stretches when developers worked 100-hour weeks. But all this could change with the latest AI-powered software developed by Nvidia, which can help create lifelike, realistic images in just minutes.
The latest AI software developed by Nvidia is not just going to make the lives of software developers easier; it can also auto-generate virtual environments for virtual reality, or for teaching self-driving cars and robots about the world. Bryan Catanzaro, vice president of applied deep learning at Nvidia, said: “We can create new sketches that have never been seen before and render those. We’re actually teaching the model how to draw based on the real video.”
The researchers from Nvidia used a standard machine learning approach to identify the various objects in a video scene, for example cars, trees, and buildings. Then, with the help of a generative adversarial network, or GAN, the developers trained a computer to fill in the outlines of those objects with realistic 3D imagery.
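To make the data flow of this pipeline concrete, here is a minimal toy sketch, not Nvidia's actual model: a "scene outline" is represented as a semantic label map (an image where each pixel holds a class label such as road, tree, or building), and a stand-in for the GAN generator turns that map into RGB pixels. The class names, the one-color-per-class "generator", and all parameters are illustrative assumptions; a real conditional GAN would be a deep network trained against a discriminator.

```python
import numpy as np

NUM_CLASSES = 3  # hypothetical labels: 0 = road, 1 = tree, 2 = building

def one_hot(label_map, num_classes):
    """Convert an (H, W) integer label map to an (H, W, C) one-hot tensor."""
    h, w = label_map.shape
    out = np.zeros((h, w, num_classes), dtype=np.float32)
    out[np.arange(h)[:, None], np.arange(w)[None, :], label_map] = 1.0
    return out

class ToyGenerator:
    """Stand-in for the GAN generator: maps one-hot labels to RGB pixels.

    The real model is a deep conditional GAN; here each class is just a
    fixed base colour plus a little noise, so the overall data flow
    (label map in, rendered frame out) stays visible.
    """
    def __init__(self, num_classes, seed=0):
        rng = np.random.default_rng(seed)
        # (C, 3) matrix: one RGB colour per semantic class.
        self.palette = rng.uniform(0.0, 1.0, size=(num_classes, 3))
        self.rng = rng

    def __call__(self, label_map):
        hot = one_hot(label_map, self.palette.shape[0])      # (H, W, C)
        base = hot @ self.palette                            # (H, W, 3)
        noise = self.rng.normal(0.0, 0.02, size=base.shape)  # fake "texture"
        return np.clip(base + noise, 0.0, 1.0)

# A tiny 4x4 "scene sketch": left half road, right half tree over building.
sketch = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [0, 0, 2, 2],
                   [0, 0, 2, 2]])

frame = ToyGenerator(NUM_CLASSES)(sketch)
print(frame.shape)  # an (H, W, 3) RGB frame rendered from the label sketch
```

In the research pipeline the same idea scales up: the segmentation map conditions the generator, and adversarial training pushes the filled-in regions toward photorealism.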
When fed the outline of a scene in which it can see the different objects, the system fills in stunning, slightly shimmering detail. The end result is impressive, even though some of the objects appear bent or twisted.
Catanzaro further explained that classic computer graphics render a scene by simulating the way light interacts with objects; the researchers are now exploring how artificial intelligence can take over parts of that rendering process.
According to Catanzaro, the approach could open up new frontiers for game design, since the technology can render whole scenes instantly. It could also be used to add a real person to a video game after being fed a few minutes of video footage of that person in real life.
He further said: “You can’t realistically get real training data for every situation that might pop up.” The approach could thus be used to render realistic settings for virtual reality, or to provide synthetic training data for autonomous vehicles or robots. The work was announced at NeurIPS, a major AI conference in Montreal.
This is certainly a notable step by Nvidia's researchers in digital imaging. It is still early, but the technology could well change the way we create and interact with virtual worlds in the near future, and prove a game changer for commerce, art, innovation, and more. With tools of this capability, the line between reality and the virtual world grows thin indeed.