New algorithm for more realistic computer animation

Scientists have developed a new algorithm that can make computer animation more agile, acrobatic and realistic. The researchers at the University of California, Berkeley in the US used deep reinforcement learning to recreate natural motions, even for acrobatic feats like break dancing and martial arts. The simulated characters can also respond naturally to changes in the environment, such as recovering from tripping or being pelted by projectiles.


"This is actually a pretty big leap from what has been done with deep learning and animation," said UC Berkeley graduate student Xue Bin Peng. "In the past, a lot of work has gone into simulating natural motions, but these physics-based methods tend to be very specialised; they are not general methods that can handle a large variety of skills," said Peng. Each activity or task typically requires its own custom-designed controller. 

"We developed more capable agents that behave in a natural manner," he said. "If you compare our results to motion-capture recorded from humans, we are getting to the point where it is pretty difficult to distinguish the two, to tell what is simulation and what is real. We're moving toward a virtual stuntman," said Peng. The work could also inspire the development of more dynamic motor skills for robots.
Traditional techniques in animation typically require designing custom controllers by hand for every skill: one controller for walking, for example, and another for running, flips and other movements.

These hand-designed controllers can look pretty good, Peng said. Alternatively, deep reinforcement learning methods, such as GAIL, can simulate a variety of different skills using a single general algorithm, but their results often look very unnatural. "The advantage of our work is that we can get the best of both worlds," Peng said. "We have a single algorithm that can learn a variety of different skills, and produce motions that rival, if not surpass, the state of the art in animation with handcrafted controllers."
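To make that contrast concrete, the sketch below shows the structural difference Peng is pointing at: a lookup table of separately engineered, skill-specific controllers versus a single function that takes the skill (here a simple embedding vector) as an extra input. All names, shapes and the placeholder control laws are hypothetical, chosen only to illustrate the design split, not either system's real code.

```python
from typing import Callable, Dict
import numpy as np

State = np.ndarray   # e.g. joint angles and velocities
Action = np.ndarray  # e.g. joint torques

# Traditional pipeline: one bespoke, hand-tuned controller per skill.
def walk_controller(state: State) -> Action:
    return -0.5 * state          # placeholder for a carefully tuned control law

def backflip_controller(state: State) -> Action:
    return -0.8 * state          # another separately engineered law

hand_designed: Dict[str, Callable[[State], Action]] = {
    "walk": walk_controller,
    "backflip": backflip_controller,
}

# Single-algorithm approach: one learned policy conditioned on the skill.
def policy(state: State, skill_embedding: np.ndarray) -> Action:
    # placeholder for a neural network; the point is the shared
    # function-of-(state, skill) shape, learned rather than hand-built
    return np.tanh(state + skill_embedding)

state = np.zeros(8)
print(hand_designed["walk"](state))   # per-skill dispatch
print(policy(state, np.ones(8)))      # one policy, many skills
```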

To achieve this, Peng obtained reference data from motion-capture (mocap) clips demonstrating more than 25 different acrobatic feats, such as backflips, cartwheels, kip-ups and vaults, as well as simpler motions like running, throwing and jumping.

After providing the mocap data to the computer, the team allowed the system, dubbed DeepMimic, to "practice" each skill for about a month of simulated time, a bit longer than a human might take to learn the same skill. The computer practiced 24/7, going through millions of trials to learn how to realistically simulate each skill. It learned through trial and error: comparing its performance after each trial to the mocap data, and tweaking its behaviour to more closely match the human motion.
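That "compare and tweak" loop can be pictured as an imitation reward: at every step the simulated character is scored on how closely its pose tracks the corresponding mocap frame, and the controller is adjusted to raise that score. The sketch below is a minimal, illustrative stand-in: the joint count, exponential weighting and hill-climbing update are assumptions made for clarity, not the researchers' actual method, which trains a neural-network policy with reinforcement learning and a richer reward (velocities, end-effectors, centre of mass).

```python
import numpy as np

def imitation_reward(sim_pose: np.ndarray, ref_pose: np.ndarray) -> float:
    """Reward in (0, 1]: approaches 1 as the simulated pose matches the
    mocap reference frame (the weighting of 2.0 is illustrative)."""
    pose_error = float(np.sum((sim_pose - ref_pose) ** 2))
    return float(np.exp(-2.0 * pose_error))

rng = np.random.default_rng(0)
NUM_JOINTS = 30                                      # assumed joint count
mocap_clip = rng.standard_normal((100, NUM_JOINTS))  # stand-in reference motion

# A trivially parameterised "controller": one target pose per reference frame.
controller = np.zeros_like(mocap_clip)

# Trial and error: perturb the controller and keep changes that track the
# reference motion more closely (hill climbing stands in for the
# policy-gradient RL used in the actual work).
for trial in range(5000):
    frame = trial % len(mocap_clip)
    candidate = controller[frame] + 0.1 * rng.standard_normal(NUM_JOINTS)
    if imitation_reward(candidate, mocap_clip[frame]) > \
            imitation_reward(controller[frame], mocap_clip[frame]):
        controller[frame] = candidate  # tweak toward the human motion
```

Over many trials the controller's poses drift toward the reference clip, the same compare-against-mocap-and-adjust dynamic the researchers describe, writ small.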
