Researchers tackle a huge problem in 3D animation: putting on clothes
Anyone who has watched a 3D animated film or played video games for long enough has likely noticed something about the relationship between characters and clothing: we never see them put their clothes on. The reason is that animating a 3D character putting on a piece of fabric often leads to disastrous results, with either the character or the cloth failing to look "realistic."
Researchers at the Georgia Institute of Technology have sought to remedy this issue, and their results are impressive - albeit a bit silly.
In the video found in the header, we can see several short clips of humanoid models attempting to put on clothing. While the animations are far from natural, GIT's Karen Liu and her team are attempting to find a way to have physics-simulated cloth interact with hand-animated 3D models without animators needing to ensure that every fibre avoids clipping through or getting stuck on the character model.
Letting a physics engine simulate water and cloth has typically saved animators a great deal of time, and now GIT's research could bring greater realism to 3D movies and video games.
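To give a sense of what "cloth simulated by a physics engine" means in practice, here is a minimal mass-spring sketch: fabric is treated as particles connected by springs, stepped forward each frame. All names and constants below are illustrative assumptions, not anything from the GIT project, and a real engine would add bending springs, collision response, and a more stable integrator.

```python
import math

GRAVITY = -9.8
DT = 0.016        # one 60 fps frame
STIFFNESS = 80.0  # spring constant (illustrative value)
DAMPING = 0.98    # simple velocity damping

def step_cloth(positions, velocities, springs, rest_len, pinned):
    """Advance each cloth particle by one explicit-Euler time step.

    positions/velocities: lists of [x, y, z]
    springs: list of (i, j) particle-index pairs
    pinned: set of indices that stay fixed (e.g. a collar held in place)
    """
    forces = [[0.0, GRAVITY, 0.0] for _ in positions]
    for i, j in springs:
        dx = [positions[j][k] - positions[i][k] for k in range(3)]
        length = math.sqrt(sum(d * d for d in dx)) or 1e-9
        # Hooke's law: pull the pair back toward the rest length
        f = STIFFNESS * (length - rest_len) / length
        for k in range(3):
            forces[i][k] += f * dx[k]
            forces[j][k] -= f * dx[k]
    for idx in range(len(positions)):
        if idx in pinned:
            continue
        for k in range(3):
            velocities[idx][k] = (velocities[idx][k] + forces[idx][k] * DT) * DAMPING
            positions[idx][k] += velocities[idx][k] * DT

# Two particles joined by one spring; the top one is pinned,
# so the bottom one starts to fall under gravity.
pos = [[0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]
vel = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
step_cloth(pos, vel, [(0, 1)], rest_len=1.0, pinned={0})
```

Animators get this kind of motion "for free" each frame; the hard part Liu's team is tackling is making a hand-animated character interact with it.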
"The closest thing to putting clothes on you'll see in an animated movie is The Incredibles putting on capes, and that's still a very simple case because it doesn't really involve a lot of physical contact between the characters and the cloth. [...] This is the problem we want to address because generating this kind of scene involves the physical interaction between two very different physical systems." - Karen Liu, head of GIT's research team, on current clothing animation practices
One of the greatest challenges for Karen Liu's team is developing a path-finding algorithm that is both realistic and fluid. According to Liu, "The character has to make the right decision on how he or she is going to move the limbs, so it has a clear path to the opening." This is difficult for a number of reasons: humans make many motions that we do not consciously think about when getting dressed.
These include awkward motions such as a flick of the wrist, or wriggling our arms through sleeves. Without these subtle movements, a 3D character's motion can slip into the uncanny valley within seconds, even if we cannot pinpoint exactly what looks wrong.
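The "clear path to the opening" idea can be sketched with a toy 2D grid search: the hand starts at one cell, the sleeve opening is the goal, and cloth cells block the direct route. This grid, its layout, and the function name are all invented for illustration; the actual system plans limb motion in 3D against simulated cloth geometry, which is far harder.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search; '#' cells are cloth the limb must not pierce."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no clear route: the planner would have to move the cloth first

# Fabric ('#') blocks the middle, so the "hand" detours around it.
sleeve = [
    "....",
    ".##.",
    "....",
]
path = find_path(sleeve, start=(0, 0), goal=(2, 3))
```

A static search like this is only half the problem, of course: in dressing, the cloth itself deforms as the limb moves, so the "map" changes under the planner every frame.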
While Liu's team has come close to solving the problem for pre-rendered 3D scenes, they also want to explore applying their results to real-time 3D, which means we could see these algorithms in video games before long. They want to extend the research to robots as well: most robots are designed to avoid collisions, but putting on a shirt requires many small collisions to accomplish.
But what does this mean for the gaming world?
Currently, most, if not all, dressing scenes in video games are pre-rendered. The closest examples of in-game undressing and dressing typically involve a fade-to-black or a character ducking behind some sort of visual block before magically changing clothes.
The most infamous of these is "The Sims," whose characters spin around inexplicably fast before reappearing in a new set of clothes. While putting clothes on may not seem earth-shatteringly important for gaming, these developments may help with immersion in video games that rely heavily on clothing changes or nudity for storytelling. Certain scenes from The Witcher and Mass Effect come to mind.
<iframe style="backface-visibility: hidden; transform: scale(1); display: block; margin-left: auto; margin-right: auto;" src="http://gifs.com/embed/KRXNWP" frameborder="0" scrolling="no" width="620" height="349" />
I could certainly see animations like this being used in scenes between a parent and child, or between two adult characters.
Personally, I think this will be an interesting concept for the future of video games. Even if game developers do not use this research to animate clothing, it could prove useful for other applications. For example, we could see new video games where the objective is to have a character place a physics-simulated object onto or into another object that is not physics-simulated. I'm not sure what developers would do with that concept, but I'm certain they could think of something!
What do you think about this research? Is it a waste of time? Can you think of any use for it in the future of gaming?