Interview and making-of for "Blue Hippy Cats – My Avatar Review" with Greg Mutt. With his "somewhat different" review of the film Avatar, Greg has had great success on YouTube. Here he talks about how he made it, about motion capture, and about how to bring a character to life.
Avatar – Review
Tell us a little about yourself and your career in CG graphics.
I started off as a claymation animator in the early nineties. I directed TV commercials at Aardman Animations for a while. (I have to name-drop and say that during this time I was offered animation work at Pixar, who were hiring a lot of stop-motion animators as CG animators didn't really exist, but I turned it down as I wasn't interested in computers at that time. Plus I wanted to make my own films. Now I regret not going there for a year or two.) Later I directed at Passion Pictures. It was there that I finally succumbed to CG; I had just got fed up with standing under hot lights under great pressure.
Since then I directed CG commercials at Passion for a while, then with an ex-Passion Pictures producer I set up a commercials studio called 'Moving House' (which I later closed to set up my newer one, 'Busty Kelp').
In the last few years I have been concentrating on developing my own performance-capture pipeline. Busty Kelp is currently represented by Aardman Animations for commercials work, but I also work as a facial-animation consultant for games and anything else that'll have me.
You run the company Busty Kelp (http://bustykelp.com/). What do you do there? How do you acquire jobs?
As I said, Busty Kelp is represented by Aardman Animations for commercials work, and I consult on facial animation for games. I have good relations with mocap companies who recommend me to games clients.
Basically, I don't really go out and do any marketing. I lazily wait for things to land on my doorstep.
How did you come up with the idea to create "Greg"? Why does he review "Avatar"?
I wanted to do a performance-capture test that Audiomotion (the mocap guys) and I could use on our reels. They had a free day coming up at the mocap studio in a week or so, so they asked me if there was anything I wanted to do.
The idea came from trying to find a decent review of Avatar on YouTube, but stumbling instead across really bad homemade reviews (mainly from teenage boys). The idea of doing one of these bad reviews, but with a bias against the cat-like Na'vi and delivered by a dog, made me laugh.
It was a throwaway idea that I didn't really analyse. I suggested it to Mick, the MD at Audiomotion, and it made him laugh too. It seemed as good an idea as anything else I could think of in the time, so we did it.
What reference material did you use?
I watched Avatar a couple of times, and a load of YouTube reviews of it. That's about it.
What software/techniques did you use?
The rigging and all mocap retargeting were done in XSI (or 'Softimage' as it's now known), and the texturing and rendering were done in LightWave.
The body rig is pretty standard, but the facial rig is my own design that I've been developing for a few years. Basically it's a mix of bones and morphs and lots of other deformers. It involves multiple meshes doing various tasks, feeding into a final mesh.
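To make the "multiple meshes feeding into a final mesh" idea concrete, here is a deliberately tiny sketch in plain Python; it is my own illustration of a layered deformer stack (a morph-target stage followed by a simple bone rotation), not Greg's actual rig, and the function names and data are invented for the example:

```python
import math

def apply_morphs(verts, morph_deltas, weights):
    """Blend weighted morph-target deltas onto the base vertices."""
    out = []
    for i, (x, y, z) in enumerate(verts):
        dx = dy = dz = 0.0
        for name, deltas in morph_deltas.items():
            w = weights.get(name, 0.0)
            mx, my, mz = deltas[i]
            dx += w * mx
            dy += w * my
            dz += w * mz
        out.append((x + dx, y + dy, z + dz))
    return out

def apply_jaw_bone(verts, angle_deg, pivot=(0.0, 0.0, 0.0)):
    """Rotate every vertex about the z-axis around a 'jaw' pivot."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    px, py, _ = pivot
    out = []
    for x, y, z in verts:
        rx, ry = x - px, y - py
        out.append((px + rx * c - ry * s, py + rx * s + ry * c, z))
    return out

# Tiny 2-vertex "mesh": base -> morph stage -> bone stage -> final mesh.
# Each stage produces an intermediate mesh that feeds the next one.
base = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
morphs = {"smile": [(0.0, 0.1, 0.0), (0.1, 0.0, 0.0)]}
stage1 = apply_morphs(base, morphs, {"smile": 0.5})
final = apply_jaw_bone(stage1, 90.0)
```

In a real rig each stage would of course be a full deformer operating on thousands of vertices, but the chaining of intermediate results into one final mesh is the same idea.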
Can you tell us more about the facial rig? How does it work (morph targets, clusters, bones, muscles, etc.)?
It is a mixture of NURBS surfaces, constraints, 'bones' (Softimage can use anything as a 'bone'), shrink-wrapping, live feeding of shapes from one mesh to another, live subdivision, and mocap-driven displacement.
There are a lot of complex procedures going on, but the theory is relatively straightforward, and if I sat anyone down to look at my rig they could probably go off and make their own. So I don't want to spell it out, as then everyone would be doing it and I wouldn't be needed any more. :>)
… what is the motion capture data driving in the face?
The raw mocap is driving a variety of inputs that feed into different phenomena, which all end up driving the mesh. Some might be moving under the skin and some might move the skin itself. (I know that answer probably seems quite evasive and vague.) Unlike most facial solutions, it is not based on FACS (the Facial Action Coding System); it's a far more direct approach.
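As a hypothetical illustration of this "direct" routing (my own sketch under assumed names, not the rig described above): instead of re-expressing the capture data as FACS action units, each raw marker channel could be wired straight to one deformation layer, with some markers driving a structure under the skin and others displacing the skin mesh itself:

```python
def route_channels(frame, routing):
    """Split one frame of raw marker offsets into per-layer inputs."""
    layers = {"under_skin": {}, "skin": {}}
    for marker, offset in frame.items():
        layer, gain = routing[marker]
        # Direct mapping: the marker's offset (times a per-marker gain)
        # goes straight to its target layer -- no action-unit decoding.
        layers[layer][marker] = tuple(gain * v for v in offset)
    return layers

# Invented marker names and gains, purely for illustration.
routing = {
    "jaw_tip":  ("under_skin", 1.0),  # moves structure beneath the skin
    "cheek_l":  ("skin", 0.8),        # displaces the skin surface itself
    "brow_mid": ("skin", 1.2),
}
frame = {
    "jaw_tip":  (0.0, -2.0, 0.0),
    "cheek_l":  (0.5, 0.1, 0.0),
    "brow_mid": (0.0, 0.3, 0.0),
}
layers = route_channels(frame, routing)
```

The contrast with a FACS-style pipeline is that there is no intermediate vocabulary of coded expressions; the capture data reaches the deformers more or less as-is.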