Hi everyone! In the latest release of Wrap4D we introduced a new pipeline for processing facial performance captured with two cameras mounted on a helmet. This pipeline provides great accuracy, comparable to expensive seated 4D capture rigs consisting of dozens of cameras. The drawback of the new pipeline is that it's quite laborious and computationally intensive. Because of that, you are limited in the duration of sequences you can process: you clearly cannot use it to process hours of facial performance. It's also quite complicated to use for dynamic scenes, such as fight scenes, and for scenes with occlusions or secondary dynamics on the face.

So we decided to try the following idea: what if we take an HMC and capture a subset of an actor's performance, say 10-15 minutes of everything the actor is capable of doing with their face, process it with Wrap4D, and train a neural net to produce the same result by looking at just two images from the cameras? We call this a Faceform neural profile.
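To make the idea concrete, here is a minimal sketch of what such a two-image-to-mesh regression could look like, written in PyTorch. This is purely illustrative: the video doesn't describe Faceform's actual architecture or training setup, and the model, vertex count, and dataloader below are all hypothetical.

    import torch
    import torch.nn as nn

    class TwoViewToMesh(nn.Module):
        # Hypothetical model: regress a fixed-topology mesh from two HMC frames.
        def __init__(self, num_vertices):
            super().__init__()
            self.num_vertices = num_vertices
            # A small CNN encoder shared by the top and bottom camera views.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Regress (x, y, z) for every vertex of the mesh.
            self.head = nn.Linear(64 * 2, num_vertices * 3)

        def forward(self, top, bottom):
            feats = torch.cat([self.encoder(top), self.encoder(bottom)], dim=1)
            return self.head(feats).view(-1, self.num_vertices, 3)

    model = TwoViewToMesh(num_vertices=5000)  # vertex count is a placeholder
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # Each sample: two grayscale HMC frames plus the Wrap4D-tracked mesh for
    # that frame. A dummy batch stands in for a real DataLoader here.
    dataloader = [(torch.rand(2, 1, 256, 256), torch.rand(2, 1, 256, 256),
                   torch.rand(2, 5000, 3))]
    for top, bottom, gt_vertices in dataloader:
        loss = nn.functional.mse_loss(model(top, bottom), gt_vertices)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()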
Let me demonstrate how it works. To do that, we will need a stereo-camera HMC: you can use Technoprops, Standard Deviation, or, as in this case, our own Faceform HMC. I'm going to put it on my head and run a piece of software called HMC Director. I'll record a short sequence of me doing extreme facial expressions, so let's go ahead and click Record. I'll do a sad expression, a smile expression, a rage expression. I can also speak to one side of the face, then to the other side of the face, and do a little bit of facial gym. I can even scratch my forehead. Let's stop. Now we can go to the Take Manager, which brings us to a folder on the wearable PC with the two videos from the top and the bottom cameras.
What we need to do now is copy these videos to Wrap4D. Here I have a pre-existing project in Wrap4D, and I'm just going to drag and drop my videos over here. We need to wait a few seconds for the video to download over WiFi. Alright, now let's do the same for the second video. With this done, we can plug this node in over here and this node over here. The result of these nodes is simply the video from the top and the bottom camera. We then undistort it and pass it to a NeuralNet node. This node has only one parameter: the path to a Faceform neural profile. This particular profile was captured from my face more than half a year ago. The result of the node is a dense set of points all over my face, and the next node is responsible for converting this dense set of points into a mesh with consistent topology. So now I can go to any frame of the sequence, and in a few seconds the node will produce a mesh corresponding to that facial expression.
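As a rough mental model of what this node chain does per frame (not Wrap4D's actual internals or scripting API), here is a hedged sketch that reuses the hypothetical TwoViewToMesh model from the earlier example; the profile file name, calibration values, and frame paths are all assumptions.

    import cv2
    import torch

    model = TwoViewToMesh(num_vertices=5000)         # class from the sketch above
    model.load_state_dict(torch.load("profile.pt"))  # hypothetical profile file
    model.eval()

    def track_frame(top_path, bottom_path, camera_matrix, dist_coeffs):
        def load(path):
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            img = cv2.undistort(img, camera_matrix, dist_coeffs)  # undistort step
            return torch.from_numpy(img).float()[None, None] / 255.0
        with torch.no_grad():
            # One mesh in the consistent topology, shape (num_vertices, 3).
            return model(load(top_path), load(bottom_path))[0]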
Let's check a few more examples of how it works. Now I can go to this node, turn on the wireframe, and change the color of the mesh to white. If we go here, we can see the overlay of our resulting mesh on the original camera sequence, and you can see that the tracking provides pretty great accuracy. Having this personalized neural profile basically makes the tracking of the performance fully automated, and we can literally process hours of facial performance overnight.
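Overnight processing then conceptually reduces to a loop over frames using the track_frame sketch above. The folder layout and the minimal OBJ export below are assumptions for illustration, not Wrap4D's actual batch mechanism.

    from pathlib import Path

    def export_obj(path, vertices, faces):
        # Minimal OBJ writer; with consistent topology, 'faces' is the same
        # for every frame, so only vertex positions change over time.
        with open(path, "w") as f:
            for x, y, z in vertices:
                f.write(f"v {x} {y} {z}\n")
            for a, b, c in faces:
                f.write(f"f {a + 1} {b + 1} {c + 1}\n")

    # 'faces', 'camera_matrix', and 'dist_coeffs' are assumed to come from the
    # tracked topology and camera calibration; the take layout is hypothetical.
    for take in sorted(Path("takes").iterdir()):
        frames = zip(sorted((take / "top").glob("*.png")),
                     sorted((take / "bottom").glob("*.png")))
        for i, (top, bottom) in enumerate(frames):
            verts = track_frame(str(top), str(bottom), camera_matrix, dist_coeffs)
            export_obj(take / f"frame_{i:05d}.obj", verts.tolist(), faces)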
Because the neural net does a pretty good job of capturing small animation nuances, you may end up needing almost zero animator input on top of the performance, unless you really want to change the original actor's performance, such as changing the gaze direction. So that's it for this video, and stay tuned for further updates!
Comments
One of the most interesting companies to follow since its beginning.
Great work, looking forward to using it!
Amazing as always Andrew!
Fantastic work! Now the workflow finally becomes feasible for longer animations, hopefully without a huge farm.
Sad expression 😢 Smile expression 😊 Rage expression 😡 One side of the face 🙂 other side of the face 🙃 A little bit of facial gym 😦😮😯😧 Scratch my gym 🫡
I love this program. What is the cost?
Wow, this is incredible!
Fantastic !
This video is like a cheat code for a good mood.
I need this in my Autodesk Maya too.
WOW! cool!
Looks like magic
Magic!)
Is it okay to use mobile camera footage or a DSLR, of course attached to a helmet?
Hi! Nice to meet you! Your system looks great! How much does this cost? Can I contact you?
But how do you connect it to MetaHuman in Unreal Engine 5 for the final facial animation?