
How Facial Animation Works in Half-Life 2

In November 2019, Half-Life 2 turns 15 years old. That is a considerable age for any work, and for an interactive work, a medium that ages especially fast, a decade and a half is a real chasm of time. It is all the more surprising to discover that in some technical respects Half-Life 2 can still compete with many modern productions. To this day you can admire the visuals, the physics model, the sound, and the impact of the second HL, but I want to dwell on a little-noticed yet no less remarkable element of the game: its facial animation.

Half measures and Half-Life
Today the standard for facial animation in large projects is Motion Capture, which records the movements of a living person. The actor's face is covered with dozens of markers, he performs the prepared scene, and then the data about the markers' movement is handed to animators for adjustment. Motion Capture is a fairly old technology, proven by hundreds of films and games, but the animation in Half-Life 2 works beautifully without it.

In the original Half-Life, facial animation was limited to characters opening their mouths during dialogue, i.e. it was practically absent. That was a perfectly normal state of affairs for 1998, but by the mid-2000s it could provoke only laughter or bewilderment: technology was developing very quickly.

The first Half-Life
The team had to move forward. The sequel's developers had two obvious options for implementing facial animation in the new Half-Life:

1) round up a crowd of actors, cover them with sensors, and record everything with Motion Capture;

2) do without actors, in which case the entire burden of bringing the characters to life would fall on the animators' shoulders.

Some scenes in Half-Life 2 were nevertheless recorded with motion capture, but for the lion's share of the facial animation Valve used a third option.

The studio's lead programmer, Ken Birdwell, created a comprehensive facial animation generation system that minimized the amount of manual work, was very flexible, and was almost as believable as motion capture. The foundation of Birdwell's creation was the research of Paul Ekman, which in 1978 took shape as the Facial Action Coding System (FACS): a classification and mathematical description of human facial expressions.

FACS
FACS classifies human facial expressions through so-called action units: the elementary movements produced by individual muscles or muscle groups of the face. Each action unit is assigned its own number, and the intensity of the movement is denoted by a Latin letter, from A, a barely noticeable movement, to E, the strongest possible. This is rather hard to grasp without an example, so let's break down the emotion of surprise in the FACS system.

Surprise is recorded in FACS as 1+2+5B+26, a combination of four action units. Consulting the list of action units, we can see that surprise consists of:

1 – inner brow raised
2 – outer brow raised
5B – upper eyelid weakly (B) raised
26 – jaw dropped

This is only a small fraction of all possible action units
The code 1+2+5B+26 is far from the only representation of surprise in Ekman's system, merely one of the prototypes. There are also several basic variants and many variations. In the same way FACS describes all other emotions, be it anger, joy, irritation, or fear.
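To make the notation concrete, here is a minimal Python sketch that parses a FACS code such as 1+2+5B+26 into action units and intensities. The mapping of intensity letters to numeric weights and the tiny excerpt of the AU catalogue are my own illustrative assumptions, not data from Ekman's tables or from Valve.

```python
import re

# FACS intensity letters A (trace) .. E (maximum). The 0..1 weights
# are an arbitrary choice for demonstration, not part of FACS itself.
INTENSITY = {"A": 0.2, "B": 0.4, "C": 0.6, "D": 0.8, "E": 1.0}

# A tiny excerpt of the action-unit catalogue (descriptions per FACS).
AU_NAMES = {
    1: "inner brow raiser",
    2: "outer brow raiser",
    5: "upper lid raiser",
    26: "jaw drop",
}

def parse_facs(code: str) -> list[tuple[int, float]]:
    """Parse e.g. '1+2+5B+26' into (AU number, weight) pairs.
    An AU without an intensity letter defaults to full weight here."""
    result = []
    for token in code.split("+"):
        m = re.fullmatch(r"(\d+)([A-E]?)", token.strip())
        if not m:
            raise ValueError(f"bad FACS token: {token!r}")
        au = int(m.group(1))
        weight = INTENSITY[m.group(2)] if m.group(2) else 1.0
        result.append((au, weight))
    return result

for au, w in parse_facs("1+2+5B+26"):
    print(f"AU {au:>2} ({AU_NAMES.get(au, '?')}): intensity {w:.1f}")
```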

Ekman proposed using FACS to assess the degree of depression and to measure the level of pain in people unable to speak for themselves. You have probably never encountered FACS in medicine, but you may well have watched the series “Lie to Me”, whose protagonist, brilliantly played by Tim Roth, was modeled on Paul Ekman, and whose facial expressions, like those of Alyx Vance in Half-Life, were grounded in Ekman's system. Speaking of which, it is time to return to the game itself.

Tim Roth is angry and clearly demonstrating FACS

Disassemble and reassemble
In essence, Paul Ekman had already done the main work of creating a facial animation system for Half-Life 2; the development team only had to digitize it and adapt it to their own tasks.

First of all, the FACS system was turned 180 degrees: the task was not to decompose existing facial expressions into their components but, on the contrary, to assemble animation for the game's heroes out of those components. As a result, Ken Birdwell's team obtained an extensive database of how and where to move each part of the face in order to get the desired emotion.
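In practice such a database boils down to blending: each action unit can be stored as a morph target, a set of per-vertex offsets, and an emotion becomes a weighted sum of those offsets over the neutral face. The sketch below illustrates the idea on an invented four-vertex "mesh"; none of the names or numbers come from the Source engine.

```python
import numpy as np

# Hypothetical per-AU morph targets: each action unit is stored as
# vertex offsets relative to the neutral face. A real head has
# thousands of vertices; four stand in for them here.
NUM_VERTS = 4
AU_DELTAS = {
    1:  np.tile([0.0,  0.010, 0.0], (NUM_VERTS, 1)),  # inner brow up
    2:  np.tile([0.0,  0.008, 0.0], (NUM_VERTS, 1)),  # outer brow up
    5:  np.tile([0.0,  0.004, 0.0], (NUM_VERTS, 1)),  # upper lid up
    26: np.tile([0.0, -0.030, 0.0], (NUM_VERTS, 1)),  # jaw drop
}

def pose_face(neutral: np.ndarray, weights: dict[int, float]) -> np.ndarray:
    """Assemble an expression: neutral face plus weighted AU offsets."""
    posed = neutral.copy()
    for au, w in weights.items():
        posed += w * AU_DELTAS[au]
    return posed

# "Surprise" (1+2+5B+26) as AU weights, with B mapped to 0.4 as before.
surprise = {1: 1.0, 2: 1.0, 5: 0.4, 26: 1.0}
print(pose_face(np.zeros((NUM_VERTS, 3)), surprise))
```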

It helps to understand that Motion Capture is nothing more than a way of obtaining information about the motor activity of the facial muscles, whereas the FACS-based animation system already contains all of that information, letting you do without expensive equipment and equally expensive actors. Of course, face capture conveys the individuality of each hero more accurately, but you would hardly call the animation of Half-Life 2 a cheap shortcut.

The second Half-Life
The trick is that the models of almost all Half-Life 2 characters were based on real people, so the digital analogues of the action units were laid out across the virtual heads according to the facial structure of each particular living person. In addition, the developers set upper limits on animation intensity, checking them against the emotional portrait of each hero written by the scriptwriters. As a result, every character in Half-Life 2 had a unique set of facial animations that looked on a par with honest Motion Capture.

Gordon Freeman is not pictured here for two reasons: 1) his face is a composite of several people's faces; 2) his face never needed to be animated.
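One plausible way to picture those per-character limits is a clamp applied on top of the shared action-unit weights; the character names and numbers below are invented for illustration.

```python
# Per-character intensity caps: an expressive character may use the
# full range of an action unit, while a restrained one gets less.
# All values here are made up to illustrate the idea.
CHARACTER_LIMITS = {
    "alyx":  {1: 1.0, 2: 1.0, 5: 0.9, 26: 1.0},
    "breen": {1: 0.7, 2: 0.7, 5: 0.6, 26: 0.8},  # more restrained
}

def apply_limits(character: str, aus: dict[int, float]) -> dict[int, float]:
    """Clamp each AU weight to the character's allowed maximum."""
    limits = CHARACTER_LIMITS[character]
    return {au: min(w, limits.get(au, 1.0)) for au, w in aus.items()}

# The same "surprise" reads differently on different faces:
surprise = {1: 1.0, 2: 1.0, 5: 0.4, 26: 1.0}
print(apply_limits("alyx", surprise))
print(apply_limits("breen", surprise))
```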
As a result, Valve got an extremely powerful and very flexible facial animation system that looked great yet did not devour thousands of person-days of development. Moreover, giving up Motion Capture made it possible to generate character animation automatically from audio files, freeing the animators from an enormous amount of routine work. Less routine = more attention to detail = a higher-quality final product.
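The audio-driven part can be imagined as a two-step mapping: speech is analysed into phonemes, and each phoneme selects a viseme, a mouth shape expressed as a bundle of action-unit weights. The sketch below fakes the speech-analysis step with a hand-written phoneme list; the tables are simplified stand-ins, not Valve's actual data.

```python
# Phonemes (which in a real pipeline would come from analysing the
# .wav file) mapped to visemes, i.e. named mouth shapes.
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "wide",
    "UW": "round", "M": "closed", "B": "closed",
}

# Each viseme as a bundle of AU weights. The AU numbers are real FACS
# codes (25 lips part, 26 jaw drop, 20 lip stretcher, 18 lip pucker,
# 24 lip presser); the weights are invented.
VISEME_AUS = {
    "open":   {26: 0.8, 25: 0.6},
    "wide":   {20: 0.5, 25: 0.3},
    "round":  {18: 0.7},
    "closed": {24: 0.4},
}

def lipsync_track(phonemes):
    """Turn timed phonemes [(time, phoneme)] into AU-weight keyframes."""
    return [(t, VISEME_AUS[PHONEME_TO_VISEME[p]]) for t, p in phonemes]

print(lipsync_track([(0.00, "M"), (0.12, "AA"), (0.30, "UW")]))
```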

Facial animation is far from the only interesting topic worth dissecting in the context of Half-Life 2. Character movement animation here works in tandem with fairly advanced artificial intelligence, Newtonian physics is implemented with astonishing accuracy for 2004, and the zombies are wonderfully juicy. I will definitely get to some of that next time.
