CMU School of Drama


Friday, March 08, 2019

Weta Digital’s Remarkable Face Pipeline : Alita Battle Angel

Alita: Battle Angel was adapted from Yukito Kishiro’s manga series Gunnm and was directed by Robert Rodriguez. James Cameron had been slated to make the film himself, but after many years of the project not commencing principal production, Cameron handed the Alita reins, along with 600 pages of notes and access to a team of artists and technicians, to Rodriguez, according to the LA Times. Rodriguez shot the film in 57 days, combining performance capture with live-action filming on set.

2 comments:

Elizabeth P said...

I've seen a lot of promotional materials and hype surrounding Alita, mostly because of the creative team behind it, which boasts about its use of digital motion capture. Oftentimes, especially when it comes to new technology, I am a bit hesitant about whether or not the claims will live up to the reality. But Weta Digital has really established itself as the central hub for progress in motion capture for movies. I've seen a lot of movies that have utilized their designers and techniques, and there's something very beautiful about what they do. It always reminds me of the uncanny valley: their characters are so close to being human, and their attention to detail is magnificent, but something is always slightly off. The use of motion capture is interesting because it's being used to help create these more magical worlds and creatures, yet the actual filming is so bare; the world exists only in the minds of the actors and directors. The character of Alita is clearly not human, but the comparisons between the actress and the digital character are striking. I'm excited to see how the field progresses and how this will influence consumable culture.

Hsin said...

Data capture has been advancing rapidly in recent years, and I was lucky enough to work with some of the technology on set. This technology requires a great deal of trial and error, since it was only recently introduced to filmmaking. The capture itself is not the bottleneck; the processing is. When I was touring a system that captures actors' motion in 3D and projects it into a virtual environment, the most difficult part was translating what the machine's sensors see into data humans can understand. Since the image or 3D model needs a final touch before it can be used as artistic material, designing the user interface for the editing software is really the bottleneck of these kinds of projects. On this particular film set, I would guess they used a commercial package of sensors and software, and then made the necessary setting changes to make the re-animation of the actor's face possible.