VMI Blog VI: Metahumans Lighting Preview
One of the challenges of creating ‘believable’ Metahumans for VMI has been striking a fine balance between creating CG surrogates for our scientists – ones they are more-or-less happy with – and avoiding stereotypes and cliches. It is not immediately apparent that the Metahuman interface sets out to avoid these, especially if you inspect the absurd standard head/body size ratios on offer. This is most pronounced in the available ‘body types’: choose anything other than ‘average’ (why don’t they have ‘median’?) and the results veer towards caricature. There also seems to be a pronounced in-built bias towards believable ‘male’ faces and bodies over ‘female’ ones. A bit of coded bias in action?
I hope they address this in future iterations – it’s not actually a terribly difficult problem – just a matter of confronting systemic prejudices that designers of systems bring to bear. OK, well maybe it is a bit hard. But it must be done and be shown to be done. That in itself is an interesting problem – how would you show it? How would you measure it, without running the risk of becoming a Nineteenth-Century Physiognomist? At what point does aesthetic software design converge with scientific and medical statistics?
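For what it’s worth, one could imagine measuring it without ever measuring a face – measure the tool’s output instead. Here is a minimal sketch, assuming you had collected believability ratings for matched ‘male’- and ‘female’-preset avatars; all of the data and names below are hypothetical placeholders, not anything from the Metahuman toolset:

```python
# A sketch of one way to 'show it': compare believability ratings gathered
# for matched 'male'- and 'female'-preset avatars. All names and numbers
# here are hypothetical placeholders, not real Metahuman data.
from scipy import stats

# Hypothetical ratings (1-10) from the same panel of viewers, one score per
# avatar, with the presets otherwise matched on age, build, lighting, etc.
male_preset_scores = [7.8, 8.1, 7.5, 8.3, 7.9, 8.0, 7.6, 8.2]
female_preset_scores = [6.2, 6.9, 6.5, 7.1, 6.4, 6.8, 6.0, 6.6]

# Welch's t-test: is the gap in mean believability larger than chance?
t_stat, p_value = stats.ttest_ind(
    male_preset_scores, female_preset_scores, equal_var=False
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A consistently significant gap across matched presets would be evidence of
# in-built bias in the defaults -- no physiognomy required, since you are
# measuring the tool's output, not human faces.
```

The point of the sketch is that the statistics attach to the software’s defaults, not to people – which is perhaps where aesthetic software design and scientific statistics can converge without sliding into physiognomy.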
The Metahuman parametric space is – of course – huge, so decisions about how to control physical parameters have to be made via the UI, which must articulate a vast range of options in a reducible way. Yet the difference between, for instance, ‘overweight’ and ‘skinny’, or ‘tall’ and ‘short’, is cartoonish, to say the least. Perhaps this is a legacy of gaming cliches, where figures are pushed into extreme categories and physiological ratios: uber-babes, super-men, ugly old crones, and wizened ancients. Sigh – so much for the vaunted ‘realism’ these approaches bring. It is only in the finer gradations around the middle of the bell-curve distribution that something resembling actual portraiture might be found. It is all too easy to wander off inadvertently into caricature – yet the human brain is so attuned to the nuances of faces and bodies that it’s those slight off-notes that send surrogacy plummeting into ‘weirdness’ – that sinister Other that isn’t-quite-right.
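To make the bell-curve point concrete, here is a minimal sketch – with entirely hypothetical parameter names, nothing drawn from the actual Metahuman interface – of the difference between treating a slider’s extremes as just as likely as its middle, and sampling it from a narrow Gaussian centred on the mean:

```python
# A toy illustration of the bell-curve argument: believable bodies cluster
# near the middle of the distribution, not at the cartoonish slider endpoints.
# Parameter names are hypothetical, not part of any Metahuman API.
import random

def sample_parameter(mean=0.5, sd=0.08, lo=0.0, hi=1.0):
    """Draw a normalized slider value clustered near the population mean.

    A uniform draw over [0, 1] makes 'uber-babe' and 'wizened ancient'
    as likely as 'average'; a narrow Gaussian keeps most samples in the
    believable middle of the bell curve.
    """
    while True:
        value = random.gauss(mean, sd)
        if lo <= value <= hi:  # simple rejection to stay within slider range
            return value

# Hypothetical body parameters, each normalized to a 0-1 slider.
body = {name: sample_parameter() for name in ("height", "weight", "shoulder_width")}
print(body)
```

A UI built around something like this would make ‘average’ the easy default and the extremes the effortful choice – the opposite of the current arrangement.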
Anyway, before I detour into a lecture on bodily semiotics, I should say that it is remarkable that this is tractable at all. With sensible choices and some understanding of portraiture and physiology, it is possible to work within this immense cartoon space and find, like Leonardo’s cartoons, an avatar that people might actually want to inhabit – for a brief time, anyway. A fleeting self-representation: not too ‘self’-ish, not too vaingloriously youngish, not too conventionally beautiful, not too narcissistic, not too modest or self-effacing – something personable and comfortable to project a scientific story from.
Of course, we all see ourselves in a glass darkly, and yet that image is reversed. But we see that image many more times than we do the photographs that somehow ‘capture’ us – and the myriad more that don’t.
The thing to always bear in mind is that this is a bunch of shaders, algorithms, and computational geometry resembling the real. I’m very interested to see how it evolves, as it seems to me a perfect platform for training machine-learning systems that will, in the future, be able to create endless believable humanoid individuals. That’s probably why it’s a ‘free’ service – I must be trading my knowledge about ‘believability’ in exchange for access – the platform will get better at the job, and also more independent.
That’s not a bad trade-off at the moment – I guess I should read the fine print. At least in the short term it provides a remarkable way to rapidly create pretty believable, quick-turnaround representations of humans – with a bit of flawedness, too much symmetry (but I know I can turn that off), and some caricaturish defaults – that can help our VMI phantasms along their way. It’s an experiment. And quite fun to do for the moment – but also why one may not want to make the Metahumans™ too much like the ©Human.
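On that symmetry point: the reason turning it down helps is that perfectly mirrored features are exactly one of those off-notes the brain catches instantly. A toy sketch of the idea – hypothetical landmark names, not any real Metahuman export format – just nudging one side of each mirrored pair:

```python
# A toy illustration of why dialling down symmetry helps believability:
# perfectly mirrored features read as uncanny, so each mirrored pair gets
# a small asymmetric offset. Landmark names here are hypothetical.
import random

def break_symmetry(landmarks, jitter=0.01):
    """Nudge the left-side landmark of each mirrored pair by a small random amount.

    `landmarks` maps names to (x, y, z) positions with the face centred on
    x = 0, so a perfectly symmetric pair satisfies left.x == -right.x.
    """
    out = dict(landmarks)
    for name, (x, y, z) in landmarks.items():
        if name.startswith("left_"):
            dx, dy, dz = (random.uniform(-jitter, jitter) for _ in range(3))
            out[name] = (x + dx, y + dy, z + dz)  # only one side moves
    return out

eyes = {"left_eye": (-0.031, 0.0, 0.0), "right_eye": (0.031, 0.0, 0.0)}
print(break_symmetry(eyes))
```

The jitter has to stay tiny – a millimetre-scale wobble reads as a face; anything larger reads as an injury, and you are back in caricature territory.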