Fulldome & Virtual Cameras in Mawson’s Huts

A virtual camera in Mawson's Huts

People often find screen resolution difficult to understand. I recall, from my former life as a university lecturer, the blank stares of arts students as they tried to grapple with something vaguely mathematical in nature – so hopefully I can explain it a bit better these days! Simply put, we think of screen resolution in terms of pixels – the tiny RGB (red-green-blue) elements that make up a screen – and the number of them determines the resolution, or apparent sharpness, of the image on the screen. More pixels equals more resolution, though pixels can also be big or small, so apparent resolution also varies with the distance of the viewer's eyes from the screen. That's why we can look at a big advertising billboard and see an apparently sharp photographic image, yet when we view it close up we find it is made of huge dots and the image seems amazingly coarse.
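To make the viewing-distance point concrete, here's a tiny Python sketch that computes the angle a single pixel subtends at the eye and compares it to the roughly one-arcminute acuity of normal vision – the pixel sizes and distances below are illustrative guesses, not measurements:

```python
import math

def pixel_angle_arcmin(pixel_size_m, viewing_distance_m):
    """Angle subtended by one pixel at the eye, in arcminutes."""
    angle_rad = 2 * math.atan(pixel_size_m / (2 * viewing_distance_m))
    return math.degrees(angle_rad) * 60

# Illustrative assumptions: an HDTV pixel of ~0.5 mm seen from 3 m,
# versus a billboard 'dot' of ~10 mm seen from 50 m and from 2 m.
cases = {
    "HDTV pixel @ 3 m":     (0.0005, 3.0),
    "billboard dot @ 50 m": (0.010, 50.0),
    "billboard dot @ 2 m":  (0.010, 2.0),
}

for label, (size, dist) in cases.items():
    a = pixel_angle_arcmin(size, dist)
    note = "looks sharp" if a <= 1.0 else "visibly coarse"
    print(f"{label}: {a:.2f} arcmin ({note}, relative to ~1 arcmin acuity)")
```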

Standard PAL TV has a resolution of 720×576 pixels (px) – a total of 414,720 px on display; full 1920×1080 HDTV shows 2,073,600 px – around 2 megapixels (MPx). Contemporary Hollywood digital cinema standards are Academy 4k (4096×3112 = 12,746,752 px) and super 35mm – the RED camera (4520×2540 = 11,480,800 px) – though there are many competing solutions and all sorts of arcana such as rectangular pixels, anamorphic imaging and so on. In the planetarium/fulldome industry we talk about 3k, 4k and 8k resolution – meaning images that are not rectangular in nature, but full circular fisheye, with diameters of around 3600 px, 4096 px and 8192 px respectively; this translates into 12,960,000 (~13 MPx), 16,777,216 (~17 MPx) and 67,108,864 (~67 MPx) on screen.
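The arithmetic behind these figures is simple enough to check in a few lines of Python – the dimensions below are just the ones quoted above, with the fulldome formats treated as square frames of the given diameter:

```python
# Pixel counts for the formats mentioned above (width x height).
formats = {
    "PAL SD":          (720, 576),
    "HD 720p":         (1280, 720),
    "HD 1080p":        (1920, 1080),
    "Academy 4k":      (4096, 3112),
    "RED (super 35)":  (4520, 2540),
    "Fulldome 3k":     (3600, 3600),
    "Fulldome 4k":     (4096, 4096),
    "Fulldome 8k":     (8192, 8192),
}

for name, (w, h) in formats.items():
    px = w * h
    print(f"{name:>14}: {w} x {h} = {px:>10,} px  (~{px / 1e6:.1f} MPx)")
```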

This diagram illustrates the relative dimensions of screen images based upon the unit pixel:

Screen Dimensions

The grey rectangle right at the centre is standard definition TV, the two light grey rectangles represent the 1280×720 and 1920×1080 HD video standards. So we can see that fulldome resolutions easily meet – and in the case of 8k far exceed – contemporary digital cinema standards.

I’ve been shooting fulldome with a 21-megapixel Canon EOS 5D Mk II camera and a Sigma 8mm 180º circular fisheye lens – this gives about 3700×3700 pixels per frame (~13.7 MPx). This material will probably be upscaled to 4k resolution, to match the 4k computer-rendered material I am also generating for the movie I’m currently shooting – we’ll see (that’s something best determined when I return home, as the computational processing of all this data is quite demanding). Furthermore, the sequences will be running at 30 frames per second (by comparison, PAL is 25fps, film 24fps and NTSC 29.97fps). All in all a prodigious amount of data, which I anticipate will run into several terabytes for the complete movie.
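To give a feel for where the terabytes come from, here's a rough back-of-envelope sketch in Python. The per-frame sizes and running times are assumptions for illustration only (a 5D Mk II RAW frame is in the region of 25 MB, and a 16-bit 4k×4k stitched TIFF somewhere around 100 MB), not measured values:

```python
# Back-of-envelope data estimate. All inputs are illustrative assumptions.
FPS = 30               # playback rate mentioned above
RAW_FRAME_MB = 25      # assumed 5D Mk II RAW (.CR2) frame size
PANO_FRAME_MB = 100    # assumed 16-bit 4k x 4k stitched TIFF
PASSES = 3             # three dolly passes feed each stitched panorama frame

def terabytes(minutes_of_footage):
    frames = minutes_of_footage * 60 * FPS
    raw_mb = frames * PASSES * RAW_FRAME_MB   # source fisheye frames
    pano_mb = frames * PANO_FRAME_MB          # stitched panorama frames
    return (raw_mb + pano_mb) / 1e6           # MB -> TB (decimal)

for minutes in (10, 20, 30):
    print(f"{minutes} min of finished footage: ~{terabytes(minutes):.1f} TB")
```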

In an effort to maximize the flexibility of the shoot, I thought it would also be useful to develop a technique whereby I can ‘virtually’ reshoot material if I need to. This is an experiment, but I don’t see why it shouldn’t work – the proof will be in the pudding. Here’s an illustration of the idea, using the Hurley Dolly and the fisheye camera rig:

Fisheye Shoot

The camera is run along the tracks performing a fisheye shoot – the blue hemisphere indicates the 180º field of view (FOV). This is a fairly slow process, shooting 1 frame per second, and the run over a 6 metre track takes about 30 minutes to 1 hour. The camera is angled so that it doesn’t shoot the tracks below it – the floor of the scene will be composited in later, using 3D compositing software.
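As a quick sanity check on what a single pass yields, using only the figures just quoted:

```python
# One dolly pass: 1 frame per second over a 6 m track, 30-60 minutes per run.
CAPTURE_FPS = 1
PLAYBACK_FPS = 30
TRACK_LENGTH_M = 6.0

for run_minutes in (30, 60):
    frames = run_minutes * 60 * CAPTURE_FPS
    playback_seconds = frames / PLAYBACK_FPS
    apparent_speed = TRACK_LENGTH_M / playback_seconds
    speedup = (run_minutes * 60) / playback_seconds
    print(f"{run_minutes} min run: {frames} frames -> {playback_seconds:.0f} s of "
          f"footage, dolly appears to move {apparent_speed:.2f} m/s "
          f"({speedup:.0f}x time-lapse)")
```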

This run is repeated twice more with the camera at different orientations – illustrated below:

Panoramic overlap

This means that I end up with 3 fulldome sequences of frames that correspond spatially and temporally with each other – enabling me to stitch three frames together for each frame of motion. This creates a partial panorama movie (the stitched test frames I’ve made so far are rushed, so they’re not representative of the finished panos).
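The per-frame stitching lends itself to batching: stitch one representative triplet by hand (in Hugin, say) to get a project template, then reuse that template’s alignment for every time step. Here’s a minimal sketch of the idea in Python – assuming Hugin’s command-line tools nona and enblend are installed, and that the three passes are stored as identically numbered frame files (the file layout and template name here are hypothetical, not my actual pipeline):

```python
import subprocess
from pathlib import Path

# Assumed layout: pass1/frame_0001.tif, pass2/frame_0001.tif, pass3/frame_0001.tif, ...
# 'template.pto' is a Hugin project made by stitching one representative triplet;
# its alignment is reused for every frame, since the camera orientations don't change.
PASSES = [Path("pass1"), Path("pass2"), Path("pass3")]
TEMPLATE = "template.pto"
OUT = Path("panos")
OUT.mkdir(exist_ok=True)

frame_names = sorted(p.name for p in PASSES[0].glob("frame_*.tif"))

for name in frame_names:
    images = [str(d / name) for d in PASSES]
    prefix = str(OUT / name.replace(".tif", "_remap"))
    # Remap each source image through the template's lens/orientation parameters...
    subprocess.run(["nona", "-o", prefix, TEMPLATE, *images], check=True)
    remapped = sorted(str(p) for p in OUT.glob(Path(prefix).name + "*.tif"))
    # ...then blend the remapped layers into a single panorama frame.
    subprocess.run(["enblend", "-o", str(OUT / name), *remapped], check=True)
```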

When re-projected and perspective-corrected in a 3D space, I should, in theory, be able to point a virtual camera within the scene and arbitrarily change its direction as it moves along the tracks. Naturally, there are limitations here – re-shooting 180º fisheye has more constraints on movement than virtually re-shooting with a standard lens (the possibilities there for standard documentary and visual forensics are very exciting). There is also the issue of changing light over time – obviously the three passes will not exactly correspond. The ideal would have been to shoot the sequences at exactly the same time over three days, hoping for similar weather, or to shoot on a cloudy day with little change in light. However, I’ve found that panorama stitching programs handle light changes fairly well provided the movement of light over time isn’t too pronounced – and much can be achieved via synthetic relighting in software. I suspect most viewers wouldn’t notice these effects, as they’ll be too busy attending to other events in the scene.
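The ‘virtual camera’ step itself boils down to a re-projection: for each pixel of the desired rectilinear view, work out which direction it looks in and sample the panorama there. Here’s a minimal sketch of that mapping using NumPy and OpenCV, assuming the stitched frames are saved as full equirectangular images – an illustration of the principle rather than my actual pipeline, and the sign conventions (plus the fact that these panoramas are only partial) would need adjusting in practice:

```python
import numpy as np
import cv2

def virtual_camera_view(equi_img, yaw_deg, pitch_deg, fov_deg=60, out_w=1280, out_h=720):
    """Extract a rectilinear 'virtual camera' view from an equirectangular panorama."""
    H, W = equi_img.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels

    # A ray through each output pixel, with the camera looking down +z.
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2,
                         np.arange(out_h) - out_h / 2)
    rays = np.stack([xs, ys, np.full_like(xs, f, dtype=float)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays by pitch (about x) then yaw (about y).
    p, y = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    rays = rays @ (Ry @ Rx).T

    # Direction -> longitude/latitude -> pixel coordinates in the panorama.
    # (Exact signs depend on how the equirectangular image was written.)
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))
    map_x = ((lon / (2 * np.pi) + 0.5) * W).astype(np.float32)
    map_y = ((lat / np.pi + 0.5) * H).astype(np.float32)

    return cv2.remap(equi_img, map_x, map_y, cv2.INTER_LINEAR)

# e.g. pano = cv2.imread("panos/frame_0001.tif")
#      view = virtual_camera_view(pano, yaw_deg=35, pitch_deg=-10)
```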

It will take an awful lot of image processing to create the panoramic sequences – it is non-trivial – so, as with all experiments, I expect some successes and plenty of failures. I’ve learned a lot in the process, and it will be interesting to work on the technique and the technology over the next year to see whether it can evolve into a genuinely workable solution – it certainly opens up a world of visual possibilities.

Needless to say, the final output movie – when projected onto a dome or anamorphically re-projected for flat-screen – will look perfectly normal to the viewer.

best,

Peter
