Visualizing GVDem2008

Some months ago I attended a workshop of the CEAMARC group of the Census of Antarctic Marine Life (CAML), held at the Australian Antarctic Division.

It provided me with a fascinating overview of the extensive activities of both CAML and CEAMARC, and opened the doors to an amazing array of visualisation possibilities.

One of the most fascinating to me is the work being done by Dr Rob Beaman of James Cook University, an ocean-mapping scientist who has developed a very high resolution Digital Elevation Model (DEM) of the bathymetry and topography of the Antarctic seabed around the George V and Terre Adelie continental shelf and margin (the region of the Mertz Glacier tongue) – also available at the Antarctic Data Centre.

The metadata description of the model (GVDem2008) is as follows:

“This dataset comprises Digital Elevation Models (DEMs) of varying resolutions for the George V and Terre Adelie continental shelf and margin, derived by incorporating all available singlebeam and multibeam point depth data into ESRI ArcGIS grids. The purpose was to provide revised DEMs for Census of Antarctic Marine Life (CAML) researchers who required accurate, high-resolution depth models for correlating seabed biota data against the physical environment.

The DEM processing method utilised all individual multibeam and singlebeam depth points converted to geographic xyz (long/lat/depth) ASCII files. In addition, an ArcGIS line shapefile of the East Antarctic coastline showing the grounding lines of coastal glaciers and floating iceshelves, was converted to a xyz ASCII file with 0 m as the depth value. Land elevation data utilised the Radarsat Antarctic Mapping Project (RAMP) 200 m DEM data converted to xyz ASCII data. All depth, land and coastline ASCII files were input to Fledermaus 3-D visualisation software for removal of noisy data within a 3-D editor window.

The cleaned point data were then binned into a gridded surface using Fledermaus DataMagic software, resulting in a 100 m resolution DEM with holes where no input data exists. ArcGIS Topogrid software was used to interpolate across the holes to output a full-coverage DEM. ArcGIS was used to produce the 250 m and 500 m resolution grids, then clip the 100 m and 250 m resolution grids at the 2000 m depth contour.”

The dataset is the result of a prodigious amount of work correlating a huge array of different data types from a variety of sources. Rob is an expert in Fledermaus and ArcGIS – complicated pieces of professional GIS and scientific visualisation software.

It is easy to underestimate the amount of labour involved in this effort by many people – we see what appears to be just a rainbow-coloured image stuck on a sphere (see below) – but in reality it is a hard-won measurement of the real world, created from days, weeks and months of ships assiduously sailing structured survey patterns on the ocean at the end of the world, trawling multibeam and side-scan sonars through the water – often in challenging seas. The data must be collected, cross-matched and interpolated, and the holes patched – it really is a technical and observational marvel.
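For a flavour of what that binning-and-interpolation step looks like in the abstract, here is a minimal sketch in Python with numpy and scipy – emphatically not the Fledermaus DataMagic / ArcGIS Topogrid workflow Rob actually used, and with a hypothetical file name and cell size:

import numpy as np
from scipy import interpolate

# Hypothetical ASCII xyz file of soundings: longitude, latitude, depth columns
lon, lat, depth = np.loadtxt("soundings.xyz", unpack=True)

# Bin the points into a regular grid (cell size is a placeholder, in degrees)
cell = 0.001
lon_edges = np.arange(lon.min(), lon.max() + cell, cell)
lat_edges = np.arange(lat.min(), lat.max() + cell, cell)
sums, _, _ = np.histogram2d(lon, lat, [lon_edges, lat_edges], weights=depth)
counts, _, _ = np.histogram2d(lon, lat, [lon_edges, lat_edges])
grid = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)  # mean depth per cell

# Interpolate across the holes (cells that received no soundings)
lon_c = 0.5 * (lon_edges[:-1] + lon_edges[1:])
lat_c = 0.5 * (lat_edges[:-1] + lat_edges[1:])
gx, gy = np.meshgrid(lon_c, lat_c, indexing="ij")
known = ~np.isnan(grid)
filled = interpolate.griddata((gx[known], gy[known]), grid[known], (gx, gy), method="linear")

Topogrid does something considerably more sophisticated than a linear fill, of course, but the shape of the problem – bin the soundings, then patch the holes – is the same.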

The spatial coverage of the dataset is:

S: -69.0°; N: -63.0°; W: 138.0°; E: 148.0°; Min Altitude: 0 m; Max Altitude: 2391 m; Min Depth: 0 m; Max Depth: -4108 m

and temporal coverage is:

Start Date: 2008-03-17 ; Stop Date: 2008-11-25

On Google Earth it looks like this:

GVDem2008 in Google Earth

(Note: the red circle in the centre, on the coast, indicates the exact spot of Frank Hurley’s darkroom in Mawson’s Huts – a panorama I shot in 2008 – click here to view.)

BUT – there is a very interesting and perplexing feature of all these visualisation programs: there is generally no easy-to-use pipeline for generating highly ‘realistic’ renditions of data. It’s something I’ve struggled with for years, and there is a huge learning curve involved in translating data into a kind of software-agnostic intermediate form that can then be read by more standard commercial and open-source 3D modelling packages. Obviously, these packages are generally designed for quite different purposes, but it is still no simple feat to get data from GIS software into a landscape visualisation package like Vue, or modellers like Maya, Cinema 4D or Blender – and to retain features such as correct geolocation of the dataset on a spherical world/terrain model, and to be able to simulate physical skies, seas and correct astronomical positions of heavenly bodies – like getting the sun right for a certain time of year. It IS do-able, as I am about to demonstrate, but it is not simple. It is quite extraordinary how many 3D modelling packages cannot handle lat/long spherical coordinate systems – generally everything operates in a flat Cartesian XYZ space, so a bit of data and format wrangling is always required. Nevertheless, the results are promising.
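To give a concrete sense of what that wrangling involves, here is a small, hedged sketch in Python: reprojecting geographic lon/lat/depth points into a flat Cartesian space that a modeller will accept, using pyproj and the Antarctic Polar Stereographic projection (EPSG:3031). The scene scale factor is arbitrary and the example point is purely illustrative:

import numpy as np
from pyproj import Transformer

# WGS84 lon/lat -> Antarctic Polar Stereographic (metres), i.e. a flat XY plane
lonlat_to_xy = Transformer.from_crs("EPSG:4326", "EPSG:3031", always_xy=True)

def to_model_space(lon, lat, depth_m, scene_scale=0.001):
    """Convert lon/lat in degrees and depth in metres to modeller XYZ units."""
    x, y = lonlat_to_xy.transform(lon, lat)
    return np.array([x, y, depth_m]) * scene_scale  # shrink metres to scene units

# e.g. a point near the Mertz Glacier tongue at 500 m depth
print(to_model_space(145.0, -67.0, -500.0))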

What interests me here is an ‘aesthetic’ interpretation of scientific data. The purpose is to enable ways of looking at the data ‘as if it is real’ – which, of course, it is, but it could never actually be seen this way – so it is a form of pseudo-realism or pseudo-naturalism. However, this semiotic framework enables us to perceive it in a more tractable way than via the conventions of scientific visualisation, which are often very diagrammatic and conventionalised – and, in some respects, sometimes unclear. Here I’d argue that good scientific visualisation is as much an art as it is a science.

My experiment here is to generate a convincing fly-through of the GVDem2008 dataset, as if from the point of view of an all-seeing submarine. The fact of the underwater ‘landscape’ is that it is phenomenally dark – below about 200 m of water it is almost pitch black, one explanation for the evolution of bioluminescence in many marine organisms. What’s more, the overwhelming majority of the world’s oceans are precisely that – vast pitch-black teeming microbial voids that have not seen sunlight for billions of years. That’s quite a thought, even as they have changed shape and location over geological time with the processes of continental drift and crustal deformation. So, putting this aside, here we are imagining what it would be like to look across these vast landscapes as if we could see – huge vistas extending as far as the eye could see, mountain ranges and vast abysses, gigantic deserts separating oases of life – not only near the shallower waters of the continental shelves, but deep in the ocean, some surrounding submarine volcanic vents (black smokers) and other exotica worthy of Borges’ or Calvino’s fictional Marco Polo, yet stranger.

Working with Rob via email, we figured out that the most practical way of generating the required terrain data was to use 16-bit GeoTIFFs (we tried 32-bit, but this was not accepted by the modelling software). Sixteen bits of grey-scale information are enough to generate sufficiently detailed heightfields in the modellers to avoid obvious ‘stepping’ or quantisation in the data.
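Something along these lines, assuming rasterio is available and using a hypothetical file name, is enough to squeeze a 32-bit float DEM into a 16-bit greyscale heightfield (the min/max values need to be noted somewhere so the vertical scale can be restored in the modeller):

import numpy as np
import rasterio

with rasterio.open("gvdem2008_100m.tif") as src:  # hypothetical input file
    dem = src.read(1).astype("float64")
    profile = src.profile

# Stretch the depth/elevation range linearly across the full 16-bit range
lo, hi = np.nanmin(dem), np.nanmax(dem)
dem = np.where(np.isnan(dem), lo, dem)            # flatten any nodata cells to the minimum
scaled = np.round((dem - lo) / (hi - lo) * 65535).astype("uint16")

profile.update(dtype="uint16", count=1, nodata=None)
with rasterio.open("gvdem2008_100m_16bit.tif", "w", **profile) as dst:
    dst.write(scaled, 1)

print("vertical range:", lo, "to", hi, "metres")  # record this to restore scale later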

Here’s a gallery of initial test renders:

The resolution is fairly coarse, but it serves to demonstrate the utility of the workflow and provides a context in which to work on scaling and texturing – which, of course, in these tests is entirely wrong: in a Mercator projection the range is approximately 673 km North/South and 453 km East/West. Besides the disproportion (aspect ratio), you get no sense of this scale in these images – so that is something to work on, and surprisingly difficult to achieve, especially when working with visual cues such as texture scale and environmental scale – e.g. cloud layers, horizon curvature, and so forth. Furthermore, a couple of the renders show artefacting – horizontal striations that have been introduced by various manipulations of the data. The textures you see are heightfield-sensitive procedural textures, used mainly for illustrative purposes. More stuff to work out!
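As a rough sanity check on those figures, here is the back-of-the-envelope arithmetic from the bounding box given earlier (69–63°S, 138–148°E); the exact North/South number depends on which kilometres-per-degree value you assume:

import math

km_per_deg = 111.32                      # approximate length of one degree of latitude
ns = (69.0 - 63.0) * km_per_deg          # north-south extent
ew = (148.0 - 138.0) * km_per_deg * math.cos(math.radians(66.0))  # east-west at mid-latitude
print(f"N/S about {ns:.0f} km, E/W about {ew:.0f} km")  # roughly 668 km and 453 km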

Currently I am working on a much higher-resolution underwater fly-through (a paradoxical term), which involves an interesting and challenging interplay of decisions: on one hand it would be ideal to have a supercomputer with endless amounts of RAM and ultra-high-end GPUs at my disposal; on the other, it is worthwhile spending the time optimising the geometry of the model so that it is tractable for a high-end desktop computer (a Mac Pro 8-core and a small render farm). There is always a trade-off between actual fidelity and apparent fidelity (I would never work as a relationships counsellor) – as the two images below demonstrate: the first image shows an initial pass at mesh optimisation (set at about 60% vertex reduction); the second, at about 80% reduction. This decimation process reduces the number of vertices/polygons in the DEM polymesh from several million to a far more manageable 515,105 points / 1,023,821 polygons. Smart algorithms optimise the mesh using procedural checking (a mathematically interesting problem, as it can be essentially insoluble) and intelligently redistribute the vertices, maintaining a formal contiguity with the original – but there are no hard and fast rules. A Python sketch of this kind of decimation follows the gallery below.

The benefit of this is a reduced memory footprint as well as a significant decrease in render time. By using bump-mapping and various illumination techniques I have reduced this even further – and every bit counts. The difference between, say, 5 minutes per frame and 4 minutes per frame may not sound like very much, but if you’re running a render that might take 5 weeks, then you have saved yourself a week: it really adds up. I recently finished a render at the University of Melbourne that took nearly 6 months. Fortunately I was away in Antarctica a lot of that time.

GVDem Mesh optimised @ 60%

GVDem mesh optimisation @ 80% reduction

GVDem mesh optimisation @ 80% reduction, showing polylines.
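As promised, here is a sketch of this kind of decimation using Open3D’s quadric decimation – not necessarily what the rendering package does internally, and with a hypothetical exported mesh file and an arbitrary 80% reduction target:

import open3d as o3d

mesh = o3d.io.read_triangle_mesh("gvdem_fullres.obj")        # hypothetical export from the DEM
target = int(len(mesh.triangles) * 0.2)                      # keep ~20% of the triangles
decimated = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
decimated.compute_vertex_normals()                           # recompute normals for clean shading
o3d.io.write_triangle_mesh("gvdem_decimated.obj", decimated)

print(len(mesh.triangles), "->", len(decimated.triangles), "triangles")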

Finally, we make a comparison between two final renders (this is after an extensive render optimisation process – exploring the essentially infinite parameter space of render settings):

Render with a 100% resolution mesh (render time approximately 27 minutes, image size 1324 x 764 px):

GVDem marine render 100% mesh resolution

Render with a 20% resolution mesh (render time approximately 10 minutes; image size 1440 x 1080 rectangular pixels, resampled to 1920 x 1080 square pixels):

GVDEM 1920 x 1080p HD Render

And here’s the payoff for all these efforts: a rapid decrease in render time, with little difference in apparent visual resolution. To the trained eye there will be minor differences and even some artefacting; to the untrained eye – especially when watching this in motion – there will be no observable change.
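To put numbers on why this matters, here is the back-of-the-envelope arithmetic for a hypothetical 60-second fly-through at 25 frames per second:

frames = 60 * 25                                   # a one-minute fly-through at 25 fps
for minutes_per_frame in (27, 10, 5, 4):
    days = frames * minutes_per_frame / 60 / 24
    print(f"{minutes_per_frame:>2} min/frame -> {days:.1f} days of rendering")

At 27 minutes per frame that single minute of animation is roughly a month of rendering; at 10 minutes per frame it is about ten days.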

The tracks that we see here are artefacts from multibeam sonar – the devices used to record the ocean floor – at this point about 3 kilometres underwater. Somewhat bemusingly, I had assumed these were tracks of iceberg scour, as that is what I had been looking for (but in the wrong place) – Rob informs me that, of course, iceberg scours occur much further in towards the shoreline, in shallower water. This is clearly where a sense of scale becomes crucially important – and I had no clear point of reference. It just goes to show that working with specialists – people who actually know and understand what they are looking at – is imperative in this enterprise. Nevertheless, we can see the fascinating submarine canyon system in this environment, and there is a lot more to do with these initial experiments. Rob is now guiding me through the environment, and I think we’ll come up with a great fly-through exploring the ‘interesting bits’ and begin to tell the scientific story of this part of the world’s oceans: it’s an art-science collaboration.

Here are the immediate benefits of a good conversation:

GVDem Proposed Flight Path

This is extremely useful. Things now have names (you’ll never find them on a world map anywhere), and we can see where the Mertz and Ninnis Glacier Tongues are – and for me, well, I know Cape Denison extremely well, having spent two summers there, so this gives me a sense of place and scale. What we’re looking at is enormous. Fascinating!
