DARK (2012)

August 3, 2012 in data visualisation, fulldome, movies, projects, research, science, video by Peter Morse

DARK is a fulldome movie that explains and explores the nature of Dark Matter, the missing 80% of the mass of the Universe.

The search for Dark Matter is the most pressing astrophysical problem of our time – the solution to which will help us understand why the Universe is as it is, where it came from, and how it has evolved over billions of years – the unimaginable depths of deep time, of which a human life is but a flickering instant.

But in that instant, we can grasp its immensity and, through science, we can attempt to understand it.

The movie is presented by Dr Alan Duffy, a brilliant young astronomer from the International Centre for Radio Astronomy Research (ICRAR) at the University of Western Australia – who creates simulations of Dark Matter evolution inside supercomputers.

Alan introduces us to the idea of Dark Matter, why astronomers think it exists, and explains why Radio Astronomy is so well-suited to its discovery.

We explore why the new Australian Square Kilometre Array Pathfinder (ASKAP) Telescope, currently under construction in remote Western Australia, will be so important in this scientific quest.

But this is only the beginning.

We journey through completely immersive visualisations of Dark Matter evolution calculated upon some of the world’s fastest supercomputers – cosmological visions on a truly vast scale, in which galaxies themselves are but points of light, distributed across far larger intergalactic structures of Dark Matter. These visualisations, developed by Paul Bourke, demonstrate the cutting-edge of contemporary supercomputer visualisation of massive scientific datasets and astrophysical simulation.

It sounds like Science Fiction, but it’s not. It’s the real stuff. Real Data, seen in this way for the very first time.

If, like our composer, Cathie Travers, you don’t happen to be a Computational Cosmologist, then consider her response:

“It’s mind-blowing that we have this capacity to look into the universe, it doesn’t matter whether I am processing all the relevant data in the correct intellectual manner, it is fabulously and literally wondrous to experience any kind of glimpse into an experience of the infinity beyond my own tiny speck. My memory of seeing the version some weeks back at Horizon is: total and utter pleasure and excitement witnessing the visuals, my feeling that the light generated by Paul’s beautiful visualisations is a representation of what’s happened billions of years ago…a sense that the light of other days was passing through me as the image revolves and rotates around the full-dome. It will stay with me for a very long time – and hopefully with everyone who sees the film…and that is what will encourage the population to support further research.”

Directed by Peter Morse, DARK is an adventure to the very edges of contemporary cosmology and data visualisation, telling a complex scientific story with a touch of humanity – for an intelligent audience.

We hope you enjoy DARK

Update: The movie previews to a select audience at Horizon Planetarium, Scitech, Perth, Western Australia on August 28th 2012. Public release shortly thereafter, details to be announced.

Production details: 4k Fulldome resolution (4096 x 4096 px); 5.1 surround sound audio. Duration: 20 minutes.

Production Credits:

Directed by Peter Morse
Produced by Peter Morse & Paul Bourke
Written by Alan Duffy & Peter Morse
Presented by Alan Duffy
Dark Matter Simulations: Alan Duffy and Robert Crain

Dark Matter Visualisations: Paul Bourke

Music: Cathie Travers

Audio: Peter Morse & Trevor Hilton

Lighting: Peter Morse & Ákos Brúz & John Doyle

Fulldome Timelapse: Peter Morse & Chris Henderson

Digital Sky Milky Way Animation: Carley Tillett

Galaxy Animation: Paul Bourke

Editing, 3D Modelling and Computer Animation, Compositing & Special Effects, Colour Grade: Peter Morse

LadyBug-3 Video: Paul Bourke, Peter Morse

Parkes Panorama courtesy of Alex Cherney

Galaxy Images courtesy of Hubble, STScI, NASA

Milky Way Panorama courtesy ESO/S.Brunier

Compute & Network Support: Jason Tan, Ashley Chew, Khanh Li (iVEC@UWA)

Special thanks to:

Paul Ricketts, Centre for Learning Technology, UWA

Thomas Braunl, UWA Centre for Intelligent Information Processing Systems

John Doyle, Octagon Theatre, UWA

Andreas Wicenec, ICRAR

Sally Hildred, Martina Smith

Funded by iVEC@UWA and Scitech

©2012 iVEC@UWA & Peter Morse


Syn[a]: Visualizing Biometric Data from a Musical Performance

May 8, 2012 in data visualisation, experiments, projects, research, science, technology by Peter Morse

In December 2011 the Syn[a] Group, in concert with AARnet and the TSO, ran a 5 day workshop at the University of Tasmania Conservatorium of Music, where we visualised the biometric data of musical performers and transmitted this over high-bandwidth networks (AARNet) in stereoscopic 3D, creating an immersive augmented telepresence environment.

AARNET News featured an overview of the project:

“In a trial conducted between the Tasmanian Symphony Orchestra, the University of Tasmania’s Conservatorium of Music in Hobart and AARNet in Sydney, the foundations for recreating an immersive musical performance experience were demonstrated.

During January 2012, musicians from the Tasmanian Symphony Orchestra played a variety of pieces that were captured in High Definition 2D and stereoscopic 3D in Hobart and broadcast live across the AARNet network to Sydney.

Several high quality video streams were simultaneously broadcast to a variety of devices, including an off-the-shelf 3D TV, and a multi-channel audio field was captured and recreated to reinforce the spatial arrangement between the musicians.  Additional biometric data was captured in real-time using watch accelerometers and commercial E.M.G. headsets to build visualisations that were composited into augmented broadcast vision.

Overall 4 x Full HD resolution (equivalent to 3840 x 2160, or 8 Megapixels) was transmitted and the data rate reached approximately 800 Mbps – the resulting total rate from the University of Tasmania extended well over 1Gbps, made possible only via the recently built 10 Gbps Basslink circuit.

“Our research demonstrated the viability of next-generation high-bandwidth networks to deliver augmented orchestral performance for immersive environments over AARNet and NBN-equivalent infrastructure” said Project Director, Dr Peter Morse. “This will lead to innovations in music pedagogy and performance for the networked age – bringing orchestral performance to new audiences in exciting and engaging ways that we are only now beginning to explore.”

The trial has resulted in submissions for further funding to a variety of bodies to continue research into remote and augmented performance, including the impacts of using significant bandwidth to reduce latency for simultaneous performance.”
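
To give a concrete sense of what building visualisations from biometric data involves, here is a minimal, hypothetical sketch in Python: a stream of accelerometer samples is smoothed and its overall magnitude mapped to a visual parameter (a hue) that a compositor could overlay on the broadcast vision. This is not the Syn[a] pipeline itself, just an illustration of the kind of mapping involved; the sensor, window size and colour mapping are all assumptions.

    import numpy as np

    def motion_to_hue(samples, window=16):
        # samples: (N, 3) array of x/y/z accelerometer readings from a wrist-worn sensor.
        magnitude = np.linalg.norm(samples, axis=1)
        kernel = np.ones(window) / window
        smoothed = np.convolve(magnitude, kernel, mode="same")  # simple moving average
        # Normalise to 0..1 and use as a hue: still passages read blue, vigorous ones red.
        norm = (smoothed - smoothed.min()) / (np.ptp(smoothed) + 1e-9)
        return 0.66 * (1.0 - norm)

    # Synthetic data standing in for a real sensor stream.
    hues = motion_to_hue(np.random.normal(size=(1024, 3)))

In practice the interesting part is keeping such a mapping responsive in real time alongside the video and audio streams, which is where the high-bandwidth network matters.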

 

A public Google Plus album of photos can be seen here:

https://plus.google.com/photos/105046666883708810988/albums/5685534367205569697


The results of this research formed the basis of a performance at the MONA/TSO Synaesthesia Event in 2012:

https://www.tso.com.au/event/synaesthesia-music-of-colour-and-mind/

Review in AussieTheatre.com.au:

Apparently synaesthetic composers Scriabin and Rimsky-Korsakoff fought constantly over their “definition” of the F sharp chord: one experienced it as violet and the other as orange. This story was one of many told by synaesthetic musician Andrew Legg who, with a group of artist-technologists, performed three keyboard improvisations while his body and brain were wired up to computers. Projections of real-time digital imaging of Legg’s vital signs ranged from interesting but prosaic colourful graphs to a beautiful, trippy, multicoloured lava-lamp-like animation. Legg’s work, Syn[a]: Clavier a Lumiere, was the closest we came all weekend to experiencing the synaesthete’s inner eye.

Review in The Australian

Article in the Hobart Mercury

Link to synaesthesia research website

 

Visualizing Pausiris @ MONA

July 12, 2011 in data visualisation, projects, research by Peter Morse

Visualization of a 2000-year-old Egyptian Mummy – Peter Morse & Paul Bourke

Pausiris Final State © MONA/Peter Morse/Paul Bourke

We first started working upon the idea of visualizing one of the Egyptian Mummies from the Museum of Old and New Art (MONA, Tasmania) collection in 2007, when I was approached by the exhibition designer for MONA, Adrian Spinks. I was working at the University of Western Australia at the time, so a visit was arranged to the Western Australian Supercomputer Program (now iVEC@UWA) to discuss ideas with my collaborator Paul Bourke. This led to a series of meetings with David Walsh, the MONA curatorial and design teams, and a number of site visits during the construction of the museum.

MONA had arranged for two mummies to be scanned at the Royal Hobart Hospital, using their new Computed Tomography (CT) scanner, undertaken by the radiologist Andrew Saunders with Gerald McInerney (see: medical imaging). This created two key datasets – the first, a set of DICOM files for the mummy and coffin of Ta-Sheret-Min (Egypt, Late Period, end 26th – 28th Dynasty, c. 664–399 BCE; Human remains, linen wrappings, wood, plaster, pigment).

Ta-Sheret-Min – Test Render © MONA/Paul Bourke

This initiated the first visualization project – of Ta-Sheret-Min – which took place during 2008-9. We looked at a wide range of exemplars of mummy visualization at a variety of international museums, coming across some pretty impressive examples of volumetric visualization using a variety of different techniques. However, none of them appeared to have been developed to the very high resolution we sought to achieve in this project. This initial enterprise produced a lot of work and crucial insights into how to develop visualization techniques appropriate for the project, as well as some unusual ideas.

An initial proposal for the exhibit was to create a special type of hologram. At the time a new synthetic holographic technology had been developed which was both full colour and supported animation as a function of the viewers’ position. Paul Bourke created several “panoramagrams” of Ta-Sheret-Min, displaying holographic animation as the viewer moved their point of view left to right across the hologram surface. The view gradually reveals the interior of the mummy, from the exterior, through funerary bindings, to the skeletal structure. However, there are currently distinct constraints in resolution and scale for this material- and compute-intensive process. Similarly, the volume dataset can be realised in laser-etched crystal, or it could be printed using 3D rapid prototyping techniques.

In these instances data visualization asks very interesting questions about portraiture, ethics, remembrance, resemblance and commodification.

After all, these processes can be applied to any volumetric dataset – not just mummies. For instance, you could now have yourself (alive or recently deceased beloved family or pets etc.) CAT-scanned and made into a hologram, a crystal paper-weight, or a life-size volumetric 3D prototype replicant. In the future will we be able to print bodies or organs using stem cells or some other medium? Surely, it’s all a question of ‘resolution’ and developments in materials science (amongst many others.) We were not tasked to explore these – any further, anyway. But it’s certainly fascinating to speculate within both the realms of the currently possible and the imaginary future. Where could this go? Where will it?

Hologram / Laser-etched Glass Volume © MONA/Paul Bourke

Pausiris – Photo of the Mummy © MONA

In 2010 it was decided to focus upon the second DICOM dataset, that of Pausiris (Egypt, Ptolemaic to Roman Period, 100 BCE – CE 100; Human remains encased in stucco plaster with glass eyes, incised and painted decoration) – as provenance and identity had been confirmed, and the artefact was a rarer and more interesting one. An added benefit was that the skeletal structure of the mummy was far more intact.

The Pausiris mummy had been scanned in three sections, achieving a higher resolution dataset – yet introducing problems with alignment of the parts, due to registration issues on the CT gurney. Paul Bourke resolved these alignment problems using data processing techniques that enabled the accurate registration of the entire volumetric dataset. These were combined into a complete netCDF file suitable for scientific visualization using Drishti, volume visualization software developed by our colleague Ajay Limaye at the Australian National University Vizlab. It was processed upon specially-built computers for the manipulation of high-rez volumetric data (with 96-128GB of RAM) using Nvidia Quadro 6000 GPUs, with 6GB of texture memory. The final volume after trimming was 512 x 512 x 2400 voxels. This is pushing state-of-the-art GPUs near their current absolute limit – and, indeed, it produced many crashes, freezes and problems. Which were, eventually, mostly resolved.
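
For readers curious about what combining the sections into a complete netCDF file looks like in practice, here is a minimal Python sketch. The section files, offsets and data type are placeholders (the real registration work described above was far more involved); it simply stacks three pre-aligned sections along the long axis and writes the result as a netCDF variable a volume renderer can read.

    import numpy as np
    from netCDF4 import Dataset

    # Hypothetical pre-aligned sections, each assumed to be (n, 512, 512),
    # and illustrative slice offsets along the body axis.
    sections = [np.load(f"pausiris_section_{i}.npy") for i in range(3)]
    offsets = [0, 820, 1640]

    depth = offsets[-1] + sections[-1].shape[0]
    volume = np.zeros((depth, 512, 512), dtype=np.uint16)
    for sec, off in zip(sections, offsets):
        volume[off:off + sec.shape[0]] = sec  # later sections overwrite any overlap

    # A 512 x 512 x 2400 volume at 16 bits per voxel is roughly 1.2 GiB of raw
    # data, before the renderer adds its own working buffers.
    print(volume.size, "voxels,", round(volume.nbytes / 2**30, 2), "GiB")

    with Dataset("pausiris_volume.nc", "w") as nc:
        for name, size in zip(("z", "y", "x"), volume.shape):
            nc.createDimension(name, size)
        density = nc.createVariable("density", "u2", ("z", "y", "x"))
        density[:] = volume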

Pausiris – Test Render © MONA/Peter Morse/Paul Bourke

After much examination of how best to approach this, Paul and I conducted a series of renders, ultimately selecting what we felt to be the most effective set of parameters, meeting Adrian’s vision of a gradual revelation of the remains of Pausiris within the sarcophagus.

This is something that David Walsh indicated in a number of conversations: an interest in a kind of phenomenology of the body after death. Long after death: 2000 years; a ‘presencing’ of Pausiris, yet as someone who is ‘truly’ dead – unlike that of the Serrano ‘portrait’ of the recent corpse set in counterpoint across the room, the dead eyes staring towards this immortal death. This is mentioned by Walsh in his commentary published on the iTouch (the ‘O’) exhibition notes (indeed, I understand – the motivation behind it all): it’s a question about liminality; at what point does a deceased body move from a point of recent ‘liveness’ to being an artefact?

Literally: love’s labour, lost.

At what point does the body cease being a ‘she’/‘he’ and become an ‘it’?

This question asks its corollary: how can ‘it’ be interrogated and resuscitated and ‘enlivened’? For Pausiris, it’s a question about deixis set up as a denkmal: who was this person? What was his aliveness like? A person like you and me, I suspect: someone who lived and breathed in another time and who believed different things; he understood the world in ways we no longer do. Interesting stuff.

Many approaches were discarded along the way, though they afford profitable avenues for future exploration (e.g. stereoscopic 3D, different slicing approaches, computer re-animation, holographic visualization, etc.). Decisions here involved the selection of transfer functions and colour gradients that elicited as much visible structure as possible within the data and the most effective way of presenting it – at the highest resolution. This was challenging because of the nature of the artefact – the CT scan basically reveals density information, to which are assigned greyscale values that can then be arbitrarily colorised. However, bone and plaster, for example, have quite similar densities, so distinguishing them is quite difficult via graphics processing alone, whereas relevant and interesting structures may be far more visible to a trained human eye. The possibilities are endless, yet we must be discerning.
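
As a rough illustration of the transfer-function idea described above, the sketch below (Python, with illustrative thresholds rather than the values used for the Pausiris renders) maps normalised CT density to colour via a colour ramp and derives opacity from a density window, so that low-density material such as wrappings fades away while denser material remains visible.

    import numpy as np
    import matplotlib.pyplot as plt

    def apply_transfer_function(density, low=0.25, high=0.75):
        # density: array of CT values normalised to 0..1.
        rgba = plt.get_cmap("bone")(density)                       # colour ramp
        opacity = np.clip((density - low) / (high - low), 0.0, 1.0)
        rgba[..., 3] = opacity                                      # opacity from the density window
        return rgba

    # Example: window a synthetic 8-bit slice standing in for real CT data.
    slice_8bit = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
    rgba_slice = apply_transfer_function(slice_8bit / 255.0)

In the real renders this windowing was done inside the volume renderer, and the hard part, as noted above, is that bone and plaster occupy overlapping density ranges.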

Pausiris Skull – detail example © MONA/Peter Morse/Paul Bourke

The selected renders were composited in After Effects, using masks, complex layer interactions and re-timing procedures. I won’t go into detail other than to say that I am now intimately familiar with every voxel and pixel of the data. It ran through dozens of iterations, requiring lengthy processing operations over a period of several months, before we finally achieved a satisfactory outcome.

The final movie file is rendered at 4000 x 1500 pixel resolution, suitable for display upon the custom software runtime environment using two HD projectors, optical path folding and rear projection inside the specially engineered projection ‘sarcophagus.’

CAD Rendering of Installation Design (Adrian Spinks, MONA)

Final Installation view. Pausiris – Real – Left. Virtual Mummy – Right  © MONA/Peter Morse/Paul Bourke

A technical description of the installation and quartz-composer runtime software can be found here.

My impression of the whole installation at the end of this lengthy and very complex and challenging process is that it is a truly collaborative work of art: a collision of technical innovation, artistic insight, design vision and a unique patronage that, remarkably, all came together to enable us to see these beautiful artefacts in an insightful and original way.

There is something poetic about the transubstantiation of the body of a man who died 2000 years ago into a refrangible object of computational light, revealed by the spectrum from x-rays to optical wavelengths, via technologies that, to him, would seem indistinguishable from magic.

 

Pausiris – Sarcophagus External Detail © MONA/Peter Morse/Paul Bourke

Projection System – detail © MONA/Peter Morse/Paul Bourke

Note: Images courtesy of MONA, Paul Bourke and the author, where indicated.

SCARI : Sullivan’s Cove Augmented Reality Interface

February 2, 2011 in data visualisation, experiments, projects by Peter Morse

A brief overview of our proposed (and currently under development) Sullivan’s Cove Augmented Reality Interface. This is one outcome of our Geeks-in-Residence program with the Salamanca Arts Centre and the Tasmanian Symphony Orchestra. It will be tied in to the development of the National Broadband Network, as it is potentially reliant upon high-bandwidth networking (wired and wireless), both for the delivery of different datatypes and for scaling user interaction to many mobile devices.

At this stage the outline is ambitious, but we anticipate getting a pilot together in the next few months.

Also inspired by the work of:

Paul Bourke, iVEC @ UWA – http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/kinect/

Ferhat Sen, Media Lab Helsinki – http://vimeo.com/19266986

Hobart Fulldome: An Imaginary Immersive Space

July 9, 2010 in antarctica, data visualisation, fulldome, projects by Peter Morse

A concept visualisation for a Hobart Fulldome cinema and visualisation centre, along the Hobart waterfront at Sullivan’s Cove.

Two simple platonic shapes for a platonic concept – of course, I would expect architects to do much better – but it could be very simple. It could be somewhere else entirely – but this seems like a good spot: a neutral zone.

Internally, I would propose a hemispherical 12-18+ metre diameter, 30º or 45º angled screen: Fulldome Cinema – akin to IMAX or Omnimax, but significantly less expensive, no vendor lock-in – and entirely digital. Run on Linux and opensource software (it already exists or we can make it.)

The image here is not to scale – just indicative of audience relationship to the screen. Of course, it could scale from anything between 25-250+ seats, depending upon the size of the development.

Internal Dome Screen Orientation

It would make economic, technical and cultural sense for Hobart to have a production, research and display facility like this. The business case could be well argued.

There’s lots happening in the future around the Hobart waterfront and I hope this sort of idea is on the radar of the relevant authorities.

Fulldome is a rapidly evolving and dynamic medium – the visualisation, creative screen and tourism potentials for Tasmanian and Antarctic Sciences and Arts are blindingly obvious.

Hobart Fulldome would immediately open up a range of national and international screen development networks. It will also lead to the local development of content for international export – as there are literally thousands of such systems being built around the world – desperate for new and innovative content. It is a new and undeveloped market – and we have so much here that is unique and currently unexplored for this exciting medium.

It should be a common resource, a terra incognita, as the size of Hobart precludes development – and ownership – by individual organisations: a space of collaborative imagination is what is needed. This would also impart the requisite creative intellectual dynamism to the environment, as new and unforeseen interactions could develop.

I imagine something with scientist-in-residence and auteur/artist-in-residence programmes. A hybrid space for innovation and regular screenings – a schedule to be balanced and developed.

It would bring life and energy to this currently ‘dead’ area of the waterfront at Sullivan’s Cove, leading to a spread of activity all the way around to Salamanca Place, tied in with the Wireless Waterfront and Tasmanian NBN projects.

Key to this idea would be establishing an intimate association between the Sciences, Screen and Arts organisations that are locating around the area.

There are so many organisations and individuals producing amazing visualisation data and creative content here that could be drawn together into a kind of renaissance. Stories to tell, narratives to unfold, data to be seen and understood in new ways.

Fulldome screen content is already entirely distinct from traditional planetarium applications (such as astronomy) and spans a huge range of genres and styles (e.g. DomeFest) – it is part of the future of immersive cinema.

It would be a very fertile place for innovation and export of research, technologies, visualisation, education and screen content. A place of knowledge for the future of how we envision the world.

It could be attractive to international and national conferences and forums (e.g. Ozviz; ASTC).

Such a facility can be very cost-effective to implement as technology costs have plummeted over the past 10 years. Many cutting-edge technologies are developed here in Australia (e.g. MirrorDome, iDome). Besides MirrorDome systems, fisheye and multi-projector systems are also now cost-effective.

And it needs a bar and a decent coffee shop: places to talk about ideas whilst admiring the view (virtual and real.)

Anyway, I’m just one man planting the seed of an idea…Hobart Fulldome is just a sketch of the possible amongst many…

Hobart Fulldome Model

Addendum:

As this idea is free, and I have an infinite budget in my imagination, why not arrange it so the seats fold down into the floor late-ish in the evening and the screen becomes visible (but not accessible) from a public-space/restaurant/cafe/bar opening out onto the waterfront, so that musicians can play and people can sit and chat and eat, whilst Antarctica, Tasmanian landscapes and – well – other stuff – can wheel silently around them on the screen behind, as people look across the waters – perhaps hiring wireless headsets or using mobile phone apps to watch and listen and interact with the giant screen (cf. Solar Equation). That would be quite an experience – they’d still be learning(!): all these places (visualisation centres, fulldome systems, planetaria) are turned off ‘after hours’ – and, if designed the right way in the first place, they needn’t be. They could still be earning their way.

I guess, whilst, yes, it is good practice to adopt principles from successful exemplars elsewhere, there is always space to innovate and do something uniquely ‘here’ – the Tasmanian Example, something that makes a difference and that others emulate – because it works and it’s new and you’d only be free enough to do it on an island at the end of the world.

Constructive ideas are welcome.

Peter Rasmussen Innovation Award, Sydney Film Festival (2010)

June 21, 2010 in antarctica, data visualisation, fulldome, interviews, mawson's huts, projects by Peter Morse

Peter Rasmussen Innovation Award 2010, 57th Sydney Film Festival from Peter Morse on Vimeo.

2010 PETER RASMUSSEN INNOVATION AWARD WINNER

The Peter Rasmussen Innovation Award, now in its second year, was awarded to Peter Morse and announced at the closing night of the Sydney Film Festival.

Established by a board of trustees made up of friends and collaborators of the innovative Australian filmmaker Peter Rasmussen, who have committed to raise funds in perpetuity for the purpose of awarding a $5,000 cash prize, the Peter Rasmussen Innovation Award is given each year at Sydney Film Festival to an Australian whose work in film, machinima or new media embodies a visionary spirit and a relentless determination in the face of obstacles – financial or otherwise – to create high quality works for the screen. The recipient’s work may be described as fringe, maverick, innovative. It may be pushing boundaries in form or mode of production, and may sit outside the usual categories of films shown at the festival.

Peter Morse has over 20 years’ experience in sophisticated visualisation techniques and content creation. He has in-depth technical skills and production experience in diverse fields such as 3D data visualisation, volumetric rendering, stereoscopic immersive virtual and augmented reality systems and computer programming – as well as video, photographic and film production, audio design and music. He has a wide-ranging creative practice and has exhibited digital media works around Australia and internationally in the USA, Germany, Britain, France, Finland and Holland.

On behalf of The Peter Rasmussen Trustees, Rosemary Blight said, “Peter Morse’s work demonstrates an incredibly high level of technical innovation and practice, including leading work in 3D data visualisation. Peter’s work across both sciences and arts opens up ways for compelling narratives to play on all types of screens and in a huge variety of ways. Peter is an exciting artist to be awarded the Peter Rasmussen Fellowship.”

My many thanks to:

Paul Bourke, Director, Western Australian Supercomputer Program

David Jensen and Rob Easther, Mawson’s Huts Foundation

Chris Henderson, Inventor Extraordinaire

Vicki Sowry, Australian Network for Art and Technology

The Australian Antarctic Division Fellowship

and Screen Tasmania

and everyone else who believed in my work – you know who you are.

I am especially grateful to the trustees of the Peter Rasmussen Fellowship and the Sydney Film Festival for their recognition, support and encouragement.

Visualizing GVDem2008

June 1, 2010 in antarctica, data visualisation, projects, research by Peter Morse

Some months ago I attended a workshop held by the CEAMARC group of the Census of Antarctic Marine Life (CAML) at the Australian Antarctic Division.

It provided me with a fascinating overview of the extensive activities of both CAML and CEAMARC, and opened the doors to an amazing array of visualisation possibilities.

One of the most fascinating to me is the work being done by Dr Rob Beaman of James Cook University, an ocean-mapping scientist, who has developed a very high resolution Digital Elevation Model (DEM) of the bathymetry and topography of the George V and Terre Adelie continental shelf and margin, Antarctica (the region of the Mertz Glacier tongue) – also available at the Antarctic Data Centre.

The metadata description of the model (GVDem2008) is as follows:

“This dataset comprises Digital Elevation Models (DEMs) of varying resolutions for the George V and Terre Adelie continental shelf and margin, derived by incorporating all available singlebeam and multibeam point depth data into ESRI ArcGIS grids. The purpose was to provide revised DEMs for Census of Antarctic Marine Life (CAML) researchers who required accurate, high-resolution depth models for correlating seabed biota data against the physical environment.

The DEM processing method utilised all individual multibeam and singlebeam depth points converted to geographic xyz (long/lat/depth) ASCII files. In addition, an ArcGIS line shapefile of the East Antarctic coastline showing the grounding lines of coastal glaciers and floating iceshelves, was converted to a xyz ASCII file with 0 m as the depth value. Land elevation data utilised the Radarsat Antarctic Mapping Project (RAMP) 200 m DEM data converted to xyz ASCII data. All depth, land and coastline ASCII files were input to Fledermaus 3-D visualisation software for removal of noisy data within a 3-D editor window.

The cleaned point data were then binned into a gridded surface using Fledermaus DataMagic software, resulting in a 100 m resolution DEM with holes where no input data exists. ArcGIS Topogrid software was used to interpolate across the holes to output a full-coverage DEM. ArcGIS was used to produce the 250 m and 500 m resolution grids, then clip the 100 m and 250 m resolution grids at the 2000 m depth contour.”
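
The quoted workflow is, in essence, a matter of binning scattered soundings onto a regular grid and then interpolating the holes. A minimal illustration of the same idea in Python (scipy) is sketched below; it is not the Fledermaus/ArcGIS toolchain actually used, and the file name and grid size are placeholders.

    import numpy as np
    from scipy.interpolate import griddata

    # Hypothetical scattered soundings: columns are longitude, latitude, depth (m).
    points = np.loadtxt("soundings.xyz")
    lon, lat, depth = points[:, 0], points[:, 1], points[:, 2]

    # A regular grid over the survey extent (in degrees here, for simplicity).
    grid_lon, grid_lat = np.meshgrid(
        np.linspace(lon.min(), lon.max(), 1000),
        np.linspace(lat.min(), lat.max(), 1000),
    )
    dem = griddata((lon, lat), depth, (grid_lon, grid_lat), method="linear")
    # Cells with no nearby soundings come back as NaN; a smoother interpolant
    # (analogous to the Topogrid step above) can then fill those holes.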

The dataset is the result of a prodigious amount of work in correlating a huge array of different data types from a variety of sources – Rob is an expert in Fledermaus and ArcGIS, complicated pieces of professional GIS software for scientific visualisation.

It is easy to underestimate the amount of labour involved in this effort by many people – we see an apparently rainbow-coloured image stuck on a sphere (see below) – but in reality it is a hard-won measurement depicting the real world, created from days, weeks and months of ships assiduously sailing in structured patterns on the ocean at the end of the world; trawling multibeam and side-scan sonars through the oceans – often in challenging seas. This data must be collected and cross-matched, interpolated, holes patched – it is really a technical and observational marvel.

The spatial coverage of the dataset is:

S: -69.0; N: -63.0; W: 138.0; E: 148.0; Min Altitude: 0 m; Max Altitude: 2391 m; Min Depth: 0 m; Max Depth: -4108 m

and temporal coverage is:

Start Date: 2008-03-17; Stop Date: 2008-11-25

On Google Earth it looks like this:

GVDem2008 in Google Earth

(Note: the red circle in the centre, on the coast, indicates the exact spot of Frank Hurley’s darkroom in Mawson’s Huts – a panorama I shot in 2008 – click here to view.)

BUT – there is a very interesting and perplexing feature of all these visualisation programs – there is generally no easy-to-use pipeline for generating highly ‘realistic’ renditions of data. It’s something I’ve struggled with for years – and there is a huge learning curve involved in translating data into a kind of software-agnostic intermediate form that can then be read by more standard commercial and open-source 3D modelling packages. Obviously, these things are generally designed for quite different purposes, but it is still no simple feat to get data from GIS software to a landscape visualisation package like Vue, or modellers like Maya, Cinema 4D or Blender – and to retain features such as correct geolocation of the dataset on a spherical world/terrain model and to be able to simulate physical skies, seas and correct astronomical positions of heavenly bodies – like getting the sun right for a certain time of year. It IS do-able, as I am about to demonstrate, but it is not simple. It is quite extraordinary how many 3D modelling packages cannot handle lat-long spherical coordinate systems – generally everything operates in a flat Cartesian XYZ space. So a bit of data and format wrangling is always required. Nevertheless – the results are promising.
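
For what it is worth, the core of that wrangling is a simple change of coordinates. The sketch below (Python, assuming a plain spherical Earth rather than a proper ellipsoid) converts longitude, latitude and depth into the flat Cartesian XYZ that most modellers expect; getting this step right is what preserves correct geolocation on a spherical terrain model.

    import numpy as np

    EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius; adequate for a sketch

    def geographic_to_cartesian(lon_deg, lat_deg, depth_m):
        # Returns (x, y, z) in metres for a point below a spherical sea surface.
        lon, lat = np.radians(lon_deg), np.radians(lat_deg)
        r = EARTH_RADIUS_M - depth_m          # depths are positive-down
        x = r * np.cos(lat) * np.cos(lon)
        y = r * np.cos(lat) * np.sin(lon)
        z = r * np.sin(lat)
        return x, y, z

    # Example: a point near the Mertz Glacier region at 3 km depth.
    print(geographic_to_cartesian(145.0, -67.0, 3000.0))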

What interests me here is an ‘aesthetic’ interpretation of scientific data. The purpose of this is to enable ways of looking at the data ‘as if it is real’ – which, of course, it is, but it could never actually be seen this way – so it is a form of pseudo-realism or pseudo-naturalism. However, this semiotic framework enables us to perceive it in a more tractable way than via the conventions of scientific visualisation – which are often very diagrammatic and conventionalised – and, in some respects, sometimes unclear. Here I’d argue that good scientific visualisation is as much an art as it is a science.

My experiment here is to generate a convincing fly-through of the GVDem2008 dataset, as if from the point of view of an all-seeing submarine. The fact of the underwater ‘landscape’ is that it is phenomenally dark – in fact, under about 200 m of water, it is almost pitch black – an explanation for the evolution of bioluminescence in many marine organisms. What’s more, the overwhelming majority of the world’s oceans are precisely that – vast pitch-black teeming microbial voids that have not seen sunlight for billions of years. That’s quite a thought, even as they have changed shape and location over the millennia with the processes of continental drift and crustal deformation. So, putting this aside, here we are imagining what it would be like to look across these vast landscapes as if we could see – huge vistas extending as far as the eye could see, mountain ranges and vast abysses, gigantic deserts separating oases of life – not only near the shallower waters of the continental shelves, but deep in the ocean, some surrounding submarine volcanic vents (black smokers) and other exotica worthy of Borges’ or Calvino’s fictional Marco Polo, yet stranger:

Working with Rob via email, we figured out that the most effective way of generating the required terrain data was to use 16-bit geotiffs (we tried 32-bit, but this was not accepted by the modelling software). Sixteen bits of grey-scale information are enough to generate sufficiently detailed heightfields in modellers to avoid obvious ‘stepping’ or quantising in the data.
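
A minimal sketch of that conversion, using the rasterio library (file names are placeholders, not the project files): a floating-point DEM is rescaled across the full 0–65535 range, so the modeller’s heightfield importer sees 65,536 grey levels rather than the 256 an 8-bit image would give, which is what avoids the visible stepping.

    import numpy as np
    import rasterio

    with rasterio.open("gvdem_subset.tif") as src:      # hypothetical input DEM
        depths = src.read(1).astype(np.float64)         # band 1: depth/elevation values
        profile = src.profile

    # Rescale to the full 16-bit range to minimise quantising of the relief.
    normalised = (depths - depths.min()) / (depths.max() - depths.min())
    heightfield = (normalised * 65535).astype(np.uint16)

    profile.update(dtype=rasterio.uint16, count=1)
    with rasterio.open("gvdem_heightfield_16bit.tif", "w", **profile) as dst:
        dst.write(heightfield, 1)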

Here’s a gallery of initial test renders:

The resolution is fairly coarse, but serves to demonstrate the utility of the workflow and provide context in which to work upon scaling and texturing – which, of course, in these tests is entirely wrong: in a Mercator projection the range is approximately 673 km North/South and 453 km East/West. Besides the disproportion (aspect ratio), you get no sense of this scale in these images – so that is something to work on, and surprisingly difficult to achieve: especially when working with visual cues such as texture scale and environmental scale – e.g. cloud layers, horizonal curvature, and so forth. Furthermore, a couple of the renders show artefacting – horizontal striations that have been introduced by various manipulations of the data. The textures you see are heightfield-sensitive procedural textures, used mainly for illustrative purposes. More stuff to work out!

Currently I am working on a much higher-resolution underwater fly-through (a paradoxical term), which involves an interesting and challenging interplay of decisions: on one hand it would be ideal to have a supercomputer with endless amounts of RAM and ultra-high-end GPUs at my disposal; on the other it is worthwhile spending the time optimising the geometry of the model so it is tractable for a high-end desktop computer (Mac Pro 8-core and a small render farm). There is always a trade-off between actual fidelity and apparent fidelity (I would never work as a relationships counsellor) – as the two images below demonstrate: the first image shows an initial pass at mesh optimisation (set at about 60% vertex reduction); the second at about 80% reduction. This decimation process reduces the number of vertices/polygons in the DEM polymesh from several million to a far more manageable 515,105 points / 1,023,821 polygons. Smart algorithms optimise the mesh using procedural checking (a mathematically interesting problem, as it can be essentially insoluble) and intelligently redistribute the vertices, maintaining a formal contiguity with the original – but there are no hard and fast rules. The benefit of this is a reduced memory footprint as well as a significant decrease in render time. By using bump-mapping and various illumination techniques I have reduced this even further – and every bit counts. The difference between, say, 5 minutes per frame and 4 minutes per frame may not sound like very much, but if you’re running a render that might take 5 weeks, then you have saved yourself a week: it really adds up. I recently finished a render at the University of Melbourne that took nearly 6 months. Fortunately I was away in Antarctica a lot of that time.

GVDem Mesh optimised @ 60%

GVDem mesh optimisation @ 80% reduction

GVDem mesh optimisation @ 80% reduction, showing polylines.
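
For the curious, the reduction step illustrated above can be approximated with off-the-shelf quadric decimation; the sketch below uses the Open3D library (not the tool actually used in production, and with placeholder file names), followed by the simple arithmetic that makes the optimisation worth the trouble.

    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("gvdem_full.ply")        # hypothetical dense DEM mesh
    target = int(len(mesh.triangles) * 0.2)                   # keep ~20% of the triangles
    reduced = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
    o3d.io.write_triangle_mesh("gvdem_reduced.ply", reduced)

    # The render-time arithmetic from the text: a 5-week render at 5 minutes per
    # frame is roughly 10,000 frames, so saving one minute per frame saves a week.
    frames = 5 * 7 * 24 * 60 / 5          # total minutes in 5 weeks / minutes per frame
    saved_days = frames * 1 / (24 * 60)   # one minute saved on each frame
    print(round(frames), "frames,", round(saved_days, 1), "days saved")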

Finally, we make a comparison between two final renders (this is after an extensive render optimisation process – exploring the essentially infinite parameter space of render settings):

Render with a 100% resolution mesh (render time approximately 27 minutes, image size 1324 x 764 px):

GVDem marine render 100% mesh resolution

Render with a 20% resolution mesh (render time approximately 10 minutes; image size 1440 x 1080 rectangular pixels; 1920 x 1080 square pixels resampled):

GVDEM 1920 x 1080p HD Render

And here’s the payoff for all these efforts: a rapid decrease in render time, with little difference in apparent visual resolution. To the trained eye there will be minor differences and even some artefacting; to the untrained eye – especially when watching this in motion – there will be no observable change.

The tracks that we see here are artefacts from multibeam sonar – the devices used to record the ocean floor – at this point about 3 kilometres underwater.  Somewhat bemusingly, I had assumed these were tracks of iceberg scour, as this is what I had been looking for (but in the wrong place) – Rob informs me that, of course, they are much further in towards the shoreline in shallower water. This is clearly where a sense of scale becomes crucially important – and I have had no clear point of reference. It just goes to show that working with specialists is imperative in this enterprise – people who actually know and understand what they are looking at – nevertheless, we see the fascinating submarine canyon system in this environment and there is a lot more to do with these initial experiments. Rob’s now guiding me through the environment and I think we’ll come up with a great fly-through exploring the ‘interesting bits’ and begin to tell the scientific story of this part of the world’s oceans: it’s an art-science collaboration.

Here are the immediate benefits of a good conversation:

GVDem Proposed Flight Path

This is extremely useful. Things have names (you’ll never find them on a world map anywhere), we can see where the Mertz and Ninnis Glacier Tongues are – and for me, well, I know Cape Denison extremely well, having spent two summers there – this gives me a sense of place and scale. What we’re looking at is enormous. Fascinating!

Heritage Visualisation on iPhone

April 11, 2010 in antarctica, data visualisation, experiments, mawson's huts, panoramas, photography, research, video by Peter Morse

Mawson’s Huts Interactive Guide on iPhone

 

An experimental iPhone app for heritage visualisation.

A simple two-digit navigation system is demonstrated for interactive realtime walkthrough of Mawson’s Huts, Antarctica, using the Unity game engine.

Users can explore the interior and exterior of Mawson’s Huts and a variety of fully-spherical high-resolution photographic panoramas documenting the site. An audio soundtrack accompanies the visualisation as it is explored.

This demo application was available through the Apple iTunes store – but is currently withdrawn for further development for the iPad platform.

Further information can be found here.

Sea Surface Temperature & Height Anomalies Visualisation (2010)

April 6, 2010 in data visualisation, research, video by Peter Morse

 

Some experiments using Sea Surface Temperature (SST) and Sea Surface Height Anomaly (SSHA) data to create visualisations of the global ocean. The emphasis is upon the Southern hemisphere, looking at ocean circulation around Antarctica.

Created by Peter Morse and Ben Raymond (Australian Antarctic Division). Images are derived from satellite data.
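
As a pointer to how such images can be composed, the sketch below uses matplotlib and cartopy to drape a gridded SST field over a south-polar orthographic view; the data here is synthetic and the file and variable names are placeholders, not the actual satellite products used.

    import numpy as np
    import matplotlib.pyplot as plt
    import cartopy.crs as ccrs

    lon = np.linspace(-180, 180, 361)
    lat = np.linspace(-90, 90, 181)
    sst = np.random.uniform(-2, 30, (lat.size, lon.size))   # stand-in for a satellite SST grid

    # Orthographic view centred over Antarctica, roughly below Australia.
    ax = plt.axes(projection=ccrs.Orthographic(central_longitude=140, central_latitude=-90))
    ax.coastlines()
    mesh = ax.pcolormesh(lon, lat, sst, transform=ccrs.PlateCarree(), cmap="viridis")
    plt.colorbar(mesh, label="Sea surface temperature (°C)")
    plt.savefig("sst_south_polar.png", dpi=150)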

The Ice Museum

November 5, 2009 in antarctica, data visualisation, fulldome, projects by Peter Morse

a memory of tomorrow

The Ice Museum is a fulldome movie about Antarctica, the oceans that surround it and a point in history when it began to be understood.

Beginning with Mawson’s Huts, it traces their origins during the 1911-14 Australasian Antarctic Expedition (AAE), moving across time through their human story to contemporary conservation efforts for their centenary in 2011.

It is also a spatial story: we move across complex data visualisations of the changing climate, the changing world, that trace their origins to these early explorers and scientists.

It is a series of connections that unfold over decades: the end of the Heroic Era; the scientific history; the aesthetics of the Antarctic and contemporary issues of climate change and human intervention in this last great wilderness.

The project has received initial script-development funding from Screen Tasmania and production support from the Mawson’s Huts Foundation – we will be shooting at Cape Denison, Antarctica, this austral summer (December to January, 2009). An objective is to push genre boundaries – a collision of spectacular visual content in counterpoint with innovation in narrative form.

The Ice Museum will be a 45 minute feature – standard feature length for planetarium/fulldome productions.

It will be shot in stereoscopic/monoscopic 4K and mono 8K (with some 8K stereo research experiments). Monoscopic 4K/3K will be suitable for Australian and other international planetaria.

The Ice Museum (2009)


The Ice Museum is a fulldome movie about Antarctica, the oceans that surround it and a point in history when it began to be understood.

Beginning with Mawson’s Huts, it unfolds a tale from the 1911-14 Australasian Antarctic Expedition (AAE), moving across time through their human story to contemporary conservation efforts for their centenary in 2011.

This Ariadne’s thread is also a spatial story: we move across complex data visualisations of the changing climate, the changing world, that trace their origins to the work of these early scientists and explorers.

It is a series of connections that unfold over decades: the end of the Heroic Era; the scientific history and modern understanding; the aesthetics of the Antarctic and contemporary issues of climate change and human intervention in this last great wilderness.

The project has received initial script-development funding from Screen Tasmania and production support from the Mawson’s Huts Foundation. An objective is to push genre boundaries – a collision of spectacular visual content in counterpoint with innovation in narrative form.

The Ice Museum will be a 45 minute feature – standard feature length for planetarium/fulldome productions.

It has been shot in fulldome 4K. Post-production is occurring during 2011, for release in 2012.