NeRF Visualization for Cultural Heritage

The release of NVIDIA’s Instant NeRF is a fantastic opportunity to explore the technology’s applications for cultural heritage visualization. There is, of course, a lot of hype about its utility for ‘ushering in the metaverse’; I would take this with several large grains of salt, as the technology is clearly in its early days of development.

However, even at this point it presents a fascinating adjunct to conventional photogrammetric workflows. Especially interesting to me is that it provides a radically different data model for storing and combining photographs, 3D meshes and textures, since the image produced is a synthesised, navigable radiance field. Presumably this opens up entirely new ways of querying a dataset and of linking different datasets together in virtual representations. Also, from my tests, the model saved by the system is considerably more compact than the data it is derived from. This is as one would expect, given that the NeRF has considerably more ‘intelligence’ embedded in it: the smarter the compression, the smaller the result.

Instant-ngp provides a great playground for exploring how this stuff works. My colleague Andrew Hazelden has provided an excellent technical introduction to getting it up and running, and there is a useful YouTube video here. Some contradictory advice is given regarding Python versions; in my experience, sticking with the latest release is just fine.
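For context on what instant-ngp actually ingests, here is a minimal sketch of the transforms.json scene file it trains from, assuming the NeRF ‘blender’-style field names that instant-ngp extends (camera_angle_x in radians, an aabb_scale bounding-box factor, and one camera-to-world matrix per frame). The file names, field of view and pose below are placeholders rather than a real calibration.

```python
import json
import math

def make_transforms(image_names, poses, fov_x_deg=60.0, aabb_scale=4):
    """Build a transforms.json-style scene description from image names and
    4x4 camera-to-world matrices (nested lists)."""
    frames = [
        {"file_path": f"images/{name}", "transform_matrix": pose}
        for name, pose in zip(image_names, poses)
    ]
    return {
        "camera_angle_x": math.radians(fov_x_deg),  # horizontal field of view, in radians
        "aabb_scale": aabb_scale,                    # bounding-box scale factor used by instant-ngp
        "frames": frames,
    }

if __name__ == "__main__":
    # Placeholder pose: camera on the Z axis, 1.5 units back, looking at the origin.
    pose = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1.5], [0, 0, 0, 1]]
    scene = make_transforms(["IMG_0001.jpg"], [pose])
    with open("transforms.json", "w") as f:
        json.dump(scene, f, indent=2)
```

In practice this file is normally generated from a COLMAP reconstruction rather than written by hand, but seeing the structure makes it clear that the input is just posed photographs.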

Transit Hut

This is a NeRF visualization of the Transit Hut at the Mawson’s Huts Historic Site, Cape Denison, Antarctica. The dataset is a sparse set of 29 photos I took in 2007, with a view to creating a QTVR object movie and/or a photogrammetric model (which was almost impossible in 2007, a long time ago in software and compute terms). So this is a pleasant surprise! Instant-ngp provides a convincing interpolation between viewpoints and an easy-to-navigate volumetric view of the radiance field. I immediately think of other ways of using this via its various export options: VDB in Houdini, Drishti, or import into Omniverse, Unreal or Unity. Hmmm. I will have to play around and see what is possible, including clean-up of ‘floaters’ and other redundant artefacts; a first sketch of the VDB route follows below.
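As a rough sketch of that VDB route for Houdini or Drishti, the snippet below assumes the radiance field’s density has already been sampled onto a regular grid and saved as a NumPy array (density.npy is a hypothetical file name), and that the separately built pyopenvdb bindings are available. It is an illustration of the conversion step, not a tested pipeline.

```python
import numpy as np
import pyopenvdb as vdb  # requires the OpenVDB Python bindings, built separately

# Hypothetical input: the radiance field's density sampled onto a regular 3D grid.
density = np.load("density.npy").astype(np.float32)

grid = vdb.FloatGrid()
grid.copyFromArray(density, tolerance=1e-4)  # drop near-zero voxels to keep the file sparse
grid.name = "density"
grid.transform = vdb.createLinearTransform(voxelSize=0.01)  # assumed world-space voxel size

vdb.write("nerf_density.vdb", grids=[grid])
```

The resulting .vdb can be loaded directly as a volume in Houdini, or converted onward for Drishti, which is the appeal of OpenVDB as an interchange format here.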

The initial constraint I recognise is that it seems mostly suited to ‘object movie’-style output. Attempting this with a large scene has so far been unsuccessful for me, but I expect that to improve, as it has clearly been achieved by the authors. It will also be interesting to see what results from employing different camera models in COLMAP and OpenCV; much of my source material was shot using fisheye lenses, so this will be an initial thing to take into account.
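On the fisheye question, here is a hedged sketch of what the COLMAP side of the pipeline might look like: it drives the standard COLMAP command-line tools from Python with the OPENCV_FISHEYE camera model, with paths and settings as placeholder assumptions rather than a tested recipe.

```python
import os
import subprocess

DATABASE = "colmap.db"   # hypothetical database path
IMAGE_DIR = "images"     # folder of source photos
SPARSE_DIR = "sparse"    # output folder for the sparse reconstruction

os.makedirs(SPARSE_DIR, exist_ok=True)

# Feature extraction, telling COLMAP to use an OpenCV fisheye camera model
# shared across all frames (they were shot with the same lens).
subprocess.run([
    "colmap", "feature_extractor",
    "--database_path", DATABASE,
    "--image_path", IMAGE_DIR,
    "--ImageReader.camera_model", "OPENCV_FISHEYE",
    "--ImageReader.single_camera", "1",
], check=True)

# Exhaustive matching is affordable for a sparse set like 29 photos.
subprocess.run([
    "colmap", "exhaustive_matcher",
    "--database_path", DATABASE,
], check=True)

# Sparse reconstruction; the result can then be converted to a transforms.json
# with instant-ngp's scripts/colmap2nerf.py.
subprocess.run([
    "colmap", "mapper",
    "--database_path", DATABASE,
    "--image_path", IMAGE_DIR,
    "--output_path", SPARSE_DIR,
], check=True)
```

Whether the fisheye model survives the hand-off into instant-ngp cleanly is exactly the thing to test next.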

Endurance

A test neural radiance field visualization of the stern of Ernest Shackleton’s ship Endurance, recently discovered by the Endurance22 expedition to the Weddell Sea. This example is based upon data publicly released by the expedition.
More details on the expedition: endurance22.org

2 Responses

  1. SH says:

    Great post, Peter. I only learned about NeRF a few weeks ago. I like the idea, and on a whim I Googled “NeRF Geophysics” and laughed when your page popped up!
