Over the past few weeks, I’ve been working on preserving Oakland Cemetery through the use of photogrammetry. As a photographer and videographer, I’m a bit ashamed to admit that I was not familiar with this practice, but I’m also very excited about its potential uses.

Photogrammetry is a hybrid of two-dimensional and three-dimensional image capture. A photographer captures an object from multiple angles, and the flat images are used in combination to extrapolate a series of three-dimensional points that can then be turned into a polygon mesh. This method of 3D imaging, while not as accurate or precise as laser scanning or manually recreating objects in Blender or 3ds Max, has two distinct advantages: first, it requires no special field equipment and can be performed on objects of almost any size; second, the images’ visual data can be reused to automatically texture the resulting model. Both of these advantages make photogrammetry an amazing tool for preserving cemeteries.
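The geometric core of that extrapolation can be shown with a toy sketch: two cameras at known positions each observe the same surface point along a viewing ray, and intersecting those rays recovers the point’s 3D location. (Real photogrammetry software solves this for millions of matched features at once, with the camera poses themselves unknown; here the poses and rays are given, purely for illustration.)

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(a, s):
    return tuple(x * s for x in a)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(o1, d1, o2, d2):
    """Midpoint of closest approach between rays o1 + t*d1 and o2 + s*d2."""
    w = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = add(o1, scale(d1, t))     # closest point on ray 1
    p2 = add(o2, scale(d2, s))     # closest point on ray 2
    return scale(add(p1, p2), 0.5)

# Two cameras, 4 m apart, both sighting a feature located at (1, 1, 5):
point = triangulate((0, 0, 0), (1, 1, 5), (4, 0, 0), (-3, 1, 5))
```

With perfect rays the midpoint lands exactly on the feature; with real, noisy image measurements the rays never quite meet, which is why the software works from many views and averages the error out.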

SIF has partnered with digital studio Beam Imagination to preserve Oakland Cemetery using innovative media. In addition to using aerial photography and LIDAR scans to record the topography and larger structures of Oakland, we wanted to preserve smaller structures, like headstones and monuments. Because these objects range in height from 50 centimeters to 5 meters, and because they could not be disturbed while their geometry was recorded, they became the perfect candidates for photogrammetry. Our initial tests have focused on modeling this statue in the Jewish Flats:

(Left) Marble statue of an unknown woman in the Jewish Flats section of Oakland Cemetery.

There is very little concrete data to go by when planning a photogrammetry shoot. There are simple guidelines to follow, such as choosing cameras with high-resolution sensors to maximize the amount of visual data and using wide-angle lenses to cover as much of the object as possible, but the field is too young for proper standards to have developed. For this reason, we chose to cover as much ground as possible with three different cameras and six lenses, running a reasonable gamut of shooting situations.

(Above) Our photographic arsenal included bodies from Nikon, Sony, and RED, and lenses by Nikkor, Zeiss, Rokinon, and Canon.

The DSLR used was a Nikon D810 with two Nikkor lenses: a 14-24mm f/2.8 zoom set to 14mm and a 24mm f/1.4 prime. Our mirrorless option was the Sony A7R II, using a Rokinon 8mm f/3.5 fisheye, a Zeiss 18mm f/2.8 prime, and a Zeiss 24mm f/2.8 prime. Finally, we wanted to test uncompressed video as a source of visual data, so we used a RED Epic with a Canon 24-70mm f/2.8 zoom set to 24mm. The RED shot uncompressed 8K video, recording 33-megapixel frames progressively at 16 frames per second. Because our subject was tall and narrow, the video was shot vertically to maximize the amount of the frame taken up by the statue; the still pictures were similarly taken in portrait orientation.
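One rough way to compare these lenses is by their angle of view, which follows directly from focal length and sensor width: the wider the angle, the more of the subject each frame covers. Below is a small sketch; the full-frame sensor (36 × 24 mm, so 24 mm across in portrait orientation) and the rule-of-thumb 10-degree rotation between neighboring views are assumptions for illustration, not figures from our shoot, and the formula applies only to rectilinear lenses, not the 8mm fisheye.

```python
import math

def angle_of_view(focal_mm, sensor_mm=24.0):
    """Angle of view in degrees across one sensor dimension (rectilinear lens).

    sensor_mm=24.0 assumes a full-frame sensor held in portrait orientation.
    """
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def shots_per_orbit(step_deg=10.0):
    """Stations needed for a full 360-degree orbit at a given angular step."""
    return math.ceil(360.0 / step_deg)

fov_24mm = angle_of_view(24.0)    # ~53 degrees across the short sensor side
fov_14mm = angle_of_view(14.0)    # ~81 degrees
stations = shots_per_orbit(10.0)  # 36 shots for one full orbit
```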

Each test was performed as a series of four passes. First, the photographer captured a series of images in a 360-degree path around the subject at a medium height. The same pass was then repeated twice: once with the camera at the same height as the subject and once near the ground at the subject’s base. The fourth pass consisted of top and bottom views of the subject, along with more extensive coverage of any complex areas. For example, the statue has a cavity on its underside left by a missing leg:
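The four passes can be sketched as a simple waypoint plan. The radius, heights, and station count below are illustrative stand-ins, not our actual shooting distances:

```python
import math

def orbit(radius, height, n_stations):
    """Evenly spaced camera positions on a circle around a subject at the origin."""
    return [(radius * math.cos(2 * math.pi * i / n_stations),
             radius * math.sin(2 * math.pi * i / n_stations),
             height)
            for i in range(n_stations)]

def capture_plan(radius=3.0, subject_height=2.0, n_stations=12):
    plan = []
    # Passes 1-3: full 360-degree orbits at medium height, subject height, and base.
    for label, h in [("medium", subject_height / 2),
                     ("subject-height", subject_height),
                     ("base", 0.2)]:
        plan += [(label, pos) for pos in orbit(radius, h, n_stations)]
    # Pass 4: top and bottom views, plus extra coverage of complex areas
    # (reduced here to two illustrative positions).
    plan.append(("detail-top", (0.0, 0.0, subject_height + 1.0)))
    plan.append(("detail-under", (radius / 2, 0.0, 0.05)))
    return plan

plan = capture_plan()  # 3 orbits x 12 stations + 2 detail views = 38 positions
```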


(Left) The concealed underside of the statue proved a challenge to accurately recreate in 3D.

After the photos had been taken, they were ingested, retouched for exposure and lens-distortion correction, and re-exported as compressed files. Agisoft PhotoScan was then used to create a point cloud and extrapolate a mesh and textured object from the photos:



(Below) A ‘cloud’ of colored nodes extrapolated from two-dimensional image data.

(Below) A low-quality model extrapolated from the point cloud.

This model, the result of the test using the Nikon D810 with the 24mm prime, was our best. The high resolution and clarity of the D810’s sensor, combined with a wide yet low-distortion lens, allowed PhotoScan to best extrapolate the depth data. The 8mm fisheye produced circular, highly distorted images that yielded undesirable results, while the Nikon’s 14mm lens was prone to unwanted light bleeding from the sun across the edges of the statue, leaving the statue’s shoulders poorly defined in 3D.

Though we experienced favorable results in this test, there are still improvements to be made. For example, the weather was not ideal: an overcast sky would have lit our subject much more evenly. The coverage of the recess on the underside of the statue was also less than ideal, most likely because the perspective used to document that area contained little environmental context and so was not placed correctly by PhotoScan.

When describing this type of process, I often use the term “digital media.” I feel this is a bit misleading, however: the term “media” implies plurality, the presence of more than one medium. The beautiful thing about digital media is that, at its core, it is a single digital medium. Whereas traditional forms of media derive their respective information-storage schemes by exploiting different physical and chemical properties (the use of chemical pigments in paints and light-sensitive silver halides in film, for example), the digital medium can be expressed at its lowest level as a series of numbers. This concrete, descriptive quality allows the digital medium to take forms that can convey meaning through any sense, whether aural, visual, tactile, or otherwise. To me, photogrammetry is not a process of changing medium, but of more fully utilizing the medium. Every trend in contemporary media production, from virtual reality to data visualization, uses the multifaceted capabilities of the digital medium to better convey information. As streaming, browsing, and sharing become ubiquitous parts of life for a majority of the population, bridging the gaps between different forms of the digital medium has become a large part of today’s media revolution.