Crime Scene Virtualization

I was approached by a professor from NUS who wanted to explore the idea of capturing a crime scene in virtual reality. The goal was to see whether the subjectivity could be taken out of crime scene photography. Crime scene photographers have the heavy responsibility of documenting the scene before the other investigators and examiners can touch anything. These photos are then constantly referenced throughout the remainder of the investigation and prosecution, if any.

Photos are traditionally two-dimensional, but with 360° photos, more of the scene can be viewed at once as one continuous image, placing the viewer at the centre of the crime scene; surely an improvement over flat 2D images. With simple VR gear like Google Cardboard, the viewer gets a semi-immersive experience: investigators can get an overview and a better spatial understanding of the crime scene. However, the experience locks the viewer at the position of the camera; the viewer cannot move closer towards areas of interest or change perspectives by translation, and can only explore the scene by rotating the device. To allow translation and moving about in the scene, a 3D model has to be created, and this is typically done using laser scanners, which is rather time-consuming.

Wouldn’t it be great if the entire crime scene could be captured quickly and visualized in 3D, instead of in 2D photos which do not fully convey the scale and positions of the evidence? And at the same time, might it be possible to take the subjectivity of what the photographer captures out of the equation? In my mind, I thought it might be possible to reconstruct a crime scene from 360° images; how hard could it be?

Forensic Science Advocacy Competition

I was given the opportunity to visit a simulated crime scene which was created for a competition. The task was to help capture the 360° still images with a GoPro Fusion and hopefully create something usable. I initially tried to extract still rectangular images from the 360° images I captured, in an attempt to feed them to photogrammetry software, which takes in 2D images, to create the 3D model.
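The extraction step can be sketched roughly as follows: for each pixel of the desired flat view, cast a ray through a virtual pinhole camera, convert the ray to longitude/latitude, and sample the equirectangular panorama there. Below is a minimal numpy sketch of that idea (the function name and interface are my own; real tools also do interpolation and proper lens calibration):

```python
import numpy as np

def equirect_to_perspective(equirect, fov_deg, yaw_deg, pitch_deg, out_w, out_h):
    """Sample a rectilinear (pinhole) view out of an equirectangular panorama.

    equirect: H x W x C array covering 360° horizontally, 180° vertically.
    fov_deg:  horizontal field of view of the extracted view.
    yaw_deg / pitch_deg: viewing direction in degrees.
    """
    H, W = equirect.shape[:2]
    fov = np.radians(fov_deg)
    # Focal length (in pixels) of the virtual pinhole camera.
    f = (out_w / 2) / np.tan(fov / 2)

    # Pixel grid of the output image, centred on the optical axis.
    x = np.arange(out_w) - out_w / 2 + 0.5
    y = np.arange(out_h) - out_h / 2 + 0.5
    xv, yv = np.meshgrid(x, y)

    # Ray directions in camera space (z forward, x right, y down).
    dirs = np.stack([xv, yv, np.full_like(xv, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate by pitch (about x), then yaw (about y).
    p, yw = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(yw), 0, np.sin(yw)], [0, 1, 0], [-np.sin(yw), 0, np.cos(yw)]])
    dirs = dirs @ (Ry @ Rx).T

    # Convert rays to longitude/latitude, then to panorama pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])   # -pi .. pi
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))  # -pi/2 .. pi/2
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return equirect[v, u]
```

Sweeping the yaw and pitch over the sphere yields a set of overlapping flat views that photogrammetry software can ingest like ordinary photos.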

Photogrammetry Software I Tried

  1. Autodesk ReCap Photo
    • Licensed software
    • Free student license; generate models with up to 50 images per model
    • Cloud-based processing
  2. Meshroom by AliceVision
    • Free, Open source
    • No photo limit
    • Photos processed locally
  3. RealityCapture
    • Pay-per-input
    • No photo limit
    • Photos processed locally
    • Control points
  4. 3DF Zephyr Free
    • Free for personal use; generate models with up to 50 images per model
    • 360° image decomposition feature

Autodesk ReCap is actually really simple to use: just dump your photos in, and a couple of minutes later you get back a pretty decent 3D model. The user interface (for the student license at least) is pretty limiting, as there aren’t many parameters you can tune. I then looked for alternatives offering more tunability, and Meshroom and RealityCapture came up. Both process images locally on my computer, which meant long waiting times between attempts. I appreciated the customizability and lack of a paywall in Meshroom, but ultimately it was the control points feature in RealityCapture that proved especially useful for the crime scene. I didn’t really try Zephyr for 3D model construction as I already had access to Autodesk ReCap, which I’d expect to produce similar results.

The ability to assign control points was useful because there were many similar-looking points in the images that were difficult for the software to match automatically. For example, the blood splatter and drip trails were hard to differentiate, causing significant distortion and bad texturing. Using control points, we can guide the software to use the manually aligned points as references to stitch the model together.

Blurry blood trails

Results

Here’s what I came up with:

  1. 3D Model
  2. 360° Viewer

Learning Points

Through this process, I’ve learnt a few important things.

  1. The photos should be high resolution for better feature matching
  2. The photos should be ideally taken with the same device or model to minimize the need to calibrate multiple devices
  3. Extracting a rectangular still image from a 360° image leaves any one viewpoint with a significantly lower resolution than the marketed 5.2K, since those pixels are spread across the entire sphere.
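Point 3 is easy to verify with some back-of-envelope arithmetic (assuming a nominal 5.2K equirectangular frame of roughly 5200 pixels spanning the full 360° horizontally):

```python
# Back-of-envelope estimate of the effective resolution of a view
# extracted from a 360° panorama. The 5200 px figure is a nominal
# assumption for a "5.2K" equirectangular frame.
pano_width_px = 5200
fov_deg = 90  # horizontal field of view of the extracted rectilinear view

# Angular resolution of the panorama, in pixels per degree.
px_per_deg = pano_width_px / 360

# Panorama pixels available across the extracted view.
view_width_px = px_per_deg * fov_deg
print(f"~{view_width_px:.0f} px across a {fov_deg}° view")  # ~1300 px
```

Roughly 1300 pixels across a 90° view is closer to standard definition than to 5.2K, which goes some way towards explaining the poor feature matching.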

Potential Extensions

If time permits, I would also like to look into using machine learning (I love buzzwords) to recognize the blood splatter patterns and augment the crime scene with labels. We’ll see how it goes.