Normal map

Overview

The ground truth exposed in this modality is the normal vector coming out of each pixel visible in the image.

This modality consists of the following file:

Relevant file      Location
normal_maps.exr    Camera folder

In this file, we have converted the visual spectrum image into a normal map, which provides the normal vector of every surface visible in your datapoint.

The file uses the 16-bit floating point version of the OpenEXR file format, which provides room to store extremely accurate measurements in the color data.


A normal map of a human face (left) and its corresponding visual spectrum image (right)

To create the normal map, we have replaced the color value of each pixel with the X, Y, and Z components of the normal vector coming out of that pixel, where the axes are defined as follows:

  • -1.0 ≤ x ≤ 1.0, where +X is to the right in the camera space, becomes 0 ≤ R ≤ 1.

  • -1.0 ≤ y ≤ 1.0, where +Y is up in the camera space, becomes 0 ≤ G ≤ 1.

  • -1.0 ≤ z ≤ 1.0, where +Z is the camera direction, becomes 0 ≤ B ≤ 1 (note the reversed mapping: z = +1.0 maps to B = 0).
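The encoding above can be sketched in NumPy as follows (a minimal illustration; the function name is our own, not part of the product):

```python
import numpy as np

def encode_normals(normals):
    """Map camera-space normals of shape (..., 3), with components
    in [-1, 1], to RGB values in [0, 1] per the axes defined above."""
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    r = (x + 1.0) / 2.0   # +X (right in camera space)  -> R
    g = (y + 1.0) / 2.0   # +Y (up in camera space)     -> G
    b = (1.0 - z) / 2.0   # +Z (camera direction)       -> B, reversed
    return np.stack([r, g, b], axis=-1)

# A normal of (0, 0, 1) encodes to the pixel value (0.5, 0.5, 0.0).
print(encode_normals(np.array([0.0, 0.0, 1.0])))
```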

To retrieve the original components of the normal vector in the camera space, simply reverse the mapping as follows:

  • x = R*2-1

  • y = G*2-1

  • z = 1-(B*2)

Using this ground truth, you can train your model to reconstruct the curvature of the face's surface and verify the network's accuracy against reality. See https://github.com/DatagenTech/dgutils/blob/master/Notebooks/exr.ipynb for an example of how to load and display a normal map using OpenCV.

Because the normal map of the subject in the scene remains the same regardless of lighting conditions and background imagery, only one normal map per camera is needed, no matter how many lighting scenarios you render. If you have more than one camera in the scene, each camera folder contains its own normal map, depicting the normal vectors from that camera's point of view.