# camera
The `camera` object defines the location, orientation, and behavior of the camera that will be used to generate your visual data. The settings you choose for your camera apply not only to the Rendered image, but also to the Depth map, Normal map, Semantic segmentation, and the 2D coordinates of each of the keypoints on your actor's body.
The `camera` object contains the following fields, each described in detail below:

```
"camera": {
  "name":                  // A label that you can freely give your camera
  "intrinsic_params": {
    "projection":          // The type of camera projection that should be used
    "resolution_width":    // The number of pixels in the width of the output image
    "resolution_height":   // The number of pixels in the height of the output image
    "fov_horizontal":      // The camera's horizontal field of view: the number of degrees of arc that the camera covers from side to side
    "fov_vertical":        // The camera's vertical field of view: the number of degrees of arc that the camera covers from top to bottom
    "wavelength":          // The type of light that the camera captures, i.e. visible or near-infrared
  },
  "extrinsic_params": {
    "location": {          // The x, y, and z coordinates of the camera in the scene
    },
    "rotation": {          // The orientation of the camera, defined using yaw, pitch, and roll
    }
  }
},
```
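The skeleton above maps directly onto a dictionary in client code. The following Python sketch assembles one complete camera object; the field names come from the schema above, while the specific values and the camera name are illustrative choices within the documented ranges:

```python
# Assemble a camera object matching the documented schema.
# Field names come from the schema; the values are illustrative
# choices within the documented ranges.
camera = {
    "name": "Right-side camera",
    "intrinsic_params": {
        "projection": "perspective",  # currently the only supported projection
        "resolution_width": 1024,     # integer, 64-4096
        "resolution_height": 768,     # integer, 64-4096
        "fov_horizontal": 12,         # degrees of arc, side to side
        "fov_vertical": 9,            # degrees of arc, top to bottom
        "wavelength": "nir",          # near-infrared capture
    },
    "extrinsic_params": {
        "location": {"x": 1.5, "y": -3.0, "z": 1.6},          # meters, -5 to 5
        "rotation": {"yaw": 0.0, "pitch": 0.0, "roll": 0.0},  # degrees
    },
}

# A datapoint wraps the camera exactly as in the samples below.
request = {"datapoints": [{"camera": camera}]}
```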
## name

*Type: string*
The `name` field is where you give the camera its name. This name will be used for the camera-level folder when you download this datapoint, giving you a user-friendly way of identifying the camera's purpose. You can define the name of your camera freely.
**Sample usage**

In the JSON hierarchy of the data request, the `name` field is found here:

```json
{
  "datapoints": [
    {
      "camera": {
        "name": "Right-side camera"
      }
    }
  ]
}
```

This value names your camera folder "Right-side camera", a reminder that the camera is located to the actor's right.
## intrinsic_params
The `intrinsic_params` object defines the camera's intrinsic parameters: its projection, resolution, field of view, and the type of light that it collects.

The `intrinsic_params` object contains the following fields, each described in detail below:

```
"intrinsic_params": {
  "projection":          // The type of camera projection that should be used
  "resolution_width":    // The number of pixels in the width of the output image
  "resolution_height":   // The number of pixels in the height of the output image
  "fov_horizontal":      // The camera's horizontal field of view: the number of degrees of arc that the camera covers from side to side
  "fov_vertical":        // The camera's vertical field of view: the number of degrees of arc that the camera covers from top to bottom
  "wavelength":          // The type of light that the camera captures, i.e. visible or near-infrared
},
```
### projection

*Type: string*
The `projection` field defines the general shape and distortion of the camera's lens. Currently, only one value is supported for this field: `"perspective"`.
**Sample usage**

In the JSON hierarchy of the data request, the `projection` field is found here:

```json
{
  "datapoints": [
    {
      "camera": {
        "intrinsic_params": {
          "projection": "perspective"
        }
      }
    }
  ]
}
```

This value will generate images using a standard perspective lens.
### resolution_width and resolution_height

*Type: int*
The `resolution_width` and `resolution_height` fields define the camera resolution: the number of pixels in the width and height of the output image. Each field accepts an integer from 64 to 4096; the two values do not need to be the same.
**Note:** The height and width of your camera resolution should be in the same proportion as your camera's horizontal and vertical field of view, unless you deliberately want to create a squashing or stretching effect.
**Tip:** Use this visualization tool to experiment with how camera resolutions and fields of view interact.
**Sample usage**

In the JSON hierarchy of the data request, the `resolution_width` and `resolution_height` fields are found here:

```json
{
  "datapoints": [
    {
      "camera": {
        "intrinsic_params": {
          "resolution_width": 1024,
          "resolution_height": 768
        }
      }
    }
  ]
}
```

These values will generate an output image with the dimensions 1024x768.

Note that the image will only be in proportion if your camera field of view has the same 4:3 ratio, for example covering 12 degrees of arc in the horizontal direction and 9 degrees of arc in the vertical direction. If you leave the camera resolution the same but ask the camera to cover 9 degrees of arc in both the horizontal and vertical directions, the image will be stretched horizontally to cover the same area over a larger number of pixels.
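The proportion rule above can be made concrete with a little arithmetic. In this Python sketch, the "stretch factor" is an informal measure (not part of the API): the ratio of horizontal to vertical pixels per degree, where 1.0 means the image is in proportion:

```python
def stretch_factor(res_w, res_h, fov_h, fov_v):
    """Ratio of horizontal to vertical pixels per degree.

    1.0 means the image is in proportion; greater than 1.0 means it
    is stretched horizontally, less than 1.0 stretched vertically.
    (Informal linear approximation, not an official API metric.)
    """
    return (res_w / fov_h) / (res_h / fov_v)

print(stretch_factor(1024, 768, 12, 9))  # 1.0: in proportion
print(stretch_factor(1024, 768, 9, 9))   # ~1.33: stretched horizontally
```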
### fov_horizontal and fov_vertical

*Type: int*
The `fov_horizontal` and `fov_vertical` fields define the camera's field of view in degrees. Each field accepts an integer from 5 to 180; the two values do not need to be the same.
**Note:** The horizontal and vertical fields of view should be in the same proportion as your camera resolution's width and height, unless you deliberately want to create a squashing or stretching effect.
**Tip:** Use this visualization tool to experiment with how field of view and camera resolution interact.
**Sample usage**

In the JSON hierarchy of the data request, the `fov_horizontal` and `fov_vertical` fields are found here:

```json
{
  "datapoints": [
    {
      "camera": {
        "intrinsic_params": {
          "fov_horizontal": 12,
          "fov_vertical": 9
        }
      }
    }
  ]
}
```

These values will generate an image that covers 12 degrees of arc in the horizontal direction and 9 degrees of arc in the vertical direction.

This image will be in proportion only if you set the image resolution to 1024x768 or another pair of dimensions in the same 4:3 ratio. If you change the image resolution to a 1024x1024 square without changing the field of view, the image will be stretched vertically to cover the same area over a larger number of pixels.
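For a perspective camera, the matching vertical field of view can be derived rather than guessed. This sketch assumes an ideal pinhole projection (a standard camera model, not something the schema states); for the narrow fields of view used in these examples the result agrees with the simple degree ratio fov_v = fov_h × height / width:

```python
import math

def matching_fov_vertical(fov_horizontal, res_w, res_h):
    """Vertical FOV (degrees) that keeps a pinhole perspective image
    in proportion with the given resolution.

    Assumes an ideal pinhole model; the result is rounded because the
    schema's FOV fields accept integers.
    """
    half_h = math.radians(fov_horizontal / 2)
    half_v = math.atan(math.tan(half_h) * (res_h / res_w))
    return round(2 * math.degrees(half_v))

print(matching_fov_vertical(12, 1024, 768))  # 9
```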
### wavelength

*Type: string*

The `wavelength` field defines the type of light that the camera will collect: visible light or near-infrared (`"nir"`) light.
**Note:** If you select `"nir"`, you must leave out the background object and instead provide details about the behavior of the near-infrared spotlight using the lights object.
**Sample usage**

In the JSON hierarchy of the data request, the `wavelength` field is found here:

```json
{
  "datapoints": [
    {
      "camera": {
        "intrinsic_params": {
          "wavelength": "nir"
        }
      }
    }
  ]
}
```

This value will generate an image using the near-infrared spectrum.
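Because selecting `"nir"` constrains other parts of the request, a quick check can catch mistakes before submission. This hedged Python sketch assumes that `background` and `lights` (the objects named in the note above) sit alongside `camera` inside each datapoint; adjust the paths if your request structures them differently:

```python
def check_nir_constraints(datapoint):
    """Raise ValueError if a near-infrared camera is configured incorrectly.

    Assumes 'background' and 'lights' are siblings of 'camera' inside
    the datapoint; that layout is an assumption, not documented here.
    """
    wavelength = (
        datapoint.get("camera", {})
        .get("intrinsic_params", {})
        .get("wavelength")
    )
    if wavelength == "nir":
        if "background" in datapoint:
            raise ValueError("an 'nir' camera requires leaving out the background object")
        if "lights" not in datapoint:
            raise ValueError("an 'nir' camera requires a lights object")

# A well-formed near-infrared datapoint passes silently:
check_nir_constraints({
    "camera": {"intrinsic_params": {"wavelength": "nir"}},
    "lights": {},
})
```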
## extrinsic_params
The `extrinsic_params` object defines the camera's location and orientation in the scene: the extrinsic parameters that define its relationship to your actor and anything else that might be present.

The `extrinsic_params` object contains the following fields, each described in detail below:

```
"extrinsic_params": {
  "location": {   // The camera's location in global x, y, and z coordinates
  },
  "rotation": {   // The camera's orientation, defined using yaw, pitch, and roll controls
  }
}
```
### location

*Type: object*

The `location` object defines the position of the camera using global coordinates measured in meters. Each coordinate is a floating-point value that can range from -5 to 5 meters.
**Tip:** Use this visualization tool to experiment with positioning the camera so that the actor's head is where you want it to be in the frame.
The `location` object contains the following fields:

```
"location": {
  "x":   // The x coordinate, which moves the camera from side to side
  "y":   // The y coordinate, which moves the camera from back to front
  "z":   // The z coordinate, which moves the camera down and up
},
```
For an in-depth look at these axes, see About our coordinate systems.
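Since every coordinate must stay within -5 to 5 meters, a range check before submitting a request can save a failed generation. A minimal sketch; the helper name and the use of the global origin as a reference point are illustrative, not part of the API:

```python
import math

def validate_location(location):
    """Check that x, y, and z are within the documented -5 to 5 m range."""
    for axis in ("x", "y", "z"):
        value = location[axis]
        if not -5 <= value <= 5:
            raise ValueError(f"{axis}={value} is outside the -5 to 5 m range")
    return location

loc = validate_location({"x": 1.5, "y": -3.0, "z": 1.6})

# Distance from the global origin, useful when judging how far the
# camera sits from the center of the scene:
print(math.dist((loc["x"], loc["y"], loc["z"]), (0.0, 0.0, 0.0)))
```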
**Sample usage**

In the JSON hierarchy of the data request, the `location` object is found here:

```json
{
  "datapoints": [
    {
      "camera": {
        "extrinsic_params": {
          "location": { }
        }
      }
    }
  ]
}
```

A sample complete `location` object, with illustrative values inside the -5 to 5 m range:

```json
"location": {
  "x": 1.5,
  "y": -3.0,
  "z": 1.6
}
```
### rotation

*Type: object*

The `rotation` object defines how the camera is oriented in the scene, using yaw, pitch, and roll controls.
The yaw, pitch, and roll fields are floating-point values measured in degrees. Pitch can range from -90 to 90, while yaw and roll can range from -180 to 180.
**Tip:** Use this visualization tool to experiment with camera rotation.
The `rotation` object contains the following fields:

```
"rotation": {
  "yaw":    // Turns the camera from side to side. Positive values turn the camera to its left; negative values turn it to its right.
  "pitch":  // Tilts the camera up and down. Positive values tilt the camera up; negative values tilt it down.
  "roll":   // Tilts the camera from side to side. Positive values tilt the camera counterclockwise; negative values tilt it clockwise.
},
```
**Important:** Rotations of the camera are applied in a specific order: first yaw, then pitch, then roll. Before the rotations are applied, the camera points in the +y direction.
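The order rule above also determines which way the camera ends up facing. The following sketch derives the camera's forward direction from yaw and pitch, assuming a right-handed, z-up frame in which positive yaw turns the +y-facing camera toward -x; these axis conventions are an assumption consistent with the documented controls, and roll is omitted because it spins the camera about its forward axis without changing that axis:

```python
import math

def camera_forward(yaw, pitch):
    """Forward direction after applying yaw then pitch (degrees).

    The camera starts pointing along +y. The z-up, right-handed axis
    conventions here are an assumption, not part of the schema.
    """
    yaw_r, pitch_r = math.radians(yaw), math.radians(pitch)
    return (
        -math.sin(yaw_r) * math.cos(pitch_r),
        math.cos(yaw_r) * math.cos(pitch_r),
        math.sin(pitch_r),
    )

print(camera_forward(0, 0))   # points along +y
print(camera_forward(0, 90))  # points straight up, along +z
```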
**Sample usage**

In the JSON hierarchy of the data request, the `rotation` object is found here:

```json
{
  "datapoints": [
    {
      "camera": {
        "extrinsic_params": {
          "rotation": { }
        }
      }
    }
  ]
}
```