
Zivid.NET Namespace

The main namespace for the Zivid .NET API. The top node is Application. The main class used to interface with the 3D camera is Camera.
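A minimal sketch of the typical workflow, loosely based on the official Zivid C# samples: create an Application, connect to a camera, and capture with a Settings object. The setting values are illustrative only, a physical camera is required, and exact member names should be verified against your installed SDK version.

```csharp
// Sketch: connect and perform a single-acquisition 3D capture.
var zivid = new Zivid.NET.Application();
var camera = zivid.ConnectCamera();

// Illustrative settings; tune the acquisition values for your scene.
var settings = new Zivid.NET.Settings
{
    Acquisitions = { new Zivid.NET.Settings.Acquisition { Aperture = 5.66 } },
};

// Capture a frame, access the point cloud, and save to the .zdf format.
using (var frame = camera.Capture(settings))
{
    var pointCloud = frame.PointCloud;
    frame.Save("Frame.zdf");
}
```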
Classes
Public class Application
Manager class for Zivid cameras.
Public class Camera
Interface to one Zivid camera.
Public class CameraInfo
Information about the camera model, serial number, etc.
Public class CameraInfoRevisionGroup
Major/minor hardware revision number. This field is deprecated and may be removed in a future version of the SDK. Please use HardwareRevision instead.
Public class CameraInfoUserDataGroup
Information about the user data capabilities of the camera.
Public class CameraIntrinsics
Information about the intrinsic parameters of the camera (OpenCV model).
Public class CameraIntrinsicsCameraMatrixGroup
The camera matrix K (= [fx, 0, cx; 0, fy, cy; 0, 0, 1]).
Public class CameraIntrinsicsDistortionGroup
The radial and tangential distortion parameters.
Public class CameraState
Information about camera connection state, temperatures, etc.
Public class CameraStateNetworkGroup
Current network state.
Public class CameraStateNetworkGroupIPV4Group
Current IPv4 protocol state.
Public class CameraStateNetworkGroupLocalInterface
Current state of the computer's local network interface.
Public class CameraStateNetworkGroupLocalInterfaceIPV4Group
Current IPv4 protocol state of the computer's local network interface.
Public class CameraStateNetworkGroupLocalInterfaceIPV4GroupSubnet
An IPv4 subnet that the local network interface is connected to.
Public class CameraStateNetworkGroupLocalInterfaceIPV4GroupSubnetsList
List of IPv4 addresses and subnet masks that the interface is configured with. This list can contain multiple entries if the local network interface is configured with multiple IPv4 addresses.
Public class CameraStateNetworkGroupLocalInterfacesList
List of the computer's local network interfaces that discovered the camera. In the most common scenario with the camera connected to the computer with an Ethernet cable, this list will contain only the network interface for that Ethernet port. In more complex network scenarios it can be the case that the camera is discovered by multiple interfaces, and in that case this list will contain multiple network interfaces. However, when `CameraState::Status` is `connected`, only the one network interface that has the active TCP/IP connection to the camera will be listed.
Public class CameraStateTemperatureGroup
Current temperature(s).
Public class ComputeDevice
Contains information about the ComputeDevice used by Application.
Public class Frame
A frame captured by a Zivid camera.
Public class Frame2D
A 2D frame captured by a Zivid camera.
Public class FrameInfo
Various information about a frame.
Public class FrameInfoMetricsGroup
Metrics related to this capture.
Public class FrameInfoSoftwareVersionGroup
The version information for installed software at the time of image capture.
Public class FrameInfoSystemInfoGroup
Information about the system that captured this frame.
Public class FrameInfoSystemInfoGroupComputeDeviceGroup
Compute device.
Public class FrameInfoSystemInfoGroupCPUGroup
CPU.
Public class ImageNETPixelFormat
Abstract base class for all images.
Public class ImageBGRA
A BGRA image with 8 bits per channel.
Public class ImageRGBA
An RGBA image with 8 bits per channel.
Public class ImageSRGB
An RGBA image in the sRGB color space with 8 bits per channel.
Public class MarkerDictionary
Holds information about fiducial markers, such as ArUco markers, for detection.
Public class Matrix4x4
A single-precision floating-point 4x4 matrix, stored in row-major order.
Public class NetworkConfiguration
Network configuration of a camera.
Public class NetworkConfigurationIPV4Group
IPv4 network configuration.
Public class PointCloud
Point cloud with x, y, z, RGB color, and SNR laid out on a 2D grid.
Public class RangeT
Class describing a range of values for a given type T.
Public class Resolution
Class describing a resolution with a width and a height.
Public class SceneConditions
A description of the ambient conditions detected by the camera.
Public class SceneConditionsAmbientLightGroup
The ambient light detected by the camera.
Public class Settings
Settings used when performing a 3D or 2D+3D capture with a Zivid camera.
Public class SettingsAcquisition
Settings for a single acquisition.
Public class SettingsAcquisitionsList
List of Acquisition objects.
Public class SettingsDiagnosticsGroup
When Diagnostics is enabled, additional diagnostic data is recorded during capture and included when saving the frame to a .zdf file. This enables Zivid's Customer Success team to provide better assistance and more thorough troubleshooting. Enabling Diagnostics increases the capture time and the RAM usage. It will also increase the size of the .zdf file. It is recommended to enable Diagnostics only when reporting issues to Zivid's support team.
Public class SettingsProcessingGroup
Settings related to the processing of a capture, including filters and color balance.
Public class SettingsProcessingGroupColorGroup
Color settings. These settings are deprecated as of SDK 2.14. These settings will be removed in the next SDK major version (SDK 3.0). The recommended way to do a 2D+3D capture is to set a `Settings2D` object in the `Settings/Color` node. Tip: If you want to convert an existing settings.yml file to use the `Settings/Color` node, you can import the .yml file into Zivid Studio and then re-export it to .yml.
Public class SettingsProcessingGroupColorGroupBalanceGroup
Color balance settings.
Public class SettingsProcessingGroupColorGroupExperimentalGroup
Experimental color settings. These may be renamed, moved, or deleted in the future.
Public class SettingsProcessingGroupFiltersGroup
Filter settings.
Public class SettingsProcessingGroupFiltersGroupClusterGroup
Removes floating points and isolated clusters from the point cloud.
Public class SettingsProcessingGroupFiltersGroupClusterGroupRemovalGroup
Cluster removal filter.
Public class SettingsProcessingGroupFiltersGroupExperimentalGroup
Experimental filters. These may be renamed, moved, or deleted in the future.
Public class SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroup
Corrects artifacts that appear when imaging scenes with large texture gradients or high contrast. These artifacts are caused by blurring in the lens. The filter works best when aperture values are chosen such that the camera has good focus. The filter also supports removing points that experience a large correction.
Public class SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupCorrectionGroup
Contrast distortion correction filter.
Public class SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupRemovalGroup
Contrast distortion removal filter.
Public class SettingsProcessingGroupFiltersGroupHoleGroup
Contains filters that can be used to deal with holes in the point cloud.
Public class SettingsProcessingGroupFiltersGroupHoleGroupRepairGroup
Fills in point cloud holes by interpolating the remaining surrounding points.
Public class SettingsProcessingGroupFiltersGroupNoiseGroup
Contains filters that can be used to clean up a noisy point cloud.
Public class SettingsProcessingGroupFiltersGroupNoiseGroupRemovalGroup
Discards points with signal-to-noise ratio (SNR) values below a threshold.
Public class SettingsProcessingGroupFiltersGroupNoiseGroupRepairGroup
Gets better surface coverage by repairing regions of missing data caused by noisy points. Consider disabling this filter if you require all points in your point cloud to be of high confidence.
Public class SettingsProcessingGroupFiltersGroupNoiseGroupSuppressionGroup
Reduces noise and outliers in the point cloud. This filter can also be used to reduce ripple effects caused by interreflections. Consider disabling this filter if you need to distinguish very fine details and thus need to avoid any smoothing effects.
Public class SettingsProcessingGroupFiltersGroupOutlierGroup
Contains a filter that removes points with a large Euclidean distance to neighboring points.
Public class SettingsProcessingGroupFiltersGroupOutlierGroupRemovalGroup
Discards points whose Euclidean distance to neighboring points is above a threshold.
Public class SettingsProcessingGroupFiltersGroupReflectionGroup
Contains a filter that removes points likely introduced by reflections (useful for shiny materials).
Public class SettingsProcessingGroupFiltersGroupReflectionGroupRemovalGroup
Discards points likely introduced by reflections (useful for shiny materials).
Public class SettingsProcessingGroupFiltersGroupSmoothingGroup
Smoothing filters.
Public class SettingsProcessingGroupFiltersGroupSmoothingGroupGaussianGroup
Gaussian smoothing of the point cloud.
Public class SettingsProcessingGroupResamplingGroup
Settings for changing the output resolution of the point cloud.
Public class SettingsRegionOfInterestGroup
Removes points outside the region of interest.
Public class SettingsRegionOfInterestGroupBoxGroup
Removes points outside the given three-dimensional box. Using this feature may significantly speed up acquisition and processing time, because one can avoid acquiring and processing data that is guaranteed to fall outside the region of interest. The degree of speed-up depends on the size and shape of the box; generally, a smaller box yields a greater speed-up. The box is defined by three points: O, A, and B. These points define two vectors: OA, which goes from PointO to PointA, and OB, which goes from PointO to PointB. This gives four points, O, A, B, and (O + OA + OB), which together form a parallelogram in 3D. Two extents can be provided to extrude the parallelogram along the surface normal vector of the parallelogram's plane, creating a 3D volume (a parallelepiped). The surface normal vector is defined by the cross product OA x OB.
Public class SettingsRegionOfInterestGroupDepthGroup
Removes points that fall outside a depth range, meaning that their Z coordinate is above a given maximum or below a given minimum.
Public class SettingsSamplingGroup
Sampling settings.
Public class Settings2D
Settings used when capturing 2D images with a Zivid camera.
Public class Settings2DAcquisition
Settings for one 2D acquisition. When capturing 2D HDR, all 2D acquisitions must have the same Aperture setting. Use Exposure Time or Gain to control the exposure instead.
Public class Settings2DAcquisitionsList
List of acquisitions used for 2D capture.
Public class Settings2DProcessingGroup
2D processing settings.
Public class Settings2DProcessingGroupColorGroup
Color settings.
Public class Settings2DProcessingGroupColorGroupBalanceGroup
Color balance settings.
Public class Settings2DProcessingGroupColorGroupExperimentalGroup
Experimental color settings. These may be renamed, moved, or deleted in the future.
Public class Settings2DSamplingGroup
Sampling settings.
Structures
Public structure ColorBGRA
Color with 8-bit blue, green, red, and alpha channels.
Public structure ColorRGBA
Color with 8-bit red, green, blue, and alpha channels.
Public structure ColorSRGB
Color with 8-bit red, green, blue, and alpha channels in the sRGB color space.
Public structure Duration
High-resolution time span. The valid range is 1 nanosecond to 292 years.
Public structure PointXY
Point with two float coordinates.
Public structure PointXYZ
Point with three float coordinates.
Public structure PointXYZColorBGRA
Struct that contains an XYZ point and a BGRA color packed together.
Public structure PointXYZColorRGBA
Struct that contains an XYZ point and an RGBA color packed together.
Delegates
Public delegate CameraUpdateSettingsDelegate
Delegate function for updating settings.
Public delegate CameraInfoCopyWithDelegate
Public delegate CameraInfoRevisionGroupCopyWithDelegate
Public delegate CameraInfoUserDataGroupCopyWithDelegate
Public delegate CameraIntrinsicsCameraMatrixGroupCopyWithDelegate
Public delegate CameraIntrinsicsCopyWithDelegate
Public delegate CameraIntrinsicsDistortionGroupCopyWithDelegate
Public delegate CameraStateCopyWithDelegate
Public delegate CameraStateNetworkGroupCopyWithDelegate
Public delegate CameraStateNetworkGroupIPV4GroupCopyWithDelegate
Public delegate CameraStateNetworkGroupLocalInterfaceCopyWithDelegate
Public delegate CameraStateNetworkGroupLocalInterfaceIPV4GroupCopyWithDelegate
Public delegate CameraStateNetworkGroupLocalInterfaceIPV4GroupSubnetCopyWithDelegate
Public delegate CameraStateTemperatureGroupCopyWithDelegate
Public delegate FrameInfoCopyWithDelegate
Public delegate FrameInfoMetricsGroupCopyWithDelegate
Public delegate FrameInfoSoftwareVersionGroupCopyWithDelegate
Public delegate FrameInfoSystemInfoGroupComputeDeviceGroupCopyWithDelegate
Public delegate FrameInfoSystemInfoGroupCopyWithDelegate
Public delegate FrameInfoSystemInfoGroupCPUGroupCopyWithDelegate
Public delegate NetworkConfigurationCopyWithDelegate
Public delegate NetworkConfigurationIPV4GroupCopyWithDelegate
Public delegate SceneConditionsAmbientLightGroupCopyWithDelegate
Public delegate SceneConditionsCopyWithDelegate
Public delegate SettingsAcquisitionCopyWithDelegate
Public delegate SettingsCopyWithDelegate
Public delegate SettingsDiagnosticsGroupCopyWithDelegate
Public delegate SettingsProcessingGroupColorGroupBalanceGroupCopyWithDelegate
Public delegate SettingsProcessingGroupColorGroupCopyWithDelegate
Public delegate SettingsProcessingGroupColorGroupExperimentalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupClusterGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupClusterGroupRemovalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupCorrectionGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupRemovalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupExperimentalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupHoleGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupHoleGroupRepairGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupNoiseGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupNoiseGroupRemovalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupNoiseGroupRepairGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupNoiseGroupSuppressionGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupOutlierGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupOutlierGroupRemovalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupReflectionGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupReflectionGroupRemovalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupSmoothingGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupSmoothingGroupGaussianGroupCopyWithDelegate
Public delegate SettingsProcessingGroupResamplingGroupCopyWithDelegate
Public delegate SettingsRegionOfInterestGroupBoxGroupCopyWithDelegate
Public delegate SettingsRegionOfInterestGroupCopyWithDelegate
Public delegate SettingsRegionOfInterestGroupDepthGroupCopyWithDelegate
Public delegate SettingsSamplingGroupCopyWithDelegate
Public delegate Settings2DAcquisitionCopyWithDelegate
Public delegate Settings2DCopyWithDelegate
Public delegate Settings2DProcessingGroupColorGroupBalanceGroupCopyWithDelegate
Public delegate Settings2DProcessingGroupColorGroupCopyWithDelegate
Public delegate Settings2DProcessingGroupColorGroupExperimentalGroupCopyWithDelegate
Public delegate Settings2DProcessingGroupCopyWithDelegate
Public delegate Settings2DSamplingGroupCopyWithDelegate
Enumerations
Public enumeration CameraInfoModelOption
The model of the camera.
Public enumeration CameraStateInaccessibleReasonOption
If the camera status is `inaccessible`, this enum value gives you the reason.
Public enumeration CameraStateStatusOption
This enum describes the current status of this camera. The enum can have the following values:
- `inaccessible`: The camera was discovered, but the SDK is not able to connect to this camera. This can be because the IP settings of the camera and the PC are not compatible, or because there are several cameras with the same IP connected to your PC. The `InaccessibleReason` enum will give you more details about why this camera is not accessible. The network configuration of the camera can be changed using the ZividNetworkCameraConfigurator CLI tool. See the knowledge base for more information.
- `busy`: The camera is currently in use by another process. This can be a different process on the same PC, or on a different PC if the camera is shared on a network.
- `applyingNetworkConfiguration`: The camera network configuration is being changed by the current process.
- `firmwareUpdateRequired`: The camera is accessible, but requires a firmware update before you can connect to it.
- `updatingFirmware`: The camera firmware is currently being updated in the current process.
- `available`: The camera is available for use by the current process. This means that you can invoke camera.connect() on this camera.
- `connecting`: The camera is currently connecting in the current process.
- `connected`: The camera is connected in the current process. This means camera.connect() has completed successfully and you can capture using this camera.
- `disconnecting`: The camera is currently disconnecting in the current process. When disconnection has completed, the camera will normally go back to the `available` state.
- `disappeared`: The camera was found earlier, but it can no longer be found. The connection between the PC and the camera may be disrupted, or the camera may have lost power. While in the `disappeared` state, the camera will not be returned from `Application::cameras()`. The camera will go back to one of the other states if it is later found again.
Public enumeration NetworkConfigurationIPV4GroupModeOption
DHCP or manual configuration.
Public enumeration PointCloudDownsampling
Option for downsampling.
Public enumeration SceneConditionsAmbientLightGroupFlickerClassificationOption
A classification of the detected ambient light flicker, if any. The values `grid50hz` and `grid60hz` indicate that ambient light matching a 50 Hz or 60 Hz power grid was detected in the scene. In those cases it is recommended to use Ambient Light Adaptation for better point cloud quality. The value `unknownFlicker` indicates that some significant time-varying ambient light was detected, but it did not match the characteristics of a standard power grid.
Public enumeration SettingsEngineOption
Set the Zivid Vision Engine to use. The Phase Engine is the fastest choice in terms of both acquisition time and total capture time, and is a good compromise between quality and speed. The Phase Engine is recommended for objects that are diffuse, opaque, and slightly specular, and is suitable for applications in logistics such as parcel induction. The Stripe Engine is built for exceptional point cloud quality in scenes with highly specular reflective objects. This makes the engine suitable for applications such as factory automation, manufacturing, and bin picking. Additional acquisition and processing time is required for the Stripe Engine. The Omni Engine is built for exceptional point cloud quality on all scenes, including scenes with extremely specular reflective objects as well as transparent objects. This makes the Omni Engine suitable for applications such as piece picking. As with the Stripe Engine, it trades off speed for quality. The Omni Engine is only available for Zivid 2+. The Sage Engine is built for use cases that require all points to be correct/accurate with particularly high confidence. This can be very suitable for eliminating problems such as reflection artifacts. This validation comes at the cost of speed, and sometimes a reduced number of valid points due to the removal of low-confidence data. The Sage Engine is only available for Zivid 2+R. The Sage Engine is experimental, meaning it may be changed or removed in the future.
Public enumeration SettingsProcessingGroupColorGroupExperimentalGroupModeOption
This setting controls how the color image is computed. `automatic` is the default option. `automatic` is identical to `useFirstAcquisition` for single-acquisition captures and multi-acquisition captures when all the acquisitions have identical (duplicated) acquisition settings. `automatic` is identical to `toneMapping` for multi-acquisition HDR captures with differing acquisition settings. `useFirstAcquisition` uses the color data acquired from the first acquisition provided. If the capture consists of more than one acquisition, then the remaining acquisitions are not used for the color image. No tone mapping is performed. This option provides the most control of the color image, and the color values will be consistent over repeated captures with the same settings. `toneMapping` uses all the acquisitions to create one merged and normalized color image. For HDR captures the dynamic range of the captured images is usually higher than the 8-bit color image range. `toneMapping` will map the HDR color data to the 8-bit color output range by applying a scaling factor. `toneMapping` can also be used for single-acquisition captures to normalize the captured color image to the full 8-bit output. Note that when using `toneMapping` mode the color values can be inconsistent over repeated captures if you move, add or remove objects in the scene. For the most control over the colors, select the `useFirstAcquisition` mode. This setting is deprecated as of SDK 2.14. This setting will be removed in the next SDK major version (SDK 3.0). The recommended way to do a 2D+3D capture is to set a `Settings2D` object in the `Settings/Color` node. Tip: If you want to convert an existing settings.yml file to use the `Settings/Color` node, you can import the .yml file into Zivid Studio and then re-export it to .yml.
Public enumeration SettingsProcessingGroupFiltersGroupReflectionGroupRemovalGroupModeOption
The reflection filter has two modes: Local and Global. Local mode preserves more 3D data on thinner objects, generally removes more reflection artifacts and processes faster than the Global filter. The Global filter is generally better at removing outlier points in the point cloud. It is advised to use the Outlier filter and Cluster filter together with the Local Reflection filter. Global mode was introduced in SDK 1.0 and Local mode was introduced in SDK 2.7.
Public enumeration SettingsProcessingGroupResamplingGroupModeOption
Setting for upsampling or downsampling the point cloud data by some factor. This operation is performed after all other processing has been completed. Downsampling is used to reduce the number of points in the point cloud. This is done by combining each 2x2 or 4x4 group of pixels in the original point cloud into one pixel in a new point cloud. This downsample functionality is identical to the downsample method on the PointCloud class. The averaging process reduces noise in the point cloud, but it will not improve capture speed. To improve capture speed, consider using the subsampling modes found in Settings/Sampling/Pixel. Upsampling is used to increase the number of points in the point cloud. It is not possible to upsample beyond the full resolution of the camera, so upsampling may only be used in combination with the subsampling modes found in Settings/Sampling/Pixel. For example, one may combine blueSubsample2x2 with upsample2x2 to obtain a point cloud that matches a full resolution 2D capture, while retaining the speed benefits of capturing the point cloud with blueSubsample2x2. Upsampling is achieved by expanding pixels in the original point cloud into groups of 2x2 or 4x4 pixels in a new point cloud. Where possible, values are filled at the new points based on an interpolation of the surrounding original points. The points in the new point cloud that correspond to points in the original point cloud are left unchanged. Note that upsampling will lead to four (upsample2x2) or sixteen (upsample4x4) times as many pixels in the point cloud compared to no upsampling, so users should be aware of increased computational cost related to copying and analyzing this data.
Public enumeration SettingsSamplingGroupColorOption
Choose how to sample colors for the point cloud. The `rgb` option gives a 2D image with full colors. The `grayscale` option gives a grayscale (r=g=b) 2D image, which can be acquired faster than full colors. The `disabled` option gives no colors and can allow for even faster captures. The `grayscale` option is not available on all camera models. This setting is deprecated as of SDK 2.14. This setting will be removed in the next SDK major version (SDK 3.0). The recommended way to do a 2D+3D capture is to set a `Settings2D` object in the `Settings/Color` field. Tip: If you want to convert an existing settings.yml file to the recommended API, you can import the .yml file into Zivid Studio and then re-export it to .yml.
Public enumeration SettingsSamplingGroupPixelOption
For Zivid 2/2+, this setting controls whether to read out the full image sensor and use white projector light, or to subsample pixels for specific color channels with corresponding projector light. Picking a specific color channel can help reduce noise and the effects of ambient light; projecting blue light is recommended. For Zivid 2+R, the user does not have to set the projection color and should only consider whether to scale the resolution `by2x2` or `by4x4`. Both of these modes boost the signal strength by about 4x compared to `all`, so the user should consider a corresponding reduction in exposure time. Sampling at a decreased resolution decreases capture time, as less data is captured and processed.
Public enumeration Settings2DProcessingGroupColorGroupExperimentalGroupModeOption
This setting controls how the color image is computed. `automatic` is the default option. It performs tone mapping for HDR captures, but not for single-acquisition captures. Use this mode with a single acquisition if you want to have the most control over the colors in the image. `toneMapping` uses all the acquisitions to create one merged and normalized color image. For HDR captures the dynamic range of the captured images is usually higher than the 8-bit color image range. `toneMapping` will map the HDR color data to the 8-bit color output range by applying a scaling factor. `toneMapping` can also be used for single-acquisition captures to normalize the captured color image to the full 8-bit output. Note that when using `toneMapping` mode the color values can be inconsistent over repeated captures if you move, add or remove objects in the scene. For the most control over the colors in the single-acquisition case, select the `automatic` mode.
Public enumeration Settings2DSamplingGroupColorOption
Choose how to sample colors for the 2D image. The `rgb` option gives an image with full colors. The `grayscale` option gives a grayscale (r=g=b) image, which can be acquired faster than full colors. The `grayscale` option is not available on all camera models.
Public enumeration Settings2DSamplingGroupPixelOption
Set the pixel sampling to use for the 2D capture. This setting defines how the camera sensor is sampled. When doing 2D+3D capture, picking the same value that is used for 3D is generally recommended.
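The sampling and resampling descriptions above suggest pairing pixel subsampling (for faster capture) with upsampling (to restore full resolution). A hedged sketch of how that combination might look in settings; the nested type names follow the flattened names in this listing (e.g. SettingsSamplingGroupPixelOption maps to Settings.SamplingGroup.PixelOption), and the PascalCase enum members (`BlueSubsample2x2`, `Upsample2x2`) are assumed from .NET conventions, so verify both against your SDK version:

```csharp
// Sketch: capture with blue-pixel 2x2 subsampling for speed, then upsample
// the point cloud back to full resolution (names assumed, verify per SDK).
var settings = new Zivid.NET.Settings
{
    Acquisitions = { new Zivid.NET.Settings.Acquisition { } },
    Sampling =
    {
        Pixel = Zivid.NET.Settings.SamplingGroup.PixelOption.BlueSubsample2x2,
    },
    Processing =
    {
        Resampling =
        {
            Mode = Zivid.NET.Settings.ProcessingGroup.ResamplingGroup.ModeOption.Upsample2x2,
        },
    },
};
```

As the resampling description notes, this yields a point cloud that matches a full-resolution 2D capture while keeping the speed benefit of subsampled acquisition; the upsampled points are interpolated, not independently measured.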