Manager class for Zivid cameras. Handles connection to cameras and configuration of the log system. Creating an instance of `Application` allocates memory that will not be collected until you manually invoke `Dispose()` (or use a `using` statement). Creating a second `Application` instance before the previous one has been disposed will trigger an exception.
Interface to one Zivid camera
Information about the camera model, serial number, etc.
The hardware revision of the camera
Information about user data capabilities of the camera
Information about the intrinsic parameters of the camera (OpenCV model)
The camera matrix K (=[fx,0,cx;0,fy,cy;0,0,1])
The radial and tangential distortion parameters
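Since the intrinsics above follow the standard OpenCV pinhole model, the projection they imply can be sketched as follows. This is an illustration of the math only, not the SDK API; the function name and the numeric intrinsics are made up, and real values come from the camera:

```python
def project(point_xyz, fx, fy, cx, cy, k1=0.0, k2=0.0, p1=0.0, p2=0.0, k3=0.0):
    """Project a 3D camera-frame point to pixel coordinates using the
    OpenCV pinhole model: normalize by Z, apply radial/tangential
    distortion, then map through the camera matrix K."""
    X, Y, Z = point_xyz
    xp, yp = X / Z, Y / Z                      # normalized image coordinates
    r2 = xp * xp + yp * yp
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp)
    yd = yp * radial + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp
    return fx * xd + cx, fy * yd + cy          # (u, v) in pixels

# Illustrative (made-up) intrinsics; with zero distortion the point
# (0.1, -0.05, 1.0) lands at (1160, 500).
u, v = project((0.1, -0.05, 1.0), fx=2000, fy=2000, cx=960, cy=600)
```

A point on the optical axis, `(0, 0, Z)`, projects to the principal point `(cx, cy)` regardless of distortion, which is a quick sanity check for any implementation of this model.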
Information about camera connection state, temperatures, etc.
Contains information about the ComputeDevice in use.
A frame captured by a Zivid camera
A 2D frame captured by a Zivid camera
Various information for a frame
The version information for installed software at the time of image capture
Information about the system that captured this frame
Abstract base-class for all images
A BGRA image with 8 bits per channel
An RGBA image with 8 bits per channel
A single-precision floating-point 4x4 matrix, stored in row-major order
Point cloud with x, y, z, RGB color and SNR laid out on a 2D grid
Class describing a range of values for a given type T
Settings used when capturing with a Zivid camera
Settings for a single acquisition
List of Acquisition objects
When Diagnostics is enabled, extra diagnostic information is recorded during capture. This extra information is included when saving the frame to a .zdf file, and will help Zivid's support team to provide better assistance. Enabling Diagnostics increases the capture time and the RAM usage. It will also increase the size of the .zdf file. It is recommended to enable Diagnostics only when reporting issues to Zivid's support team.
Experimental features. These settings may be changed, renamed, moved or deleted in the future.
Settings related to processing of a capture, including filters and color balance
Color balance settings
Experimental color settings. These may be renamed, moved or deleted in the future.
Removes floating points and isolated clusters from the point cloud.
Experimental filters. These may be renamed, moved or deleted in the future.
Corrects artifacts that appear when imaging scenes with large texture gradients or high contrast. These artifacts are caused by blurring in the lens. The filter works best when aperture values are chosen such that the camera has quite good focus. The filter also supports removing the points that experience a large correction.
Fills in removed points by interpolating remaining surrounding points.
Contains filters that can be used to clean up a noisy point cloud
Discard points with signal-to-noise ratio (SNR) values below a threshold
Get better surface coverage by repairing regions of missing data due to noisy points. Consider disabling this filter if you require all points in your point cloud to be of high confidence.
Reduce noise and outliers in the point cloud. This filter can also be used to reduce ripple effects caused by interreflections. Consider disabling this filter if you need to distinguish very fine details and thus need to avoid any smoothing effects.
Contains a filter that removes points with large Euclidean distance to neighboring points
Discard point if Euclidean distance to neighboring points is above a threshold
Contains a filter that removes points likely introduced by reflections (useful for shiny materials)
Discard points likely introduced by reflections (useful for shiny materials)
Experimental reflection filter related settings
Gaussian smoothing of the point cloud
Removes points outside the region of interest.
Removes the points outside the box. The box is defined by three points: O, A and B. These points define two vectors, OA that goes from PointO to PointA, and OB that goes from PointO to PointB. This gives 4 points O, A, B and (O + OA + OB), that together form a parallelogram in 3D. Two extents can be provided, to extrude the parallelogram along the surface normal vector of the parallelogram plane. This creates a 3D volume (parallelepiped). The surface normal vector is defined by the cross product OA x OB.
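The geometry above can be sketched numerically. This is only an illustration of the parallelepiped test (it assumes OA and OB are orthogonal, so plain projections recover the parallelogram coordinates), not the SDK's implementation, and all point values are made up:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def inside_box(p, O, A, B, extent_min, extent_max):
    """Test whether p lies inside the parallelepiped spanned by OA and OB,
    extruded between extent_min and extent_max along the unit surface
    normal n = OA x OB. Assumes OA is orthogonal to OB."""
    OA, OB = sub(A, O), sub(B, O)
    n = cross(OA, OB)
    n_len = dot(n, n) ** 0.5
    n_unit = (n[0]/n_len, n[1]/n_len, n[2]/n_len)
    d = sub(p, O)
    s = dot(d, OA) / dot(OA, OA)   # coordinate along OA
    t = dot(d, OB) / dot(OB, OB)   # coordinate along OB
    h = dot(d, n_unit)             # signed height above the plane
    return 0 <= s <= 1 and 0 <= t <= 1 and extent_min <= h <= extent_max
```

For example, with `O=(0,0,0)`, `A=(1,0,0)`, `B=(0,1,0)` and extents `(-0.5, 0.5)`, the normal is `(0,0,1)`, so points within half a unit of the XY-plane inside the unit square pass the test.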
Removes points that reside outside of a depth range, meaning that their Z coordinate falls above a given maximum or below a given minimum.
Settings used when capturing 2D images with a Zivid camera
Settings for a single acquisition
List of acquisitions. Note that the Zivid SDK only supports a single acquisition per capture in 2D mode.
Processing related settings
Color balance settings
Color with 8-bit blue, green, red and alpha channels
Color with 8-bit red, green, blue and alpha channels
High-resolution time span. Valid range is 1 nanosecond to 292 years.
Point with three coordinates as float
Struct that contains an XYZ point and a BGRA color packed together
Struct that contains an XYZ point and an RGBA color packed together
Delegate function for updating settings
The model of the camera
Option for downsampling
Set the Zivid Vision Engine to use. The Phase Engine is the fastest choice in terms of both acquisition time and total capture time, and is a good compromise between quality and speed. The Phase Engine is recommended for objects that are diffuse, opaque, and slightly specular, and is suitable for applications in logistics such as parcel induction. The Stripe Engine is built for exceptional point cloud quality in scenes with highly specular reflective objects. This makes the engine suitable for applications such as factory automation, manufacturing, and bin picking. Additional acquisition and processing time are required for the Stripe Engine. The Omni Engine is built for exceptional point cloud quality on all scenes, including scenes with extremely specular reflective objects, as well as transparent objects. This makes the Omni Engine suitable for applications such as piece picking. Same as for the Stripe Engine, it trades off speed for quality. The Omni Engine is only available for Zivid 2+. The Stripe and Omni engines are currently experimental, and may be changed and improved in the future.
This setting controls how the color image is computed. `automatic` is the default option. `automatic` is identical to `useFirstAcquisition` for single-acquisition captures and multi-acquisition captures when all the acquisitions have identical (duplicated) acquisition settings. `automatic` is identical to `toneMapping` for multi-acquisition HDR captures with differing acquisition settings. `useFirstAcquisition` uses the color data acquired from the first acquisition provided. If the capture consists of more than one acquisition, then the remaining acquisitions are not used for the color image. No tone mapping is performed. This option provides the most control of the color image, and the color values will be consistent over repeated captures with the same settings. `toneMapping` uses all the acquisitions to create one merged and normalized color image. For HDR captures the dynamic range of the captured images is usually higher than the 8-bit color image range. `toneMapping` will map the HDR color data to the 8-bit color output range by applying a scaling factor. `toneMapping` can also be used for single-acquisition captures to normalize the captured color image to the full 8-bit output. Note that when using `toneMapping` mode the color values can be inconsistent over repeated captures if you move, add or remove objects in the scene. For the most control over the colors, select the `useFirstAcquisition` mode.
The reflection filter has two modes: Local and Global. Local mode preserves more 3D data on thinner objects, generally removes more reflection artifacts, and processes faster than Global mode. Global mode is generally better at removing outlier points in the point cloud. It is advised to use the Outlier filter together with the Local Reflection filter. Global mode was introduced in SDK 1.0 and Local mode in SDK 2.7.
Choose how to sample colors for the point cloud. The `rgb` option gives all colors for a regular Zivid camera. The `disabled` option gives no colors and can allow for faster captures. It is also useful if you want to avoid projecting white light in the subsampling modes under `Sampling::Pixel`.
Set whether the full image sensor should be used with white projector light or only specific color channels with corresponding projector light. Using only a specific color channel will subsample pixels and give a smaller resolution. Subsampling decreases the capture time, as less data will be captured and processed. Picking a specific color channel can also help reduce noise and effects of ambient light. Projecting blue light will in most cases give better data than red light.