
Zivid.NET Namespace

The main namespace for the Zivid .NET API. The top node is Application, and the main class for interfacing with a 3D camera is Camera.
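A minimal usage sketch, following the pattern used in the official Zivid C# samples; the aperture value and output path are placeholders.

// Create the API entry point, connect to a camera, capture with one acquisition and save the frame.
using (var application = new Zivid.NET.Application())
using (var camera = application.ConnectCamera())
{
    var settings = new Zivid.NET.Settings
    {
        Acquisitions = { new Zivid.NET.Settings.Acquisition { Aperture = 5.66 } } // placeholder aperture
    };

    using (var frame = camera.Capture(settings))
    {
        frame.Save("result.zdf"); // placeholder output path
    }
}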
Classes
  Class    Description
Public class Application
Manager class for Zivid cameras.
Public class Camera
Interface to one Zivid camera
Public class CameraInfo
Information about camera model, serial number etc.
Public class CameraInfoRevisionGroup
Major/Minor hardware revision number. This field is deprecated and may be removed in a future version of the SDK. Please use HardwareRevision instead.
Public class CameraInfoUserDataGroup
Information about user data capabilities of the camera
Public class CameraIntrinsics
Information about the intrinsic parameters of the camera (OpenCV model)
Public class CameraIntrinsicsCameraMatrixGroup
The camera matrix K (=[fx,0,cx;0,fy,cy;0,0,1])
Public class CameraIntrinsicsDistortionGroup
The radial and tangential distortion parameters
Public class CameraState
Information about camera connection state, temperatures, etc.
Public class CameraStateNetworkGroup
Current network state
Public class CameraStateNetworkGroupIPV4Group
Current IPv4 protocol state
Public class CameraStateTemperatureGroup
Current temperature(s)
Public class ComputeDevice
Contains information about the ComputeDevice used by Application.
Public class Frame
A frame captured by a Zivid camera
Public class Frame2D
A 2D frame captured by a Zivid camera
Public class FrameInfo
Various information for a frame
Public class FrameInfoSoftwareVersionGroup
The version information for installed software at the time of image capture
Public class FrameInfoSystemInfoGroup
Information about the system that captured this frame
Public class FrameInfoSystemInfoGroupComputeDeviceGroup
Compute device
Public class FrameInfoSystemInfoGroupCPUGroup
CPU
Public class Image<TPixelFormat>
Abstract base-class for all images
Public class ImageBGRA
A BGRA image with 8 bits per channel
Public class ImageRGBA
An RGBA image with 8 bits per channel
Public class ImageSRGB
An RGBA image in the sRGB color space with 8 bits per channel
Public class MarkerDictionary
Holds information about fiducial markers such as ArUco markers for detection
Public class Matrix4x4
A single-precision floating-point 4x4 matrix, stored in row-major order
Public class NetworkConfiguration
Network configuration of a camera
Public class NetworkConfigurationIPV4Group
IPv4 network configuration
Public class PointCloud
Point cloud with x, y, z, RGB color and SNR laid out on a 2D grid
Public class Range<T>
Class describing a range of values for a given type T
Public class Resolution
Class describing a resolution with a width and a height.
Public class Settings
Settings used when capturing with a Zivid camera.
Public class SettingsAcquisition
Settings for a single acquisition.
Public class SettingsAcquisitionsList
List of Acquisition objects.
Public class SettingsDiagnosticsGroup
When Diagnostics is enabled, additional diagnostic data is recorded during capture and included when the frame is saved to a .zdf file. This enables Zivid's Customer Success team to provide better assistance and more thorough troubleshooting. Enabling Diagnostics increases capture time and RAM usage, and it increases the size of the .zdf file, so it is recommended to enable Diagnostics only when reporting issues to Zivid's support team. (A settings sketch showing how to enable it follows this table.)
Public class SettingsProcessingGroup
Settings related to processing of a capture, including filters and color balance.
Public class SettingsProcessingGroupColorGroup
Color settings.
Public class SettingsProcessingGroupColorGroupBalanceGroup
Color balance settings.
Public class SettingsProcessingGroupColorGroupExperimentalGroup
Experimental color settings. These may be renamed, moved or deleted in the future.
Public class SettingsProcessingGroupFiltersGroup
Filter settings.
Public class SettingsProcessingGroupFiltersGroupClusterGroup
Removes floating points and isolated clusters from the point cloud.
Public class SettingsProcessingGroupFiltersGroupClusterGroupRemovalGroup
Cluster removal filter.
Public class SettingsProcessingGroupFiltersGroupExperimentalGroup
Experimental filters. These may be renamed, moved or deleted in the future.
Public class SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroup
Corrects artifacts that appear when imaging scenes with large texture gradients or high contrast. These artifacts are caused by blurring in the lens. The filter works best when aperture values are chosen such that the camera has quite good focus. The filter also supports removing the points that experience a large correction.
Public class SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupCorrectionGroup
Contrast distortion correction filter.
Public class SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupRemovalGroup
Contrast distortion removal filter.
Public class SettingsProcessingGroupFiltersGroupHoleGroup
Contains filters that can be used to deal with holes in the point cloud.
Public class SettingsProcessingGroupFiltersGroupHoleGroupRepairGroup
Fills in point cloud holes by interpolating remaining surrounding points.
Public class SettingsProcessingGroupFiltersGroupNoiseGroup
Contains filters that can be used to clean up a noisy point cloud.
Public class SettingsProcessingGroupFiltersGroupNoiseGroupRemovalGroup
Discard points with signal-to-noise ratio (SNR) values below a threshold.
Public class SettingsProcessingGroupFiltersGroupNoiseGroupRepairGroup
Get better surface coverage by repairing regions of missing data due to noisy points. Consider disabling this filter if you require all points in your point cloud to be of high confidence.
Public class SettingsProcessingGroupFiltersGroupNoiseGroupSuppressionGroup
Reduce noise and outliers in the point cloud. This filter can also be used to reduce ripple effects caused by interreflections. Consider disabling this filter if you need to distinguish very fine details and thus need to avoid any smoothing effects.
Public class SettingsProcessingGroupFiltersGroupOutlierGroup
Contains a filter that removes points with large Euclidean distance to neighboring points.
Public class SettingsProcessingGroupFiltersGroupOutlierGroupRemovalGroup
Discard point if Euclidean distance to neighboring points is above a threshold.
Public class SettingsProcessingGroupFiltersGroupReflectionGroup
Contains a filter that removes points likely introduced by reflections (useful for shiny materials).
Public class SettingsProcessingGroupFiltersGroupReflectionGroupRemovalGroup
Discard points likely introduced by reflections (useful for shiny materials).
Public class SettingsProcessingGroupFiltersGroupSmoothingGroup
Smoothing filters.
Public class SettingsProcessingGroupFiltersGroupSmoothingGroupGaussianGroup
Gaussian smoothing of the point cloud.
Public class SettingsProcessingGroupResamplingGroup
Settings for changing the output resolution of the point cloud.
Public class SettingsRegionOfInterestGroup
Removes points outside the region of interest.
Public class SettingsRegionOfInterestGroupBoxGroup
Removes points outside the given three-dimensional box. Using this feature can significantly speed up acquisition and processing, since data that is guaranteed to fall outside the region of interest is never acquired or processed. The degree of speed-up depends on the size and shape of the box; generally, a smaller box yields a greater speed-up. The box is defined by three points: O, A and B. These define two vectors: OA, from PointO to PointA, and OB, from PointO to PointB. The four points O, A, B and (O + OA + OB) together form a parallelogram in 3D. Two extents can be provided to extrude the parallelogram along its surface normal, creating a 3D volume (parallelepiped). The surface normal is defined by the cross product OA x OB. (See the configuration sketch after this table.)
Public class SettingsRegionOfInterestGroupDepthGroup
Removes points that reside outside of a depth range, meaning that their Z coordinate falls above a given maximum or below a given minimum.
Public class SettingsSamplingGroup
Sampling settings.
Public class Settings2D
Settings used when capturing 2D images with a Zivid camera.
Public class Settings2DAcquisition
Settings for a single acquisition.
Public class Settings2DAcquisitionsList
List of acquisitions. Note that the Zivid SDK only supports a single acquisition per capture in 2D mode.
Public class Settings2DProcessingGroup
Processing related settings.
Public class Settings2DProcessingGroupColorGroup
Color settings.
Public class Settings2DProcessingGroupColorGroupBalanceGroup
Color balance settings.
Public class Settings2DSamplingGroup
Sampling settings.
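The SettingsDiagnosticsGroup and SettingsRegionOfInterestGroupBoxGroup entries above are easiest to see in code. Below is a sketch of a Settings object that enables Diagnostics and restricts the capture to a box region of interest; the nested property layout mirrors the Zivid C# samples, and all coordinates and extents are illustrative placeholders.

// Sketch: enable diagnostics and define a box region of interest.
// The box is spanned by PointO, PointA and PointB and extruded along its
// surface normal (OA x OB) by the Extents range.
var settings = new Zivid.NET.Settings
{
    Acquisitions = { new Zivid.NET.Settings.Acquisition { } },
    Diagnostics = { Enabled = true }, // extra data in the saved .zdf; slower capture, larger file
    RegionOfInterest =
    {
        Box =
        {
            Enabled = true,
            PointO = new Zivid.NET.PointXYZ { x = 1000, y = 1000, z = 1000 },  // placeholder coordinates (mm)
            PointA = new Zivid.NET.PointXYZ { x = 1000, y = -1000, z = 1000 },
            PointB = new Zivid.NET.PointXYZ { x = -1000, y = 1000, z = 1000 },
            Extents = new Zivid.NET.Range<double>(-1000, 1000),
        },
    },
};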
Structures
  Structure    Description
Public structure ColorBGRA
Color with 8-bit blue, green, red and alpha channels
Public structure ColorRGBA
Color with 8-bit red, green, blue and alpha channels
Public structure ColorSRGB
Color with 8-bit red, green, blue and alpha channels in the sRGB color space
Public structure Duration
High-resolution time span. Valid range is 1 nanosecond to 292 years.
Public structure PointXY
Point with two coordinates as float
Public structure PointXYZ
Point with three coordinates as float
Public structure PointXYZColorBGRA
Struct which contains an XYZ point and a BGRA color packed together
Public structure PointXYZColorRGBA
Struct which contains an XYZ point and an RGBA color packed together (see the copy sketch after this table)
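The packed point structures above are what you typically work with after copying data out of a PointCloud. A small sketch, assuming a connected camera and settings as in the sketches earlier on this page, and assuming a copy accessor named along the lines of CopyPointsXYZColorsRGBA() with lowercase point/color fields as in the Zivid C# samples; verify the exact member names against the PointCloud and PointXYZColorRGBA pages.

// Sketch: read packed XYZ + RGBA values from the 2D point-cloud grid of a frame.
using (var frame = camera.Capture(settings))
{
    PointXYZColorRGBA[,] grid = frame.PointCloud.CopyPointsXYZColorsRGBA(); // assumed accessor name
    var center = grid[grid.GetLength(0) / 2, grid.GetLength(1) / 2];
    Console.WriteLine($"Center point: ({center.point.x}, {center.point.y}, {center.point.z})"
                      + $" color: ({center.color.r}, {center.color.g}, {center.color.b})");
}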
Delegates
  Delegate    Description
Public delegate CameraUpdateSettingsDelegate
Delegate function for updating settings
Public delegate CameraInfoCopyWithDelegate
Public delegate CameraInfoRevisionGroupCopyWithDelegate
Public delegate CameraInfoUserDataGroupCopyWithDelegate
Public delegate CameraIntrinsicsCameraMatrixGroupCopyWithDelegate
Public delegate CameraIntrinsicsCopyWithDelegate
Public delegate CameraIntrinsicsDistortionGroupCopyWithDelegate
Public delegate CameraStateCopyWithDelegate
Public delegate CameraStateNetworkGroupCopyWithDelegate
Public delegate CameraStateNetworkGroupIPV4GroupCopyWithDelegate
Public delegate CameraStateTemperatureGroupCopyWithDelegate
Public delegate FrameInfoCopyWithDelegate
Public delegate FrameInfoSoftwareVersionGroupCopyWithDelegate
Public delegate FrameInfoSystemInfoGroupComputeDeviceGroupCopyWithDelegate
Public delegate FrameInfoSystemInfoGroupCopyWithDelegate
Public delegate FrameInfoSystemInfoGroupCPUGroupCopyWithDelegate
Public delegate NetworkConfigurationCopyWithDelegate
Public delegate NetworkConfigurationIPV4GroupCopyWithDelegate
Public delegate SettingsAcquisitionCopyWithDelegate
Public delegate SettingsCopyWithDelegate
Public delegate SettingsDiagnosticsGroupCopyWithDelegate
Public delegate SettingsProcessingGroupColorGroupBalanceGroupCopyWithDelegate
Public delegate SettingsProcessingGroupColorGroupCopyWithDelegate
Public delegate SettingsProcessingGroupColorGroupExperimentalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupClusterGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupClusterGroupRemovalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupCorrectionGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupRemovalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupExperimentalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupHoleGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupHoleGroupRepairGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupNoiseGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupNoiseGroupRemovalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupNoiseGroupRepairGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupNoiseGroupSuppressionGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupOutlierGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupOutlierGroupRemovalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupReflectionGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupReflectionGroupRemovalGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupSmoothingGroupCopyWithDelegate
Public delegate SettingsProcessingGroupFiltersGroupSmoothingGroupGaussianGroupCopyWithDelegate
Public delegate SettingsProcessingGroupResamplingGroupCopyWithDelegate
Public delegate SettingsRegionOfInterestGroupBoxGroupCopyWithDelegate
Public delegate SettingsRegionOfInterestGroupCopyWithDelegate
Public delegate SettingsRegionOfInterestGroupDepthGroupCopyWithDelegate
Public delegate SettingsSamplingGroupCopyWithDelegate
Public delegate Settings2DAcquisitionCopyWithDelegate
Public delegate Settings2DCopyWithDelegate
Public delegate Settings2DProcessingGroupColorGroupBalanceGroupCopyWithDelegate
Public delegate Settings2DProcessingGroupColorGroupCopyWithDelegate
Public delegate Settings2DProcessingGroupCopyWithDelegate
Public delegate Settings2DSamplingGroupCopyWithDelegate
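The *CopyWithDelegate names above suggest that each settings and info type exposes a CopyWith method taking the corresponding delegate and returning a modified copy without mutating the original. A sketch under that assumption; the filter values are placeholders.

// Sketch (assumed CopyWith pattern): derive a settings variant without touching the original.
var baseSettings = new Zivid.NET.Settings
{
    Acquisitions = { new Zivid.NET.Settings.Acquisition { } }
};

var smoothedSettings = baseSettings.CopyWith(s =>
{
    s.Processing.Filters.Smoothing.Gaussian.Enabled = true;
    s.Processing.Filters.Smoothing.Gaussian.Sigma = 1.5; // placeholder sigma
});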
Enumerations
  Enumeration    Description
Public enumeration CameraInfoModelOption
The model of the camera
Public enumeration CameraStateInaccessibleReasonOption
If the camera status is `inaccessible`, then this enum value will give you the reason.
Public enumeration CameraStateStatusOption
This enum describes the current status of this camera. It can have the following values (a connection sketch using this status follows the table):
* `inaccessible`: The camera was discovered, but the SDK is not able to connect to it. This can be because the IP settings of the camera and the PC are not compatible, or because several cameras with the same IP are connected to your PC. The `InaccessibleReason` enum gives more details about why the camera is not accessible. The network configuration of the camera can be changed using the ZividNetworkCameraConfigurator CLI tool; see the knowledge base for more information.
* `busy`: The camera is currently in use by another process. This can be a different process on the same PC, or on a different PC if the camera is shared on a network.
* `applyingNetworkConfiguration`: The camera network configuration is being changed by the current process.
* `firmwareUpdateRequired`: The camera is accessible, but requires a firmware update before you can connect to it.
* `updatingFirmware`: The camera firmware is currently being updated in the current process.
* `available`: The camera is available for use by the current process. This means that you can invoke camera.connect() on this camera.
* `connecting`: The camera is currently connecting in the current process.
* `connected`: The camera is connected in the current process. This means camera.connect() has completed successfully and you can capture using this camera.
* `disconnecting`: The camera is currently disconnecting in the current process. When disconnection has completed, the camera will normally go back to the `available` state.
* `disappeared`: The camera was found earlier, but it can no longer be found. The connection between the PC and the camera may be disrupted, or the camera may have lost power. While in the `disappeared` state, the camera will not be returned from `Application::cameras()`. The camera will go back to one of the other states if it is found again later.
Public enumeration NetworkConfigurationIPV4GroupModeOption
DHCP or manual configuration
Public enumeration PointCloudDownsampling
Option for downsampling
Public enumeration SettingsEngineOption
Set the Zivid Vision Engine to use (see the settings sketch after this table):
* Phase Engine: The fastest choice in terms of both acquisition time and total capture time, and a good compromise between quality and speed. Recommended for objects that are diffuse, opaque, and slightly specular, and suitable for logistics applications such as parcel induction.
* Stripe Engine: Built for exceptional point cloud quality in scenes with highly specular, reflective objects. This makes it suitable for applications such as factory automation, manufacturing, and bin picking. Additional acquisition and processing time is required.
* Omni Engine: Built for exceptional point cloud quality on all scenes, including scenes with extremely specular, reflective objects as well as transparent objects. This makes it suitable for applications such as piece picking. Like the Stripe Engine, it trades speed for quality. The Omni Engine is only available for Zivid 2+.
Public enumeration SettingsProcessingGroupColorGroupExperimentalGroupModeOption
This setting controls how the color image is computed. `automatic` is the default option:
* `automatic`: Identical to `useFirstAcquisition` for single-acquisition captures and for multi-acquisition captures where all acquisitions have identical (duplicated) acquisition settings. Identical to `toneMapping` for multi-acquisition HDR captures with differing acquisition settings.
* `useFirstAcquisition`: Uses the color data acquired from the first acquisition provided. If the capture consists of more than one acquisition, the remaining acquisitions are not used for the color image. No tone mapping is performed. This option provides the most control of the color image, and the color values will be consistent over repeated captures with the same settings.
* `toneMapping`: Uses all the acquisitions to create one merged and normalized color image. For HDR captures the dynamic range of the captured images is usually higher than the 8-bit color image range; `toneMapping` maps the HDR color data to the 8-bit output range by applying a scaling factor. It can also be used for single-acquisition captures to normalize the captured color image to the full 8-bit output. Note that with `toneMapping` the color values can be inconsistent over repeated captures if you move, add or remove objects in the scene. For the most control over the colors, select `useFirstAcquisition`.
Public enumeration SettingsProcessingGroupFiltersGroupReflectionGroupRemovalGroupModeOption
The reflection filter has two modes: Local and Global. Local mode preserves more 3D data on thinner objects, generally removes more reflection artifacts and processes faster than the Global filter. The Global filter is generally better at removing outlier points in the point cloud. It is advised to use the Outlier filter and Cluster filter together with the Local Reflection filter. Global mode was introduced in SDK 1.0 and Local mode was introduced in SDK 2.7.
Public enumeration SettingsProcessingGroupResamplingGroupModeOption
Setting for upsampling or downsampling the point cloud data by some factor. This operation is performed after all other processing has been completed.

Downsampling reduces the number of points in the point cloud by combining each 2x2 or 4x4 group of pixels in the original point cloud into one pixel in a new point cloud. This downsample functionality is identical to the downsample method on the PointCloud class. The averaging process reduces noise in the point cloud, but it will not improve capture speed. To improve capture speed, consider using the subsampling modes found in Settings/Sampling/Pixel.

Upsampling increases the number of points in the point cloud. It is not possible to upsample beyond the full resolution of the camera, so upsampling may only be used in combination with the subsampling modes found in Settings/Sampling/Pixel. For example, one may combine blueSubsample2x2 with upsample2x2 to obtain a point cloud that matches a full-resolution 2D capture, while retaining the speed benefits of capturing the point cloud with blueSubsample2x2 (see the sketch after this table). Upsampling is achieved by expanding pixels in the original point cloud into groups of 2x2 or 4x4 pixels in a new point cloud. Where possible, values are filled at the new points based on an interpolation of the surrounding original points. The points in the new point cloud that correspond to points in the original point cloud are left unchanged.

Note that upsampling leads to four (upsample2x2) or sixteen (upsample4x4) times as many pixels in the point cloud compared to no upsampling, so be aware of the increased computational cost of copying and analyzing this data.
Public enumeration SettingsSamplingGroupColorOption
Choose how to sample colors for the point cloud. The `rgb` option gives a 2D image with full colors. The `grayscale` option gives a grayscale (r=g=b) 2D image, which can be acquired faster than full colors. The `disabled` option gives no colors and can allow for even faster captures. The `grayscale` option is not available on all camera models.
Public enumeration SettingsSamplingGroupPixelOption
Set whether the full image sensor should be used with white projector light or only specific color channels with corresponding projector light. Using only a specific color channel will subsample pixels and give a smaller resolution. Subsampling decreases the capture time, as less data will be captured and processed. Picking a specific color channel can also help reduce noise and effects of ambient light. Projecting blue light will in most cases give better data than red light.
Public enumeration Settings2DSamplingGroupColorOption
Choose how to sample colors for the 2D image. The `rgb` option gives an image with full colors. The `grayscale` option gives a grayscale (r=g=b) image, which can be acquired faster than full colors. The `grayscale` option is not available on all camera models.
Public enumeration Settings2DSamplingGroupPixelOption
Use this setting to obtain an image that matches a point cloud captured with the equivalent sampling setting.
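A connection sketch built around CameraStateStatusOption, as described above. The Cameras() accessor and the PascalCase enum member (Available) are assumptions based on this listing; check the Application and CameraState pages for the exact names.

// Sketch: connect to the first camera that reports itself as available.
var application = new Zivid.NET.Application();
foreach (var camera in application.Cameras())
{
    if (camera.State.Status == Zivid.NET.CameraState.StatusOption.Available)
    {
        camera.Connect();
        Console.WriteLine("Connected to camera: " + camera.Info.SerialNumber);
        break;
    }
}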
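A combined sketch for the Engine, Sampling and Resampling options above: subsample on the blue channel for speed, then upsample the point cloud back to full resolution, with colors taken from the first acquisition. The nested enum type paths and PascalCase member names are inferred from the class and option names listed on this page and should be checked against your SDK version.

// Sketch: blue-channel subsampling combined with 2x2 upsampling, Stripe engine,
// and color taken from the first acquisition (no tone mapping).
var settings = new Zivid.NET.Settings
{
    Engine = Zivid.NET.Settings.EngineOption.Stripe,
    Acquisitions = { new Zivid.NET.Settings.Acquisition { } },
    Sampling = { Pixel = Zivid.NET.Settings.SamplingGroup.PixelOption.BlueSubsample2x2 },
    Processing =
    {
        Resampling = { Mode = Zivid.NET.Settings.ProcessingGroup.ResamplingGroup.ModeOption.Upsample2x2 },
        Color = { Experimental = { Mode = Zivid.NET.Settings.ProcessingGroup.ColorGroup.ExperimentalGroup.ModeOption.UseFirstAcquisition } },
    },
};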