Zivid.NET Namespace
Class | Description | |
---|---|---|
Application | Manager class for Zivid cameras. A typical capture flow using this class is sketched after this table. | |
Camera | Interface to one Zivid camera | |
CameraInfo | Information about camera model, serial number etc. | |
CameraInfoRevisionGroup | Major/Minor hardware revision number. This field is deprecated and may be removed in a future version of the SDK. Please use HardwareRevision instead. | |
CameraInfoUserDataGroup | Information about user data capabilities of the camera | |
CameraIntrinsics | Information about the intrinsic parameters of the camera (OpenCV model) | |
CameraIntrinsicsCameraMatrixGroup | The camera matrix K (=[fx,0,cx;0,fy,cy;0,0,1]) | |
CameraIntrinsicsDistortionGroup | The radial and tangential distortion parameters | |
CameraState | Information about camera connection state, temperatures, etc. | |
CameraStateNetworkGroup | Current network state | |
CameraStateNetworkGroupIPV4Group | Current IPv4 protocol state | |
CameraStateTemperatureGroup | Current temperature(s) | |
ComputeDevice | Contains information about the ComputeDevice used by Application. | |
Frame | A frame captured by a Zivid camera | |
Frame2D | A 2D frame captured by a Zivid camera | |
FrameInfo | Various information for a frame | |
FrameInfoSoftwareVersionGroup | The version information for installed software at the time of image capture | |
FrameInfoSystemInfoGroup | Information about the system that captured this frame | |
FrameInfoSystemInfoGroupComputeDeviceGroup | Compute device | |
FrameInfoSystemInfoGroupCPUGroup | CPU | |
Image<PixelFormat> | Abstract base class for all images | |
ImageBGRA | A BGRA image with 8 bits per channel | |
ImageRGBA | An RGBA image with 8 bits per channel | |
ImageSRGB | An RGBA image in the sRGB color space with 8 bits per channel | |
MarkerDictionary | Holds information about fiducial markers such as ArUco markers for detection | |
Matrix4x4 | A single-precision floating-point 4x4 matrix, stored in row-major order | |
NetworkConfiguration | Network configuration of a camera | |
NetworkConfigurationIPV4Group | IPv4 network configuration | |
PointCloud | Point cloud with x, y, z, RGB color and SNR laid out on a 2D grid | |
Range<T> | Class describing a range of values for a given type T | |
Resolution | Class describing a resolution with a width and a height. | |
Settings | Settings used when capturing with a Zivid camera. | |
SettingsAcquisition | Settings for a single acquisition. | |
SettingsAcquisitionsList | List of Acquisition objects. | |
SettingsDiagnosticsGroup | When Diagnostics is enabled, additional diagnostic data is recorded during capture and included when saving the frame to a .zdf file. This enables Zivid's Customer Success team to provide better assistance and more thorough troubleshooting.<br>Enabling Diagnostics increases the capture time and the RAM usage. It will also increase the size of the .zdf file. It is recommended to enable Diagnostics only when reporting issues to Zivid's support team. | |
SettingsProcessingGroup | Settings related to processing of a capture, including filters and color balance. | |
SettingsProcessingGroupColorGroup | Color settings. | |
SettingsProcessingGroupColorGroupBalanceGroup | Color balance settings. | |
SettingsProcessingGroupColorGroupExperimentalGroup | Experimental color settings. These may be renamed, moved or deleted in the future. | |
SettingsProcessingGroupFiltersGroup | Filter settings. | |
SettingsProcessingGroupFiltersGroupClusterGroup | Removes floating points and isolated clusters from the point cloud. | |
SettingsProcessingGroupFiltersGroupClusterGroupRemovalGroup | Cluster removal filter. | |
SettingsProcessingGroupFiltersGroupExperimentalGroup | Experimental filters. These may be renamed, moved or deleted in the future. | |
SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroup | Corrects artifacts that appear when imaging scenes with large texture gradients or high contrast. These artifacts are caused by blurring in the lens. The filter works best when aperture values are chosen such that the camera has quite good focus. The filter also supports removing the points that experience a large correction. | |
SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupCorrectionGroup | Contrast distortion correction filter. | |
SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupRemovalGroup | Contrast distortion removal filter. | |
SettingsProcessingGroupFiltersGroupHoleGroup | Contains filters that can be used to deal with holes in the point cloud. | |
SettingsProcessingGroupFiltersGroupHoleGroupRepairGroup | Fills in point cloud holes by interpolating remaining surrounding points. | |
SettingsProcessingGroupFiltersGroupNoiseGroup | Contains filters that can be used to clean up a noisy point cloud. | |
SettingsProcessingGroupFiltersGroupNoiseGroupRemovalGroup | Discard points with signal-to-noise ratio (SNR) values below a threshold. | |
SettingsProcessingGroupFiltersGroupNoiseGroupRepairGroup | Get better surface coverage by repairing regions of missing data due to noisy points. Consider disabling this filter if you require all points in your point cloud to be of high confidence. | |
SettingsProcessingGroupFiltersGroupNoiseGroupSuppressionGroup | Reduce noise and outliers in the point cloud. This filter can also be used to reduce ripple effects caused by interreflections. Consider disabling this filter if you need to distinguish very fine details and thus need to avoid any smoothing effects. | |
SettingsProcessingGroupFiltersGroupOutlierGroup | Contains a filter that removes points with large Euclidean distance to neighboring points. | |
SettingsProcessingGroupFiltersGroupOutlierGroupRemovalGroup | Discard point if Euclidean distance to neighboring points is above a threshold. | |
SettingsProcessingGroupFiltersGroupReflectionGroup | Contains a filter that removes points likely introduced by reflections (useful for shiny materials). | |
SettingsProcessingGroupFiltersGroupReflectionGroupRemovalGroup | Discard points likely introduced by reflections (useful for shiny materials). | |
SettingsProcessingGroupFiltersGroupSmoothingGroup | Smoothing filters. | |
SettingsProcessingGroupFiltersGroupSmoothingGroupGaussianGroup | Gaussian smoothing of the point cloud. | |
SettingsProcessingGroupResamplingGroup | Settings for changing the output resolution of the point cloud. | |
SettingsRegionOfInterestGroup | Removes points outside the region of interest. | |
SettingsRegionOfInterestGroupBoxGroup | Removes points outside the given three-dimensional box. Using this feature may significantly speed up acquisition and processing time, because one can avoid acquiring and processing data that is guaranteed to fall outside of the region of interest. The degree of speed-up depends on the size and shape of the box. Generally, a smaller box yields a greater speed-up.<br>The box is defined by three points: O, A and B. These points define two vectors, OA that goes from PointO to PointA, and OB that goes from PointO to PointB. This gives 4 points O, A, B and (O + OA + OB), that together form a parallelogram in 3D.<br>Two extents can be provided, to extrude the parallelogram along the surface normal vector of the parallelogram plane. This creates a 3D volume (parallelepiped). The surface normal vector is defined by the cross product OA x OB. | |
SettingsRegionOfInterestGroupDepthGroup | Removes points that reside outside of a depth range, meaning that their Z coordinate falls above a given maximum or below a given minimum. | |
SettingsSamplingGroup | Sampling settings. | |
Settings2D | Settings used when capturing 2D images with a Zivid camera. | |
Settings2DAcquisition | Settings for a single acquisition. | |
Settings2DAcquisitionsList | List of acquisitions. Note that the Zivid SDK only supports a single acquisition per capture in 2D mode. | |
Settings2DProcessingGroup | Processing related settings. | |
Settings2DProcessingGroupColorGroup | Color settings. | |
Settings2DProcessingGroupColorGroupBalanceGroup | Color balance settings. | |
Settings2DSamplingGroup | Sampling settings. | |
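
The classes above fit together in a short capture pipeline: an Application discovers and connects cameras, a Settings object with at least one Acquisition drives the capture, and the resulting Frame wraps the PointCloud. The sketch below illustrates that flow, loosely following the patterns in Zivid's public C# samples; treat the aperture value, the file name, and the exact member spellings as illustrative assumptions that may differ between SDK versions.

```csharp
using System;

// Minimal capture sketch (illustrative, not authoritative).
// Application manages cameras, Settings + Acquisition drive the capture,
// and the returned Frame wraps the point cloud. Values are arbitrary examples.
class CaptureSketch
{
    static void Main()
    {
        using (var application = new Zivid.NET.Application())
        {
            // Connect to the first available camera.
            var camera = application.ConnectCamera();

            var settings = new Zivid.NET.Settings
            {
                Acquisitions = { new Zivid.NET.Settings.Acquisition { Aperture = 5.66 } },
            };

            // Capture returns a Frame holding the point cloud plus FrameInfo metadata.
            using (var frame = camera.Capture(settings))
            {
                frame.Save("result.zdf"); // .zdf keeps the full structured point cloud
                Console.WriteLine("Saved result.zdf");
            }
        }
    }
}
```
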
Structure | Description | |
---|---|---|
ColorBGRA | Color with 8-bit blue, green, red and alpha channels | |
ColorRGBA | Color with 8-bit red, green, blue and alpha channels | |
ColorSRGB | Color with 8-bit red, green, blue and alpha channels in the sRGB color space | |
Duration | High-resolution time span. Valid range is 1 nanosecond to 292 years. | |
PointXY | Point with two coordinates as float | |
PointXYZ | Point with three coordinates as float | |
PointXYZColorBGRA | Struct which contains XYZ point and BGRA color packed together | |
PointXYZColorRGBA | Struct which contains XYZ point and RGBA color packed together (see the copy sketch after this table) | |
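
Two of these structs show up directly in user code: Duration is used for acquisition exposure times, and PointXYZColorRGBA is the packed per-pixel element obtained when copying a structured point cloud. The sketch below is a rough illustration; the `CopyPointsXYZColorsRGBA()` method and the `Width`/`Height` properties are assumptions taken from typical Zivid C# samples, so verify them against your SDK version.

```csharp
using System;
using Zivid.NET;

static class PointCloudCopySketch
{
    // Hedged sketch: assumes `camera` is an already connected Zivid.NET.Camera
    // (see the previous capture sketch).
    public static void CaptureAndCopy(Camera camera)
    {
        var settings = new Settings
        {
            Acquisitions =
            {
                new Settings.Acquisition
                {
                    // Duration is a high-resolution time span; 10 ms exposure here.
                    ExposureTime = Duration.FromMicroseconds(10000),
                },
            },
        };

        using (var frame = camera.Capture(settings))
        {
            var pointCloud = frame.PointCloud;

            // Copy the grid of packed XYZ + RGBA values
            // (method name assumed from Zivid sample code).
            var data = pointCloud.CopyPointsXYZColorsRGBA();

            Console.WriteLine(
                $"Copied {data.Length} points from a {pointCloud.Width} x {pointCloud.Height} grid");
        }
    }
}
```
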
Enumeration | Description | |
---|---|---|
CameraInfoModelOption | The model of the camera | |
CameraStateInaccessibleReasonOption | If the camera status is `inaccessible`, then this enum value will give you the reason. | |
CameraStateStatusOption | This enum describes the current status of this camera. The enum can have the following values:<br>* `inaccessible`: The camera was discovered, but the SDK is not able to connect to this camera. This can be because the IP settings of the camera and the PC are not compatible, or because there are several cameras with the same IP connected to your PC. The `InaccessibleReason` enum will give you more details about why this camera is not accessible. The network configuration of the camera can be changed using the ZividNetworkCameraConfigurator CLI tool. See the knowledge base for more information.<br>* `busy`: The camera is currently in use by another process. This can be a different process on the same PC, or on a different PC if the camera is shared on a network.<br>* `applyingNetworkConfiguration`: The camera network configuration is being changed by the current process.<br>* `firmwareUpdateRequired`: The camera is accessible, but requires a firmware update before you can connect to it.<br>* `updatingFirmware`: The camera firmware is currently being updated in the current process.<br>* `available`: The camera is available for use by the current process. This means that you can invoke camera.connect() on this camera.<br>* `connecting`: The camera is currently connecting in the current process.<br>* `connected`: The camera is connected in the current process. This means camera.connect() has successfully completed and you can capture using this camera.<br>* `disconnecting`: The camera is currently disconnecting in the current process. When disconnection has completed, the camera will normally go back to the `available` state.<br>* `disappeared`: The camera was found earlier, but it can no longer be found. The connection between the PC and the camera may be disrupted, or the camera may have lost power. When in `disappeared` state, the camera will not be returned from `Application::cameras()`. The camera will go back to one of the other states if the camera is later found again. | |
NetworkConfigurationIPV4GroupModeOption | DHCP or manual configuration | |
PointCloudDownsampling | Option for downsampling | |
SettingsEngineOption | Set the Zivid Vision Engine to use (see the settings sketch after this table).<br>The Phase Engine is the fastest choice in terms of both acquisition time and total capture time, and is a good compromise between quality and speed. The Phase Engine is recommended for objects that are diffuse, opaque, and slightly specular, and is suitable for applications in logistics such as parcel induction.<br>The Stripe Engine is built for exceptional point cloud quality in scenes with highly specular reflective objects. This makes the engine suitable for applications such as factory automation, manufacturing, and bin picking. Additional acquisition and processing time are required for the Stripe Engine.<br>The Omni Engine is built for exceptional point cloud quality on all scenes, including scenes with extremely specular reflective objects, as well as transparent objects. This makes the Omni Engine suitable for applications such as piece picking. Like the Stripe Engine, it trades off speed for quality. The Omni Engine is only available for Zivid 2+. | |
SettingsProcessingGroupColorGroupExperimentalGroupModeOption | This setting controls how the color image is computed.<br>`automatic` is the default option. `automatic` is identical to `useFirstAcquisition` for single-acquisition captures and multi-acquisition captures when all the acquisitions have identical (duplicated) acquisition settings. `automatic` is identical to `toneMapping` for multi-acquisition HDR captures with differing acquisition settings.<br>`useFirstAcquisition` uses the color data acquired from the first acquisition provided. If the capture consists of more than one acquisition, then the remaining acquisitions are not used for the color image. No tone mapping is performed. This option provides the most control of the color image, and the color values will be consistent over repeated captures with the same settings.<br>`toneMapping` uses all the acquisitions to create one merged and normalized color image. For HDR captures the dynamic range of the captured images is usually higher than the 8-bit color image range. `toneMapping` will map the HDR color data to the 8-bit color output range by applying a scaling factor. `toneMapping` can also be used for single-acquisition captures to normalize the captured color image to the full 8-bit output. Note that when using `toneMapping` mode the color values can be inconsistent over repeated captures if you move, add or remove objects in the scene. For the most control over the colors, select the `useFirstAcquisition` mode. | |
SettingsProcessingGroupFiltersGroupReflectionGroupRemovalGroupModeOption | The reflection filter has two modes: Local and Global. Local mode preserves more 3D data on thinner objects, generally removes more reflection artifacts and processes faster than the Global filter. The Global filter is generally better at removing outlier points in the point cloud. It is advised to use the Outlier filter and Cluster filter together with the Local Reflection filter. Global mode was introduced in SDK 1.0 and Local mode was introduced in SDK 2.7. | |
SettingsProcessingGroupResamplingGroupModeOption | Setting for upsampling or downsampling the point cloud data by some factor. This operation is performed after all other processing has been completed.<br>Downsampling is used to reduce the number of points in the point cloud. This is done by combining each 2x2 or 4x4 group of pixels in the original point cloud into one pixel in a new point cloud. This downsample functionality is identical to the downsample method on the PointCloud class. The averaging process reduces noise in the point cloud, but it will not improve capture speed. To improve capture speed, consider using the subsampling modes found in Settings/Sampling/Pixel.<br>Upsampling is used to increase the number of points in the point cloud. It is not possible to upsample beyond the full resolution of the camera, so upsampling may only be used in combination with the subsampling modes found in Settings/Sampling/Pixel. For example, one may combine blueSubsample2x2 with upsample2x2 to obtain a point cloud that matches a full resolution 2D capture, while retaining the speed benefits of capturing the point cloud with blueSubsample2x2. Upsampling is achieved by expanding pixels in the original point cloud into groups of 2x2 or 4x4 pixels in a new point cloud. Where possible, values are filled at the new points based on an interpolation of the surrounding original points. The points in the new point cloud that correspond to points in the original point cloud are left unchanged. Note that upsampling will lead to four (upsample2x2) or sixteen (upsample4x4) times as many pixels in the point cloud compared to no upsampling, so users should be aware of increased computational cost related to copying and analyzing this data. | |
SettingsSamplingGroupColorOption | Choose how to sample colors for the point cloud. The `rgb` option gives a 2D image with full colors. The `grayscale` option gives a grayscale (r=g=b) 2D image, which can be acquired faster than full colors. The `disabled` option gives no colors and can allow for even faster captures. The `grayscale` option is not available on all camera models. | |
SettingsSamplingGroupPixelOption | Set whether the full image sensor should be used with white projector light or only specific color channels with corresponding projector light. Using only a specific color channel will subsample pixels and give a smaller resolution. Subsampling decreases the capture time, as less data will be captured and processed. Picking a specific color channel can also help reduce noise and effects of ambient light. Projecting blue light will in most cases give better data than red light. | |
Settings2DSamplingGroupColorOption | Choose how to sample colors for the 2D image. The `rgb` option gives an image with full colors. The `grayscale` option gives a grayscale (r=g=b) image, which can be acquired faster than full colors. The `grayscale` option is not available on all camera models. | |
Settings2DSamplingGroupPixelOption | Use this setting to obtain an image that matches a point cloud captured with the equivalent sampling setting. | |
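
Most of these enumerations are consumed by assigning them to the corresponding fields of a Settings object. The sketch below shows one plausible way to pick an engine and a pixel-subsampling mode and to enable one of the filters described above. It follows the general shape of Zivid's C# samples, but the nested type names (for example Settings.SamplingGroup.PixelOption) are assumptions inferred from the flattened names in these tables, so check them against your SDK version.

```csharp
using Zivid.NET;

static class SettingsEnumSketch
{
    // Hedged sketch: wiring enumeration options into a Settings object.
    // Nested enum type names are assumptions mirroring the flattened names listed above.
    public static Settings Build()
    {
        var settings = new Settings
        {
            // SettingsEngineOption: Phase favors speed; Stripe/Omni trade speed for quality.
            Engine = Settings.EngineOption.Phase,
            Acquisitions = { new Settings.Acquisition { Aperture = 5.66 } },
            // SettingsSamplingGroupPixelOption: blue-channel subsampling cuts capture time
            // at the cost of resolution (enum nesting assumed, not verified).
            Sampling = { Pixel = Settings.SamplingGroup.PixelOption.BlueSubsample2x2 },
        };

        // Filter groups are nested under Processing; enable outlier removal as an example.
        settings.Processing.Filters.Outlier.Removal.Enabled = true;
        settings.Processing.Filters.Outlier.Removal.Threshold = 5.0;

        return settings;
    }
}
```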