Zivid.NET Namespace
Class | Description | |
---|---|---|
Application | Manager class for Zivid cameras. | |
Camera | Interface to one Zivid camera | |
CameraInfo | Information about camera model, serial number etc. | |
CameraInfoRevisionGroup | Major/Minor hardware revision number. This field is deprecated and may be removed in a future version of the SDK. Please use HardwareRevision instead. | |
CameraInfoUserDataGroup | Information about user data capabilities of the camera | |
CameraIntrinsics | Information about the intrinsic parameters of the camera (OpenCV model) | |
CameraIntrinsicsCameraMatrixGroup | The camera matrix K (=[fx,0,cx;0,fy,cy;0,0,1]) | |
CameraIntrinsicsDistortionGroup | The radial and tangential distortion parameters | |
CameraState | Information about camera connection state, temperatures, etc. | |
CameraStateNetworkGroup | Current network state | |
CameraStateNetworkGroupIPV4Group | Current IPv4 protocol state | |
CameraStateNetworkGroupLocalInterface | Current state of the computer's local network interface. | |
CameraStateNetworkGroupLocalInterfaceIPV4Group | Current IPv4 protocol state of the computer's local network interface. | |
CameraStateNetworkGroupLocalInterfaceIPV4GroupSubnet | An IPv4 subnet that the local network interface is connected to. | |
CameraStateNetworkGroupLocalInterfaceIPV4GroupSubnetsList | List of IPv4 addresses and subnet masks that the interface is configured with. This list can contain multiple entries if the local network interface is configured with multiple IPv4 addresses. | |
CameraStateNetworkGroupLocalInterfacesList | List of the computer's local network interfaces that discovered the camera. In the most common scenario, with the camera connected to the computer with an Ethernet cable, this list will contain only the network interface for that Ethernet port. In more complex network scenarios the camera may be discovered by multiple interfaces, in which case this list will contain multiple network interfaces. However, when `CameraState::Status` is `connected`, only the one network interface that has the active TCP/IP connection to the camera will be listed. | |
CameraStateTemperatureGroup | Current temperature(s) | |
ComputeDevice | Contains information about the ComputeDevice used by Application. | |
Frame | A frame captured by a Zivid camera | |
Frame2D | A 2D frame captured by a Zivid camera | |
FrameInfo | Various information for a frame | |
FrameInfoMetricsGroup | Metrics related to this capture. | |
FrameInfoSoftwareVersionGroup | The version information for installed software at the time of image capture | |
FrameInfoSystemInfoGroup | Information about the system that captured this frame | |
FrameInfoSystemInfoGroupComputeDeviceGroup | Compute device | |
FrameInfoSystemInfoGroupCPUGroup | CPU | |
ImageNETPixelFormat | Abstract base class for all images | |
ImageBGRA | A BGRA image with 8 bits per channel | |
ImageRGBA | An RGBA image with 8 bits per channel | |
ImageSRGB | An RGBA image in the sRGB color space with 8 bits per channel | |
MarkerDictionary | Holds information about fiducial markers such as ArUco markers for detection | |
Matrix4x4 | A single-precision floating-point 4x4 matrix, stored in row-major order | |
NetworkConfiguration | Network configuration of a camera | |
NetworkConfigurationIPV4Group | IPv4 network configuration | |
PointCloud | Point cloud with x, y, z, RGB color and SNR laid out on a 2D grid | |
RangeT | Class describing a range of values for a given type T | |
Resolution | Class describing a resolution with a width and a height. | |
SceneConditions | A description of the ambient conditions detected by the camera. | |
SceneConditionsAmbientLightGroup | The ambient light detected by the camera. | |
Settings | Settings used for a 3D or 2D+3D capture with a Zivid camera (see the usage sketch after this table). | |
SettingsAcquisition | Settings for a single acquisition. | |
SettingsAcquisitionsList | List of Acquisition objects. | |
SettingsDiagnosticsGroup | When Diagnostics is enabled, additional diagnostic data is recorded during capture and included when saving the frame to a .zdf file. This enables Zivid's Customer Success team to provide better assistance and more thorough troubleshooting. Enabling Diagnostics increases the capture time and the RAM usage. It will also increase the size of the .zdf file. It is recommended to enable Diagnostics only when reporting issues to Zivid's support team. | |
SettingsProcessingGroup | Settings related to processing of a capture, including filters and color balance. | |
SettingsProcessingGroupColorGroup | Color settings. These settings are deprecated as of SDK 2.14 and will be removed in the next SDK major version (SDK 3.0). The recommended way to do a 2D+3D capture is to set a `Settings2D` object in the `Settings/Color` node. Tip: if you want to convert an existing settings.yml file to use the `Settings/Color` node, you can import the .yml file into Zivid Studio and then re-export it to .yml. | |
SettingsProcessingGroupColorGroupBalanceGroup | Color balance settings. | |
SettingsProcessingGroupColorGroupExperimentalGroup | Experimental color settings. These may be renamed, moved or deleted in the future. | |
SettingsProcessingGroupFiltersGroup | Filter settings. | |
SettingsProcessingGroupFiltersGroupClusterGroup | Removes floating points and isolated clusters from the point cloud. | |
SettingsProcessingGroupFiltersGroupClusterGroupRemovalGroup | Cluster removal filter. | |
SettingsProcessingGroupFiltersGroupExperimentalGroup | Experimental filters. These may be renamed, moved or deleted in the future. | |
SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroup | Corrects artifacts that appear when imaging scenes with large texture gradients or high contrast. These artifacts are caused by blurring in the lens. The filter works best when aperture values are chosen such that the camera has quite good focus. The filter also supports removing the points that experience a large correction. | |
SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupCorrectionGroup | Contrast distortion correction filter. | |
SettingsProcessingGroupFiltersGroupExperimentalGroupContrastDistortionGroupRemovalGroup | Contrast distortion removal filter. | |
SettingsProcessingGroupFiltersGroupHoleGroup | Contains filters that can be used to deal with holes in the point cloud. | |
SettingsProcessingGroupFiltersGroupHoleGroupRepairGroup | Fills in point cloud holes by interpolating remaining surrounding points. | |
SettingsProcessingGroupFiltersGroupNoiseGroup | Contains filters that can be used to clean up a noisy point cloud. | |
SettingsProcessingGroupFiltersGroupNoiseGroupRemovalGroup | Discard points with signal-to-noise ratio (SNR) values below a threshold. | |
SettingsProcessingGroupFiltersGroupNoiseGroupRepairGroup | Get better surface coverage by repairing regions of missing data due to noisy points. Consider disabling this filter if you require all points in your point cloud to be of high confidence. | |
SettingsProcessingGroupFiltersGroupNoiseGroupSuppressionGroup | Reduce noise and outliers in the point cloud. This filter can also be used to reduce ripple effects caused by interreflections. Consider disabling this filter if you need to distinguish very fine details and thus need to avoid any smoothing effects. | |
SettingsProcessingGroupFiltersGroupOutlierGroup | Contains a filter that removes points with large Euclidean distance to neighboring points. | |
SettingsProcessingGroupFiltersGroupOutlierGroupRemovalGroup | Discard point if Euclidean distance to neighboring points is above a threshold. | |
SettingsProcessingGroupFiltersGroupReflectionGroup | Contains a filter that removes points likely introduced by reflections (useful for shiny materials). | |
SettingsProcessingGroupFiltersGroupReflectionGroupRemovalGroup | Discard points likely introduced by reflections (useful for shiny materials). | |
SettingsProcessingGroupFiltersGroupSmoothingGroup | Smoothing filters. | |
SettingsProcessingGroupFiltersGroupSmoothingGroupGaussianGroup | Gaussian smoothing of the point cloud. | |
SettingsProcessingGroupResamplingGroup | Settings for changing the output resolution of the point cloud. | |
SettingsRegionOfInterestGroup | Removes points outside the region of interest. | |
SettingsRegionOfInterestGroupBoxGroup | Removes points outside the given three-dimensional box. Using this feature may significantly reduce acquisition and processing time, because one can avoid acquiring and processing data that is guaranteed to fall outside of the region of interest. The degree of speed-up depends on the size and shape of the box. Generally, a smaller box yields a greater speed-up. The box is defined by three points: O, A and B. These points define two vectors, OA that goes from PointO to PointA, and OB that goes from PointO to PointB. This gives four points O, A, B and (O + OA + OB), which together form a parallelogram in 3D. Two extents can be provided to extrude the parallelogram along the surface normal vector of the parallelogram plane. This creates a 3D volume (parallelepiped). The surface normal vector is defined by the cross product OA x OB. | |
SettingsRegionOfInterestGroupDepthGroup | Removes points that reside outside of a depth range, meaning that their Z coordinate falls above a given maximum or below a given minimum. | |
SettingsSamplingGroup | Sampling settings. | |
Settings2D | Settings used when capturing 2D images with a Zivid camera. | |
Settings2DAcquisition | Settings for one 2D acquisition. When capturing 2D HDR, all 2D acquisitions must have the same Aperture setting. Use Exposure Time or Gain to control the exposure instead. | |
Settings2DAcquisitionsList | List of acquisitions used for 2D capture. | |
Settings2DProcessingGroup | 2D processing settings. | |
Settings2DProcessingGroupColorGroup | Color settings. | |
Settings2DProcessingGroupColorGroupBalanceGroup | Color balance settings. | |
Settings2DProcessingGroupColorGroupExperimentalGroup | Experimental color settings. These may be renamed, moved or deleted in the future. | |
Settings2DSamplingGroup | Sampling settings. | |
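
The classes above are typically combined in a connect-capture-save flow. The following is a minimal sketch of that flow; the member names (`Application.ConnectCamera()`, `Camera.Capture()`, `Frame.PointCloud`, `Frame.Save()`) are assumed to match the public Zivid C# samples and should be verified against the installed SDK version.

```csharp
using Zivid.NET;

// Minimal capture sketch (member names assumed from the public Zivid C# samples).
var zivid = new Application();          // Application: manager class for Zivid cameras
var camera = zivid.ConnectCamera();     // Camera: interface to one Zivid camera

var settings = new Settings             // Settings: 3D / 2D+3D capture settings
{
    Acquisitions = { new Settings.Acquisition { Aperture = 5.66 } },
};

using (var frame = camera.Capture(settings))    // Frame: one captured frame
{
    var pointCloud = frame.PointCloud;          // PointCloud: organized XYZ + color + SNR
    frame.Save("result.zdf");                   // persist the frame to a .zdf file
}
```
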
Structure | Description | |
---|---|---|
ColorBGRA | Color with 8-bit blue, green, red and alpha channels | |
ColorRGBA | Color with 8-bit red, green, blue and alpha channels | |
ColorSRGB | Color with 8-bit red, green, blue and alpha channels in the sRGB color space | |
Duration | High-resolution time span. Valid range is 1 nanosecond to 292 years. See the usage sketch after this table. | |
PointXY | Point with two coordinates as float | |
PointXYZ | Point with three coordinates as float | |
PointXYZColorBGRA | Struct which contains XYZ point and BGRA color packed together | |
PointXYZColorRGBA | Struct which contains XYZ point and RGBA color packed together | |
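
A short sketch of how a couple of these value types show up in application code: Duration for time-based acquisition settings and PointXYZColorRGBA for organized point cloud data. The helper class and method names below are illustrative only, and `Duration.FromMicroseconds()` and `PointCloud.CopyPointsXYZColorsRGBA()` are assumed to match the public Zivid C# samples.

```csharp
using Zivid.NET;

static class ValueTypeSketch
{
    // Duration is used for time-based acquisition settings such as ExposureTime.
    public static Settings.Acquisition AcquisitionWithExposure()
    {
        return new Settings.Acquisition
        {
            Aperture = 5.66,
            ExposureTime = Duration.FromMicroseconds(10000), // 10 ms
        };
    }

    // Each element of the returned 2D array packs one PointXYZ and one ColorRGBA,
    // laid out on the same 2D grid as the point cloud.
    public static PointXYZColorRGBA[,] OrganizedData(PointCloud pointCloud)
    {
        return pointCloud.CopyPointsXYZColorsRGBA();
    }
}
```
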
Enumeration | Description | |
---|---|---|
CameraInfoModelOption | The model of the camera | |
CameraStateInaccessibleReasonOption | If the camera status is `inaccessible`, then this enum value will give you the reason. | |
CameraStateStatusOption | This enum describes the current status of this camera. The enum can have the following values: * `inaccessible`: The camera was discovered, but the SDK is not able to connect to this camera. This can be because the IP settings of the camera and the PC are not compatible, or because there are several cameras with the same IP connected to your PC. The `InaccessibleReason` enum will give you more details about why this camera is not accessible. The network configuration of the camera can be changed using the ZividNetworkCameraConfigurator CLI tool. See the knowledge base for more information. * `busy`: The camera is currently in use by another process. This can be a different process on the same PC, or on a different PC if the camera is shared on a network. * `applyingNetworkConfiguration`: The camera network configuration is being changed by the current process. * `firmwareUpdateRequired`: The camera is accessible, but requires a firmware update before you can connect to it. * `updatingFirmware`: The camera firmware is currently being updated in the current process. * `available`: The camera is available for use by the current process. This means that you can invoke camera.connect() on this camera. * `connecting`: The camera is currently connecting in the current process. * `connected`: The camera is connected in the current process. This means camera.connect() has successfully completed and you can capture using this camera. * `disconnecting`: The camera is currently disconnecting in the current process. When disconnection has completed, the camera will normally go back to the `available` state. * `disappeared`: The camera was found earlier, but it can no longer be found. The connection between the PC and the camera may be disrupted, or the camera may have lost power. When in `disappeared` state, the camera will not be returned from `Application::cameras()`. The camera will go back to one of the other states if the camera is later found again. | |
NetworkConfigurationIPV4GroupModeOption | DHCP or manual configuration | |
PointCloudDownsampling | Option for downsampling | |
SceneConditionsAmbientLightGroupFlickerClassificationOption | A classification of the detected ambient light flicker, if any. The values `grid50hz` and `grid60hz` indicate that ambient light matching a 50 Hz or 60 Hz power grid was detected in the scene. In those cases it is recommended to use Ambient Light Adaptation for better point cloud quality. The value `unknownFlicker` indicates that some significant time-varying ambient light was detected, but it did not match the characteristics of a standard power grid. | |
SettingsEngineOption | Set the Zivid Vision Engine to use (see the usage sketch after this table). The Phase Engine is the fastest choice in terms of both acquisition time and total capture time, and is a good compromise between quality and speed. The Phase Engine is recommended for objects that are diffuse, opaque, and slightly specular, and is suitable for applications in logistics such as parcel induction. The Stripe Engine is built for exceptional point cloud quality in scenes with highly specular reflective objects. This makes the engine suitable for applications such as factory automation, manufacturing, and bin picking. Additional acquisition and processing time are required for the Stripe Engine. The Omni Engine is built for exceptional point cloud quality on all scenes, including scenes with extremely specular reflective objects, as well as transparent objects. This makes the Omni Engine suitable for applications such as piece picking. Like the Stripe Engine, it trades off speed for quality. The Omni Engine is only available for Zivid 2+. The Sage Engine is built for use cases that require all points to be correct/accurate with particularly high confidence. This can be very suitable for eliminating problems such as reflection artifacts. This validation comes at the cost of speed, and sometimes a reduced number of valid points due to the removal of low-confidence data. The Sage Engine is only available for Zivid 2+R. The Sage Engine is experimental, meaning it may be changed or removed in the future. | |
SettingsProcessingGroupColorGroupExperimentalGroupModeOption | This setting controls how the color image is computed. `automatic` is the default option. `automatic` is identical to `useFirstAcquisition` for single-acquisition captures and for multi-acquisition captures when all the acquisitions have identical (duplicated) acquisition settings. `automatic` is identical to `toneMapping` for multi-acquisition HDR captures with differing acquisition settings. `useFirstAcquisition` uses the color data acquired from the first acquisition provided. If the capture consists of more than one acquisition, then the remaining acquisitions are not used for the color image. No tone mapping is performed. This option provides the most control of the color image, and the color values will be consistent over repeated captures with the same settings. `toneMapping` uses all the acquisitions to create one merged and normalized color image. For HDR captures the dynamic range of the captured images is usually higher than the 8-bit color image range. `toneMapping` will map the HDR color data to the 8-bit color output range by applying a scaling factor. `toneMapping` can also be used for single-acquisition captures to normalize the captured color image to the full 8-bit output. Note that when using `toneMapping` mode the color values can be inconsistent over repeated captures if you move, add or remove objects in the scene. For the most control over the colors, select the `useFirstAcquisition` mode. This setting is deprecated as of SDK 2.14 and will be removed in the next SDK major version (SDK 3.0). The recommended way to do a 2D+3D capture is to set a `Settings2D` object in the `Settings/Color` node. Tip: if you want to convert an existing settings.yml file to use the `Settings/Color` node, you can import the .yml file into Zivid Studio and then re-export it to .yml. | |
SettingsProcessingGroupFiltersGroupReflectionGroupRemovalGroupModeOption | The reflection filter has two modes: Local and Global. Local mode preserves more 3D data on thinner objects, generally removes more reflection artifacts, and processes faster than the Global filter. The Global filter is generally better at removing outlier points in the point cloud. It is advised to use the Outlier filter and Cluster filter together with the Local Reflection filter. Global mode was introduced in SDK 1.0 and Local mode was introduced in SDK 2.7. | |
SettingsProcessingGroupResamplingGroupModeOption | Setting for upsampling or downsampling the point cloud data by some factor. This operation is performed after all other processing has been completed. Downsampling is used to reduce the number of points in the point cloud. This is done by combining each 2x2 or 4x4 group of pixels in the original point cloud into one pixel in a new point cloud. This downsample functionality is identical to the downsample method on the PointCloud class. The averaging process reduces noise in the point cloud, but it will not improve capture speed. To improve capture speed, consider using the subsampling modes found in Settings/Sampling/Pixel. Upsampling is used to increase the number of points in the point cloud. It is not possible to upsample beyond the full resolution of the camera, so upsampling may only be used in combination with the subsampling modes found in Settings/Sampling/Pixel. For example, one may combine blueSubsample2x2 with upsample2x2 to obtain a point cloud that matches a full resolution 2D capture, while retaining the speed benefits of capturing the point cloud with blueSubsample2x2. Upsampling is achieved by expanding pixels in the original point cloud into groups of 2x2 or 4x4 pixels in a new point cloud. Where possible, values are filled at the new points based on an interpolation of the surrounding original points. The points in the new point cloud that correspond to points in the original point cloud are left unchanged. Note that upsampling will lead to four (upsample2x2) or sixteen (upsample4x4) times as many pixels in the point cloud compared to no upsampling, so users should be aware of increased computational cost related to copying and analyzing this data. | |
SettingsSamplingGroupColorOption | Choose how to sample colors for the point cloud. The `rgb` option gives a 2D image with full colors. The `grayscale` option gives a grayscale (r=g=b) 2D image, which can be acquired faster than full colors. The `disabled` option gives no colors and can allow for even faster captures. The `grayscale` option is not available on all camera models. This setting is deprecated as of SDK 2.14 and will be removed in the next SDK major version (SDK 3.0). The recommended way to do a 2D+3D capture is to set a `Settings2D` object in the `Settings/Color` field. Tip: if you want to convert an existing settings.yml file to the recommended API, you can import the .yml file into Zivid Studio and then re-export it to .yml. | |
SettingsSamplingGroupPixelOption | For Zivid 2/2+, this setting controls whether to read out the full image sensor and use white projector light, or to subsample pixels for specific color channels with corresponding projector light. Picking a specific color channel can help reduce noise and effects of ambient light; projecting blue light is recommended. For Zivid 2+R, the user does not have to set the projection color and should only consider whether to scale the resolution `by2x2` or `by4x4`. Both of these modes will boost the signal strength by about 4x compared to `all`, so the user should consider a corresponding reduction in exposure time. Sampling at a decreased resolution decreases capture time, as less data will be captured and processed. | |
Settings2DProcessingGroupColorGroupExperimentalGroupModeOption | This setting controls how the color image is computed. `automatic` is the default option. It performs tone mapping for HDR captures, but not for single-acquisition captures. Use this mode with a single acquisition if you want to have the most control over the colors in the image. `toneMapping` uses all the acquisitions to create one merged and normalized color image. For HDR captures the dynamic range of the captured images is usually higher than the 8-bit color image range. `toneMapping` will map the HDR color data to the 8-bit color output range by applying a scaling factor. `toneMapping` can also be used for single-acquisition captures to normalize the captured color image to the full 8-bit output. Note that when using `toneMapping` mode the color values can be inconsistent over repeated captures if you move, add or remove objects in the scene. For the most control over the colors in the single-acquisition case, select the `automatic` mode. | |
Settings2DSamplingGroupColorOption | Choose how to sample colors for the 2D image. The `rgb` option gives an image with full colors. The `grayscale` option gives a grayscale (r=g=b) image, which can be acquired faster than full colors. The `grayscale` option is not available on all camera models. | |
Settings2DSamplingGroupPixelOption | Set the pixel sampling to use for the 2D capture. This setting defines how the camera sensor is sampled. When doing 2D+3D capture, picking the same value that is used for 3D is generally recommended. | |
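
From C#, the option enums above are nested under their parent settings types. The sketch below shows one way they might be applied; the nested type and value names (`Settings.EngineOption.Stripe`, `Settings.SamplingGroup.PixelOption.BlueSubsample2x2`, `CameraState.StatusOption.Available`) are inferred from the flattened names in this table and from the public Zivid C# samples, and should be treated as assumptions to check against the installed SDK.

```csharp
using Zivid.NET;

// Engine and pixel-sampling choices expressed through the option enums
// (enum value names such as Stripe, BlueSubsample2x2 and Available are assumptions).
var settings = new Settings
{
    Engine = Settings.EngineOption.Stripe,    // SettingsEngineOption: pick the vision engine
    Sampling = { Pixel = Settings.SamplingGroup.PixelOption.BlueSubsample2x2 },
    Acquisitions = { new Settings.Acquisition { Aperture = 5.66 } },
};

var zivid = new Application();
foreach (var camera in zivid.Cameras())
{
    // CameraStateStatusOption: only cameras reported as available can be connected to.
    if (camera.State.Status == CameraState.StatusOption.Available)
    {
        camera.Connect();
        using (var frame = camera.Capture(settings))
        {
            frame.Save("capture.zdf");
        }
        break; // capture from the first available camera only
    }
}
```
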