visionx Namespace Reference

ArmarX headers. More...

Namespaces

 armem
 
 armem_images
 
 components
 
 ConvexPolygonCalculations
 
 depthfilter
 
 depthfromstereo
 
 exceptions
 
 imrec
 
 imrecman
 
 opstress
 
 playback
 
 roboception
 
 tools
 
 utils
 
 voxelgrid
 
 yolo
 

Classes

class  ArMarkerLocalizer
 ArMarkerLocalizer uses CTexturedRecognition of IVTRecognition in order to recognize and localize objects. More...
 
class  ArMarkerLocalizerOpenCV
 ArMarkerLocalizerOpenCV uses CTexturedRecognition of IVTRecognition in order to recognize and localize objects. More...
 
class  ArMarkerLocalizerOpenCVPropertyDefinitions
 
class  ArMarkerLocalizerPropertyDefinitions
 
class  ArMemToImageProvider
 Brief description of class ArMemToImageProvider. More...
 
class  ArMemToPointCloudProvider
 
interface  ArMemToPointCloudProviderInterface
 
class  AzureKinectIRImageProvider
 Brief description of class AzureKinectIRImageProvider. More...
 
class  AzureKinectIRImageProviderPropertyDefinitions
 
class  AzureKinectPointCloudProvider
 Brief description of class AzureKinectPointCloudProvider. More...
 
class  AzureKinectPointCloudProviderPropertyDefinitions
 
class  BigBowlLocalization
 BigBowlLocalization uses the CVisualTargetLocator of IVT in order to recognize and localize the marker balls at the hands of the robot. More...
 
class  BigBowlLocalizationPropertyDefinitions
 
class  BlobRecognition
 BlobRecognition uses CSegmentableRecognition from IVT to recognize and localize single-colored objects based on their color and shape. More...
 
class  BlobRecognitionPropertyDefinitions
 
class  CalibrationCreator
 CalibrationCreator determines if the robot hand is colliding with another object, causing it to move. More...
 
class  CalibrationCreator2
 
class  CalibrationCreatorPropertyDefinitions
 
class  CapturingImageProvider
 The CapturingImageProvider provides a callback function to trigger the capturing of images with different synchronization modes. More...
 
class  CapturingImageProviderPropertyDefinitions
 
class  CapturingPointCloudProvider
 The CapturingPointCloudProvider provides a callback function to trigger the capturing of point clouds with different synchronization modes. More...
 
class  CapturingPointCloudProviderPropertyDefinitions
 
class  CHandLocalisation
 
class  CHandModelV2
 
class  CHandModelVisualizer
 
class  ChannelConfigWidget
 
class  CMoveMasterModel
 
class  CoinPointCloud
 
struct  ColorFormat
 
class  ColorMarkerObjectLocalizer
 ColorMarkerObjectLocalizer uses CSegmentableRecognition from IVT to recognize and localize single-colored objects based on their color and shape. More...
 
class  ColorMarkerObjectLocalizerPropertyDefinitions
 
class  CParticleFilterFrameworkParallelized
 
class  CParticleFilterRobotHandLocalisation
 
class  DeepFaceRecognition
 Brief description of class DeepFaceRecognition. More...
 
class  DeepFaceRecognitionPropertyDefinitions
 
struct  DetailedPFRating
 
class  Display
 
class  DisplayWidget
 
class  DummyArMarkerLocalizer
 Brief description of class DummyArMarkerLocalizer. More...
 
class  DummyArMarkerLocalizerPropertyDefinitions
 
class  FakePointCloudProvider
 A brief description. More...
 
class  FakePointCloudProviderPropertyDefinitions
 
class  FeatureLearningObjectChooserWidget
 
class  FeatureLearningSaveToMemoryWidget
 
class  FitKnownRectangleRotationMaxPoints
 Finds the rotation of a rectangle of known size at a given position that contains the most points from a point cloud. More...
 
class  FPSCounter
 The FPSCounter class provides methods for calculating the frames per second (FPS) count in periodic tasks. More...
 
class  FramedPointCloud
 A point cloud which keeps track of its reference coordinate frame and allows changing frames using armarx::FramedPose. More...
 
class  GuiPlugin
 GuiPlugin brief description. More...
 
class  HandLocalisationThread
 
class  HandMarkerLocalization
 HandMarkerLocalization uses the CVisualTargetLocator of IVT in order to recognize and localize the marker balls at the hands of the robot. More...
 
class  HandMarkerLocalizationPropertyDefinitions
 
class  HandModeliCub
 
class  ICP
 
class  IEEE1394ImageProvider
 IEEE1394 image provider captures images from one or more cameras and supports the following image transmission formats: More...
 
class  IEEE1394PropertyDefinitions
 
class  ImageBuffer
 Handler of image buffer for ImageProcessors. More...
 
class  ImageMonitorGuiPlugin
 ImageMonitorGuiPlugin brief description. More...
 
class  ImageMonitorProperties
 ImageMonitorProperties brief one line description. More...
 
class  ImageMonitorPropertiesWidget
 
class  ImageMonitorStatisticsWidget
 
class  ImageMonitorWidget
 
class  ImageMonitorWidgetController
 
class  ImageProcessor
 The ImageProcessor class provides an interface for access to ImageProviders via Ice and shared memory. More...
 
class  ImageProcessorPropertyDefinitions
 
class  ImageProvider
 ImageProvider abstract class defines a component which provides images via Ice or shared memory. More...
 
class  ImageProviderConfigWidget
 
class  ImageProviderInfo
 
class  ImageRecorderGuiPlugin
 
class  ImageRecorderWidgetController
 
interface  ImageRecordingManagerInterface
 
class  ImageSequenceProvider
 Image sequence provider loads images from a device and broadcasts notifications after loading at a specified frame rate. More...
 
class  ImageSequenceProviderPropertyDefinitions
 
class  ImageToArMem
 
interface  ImageToArMemInterface
 
class  ImageTransferStats
 The ImageTransferStats class provides information on the connection between ImageProvider and ImageProcessor. More...
 
class  ImageViewerArea
 
class  IntelRealSenseProvider
 Brief description of class IntelRealSenseProvider. More...
 
class  IntelRealSenseProviderPropertyDefinitions
 
class  KinectAndCameraCalibration
 KinectAndCameraCalibration executes the calibration for the left camera of the stereo system and the Kinect RGB camera. More...
 
class  KinectAndCameraCalibrationObserver
 
class  KinectAndCameraCalibrationPropertyDefinitions
 
class  KinectV1PointCloudProvider
 A brief description. More...
 
class  KinectV1PointCloudProviderPropertyDefinitions
 
class  KinectV2PointCloudProvider
 A brief description. More...
 
class  KinectV2PointCloudProviderPropertyDefinitions
 
class  LabeledPointCloudMerger
 Brief description of class LabeledPointCloudMerger. More...
 
class  LabeledPointCloudMergerPropertyDefinitions
 Property definitions of LabeledPointCloudMerger. More...
 
class  LegacyRGBOpenPoseEstimation
 
struct  LocalResultImageProvider
 
class  Manager
 
class  MaskRCNNPointCloudObjectLocalizer
 http://pointclouds.org/documentation/tutorials/global_hypothesis_verification.php More...
 
class  MaskRCNNPointCloudObjectLocalizerPropertyDefinitions
 
class  MergedLabeledPointCloud
 Merges several labeled or unlabeled point clouds into one labeled point cloud. More...
 
class  ObjectLearningByPushing
 ObjectLearningByPushing is the vision part of the approach for interactive object segmentation. More...
 
class  ObjectLearningByPushingObserver
 
class  ObjectLearningByPushingPropertyDefinitions
 
class  ObjectLocalizerProcessor
 ObjectLocalizerProcessor. More...
 
class  ObjectLocalizerProcessorJob
 ObjectLocalizerProcessorJob encapsulates the object localization job. More...
 
class  ObjectLocalizerProcessorPropertyDefinitions
 
class  ObjectShapeClassification
 
class  ObjectShapeClassificationPropertyDefinitions
 
class  OIFwdKinematicsInterface
 
class  OLPEvaluation
 Brief description of class OLPEvaluation. More...
 
class  OLPEvaluationPropertyDefinitions
 
class  OpenNIImageProvider
 OpenNI image provider captures 3D points and/or color images from a single device and supports the following image transmission formats: More...
 
class  OpenNIPointCloudProvider
 A brief description. More...
 
class  OpenNIPointCloudProviderPropertyDefinitions
 
class  OpenNIPropertyDefinitions
 
class  OpenPose3DDepthImageConverter
 
class  PointCloudAndImageProcessor
 The PointCloudAndImageProcessor class provides an interface for access to PointCloudProviders and ImageProviders via Ice and shared memory. More...
 
class  PointCloudAndImageProvider
 
class  PointCloudFilter
 Brief description of class PointCloudFilter. More...
 
class  PointCloudFilterPropertyDefinitions
 
class  PointCloudObjectLocalizer
 http://pointclouds.org/documentation/tutorials/global_hypothesis_verification.php More...
 
class  PointCloudObjectLocalizerPropertyDefinitions
 
class  PointCloudProcessor
 The PointCloudProcessor class provides an interface for access to PointCloudProviders via Ice and shared memory. More...
 
class  PointCloudProcessorPropertyDefinitions
 Properties of PointCloudProcessor. More...
 
class  PointCloudProvider
 PointCloudProvider abstract class defines a component which provides point clouds via Ice or shared memory. More...
 
class  PointCloudProviderInfo
 
class  PointCloudProviderPropertyDefinitions
 
class  PointCloudSegmenter
 A brief description. More...
 
class  PointCloudSegmenterPropertyDefinitions
 
struct  PointCloudSegmenterTab
 
class  PointCloudSegmentsTable
 
class  PointCloudToArMem
 
interface  PointCloudToArMemInterface
 
class  PointCloudToArViz
 Brief description of class PointCloudToArViz. More...
 
class  PointCloudToArVizPropertyDefinitions
 Property definitions of PointCloudToArViz. More...
 
class  PointCloudTransferStats
 The PointCloudTransferStats class provides information on the connection between PointCloudProvider and PointCloudProcessor. More...
 
class  PointCloudVisualization
 Brief description of class PointCloudVisualization. More...
 
class  PointCloudVisualizationHandler
 
class  PointCloudVisualizationPropertyDefinitions
 
struct  Polygon2D
 
class  RCPointCloudProvider
 Brief description of class RCPointCloudProvider. More...
 
class  RCPointCloudProviderPropertyDefinitions
 
class  ResultImageAndPointCloudProvider
 
class  ResultImageProvider
 The ResultImageProvider is used by the ImageProcessor to stream result images to any other processor (e.g. More...
 
class  ResultPointCloudProvider
 The ResultPointCloudProvider is used by the PointCloudProcessor to stream result PointClouds to any other processor (e.g. More...
 
class  ResultPointCloudProviderPropertyDefinitions
 
class  RGBDHandLocalizer
 Brief description of class RGBDHandLocalizer. More...
 
class  RGBDHandLocalizerPropertyDefinitions
 
class  RGBDOpenPoseEstimation
 
class  RoboceptionUser
 This class contains common implementation for RCImageProvider and RCPointCloudProvider. More...
 
class  RobotHandLocalizationWithFingertips
 RobotHandLocalizationWithFingertips localizes the robot hand using the marker ball and the finger tips. More...
 
class  RobotHandLocalizationWithFingertipsPropertyDefinitions
 
class  SaveDialog
 
class  SegmentAABBShapesProvider
 Brief description of class SegmentAABBShapesProvider. More...
 
class  SegmentAABBShapesProviderPropertyDefinitions
 Property definitions of SegmentAABBShapesProvider. More...
 
class  SegmentableObjectRecognition
 SegmentableObjectRecognition uses CSegmentableRecognition from IVT to recognize and localize single-colored objects based on their color and shape. More...
 
class  SegmentableObjectRecognitionPropertyDefinitions
 
struct  SegmentableTemplate
 
struct  SegmentableTemplateEntry
 
struct  SegmentableTemplateHeader
 
class  SegmentableTemplateRecognition
 SegmentableTemplateRecognition uses CSegmentableRecognition from IVT to recognize and localize single-colored objects based on their color and shape. More...
 
class  SegmentableTemplateRecognitionPropertyDefinitions
 
class  SegmentRansacShapeExtractor
 Brief description of class SegmentRansacShapeExtractor. More...
 
class  SegmentRansacShapeExtractorPropertyDefinitions
 Property definitions of SegmentRansacShapeExtractor. More...
 
class  SegmentSpatialRelations
 Brief description of class SegmentSpatialRelations. More...
 
class  SegmentSpatialRelationsPropertyDefinitions
 Property definitions of SegmentSpatialRelations. More...
 
class  Settings
 
class  SimpleEpisodicMemoryImageConnector
 
class  SimpleEpisodicMemoryImageConnectorPropertyDefinitions
 
class  SimpleEpisodicMemoryOpenPoseEstimationConnector
 
class  SimpleEpisodicMemorySemanticGraphConnector
 
class  StereoCalibrationProviderPropertyDefinitions
 
class  StereoCameraProvider
 Stereo camera provider is based on IEEE1394 image provider with the restriction that the number of specified cameras must be two. More...
 
class  StereoImagePointCloudProvider
 A brief description. More...
 
class  StereoImagePointCloudProviderPropertyDefinitions
 
class  TexturedObjectRecognition
 TexturedObjectRecognition uses CTexturedRecognition of IVTRecognition in order to recognize and localize objects. More...
 
class  TexturedObjectRecognitionPropertyDefinitions
 
class  UCLObjectRecognition
 Brief description of class UCLObjectRecognition. More...
 
class  UCLObjectRecognitionPropertyDefinitions
 
class  UserAssistedSegmenter
 Brief description of class UserAssistedSegmenter. More...
 
class  UserAssistedSegmenterConfigDialog
 
class  UserAssistedSegmenterGuiWidgetController
 UserAssistedSegmenterGuiWidgetController brief one line description. More...
 
class  UserAssistedSegmenterPropertyDefinitions
 UserAssistedSegmenterPropertyDefinitions. More...
 
class  VideoFileImageProvider
 
class  VideoFileImageProviderPropertyDefinitions
 
class  VisionXApplication
 Base Class for all ArmarX applications. More...
 
class  VisualContactDetection
 VisualContactDetection determines if the robot hand is colliding with another object, causing it to move. More...
 
class  VisualContactDetectionObserver
 
class  VisualContactDetectionPropertyDefinitions
 
class  VoxelGridMappingProvider
 Brief description of class VoxelGridMappingProvider. More...
 
class  VoxelGridMappingProviderPropertyDefinitions
 
class  VoxelLine
 A line of voxels, from a start voxel to an end voxel. More...
 
class  WebCamImageProvider
 
class  WebCamImageProviderPropertyDefinitions
 
class  WidgetController
 WidgetController brief one line description. More...
 
class  X11ScreenCaptureImageProvider
 

Typedefs

using CByteImagePtr = std::shared_ptr< CByteImage >
 
using CByteImageUPtr = std::unique_ptr< CByteImage >
 
using CByteImageUPtrVec = std::vector< std::unique_ptr< CByteImage > >
 
using CColorParameterSetPtr = std::shared_ptr< CColorParameterSet >
 
using CObjectFinderStereoPtr = std::shared_ptr< CObjectFinderStereo >
 
using CSegmentableRecognitionPtr = std::shared_ptr< CSegmentableRecognition >
 
using DebugDrawerPointCloudPtr = std::shared_ptr< armarx::DebugDrawer24BitColoredPointCloud >
 
using DepthFilter = visionx::depthfilter::Component
 
using ImageContainer = std::vector< CByteImagePtr >
 
using ImageProcessorPtr = IceInternal::Handle< ImageProcessor >
 Shared pointer for convenience. More...
 
using ImageRecordingManager = imrecman::Component
 
using LabelDensityVisualizer = voxelgrid::LabelDensity::Visualizer
 
using LabelDensityVoxel = voxelgrid::LabelDensity::Voxel
 
using LabelDensityVoxelGrid = voxelgrid::LabelDensity::VoxelGrid
 
using LabelOccupancyVisualizer = voxelgrid::LabelOccupancy::Visualizer
 
using LabelOccupancyVoxel = voxelgrid::LabelOccupancy::Voxel
 
using LabelOccupancyVoxelGrid = voxelgrid::LabelOccupancy::VoxelGrid
 
using OpenPoseStressTest = opstress::Component
 
using Point = Eigen::Vector3f
 
using PointCloud = pcl::PointCloud< Point >
 
using PointCloudProcessorPtr = IceInternal::Handle< PointCloudProcessor >
 Shared pointer for convenience. More...
 
using PointCloudVisualizationCollection = std::map< visionx::PointContentType, DebugDrawerPointCloudPtr >
 
using PointCloudVisualizationHandlerPtr = std::shared_ptr< PointCloudVisualizationHandler >
 
using PointD = pcl::FPFHSignature33
 
using PointL = pcl::PointXYZRGBL
 
using PointLPtr = pcl::PointCloud< PointL >::Ptr
 
using PointO = pcl::PointXYZRGBA
 
using PointOPtr = pcl::PointCloud< PointO >::Ptr
 
using PointRGBL = pcl::PointXYZRGBL
 
using PointRGBLPtr = pcl::PointCloud< PointRGBL >::Ptr
 
using Points = std::vector< Point >
 
using PointT = pcl::PointXYZRGBA
 
using TaggedPoints = std::pair< std::string, Points >
 
template<typename VoxelT >
using VoxelGrid = voxelgrid::VoxelGrid< VoxelT >
 
using VoxelGridStructure = voxelgrid::VoxelGridStructure
 
using Yolo = visionx::yolo::Component
 

Enumerations

enum  KinectProcessorType {
  CPU, OPENCL, OPENCLKDE, OPENGL,
  CUDA, CUDAKDE
}
 
enum  ViewMode { Full, StatusOnly }
 

Functions

Eigen::Hyperplane3f alignPlaneNormal (Eigen::Hyperplane3f plane, Eigen::Vector3f normal)
 Flip plane if its normal does not point towards normal. More...
 
bool comparePointClustersDescending (const pcl::PointIndices &lhs, const pcl::PointIndices &rhs)
 
void convertDepthInMetersToWeirdArmarX (const cv::Mat &input, CByteImage *output)
 
void convertIvtRgbToCvGray (const CByteImage &ivt_rgb, cv::Mat &cv_gray)
 
cv::Mat convertWeirdArmarXToDepthInMeters (CByteImage const *input)
 Convert ArmarX encoding of depth information in an RGB image to depth in meters. More...
 
void convertWeirdArmarXToDepthInMeters (const uint8_t *input, cv::Mat &output)
 Like convertWeirdArmarXToDepthInMeters(CByteImage const* input), but writes into a pre-initialized cv::Mat with the correct size and type. More...
 
void copyCvDepthToGrayscaleIVT (cv::Mat const &inputInMeters, CByteImage *output)
 Copies a floating point depth image (in meters) to an RGB depth representation. More...
 
void copyCvMatToIVT (cv::Mat const &input, CByteImage *output)
 Copies the contents of an OpenCV matrix to an IVT CByteImage. More...
 
template<class PointT >
void copyPointCloudWithoutIndices (const pcl::PointCloud< PointT > &input, const std::vector< int > &indices, pcl::PointCloud< PointT > &output)
 
bool eq (float lhs, float rhs, float tol=1e-6f)
 
void evaluateOLP (OLPEvaluation *p, bool testRecognition)
 
void ExtractAnglesFromRotationMatrix (const Mat3d &mat, Vec3d &angles)
 
void fillCameraMatrix (const CCalibration::CCameraParameters &ivtCameraParameters, cv::Mat &output)
 Fill a properly initialized camera matrix with the given camera parameters. More...
 
void fillCameraMatrix (const visionx::CameraParameters &cameraParams, cv::Mat &output)
 Fill a properly initialized camera matrix with the given camera parameters. More...
 
template<class ColorT >
void fillColorArray (float array[3], const ColorT &color)
 
template<class PointT >
void filterNaNs (pcl::PointCloud< PointT > &pc)
 
Eigen::Hyperplane3f fitPlaneSVD (const pcl::PointCloud< pcl::PointXYZRGBA > &cloud, const pcl::PointIndices &indices, Eigen::Vector3f center=Eigen::Vector3f::Zero())
 
Eigen::Hyperplane3f fitPlaneSVD (const pcl::PointCloud< pcl::PointXYZRGBA > &cloud, Eigen::Vector3f center=Eigen::Vector3f::Zero())
 
template<class PointCloudPtrT >
 FramedPointCloud (PointCloudPtrT pointCloudPtr) -> FramedPointCloud< typename PointCloudPtrT::element_type::PointType >
 
template<class PointCloudPtrT >
 FramedPointCloud (PointCloudPtrT pointCloudPtr, const std::string &frameName) -> FramedPointCloud< typename PointCloudPtrT::element_type::PointType >
 
pcl::PointIndicesPtr getPlaneInliers (const pcl::PointCloud< pcl::PointXYZRGBA > &cloud, Eigen::Hyperplane3f plane, float maxDistance)
 
template<class PointT >
pcl::PointCloud< PointT >::Ptr getPointCloudWithIndices (const pcl::PointCloud< PointT > &input, const pcl::PointIndices &indices)
 
template<class PointT >
pcl::PointCloud< PointT >::Ptr getPointCloudWithIndices (const pcl::PointCloud< PointT > &input, const std::vector< int > &indices)
 
template<class PointT >
pcl::PointCloud< PointT >::Ptr getPointCloudWithoutIndices (const pcl::PointCloud< PointT > &input, const std::vector< int > &indices)
 
float hsvDistance (const Eigen::Vector3f &lhs, const Eigen::Vector3f &rhs, const Eigen::Vector3f &hsvWeights=Eigen::Vector3f::Ones())
 
float hsvDistance (const pcl::PointXYZHSV &lhs, const pcl::PointXYZHSV &rhs, const Eigen::Vector3f &hsvWeights=Eigen::Vector3f::Ones())
 
Eigen::Vector3f hsvMean (const pcl::PointCloud< pcl::PointXYZHSV > &hsvCloud, const std::vector< int > &indices)
 
template<class PointRGB >
Eigen::Vector3f hsvMean (const pcl::PointCloud< PointRGB > &rgbCloud, const std::vector< int > &indices)
 
template<class CameraParams >
cv::Mat makeCameraMatrix (const CameraParams &cameraParams, int rows=3, int cols=3, int type=CV_64F)
 Builds a camera matrix. More...
 
cv::Mat makeCameraMatrix (const visionx::CameraParameters &cameraParams)
 
bool operator!= (const ColoredPoint3D &lhs, const ColoredPoint3D &rhs)
 
bool operator!= (const LabeledPoint3D &lhs, const LabeledPoint3D &rhs)
 
bool operator!= (const Point3D &lhs, const Point3D &rhs)
 
std::ostream & operator<< (std::ostream &os, const ColoredPoint3D &rhs)
 
std::ostream & operator<< (std::ostream &os, const LabeledPoint3D &rhs)
 
std::ostream & operator<< (std::ostream &os, const Point3D &rhs)
 
std::ostream & operator<< (std::ostream &os, const RGBA &rhs)
 
Eigen::Vector3f pointCloudMedian (const pcl::PointCloud< pcl::PointXYZRGBA > &cloud, const pcl::PointIndices &indices={})
 
void sortPointClustersDescending (std::vector< pcl::PointIndices > &clusterIndices)
 
int toByte (float f)
 
template<class PointT >
void transformPointCloudFromTo (const pcl::PointCloud< PointT > &sourceCloud, pcl::PointCloud< PointT > &targetCloud, const std::string &sourceFrame, const std::string &targetFrame, const VirtualRobot::RobotPtr &robot)
 Transform a point cloud from a robot frame to another. More...
 

Variables

const double DSHT_default_allowed_deviation [12]
 
const std::map< std::string, std::pair< unsigned int, std::string > > MPII_TO_MPI
 

Detailed Description

ArmarX headers.

This package provides an API to read recordings frame-by-frame of any type and obtain their metadata.

Current available formats are:

  • Video formats (.avi, .mp4; provided by OpenCV2) via VideoPlaybackStrategy
  • Loose image collections (any image format) via ImageSequencePlaybackStrategy
  • Chunked image collections (ImageMonitor recording output) via ChunkedImageSequencePlaybackStrategy

Required header:

// Include the required header. NOTE: Other header files are not part of the public API and may be subject to change
#include <VisionX/libraries/playback.h>

Example (Play back an AVI recording):

// Initialise replay
const std::filesystem::path path = "/home/anon/recording_2010-10-10.avi";
visionx::imrec::Playback rep = visionx::imrec::newPlayback(path);
// Read frames
while (rep->hasNextFrame())
{
    cv::Mat frame;
    rep->getNextFrame(frame); // ::CByteImage or a raw unsigned char pointer are supported as well, albeit discouraged
    // ... process/display/manipulate frame ...
}
// Stop playback
rep->stopPlayback();
// {rep} is now uninitialised and should be destroyed

See also
playback

Author
Christian R. G. Dreher christian.dreher@student.kit.edu
Date
2018

Typedef Documentation

◆ CByteImagePtr

using CByteImagePtr = std::shared_ptr<CByteImage>

Definition at line 109 of file ImageMonitorWidgetController.h.

◆ CByteImageUPtr

using CByteImageUPtr = std::unique_ptr<CByteImage>

Definition at line 57 of file ImageProvider.h.

◆ CByteImageUPtrVec

using CByteImageUPtrVec = std::vector<std::unique_ptr<CByteImage> >

Definition at line 58 of file ImageProvider.h.

◆ CColorParameterSetPtr

typedef std::shared_ptr< CColorParameterSet > CColorParameterSetPtr

Definition at line 44 of file BlobRecognition.h.

◆ CObjectFinderStereoPtr

using CObjectFinderStereoPtr = std::shared_ptr<CObjectFinderStereo>

Definition at line 43 of file BlobRecognition.h.

◆ CSegmentableRecognitionPtr

typedef std::shared_ptr< CSegmentableRecognition > CSegmentableRecognitionPtr

Definition at line 43 of file ColorMarkerObjectLocalizer.h.

◆ DebugDrawerPointCloudPtr

using DebugDrawerPointCloudPtr = std::shared_ptr<armarx::DebugDrawer24BitColoredPointCloud>

Definition at line 79 of file PointCloudVisualization.h.

◆ DepthFilter

Definition at line 113 of file Component.h.

◆ ImageContainer

using ImageContainer = std::vector<CByteImagePtr>

Definition at line 110 of file ImageMonitorWidgetController.h.

◆ ImageProcessorPtr

Shared pointer for convenience.

Definition at line 428 of file ImageProcessor.h.

◆ ImageRecordingManager

◆ LabelDensityVisualizer

◆ LabelDensityVoxel

◆ LabelDensityVoxelGrid

◆ LabelOccupancyVisualizer

◆ LabelOccupancyVoxel

◆ LabelOccupancyVoxelGrid

◆ OpenPoseStressTest

Definition at line 82 of file OpenPoseStressTest.h.

◆ Point

typedef pcl::PointXYZRGBA Point

Definition at line 69 of file ObjectShapeClassification.h.

◆ PointCloud

using PointCloud = pcl::PointCloud<Point>

Definition at line 54 of file RCPointCloudProvider.cpp.

◆ PointCloudProcessorPtr

Shared pointer for convenience.

Definition at line 630 of file PointCloudProcessor.h.

◆ PointCloudVisualizationCollection

using PointCloudVisualizationCollection = std::map<visionx::PointContentType, DebugDrawerPointCloudPtr>

Definition at line 80 of file PointCloudVisualization.h.

◆ PointCloudVisualizationHandlerPtr

◆ PointD

typedef pcl::FPFHSignature33 PointD

Definition at line 81 of file MaskRCNNPointCloudObjectLocalizer.h.

◆ PointL

typedef pcl::PointXYZL PointL

Definition at line 80 of file MaskRCNNPointCloudObjectLocalizer.h.

◆ PointLPtr

using PointLPtr = pcl::PointCloud<PointL>::Ptr

Definition at line 66 of file PointCloudSegmenter.h.

◆ PointO

using PointO = pcl::PointXYZRGBA

Definition at line 63 of file PointCloudSegmenter.h.

◆ PointOPtr

using PointOPtr = pcl::PointCloud<PointO>::Ptr

Definition at line 64 of file PointCloudSegmenter.h.

◆ PointRGBL

using PointRGBL = pcl::PointXYZRGBL

Definition at line 67 of file PointCloudSegmenter.h.

◆ PointRGBLPtr

using PointRGBLPtr = pcl::PointCloud<PointRGBL>::Ptr

Definition at line 68 of file PointCloudSegmenter.h.

◆ Points

using Points = std::vector<Point>

Definition at line 70 of file ObjectShapeClassification.h.

◆ PointT

typedef pcl::PointXYZRGBA PointT

Definition at line 79 of file MaskRCNNPointCloudObjectLocalizer.h.

◆ TaggedPoints

using TaggedPoints = std::pair<std::string, Points>

Definition at line 71 of file ObjectShapeClassification.h.

◆ VoxelGrid

Definition at line 33 of file VoxelGridCore.h.

◆ VoxelGridStructure

◆ Yolo

Definition at line 234 of file Component.h.

Enumeration Type Documentation

◆ KinectProcessorType

Enumerator
CPU 
OPENCL 
OPENCLKDE 
OPENGL 
CUDA 
CUDAKDE 

Definition at line 54 of file KinectV2PointCloudProvider.h.

◆ ViewMode

enum ViewMode
strong
Enumerator
Full 
StatusOnly 

Definition at line 52 of file ImageProviderConfigWidget.h.

Function Documentation

◆ alignPlaneNormal()

Eigen::Hyperplane3f alignPlaneNormal ( Eigen::Hyperplane3f  plane,
Eigen::Vector3f  normal 
)

Flip plane if its normal does not point towards normal.

Definition at line 148 of file tools.cpp.

◆ comparePointClustersDescending()

bool visionx::comparePointClustersDescending ( const pcl::PointIndices &  lhs,
const pcl::PointIndices &  rhs 
)
inline

Definition at line 105 of file tools.h.


◆ convertDepthInMetersToWeirdArmarX()

void convertDepthInMetersToWeirdArmarX ( const cv::Mat &  input,
CByteImage *  output 
)

Definition at line 98 of file OpenCVUtil.cpp.


◆ convertIvtRgbToCvGray()

void convertIvtRgbToCvGray ( const CByteImage &  ivt_rgb,
cv::Mat &  cv_gray 
)

Definition at line 14 of file opencv_conversions.cpp.

◆ convertWeirdArmarXToDepthInMeters() [1/2]

cv::Mat convertWeirdArmarXToDepthInMeters ( CByteImage const *  input)

Convert ArmarX encoding of depth information in an RGB image to depth in meters.

ArmarX stores the depth in mm in the R and G channels of an RGB image. This is because we cannot provide images of different types, so an RGB-D camera needs to use a single format for both color and depth.

Parameters
input: Weird ArmarX encoding of depth image.
Returns
Floating point matrix containing the depth in meters.

Definition at line 61 of file OpenCVUtil.cpp.
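
A minimal usage sketch (depthImage, u and v are illustrative inputs; the result is assumed to be a CV_32FC1 single-channel float matrix, matching the description above):

// depthImage: CByteImage* carrying the ArmarX RGB depth encoding, e.g. received from an ImageProvider
cv::Mat depthInMeters = visionx::convertWeirdArmarXToDepthInMeters(depthImage);
float d = depthInMeters.at<float>(v, u); // depth at pixel (u, v) in meters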


◆ convertWeirdArmarXToDepthInMeters() [2/2]

void convertWeirdArmarXToDepthInMeters ( const uint8_t *  input,
cv::Mat &  output 
)

Like convertWeirdArmarXToDepthInMeters(CByteImage const* input), but writes into a pre-initialized cv::Mat with the correct size and type.

Parameters
input: The raw pixel buffer with weird ArmarX depth encoding.
output: A cv::Mat with the correct size and type.

Definition at line 76 of file OpenCVUtil.cpp.

◆ copyCvDepthToGrayscaleIVT()

void copyCvDepthToGrayscaleIVT ( cv::Mat const &  inputInMeters,
CByteImage *  output 
)

Copies a floating point depth image (in meters) to an RGB depth representation.

Close to the camera: white. Farther away: black.

Parameters
inputInMeters: OpenCV floating point matrix (CV_32FC1) containing the depth in meters.
output: IVT CByteImage for visualizing the depth info in an RGB image.

Definition at line 26 of file OpenCVUtil.cpp.
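
A minimal visualization sketch (assumes depthInMeters is a CV_32FC1 matrix, e.g. obtained via convertWeirdArmarXToDepthInMeters(), and that the output image uses the IVT RGB24 format):

CByteImage visualization(depthInMeters.cols, depthInMeters.rows, CByteImage::eRGB24);
visionx::copyCvDepthToGrayscaleIVT(depthInMeters, &visualization);
// visualization can now be streamed out, e.g. as a result image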

◆ copyCvMatToIVT()

void copyCvMatToIVT ( cv::Mat const &  input,
CByteImage *  output 
)

Copies the contents of an OpenCV matrix to an IVT CByteImage.

Parameters
input: OpenCV matrix (RGB or grayscale)
output: IVT CByteImage (RGB or grayscale)

Definition at line 13 of file OpenCVUtil.cpp.

◆ copyPointCloudWithoutIndices()

void visionx::copyPointCloudWithoutIndices ( const pcl::PointCloud< PointT > &  input,
const std::vector< int > &  indices,
pcl::PointCloud< PointT > &  output 
)

Definition at line 60 of file tools.h.


◆ eq()

bool visionx::eq ( float  lhs,
float  rhs,
float  tol = 1e-6f 
)

Definition at line 7 of file ice_point_operators.cpp.


◆ evaluateOLP()

void visionx::evaluateOLP ( OLPEvaluation *  p,
bool  testRecognition 
)

Definition at line 44 of file OLPEvaluation.cpp.

◆ ExtractAnglesFromRotationMatrix()

void visionx::ExtractAnglesFromRotationMatrix ( const Mat3d &  mat,
Vec3d &  angles 
)
inline

Definition at line 174 of file HandLocalisation.cpp.


◆ fillCameraMatrix() [1/2]

void fillCameraMatrix ( const CCalibration::CCameraParameters &  ivtCameraParameters,
cv::Mat &  output 
)

Fill a properly initialized camera matrix with the given camera parameters.

Parameters
cameraParams: The camera parameters.
output: The cv matrix. Must be floating point, i.e. CV_32F or CV_64F.

Definition at line 209 of file OpenCVUtil.cpp.


◆ fillCameraMatrix() [2/2]

void fillCameraMatrix ( const visionx::CameraParameters &  cameraParams,
cv::Mat &  output 
)

Fill a properly initialized camera matrix with the given camera parameters.

Parameters
cameraParams: The camera parameters.
output: The cv matrix. Must be floating point, i.e. CV_32F or CV_64F.

Definition at line 195 of file OpenCVUtil.cpp.


◆ fillColorArray()

void visionx::fillColorArray ( float  array[3],
const ColorT &  color 
)

Definition at line 44 of file CoinPointCloud.cpp.


◆ filterNaNs()

void visionx::filterNaNs ( pcl::PointCloud< PointT > &  pc)
inline

Definition at line 116 of file tools.h.

◆ fitPlaneSVD() [1/2]

Eigen::Hyperplane3f fitPlaneSVD ( const pcl::PointCloud< pcl::PointXYZRGBA > &  cloud,
const pcl::PointIndices &  indices,
Eigen::Vector3f  center = Eigen::Vector3f::Zero() 
)

Definition at line 53 of file tools.cpp.


◆ fitPlaneSVD() [2/2]

Eigen::Hyperplane3f fitPlaneSVD ( const pcl::PointCloud< pcl::PointXYZRGBA > &  cloud,
Eigen::Vector3f  center = Eigen::Vector3f::Zero() 
)

Definition at line 46 of file tools.cpp.


◆ FramedPointCloud() [1/2]

visionx::FramedPointCloud ( PointCloudPtrT  pointCloudPtr) -> FramedPointCloud< typename PointCloudPtrT::element_type::PointType >

◆ FramedPointCloud() [2/2]

visionx::FramedPointCloud ( PointCloudPtrT  pointCloudPtr,
const std::string &  frameName 
) -> FramedPointCloud< typename PointCloudPtrT::element_type::PointType >
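
A minimal construction sketch for these deduction guides (the empty cloud and the frame name "DepthCamera" are illustrative):

pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloudPtr(new pcl::PointCloud<pcl::PointXYZRGBA>);
visionx::FramedPointCloud framed(cloudPtr, "DepthCamera"); // point type is deduced from cloudPtr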

◆ getPlaneInliers()

pcl::PointIndicesPtr getPlaneInliers ( const pcl::PointCloud< pcl::PointXYZRGBA > &  cloud,
Eigen::Hyperplane3f  plane,
float  maxDistance 
)

Definition at line 96 of file tools.cpp.
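
A sketch combining fitPlaneSVD(), alignPlaneNormal() and getPlaneInliers() to extract a support plane (cloud is an illustrative pcl::PointCloud<pcl::PointXYZRGBA>; the inlier threshold of 10.0f assumes the cloud is given in millimeters):

Eigen::Hyperplane3f plane = visionx::fitPlaneSVD(cloud);
plane = visionx::alignPlaneNormal(plane, Eigen::Vector3f::UnitZ()); // make the normal point upwards
pcl::PointIndicesPtr inliers = visionx::getPlaneInliers(cloud, plane, 10.0f);
auto planeCloud = visionx::getPointCloudWithIndices(cloud, *inliers); // only the plane points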

◆ getPointCloudWithIndices() [1/2]

pcl::PointCloud<PointT>::Ptr visionx::getPointCloudWithIndices ( const pcl::PointCloud< PointT > &  input,
const pcl::PointIndices &  indices 
)

Definition at line 96 of file tools.h.


◆ getPointCloudWithIndices() [2/2]

pcl::PointCloud<PointT>::Ptr visionx::getPointCloudWithIndices ( const pcl::PointCloud< PointT > &  input,
const std::vector< int > &  indices 
)

Definition at line 87 of file tools.h.


◆ getPointCloudWithoutIndices()

pcl::PointCloud<PointT>::Ptr visionx::getPointCloudWithoutIndices ( const pcl::PointCloud< PointT > &  input,
const std::vector< int > &  indices 
)

Definition at line 77 of file tools.h.


◆ hsvDistance() [1/2]

float hsvDistance ( const Eigen::Vector3f &  lhs,
const Eigen::Vector3f &  rhs,
const Eigen::Vector3f &  hsvWeights = Eigen::Vector3f::Ones() 
)

Definition at line 118 of file tools.cpp.

◆ hsvDistance() [2/2]

float hsvDistance ( const pcl::PointXYZHSV &  lhs,
const pcl::PointXYZHSV &  rhs,
const Eigen::Vector3f &  hsvWeights = Eigen::Vector3f::Ones() 
)

Definition at line 111 of file tools.cpp.

◆ hsvMean() [1/2]

Eigen::Vector3f hsvMean ( const pcl::PointCloud< pcl::PointXYZHSV > &  hsvCloud,
const std::vector< int > &  indices 
)

Definition at line 129 of file tools.cpp.


◆ hsvMean() [2/2]

Eigen::Vector3f visionx::hsvMean ( const pcl::PointCloud< PointRGB > &  rgbCloud,
const std::vector< int > &  indices 
)

Definition at line 51 of file tools.h.
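
A sketch using hsvMean() together with hsvDistance() to compare a cluster's mean color against a reference color (rgbCloud, clusterIndices, the reference color and the weights are illustrative; the weights emphasize hue):

Eigen::Vector3f meanHsv = visionx::hsvMean(rgbCloud, clusterIndices.indices);
Eigen::Vector3f referenceHsv(0.10f, 0.80f, 0.90f); // illustrative target color in HSV
float d = visionx::hsvDistance(meanHsv, referenceHsv, Eigen::Vector3f(2.0f, 1.0f, 1.0f));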


◆ makeCameraMatrix() [1/2]

cv::Mat visionx::makeCameraMatrix ( const CameraParams &  cameraParams,
int  rows = 3,
int  cols = 3,
int  type = CV_64F 
)

Builds a camera matrix.

Parameters
cameraParams: The camera parameters.
Returns
The CV camera matrix.

Definition at line 77 of file OpenCVUtil.h.


◆ makeCameraMatrix() [2/2]

cv::Mat makeCameraMatrix ( const visionx::CameraParameters &  cameraParams)

Definition at line 25 of file opencv_conversions.cpp.
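
A usage sketch for both overloads (params is an illustrative visionx::CameraParameters, e.g. taken from a provider's calibration; that the non-template overload yields a 3x3 CV_64F matrix is assumed from the template defaults):

cv::Mat K64 = visionx::makeCameraMatrix(params);               // non-template overload
cv::Mat K32 = visionx::makeCameraMatrix(params, 3, 3, CV_32F); // template overload with explicit size/type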


◆ operator!=() [1/3]

bool operator!= ( const ColoredPoint3D &  lhs,
const ColoredPoint3D &  rhs 
)

Definition at line 29 of file ice_point_operators.cpp.

◆ operator!=() [2/3]

bool operator!= ( const LabeledPoint3D &  lhs,
const LabeledPoint3D &  rhs 
)

Definition at line 40 of file ice_point_operators.cpp.

◆ operator!=() [3/3]

bool operator!= ( const Point3D &  lhs,
const Point3D &  rhs 
)

Definition at line 12 of file ice_point_operators.cpp.


◆ operator<<() [1/4]

std::ostream & operator<< ( std::ostream &  os,
const ColoredPoint3D &  rhs 
)

Definition at line 34 of file ice_point_operators.cpp.

◆ operator<<() [2/4]

std::ostream & operator<< ( std::ostream &  os,
const LabeledPoint3D &  rhs 
)

Definition at line 45 of file ice_point_operators.cpp.

◆ operator<<() [3/4]

std::ostream & operator<< ( std::ostream &  os,
const Point3D &  rhs 
)

Definition at line 17 of file ice_point_operators.cpp.

◆ operator<<() [4/4]

std::ostream & operator<< ( std::ostream &  os,
const RGBA &  rhs 
)

Definition at line 23 of file ice_point_operators.cpp.

◆ pointCloudMedian()

Eigen::Vector3f pointCloudMedian ( const pcl::PointCloud< pcl::PointXYZRGBA > &  cloud,
const pcl::PointIndices &  indices = {} 
)

Definition at line 7 of file tools.cpp.


◆ sortPointClustersDescending()

void visionx::sortPointClustersDescending ( std::vector< pcl::PointIndices > &  clusterIndices)
inline

Definition at line 110 of file tools.h.
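
A minimal sketch (clusterIndices is an illustrative result of a clustering step, e.g. pcl::EuclideanClusterExtraction):

std::vector<pcl::PointIndices> clusterIndices; // filled by a clustering step
visionx::sortPointClustersDescending(clusterIndices);
// clusterIndices is now sorted in descending order, see comparePointClustersDescending()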


◆ toByte()

int visionx::toByte ( float  f)

◆ transformPointCloudFromTo()

void transformPointCloudFromTo ( const pcl::PointCloud< PointT > &  sourceCloud,
pcl::PointCloud< PointT > &  targetCloud,
const std::string &  sourceFrame,
const std::string &  targetFrame,
const VirtualRobot::RobotPtr &  robot 
)

Transform a point cloud from a robot frame to another.

Parameters
sourceCloud: The source point cloud.
targetCloud: The target point cloud (may be the same as sourceCloud).
sourceFrame: The current frame of sourceCloud.
targetFrame: The frame to transform to.
robot: The reference robot to use for the transformation.

Definition at line 129 of file FramedPointCloud.h.
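
A usage sketch (the frame names "DepthCamera" and "root" as well as the sourceCloud, targetCloud and robot instances are illustrative; the robot must know both frames):

visionx::transformPointCloudFromTo(*sourceCloud, *targetCloud, "DepthCamera", "root", robot);
// targetCloud now holds the points expressed in the "root" frame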


Variable Documentation

◆ DSHT_default_allowed_deviation

const double DSHT_default_allowed_deviation[12]
Initial value:
= { 120, 120, 120,
M_PI / 180 * 30, M_PI / 180 * 30, M_PI / 180 * 30,
M_PI / 180 * 12, M_PI / 180 * 12, M_PI / 180 * 12,
M_PI / 180 * 12, M_PI / 180 * 12, M_PI / 180 * 12
}

Definition at line 131 of file HandLocalisationConstants.h.

◆ MPII_TO_MPI

const std::map<std::string, std::pair<unsigned int, std::string> > MPII_TO_MPI
Initial value:
{
{"head", {0, "Head"}},
{"neck", {1, "Neck"}},
{"right_shoulder", {2, "RShoulder"}},
{"right_elbow", {3, "RElbow"}},
{"right_hand", {4, "RWrist"}},
{"left_shoulder", {5, "LShoulder"}},
{"left_elbow", {6, "LElbow"}},
{"left_hand", {7, "LWrist"}},
{"right_waist", {8, "RHip"}},
{"right_knee", {9, "RKnee"}},
{"right_foot", {10, "RAnkle"}},
{"left_waist", {11, "LHip"}},
{"left_knee", {12, "LKnee"}},
{"left_foot", {13, "LAnkle"}},
}

Definition at line 32 of file OpenPoseEstimationUtil.h.
