|
Object localization processors are special ImageProcessors (VisionX-HowTos-implement-imageprocessor) that perform object localization in a stereo image pair. ObjectLocalizerProcessors are scheduled by the memoryx::ObjectLocalizationUpdater, and the localization results are written to the WorkingMemory of MemoryX. This section focuses on the VisionX side of the ObjectLocalizerProcessor. For the complete processing chain of object localization, memory update, and fusion, see the MemoryX documentation.
Integrating a new object localizer processor is achieved by subclassing the visionx::ObjectLocalizerProcessor in the following way:
```cpp
#include <VisionX/core/ImageProcessor.h>

namespace visionx
{
    class ExampleObjectLocalizerProcessor :
        virtual public visionx::ObjectLocalizerProcessor
    {
    public:
        /**
         * ObjectLocalizerProcessor interface: The initRecognizer method needs to be implemented by any ObjectLocalizer.
         * The method is called in the same thread as addObjectClass and localizeObjectClasses. Initialize your
         * recognition method here.
         *
         * @return success
         */
        virtual bool initRecognizer() = 0;

        /**
         * ObjectLocalizerProcessor interface: The addObjectClass method needs to be implemented by any ObjectLocalizer.
         * Adds an object class to the localizer. Called by the ObjectLocalizerProcessor for each object class in prior
         * knowledge that has a RecognitionMethod attribute corresponding to the default name of this component
         * (see Component::getDefaultName()).
         *
         * @param objectClassEntity entity containing all information available for the object class
         * @param fileManager       GridFileManager required to read files associated to prior knowledge from the database.
         *                          Usually accessed via an AbstractEntityWrapper.
         *
         * @return success of adding this entity to the recognition method
         */
        virtual bool addObjectClass(const memoryx::EntityPtr& objectClassEntity,
                                    const memoryx::GridFileManagerPtr& fileManager) = 0;

        /**
         * ObjectLocalizerProcessor interface: The localizeObjectClasses method needs to be implemented by any ObjectLocalizer.
         * Based on the object class names and the camera images it generates a list of localization results which
         * correspond to found instances of objects belonging to these classes.
         *
         * This method is called by an ObjectLocalizerProcessorJob.
         *
         * @param objectClassNames names of the classes to localize
         * @param cameraImages     the two input images
         * @param resultImages     the two result images; they are provided if result images are enabled
         *
         * @return list of object instances
         */
        virtual memoryx::ObjectLocalizationResultList localizeObjectClasses(
            const std::vector<std::string>& objectClassNames,
            CByteImage** cameraImages,
            CByteImage** resultImages) = 0;
    };
}
```
The main interface of the ObjectLocalizerProcessor comprises three methods that need to be implemented by each ObjectLocalizerProcessor. All three methods are called in the same thread to ease the integration of OpenGL-based recognition methods and similar.
These methods need to be implemented:
- initRecognizer(): initializes the recognition method; called once before object classes are added and localization requests are processed.
- addObjectClass(): registers an object class from prior knowledge with the recognition method.
- localizeObjectClasses(): localizes all instances of the requested object classes in the current stereo image pair.
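The following sketch shows the rough shape of a localizeObjectClasses() implementation. Only the method signature comes from the interface above; the class name ExampleObjectLocalizerProcessor and the helper runRecognition() are hypothetical placeholders for your own recognition code.

```cpp
memoryx::ObjectLocalizationResultList
ExampleObjectLocalizerProcessor::localizeObjectClasses(
    const std::vector<std::string>& objectClassNames,
    CByteImage** cameraImages,
    CByteImage** resultImages)
{
    memoryx::ObjectLocalizationResultList resultList;

    // cameraImages[0] and cameraImages[1] hold the left and right camera image.
    for (const std::string& className : objectClassNames)
    {
        // runRecognition() is a hypothetical helper wrapping the actual recognition
        // method; it returns one memoryx::ObjectLocalizationResult per instance of
        // the given class found in the stereo pair.
        const memoryx::ObjectLocalizationResultList hits =
            runRecognition(className, cameraImages[0], cameraImages[1]);

        resultList.insert(resultList.end(), hits.begin(), hits.end());

        // Optionally draw the detections if result images are enabled.
        if (resultImages)
        {
            // drawResults(hits, resultImages[0], resultImages[1]); // hypothetical helper
        }
    }

    return resultList;
}
```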
Besides these methods, the onInit, onConnect, onDisconnect and onExit methods are available in the interface as usual for ManagedIceObjects.
The models of the recognizable object classes are stored in memoryx::PriorKnowledge. Each class has a corresponding entity in PriorKnowledge of type memoryx::ObjectClass. The object class entities have an attribute called recognitionMethod. The addObjectClass method is called for each object class entity whose recognitionMethod matches the proxy name of the ObjectLocalizerProcessor. The default proxy name can be specified by implementing the method armarx::ManagedIceObject::getDefaultName().
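For example, a localizer whose getDefaultName() returns the string below is handed exactly those object classes whose recognitionMethod attribute carries that value. The name used here is only illustrative:

```cpp
// Hypothetical default name: object classes in PriorKnowledge whose
// recognitionMethod attribute equals this string are passed to this
// localizer via addObjectClass().
std::string ExampleObjectLocalizerProcessor::getDefaultName() const
{
    return "ExampleObjectLocalizerProcessor";
}
```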
Object model information, such as features, can also be stored with the object class entity. This information is usually accessed via the GridFileManager passed to the addObjectClass method. To ease this access, a suitable memoryx::EntityWrapper is usually implemented that provides method-level access to the features. See the MemoryX documentation for details.
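A minimal addObjectClass() sketch could look as follows. The wrapper class ExampleFeatureWrapper, its getFeatureFile() accessor, and the helper loadModelIntoRecognizer() are hypothetical stand-ins for a project-specific memoryx::EntityWrapper and recognition backend; how the cached file is actually loaded depends on the recognition method.

```cpp
bool ExampleObjectLocalizerProcessor::addObjectClass(
    const memoryx::EntityPtr& objectClassEntity,
    const memoryx::GridFileManagerPtr& fileManager)
{
    // Wrap the raw entity to get convenient access to its attributes and associated
    // files. ExampleFeatureWrapper is a hypothetical EntityWrapper that uses the
    // GridFileManager internally to cache files from the database on local disk.
    ExampleFeatureWrapperPtr wrapper = new ExampleFeatureWrapper(fileManager);
    objectClassEntity->addWrapper(wrapper);

    // getFeatureFile() is a hypothetical accessor returning the local path of the
    // downloaded feature/model file of this object class.
    const std::string featureFile = wrapper->getFeatureFile();
    if (featureFile.empty())
    {
        return false;
    }

    // Hand the model over to the recognition method (hypothetical helper).
    return loadModelIntoRecognizer(objectClassEntity->getName(), featureFile);
}
```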