The goal of this project is to equip the robot with PointGoal navigation capabilities.
The implemented navigation stack consists of three layers:
Additionally, execution is continuously monitored, and a safety guard ensures collision avoidance by reducing the velocity in close proximity to obstacles.
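The velocity reduction performed by the safety guard can be sketched as a simple distance-based scaling. This is an illustration of the idea only; the thresholds and the scaling law below are assumptions, not the package's actual parameters:

```python
def scale_velocity(v, obstacle_distance, d_stop=0.3, d_slow=1.0):
    """Scale a velocity command based on the distance to the nearest obstacle.

    Below d_stop the robot halts; between d_stop and d_slow the velocity is
    scaled down linearly; beyond d_slow it passes through unchanged.
    The thresholds (in meters) are illustrative, not the package's values.
    """
    if obstacle_distance <= d_stop:
        return 0.0
    if obstacle_distance >= d_slow:
        return v
    return v * (obstacle_distance - d_stop) / (d_slow - d_stop)
```

In practice such a guard runs at control frequency and applies the same factor to all velocity components so the commanded direction is preserved.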
This work is described in the following publications:
List of papers that build upon this work:
The framework follows the strategy pattern, allowing a configurable selection of algorithms for each layer of the navigation stack. The available options include:
The global planners provide a collision-free positional path. To obtain suitable base orientations, a subsequent orientation optimization step is performed.
A* and SPFA edge weighting:
In addition to the Euclidean distance between nodes, the proximity to the nearest obstacle is also considered. For further details, refer to the paper in the Publications section.
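The combined edge weight can be sketched as follows. The penalty shape and parameters here are illustrative assumptions, not the exact formulation from the paper:

```python
import math

def edge_weight(a, b, obstacle_distance, w_obstacle=1.0, d_influence=1.5):
    """Weight of a graph edge between 2D nodes a and b.

    Combines the Euclidean distance with a penalty that grows as the edge
    approaches the nearest obstacle. obstacle_distance is the clearance of
    the edge (e.g. looked up in a distance field). The penalty shape and the
    parameters w_obstacle and d_influence are illustrative assumptions.
    """
    euclidean = math.dist(a, b)
    if obstacle_distance >= d_influence:
        return euclidean  # far from obstacles: pure Euclidean weight
    penalty = w_obstacle * (d_influence - obstacle_distance)
    return euclidean + penalty * euclidean  # scale penalty with edge length
```

Weighting edges this way biases A* and SPFA toward paths that keep more clearance while remaining short.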
Orientation optimization:
The orientation optimization aims to balance the following objectives:
This optimization is applied to all nodes except the start and goal nodes and is implemented using the Ceres Solver library.
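As an illustration of the idea, and assuming for the sake of the example two common objectives (alignment with the direction of travel and smoothness between consecutive orientations), a minimal sketch without Ceres might look like this; the package's actual cost terms may differ:

```python
def optimize_orientations(thetas, tangents, w_align=1.0, w_smooth=4.0, iters=100):
    """Gauss-Seidel sweeps minimizing
        sum_i w_align * (theta_i - tangent_i)^2
      + sum_i w_smooth * (theta_{i+1} - theta_i)^2
    with theta_0 and theta_{n-1} held fixed (start and goal nodes).
    Angle wrap-around is ignored for simplicity; the objectives are
    assumed for illustration, not taken from the package.
    """
    thetas = list(thetas)
    n = len(thetas)
    for _ in range(iters):
        for i in range(1, n - 1):
            # closed-form minimizer of the quadratic cost in theta_i
            thetas[i] = (w_align * tangents[i]
                         + w_smooth * (thetas[i - 1] + thetas[i + 1])) \
                        / (w_align + 2 * w_smooth)
    return thetas
```

In the actual implementation, such a nonlinear least-squares problem is handed to the Ceres Solver instead of being solved by hand.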
For global planning, we use a costmap-based approach. A costmap is a fixed-resolution grid where each cell is assigned a score, representing its cost, and a validity mask to indicate whether it is traversable.
For simplicity, the same costmap representation is also reused for other grid-based maps, such as distance fields.
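A minimal sketch of such a costmap, with a cost grid and a validity mask, might look as follows (illustrative only, not the package's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Costmap:
    """Fixed-resolution grid map: each cell stores a cost, and a parallel
    validity mask marks whether the cell is traversable. Illustrative sketch."""
    width: int       # number of cells per row
    height: int      # number of rows
    resolution: float  # meters per cell
    costs: list = field(default_factory=list)
    valid: list = field(default_factory=list)

    def __post_init__(self):
        if not self.costs:
            self.costs = [0.0] * (self.width * self.height)
        if not self.valid:
            self.valid = [True] * (self.width * self.height)

    def index(self, x, y):
        # world coordinates (meters) -> flat cell index
        col = int(x / self.resolution)
        row = int(y / self.resolution)
        return row * self.width + col

    def is_traversable(self, x, y):
        return self.valid[self.index(x, y)]
```

The same structure can hold a distance field by storing the distance to the nearest obstacle in `costs`.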
Named locations, such as "in front of the table," can be defined in two ways: absolute or relative to objects. For relative locations, a reference frame must be specified. Typically, the root frame is used for rigid objects, while any known frame of the object can be utilized for articulated objects. This flexibility ensures accurate and context-aware location specification.
These locations follow a naming scheme that depends on the location type: `{location}` for absolute locations and `{object_name}/{location}:{instanceId}` for object-relative ones. The location name is written in camel case, e.g., `inFrontOf`. The instance ID is optional; if specified, the location is only valid for that specific object instance. Usually, locations are available for all objects of a class.

Rooms are represented as polygons with a specified height. This representation enables several functionalities, such as restricting operations to specific rooms or elevating subsymbolic data to a symbolic level, for instance, determining whether the robot is located within a particular room. Additionally, this information can serve as a navigation target, facilitating room-specific navigation tasks.
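Assuming the scheme distinguishes absolute locations (`{location}`) from object-relative ones (`{object_name}/{location}:{instanceId}`), a hypothetical helper to build such identifiers could look like this (the function is illustrative and not part of the package):

```python
def location_id(location, object_name=None, instance_id=None):
    """Build a location identifier following the naming scheme
    {location} (absolute) or {object_name}/{location}:{instanceId} (relative).
    Hypothetical helper for illustration only."""
    if object_name is None:
        return location
    suffix = f":{instance_id}" if instance_id is not None else ""
    return f"{object_name}/{location}{suffix}"
```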
Once a navigation stack is instantiated, a NavigatorServer can be used to coordinate its execution. This class orchestrates the different layers and processes navigation commands. Through well-defined interfaces, dependencies can be injected, enabling modular and flexible integration.
The NavigatorServer can be instantiated in a custom component or accessed via the navigator, the main component of this package.
The navigator client is the counterpart of the navigator server. It provides access to a navigator server to which navigation requests can be sent. Also, callbacks can be registered to handle events (e.g., start of movement, planning failure, ...).
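The callback mechanism can be sketched as a simple event dispatcher; the class and event names below are hypothetical, not the actual client API:

```python
class NavigatorClient:
    """Hypothetical sketch of event-callback registration; names are
    illustrative, not the actual API of this package."""

    def __init__(self):
        self._callbacks = {}

    def on(self, event, callback):
        # register a callback for a named event, e.g. "movement_started"
        self._callbacks.setdefault(event, []).append(callback)

    def _emit(self, event, **info):
        # invoked internally when the server reports an event
        for cb in self._callbacks.get(event, []):
            cb(**info)
```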
During execution, the events described in the following can occur. Each event also provides meaningful information (timestamp of the event, poses, etc.).
We can group them as follows, according to the layers of the navigation stack:
Global planning:
Local planning:
Safety guard:
The progress of the execution is also available through the following events:
In case of general failures, the following events are emitted:
The navigation memory provides the following core segments:
Scene information:
Navigation stack:
Prior knowledge is loaded into the memory on startup, including locations and rooms.
Locations are provided through a locations.json with the following structure:
Rooms are provided through a rooms.json with the following structure:
To determine a suitable navigation target, the following algorithms exist:
RoomNavigationTarget: generates a navigation target in front of a room.

To test the navigation stack in simulation, humans are modeled using the Reciprocal Velocity Obstacle (RVO) algorithm. Human paths are first generated with a global planning algorithm targeting random goals based on the traversability map. Their velocities are then initialized along these paths and dynamically adjusted to avoid obstacles, including the robot, using RVO.
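Full RVO computes collision-free velocities from reciprocal velocity obstacles; as a heavily simplified illustration of the underlying idea (steering a preferred velocity away from a nearby agent), consider:

```python
import math

def adjust_velocity(pos, vel_pref, other_pos, safe_dist=1.0, gain=1.0):
    """Steer a preferred velocity away from another agent when too close.

    This is a simplified repulsion heuristic for illustration only,
    not the actual RVO algorithm used by the human simulator.
    """
    dx = pos[0] - other_pos[0]
    dy = pos[1] - other_pos[1]
    d = math.hypot(dx, dy)
    if d >= safe_dist or d == 0.0:
        return vel_pref  # far enough away (or degenerate): keep preferred velocity
    push = gain * (safe_dist - d) / d  # repulsion grows as agents get closer
    return (vel_pref[0] + push * dx, vel_pref[1] + push * dy)
```

RVO proper additionally accounts for the other agent's velocity and splits the avoidance effort reciprocally between both agents.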
On the code level, this package contains the following libraries and components:
The following fundamental libraries are available:
The following libraries provide strategies for the different layers of the navigation stack:
To instantiate different strategies, the factories library provides factory methods that abstract the underlying algorithms. These factories also handle optional algorithms and dependencies, ensuring flexible and modular integration.
Other libraries include:
The client library offers a user-friendly API for interacting with the navigator component. It serves as the primary entry point for users who want to command the robot to navigate to a specific location.
- navigator
- navigation_memory

Skill providers:
- navigation_skill_provider

Other components:

- distance_to_obstacle_costmap_provider: Extracts scene information from the object memory, specifically the poses of known objects, and generates a corresponding distance map. The distance map is automatically updated whenever a known object's pose changes. Additionally, the generation process can be restricted to specific object datasets.
- dynamic_distance_to_obstacle_costmap_provider: Enhances a base costmap, such as one generated by the distance_to_obstacle_costmap_provider, by incorporating dynamic scene information from sources like laser scanner data. This component ensures the distance map is continuously updated to reflect real-time changes in the environment.
- costmap_merger: Combines multiple costmaps into a single unified map. The costmaps are merged in a prioritized order, with the first costmap taking precedence over subsequent ones. This is particularly useful for integrating static scene information with dynamic, up-to-date data near the robot.
- dynamic_scene_provider: Tracks humans, including an estimate of their velocity.
- laser_scanner_feature_extraction: Processes raw laser scanner data and clusters it into meaningful features. If distance maps are available, the laser scanner data can be filtered to focus solely on unknown obstacles. The extracted features include:
- human_simulator

Implemented in the library skills are the following basic skills:
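The prioritized merge performed by costmap_merger can be sketched as follows, representing each costmap as a flat list of (cost, valid) cells over the same grid (an illustrative simplification of the actual component):

```python
def merge_costmaps(costmaps):
    """Merge per-cell (cost, valid) layers in priority order: for each cell,
    the first costmap that marks it valid wins. Cells no layer covers stay
    invalid. Illustrative sketch, not the component's actual interface."""
    n = len(costmaps[0])
    merged = []
    for i in range(n):
        for cm in costmaps:  # costmaps are ordered by priority
            cost, valid = cm[i]
            if valid:
                merged.append((cost, True))
                break
        else:
            merged.append((0.0, False))  # no layer covers this cell
    return merged
```

Placing a dynamic, robot-local costmap first and the static scene costmap second yields exactly the "dynamic data takes precedence near the robot" behavior described above.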
| Skill name | Description |
|---|---|
| NavigateTo | Navigate to a target pose using absolute coordinates. |
| NavigateToLocation | Navigate to a predefined location. The location is provided as an entity in the memory. |
| NavigateToNamedLocation | Navigate to a location. Fuzzy matching is applied to resolve the exact location ID. |
Skills to give local commands, resulting in point-to-point motion:
| Skill name | Description |
|---|---|
| MoveXMeters | Move the robot a specified distance in a given direction. |
| MoveRelativePlanar | Move the robot to a relative position in the planar (x, y) coordinate system. |
| RotateXDegrees | Rotate the robot by a specified angle in degrees around its vertical axis. |
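These point-to-point skills boil down to computing an absolute target pose from a robot-relative command. A sketch of that computation (the helper is hypothetical, not part of the package API):

```python
import math

def relative_target(pose, dx, dy, dyaw_deg):
    """Compute an absolute target pose from a robot-relative planar command.

    pose is (x, y, yaw_rad) in the world frame; (dx, dy) is the offset in
    the robot frame and dyaw_deg the rotation in degrees. Hypothetical
    helper for illustration only.
    """
    x, y, yaw = pose
    # rotate the robot-frame offset into the world frame
    tx = x + dx * math.cos(yaw) - dy * math.sin(yaw)
    ty = y + dx * math.sin(yaw) + dy * math.cos(yaw)
    return (tx, ty, yaw + math.radians(dyaw_deg))
```

For example, MoveXMeters with distance 2 m straight ahead corresponds to `relative_target(pose, 2.0, 0.0, 0.0)`, and RotateXDegrees(90) to `relative_target(pose, 0.0, 0.0, 90.0)`.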
More advanced skills:
| Skill name | Description | Status |
|---|---|---|
| BringHumanIntoRoom | Assist a human in entering a specified room. | Planned / Draft |
| GuideHumanToRoom | Guide a human to a designated room. | Planned / Draft |
| NavigateToChargingStation | Navigate the robot to a charging station. | Planned / Draft |
This widget provides an intuitive and user-friendly interface for managing locations. Its key features include: