Advanced Robot Perception Dashboard Development for Intelligent Systems
This tutorial uses Python exclusively: all examples are written for Python 3.10+ with the rclpy client library and target ROS 2 Jazzy Jalisco.
📖 Start by understanding: Read the "What is RViz2?" section next to learn why visualization is critical before diving into code!
RViz2 (ROS Visualization 2) is the most powerful 3D visualization tool for ROS 2, essential for developing, debugging, and demonstrating intelligent robot systems.
RViz2 is not just a visualization tool - it's your window into the robot's "brain": it lets you see what the robot senses, how it interprets its surroundings, and what it plans to do next.
RViz2 is used worldwide in domains ranging from autonomous driving and industrial automation to medical robotics, warehouse logistics, space exploration, and construction (see the examples below).
3D View: Main visualization area where all 3D content is rendered.
Displays Panel: Control center for adding and configuring visualization elements. Common displays include Grid, RobotModel, TF, LaserScan, PointCloud2, Marker/MarkerArray, and Path.
Views Panel: Camera control for different viewing perspectives. Built-in view types include Orbit, FPS, TopDownOrtho, ThirdPersonFollower, and XYOrbit.
Time Panel: Time synchronization for simulation and playback.
Tool Properties Panel: Configuration panel for active tools such as 2D Goal Pose and Publish Point.
Global Options: System-wide settings affecting all displays, most importantly the Fixed Frame and the background color.
RViz2 updates in real-time as your robot operates, showing live data from sensors, changing robot states, and dynamic environments.
Visualize diverse data simultaneously: camera images, laser scans, point clouds, robot models, TF frames, paths, and custom markers can all share the same 3D scene.
RViz2 isn't just for viewing - you can interact with your robot: set navigation goals, provide an initial pose estimate, and publish clicked points straight from the toolbar.
Save your entire RViz2 setup (all displays, settings, views) to a .rviz file and reload it instantly.
Extend RViz2 with custom plugins for specialized visualizations: custom displays, panels, and tools can be written as C++ plugins and loaded at runtime.
Autonomous vehicles: Visualizing sensor fusion, obstacle detection, path planning, and decision-making in self-driving cars.
Example: Waymo uses RViz to debug their autonomous taxi fleet.
Industrial automation: Monitoring robot arm movements, collision avoidance, and pick-and-place operations in factories.
Example: ABB uses visualization for robot programming and debugging.
Medical robotics: Visualizing surgical robots, showing tool positions, force feedback, and safety zones during operations.
Example: The da Vinci surgical system uses 3D visualization for surgeon feedback.
Warehouse logistics: Tracking multiple robots, inventory locations, optimal paths, and fleet coordination in real time.
Example: Amazon Robotics visualizes 1000+ robots simultaneously.
Space exploration: Visualizing rover operations, terrain mapping, sample collection, and communication with Earth.
Example: NASA JPL uses RViz for Mars rover mission planning.
Construction: Monitoring autonomous construction equipment, site mapping, and coordination of multiple machines.
Example: Built Robotics visualizes autonomous bulldozers and excavators.
Without visualization: you debug by echoing topics, reading raw numbers and log lines, and trying to imagine the 3D scene in your head.
With RViz2: you see the robot's state, sensor data, detections, and plans at a glance, so mistakes become obvious immediately.
In this tutorial, you'll learn RViz2 in this order: environment setup and a first marker node, the RViz2 interface and displays, robot state and joint visualization, perception markers, TF frames, saved configurations, and finally full package organization with launch files.
Let's start by setting up your environment and running your first RViz2 visualization. Follow these steps carefully.
Make sure you have ROS 2 Jazzy installed and sourced. Open a terminal and verify:
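If ROS 2 is not sourced automatically, source it first (the path assumes the default install location):
source /opt/ros/jazzy/setup.bash
echo $ROS_DISTRO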
You should see: jazzy
First, create a ROS 2 workspace for your visualization project:
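The commands below assume the conventional ~/ros2_ws location; any path works:
mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src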
Create your package:
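The expected output below corresponds to a command along these lines:
ros2 pkg create --build-type ament_python perception_dashboard --dependencies rclpy std_msgs geometry_msgs sensor_msgs visualization_msgs nav_msgs tf2_ros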
Expected Output:
going to create a new package
package name: perception_dashboard
destination directory: /home/user/ros2_ws/src
package format: 3
version: 0.0.0
description: TODO: Package description
maintainer: ['user']
licenses: ['TODO: License declaration']
build type: ament_python
dependencies: ['rclpy', 'std_msgs', 'geometry_msgs', 'sensor_msgs', 'visualization_msgs', 'nav_msgs', 'tf2_ros']
creating folder ./perception_dashboard
creating ./perception_dashboard/package.xml
creating source and include folder
creating folder ./perception_dashboard/perception_dashboard
...
Navigate to your package and create your first visualization node:
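For example, using nano (any editor works):
cd ~/ros2_ws/src/perception_dashboard/perception_dashboard
nano simple_marker_node.py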
Open the file and add this code:
#!/usr/bin/env python3
"""
Simple Marker Node - Your First RViz2 Visualization
This node publishes a basic marker that you'll see in RViz2
"""
import rclpy
from rclpy.node import Node
from visualization_msgs.msg import Marker
from std_msgs.msg import ColorRGBA
class SimpleMarkerNode(Node):
def __init__(self):
super().__init__('simple_marker_node')
# Create publisher for visualization markers
self.marker_pub = self.create_publisher(Marker, '/visualization_marker', 10)
# Create timer to publish markers at 1 Hz
self.timer = self.create_timer(1.0, self.publish_marker)
self.get_logger().info('Simple Marker Node Started!')
self.get_logger().info('Publishing markers on topic: /visualization_marker')
def publish_marker(self):
"""Publish a simple cube marker"""
marker = Marker()
# Header
marker.header.frame_id = "map"
marker.header.stamp = self.get_clock().now().to_msg()
# Marker properties
marker.ns = "basic_shapes"
marker.id = 0
marker.type = Marker.CUBE
marker.action = Marker.ADD
# Position (center of the marker)
marker.pose.position.x = 0.0
marker.pose.position.y = 0.0
marker.pose.position.z = 0.5
# Orientation
marker.pose.orientation.x = 0.0
marker.pose.orientation.y = 0.0
marker.pose.orientation.z = 0.0
marker.pose.orientation.w = 1.0
# Size (1m x 1m x 1m cube)
marker.scale.x = 1.0
marker.scale.y = 1.0
marker.scale.z = 1.0
# Color (Red, semi-transparent)
marker.color.r = 1.0
marker.color.g = 0.0
marker.color.b = 0.0
marker.color.a = 0.8
# Publish the marker
self.marker_pub.publish(marker)
self.get_logger().info('Published red cube marker')
def main(args=None):
rclpy.init(args=args)
node = SimpleMarkerNode()
try:
rclpy.spin(node)
except KeyboardInterrupt:
pass
node.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
Edit the setup.py file to register your node:
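For example:
nano ~/ros2_ws/src/perception_dashboard/setup.py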
Find the entry_points section and modify it:
entry_points={
'console_scripts': [
'simple_marker_node = perception_dashboard.simple_marker_node:main',
],
},
Save and exit (Ctrl+X, then Y, then Enter)
Build your package using colcon:
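Run the build from the workspace root (assuming ~/ros2_ws); --packages-select limits the build to this package:
cd ~/ros2_ws
colcon build --packages-select perception_dashboard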
Expected Output:
Starting >>> perception_dashboard
Finished <<< perception_dashboard [1.23s]

Summary: 1 package finished [1.45s]
Source your workspace:
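Assuming the workspace is at ~/ros2_ws:
source ~/ros2_ws/install/setup.bash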
Add this to your ~/.bashrc to automatically source your workspace:
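For example:
echo "source ~/ros2_ws/install/setup.bash" >> ~/.bashrc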
Open a new terminal and run your node:
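Remember to source the workspace in the new terminal first (unless you added it to ~/.bashrc above):
source ~/ros2_ws/install/setup.bash
ros2 run perception_dashboard simple_marker_node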
Expected Output:
[INFO] [1234567890.123456789] [simple_marker_node]: Simple Marker Node Started!
[INFO] [1234567890.123456789] [simple_marker_node]: Publishing markers on topic: /visualization_marker
[INFO] [1234567890.123456789] [simple_marker_node]: Published red cube marker
[INFO] [1234567891.123456789] [simple_marker_node]: Published red cube marker
[INFO] [1234567892.123456789] [simple_marker_node]: Published red cube marker
...
In another terminal, check if the topic is publishing:
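For example:
ros2 topic list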
You should see: /visualization_marker
Open another new terminal and launch RViz2:
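rviz2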
In the "Displays" panel on the left:
mapClick "Add" button (bottom left):
Click "Add" button again:
In the Marker display settings:
/visualization_markerYou should now see a red cube floating in the 3D view! The cube is 1 meter on each side and positioned 0.5 meters above the ground plane.
Now let's experiment! Try modifying the code to see different results:
Edit simple_marker_node.py and change the color to green:
# Color (Green instead of red)
marker.color.r = 0.0 # Change from 1.0
marker.color.g = 1.0 # Change from 0.0
marker.color.b = 0.0
marker.color.a = 0.8
Try different marker types:
marker.type = Marker.SPHERE # Try SPHERE, CYLINDER, ARROW
Add a counter to make the marker move:
def __init__(self):
super().__init__('simple_marker_node')
self.marker_pub = self.create_publisher(Marker, '/visualization_marker', 10)
self.timer = self.create_timer(0.1, self.publish_marker) # Faster updates
self.counter = 0.0 # Add counter
self.get_logger().info('Simple Marker Node Started!')
def publish_marker(self):
"""Publish a moving marker"""
import math
marker = Marker()
marker.header.frame_id = "map"
marker.header.stamp = self.get_clock().now().to_msg()
marker.ns = "basic_shapes"
marker.id = 0
marker.type = Marker.SPHERE
marker.action = Marker.ADD
# Make it move in a circle
marker.pose.position.x = 2.0 * math.cos(self.counter)
marker.pose.position.y = 2.0 * math.sin(self.counter)
marker.pose.position.z = 0.5
marker.pose.orientation.w = 1.0
marker.scale.x = 0.5
marker.scale.y = 0.5
marker.scale.z = 0.5
marker.color.r = 1.0
marker.color.g = 0.0
marker.color.b = 1.0 # Purple
marker.color.a = 1.0
self.marker_pub.publish(marker)
self.counter += 0.05 # Increment counter
After each change:
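Rebuild and re-source the workspace so the updated script is installed (paths assume ~/ros2_ws):
cd ~/ros2_ws
colcon build --packages-select perception_dashboard
source install/setup.bash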
Then restart your node (Ctrl+C in the node terminal, then run it again)
You've successfully created and run your first RViz2 visualization! You now understand the basics of:
Next: Continue reading to learn about more advanced visualization techniques!
At this point, you should have 2 terminal windows open:
Terminal 1 (marker node): should show repeating "Published red cube marker" messages.
Terminal 2 (RViz2): should show the red cube in the 3D view.
💡 Tip: Keep both terminals visible so you can see the log messages while observing the visualization!
RViz2 is the primary 3D visualization tool for ROS 2, providing a powerful framework for visualizing robot state, sensor data, and planning results in real-time.
Start RViz2 with the following command:
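rviz2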
Or launch with a specific configuration file:
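The path below is a placeholder; point it at any saved .rviz file:
rviz2 -d /path/to/your_config.rviz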
| Panel | Function |
|---|---|
| Displays Panel | Add/remove visualization elements |
| Views Panel | Control camera position and type |
| Time Panel | Display current ROS time |
| Tool Properties | Configure selected tools |
| 3D View | Main visualization window |
Move Camera: default tool for camera manipulation.
2D Goal Pose: set navigation goals.
2D Pose Estimate: set the initial robot pose.
Measure: measure distances in 3D space.
Publish Point: publish clicked points.
Use Ctrl+S to save your RViz configuration and Ctrl+O to load a saved configuration.
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState
from std_msgs.msg import Header
import math
class RobotStatePublisher(Node):
def __init__(self):
super().__init__('robot_state_publisher')
# Publisher for joint states
self.joint_pub = self.create_publisher(
JointState,
'joint_states',
10
)
# Timer for publishing at 50Hz
self.timer = self.create_timer(0.02, self.publish_joint_states)
# Joint configuration
self.joint_names = [
'joint1', 'joint2', 'joint3',
'joint4', 'joint5', 'joint6'
]
self.counter = 0.0
self.get_logger().info('Robot State Publisher initialized')
def publish_joint_states(self):
msg = JointState()
msg.header = Header()
msg.header.stamp = self.get_clock().now().to_msg()
msg.name = self.joint_names
# Create smooth sinusoidal motion for demonstration
msg.position = [
math.sin(self.counter) * 0.5,
math.cos(self.counter) * 0.3,
math.sin(self.counter * 2) * 0.4,
math.cos(self.counter * 1.5) * 0.2,
math.sin(self.counter * 0.5) * 0.6,
math.cos(self.counter * 0.7) * 0.3
]
msg.velocity = []
msg.effort = []
self.joint_pub.publish(msg)
self.counter += 0.02
def main(args=None):
rclpy.init(args=args)
node = RobotStatePublisher()
rclpy.spin(node)
node.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
Click "Add" → Select "RobotModel" → Configure:
/robot_descriptionIn Global Options, set Fixed Frame to:
Create professional detection markers for object recognition and tracking.
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from visualization_msgs.msg import Marker, MarkerArray
from geometry_msgs.msg import Point
from std_msgs.msg import ColorRGBA
import random
class PerceptionVisualizer(Node):
def __init__(self):
super().__init__('perception_visualizer')
# Publisher for detection markers
self.marker_pub = self.create_publisher(
MarkerArray,
'/detections',
10
)
self.timer = self.create_timer(0.5, self.publish_detections)
self.detection_id = 0
self.get_logger().info('Perception Visualizer initialized')
def create_bounding_box(self, id, x, y, z, size_x, size_y, size_z,
label, confidence):
"""Create a bounding box marker for object detection"""
marker = Marker()
marker.header.frame_id = 'base_link'
marker.header.stamp = self.get_clock().now().to_msg()
marker.ns = 'detections'
marker.id = id
marker.type = Marker.CUBE
marker.action = Marker.ADD
# Position
marker.pose.position.x = x
marker.pose.position.y = y
marker.pose.position.z = z
marker.pose.orientation.w = 1.0
# Size
marker.scale.x = size_x
marker.scale.y = size_y
marker.scale.z = size_z
# Color based on confidence
marker.color = ColorRGBA()
marker.color.r = 1.0 - confidence
marker.color.g = confidence
marker.color.b = 0.0
marker.color.a = 0.5 # Semi-transparent
marker.lifetime.sec = 1
return marker
def create_text_marker(self, id, x, y, z, text):
"""Create a text marker for labels"""
marker = Marker()
marker.header.frame_id = 'base_link'
marker.header.stamp = self.get_clock().now().to_msg()
marker.ns = 'labels'
marker.id = id
marker.type = Marker.TEXT_VIEW_FACING
marker.action = Marker.ADD
marker.pose.position.x = x
marker.pose.position.y = y
marker.pose.position.z = z + 0.5 # Above the box
marker.scale.z = 0.2 # Text height
marker.color = ColorRGBA()
marker.color.r = 1.0
marker.color.g = 1.0
marker.color.b = 1.0
marker.color.a = 1.0
marker.text = text
marker.lifetime.sec = 1
return marker
def publish_detections(self):
marker_array = MarkerArray()
# Simulate 3 detected objects
objects = [
{'label': 'Person', 'pos': (2.0, 1.0, 0.9),
'size': (0.5, 0.5, 1.8), 'conf': 0.95},
{'label': 'Car', 'pos': (5.0, -2.0, 0.8),
'size': (4.5, 2.0, 1.6), 'conf': 0.88},
{'label': 'Bicycle', 'pos': (3.0, 3.0, 0.5),
'size': (1.8, 0.6, 1.0), 'conf': 0.75}
]
for i, obj in enumerate(objects):
# Add slight random motion
x = obj['pos'][0] + random.uniform(-0.1, 0.1)
y = obj['pos'][1] + random.uniform(-0.1, 0.1)
z = obj['pos'][2]
# Bounding box
box_marker = self.create_bounding_box(
i * 2, x, y, z,
obj['size'][0], obj['size'][1], obj['size'][2],
obj['label'], obj['conf']
)
marker_array.markers.append(box_marker)
# Label with confidence
text = f"{obj['label']} ({obj['conf']:.2f})"
text_marker = self.create_text_marker(
i * 2 + 1, x, y, z, text
)
marker_array.markers.append(text_marker)
self.marker_pub.publish(marker_array)
self.get_logger().info(f'Published {len(marker_array.markers)} markers')
def main(args=None):
rclpy.init(args=args)
node = PerceptionVisualizer()
rclpy.spin(node)
node.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from tf2_ros import TransformBroadcaster
from geometry_msgs.msg import TransformStamped
import math
class TFPublisher(Node):
def __init__(self):
super().__init__('tf_publisher')
self.tf_broadcaster = TransformBroadcaster(self)
self.timer = self.create_timer(0.01, self.publish_transforms)
self.counter = 0.0
self.get_logger().info('TF Publisher initialized')
def publish_transforms(self):
# Base link to world
t = TransformStamped()
t.header.stamp = self.get_clock().now().to_msg()
t.header.frame_id = 'world'
t.child_frame_id = 'base_link'
# Simulate robot moving in a circle
radius = 2.0
t.transform.translation.x = radius * math.cos(self.counter)
t.transform.translation.y = radius * math.sin(self.counter)
t.transform.translation.z = 0.0
# Rotation to face forward along path
t.transform.rotation.x = 0.0
t.transform.rotation.y = 0.0
t.transform.rotation.z = math.sin(self.counter / 2.0)
t.transform.rotation.w = math.cos(self.counter / 2.0)
self.tf_broadcaster.sendTransform(t)
# Camera link relative to base_link
t2 = TransformStamped()
t2.header.stamp = self.get_clock().now().to_msg()
t2.header.frame_id = 'base_link'
t2.child_frame_id = 'camera_link'
t2.transform.translation.x = 0.3
t2.transform.translation.y = 0.0
t2.transform.translation.z = 0.5
t2.transform.rotation.w = 1.0
self.tf_broadcaster.sendTransform(t2)
self.counter += 0.01
def main(args=None):
rclpy.init(args=args)
node = TFPublisher()
rclpy.spin(node)
node.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
Click "Add" → Select "TF" → Configure:
Generate a TF tree diagram:
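This uses the tf2_tools package (install it with your ROS 2 distribution if it is missing):
ros2 run tf2_tools view_frames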
This generates a PDF showing the TF tree hierarchy.
Panels:
- Class: rviz_common/Displays
Name: Displays
- Class: rviz_common/Views
Name: Views
- Class: rviz_common/Time
Name: Time
Visualization Manager:
Class: ""
Displays:
- Class: rviz_default_plugins/Grid
Name: Grid
Enabled: true
Plane Cell Count: 50
Cell Size: 1.0
Color: 160; 160; 160
- Class: rviz_default_plugins/RobotModel
Name: Robot Model
Enabled: true
Description Topic: /robot_description
Visual Enabled: true
Alpha: 1.0
- Class: rviz_default_plugins/TF
Name: TF
Enabled: true
Show Names: true
Show Axes: true
Marker Scale: 1.0
- Class: rviz_default_plugins/MarkerArray
Name: Detections
Enabled: true
Topic: /detections
Global Options:
Fixed Frame: world
Background Color: 48; 48; 48
Views:
Current:
Class: rviz_default_plugins/Orbit
Distance: 10.0
Focal Point:
X: 0.0
Y: 0.0
Z: 0.0
Pitch: 0.5
Yaw: 0.5
File → Save Config As → Choose location and name
Keyboard: Ctrl+S
Include the .rviz file in your launch file for automatic loading (see the launch file example later in this tutorial).
Create a comprehensive RViz2 configuration that visualizes a complete intelligent robot system including robot state, sensor data, object detections, and planning information.
Display camera feed with overlays showing detected objects and confidence scores.
3D bounding boxes with labels and confidence scores for all detected objects.
Complete TF visualization showing all coordinate frames and their relationships.
Display robot path history and planned trajectory with clear differentiation.
The complete perception dashboard code integrates all concepts covered in this tutorial. Study the example carefully and adapt it for your specific use case.
#!/usr/bin/env python3
"""
Comprehensive Perception Dashboard for Intelligent Robot Systems
Combines robot state, sensor data, detections, and path planning
"""
import rclpy
from rclpy.node import Node
from visualization_msgs.msg import Marker, MarkerArray
from geometry_msgs.msg import Point, PoseStamped
from nav_msgs.msg import Path
from std_msgs.msg import ColorRGBA, Header
from tf2_ros import TransformBroadcaster
from geometry_msgs.msg import TransformStamped
import math
import random
class PerceptionDashboard(Node):
def __init__(self):
super().__init__('perception_dashboard')
# Publishers
self.marker_pub = self.create_publisher(
MarkerArray, '/perception/markers', 10
)
self.path_pub = self.create_publisher(
Path, '/planned_path', 10
)
self.trajectory_pub = self.create_publisher(
Path, '/robot_trajectory', 10
)
# TF broadcaster
self.tf_broadcaster = TransformBroadcaster(self)
# Timers
self.create_timer(0.1, self.publish_detections)
self.create_timer(0.05, self.publish_transforms)
self.create_timer(1.0, self.publish_planned_path)
self.create_timer(0.1, self.update_trajectory)
# State variables
self.counter = 0.0
self.trajectory_points = []
self.max_trajectory_points = 100
self.get_logger().info('Perception Dashboard initialized')
def publish_detections(self):
"""Publish object detection markers"""
marker_array = MarkerArray()
# Simulated detections
detections = [
{
'id': 0, 'class': 'person',
'pos': (3.0, 2.0, 0.9),
'size': (0.5, 0.5, 1.8),
'confidence': 0.92,
'velocity': (0.5, 0.2, 0.0)
},
{
'id': 1, 'class': 'car',
'pos': (8.0, -3.0, 0.8),
'size': (4.5, 2.0, 1.5),
'confidence': 0.88,
'velocity': (-1.0, 0.0, 0.0)
}
]
for det in detections:
# Create markers for each detection
# ... (implementation details)
pass
self.marker_pub.publish(marker_array)
def publish_transforms(self):
"""Publish robot transform tree"""
t = TransformStamped()
t.header.stamp = self.get_clock().now().to_msg()
t.header.frame_id = 'world'
t.child_frame_id = 'base_link'
radius = 3.0
t.transform.translation.x = radius * math.cos(self.counter * 0.5)
t.transform.translation.y = radius * math.sin(self.counter * 0.5)
        t.transform.translation.z = 0.0
        # Use an identity quaternion so the broadcast transform carries a valid rotation
        t.transform.rotation.w = 1.0
        self.tf_broadcaster.sendTransform(t)
self.counter += 0.05
def publish_planned_path(self):
"""Publish planned navigation path"""
path = Path()
path.header.frame_id = 'world'
path.header.stamp = self.get_clock().now().to_msg()
# Create planned path
for i in range(50):
pose = PoseStamped()
pose.header = path.header
# ... (path generation)
path.poses.append(pose)
self.path_pub.publish(path)
def update_trajectory(self):
"""Update robot trajectory history"""
# Track robot path over time
# ... (implementation)
pass
def main(args=None):
rclpy.init(args=args)
node = PerceptionDashboard()
rclpy.spin(node)
node.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
Understanding proper ROS 2 package organization is essential for creating maintainable and professional robot perception dashboards.
perception_dashboard/
├── perception_dashboard/
│ ├── __init__.py
│ ├── perception_node.py
│ ├── marker_utils.py
│ ├── tf_publisher.py
│ └── sensor_publisher.py
├── launch/
│ ├── dashboard.launch.py
│ ├── perception_only.launch.py
│ └── full_system.launch.py
├── config/
│ ├── perception_dashboard.rviz
│ ├── robot_params.yaml
│ └── sensor_config.yaml
├── rviz/
│ ├── default.rviz
│ └── debug.rviz
├── resource/
│ └── perception_dashboard
├── test/
│ ├── test_perception_node.py
│ └── test_marker_utils.py
├── package.xml
├── setup.py
├── setup.cfg
└── README.md
<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>perception_dashboard</name>
<version>1.0.0</version>
<description>Intelligent Robot Perception Dashboard for RViz2</description>
<maintainer email="a.albanna@uop.edu.jo">Dr. Abulkarim Albanna</maintainer>
<license>Apache-2.0</license>
<buildtool_depend>ament_python</buildtool_depend>
<depend>rclpy</depend>
<depend>std_msgs</depend>
<depend>geometry_msgs</depend>
<depend>sensor_msgs</depend>
<depend>visualization_msgs</depend>
<depend>nav_msgs</depend>
<depend>tf2_ros</depend>
<depend>tf2_geometry_msgs</depend>
<test_depend>ament_copyright</test_depend>
<test_depend>ament_flake8</test_depend>
<test_depend>ament_pep257</test_depend>
<test_depend>python3-pytest</test_depend>
<export>
<build_type>ament_python</build_type>
</export>
</package>
from setuptools import setup
import os
from glob import glob
package_name = 'perception_dashboard'
setup(
name=package_name,
version='1.0.0',
packages=[package_name],
data_files=[
('share/ament_index/resource_index/packages',
['resource/' + package_name]),
('share/' + package_name, ['package.xml']),
# Include launch files
(os.path.join('share', package_name, 'launch'),
glob('launch/*.launch.py')),
# Include config files
(os.path.join('share', package_name, 'config'),
glob('config/*.yaml')),
# Include RViz configs
(os.path.join('share', package_name, 'rviz'),
glob('rviz/*.rviz')),
],
install_requires=['setuptools'],
zip_safe=True,
maintainer='Dr. Abulkarim Albanna',
maintainer_email='a.albanna@uop.edu.jo',
description='Intelligent Robot Perception Dashboard for RViz2',
license='Apache-2.0',
tests_require=['pytest'],
entry_points={
'console_scripts': [
'perception_node = perception_dashboard.perception_node:main',
'tf_publisher = perception_dashboard.tf_publisher:main',
'sensor_publisher = perception_dashboard.sensor_publisher:main',
],
},
)
#!/usr/bin/env python3
"""
Launch file for Perception Dashboard
Starts all necessary nodes and RViz2 with custom configuration
"""
from launch import LaunchDescription
from launch_ros.actions import Node
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration, PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare
import os
def generate_launch_description():
# Declare launch arguments
use_sim_time = LaunchConfiguration('use_sim_time', default='false')
# Get package directories
pkg_share = FindPackageShare('perception_dashboard').find('perception_dashboard')
# RViz config file
rviz_config_file = PathJoinSubstitution([
FindPackageShare('perception_dashboard'),
'rviz',
'perception_dashboard.rviz'
])
# Parameters file
params_file = PathJoinSubstitution([
FindPackageShare('perception_dashboard'),
'config',
'robot_params.yaml'
])
return LaunchDescription([
# Declare arguments
DeclareLaunchArgument(
'use_sim_time',
default_value='false',
description='Use simulation time if true'
),
# Perception node
Node(
package='perception_dashboard',
executable='perception_node',
name='perception_node',
output='screen',
parameters=[
params_file,
{'use_sim_time': use_sim_time}
]
),
# TF publisher
Node(
package='perception_dashboard',
executable='tf_publisher',
name='tf_publisher',
output='screen',
parameters=[{'use_sim_time': use_sim_time}]
),
# Sensor publisher
Node(
package='perception_dashboard',
executable='sensor_publisher',
name='sensor_publisher',
output='screen',
parameters=[{'use_sim_time': use_sim_time}]
),
# RViz2
Node(
package='rviz2',
executable='rviz2',
name='rviz2',
arguments=['-d', rviz_config_file],
output='screen'
),
])
# robot_params.yaml
perception_node:
ros__parameters:
# Update rates (Hz)
marker_update_rate: 10.0
transform_update_rate: 50.0
path_update_rate: 1.0
# Visualization settings
marker_lifetime: 1.0
trajectory_max_points: 100
# Detection settings
confidence_threshold: 0.5
max_detections: 20
# Frame IDs
world_frame: "world"
base_frame: "base_link"
camera_frame: "camera_link"
laser_frame: "laser_frame"
# Colors (RGBA)
person_color: [0.0, 1.0, 0.0, 0.5]
car_color: [1.0, 0.0, 0.0, 0.5]
bicycle_color: [0.0, 0.0, 1.0, 0.5]
# Marker sizes
text_size: 0.2
arrow_shaft_diameter: 0.1
arrow_head_diameter: 0.2
The setup.cfg file tells ament_python where to install the package's executables:
[develop]
script_dir=$base/lib/perception_dashboard
[install]
install_scripts=$base/lib/perception_dashboard