Week 11: RViz2 Visualization

Advanced Robot Perception Dashboard Development for Intelligent Systems

Theory First | Hands-On Practice | Python Only
University of Petra | Robotics Programming | Dr. Abulkarim Albanna

Learning Objectives

What You'll Learn
  • Master RViz2 interface and intelligent display panels
  • Visualize robot states, sensor data, and transformations
  • Create perception visualizations with detection markers
  • Develop custom RViz configurations for debugging
  • Build comprehensive robot perception dashboards
Python-Focused Tutorial

This tutorial uses Python exclusively for all examples and implementations. All code examples are written in Python 3.10+ using the rclpy library for ROS 2 Jazzy Jalisco.

📖 Start by understanding: Read the "What is RViz2?" section next to learn why visualization is critical before diving into code!

Prerequisites
  • ROS 2 Jazzy Jalisco
  • Python 3.10+
  • TF2 transforms knowledge
  • Sensor integration experience
  • Python programming skills
  • Understanding of coordinate frames

What is RViz2?

RViz2 (ROS Visualization 2) is the most powerful 3D visualization tool for ROS 2, essential for developing, debugging, and demonstrating intelligent robot systems.

Why RViz2 Matters

RViz2 is not just a visualization tool - it's your window into the robot's "brain". It allows you to:

  • See what your robot sees (sensors, cameras)
  • Understand what your robot thinks (planning, perception)
  • Debug issues visually (transforms, paths)
  • Present your work professionally
Industry Standard

RViz2 is used worldwide by:

  • Research Labs: MIT, Stanford, CMU
  • Companies: Boston Dynamics, Tesla, Waymo
  • Startups: Autonomous vehicle companies
  • Academia: Robotics courses globally

Core Components of RViz2

3D Viewport

Main visualization area where all 3D content is rendered.


Features:

  • Real-time 3D rendering
  • Camera controls (orbit, pan, zoom)
  • Multiple coordinate frames
  • Grid and axis helpers
Displays Panel

Control center for adding and configuring visualization elements.


Common Displays:

  • Robot Model
  • TF (Transforms)
  • Markers
  • Camera/LaserScan
  • PointCloud
  • Path/Trajectory
Views Panel

Camera control for different viewing perspectives.


View Types:

  • Orbit: Rotate around point
  • FPS: First-person view
  • TopDown: Bird's eye view
  • XY/XZ/YZ: Orthographic views
Time Panel

Time synchronization for simulation and playback.


Functions:

  • Display current ROS time
  • Show elapsed time
  • Sync with simulation
  • Bag file playback control
Tool Properties

Configuration panel for active tools.


Available Tools:

  • 2D Nav Goal
  • 2D Pose Estimate
  • Measure distance
  • Publish point
  • Select objects
Global Options

System-wide settings affecting all displays.


Key Settings:

  • Fixed Frame (reference)
  • Background color
  • Frame rate (FPS)
  • Target frame

Key Features of RViz2

RViz2 updates in real-time as your robot operates, showing live data from sensors, changing robot states, and dynamic environments.

Example: As your robot moves, you see its position update, sensors detecting obstacles, and path planning happening live.

Visualize diverse data simultaneously:

  • 3D meshes (robot models)
  • Point clouds (lidar data)
  • Images (camera feeds)
  • Laser scans (2D lidar)
  • Markers (custom shapes)
  • Paths (navigation routes)
  • TF frames (transforms)
  • Maps (occupancy grids)

RViz2 isn't just for viewing - you can interact with your robot:

  • Set Navigation Goals: Click to tell robot where to go
  • Set Initial Pose: Click to localize robot on map
  • Publish Points: Click to send waypoints
  • Select Objects: Click to inspect details
  • Measure Distances: Click-and-drag to measure
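
Each of these tools publishes a standard ROS 2 message that your own nodes can subscribe to. As a minimal sketch (assuming the default topic RViz2 uses for the Publish Point tool, /clicked_point, which carries geometry_msgs/PointStamped), a node can react to your clicks:

#!/usr/bin/env python3
"""Minimal sketch: react to the RViz2 "Publish Point" tool.
Assumes the default topic /clicked_point (geometry_msgs/PointStamped)."""

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PointStamped

class ClickListener(Node):
    def __init__(self):
        super().__init__('click_listener')
        # The Publish Point tool publishes PointStamped messages on /clicked_point
        self.sub = self.create_subscription(
            PointStamped, '/clicked_point', self.on_click, 10)

    def on_click(self, msg):
        p = msg.point
        self.get_logger().info(
            f'Clicked point in frame "{msg.header.frame_id}": '
            f'({p.x:.2f}, {p.y:.2f}, {p.z:.2f})')

def main(args=None):
    rclpy.init(args=args)
    node = ClickListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()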

Save your entire RViz2 setup (all displays, settings, views) to a .rviz file and reload it instantly.

Benefit: No need to reconfigure displays every time. Perfect for demonstrations and consistent debugging sessions.

Extend RViz2 with custom plugins for specialized visualizations:

  • Custom display types for your unique sensors
  • Specialized panels for your application
  • Integration with third-party tools
  • Company-specific visualization needs

Real-World Applications

Autonomous Vehicles

Visualizing sensor fusion, obstacle detection, path planning, and decision-making in self-driving cars.

Example: Waymo uses RViz to debug their autonomous taxi fleet.

Manufacturing Robots

Monitoring robot arm movements, collision avoidance, and pick-and-place operations in factories.

Example: ABB uses visualization for robot programming and debugging.

Medical Robotics

Visualizing surgical robots, showing tool positions, force feedback, and safety zones during operations.

Example: Da Vinci surgical system uses 3D visualization for surgeon feedback.

Warehouse Automation

Tracking multiple robots, inventory locations, optimal paths, and fleet coordination in real-time.

Example: Amazon Robotics visualizes fleets of 1000+ robots simultaneously.

Space Exploration

Visualizing rover operations, terrain mapping, sample collection, and communication with Earth.

Example: NASA JPL uses RViz for Mars rover mission planning.

Construction Robots

Monitoring autonomous construction equipment, site mapping, and coordination of multiple machines.

Example: Built Robotics visualizes autonomous bulldozers and excavators.

Why Visualization is Critical

Without Visualization:

  • ❌ Debug by reading log files (slow)
  • ❌ Guess what sensors detect
  • ❌ Hard to explain to non-engineers
  • ❌ Miss spatial relationship issues

With RViz2:

  • ✅ See problems immediately
  • ✅ Understand robot perception
  • ✅ Professional demonstrations
  • ✅ Catch coordinate frame errors
Learning Path

In this tutorial, you'll learn RViz2 in this order:

  1. Getting Started: Create and run your first visualization
  2. Interface Mastery: Learn all panels and controls
  3. Robot State: Visualize robot models and joint states
  4. Perception: Create detection markers and overlays
  5. Transforms: Master coordinate frame visualization
  6. Configuration: Save and share professional setups
  7. Project: Build a complete perception dashboard

Getting Started - Step by Step

Let's start by setting up your environment and running your first RViz2 visualization. Follow these steps carefully.

Before You Begin

Make sure you have ROS 2 Jazzy installed and sourced. Open a terminal and verify:

echo $ROS_DISTRO

You should see: jazzy

Step 1: Create Your Workspace

Setting Up ROS 2 Workspace

First, create a ROS 2 workspace for your visualization project:

mkdir -p ~/ros2_ws/src
cd ~/ros2_ws/src

Create your package:

ros2 pkg create perception_dashboard --build-type ament_python --dependencies rclpy std_msgs geometry_msgs sensor_msgs visualization_msgs nav_msgs tf2_ros

Expected Output:

going to create a new package
package name: perception_dashboard
destination directory: /home/user/ros2_ws/src
package format: 3
version: 0.0.0
description: TODO: Package description
maintainer: ['user ']
licenses: ['TODO: License declaration']
build type: ament_python
dependencies: ['rclpy', 'std_msgs', 'geometry_msgs', 'sensor_msgs', 'visualization_msgs', 'nav_msgs', 'tf2_ros']
creating folder ./perception_dashboard
creating ./perception_dashboard/package.xml
creating source and include folder
creating folder ./perception_dashboard/perception_dashboard
...

Step 2: Create Your First Visualization Node

Creating simple_marker_node.py

Navigate to your package and create your first visualization node:

cd ~/ros2_ws/src/perception_dashboard/perception_dashboard
touch simple_marker_node.py
chmod +x simple_marker_node.py

Open the file and add this code:

#!/usr/bin/env python3
"""
Simple Marker Node - Your First RViz2 Visualization
This node publishes a basic marker that you'll see in RViz2
"""

import rclpy
from rclpy.node import Node
from visualization_msgs.msg import Marker
from std_msgs.msg import ColorRGBA

class SimpleMarkerNode(Node):
    def __init__(self):
        super().__init__('simple_marker_node')
        
        # Create publisher for visualization markers
        self.marker_pub = self.create_publisher(Marker, '/visualization_marker', 10)
        
        # Create timer to publish markers at 1 Hz
        self.timer = self.create_timer(1.0, self.publish_marker)
        
        self.get_logger().info('Simple Marker Node Started!')
        self.get_logger().info('Publishing markers on topic: /visualization_marker')
    
    def publish_marker(self):
        """Publish a simple cube marker"""
        marker = Marker()
        
        # Header
        marker.header.frame_id = "map"
        marker.header.stamp = self.get_clock().now().to_msg()
        
        # Marker properties
        marker.ns = "basic_shapes"
        marker.id = 0
        marker.type = Marker.CUBE
        marker.action = Marker.ADD
        
        # Position (center of the marker)
        marker.pose.position.x = 0.0
        marker.pose.position.y = 0.0
        marker.pose.position.z = 0.5
        
        # Orientation
        marker.pose.orientation.x = 0.0
        marker.pose.orientation.y = 0.0
        marker.pose.orientation.z = 0.0
        marker.pose.orientation.w = 1.0
        
        # Size (1m x 1m x 1m cube)
        marker.scale.x = 1.0
        marker.scale.y = 1.0
        marker.scale.z = 1.0
        
        # Color (Red, semi-transparent)
        marker.color.r = 1.0
        marker.color.g = 0.0
        marker.color.b = 0.0
        marker.color.a = 0.8
        
        # Publish the marker
        self.marker_pub.publish(marker)
        self.get_logger().info('Published red cube marker')

def main(args=None):
    rclpy.init(args=args)
    node = SimpleMarkerNode()
    
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
Understanding This Code
  • create_publisher(Marker, '/visualization_marker', 10): creates a publisher for Marker messages
  • create_timer(1.0, self.publish_marker): calls publish_marker() once per second
  • marker = Marker(): creates a new Marker message
  • marker.header.frame_id = "map": sets the reference frame (coordinate system)
  • marker.type = Marker.CUBE: selects a box shape
  • marker.color fields: RGBA color (Red=1.0, Alpha=0.8 for transparency)
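
If you're curious which other fields a Marker message offers (lifetime, points, text, mesh resources, and so on), you can print its full definition from the command line:

ros2 interface show visualization_msgs/msg/Marker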

Step 3: Configure the Package

Update setup.py

Edit the setup.py file to register your node:

cd ~/ros2_ws/src/perception_dashboard
nano setup.py

Find the entry_points section and modify it:

    entry_points={
        'console_scripts': [
            'simple_marker_node = perception_dashboard.simple_marker_node:main',
        ],
    },

Save and exit (Ctrl+X, then Y, then Enter)

Step 4: Build the Package

Compile Your Code

Build your package using colcon:

cd ~/ros2_ws
colcon build --packages-select perception_dashboard

Expected Output:

Starting >>> perception_dashboard
Finished <<< perception_dashboard [1.23s]

Summary: 1 package finished [1.45s]

Source your workspace:

source install/setup.bash
Pro Tip

Add this to your ~/.bashrc to automatically source your workspace:

echo "source ~/ros2_ws/install/setup.bash" >> ~/.bashrc

Step 5: Run Your Node

Start the Marker Publisher

Open a new terminal and run your node:

source /opt/ros/jazzy/setup.bash
source ~/ros2_ws/install/setup.bash
ros2 run perception_dashboard simple_marker_node

Expected Output:

[INFO] [1234567890.123456789] [simple_marker_node]: Simple Marker Node Started!
[INFO] [1234567890.123456789] [simple_marker_node]: Publishing markers on topic: /visualization_marker
[INFO] [1234567890.123456789] [simple_marker_node]: Published red cube marker
[INFO] [1234567891.123456789] [simple_marker_node]: Published red cube marker
[INFO] [1234567892.123456789] [simple_marker_node]: Published red cube marker
...
Verify It's Working

In another terminal, check if the topic is publishing:

ros2 topic list | grep visualization

You should see: /visualization_marker

ros2 topic echo /visualization_marker --once
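
The echo should print the marker as YAML, similar to the abridged output below (timestamps and exact values will differ):

header:
  stamp:
    sec: 1234567890
    nanosec: 123456789
  frame_id: map
ns: basic_shapes
id: 0
type: 1
action: 0
...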

Step 6: Visualize in RViz2

Launch RViz2 and See Your Marker

Open another new terminal and launch RViz2:

source /opt/ros/jazzy/setup.bash
rviz2
Configure RViz2 to See Your Marker:

1. Set Fixed Frame

In the "Displays" panel on the left:

  • Find "Global Options"
  • Click "Fixed Frame"
  • Make sure it is set to map (the frame our marker is published in)

2. Add Grid

Click "Add" button (bottom left):

  • Select "By display type"
  • Scroll to "Grid"
  • Click "OK"

3. Add Marker Display

Click "Add" button again:

  • Select "By display type"
  • Scroll to "Marker"
  • Click "OK"

4. Configure Marker Topic

In the Marker display settings:

  • Expand "Marker"
  • Click on "Topic"
  • Select /visualization_marker
Success!

You should now see a red cube in the 3D view! The cube is 1 meter on each side, with its center 0.5 meters above the ground plane, so it rests exactly on the grid.

RViz2 Camera Controls
  • Left-click + drag: Rotate the camera
  • Middle-click + drag: Pan the view
  • Scroll wheel: Zoom in/out
  • Right-click: Context menu

Step 7: Experiment and Learn

Modify Your Code

Now let's experiment! Try modifying the code to see different results:

Experiment 1: Change the Color

Edit simple_marker_node.py and change the color to green:

        # Color (Green instead of red)
        marker.color.r = 0.0  # Change from 1.0
        marker.color.g = 1.0  # Change from 0.0
        marker.color.b = 0.0
        marker.color.a = 0.8
Experiment 2: Change the Shape

Try different marker types:

        marker.type = Marker.SPHERE  # Try SPHERE, CYLINDER, ARROW
Experiment 3: Make It Move

Add a counter to make the marker move:

    def __init__(self):
        super().__init__('simple_marker_node')
        self.marker_pub = self.create_publisher(Marker, '/visualization_marker', 10)
        self.timer = self.create_timer(0.1, self.publish_marker)  # Faster updates
        self.counter = 0.0  # Add counter
        self.get_logger().info('Simple Marker Node Started!')
    
    def publish_marker(self):
        """Publish a moving marker"""
        import math  # (better: move this import to the top of the file)
        
        marker = Marker()
        marker.header.frame_id = "map"
        marker.header.stamp = self.get_clock().now().to_msg()
        
        marker.ns = "basic_shapes"
        marker.id = 0
        marker.type = Marker.SPHERE
        marker.action = Marker.ADD
        
        # Make it move in a circle
        marker.pose.position.x = 2.0 * math.cos(self.counter)
        marker.pose.position.y = 2.0 * math.sin(self.counter)
        marker.pose.position.z = 0.5
        
        marker.pose.orientation.w = 1.0
        
        marker.scale.x = 0.5
        marker.scale.y = 0.5
        marker.scale.z = 0.5
        
        marker.color.r = 1.0
        marker.color.g = 0.0
        marker.color.b = 1.0  # Purple
        marker.color.a = 1.0
        
        self.marker_pub.publish(marker)
        self.counter += 0.05  # Increment counter

After each change:

cd ~/ros2_ws
colcon build --packages-select perception_dashboard
source install/setup.bash

Then restart your node (Ctrl+C in the node terminal, then run it again)

Congratulations!

You've successfully created and run your first RViz2 visualization! You now understand the basics of:

  • Creating ROS 2 Python packages
  • Publishing visualization markers
  • Configuring RViz2
  • Building and running ROS 2 nodes

Next: Continue reading to learn about more advanced visualization techniques!

Your Running System

At this point, you should have 2 terminal windows open:

Terminal 1: Node Running
ros2 run perception_dashboard simple_marker_node

Should show: "Published red cube marker" messages

Terminal 2: RViz2 Running
rviz2

Should show: Red cube in 3D view

💡 Tip: Keep both terminals visible so you can see the log messages while observing the visualization!

RViz2 Interface and Intelligent Displays

RViz2 is the primary 3D visualization tool for ROS 2, providing a powerful framework for visualizing robot state, sensor data, and planning results in real-time.

Step 1: Launch RViz2

Start RViz2 with the following command:

ros2 run rviz2 rviz2

Or launch with a specific configuration file:

ros2 run rviz2 rviz2 -d ~/my_config.rviz
Understanding the Interface Layout

  • Displays Panel: Add/remove visualization elements
  • Views Panel: Control camera position and type
  • Time Panel: Display current ROS time
  • Tool Properties: Configure selected tools
  • 3D View: Main visualization window
Essential Toolbar Tools

  • Interact: Default tool for camera manipulation
  • 2D Nav Goal: Set navigation goals
  • 2D Pose Estimate: Set initial robot pose
  • Measure: Measure distances in 3D space
  • Publish Point: Publish clicked points

Pro Tip

Use Ctrl+S to save your RViz configuration and Ctrl+O to load a saved configuration.

Visualizing Robot States and Sensor Data

Robot State Publisher Example
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState
from std_msgs.msg import Header
import math

class RobotStatePublisher(Node):
    def __init__(self):
        super().__init__('robot_state_publisher')
        
        # Publisher for joint states
        self.joint_pub = self.create_publisher(
            JointState, 
            'joint_states', 
            10
        )
        
        # Timer for publishing at 50Hz
        self.timer = self.create_timer(0.02, self.publish_joint_states)
        
        # Joint configuration
        self.joint_names = [
            'joint1', 'joint2', 'joint3', 
            'joint4', 'joint5', 'joint6'
        ]
        
        self.counter = 0.0
        self.get_logger().info('Robot State Publisher initialized')
    
    def publish_joint_states(self):
        msg = JointState()
        msg.header = Header()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.name = self.joint_names
        
        # Create smooth sinusoidal motion for demonstration
        msg.position = [
            math.sin(self.counter) * 0.5,
            math.cos(self.counter) * 0.3,
            math.sin(self.counter * 2) * 0.4,
            math.cos(self.counter * 1.5) * 0.2,
            math.sin(self.counter * 0.5) * 0.6,
            math.cos(self.counter * 0.7) * 0.3
        ]
        
        msg.velocity = []
        msg.effort = []
        
        self.joint_pub.publish(msg)
        self.counter += 0.02

def main(args=None):
    rclpy.init(args=args)
    node = RobotStatePublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
1. Add RobotModel Display

Click "Add" → Select "RobotModel" → Configure:

  • Description Topic: /robot_description
  • Visual Enabled: ✓
  • Alpha: 1.0

2. Configure Fixed Frame

In Global Options, set Fixed Frame to:

base_link
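
The RobotModel display needs a URDF published on /robot_description. If nothing is publishing it yet, one common way (sketched here; my_robot.urdf is a placeholder for your own model file) is to start the standard robot_state_publisher with the file's contents as a parameter:

ros2 run robot_state_publisher robot_state_publisher --ros-args -p robot_description:="$(cat my_robot.urdf)"

With the joint-state publisher above running alongside it, the model's joints will follow the sinusoidal motion being published.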
Sensor Display Types in RViz2
  • LaserScan: 2D lidar data visualization
  • PointCloud2: 3D point cloud visualization
  • Image: Camera image display
  • Camera: Camera with overlays
  • Range: Ultrasonic/IR sensor cones
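
To try the LaserScan display without hardware, here is a minimal sketch of a fake 2D lidar publisher (the topic /scan and frame laser_frame are assumptions for this demo; match them to your own TF tree and display settings):

#!/usr/bin/env python3
"""Fake 2D lidar for testing the LaserScan display (demo sketch)."""

import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class FakeLaserPublisher(Node):
    def __init__(self):
        super().__init__('fake_laser_publisher')
        self.scan_pub = self.create_publisher(LaserScan, '/scan', 10)
        self.timer = self.create_timer(0.1, self.publish_scan)  # 10 Hz

    def publish_scan(self):
        scan = LaserScan()
        scan.header.stamp = self.get_clock().now().to_msg()
        scan.header.frame_id = 'laser_frame'
        scan.angle_min = -math.pi / 2
        scan.angle_max = math.pi / 2
        scan.angle_increment = math.pi / 180  # 1 degree per beam
        scan.range_min = 0.1
        scan.range_max = 10.0

        # Fake a flat wall 3 m in front of the sensor
        num_readings = int((scan.angle_max - scan.angle_min) / scan.angle_increment) + 1
        scan.ranges = [
            min(3.0 / max(math.cos(scan.angle_min + i * scan.angle_increment), 1e-3),
                scan.range_max)
            for i in range(num_readings)
        ]
        self.scan_pub.publish(scan)

def main(args=None):
    rclpy.init(args=args)
    node = FakeLaserPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()

In RViz2, add a LaserScan display on /scan and either set the Fixed Frame to laser_frame or publish a transform connecting laser_frame to your fixed frame.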

Perception Visualization (Detection Markers)

Create professional detection markers for object recognition and tracking.

Creating Detection Markers
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from visualization_msgs.msg import Marker, MarkerArray
from geometry_msgs.msg import Point
from std_msgs.msg import ColorRGBA
import random

class PerceptionVisualizer(Node):
    def __init__(self):
        super().__init__('perception_visualizer')
        
        # Publisher for detection markers
        self.marker_pub = self.create_publisher(
            MarkerArray, 
            '/detections', 
            10
        )
        
        self.timer = self.create_timer(0.5, self.publish_detections)
        self.detection_id = 0
        self.get_logger().info('Perception Visualizer initialized')
    
    def create_bounding_box(self, id, x, y, z, size_x, size_y, size_z, 
                              label, confidence):
        """Create a bounding box marker for object detection"""
        marker = Marker()
        marker.header.frame_id = 'base_link'
        marker.header.stamp = self.get_clock().now().to_msg()
        marker.ns = 'detections'
        marker.id = id
        marker.type = Marker.CUBE
        marker.action = Marker.ADD
        
        # Position
        marker.pose.position.x = x
        marker.pose.position.y = y
        marker.pose.position.z = z
        marker.pose.orientation.w = 1.0
        
        # Size
        marker.scale.x = size_x
        marker.scale.y = size_y
        marker.scale.z = size_z
        
        # Color based on confidence
        marker.color = ColorRGBA()
        marker.color.r = 1.0 - confidence
        marker.color.g = confidence
        marker.color.b = 0.0
        marker.color.a = 0.5  # Semi-transparent
        
        marker.lifetime.sec = 1
        
        return marker
    
    def create_text_marker(self, id, x, y, z, text):
        """Create a text marker for labels"""
        marker = Marker()
        marker.header.frame_id = 'base_link'
        marker.header.stamp = self.get_clock().now().to_msg()
        marker.ns = 'labels'
        marker.id = id
        marker.type = Marker.TEXT_VIEW_FACING
        marker.action = Marker.ADD
        
        marker.pose.position.x = x
        marker.pose.position.y = y
        marker.pose.position.z = z + 0.5  # Above the box
        
        marker.scale.z = 0.2  # Text height
        
        marker.color = ColorRGBA()
        marker.color.r = 1.0
        marker.color.g = 1.0
        marker.color.b = 1.0
        marker.color.a = 1.0
        
        marker.text = text
        marker.lifetime.sec = 1
        
        return marker
    
    def publish_detections(self):
        marker_array = MarkerArray()
        
        # Simulate 3 detected objects
        objects = [
            {'label': 'Person', 'pos': (2.0, 1.0, 0.9), 
             'size': (0.5, 0.5, 1.8), 'conf': 0.95},
            {'label': 'Car', 'pos': (5.0, -2.0, 0.8), 
             'size': (4.5, 2.0, 1.6), 'conf': 0.88},
            {'label': 'Bicycle', 'pos': (3.0, 3.0, 0.5), 
             'size': (1.8, 0.6, 1.0), 'conf': 0.75}
        ]
        
        for i, obj in enumerate(objects):
            # Add slight random motion
            x = obj['pos'][0] + random.uniform(-0.1, 0.1)
            y = obj['pos'][1] + random.uniform(-0.1, 0.1)
            z = obj['pos'][2]
            
            # Bounding box
            box_marker = self.create_bounding_box(
                i * 2, x, y, z,
                obj['size'][0], obj['size'][1], obj['size'][2],
                obj['label'], obj['conf']
            )
            marker_array.markers.append(box_marker)
            
            # Label with confidence
            text = f"{obj['label']} ({obj['conf']:.2f})"
            text_marker = self.create_text_marker(
                i * 2 + 1, x, y, z, text
            )
            marker_array.markers.append(text_marker)
        
        self.marker_pub.publish(marker_array)
        self.get_logger().info(f'Published {len(marker_array.markers)} markers')

def main(args=None):
    rclpy.init(args=args)
    node = PerceptionVisualizer()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
Marker Types Available
  • CUBE/SPHERE/CYLINDER: Basic shapes
  • ARROW: Directional indicators
  • LINE_STRIP/LINE_LIST: Connected lines
  • TEXT_VIEW_FACING: Text labels
  • MESH_RESOURCE: Custom 3D models
  • TRIANGLE_LIST: Custom geometry
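
For example, a LINE_STRIP marker draws a connected line through a sequence of points, which is handy for detection footprints or simple paths. One possible helper you could add to the PerceptionVisualizer class above (it reuses the Point import already present):

    def create_path_marker(self, marker_id, points_xy):
        """Draw a connected line through (x, y) points using LINE_STRIP."""
        marker = Marker()
        marker.header.frame_id = 'base_link'
        marker.header.stamp = self.get_clock().now().to_msg()
        marker.ns = 'paths'
        marker.id = marker_id
        marker.type = Marker.LINE_STRIP
        marker.action = Marker.ADD
        marker.pose.orientation.w = 1.0

        # For LINE_STRIP only scale.x is used: the line width in meters
        marker.scale.x = 0.05

        marker.color.r = 0.0
        marker.color.g = 0.5
        marker.color.b = 1.0
        marker.color.a = 1.0

        # Points are interpreted in the marker's frame; pass floats
        for x, y in points_xy:
            p = Point()
            p.x = float(x)
            p.y = float(y)
            p.z = 0.05
            marker.points.append(p)

        return marker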

Transform Visualization

TF Publisher Example
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from tf2_ros import TransformBroadcaster
from geometry_msgs.msg import TransformStamped
import math

class TFPublisher(Node):
    def __init__(self):
        super().__init__('tf_publisher')
        
        self.tf_broadcaster = TransformBroadcaster(self)
        self.timer = self.create_timer(0.01, self.publish_transforms)
        self.counter = 0.0
        
        self.get_logger().info('TF Publisher initialized')
    
    def publish_transforms(self):
        # Base link to world
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()
        t.header.frame_id = 'world'
        t.child_frame_id = 'base_link'
        
        # Simulate robot moving in a circle
        radius = 2.0
        t.transform.translation.x = radius * math.cos(self.counter)
        t.transform.translation.y = radius * math.sin(self.counter)
        t.transform.translation.z = 0.0
        
        # Rotation to face forward along path
        t.transform.rotation.x = 0.0
        t.transform.rotation.y = 0.0
        t.transform.rotation.z = math.sin(self.counter / 2.0)
        t.transform.rotation.w = math.cos(self.counter / 2.0)
        
        self.tf_broadcaster.sendTransform(t)
        
        # Camera link relative to base_link
        t2 = TransformStamped()
        t2.header.stamp = self.get_clock().now().to_msg()
        t2.header.frame_id = 'base_link'
        t2.child_frame_id = 'camera_link'
        
        t2.transform.translation.x = 0.3
        t2.transform.translation.y = 0.0
        t2.transform.translation.z = 0.5
        
        t2.transform.rotation.w = 1.0
        
        self.tf_broadcaster.sendTransform(t2)
        
        self.counter += 0.01

def main(args=None):
    rclpy.init(args=args)
    node = TFPublisher()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
Add TF Display

Click "Add" → Select "TF" → Configure:

  • Show Names: ✓
  • Show Axes: ✓
  • Show Arrows: ✓
  • Marker Scale: 1.0
View TF Tree

Generate a TF tree diagram:

ros2 run tf2_tools view_frames

This generates a PDF showing the TF tree hierarchy.

Common TF Issues
  • Missing transforms: Check if all required frames are being published
  • Old transforms: Verify timestamp synchronization
  • Disconnected frames: Ensure complete chain from fixed frame
  • High latency: Reduce publish rate or optimize transform logic
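
To diagnose these problems from the command line (frame names below match the TF publisher example above), tf2_echo prints the transform between two frames and tf2_monitor reports publishing rates and delays:

ros2 run tf2_ros tf2_echo world base_link
ros2 run tf2_ros tf2_monitor world base_link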

Custom RViz Configurations

RViz Configuration Structure
Panels:
  - Class: rviz_common/Displays
    Name: Displays
  - Class: rviz_common/Views
    Name: Views
  - Class: rviz_common/Time
    Name: Time

Visualization Manager:
  Class: ""
  Displays:
    - Class: rviz_default_plugins/Grid
      Name: Grid
      Enabled: true
      Plane Cell Count: 50
      Cell Size: 1.0
      Color: 160; 160; 160
      
    - Class: rviz_default_plugins/RobotModel
      Name: Robot Model
      Enabled: true
      Description Topic: /robot_description
      Visual Enabled: true
      Alpha: 1.0
      
    - Class: rviz_default_plugins/TF
      Name: TF
      Enabled: true
      Show Names: true
      Show Axes: true
      Marker Scale: 1.0
      
    - Class: rviz_default_plugins/MarkerArray
      Name: Detections
      Enabled: true
      Topic: /detections
      
  Global Options:
    Fixed Frame: world
    Background Color: 48; 48; 48
    
  Views:
    Current:
      Class: rviz_default_plugins/Orbit
      Distance: 10.0
      Focal Point:
        X: 0.0
        Y: 0.0
        Z: 0.0
      Pitch: 0.5
      Yaw: 0.5
1. Save Configuration

File → Save Config As → Choose location and name

Keyboard: Ctrl+S

2. Load Configuration

ros2 run rviz2 rviz2 -d config.rviz

3. Launch with Config

Include the RViz2 node in your launch file for automatic loading, as in the sketch below
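
A minimal sketch of such a launch entry (package and file names are placeholders; the full dashboard launch file later in this tutorial shows a complete version):

from launch import LaunchDescription
from launch_ros.actions import Node
from launch.substitutions import PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare

def generate_launch_description():
    # Assumes config.rviz is installed into the package's rviz/ folder
    rviz_config = PathJoinSubstitution(
        [FindPackageShare('perception_dashboard'), 'rviz', 'config.rviz'])

    return LaunchDescription([
        Node(
            package='rviz2',
            executable='rviz2',
            name='rviz2',
            arguments=['-d', rviz_config],
            output='screen',
        ),
    ])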

Hands-on Project: Intelligent Robot Perception Dashboard

Project Overview

Create a comprehensive RViz2 configuration that visualizes a complete intelligent robot system including robot state, sensor data, object detections, and planning information.

Project Requirements

Camera Visualization

Display camera feed with overlays showing detected objects and confidence scores.

  • Real-time image display
  • Detection overlays
  • Proper frame alignment
Detection Markers

3D bounding boxes with labels and confidence scores for all detected objects.

  • Bounding boxes
  • Text labels
  • Confidence visualization
Transform Tree

Complete TF visualization showing all coordinate frames and their relationships.

  • All frames visible
  • Frame names displayed
  • Axes visualization
Trajectory Visualization

Display robot path history and planned trajectory with clear differentiation.

  • Historical path
  • Planned trajectory
  • Color-coded paths
Complete Dashboard Implementation

The complete perception dashboard code integrates all concepts covered in this tutorial. Study the example carefully and adapt it for your specific use case.

#!/usr/bin/env python3
"""
Comprehensive Perception Dashboard for Intelligent Robot Systems
Combines robot state, sensor data, detections, and path planning
"""

import rclpy
from rclpy.node import Node
from visualization_msgs.msg import Marker, MarkerArray
from geometry_msgs.msg import Point, PoseStamped
from nav_msgs.msg import Path
from std_msgs.msg import ColorRGBA, Header
from tf2_ros import TransformBroadcaster
from geometry_msgs.msg import TransformStamped
import math
import random

class PerceptionDashboard(Node):
    def __init__(self):
        super().__init__('perception_dashboard')
        
        # Publishers
        self.marker_pub = self.create_publisher(
            MarkerArray, '/perception/markers', 10
        )
        self.path_pub = self.create_publisher(
            Path, '/planned_path', 10
        )
        self.trajectory_pub = self.create_publisher(
            Path, '/robot_trajectory', 10
        )
        
        # TF broadcaster
        self.tf_broadcaster = TransformBroadcaster(self)
        
        # Timers
        self.create_timer(0.1, self.publish_detections)
        self.create_timer(0.05, self.publish_transforms)
        self.create_timer(1.0, self.publish_planned_path)
        self.create_timer(0.1, self.update_trajectory)
        
        # State variables
        self.counter = 0.0
        self.trajectory_points = []
        self.max_trajectory_points = 100
        
        self.get_logger().info('Perception Dashboard initialized')
    
    def publish_detections(self):
        """Publish object detection markers"""
        marker_array = MarkerArray()
        
        # Simulated detections
        detections = [
            {
                'id': 0, 'class': 'person', 
                'pos': (3.0, 2.0, 0.9),
                'size': (0.5, 0.5, 1.8),
                'confidence': 0.92,
                'velocity': (0.5, 0.2, 0.0)
            },
            {
                'id': 1, 'class': 'car',
                'pos': (8.0, -3.0, 0.8),
                'size': (4.5, 2.0, 1.5),
                'confidence': 0.88,
                'velocity': (-1.0, 0.0, 0.0)
            }
        ]
        
        for det in detections:
            # Create markers for each detection
            # ... (implementation details)
            pass
        
        self.marker_pub.publish(marker_array)
    
    def publish_transforms(self):
        """Publish robot transform tree"""
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()
        t.header.frame_id = 'world'
        t.child_frame_id = 'base_link'
        
        radius = 3.0
        t.transform.translation.x = radius * math.cos(self.counter * 0.5)
        t.transform.translation.y = radius * math.sin(self.counter * 0.5)
        t.transform.translation.z = 0.0
        
        self.tf_broadcaster.sendTransform(t)
        self.counter += 0.05
    
    def publish_planned_path(self):
        """Publish planned navigation path"""
        path = Path()
        path.header.frame_id = 'world'
        path.header.stamp = self.get_clock().now().to_msg()
        
        # Create planned path
        for i in range(50):
            pose = PoseStamped()
            pose.header = path.header
            # ... (path generation)
            path.poses.append(pose)
        
        self.path_pub.publish(path)
    
    def update_trajectory(self):
        """Update robot trajectory history"""
        # Track robot path over time
        # ... (implementation)
        pass

def main(args=None):
    rclpy.init(args=args)
    node = PerceptionDashboard()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
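
The planned-path and trajectory methods above are left as placeholders. One possible way to fill them in, reusing the same circular motion as publish_transforms, is sketched below; in a real system you would read the robot pose from TF or odometry and take the planned path from your planner:

    def publish_planned_path(self):
        """Publish a simple circular planned path (sketch implementation)."""
        path = Path()
        path.header.frame_id = 'world'
        path.header.stamp = self.get_clock().now().to_msg()

        radius = 3.0
        for i in range(50):
            angle = self.counter * 0.5 + i * 0.1  # look ahead along the circle
            pose = PoseStamped()
            pose.header = path.header
            pose.pose.position.x = radius * math.cos(angle)
            pose.pose.position.y = radius * math.sin(angle)
            pose.pose.orientation.w = 1.0
            path.poses.append(pose)

        self.path_pub.publish(path)

    def update_trajectory(self):
        """Append the current robot position to the trajectory history."""
        radius = 3.0
        pose = PoseStamped()
        pose.header.frame_id = 'world'
        pose.header.stamp = self.get_clock().now().to_msg()
        pose.pose.position.x = radius * math.cos(self.counter * 0.5)
        pose.pose.position.y = radius * math.sin(self.counter * 0.5)
        pose.pose.orientation.w = 1.0

        self.trajectory_points.append(pose)
        # Keep only the most recent points
        self.trajectory_points = self.trajectory_points[-self.max_trajectory_points:]

        trajectory = Path()
        trajectory.header.frame_id = 'world'
        trajectory.header.stamp = self.get_clock().now().to_msg()
        trajectory.poses = list(self.trajectory_points)
        self.trajectory_pub.publish(trajectory)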

ROS 2 Package Structure

Understanding proper ROS 2 package organization is essential for creating maintainable and professional robot perception dashboards.

Complete Package Directory Structure
perception_dashboard/
├── perception_dashboard/
│   ├── __init__.py
│   ├── perception_node.py
│   ├── marker_utils.py
│   ├── tf_publisher.py
│   └── sensor_publisher.py
├── launch/
│   ├── dashboard.launch.py
│   ├── perception_only.launch.py
│   └── full_system.launch.py
├── config/
│   ├── perception_dashboard.rviz
│   ├── robot_params.yaml
│   └── sensor_config.yaml
├── rviz/
│   ├── default.rviz
│   └── debug.rviz
├── resource/
│   └── perception_dashboard
├── test/
│   ├── test_perception_node.py
│   └── test_marker_utils.py
├── package.xml
├── setup.py
├── setup.cfg
└── README.md

Essential Package Files

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>perception_dashboard</name>
  <version>1.0.0</version>
  <description>Intelligent Robot Perception Dashboard for RViz2</description>
  <maintainer email="a.albanna@uop.edu.jo">Dr. Abulkarim Albanna</maintainer>
  <license>Apache-2.0</license>

  <buildtool_depend>ament_python</buildtool_depend>

  <depend>rclpy</depend>
  <depend>std_msgs</depend>
  <depend>geometry_msgs</depend>
  <depend>sensor_msgs</depend>
  <depend>visualization_msgs</depend>
  <depend>nav_msgs</depend>
  <depend>tf2_ros</depend>
  <depend>tf2_geometry_msgs</depend>

  <test_depend>ament_copyright</test_depend>
  <test_depend>ament_flake8</test_depend>
  <test_depend>ament_pep257</test_depend>
  <test_depend>python3-pytest</test_depend>

  <export>
    <build_type>ament_python</build_type>
  </export>
</package>

from setuptools import setup
import os
from glob import glob

package_name = 'perception_dashboard'

setup(
    name=package_name,
    version='1.0.0',
    packages=[package_name],
    data_files=[
        ('share/ament_index/resource_index/packages',
            ['resource/' + package_name]),
        ('share/' + package_name, ['package.xml']),
        # Include launch files
        (os.path.join('share', package_name, 'launch'),
            glob('launch/*.launch.py')),
        # Include config files
        (os.path.join('share', package_name, 'config'),
            glob('config/*.yaml')),
        # Include RViz configs
        (os.path.join('share', package_name, 'rviz'),
            glob('rviz/*.rviz')),
    ],
    install_requires=['setuptools'],
    zip_safe=True,
    maintainer='Dr. Abulkarim Albanna',
    maintainer_email='a.albanna@uop.edu.jo',
    description='Intelligent Robot Perception Dashboard for RViz2',
    license='Apache-2.0',
    tests_require=['pytest'],
    entry_points={
        'console_scripts': [
            'perception_node = perception_dashboard.perception_node:main',
            'tf_publisher = perception_dashboard.tf_publisher:main',
            'sensor_publisher = perception_dashboard.sensor_publisher:main',
        ],
    },
)

#!/usr/bin/env python3
"""
Launch file for Perception Dashboard
Starts all necessary nodes and RViz2 with custom configuration
"""

from launch import LaunchDescription
from launch_ros.actions import Node
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration, PathJoinSubstitution
from launch_ros.substitutions import FindPackageShare
import os

def generate_launch_description():
    # Declare launch arguments
    use_sim_time = LaunchConfiguration('use_sim_time', default='false')
    
    # RViz config file
    rviz_config_file = PathJoinSubstitution([
        FindPackageShare('perception_dashboard'),
        'rviz',
        'perception_dashboard.rviz'
    ])
    
    # Parameters file
    params_file = PathJoinSubstitution([
        FindPackageShare('perception_dashboard'),
        'config',
        'robot_params.yaml'
    ])
    
    return LaunchDescription([
        # Declare arguments
        DeclareLaunchArgument(
            'use_sim_time',
            default_value='false',
            description='Use simulation time if true'
        ),
        
        # Perception node
        Node(
            package='perception_dashboard',
            executable='perception_node',
            name='perception_node',
            output='screen',
            parameters=[
                params_file,
                {'use_sim_time': use_sim_time}
            ]
        ),
        
        # TF publisher
        Node(
            package='perception_dashboard',
            executable='tf_publisher',
            name='tf_publisher',
            output='screen',
            parameters=[{'use_sim_time': use_sim_time}]
        ),
        
        # Sensor publisher
        Node(
            package='perception_dashboard',
            executable='sensor_publisher',
            name='sensor_publisher',
            output='screen',
            parameters=[{'use_sim_time': use_sim_time}]
        ),
        
        # RViz2
        Node(
            package='rviz2',
            executable='rviz2',
            name='rviz2',
            arguments=['-d', rviz_config_file],
            output='screen'
        ),
    ])
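
Assuming this file is saved as launch/dashboard.launch.py (as in the package layout above), it can be started directly, optionally overriding the simulation-time argument:

ros2 launch perception_dashboard dashboard.launch.py use_sim_time:=true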

# robot_params.yaml
perception_node:
  ros__parameters:
    # Update rates (Hz)
    marker_update_rate: 10.0
    transform_update_rate: 50.0
    path_update_rate: 1.0
    
    # Visualization settings
    marker_lifetime: 1.0
    trajectory_max_points: 100
    
    # Detection settings
    confidence_threshold: 0.5
    max_detections: 20
    
    # Frame IDs
    world_frame: "world"
    base_frame: "base_link"
    camera_frame: "camera_link"
    laser_frame: "laser_frame"
    
    # Colors (RGBA)
    person_color: [0.0, 1.0, 0.0, 0.5]
    car_color: [1.0, 0.0, 0.0, 0.5]
    bicycle_color: [0.0, 0.0, 1.0, 0.5]
    
    # Marker sizes
    text_size: 0.2
    arrow_shaft_diameter: 0.1
    arrow_head_diameter: 0.2
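
Inside the node, these values can be declared and read with rclpy's parameter API. A minimal, self-contained sketch (only a few of the parameters above are shown; the node name must match the YAML's top-level key, perception_node):

#!/usr/bin/env python3
"""Sketch: reading values from robot_params.yaml with the rclpy parameter API."""

import rclpy
from rclpy.node import Node

class ParamDemo(Node):
    def __init__(self):
        super().__init__('perception_node')
        # Declare parameters with defaults; values from the YAML override them
        self.declare_parameter('marker_update_rate', 10.0)
        self.declare_parameter('confidence_threshold', 0.5)
        self.declare_parameter('world_frame', 'world')

        rate = self.get_parameter('marker_update_rate').value
        threshold = self.get_parameter('confidence_threshold').value
        frame = self.get_parameter('world_frame').value
        self.get_logger().info(f'rate={rate} Hz, threshold={threshold}, frame={frame}')

def main(args=None):
    rclpy.init(args=args)
    node = ParamDemo()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()

Run it with the parameter file, for example:

ros2 run perception_dashboard perception_node --ros-args --params-file config/robot_params.yaml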

# setup.cfg
[develop]
script_dir=$base/lib/perception_dashboard

[install]
install_scripts=$base/lib/perception_dashboard

Building and Running the Package

1
Create Package
cd ~/ros2_ws/src
ros2 pkg create perception_dashboard --build-type ament_python --dependencies rclpy std_msgs geometry_msgs sensor_msgs visualization_msgs nav_msgs tf2_ros
2
Build Package
cd ~/ros2_ws
colcon build --packages-select perception_dashboard
source install/setup.bash
3
Run Individual Node
ros2 run perception_dashboard perception_node
4
Launch Full System
ros2 launch perception_dashboard dashboard.launch.py
Package Organization Best Practices
  • Keep Python modules in the package directory
  • Use launch files for complex startups
  • Store configurations in YAML files
  • Keep RViz configs in dedicated folder
  • Include tests for all major functions
  • Document with clear README
  • Use meaningful node and topic names
  • Follow ROS 2 naming conventions
Package Checklist

Required Files:
  • package.xml
  • setup.py
  • setup.cfg
  • resource/perception_dashboard
  • perception_dashboard/__init__.py

Recommended Structure:
  • launch/ for launch files
  • config/ and rviz/ for parameters and saved RViz2 configurations
  • test/ for unit tests
  • README.md for documentation

Additional Resources

Video Tutorials
  • ROS 2 RViz2 Basics
  • Advanced Visualization
  • Custom Markers
  • Perception Systems
Example Projects
  • Navigation Stack Viz
  • Object Detection
  • Multi-Robot Systems
  • SLAM Mapping
Important Dates
  • Assignment Release: Week 11, Day 1
  • Submission Deadline: Week 12, Day 7 (23:59)
  • Late Submission: 10% penalty per day
  • Demo Session: Week 13