Robot-assisted spine surgery has been clinically validated worldwide, demonstrating enhanced
performance, efficiency, and safety. However, current spine robotic systems rely on fiducial
markers, requiring additional incisions that increase the risk of infection. Additionally,
optical tracking systems are susceptible to line-of-sight obstructions. Previous research has
also shown the limitations of simplifying obstacles to single points, an approach that breaks
down in complex obstacle environments.
In this thesis, a multi-robot cooperative platform for surgery was developed, featuring a
plug-in that integrates a markerless depth-image segmentation network for globally optimal,
collision-free object tracking. The system enables a tracking robot arm to automatically adjust
its pose to maintain visual contact with a moving target, while the second surgical robot arm
acts as an obstacle. To achieve this, precise hand-eye calibration algorithms, including camera
calibration and ArUco marker detection, were developed with ROS2. Their accuracy was assessed in
a simulation environment before being tested on the real robot. cuRobo, a state-of-the-art
GPU-accelerated motion planning library, was integrated by developing ROS2 packages for highly
parallel inverse kinematics and trajectory optimization. Leveraging parallel computing makes it
possible to obtain a globally optimal solution for obstacle avoidance. The motion-tracking performance of
the real robot on the generated trajectory was analyzed. Separate experiments and solve time
analyses were conducted for both simple and complex obstacle avoidance scenarios. The pipeline
was further developed into a continuous, efficient, collision-free object-tracking system. A
tracking performance analysis showed that the system generated near-optimal, smooth, and
collision-free paths. Finally, the algorithms were applied in a multi-robot marker-tracking
experiment, and a demonstration showcased markerless knee tracking along collision-free paths.
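The hand-eye calibration step mentioned above rests on the classical AX = XB constraint: a relative gripper motion A (from robot forward kinematics) and the corresponding relative camera motion B (observed via the calibration target) are linked by the fixed camera-to-gripper transform X. A minimal NumPy sketch of this relation is shown below; all poses here are synthetic illustrations, not values from the thesis:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical ground-truth hand-eye transform X (camera pose in the gripper frame)
X = make_T(rot_z(0.3), np.array([0.05, 0.0, 0.1]))

# A: relative gripper motion between two robot poses (from forward kinematics)
A = make_T(rot_z(0.7), np.array([0.1, -0.2, 0.05]))

# B: corresponding relative camera motion observed via the marker target.
# Closing the kinematic loop gives B = X^-1 · A · X.
B = np.linalg.inv(X) @ A @ X

# The hand-eye calibration constraint A·X = X·B must hold exactly
assert np.allclose(A @ X, X @ B)
```

In practice, a solver (e.g. OpenCV's `cv2.calibrateHandEye`) recovers X from many such (A, B) motion pairs collected while the robot moves around a static marker; the sketch only verifies the constraint that such solvers exploit.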