
BAXTER STOCKING STUFFER


OVERVIEW

The goal of this project was to utilize ROS to get a Rethink Robotics Baxter robot to act as Santa's helper. Baxter had a set of presents and a set of stockings for our cohort members in his workspace, and he needed to identify each stocking, locate the corresponding present, and place the present in the stocking.


PROJECT OVERVIEW

We determined that it would be easiest to develop an algorithm that utilized the cameras in Baxter's hands. We used OpenCV for color detection and a tag-tracking algorithm to estimate the pose of each stocking. This information was then fed into inverse kinematics equations to determine how to move our end effector (Baxter's gripper) to a given location. Our team broke the task into steps in order to come up with a solution that would lead to the fastest and most accurate present sorting:


  1. Sweep the stockings

  2. Store the tag ID and location from each stocking

  3. Relate the stocking tag ID to a present color

  4. Sweep the table

  5. Identify the colors and locations of presents

  6. Move Baxter's gripper to the present location

  7. Pick up the present

  8. Move Baxter's gripper to the corresponding stocking location

  9. Drop the present into the stocking
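The matching logic behind steps 2, 3, and 5 can be sketched in Python. The function name, the tag-to-color mapping, and the data layout below are illustrative assumptions, not the actual code from our package:

```python
# A minimal sketch of the present-to-stocking matching step.
# Tag IDs and colors here are assumed example values.

def assign_presents(stockings, presents):
    """Match each detected present to the stocking whose tag maps to its color.

    stockings: dict mapping tag_id -> (x, y, z) stocking pose  (steps 1-2)
    presents:  dict mapping color  -> (x, y, z) present pose   (steps 4-5)
    Returns a pick-and-place plan: a list of (color, pick_pose, place_pose).
    """
    # Step 3: relate each stocking tag ID to a present color (assumed mapping)
    TAG_TO_COLOR = {0: "red", 1: "green", 2: "blue"}

    plan = []
    for tag_id, stocking_pose in stockings.items():
        color = TAG_TO_COLOR.get(tag_id)
        if color in presents:
            # Steps 6-9: pick at the present pose, place at the stocking pose
            plan.append((color, presents[color], stocking_pose))
    return plan
```

Each entry in the returned plan corresponds to one pick-and-place cycle (steps 6 through 9) executed by the motion nodes.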

[Image: Baxter's workspace]

IMPLEMENTATION

We built a ROS package with four different nodes, each running in a specific sequence to accomplish the goal of stocking stuffing. The four nodes are:

  • needed_present_identifier.py

  • poseusingidandqr.py

  • poseusingcolordetection.py

  • back_to_stocking_and_release.py

Please follow this link to explore the code in further detail, or this one to watch a demo video.


FURTHER IMPROVEMENTS

An improvement to this project would be to have Baxter identify presents and stockings using Microsoft's Kinect or the Asus Xtion Pro Live together with PCL (the Point Cloud Library), thus eliminating the need for tags. Point clouds are more accurate than tags for pose estimation, but they are harder to get working. If we got this working, we could put a picture of each person on their stocking, have Baxter recognize who it is, and sort presents accordingly. Furthermore, we could locate presents not only by color but also by shape. Another improvement would be to use both arms at the same time: one arm could hold the stocking open while the other drops the present in, eliminating the need for a placeholder to keep the stocking open. We could also use both arms simultaneously to scan stockings and locate presents ... the possibilities are endless!
