3D Mapping Robot

Develop a system that allows a mobile robot to create 3D reconstructions and build maps.


Robot Operating System (ROS), SLAM, and 3D Reconstruction


D. Ortiz-Monasterio, S. López, and D. Azúa


15 weeks

My role

I was responsible for implementing the mobile robot with ROS, the navigation and odometry, and the communication with the server. I designed the camera bracket and the electrical/electronic configuration, and I collaborated with my teammates to integrate the whole system.

Proof of concept

Tecnológico de Monterrey 2017


Build a low-cost 3D mapping system by combining a 3D sensor with a mobile robot.

Our solution


Our Approach to Design



We iterated through several sketches and models until we had 2D and 3D representations of the final prototype. Next, we created computer-aided designs and models for all the components of the device. Then we 3D printed the components, checked tolerances and fit, and reprinted them where necessary. Finally, we post-processed the parts by sanding and painting them.

Primary research

We interviewed a class instructor, a lab manager, and a student to understand how the current inventory management system works in the University of Washington laboratories. Then we created surveys for students and lab staff to understand the manual process for managing laboratory equipment and items. We also surveyed potential users, including lab users, warehouse employees, and library administrators, to gather information about desired features and concerns about interacting with robots.

User Testing

First, we ran 1:1 user evaluations to test the hardware/software and the check-in/check-out process. Participants performed a series of tasks under instruction from one of our team members; we took notes and video recordings, with six participants involved in each round. Then we ran a fly-on-the-wall session to observe the human-robot interaction. We took notes on people's behaviors while the mobile robot navigated through the environment, both with and without sound alerts. The robot received a series of navigation goals from the operator, and the navigation stack handled the routing and planning. We observed users for 20 minutes in the GIX laboratory.

Functional Testing

First, we defined metrics for each part of the system. For navigation, we sent a navigation goal to the Fetch and measured the success rate, completion time, distance from the navigation goal, and number of collisions. For the Fetch and Kinova grasping, we also measured the pick-and-place time and success rate.
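Metrics like these can be aggregated from per-trial logs with a short script. A minimal sketch in plain Python; the trial records and field names below are illustrative, not our actual data:

```python
# Aggregate hypothetical navigation-trial logs into the metrics we tracked:
# success rate, mean completion time, mean final distance from the nav goal,
# and total collisions. The trial data below is made up for illustration.
from statistics import mean

def summarize_nav_trials(trials):
    """Each trial is a dict: 'success' (bool), 'time_s', 'dist_to_goal_m', 'collisions'."""
    return {
        "success_rate": sum(t["success"] for t in trials) / len(trials),
        "mean_time_s": mean(t["time_s"] for t in trials),
        "mean_dist_to_goal_m": mean(t["dist_to_goal_m"] for t in trials),
        "total_collisions": sum(t["collisions"] for t in trials),
    }

trials = [
    {"success": True,  "time_s": 42.0, "dist_to_goal_m": 0.12, "collisions": 0},
    {"success": True,  "time_s": 55.5, "dist_to_goal_m": 0.30, "collisions": 1},
    {"success": False, "time_s": 90.0, "dist_to_goal_m": 1.80, "collisions": 0},
]
print(summarize_nav_trials(trials))
```

The same pattern extends to the grasping tests by swapping in pick-and-place time and success fields.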

Secondary Research

First, we looked for previous implementations of mobile robots with ROS and depth cameras. Then we looked for methods to reconstruct 3D images using a depth sensor mounted on a mobile robot.


Ubuntu 16.04 was installed on the board, followed by the Robot Operating System (ROS), on which the interface was developed. The R200 camera implements a long-range 3D imaging system with stereo vision. It provides color, depth, and infrared video streams, as well as texture information: the color image is overlaid on the depth image to create a colored point cloud, which is merged into a 3D model for reconstruction.

ROS controls both the sensor and the robot, and the resulting maps are visualized with RViz, the graphical interface included with ROS. The system uses the SLAM technique (Simultaneous Localization and Mapping), which consists of extracting reference points, associating data, estimating the state, updating the state, and updating the reference points. The camera acts as a publisher of environment information, the robot's actuators act as subscribers, and the robot in turn publishes its location.
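The color-over-depth overlay described above amounts to back-projecting each depth pixel through the pinhole camera model and attaching the registered RGB value. A minimal sketch; the intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative placeholders, not the R200's actual calibration:

```python
# Back-project depth pixels into camera-frame 3D points with the pinhole
# model and attach the registered color, producing a colored point cloud
# like the one the depth camera pipeline publishes. Intrinsics are made up.
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Map pixel (u, v) with depth in meters to a camera-frame (x, y, z) point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def colored_cloud(depth, rgb, fx, fy, cx, cy):
    """depth: dict {(u, v): meters}; rgb: dict {(u, v): (r, g, b)}."""
    cloud = []
    for (u, v), d in depth.items():
        if d > 0:  # skip invalid (zero) depth readings
            cloud.append(deproject(u, v, d, fx, fy, cx, cy) + rgb[(u, v)])
    return cloud

# A pixel at the principal point, 1 m away, maps to (0, 0, 1) plus its color.
pts = colored_cloud({(320, 240): 1.0}, {(320, 240): (255, 0, 0)},
                    fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(pts)  # [(0.0, 0.0, 1.0, 255, 0, 0)]
```

In the real system this per-pixel math is done by the camera driver, which publishes the colored cloud on a ROS topic for SLAM and RViz to consume.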

My takeaways

This was my first experience using the Robot Operating System, so I learned how nodes let us run multiple tasks and communicate between different components. I learned how to create launch files, motion drivers, and 3D models for the robot. Finally, I came to understand the importance of distributing the load between the client and server so the system could provide a 3D representation in real time.

Designing using emerging technologies in new and impactful ways.