Kinect Controller

An application to manipulate programs and 2D/3D objects using a depth sensor and open-source tools.


Kinect SDK, Unity 3D, FAAST, and Client/Server configurations.


D. Jiménez and D. Azúa


4 weeks

My role

I configured FAAST to acquire the input we needed and created the libraries that associate the position of the hand with a specific gesture. I was also in charge of the animations, and once the gestures were acquired I built a C# project that manipulates 3D objects based on them.
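The hand-to-gesture association can be sketched as a simple classifier over the joint positions FAAST streams. This is an illustrative sketch in Python (the project itself used C#); the threshold and gesture names are hypothetical.

```python
# Hypothetical sketch: label a hand pose from its position relative to the
# shoulder joint, using (x, y) coordinates in meters as streamed by FAAST.
# The 0.25 m threshold and the gesture names are illustrative choices.

def classify_hand_gesture(hand, shoulder, threshold=0.25):
    """Return a gesture label from hand and shoulder (x, y) positions."""
    dx = hand[0] - shoulder[0]  # horizontal offset of hand from shoulder
    dy = hand[1] - shoulder[1]  # vertical offset of hand from shoulder
    if dy > threshold:
        return "hand_raised"
    if dx > threshold:
        return "hand_right"
    if dx < -threshold:
        return "hand_left"
    return "neutral"
```

In the real system each label would then be bound to an animation or object manipulation.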

Proof of concept

ECE Paris Ecole d'Ingénieurs 2017


Manipulate computer objects and applications without using the keyboard or mouse.

Our solution


Our Approach to Design



There were several sketches and models until we had 2D and 3D representations of the final prototype. Next, computer-aided design models were created for all the components of the device. Then all the components were 3D printed, and we checked tolerances and fit, reprinting parts when necessary. Finally, we post-processed the parts by sanding and painting.

Primary research

We interviewed a class instructor, a lab manager, and a student to understand how the current inventory management system works in the University of Washington laboratories. We then created surveys for students and lab staff to understand the manual process for managing laboratory equipment and items. We also sent surveys to potential users, including lab users, warehouse employees, and library administrators, to gather information on potential features and concerns about interacting with robots.

User Testing

First, we ran 1:1 user evaluations to test the hardware/software and the check-in/check-out process. Participants performed a series of tasks under instruction from one of our team members. Notes and video recordings were taken; six participants were involved in each round of the 1:1 evaluation. Then we ran a fly-on-the-wall session to observe the human-robot interaction, taking notes on people's behavior while the mobile robot navigated through the environment, both with and without sound alerts. The robot received a series of navigation goals sent by the operator, and it was up to the navigation stack to do the routing and planning. We observed users for 20 minutes in the GIX laboratory.

Functional Testing

First, we defined metrics for each part of the system. For navigation, we sent a navigation goal to the Fetch and measured the success rate, completion time, distance from the navigation goal, and number of collisions. For Fetch and Kinova grasping, we also measured pick-and-place time and success rate.
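Aggregating these metrics from a set of trials can be sketched as follows; the record fields and sample values here are hypothetical, not the study's data.

```python
# Minimal sketch (hypothetical field names and values): summarize navigation
# trials into the metrics described above.

def summarize_trials(trials):
    """Compute success rate, mean time, mean goal error, and collision count."""
    n = len(trials)
    return {
        "success_rate": sum(t["success"] for t in trials) / n,
        "mean_time_s": sum(t["time_s"] for t in trials) / n,
        "mean_goal_error_m": sum(t["goal_error_m"] for t in trials) / n,
        "total_collisions": sum(t["collisions"] for t in trials),
    }

# Illustrative trial records, not real measurements
trials = [
    {"success": True, "time_s": 42.0, "goal_error_m": 0.10, "collisions": 0},
    {"success": False, "time_s": 60.0, "goal_error_m": 0.50, "collisions": 1},
]
```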

Secondary Research


First, the Kinect's RGB VGA video camera and depth sensor work together to detect the user's motion. We then used FAAST to read the user's joints. FAAST is middleware that facilitates the integration of full-body control; it includes a custom VRPN server to stream skeletons over a network, allowing applications to read the skeletal joints as trackers using any VRPN client.

The interaction with Windows programs and the manipulation of 2D objects are based on gestures created by the user. Once the joint axis data was obtained, we needed to create the gestures. We associated each gesture with a specific action to manipulate Windows programs, for example opening Paint and controlling it with gestures, or changing the RGB vector of a 2D shape to change its color. All of the above was implemented with C# scripts.

For the dynamic Unity 3D interaction based on the Kinect, we used the Kinect for Windows Software Development Kit (SDK). This kit enables developers to create gesture-based applications using Kinect sensor technology on computers. It contains the Kinect Manager and Gesture Listener; with these tools we can access the Kinect from Unity and obtain information about several pre-loaded gestures.

My takeaways

Working with the Kinect sensor, I learned how to integrate this peripheral to manipulate programs and objects. With this knowledge we could create specific gestures and associate them with actions, helping people who have trouble using a mouse or keyboard, for instance people with Parkinson's disease.

Designing using emerging technologies in new and impactful ways.