Few open-source graphical applications for displaying magnetic resonance images offer the features needed for a complete analysis.
Our Approach to Design
We went through several sketches and models until we had 2D and 3D representations of the final prototype. Next, computer-aided design models were created for all the components of the device. Then all the components were 3D printed, and we checked tolerances and fit, reprinting parts when necessary. Finally, we post-processed the parts by sanding and painting them.
We interviewed a class instructor, a lab manager, and a student to understand how the current inventory management system works in the University of Washington laboratories. Then we created surveys for students and lab staff to understand the manual process for managing laboratory equipment and items. We also sent surveys to potential users, including lab users, warehouse employees, and library administrators, to gather information on potential features and concerns about interacting with robots.
First, we ran 1:1 user evaluations to test the hardware and software and the check-in/check-out process. Participants performed a series of tasks under instruction from one of our team members. We took notes and recorded video; six participants were involved in each round of the 1:1 evaluation. Then we ran a fly-on-the-wall session to observe human-robot interaction. We took notes on people's behaviors while the mobile robot navigated through the environment, both with and without sound alerts. The robot received a series of navigation goals sent by the operator, and the navigation stack handled the routing and planning. We observed users for 20 minutes in the GIX laboratory.
First, we defined metrics for each part of the system. For navigation, we sent a navigation goal to the Fetch and measured the success rate, completion time, final distance from the navigation goal, and the number of collisions. For Fetch and Kinova grasping, we also measured the pick-and-place time and success rate.
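The per-trial metrics above can be aggregated with a small script. This is an illustrative sketch, not our actual logging code; the record fields and the sample values are hypothetical:

```python
# Hypothetical trial records for a batch of navigation runs.
# Fields mirror the metrics described above: success, completion time,
# final distance from the navigation goal, and collision count.
trials = [
    {"success": True,  "time_s": 42.0, "final_dist_m": 0.12, "collisions": 0},
    {"success": True,  "time_s": 38.5, "final_dist_m": 0.30, "collisions": 1},
    {"success": False, "time_s": 60.0, "final_dist_m": 1.85, "collisions": 2},
]

def summarize(trials):
    # Aggregate a batch of trials into the summary metrics we reported.
    n = len(trials)
    return {
        "success_rate": sum(t["success"] for t in trials) / n,
        "mean_time_s": sum(t["time_s"] for t in trials) / n,
        "mean_final_dist_m": sum(t["final_dist_m"] for t in trials) / n,
        "total_collisions": sum(t["collisions"] for t in trials),
    }

summary = summarize(trials)
```

The same shape of record works for the grasping trials, with pick-and-place time in place of navigation time.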
We defined the requirements and then conducted secondary research into the features medical professionals use while examining an MRI. After defining the above, we looked for tools that could be useful and found: wxPython (GUI), OpenCV (computer vision), VisIt (visualization, animation, and analysis tool), Matplotlib (2D plotting library), imutils (image-processing module), SciPy (mathematical functions), and NumPy (N-dimensional array object).
The input to the program is a dataset of magnetic resonance images. The images come from the MRI dataset that ships with MATLAB; each image is 128x128 px. The graphical application is written in Python. The GUI provides buttons for displaying and manipulating the contents of the dataset. It also has a button that displays a reconstructed volume, with options to rotate it for inspection. Each image in the dataset is converted to a matrix so it can be manipulated; with matrix operations and Python libraries you can, for instance, rotate and resize an image.
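As a sketch of the matrix operations just described, the snippet below rotates and resizes a 128x128 slice using NumPy alone. The placeholder slice and the nearest-neighbour resize helper are illustrative, not the application's actual code:

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    # Nearest-neighbour resize: map each output pixel back to a source pixel
    # by integer index scaling, then gather with advanced indexing.
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]

# Placeholder 128x128 "MRI slice"; the real app loads these from the dataset.
slice_img = np.arange(128 * 128, dtype=np.float32).reshape(128, 128)

rotated = np.rot90(slice_img)             # 90-degree counter-clockwise rotation
half = resize_nearest(slice_img, 64, 64)  # downscale to 64x64
```

Stacking the slices into a 3-D array is then enough to feed a volume-rendering view of the kind the GUI exposes.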