Motion capture technology has revolutionized filmmaking and video games, animating fantastic characters with realistic movements. In 2010, the Microsoft Kinect, a peripheral for the Xbox 360, began capturing the movements of players, allowing them to control video game characters with their bodies. The 2009 movie Avatar used large-volume motion capture and advanced methods of capturing facial expressions, allowing director James Cameron to create one of the most imaginative and immersive fantasy worlds ever filmed. These motion capture systems sample the movements of a person many times per second, then send the data to a computer for processing so that the subtleties of motion can be detected and analyzed.
Similar technology is now spurring innovation in robotics, vision, and control research. Principal Investigator (PI) Dr. David Cappelleri (pictured above) and co-PIs Dr. Philippos Mordohai, Dr. Antonio Valdevit, and Dr. Mark Blackburn of Stevens Institute of Technology have received a grant from the National Science Foundation (NSF) for a large-volume, real-time, high-resolution motion capture system. The system will aid in the development of technologies that help find and rescue people during emergencies, explore dangerous and unpredictable areas for military or scientific purposes, map unknown terrain, conduct detailed 3-D surveillance, enhance athletic performance, and help people walk again.
According to Dr. Michael Bruno, Dean of the Charles V. Schaefer, Jr. School of Engineering and Science, “This major research instrumentation, guided by the talent and vision of our researchers, will accelerate the delivery of transformative technologies into the marketplace while supporting educational initiatives and enhancing the research learning environment at Stevens.”
The motion capture system allows researchers to capture fine details and the most subtle of movements in a large-volume environment. Twelve high-resolution motion capture cameras, each of which can capture images at speeds of up to 2,000 frames per second (120 frames per second at the full 16-megapixel resolution), will be mounted on the walls and ceiling at the perimeter of a sizable room at Stevens, and markers will be affixed to the objects to be tracked in 3D space. The objects’ movements are then monitored, and data describing their motion are synchronized and streamed to a computer, where third-party applications can use them for a multitude of research purposes, most pressingly for work in robotics and control.
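To give a rough sense of what such streamed, time-stamped marker data supports, here is a minimal sketch of recovering a marker's velocity from sampled positions by finite differences. The frame rate, marker path, and data layout are invented for illustration and do not reflect the system's actual output format.

```python
import math

# Illustrative sketch only: the sampling rate and marker trajectory
# below are invented, not the actual system's data.
fps = 120                       # sampling rate (frames per second)
dt = 1.0 / fps                  # time between consecutive frames
# Simulated marker path: 1 m/s along x, small sinusoidal bob in z
frames = [(i * dt, 0.0, 0.1 * math.sin(2 * math.pi * i * dt))
          for i in range(fps)]

def velocities(samples, dt):
    """Forward-difference velocity (m/s) between consecutive frames."""
    return [tuple((b - a) / dt for a, b in zip(p0, p1))
            for p0, p1 in zip(samples, samples[1:])]

vel = velocities(frames, dt)
vx = sum(v[0] for v in vel) / len(vel)  # mean x-velocity
print(round(vx, 3))  # 1.0 m/s along x, as constructed
```

Higher frame rates shrink dt, letting finite differences resolve faster, subtler motions, which is why capture speed matters so much for this kind of analysis.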
At its most basic level, robotics entails three questions: “Where is the robot?”, “Where is the robot going?” and “How will the robot get there?” By answering the first question with precise localization, motion capture frees researchers to focus on the dynamics and control problems posed by the latter two questions, improving the function and performance of robots and unmanned vehicles.
The system gives Dr. Cappelleri valuable instrumentation for his research on micro-aerial robots, which can be controlled precisely enough to fit through small spaces and reach areas inaccessible to humans and existing equipment. The robots offer highly promising capabilities for search and rescue operations, surveillance, building exploration, communication relay, and mapping. Dr. Cappelleri can currently control the attitude, or orientation about the center of mass, of his vehicles, and the motion capture system allows him to combine that attitude control with position control. This means the vehicles can dynamically hold a specific position in a shifting environment, or accurately and effectively move to a given position in three-dimensional space.
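As a loose illustration of how position control can be layered on top of attitude control, here is a minimal cascaded-control sketch for a hovering multirotor-like vehicle. The function, gains, and small-angle approximation are assumptions made for illustration; this is not Dr. Cappelleri's actual controller.

```python
import math

# Hypothetical sketch: an outer position loop (fed by motion-capture
# localization) commands the inner attitude loop. Gains, dynamics,
# and the small-angle mapping are illustrative assumptions.
def position_outer_loop(pos, vel, target, kp=4.0, kd=2.5, g=9.81):
    """PD position control -> desired roll/pitch and thrust acceleration.

    pos, vel, target are (x, y, z) in metres / m/s; uses the
    small-angle hover approximation for a multirotor-like vehicle.
    """
    ax = kp * (target[0] - pos[0]) - kd * vel[0]
    ay = kp * (target[1] - pos[1]) - kd * vel[1]
    az = kp * (target[2] - pos[2]) - kd * vel[2] + g  # cancel gravity
    pitch_des = math.atan2(ax, g)  # pitch forward to accelerate in +x
    roll_des = math.atan2(-ay, g)  # roll to accelerate in +y
    return roll_des, pitch_des, az

# At the target and at rest, only gravity-cancelling thrust is commanded
roll, pitch, thrust = position_outer_loop((1, 2, 1), (0, 0, 0), (1, 2, 1))
print(round(thrust, 2))  # 9.81
```

The point of the cascade is that the motion capture system supplies the position and velocity feedback for the outer loop, while the vehicle's onboard attitude controller handles the fast inner loop.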
The system will provide “ground truth” localization for robot collaboration algorithms, as well as for sensing and visualization research. Ongoing work in the latter area seeks to build 3D models of a location from a single camera that moves through a scene. The algorithms that enable the reconstruction must identify and track landmarks between video frames to build the 3D scene, and the motion capture system provides a detailed ground truth against which researchers can verify and refine those algorithms for greater accuracy.
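One common way such ground truth is used, sketched here with invented trajectories: compare the camera path estimated by the reconstruction algorithm against the motion-capture path frame by frame and report a root-mean-square error. The metric choice and the sample data are illustrative assumptions, not a description of the Stevens researchers' specific evaluation.

```python
import math

def trajectory_rmse(estimated, ground_truth):
    """RMSE of per-frame 3D position error between two aligned,
    time-synchronized trajectories (lists of (x, y, z) tuples)."""
    assert len(estimated) == len(ground_truth)
    sq = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(estimated, ground_truth):
        sq += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(sq / len(estimated))

# Made-up sample data: the estimate carries a constant 10 cm bias in y
gt  = [(0.0, 0.0, 1.0), (0.5, 0.0, 1.0), (1.0, 0.0, 1.0)]
est = [(0.0, 0.1, 1.0), (0.5, 0.1, 1.0), (1.0, 0.1, 1.0)]
print(round(trajectory_rmse(est, gt), 3))  # 0.1
```

A single summary number like this lets researchers tell at a glance whether an algorithmic change made the reconstruction more or less accurate.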
The comparisons also allow researchers to fine-tune approaches to the collaboration algorithms through which robots work together to accurately determine their location. Stevens researchers across multiple disciplines are applying this idea to the development of heterogeneous active sensor teams for environmental monitoring and mapping, the control of mobile robot networks, and the coordination of human-robot teams for facilitating labor-intensive tasks like search and rescue or source-seeking.
“This advanced motion capture system creates a physical and conceptual space for faculty to collaborate at the highest tier of robotics and control research, galvanizing ongoing fundamental research and expanding the scope of emerging research activities,” says Dr. Constantin Chassapis, Deputy Dean of the School of Engineering and Science and Director of the Department of Mechanical Engineering.
In addition to robotics, the motion capture system will be used in biomechanical applications to develop human tracking algorithms for analyzing and improving performance, whether in medical rehabilitation or athletic training. For instance, when a person has recently recovered from an injury to the left leg, the muscles in that leg will be weaker, forcing the right leg to overcompensate and leaving it susceptible to injury as well. Analyzing the individual’s gait can help professionals quantify this imbalance and prescribe therapy to correct it. The system will monitor and analyze an individual’s gait and correlate that data with force-sensor readings to build a database of gaits that can be used to improve human performance. This information also promises to revolutionize knee and hip arthroplasty by allowing researchers to customize implants for specific patients.
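As a hedged illustration of quantifying the left/right imbalance described above, here is a sketch using a standard symmetry index over peak ground-reaction forces. The force values and the choice of index are illustrative assumptions, not data or methods from the Stevens project.

```python
# Hypothetical sketch: a gait symmetry index comparing peak vertical
# ground-reaction force between legs. Values are invented.
def symmetry_index(left_peak, right_peak):
    """Symmetry index (%): absolute left/right difference relative to
    the mean of the two sides. 0 means perfectly symmetric."""
    mean = 0.5 * (left_peak + right_peak)
    return 100.0 * abs(left_peak - right_peak) / mean

# Peak vertical ground-reaction force per leg, in newtons
left_peak, right_peak = 620.0, 740.0  # weaker, recovering left leg
print(round(symmetry_index(left_peak, right_peak), 1))  # 17.6
```

Tracking an index like this across therapy sessions gives clinicians a concrete number for whether the imbalance is actually closing.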
Researchers will set up the system in a cutting-edge laboratory in the Altofer building at Stevens, creating the only motion capture system focused on robotics and control in the New York Metropolitan area. The motion capture system will also provide a central location to collaborate and work on problems that are shared across disciplines.