Knob Turning Demo
A demonstration collected with the AnySense app, recording both tactile and contact microphone data (turn sound on).
A key driver behind recent advances in machine learning, especially in vision and language, has been the availability of abundant, ready-to-use data. Technologies like the Internet and the smartphone have played a pivotal role in this data explosion. Training similarly generalizable models for robotics is bottlenecked by our inability to collect diverse, high-quality data in the real world. One way to address this bottleneck is to build tools that combine scalable, intuitive data collection interfaces with cheap, accessible sensors. We present: (a) AnySense, an iPhone-based application that integrates the iPhone’s sensory suite with external multisensory inputs via Bluetooth and wired interfaces, enabling both offline data collection and online streaming to robots; (b) using AnySense to interface with AnySkin, a versatile tactile sensor capable of multi-axis contact force measurement; and (c) deploying robot policies trained on multisensory (visuotactile) inputs on a Stretch robot. AnySense will be fully open-sourced at RSS and made available to the robotics community for multisensory data collection and learning.
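AnySense's streaming format is not detailed here, but as a rough sketch of the "online streaming to robots" path, the snippet below shows a minimal receiver on the robot workstation, assuming the app sends newline-delimited JSON frames over TCP; the port and field names (`t`, `rgb`, `tactile`, `audio`) are placeholders, not the actual AnySense wire format.

```python
import json
import socket

# Hypothetical receiver for frames streamed from the phone. The port,
# framing (newline-delimited JSON), and field names are assumptions,
# not the AnySense wire format.
HOST, PORT = "0.0.0.0", 9000

def receive_frames():
    """Accept one connection from the phone and yield decoded sensor frames."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((HOST, PORT))
        server.listen(1)
        conn, _ = server.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:
                # e.g. {"t": 1712.3, "rgb": "...", "tactile": [...], "audio": [...]}
                yield json.loads(line)

if __name__ == "__main__":
    for frame in receive_frames():
        print(f"received frame with keys: {sorted(frame)}")
```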
A demonstration collected with the AnySense app, recording both tactile and iPhone microphone data (turn sound on).
We use the AnySense app alongside AnySkin to collect tactile-rich video demonstrations for the task of erasing a whiteboard. From these demonstrations, we train a behavior cloning policy and deploy it on the Hello Robot Stretch.
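As an illustration of what such a visuotactile behavior cloning setup can look like, here is a minimal PyTorch sketch of a policy that fuses image and tactile features and is trained to regress demonstrated actions; the architecture, tactile dimensionality, and action dimensionality are illustrative assumptions, not the model we deploy.

```python
import torch
import torch.nn as nn

class VisuoTactilePolicy(nn.Module):
    """Toy behavior-cloning policy: fuse image and tactile features, regress an action."""
    def __init__(self, tactile_dim=15, action_dim=7):  # dims are illustrative
        super().__init__()
        # Small conv encoder for RGB frames.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP encoder for the flattened tactile reading.
        self.tactile_encoder = nn.Sequential(nn.Linear(tactile_dim, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32 + 32, 64), nn.ReLU(), nn.Linear(64, action_dim))

    def forward(self, image, tactile):
        feat = torch.cat([self.image_encoder(image), self.tactile_encoder(tactile)], dim=-1)
        return self.head(feat)

def train_step(policy, optimizer, image, tactile, action):
    """One behavior-cloning step: minimize MSE to the demonstrated action."""
    loss = nn.functional.mse_loss(policy(image, tactile), action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```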
As a user touches the magnetized skin, readings from the magnetometer underneath it (which measures changes in magnetic flux) are streamed over Bluetooth to a computer, where they can be visualized.
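A minimal sketch of such a receive loop in Python, assuming the `bleak` BLE library; the device address, characteristic UUID, and int16 packing are placeholders rather than the actual AnySkin protocol, and the rolling buffer is where a live plot would read from.

```python
import asyncio
from collections import deque

from bleak import BleakClient  # pip install bleak

# Placeholders: the actual AnySkin BLE identifiers are not specified here.
DEVICE_ADDRESS = "XX:XX:XX:XX:XX:XX"
TACTILE_CHAR_UUID = "00000000-0000-0000-0000-000000000000"

history = deque(maxlen=500)  # rolling buffer of recent readings, e.g. for plotting

def on_notify(_, data: bytearray):
    # Assume each notification packs little-endian int16 magnetometer values.
    values = [int.from_bytes(data[i:i + 2], "little", signed=True)
              for i in range(0, len(data), 2)]
    history.append(values)
    print(values)

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(TACTILE_CHAR_UUID, on_notify)
        await asyncio.sleep(30)  # stream for 30 seconds
        await client.stop_notify(TACTILE_CHAR_UUID)

asyncio.run(main())
```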
We train a slip detection model that takes tactile readings from the AnySkin sensor as input. Once an object is grasped, the model detects when a user tugs on it, and the gripper opens to hand the object to the user.
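As a rough stand-in for the learned detector, the sketch below flags a tug when the tactile reading deviates from the post-grasp baseline by more than a threshold; the threshold value and the robot API calls in the comments are hypothetical.

```python
import numpy as np

def detect_tug(baseline, reading, threshold=25.0):
    """Flag a tug when the tactile reading deviates strongly from the post-grasp baseline.

    This thresholded L2 distance is a simple stand-in for the learned slip-detection
    model; `threshold` is in raw sensor units and purely illustrative.
    """
    return float(np.linalg.norm(np.asarray(reading) - np.asarray(baseline))) > threshold

# Sketch of the handover loop (robot API calls below are hypothetical placeholders):
#   baseline = read_tactile()          # captured right after the grasp
#   while not detect_tug(baseline, read_tactile()):
#       time.sleep(0.01)
#   open_gripper()                     # hand the object to the user
```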