Google releases a “GPT-like” robot model, the RT-1




Google has released a new robotics model, RT-1, comparable in spirit to the GPT models developed by OpenAI. The model is designed with Google’s other robotics programs, including its driverless car effort, in mind, and represents a step toward generative AI models in the field of robotics. In real-world tests, RT-1 can execute over 700 instructions with a 97% success rate.


Recent advances in machine learning (ML) research, such as computer vision and natural language processing, have been enabled by a shared approach that combines large, diverse datasets with expressive models. Although there have been various attempts to apply this approach to robotics, robots have so far not leveraged highly capable models to the same degree as other subfields.

The architecture of RT-1 works as follows: the model encodes a written command and a set of camera images as tokens using a pre-trained FiLM-conditioned EfficientNet, then compresses those tokens with TokenLearner. A Transformer receives the compressed tokens and produces action tokens.
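The pipeline above (FiLM conditioning, then TokenLearner compression) can be sketched in a few lines of NumPy. This is a minimal illustration, not the RT-1 implementation: the dimensions, the random stand-in weights, and the way FiLM parameters are derived from the language embedding are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def film(features, gamma, beta):
    # FiLM conditioning: scale and shift each visual feature with
    # parameters derived from the language embedding.
    return gamma * features + beta

def token_learner(tokens, num_out=8):
    # TokenLearner-style compression: reduce many spatial tokens to a
    # few output tokens via softmax attention over the spatial axis.
    n, d = tokens.shape
    w = rng.normal(size=(d, num_out))        # stand-in for learned weights
    attn = np.exp(tokens @ w)                # (n, num_out) attention logits
    attn /= attn.sum(axis=0, keepdims=True)  # one softmax per output token
    return attn.T @ tokens                   # (num_out, d)

# Toy inputs: 81 spatial tokens from an EfficientNet-like backbone,
# plus a language embedding of the instruction (dimensions assumed).
visual = rng.normal(size=(81, 512))
lang = rng.normal(size=(512,))
gamma, beta = 1.0 + 0.1 * lang, 0.1 * lang   # toy FiLM parameters

conditioned = film(visual, gamma, beta)
compressed = token_learner(conditioned, num_out=8)
print(compressed.shape)  # (8, 512)
```

In the real model the compressed tokens, accumulated over a short history of frames, would then be fed to the Transformer that emits discretized action tokens.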

To build a system that could generalize to new tasks and remain robust to varied distractors and backgrounds, the developers gathered a large, diverse dataset of robot trajectories. Over 17 months they deployed 13 EDR robot manipulators, each with a 7-degree-of-freedom arm, a two-finger gripper, and a mobile base, collecting 130k episodes. The data consists of human demonstrations obtained via remote teleoperation, and each episode is annotated with a written description of the command the robot carried out. The high-level skills in the dataset include picking and placing objects, opening and closing drawers, getting objects into and out of drawers, placing elongated objects upright, knocking objects over, pulling napkins, and opening jars.
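The data described above is essentially a set of language-annotated demonstration episodes. A minimal sketch of such a record might look like the following; the field names and action layout are illustrative assumptions, not the actual RT-1 data schema:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    image: bytes          # camera frame captured at this timestep
    action: list          # e.g. arm pose deltas, gripper state, base motion

@dataclass
class Episode:
    instruction: str      # natural-language label, e.g. "open the top drawer"
    steps: list = field(default_factory=list)

# One teleoperated demonstration, annotated after the fact.
ep = Episode(instruction="open the top drawer")
ep.steps.append(Step(image=b"...", action=[0.1, -0.2, 0.0, 1.0]))
print(len(ep.steps))  # 1
```

Training then amounts to learning a mapping from (instruction, recent images) to the next action in each step.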

The following video shows sample PaLM-SayCan-RT1 performances on long-horizon tasks in several real kitchens.

Across all four evaluation categories (performance on seen tasks, generalization to unseen tasks, and robustness to distractors and to novel backgrounds), RT-1 performs significantly better than the baselines, displaying exceptional levels of generalization and resilience.

The RT-1 Robotics Transformer is a simple, scalable action-generation model for real-world robotics tasks. It tokenizes all inputs and outputs, compressing them with a pre-trained EfficientNet featuring early language fusion and a TokenLearner module. RT-1 demonstrates strong performance across hundreds of tasks, along with extensive generalization and robustness in real-world settings.



Damir Yalalov

Damir is the Editor/SEO/Product Lead at mpost.io. He is most interested in SecureTech, Blockchain, and FinTech startups. Damir earned a bachelor's degree in physics.
