Welcome to the website for the NCN-DFG collaborative grant:
learnINg versaTile lEgged locomotioN wiTh actIve perceptiON (INTENTION).
The INTENTION project is a collaboration between the Poznan University of Technology and the Technische Universität Darmstadt.

Overview

The INTENTION project aims to close the gap between human-made legged systems and their biological counterparts.
Its goal is to innovate in perception, control, and learning, providing a solid basis for physical intelligence.
We aim to develop quadruped robots that can perceive their surroundings, perform agile locomotion in unstructured, confined, and dynamic environments, switch between different modes of locomotion, and improve their performance over time.
Non-visual sensing is at the core of our research: our target is to sense through the actuators, using force-transparent motors and proprioception. This will allow our robots to go beyond human-like sensing and use non-visual data for interpretation and decision-making.
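As a rough illustration of sensing through actuators, the sketch below estimates external joint torque as the residual between the commanded torque and a rigid-body model, so that unexpected contact shows up directly in proprioceptive data. The toy 1-DoF pendulum model and all names are illustrative assumptions, not project code.

```python
import numpy as np

# Toy 1-DoF model: a single actuated link (point-mass pendulum).
m, l, g = 1.0, 0.5, 9.81   # mass [kg], link length [m], gravity [m/s^2]

def mass_matrix(q):        # M(q); constant for a point-mass pendulum
    return np.array([[m * l**2]])

def coriolis(q, dq):       # C(q, dq); zero for a single joint
    return np.zeros((1, 1))

def gravity_torque(q):     # g(q)
    return np.array([m * g * l * np.sin(q[0])])

def external_torque_residual(tau_cmd, q, dq, ddq):
    # tau_ext ~= tau_cmd - (M(q) ddq + C(q, dq) dq + g(q)).
    # With backdrivable, force-transparent motors, the commanded torque is
    # close to the torque delivered at the joint, so this residual is a
    # proprioceptive estimate of the torque applied by the environment.
    tau_model = mass_matrix(q) @ ddq + coriolis(q, dq) @ dq + gravity_torque(q)
    return np.asarray(tau_cmd) - tau_model

# In free motion the residual is ~0; contact adds torque the model cannot explain.
q, dq, ddq = np.array([0.3]), np.zeros(1), np.zeros(1)
tau_cmd = gravity_torque(q)                           # hold the pose against gravity
print(external_torque_residual(tau_cmd, q, dq, ddq))  # -> [0.]
```

On real hardware, this kind of residual is typically filtered (e.g., with a generalized-momentum observer) to suppress model and sensor noise.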
To increase the robot's physical intelligence, we consider a design with a flexible spine. This feature is widespread among biological creatures but rarely considered in legged robots. We want to use this extra degree of freedom to increase the robot's agility and sensing capabilities by fully exploiting whole-body contacts, allowing the platform to handle challenging environments.

Key Research Questions

  • Perception learning of physical interactions: How can we obtain the physical parameters of the surrounding environment through direct interaction of the robot's end-effectors with it? What is the proper knowledge representation for the locomotion learning component to exploit?
  • Whole-body contacts: How can we exploit contact sensing across the whole robot body? Can a flexible spine improve sensing and acting capabilities? Currently, most legged robots interact with their surroundings only through their end-effectors, yet body-contact estimation is essential for multi-modal locomotion. We will explore sensing through actuators, using force-transparent motors and proprioception.
  • Multi-modal locomotion: How can we adapt different gaits to the terrain type and properties? How can hybrid locomotion be supported by different parts of the robot body?
  • Learning for locomotion: How can we handle large sensory inputs in Reinforcement Learning? Can we automatically extract information about the environment (e.g., terrain conditions) from interaction data and exploit it to improve locomotion learning? Is it possible to enhance the robustness of existing locomotion control structures through Reinforcement Learning and other Machine Learning techniques? A minimal sketch of one such approach follows this list.
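A common pattern for handling large sensory inputs in locomotion learning, sketched below, is to compress a long history of proprioceptive measurements into a compact latent vector that implicitly encodes terrain and contact conditions, and to condition the policy on that latent. All module names and dimensions here are illustrative assumptions, not project code.

```python
import torch
import torch.nn as nn

class ProprioEncoderPolicy(nn.Module):
    """Compress a long proprioceptive history into a small latent, then map
    the current state plus that latent to joint-level actions."""
    def __init__(self, hist_dim=2000, state_dim=48, latent_dim=16, act_dim=12):
        super().__init__()
        self.encoder = nn.Sequential(        # distills interaction history
            nn.Linear(hist_dim, 256), nn.ELU(),
            nn.Linear(256, latent_dim),
        )
        self.policy = nn.Sequential(         # acts on state + latent
            nn.Linear(state_dim + latent_dim, 128), nn.ELU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, history, state):
        z = self.encoder(history)            # implicit terrain/contact estimate
        return self.policy(torch.cat([state, z], dim=-1))

net = ProprioEncoderPolicy()
history = torch.randn(1, 2000)   # e.g., 50 time steps x 40 proprioceptive signals
state = torch.randn(1, 48)       # current joint positions, velocities, base state
action = net(history, state)     # (1, 12): target commands for 12 joints
```

Keeping the latent low-dimensional forces the encoder to retain only the environment information that is actually useful for control, which is one way to make Reinforcement Learning tractable with large sensory inputs.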

Latest Posts