A Robot That Combines Vision and Touch

Updated: Sep 24, 2021

On the new EBRAINS research infrastructure, researchers of the Human Brain Project have combined brain-inspired deep learning with biomimetic robots. How the brain allows us to see and explore the world is one of the most intriguing aspects of cognition. When orienting ourselves, we constantly combine information from all our senses in a seemingly effortless way, a feature that even the most advanced AI systems struggle to replicate.

Cognitive neuroscientists, computational modelers, and roboticists are now collaborating to gain new insight into the neural mechanisms behind this, by building robots whose internal workings mimic the brain. In the Human Brain Project, neuroscientist Cyriel Pennartz has worked together with computational modelers Shirin Dora, Sander Bohte, and Jorge F. Mejias to create neural network architectures for perception, based on real-life data from lab mice. Their model, named "MultiPrednet", comprises modules for visual and tactile input, and a third that merges them.
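The actual MultiPrednet model is available through EBRAINS; as a purely illustrative sketch of the general idea, a predictive-coding style fusion of two sensory streams can be written as a shared latent state that is iteratively updated to reduce the prediction errors of both modalities. All dimensions, weights, and the learning rate below are invented for the example and have no relation to the published model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: visual input, tactile input, shared latent.
D_VIS, D_TAC, D_LAT = 16, 8, 4

# Random generative weights: the shared latent predicts each modality
# (a toy stand-in for learned top-down predictions; not the real model).
W_vis = rng.normal(size=(D_VIS, D_LAT))
W_tac = rng.normal(size=(D_TAC, D_LAT))

def infer_latent(x_vis, x_tac, steps=500, lr=0.005):
    """Update a shared latent to minimize the prediction errors of
    both modalities -- the core predictive-coding inference loop."""
    z = np.zeros(D_LAT)
    for _ in range(steps):
        err_vis = x_vis - W_vis @ z   # bottom-up visual prediction error
        err_tac = x_tac - W_tac @ z   # bottom-up tactile prediction error
        # Gradient step on the total squared prediction error.
        z += lr * (W_vis.T @ err_vis + W_tac.T @ err_tac)
    return z

# A toy "scene": one underlying cause generates both sensory views.
z_true = rng.normal(size=D_LAT)
x_vis = W_vis @ z_true
x_tac = W_tac @ z_true

z_hat = infer_latent(x_vis, x_tac)
```

Because both modalities are explained by the same latent state, information from vision and touch is fused automatically during inference, which is the property the HBP team exploits when either sense alone is ambiguous.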

Credit: Human Brain Project

To test how MultiPrednet behaves in a body, the researchers teamed up with Martin Pearson at Bristol Robotics Lab. Together they integrated it into Whiskeye, a rodent-like robot that autonomously explores its environment, using head-mounted cameras as eyes and 24 artificial whiskers to gather tactile information.

Katrin Amunts, Scientific Research Director of the HBP, says: "To understand cognition, we need to explore how the brain acts as part of a body in an environment." Pawel Swieboda, CEO of EBRAINS and Director General of the HBP, remarks: "The robots of the future will benefit from innovations that connect insights from brain science to AI and robotics. With EBRAINS, Europe can be at the center of this shift toward more bio-inspired AI and technology."

Access to the experiment is available via the EBRAINS Knowledge Graph.

Source: Human Brain Project / EBRAINS
