Locomotion modeling evolves with brain-inspired neural networks
A team of scientists at EPFL has built a new neural network system that can help explain how animals adapt their movement to changes in their own bodies, and that could be used to create more powerful artificial intelligence systems.
Deep learning has been fueled by artificial neural networks, which stack simple computational elements on top of each other to create powerful learning systems. Given enough data, these systems can solve challenging tasks like recognizing objects, beating humans at Go, and controlling robots. “As you can imagine, the architecture of how you stack these elements on top of each other might influence how much data you need to learn and what the ceiling performance is,” says Professor Alexander Mathis at EPFL’s School of Life Sciences.
Working with doctoral students Alberto Chiappa and Alessandro Marin Vargas, Mathis has developed a new network architecture called DMAP, for “Distributed Morphological Attention Policy”. This architecture incorporates fundamental principles of biological sensorimotor control, making it an interesting tool for studying sensorimotor function.
The problem that DMAP addresses is that animals – including humans – have evolved to adapt to changes in both their environment and their own bodies. For example, a child learns to walk efficiently even as its body changes in shape and weight from toddlerhood to adulthood – and can do so on different types of surfaces. When developing DMAP, the team focused on how an animal can learn to walk when its body is subject to such “morphological perturbations” – changes in the length and thickness of body parts.
“Typically, in Reinforcement Learning, so-called fully connected neural networks are used to learn motor skills,” says Mathis. Reinforcement Learning is a machine-learning training method that “rewards” desired behaviors and/or “punishes” undesired ones.
He continues: “Imagine you have some sensors that estimate the state of your body – for example, the angles of your wrist, elbow, shoulder, and so on. These sensor signals are the inputs to the motor system, and the outputs are the muscle activations, which generate torques. If one uses fully connected networks, then for instance in the first layer all sensors from across the body are integrated. In contrast, in biology, sensory information is combined in a hierarchical way.”
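The contrast Mathis describes can be sketched in a toy example. The code below is illustrative only – the limb count, sensor types, and layer sizes are hypothetical, not taken from the paper. A fully connected first layer mixes every sensor from across the body at once, while a distributed alternative first processes each limb’s sensors independently, mirroring hierarchical sensory processing in biology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 4 limbs, 3 proprioceptive sensors per limb
# (e.g. joint angle, velocity, torque) -> 12 sensor readings in total.
n_limbs, sensors_per_limb = 4, 3
sensors = rng.standard_normal(n_limbs * sensors_per_limb)

# Fully connected first layer: every hidden unit integrates ALL sensors
# from across the whole body.
W_fc = rng.standard_normal((8, n_limbs * sensors_per_limb))
hidden_fc = np.tanh(W_fc @ sensors)  # shape (8,), body-wide mixing

# Distributed alternative: each limb's sensors are processed by their own
# small weight matrix before any cross-limb integration happens.
W_limb = rng.standard_normal((n_limbs, 2, sensors_per_limb))
per_limb = sensors.reshape(n_limbs, sensors_per_limb)
hidden_dist = np.tanh(np.einsum('lhs,ls->lh', W_limb, per_limb))  # (4, 2)
```

In the second variant, a change to one limb’s sensors only affects that limb’s features in the first stage – the kind of locality the biological hierarchy provides.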
“We took principles of neuroscience, and we distilled them into a neural network to design a better sensorimotor system,” says Alberto Chiappa. In their paper, published at the 36th Annual Conference on Neural Information Processing Systems (NeurIPS), the researchers present DMAP, which “combines independent proprioceptive processing, a distributed policy with individual controllers for each joint, and an attention mechanism, to dynamically gate sensory information from different body parts to different controllers.”
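The gating idea can be illustrated with a minimal attention sketch. Again, the dimensions, the per-controller scoring weights, and the action readout below are hypothetical stand-ins, not the architecture from the paper: each joint’s controller scores every body part, turns the scores into attention weights, and reads a weighted summary of the sensory features – so information flow from body parts to controllers is gated dynamically by the input itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: features from 4 body parts, 4 joint controllers.
n_parts, feat_dim, n_joints = 4, 2, 4
part_features = rng.standard_normal((n_parts, feat_dim))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

actions = []
for j in range(n_joints):
    # Each controller has its own (toy) scoring vector over part features.
    w_score = rng.standard_normal(feat_dim)
    weights = softmax(part_features @ w_score)  # attention over body parts
    context = weights @ part_features           # gated sensory summary
    w_out = rng.standard_normal(feat_dim)
    actions.append(np.tanh(w_out @ context))    # one action per joint

actions = np.array(actions)
```

Because the attention weights depend on the current sensory features, a controller can emphasize different body parts at different moments – the dynamic gating the quote refers to.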
DMAP was able to learn to “walk” with a body subject to morphological perturbations, without receiving any information about the morphological parameters such as the specific limb lengths and widths. Remarkably, DMAP could “walk” as well as a system that had access to those body parameters.
“So we created a Reinforcement Learning system thanks to what we know from anatomy,” says Chiappa. “After we trained this model, we noticed that it exhibited dynamic gating reminiscent of what happens in the spinal cord – and interestingly, this behavior emerged spontaneously.”
Overall, models like DMAP serve two roles: building better artificial intelligence systems based on biological insights, and conversely building better models to understand the brain.
NeurIPS is one of the leading machine-learning conferences, and many other EPFL labs also present their latest work there.
Funding: Swiss Government Excellence Scholarships
Reference: Alberto Silvio Chiappa, Alessandro Marin Vargas, Alexander Mathis. DMAP: a Distributed Morphological Attention Policy for Learning to Locomote with a Changing Body. NeurIPS, December 2022. DOI: 10.48550/arXiv.2209.14218