Neuro Computing Systems

Research Lab at KTH Stockholm, Sweden

A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics


Journal article


Cristian Axenie, Christoph Richter, Jörg Conradt
Sensors, 2016

Cite

APA
Axenie, C., Richter, C., & Conradt, J. (2016). A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics. Sensors.


Chicago/Turabian
Axenie, Cristian, Christoph Richter, and Jörg Conradt. “A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics.” Sensors (2016).


MLA
Axenie, Cristian, et al. “A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics.” Sensors, 2016.


BibTeX

@article{axenie2016a,
  title = {A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics},
  year = {2016},
  journal = {Sensors},
  author = {Axenie, Cristian and Richter, Christoph and Conradt, J{\"o}rg}
}

Abstract

Biological and technical systems operate in a rich multimodal environment. Because of the diversity of incoming sensory streams a system perceives and the variety of motor capabilities it exhibits, there is no single representation and no unambiguous interpretation of such a complex scene. In this work we propose a novel sensory processing architecture, inspired by the distributed macro-architecture of the mammalian cortex. The underlying computation is performed by a network of computational maps, each representing a different sensory quantity. All the different sensory streams enter the system through multiple parallel channels, and the system autonomously associates and combines them into a coherent representation, given incoming observations. These processes are adaptive and involve learning. The proposed framework introduces mechanisms for the self-creation and learning of the functional relations between the computational maps, which encode sensorimotor streams, directly from the data. Its intrinsic scalability, parallelisation, and automatic adaptation to unforeseen sensory perturbations make our approach a promising candidate for robust multisensory fusion in robotic systems. We demonstrate this by applying our model to 3D motion estimation on a quadrotor.
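
For a concrete picture of the map-learning idea, the sketch below shows a toy version of it: two one-dimensional computational maps encode two sensory quantities as Gaussian population codes, and a simple Hebbian correlation rule learns the functional relation between them directly from paired observations. This is a minimal illustrative sketch, not the paper's implementation; the cubic relation, the learning rule, and all names and parameters are assumptions made for the example.

import numpy as np

# Toy illustration (not the paper's code): two 1D "computational maps"
# encode two sensory quantities as Gaussian population codes; a Hebbian
# weight matrix learns the functional relation between them (assumed
# here to be y = x^3) directly from a stream of paired observations.

rng = np.random.default_rng(0)
N = 50                                  # units per map
pref_x = np.linspace(-1.0, 1.0, N)      # preferred values, map 1
pref_y = np.linspace(-1.0, 1.0, N)      # preferred values, map 2
W = np.zeros((N, N))                    # cross-map Hebbian weights
sigma, eta = 0.05, 0.1                  # tuning width, learning rate

def activity(prefs, value):
    # Normalised Gaussian tuning-curve response of one map.
    a = np.exp(-(prefs - value) ** 2 / (2 * sigma ** 2))
    return a / a.sum()

for _ in range(5000):                   # simulated sensory stream
    x = rng.uniform(-1.0, 1.0)
    ax = activity(pref_x, x)            # map 1 observes x
    ay = activity(pref_y, x ** 3)       # map 2 observes y = x^3
    W += eta * np.outer(ay, ax)         # strengthen co-active pairs

# Decode: project map-1 activity through W and read out the peak.
x_test = 0.7
y_hat = pref_y[np.argmax(W @ activity(pref_x, x_test))]
print(f"predicted {y_hat:.2f}, true {x_test ** 3:.2f}")

Decoding here simply projects one map's activity through the learned weights and reads out the peak; the paper's framework additionally covers the self-creation of maps and their relations, which this toy omits.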