Rodriguez Vargas, Diego Alexander: Learning Grasping and Walking Motion Generation for Humanoid Robots. - Bonn, 2021. - Dissertation, Rheinische Friedrich-Wilhelms-Universität Bonn.
Online edition in bonndoc:
author = {{Diego Alexander Rodriguez Vargas}},
title = {Learning Grasping and Walking Motion Generation for Humanoid Robots},
school = {Rheinische Friedrich-Wilhelms-Universität Bonn},
year = 2021,
month = apr,

note = {For acting in human-made scenarios, humanoid robots are undoubtedly the most versatile and flexible platforms among the vast number of available robotic systems. However, this versatility comes at the cost of complexity. Dexterous grasping and bipedal locomotion still pose several challenges in terms of planning and control, mainly due to high dimensionality, complex dynamics, and real-time constraints. Inspired by human nature, learning approaches offer a promising alternative to address these issues. By leveraging prior knowledge and experience, represented as neural networks, latent spaces, or probabilistic models, among others, this thesis presents novel learning approaches to generate grasping and walking motions for humanoid robots.
Initially, geometric variations within an object category are aggregated into a latent (shape) space in order to register novel object shapes in a non-rigid fashion. Grasping knowledge is then transferred to novel instances based on their shape. This knowledge includes approaching motions and the joint configurations of multi-fingered robotic hands, whose inherent high dimensionality is handled by learning postural synergies. The object registration can be performed online with 3D sensors or RGB cameras. The shape inference from RGB images is especially relevant for objects that are challenging to perceive with depth sensors, e.g., those with transparent or shiny surfaces. The proposed grasping approaches put particular emphasis on providing functional grasps that enable not only picking up objects but also using them. The grasping transfer is evaluated on several robotic platforms in single- and dual-arm applications.
In the second part of this thesis, the attention turns to the optimization and generation of bipedal walking motions. By means of Gaussian processes, the discrepancy between a physics-based simulator and a real humanoid robot is characterized. Experiments performed in simulation and on the real platform are thus integrated into a sample-efficient Bayesian optimizer that selects the most informative parameters to evaluate, as dictated by the relative entropy of a cost function. Finally, the advantages and applicability of recent deep reinforcement learning methods to locomotion controllers are discussed. A novel approach is presented that learns a single control policy capable of omnidirectional walking without any analytical gait or any prior notion of walking. The robustness and omnidirectional capabilities of the learned walking controller are evaluated in a series of experiments, and the learned gait is successfully transferred to the real hardware.},

url = {}
