Dissertation Topics in Robotics


Currently Open Theses Topics

We offer these current topics directly to Bachelor and Master students at TU Darmstadt, who should feel free to DIRECTLY contact the thesis advisor if interested in one of these topics. Excellent external students from another university may be accepted but are required to first email Jan Peters before contacting any other lab member about a thesis topic. Note that we cannot provide funding for any of these thesis projects.

We highly recommend that you take either our robotics and machine learning lectures (Robot Learning, Statistical Machine Learning) or those of our colleagues (Grundlagen der Robotik, Probabilistic Graphical Models and/or Deep Learning). Even more important to us is that you take both Robot Learning: Integrated Project, Part 1 (Literature Review and Simulation Studies) and Part 2 (Evaluation and Submission to a Conference) before doing a thesis with us.

In addition, we are usually happy to devise new topics on request to suit the abilities of excellent students. Please DIRECTLY contact the thesis advisor if you are interested in one of these topics. When you contact the advisor, it would be nice if you could mention (1) WHY you are interested in the topic (dreams, parts of the problem, etc), and (2) WHAT makes you special for the projects (e.g., class work, project experience, special programming or math skills, prior work, etc.). Supplementary materials (CV, grades, etc) are highly appreciated. Of course, such materials are not mandatory but they help the advisor to see whether the topic is too easy, just about right or too hard for you.

Only contact *ONE* potential advisor at a time! If you contact a second advisor without first concluding discussions with the first one (i.e., deciding for or against the thesis with him or her), we may not consider you at all. Only if you are extremely excited about at most two topics should you email both supervisors, so that they are aware of the additional interest.

FOR FB16+FB18 STUDENTS: If you are a student from another department at TU Darmstadt (e.g., ME, EE, IST), you need an additional formal supervisor who officially issues the topic. Please do not try to arrange an advisor from your home department yourself; instead, let the supervising IAS member get in touch with that person. Multiple professors from other departments have complained that they were asked to co-supervise before being contacted by our advising lab member.

NEW THESES START HERE

Walk your network: investigating the neural network's location in Q-learning methods

Scope: Master thesis Advisor: Theo Vincent and Boris Belousov Start: Flexible Topic:

Q-learning methods are at the heart of Reinforcement Learning. They have been shown to outperform humans on some complex tasks such as playing video games [1]. In robotics, where the action space is in most cases continuous, actor-critic methods rely on Q-learning to learn the critic [2]. Although Q-learning methods have been extensively studied in the past, little focus has been placed on the way the online neural network explores the space of Q-functions. Most approaches focus on crafting a loss that makes the agent learn better policies [3]. Here, we offer a thesis that focuses on the position of the online Q-network in the space of Q-functions. The student will first investigate this idea on simple problems before comparing the performance against strong baselines such as DQN or REM [1, 4] on Atari games. Depending on the results, the student may also move to MuJoCo and compare against SAC [2]. The student is welcome to propose their own ideas as well.
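For illustration, here is a minimal sketch of the kind of online Q-network and temporal-difference update such methods build on; the network size, hyperparameters, and random batch are illustrative assumptions, not the setup that would be used in the thesis.

```python
# Minimal DQN-style TD update (illustrative dimensions and hyperparameters),
# showing the online Q-network whose trajectory through the space of
# Q-functions this thesis would study.
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 4, 2, 0.99

q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(obs, action, reward, next_obs, done):
    """One gradient step on the squared TD error for a batch of transitions."""
    with torch.no_grad():
        # Bootstrapped target from the (frozen) target network.
        target = reward + gamma * (1 - done) * target_net(next_obs).max(dim=1).values
    q_sa = q_net(obs).gather(1, action.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call on a random batch of 32 transitions.
batch = (torch.randn(32, obs_dim), torch.randint(n_actions, (32,)),
         torch.randn(32), torch.randn(32, obs_dim), torch.zeros(32))
td_update(*batch)
```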

Highly motivated students can apply by sending an email to [email protected] . Please attach your CV and clearly state why you are interested in this topic.

Requirements

  • Strong Python programming skills
  • Knowledge in Reinforcement Learning
  • Experience with deep learning libraries is a plus

References: [1] Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533. [2] Haarnoja, Tuomas, et al. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." International Conference on Machine Learning. PMLR, 2018. [3] Hessel, Matteo, et al. "Rainbow: Combining improvements in deep reinforcement learning." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32. No. 1. 2018. [4] Agarwal, R., Schuurmans, D., & Norouzi, M. "An Optimistic Perspective on Offline Reinforcement Learning." International Conference on Machine Learning (ICML), 2020.

Co-optimizing Hand and Action for Robotic Grasping of Deformable Objects


This project aims to advance deformable object manipulation by co-optimizing robot gripper morphology and control policies. The project will involve utilizing existing simulation environments for deformable object manipulation [2] and implementing a method to jointly optimize gripper morphology and grasp policies within the simulation.
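As a rough illustration of what "co-optimizing" means here, the sketch below jointly perturbs gripper morphology parameters and grasp-policy parameters with a simple random search; `simulate_grasp` is a hypothetical placeholder for a deformable-object simulator such as [2], and all dimensions and numbers are made up for the example.

```python
# Hedged sketch: jointly optimizing gripper morphology and grasp-policy
# parameters with random search so that the design and the controller co-adapt.
import numpy as np

rng = np.random.default_rng(0)

def simulate_grasp(morphology, policy_params):
    """Placeholder: return a grasp-success score for a given design and policy."""
    return -np.sum((morphology - 0.5) ** 2) - np.sum((policy_params - 0.2) ** 2)

morphology = rng.uniform(size=3)      # e.g. finger length, width, stiffness
policy_params = rng.normal(size=8)    # e.g. weights of a small grasp controller

best = simulate_grasp(morphology, policy_params)
for _ in range(200):
    # Perturb design and controller together.
    cand_m = morphology + 0.05 * rng.normal(size=morphology.shape)
    cand_p = policy_params + 0.05 * rng.normal(size=policy_params.shape)
    score = simulate_grasp(cand_m, cand_p)
    if score > best:
        morphology, policy_params, best = cand_m, cand_p, score

print("best score:", round(best, 3))
```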

Required Qualification:

  • Familiarity with deep learning libraries such as PyTorch or Tensorflow

Preferred Qualification:

  • Attendance of the lectures "Statistical Machine Learning", "Computational Engineering and Robotics" and "Robot Learning"

Application Requirements:

  • Curriculum Vitae
  • Motivation letter explaining why you would like to work on this topic and why you are the perfect candidate

Interested students can apply by sending an e-mail to [email protected] and attaching the required documents mentioned above.

References: [1] Xu, Jie, et al. "An End-to-End Differentiable Framework for Contact-Aware Robot Design." Robotics: Science & Systems. 2021. [2] Huang, Isabella, et al. "DefGraspNets: Grasp Planning on 3D Fields with Graph Neural Nets." arXiv preprint arXiv:2303.16138 (2023).

Geometry-Aware Diffusion Models for Robotics

In this thesis, you will work on developing an imitation learning algorithm using diffusion models for robotic manipulation tasks, such as the ones in [2, 3, 4], but taking into account the geometry of the task space.
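To make the core mechanism concrete, the sketch below shows one denoising (diffusion) training step on demonstrated action trajectories in the spirit of diffusion policies [4]; the network, noise schedule, and trajectory dimensions are illustrative assumptions and do not yet account for task-space geometry.

```python
# Hedged sketch of one diffusion-model training step on demonstration trajectories.
import torch
import torch.nn as nn

horizon, act_dim, n_steps = 16, 7, 100
betas = torch.linspace(1e-4, 0.02, n_steps)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(nn.Linear(horizon * act_dim + 1, 256), nn.ReLU(),
                         nn.Linear(256, horizon * act_dim))
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

def diffusion_step(demo_batch):
    """One score-matching step: predict the noise added to clean trajectories."""
    b = demo_batch.shape[0]
    x0 = demo_batch.reshape(b, -1)
    t = torch.randint(n_steps, (b,))
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(1)
    xt = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward diffusion
    pred = denoiser(torch.cat([xt, t.float().unsqueeze(1) / n_steps], dim=1))
    loss = nn.functional.mse_loss(pred, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

diffusion_step(torch.randn(32, horizon, act_dim))   # random stand-in demonstrations
```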

If this sounds interesting, please send an email to [email protected] and [email protected] , and possibly attach your CV, highlighting the relevant courses you took in robotics and machine learning.

What's in it for you:

  • You get to work on an exciting topic at the intersection of deep learning and robotics
  • We will supervise you closely throughout your thesis
  • Depending on the results, we will aim for an international conference publication

Requirements:

  • Be motivated -- we will support you a lot, but we expect you to contribute a lot too
  • Robotics knowledge
  • Experience setting up deep learning pipelines -- from data collection, architecture design, training, and evaluation
  • PyTorch -- especially experience writing well-parallelized code (i.e., code that runs fast on the GPU)

References: [1] https://arxiv.org/abs/2112.10752 [2] https://arxiv.org/abs/2308.01557 [3] https://arxiv.org/abs/2209.03855 [4] https://arxiv.org/abs/2303.04137 [5] https://arxiv.org/abs/2205.09991

Learning Latent Representations for Embodied Agents


Interested students can apply by sending an E-Mail to [email protected] and attaching the required documents mentioned below.

Requirements:

  • Experience with TensorFlow/PyTorch
  • Familiarity with core Machine Learning topics
  • Experience programming/controlling robots (either simulated or real world)
  • Knowledgeable about different robot platforms (quadrupeds and bipedal robots)

Application Requirements:

  • Resume / CV
  • Cover letter explaining why this topic fits you well and why you are an ideal candidate

References: [1] Ho and Ermon. "Generative adversarial imitation learning" [2] Arenz, et al. "Efficient Gradient-Free Variational Inference using Policy Search"

Characterizing Fear-induced Adaptation of Balance by Inverse Reinforcement Learning


Interested students can apply by sending an E-Mail to [email protected] and attaching the required documents mentioned below.

  • Basic knowledge of reinforcement learning
  • Hands-on experience with reinforcement learning or inverse reinforcement learning
  • Cognitive science background

References: [1] Maki, et al. "Fear of Falling and Postural Performance in the Elderly" [2] Davis et al. "The relationship between fear of falling and human postural control" [3] Ho and Ermon. "Generative adversarial imitation learning"

Timing is Key: CPGs for regularizing Quadruped Gaits learned with DRL

To tackle this problem, we want to utilize Central Pattern Generators (CPGs), which can generate contact timings for the four feet. The policy gets rewarded for complying with the contact patterns of the CPGs. This provides a straightforward way of regularizing and steering the policy toward a natural gait without imposing overly strong restrictions on it. We first want to manually find suitable CPG parameters for different gait velocities and later move to learning those parameters in an end-to-end fashion.
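As a minimal illustration of the intended regularization, the sketch below uses fixed phase offsets to prescribe a trot contact pattern and rewards a policy for matching it; the phase offsets, duty factor, and reward weight are illustrative assumptions, not tuned values.

```python
# Hedged sketch of a phase-oscillator CPG prescribing foot-contact timing for a
# trot gait, plus a reward term for complying with it.
import numpy as np

phase_offsets = np.array([0.0, 0.5, 0.5, 0.0])   # FL, FR, RL, RR trot pattern
duty_factor = 0.6                                 # fraction of the cycle in stance

def cpg_contact_pattern(t, frequency=2.0):
    """Desired stance (1) / swing (0) state of each foot at time t [s]."""
    phase = (frequency * t + phase_offsets) % 1.0
    return (phase < duty_factor).astype(float)

def contact_reward(measured_contacts, t, weight=0.5):
    """Reward the policy for matching the CPG-prescribed contact states."""
    desired = cpg_contact_pattern(t)
    return -weight * np.abs(measured_contacts - desired).sum()

# Example: all four feet on the ground at t = 0.1 s.
print(contact_reward(np.ones(4), 0.1))
```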

Highly motivated students can apply by sending an E-Mail to [email protected] and attaching the required documents mentioned below.

Minimum Qualification:

  • Good Python programming skills
  • Basic knowledge of the PyTorch library
  • Basic knowledge of Reinforcement Learning
  • Basic knowledge of the MuJoCo simulator

References: [1] Cheng, Xuxin, et al. "Extreme Parkour with Legged Robots."

Damage-aware Reinforcement Learning for Deformable and Fragile Objects


The goal of this thesis is the development and application of a model-based reinforcement learning method on real robots. Your tasks will include: 1. Setting up a simulation environment for deformable object manipulation; 2. Utilizing existing models for stress and deformability prediction [1]; 3. Implementing a reinforcement learning method that works in simulation and, if possible, on the real robot.
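To illustrate the "damage-aware" part, the sketch below combines task progress with a penalty on predicted material stress; `predict_stress` is a hypothetical placeholder for a learned stress/deformability model such as [1], and all constants are illustrative.

```python
# Hedged sketch of a damage-aware reward: task progress minus a penalty whenever
# the predicted stress exceeds a limit.
import numpy as np

def predict_stress(gripper_force, object_stiffness=200.0):
    """Placeholder stress model: stress grows with the applied gripper force."""
    return gripper_force / object_stiffness

def damage_aware_reward(task_progress, gripper_force,
                        stress_limit=0.8, penalty=10.0):
    stress = predict_stress(gripper_force)
    overload = max(0.0, stress - stress_limit)
    return task_progress - penalty * overload

print(damage_aware_reward(task_progress=0.6, gripper_force=180.0))
```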

If you are interested in this thesis topic and believe you possess the necessary skills and qualifications, please submit your application, including a resume and a brief motivation letter explaining your interest and relevant experience. Please send your application to [email protected].

Required Qualification :

  • Enthusiasm for and experience in robotics, machine learning, and simulation
  • Strong programming skills in Python

Desired Qualification :

  • Attendance of the lectures "Statistical Machine Learning", "Computational Engineering and Robotics" and (optionally) "Robot Learning"

References: [1] Huang, I., Narang, Y., Bajcsy, R., Ramos, F., Hermans, T., & Fox, D. (2023). DefGraspNets: Grasp Planning on 3D Fields with Graph Neural Nets. arXiv preprint arXiv:2303.16138.

Imitation Learning meets Diffusion Models for Robotics


The objective of this thesis is to build upon prior research [2, 3] to establish a connection between Diffusion Models and Imitation Learning. We aim to explore how to exploit Diffusion Models and improve the performance of Imitation learning algorithms that interact with the world.

We welcome highly motivated students to apply for this opportunity by sending an email expressing their interest to Firas Al-Hafez ([email protected]) and Julen Urain ([email protected]). Please attach your letter of motivation and CV, and clearly state why you are interested in this topic and why you are the ideal candidate for this position.

Required Qualification : 1. Strong Python programming skills 2. Basic Knowledge in Imitation Learning 3. Interest in Diffusion models, Reinforcement Learning

Desired Qualification : 1. Attendance of the lectures "Statistical Machine Learning", "Computational Engineering and Robotics" and/or "Reinforcement Learning: From Fundamentals to the Deep Approaches"

References: [1] Song, Yang, and Stefano Ermon. "Generative modeling by estimating gradients of the data distribution." Advances in neural information processing systems 32 (2019). [2] Ho, Jonathan, and Stefano Ermon. "Generative adversarial imitation learning." Advances in neural information processing systems 29 (2016). [3] Garg, D., Chakraborty, S., Cundy, C., Song, J., & Ermon, S. (2021). Iq-learn: Inverse soft-q learning for imitation. Advances in Neural Information Processing Systems, 34, 4028-4039. [4] Chen, R. T., & Lipman, Y. (2023). Riemannian flow matching on general geometries. arXiv preprint arXiv:2302.03660.

The Power of Friendship: Dynamic Multi-Agent Reward Sharing


In this thesis, we want to investigate the use of generalised reward-sharing between agents that would allow for collaboration in a less constrained manner than previous methods. The method would assume partial observability of each agent, and a bi-level optimisation system that would disseminate scaled rewards between agents in addition to environmental rewards. We aim to demonstrate good performance on small-scale multi-agent settings (StarCraft, Prisoner's Dilemma, Stag Hunt) and to outperform existing methods in large-scale or dynamic settings (Braess's Paradox, network optimisation).
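As a minimal illustration of reward sharing, the sketch below redistributes environmental rewards through a sharing matrix whose entries would be learned in the outer level of the bi-level optimisation; the matrix values here are arbitrary placeholders.

```python
# Hedged sketch of reward sharing: each agent receives its environmental reward
# plus scaled shares passed on by the other agents.
import numpy as np

n_agents = 3
# Row i: how agent i distributes its reward (diagonal entry kept for itself).
share_weights = np.array([[0.8, 0.1, 0.1],
                          [0.2, 0.6, 0.2],
                          [0.0, 0.3, 0.7]])

def shared_rewards(env_rewards):
    """Redistribute environmental rewards according to the sharing matrix."""
    return share_weights.T @ env_rewards

print(shared_rewards(np.array([1.0, 0.0, -0.5])))
```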

How to apply

  • Send an email to [email protected] with your CV and a short motivation email explaining your reason for applying for this thesis and your academic/career objectives.

Minimum knowledge

  • Good Python programming skills;
  • Basic knowledge of Reinforcement Learning.

Preferred knowledge

  • Knowledge of the PyTorch library;
  • Knowledge of the MushroomRL library.

What's in it for you:

  • Get to work on a difficult open problem in deep learning with important mathematical underpinnings
  • Possible publication at an international conference (depending on results)

References: [1] Peysakhovich, Alexander, and Adam Lerer. "Prosocial learning agents solve generalized stag hunts better than selfish ones." arXiv preprint arXiv:1709.02865 (2017). [2] Jaques, Natasha, et al. "Social influence as intrinsic motivation for multi-agent deep reinforcement learning." International conference on machine learning. PMLR, 2019. [3] Yi, Yuxuan, et al. "Learning to Share in Networked Multi-Agent Reinforcement Learning." Advances in Neural Information Processing Systems 35 (2022): 15119-15131.


Scaling Behavior Cloning to Humanoid Locomotion

Scope: Bachelor / Master thesis Advisor: Joe Watson Added: 2023-10-07 Start: ASAP Topic: In a previous project [1], I found that behavior cloning (BC) was a surprisingly poor baseline for imitating humanoid locomotion. I suspect the issue may lie in the challenges of regularizing high-dimensional regression.

The goal of this project is to investigate BC for humanoid imitation, understand the scaling issues present, and evaluate possible solutions, e.g. regularization strategies from the regression literature.

The project will build on Google DeepMind's Acme library [2], which already implements BC algorithms and humanoid demonstration datasets [3] and will serve as the foundation of the project.
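For illustration, the sketch below shows plain behavior cloning as regularized high-dimensional regression (here with simple weight decay); the observation and action dimensions are illustrative, and the actual project would work with Acme's BC implementation and datasets [2, 3] rather than this toy setup.

```python
# Hedged sketch of behavior cloning with an explicit regularizer (weight decay).
import torch
import torch.nn as nn

obs_dim, act_dim = 67, 21   # illustrative humanoid dimensions
policy = nn.Sequential(nn.Linear(obs_dim, 256), nn.Tanh(), nn.Linear(256, act_dim))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4, weight_decay=1e-4)

def bc_step(obs, expert_actions):
    """One supervised regression step towards the demonstrated actions."""
    loss = nn.functional.mse_loss(policy(obs), expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

bc_step(torch.randn(64, obs_dim), torch.randn(64, act_dim))
```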

To apply, email [email protected] , ideally with a CV and transcript so I can assess your suitability.

  • Experience, interest and enthusiasm for the intersection of robot learning and machine learning
  • Experience with Acme and JAX would be a benefit, but not necessary

References: [1] https://arxiv.org/abs/2305.16498 [2] https://github.com/google-deepmind/acme [3] https://arxiv.org/abs/2106.00672

Robot Gaze for Communicating Collision Avoidance Intent in Shared Workspaces

Scope: Bachelor/Master thesis Advisor: Alap Kshirsagar , Dorothea Koert Added: 2023-09-27 Start: ASAP


Topic: In order to operate close to non-experts, future robots require both an intuitive form of instruction accessible to lay users and the ability to react appropriately to a human co-worker. Instruction by imitation learning with probabilistic movement primitives (ProMPs) [1] allows capturing tasks by learning robot trajectories from demonstrations including the motion variability. However, appropriate responses to human co-workers during the execution of the learned movements are crucial for fluent task execution, perceived safety, and subjective comfort. To facilitate such appropriate responsive behaviors in human-robot interaction, the robot needs to be able to react to its human workspace co-inhabitant online during the execution. Also, the robot needs to communicate its motion intent to the human through non-verbal gestures such as eye and head gazes [2][3]. In particular for humanoid robots, combining motions of arms with expressive head and gaze directions is a promising approach that has not yet been extensively studied in related work.
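As a brief reminder of what a ProMP represents, the sketch below samples joint trajectories from a Gaussian distribution over weights of radial-basis-function features; the basis count, widths, and weight distribution are illustrative assumptions.

```python
# Hedged sketch of a probabilistic movement primitive (ProMP) for a single joint.
import numpy as np

n_basis, T = 10, 100
t = np.linspace(0, 1, T)
centers = np.linspace(0, 1, n_basis)
width = 0.02
Phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))   # (T, n_basis)
Phi /= Phi.sum(axis=1, keepdims=True)

# Weight distribution, e.g. fitted to demonstrations by regression.
mu_w = np.zeros(n_basis)
Sigma_w = 0.1 * np.eye(n_basis)
rng = np.random.default_rng(0)

def sample_trajectory():
    """Draw one joint trajectory from the ProMP."""
    w = rng.multivariate_normal(mu_w, Sigma_w)
    return Phi @ w

print(sample_trajectory().shape)   # (100,)
```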

Goals of the thesis:

  • Develop a method to combine robot head/gaze motion with ProMPs for online collision avoidance
  • Implement the method on a Franka-Emika Panda Robot
  • Evaluate and compare the implemented behaviors in a study with human participants

Highly motivated students can apply by sending an email to [email protected]. Please attach your CV and transcript, and clearly state your prior experiences and why you are interested in this topic.

  • Strong programming skills in Python
  • Prior experience with Robot Operating System (ROS) and user studies would be beneficial
  • Strong motivation for human-centered robotics including design and implementation of a user study

References : [1] Koert, Dorothea, et al. "Learning intention aware online adaptation of movement primitives." IEEE Robotics and Automation Letters 4.4 (2019): 3719-3726. [2] Admoni, Henny, and Brian Scassellati. "Social eye gaze in human-robot interaction: a review." Journal of Human-Robot Interaction 6.1 (2017): 25-63. [3] Lemasurier, Gregory, et al. "Methods for expressing robot intent for human–robot collaboration in shared workspaces." ACM Transactions on Human-Robot Interaction (THRI) 10.4 (2021): 1-27.

Tactile Sensing for the Real World

Topic: Tactile sensing is a crucial sensing modality that allows humans to perform dexterous manipulation[1]. In recent years, the development of artificial tactile sensors has made substantial progress, with current models relying on cameras inside the fingertips to extract information about the points of contact [2]. However, robotic tactile sensing is still a largely unsolved topic despite these developments. A central challenge of tactile sensing is the extraction of usable representations of sensor readings, especially since these generally contain an incomplete view of the environment.

Recent model-based reinforcement learning methods like Dreamer [3] leverage latent state-space models to reason about the environment from partial and noisy observations. However, work remains to be done to apply such methods to real-world manipulation tasks. Hence, this thesis will explore whether Dreamer can solve challenging real-world manipulation tasks by leveraging tactile information. Initial results suggest that tasks like peg-in-hole can indeed be solved with Dreamer in simulation, but the applicability of this method in the real world has yet to be shown.
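To make the latent state-space idea concrete, the sketch below trains a toy latent model (encode, predict the next latent state, reconstruct the next observation); it is only a schematic stand-in for Dreamer's world model [3], with illustrative dimensions.

```python
# Hedged sketch of one training step of a latent state-space (world) model.
import torch
import torch.nn as nn

obs_dim, act_dim, latent_dim = 32, 7, 16
encoder = nn.Linear(obs_dim, latent_dim)
dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 64), nn.ReLU(),
                         nn.Linear(64, latent_dim))
decoder = nn.Linear(latent_dim, obs_dim)
params = list(encoder.parameters()) + list(dynamics.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

def model_step(obs, action, next_obs):
    """Predict the next latent state and reconstruct the next observation."""
    z = encoder(obs)
    z_next = dynamics(torch.cat([z, action], dim=1))
    loss = nn.functional.mse_loss(decoder(z_next), next_obs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model_step(torch.randn(16, obs_dim), torch.randn(16, act_dim), torch.randn(16, obs_dim))
```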

In this work, you will work with state-of-the-art hardware and compute resources on a hot research topic with the option of publishing your work at a scientific conference.

Highly motivated students can apply by sending an email to [email protected]. Please attach a transcript of records and clearly state your prior experiences and why you are interested in this topic.

  • Ideally experience with deep learning libraries like JAX or PyTorch
  • Experience with reinforcement learning is a plus
  • Experience with Linux

References [1] 2S Match Anest2, Roland Johansson Lab (2005), https://www.youtube.com/watch?v=HH6QD0MgqDQ [2] Gelsight Inc., Gelsight Mini, https://www.gelsight.com/gelsightmini/ [3] Hafner, D., Lillicrap, T., Ba, J., & Norouzi, M. (2019). Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603.

Large Vision-Language Neural Networks for Open-Vocabulary Robotic Manipulation


Robots are expected to soon leave their factory/laboratory enclosures and operate autonomously in everyday unstructured environments such as households. Semantic information is especially important when considering real-world robotic applications where the robot needs to re-arrange objects as per a set of language instructions or human inputs (as shown in the figure). Many sophisticated semantic segmentation networks exist [1]. However, a challenge when using such methods in the real world is that the semantic classes rarely align perfectly with the language input received by the robot. For instance, a human language instruction might request a ‘glass’ or ‘water’, but the semantic classes detected might be ‘cup’ or ‘drink’.

Nevertheless, with the rise of large language and vision-language models, we now have capable segmentation models that do not directly predict semantic classes but use learned associations between language queries and classes to give us ’open-vocabulary’ segmentation [2]. Some models are especially powerful since they can be used with arbitrary language queries.

In this thesis, we aim to build on advances in 3D vision-based robot manipulation and large open-vocabulary vision models [2] to build a full pick-and-place pipeline for real-world manipulation. We also aim to find synergies between scene reconstruction and semantic segmentation to determine if knowing the object semantics can aid the reconstruction of the objects and, in turn, aid manipulation.
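As a toy illustration of open-vocabulary matching, the sketch below scores candidate segment embeddings against a language-query embedding by cosine similarity, the way CLIP-style models [2] associate text and image regions; the embeddings here are random placeholders rather than real model outputs.

```python
# Hedged sketch of matching a free-form language query to segment embeddings.
import numpy as np

rng = np.random.default_rng(0)
embed_dim = 512

# Hypothetical per-segment visual embeddings from an open-vocabulary model [2].
segment_embeddings = {"cup": rng.normal(size=embed_dim),
                      "drink": rng.normal(size=embed_dim),
                      "plate": rng.normal(size=embed_dim)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_segment(query_embedding):
    """Return the segment whose embedding is most similar to the query."""
    return max(segment_embeddings,
               key=lambda k: cosine(query_embedding, segment_embeddings[k]))

# A real pipeline would embed an instruction like "pick up the glass" with the
# model's text encoder; here we just reuse a random placeholder vector.
print(best_segment(rng.normal(size=embed_dim)))
```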

Highly motivated students can apply by sending an e-mail expressing their interest to Snehal Jauhri (email: [email protected]) or Ali Younes (email: [email protected]), attaching your letter of motivation and possibly your CV.

Topic in detail : Thesis_Doc.pdf

Requirements: Enthusiasm, ambition, and a curious mind go a long way. There will be ample supervision provided to help the student understand basic as well as advanced concepts. However, prior knowledge of computer vision, robotics, and Python programming would be a plus.

References: [1] Y. Wu, A. Kirillov, F. Massa, W.-Y. Lo, and R. Girshick, “Detectron2”, https://github.com/facebookresearch/detectron2 , 2019. [2] F. Liang, B. Wu, X. Dai, K. Li, Y. Zhao, H. Zhang, P. Zhang, P. Vajda, and D. Marculescu, “Open-vocabulary semantic segmentation with mask-adapted clip,” in CVPR, 2023, pp. 7061–7070, https://github.com/facebookresearch/ov-seg

Dynamic Tiles for Deep Reinforcement Learning


Linear approximators in Reinforcement Learning are well-studied and come with an in-depth theoretical analysis. However, linear methods require defining a set of features of the state to be used by the linear approximation. Unfortunately, the feature construction process is a particularly problematic and challenging task. Deep Reinforcement learning methods have been introduced to mitigate the feature construction problem: these methods do not require handcrafted features, as features are extracted automatically by the network during learning, using gradient descent techniques.

In simple reinforcement learning tasks, however, it is possible to use tile coding as features: Tiles are simply a convenient discretization of the state space that allows us to easily control the generalization capabilities of the linear approximator. The objective of this thesis is to design a novel algorithm for automatic feature extraction that generates a set of features similar to tile coding, but that can arbitrarily partition the state space and deal with arbitrary complex state space, such as images. The idea is to combine the feature extraction problem directly with Linear Reinforcement Learning methods, defining an algorithm that is able both to have the theoretical guarantees and good convergence properties of these methods and the flexibility of Deep Learning approaches.
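For reference, the sketch below shows classical tile coding on a two-dimensional state with a linear Q-function on top, i.e., the fixed discretization that the thesis aims to replace with learned, adaptive tiles; the tiling sizes and offsets are illustrative.

```python
# Hedged sketch of tile coding plus a linear Q-function for a 2D state in [0,1]^2.
import numpy as np

n_tilings, tiles_per_dim = 4, 8
offsets = np.linspace(0.0, 1.0 / tiles_per_dim, n_tilings, endpoint=False)

def tile_features(state):
    """Binary feature vector: one active tile per tiling."""
    features = np.zeros(n_tilings * tiles_per_dim ** 2)
    for k, off in enumerate(offsets):
        idx = np.clip(np.floor((state + off) * tiles_per_dim).astype(int),
                      0, tiles_per_dim - 1)
        flat = idx[0] * tiles_per_dim + idx[1]
        features[k * tiles_per_dim ** 2 + flat] = 1.0
    return features

n_actions = 3
weights = np.zeros((n_actions, n_tilings * tiles_per_dim ** 2))

def q_value(state, action):
    """Linear Q-function on top of the tile features."""
    return weights[action] @ tile_features(state)

print(q_value(np.array([0.3, 0.7]), action=1))
```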

Application Requirements:

  • Curriculum Vitae (CV);
  • A motivation letter explaining the reason for applying for this thesis and academic/career objectives.

Preferred knowledge

  • Knowledge of the Atari environments (ale-py library).

The accepted candidate will:

  • Define a generalization of tile coding working with an arbitrary input set (including images);
  • Design a learning algorithm to adapt the tiles using data of interaction with the environment;
  • Combine feature learning with standard linear methods for Reinforcement Learning;
  • Verify the novel methodology in simple continuous state and discrete actions environments;
  • (Optionally) Extend the experimental analysis to the Atari environment setting.

Deep Learning Meets Teleoperation: Constructing Learnable and Stable Inductive Guidance for Shared Control

This work considers policies as learnable inductive guidance for shared control. In particular, we use the class of Riemannian motion policies (RMPs) [3] and consider them as differentiable optimization layers [4]. We analyze (i) whether RMPs can be pre-trained by learning from demonstrations [5] or reinforcement learning [6] given a specific context, and (ii) whether they can subsequently be employed seamlessly for human-guided teleoperation thanks to their physically consistent properties, such as stability [3]. We believe this step eliminates the laborious process of constructing complex policies and leads to improved and generalizable shared control architectures.
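As a minimal illustration of the policy class, the sketch below combines two hand-written RMPs (a goal attractor and an obstacle repulsor) with the metric-weighted resolution rule used in RMPflow [3]; the individual policies and gains are illustrative placeholders, not learned components.

```python
# Hedged sketch of combining Riemannian motion policies (RMPs): each policy
# provides a desired acceleration and a metric, and the resolved command is
# the metric-weighted average a = (sum M_i)^+ sum M_i a_i.
import numpy as np

def attractor_rmp(x, goal, gain=5.0, damping=2.0, v=None):
    v = np.zeros_like(x) if v is None else v
    accel = gain * (goal - x) - damping * v
    metric = np.eye(len(x))
    return accel, metric

def repulsor_rmp(x, obstacle, scale=1.0):
    diff = x - obstacle
    dist = np.linalg.norm(diff) + 1e-6
    accel = scale * diff / dist ** 3
    metric = (1.0 / dist) * np.eye(len(x))   # stronger weight near the obstacle
    return accel, metric

def combine(rmps):
    """Metric-weighted combination of acceleration policies."""
    M_sum = sum(M for _, M in rmps)
    Ma_sum = sum(M @ a for a, M in rmps)
    return np.linalg.pinv(M_sum) @ Ma_sum

x = np.array([0.0, 0.0])
print(combine([attractor_rmp(x, goal=np.array([1.0, 1.0])),
               repulsor_rmp(x, obstacle=np.array([0.5, 0.0]))]))
```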

Highly motivated students can apply by sending an e-mail expressing your interest to [email protected] and [email protected] , attaching your letter of motivation and possibly your CV.

  • Experience with deep learning libraries (in particular Pytorch)
  • Knowledge in reinforcement learning and/or machine learning

References: [1] Niemeyer, Günter, et al. "Telerobotics." Springer handbook of robotics (2016); [2] Selvaggio, Mario, et al. "Autonomy in physical human-robot interaction: A brief survey." IEEE RAL (2021); [3] Cheng, Ching-An, et al. "RMP flow: A Computational Graph for Automatic Motion Policy Generation." Springer (2020); [4] Jaquier, Noémie, et al. "Learning to sequence and blend robot skills via differentiable optimization." IEEE RAL (2022); [5] Mukadam, Mustafa, et al. "Riemannian motion policy fusion through learnable lyapunov function reshaping." CoRL (2020); [6] Xie, Mandy, et al. "Neural geometric fabrics: Efficiently learning high-dimensional policies from demonstration." CoRL (2023).

Dynamic symphony: Seamless human-robot collaboration through hierarchical policy blending

This work focuses on arbitration between the user and the assistive policy, i.e., shared autonomy. Various works allow the user to influence the dynamic behavior explicitly and therefore cannot satisfy stability guarantees [3]. We pursue the idea of formulating arbitration as a trajectory-tracking problem that implicitly considers the user's desired behavior as an objective [4]. Therefore, we extend the work of Hansel et al. [5], who employed probabilistic inference for policy blending in robot motion control. The proposed method corresponds to a sampling-based online planner that superposes reactive policies given a predefined objective. This approach enables the user to implicitly influence the behavior without injecting energy into the system, thus satisfying stability properties. We believe this step leads to an alternative view of shared autonomy with an improved and generalizable framework.
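To sketch the sampling-based blending idea, the example below samples blend weights between a user command and an assistive policy and softmax-averages the resulting commands under a simple tracking cost, loosely following the inference view of [5]; the policies, cost, and temperature are illustrative placeholders.

```python
# Hedged sketch of sampling-based blending of a user command and an assistive policy.
import numpy as np

rng = np.random.default_rng(0)
goal = np.array([1.0, 1.0])          # assumed goal inferred for the assistive policy

def user_policy(x):
    """Velocity the human currently commands (placeholder)."""
    return np.array([1.0, 0.0])

def assistive_policy(x):
    """Simple attractor towards the inferred goal (placeholder)."""
    return goal - x

def blended_command(x, n_samples=64, temperature=0.1):
    """Sample blend weights, score the resulting commands, softmax-average them."""
    u_user, u_assist = user_policy(x), assistive_policy(x)
    alphas = rng.uniform(size=n_samples)
    commands = alphas[:, None] * u_user + (1 - alphas[:, None]) * u_assist
    # Cost: stay close to the user's command while making progress towards the goal.
    costs = (np.linalg.norm(commands - u_user, axis=1)
             + np.linalg.norm((x + 0.1 * commands) - goal, axis=1))
    weights = np.exp(-costs / temperature)
    weights /= weights.sum()
    return weights @ commands

print(blended_command(np.zeros(2)))
```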

Highly motivated students can apply by sending an e-mail expressing your interest to [email protected] or [email protected] , attaching your letter of motivation and possibly your CV.

References: [1] Niemeyer, Günter, et al. "Telerobotics." Springer handbook of robotics (2016); [2] Selvaggio, Mario, et al. "Autonomy in physical human-robot interaction: A brief survey." IEEE RAL (2021); [3] Dragan, Anca D., and Siddhartha S. Srinivasa. "A policy-blending formalism for shared control." IJRR (2013); [4] Javdani, Shervin, et al. "Shared autonomy via hindsight optimization for teleoperation and teaming." IJRR (2018); [5] Hansel, Kay, et al. "Hierarchical Policy Blending as Inference for Reactive Robot Control." IEEE ICRA (2023).

Feeling the Heat: Igniting Matches via Tactile Sensing and Human Demonstrations

In this thesis, we want to investigate the effectiveness of vision-based tactile sensors for solving dynamic tasks (igniting matches). Since the whole task is difficult to simulate, we directly collect real-world data to learn policies from the human demonstrations [2,3]. We believe that this work is an important step towards more advanced tactile skills.

Highly motivated students can apply by sending an e-mail expressing your interest to [email protected] and [email protected] , attaching your letter of motivation and possibly your CV.

  • Good knowledge of Python
  • Prior experience with real robots and Linux is a plus

References: [1] https://www.youtube.com/watch?v=HH6QD0MgqDQ [2] Learning Compliant Manipulation through Kinesthetic and Tactile Human-Robot Interaction; Klas Kronander and Aude Billard. [3] https://www.youtube.com/watch?v=jAtNvfPrKH8

Inverse Reinforcement Learning for Neuromuscular Control of Humanoids

Within this thesis, the problems of learning from observations and efficient exploration in overactuated systems should be addressed. Regarding the former, novel methods incorporating inverse dynamics models into the inverse reinforcement learning problem [1] should be adapted and applied. To address the problem of efficient exploration in overactuated systems, two approaches should be implemented and compared. The first approach uses a handcrafted action space, which disables and modulates actions in different phases of the gait based on biomechanics knowledge [2]. The second approach uses a stateful policy to incorporate an inductive bias into the policy [3]. The thesis will be supervised in conjunction with Guoping Zhao ([email protected]) from the locomotion lab.

Highly motivated students can apply by sending an e-mail expressing their interest to Firas Al-Hafez ([email protected]), attaching your letter of motivation and possibly your CV. Please make clear why you would like to work on this topic and why you would be the perfect candidate.

Required Qualification : 1. Strong Python programming skills 2. Knowledge in Reinforcement Learning 3. Interest in understanding human locomotion

Desired Qualification : 1. Hands-on experience on robotics-related RL projects 2. Prior experience with different simulators 3. Attendance of the lectures "Statistical Machine Learning", "Computational Engineering and Robotics" and/or "Reinforcement Learning: From Fundamentals to the Deep Approaches"

References: [1] Al-Hafez, F.; Tateo, D.; Arenz, O.; Zhao, G.; Peters, J. (2023). LS-IQ: Implicit Reward Regularization for Inverse Reinforcement Learning, International Conference on Learning Representations (ICLR). [2] Ong, C.F.; Geijtenbeek, T.; Hicks, J.L.; Delp, S.L. (2019). Predicting gait adaptations due to ankle plantarflexor muscle weakness and contracture using physics-based musculoskeletal simulations. PLoS Computational Biology. [3] Srouji, M.; Zhang, J.; Salakhutdinov, R. (2018). Structured Control Nets for Deep Reinforcement Learning, International Conference on Machine Learning (ICML).

Robotic Tactile Exploratory Procedures for Identifying Object Properties


Goals of the thesis

  • Literature review of robotic EPs for identifying object properties [2,3,4]
  • Develop and implement robotic EPs for a Digit tactile sensor
  • Compare performance of robotic EPs with human EPs

Desired Qualifications

  • Interested in working with real robotic systems
  • Python programming skills

Literature [1] Lederman and Klatzky, “Haptic perception: a tutorial” [2] Seminara et al., “Active Haptic Perception in Robots: A Review” [3] Chu et al., “Using robotic exploratory procedures to learn the meaning of haptic adjectives” [4] Kerzel et al., “Neuro-Robotic Haptic Object Classification by Active Exploration on a Novel Dataset”

Scaling learned, graph-based assembly policies


  • scaling our previous methods to incorporate mobile manipulators or the Kobo bi-manual manipulation platform. The increased workspace of both would allow for handling a wider range of objects
  • The approach of [2] has been shown to be more powerful; however, it requires running a MILP for every desired structure. Another idea could therefore be to investigate approaches that approximate this solution
  • adapting the methods to handle more irregular-shaped objects / investigate curriculum learning

Highly motivated students can apply by sending an e-mail expressing your interest to [email protected] , attaching your letter of motivation and possibly your CV.

  • Experience with deep learning libraries (in particular Pytorch) is a plus
  • Experience with reinforcement learning / having taken Robot Learning is also a plus

References: [1] Learn2Assemble with Structured Representations and Search for Robotic Architectural Construction; Niklas Funk et al. [2] Graph-based Reinforcement Learning meets Mixed Integer Programs: An application to 3D robot assembly discovery; Niklas Funk et al. [3] Structured agents for physical construction; Victor Bapst et al.

Long-Horizon Manipulation Tasks from Visual Imitation Learning (LHMT-VIL): Algorithm


The proposed architecture can be broken down into the following sub-tasks: 1. Multi-object 6D pose estimation from video: Identify the object 6D poses in each video frame to generate the object trajectories 2. Action segmentation from video: Classify the action being performed in each video frame 3. High-level task representation learning: Learn the sequence of robotic movement primitives with the associated object poses such that the robot completes the demonstrated task 4. Low-level movement primitives: Create a database of low-level robotic movement primitives which can be sequenced to solve the long-horizon task

Desired Qualification: 1. Strong Python programming skills 2. Prior experience in Computer Vision and/or Robotics is preferred

Long-Horizon Manipulation Tasks from Visual Imitation Learning (LHMT-VIL): Dataset

During the project, we will create a large-scale dataset of videos of humans demonstrating industrial assembly sequences. The dataset will contain information of the 6D poses of the objects, the hand and body poses of the human, the action sequences among numerous other features. The dataset will be open-sourced to encourage further research on VIL.

[1] F. Sener, et al. "Assembly101: A Large-Scale Multi-View Video Dataset for Understanding Procedural Activities". CVPR 2022. [2] P. Sharma, et al. "Multiple Interactions Made Easy (MIME) : Large Scale Demonstrations Data for Imitation." CoRL, 2018.

Adaptive Human-Robot Interactions with Human Trust Maximization


  • Good knowledge of Python and/or C++;
  • Good knowledge in Robotics and Machine Learning;
  • Good knowledge of Deep Learning frameworks, e.g., PyTorch;

References: [1] Xu, Anqi, and Gregory Dudek. "Optimo: Online probabilistic trust inference model for asymmetric human-robot collaborations." ACM/IEEE HRI, IEEE, 2015; [2] Kwon, Minae, et al. "When humans aren’t optimal: Robots that collaborate with risk-aware humans." ACM/IEEE HRI, IEEE, 2020; [3] Chen, Min, et al. "Planning with trust for human-robot collaboration." ACM/IEEE HRI, IEEE, 2018; [4] Poole, Ben et al. “On variational bounds of mutual information”. ICML, PMLR, 2019.

Causal inference of human behavior dynamics for physical Human-Robot Interactions


Highly motivated students can apply by sending an e-mail expressing your interest to [email protected], attaching your letter of motivation and possibly your CV.

  • Good knowledge of Robotics;
  • Good knowledge of Deep Learning frameworks, e.g., PyTorch

References: [1] Li, Q., Chalvatzaki, G., Peters, J., Wang, Y. "Directed Acyclic Graph Neural Network for Human Motion Prediction." IEEE International Conference on Robotics and Automation (ICRA), 2021. [2] Löwe, S., Madras, D., Zemel, R., and Welling, M. "Amortized causal discovery: Learning to infer causal graphs from time-series data." arXiv preprint arXiv:2006.10833, 2020. [3] Yang, W., Paxton, C., Mousavian, A., Chao, Y.W., Cakmak, M., and Fox, D. "Reactive human-to-robot handovers of arbitrary objects." arXiv preprint arXiv:2011.08961, 2020.

Incorporating First and Second Order Mental Models for Human-Robot Cooperative Manipulation Under Partial Observability

Scope: Master Thesis Advisor: Dorothea Koert , Joni Pajarinen Added: 2021-06-08 Start: ASAP


The ability to model the beliefs and goals of a partner is an essential part of cooperative tasks. While humans develop theory-of-mind models for this purpose at a very early age [1], it is still an open question how to implement and make use of such models for cooperative robots [2,3,4]. In particular, human-robot collaboration in shared workspaces could potentially profit from such models, e.g., if the robot can detect and react to planned human goals or a human's false beliefs during task execution. To make such robots a reality, the goal of this thesis is to investigate the use of first- and second-order mental models in a cooperative manipulation task under partial observability. Partially observable Markov decision processes (POMDPs) and interactive POMDPs (I-POMDPs) [5] define an optimal solution to the mental modeling task and may provide a solid theoretical basis for modeling. The thesis may also compare related approaches from the literature and set up an experimental design for evaluation with the bi-manual robot platform Kobo.

Highly motivated students can apply by sending an e-mail expressing your interest to [email protected] attaching your CV and transcripts.

References: [1] Wimmer, H., and Perner, J. "Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception" (1983). [2] Devin, Sandra, and Rachid Alami. "An implemented theory of mind to improve human-robot shared plans execution" (2016). [3] Rabinowitz, Neil, et al. "Machine theory of mind" (2018). [4] Brooks, Connor, and Daniel Szafir. "Building second-order mental models for human-robot interaction" (2019). [5] Doshi, Prashant, Xia Qu, Adam Goodie, and Diana Young. "Modeling recursive reasoning by humans using empirically informed interactive POMDPs" (2010).


Ph.D. in Robotics

The Institute for Robotics and Intelligent Machines (IRIM) serves as the flagship for Tech's robotics efforts and therefore has an integral relationship with the program. Almost all IRIM faculty members serve as research advisors to students pursuing the robotics degree.

The program supports Tech’s mission to provide education in disciplines related to science, technology, and interdisciplinary areas, and to recruit and educate outstanding students who will provide leadership in a world that is increasingly dependent on technology. Currently, Tech has more than 40 faculty members actively engaged in the Ph.D. robotics program.

Admission Requirements

The Georgia Tech criteria used in determining each applicant's eligibility for consideration include:

  • A bachelor’s degree or its equivalent (prior to matriculation) from a recognized institution; graduation in the upper quarter of their class; students must show evidence of preparation in their chosen field sufficient to ensure profitable graduate study;
  • GRE scores (General Test is required for all; Subject Tests in Computer Science, Math or Physics recommended but not required);
  • For international applicants, satisfactory scores on the Test of English as a Foreign Language (TOEFL). Minimum scores are 100 (Internet-based test), 250 (computer-based) or 600 (paper-based).

Students enroll for the Robotics Ph.D. Program through one of the participating units:

  • Aerospace Engineering
  • Biomedical Engineering
  • Electrical and Computer Engineering
  • Mechanical Engineering

Students should indicate that they are applying for the Robotics Program through that unit by marking a check box. At a minimum, students must satisfy all of the specific admission requirements of the home unit.

The Robotics Ph.D. Program Committee will make final admission decisions in coordination with the home units.

Decisions are based on a combination of factors:

  • Academic degrees and records
  • Statement of purpose
  • Letters of recommendation
  • GRE and TOEFL test scores
  • Relevant work experience

Also considered is the appropriateness of the applicant’s goals to the Robotics Ph.D. Program, their expected abilities in carrying out original research, and the faculty research interests.

Complete the online application.

Program of Study

The main emphasis of the  Robotics Ph.D. program  is the successful completion of an original and independent research thesis. The degree requirements are designed around this goal.

Minimum Requirements

  • Completion of 36 semester hours of courses with a letter grade
  • Passing a comprehensive qualifying exam with written and oral components.
  • Successfully conducting, documenting, and defending a piece of original research culminating in a doctoral thesis.

Ph.D. Candidacy

Before a student completes all of these requirements, Georgia Tech defines the milestone of Ph.D. candidacy. Admission to candidacy requires that the student:

  • Complete all course requirements (except the minor);
  • Achieve a satisfactory scholastic record;
  • Pass the comprehensive examination;
  • Submit and receive approval naming the dissertation topic and delineating the research topic.

Core Area Courses

The core areas of robotics consist of: Mechanics, Control, Perception, Artificial Intelligence, Autonomy and Human-Robot Interaction (HRI). They are used to select three foundation courses and three targeted elective courses. Visit phdrobotics.gatech.edu/program for a full list of core area courses.

Qualifying Exam

The purpose of the comprehensive exam is to assess the student’s general knowledge of the degree area and specialized knowledge of the chosen research area. The comprehensive examination provides an early assessment of the student's potential to satisfactorily complete the requirements for the doctoral degree. As such, it requires that fundamental principles be mastered and integrated so that they can be applied to solving problems relevant to robotics.

After three regular semesters (Fall or Spring) from entering the Ph.D. program, the student must take the comprehensive examination at the next scheduled offering, usually during the fourth regular semester. If the comprehensive examination is failed, the student may have one additional opportunity at the next scheduled offering. The examination will be offered at least once every year.

The comprehensive exam is a written and oral examination and is administered by a faculty committee, selected by the thesis advisor in consultation with the student, and approved by the Robotics Program Committee. The committee consists of:

  • Three faculty members consistent with the student's graduate coursework and research area.
  • The thesis advisor as a non-voting observer.

