DianGroup Robotics Lab
Huazhong University of Science and Technology

Hi! We are the DianGroup Robotics Lab at Huazhong University of Science and Technology.

We focus on research in robotics and artificial intelligence, including reinforcement learning, robot manipulation, and digital twin systems.


News
2024
Oct 09  One paper was accepted by ROBIO 2024. Congratulations to Ziyuan! Project GitHub
Selected Publications (view all)
Deep Reinforcement Learning for Sim-to-Real Transfer in a Humanoid Robot Barista

Ziyuan Wang, Yefan Lin, Leyu Zhao, Jiahang Zhang, Xiaojun Hei# (# corresponding author)

IEEE International Conference on Robotics and Biomimetics (ROBIO) 2024

In this paper, we study the coffee-making application as an example. We propose a reinforcement learning robot manipulation method with visual perception to bridge the sim-to-real gap, and we construct a high-fidelity coffee-making digital twin simulation environment.
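A common ingredient in sim-to-real transfer (not necessarily the method used in this paper) is domain randomization: perturbing the simulator's physics parameters each training episode so the learned policy does not overfit one set of dynamics. A minimal sketch, with illustrative parameter names and values:

```python
import random

def randomize_dynamics(base, spread=0.2):
    """Perturb each nominal physics parameter by up to +/- spread
    (domain randomization), so every training episode in simulation
    exposes the policy to slightly different dynamics."""
    return {k: v * (1 + random.uniform(-spread, spread)) for k, v in base.items()}

# Nominal simulator parameters (hypothetical values, not from the paper).
base_params = {"cup_mass_kg": 0.25, "table_friction": 0.6, "motor_gain": 1.0}

random.seed(0)
for _ in range(3):
    # Each episode would run with its own perturbed dynamics.
    episode_params = randomize_dynamics(base_params)
    print({k: round(v, 3) for k, v in episode_params.items()})
```

The randomized dictionary would be fed into the simulator's configuration before each rollout; the wider the spread the policy survives, the more likely it transfers to the real robot's unmodeled dynamics.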


Self-Perceptive Framework: A Manipulation Framework with Visual Compensation for Zero Position Error

Ziyuan Wang, Yefan Lin, Jiahang Zhang, Changjiang Han, Xiaojun Hei# (# corresponding author)

IEEE International Conference on Robotics and Automation (ICRA) 2025. Under review.

In this paper, to improve the success rate of robot manipulation tasks affected by the zero-position problem and to reduce the frequency of recalibration, we propose the Self-Perceptive Framework (SPF), a robot manipulation framework that uses reinforcement learning and incorporates self-perceptive information.
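For intuition only (this is not SPF itself): a zero-position error means the robot's encoder-reported joint angles carry a constant per-joint offset. A classical baseline for visual compensation is to estimate that offset as the mean difference between reported and visually observed joint angles, then subtract it. A toy sketch with hypothetical two-joint data:

```python
def estimate_zero_offsets(reported, observed):
    """Estimate a constant per-joint zero-position offset as the mean
    difference between encoder-reported and visually observed angles."""
    n, dims = len(reported), len(reported[0])
    return [sum(reported[i][j] - observed[i][j] for i in range(n)) / n
            for j in range(dims)]

def compensate(angles, offsets):
    """Subtract the estimated offsets so angles refer to the true zero."""
    return [a - o for a, o in zip(angles, offsets)]

# Toy data: the encoder over-reports joint 0 by 0.05 rad
# and under-reports joint 1 by 0.02 rad.
reported = [[0.55, 0.18], [1.05, 0.48], [1.55, 0.78]]
observed = [[0.50, 0.20], [1.00, 0.50], [1.50, 0.80]]

offsets = estimate_zero_offsets(reported, observed)
print([round(o, 3) for o in offsets])                             # → [0.05, -0.02]
print([round(a, 3) for a in compensate([0.85, 0.28], offsets)])   # → [0.8, 0.3]
```

A fixed offset estimate like this is exactly what drifts over time and forces recalibration; a learned framework that folds perception into the policy aims to avoid that repeated offline step.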


All publications