Yiting Wang

I am a final-year undergraduate student in Automatic Control and Robotics at Poznan University of Technology, Poland. My research interests focus on robotics, optimal control, and reinforcement learning, with an emphasis on robotic manipulation and planning for complex, dynamic tasks.
At my home university, I have worked on projects involving legged robots under the supervision of Prof. Krzysztof Walas, and on robotic air hockey planning, the topic of my diploma thesis, under the supervision of Dr. Piotr Kicki. These experiences have strengthened my expertise in robotic systems and intelligent planning algorithms.
Currently, I am a research intern at the IRIS Lab, Arizona State University, under the supervision of Prof. Wanxin Jin. My research there, which began in the summer of 2024, focuses on optimal control and robotic manipulation, with the goal of improving the precision and efficiency of robotic systems.
Previously, I worked as a research assistant in the KIMED OEKOSYSTEM project at the University of Lübeck and the University Hospital Schleswig-Holstein in Germany, supervised by Prof. Floris Ernst and Daniel Wulff. My work involved developing deep learning solutions for needle detection and segmentation in 3D ultrasound images, specifically for kidney interventions. I implemented models, conducted experiments, and fine-tuned solutions to enhance accuracy and reliability in medical imaging workflows.
news
| Dec 12, 2024 | Yiting’s website is created! |
|---|---|
| Sep 12, 2024 | [Research] *Adaptive Neural Gradient Fields for Robot Planning and Control with Hardware in the Loop.* We present an optimization-based approach to robot planning and control that uses neural networks to learn and model gradient fields, enabling differentiation through hardware in the loop for real-world robotic systems. In the Gymnasium HalfCheetah-v4 environment, the policy achieves stable forward locomotion with precise endpoint control after only 100 training iterations, optimizing over a 10-timestep horizon: the trained agent maintains continuous forward movement while stopping accurately at designated target positions, demonstrating both trajectory following and terminal-state convergence. This makes the method a sample-efficient option for complex locomotion tasks that require precise position control. |
| Jun 08, 2024 | [Project] *Reinforcement Learning for Robust Locomotion of Legged Robots on Versatile Terrain Using PPO.* This project builds on Rapid Motor Adaptation (RMA), leveraging reinforcement learning to improve legged robot locomotion on diverse terrains. Video cut: |
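The gradient-field item above optimizes controls over a short (10-timestep) horizon by following gradients of a cost. As a toy illustration only, and not the actual method, here is a minimal sketch of short-horizon gradient-based planning: a 1-D double integrator stands in for the learned gradient field and the HalfCheetah-v4 environment, and all dynamics, costs, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

H, dt, target = 10, 0.1, 1.0  # 10-step horizon, toy 1-D target position

def rollout(u, x0=0.0, v0=0.0):
    """Simulate a 1-D double integrator under the control sequence u."""
    x, v = x0, v0
    for a in u:
        v += a * dt
        x += v * dt
    return x

def terminal_cost(u):
    """Squared distance of the terminal position from the target."""
    return (rollout(u) - target) ** 2

u = np.zeros(H)      # control sequence to optimize
lr, eps = 5.0, 1e-5
for _ in range(100):  # 100 optimization iterations, echoing the news item
    # central finite-difference gradient of the terminal cost w.r.t. each control
    g = np.array([
        (terminal_cost(u + eps * e) - terminal_cost(u - eps * e)) / (2 * eps)
        for e in np.eye(H)
    ])
    u -= lr * g

print(round(rollout(u), 3))  # terminal position, close to the 1.0 target
```

In the paper's setting, a neural network supplies the gradients instead of finite differences over a hand-written model; the structure of the loop (roll out, take a gradient, update the controls) is the same.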
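The PPO-based locomotion project above relies on PPO's clipped surrogate objective, which limits how far each update can move the policy. A minimal sketch of that objective follows; the probability ratios and advantage estimates are made-up numbers for illustration, not project data.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate: mean of min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(ratio * advantage, clipped).mean()

ratio = np.array([0.9, 1.5, 1.1])       # new/old policy probability ratios
advantage = np.array([1.0, 1.0, -1.0])  # advantage estimates
print(ppo_clip_objective(ratio, advantage))
```

The second sample shows the clip at work: its ratio of 1.5 is capped at 1.2, so the objective stops rewarding further movement in that direction.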