I received my Bachelor's degree (summa cum laude) in Mechanical Engineering from Seoul National University.
Previously, I was a full-time research scientist at NAVER LABS,
developing machine/reinforcement learning algorithms to make robot arms like
AMBIDEX perform various daily tasks. I also spent some time at Saige Research as an undergraduate research intern.
My dream is to see robots performing all sorts of complex,
long-horizon, and contact-rich manipulation tasks in the real world, just like we humans do every day.
One scalable way to achieve this dream is to pretrain robots with
(a) a repertoire of reusable low-level motor control skills and
(b) an intelligence that can temporally or spatially compose such skills to complete any given unseen task.
What kinds of low-level manipulation skills should we pretrain in advance, and how should we train them?
With such a reusable repertoire of skills, how should we train an intelligence
that can orchestrate them to perform longer-horizon tasks? My upcoming work will mainly address these questions!
This course provides a practical introduction to training robots using data-driven methods.
Key topics include data collection methods for robotics, policy training methods, and the use of simulated environments for robot learning.
Throughout the course, students will gain hands-on experience collecting robot data, training policies, and evaluating their performance.
An app for Apple Vision Pro that streams the user's head, wrist, and finger tracking results to any machine connected to the same network. It can be used to (a) teleoperate robots using human motions and (b) collect datasets of humans navigating and manipulating the real world!
DART is a teleoperation platform that leverages cloud-based simulation and augmented reality (AR) to revolutionize robotic data collection.
It enables higher data collection throughput with reduced physical fatigue and facilitates robust policy transfer to real-world scenarios.
All datasets are stored in the DexHub cloud database, providing an ever-growing resource for robot learning.
Most robotics practitioners spend more time shaping environments (e.g., rewards, observation/action spaces, low-level controllers, simulation dynamics) than tuning RL algorithms to obtain a desirable controller. We posit that the community should focus more on (a) automating environment shaping procedures and/or (b) developing stronger RL algorithms that can tackle unshaped environments.
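As a purely illustrative sketch of what "environment shaping" looks like in practice (not code from any project above), the snippet below wraps a hypothetical sparse-reward reaching task with a hand-tuned dense distance term; the environment id "HypotheticalReach-v0" and the observation keys are assumptions.

# Minimal sketch of reward shaping via an environment wrapper.
# The environment id and observation layout below are hypothetical.
import gymnasium as gym
import numpy as np

class ShapedReachReward(gym.Wrapper):
    """Adds a dense distance-to-goal term on top of a sparse success reward."""

    def __init__(self, env, distance_weight=1.0):
        super().__init__(env)
        self.distance_weight = distance_weight

    def step(self, action):
        obs, sparse_reward, terminated, truncated, info = self.env.step(action)
        # Assumed observation layout: end-effector position and goal position.
        ee_pos, goal_pos = obs["ee_pos"], obs["goal_pos"]
        shaped = -self.distance_weight * np.linalg.norm(ee_pos - goal_pos)
        return obs, sparse_reward + shaped, terminated, truncated, info

# env = ShapedReachReward(gym.make("HypotheticalReach-v0"))

Automating this kind of hand-tuning, or making RL robust enough to succeed without it, is exactly the trade-off discussed above.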
An algorithm that discovers, from scratch, a diverse and useful set of skills that are inherently safe to compose for unseen downstream tasks.
Considering safety during the skill discovery phase is a must when solving safety-critical downstream tasks.
The drawing robot ARTO-1 performs complex drawings in the real world by learning low-level stroke-drawing skills, which require delicate force control, from human demonstrations. This approach eases the planning required to produce an artistic drawing.
We detect collisions for robot manipulators using unsupervised anomaly detection methods. Compared to supervised approaches, this approach does not require collision datasets and can even detect unseen collision types.
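As a rough illustration of the general idea (not the exact model used in this work), the sketch below fits a PCA-based anomaly detector on collision-free joint-torque windows only, and flags windows whose reconstruction error exceeds a simple 3-sigma threshold; the window format, model choice, and threshold rule are all assumptions.

# Minimal sketch: unsupervised anomaly detection on proprioceptive signals.
# Trained on collision-free data only; high reconstruction error flags a collision.
import numpy as np
from sklearn.decomposition import PCA

def fit_detector(collision_free_windows, n_components=8):
    """collision_free_windows: array of shape (num_windows, window_len * num_joints)."""
    pca = PCA(n_components=n_components).fit(collision_free_windows)
    recon = pca.inverse_transform(pca.transform(collision_free_windows))
    errors = np.linalg.norm(collision_free_windows - recon, axis=1)
    threshold = errors.mean() + 3.0 * errors.std()  # simple 3-sigma rule
    return pca, threshold

def is_collision(pca, threshold, torque_window):
    """Flag a torque window whose reconstruction error exceeds the threshold."""
    window = torque_window.reshape(1, -1)
    recon = pca.inverse_transform(pca.transform(window))
    return np.linalg.norm(window - recon) > threshold

Because the detector only models nominal (collision-free) behavior, any sufficiently large deviation is flagged, which is why unseen collision types can still be detected.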