Long Chen

Senior Research Director &
Principal Scientist

Xiaomi

I am a Senior Research Director and Principal Scientist at Xiaomi, where my mission is bringing Artificial Intelligence to the physical world. I lead research in Embodied Intelligence, with a focus on pioneering the next generation of Autonomous Driving and Robotics. My work is at the intersection of Computer Vision, end-to-end learning systems, and Vision-Language-Action (VLA) models, aimed at building intelligent agents that can perceive, reason, and act in complex, real-world environments.

Previously, I was a Staff Scientist at Wayve, where I spearheaded the development of Wayve’s VLA models for the next generation of End-to-End Autonomous Driving. Before that, as a Research Engineer at Lyft Level 5, I led the fleet learning initiative to pretrain the ML planner for Lyft’s self-driving cars using large-scale, crowd-sourced fleet data.

My work Driving-with-LLMs [ICRA 2024; over 250 citations and 500 GitHub stars] was one of the first works exploring Large Language Models (LLMs) for Autonomous Driving; LINGO [ECCV 2024] is the first VLA end-to-end driving model tested on public roads in London; and SimLingo [CVPR 2025] won 1st place at the CVPR 2024 CARLA Autonomous Driving Challenge. My work has been widely featured in media outlets such as Fortune, the Financial Times, MIT Technology Review, Nikkei Robotics, and 36kr.

Interests
  • Artificial Intelligence
  • Computer Vision
  • Multi-modal Large Language Models (LLMs)
  • Robotics
Education
  • PhD in Computer Vision / Machine Learning, 2015 - 2018

    Bournemouth University, UK

  • MSc in Medical Image Computing, 2013 - 2014

    University College London (UCL), UK

  • BSc in Biomedical Engineering, 2009 - 2013

    Dalian University of Technology (DUT), China

Experience

Wayve
Staff Scientist
August 2021 – Present London, UK

AV2.0 - building the next generation of self-driving cars with End-to-End (E2E) Machine Learning, Vision-Language-Action (VLA) models.

[CVPR 2024: End-to-End Tutorial] [ICRA 2023: End-to-End Workshop]

Lyft Level 5
Research Engineer
May 2018 – July 2021 London, UK

Autonomy 2.0 - Data-Driven Planning models for Lyft’s self-driving vehicles.

[CVPR 2021: Autonomy 2.0 Tutorial] [ICRA 2021: Crowd-sourced Data-Driven Planner]

Featured Publications

Recent Publications

Full publication list can be found on Google Scholar.
LingoQA: Video Question Answering for Autonomous Driving
Autonomous driving has long faced a challenge with public acceptance due to the lack of explainability in the decision-making process. …
Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving
Large Language Models (LLMs) have shown promise in the autonomous driving sector, particularly in generalization and interpretability. …
One Thousand and One Hours: Self-driving Motion Prediction Dataset
Motivated by the impact of large-scale datasets on ML systems, we present the largest self-driving dataset for motion prediction to …
SimNet: Learning reactive self-driving simulations from real-world observations
In this work, we present a simple end-to-end trainable machine learning system capable of realistically simulating driving experiences. …
What data do we need for training an AV motion planner?
We investigate what grade of sensor data is required for training an imitation-learning-based AV planner on human expert demonstrations. …
Recent Developments and Future Challenges in Medical Mixed Reality
Mixed Reality (MR) is of increasing interest within technology-driven modern medicine but is not yet used in everyday practice. This …