BEng, MEng (Tsinghua); PhD (Toronto)
Department of Computer Science
The University of Hong Kong


I am a Professor in the Department of Computer Science at the University of Hong Kong. I received my Ph.D. degree from the Department of Electrical and Computer Engineering at the University of Toronto, Canada, and both my M.Engr. and B.Engr. degrees from the Department of Computer Science and Technology at Tsinghua University, China.

Research Interests

My current research interests span cloud computing, distributed machine learning systems, distributed learning algorithms, and intelligent technologies for elderly care. My research features performance modeling and algorithm design for network systems using optimization and machine learning methods, as well as the design and implementation of systems based on these methods. Please see my research projects and publications for more details.


Phone: (852) 2857 8459
Fax: (852) 2559 8447
Office: Room 427, Chow Yei Ching Building
Mailing address:
Department of Computer Science
The University of Hong Kong
Pokfulam Road
Hong Kong


**[Nov 2022] I am looking for self-motivated, well-organized PhD students (for Fall 2023), postdoc researchers, and RAs (anytime) to work on a variety of topics in distributed machine learning algorithms and systems, AI technologies and systems for smart elderly care, and 3D modeling/streaming for virtual reality applications. Candidates with a solid background and/or strong interest in mathematical modeling and analysis, and candidates with solid programming skills and/or strong interest in building systems, are both very welcome. Please contact me by email with your CV (please include your GPA and rank).**

[Sep 15, 2022] Yangrui's paper "SAPipe: Staleness-Aware Pipeline for Data Parallel DNN Training" has been accepted to NeurIPS 2022. Congratulations!

[Sep 6, 2022] Shiwei's paper "Accelerating Large-Scale Distributed Neural Network Training with SPMD Parallelism" has been accepted to ACM SOCC 2022. Congratulations!

[Jul 9, 2022] Yangrui's paper "BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing" (co-first-authored with a Tsinghua collaborator) has been accepted to USENIX NSDI 2023. Congratulations!

[Jan 15, 2022] Hanpeng, Yuchen and Chenyu's paper "A Generic Performance Diagnosis and Optimization Toolkit for Expediting Distributed DNN Training" has been accepted to MLSys 2022. Congratulations!

[December 4, 2021] Ziyue's two papers have been accepted to IEEE INFOCOM 2022. Congratulations!

[August 2021] Xiaodong Yi has graduated with his Ph.D. and joined Tencent in Shenzhen as an AI technology researcher. Congratulations!

[March 25, 2021] I received an Amazon Research Award for carrying out research on compilation optimization in distributed DNN training 😃

[December 5, 2020] Zhe's first paper "Near-Optimal Topology-adaptive Parameter Synchronization in Distributed DNN Training" has been accepted to IEEE INFOCOM 2021. Congratulations!

[November 16, 2020] Yixin Bao has graduated with her Ph.D. and joined Facebook in Seattle, U.S., as a research scientist. Congratulations!

[October 28, 2020] Xiaodong, Shiwei and Ziyue's paper "Optimizing Distributed Training Deployment in Heterogeneous GPU Clusters" has been accepted to ACM CoNEXT 2020. Congratulations!

[August 28, 2020] Xiaodong and Ziyue's paper "Fast Training of Deep Learning Models over Multiple GPUs" has been accepted to ACM/IFIP Middleware 2020. Congratulations!

[August 9, 2020] Yangrui, Yanghua and Yixin's paper "Elastic Parameter Server Load Distribution in Deep Learning Clusters" has been accepted to ACM SOCC 2020. Congratulations!

© 2007-2022 Chuan Wu. Last updated November 2022.