About Me

I am a second-year Ph.D. student in the Computer Science Department at the University of California, Los Angeles. I am broadly interested in cloud computing: I build large-scale ML training systems for the cloud and develop system support for cloud applications. Currently, I am working on resource disaggregation for Cloud 3.0. I am a member of the SOLAR group, co-advised by Professor Harry Xu and Professor Miryung Kim.

Prior to graduate school, I earned my B.E. in Computer Science from Tsinghua University in 2019, where I was a research intern in the PACMAN group. I also worked with Professor Umut Acar on scheduling algorithms for multithreaded parallel computing in 2018.


Research Experience

Dorylus: Affordable and Scalable GNN Training over Billion-Edge Graphs

Together with John, I built Dorylus, a distributed system for training graph neural networks (GNNs). Uniquely, Dorylus takes advantage of serverless computing to increase scalability at low cost.

Dorylus outperformed existing systems, running up to 3.8x faster at up to 10.7x lower cost, and delivered 2.05x more performance per dollar than our GPU-based variant.

PMALLOC: An Efficient Allocator for Non-volatile Memory (NVM)

This was my undergraduate thesis project, selected as one of eight finalists for the best thesis award in the CS department.

Efficient Scheduling with Private Deques in Multiprogrammed Environments

This work improved the performance of the task scheduler in MPL, a compiler for Parallel ML (a variant of Standard ML).

Crash Consistency in Non-volatile Memory (NVM) for High Performance Computing (HPC)

I worked as an undergraduate research assistant with two Ph.D. students on this project.



Last updated 01/2021