I am a final-year Ph.D. candidate in Computer Science at UCLA, advised by Prof. Cho-Jui Hsieh. My research interests lie in automated and efficient machine learning. Prior to UCLA, I received my B.Eng. degree in 2019 from the Department of Electronic Engineering, Tsinghua University.
[02/2023] Check out our Lion optimizer, discovered by symbolic program search.
[05/2022] I was invited by Citadel Securities to attend their Ph.D. Summit.
[01/2022] Three papers (1 spotlight) were accepted to ICLR'22.
[06/2021] I joined Google Research, Brain Team as a student researcher.
[02/2021] Our paper on “robust and accurate object detection” was accepted to CVPR'21.
[01/2021] Two papers (1 oral) were accepted to ICLR'21, with DARTS-PT winning the Outstanding Paper Award.
[07/2020] I started my internship at Google Research, Perception Team.
[05/2020] Our paper on “stabilizing neural architecture search” was accepted to ICML'20.
* indicates equal contribution
Red Teaming Language Model Detectors with Language Models
Z. Shi*, Y. Wang*, F. Yin*, X. Chen, K. Chang, C. Hsieh
Symbol Tuning Improves In-Context Learning in Language Models
J. Wei, L. Hou, A. Lampinen, X. Chen, D. Huang, Y. Tay, X. Chen, Y. Lu, D. Zhou, T. Ma, Q. Le
Symbolic Discovery of Optimization Algorithms
X. Chen*, C. Liang*, D. Huang, E. Real, K. Wang, Y. Liu, H. Pham, X. Dong, T. Luong, C. Hsieh, Y. Lu, Q. Le
[Code] [PyTorch implementation by lucidrains] [Timm] [Optax] [Praxis] [Keras] [T5X] [Twitter #1] [Twitter #2] [Synced]
- Lion has been successfully deployed in production systems such as Google’s search ads CTR model.
- Lion has been widely adopted by the community, e.g., MosaicML used it to train their LLMs.
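For readers curious how simple the discovered algorithm is, here is a minimal NumPy sketch of one Lion step, following the update rule in the paper (sign of an interpolated momentum, a decoupled weight-decay term, and an EMA momentum update). The function name `lion_update` and the default hyperparameters are illustrative, not taken from any released codebase; for production use, see the official code and the Optax/Timm/Keras implementations linked above.

```python
import numpy as np

def lion_update(params, grads, momentum, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion step (illustrative sketch, not the official implementation)."""
    # Update direction: sign of an interpolation between the momentum and the
    # current gradient, so every coordinate moves by the same magnitude lr.
    update = np.sign(beta1 * momentum + (1.0 - beta1) * grads)
    # Decoupled weight decay, applied directly to the parameters.
    new_params = params - lr * (update + wd * params)
    # Momentum tracks an exponential moving average of gradients with beta2.
    new_momentum = beta2 * momentum + (1.0 - beta2) * grads
    return new_params, new_momentum
```

Because the update is a sign, Lion typically uses a smaller learning rate and a larger weight decay than AdamW, and it stores only one state vector (the momentum) per parameter.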
Random Sharpness-Aware Minimization
Y. Liu, S. Mai, M. Cheng, X. Chen, C. Hsieh, Y. You
Towards Efficient and Scalable Sharpness-Aware Minimization
Y. Liu, S. Mai, X. Chen, C. Hsieh, Y. You
CVPR 2022 [Twitter]
When Vision Transformers Outperform ResNets without Pre-Training or Strong Data Augmentations
X. Chen, C. Hsieh, B. Gong
ICLR 2022 (spotlight) [JAX Checkpoint] [PyTorch Checkpoint] [Twitter]
Concurrent Adversarial Learning for Large-Batch Training
Y. Liu, X. Chen, M. Cheng, C. Hsieh, Y. You
Learning to Schedule Learning Rate with Graph Neural Networks
Y. Xiong, L. Lan, X. Chen, R. Wang, C. Hsieh
RANK-NOSH: Efficient Predictor-Based Architecture Search via Non-Uniform Successive Halving
R. Wang, X. Chen, M. Cheng, X. Tang, C. Hsieh
Robust and Accurate Object Detection via Adversarial Learning
X. Chen, C. Xie, M. Tan, L. Zhang, C. Hsieh, B. Gong
CVPR 2021 [TensorFlow Checkpoint] [Colab] [Twitter]
Rethinking Architecture Selection in Differentiable NAS
R. Wang, M. Cheng, X. Chen, X. Tang, C. Hsieh
ICLR 2021 (oral, outstanding paper award) [Code]
Stabilizing Differentiable Architecture Search via Perturbation-Based Regularization
X. Chen, C. Hsieh
ICML 2020 [Code]
Efficient Neural Interaction Function Search for Collaborative Filtering
Q. Yao*, X. Chen*, J. Kwok, Y. Li, C. Hsieh
WWW 2020 [Code]
Neural Feature Search: A Neural Architecture for Automated Feature Engineering
X. Chen*, Q. Lin*, C. Luo*, X. Li, H. Zhang, Y. Xu, Y. Dang, K. Sui, X. Zhang, B. Qiao, W. Zhang, W. Wu, M. Chintalapati, D. Zhang
Cross-Domain Recommendation without Sharing User-Relevant Data
C. Gao, X. Chen, F. Feng, K. Zhao, X. He, Y. Li, D. Jin
Neural Multi-Task Recommendation from Multi-Behavior Data
C. Gao, X. He, D. Gan, X. Chen, F. Feng, Y. Li, T. Chua, D. Jin
[02/2022] Meta Fellowship Finalist
[01/2022] Amazon Science Fellowship
[03/2021] ICLR Outstanding Paper Award
[06/2019] Outstanding Graduate & Bachelor Thesis, Tsinghua University
[11/2018] 2nd place (feedback phase), NIPS AutoML Challenge
[06/2018] Qualcomm Scholarship
[06/2017] Guangzhou Pharmaceutical Corporation Scholarship
[06/2016] Geru Zheng Scholarship
[07/2021 - 08/2023] Student Researcher, Google Research, Brain Team (now Google DeepMind), Mountain View, CA
[07/2020 - 06/2021] Student Researcher, Google Research, Perception Team, Seattle, WA
[02/2019 - 08/2019] Research Intern, 4Paradigm, Beijing, China
[07/2018 - 11/2018] Research Assistant, Massachusetts Institute of Technology, Cambridge, MA
[01/2018 - 06/2018] Research Intern, Microsoft Research Asia, Beijing, China
[09/2019 - present] Ph.D. in Computer Science, University of California, Los Angeles
[09/2016 - 07/2019] B.Ec. in Economics (2nd Degree), Tsinghua University
[09/2015 - 07/2019] B.Eng. in Electronic Engineering, Tsinghua University
Teaching Assistant, UCLA CS 260C: Deep Learning (Winter 2022)
Teaching Assistant, UCLA CS 180: Algorithms & Complexity (Spring 2021, Fall 2021)
Email: xiangning at cs dot ucla dot edu