About Me

I co-founded Converge Lab, an AI startup that aims to bring large language models to the physical world.

I received my Ph.D. in Computer Science from UCLA in 2023, advised by Prof. Cho-Jui Hsieh. From 2020 to 2023, I was also affiliated with Google Research / Google DeepMind. Prior to UCLA, I received my B.Eng. degree in 2019 from the Department of Electronic Engineering, Tsinghua University.

[Publications] [Awards] [Experience] [Education] [Service] [Teaching] [Contact]


News

[11/2023] Successfully defended my Ph.D. thesis! Immense gratitude to my advisors and collaborators!
[09/2023] Two papers were accepted to NeurIPS 2023.
[02/2023] Check out our Lion optimizer, discovered by symbolic program search.
[05/2022] I was invited by Citadel Securities to attend their Ph.D. Summit.
[01/2022] Three papers (1 spotlight) were accepted to ICLR 2022.
[06/2021] I joined Google Research, Brain Team as a student researcher.
[02/2021] Our paper on “robust and accurate object detection” was accepted to CVPR 2021.
[01/2021] Two papers (1 oral) were accepted to ICLR 2021, with DARTS-PT winning the Outstanding Paper Award.
[07/2020] I started my internship at Google Research, Perception Team.
[05/2020] Our paper on “stabilizing neural architecture search” was accepted to ICML 2020.


Selected Publications

* indicates equal contribution
[Google Scholar]

2023

Red Teaming Language Model Detectors with Language Models
Z. Shi*, Y. Wang*, F. Yin*, X. Chen, K. Chang, C. Hsieh
TACL

Symbol Tuning Improves In-Context Learning in Language Models
J. Wei, L. Hou, A. Lampinen, X. Chen, D. Huang, Y. Tay, X. Chen, Y. Lu, D. Zhou, T. Ma, Q. Le
EMNLP 2023
[Twitter]

Symbolic Discovery of Optimization Algorithms
X. Chen*, C. Liang*, D. Huang, E. Real, K. Wang, Y. Liu, H. Pham, X. Dong, T. Luong, C. Hsieh, Y. Lu, Q. Le
NeurIPS 2023
[Code] [PyTorch implementation by lucidrains] [Timm] [Optax] [Praxis] [Keras] [T5X] [Twitter #1] [Twitter #2] [Synced]
- Lion has been successfully deployed in production systems such as Google’s search ads CTR model.
- Lion has been widely adopted by the community, e.g., MosaicML employed Lion to train their LLMs.
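- For reference, below is a minimal PyTorch sketch of a single Lion update step as described in the paper: one momentum buffer, a sign-based update, and decoupled weight decay. The hyperparameter defaults here are illustrative; see the [Code] link above for the official implementation.

    import torch

    def lion_step(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
        # Take the sign of an interpolation between the momentum and the gradient.
        update = (beta1 * momentum + (1 - beta1) * grad).sign()
        # Apply the update with decoupled weight decay (as in AdamW).
        param.add_(update + wd * param, alpha=-lr)
        # Momentum is an exponential moving average of past gradients.
        momentum.mul_(beta2).add_(grad, alpha=1 - beta2)
        return param, momentum

    # Example: one step on a toy parameter.
    p = torch.zeros(3)
    m = torch.zeros_like(p)
    g = torch.tensor([0.5, -0.2, 0.0])
    p, m = lion_step(p, g, m)

  Because Lion tracks only a single momentum buffer and its update has uniform magnitude per coordinate, it is more memory-efficient than AdamW, which also maintains second-moment statistics.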

Why Does Sharpness-Aware Minimization Generalize Better Than SGD?
Z. Chen*, J. Zhang*, Y. Kou, X. Chen, C. Hsieh, Q. Gu
NeurIPS 2023

2022

Random Sharpness-Aware Minimization
Y. Liu, S. Mai, M. Cheng, X. Chen, C. Hsieh, Y. You
NeurIPS 2022

Towards Efficient and Scalable Sharpness-Aware Minimization
Y. Liu, S. Mai, X. Chen, C. Hsieh, Y. You
CVPR 2022
[Twitter]

When Vision Transformers Outperform ResNets without Pre-Training or Strong Data Augmentations
X. Chen, C. Hsieh, B. Gong
ICLR 2022 (spotlight)
[JAX Checkpoint] [PyTorch Checkpoint] [Twitter]

Concurrent Adversarial Learning for Large-Batch Training
Y. Liu, X. Chen, M. Cheng, C. Hsieh, Y. You
ICLR 2022

Learning to Schedule Learning Rate with Graph Neural Networks
Y. Xiong, L. Lan, X. Chen, R. Wang, C. Hsieh
ICLR 2022

2021

RANK-NOSH: Efficient Predictor-Based Architecture Search via Non-Uniform Successive Halving
R. Wang, X. Chen, M. Cheng, X. Tang, C. Hsieh
ICCV 2021

Robust and Accurate Object Detection via Adversarial Learning
X. Chen, C. Xie, M. Tan, L. Zhang, C. Hsieh, B. Gong
CVPR 2021
[TensorFlow Checkpoint] [Colab] [Twitter]

Rethinking Architecture Selection in Differentiable NAS
R. Wang, M. Cheng, X. Chen, X. Tang, C. Hsieh
ICLR 2021 (oral, Outstanding Paper Award)
[Code]

DrNAS: Dirichlet Neural Architecture Search
X. Chen*, R. Wang*, M. Cheng*, X. Tang, C. Hsieh
ICLR 2021
[Code]

2020

Stabilizing Differentiable Architecture Search via Perturbation-Based Regularization
X. Chen, C. Hsieh
ICML 2020
[Code]

Efficient Neural Interaction Function Search for Collaborative Filtering
Q. Yao*, X. Chen*, J. Kwok, Y. Li, C. Hsieh
WWW 2020
[Code]

2019

Neural Feature Search: A Neural Architecture for Automated Feature Engineering
X. Chen*, Q. Lin*, C. Luo*, X. Li, H. Zhang, Y. Xu, Y. Dang, K. Sui, X. Zhang, B. Qiao, W. Zhang, W. Wu, M. Chintalapati, D. Zhang
ICDM 2019

Cross-Domain Recommendation without Sharing User-Relevant Data
C. Gao, X. Chen, F. Feng, K. Zhao, X. He, Y. Li, D. Jin
WWW 2019

Neural Multi-Task Recommendation from Multi-Behavior Data
C. Gao, X. He, D. Gan, X. Chen, F. Feng, Y. Li, T. Chua, D. Jin
ICDE 2019


Selected Awards

[02/2022] Meta Fellowship Finalist
[01/2022] Amazon Science Fellowship
[03/2021] ICLR Outstanding Paper Award
[06/2019] Outstanding Graduate & Bachelor Thesis, Tsinghua University
[06/2018] Qualcomm Scholarship
[06/2017] Guangzhou Pharmaceutical Corporation Scholarship
[06/2016] Geru Zheng Scholarship


Experience

[07/2021 - 08/2023] Student Researcher, Google Research, Brain Team (now Google DeepMind), Mountain View, CA
[07/2020 - 06/2021] Student Researcher, Google Research, Perception Team, Seattle, WA
[02/2019 - 08/2019] Research Intern, 4Paradigm, Beijing, China
[01/2018 - 06/2018] Research Intern, Microsoft Research Asia, Beijing, China


Education

[09/2019 - 11/2023] Ph.D. in Computer Science, University of California, Los Angeles
[09/2016 - 07/2019] B.Ec. in Economics (2nd Degree), Tsinghua University
[09/2015 - 07/2019] B.Eng. in Electronic Engineering, Tsinghua University


Academic Service

PC Member / Reviewer: ICML (2021-), ICLR (2021-), NeurIPS (2020-), JMLR (2022-), TMLR (2022-), CVPR (2021-), ICCV (2021-), ECCV (2020-), AAAI (2021-), JAIR (2022-)


Teaching

Teaching Assistant, UCLA CS 260C: Deep Learning (Winter 2022)
Teaching Assistant, UCLA CS 180: Algorithms & Complexity (Spring 2021, Fall 2021)


Contact

Email: xiangning at cs dot ucla dot edu