Zilong Zheng

Ph.D. Candidate in Computer Science at UCLA

Engineering VI 491
404 Westwood Plaza
University of California, Los Angeles
Los Angeles, CA, 90095
Email: zilongzheng0318 at ucla dot edu

About

I am a third-year Ph.D. candidate in the Department of Computer Science at UCLA. I conduct research on machine learning at the Center for Vision, Cognition, Learning, and Autonomy (VCLA), under the supervision of Prof. Song-Chun Zhu. Before that, I obtained a bachelor's degree in Computer Science from the University of Minnesota. I also received a B.E. degree in Micro-Electronic Technology from the University of Electronic Science and Technology of China (UESTC). My research interests lie in machine learning and cognitive science.


Publications

  • Learning Dynamic Generator Model by Alternating Back-Propagation Through Time AAAI'19

    Jianwen Xie*, Ruiqi Gao*, Zilong Zheng, Song-Chun Zhu, Ying Nian Wu (* equal contributions)
    The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI) 2019 (Spotlight)

    PDF Code Website
    This paper studies the dynamic generator model for spatial-temporal processes such as dynamic textures and action sequences in video data. In this model, each time frame of the video sequence is generated by a generator model, which is a non-linear transformation of a latent state vector, where the non-linear transformation is parametrized by a top-down neural network. The sequence of latent state vectors follows a non-linear auto-regressive model, where the state vector of the next frame is a non-linear transformation of the state vector of the current frame as well as an independent noise vector that provides randomness in the transition. The non-linear transformation of this transition model can be parametrized by a feedforward neural network. We show that this model can be learned by an alternating back-propagation through time algorithm that iteratively samples the noise vectors and updates the parameters in the transition model and the generator model. We show that our training method can learn realistic models for dynamic textures and action patterns.
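    The generative structure described in the abstract can be sketched with toy linear layers. Here the matrices `A`, `B`, `C`, the `tanh` non-linearities, and all dimensions are illustrative stand-ins, not the paper's actual network architecture: the latent state follows a non-linear auto-regressive transition driven by independent noise, and each frame is emitted from the current state.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions: latent state, noise, frame, and sequence length.
    d_state, d_noise, d_frame, T = 8, 4, 16, 10

    # Transition model: s_t = tanh(A s_{t-1} + B z_t), a stand-in for the
    # feedforward network that parametrizes the transition in the paper.
    A = rng.normal(scale=0.5, size=(d_state, d_state))
    B = rng.normal(scale=0.5, size=(d_state, d_noise))

    # Emission (generator) model: x_t = C tanh(s_t), a stand-in for the
    # top-down network that maps each latent state to a video frame.
    C = rng.normal(scale=0.5, size=(d_frame, d_state))

    def generate_sequence(T):
        """Roll the non-linear auto-regressive state model forward,
        emitting one frame per time step."""
        s = np.zeros(d_state)
        frames = []
        for _ in range(T):
            z = rng.normal(size=d_noise)   # independent noise per transition
            s = np.tanh(A @ s + B @ z)     # non-linear state transition
            frames.append(C @ np.tanh(s))  # frame generated from state
        return np.stack(frames)

    video = generate_sequence(T)
    print(video.shape)  # (10, 16): T frames, each of dimension d_frame
    ```

    Training in the paper alternates between sampling the noise vectors by back-propagation through this unrolled computation and updating the transition and generator parameters; the sketch above covers only the forward generation pass.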
  • Learning Descriptor Networks for 3D Shape Synthesis and Analysis CVPR'18

    Jianwen Xie*, Zilong Zheng*, Ruiqi Gao, Wenguan Wang, Song-Chun Zhu, Ying Nian Wu (* equal contributions)
    IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018 (Oral)

    PDF Code Website
    This paper proposes a 3D shape descriptor network, which is a deep convolutional energy-based model, for modeling volumetric shape patterns. The maximum likelihood training of the model follows an “analysis by synthesis” scheme and can be interpreted as a mode seeking and mode shifting process. The model can synthesize 3D shape patterns by sampling from the probability distribution via MCMC such as Langevin dynamics. The model can be used to train a 3D generator network via MCMC teaching. The conditional version of the 3D shape descriptor net can be used for 3D object recovery and 3D object super-resolution. Experiments demonstrate that the proposed model can generate realistic 3D shape patterns and can be useful for 3D shape analysis.
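    The MCMC sampling step mentioned in the abstract can be illustrated with Langevin dynamics on a toy energy function. The quadratic energy, step size, and iteration counts below are illustrative assumptions; in the paper the gradient comes from the learned 3D descriptor network rather than a closed-form expression.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy quadratic energy E(x) = ||x - mu||^2 / 2 standing in for the
    # descriptor network's learned energy; its gradient is simply (x - mu).
    mu = np.array([1.0, -2.0])

    def grad_energy(x):
        return x - mu

    def langevin_sample(x0, step=0.01, n_steps=2000):
        """Langevin dynamics: a gradient step on the energy plus Gaussian
        noise scaled by sqrt(step) at every iteration."""
        x = x0.copy()
        for _ in range(n_steps):
            noise = rng.normal(size=x.shape)
            x = x - 0.5 * step * grad_energy(x) + np.sqrt(step) * noise
        return x

    # Run many independent chains; their samples follow exp(-E(x)) and so
    # concentrate around the energy minimum mu.
    samples = np.stack([langevin_sample(rng.normal(size=2)) for _ in range(200)])
    print(np.round(samples.mean(axis=0)))  # close to mu = [1, -2]
    ```

    The same update rule drives both synthesis (sampling shapes from the model) and the "mode seeking and mode shifting" interpretation of training described in the abstract.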

Patents

  • Noninvasive brain blood oxygen parameter measuring method

    Patent Publication No.: CN104382604A, priority date: 2014-12-02. PDF

  • Near infrared noninvasive detection probe for tissue blood oxygen saturation

    Patent Publication No.: CN204394526U, priority date: 2014-12-02. PDF

  • Brain blood oxygen saturation degree noninvasive monitor

    Patent Publication No.: CN204394527U, priority date: 2014-12-02. PDF