Email: renzhou200622 [at] gmail.com Contact me with your CV if you are interested in a full-time position or an internship with us. :)
About me
I am a founding member of Wormpex AI Research, the AI branch of BianLiFeng (便利蜂), a fast-growing advanced convenience store chain in China backed by global capital (it has opened over 1,000 convenience stores from scratch within the past 2 years). At Wormpex AI Research, we build state-of-the-art AI technologies to facilitate new retail logistics, from storefronts and warehouses to manufacturing. Before that, I spent 3 wonderful years at Snap Research as a senior research scientist, applying multimodal understanding to support Snap's content monetization, content security, and creative content creation.
As a senior research lead at Wormpex AI Research, I manage the Multimodal Machine Perception Team, composed of elite researchers and engineers in both Bellevue, WA and Beijing, China. My team conducts cutting-edge research and builds intelligent production systems that benefit BianLiFeng's retail business using multimodal input signals, with a focus on human-behavior-related modeling, such as human detection, pose estimation, action recognition, ReID, tracking, and human–POS machine interaction.
Selected honors: 1. Runner-up in the NIPS 2017 Adversarial Attack and Defense Competition (among 107 teams); 2. nominated for the CVPR 2017 Best Student Paper Award; 3. winner of the IEEE Trans. on Multimedia 2016 Best Paper Award; 4. developed the first part-based hand gesture recognition system using the Kinect sensor, with Nanyang Technological University and Microsoft Research Redmond (Demo1, Demo2, Demo3).
Program Committee member for CVPR, AAAI, IJCAI, ECAI, ACM Multimedia, FG, etc.
Research Highlights
My research interests lie in the fields of Computer Vision, Multimedia, Machine Learning, and Natural Language Processing. I have worked on hand and gesture recognition, human pose estimation, multimodal joint understanding, image and video captioning, object detection, action detection, human ReID, shape understanding, reinforcement learning, adversarial machine learning, and more.
My current focuses are: 1. human pose, hand, and gesture recognition; 2. object detection, action detection, and human ReID; 3. multimodal joint understanding, vision and language.