I am a Research Scientist and Lecturer in the UCLA Computer Science Department. I received my PhD in Computer Science from UCLA under the supervision of Professor Majid Sarrafzadeh, followed by a two-year interdisciplinary post-doctoral fellowship working on joint research projects between the Computer Science Department and the School of Medicine.
I am the founder of Project EyeSee, through which I am designing a new wave of interactive, context-aware, augmented-reality apps that significantly improve the quality of life of people with low vision and cognitive deficits. The design of EyeSee apps follows a user-centered approach that involves users in every phase of development.
I received my B.Sc. degree from Sharif University in 2007, and my M.Sc. and Ph.D. degrees, both in computer science, from UCLA in 2010 and 2013, respectively. I am a named inventor on three US patents, two of which have been licensed and are moving toward commercialization. I am the recipient of the Edward K. Rice Outstanding Doctoral Student Award, the UCLA Chancellor's Award for Postdoctoral Research, the Alcon Young Investigator Award, and the Vodafone Wireless Innovation Award. I have received unrestricted gifts from influential companies such as Google and Symantec to pursue end-to-end collaborative research.
My research expertise is in cyber-physical systems, mobile computing, and machine learning. In particular, I have been actively involved in the design of augmented and virtual reality technologies that use machine learning to perform context extraction and adaptation. In addition, a major goal of my research is to design Internet of Things-based infrastructures that offer people with disabilities the assistance and support they need to achieve a higher quality of life and cope with social isolation.
I have recently developed a deep learning architecture based on convolutional neural networks for automated detection of glaucoma, a chronic and irreversible eye disease that leads to deterioration in vision and quality of life. The architecture contains six learned layers: four convolutional layers and two fully connected layers. It could be incorporated into remote screening programs that take advantage of mobile technologies such as smartphone-based retinal cameras. Most recently, I have applied deep learning to general-purpose sensor data classification in constrained setups, where the device has a limited energy budget.
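To make the six-layer structure concrete, the following is a minimal sketch of how such a network's layer sizes and parameter counts work out. The input resolution (224x224 RGB), channel widths, kernel sizes, and 2x2 pooling after each convolution are illustrative assumptions, not the published design; only the "four convolutional + two fully connected layers" shape comes from the text above.

```python
# Hypothetical six-layer CNN for binary image classification
# (e.g., glaucomatous vs. healthy fundus images).
# All layer widths and kernel sizes below are illustrative assumptions.

def conv_out(size, kernel, stride=1, padding=0):
    """Spatial side length of a conv/pool output for square inputs."""
    return (size + 2 * padding - kernel) // stride + 1

def build_summary(input_size=224, in_channels=3):
    # (out_channels, kernel, stride, padding) for the four conv layers
    conv_cfg = [(32, 5, 1, 2), (64, 5, 1, 2), (128, 3, 1, 1), (256, 3, 1, 1)]
    size, channels, params = input_size, in_channels, 0
    summary = []
    for out_ch, k, s, p in conv_cfg:
        params += out_ch * (channels * k * k + 1)  # weights + biases
        size = conv_out(size, k, s, p)             # convolution
        size = conv_out(size, 2, stride=2)         # 2x2 max pooling
        channels = out_ch
        summary.append((f"conv{len(summary) + 1}", channels, size))
    flat = channels * size * size                  # flatten for FC layers
    for name, width in [("fc1", 512), ("fc2", 2)]: # two output classes
        params += width * (flat + 1)
        flat = width
        summary.append((name, width, 1))
    return summary, params

summary, params = build_summary()
for name, width, side in summary:
    print(f"{name}: {width} channels/units, {side}x{side}")
print(f"total learned parameters: {params:,}")
```

Under these assumed widths, the fully connected layers dominate the parameter count, which is why a deployment targeting smartphone-based retinal cameras would typically shrink the feature map (or the FC widths) before the dense layers.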
The following is a list of academic institutions with which I am currently collaborating or have collaborated in the past.