I am a second-year PhD student in the Computer Science department at UCLA, where I am part of the UCLA Compilers Group advised by Jens Palsberg. My main research interests lie in compilers and program-analysis tools, and I am currently working on applying machine-learning techniques to build better static-analysis tools. The classical algorithms and techniques developed over the last 50 years of static-analysis research have produced some amazing tools, but I feel that simply extending them will bring only incremental improvement. The next 10x improvement needs something new, and machine learning, with its recent success in multiple fields and its well-developed libraries, could be the answer. This is just a conjecture, but I'm willing to bet on it. Regarding my past: I completed a five-year Dual Degree (Integrated Bachelors and Masters) in Computer Science at the Indian Institute of Technology Madras, where I worked on the runtime design of the X10 task-parallel programming language and on algorithms for graph matchings.

The research work posted here as of now is from my five-year Dual Degree (Bachelors and Masters) at IIT Madras. The paper with Professor V. Krishna Nandivada is in the field of parallel programming languages, and the paper with Professor Meghana Nasre is in the field of graph-matching algorithms. My research work at UCLA is still underway and hence is not posted here.

Task-parallel languages allow the programmer to express parallelism in the form of tasks (akin to lightweight threads) and to synchronize tasks using high-level constructs like barriers, clocks, and phasers. The low-level implementation of these constructs is handled by the language runtime. While these high-level synchronization primitives help the programmer express the program logic in a convenient manner, they also have their associated overheads. In this paper, we identify the sources of some of these overheads for task-parallel languages like X10 that support lock-step synchronization, and propose a mechanism to reduce these overheads.
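To make the lock-step idea concrete outside of X10, here is a minimal Python sketch using a plain `threading.Barrier`. This is only an illustration of lock-step synchronization in general, not of X10's clocks or their runtime implementation:

```python
import threading

N = 4
barrier = threading.Barrier(N)   # all N tasks must reach it before any proceed
results = []
lock = threading.Lock()

def worker(i):
    # Phase 1: each task does some local work.
    local = i * i
    barrier.wait()               # lock-step: everyone finishes phase 1 first
    # Phase 2: runs only after every task has completed phase 1.
    with lock:
        results.append(local)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# results now holds {0, 1, 4, 9} in some thread-dependent order
```

Clocks and phasers generalize this pattern: tasks can register and deregister dynamically, and a task may advance past a phase without waiting, which is exactly where the runtime overheads discussed above come from.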

We first propose three desirable properties that an efficient runtime (for task-parallel languages like X10, HJ, Chapel, and so on) should satisfy, to minimize the overheads during lock-step synchronization. We use these properties to derive a scheme called uClocks to improve the efficiency of X10 clocks; uClocks consists of an extension to X10 clocks and two related runtime optimizations. We prove that uClocks satisfies the proposed desirable properties. We have implemented uClocks for the X10 language+runtime and show that the resulting system leads to a geometric mean speedup of 5.36x on a 16-core Intel system and 11.39x on a 64-core AMD system, for benchmarks with a significant number of synchronization operations.

Real-world matching scenarios, like the matching of students to courses in a university setting, involve complex downward-feasible constraints, such as credit limits, time-slot constraints for courses, and basket constraints (say, at most one humanities elective per student), in addition to class capacities and the preferences of students over courses and vice versa. We model this problem as a many-to-many bipartite matching problem in which both students and courses specify preferences over each other and each student has a set of downward-feasible constraints. We propose an Iterative Algorithm Framework that uses a many-to-one matching algorithm as a subroutine and outputs a many-to-many matching that satisfies all the constraints. We prove that the output of the framework is Pareto-optimal from the student side whenever the many-to-one algorithm used is Pareto-optimal from the student side. For a given matching, we propose a new metric, the Mean Effective Average Rank (MEAR), which quantifies the goodness of an allotment from the side of the students or the courses. We empirically evaluate two many-to-one matching algorithms on synthetic data modeled on real-world instances, and compare them on different metrics including MEAR scores, matching size, and number of unstable pairs.
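As a rough illustration of the framework's shape (not the paper's actual algorithms), here is a hypothetical Python sketch: each many-to-one round allots every student at most one additional course in a fixed priority order, and rounds repeat until no student can be feasibly allotted anything more. The function name, the greedy round, and the length-based feasibility check in the usage example are all my own illustrative assumptions:

```python
def iterative_framework(students, prefs, capacity, feasible):
    """Illustrative sketch: build a many-to-many matching by repeating
    many-to-one rounds. `feasible(s, courses)` stands in for the student's
    downward-feasible constraints (credit limits, baskets, and so on)."""
    matching = {s: set() for s in students}   # student -> allotted courses
    load = {c: 0 for c in capacity}           # course -> seats filled
    progress = True
    while progress:
        progress = False
        for s in students:                    # fixed priority order
            for c in prefs[s]:                # most-preferred course first
                if c in matching[s] or load[c] >= capacity[c]:
                    continue
                if feasible(s, matching[s] | {c}):
                    matching[s].add(c)
                    load[c] += 1
                    progress = True
                    break                     # at most one course per round
    return matching

# Tiny usage example: course "x" has one seat, "y" has two,
# and each student may take at most two courses in total.
out = iterative_framework(
    students=["a", "b"],
    prefs={"a": ["x", "y"], "b": ["x", "y"]},
    capacity={"x": 1, "y": 2},
    feasible=lambda s, cs: len(cs) <= 2,
)
# "a" (higher priority) ends up with {"x", "y"}, "b" with {"y"}
```

Because constraints are downward-feasible (any subset of a feasible course set is feasible), checking feasibility one added course at a time, as above, is safe.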

What kind of a researcher am I?

One dimension of this question is about the kind of problems I like to work on, and I believe there are two kinds of researchers here. The first kind is driven by a particular topic or research problem and is strongly motivated only to work on that. The second kind loves solving hard problems in general, and is happy to work on any challenging problem that matches their skill set. I think I'm the second kind of researcher, and my ideal problem is ill-specified, open-ended, and requires design thinking.

The second dimension is about the kind of solutions I like to find, and again there are two kinds of researchers here. The first kind likes to develop a clean, elegant solution to their problem, even if that sometimes requires simplifying the problem or working on a particular variation of it that is amenable to theory. The second kind wants solutions to real-world problems with real-world constraints, and is ready to accept solutions that may seem heuristic or merely empirically justified, as long as they work well in practice. Again, I consider myself the second kind of researcher: I like to do whatever it takes, whether using theory or heuristics, to solve the problem at hand.

What do I think is special about me as a researcher?

Since I find several areas interesting, I happen to know a little bit about a lot of topics, like a jack of all trades. I can talk about programming languages, software engineering, and computer architecture in depth, but I also know a decent amount about machine learning, natural language processing, graph algorithms, operating systems, computer security, biostatistics, and economics.

What are my interests outside of academia?

I love talking about philosophy (sometimes a bit too much), psychology, and economics, and I love reading about them too. I like travelling to meet friends and family who don't live in my city.