Research > Overview
We at CompL specialize in JIT compilers, and our current focus is on combining static and dynamic analyses to soundly and efficiently optimize programs written in languages with managed runtimes.
Read on for details about a few of our current research directions:
1. Staged Analysis for JIT Compilers
Most modern languages (both statically and dynamically typed) now ship with managed runtimes in which relevant parts of programs are compiled dynamically (just-in-time), with speculation based on run-time profiles. However, JIT compilation competes with the executing program for resources, and hence cannot afford to perform precise program analyses. Under this thread, we are combining the best of static (ahead-of-time) and dynamic analysis to perform aggressive optimizations in JIT compilers, and are the leading group internationally in such static+dynamic research.
Our TOPLAS 2019 paper proposes the idea of static+dynamic analysis for incomplete programs; and our SAS 2022 and FMSD 2024 papers formalize this idea based on the classic theory of partial evaluation.
Currently, we are applying our Java static+dynamic analyses to IBM's production environments, and developing novel aggressive optimizations that were previously impractical in JITted systems.
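To make the idea of profile-based speculation concrete, here is a minimal Java sketch (all names are hypothetical illustrations, not taken from our systems) of the guard-plus-fallback structure that a JIT compiler effectively emits after profiling a call site:

```java
import java.util.ArrayList;
import java.util.List;

public class SpeculationSketch {
    // A JIT that profiles this call site may observe that 'list' is almost
    // always an ArrayList, and emit a guarded, devirtualized fast path.
    // The generic branch stands in for deoptimization back to unspecialized code.
    static int firstElement(List<Integer> list) {
        if (list instanceof ArrayList) {               // speculation guard
            ArrayList<Integer> a = (ArrayList<Integer>) list;
            return a.get(0);                           // fast path: direct call
        }
        return list.get(0);                            // fallback ("deopt") path
    }

    public static void main(String[] args) {
        ArrayList<Integer> xs = new ArrayList<>();
        xs.add(42);
        System.out.println(firstElement(xs));          // takes the guarded fast path
        System.out.println(firstElement(List.of(7)));  // takes the generic fallback
    }
}
```

A static analysis can sometimes prove the guard redundant (and remove it), which is one way ahead-of-time information complements run-time speculation.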
Relevant resources:
2. Memory Optimizations for OO Languages
OO languages thrive on objects, but the underlying abstractions come at a significant run-time cost. Under this thread, we have been optimizing all aspects of object management in Java-like languages: we allocate method-local objects on the stack (instead of the heap), replace eligible objects with scalars, and inline profitable fields into their containers. These optimizations yield run-time benefits in object allocation, field access, and garbage collection. The cherry on top: all of our work is implemented in the real-world JIT compilers of production Java Virtual Machines.
Our PLDI 2024 paper presents the first known sound use of static analysis to allocate eligible objects on the stack in a production Java runtime; our OOPSLA 2025 paper builds on it and efficiently combines the best of static analysis and JIT speculation; and our upcoming CGO 2026 paper applies static+dynamic analysis to selectively inline value-type objects.
Currently, we are exploring program transformations that widen the scenarios in which our optimizations apply, and are also working with the compilers of newer languages such as Go.
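The effect of scalar replacement can be sketched in a few lines of Java (a simplified illustration with hypothetical names; real compilers perform this transformation internally, after an escape analysis proves the object never leaves the method):

```java
public class ScalarReplacementSketch {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // 'p' never escapes this method, so a compiler that proves this
    // (statically, or speculatively at run time) need not allocate it
    // on the heap at all.
    static int distanceSquared(int x, int y) {
        Point p = new Point(x, y);        // eligible: no escape, no aliasing
        return p.x * p.x + p.y * p.y;
    }

    // The form the optimization effectively produces: the object
    // disappears, and its fields live in locals/registers instead.
    static int distanceSquaredScalarized(int x, int y) {
        int px = x, py = y;               // fields replaced by scalars
        return px * px + py * py;
    }

    public static void main(String[] args) {
        System.out.println(distanceSquared(3, 4));            // 25
        System.out.println(distanceSquaredScalarized(3, 4));  // 25
    }
}
```

Both versions compute the same result; the scalarized one saves the allocation, the field accesses, and the later garbage-collection work.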
Relevant resources:
3. Optimizing Dynamic-Language Programs
Static analysis of dynamic languages is considered very hard; existing approaches are either specialized in an application-specific manner (e.g., for Python and R) or leave programs at the mercy of dynamic specialization (e.g., for JavaScript). Under this thread, we are developing novel static analyses for dynamic languages and combining them with existing runtimes to generate better code. We have developed an intermediate representation for JavaScript that exposes hidden language semantics, along with an associated framework that makes static optimization easier and correct by design.
Our OOPSLA 2023 paper proposes a first-of-its-kind system that reuses JIT-compiled code across different executions of a program, while preserving precision, in a compiler that maintains multiple versions of compiled functions; our VMIL 2023 paper visualizes the working of the underlying optimizing compiler.
Currently, our focus is on developing novel static analyses and optimizations for JavaScript, and we are looking forward to applying our prior idea of reusing specialized code (tested on R) to Python.
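As an example of the "hidden semantics" such an IR must expose: a JavaScript property read like obj.prop implicitly walks the object's prototype chain. The following Java sketch (a simplified model with hypothetical names, not our actual IR) makes that implicit loop explicit, which is what lets a static analysis reason about, and often eliminate, the intermediate lookups:

```java
import java.util.HashMap;
import java.util.Map;

public class PrototypeLookupSketch {
    // A JavaScript-like object: own properties plus a prototype link.
    static final class JsObject {
        final Map<String, Object> props = new HashMap<>();
        final JsObject proto;
        JsObject(JsObject proto) { this.proto = proto; }
    }

    // What 'obj.prop' hides in JavaScript: a walk up the prototype chain.
    static Object getProperty(JsObject obj, String name) {
        for (JsObject o = obj; o != null; o = o.proto) {
            if (o.props.containsKey(name)) {
                return o.props.get(name);
            }
        }
        return null; // JavaScript would yield 'undefined'
    }

    public static void main(String[] args) {
        JsObject proto = new JsObject(null);
        proto.props.put("greet", "hello");
        JsObject obj = new JsObject(proto);  // inherits 'greet' from proto
        System.out.println(getProperty(obj, "greet"));
    }
}
```

Once the loop is visible in the IR, a compiler that can bound the shape of the prototype chain may replace the whole walk with a single direct access.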
Relevant resources: