Talks & Seminars
Title: Combined Code Motion and Register Allocation Using the VSDG
Prof. Alan Mycroft, University of Cambridge
Date & Time: February 3, 2010 11:30
Venue: Conference Room, 1st floor, 'C' Block, Kanwal Rekhi Bldg.
Abstract:
We define the Value State Dependence Graph (VSDG). The VSDG is a form of the Value Dependence Graph (VDG) which includes state dependence edges to model serialised computation. These are used to model the store dependencies and loop termination dependencies of the original program, thereby resolving a correctness problem with the original VDG. We show that further state dependence edges (now modelling implementation-oriented serialisation) can then be added until the VSDG represents a single Control Flow Graph. The central idea is that this can be done incrementally, so that we obtain a class of algorithms which effectively interleave register allocation and code motion, thereby avoiding a well-known phase-order problem in compilers. This class operates by first normalising the VSDG during construction, to remove all duplicated computation and to perform maximal lifting from loops, and then repeatedly choosing between:
- allocate a value to a register,
- spill a value to memory,
- move a loop-invariant computation within a loop to avoid register spillage, and
- statically duplicate a computation to avoid register spillage.
We show that the classical two-phase approaches (code motion then register allocation, in both Chow and Chaitin forms) are examples of this class, and propose a new algorithm based on depth-first cuts of the VSDG.
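As a rough illustration of the interleaving described above, the sketch below builds a toy graph with separate value and state dependence edges and, for each node in dependence order, greedily picks one of the listed choices. It is a hypothetical Python sketch, not the algorithm proposed in the talk: the names (Node, schedule, loop_invariant) and the crude register-pressure heuristic are invented for illustration, and static duplication is omitted for brevity.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        op: str
        value_deps: list["Node"] = field(default_factory=list)  # data-flow (value) edges
        state_deps: list["Node"] = field(default_factory=list)  # serialisation (state) edges
        loop_invariant: bool = False  # candidate for moving back into a loop body

    def schedule(nodes: list["Node"], num_regs: int) -> list[str]:
        # Greedy interleaving of register allocation and code motion:
        # for each node (assumed topologically sorted), pick one choice.
        actions = []
        for i, n in enumerate(nodes):
            done, todo = nodes[:i], nodes[i:]
            # Values computed so far that are still needed by later nodes.
            live = {m.op for m in done if any(m in t.value_deps for t in todo)}
            pressure = len(live) + 1  # +1 for n's own result
            if pressure <= num_regs:
                actions.append(f"allocate {n.op!r} to a register")
            elif n.loop_invariant:
                actions.append(f"move loop-invariant {n.op!r} back into its loop")
            else:
                actions.append(f"spill {n.op!r} to memory")
        return actions

    if __name__ == "__main__":
        a = Node("load x")
        b = Node("load y", loop_invariant=True)
        c = Node("x + y", value_deps=[a, b])
        s = Node("store z", value_deps=[c], state_deps=[a, b])  # state edges serialise the store
        for action in schedule([a, b, c, s], num_regs=1):
            print(action)

With only one register available, the driver allocates the first load, moves the loop-invariant load back into its loop, and spills the remaining values, illustrating how the per-node choice trades code motion against register pressure.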
Speaker Profile:
Alan Mycroft is Professor of Computing in the Computer Laboratory of Cambridge University. His research interests span an arc from semantic models of programming languages to actually building optimizing compilers. A core interest is the static analysis of programs to extract properties of their run-time behaviour. Such properties can be used to enable optimisations or to facilitate “compile-time debugging”. His PhD created the subject of “strictness analysis” when he argued that apparent run-time inefficiencies in modern high-level languages can often be removed by program analysis and optimization phases. Other work has encompassed type-based decompilation, as well as language and compilation issues for “Silicon Compilers”, i.e. compiling specifications directly to hardware. In 2005/06 he held a visiting faculty position with Intel Research Cambridge, developing languages and techniques for compiling to ‘multi-core’ processors; this research illuminates the benefits of type-like systems of program analysis in enabling programmers to express and manage their implicit treaty with a compiler (“optimize as much as you can, but don't step over the line”).