
Motivation

A central challenge of multi-core architectures is how to leverage their computing power for programs that were not built with parallelism in mind, that is, for the vast majority of programs as we know them. Recent years have seen considerable efforts in automatic parallelization, mostly relying on static program analysis to identify sections amenable to parallel execution, and on concepts like transactional memory to preserve data dependences.

While these efforts have shown impressive advances, we believe they will face serious scalability issues. The larger a program becomes, the harder it is to identify dependences between items precisely by static means; the resulting conservative approximations produce code that is overly general and not parallel. The underlying problem is that the actual environment and usage profile, which may enable several optimizations, cannot be sufficiently anticipated at compile time. Of course, one could resort to dynamic runtime techniques to determine dependences, but so far the initial overhead of dynamic analysis would not be offset by later performance gains.

All of this changes, though, as soon as one moves the analysis and code generation from compile time to run time. Rather than analyzing and compiling a program just once for all anticipated runs, we can now reanalyze and recompile programs in specific contexts, as set by the input and the environment. Interestingly, it is the additional power of multi-core architectures that makes such a continuous adaptation possible: while one core runs the (still serial) program, the other cores can be used for monitoring, learning, optimization, and speculation.
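To make this division of labor concrete, here is a minimal sketch, not taken from the project itself, of the basic setup: the main thread runs the unmodified serial work while a second thread, conceptually pinned to a spare core, periodically samples a progress counter to build a usage profile. All names here (progress, monitor, and so on) are hypothetical.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_long progress = 0;   /* written by the worker, read by the monitor */
static atomic_int  done     = 0;

/* Runs on a spare core: samples the worker's progress every 100 ms.
 * A real system would feed these samples into a learning component. */
static void *monitor(void *arg) {
    (void)arg;
    long last = 0;
    while (!atomic_load(&done)) {
        usleep(100 * 1000);
        long now = atomic_load(&progress);
        printf("monitor: %ld steps in last interval\n", now - last);
        last = now;
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, monitor, NULL);
    for (long i = 0; i < 50 * 1000 * 1000; i++)   /* the (still serial) program */
        atomic_fetch_add_explicit(&progress, 1, memory_order_relaxed);
    atomic_store(&done, 1);
    pthread_join(t, NULL);
    return 0;
}
```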

Moving from anticipation to adaptation enables a number of software optimizations that are only possible in such dynamic settings. First and foremost comes adaptive parallelization, that is, executing the program in parallel depending on the current environment. Parallelization, however, is just one of the optimizations that can become adaptive: by creating specialized function instances for the environment at hand, we can optimize and adapt serial functions and still obtain performance gains. Using statistical learning, we can identify the conditions under which specific specializations and parallelizations perform best, further boosting performance.
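The following sketch illustrates the multi-versioning idea in miniature: two specialized instances of the same function and a dispatcher that times both on early calls, then sticks with whichever performed best for the current usage profile. The variants and thresholds are hypothetical; in our setting such instances would be generated and selected at run time.

```c
#include <stdint.h>
#include <time.h>

long sum_generic(const int *a, long n, long stride) {
    long s = 0;
    for (long i = 0; i < n; i += stride) s += a[i];
    return s;
}

/* Specialized instance for stride == 1; easy for the compiler to vectorize. */
long sum_stride1(const int *a, long n) {
    long s = 0;
    for (long i = 0; i < n; i++) s += a[i];
    return s;
}

static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Dispatcher: explore both variants for the first 8 calls,
 * then exploit whichever accumulated the lower total cost. */
long sum_dispatch(const int *a, long n, long stride) {
    static uint64_t cost[2];
    static int calls;
    int variant = (calls < 8) ? (calls & 1)            /* explore */
                              : (cost[1] < cost[0]);   /* exploit */
    if (variant == 1 && stride != 1) variant = 0;      /* guard the specialization */
    uint64_t t0 = now_ns();
    long s = variant ? sum_stride1(a, n) : sum_generic(a, n, stride);
    cost[variant] += now_ns() - t0;
    calls++;
    return s;
}
```

The guard before the call is the crucial part: a specialization may only run when its assumptions about the environment actually hold, and the generic instance remains available as a safe fallback.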

In this project, we rethink software optimization from the ground up, replacing anticipation with adaptation, using runtime techniques fueled by the additional power of multi-core processors. With dynamic analysis, we constantly monitor program behavior and specialize functions on the fly for actual usage profiles. Using speculation, we can apply highly aggressive optimizations such as automatic parallelization, and still roll back to conservative variants if needed. Our techniques work on legacy C-like programs; all one needs to do is recompile them.
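As a minimal sketch of speculate-and-roll-back, consider a loop whose only cross-iteration hazard is writes through an index array (a hypothetical example, not the project's actual mechanism): iterations run optimistically into a shadow copy and record which slots they touch; if any slot is written twice, the speculation is abandoned and the conservative serial loop runs instead.

```c
#include <stdlib.h>
#include <string.h>

void kernel_serial(double *out, const int *idx, const double *in, long n) {
    for (long i = 0; i < n; i++)
        out[idx[i]] += in[i];          /* idx may or may not repeat */
}

/* Returns 1 if the speculation succeeded, 0 if it rolled back. */
int kernel_speculative(double *out, const int *idx, const double *in,
                       long n, long m /* length of out */) {
    double *shadow  = malloc(m * sizeof *shadow);
    char   *touched = calloc(m, 1);
    memcpy(shadow, out, m * sizeof *shadow);

    int conflict = 0;
    for (long i = 0; i < n && !conflict; i++) {   /* in a real system, chunks of
                                                     this loop run on spare cores */
        if (touched[idx[i]]) conflict = 1;        /* write-write dependence found */
        else { touched[idx[i]] = 1; shadow[idx[i]] += in[i]; }
    }

    if (!conflict)
        memcpy(out, shadow, m * sizeof *shadow);  /* commit speculative results */
    else
        kernel_serial(out, idx, in, n);           /* roll back to serial variant */

    free(shadow);
    free(touched);
    return !conflict;
}
```

Because the original data is only overwritten on a successful commit, the rollback path is always safe: the serial variant sees exactly the state it would have seen without speculation.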

With our approach, programs will constantly adapt to their usage and their environment, discovering and exploiting optimization opportunities as they become available, an adaptive approach that will yield significant performance and efficiency gains for legacy software on multi-core systems.



It is not the strongest of the species that survives, nor the most intelligent. It is the one that is the most adaptable to change.

- attributed to Charles Darwin