A Compiler for an Implicitly Parallel Functional Language
Location
Hager-Lubbers Exhibition Hall
Description
Functional programming presents a relatively unexplored approach to achieving high-performance computing. The field has typically been dominated by imperative languages such as C/C++ and FORTRAN. Purely functional languages, however, are built from functions without side effects, a property that makes code substantially easier to parallelize. The goal of this research is to create an automatically parallelizing compiler for functional programs. The compiler will use the LLVM infrastructure to transform Lisp-like source code into parallelized LLVM bitcode, which can then be compiled to machine code that runs across multiple processors and cores. Parallelism is a critical technology for future performance gains, but it presents new challenges to developers. Much as high-level languages with optimizing compilers have supplanted hand-written assembly, automatic parallelization tuned to specific architectures is poised to eliminate error-prone manual parallel programming.
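As a minimal illustration of why the absence of side effects matters for parallelization, the sketch below uses Haskell and the widely available parallel package as a stand-in for the Lisp-like source language described above. It is not the project's language, compiler, or output; it only shows that two side-effect-free recursive calls are independent and can safely be evaluated on different cores.

    -- Illustrative sketch only (assumes GHC with the `parallel` package).
    -- Because fib has no side effects, its two recursive calls cannot
    -- interfere with each other, so they may be evaluated in any order
    -- or at the same time.
    import Control.Parallel (par, pseq)

    -- A pure function: its result depends only on its argument.
    fib :: Integer -> Integer
    fib n
      | n < 2     = n
      | otherwise = left `par` (right `pseq` (left + right))
      where
        -- Independent, side-effect-free subexpressions: candidates for
        -- evaluation on separate cores.
        left  = fib (n - 1)
        right = fib (n - 2)

    main :: IO ()
    main = print (fib 30)

Compiled with ghc -O2 -threaded and run with +RTS -N, GHC's runtime may schedule the two sparked computations on separate cores. The compiler described in this project aims to introduce such parallelism automatically rather than relying on explicit annotations from the programmer.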