Trevor McDonell


cse.unsw.edu.au/~tmcdonell
@tlmcdonell

I am a PhD student in the Programming Languages and Systems group at the University of New South Wales. My interests include parallel programming (in particular data parallelism), functional languages, and using GPUs as general-purpose compute accelerators.

YOW! Lambda Jam 2013 Brisbane

Accelerating Haskell Array Codes with Multicore GPUs

TALK

Current graphics cards are massively parallel multicore processors optimised for workloads with a large degree of SIMD parallelism. The peak performance of these devices is far greater than that of traditional CPUs, but it is difficult to realise in practice, because good performance requires highly idiomatic programs whose development is work intensive and requires expert knowledge. To raise the level of abstraction, we are developing a high-level domain-specific language, embedded in Haskell, for programming these devices. Computations are expressed as parameterised collective operations, such as maps, reductions, and permutations, over multi-dimensional arrays. These computations are compiled online and executed on the graphics processor.
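To give a flavour of this style, the canonical Accelerate example composes exactly such collective operations: a dot product as an element-wise multiply followed by a parallel reduction. This is a sketch against the accelerate package's API (module layout may vary between versions):

```haskell
import Prelude hiding (zipWith)
import Data.Array.Accelerate                    -- the embedded array language
import Data.Array.Accelerate.Interpreter (run)  -- reference backend

-- Dot product as a composition of collective operations:
-- zipWith performs the element-wise multiply, fold the parallel reduction.
dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
dotp xs ys = fold (+) 0 (zipWith (*) xs ys)

main :: IO ()
main = print (run (dotp (use xs) (use ys)))
  where
    xs, ys :: Vector Float
    xs = fromList (Z :. 10) [1..10]
    ys = fromList (Z :. 10) [1..10]
```

Note that `use` embeds an ordinary Haskell-side array into the `Acc` world, and `run` hands the whole expression to a backend for compilation and execution.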

In this talk, I will introduce the Accelerate project; the language and its embedding in Haskell, as well as the online code generator and runtime system targeting CUDA GPUs.

Prerequisites:

None required. However, if your computer (Linux or Mac) has an NVIDIA graphics card, you can follow along with the examples in the workshop by installing the CUDA SDK:

http://developer.nvidia.com/cuda-downloads

as well as the Accelerate packages from GitHub:

https://github.com/AccelerateHS/accelerate
https://github.com/AccelerateHS/accelerate-cuda

For those who wish to follow along but do not have the right hardware, you can use the interpreter included in the base accelerate package to run Accelerate programs without a GPU (much more slowly, of course).
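Switching between the interpreter and the GPU amounts to importing a different `run` function; the Accelerate program itself is unchanged. A sketch, assuming the accelerate and accelerate-cuda packages with their usual module names:

```haskell
import Data.Array.Accelerate             as A
import Data.Array.Accelerate.Interpreter as I      -- always available
-- import Data.Array.Accelerate.CUDA     as CUDA   -- needs accelerate-cuda + NVIDIA hardware

-- Double every element of a vector, evaluated here by the interpreter.
xs :: Vector Int
xs = A.fromList (Z :. 5) [1..5]

main :: IO ()
main = print (I.run (A.map (* 2) (A.use xs)))
-- To target the GPU instead, replace I.run with CUDA.run.
```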

The most recent version of GHC is recommended (currently 7.6.2).


Accelerating Haskell Array Codes with Multicore GPUs

WORKSHOP


In the workshop, I will go into more detail about how to write programs using Accelerate, with some advice on making those programs efficient. We will work up from the basics: the types in the language, arrays, and operations on them such as map and fold, building towards a range of larger example programs, including a Mandelbrot fractal viewer and an N-body particle simulation.
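A minimal taste of those basics, as a sketch against the accelerate package's API: scalar expressions have type `Exp`, while whole-array computations have type `Acc`, and operations like map and fold mediate between the two levels.

```haskell
import Data.Array.Accelerate             as A
import Data.Array.Accelerate.Interpreter (run)

-- map applies a scalar function (Exp Float -> Exp Float) to every element;
-- fold then reduces the array in parallel to a single scalar result.
sumSquares :: Acc (Vector Float) -> Acc (Scalar Float)
sumSquares = A.fold (+) 0 . A.map (\x -> x * x)

main :: IO ()
main = print (run (sumSquares (use (fromList (Z :. 4) [1, 2, 3, 4]))))
```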

Prerequisites:

Same as for the talk above.