What are GPGPUs?

    GPGPUs are general-purpose graphics processing units; that is, graphics cards used for general-purpose computing. Driven by the demands of modern graphics-intensive programs, graphics cards have become very powerful computers in their own right.

    GPGPUs are particularly good at matrix multiplication, random number generation, FFTs, and other numerically intensive, repetitive mathematical operations. With careful programming they can deliver a 5–10 times speed-up for many codes.

    For more examples of applications that are well-suited to CUDA, see NVIDIA's CUDA pages.

    Submitting Batch Jobs

    If you want to use a GPU, you must request one in your PBS script. To do so, add a gpus attribute to your #PBS -l line. Here is an example that requests one node with one GPU.

    #PBS -l nodes=1:gpus=1,mem=2gb,walltime=1:00:00,qos=flux

    Note that you must use nodes=1 and not procs=1 or the job will not run.
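    Putting the pieces together, a complete batch script might look like the sketch below. The job name and the executable name are placeholders; substitute your own program and adjust the memory and walltime to fit your job.

```shell
#!/bin/sh
#PBS -N gpu_example
#PBS -l nodes=1:gpus=1,mem=2gb,walltime=1:00:00,qos=flux

# Set up the CUDA environment before running the program.
module load cuda

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
./my_gpu_program    # placeholder: your CUDA-enabled executable
```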

    Programming for GPGPUs

    The GPGPUs on nyx and flux are NVIDIA graphics processors and are programmed with NVIDIA's CUDA programming language. This is a very C-like language (which can be linked with Fortran codes) that makes programming for GPGPUs straightforward. For more information on CUDA programming, see NVIDIA's CUDA documentation.
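    To give a flavor of the language, here is a minimal, hypothetical CUDA program that adds two vectors on the GPU; the kernel name, vector length, and block size are all illustrative choices, not anything required by CUDA.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Kernel: each GPU thread adds one pair of elements. */
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = i; h_b[i] = 2 * i; }

    /* Allocate device memory and copy the inputs to the GPU. */
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    /* Launch enough 256-thread blocks to cover all n elements. */
    vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

    /* Copy the result back and print one element. */
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", h_c[10]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

    Remember that code like this can be compiled on a login node but must be run through the batch system, since only the compute nodes have GPGPUs.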

    NVIDIA also provides special libraries that make using the GPGPUs even easier. Two of these libraries are cudablas and cufft.

    cudablas is a single-precision BLAS library that uses the GPGPU for matrix operations. For more information on the BLAS routines it implements, see the cudablas documentation.
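    As a sketch of what calling the GPU BLAS looks like, the program below multiplies two small matrices with the handle-based CUBLAS interface shipped in recent CUDA toolkits; older CUDA releases (including the one this library naming dates from) expose a slightly different set of entry points, so check the documentation for the toolkit version loaded by the module.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void)
{
    const int n = 2;                    /* multiply two 2x2 matrices */
    float h_A[] = {1, 2, 3, 4};         /* BLAS uses column-major storage */
    float h_B[] = {5, 6, 7, 8};
    float h_C[4];

    float *d_A, *d_B, *d_C;
    cudaMalloc(&d_A, sizeof(h_A));
    cudaMalloc(&d_B, sizeof(h_B));
    cudaMalloc(&d_C, sizeof(h_C));
    cudaMemcpy(d_A, h_A, sizeof(h_A), cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, sizeof(h_B), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    /* C = 1.0 * A * B + 0.0 * C, single precision, computed on the GPU. */
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, d_A, n, d_B, n, &beta, d_C, n);

    cudaMemcpy(h_C, d_C, sizeof(h_C), cudaMemcpyDeviceToHost);
    printf("C(1,1) = %f\n", h_C[0]);

    cublasDestroy(handle);
    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    return 0;
}
```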

    cufft is a set of FFT routines that use the GPGPU for their calculations. For more information on the FFT routines it implements, see the cufft documentation.
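    A one-dimensional complex-to-complex transform through the cufft interface looks roughly like this; the transform length is arbitrary, and the step that fills the input signal is elided.

```cuda
#include <cuda_runtime.h>
#include <cufft.h>

int main(void)
{
    const int NX = 256;                 /* transform length (arbitrary) */
    cufftComplex *data;
    cudaMalloc(&data, sizeof(cufftComplex) * NX);
    /* ... copy your signal into `data` with cudaMemcpy ... */

    /* Plan a single 1-D complex-to-complex FFT, then run it in place. */
    cufftHandle plan;
    cufftPlan1d(&plan, NX, CUFFT_C2C, 1);
    cufftExecC2C(plan, data, data, CUFFT_FORWARD);

    cufftDestroy(plan);
    cudaFree(data);
    return 0;
}
```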

    To use the CUDA compiler (nvcc) or to link your code against one of the CUDA-enabled libraries, load the cuda module by typing:

    module load cuda

    This will give you access to the nvcc compiler and will set the environment variable CUDA_INSTALL_PATH which can be used to link against libcudablas, libcufft, and other CUDA libraries in ${CUDA_INSTALL_PATH}/lib.
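    For example, to compile a CUDA source file and link it against the FFT library, you might use something like the following; the file and program names are placeholders, and the exact library names to pass to -l should be checked against the contents of ${CUDA_INSTALL_PATH}/lib.

```shell
module load cuda
nvcc -o myprog myprog.cu -L${CUDA_INSTALL_PATH}/lib -lcufft
```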

    CUDA-based applications can be compiled on the login nodes, but they cannot be run there, since the login nodes do not have GPGPUs.

    To install the sample code, run the sample-code installer after loading the CUDA module and answer the questions about where you want it installed. You can then go into that directory and type make to compile the sample code. Note that any of the sample codes that require an X Window System interface will not work on nyx or flux.