Highlights of Flux

    Flux is a collaboration between the AVP-RCI, ITS, the Center for Advanced Computing at the College of Engineering, LSA, and the Medical School to provide a high-quality High Performance Computing cluster environment for researchers on the University of Michigan Ann Arbor campus and their collaborators.

    Access To Flux

    Access to Flux primarily requires the purchase of an allocation, with the unit of allocation being 1 core for 1 month.  The rate linked below is for an allocation of resources and time and is not based on actual usage.  Charges will appear on your monthly Statement of Activity for every month in which you held an allocation.  To inquire about Flux allocations, please email flux-support@umich.edu.

    Flux rates are available here

    Advantages of using Flux

    • Since you're buying an allocation, you can purchase exactly what you need for the length of time you need it, including short-term increases as your requirements change.
    • Flux is a large pool of computers; if a machine goes down, your allocation will be unaffected.  The scheduler will just find another machine for your jobs to run on.
    • Allocations are cumulative, so you can have multiple funding sources.

    Flux Training and Support

    Training is available for all users of Flux. Training inquiries and support requests should be emailed to: flux-support@umich.edu.

    Flux II Hardware Highlights

    • Shares head node, queuing, and software infrastructure with the CAC cluster Nyx
    • Dedicated login and transfer hosts (flux-login.engin.umich.edu and flux-xfer1.engin.umich.edu); see the connection example after this list.
    • Dual-socket, six-core Intel Core i7 CPU nodes providing 12 total cores per node
    • 48GB of RAM available on each 12 core node
    • 40Gbps Infiniband networking between all nodes for fast MPI communication
    • Expanded Lustre storage added to the CAC's /nobackup file system, for a total of 143 TB
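
    As a quick illustration (your UM uniqname and the file name below are placeholders), connecting to the login host and copying data through the transfer host from a Linux or macOS terminal might look like this:

    ssh uniqname@flux-login.engin.umich.edu                  # interactive login and job submission
    scp results.tar.gz uniqname@flux-xfer1.engin.umich.edu:  # copy a local file to your home directory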

    Software on Flux II

    Flux II allocations share the same software as the CAC cluster Nyx. Software is accessible via the module system. Some units also provide their own software that is not managed by the CAC; that unit's module must be loaded before the unit's software is available. An example is the lsa module, which adds, in addition to the stock CAC modules, any software that the LSA stewards of Flux have provided. A short example session follows.
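
    As a minimal sketch of such a session (the package name below is a hypothetical placeholder; lsa is the example module from above):

    module avail                 # list the modules currently visible
    module load lsa              # make the LSA-provided modules visible as well
    module load example-package  # hypothetical package added by the LSA stewards
    module list                  # confirm what is loaded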

    Queuing on Flux II

    Flux II is accessed via the Nyx login nodes and shares scheduling infrastructure with the CAC cluster Nyx. Flux II has its own queue, called flux, which should be used in your PBS file.  All Flux users must also have an account with a valid allocation. For more details on queuing options, see the Nyx PBS documentation.

    Once you get your Flux allocation (in this case, let's call it "example_flux"), there are three things you'll need to change in your PBS script to use Flux: the queue, the account, and the qos:

    #PBS -A example_flux
    #PBS -l qos=flux
    #PBS -q flux
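
    For context, a minimal complete job script might look like the following sketch. The job name, resource request, email address, and program invocation are illustrative placeholders; only the account, qos, and queue lines above are specific to Flux:

    #!/bin/bash
    #PBS -N example_job
    #PBS -A example_flux
    #PBS -l qos=flux
    #PBS -q flux
    #PBS -l nodes=1:ppn=12,walltime=01:00:00
    #PBS -m abe
    #PBS -M your_email@umich.edu
    #PBS -V

    # Run from the directory the job was submitted from
    cd $PBS_O_WORKDIR
    ./my_program

    Submit the script from a login node with qsub (for example, qsub example_job.pbs) and check its status with qstat -u $USER.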


    Planned Changes for Flux III

    As we expand Flux toward a self-sufficient state, we plan to increase the rate over the course of 2-4 years to reflect the full costs of operating a high-performance computing cluster.

    Flux III will also have its own software library, independent of the one maintained by the College of Engineering on Nyx.  We expect it to be quite similar to Nyx's software library, perhaps with a broader selection of software.

    Flux III will have its own storage, networking, login, and administrative hosts, and will not share them with the College of Engineering's cluster, Nyx.  This will allow for a more transparent cost structure and more independence for Flux than is currently possible.