CDER Cluster

In Fall 2018 we extensively updated the heterogeneous CDER cluster, doubling the number of nodes and GPUs and adding a Hadoop/Spark subsystem. The CDER cluster is now ready for use by national and international early adopters as well as the community at large: both instructors and their students can use it for teaching parallel and distributed computing and related topics in a variety of courses.

How to request access to CDER: An instructor should first request a GSU RS-ID for CDER by navigating to . This step generates a GSU RS-ID for the user. Each student should then request their own individual GSU RS-ID. The instructor should then send the class list with GSU RS-IDs to . This step completes the account-creation process, and users will receive an email with instructions on how to log in to the system. For access or system administration help, please email with "CDER" in the subject line.

CDER is an NSF-funded cluster supported through Award CNS-1205650, "Collaborative Research: CI-ADOO-NEW: Parallel and Distributed Computing Curriculum Development and Educational Resources." This heterogeneous 28-node cluster features 656 cores, 1 TB of RAM, and four GPUs, including NVIDIA V100s, and can sustain a mixed user workload, including Apache Spark, managed by the SLURM scheduler. Detailed information on the cluster and instructions for use can be found at

  • CentOS 7.5 64-bit
  • SLURM Scheduler
  • InfiniBand QDR Interconnect
  • 11x Compute Nodes
    • 20 core - 2x Intel Xeon E5-2650 v3
    • 64 GB RAM
  • 1x Large Compute Node
    • 36 core - 2x Intel Xeon E5-2699 v3
    • 128 GB RAM
  • 12x Compute Nodes (Spark subsystem)
    • 28 core - 2x Intel Xeon Gold 5120 
    • 192 GB RAM
  • 2x V100 GPU Nodes
    • 20 core - 2x Intel Xeon Silver 4114
    • 384 GB RAM
    • NVIDIA V100 16 GB
  • GPU Node
    • 12 core - 2x Intel Xeon E5-2620 v3
    • 64 GB RAM
    • GeForce GTX 980
  • GPU Node
    • 12 core - 2x Intel Xeon E5-2620 v3
    • 64 GB RAM
    • GeForce GTX Titan Xp
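
Since all workloads on the cluster are scheduled through SLURM, the sketch below shows what a minimal batch script for an MPI job on the 20-core compute nodes might look like. This is an illustration only: the module name and task counts are assumptions and should be checked against the cluster's own documentation.

```shell
#!/bin/bash
#SBATCH --job-name=hello-mpi        # job name shown in the queue
#SBATCH --nodes=2                   # request two compute nodes
#SBATCH --ntasks-per-node=20        # one MPI task per core on the 20-core nodes
#SBATCH --time=00:10:00             # wall-clock limit (10 minutes)
#SBATCH --output=hello-mpi.%j.out   # stdout/stderr file (%j expands to the job ID)

# Load the MPI environment (module name is site-specific; adjust as needed)
module load openmpi

# Launch the MPI program across all allocated tasks
srun ./hello_mpi
```

The script would be submitted with `sbatch hello-mpi.sh`; for a job targeting one of the GPU nodes, a directive such as `#SBATCH --gres=gpu:1` would typically be added to request a GPU from the scheduler.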