The Division of Information Technology is committed to supporting research computing for Stevens faculty, researchers, and students. We offer a range of services and solutions to meet your research needs.

Our research computing catalog includes HPC clusters, cloud-based VM hosting, physical server hosting, and consulting services. Information Technology is here to partner with you and support your research.

HPC Clusters

Our state-of-the-art data center hosts several high-performance computing clusters for the Stevens campus community.

Dorothy, Pharos, and the new DuckUte are free for campus researchers. To gain access, please submit a request.

For software requests and installations, please submit a ticket so that we can make the appropriate updates.

Dorothy

Dorothy is our traditional general-purpose high-performance computing (HPC) system, available on request to campus researchers. A sample job script follows the specifications below.

  • 80 compute nodes
  • 900 cores
  • 4 GB+ RAM per core
  • 50 TB of storage
  • C, C++, Fortran, Python, MATLAB
  • 1 Gb networking
  • Slurm job scheduler
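
As a sketch of how work might be submitted to Dorothy, the batch script below targets Slurm. Because Slurm reads "#SBATCH" directives from comment lines regardless of the interpreter, the script can be written in Python; the job name and resource values here are illustrative assumptions, not cluster defaults.

    #!/usr/bin/env python3
    #SBATCH --job-name=dorothy-example
    #SBATCH --ntasks=4
    #SBATCH --mem-per-cpu=4G
    #SBATCH --time=01:00:00

    # Slurm has already parsed the directives above; from here on this
    # is an ordinary Python program running on a compute node.
    import platform

    print("Running on node:", platform.node())

Submit the script with "sbatch dorothy-example.py" and monitor it with "squeue".
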
Pharos

Pharos is a private high-throughput, data-intensive computing cluster used exclusively by Davidson Labs and the CEOE department. A sample submission script follows the specifications below.

  • 64 compute nodes
  • 1,280 cores
  • 6 GB+ RAM per core
  • 2 PB of Lustre distributed storage
  • 56 Gbps Mellanox InfiniBand fabric for compute nodes and the storage system
  • C, C++, Fortran, R, Python, MATLAB
  • Moab/Torque job scheduler
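
Pharos runs Moab/Torque rather than Slurm, so jobs are submitted with qsub and directives use the "#PBS" prefix. A minimal sketch, again written in Python, with illustrative resource values:

    #!/usr/bin/env python3
    #PBS -N pharos-example
    #PBS -l nodes=1:ppn=8
    #PBS -l walltime=01:00:00

    # Torque has parsed the "#PBS" directives above; the rest is an
    # ordinary Python program running on a compute node.
    import platform

    print("Running on node:", platform.node())

Submit the script with "qsub pharos-example.py" and check its status with "qstat".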

Contact Dr. Raju V. Datla for more information on Pharos. 

 
DuckUte

DuckUte, a condominium-model mixed high-performance computing cluster at Stevens, extends our compute capabilities for machine learning, simulation and modeling, and a wide range of other applications. This HPC cluster, formerly known as oHPC, is built on HPE BladeSystem servers and combines CPU and GPU compute nodes under the SLURM scheduler. A sample GPU job script follows the specifications below.

  • OpenHPC (oHPC) software stack on Linux
  • 31 shared mixed CPU/GPU compute nodes (8-, 10-, or 12-core Intel® E5-2680 CPUs with 251 GB RAM, or NVIDIA Quadro K3100M GPUs with 4 GB RAM)
  • Total number of CPUs: 1,152
  • Total number of CUDA cores: 10,752
  • Up to 140 TB of storage (HPE 3PAR StoreServ storage systems)
  • 10 Gb interconnects
  • SLURM-based resource manager and job scheduler
  • Standard development tools and custom-built applications
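
Because DuckUte mixes CPU and GPU nodes under SLURM, GPU jobs typically request devices explicitly through a generic-resource directive. In this sketch the device count and time limit are illustrative, and the exact gres name on DuckUte may differ:

    #!/usr/bin/env python3
    #SBATCH --job-name=duckute-gpu-example
    #SBATCH --gres=gpu:1
    #SBATCH --time=00:30:00

    # List the GPU(s) the scheduler assigned to this job; assumes the
    # NVIDIA driver utilities are installed on the node.
    import subprocess

    subprocess.run(["nvidia-smi", "-L"], check=False)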
 
Coming FY 2023! New HPC Cluster
  • Condominium-model cluster
  • Faster CPUs and GPUs than our other HPC clusters
  • Multiple queues (standard, high-memory, and GPU); see the sketch below
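
On Slurm-managed clusters, queues correspond to partitions and are selected per job. The partition name below is a hypothetical placeholder for the planned high-memory queue; substitute the real name once the cluster is live:

    #!/usr/bin/env python3
    # "highmem" is a hypothetical partition name, not a published one.
    #SBATCH --job-name=queue-example
    #SBATCH --partition=highmem
    #SBATCH --mem=128G

    import platform

    print("Running on node:", platform.node())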

Please note that all jobs on the HPC clusters are scheduled on a first-come, first-served basis.

Consulting Services

We partner with faculty and research teams to understand their computational requirements and deliver solutions that meet them. Not sure where to start? Schedule a consultation.

Colocation

We manage an ultramodern data center designed to host high-density compute, storage, and network devices; it is the preferred location on campus for hosting servers and systems used for research.

  • APC model AR-3300 racks
  • Dual AP8865 PDUs
  • Generator backup power
  • Data center UPS

Controlled Unclassified Information (CUI) environment

Our CUI cluster meets the requirements of NIST Special Publication 800-171. This environment is available for researchers working on government contracts with CUI requirements.