Supercomputing Resources
The following high-performance computing resources are offered to the UNMC research community.
GPU-based A100 DGX system
This supercomputer is part of the Center for Intelligent Health Care under the leadership of Dr. John Windle. The Research IT Office deployed the system at UNMC and provides service, support, and maintenance.
The system is a powerful resource for running artificial intelligence, machine learning, and deep learning workloads that analyze large datasets. It is one of a kind in Nebraska and is hosted in UNMC's HIPAA-compliant environment. Researchers can request an account on the system to run computing jobs.
One node:
- CPUs: 2 sockets × 64 cores per socket × 2 threads per core (256 threads total), AMD EPYC 7742 64-core processors
- GPUs: 8, NVIDIA A100 Tensor Core, 40 GB GPU memory each
- RAM: 1008 GB
- Local Storage: 15.58 TB solid state
The system runs Ubuntu Linux 20.04 with standard development packages and libraries installed. SLURM job scheduling and Docker container support are available.
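As a sketch of how a job might be submitted through SLURM on this system, a batch script could look like the following. The partition, resource values, and file paths here are illustrative assumptions, not actual site settings; check with the Research IT Office for the correct values.

```shell
#!/bin/bash
#SBATCH --job-name=train-model     # name shown in the job queue
#SBATCH --gres=gpu:1               # request one A100 GPU
#SBATCH --cpus-per-task=8          # CPU cores for data loading
#SBATCH --mem=64G                  # system memory for the job
#SBATCH --time=04:00:00            # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out         # log file: <job-name>-<job-id>.out

# Activate an environment installed under your home directory
# (hypothetical path; adjust to your own setup)
source "$HOME/venvs/ml/bin/activate"

python train.py --epochs 10
```

A script like this would be submitted with `sbatch train.slurm`, and `squeue -u $USER` shows its status in the queue.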
Generally, researchers are responsible for installing applications in their own home directories; however, we can install general-purpose applications system-wide when they would benefit the user community. Examples include Java runtimes, Python, MATLAB, R, SAS, and TensorFlow.
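Because applications generally live in your home directory, language-level package managers can install software there without administrator rights. For example, with Python (package names are illustrative only):

```shell
# Install into ~/.local instead of the system site-packages
pip install --user numpy pandas

# Or keep each project isolated in its own virtual environment
python3 -m venv "$HOME/venvs/myproject"
source "$HOME/venvs/myproject/bin/activate"
pip install tensorflow
```

Virtual environments are usually preferable on a shared system, since each project's dependencies stay independent of the system-wide installs.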
CPU-based INBRE cluster
Nebraska INBRE sponsored a two-node cluster with attached storage, which the Research IT Office (RITO) has deployed in UNMC's HIPAA-compliant environment. The Research IT Office provides service, support, and maintenance of the system. UNMC users and researchers at INBRE-associated primarily undergraduate institutions can request user accounts on the system by contacting us.
System Specifications (224 cores / 448T):
Two nodes, each with 112 cores/224 threads. Both nodes mount dedicated storage servers.
Each node: Dell PowerEdge FC630 with dual Intel Xeon 2.3 GHz CPUs (14C/28T each), 256 GB RDIMM memory, and 10 Gbit/s network connectivity.
One storage server can host up to 20 TB of data, and the second can host up to 36 TB. Both servers connect to the INBRE compute nodes over a 10 Gbit/s fiber network.
Holland Computing Center at Peter Kiewit Institute in Omaha
The Research IT Office has a close working relationship with PKI. If you are planning to do work on the supercomputers at PKI, please contact us. UNMC users can request a user account on an HCC cluster to perform high-performance computing data analysis using non-PHI data. HCC users can access many HCC resources in a shared manner at no charge:
- Crane and Rhino are HCC's high-performance clusters for general usage. Users can run jobs on these clusters without charge. As these are shared resources, some limitations apply to jobs: for example, by default the maximum job run time is 7 days, the maximum CPUs available per user is 2000, and the maximum number of jobs per user is 1000. More details on these limitations, along with general cluster documentation, are available from HCC.
- Anvil is HCC's cloud platform. Each research group can initially spawn 10 virtual machine instances with a combined total of 20 virtual cores, 60 GB RAM, 10 storage volumes, and 100 GB of volume storage capacity. Details on per-group limits are available in HCC's documentation.
- For storage, users can access three main filesystems: Home, with a 20 GB quota per user; Common, with a 30 TB free quota per research group; and Work, with a 50 TB quota per group. Additional details on the limits and usage expectations of these filesystems are available in HCC's documentation.
Again, as these are shared resources, certain limitations apply. HCC also offers Priority Access on a cost-recovery basis to the HCC user community; details on Priority Access are available from HCC.
Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support
ACCESS is a program established and funded by the National Science Foundation to help researchers and educators utilize the nation’s advanced computing systems and services. ACCESS builds upon the successes of the 11-year XSEDE project, while also expanding the ecosystem with capabilities for new modes of research and further democratizing participation. Almost any computer application that requires more than a desktop or laptop could qualify as needing an advanced computing system. Examples include supercomputer applications, AI and machine learning, big data analysis and storage, and others.
On Demand Online Training for Supercomputing
High-quality online workshops from Cornell University are available to all XSEDE supercomputer users. After creating a free user account (approval takes about one business day), you can jump straight into the workshop material.