Ada is a hybrid supercomputer consisting of a large-memory head node and 2 to 10 compute nodes with 4 GPUs each. It includes a 4TB global NVMe filesystem and a Python-based distributed computing environment. Ada is perfect for Artificial Intelligence, Machine Learning, Bioinformatics, and scientific and technical applications requiring support for large datasets, and is optimized for running PyTorch. Designed for computationally intensive high-performance computing, the maximum Ada configuration offers 768 CPU cores, 40 GPUs, and 800TB of disk array storage.
The Midframe is a new-generation mainframe-class computer with large shared memory and many cores, departmental in size and able to replace traditional mainframes in many applications. It is perfect for in-memory databases, Big Data and Artificial Intelligence, and traditional enterprise computing applications with large in-memory needs. The Midframe includes a 4TB global NVMe filesystem and a Python-based distributed computing environment. In its maximum 6-node configuration, it supports 768 cores (1,536 threads), 18TB of memory (6TB globally shared), and 72 disk drive bays (1PB). It can optionally include 3 GPUs in each of the 5 worker nodes, for a maximum of 15 GPUs.
The Trio and Duet large-memory servers provide up to 9TB and 6TB of globally shared memory, respectively. The 6U Trio provides 384 cores (768 threads) and 36 SATA/SAS drive bays; the 4U Duet provides 256 cores (512 threads) and 24 drive bays. Both servers include a 4TB global NVMe filesystem and a Python-based distributed computing environment. The Trio and Duet Linux systems are ideal for hosting large in-memory databases, Big Data, and enterprise applications. They also have the option of adding up to three MI210 or MI100 GPUs per node, transforming them into powerful computational engines for machine learning, advanced statistics, and heavy-duty floating-point processing.