The Crunchomics Documentation
Crunchomics: The Genomics Compute Environment for SILS and IBED
Crunchomics has one application server, a head node, and five compute nodes. A storage system is mounted on which every user has a fast 25 GB SSD home directory and a 500 GB personal directory. Volumes from the faculty server can be mounted on the system.
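Since the SSD home directory is limited to 25 GB, it is worth checking usage before staging large datasets. A minimal sketch using standard tools (no Crunchomics-specific paths assumed):

```shell
# Show free space on the filesystem holding your home directory.
df -h "$HOME"

# Show the total size of your home directory contents.
du -sh "$HOME"
```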
- CPU: AMD EPYC Rome 7302, 32 cores / 64 threads, 3 GHz
- Compute cluster: 160 cores, 320 threads
- InfiniBand internal interconnect
- Local storage on the compute nodes: 8 TB (SSD/NVMe), mounted as /scratch
  - /scratch is emptied after a month of inactivity
- Memory:
  - Application server: 1024 GB
  - Head node and compute nodes: 512 GB
- Storage: 504 TB gross, operated in RAID-Z2; approx. 220 TB net
  - If one disk fails: no data loss
  - Snapshots are taken: some protection against unintentional file deletions
  - No backups are made!
- File systems are mounted on all nodes
- OS: CentOS 7
Help: w.c.deleeuw@uva.nl / j.rauwerda@uva.nl
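Because /scratch is node-local and fast but periodically emptied, the usual pattern is to stage data there, compute, and copy results back to the home directory before the job ends. A hypothetical job-script sketch (the job name, resource values, and file names are assumptions; see the chapters on Slurm jobs and local NVMe storage for the details):

```shell
#!/bin/bash
#SBATCH --job-name=example      # hypothetical job name
#SBATCH --cpus-per-task=4       # example resource request
#SBATCH --mem=16G

# Stage input on the node-local /scratch SSD.
WORKDIR=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"
cp ~/input.fastq "$WORKDIR"/    # hypothetical input file

cd "$WORKDIR"
# ... run the analysis here ...

# Copy results back to the home directory and clean up:
# /scratch is shared with other users and not backed up.
cp results.* ~/
rm -rf "$WORKDIR"
```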
- The Crunchomics Documentation
- 1. Log in on the Head Node
- 2. Preparing Your Account
- 3. Account
- 4. The Crunchomics Application Server
- 5. The Crunchomics Compute Cluster
- 6. Miniconda
- 7. Slurm Overview
- 8. Slurm Jobs
- 9. Using Slurm Indirectly
- 10. Debugging Slurm jobs
- 11. Use of Local NVMe Storage on Compute Nodes
- 12. Docker / Singularity Containers