Site Policy
October 16, 2019

System configuration drawing

As of 2019, the system is configured as shown below.

Compute node configuration

As of March 2019, compute nodes of the following specifications and types are available.

Node type | CPU model | No. of CPUs | No. of cores (per CPU) | No. of cores (total per node) | Memory capacity | GPGPU | SSD | Network (per node) | Host name | No. of nodes | No. of cores (total) | Remarks
Fat compute node | Intel Xeon Gold 6154 | 8 | 18 | 144 | 6 TB | N/A | N/A | InfiniBand 4×EDR×4 | fat1, fat2 | 2 | 288 | HyperThreading: OFF, Turbo Boost: ON
Medium compute node | Intel Xeon Gold 6148 | 4 | 20 | 80 | 3 TB | N/A | N/A | InfiniBand 4×EDR×4 | m01-m10 | 10 | 800 | HyperThreading: OFF, Turbo Boost: ON
Thin compute node | AMD EPYC 7501 | 2 | 32 | 64 | 512 GB | N/A | SSD (1.6 TB, 3.2 TB) ×1 per node | InfiniBand 4×EDR×4 | at001-at136 | 136 | 8,704 | HyperThreading: OFF, Turbo Boost: ON
Thin compute node | Intel Xeon Gold 6130 | 2 | 16 | 32 | 384 GB | N/A | SSD (1.6 TB, 3.2 TB) ×1 per node | InfiniBand 4×EDR×4 | it001-it052 | 52 | 1,664 | HyperThreading: OFF, Turbo Boost: ON
Thin compute node (equipped with GPGPU) | Intel Xeon Gold 6136 | 2 | 12 | 24 | 384 GB | NVIDIA V100 SXM2 ×4 | SSD (1.6 TB, 3.2 TB) ×1 per node | InfiniBand 4×EDR×4 | igt001-igt016 | 16 | 384 | HyperThreading: OFF, Turbo Boost: ON

Please note that the CPU model differs depending on whether the node is a Fat, Medium, or Thin compute node. In addition, some Thin nodes may be removed from the pool listed above and used for other purposes, so the number of available nodes may change without prior notice. Please see the system operation status for the number of nodes currently available.

Specifications for each CPU

The basic specifications of each CPU are as follows (cited from the manufacturers' product pages):

Processor name | Xeon Gold 6154 | Xeon Gold 6148 | AMD EPYC 7501 | Xeon Gold 6130 | Xeon Gold 6136
Codename | Skylake | Skylake | Naples | Skylake | Skylake
Release timing | Third quarter of 2017 | Third quarter of 2017 | Second quarter of 2017 | Third quarter of 2017 | Third quarter of 2017
Number of cores | 18 | 20 | 32 | 16 | 12
Number of threads | 36 | 40 | 64 | 32 | 24
Clock speed | 3.00 GHz | 2.40 GHz | 2.00 GHz | 2.10 GHz | 3.00 GHz
Theoretical operation performance (per CPU) | 1,728.0 GFLOPS | 1,536.0 GFLOPS | 512.0 GFLOPS | 1,075.2 GFLOPS | 1,152.0 GFLOPS
Maximum Turbo Boost frequency | 3.70 GHz | 3.70 GHz | 3.00 GHz | 3.70 GHz | 3.70 GHz
Cache | 24.75 MB | 27.5 MB | 64 MB | 21 MB | 24.75 MB
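
As a rough cross-check, the theoretical peak values above can be reproduced as cores × clock speed × double-precision FLOPs per cycle. The sketch below is illustrative only; the FLOPs-per-cycle figures are assumptions based on the microarchitectures (two AVX-512 FMA units, i.e. 32 DP FLOPs per cycle per core, for the Skylake-SP Xeons, and 8 DP FLOPs per cycle per core for the EPYC 7501), not values taken from this page.

    # Rough reproduction of the "Theoretical operation performance (per CPU)" row.
    # The FLOPs-per-cycle values are assumptions based on the CPU microarchitecture.
    cpus = {
        "Xeon Gold 6154": (18, 3.00, 32),
        "Xeon Gold 6148": (20, 2.40, 32),
        "AMD EPYC 7501":  (32, 2.00, 8),
        "Xeon Gold 6130": (16, 2.10, 32),
        "Xeon Gold 6136": (12, 3.00, 32),
    }
    for name, (cores, clock_ghz, flops_per_cycle) in cpus.items():
        gflops = cores * clock_ghz * flops_per_cycle
        print(f"{name}: {gflops:.1f} GFLOPS")  # e.g. "Xeon Gold 6154: 1728.0 GFLOPS"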

The specifications of the GPGPU installed in the GPU-equipped Thin compute nodes are as follows:

GPU name | Tesla V100 SXM2 (GPGPU)
Double-precision floating-point peak performance | 7.5 TFLOPS
Single-precision floating-point peak performance | 15 TFLOPS
Number of CUDA cores | 5,120
Memory size | 16 GB (HBM2)
Memory bandwidth (ECC off) | 900 GB/sec

Each compute node is connected to a full-bisection InfiniBand switch fabric; nodes can therefore communicate without affecting the bandwidth available to other nodes.

Recommended purposes for each compute node

Fat compute node

A Fat compute node is a server based on the Non-Uniform Memory Access (NUMA) architecture and equipped with 6 TB of physical memory. Because a single process can use a memory address space of up to 6 TB, it is suited to multi-threaded programs that require a large memory address space in a single process (for example, de novo assemblers for large-scale assembly such as Velvet and AllPaths-LG). However, the processor is one generation older than that of the Thin compute nodes. There are only two Fat compute nodes, and they are shared by all users, so please examine in advance the program to be used, the required memory size, the calculation algorithm to be tried, and so forth.

Medium compute node

This compute node is equipped with 80 cores and 3 TB of physical memory. It is suitable for running programs that require a large amount of memory, though not as much as the Fat compute node provides.

Thin compute node

There are two types of Thin compute nodes: one equipped with two AMD EPYC 7501 CPUs and one with two Intel Xeon Gold 6130 CPUs, which were the latest server CPUs as of March 2019. Because the per-CPU performance of the Thin compute nodes is the highest in this configuration, please use them for MPI-parallel applications, embarrassingly parallel jobs with no dependencies among tasks, and jobs that perform a large amount of parallel I/O from multiple nodes.
In addition, some of the nodes are equipped with GPGPU (Tesla V100 SXM2) and SSD.

In principle, these compute nodes need to be used via the job management system. For specific procedures, please see How to use the system.
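
As a minimal illustration only, the sketch below shows how a job might be submitted from Python, assuming a Grid Engine-style scheduler that provides the qsub command; the script name, job name, and options shown are generic placeholders, and the actual queues and resource options for this system are described in How to use the system.

    # Minimal sketch: submitting a batch job, assuming a Grid Engine-style "qsub" command.
    # The script path, job name, and output file names are placeholders.
    import subprocess

    cmd = [
        "qsub",
        "-cwd",                   # run the job in the current working directory
        "-N", "example_job",      # job name (placeholder)
        "-o", "example_job.out",  # standard output file
        "-e", "example_job.err",  # standard error file
        "run_analysis.sh",        # hypothetical batch script
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout.strip())  # the scheduler typically reports the assigned job ID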

Internal network configuration

The compute nodes are connected in full bisection with InfiniBand EDR × 1. In addition, all compute nodes are connected to the InfiniBand core switch group, and the core switches are connected to the firewall for the Supercomputer with 10 GbE × 4.

Storage configuration

The NIG Cluster provides the following disk domains classified largely by performance and purpose:

Type of storage | Mount directory | Mount protocol | Local/remote | Available compute nodes | Access speed | Main purpose or remarks
High-speed domain | /lustre6 - /lustre8 | Lustre | Remote | Accessible from all types of compute nodes | High; supports highly parallel writing from multiple nodes | Home directory and scratch area for job output
SSD domain | /data1 | Direct mount | Local | Available on Thin compute nodes | Extremely high | Job scratch data storage location (deleted within a certain period); cannot be shared among nodes
GPFS domain | /gpfs1 - /gpfs3 | Spectrum Scale | Remote | Unavailable on research nodes | High; supports highly parallel writing from multiple nodes | For the data management group
Tape domain | - | - | Remote | Unavailable on research nodes | Slow | For data backup

High-speed domain

This domain comprises the Lustre File System (Lustre), a high-performance file system designed for large-capacity parallel I/O from multiple nodes. The NIG supercomputer uses it for user home directories and as the output destination for jobs. Note, however, that Lustre does not deliver high performance in every case, for example when accessing a very large number of small files (tens of thousands or more).

Item name | Setting value
File system capacity | 3.8 PB (file system 1), 5 PB (file system 2)
Stripe count (system default) | 1
Stripe size | 1,048,576 bytes (1 MB)
Quota size per user | 1 TB (expansion possible by application)
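
Because the stripe count defaults to 1, directories that will hold large files accessed in parallel from many processes can sometimes benefit from a larger stripe count. The sketch below is illustrative only and assumes the standard Lustre lfs client tool is available; the directory path and stripe parameters are hypothetical examples, so please consult the system's usage documentation before changing striping settings.

    # Illustrative sketch: inspecting and adjusting Lustre striping for a directory.
    # Assumes the standard "lfs" client tool; the path and parameters are hypothetical.
    import subprocess

    workdir = "/lustre6/home/example_user/large_output"  # hypothetical directory

    # Show the current stripe settings (system default: stripe count 1).
    subprocess.run(["lfs", "getstripe", workdir], check=True)

    # Stripe new files in this directory across 4 OSTs with a 4 MiB stripe size.
    subprocess.run(["lfs", "setstripe", "-c", "4", "-S", "4m", workdir], check=True)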

By submitting a computer resource expansion application, you can have the quota limit raised to the desired value; please apply if necessary. While our policy is to assign capacity that best suits each user's request, please note that we may have to decline exceptional requests, such as the use of 100 TB for several years. Please also note that we review usage records every fiscal year and may reduce the assigned capacity.

Power-saving domain

This domain is mainly used for backups of the home directories and for administrative purposes, and it is not currently open as a work area that general users' jobs can write to directly. The details of its configuration are therefore omitted here; we appreciate your understanding.

SSD domain

The SSD installed in the SSD-equipped nodes described in the hardware configuration section is mounted at /ssd on the corresponding nodes. It is extremely useful for jobs that read or write a large number of small files. However, /ssd is not shared with the login node. To use this domain, copy the required data from the home directory at the start of the job script, and copy any results written to /ssd back to the home directory before the job completes.
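
As an illustration of the staging pattern described above, the sketch below copies input data from the home directory to the node-local SSD, leaves a placeholder for the computation, and copies the results back before the job finishes. All paths and directory names are hypothetical, and in practice these steps would run inside a job script on an SSD-equipped node.

    # Minimal sketch of the /ssd staging pattern: stage in, compute, stage out.
    # All paths are hypothetical; /ssd is node-local and not shared with other nodes.
    import shutil
    from pathlib import Path

    home_input  = Path.home() / "project" / "input_data"   # hypothetical input data
    home_output = Path.home() / "project" / "results"      # hypothetical results directory
    ssd_work    = Path("/ssd") / "example_user" / "work"   # node-local scratch on the SSD
    ssd_output  = ssd_work / "output"

    # 1. Stage the input data onto the node-local SSD.
    ssd_output.mkdir(parents=True, exist_ok=True)
    shutil.copytree(home_input, ssd_work / "input_data", dirs_exist_ok=True)

    # 2. Run the computation against the SSD copy, writing results under ssd_output
    #    (placeholder for the real analysis step).

    # 3. Copy the results back to the home directory before the job completes;
    #    data left on /ssd is deleted after a certain period.
    home_output.mkdir(parents=True, exist_ok=True)
    shutil.copytree(ssd_output, home_output / "ssd_results", dirs_exist_ok=True)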