NVIDIA H100 GPU | The Highest-Performance AI and Deep Learning Accelerator

NVIDIA GPU

Top GPU performance and a complete AI infrastructure experience,
delivered by 씨이랩, an NVIDIA Preferred Partner.


Tensor Core
NVIDIA's ultra-fast parallel computing platform
for deep learning and AI workloads

NVIDIA H100 Tensor Core GPU

H100 GPU
The highest-performance AI and deep learning accelerator

Exceptional performance, scalability, and security for every data center

Form Factor               H100 SXM                     H100 PCIe                    H100 NVL²
FP64                      34 teraFLOPS                 26 teraFLOPS                 67 teraFLOPS
FP64 Tensor Core          67 teraFLOPS                 51 teraFLOPS                 134 teraFLOPS
FP32                      67 teraFLOPS                 51 teraFLOPS                 134 teraFLOPS
TF32 Tensor Core          989 teraFLOPS²               756 teraFLOPS²               1,979 teraFLOPS²
BFLOAT16 Tensor Core      1,979 teraFLOPS²             1,513 teraFLOPS²             3,958 teraFLOPS²
FP16 Tensor Core          1,979 teraFLOPS²             1,513 teraFLOPS²             3,958 teraFLOPS²
FP8 Tensor Core           3,958 teraFLOPS²             3,026 teraFLOPS²             7,916 teraFLOPS²
INT8 Tensor Core          3,958 TOPS²                  3,026 TOPS²                  7,916 TOPS²
GPU memory                80GB                         80GB                         188GB
GPU memory bandwidth      3.35TB/s                     2TB/s                        7.8TB/s³
Decoders                  7 NVDEC, 7 JPEG              7 NVDEC, 7 JPEG              14 NVDEC, 14 JPEG
Max thermal design        Up to 700W (configurable)    300-350W (configurable)      2x 350-400W (configurable)
power (TDP)
Multi-Instance GPUs       Up to 7 MIGs @ 10GB each     Up to 7 MIGs @ 10GB each     Up to 14 MIGs @ 12GB each
Form factor               SXM                          PCIe,                        2x PCIe,
                                                       dual-slot air-cooled         dual-slot air-cooled
Interconnect              NVLink: 900GB/s              NVLink: 600GB/s              NVLink: 600GB/s
                          PCIe Gen5: 128GB/s           PCIe Gen5: 128GB/s           PCIe Gen5: 128GB/s
Server options            NVIDIA HGX H100 Partner      Partner and                  Partner and
                          and NVIDIA-Certified         NVIDIA-Certified Systems     NVIDIA-Certified Systems
                          Systems™ with 4 or 8 GPUs;   with 1-8 GPUs                with 2-4 pairs
                          NVIDIA DGX H100 with 8 GPUs
NVIDIA AI Enterprise      Add-on                       Included                     Included
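
Two patterns make the table easier to read: the H100 NVL column describes a two-GPU pair, so its Tensor Core throughput entries are double the single-GPU SXM figures, and halving the precision (FP16 to FP8) doubles Tensor Core throughput. A minimal arithmetic sanity check, using only numbers copied from the table above:

```python
# Tensor Core throughput figures copied from the spec table
# (teraFLOPS for FP formats, TOPS for INT8; sparsity figures).
h100_sxm = {"FP16": 1979, "FP8": 3958, "INT8": 3958, "FP64 TC": 67}
h100_nvl = {"FP16": 3958, "FP8": 7916, "INT8": 7916, "FP64 TC": 134}

# The NVL entry covers a dual-GPU pair: each row is exactly 2x the SXM value.
for precision in h100_sxm:
    assert h100_nvl[precision] == 2 * h100_sxm[precision]

# Halving the precision doubles Tensor Core throughput (FP16 -> FP8).
assert h100_sxm["FP8"] == 2 * h100_sxm["FP16"]
print("spec table ratios check out")
```

(The TF32 and FP64 non-Tensor rows follow the same pattern up to rounding, e.g. 2 × 989 ≈ 1,979.)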