H100 GPU
A top-performance accelerator for AI and deep learning
Outstanding performance, scalability, and security for every data center

| Form Factor | H100 SXM | H100 PCIe | H100 NVL |
|---|---|---|---|
| FP64 | 34 teraFLOPS | 26 teraFLOPS | 67 teraFLOPS |
| FP64 Tensor Core | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS |
| FP32 | 67 teraFLOPS | 51 teraFLOPS | 134 teraFLOPS |
| TF32 Tensor Core | 989 teraFLOPS² | 756 teraFLOPS² | 1,979 teraFLOPS² |
| BFLOAT16 Tensor Core | 1,979 teraFLOPS² | 1,513 teraFLOPS² | 3,958 teraFLOPS² |
| FP16 Tensor Core | 1,979 teraFLOPS² | 1,513 teraFLOPS² | 3,958 teraFLOPS² |
| FP8 Tensor Core | 3,958 teraFLOPS² | 3,026 teraFLOPS² | 7,916 teraFLOPS² |
| INT8 Tensor Core | 3,958 TOPS² | 3,026 TOPS² | 7,916 TOPS² |
| GPU memory | 80GB | 80GB | 188GB |
| GPU memory bandwidth | 3.35TB/s | 2TB/s | 7.8TB/s³ |
| Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG | 14 NVDEC, 14 JPEG |
| Max thermal design power (TDP) | Up to 700W (configurable) | 300-350W (configurable) | 2x 350-400W (configurable) |
| Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each | Up to 14 MIGs @ 12GB each |
| Form factor | SXM | PCIe dual-slot air-cooled | 2x PCIe dual-slot air-cooled |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server options | NVIDIA HGX H100 Partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs; NVIDIA DGX H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1–8 GPUs | Partner and NVIDIA-Certified Systems with 2–4 pairs |
| NVIDIA AI Enterprise | Add-on | Included | Included |

² With sparsity.
³ Aggregate HBM bandwidth across the two GPUs of the H100 NVL pair.
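
The sparsity footnote and the dual-GPU NVL form factor account for the spacing of the Tensor Core figures: each listed value is the dense peak doubled by 2:4 structured sparsity, and the NVL column aggregates two GPUs. A minimal Python sketch of that bookkeeping, using the FP8 row as the worked example (all constants are copied from the table above; the halving relationships between precisions are read directly off the rows):

```python
# Consistency check for the Tensor Core rows above.
# "With sparsity" (footnote 2) means 2:4 structured sparsity,
# which doubles the dense peak throughput.

SXM_FP8_SPARSE_TFLOPS = 3958   # H100 SXM, FP8 Tensor Core, with sparsity

# Dense peak is half the sparse figure.
sxm_fp8_dense = SXM_FP8_SPARSE_TFLOPS / 2       # 1,979 teraFLOPS

# H100 NVL is a pair of GPUs, so its column aggregates two devices.
nvl_fp8_sparse = 2 * SXM_FP8_SPARSE_TFLOPS      # 7,916 teraFLOPS, matches the table

# FP16/BF16 run at half the FP8 rate, and TF32 at a quarter (the table rounds 989.5 to 989).
sxm_fp16_sparse = SXM_FP8_SPARSE_TFLOPS / 2     # 1,979, matches the FP16 row
sxm_tf32_sparse = SXM_FP8_SPARSE_TFLOPS / 4     # 989.5, matches the TF32 row

assert nvl_fp8_sparse == 7916
print(f"SXM FP8 dense = {sxm_fp8_dense:.0f} TFLOPS, NVL FP8 sparse = {nvl_fp8_sparse} TFLOPS")
```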
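
To check what a deployed board actually reports against the memory, TDP, and MIG rows, NVML exposes those properties at runtime. A short sketch using the pynvml bindings (this assumes the `nvidia-ml-py` package is installed and an NVIDIA driver is present; it is an illustration, not part of the datasheet):

```python
# Query a live device via NVML and compare against the datasheet rows.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    name = pynvml.nvmlDeviceGetName(handle)            # e.g. an H100 variant
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)       # .total in bytes ("GPU memory" row)
    power_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts (TDP row)

    print(f"{name}: {mem.total / 1e9:.0f} GB, power limit {power_mw / 1000:.0f} W")

    # MIG mode: 1 = enabled ("Multi-Instance GPUs" row). Raises NVMLError
    # on GPUs without MIG support.
    current, pending = pynvml.nvmlDeviceGetMigMode(handle)
    print(f"MIG mode: current={current}, pending={pending}")
finally:
    pynvml.nvmlShutdown()
```

With MIG enabled and the board partitioned, each instance is then enumerated as its own device, which is how a single H100 presents the table's "up to 7 MIGs" to schedulers and container runtimes.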