
NVIDIA H200 GPU | Next-Gen AI·HPC Accelerator

NVIDIA GPU

Ultimate GPU performance and a complete AI infrastructure experience, delivered by XIIlab, an NVIDIA Preferred Partner.


Tensor Core: NVIDIA's ultra-fast parallel computing platform for deep learning and AI.
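Deep learning frameworks use the Hopper-generation Tensor Cores automatically once the corresponding precision modes are enabled. As a minimal sketch (assuming a CUDA-enabled PyTorch build; the matrix sizes and the small model are illustrative only, not part of any XIIlab software), TF32 and BF16 operations like these are routed through Tensor Cores on an H200:

```python
import torch

# Minimal sketch: route matmuls through Tensor Cores (assumes PyTorch + CUDA).
# Shapes and the Linear layer below are illustrative assumptions.

# Allow TF32 Tensor Core math for FP32 matmuls and cuDNN convolutions.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

device = torch.device("cuda")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# FP32 inputs, executed as TF32 on Tensor Cores.
c = a @ b

# BF16 autocast: mixed-precision Tensor Core execution for a small layer.
layer = torch.nn.Linear(4096, 4096).to(device)
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    out = layer(a)

print(c.dtype, out.dtype)  # torch.float32, torch.bfloat16
```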

[Image: NVIDIA H200 Hopper architecture Tensor Core GPU card, front view]

H200 GPU
Next-Gen AI·HPC Accelerator

Maximizing AI and HPC workload performance

| Form Factor | H200 SXM¹ | H200 NVL¹ |
|---|---|---|
| FP64 | 34 teraFLOPS | 34 teraFLOPS |
| FP64 Tensor Core | 67 teraFLOPS | 67 teraFLOPS |
| FP32 | 67 teraFLOPS | 67 teraFLOPS |
| TF32 Tensor Core | 989 teraFLOPS² | 989 teraFLOPS² |
| BFLOAT16 Tensor Core | 1,979 teraFLOPS² | 1,979 teraFLOPS² |
| FP16 Tensor Core | 1,979 teraFLOPS² | 1,979 teraFLOPS² |
| FP8 Tensor Core | 3,958 teraFLOPS² | 3,958 teraFLOPS² |
| INT8 Tensor Core | 3,958 TOPS² | 3,958 TOPS² |
| GPU memory | 141GB | 141GB |
| GPU memory bandwidth | 4.8TB/s | 4.8TB/s |
| Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG |
| Max thermal design power (TDP) | Up to 700W (configurable) | Up to 600W (configurable) |
| Multi-Instance GPUs | Up to 7 MIGs @ 18GB each | Up to 7 MIGs @ 18GB each |
| Form factor | SXM | PCIe |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | 2-way or 4-way NVIDIA NVLink Bridge: 900GB/s; PCIe Gen5: 128GB/s |
| Server options | NVIDIA HGX H200 partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs | NVIDIA MGX™ H200 NVL partner and NVIDIA-Certified Systems with 8 GPUs |
| NVIDIA AI Enterprise | Add-on | Included |
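Once a system is provisioned, the headline figures above can be sanity-checked from software. The sketch below (again assuming a CUDA-enabled PyTorch build; device index 0 is an assumption) reads the device name and total memory, which should report roughly 141GB of HBM3e on an H200:

```python
import torch

# Minimal sketch: confirm which accelerator is visible and how much memory it exposes.
# Assumes a CUDA-enabled PyTorch build; device index 0 is an assumption.
assert torch.cuda.is_available(), "No CUDA device visible"

props = torch.cuda.get_device_properties(0)
total_gb = props.total_memory / 1e9

print(f"Device:           {props.name}")        # e.g. 'NVIDIA H200'
print(f"Total memory:     {total_gb:.0f} GB")   # ~141 GB on an H200
print(f"Compute cap.:     {props.major}.{props.minor}")  # 9.0 for Hopper
print(f"Multiprocessors:  {props.multi_processor_count}")
```

The same details, including MIG partitioning state, can also be read with the standard nvidia-smi tool; under MIG, the reported memory reflects the size of the active GPU instance rather than the full 141GB.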