
DGX B200 | Next-Gen NVIDIA Enterprise GPU Server for AI·HPC Workloads

NVIDIA GPU

Ultimate GPU performance, complete AI infrastructure experience.
NVIDIA Preferred Partner XIIlab delivers.


DGX Platform
NVIDIA's unified computing platform for ultra-large-scale AI infrastructure

[Image: NVIDIA DGX B200 AI server system, data center rack front view]

DGX B200
GPU Server for AI·HPC Workloads

Foundation for AI Factory

GPU: 8x NVIDIA Blackwell GPUs
GPU Memory: 1,440 GB total, 64 TB/s HBM3e bandwidth
Performance: FP4: 144 | 72* petaFLOPS; FP8: 72 | 36* petaFLOPS
NVIDIA® NVSwitch™: 2x
NVLink Bandwidth: 14.4 TB/s aggregate bandwidth
System Power: ~14.3 kW max
CPU: 2x Intel® Xeon® Platinum 8570; 112 cores total, 2.1 GHz base, 4.0 GHz max boost
System Memory: 2 TB, configurable to 4 TB
Networking: 4x OSFP ports serving 8x single-port NVIDIA ConnectX-7 VPI, up to 400 Gb/s InfiniBand/Ethernet; 2x dual-port QSFP112 NVIDIA BlueField-3 DPUs, up to 400 Gb/s InfiniBand/Ethernet
Management Network: 10 Gb/s onboard NIC with RJ45; 100 Gb/s dual-port Ethernet NIC; host BMC with RJ45
Storage: OS: 2x 1.9 TB NVMe M.2; Internal: 8x 3.84 TB NVMe U.2
Software: NVIDIA AI Enterprise, NVIDIA Mission Control, NVIDIA DGX OS / Ubuntu
Rack Units: 10 RU
Operating Temperature: 10-35°C (50-95°F)
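
For rough capacity planning, per-GPU figures follow directly from the system totals above, since the chassis holds eight GPUs. The sketch below is purely illustrative arithmetic, not vendor tooling; the variable names and the breakdown are our own, and only the aggregate numbers come from the spec table.

```python
# Illustrative per-GPU arithmetic derived from the DGX B200 system totals above.
# Only the totals come from the spec sheet; the breakdown itself is an assumption
# that resources are spread evenly across the 8 GPUs.

NUM_GPUS = 8

system_totals = {
    "gpu_memory_gb": 1440,       # 1,440 GB HBM3e total
    "hbm_bandwidth_tbps": 64,    # 64 TB/s aggregate HBM3e bandwidth
    "fp4_petaflops": 144,        # higher of the two quoted FP4 figures (144 | 72*)
    "fp8_petaflops": 72,         # higher of the two quoted FP8 figures (72 | 36*)
    "nvlink_tbps": 14.4,         # 14.4 TB/s aggregate NVLink bandwidth
}

per_gpu = {key: value / NUM_GPUS for key, value in system_totals.items()}

for key, value in per_gpu.items():
    print(f"{key}: {value:g} per GPU")

# Expected output (approximate):
#   gpu_memory_gb: 180 per GPU
#   hbm_bandwidth_tbps: 8 per GPU
#   fp4_petaflops: 18 per GPU
#   fp8_petaflops: 9 per GPU
#   nvlink_tbps: 1.8 per GPU
```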
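On a host running DGX OS / Ubuntu with the standard NVIDIA driver stack, a quick sanity check that all eight GPUs are visible can be run with nvidia-smi. This is a minimal sketch, assuming nvidia-smi is on the PATH; it is not part of the product documentation.

```python
# Minimal sanity check: list the GPUs the driver reports, assuming `nvidia-smi`
# is available (standard on DGX OS / Ubuntu with the NVIDIA driver installed).
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    capture_output=True,
    text=True,
    check=True,
)

gpus = [line.strip() for line in result.stdout.splitlines() if line.strip()]
for gpu in gpus:
    print(gpu)

# A fully populated DGX B200 should report 8 GPUs.
assert len(gpus) == 8, f"expected 8 GPUs, found {len(gpus)}"
```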