H200 SXM

NVIDIA Hopper SXM GPU summary for training, inference, and roofline-style performance analysis.

Vendor
NVIDIA
Architecture
Hopper
Unit
SXM GPU
Form factor
SXM
Launch
2023-11-13
Memory
141 GB HBM3e
HBM bandwidth
4.8 TB/s
BF16 peak
989 TFLOPS
FP16 peak
989 TFLOPS
FP8 dense peak
1.98 PFLOPS
FP8 sparse peak
3.96 PFLOPS
FP4 dense peak
n/a
FP4 sparse peak
n/a
FP64 peak
34 TFLOPS
INT8 peak
1.98 POPS
Interconnect
NVIDIA NVLink - 900 GB/s per GPU
Power
Up to 700 W configurable TDP
Software stack
CUDA, TensorRT-LLM, NVIDIA AI Enterprise

Notes

  • NVIDIA lists tensor-core peaks with structured sparsity enabled; the dense values here are half the listed sparse peaks.
  • Memory capacity and bandwidth are the main H200 differentiators relative to H100.
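The roofline-style analysis mentioned above can be sketched directly from the spec-sheet numbers: attainable throughput is the minimum of the compute peak and HBM bandwidth times arithmetic intensity, and the ridge point (peak / bandwidth) marks where a kernel stops being memory-bound. A minimal sketch, using only the dense peaks and bandwidth listed on this page (the helper names are illustrative, not from any NVIDIA library):

```python
# Roofline sketch from the H200 SXM spec-sheet values above.
HBM_BW = 4.8e12  # bytes/s (4.8 TB/s HBM bandwidth)

DENSE_PEAKS = {
    "BF16": 989e12,        # FLOP/s
    "FP8 dense": 1.98e15,  # FLOP/s
}

def attainable(peak_flops: float, intensity: float) -> float:
    """Roofline model: min(compute peak, bandwidth * FLOP/byte)."""
    return min(peak_flops, HBM_BW * intensity)

for name, peak in DENSE_PEAKS.items():
    ridge = peak / HBM_BW  # FLOP/byte at the compute/memory crossover
    print(f"{name}: ridge point ~ {ridge:.1f} FLOP/byte")

# One sweep over the full 141 GB of HBM at peak bandwidth,
# a common lower bound for memory-bound decode steps:
print(f"full-memory read: {141e9 / HBM_BW * 1e3:.1f} ms")
```

At BF16 the ridge point works out to roughly 206 FLOP/byte (about 412 for dense FP8), so kernels below that intensity are limited by the 4.8 TB/s of HBM bandwidth rather than by tensor-core throughput.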
