The H100 is a high-performance GPU for AI training and inference. Featuring 80GB of ultra-fast HBM3 memory, it is engineered for the most demanding workloads: AI model training, large language models (LLMs), and complex scientific computing.
Recommended Scenarios
LLM Training
Advanced AI Research
Deep Learning Fine-tuning
Specifications
Architecture: Hopper
VRAM Capacity: 80GB HBM3
Memory Bandwidth: 3350 GB/s
CUDA Cores: 16896
FP16 Performance: 1979 TFLOPS
Power (TDP): 700W
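As an illustration (not part of the spec sheet), the FP16 throughput and memory bandwidth figures above imply a roofline "ridge point": the arithmetic intensity at which a kernel stops being memory-bound and becomes compute-bound on this card. A minimal sketch of that arithmetic:

```python
# Back-of-envelope roofline calculation using the H100 figures above.
# The ridge point is the arithmetic intensity (FLOPs per byte moved)
# where peak compute and peak memory bandwidth balance out.

FP16_TFLOPS = 1979    # peak FP16 throughput from the spec sheet, in TFLOPS
BANDWIDTH_GBS = 3350  # HBM3 memory bandwidth from the spec sheet, in GB/s

# Convert both to base units (FLOP/s and byte/s) and divide.
ridge = (FP16_TFLOPS * 1e12) / (BANDWIDTH_GBS * 1e9)
print(f"Ridge point: {ridge:.0f} FLOPs/byte")
```

Kernels with lower arithmetic intensity than this (for example, single-token LLM decoding) are limited by the 3350 GB/s of bandwidth rather than by raw compute, which is why HBM3 capacity and speed matter as much as TFLOPS for inference workloads.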