LLM News

Every LLM release, update, and milestone.

research · NVIDIA

POET-X reduces LLM training memory by 40%, enables billion-parameter models on single H100

Researchers introduce POET-X, a memory-efficient variant of the Reparameterized Orthogonal Equivalence Training framework that reduces memory and compute overhead in LLM training. The method enables pretraining of billion-parameter models on a single NVIDIA H100 GPU, where standard optimizers like AdamW exhaust memory.
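The core idea behind orthogonal equivalence training is to keep a frozen base weight matrix and learn orthogonal factors around it, which preserves the base matrix's singular values. The sketch below illustrates that invariance with NumPy; the factorization `W = R @ W0 @ S` and the names `R`, `S`, `W0` are assumptions for illustration, and the specific POET-X memory optimizations from the article are not shown.

```python
import numpy as np

# Minimal sketch of the orthogonal-equivalence idea: instead of updating a
# weight matrix W directly, keep a frozen base matrix W0 and learn two
# orthogonal factors R and S so that W = R @ W0 @ S. Multiplying by
# orthogonal matrices leaves W0's singular values unchanged.
# (Names and factorization are illustrative assumptions, not POET-X's API.)

rng = np.random.default_rng(0)

def random_orthogonal(n, rng):
    # QR decomposition of a Gaussian matrix yields an orthogonal Q.
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

d_out, d_in = 8, 6
W0 = rng.standard_normal((d_out, d_in))   # frozen base weights
R = random_orthogonal(d_out, rng)         # left factor (random stand-in for a learned one)
S = random_orthogonal(d_in, rng)          # right factor (random stand-in for a learned one)

W = R @ W0 @ S                            # equivalently transformed weights

# Singular values (hence spectral norm) are preserved by the transform.
sv0 = np.sort(np.linalg.svd(W0, compute_uv=False))
sv = np.sort(np.linalg.svd(W, compute_uv=False))
print(np.allclose(sv0, sv))
```

Because only the orthogonal factors are trained, such schemes can maintain optimizer state for far fewer effective parameters than AdamW applied to the full weights, which is one plausible route to the memory savings the article describes.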