Research reveals LLMs internalize logic as geometric flows in representation space
A new geometric framework demonstrates that LLMs internalize logical reasoning as smooth flows (embedding trajectories) through their representation space, rather than merely pattern-matching. The research, which tests the same logical structures across varied semantic contexts, suggests that next-token prediction training alone can produce higher-order geometric structures that encode logical invariants.
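To make the idea of an embedding trajectory concrete, here is a minimal toy sketch: a stack of residual layers stands in for an LLM's hidden layers, the hidden state is recorded after each layer, and the cosine similarity between consecutive states serves as a crude smoothness proxy. Everything here (the layer count, dimensions, and the residual-update form) is an illustrative assumption, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy residual stack standing in for a transformer's hidden layers
# (assumed setup for illustration; a real analysis would record an
# actual LLM's per-layer hidden states for a prompt).
dim, n_layers = 64, 8
weights = [rng.normal(scale=0.02, size=(dim, dim)) for _ in range(n_layers)]

def trajectory(x):
    """Collect the hidden state after each residual layer."""
    states = [x]
    for W in weights:
        x = x + np.tanh(W @ x)  # small residual update per layer
        states.append(x)
    return states

def step_cosines(states):
    """Cosine similarity between consecutive states: a smoothness proxy."""
    return [
        float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        for a, b in zip(states, states[1:])
    ]

x0 = rng.normal(size=dim)
states = trajectory(x0)
sims = step_cosines(states)
print(len(states))  # one point per layer, plus the input
print(min(sims))    # near 1.0: consecutive states stay nearly parallel
```

A smooth flow in this sense means consecutive hidden states change gradually rather than jumping, which is the kind of geometric regularity the framework looks for in real model representations.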