ThinkLabs AI Raises Series A Funding to Solve Power Grid Crisis with Physics-Informed AI
Nvidia-backed ThinkLabs AI secures Series A to revolutionize power grid management with AI that runs complex grid studies in minutes.
Tsinghua University and Z.ai researchers release IndexCache, a sparse attention optimizer that cuts 75% of redundant computation in DeepSeek-based LLMs, delivering 1.82× faster inference at 200K token context lengths.
Researchers at Tsinghua University and Z.ai have developed IndexCache, a technique that achieves 1.82× faster inference for long-context AI models by eliminating redundant sparse attention computations.