Apple to present 60 AI research studies at ICLR 2026, including SHARP 3D reconstruction model
Apple will present nearly 60 research studies and technical demonstrations at the International Conference on Learning Representations (ICLR) running April 23-27 in Rio de Janeiro. Demos include the SHARP model that reconstructs photorealistic 3D scenes from a single image in under one second, running on iPad Pro with M5 chip.
Apple will present nearly 60 research studies and technical demonstrations at the International Conference on Learning Representations (ICLR) running April 23-27 in Rio de Janeiro, Brazil.
Key demonstrations
The company's technical demos at booth #204 will showcase:
- SHARP model: Reconstructs photorealistic 3D scenes from a single image in under one second, running on iPad Pro with M5 chip
- On-device LLM inference: A quantized frontier coding model running entirely locally on a MacBook Pro with M5 Max, using MLX, Apple's open-source framework for AI inference on Apple silicon, inside Xcode's native development environment
The SHARP model uses depth estimation technology to generate 3D Gaussian representations from single images, according to a December 2025 paper from Apple researchers.
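The core idea described above, turning a monocular depth estimate into per-pixel 3D Gaussian centers, can be sketched as follows. This is an illustrative toy, not SHARP's actual pipeline: the pinhole intrinsics, the flat test depth map, and the function name are all assumptions for demonstration.

```python
import numpy as np

def depth_to_gaussian_means(depth, fx, fy, cx, cy):
    """Unproject an H x W depth map into (H*W, 3) 3D Gaussian centers
    via a standard pinhole camera model (illustrative, not SHARP's method)."""
    h, w = depth.shape
    # Pixel coordinate grids: u runs over columns, v over rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx  # unproject along the camera x-axis
    y = (v - cy) * depth / fy  # unproject along the camera y-axis
    # One candidate Gaussian center per pixel
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy example: a 4x4 depth map at a constant 2 meters
depth = np.full((4, 4), 2.0)
means = depth_to_gaussian_means(depth, fx=4.0, fy=4.0, cx=2.0, cy=2.0)
print(means.shape)  # (16, 3)
```

A real single-image reconstruction system would additionally predict each Gaussian's covariance, opacity, and color, which this sketch omits.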
Conference participation
Apple is a sponsor of ICLR 2026. The company's participation includes:
- Poster presentations
- Oral presentations
- Workshop sessions
- Technical demonstrations during exhibition hours
The presentations span research across machine learning and AI; several of the studies have previously been covered by technology media outlets.
MLX framework
Apple's MLX is an open-source framework built specifically for AI inference on Apple silicon. The framework enables developers to run large language models and other AI workloads locally on Mac hardware without cloud dependencies.
The technical demonstration will show a quantized coding model running within Xcode, Apple's integrated development environment for building applications.
What this means
Apple's substantial research presence at ICLR 2026 signals continued investment in AI capabilities, particularly in on-device inference and efficient model deployment. The SHARP model's sub-second 3D reconstruction demonstrates progress in running computationally intensive tasks on mobile hardware. The company's focus on local inference using MLX aligns with its privacy-oriented approach, differentiating it from competitors' cloud-dependent AI strategies. With nearly 60 studies presented, Apple is positioning itself as a significant research contributor in the academic AI community.