Olyxee Lab

Exploring the frontiers of intelligence

Olyxee Lab is our research division, dedicated to advancing the science of artificial intelligence. We work on fundamental problems in AI safety, interpretability, and capabilities, and publish our findings openly.

Research areas

Active

Frontier Model Research

Pushing the boundaries of what AI systems can understand, reason about, and generate. We explore novel architectures and training paradigms.

Active

AI Safety & Alignment

Developing methods to ensure AI systems behave as intended, spanning mechanistic interpretability, reward modeling, and formal verification of AI behavior.

Active

Interpretability

Understanding how neural networks represent and process information internally. Making the black box transparent without sacrificing capability.

Active

Efficient Intelligence

Building AI systems that achieve more with less — smaller models, fewer parameters, lower energy. Intelligence shouldn't require a data center.

Exploring

Multimodal Reasoning

Systems that can perceive, reason across, and generate content spanning text, vision, audio, and structured data simultaneously.

Exploring

Emergent Capabilities

Studying how complex behaviors arise from simple training objectives. Understanding and predicting capability jumps in scaled systems.

Recent work

Interpretability · 2025

On the Geometry of Concept Representation in Language Models

We show that high-level concepts in transformer models form predictable geometric structures, enabling targeted concept editing without retraining.

AI Safety · 2025

Verification-First Deployment for Edge AI Systems

A framework for systematic pre-deployment verification of AI models targeting heterogeneous edge hardware.

Efficient Intelligence · 2025

Sparse Attention Is All You Need: Efficient Transformers for Resource-Constrained Environments

We demonstrate that structured sparse attention patterns can match dense attention quality at 40% of the compute budget.

AI Safety · 2024

Failure Mode Taxonomy for Deployed AI Systems

A comprehensive classification of how AI systems fail in production, with detection signatures and mitigation strategies for each mode.

Our approach

Open research

We publish our findings, share our methods, and contribute to the broader scientific community. Knowledge compounds when shared.

Safety by design

Every research direction is evaluated through the lens of safety. We don't build capabilities without understanding their implications.

From lab to product

Our best research becomes Olyxee products. The path from paper to production is short and deliberate.

Join us at the frontier

We're looking for researchers and engineers who want to work on problems that matter.