Our Core Thesis

The Cognitive Frontier:
A New Theory of Intelligence

Why the next breakthrough in AI won't come from bigger models—it will come from understanding the human mind.

A Century-Old Framework is Holding Us Back

The history of intelligence measurement—and why we need a new paradigm.

1904

Spearman's g

The birth of modern intelligence measurement. Charles Spearman proposes general intelligence (g) as a single factor underlying all cognitive abilities.

1983

Gardner's Multiple Intelligences

Howard Gardner challenges the single-factor model, proposing seven distinct intelligences (later expanded to eight). A step forward, but still not operationalized for AI.

2012-2023

The Scaling Era

Deep learning revolution. AlexNet, transformers, GPT. Billions invested in compute. The mantra: "scale is all you need."

2024+

The Inflection Point

Diminishing returns on scaling. AI leaders acknowledge the limits. A new approach is needed—one grounded in cognitive science.

AI Leaders Calling for Change

The architects of modern AI are signaling that scaling alone won't get us there.

Ilya Sutskever, OpenAI Co-founder: "The 2010s were the age of scaling. Now we're back in the age of wonder and discovery." (Reuters, Nov 2024)

Yann LeCun, Meta Chief AI Scientist: "Auto-regressive LLMs are doomed. They cannot plan, reason, or understand the physical world." (Lex Fridman Podcast #416)

Yoshua Bengio, Turing Award Winner: "We need machines that can reason like humans—System 2 thinking, not just pattern matching." (AAAI 2020 Keynote)

Gary Marcus, NYU, AI Researcher: "Deep learning alone will never give us AGI. We need hybrid systems with cognitive architecture." (Northeastern AI)

Demis Hassabis, DeepMind CEO: "Neuroscience will be essential to building truly intelligent systems." (Singularity Hub)

Key insight: Every major AI system is still optimizing for tasks that correlate with Spearman's g. We're measuring the wrong thing.

Defining Complementary Intelligence (c)

What g misses—and why it matters for the future of AI.

The 50% Problem

Spearman's g explains only ~40-50% of cognitive variance.

For over a century, we've optimized for half the picture. AI systems trained on g-correlated tasks have reached remarkable capabilities—but they've also hit a ceiling.

What about the other half?

That's where human advantage lives. That's where complementary intelligence (c) operates. And that's what we're mapping.

g (~40-50%) + c (~50-60%) = the complete cognitive picture

Examples of Unexplored Cognitive Territories

These are not exhaustive—they represent the kind of capabilities we're mapping.

Meta-cognitive Monitoring

Knowing what you don't know. Calibrating confidence appropriately.

Why AI struggles: Requires knowing the limits of its own knowledge, not just outputting probability distributions.

Epistemological Intelligence

How we know what we know. Evaluating the quality and source of knowledge.

Why AI struggles: No grounded world model or understanding of truth.

Systems Thinking

Understanding complex interdependencies. Seeing second- and third-order effects.

Why AI struggles: Trained on isolated examples, not interconnected systems.

Dual-Process Integration

Balancing intuition (System 1) and analysis (System 2) appropriately.

Why AI struggles: Its pattern matching is a fast, System 1-like mode, with no deliberate System 2 to check it.

Contextual Judgment

Decision-making under ambiguity. Reading situations and adapting.

Why AI struggles: Needs explicit rules; can't "read the room."

Collaborative Sense-Making

Group cognition that exceeds individual reasoning capacity.

Why AI struggles: Single-agent architecture; no emergent group intelligence.

Key insight: These aren't "soft skills"—they're measurable cognitive primitives with neural correlates. And there are more to discover.
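
As a concrete illustration of what "measurable" means here, the sketch below scores one such primitive, meta-cognitive monitoring, as confidence calibration using expected calibration error, a standard calibration metric. The function name, bin count, and example data are hypothetical illustrations, not drawn from our methodology.

```python
# Minimal sketch: scoring meta-cognitive monitoring as confidence calibration.
# Expected calibration error (ECE) measures the gap between stated confidence
# and actual accuracy. All data below is hypothetical.
from typing import Sequence

def expected_calibration_error(
    confidences: Sequence[float],  # self-reported confidence per answer, in [0, 1]
    correct: Sequence[bool],       # whether each answer was actually right
    n_bins: int = 10,
) -> float:
    """Average |confidence - accuracy| over equal-width confidence bins,
    weighted by bin size. 0.0 means perfectly calibrated."""
    total = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        in_bin = [i for i, c in enumerate(confidences)
                  if lo < c <= hi or (b == 0 and c == 0.0)]
        if not in_bin:
            continue
        avg_conf = sum(confidences[i] for i in in_bin) / len(in_bin)
        accuracy = sum(correct[i] for i in in_bin) / len(in_bin)
        ece += (len(in_bin) / total) * abs(avg_conf - accuracy)
    return ece

# A hypothetical over-confident responder: high stated confidence, mixed accuracy.
print(expected_calibration_error([0.9, 0.8, 0.95, 0.6], [True, False, False, True]))
```

The specific metric is interchangeable; the point is that "knowing what you don't know" can be scored against ground truth rather than treated as an unmeasurable soft skill.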

c is not a fixed list. It's a research program—a commitment to continuously mapping the cognitive territories where humans hold qualitative advantages.

The Co-Evolution Model

Advancing together, not competing.

Paradigm Shift

Co-evolution, Not Competition

The dominant narrative frames AI as a threat to human relevance. We reject this: AI capability and human capability can advance together, each pushing the other forward.

This is not wishful thinking—it's how complex systems evolve. Predators and prey, immune systems and pathogens, technology and society. Mutual pressure creates mutual advancement.

Competition model:
• AI gets smarter → humans become obsolete
• Zero-sum: AI wins, humans lose
• Humans as bottleneck
• Static benchmark (pass/fail)
• AI trained on results
• Fear-driven adoption

Co-evolution model:
• AI gets smarter → humans level up → cycle continues
• Positive-sum: both advance
• Humans as data source + capability frontier
• Moving target (continuous advancement)
• AI trained on process
• Value-driven adoption

The Infinity Loop

Human side:

1. Discover: Identify under-utilized cognitive primitives through neuroscience research.
2. Upskill: Train humans in AI-complementary skills, capturing how thought is formed.

Process Data (the bridge): We capture reasoning traces, decision branches, uncertainty signals—not just final outputs. This data doesn't exist anywhere else.

AI side:

3. Benchmark: Build rigorous evaluation frameworks for complementary intelligence (c).
4. Model: Train AI systems on high-fidelity Process Data.

As AI masters Level N, we discover the next cognitive frontier and upskill humans to Level N+1. Continuous co-evolution.
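
To make the bridge concrete, here is a minimal sketch, under assumed class and field names, of what one Process Data record might contain; it illustrates the idea, not a specification of our actual pipeline.

```python
# Minimal sketch of one Process Data record: the reasoning process, not just
# the final answer. All class and field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProcessDataRecord:
    """One captured reasoning episode from a human working a task."""
    task_id: str
    reasoning_trace: list[str]        # intermediate steps, in the order they occurred
    decision_branches: list[str]      # alternatives considered and why they were set aside
    uncertainty_signals: list[float]  # self-reported confidence (0-1) at each step
    final_output: str                 # the only field a results-only dataset would keep

# Hypothetical example of a single record.
record = ProcessDataRecord(
    task_id="triage-0042",
    reasoning_trace=["symptom cluster suggests X", "lab value contradicts X", "re-weigh toward Y"],
    decision_branches=["X rejected after lab check", "Z considered but never ruled out"],
    uncertainty_signals=[0.6, 0.4, 0.7],
    final_output="escalate to specialist",
)
print(record.final_output)
```

Step 2 (Upskill) produces records like this and step 4 (Model) consumes them in full; training only on the final_output field is what the competition model's "AI trained on results" row describes.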

"Human + AI + better process beats a strong AI alone."

— Adapted from Garry Kasparov

We don't replace humans. We create Centaurs—human-AI teams that outperform either alone. Freestyle chess demonstrated this; we intend to demonstrate it across every domain.

The Long View

What this means for the next decade.

For AI Development

Beyond Pattern Recognition

  • New benchmarks beyond MMLU, HellaSwag
  • Training on Process Data, not just results
  • Architectures designed for human collaboration

For Human Work

Amplification, Not Replacement

  • From "AI will take my job" to "AI will amplify my value"
  • Centaur teams as the new unit of productivity
  • Continuous upskilling as competitive advantage

For Society

Human Agency Preserved

  • Complementarity preserves human agency
  • Economic value from collaboration, not replacement
  • A path to beneficial AI that includes humans

"We envision a future where AI advancement and human development are not opposing forces, but a single, accelerating spiral—each making the other more capable, more valuable, more aligned."

Go Deeper

Explore our methodology, evidence, and how we address known challenges.

Read the Research · See Our Rigor · Contact Us