Intellectual Honesty

Built on Rigor

We believe transparency is a strength. Here are the toughest questions we face—and our honest responses.

Scientific Critiques

Research Methodology & Theory

Questions about our research methodology and theoretical foundations.

"'New intelligence' sounds like pseudoscience."

Claims about discovering "new types of intelligence" often lack scientific grounding.

We don't claim to discover "new intelligence." We identify under-utilized cognitive primitives—capabilities humans possess but rarely optimize. Our Behavioral Wind Tunnels methodology, published in the Journal of Neuroscience (2025), provides a roadmap for measuring these constructs in ecologically valid settings. This is empirical science with a clear research agenda, not speculation.

"Current intelligence models already work—why do we need 'c'?"

From Spearman's g to CHC theory, intelligence frameworks have been validated for over a century.

These models are incomplete, not wrong. g and its successors capture roughly 40-50% of cognitive variance, leaving 50-60% unmeasured. That remainder was historically hard to operationalize, so the field largely set it aside. But for human-AI complementarity, it is exactly what matters. We operationalize c (complementary intelligence) as the cognitive dimensions where humans maintain a qualitative advantage over AI. c doesn't replace existing models; it extends them for the AI era.
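
As a schematic way to read those percentages (a sketch using the rough figures above, not a fitted model; the c term is simply our label for the variance that g-loaded measures leave unexplained):

```latex
% Illustrative variance decomposition, using the rough percentages quoted above.
\sigma^2_{\text{total}} = \sigma^2_{g} + \sigma^2_{c},
\qquad
\frac{\sigma^2_{g}}{\sigma^2_{\text{total}}} \approx 0.40\text{--}0.50
\quad\Longrightarrow\quad
\frac{\sigma^2_{c}}{\sigma^2_{\text{total}}} \approx 0.50\text{--}0.60
```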

"AI learns faster than humans—why train humans at all?"

If AI masters skills faster than humans can learn them, human training seems futile.

We focus on qualitative difference, not speed. Kasparov's Law holds that human-AI teams outperform either alone, not because humans are faster, but because they contribute different cognitive capabilities. We train humans in the dimensions where AI fundamentally struggles: meta-cognition, epistemology, contextual judgment.

Business Model Critiques

Strategy & Defensibility

Questions about our strategy, differentiation, and defensibility.

"You're doing EdTech AND AI Research—two different companies."

Human training and AI benchmarking seem like unrelated businesses.

Human training is our data labeling pipeline—not a separate product. Every cognitive exercise generates proprietary Process Data that feeds our AI benchmarks. The enterprise business isn't a distraction; it's the engine that produces our core research asset.
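
To make that pipeline concrete, here is a minimal sketch of how a single exercise could emit a Process Data record that doubles as a benchmark item. Every name and field below (ProcessDataRecord, to_benchmark_item, the latency and strategy fields) is a hypothetical illustration, not our production schema:

```python
from dataclasses import dataclass

@dataclass
class ProcessDataRecord:
    """One trainee's trace through one cognitive exercise (hypothetical schema)."""
    participant_id: str             # anonymized trainee identifier
    exercise_id: str                # which cognitive exercise produced this trace
    step_latencies_ms: list[float]  # per-step response times, not just the outcome
    strategy_label: str             # annotated reasoning strategy, e.g. "analogical"
    outcome_correct: bool           # whether the final answer was correct

def to_benchmark_item(record: ProcessDataRecord) -> dict:
    """Turn a training trace into a benchmark item that scores an AI system
    against the human's reasoning process, not only the final answer."""
    return {
        "task": record.exercise_id,
        "human_process": {
            "latencies_ms": record.step_latencies_ms,
            "strategy": record.strategy_label,
        },
        "human_outcome": record.outcome_correct,
    }

# Example: one exercise trace becomes one benchmark item.
trace = ProcessDataRecord(
    participant_id="p-001",
    exercise_id="contextual-judgment-03",
    step_latencies_ms=[850.0, 1210.5, 640.2],
    strategy_label="analogical",
    outcome_correct=True,
)
print(to_benchmark_item(trace))
```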

"Big labs can copy you once you prove the concept."

Google/OpenAI/Anthropic will replicate this with more resources.

Talent can be hired. Data cannot be backfilled. Our moat is the longitudinal Process Data we accumulate through enterprise training. Every cohort generates cognitive data that doesn't exist anywhere else. A competitor starting today needs years to build a comparable dataset—by then, we've moved further ahead.

"This sounds like generic soft skills training."

The market is flooded with "leadership development" programs.

We turn soft skills into measurable outcomes using cognitive neuroscience. We study humans not in the lab, but in the workplace. We use cognitive tasks and neuroimaging—not motivational speeches and games. This is cognitive engineering that drives both performance and well-being.

Why We Share This

Our approach to intellectual honesty.

Transparency Builds Trust

Investors, partners, and clients deserve to see the hard questions, not just the pitch deck optimism.

Challenges Make Us Stronger

Every critique sharpens our thinking and improves our approach. We welcome rigorous challenge.

Science Requires Humility

We might be wrong about some things. Admitting that publicly keeps us honest and open to learning.

"The first principle is that you must not fool yourself—and you are the easiest person to fool."

— Richard Feynman

Have More Questions?

We welcome rigorous inquiry. If you have a challenge we haven't addressed, let us know.

Challenge Our Thinking
Review Our Research