We believe transparency is a strength. Here are the toughest questions we face—and our honest responses.
Questions about our research methodology and theoretical foundations.
Claims about discovering "new types of intelligence" often lack scientific grounding.
We don't claim to discover "new intelligence." We identify under-utilized cognitive primitives—capabilities humans possess but rarely optimize. Our Behavioral Wind Tunnels methodology, published in the Journal of Neuroscience (2025), provides a roadmap for measuring these constructs in ecologically valid settings. This is empirical science with a clear research agenda, not speculation.
From Spearman's g to CHC theory, intelligence frameworks have been validated for over a century.
These models are incomplete, not wrong. g and its successors capture roughly 40-50% of cognitive variance, leaving 50-60% unmeasured. That unmeasured variance was historically hard to operationalize, so it received little attention. But for human-AI complementarity, it is exactly what matters. We operationalize c (complementary intelligence) as the cognitive dimensions where humans maintain a qualitative advantage over AI. c doesn't replace existing models; it extends them for the AI era.
If AI masters skills faster than humans can learn them, human training seems futile.
We focus on qualitative difference, not speed. Kasparov's Law holds that human-AI teams outperform either alone, not because humans are faster, but because they contribute different cognitive capabilities. We train humans in the dimensions where AI fundamentally struggles: meta-cognition, epistemology, and contextual judgment.
Questions about our strategy, differentiation, and defensibility.
Human training and AI benchmarking seem like unrelated businesses.
Human training is our data labeling pipeline—not a separate product. Every cognitive exercise generates proprietary Process Data that feeds our AI benchmarks. The enterprise business isn't a distraction; it's the engine that produces our core research asset.
Google/OpenAI/Anthropic will replicate this with more resources.
Talent can be hired. Data cannot be backfilled. Our moat is the longitudinal Process Data we accumulate through enterprise training. Every cohort generates cognitive data that doesn't exist anywhere else. A competitor starting today would need years to build a comparable dataset; by then, we will have moved further ahead.
The market is flooded with "leadership development" programs.
We turn soft skills into measurable outcomes using cognitive neuroscience. We study humans not in the lab, but in the workplace, using cognitive tasks and neuroimaging rather than motivational speeches and games. This is cognitive engineering that drives both performance and well-being.
Our approach to intellectual honesty.
Investors, partners, and clients deserve to see the hard questions, not just the pitch deck optimism.
Every critique sharpens our thinking and improves our approach. We welcome rigorous challenge.
We might be wrong about some things. Admitting that publicly keeps us honest and open to learning.
"The first principle is that you must not fool yourself—and you are the easiest person to fool."
— Richard Feynman

We welcome rigorous inquiry. If you have a challenge we haven't addressed, let us know.