We're Building AI Wrong, and It Could Cost Us Everything
By Bryan J. Kaus
AI is the most powerful tool we've ever built. It has infinite promise, and we're about to waste it.
Microsoft and Meta just posted AI-driven earnings that sent markets soaring, proving the technology's transformative potential. But while we celebrate the wins, most companies are missing the strategic imperative hiding in plain sight: AI's greatest value isn't replacing human capability; it's amplifying it. In this article, I'll explore some of the existential risks and offer practical ways to avoid the competency cliff.
I've spent nearly twenty years building with transformative technologies. I watched the internet create Amazon, mobile apps turn private citizens into Uber drivers, and cloud computing enable Netflix's global dominance. Each wave succeeded not by eliminating human expertise, but by multiplying it exponentially.
The current AI wave is different in scale, not in principle. Yet this time, I'm watching companies make the same mistake: optimizing for short-term efficiency while accidentally dismantling the talent pipelines and institutional knowledge that took decades to build.
The decisions we make in the next 12–24 months will determine whether AI becomes our greatest competitive multiplier or just another expensive way to hollow out our capabilities.
The hype cycle, revisited
Remember when every firm needed an "app," even if it was just a glossy wrapper around a mobile website? We're living through the 2025 equivalent: hundreds of AI-powered products built on the same foundation models, looking innovative until you dig deeper and unmask the same root sources beneath a trendy veneer.
The real question isn't whether we're building with AI - it's how we're building with AI: are we building sustainable capability or inadvertently hollowing it out?
The four risks no one's talking about
Risk #1: We're destroying the talent pipeline
The restructuring is already happening at the highest levels. McKinsey - the gold standard of human expertise - just deployed 12,000 AI agents while reducing headcount from 45,000 to 40,000. Their own senior partner calls it "existential." Traditional strategy projects that once required 14 consultants now need just 2-3, plus AI agents.
Companies across industries are following suit, using AI to handle tasks traditionally given to new graduates - from coding basic functions to drafting research reports - while simultaneously cutting entry-level hiring. Big Tech alone reduced new graduate positions by roughly 25% in 2024.
I graduated straight into the Great Recession, so I am familiar with brutal job markets. But the difference here is that this is structural, not cyclical. When we automate the foundational work, we eliminate the training ground where tomorrow's leaders develop judgment, context, and pattern recognition.
Entry-level roles do more than fill spreadsheets - they create institutional memory. Customer service calls teach empathy that later shapes product strategy. Junior coding builds systems thinking that eventually drives architecture. Automate all of that away, and you end up with senior leaders who've never seen the wiring behind the walls. Some of these roles may not need to exist in their current form, but we should evolve them rather than simply eliminate them.
Risk #2: The "easy button" is making us stupid
In a study titled “Generative AI Use and Cognitive Load” (MIT, April 2025), researchers found something unsettling: people who frequently use ChatGPT show declining neural engagement and reduced originality over time. We become what we repeatedly practice. If machines do the thinking, humans lose the muscle for inquiry.
Stephen Covey said it best: "We see the world not as it is, but as we are." When AI becomes our cognitive mirror, that reflection gets shallower by the day.
Risk #3: Governance gaps create "paperclip maximizer" scenarios
You know the thought experiment: give an AI the goal of making paperclips, and it might convert the entire planet into raw materials. The stakes are real, and guardrails matter.
Regulators are starting to catch up. NIST released comprehensive AI risk management frameworks, and the UK's AI Safety Institute is stress-testing frontier models across borders. But most companies are flying blind without internal governance structures. If governing "data & analytics" was hard, this is exponentially harder.
Risk #4: The skills gap is widening into a chasm
The Information Systems Audit and Control Association's (ISACA) June 2025 survey shows 89% of digital professionals believe they need AI upskilling within two years just to stay relevant. Yet one-third of organizations have zero formal policies. In healthcare, 75% of providers admit to GenAI skills shortages while announcing aggressive adoption plans. And even in technically advanced industries, intentional strategy, direction, and comprehension are lacking.
This isn't a problem for tomorrow. This is today's operational reality.
The path forward: Four principles for sustainable AI adoption
1. Augment, don't automate away judgment
Deploy AI to amplify human decision-making, not replace it entirely. Think "AI + analyst" rather than "AI instead of analyst." This preserves institutional knowledge while accelerating insights.
2. Build learning infrastructure before you need it
Create AI literacy programs that start before automation disrupts workflows and erodes comprehension. Channeling Adam Grant's Think Again, reward curiosity and second-order questioning as core competencies.
3. Treat governance like Sarbanes-Oxley 2.0
Approach AI risk management with the same rigor you once applied to financial controls: as foundational infrastructure, not a regulatory checkbox. Boards need scenario planning, audit trails, and kill switches for every AI deployment.
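In practice, "audit trails and kill switches" can start as something as simple as a thin wrapper around every model call. Here is a minimal sketch of that pattern; the names (`governed_call`, the flag-file kill switch, the JSONL audit log) are illustrative assumptions, not any specific vendor's API:

```python
import json
import time
from pathlib import Path

KILL_SWITCH = Path("ai_kill_switch.flag")  # ops creates this file to halt all AI calls
AUDIT_LOG = Path("ai_audit.jsonl")         # append-only audit trail, one JSON record per call

class AIDisabledError(RuntimeError):
    """Raised when the kill switch is active."""

def governed_call(model_fn, prompt, deployment_id):
    """Run a model call only if the kill switch is off, and log every call."""
    if KILL_SWITCH.exists():
        raise AIDisabledError(f"AI deployment '{deployment_id}' halted by kill switch")
    result = model_fn(prompt)
    with AUDIT_LOG.open("a") as log:
        log.write(json.dumps({
            "ts": time.time(),
            "deployment": deployment_id,
            "prompt": prompt,
            "result": result,
        }) + "\n")
    return result

# Illustrative use with a stand-in model function:
echo_model = lambda p: p.upper()
print(governed_call(echo_model, "summarize q3 risks", "analyst-copilot"))
```

The point of the sketch is the architecture, not the code: every AI deployment routes through one governed gateway, so the board's kill switch and audit trail are enforced centrally rather than promised team by team.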
4. Measure what actually matters
Track decision quality, critical-thinking assessment scores, average time-to-insight, innovation-pipeline hit rate, execution velocity, and talent pipeline health alongside cost savings. When entry-level retention drops or critical thinking scores slide, course-correct before the culture calcifies around dependency.
What leaders must do now
AI is already rewriting balance sheets - Microsoft and Meta's performance proves it. But shareholder value built on workforce erosion is sugar-rush growth that can crash hard.
Strategic leaders need to:
• Map vulnerable tasks, not just roles. Automate the repetitive stuff, preserve the ambiguous judgment calls for humans.
• Fund the talent pipeline. Apprenticeships, rotations, and AI-assisted learning labs keep career ladders intact.
• Own the ethics. Algorithms can't testify in court or take responsibility for decisions. You can. Governance isn't optional anymore.
• Lead with purpose. Define the future talent you need, then let AI serve that vision, not the other way around.
The bottom line
AI isn't salvation or apocalypse. It's leverage: neutral until you aim it somewhere specific.
Chase efficiency without capability, and you win the quarter while losing the decade. Architect AI around human potential, and you amplify judgment, accelerate innovation, and future-proof the enterprise.
The edge has always belonged to those who learn faster than the game changes. McKinsey's Bob Sternfels puts it perfectly: "You're going to have to learn over a career at a rate you and I have never seen." The firms and individuals who embrace this reality—who build learning velocity into their DNA—will thrive. The rest will get disrupted by someone smaller, faster, and more adaptive.
But here's the counterintuitive opportunity: while monolithic consulting firms shrink teams and deploy AI agents, nimble specialists who truly understand both human dynamics and AI capabilities are positioned to capture disproportionate value. McKinsey may need fewer people per project, but they still need the right people - those who can "work well with others" and drive organizational change.
Build and integrate AI right, and it will pay compounding dividends; build it wrong or ignore it, and it will cost you everything.



