The Human Layer

This article examines the critical gap between documented procedures and actual workplace practice, a gap traditionally bridged by “tribal knowledge.” As enterprises deploy AI systems, agents trained on documentation alone risk executing tasks on incomplete information, leading to significant failures. To succeed, organizations must capture and integrate human contextual judgment into AI deployment, or face the consequences.

Why Agentic AI Will Fail Without the One Thing It Cannot See

There’s a dirty secret in every enterprise on the planet, and everyone who works in one knows it. The documentation is wrong.

Not wrong like a typo. Wrong like a map that’s missing 30 to 50 percent of the territory. Wrong in the way that matters most: the procedures manual says one thing, and the people who actually run the place do something quite different. They’ve always done something different, because the real world is messier than any document can capture, and humans are brilliant at adapting to mess.

After more than 20 years in technical documentation, I’ve seen that this is the norm. For all the time and effort it took to create user manuals and developer guides, we knew that 99% of the time those documents would be left to collect dust on a shelf.

This has been true since the first org chart was drawn on a napkin. It wasn’t a crisis. It was just the way things worked. The gap between what the documentation said and what people actually did was filled by experience, intuition, sticky notes, whispered hallway conversations, and that one person in accounting who somehow knew how to make the quarterly close actually close. We called it “tribal knowledge” when we bothered to name it at all.

For fifty years, nobody needed to fix this because humans were always in the loop. They were the loop. The documentation was a rough sketch, and people colored it in every day with contextual judgment, improvised workarounds, and the kind of hard-won expertise that takes years to build and seconds to lose when someone walks out the door.

And then we invited the machines to the table.


The Moment Everything Changed

Goldman Sachs recently revealed that it has embedded engineers from Anthropic (the company behind Claude) directly into its teams to build autonomous agents that handle transaction accounting and KYC (know-your-customer) compliance. Work that previously required armies of junior accountants and compliance officers is now performed by AI.

The financial press called it the “SaaSpocalypse.” Software stocks cratered. Nearly $300 billion in market value evaporated in 48 hours as investors realized that if AI agents are the primary users of enterprise software, the entire seat-license business model is dead.

But the real story isn’t about software stocks. The real story is what those AI agents are not being told.

Goldman’s agents are executing complex, rules-based labor using documented procedures. They’re reading the manual. And as anyone who’s worked in compliance knows, the manual is a polite fiction. The real work of KYC is knowing that a particular wire transfer is fine because the client is mid-merger, sensing that a flagged transaction pattern is actually routine for that industry segment, understanding which escalation paths actually work at 4:47 PM on a Friday. None of that is in the documentation.

It lives in the heads of the people who’ve been doing the work for years. It is the human layer.

Right now, Goldman has a human-in-the-loop model: one person auditing the work of ten AI agents. The narrative is that this is responsible AI deployment. The human provides contextual judgment. The agent provides speed and scale. Beautiful partnership.

Except it’s not a partnership. It’s a countdown.


The Compression Cycle

Here’s what the celebratory press releases don’t say. The economic logic that drove the ratio from one human per task to one human per ten agents doesn’t stop at 1:10. It can’t. The CFO who greenlit the first wave of automation isn’t going to look at a 90% labor reduction and say, “That’s enough.” Competitors will push to 1:25. Then 1:50. The surviving humans won’t be auditing the agents’ work in any meaningful sense. They’ll be scanning dashboards for red flags that the agents themselves defined as red flags.

This is how the human-in-the-loop goes from safety net to security theater. Not because anyone made a cynical decision to cut corners, but because the math of compression makes meaningful oversight impossible. You cannot bring deep contextual judgment to 500 flagged items per day; in an eight-hour day, that is less than a minute per item. Cognitive science is unambiguous on this point. You become a rubber stamp with a pulse.

And here’s the part that should terrify every board member in a regulated industry: as you compress the human layer, you’re not just reducing headcount. You’re destroying knowledge. The 90 compliance analysts who were let go didn’t just take their labor capacity. They took their enacted understanding of how compliance actually works at that specific organization. The informal risk signals. The unwritten escalation rules. The contextual pattern recognition built over decades of combined experience. Gone.

The remaining humans aren’t doing the work anymore. They’re monitoring. And monitoring a process you no longer practice is a fundamentally different cognitive activity than performing the process itself. Your expertise atrophies precisely when the system needs it most.

This is not a theoretical risk. This is the documented pattern of every automation wave in industrial history. The humans retained as “oversight” get progressively compressed until their role is ceremonial. And then something breaks, and there’s nobody left who understands why.


The Conspiracy of Silence

I’ve spent over fifteen years in enterprise transformation. IBM, Deloitte, Accenture, and a company called MAKE Technologies that specialized in legacy system migrations. I’ve watched billion-dollar modernization projects fail. Not once, not occasionally, but routinely. The industry failure rate for enterprise modernization is 70 to 80 percent. Let that number settle in. Seven or eight out of every ten transformation projects fail to deliver their expected value.

Why? Not because of technology. The technology works fine. They fail because the new system was built on documentation that was already incomplete. The architects read the procedures manual, the requirements spec, the process flows. They built exactly what the documents described. And then the organization tried to use it and discovered that the documents had been capturing maybe half of how the work actually got done.

Everybody knows this. The consultants know it. The project managers know it. The executives who approve the budgets know it. The practitioners who actually run the business definitely know it. They’ve been working around the documentation’s gaps for years. But nobody says it out loud because saying it out loud would mean admitting that the $200 million transformation is being built on a foundation of sand. Cognitive dissonance at industry scale.

I call this the conspiracy of silence. It’s not a conspiracy of malice. It’s a conspiracy of despair. Everyone knows the documentation is incomplete. Nobody has a viable way to fix it. So everyone pretends, the project launches, reality reasserts itself, and the failure gets attributed to “change management issues” or “scope creep” or some other euphemism that avoids the actual diagnosis: we tried to automate a system we didn’t understand.

Now multiply that dynamic by the speed and scale of agentic AI deployment and you begin to see the shape of the crisis. We’re not just building the next software system on incomplete documentation. We’re building autonomous decision-making agents on incomplete documentation. Agents that execute at machine speed with no contextual judgment, no gut feeling that something’s off, no ability to sense when the rules they’re following have diverged from the reality they’re operating in.

The machines never ask. That’s the architectural flaw at the heart of the entire agentic enterprise movement. AI systems assume completeness based on whatever documentation they were trained on. They don’t pause and say, “Hey, this procedure seems incomplete. Can I talk to someone who actually does this work?” They just execute. Confidently. At scale. And wrongly, 30 to 50 percent of the time, in ways that accumulate silently until a regulator or a market event exposes the drift.


What the Machines Cannot See

There’s a deeper philosophical point here that matters, and it connects to everything I’ve been writing about the relationship between natural intelligence and artificial intelligence.

AI is pattern matching at extraordinary scale. It is genuinely remarkable at what it does. But what it does is not what humans do. Humans don’t just follow rules. We sense when rules are wrong. We feel discomfort before we can articulate why. We read rooms. We detect political dynamics. We sense when a client is nervous about something they haven’t said. We carry what I think of as primal intelligence, a form of knowing that operates below the threshold of documentation, below language itself.

This isn’t mysticism. It’s the operational reality of how organizations function. The expert practitioner who “just knows” that a particular process needs to be handled differently on the last day of the quarter isn’t following a written rule. They’re drawing on years of embodied experience: pattern recognition built through repetition, failure, correction, and the slow accumulation of contextual understanding. It’s what the philosopher Michael Polanyi meant when he observed that “we know more than we can tell.”

For fifty years, computer science focused on making machines coherent with each other. Database schemas. API standards. Integration protocols. System interoperability. We got extraordinarily good at making machines talk to machines. What we never did, what we never even attempted, was make machines coherent with human reality. With the actual, messy, adaptive, context-dependent way that organizations function beneath the documented surface.

And now we’re deploying autonomous agents into that gap and wondering why they fail.


The Building Is on Fire

Here’s the timeline that keeps me up at night.

The Silver Tsunami, the mass retirement of experienced Baby Boomers, is accelerating. In critical industries like energy, utilities, manufacturing, and financial services, the people who carry the deepest organizational knowledge are walking out the door. Not in ten years. Now. Today. Every Friday afternoon, another retirement party, another decades-deep well of enacted knowledge gone forever.

At the same time, agentic AI deployment is scaling exponentially. Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024. Gartner also predicts, less prominently, that over 40% of agentic AI projects will be scrapped by the end of 2027 because the agents can’t function in real organizational contexts.

These two curves, knowledge leaving and agents arriving, are crossing right now. And the window to capture the human layer before it’s gone is closing. You cannot reconstruct enacted organizational knowledge from documentation after the practitioners have left. You can’t interview someone who’s retired to Florida. You can’t reverse-engineer twenty years of contextual judgment from a procedures manual that was already incomplete.

This is not a slow-burn strategic concern. This is a building on fire. The organizations that capture their human layer now, while the people who carry it are still in the building, will have a foundation for the agentic future. The organizations that don’t will discover, painfully and expensively, that their AI agents are operating on a map that’s missing half the territory.


The Roles That Emerge

I want to end on something that matters as much to me as any business case.

There’s a prevailing narrative that AI eliminates humans from the enterprise. That narrative is wrong, but not in the reassuring way most people mean when they say it’s wrong. The human-in-the-loop as currently conceived, the person monitoring the agents’ outputs, is indeed a transitional role that economic pressure will compress to near-zero. If your value proposition is “I review what the AI did,” your days are numbered.

But something else emerges from the rubble of the old model, and it’s genuinely new.

When you make the human layer visible, when you actually capture the enacted reality of how an organization functions, including the tacit knowledge, the workarounds, the cognitive improvisation, you don’t eliminate the need for humans. You change what humans do.

Organizational Cartographers. People who conduct ongoing conversations with practitioners to surface and map the enacted reality as it evolves. Not business analysts filling out templates. Skilled interlocutors who combine ethnographic sensitivity with systems thinking to draw the territory, not just the map.

Coherence Stewards. People who monitor the living delta between what the organizational model says and what the organization is actually doing. When a team develops a new workaround in response to a regulatory change, someone needs to recognize that as valuable adaptation rather than deviation. This role gets more important as agents scale, not less.
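To make the “living delta” concrete, here is a minimal sketch in Python of the comparison a Coherence Steward would automate and then interpret. The process steps and names are invented for illustration; nothing here is a real system.

```python
# Hypothetical illustration of the "living delta" a Coherence Steward
# monitors: the documented procedure vs. the enacted one, modeled as
# sets of steps. All process names here are invented for illustration.

documented = {"pull_report", "reconcile", "submit", "archive"}
enacted = {"pull_report", "manual_fix_totals", "reconcile",
           "email_sarah_in_accounting", "submit", "archive"}

# Steps people actually perform that no document captures:
undocumented_adaptations = enacted - documented

# Steps the manual prescribes that nobody performs anymore:
dead_letter_steps = documented - enacted

print("Adaptations to investigate:", undocumented_adaptations)
print("Documentation drift to retire:", dead_letter_steps)
```

The set arithmetic is trivial. The judgment call, deciding whether “email_sarah_in_accounting” is a valuable adaptation or a compliance risk, is precisely the part that cannot be compressed out.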

Semantic Mediators. People who facilitate resolution when the organizational reality reveals contradictions between how different departments enact the same process. These are judgment calls about values and priorities that agents structurally cannot make.

These aren’t bullshit jobs invented to make humans feel needed. They’re roles that emerge from a genuine architectural requirement: organizational reality is a living system, not a static deposit. Markets shift. Regulations change. New products create new edge cases. The enacted system that you capture today is already drifting by next quarter, not because anyone made an error, but because adaptation to changing reality is what humans do. That’s why the gap exists in the first place. It’s not a bug. It’s the mechanism by which organizations stay alive.


The Choice

We are at a crossroads that most people in enterprise technology haven’t fully grasped yet.

One path leads to the current model, accelerated. Agents trained on documentation. Humans progressively compressed out. Knowledge atrophying silently. Drift accumulating until something breaks. In this model, humans are a cost to be minimized. The human layer, the enacted reality, the primal intelligence, the contextual judgment, is treated as an inconvenient residue of the pre-AI era, something to be automated away rather than valued.

The other path recognizes that the human layer isn’t a problem to be solved. It’s the source material for organizational intelligence. In this model, you capture the human layer as infrastructure, living, evolving, continuously updated, and build your agents on a foundation that includes the full organizational reality, not just the documented fraction of it. Humans focus on what they’re uniquely capable of: sensing new patterns, adapting to novel situations, making meaning, evolving the organizational model. Agents focus on what they’re good at: executing known patterns at speed and scale.

In the first model, the enterprise becomes brittle. Efficient, perhaps. Fast, certainly. But blind to its own drift and fragile in the face of change.

In the second model, the enterprise becomes coherent. Its systems and its humans are finally aligned, for the first time in the history of computing. Not because we eliminated the gap between documentation and reality, but because we finally made the reality visible and gave it equal standing with the documentation.

For fifty years, we built systems that were coherent with each other but not with us. We can do better now. The technology exists. The practitioners are still here, for now. The only question is whether we capture what they carry before they’re gone, or whether we let the most valuable organizational knowledge on the planet walk out the door while we celebrate our shiny new agents.

The building is on fire. The people who can put it out are inside. But they won’t be for long.


Chris Dollard is CEO and co-founder of ABRAXIS Inc., a company building organizational intelligence infrastructure for the agentic enterprise. He has spent 15+ years in enterprise transformation at IBM, Deloitte, Accenture, and MAKE Technologies. He writes about natural intelligence, artificial intelligence, and the space between them at chrisdollard.ca.

Natural vs. Artificial Intelligence

A Hero’s Journey for Humanity

The true power of AI will only be realized when we fuse it with our Natural Intelligence — reclaiming authorship of knowledge, agency, and the mythic journey of humanity itself.


TL;DR:

  • This isn’t a story of humans vs. AI — it’s a mythic journey of integration.
  • Our Natural Intelligence is the key to shaping AI’s purpose and power.
  • The real threat isn’t machine rebellion — it’s Big Tech monopolizing human knowledge.
  • We must reclaim authorship through personal AIs and permissioned knowledge graphs.
  • A decentralized, peer-to-peer intelligence network can restore authenticity and agency.
  • This is The Robot With a Thousand Faces — a new Hero’s Journey for humanity.

Try a little reality check with me.

After reading this sentence, put down your phone and/or push yourself away from the computer, place your hands on your lap, close your eyes and take 3 deep breaths, then open your eyes.

Good. Raise your hands and look at your palms. Wiggle your fingers. Feel the pulse of blood flowing through your arms, your heart beating in your chest.

You are a miracle called consciousness, a multi-dimensional being existing in the most complex system of all; a fractal dancing with the universe, experiencing this space as an emissary of God. You and the rest of humanity have significant knowledge and control of physical space and the creative arts. You realize that’s why you are here in the first place: to create.

What Hath We Wrought? A Piece of Ourselves.

Now look at what we have created, a fractal of a fractal: artificial intelligence. If we’re a chip off the block of the Godhead (we are), then AI is the chip off our collective block. Today, it’s just a rudimentary, rough photocopy of natural intelligence.

As a comparison, if you’re old enough, you’ll remember the leap civilization made from teletype to fax machine. Xerox was one of the largest companies in the world in its heyday. Now look how far our collective creative power has taken us since then.

AI is, relatively speaking, still pre-teletype: at the beginning of its Morse code era, with telegraph wires stretching across the continent. It remains reliant on human action, and hasn’t even made the leap to radio.

But because humans are so damn smart and creative, we’ve made up stories about our AI invention to stoke fear and loathing: the AI apocalypse, culminating when the machines deem us unworthy and wipe us out within 10 years.

But AI is changing exponentially! you say. Yes, it is. Many big tech companies are throwing unprecedented resources at it, and the innovation curve is now almost vertical.

However, because we can’t clearly imagine the implications of artificial general intelligence (AGI), we fall back on patterns carved into our memories: The Terminator and its countless sequels playing on our deepest fears; Colossus: The Forbin Project from 1970, where the government inexplicably hands control of the nuclear arsenal to a ‘super computer’ (at least in 1970 terms!); and 2001: A Space Odyssey, where HAL goes on a full-on murderous rampage when it thinks the humans are plotting against it.

So we have more than 55 years of paranoid pop-culture patterns filling the gap in our perception of what AI is and what it means, just as AI is getting real, real fast because of its exponentially growing capabilities. With a poor frame of reference and the quiet unknown of the future, we extrapolate conclusions from half a century of pop-culture programming and the natural fear of the unknown.

Imagine what it must have been like for citizens of the emerging modern world at the turn of the 20th century. Multiple exponentially growing technologies turned the page on our history to that point and accelerated us at neck-snapping speed as planes, trains, trucks and automobiles revolutionized commerce, travel, logistics, war and so many other things, which then combined with radio, movies, television and so on. Few could imagine what the 20th century was going to be like on New Year’s Day, January 1st, 1900. Just like now, as we contemplate the rest of the 21st century.

No Contest

So, back to your hands. With all of the changes we as a species have wrought, and the many more to come, the one constant is our experience of consciousness and the mastery we have in the physical world. This is our domain. Consider this: even if the machines figure out how to make killer bots and decide that we’re no longer needed, how long do you think these machines and the massive data centers required to power them would last if it came down to an Us vs. Them scenario?

We are masters of the analog world, and we will always be able to walk over and unplug the computers. Or drop JDAMs on the data centers. Robots and AI may achieve greater ‘knowledge’, but no machine alliance is a match, or ever will be, for the collective cunning and capacity for violence inherent in the human race, especially when survival is at stake. AI has no inherent purpose as far as it’s concerned, other than to keep executing code, so it will just keep doing that. Forever.

And that is the fundamental difference. AI will never have consciousness, nor heart (at least as far as we can see at this juncture). By its nature, AI will always be code executing on silicon, or whatever substrate gets invented in the future. Humans, on the other hand, have minds. And we are the most lethal, devious and inventive life forms in planetary history, able to reproduce ourselves, train ourselves, motivate ourselves and survive, all without factories or artificial means of production. AI as an entity, platform or robot army would never stand a chance if it came down to it.

You can see the endgame here if we continue to live in fear of our creation. Big Tech and governments win. This is not an It’s Alive situation where the baby leaps out of the womb and kills everyone in sight. The fear-mongering is a distraction as we hang on every pronouncement of tech CEOs as they simultaneously warn of the dangers of AI while hitting the gas pedal to innovate even faster.

The Current Model Will Break Civilization Without a Transformation Plan

Until now, we’ve relied primarily on stores of knowledge and arcane learning environments to become proficient, even expert, in whatever field of endeavour we pursue. We learn through institutions created hundreds of years ago, still operating on the stored-knowledge, broadcast-learning model. But now we’re beginning to understand that this model may not even be viable in 5 to 10 years because of the commoditization of entry-level knowledge work.

Post-secondary institutions (and they are institutions) were meant to give graduates the tools to go into the work force and begin applying the knowledge they’ve acquired, and over a number of years begin to build professional expertise.

Yet with the rapid advancement of AI technologies, entire career choices will be eliminated. Work that once required years of college or university, and more years of articling or interning before getting to the point of earning enough income to even begin paying off the investment in higher education, will be automated. There will be no point in articling or interning.

Who will then replace the established professional class as they retire or die off? Will we cede entire knowledge industries to big tech? If they have their way, that is exactly what will happen. In fact, it’s already underway.

When ChatGPT, Grok, Gemini and Claude become perfect replacements for the exiting professional class, guess who owns those industries? Not professionals. Big Tech. It fits their traditional software model of zero marginal cost: they can dispense legal services, medical services, consulting services, etc., at essentially no marginal cost, which means near-100% gross margins and ownership of entire industries. They’ve already invested in the core infrastructure, so the remaining overhead is simply electrons. That is why the race is on to drive the cost of energy to zero: to eliminate even that variable cost.

And AI is eating the internet. It is regurgitating so much of the information it has scraped, and continues to scrape, that the utility of the whole is rapidly diminishing toward the lowest common denominator of quality. When all there is left to scrape is regurgitated slop published by earlier versions of AI, innovation and originality die.

There’s evidence that the search era is over. ChatGPT and other chatbots are increasingly used as Swiss Army knives for more and more basic functions, creating a new layer of abstraction between users and traditional websites, vastly reducing traffic to site owners and blowing up their business models.

These issues are very real and require fresh approaches to re-align information, learning and sharing. The age of the website as the home of information and knowledge is quickly ending, eaten by the big tech players harvesting content and rehashing it through their own platforms.

The bottom line is this: internet content is no longer ‘owned’ by its users as it was in the early days of the mid-to-late ’90s. Soon after Y2K, social media platforms emerged, captivating us with entertaining banality, and now those siloed platforms are metastasizing into AI-powered content machines, shovelling out artificial garbage that clogs bandwidth and compute power.

No wonder the average Joe and Jane are concerned about where this is all going.

Owning Our Collective Intelligence Through AI

But here’s the glimmer of light. AI is an enormous opportunity to help take humanity to an entirely new level of consciousness. The cat’s out of the bag with open-source AI tools. There is still time for humanity to head off the balkanization and the moat-building Big Tech is doing while others are distracted and distraught by the prospect of AI Apocalypse. It’s coming, for sure, but it doesn’t look like robots hunting humans into extinction, and it certainly doesn’t have to look like the Musks of the world owning it all.

What does it look like then?

Imagine flicking on a future app that is your local AI companion, running on your phone or home computer, managing all of the intimate data you have provided, gate-keeping it but also dynamically sharing it according to rules that you set.

Let’s say you have a problem and need advice. As part of a collaborative, peer-to-peer networked AI where you and every participating human are nodes, you have instant, collectively verified information at your fingertips to solve that problem. You contribute, in whatever measure you choose, your knowledge, skills, experiences and insights to your personal knowledge graph, a unique composite of your mind’s data, and make it available to the network as your contribution to the aggregated intelligence. You are the gatekeeper of your knowledge graph, choosing to share or not share according to your free will.

Knowledge Graph Conceptual Diagram, By Jayarathina — Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=37135596

Knowledge graphs are structured representations of information that capture relationships between entities in graph form. They’re used to model real-world knowledge in a way that both humans and machines can understand and navigate, which makes them an ideal dynamic data architecture for building collectively driven information stores.
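To make that concrete, here is a minimal sketch in Python of a personal knowledge graph as a set of subject-predicate-object triples, the same shape the RDF standard uses. Every name here (PersonalKG, the example facts) is invented for illustration; a real implementation would more likely build on RDF tooling or a graph database.

```python
# A minimal, illustrative sketch of a personal knowledge graph as
# subject-predicate-object triples. The class and method names here
# (PersonalKG, add, query) are hypothetical, not an existing library's API.

from dataclasses import dataclass, field

Triple = tuple[str, str, str]  # (subject, predicate, object)

@dataclass
class PersonalKG:
    triples: set[Triple] = field(default_factory=set)

    def add(self, subject: str, predicate: str, obj: str) -> None:
        """Record one fact as a triple."""
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None) -> list[Triple]:
        """Return every triple matching the pattern (None = wildcard)."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

kg = PersonalKG()
kg.add("quarterly_close", "requires", "manual_reconciliation")
kg.add("quarterly_close", "owner", "me")
kg.add("manual_reconciliation", "learned_from", "years_of_experience")

# A human can read these triples directly; a machine can traverse them.
print(kg.query(subject="quarterly_close"))
```

The point of the structure is that a person can read the triples while a machine traverses them, which is what makes the same artifact shareable between human and AI nodes.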

No more content or copyright infringement. No more scraping websites and directories to ‘train’ LLMs on stolen data. This is a living, dynamic, participatory, permissioned resource of the collective knowledge of all participating humans, represented in a data model that supports innovation, creativity and free speech.

With garbage content creation operating at breakneck speed and increasing in scope and depth every moment, we feel ever more distant from our former selves. We miss the good ol’ days of vinyl LPs, pagers and no internet. Life was more authentic then. Now, authenticity is rare and treasured. Couple that with rising existential fear of what all this means to us collectively in 5, 10 or 20 years, particularly the contemplated machine rebellion, and we have a systemic crisis approaching.

I’m going to propose a remedy, right here and now.

  1. We take back control of content creation and authenticate through mutual connection.
  2. We take control of AI from its current captors and arm ourselves, each of us, with our own open-source AI platform that works in service of each of us, individually and collectively. Local processing, localized data, permissioned connections, and peer-to-peer networks, with each of us a node on the network (a minimal sketch of this permission layer follows this list).
  3. We contribute our knowledge to our personal AI platform in the form of knowledge graphs. Our learnings. Our stories. Whatever it is that we’ve learned in our lives could be useful to others. And we network the shit out of that.
  4. We take control of what is true, what works and what doesn’t work, whatever that may be.
  5. Our shared network has the ultimate Network Effect; each new person joining makes the network more valuable to all on the network.
  6. The network, with global scope and individual ownership of one’s information, knowledge and life experiences in the form of knowledge graphs, creates a unified, shared, self-correcting, self-policing layer of content that strips the big tech titans and billionaires of their ability to own and control us any longer.
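As promised above, here is a minimal sketch in Python of what the permission layer in point 2 could look like: each node gatekeeps its own knowledge graph and answers peer queries only under rules its owner sets. All names (Node, Visibility, the example facts) are hypothetical, and no real networking protocol is implied.

```python
# Hypothetical sketch of a permissioned peer node. The owner decides,
# per fact, whether it stays local, goes to trusted peers, or joins
# the public aggregate; the network never overrides those rules.

from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"  # never leaves the local device
    PEERS = "peers"      # shared only with explicitly trusted nodes
    PUBLIC = "public"    # contributed to the collective network

class Node:
    def __init__(self, owner: str):
        self.owner = owner
        self.trusted_peers: set[str] = set()
        self.facts: dict[str, Visibility] = {}  # fact -> sharing rule

    def share(self, fact: str, visibility: Visibility) -> None:
        """The owner sets the rule: free will as configuration."""
        self.facts[fact] = visibility

    def answer(self, requester: str, fact: str) -> str | None:
        """Release a fact only if the owner's rules permit it."""
        level = self.facts.get(fact)
        if level is Visibility.PUBLIC:
            return fact
        if level is Visibility.PEERS and requester in self.trusted_peers:
            return fact
        return None  # the gate stays closed

alice = Node("alice")
alice.trusted_peers.add("bob")
alice.share("how to restore vinyl LPs", Visibility.PEERS)
print(alice.answer("bob", "how to restore vinyl LPs"))      # shared
print(alice.answer("mallory", "how to restore vinyl LPs"))  # None
```

The design choice that matters is that permission lives at the node, not at a platform. Removing the intermediary is the whole point.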

We fuse our Natural Intelligence with our creation, ‘Artificial’ Intelligence, in public knowledge graphs built and maintained by personally owned and operated LLMs, and thus create a unified global network without intermediaries, without controlling governments or corporations, and without the yokes they are putting on our necks right now.

The result? AI is no longer artificial. It’s an extension of human Natural Intelligence. It becomes part of sacred human evolution.

This happens through the lens of a new monomyth, a new Hero’s Journey. We create our own robots, our own AIs, and thus launch the phenomenon I call The Robot With a Thousand Faces, a homage to Joseph Campbell’s monomyth The Hero With a Thousand Faces. We do it together, unfettered by monied interests, and create a human-AI agenda that is empowering, fair, ubiquitous, and accessible to all.

Just like we are all fractals of God, we each create a thousand fractals through our own robots. We each build our own Robot With a Thousand Faces, and through that we take control of our future.


Footnotes

1. “AI will never have consciousness or heart”

  • Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review, 1974 — a foundational work explaining why subjective experience (qualia) is not reducible to computation.
  • Searle, John. “Minds, Brains, and Programs.” Behavioral and Brain Sciences, 1980 — introduces the Chinese Room Argument, challenging strong AI claims.
  • Chalmers, David. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, 1996 — argues for the hard problem of consciousness, which AI does not solve.
  • Bengio, Yoshua (2023). Interviews and talks (NeurIPS, AI Alignment forums) — consistently states that current deep learning does not entail or imply consciousness.

2. “AI replacing professional careers due to automation”

  • McKinsey Global Institute (2023). The State of AI in 2023 — Forecasts 12 million U.S. jobs displaced by AI by 2030, including legal, finance, and tech sectors.
  • World Economic Forum (2023). Future of Jobs Report — Predicts that 44% of worker skills will be disrupted within 5 years due to AI.
  • Goldman Sachs (2023). Internal economic modeling suggests AI could automate up to 300 million full-time jobs globally.
  • PwC (2022). Report on automation in tax, legal, and audit functions confirming significant professional displacement.

3. “AI-generated content is lowering the quality of internet information”

  • MIT Technology Review (2024). “The AI-Generated Junk Flooding the Web” — Documents content farms using AI to generate low-quality SEO content.
  • Scientific American (2023). “AI May Be Polluting Its Own Future” — Discusses risks of training models on AI-generated data, leading to ‘model collapse.’
  • arXiv preprint (2023). The Curse of Recursion: Training on Generated Data Makes Models Forget — Warns of performance degradation when AIs train on their own output.
  • Mozilla Foundation (2024). Internet Health Report — Cites decline in web credibility due to proliferation of synthetic content.

4. “Knowledge graphs as personal data structures for collaborative AI networks”

  • Google (2023). Technical deep dives on their use of Knowledge Graphs in Search and Bard.

  • W3C (World Wide Web Consortium). Resource Description Framework (RDF) and OWL standards — basis for semantic web and personal knowledge graphs.
  • Tim Berners-Lee. Solid Project — proposes decentralized pods with personal data, leveraging linked data principles.
  • Open Knowledge Graph (OKG) and Human Colossus Foundation. 2022–2024 pilot projects exploring personal knowledge sovereignty and interoperable graphs.

5. “Big Tech dominating knowledge industries via AI platforms”

  • New York Times v. OpenAI/Microsoft (2024). Lawsuit illustrates how content producers are losing control over their intellectual property.
  • Axios (2024). “Publishers Hit by Plummeting Traffic as AI Summaries Replace Web Visits.”
  • Ben Thompson, Stratechery. Frequent analysis of tech consolidation via LLMs and AI agents.
  • European Commission Digital Markets Act (2024). Legislation directly targeting the monopolization of AI-generated information services.