Why AI Hasn’t Transformed Work Yet — And What Needs to Change

Nomad Data
April 23, 2025

Despite widespread adoption of AI tools like ChatGPT, Copilot, and other LLM-based solutions, most companies today aren’t seeing the sweeping productivity gains that were promised. At Nomad Data, we’ve worked closely with companies across industries to explore AI-driven automation, and we believe the disconnect comes down to one critical issue: context.

Enterprise work is not generic. It’s complex, deeply specialized, and highly contextual. Yet the majority of AI solutions on the market today are built as generalized tools — remarkable in their capabilities, but often disconnected from the actual needs of the individuals using them. The burden of adapting these tools to real-world business processes within an organization has been unfairly placed on the users themselves, who are rarely technologists.

Consider a claims adjuster, a financial analyst, or a legal associate. These professionals are domain experts, but they aren’t trained to configure AI systems. Expecting them to extract and encode their tacit knowledge into prompts or workflows is unrealistic. Their expertise lives in their heads, developed through years of pattern recognition, intuition, and contextual decision-making. When we ask them to explain how they do their work, they often gloss over crucial nuances—because to them, those details are obvious.

This is where most AI implementations falter. Companies purchase powerful general-purpose tools and expect employees to figure out how to use them. But the tools lack the specific knowledge of how the job is done. That’s why at Nomad Data, we take a different approach: we embed ourselves with the teams doing the work, extracting and formalizing their implicit knowledge, and translating it into AI workflows that actually replicate their outputs.

Take insurance claims processing as an example. A single claim can span hundreds of pages. Humans read through these documents to construct a summary, identify red flags, and assess the validity of the claim. We don’t just throw an AI model at this task. We sit down with claims handlers, walk through examples, understand their decisions, ask why exceptions exist, and iterate until the AI mirrors their reasoning.
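To make that concrete, here is a minimal sketch of what a multi-step claims review might look like in code, assuming an OpenAI-style chat API. The step breakdown, prompts, and model name are illustrative placeholders, not Nomad Data's actual workflow.

```python
# Minimal sketch of a multi-step claims review, assuming an OpenAI-style
# chat API. The steps and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o"   # illustrative model choice

def ask(prompt: str) -> str:
    """Single LLM call; every pipeline step funnels through here."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def summarize(claim_text: str) -> str:
    # Mirrors the adjuster's first pass: condense hundreds of pages
    # into the facts that drive a decision.
    return ask("Summarize this insurance claim file, focusing on dates, "
               f"parties, coverage, and claimed damages:\n\n{claim_text}")

def flag_risks(summary: str) -> str:
    # Encodes the red flags the claims handler described during interviews.
    return ask("List any red flags in this claim summary, such as "
               f"inconsistent dates or missing documentation:\n\n{summary}")

def assess_validity(summary: str, flags: str) -> str:
    # Final judgment step, written to mirror the handler's stated logic.
    return ask(f"Given this summary:\n{summary}\n\nAnd these red flags:\n{flags}\n\n"
               "Assess whether the claim appears valid and explain why.")

def review_claim(claim_text: str) -> dict:
    summary = summarize(claim_text)
    flags = flag_risks(summary)
    assessment = assess_validity(summary, flags)
    return {"summary": summary, "red_flags": flags, "assessment": assessment}
```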

In one engagement, we reduced what amounted to a year's worth of human review time to just minutes. But it wasn't a plug-and-play solution. It required building a custom architecture of AI building blocks—each performing specific subtasks, validating outputs, cross-checking reasoning, and adapting to the company's unique needs. This is not the job of an out-of-the-box chatbot. Because Nomad Data has built its system to be highly configurable, standing up a workflow like this can still happen quickly.

The misconception we see most often is that an API call to a foundation model is enough. But enterprise AI isn’t just about calling a model and getting a good answer. It’s about orchestrating dozens of steps, prompts, evaluations, and data flows—and doing it in a way that matches how humans actually work. That means building systems where the AI can reason, validate, and interact like a well-trained junior employee, not just a search engine with a personality.
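The structural difference is easier to see in code. Below is a hedged sketch of an orchestration skeleton in which each step's output passes a validation gate before the next step runs; the step names, checks, and retry logic are hypothetical stand-ins for whatever a real deployment would use.

```python
# Illustrative orchestration skeleton: each step produces an output that a
# validator cross-checks before the pipeline moves on. Names and checks are
# hypothetical; the point is the structure, not any specific implementation.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Step:
    name: str
    run: Callable[[Any], Any]        # the work (often an LLM prompt)
    validate: Callable[[Any], bool]  # a cross-check on the output
    max_retries: int = 2

def run_pipeline(steps: list[Step], data: Any) -> tuple[Any, list]:
    audit_trail = []
    for step in steps:
        for attempt in range(step.max_retries + 1):
            output = step.run(data)
            if step.validate(output):
                audit_trail.append((step.name, attempt, "ok"))
                data = output
                break
        else:
            # Validation kept failing: escalate to a human instead of
            # letting a bad intermediate result flow downstream.
            raise RuntimeError(f"Step '{step.name}' failed validation; "
                               "routing to human review.")
    return data, audit_trail

# Toy usage with stand-in functions instead of real model calls.
steps = [
    Step("extract_dates", run=lambda d: {"dates": ["2024-01-05"]},
         validate=lambda o: bool(o["dates"])),
    Step("check_coverage", run=lambda d: {**d, "covered": True},
         validate=lambda o: "covered" in o),
]
result, trail = run_pipeline(steps, {"document": "..."})
print(result, trail)
```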

So how do we know when an AI solution is viable? The key test we apply is knowledge articulation: can the human articulate all the steps and logic they follow? If not—if their decisions are rooted in deeply ingrained pattern recognition with no explicit rules—then full automation may not be realistic. But when that knowledge can be extracted, we iterate with the employee, compare outputs side by side, and refine until the AI reaches parity.
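As an illustration of that side-by-side comparison, here is a small sketch that scores an AI workflow's outputs against decisions the employee has already made. The agreement threshold and matching logic are placeholders chosen for the example, not a Nomad Data specification.

```python
# Sketch of a side-by-side parity check: run the AI workflow on cases the
# employee has already handled and measure agreement. The judging function
# and 95% threshold are placeholders.
from typing import Callable

def agreement_rate(cases: list[dict],
                   ai_workflow: Callable[[str], str],
                   same_conclusion: Callable[[str, str], bool]) -> float:
    """Fraction of cases where the AI reaches the human's conclusion."""
    matches = 0
    for case in cases:
        ai_output = ai_workflow(case["document"])
        if same_conclusion(ai_output, case["human_output"]):
            matches += 1
    return matches / len(cases) if cases else 0.0

# Iterate with the employee: review the mismatches, refine prompts or add
# steps, and re-run until the workflow reaches parity.
TARGET = 0.95
cases = [
    {"document": "claim file text ...", "human_output": "approve"},
    {"document": "another claim ...", "human_output": "deny"},
]
toy_workflow = lambda doc: "approve"            # stand-in for the real pipeline
judge = lambda ai, human: ai.strip() == human   # naive exact-match judge

score = agreement_rate(cases, toy_workflow, judge)
print(f"Agreement: {score:.0%} (target {TARGET:.0%})")
```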

The results have been extraordinary. We’ve seen automation reduce time-to-decision by 99% in fields like legal discovery, insurance claim review, and financial diligence. And we’re just scratching the surface. As AI agents gain access to real tools—Excel, PowerPoint, email, databases—the range of replicable work will explode. But only if we continue investing in the front-end work of capturing context.

That’s the frontier we’re focused on: building systems that not only perform but understand. That means more intelligent knowledge extraction, more flexible AI orchestration, and more embedded partnerships with the people who do the work. The future of enterprise AI doesn’t lie in off-the-shelf tools. It lies in closing the context gap.

If your team is grappling with how to turn AI hype into results, let’s talk. At Nomad Data, we don’t just implement AI — we make it work.
