Last week, I went to HumanX 2025 in Las Vegas to see firsthand how AI is evolving—catching up with clients, meeting new faces, and hearing from industry leaders shaping the future. The audience reflected the AI space itself—a mix of tech giants, ambitious startups, and everything in between. The insights on stage were sharp, the discussions candid, and the message clear—AI’s future isn’t about bigger models, but better data. The real breakthroughs will come from those who master data quality, structure, and scalability.
The speed of AI innovation is staggering, and many organizations are still struggling to keep up. From my experience working at the intersection of AI and the future of work, today’s AI boom feels like a direct parallel to the digital transformation wave of the past decade. Companies that struggled to adapt then fell behind, and the same pattern is unfolding now. The challenge isn’t just about deploying AI—it’s about ensuring AI systems are built on high-quality, context-specific data to drive meaningful, long-term impact.
I was fortunate to be invited to ArcticBlue's workshop on AI adoption (a real conference highlight), which challenged conventional thinking. They urge their clients to move beyond the traditional POC-heavy approach, which often leads to fatigue, and instead adopt a fail-fast mindset. The session pushed the idea that rapid experimentation, rather than lengthy proof-of-concept cycles, is what allows enterprises to iterate quickly and effectively without flaming out or sacrificing data quality.
Arsalan Tavakoli, SVP of Field Engineering at Databricks, underscored this challenge perfectly: “80-90% of organizations are still struggling to get AI into production.” The reality is that many of these AI projects stall not because of weak models, but because of poor alignment with broader organizational goals, weak data infrastructure, and a lack of domain-specific intelligence. He reinforced the idea that high-quality data is the key to unlocking AI adoption: without well-structured, domain-driven datasets, AI remains stuck in experimental phases instead of delivering real-world impact.
Undoubtedly, the belle of the ball at HumanX was Agentic AI. There was a lot of excitement about what this could mean for automation and efficiency, but also an underlying concern: How do we ensure these systems remain trustworthy?
Many panelists stressed that AI models trained on generic, crowdsourced data lack the nuance to operate effectively in specialized fields like healthcare, finance, and cybersecurity. As Kara Sprague, CEO of HackerOne, put it eloquently: “Cybersecurity tends to move so quickly … it’s a great petri dish for where we can look at humans in the loop.” Hot take: AI can only be as good as the data it’s trained on, and without expert-driven annotation, we’re setting ourselves up for failure.
Arvind Jain, CEO of Glean, added another layer to the conversation: “We haven’t reached 5% of AI’s capabilities.” That statement highlighted just how early we are in AI’s evolution. It’s a reminder that while AI is progressing quickly, we’re still just scratching the surface. The next breakthroughs won’t come from bigger models alone—they’ll come from better data infrastructure, expert annotation, and continuous refinement.
One of the most refreshing takeaways from HumanX was the reinforcement of human expertise in AI development. Despite the constant discussion around automation, it was clear that AI is nowhere near replacing deep human knowledge.
Ross Harper, CEO of Limbic AI, put it bluntly: “We can't just throw an LLM in with a patient and tell it to act as a therapist. We need regulated, well-trained, evidence-backed AI agents that deliver all aspects of care.” That perspective underscored the necessity of human oversight in AI-driven decision-making. AI is an amplifier of expertise, not a replacement for it. Whether it’s in healthcare, legal, or technical domains, the systems that will succeed are the ones that embed human intelligence into the data pipeline from the start.
AI’s impact on the workforce was another major theme. Many conversations focused on the opportunity for AI to enhance productivity, while others voiced concern about displacement and job security. It became clear that the companies and individuals who invest in AI literacy and upskilling will be the ones who thrive through this shift.
For me, this discussion reinforced something I’ve been thinking about a lot: AI isn’t here to replace people—it’s here to redefine work. The real challenge isn’t whether AI will take jobs, but rather how we design workforces that leverage both AI and human expertise in a sustainable way.
If there was one overarching theme at HumanX 2025, it was this: Data quality is everything. We talk a lot about new model architectures, but at the end of the day, the organizations that prioritize high-quality, expert-driven data will be the ones that succeed.
The most important takeaway for me? AI isn’t just about innovation—it’s about getting the foundation right. And that foundation is data. If we don’t solve for quality now, we’ll be dealing with the consequences for years to come.
As we look ahead to HumanX 2026 in San Francisco, I’m more convinced than ever that the next evolution of AI isn’t just about scaling models—it’s about scaling human intelligence alongside them.
No matter how specific your needs, or how complex your inputs, we’re here to show you how our innovative approach to data labelling, preprocessing, and governance can unlock Perles of wisdom for companies of all shapes and sizes.