If the pace of AI innovation felt fast in 2025, UVA Darden's Conference on Ethical AI in Business only validated that instinct. The LaCross AI Institute convened researchers, economists and industry operators at The Forum Hotel in Charlottesville in early December 2025, with keynotes from Anthropic's Peter McCrory and UVA's Anton Korinek anchoring the conversation. The day was built around a single idea: minding the gap between AI's astonishing sprint up the capability curve and the messy reality of real-world adoption, ethics and ROI.

The day was split in two: academic presentations filled the morning, and business-focused presentations took over the afternoon.

I presented "Building Agentic AI: Hype to Tangible Progress" in an afternoon session moderated by Darden's Carlos Bortoni. The room was standing‑room only despite the time (late on a Friday afternoon) and a snowstorm — a small sign that attendees had moved past curiosity to genuine investment in learning how to do AI right.

Below are some of my takeaways.

Human purpose vis-à-vis AI acceleration

One theme that kept bubbling up was, "How do we (humans) keep our purpose intact as AI innovation continues to accelerate and proliferate?"

The question isn't novel in itself, but the LaCross AI Institute did a great job of bringing the academic and business worlds together in a way that made room for it. Coming from the business world, I found it refreshing to hear discussions, formal and off-the-cuff, not just about what we can automate or optimize with AI, but about whether, why and how the work done by humans still matters once intelligent systems begin shouldering more of the burden.

The academic setting let us linger on questions business can rush past:

  • If an AI can do my tasks, what's the point of me as an employee?
  • How do I remain a moral agent when machine agents enter the loop?
  • Where do judgment, care and craft live?

The LaCross AI Institute's framing of AI ethics as a value chain — from chips and data centers to outcomes and people — made those questions actionable. Ethical AI isn't a feature; it's an outcome produced when governance touches every layer, from data to models to orchestration to security and measurement.

In my experience, business leaders want agents to amplify people, not replace them. Yet they must answer the same question: "How do I prepare my team and avoid losing purpose?" The answer, based on my work at WWT, involves a strategic blend of AI skills training, AI literacy and keeping a human in the loop where meaning and judgment matter. Conversely, grant autonomy to agents where repetition and speed matter — and strategically design that hand-off. Purpose survives when people move up the stack toward sense‑making, strategy, exception handling and care for others. Agents thrive when they assume the tasks we don't want to spend our lives on. In short, organizations must teach employees to work with AI as a peer and explicitly set autonomy thresholds.

The overarching question of preserving human purpose in an AI world may seem better suited to contemplation in the ivory towers of academia, but I believe all business people should keep it in mind amid the real adoption angst out there, especially as businesses race ahead into the exciting knowns and unknowns of AI's unfolding future. Purpose isn't a sentimental add‑on; it should be the north star for design. AI systems — especially agents that plan, select tools, remember context and act — will reshape workflows and, by extension, identity at work.

What else I heard in the halls

Infrastructure questions

Another recurring theme was where AI should be built — on private GPU clusters, in the cloud or in some hybrid variation.

The answer, of course, depends on the use case in question, on scale, on data gravity and on governance, but everyone agreed that compute and raw energy are the ceiling. Answering the question for your business means planning early for the power, cooling and networking your AI solutions will need, because long‑context, reasoning‑heavy agents will no doubt raise sustained inference costs.
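As a back-of-envelope illustration of why that planning matters, consider a rough cost model. Every figure below is an assumption chosen for illustration, not a measured benchmark:

```python
# Back-of-envelope sketch of sustained inference cost for agentic workloads.
# Every figure here is an illustrative assumption, not a measured benchmark.
GPU_POWER_KW = 0.7              # assumed draw per GPU under load
POWER_PRICE_PER_KWH = 0.10      # assumed electricity price in $/kWh
TOKENS_PER_SEC_PER_GPU = 1_500  # assumed aggregate serving throughput
TOKENS_PER_AGENT_TASK = 50_000  # long-context, multi-step agents are token-hungry
TASKS_PER_DAY = 10_000

tokens_per_day = TOKENS_PER_AGENT_TASK * TASKS_PER_DAY
gpu_hours = tokens_per_day / TOKENS_PER_SEC_PER_GPU / 3600
energy_cost = gpu_hours * GPU_POWER_KW * POWER_PRICE_PER_KWH

print(f"GPU-hours per day:   {gpu_hours:,.0f}")     # ~93 GPU-hours
print(f"Energy cost per day: ${energy_cost:,.2f}")  # power alone, before capex
```

The point isn't the exact dollar figure; it's that token volume, throughput and power interact multiplicatively, so agentic workloads deserve capacity planning before deployment, not after.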

Investment sanity

Given the conference's timing at the end of 2025, it should come as no surprise that there was plenty of talk about AI bubbles versus durable value.

The LaCross AI Institute's perspective matches my own at WWT: Real value accrues faster when enterprises build out foundational AI platform capabilities (data, governance, observability and the like), integrate them into core systems, and reuse them across AI use cases. This is tactical value that is very real today, as is the market demand for such solutions. While anything can happen, demand for AI factories doesn't appear to be waning anytime soon.

Why "context" beats "prompts" and preserves sanity

One of the more interesting threads was the shift from prompt engineering to context engineering — designing the entire state an agent "sees" (instructions, history, tools, retrieved knowledge) for long‑running, multi‑step work. It's an evolution that builds on strong prompting skills, not only reducing the resources expended but keeping agents coherent over time so they produce results we can trust.

On the platform side, dynamic context matters because it helps agents retrieve only the information that's needed, compact memory as sessions grow, and strip out nonessential "thinking tokens" so the context window remains viable. Done well, this approach lowers costs and improves agent reliability while keeping us humans focused on what the agent should do next, not wrestling it back into coherence.
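To make that concrete, here is a minimal Python sketch of dynamic context assembly. Everything in it — the Message class, the token budget, the summarize stub and the retrieve function — is an illustrative assumption, not any particular framework's API:

```python
# Minimal sketch of dynamic context assembly for a long-running agent.
# All names and numbers here are illustrative assumptions, not a real API.
from dataclasses import dataclass

MAX_CONTEXT_TOKENS = 8_000  # assumed budget, not a real model limit

@dataclass
class Message:
    role: str                  # "user", "assistant" or "tool"
    content: str
    is_thinking: bool = False  # scratchpad tokens we can safely drop

def estimate_tokens(text: str) -> int:
    return len(text) // 4      # rough heuristic: ~4 characters per token

def summarize(messages: list[Message]) -> str:
    # Stand-in for an LLM summarization call: keep each turn's first sentence.
    return " ".join(m.content.split(".")[0] + "." for m in messages)

def retrieve_relevant(query: str, knowledge: list[str], top_k: int = 3) -> list[str]:
    # Stand-in for real retrieval (vector search, APIs): naive keyword overlap.
    overlap = lambda doc: sum(w in doc.lower() for w in query.lower().split())
    return sorted(knowledge, key=overlap, reverse=True)[:top_k]

def build_context(instructions: str, history: list[Message],
                  query: str, knowledge: list[str]) -> str:
    # 1. Strip nonessential "thinking" tokens from past turns.
    kept = [m for m in history if not m.is_thinking]

    # 2. Compact the oldest turns into a summary once the budget gets tight.
    budget = MAX_CONTEXT_TOKENS - estimate_tokens(instructions)
    while sum(estimate_tokens(m.content) for m in kept) > budget and len(kept) > 4:
        kept = [Message("assistant", summarize(kept[:2]))] + kept[2:]

    # 3. Retrieve only the knowledge this step actually needs.
    snippets = retrieve_relevant(query, knowledge)

    return "\n\n".join([instructions, *snippets,
                        *(m.content for m in kept), query])
```

The design choice worth noticing: compaction and retrieval happen on every step, so the context is rebuilt fresh rather than allowed to accrete until the agent drifts.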

Cutting through the agentic hype

The UVA audience was lively, engaged and interested in seeing how the theory discussed in the morning sessions was being applied in the real world. Here is a quick overview of the simple framework I shared, which WWT uses to cut through the hype and move from AI concepts to AI outcomes:

  1. Compare human & machine capabilities: I talked about the differences and evolution between human and AI capabilities, including the importance of understanding where models already meet or exceed human‑level performance and where they still need guardrails (e.g., reasoning depth, domain knowledge, tool use). The market has a lot of "agent washing" right now: bolting a plug‑in onto a prompt‑driven assistant and calling it an agent doesn't make it one. Real agents can plan, remember, choose tools and act under governance and observability.
  2. Identify your place on the agentic AI maturity roadmap: I shared WWT's roadmap for assessing agentic AI maturity, which progresses from chatbots to assistants to task agents to advanced agents to expert agents. Keep in mind: Organizations should increase autonomy only when governance, data and observability are in place. Autonomy is a deliberate design decision.
  3. Architect for context, tools and memory: During development, it's important to design agents that can plan, select tools, retrieve what they need (e.g., RAG, APIs), and remember just enough to stay coherent without blowing out your token budgets. It's equally important to build monitoring capabilities around the action surface, including prompts, tool calls, data access and outputs (see the sketch after this list). In many cases, AI agents will be integrated into an application at the heart of AI-native software engineering efforts.
  4. Prove value with real workflows: I showcased WWT's RFP Assistant — an agentic application that ingests RFPs, then qualifies, summarizes and drafts responses in hours instead of days, improving throughput and win rates. It's a concrete example of AI architecture delivering measurable business impact in tandem with a seasoned human workforce.
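To ground items 2 and 3, here is a minimal Python sketch of one guardrailed agent step with an explicit autonomy threshold and observability hooks. The tool whitelist, the confidence field and the approval flow are illustrative assumptions, not WWT's actual implementation:

```python
# Minimal sketch of a guardrailed agent step. The tool whitelist, the
# confidence field and the approval flow are illustrative assumptions.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

AUTONOMY_THRESHOLD = 0.85  # assumed: below this, escalate to a human
ALLOWED_TOOLS = {"search_rfps", "summarize_doc", "draft_response"}

def observe(event: str, payload: dict) -> None:
    # Record every proposed action, tool call and result for auditability.
    log.info(json.dumps({"ts": time.time(), "event": event, **payload}))

def execute_tool(tool: str, args: dict) -> str:
    # Stand-in for real tool execution (APIs, RAG queries, drafting).
    return f"{tool} executed with {args}"

def run_step(action: dict, human_approve) -> str:
    observe("proposed_action", action)

    # Governance gate 1: only whitelisted tools may be called.
    if action["tool"] not in ALLOWED_TOOLS:
        observe("blocked", {"reason": "tool not allowed", "tool": action["tool"]})
        return "blocked"

    # Governance gate 2: low-confidence actions go to the human in the loop.
    if action.get("confidence", 0.0) < AUTONOMY_THRESHOLD:
        if not human_approve(action):
            observe("escalated_and_rejected", {"tool": action["tool"]})
            return "rejected"

    result = execute_tool(action["tool"], action["args"])
    observe("tool_result", {"tool": action["tool"], "result": result})
    return result

# Example: a high-confidence action runs autonomously; a low-confidence
# one would first call human_approve.
print(run_step({"tool": "summarize_doc",
                "args": {"doc_id": "rfp-123"},
                "confidence": 0.92},
               human_approve=lambda action: False))
```

The takeaway is that autonomy becomes an explicit, tunable parameter with an audit trail, rather than something that emerges by accident.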

At WWT, all of our AI development is done under the umbrella of Practical AI — which helps organizations answer the "build my own or buy off-the-shelf?" question through a proven approach that prioritizes both speed and ROI.

Closing thoughts on an excellent conference

Business settings often prioritize speed to impact. Academia gives us room to ask, "What kind of humans are we becoming as we build these systems?" The best conversations at UVA's Conference on Ethical AI in Business weren't about features; they were about agency — both human and machine — and how to align them so people remain authors of their work rather than passengers.

The LaCross AI Institute's commitment to ethics‑grounded research, which dovetails nicely with the Responsible AI we preach at WWT, and the lively dialogue felt like the right counterweight as we pursue AI innovation. Thanks again to all the wonderful people I had a chance to connect with.

If there's a single takeaway from UVA, it's this: Human purpose survives AI acceleration when we design for it. Give humans judgment and meaning; give agents speed and scale. If we build the handshake, engineer the context and monitor the action, we can let people spend more of their day on the work that makes them proud.