AI is having a moment. Tools powered by large language models (LLMs) are being embraced across industries, promoted as the future of work, innovation, and automation. But while their benefits are often loudly marketed, the risks are quieter, more systemic, and far less understood. 

This case study looks at the modern AI boom, exploring the challenges that arise when companies rely too heavily on systems they do not control. Industry analysis and observation outline the hidden trade-offs in security, privacy, legal exposure, and workforce behavior. The goal is neither to reject AI nor to rush its implementation, but to give businesses clear guardrails for thoughtful deployment. This shift reflects a growing realization across industries: the race to adopt AI has outpaced the ability to manage its dependencies, prompting organizations to reassess how much trust and control they can afford to relinquish.

1. Introduction: Another mirage on the tech horizon

Tech history is full of hyped revolutions: blockchain, the metaverse, Web3, augmented reality and 5G. Now, we're asking whether AI is here to stay or destined to fade.

Tools like ChatGPT, Claude and Gemini are being rapidly integrated into customer service, software development, email and cybersecurity. They generate content instantly, automate repetitive tasks and offer answers at scale.

But beneath the surface, familiar concerns emerge:

  • Who owns the information?
  • How accurate are the answers?
  • What happens to the data?
  • How were the outputs generated?

Companies rushing to deploy AI must consider what they're giving up. This study examines how to implement AI wisely, what's at risk and what leaders should do before it's too late. The goal is no longer speed but smart adoption: preserving autonomy, data ownership and accountability.

Despite growing investment, many companies still struggle to bridge the gap between what users want and what explainability tools deliver.

2. Challenge: Centralization and intellectual dependence

The intelligence behind today's AI tools doesn't live inside your organization. It's built and operated by external vendors using massive foundation models. These models determine what your AI can do, how it behaves and what it can't understand.

The power dynamic is skewed. Companies are building critical functions on infrastructure they don't control. Vendors are also monetizing specialization. Generic models may be widely available, but niche domains such as legal, engineering, logistics and healthcare require custom models. That means:

  • Paying to fine-tune models with proprietary data
  • Renting private model instances for privacy or compliance
  • Funding ongoing retraining to stay current
  • Buying access to vendor-owned vertical models

Solutions to your company's specific problems won't come prepackaged; they will require investment and, often, dedicated hosting.

This growing dependence on centralized intelligence infrastructure creates strategic risk. As companies outsource cognition, they may also surrender control over core functions and decision-making. Innovation becomes tied to vendor priorities, not business strategy.

That's why many enterprises are re-evaluating their reliance on external AI. When strategic knowledge lives outside the organization, independence turns into dependence.

3. Risk Area: Privacy, data retention and legal exposure

Many users assume their interactions with LLMs are private. They're not. Inputs are often logged, retained for training and can be disclosed through legal channels.

Cisco's 2023 Consumer Privacy Survey found that 62% of users mistakenly believe their AI interactions aren't stored or reviewed (Cisco, 2023). This misperception is widespread.

As AI tools flood the workplace, concerns are rising around accuracy, ethics, data misuse and long-term societal impact.

Challenge

Organizations increasingly use LLMs to draft sensitive documents, explore internal strategies or process confidential issues. These interactions generate logs that fall outside user control, raising compliance risks under laws like the California Consumer Privacy Act (CCPA).

A January 2025 TELUS Digital survey found:

  • 57% of enterprise employees entered sensitive data into public AI tools
  • 68% used personal accounts to access them, bypassing corporate oversight (TELUS Digital, 2025)

These behaviors create blind spots in data governance and increase the risk of regulatory violations.

Example

An employee drafts messaging about a workplace dispute using an LLM. That conversation could later surface in arbitration. Some platforms also allow users to create public share links, which have been indexed by search engines, exposing internal prompts and outputs without any legal process.

Impact

  • Confidentiality: Trade secrets or personnel issues may be exposed
  • Compliance: Unprotected LLM use could violate GDPR, HIPAA or internal policies
  • Legal discovery: Stored prompt history could be subpoenaed

Recommendation

Treat LLMs like any service handling private data. Choose tools with clear data protection policies. Avoid pasting sensitive details into prompts. Train employees on legal risks and set rules for how AI-generated content is used and stored.
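
To make the "avoid pasting sensitive details" rule concrete, the sketch below shows one way a pre-submission filter might mask common identifiers before a prompt leaves the organization. It is a minimal illustration in Python: the patterns, the redact_prompt name and the placeholder tokens are assumptions, and a production deployment would rely on a vetted data-loss-prevention tool rather than a handful of regular expressions.

    import re

    # Illustrative patterns only; real deployments should use vetted DLP tooling
    # and rules tuned to the organization's own data.
    PATTERNS = {
        "EMAIL": re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact_prompt(text: str) -> str:
        """Mask likely identifiers before the prompt is sent to an external LLM."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    print(redact_prompt("Email jane.doe@example.com about card 4111 1111 1111 1111."))
    # -> Email [EMAIL REDACTED] about card [CARD REDACTED].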

Legal and privacy incidents are growing. Dependence without governance turns convenience into compliance risk.

4. Risk Area: Explainability in high-stakes contexts

One of the largest concerns with AI is how readily users trust its outputs without insight into the underlying reasoning. That trust is especially risky in high-stakes contexts like credit decisions, hiring or medical recommendations, where the rationale behind a decision must be explainable for accountability. In healthcare, for example, a 2024 systematic review found that in 50 percent of the examined clinical AI systems, introducing explainable AI (XAI) features significantly increased clinicians' trust in model outputs, especially when explanations were concise and aligned with clinical reasoning (Rosenbacke et al., 2024). However, the same review also cautioned that overly complex or inconsistent explanations can undermine trust. The opacity of many foundation models thus opens a significant accountability gap when they are used in decision workflows without human oversight or traceability.

Real-world example: Lending

Regulators have reminded lenders that even when using AI or "black box" models for loan pre-approvals, they must still comply with the Equal Credit Opportunity Act (ECOA) and its implementing Regulation B. Under Regulation B, creditors that take adverse action, such as denying credit, must provide applicants with specific, understandable reasons for the decision. The Consumer Financial Protection Bureau (CFPB) has reiterated that this requirement applies equally to algorithmic or AI-driven credit models. In a 2023 circular, the CFPB warned that generic explanations like "insufficient credit score" or "purchasing history" may not meet the legal standard if the underlying model used behavioral or transaction-level data to make decisions (CFPB, 2023). Legal analysts have further noted that this applies even when lenders use complex or opaque "black box" systems, and that failure to provide a clear rationale could lead to enforcement or reputational risk (Welch & Burcat, 2024; American Bar Association, 2023).

Real-world example: Hiring

In Mobley v. Workday, Inc. (2023), the plaintiff alleged that Workday's AI screening tool discriminated against older, disabled and minority candidates. Some claims were dismissed, but in 2024 and 2025, age discrimination claims were allowed to proceed as a collective action (HR Dive, 2025; Law & The Workplace, 2025).

The EEOC has made it clear that employers using third-party AI tools remain responsible for ensuring fair outcomes. They can't escape liability by outsourcing decisions to a "black box" (EEOC Strategic Enforcement Plan 2024–2028).

Why it matters

If AI is involved in decisions about jobs, loans, healthcare or legal rights, companies remain accountable. When the process is opaque, it's nearly impossible to explain or defend decisions, opening the door to lawsuits, fines and reputational damage.

Key risks

  • No clear reasoning: Chain-of-thought explanations may sound convincing but don't reflect the model's actual logic (Anthropic, 2025)
  • Legal exposure: Regulations demand transparency
  • Erosion of trust: People lose confidence when decisions seem random or biased

A 2024 Deloitte survey found:

  • 62% of consumers are less likely to use AI services they don't understand
  • 73% say transparency directly affects their trust (Deloitte, 2024)

How to reduce the risks

  • Keep humans involved in high-stakes decisions
  • Record audit logs and version histories (a minimal logging sketch follows this list)
  • Use AI systems that explain their reasoning
  • Be transparent and offer ways to challenge outcomes
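
As one way to act on the audit-log item above, the sketch below appends each prompt/response pair, along with the model version and content hashes, to a JSON-lines file so a decision can be reconstructed later. It is a minimal sketch: the log_interaction function, field names and local file location are assumptions, and a real system would write to append-only, access-controlled storage.

    import hashlib
    import json
    import time
    from pathlib import Path

    AUDIT_LOG = Path("llm_audit.jsonl")  # illustrative location only

    def log_interaction(user_id: str, model: str, model_version: str,
                        prompt: str, response: str) -> None:
        """Append one prompt/response record for later reconstruction.
        Content hashes help detect after-the-fact edits to the stored text."""
        record = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "user_id": user_id,
            "model": model,
            "model_version": model_version,
            "prompt": prompt,
            "response": response,
            "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
            "response_sha256": hashlib.sha256(response.encode("utf-8")).hexdigest(),
        }
        with AUDIT_LOG.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")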

A 2025 Gallup survey shows the gap:

  • 40% of companies say explainability is a major risk
  • Only 17% are addressing it
  • Over 80% of consumers trust AI more when it explains itself
  • More than 90% want privacy explanations

AI doesn't reason like a person. It assigns scores, picks the highest probability and outputs a result. For users, that looks like a black box.
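
That scoring step can be shown in a few lines of code: the model assigns scores to candidate outputs, converts them to probabilities and emits the top one. The candidates and scores below are invented for illustration; the point is that nothing in this procedure produces a human-readable rationale.

    import math

    def softmax(scores):
        """Convert raw model scores into probabilities."""
        exps = [math.exp(s - max(scores)) for s in scores]
        return [e / sum(exps) for e in exps]

    candidates = ["approve", "deny", "review"]   # hypothetical options
    logits = [2.1, 3.4, 0.7]                     # hypothetical model scores

    probs = softmax(logits)
    choice = candidates[probs.index(max(probs))]
    print(dict(zip(candidates, [round(p, 3) for p in probs])), "->", choice)
    # The highest-probability option ("deny" here) is emitted; no rationale accompanies it.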

Companies must choose: regain control through transparent systems or accept exposure to legal and public consequences.

5. Risk Area: Data ownership and terms of service

As generative AI tools spread, many organizations overlook the legal frameworks around user-input data.

Major LLM providers often include broad permissions in their terms of service. Unless customers opt into stricter enterprise agreements, providers may retain and use input data for training.

For example:

  • Anthropic retains consumer data for up to five years with consent or 30 days without (Anthropic Privacy Center, 2025)
  • OpenAI's terms allow use of inputs for service improvement and policy enforcement (OpenAI Terms of Use)

If companies don't negotiate tighter controls, their prompts may become part of someone else's product.

Risks to watch

  • Public content (blog posts, Slack messages) may be ingested into training datasets without consent
  • Unique strategies or queries may help build tools that companies don't control or profit from
  • Users often don't know where their data goes or who can access it

This creates exposure under GDPR, CCPA and other data protection laws. Regulated industries, IP holders and any organization handling sensitive data need strong contractual safeguards.

6. Recommendations: Building guardrails before scaling AI

Rethinking AI dependence doesn't mean rejecting it. It means building structures that preserve ownership, transparency and accountability.

  • Acceptable use policies: Define where AI tools can be used, especially in sensitive departments
  • Tiered access models: Restrict LLM use in legal, finance and product development (see the access-policy sketch after this list)
  • Prompt logging and audit trails: Monitor interactions for compliance
  • Deploy private models: Use on-premises options to protect sensitive prompts
  • Training and verification protocols: Build AI literacy and human oversight
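
For the tiered access model noted above, the sketch below shows one way such a policy could be encoded and enforced at an internal gateway. The department names, tier labels and the is_request_allowed function are assumptions for illustration, not a reference implementation.

    # Minimum deployment tier each department must use (illustrative values).
    ACCESS_POLICY = {
        "marketing":   "public_saas",       # vendor-hosted, consumer-grade tools allowed
        "engineering": "enterprise_api",    # contracted API with no-training terms
        "product":     "enterprise_api",
        "legal":       "private_instance",  # on-premises or dedicated deployment only
        "finance":     "private_instance",
    }

    # Higher rank = more restricted deployment.
    TIER_RANK = {"public_saas": 0, "enterprise_api": 1, "private_instance": 2}

    def is_request_allowed(department: str, target_tier: str) -> bool:
        """Permit a request only if the target deployment is at least as restricted
        as the department's required tier; unknown departments are denied."""
        required = ACCESS_POLICY.get(department)
        if required is None:
            return False
        return TIER_RANK[target_tier] >= TIER_RANK[required]

    print(is_request_allowed("legal", "public_saas"))       # False: too permissive
    print(is_request_allowed("legal", "private_instance"))  # True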

Relevant frameworks

  • NIST AI Risk Management Framework (2023)
  • ISO/IEC 42001 (2023)
  • OWASP Top 10 for LLM Applications (2025)

Aligning safeguards with these frameworks helps ensure accountability and compliance across the AI lifecycle.

7. Conclusion: Skepticism is a strategy

The promise of AI is real, but so are the cautionary tales. History shows that every technology boom produces winners and casualties, and the difference often comes down to governance, foresight, and restraint. Companies that rush headlong into AI adoption without ownership of their data, models, or decision pipelines will find themselves exposed to legal discovery, regulatory penalties, reputational damage, and escalating costs for access to specialized intelligence.

The organizations that will thrive are those that treat AI as a partner, not a crutch. They understand that over-reliance creates legal, strategic, and reputational vulnerability, while selective, governed adoption builds resilience. The lesson emerging across industries is clear: dependence without control is risk, but dependence with transparency and ownership is opportunity. Rethinking that balance is not just a compliance task; it's a competitive advantage. In a world of hype and one-size-fits-all promises, skepticism is not weakness; it's modern risk management.

References:

American Bar Association. (2023). Adverse action notice compliance considerations for creditors that use AI. Business Law Today. https://businesslawtoday.org/2023/11/adverse-action-notice-compliance-considerations-for-creditors-that-use-ai

American Bar Association. (2024). Navigating the AI employment bias maze: Legal compliance and risks. Business Law Today.

Anthropic. (2025). Reasoning models don't always say what they think. https://www.anthropic.com/research/reasoning-models-dont-say-think

Anthropic. (2025). Updates to consumer terms and privacy policy. https://www.anthropic.com/news/updates-to-our-consumer-terms

Anthropic Privacy Center. (2025). How long do you store my organization's data? https://privacy.anthropic.com/en/articles/7996866-how-long-do-you-store-my-organization-s-data

California Consumer Privacy Act of 2018, Cal. Civ. Code § 1798.100, as amended by the California Privacy Rights Act of 2020.

Cisco. (2023). 2023 consumer privacy survey. https://www.cisco.com/c/en/us/about/trust-center/privacy.html

Consumer Financial Protection Bureau. (2023). Circular 2023-03: Adverse action notification requirements in connection with credit decisions based on complex algorithms. Washington, DC: CFPB. https://www.consumerfinance.gov/compliance/supervisory-guidance/circular-2023-03-adverse-action-notification-requirements-in-connection-with-credit-decisions-based-on-complex-algorithms

Deloitte. (2024). State of generative AI in the enterprise: 5th edition. Deloitte Insights. https://www.deloitte.com/insights/us/en/topics/analytics/generative-ai-enterprise-survey.html

Digital Information World. (2025). Sensitive data is slipping into AI. https://www.digitalinformationworld.com/2025/09/sensitive-data-is-slipping-into-ai.html

Google Cloud. (2024). Vertex AI documentation. https://cloud.google.com/vertex-ai/docs/privacy

HR Dive. (2025). Judge allows Workday AI bias lawsuit to proceed as collective action. https://www.hrdive.com/news/workday-ai-bias-lawsuit-class-collective-action/748518

Law & The Workplace. (2025). AI bias lawsuit against Workday reaches next stage as court grants conditional certification of ADEA claim. https://www.lawandtheworkplace.com/2025/06/ai-bias-lawsuit-against-workday-reaches-next-stage-as-court-grants-conditional-certification-of-adea-claim

McKinsey & Company. (2024). The state of AI in 2024: Expanding AI's impact amid heightened risk. McKinsey Global Survey. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2024

Mobley v. Workday, Inc., Case No. 44074 (Clearinghouse).

PwC. (2025). CEO pulse survey 2025: Balancing innovation and risk in the age of AI. https://www.pwc.com/ceopulsesurvey2025

Rosenbacke, R., Melhus, Å., McKee, M., & Stuckler, D. (2024). How explainable artificial intelligence can increase or decrease clinicians' trust in AI applications in health care: A systematic review. JMIR AI, 3, e53207. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11561425

Seyfarth. (2024). Mobley v. Workday: Court holds AI service providers could be directly liable for employment discrimination under agent theory. https://www.seyfarth.com/news-insights/mobley-v-workday-court-holds-ai-service-providers-could-be-directly-liable-for-employment-discrimination-under-agent-theory.html

Welch, D. M., & Burcat, B. A. (2024). CFPB applies adverse action notification requirement to artificial intelligence models. Skadden Insights. https://www.skadden.com/insights/publications/2024/01/cfpb-applies-adverse-action-notification-requirement

Susskind, R., & Susskind, D. (2015). The future of the professions. Oxford University Press.

TELUS Digital. (2025). AI and privacy in the workplace: 2025 enterprise survey. https://www.telus.com/en/blog/ai-privacy-enterprise-survey-2025

U.S. Equal Employment Opportunity Commission. (2024). Strategic enforcement plan 2024–2028. https://www.eeoc.gov/strategic-enforcement-plan-2024-2028

Zhou, Z., Zhang, H., et al. (2023). Trust but verify: Identifying hallucinations in LLMs for cybersecurity. https://arxiv.org/abs/2307.08533

McKinsey & Company. (2024). Global AI trust survey: How consumers perceive artificial intelligence in decision-making. https://www.mckinsey.com/insights/global-ai-trust-2024-report