WWT Research • Research Note • January 11, 2024 • 6 minute read

Generative AI: Risks, Rewards and a Framework for Utilization

Identifying existing and potential use cases will enable faster return on investment with generative AI and help determine your ownership model moving forward.

In this report

  1. A framework for generative AI adoption
  2. Think about your level of ownership 
    1. List of key generative AI terms

This was originally published in May 2023

It's likely that many within your organization are already experimenting with generative AI for business purposes — writing code or marketing pitches, analyzing pricing models, or automating certain tasks — while your senior leadership or board of directors is becoming more interested in AI solutions given the hype.

As an IT practitioner, it's your job to understand the tech landscape and educate your organization about the power and risks of certain technology solutions, regardless of their application. Generative AI is no different.

At this stage, you should be thinking about generative AI from a board-level perspective, looking beyond the near term and well into the future. What are all the risks and rewards? What are the ways your organization might win or lose? How will your people react or respond? How might generative AI make your organization more competitive and effective?

Don't assume anyone within your organization is thinking strategically about how or where generative AI should be applied.

Similar to our recommendations for prioritizing enterprise automation initiatives, we think IT leaders should focus on developing and driving a top-down strategy that aligns with business goals, clarifies where generative AI is already in use, scales grassroots efforts where appropriate, and adopts a people-first approach.

A framework for generative AI adoption

For most organizations, generative AI is not ready for enterprise-wide adoption. But there are pockets of use cases where applications such as ChatGPT can add immediate value.  

In general, we've seen many organizations ban or restrict generative AI use outright due to its inherent risks. From our perspective, here is a helpful framework for evaluating various use cases for large language models (LLMs) like ChatGPT:

A framework for LLM use case evaluation.

Think about your level of ownership 

With potential use cases identified, you and your leadership can now think about ownership models.

Some organizations — likely those with deep resources and in highly regulated industries — may consider developing and training their own LLM solutions. 

What are the costs of training your own LLM? 

  • Hardware: Training your own LLM would require sophisticated infrastructure: hundreds of GPUs, several terabytes of storage, up to 300 terabytes of memory and a high-speed network fabric.
  • People: Training your own LLM will demand a large team of highly skilled specialists, including infrastructure engineers, data engineers, solution architects and data scientists.
  • Processes: To train your own LLM, you'll need parallel ingestion and processing capabilities, advanced hardware accelerators and optimization tools, and access to high-quality data.

In all, the cost of training your own LLM today could balloon to $100 million or more, a massive investment that doesn't even account for the chain reaction it might set off across your carbon footprint, security posture and cloud consumption. Further, we estimate the total cost of ownership of a mature LLM would include 10 million to 19 million hours of cloud usage just to train the model.
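As a back-of-the-envelope illustration of how a cloud-usage range like this translates into dollars, the arithmetic can be sketched as follows. The $5 per GPU-hour rate is a hypothetical assumption for illustration, not a WWT estimate or a vendor quote:

```python
# Back-of-the-envelope LLM training cost sketch.
# All inputs are illustrative assumptions, not vendor pricing.

def training_cost(gpu_hours: float, price_per_gpu_hour: float) -> float:
    """Total cloud spend for a given number of GPU-hours."""
    return gpu_hours * price_per_gpu_hour

# The report estimates 10M-19M hours of cloud usage to train a mature LLM.
low_hours, high_hours = 10_000_000, 19_000_000
price = 5.0  # assumed $/GPU-hour for a high-end accelerator (hypothetical)

low = training_cost(low_hours, price)
high = training_cost(high_hours, price)
print(f"Estimated training cost: ${low:,.0f} - ${high:,.0f}")
# Estimated training cost: $50,000,000 - $95,000,000
```

Even with this conservative assumed rate, the result lands in the same order of magnitude as the $100 million figure above, before accounting for people, data pipelines or retraining.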

Developing a proprietary generative AI solution will take months to deliver (if not longer), but if done correctly, the resulting model would be highly secure and likely very impactful for your specific organization.

Most organizations will lean toward buying or leasing a base model and fine-tuning as needed. This approach would still consume time and resources but be optimized for use cases and maintain a level of security. 
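To illustrate the idea behind fine-tuning, here is a deliberately tiny sketch: a one-parameter "model" whose pretrained weight is nudged toward a small domain dataset by gradient descent. Every value in it is invented for illustration; real fine-tuning adjusts billions of parameters with specialized tooling, but the principle of starting from expensive pretrained weights rather than from scratch is the same:

```python
# Minimal sketch of fine-tuning: start from pretrained weights and
# nudge them with a small amount of domain data, rather than training
# from scratch. Toy one-parameter linear "model" y = w * x.

pretrained_weight = 2.0  # pretend this came from an expensive base model

# Small "proprietary" dataset: (input, target) pairs where the true w is 3.
domain_data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = pretrained_weight
lr = 0.02  # learning rate
for _ in range(200):  # a few passes over the domain data
    for x, y in domain_data:
        grad = 2 * (w * x - y) * x  # derivative of squared error w.r.t. w
        w -= lr * grad              # gradient descent step

print(round(w, 2))  # converges near 3.0, starting from the pretrained 2.0
```

The base model supplies a good starting point, so only a small dataset and a modest amount of compute are needed to adapt it, which is why this middle path consumes far fewer resources than full training.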

Those opting simply to consume an enterprise-grade LLM can find some value in its information retrieval and rudimentary analysis capabilities while keeping costs low, but they will sacrifice the security guardrails that limit risk.

Another question to consider is whether you should train your own LLM with your proprietary data and records. If you have the means to do so and have thoughtful and mature data governance and risk management policies in place, then the answer is yes. Otherwise, we do not recommend it.

LLM Adoption Modeling Bell Curve
Organizations that opt to fully train their own LLM will have a highly secure and capable solution that could cost roughly $100 million to develop, while organizations opting for a low-cost version may be more susceptible to security threats. We see most organizations utilizing a "fine-tuned" method.   

Up next: 

  • Manage Generative AI Before It Manages You
  • Sound Data Strategy Paramount to Generative AI
  • Common Pitfalls When Getting Started With Data Governance

List of key generative AI terms

For those unfamiliar with generative AI, here's a glossary of terms that should help you gain a clearer understanding of what all the hype is about:

  • LLM or large language model: A type of AI algorithm that leverages deep learning techniques to process natural language to understand, summarize, predict and generate content; they have at least a few million parameters.
  • GPT or generative pre-trained transformer: A type of LLM trained on a large corpus using the transformer neural network to generate text as a response to input.
  • NLP or natural language processing: The processing of human language by a machine including parsing, understanding, generating, etc.
  • Corpus: Essentially, the training data. A collection of machine-readable text structured as a dataset.
  • Vector: The numerical representation of a word or phrase. A list of numbers representing different aspects of a word or phrase.
  • Tokens: A unit of input text. A token is the smallest semantic unit defined in a document/corpus (not necessarily a word). ChatGPT, for example, has a 4,000 token limit. GPT-4 permits up to 32,000 tokens.
  • Parameters: The weights or variables learned when training a model. For example, GPT-3, the model underlying the original ChatGPT, has 175 billion parameters.
  • Transformer: The algorithm behind LLMs. A deep learning model adopting the attention mechanism that learns different weights and the significance for each part of the input data in a robust manner.
  • RL or reinforcement learning: A feedback-based machine learning paradigm where the model/agent learns to act in an environment to maximize a defined reward.
  • RLHF or reinforcement learning from human feedback: A technique that trains a reward model directly from human feedback and uses the model as a reward function to optimize an agent's policy using RL.
  • Inference: Testing the model. Feeding new data to the model to get its response/prediction.
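To make the "token" and "vector" entries above concrete, here is a deliberately simplified sketch. Real LLMs use subword tokenizers (such as byte-pair encoding) and learned, high-dimensional embeddings; the whitespace tokenizer and hand-made three-number vectors below are invented purely for illustration:

```python
# Toy illustration of tokens and vectors.
# Real LLMs use subword tokenizers and learned embeddings with thousands
# of dimensions; this sketch only shows what the glossary terms refer to.

def tokenize(text: str) -> list:
    """Split text into crude whitespace tokens (real tokenizers are subword)."""
    return text.split()

# A tiny made-up embedding table: each token maps to a vector,
# a list of numbers representing aspects of its meaning.
embeddings = {
    "generative": [0.8, 0.1, 0.3],
    "ai": [0.9, 0.2, 0.7],
}

tokens = tokenize("generative ai")
vectors = [embeddings[t] for t in tokens]
print(tokens)   # ['generative', 'ai']
print(vectors)  # two 3-dimensional vectors
```

A model's context limit, such as the 4,000 tokens cited above for ChatGPT, is counted in these token units, not in words or characters.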

This report may not be copied, reproduced, distributed, republished, downloaded, displayed, posted or transmitted in any form or by any means, including, but not limited to, electronic, mechanical, photocopying, recording, or otherwise, without the prior express written permission of WWT Research.


This report is compiled from surveys WWT Research conducts with clients and internal experts; conversations and engagements with current and prospective clients, partners and original equipment manufacturers (OEMs); and knowledge acquired through lab work in the Advanced Technology Center and real-world client project experience. WWT provides this report "AS-IS" and disclaims all warranties as to the accuracy, completeness or adequacy of the information.

Contributors

Brian Feldt
Writer | Producer
Tim Brooks
Managing Director & Chief AI Advisor
