It's likely that many within your organization are already experimenting with generative AI for business purposes — writing code or marketing pitches, analyzing pricing models, or automating certain tasks — while your senior leadership or board is becoming more interested in AI solutions given the hype.
As an IT practitioner, it's your job to understand the tech landscape and educate your organization about the power and risks of certain technology solutions, regardless of their application. Generative AI is no different.
At this stage, you should be thinking about generative AI from a board-level perspective, looking beyond the near term and well into the future. What are all the risks and rewards? What are the ways your organization might win or lose? How will your people react or respond? How might generative AI make your organization more competitive and effective?
Don't assume anyone within your organization is thinking strategically about how or where generative AI should be applied.
Similar to our recommendations for prioritizing enterprise automation initiatives, we think IT leaders should focus on developing and driving a top-down strategy that aligns with business goals, clarifies where generative AI is already in use, scales grassroots efforts where appropriate, and adopts a people-first approach.
For most organizations, generative AI is not ready for enterprise-wide adoption. But there are pockets of use cases where applications such as ChatGPT can add immediate value.
We've already seen many organizations ban or restrict generative AI use due to its inherent risks. From our perspective, here is a helpful framework for evaluating various use cases for large language models (LLMs) like ChatGPT:
With potential use cases identified, you and your leadership can now think about ownership models.
Some organizations — likely those with deep resources and in highly regulated industries — may consider developing and training their own LLM solutions.
What are the costs of training your own LLM?
- Hardware: Training your own LLM would require sophisticated network infrastructure that includes hundreds of GPUs, several terabytes of storage, up to 300 terabytes of memory and a high-speed network.
- People: Training your own LLM will demand a large team of highly skilled specialists, including infrastructure engineers, data engineers, solution architects and data scientists.
- Processes: To train your own LLM, you'll need parallel ingestion and processing capabilities, advanced hardware accelerators and optimization tools, and access to high-quality data.
In all, the cost of training your own LLM today could balloon to $100 million or more — a massive investment that doesn't even account for the knock-on effects on your carbon footprint, security posture and cloud consumption. Further, we estimate the total cost of ownership of a mature LLM would require between 10 million and 19 million hours of cloud usage just to train the model.
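As a rough back-of-the-envelope illustration, the cloud-hours estimate above can be translated into dollar terms. The hourly rate below is a hypothetical assumption for illustration, not a quoted price:

```python
# Back-of-the-envelope training-cost estimate based on the report's
# 10-19 million cloud-hour range. The blended $/hour rate is an
# ASSUMPTION for illustration only; real GPU instance pricing varies widely.

def training_cost(cloud_hours, rate_per_hour):
    """Total cloud spend for a given number of compute hours."""
    return cloud_hours * rate_per_hour

low_hours, high_hours = 10_000_000, 19_000_000  # report's estimated range
assumed_rate = 5.00  # hypothetical blended $/hour for GPU capacity

low = training_cost(low_hours, assumed_rate)
high = training_cost(high_hours, assumed_rate)
print(f"${low / 1e6:.0f}M to ${high / 1e6:.0f}M")  # prints $50M to $95M
```

Even at a modest assumed rate, the compute bill alone lands in the same order of magnitude as the $100 million figure cited above, before people and process costs are added.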
Developing a proprietary generative AI solution will take months to deliver (if not longer), but if done correctly, the resulting model would be highly secure and likely very impactful for your specific organization.
Most organizations will lean toward buying or leasing a base model and fine-tuning it as needed. This approach still consumes time and resources, but it can be optimized for specific use cases while maintaining a level of security.
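The core idea behind fine-tuning a base model can be sketched in miniature: the pre-trained base is frozen, and only a small task-specific component is trained on your data. The toy feature extractor and single-parameter head below are simplifications for illustration; real fine-tuning applies the same pattern to models with billions of parameters:

```python
# Minimal sketch of the "lease a base model, fine-tune as needed" pattern.
# The "base model" here is a frozen toy feature extractor; only the small
# task-specific head parameter is updated during training.

def base_features(x):
    # Stand-in for a pre-trained feature extractor: frozen, never updated.
    return 2.0 * x + 1.0

# Toy task: the true target is 3 * base_features(x), so the head must learn 3.
data = [(x, 3.0 * base_features(x)) for x in range(1, 6)]

w_head = 0.0           # the only trainable parameter
lr = 0.001             # learning rate
for _ in range(2000):  # plain gradient descent on squared error
    for x, y in data:
        f = base_features(x)
        grad = 2 * (w_head * f - y) * f
        w_head -= lr * grad

print(round(w_head, 2))  # prints 3.0
```

Because the base stays frozen, the compute and data requirements are a small fraction of full training — which is why this ownership model is attractive to most organizations.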
Those opting to consume an enterprise-grade LLM can find some value in its information retrieval and rudimentary analysis capabilities while keeping costs low, but they will sacrifice the security guardrails that limit risk.
Another question to consider is whether you should train your own LLM with your proprietary data and records. If you have the means to do so and have thoughtful and mature data governance and risk management policies in place, then the answer is yes. Otherwise, we do not recommend it.
For those unfamiliar with generative AI, here's a glossary of terms that should help you gain a clearer understanding of what all the hype is about:
- LLM or large language model: A type of AI algorithm that leverages deep learning techniques to process natural language in order to understand, summarize, predict and generate content; these models typically range from hundreds of millions to hundreds of billions of parameters.
- GPT or generative pre-trained transformer: A type of LLM trained on a large corpus using the transformer neural network to generate text as a response to input.
- NLP or natural language processing: The processing of human language by a machine including parsing, understanding, generating, etc.
- Corpus: Essentially, the training data. A collection of machine-readable text structured as a dataset.
- Vector: The numerical representation of a word or phrase. A list of numbers representing different aspects of a word or phrase.
- Token: A unit of input text; the smallest semantic unit defined in a document or corpus (not necessarily a word). ChatGPT, for example, has a roughly 4,000-token limit, while GPT-4 permits up to 32,000 tokens.
- Parameters: The weights, or variables, a model learns during training. For example, GPT-3, the base model behind ChatGPT, has roughly 175 billion parameters.
- Transformer: The algorithm behind LLMs. A deep learning model that uses the attention mechanism to learn the relative significance of each part of the input data.
- RL or reinforcement learning: A feedback-based machine learning paradigm where the model/agent learns to act in an environment to maximize a defined reward.
- RLHF or reinforcement learning from human feedback: A technique that trains a reward model directly from human feedback and uses the model as a reward function to optimize an agent's policy using RL.
- Inference: Running a trained model on new data to get its response or prediction.
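The token and vector concepts above can be illustrated with a toy example. The whitespace tokenizer and hash-derived vectors below are deliberate simplifications; production LLMs use learned subword tokenizers (such as BPE) and learned embedding matrices:

```python
# Toy illustration of tokens and vectors. Simplified on purpose: real LLMs
# use learned subword tokenizers and learned embeddings, not these stand-ins.

def tokenize(text):
    """Split text into tokens; here, one lowercase word = one token."""
    return text.lower().split()

def embed(token, dim=4):
    """Map a token to a small numeric vector (deterministic toy scheme)."""
    s = sum(ord(c) for c in token)
    return [(s * (i + 1)) % 10 / 10 for i in range(dim)]

prompt = "Generative AI creates new content"
tokens = tokenize(prompt)
print(tokens)         # prints ['generative', 'ai', 'creates', 'new', 'content']
print(len(tokens))    # prints 5 -- this count is what a token limit constrains
print(embed(tokens[0]))  # a 4-number vector standing in for 'generative'
```

The token count, not the character count, is what limits like ChatGPT's roughly 4,000-token window actually constrain; the vectors are how the model represents those tokens numerically.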
This report may not be copied, reproduced, distributed, republished, downloaded, displayed, posted or transmitted in any form or by any means, including, but not limited to, electronic, mechanical, photocopying, recording, or otherwise, without the prior express written permission of WWT Research. It consists of the opinions of WWT Research and as such should not be construed as statements of fact. WWT provides the Report "AS-IS", although the information contained in the Report has been obtained from sources that are believed to be reliable. WWT disclaims all warranties as to the accuracy, completeness or adequacy of the information.