Since the emergence of enterprise-grade generative AI, organizations have tapped into the rich capabilities of foundation models developed by the likes of OpenAI, Google DeepMind, Mistral, and others. Over time, however, businesses often found these models limiting: trained on vast troves of public data, they lacked knowledge of an organization's own domain. Enter customization, the practice of adapting large language models (LLMs) to better suit a business's specific needs by incorporating its own data and expertise, teaching a model new skills or tasks, or optimizing prompts and data retrieval.
Customization is not new, but the early tools were fairly rudimentary, and technology and development teams were often unsure how to do it. That’s changing, and the customization methods and tools available today are giving businesses greater opportunities to create unique value from their AI models.
We surveyed 300 technology leaders, mostly at large organizations across a range of industries, to learn how they are seeking to leverage these opportunities. We also spoke in depth with a handful of such leaders. All of them are customizing generative AI models and applications, and they shared with us their motivations for doing so, the methods and tools they are using, the difficulties they are encountering, and the actions they are taking to surmount them.

Our analysis finds that companies are moving ahead ambitiously with customization. They are cognizant of its risks, particularly those revolving around data security, but are employing advanced methods and tools, such as retrieval-augmented generation (RAG), to capture the gains customization promises.
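To make the RAG approach mentioned above concrete, here is a minimal sketch of the retrieve-then-prompt flow. Everything in it is illustrative: a production system would use vector embeddings, a document store, and an LLM API, whereas this toy uses simple word-overlap scoring, and all function and variable names are hypothetical.

```python
# Toy retrieval-augmented generation (RAG) sketch. A real pipeline would
# embed documents with a vector model and call an LLM; here we only show
# the shape of the flow: score -> retrieve -> build a grounded prompt.

def score(query: str, doc: str) -> int:
    """Count query words that also appear in the document (toy relevance)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in company data."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal documents a business might retrieve from.
company_docs = [
    "Refunds are processed within 14 days of a return request.",
    "Our warehouse ships orders Monday through Friday.",
    "Support is available by email at all hours.",
]
prompt = build_prompt("How long do refunds take to process?", company_docs)
```

The resulting prompt carries the relevant internal policy to the model, which is the core customization benefit surveyed leaders cited: the model answers from the business's own data rather than from public training data alone.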
Download the full report.
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.