OpenAI Custom Model Program: A Leap Towards Tailored AI Solutions

OpenAI, a leading player in artificial intelligence (AI), is significantly expanding its OpenAI Custom Model Program. The initiative lets enterprise clients build bespoke generative AI models on OpenAI’s technology, tailored to their specific use cases, domains, and applications.

The OpenAI Custom Model Program was unveiled at OpenAI’s inaugural developer conference, DevDay, last year. It gave companies the chance to work with a dedicated team of OpenAI researchers to train and optimize models for specific domains, and dozens of customers have enrolled since its inception. OpenAI saw room to push performance further, however, and is now expanding the program with two offerings: assisted fine-tuning and custom-trained models.

Assisted Fine-Tuning: A New Dimension to OpenAI Custom Model Program

Image Source: OpenAI

Assisted fine-tuning is a new feature of the OpenAI Custom Model Program. It’s designed to help organizations improve the performance of their AI models on specific tasks. Here’s a bit more detail on each component:

  1. Techniques beyond mere fine-tuning: Fine-tuning is a process where a pre-trained model (a model that has already been trained on a large dataset) is further trained (or “fine-tuned”) on a smaller, specific dataset. The idea is to leverage the knowledge that the model has already gained during its pre-training and apply it to a specific task. Assisted fine-tuning takes this a step further by employing additional techniques to enhance the model’s performance.
  2. Additional hyperparameters: In machine learning, hyperparameters are parameters that are set before the learning process starts. They control the learning process and can have a big impact on the model’s performance. Organizations can optimize their models for specific tasks by adjusting these hyperparameters.
  3. Parameter-efficient fine-tuning methods at a larger scale: These methods update only a small fraction of a model’s parameters (the internal variables learned during training), for example through small adapter matrices, rather than retraining all of them. Because this keeps compute and memory costs manageable, organizations can fine-tune larger models, or train on larger datasets, than full fine-tuning would allow, potentially leading to better performance.
  4. Data training pipelines, evaluation systems, and other supporting infrastructure: These are the systems and infrastructure that organizations set up to train their models, evaluate their performance, and support the entire machine learning workflow. Assisted fine-tuning enables organizations to set up these systems to boost their model’s performance on specific tasks.
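The basic flow the list above builds on can be sketched with OpenAI’s publicly documented fine-tuning API: training examples are written as chat-format JSONL, hyperparameters are chosen up front, and a job is launched. This is a minimal illustration of ordinary fine-tuning, not the assisted program itself; the example content, file name, and hyperparameter values are hypothetical.

```python
import json

# Chat-format training examples, one conversation per example; the
# assisted program layers extra techniques on top of this basic flow.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a telecom support assistant."},
        {"role": "user", "content": "My data plan stopped working."},
        {"role": "assistant", "content": "Let's check your plan status first."},
    ]},
]

# Training data is uploaded as JSONL: one JSON object per line.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Hyperparameters are fixed before training starts and steer the run.
hyperparameters = {
    "n_epochs": 3,                    # passes over the dataset
    "learning_rate_multiplier": 0.1,  # scales the base learning rate
    "batch_size": 8,                  # examples per gradient update
}

# With the openai package installed and an API key configured, a job
# would be launched roughly like this (not executed here):
# from openai import OpenAI
# client = OpenAI()
# f = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
# job = client.fine_tuning.jobs.create(
#     model="gpt-3.5-turbo",
#     training_file=f.id,
#     hyperparameters=hyperparameters,
# )
```

The JSONL format and the hyperparameter names above follow OpenAI’s public fine-tuning documentation; what the program adds is the researcher collaboration and extra techniques around this core loop.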
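OpenAI has not disclosed which parameter-efficient methods it uses, but low-rank adaptation (LoRA) is a representative example of the idea in point 3: freeze the pretrained weights and train only a small low-rank correction. The sketch below illustrates the parameter savings with NumPy; the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 4

# Frozen pretrained weight matrix (stands in for one layer of a base model).
W = rng.standard_normal((d_out, d_in))

# LoRA-style update: train only two small matrices A and B whose product
# is a low-rank correction to W. B starts at zero, so the adapted model
# initially behaves exactly like the pretrained one.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def adapted_forward(x):
    # Effective weight is W + B @ A; W itself is never updated.
    return (W + B @ A) @ x

# The trainable adapter is far smaller than the full weight matrix.
full_params = W.size               # 64 * 64 = 4096
adapter_params = A.size + B.size   # 4*64 + 64*4 = 512
```

Only `A` and `B` receive gradient updates during training, which is why such methods scale to much larger models and datasets than full fine-tuning.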

Custom-Trained Models

Custom-trained models, on the other hand, are bespoke models constructed with OpenAI, utilizing OpenAI’s base models and tools, such as GPT-4. These are designed for customers who need to fine-tune their models more deeply or infuse new, domain-specific knowledge.


OpenAI cites the example of SK Telecom, a Korean telecommunications behemoth, which collaborated with OpenAI to fine-tune GPT-4 to enhance its performance in telecom-related conversations in Korean. Another client, Harvey, which is developing AI-powered legal tools with backing from the OpenAI Startup Fund, partnered with OpenAI to create a custom model for case law. That model was trained on a vast amount of legal text and drew on input from experienced attorneys.

The Future of Customized Models

OpenAI envisions that in the future, the majority of organizations will develop customized models personalized to their industry, business, or use case. With a plethora of techniques available to build a custom model, organizations of all sizes can develop personalized models to realize more meaningful, specific impacts from their AI implementations.

OpenAI is reportedly nearing an impressive $2 billion in annualized revenue. However, there’s undoubtedly internal pressure to sustain this pace, especially as the company plans a $100 billion data center co-developed with Microsoft. The cost of training and serving flagship generative AI models isn’t decreasing anytime soon. Therefore, consulting work like custom model training could be the key to maintaining revenue growth while OpenAI strategizes its next steps.

Fine-tuned and custom models could also alleviate the load on OpenAI’s model serving infrastructure. Tailored models are often smaller and more efficient than their general-purpose counterparts. As the demand for generative AI reaches new heights, these models undoubtedly present an appealing solution for OpenAI, which has historically faced compute-capacity challenges.

In addition to expanding the OpenAI Custom Model Program and custom model building, OpenAI has also unveiled new model fine-tuning features for developers working with GPT-3.5. These include a new dashboard for comparing model quality and performance, support for integrations with third-party platforms (starting with the AI developer platform Weights & Biases), and enhancements to tooling. However, OpenAI remains tight-lipped about GPT-4 fine-tuning, which entered early access during DevDay.