Artificial Intelligence developers use a range of strategies to improve model performance, aiming to decrease latency, enhance accuracy, and cut costs. These include expanding a model's knowledge through retrieval-augmented generation, customising the model's behaviour via fine-tuning, or training a custom model imbued with fresh domain-specific knowledge. At ToolPilot, we are excited to highlight the latest improvements OpenAI has launched to serve these needs.
To support its customers' AI implementations, OpenAI is introducing new features that give developers more control over fine-tuning via the API, and is expanding opportunities to work with its in-house AI experts and researchers to create custom models.
Unveiling New Fine-tuning API Features
OpenAI launched the self-service fine-tuning API for GPT-3.5 in August 2023. Fine-tuning helps a model deeply understand content and augments its existing knowledge and capabilities for a specific task. Because the API trains on far more examples than can fit into a single prompt, it can produce higher-quality outputs while saving on cost and latency. Common uses of fine-tuning include training a model to generate better code in a specific programming language, to summarise text in a certain format, or to create customised content based on user behaviour.
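As a concrete illustration, chat-model fine-tuning data is supplied as a JSONL file in which each line is one complete example conversation in the chat-messages format. The sketch below (plain Python, no SDK required; the file name and example content are our own) builds and writes a small training file in that shape:

```python
import json

# One training example for chat-model fine-tuning: a JSON object with a
# "messages" list, serialised as a single line of the JSONL file.
def build_example(system: str, user: str, assistant: str) -> dict:
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
            {"role": "assistant", "content": assistant},
        ]
    }

def write_jsonl(examples: list, path: str) -> None:
    # Write one JSON object per line, as the fine-tuning API expects.
    with open(path, "w", encoding="utf-8") as f:
        for ex in examples:
            f.write(json.dumps(ex, ensure_ascii=False) + "\n")

examples = [
    build_example(
        "You are a concise code reviewer.",
        "Summarise this diff in one sentence.",
        "Renames the config loader and adds a retry on network errors.",
    ),
]
write_jsonl(examples, "training_data.jsonl")
```

In practice you would collect dozens to thousands of such examples; the resulting file is then uploaded to OpenAI before a fine-tuning job is created.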
For instance, Indeed, the global job-matching and hiring platform, wanted to simplify the hiring process with a feature that sends job seekers personalised recommendations, highlighting jobs that match their skills, experience, and preferences. By fine-tuning GPT-3.5 Turbo, Indeed has been able to generate more accurate, higher-quality recommendations.
Today, OpenAI is introducing new features that give developers more control over their fine-tuning jobs:

- Epoch-based checkpoints: a complete fine-tuned model checkpoint is saved during each training epoch, reducing the need for subsequent retraining.
- A side-by-side Playground UI for comparing model quality and performance.
- Integration support for third-party platforms.
- Metrics computed over the validation dataset at the end of each epoch, for better visibility into model performance.
- The ability to configure available hyperparameters from the Dashboard.

Several improvements have also been made to the fine-tuning dashboard itself, including finer configuration controls, visibility into more detailed training metrics, and the ability to rerun jobs from previously saved configurations.
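To make the hyperparameter control concrete, here is a minimal sketch of the JSON body sent when creating a fine-tuning job via `POST /v1/fine_tuning/jobs`. The helper function and the placeholder file ID are our own; `n_epochs` accepts an integer or "auto":

```python
import json

def fine_tuning_job_body(training_file_id: str,
                         model: str = "gpt-3.5-turbo",
                         n_epochs="auto",
                         suffix=None) -> dict:
    """Build a request body for POST /v1/fine_tuning/jobs.

    training_file_id is the ID returned when the JSONL training file is
    uploaded via the Files API with purpose="fine-tune".
    """
    body = {
        "training_file": training_file_id,
        "model": model,
        # Hyperparameters can also be configured from the Dashboard;
        # "auto" leaves the choice to the service.
        "hyperparameters": {"n_epochs": n_epochs},
    }
    if suffix is not None:
        body["suffix"] = suffix  # appended to the fine-tuned model's name
    return body

# "file-abc123" is a placeholder ID for illustration only.
print(json.dumps(fine_tuning_job_body("file-abc123", n_epochs=3), indent=2))
```

Once a job is running, the checkpoint saved at each epoch can be listed via `GET /v1/fine_tuning/jobs/{job_id}/checkpoints` (exposed as `client.fine_tuning.jobs.checkpoints.list(...)` in recent versions of the official Python SDK), so an earlier epoch's model can be used without retraining.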
Enhancing the Custom Models Program
The Custom Model program, launched at DevDay last November, pairs organisations with a dedicated group of OpenAI researchers to train and optimise models for a particular domain. The program has now evolved to include an assisted fine-tuning service: a collaborative partnership with OpenAI's technical teams to apply techniques beyond the fine-tuning API, such as additional hyperparameters and various parameter-efficient fine-tuning (PEFT) methods, at larger scale.
For organisations that need a purpose-built model trained from scratch to understand their business, industry, or domain, fully custom-trained models embed new knowledge by modifying key steps of the model training process with novel mid-training and post-training techniques.
In closing, OpenAI believes that in the future most businesses will develop custom models personalised to their industry, business, or application. To get started fine-tuning its models, visit OpenAI's fine-tuning API docs. And for more details on how custom models can be designed for your specific use case, reach out to OpenAI.
Disclaimer: The above article was written with the assistance of AI. The original sources can be found on OpenAI.