OpenAI Launches Customizable GPT-3.5 Turbo with Fine-Tuning Capabilities



Mett.Ai News Desk

Developers and Businesses Empowered to Enhance AI Text Generator's Performance and Adaptability

OpenAI has unveiled a game-changing feature, granting users of its GPT-3.5 Turbo model the capability to fine-tune the system using their proprietary data. This groundbreaking development now empowers developers and enterprises to not only bolster the AI model's dependability but also tailor its functionality to precise applications.

According to OpenAI's announcement, fine-tuned versions of the GPT-3.5 Turbo model can match, or even surpass, base GPT-4-level capabilities on certain narrow tasks. While GPT-4 remains the company's flagship model, fine-tuning allows GPT-3.5 Turbo to excel at targeted tasks.

Addressing the demands of developers and businesses, OpenAI has provided this update to enable the customization of models for more tailored user experiences. This advancement allows developers to optimize models to better align with their intended use cases, and deploy these tailored models at scale.

By fine-tuning GPT-3.5 Turbo through OpenAI's API, companies can steer the model to follow specific instructions more reliably. For example, they can ensure the AI always responds in a particular language, or improve how it formats its output, such as for code completion. Fine-tuning also lets teams adjust the "feel" of the model's output, including tone and style, to match a desired brand voice.

A notable benefit of the fine-tuning feature is the potential for more efficient API calls and reduced costs. According to OpenAI, early testers have achieved up to a 90% reduction in prompt size by integrating instructions directly into the model. This efficiency enhancement contributes to faster responses.

To embark on the fine-tuning journey, users need to prepare their data, upload the requisite files, and initiate a fine-tuning job via OpenAI's API. To ensure alignment with OpenAI's safety standards, all fine-tuning data is subjected to a moderation API and a GPT-4-powered moderation system.
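The data-preparation step above can be sketched in a few lines. The snippet below builds a training file in the chat-format JSONL that OpenAI documents for GPT-3.5 Turbo fine-tuning; the file name and example content are illustrative, and the upload and job-creation calls that would follow are only noted in comments.

```python
import json

# Each training example is one JSON object in the chat format OpenAI
# documents for GPT-3.5 Turbo fine-tuning: a list of messages with
# system, user, and assistant roles.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support bot that always answers in French."},
            {"role": "user", "content": "Where is my order?"},
            {"role": "assistant", "content": "Votre commande est en cours de livraison."},
        ]
    },
]

# Write one JSON object per line (JSONL), the upload format for fine-tuning.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")

# Next steps (not shown): upload this file through the API with the
# purpose "fine-tune", then reference the returned file ID when
# creating the fine-tuning job.
```

In practice a training set would contain many such examples, all sharing the same system prompt, so that the fine-tuned model internalizes the instruction instead of needing it repeated in every request.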

Presently, the fine-tuning costs are outlined as follows:

Training: $0.008 per 1,000 tokens

Usage input: $0.012 per 1,000 tokens

Usage output: $0.016 per 1,000 tokens

Tokens represent small units of raw text, such as parts of words. To provide a cost estimate, OpenAI suggests that a GPT-3.5 Turbo fine-tuning job with a training file of 100,000 tokens (roughly 75,000 words), trained for three epochs, would cost around $2.40.
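The quoted figure can be checked with simple arithmetic. The sketch below uses the per-1,000-token rates listed above; the three training epochs are an assumption taken from OpenAI's own $2.40 example rather than from this article.

```python
# Per-1,000-token rates quoted in OpenAI's fine-tuning pricing.
TRAINING_RATE = 0.008   # USD per 1,000 training tokens
INPUT_RATE = 0.012      # USD per 1,000 input tokens at usage time
OUTPUT_RATE = 0.016     # USD per 1,000 output tokens at usage time

def training_cost(training_tokens: int, epochs: int = 3) -> float:
    """Cost of the fine-tuning job itself (training only, no usage)."""
    return training_tokens / 1000 * TRAINING_RATE * epochs

# 100,000-token training file, three epochs:
cost = training_cost(100_000)
print(f"${cost:.2f}")  # $2.40
```

Usage costs then accrue separately: a request that sends 1,000 tokens to the fine-tuned model and receives 500 back would cost $0.012 + $0.008 = $0.02 at the rates above.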

Today, OpenAI also released updated versions of two GPT-3 base models, babbage-002 and davinci-002. These models can likewise be fine-tuned, via a new fine-tuning API endpoint that offers pagination and greater extensibility. It's worth noting that OpenAI plans to retire the original GPT-3 base models on January 4, 2024.

Looking ahead, OpenAI said that fine-tuning support for GPT-4, which can comprehend images in addition to text, is expected to arrive later this fall. While specific details remain undisclosed, this forthcoming feature holds considerable promise for further advancing AI capabilities in understanding and generating content across diverse media types.
