Microsoft unveils Phi-3-mini, its tiniest AI language model.

 


Microsoft has released Phi-3-mini, the first of three small models that will make up the next iteration of its lightweight Phi family of AI models.

Phi-3-mini has 3.8 billion parameters and was trained on a smaller dataset than large language models such as GPT-4.


According to The Verge, it is currently available on Hugging Face, Ollama, and Azure. Microsoft also intends to release Phi-3 Small (7 billion parameters) and Phi-3 Medium (14 billion parameters). (A model's parameter count loosely reflects how many complex instructions it can understand.)

Released in December, the company's Phi-2 performed on par with larger models such as Llama 2.

According to Microsoft, Phi-3 outperforms its predecessor and can deliver results nearly as good as those of a model ten times its size, as The Verge noted.

Eric Boyd, corporate vice president of Microsoft Azure AI Platform, told The Verge that Phi-3-mini is just as capable as LLMs like GPT-3.5, "just in a smaller form factor."

Compared to their larger counterparts, small AI models are frequently cheaper to run and perform better on personal devices like phones and laptops.

According to Boyd, developers trained Phi-3 with a "curriculum," inspired by the way children learn from bedtime stories: books with simpler words and sentence structures that nonetheless introduce complex subjects.

Boyd explains, "We took a list of more than 3,000 words and asked an LLM to make 'children's books' to teach Phi because there aren't enough children's books out there."
