There's a new AI model family on the block, and it's one of the few that can be reproduced from scratch.
On Tuesday, Ai2, the nonprofit AI research organization founded by the late Paul Allen, released OLMo 2, the second family of models in its OLMo series. (OLMo's short for "Open Language Model.") While there's no shortage of "open" language models to choose from (see: Meta's Llama), OLMo 2 meets the Open Source Initiative's definition of open source AI, meaning the tools and data used to develop it are publicly available.
The Open Source Initiative, the long-running institution that aims to define and "steward" all things open source, finalized its open source AI definition in October. The first OLMo models, released in February, met those criteria as well.
"OLMo 2 [was] developed start-to-finish with open and accessible training data, open-source training code, reproducible training recipes, transparent evaluations, intermediate checkpoints, and more," AI2 wrote in a blog post. "By openly sharing our data, recipes, and findings, we hope to provide the open-source community with the resources needed to discover new and innovative approaches."
There are two models in the OLMo 2 family: one with 7 billion parameters (OLMo 2 7B) and one with 13 billion parameters (OLMo 2 13B). Parameters roughly correspond to a model's problem-solving skills, and models with more parameters generally perform better than those with fewer.
Like most language models, OLMo 2 7B and 13B can perform a range of text-based tasks, like answering questions, summarizing documents, and writing code.
To train the models, Ai2 used a data set of 5 trillion tokens. Tokens represent bits of raw data; 1 million tokens is equal to about 750,000 words. The training set included websites "filtered for high quality," academic papers, Q&A discussion boards, and math workbooks "both synthetic and human generated."
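The rule of thumb above (1 million tokens is roughly 750,000 words) can be sketched as a quick back-of-the-envelope calculation; the 0.75 words-per-token ratio is an approximation, and the real figure varies by tokenizer and text:

```python
# Rough token-to-word arithmetic, using the ~0.75 words-per-token
# rule of thumb cited above (1 million tokens ≈ 750,000 words).
WORDS_PER_TOKEN = 0.75  # approximate; varies by tokenizer and language


def tokens_to_words(tokens: int) -> int:
    """Estimate the word count corresponding to a token count."""
    return int(tokens * WORDS_PER_TOKEN)


# OLMo 2's 5-trillion-token training set, expressed in approximate words:
print(f"{tokens_to_words(5_000_000_000_000):,}")  # 3,750,000,000,000
```

By that estimate, the 5-trillion-token training set works out to roughly 3.75 trillion words.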
Ai2 claims the result is models that are competitive, performance-wise, with open models like Meta's Llama 3.1.
Image Credits: Ai2
"Not only do we observe a dramatic improvement in performance across all tasks com ...