EleutherAI is an open-source AI research consortium founded in 2020. It was established to build open-source large language models, but the consortium has also conducted research in BioML, machine learning art, mechanistic interpretability, and model alignment.
EleutherAI uses cloud computing resources from CoreWeave and the TPU Research Cloud (formerly the TensorFlow Research Cloud). This is a cheaper alternative to owning and operating its own computing hardware.
Here are some of EleutherAI's most popular models:
GPT-Neo was designed as an open-source alternative to GPT-3. It is a transformer model built with EleutherAI’s replication of the GPT-3 architecture, and the base version has 125 million parameters.
Unlike some other models, you can train GPT-Neo locally on your own GPUs instead of training it on a cloud server.
You can then generate text with the trained model or download a pre-trained one. Users can also create and train their own tokenizer instead of using the recommended pretrained GPT-2 tokenizer from Hugging Face.
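For example, a pre-trained GPT-Neo checkpoint can be loaded and used for generation through Hugging Face's transformers library. Here is a minimal sketch; the model name and generation settings are just one possible configuration, not the only way to run it:

```python
# Minimal sketch: generate text with a pre-trained GPT-Neo checkpoint
# via Hugging Face transformers. Assumes `pip install transformers torch`.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-125M"  # smallest GPT-Neo variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "EleutherAI is an open-source research consortium that"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation of up to 50 new tokens.
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```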
GPT-Neo-1.3B is a transformer model that tries to replicate the GPT-3 architecture while remaining open-source. It has 1.3 billion parameters and was trained on EleutherAI’s Pile dataset.
The Pile dataset contains lewd and insulting expressions, so GPT-Neo-1.3B may generate offensive text. It’s best to have a human moderator filter the generated text before releasing it to the public. This is unlike OpenAI’s NLP models, which have an automated content moderator.
Also trained on the Pile dataset, GPT-J is an open-source transformer model with 6 billion parameters. GPT-J was developed using a JAX-based Python machine learning library.
GPT-J achieves 68.29% accuracy on LAMBADA sentence completion and an average of 27.62% zero-shot accuracy on the Hendrycks Test evaluation.
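Because GPT-J has roughly 6 billion parameters, it is usually loaded in half precision so it fits comfortably on a single GPU. A minimal sketch is shown below; the dtype and device choices are assumptions rather than requirements:

```python
# Minimal sketch: load GPT-J-6B in float16 so it fits on a ~16 GB GPU.
# Assumes a CUDA-capable GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    torch_dtype=torch.float16,  # halves memory use vs. float32
).to("cuda")

inputs = tokenizer("The Pile is a dataset that", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```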
Announced in February 2022, GPT-NeoX-20B is the latest NLP model released by EleutherAI. GPT-NeoX-20B has 20 billion parameters, and at the time of release it was the largest open-source pretrained general-purpose autoregressive language model.
GPT-NeoX-20B achieves 72% accuracy on LAMBADA sentence completion and an average of 28.98% zero-shot accuracy across the STEM, social science, and humanities subject groups of the Hendrycks Test evaluation.
EleutherAI’s text generation web app is powered by the GPT-J-6B model. It generates text from a human-written prompt.
You can adjust the temperature and Top-P parameters of the NLP model.
The “Temperature” of the model determines the level of randomness of the generated text. It ranges from 0 to 1.5, where 1.5 gives you the most creative and random output.
The “Top-P” parameter also controls the randomness of the model, by restricting sampling to the smallest set of tokens whose cumulative probability exceeds the chosen value. It ranges from 0 to 1. You can experiment with both parameters and compare the model’s generated text.
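The same two parameters exist in the transformers generation API, so their effect can be reproduced locally. A minimal sketch follows; the specific values and the small GPT-Neo model used here are only illustrative:

```python
# Minimal sketch: compare low- and high-randomness sampling settings
# on the same prompt using a small GPT-Neo model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
inputs = tokenizer("Once upon a time", return_tensors="pt")

for temperature, top_p in [(0.3, 0.5), (1.5, 1.0)]:
    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=temperature,  # higher = flatter distribution, more random
        top_p=top_p,              # nucleus sampling: keep the smallest token set
                                  # whose cumulative probability exceeds top_p
        max_new_tokens=30,
    )
    print(f"temperature={temperature}, top_p={top_p}:")
    print(tokenizer.decode(out[0], skip_special_tokens=True), "\n")
```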
The AI text generator works quite well, but it can take several seconds or even a minute to respond, unlike OpenAI’s ChatGPT, whose responses are almost instantaneous. The text generator is based on an NLP model trained on data that contains some lewd content, so you might get a few offensive responses from it.
The Pile is an 825 GB English-language corpus designed for training large language models. It was constructed from 22 high-quality data subsets. Some of the subsets were pre-existing, while others were newly constructed. Several were derived from professional and academic sources.
Preexisting datasets incorporated into the collection include Project Gutenberg, the English Wikipedia, OpenSubtitles, the Enron Emails corpus, and Books3. New datasets were obtained from sources including GitHub, PubMed Central, YouTube Subtitles, HackerNews, BookCorpus2, Ubuntu IRC, ArXiv, Stack Exchange, and others.
The Pile was developed as an alternative to the datasets used to train GPT-2 and GPT-3; both models performed poorly on many components of the Pile. The metric that indicated their poor performance was BPB.
The Pile BPB (bits per byte) score measures how well a trained model comprehends the varying domains in the dataset, such as philosophy, GitHub repositories, medicine, chat logs, biology, physics, chemistry, mathematics, and computer science. A lower BPB means the model predicts (and therefore compresses) the text better.
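Bits per byte can be derived from a model's cross-entropy loss by converting nats per token into bits and normalizing by the number of UTF-8 bytes in the text. The function below is a rough illustration of that conversion, not EleutherAI's actual evaluation code:

```python
# Rough sketch: convert a language model's average cross-entropy loss
# (in nats per token) into bits per byte for a given piece of text.
import math

def bits_per_byte(loss_nats_per_token: float, num_tokens: int, text: str) -> float:
    total_nats = loss_nats_per_token * num_tokens  # total negative log-likelihood
    total_bits = total_nats / math.log(2)          # convert nats to bits
    num_bytes = len(text.encode("utf-8"))          # normalize by raw byte count
    return total_bits / num_bytes

# Example: 100 tokens at 3.2 nats/token over a 400-byte string.
print(bits_per_byte(3.2, 100, "x" * 400))  # ~1.15 bits per byte
```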
Pythia is an ongoing EleutherAI project that researches how knowledge develops and evolves during training in autoregressive transformers. The project studies the grammar-learning trajectories as well as the memorization and training-order patterns of language models. EleutherAI’s team is using scaling laws and interpretability analysis to conduct this research.
Eight model sizes are trained on two datasets, the Pile and a deduplicated version of the Pile, for the project. The models are trained with the GPT-NeoX library.
EleutherAI plans to release a user-friendly version of this project for other researchers to work on.
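The Pythia checkpoints are released on the Hugging Face Hub with intermediate training steps, which is what makes this kind of training-dynamics research possible. Here is a minimal sketch of loading one model at a specific step; the model name and revision string are illustrative, and the available steps are listed on each model card:

```python
# Minimal sketch: load a Pythia model at an intermediate training checkpoint
# to inspect how its behaviour changes over training. The revision string
# follows the "stepN" convention used on the Pythia model cards.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m",
    revision="step3000",  # intermediate checkpoint; "main" is the final model
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
```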
The Language Model Evaluation Harness is a project that supplies a unified framework for testing autoregressive language models such as GPT-J, GPT-Neo, GPT-2, and GPT-3. The language models are tested on various tasks such as anagrams, arithmetic, ethics, STEM, social science, medicine, accounting, law, and religion.
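In practice the harness can also be driven from Python. The sketch below follows the older 0.x API; the exact function names, backend identifiers, and task names have changed between releases, so treat it as illustrative rather than definitive:

```python
# Rough sketch: evaluate a small EleutherAI model on LAMBADA with
# lm-evaluation-harness (pip install lm-eval). The API shown matches
# older 0.x releases and may differ in newer versions.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",                              # Hugging Face causal LM backend
    model_args="pretrained=EleutherAI/gpt-neo-125M",
    tasks=["lambada_openai"],                       # LAMBADA sentence completion
    num_fewshot=0,
)
print(results["results"])
```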
Polyglot is EleutherAI’s current project to train language models in Korean and other languages. The project currently includes two large Korean language models with 1.3 billion and 3.8 billion parameters, and there are plans to add Nordic and East Asian languages.
Most large language models are trained on datasets composed mostly of English text. Polyglot aims to include languages like Romanian, Indonesian, and Vietnamese that have so far been largely ignored by other large language models.
A recent tweet from the official EleutherAI account indicates that the consortium will pivot to focus on AI ethics, alignment, and interpretability in 2023. This will include work on AI-enabled plagiarism and AI governance.
EleutherAI is also working on other projects that revolve around memorization, capacity removal and ELK (Eliciting Latent Knowledge).
In November 2022, EleutherAI won a 5.94 million V100-hour INCITE grant to use ORNL’s Summit supercomputer to train foundation models alongside other AI researchers. This is the first time the United States government has funded open-source AI research with millions of dollars’ worth of computing power.
The future looks bright for EleutherAI.
EleutherAI is an open-source consortium of AI and machine learning researchers that came together to provide a decentralized alternative to OpenAI. They work on open-source AI models and products that rival OpenAI’s offerings.
GPT-J performs better than the smaller individual GPT-3 models such as Ada and Babbage, but it does not perform as well as Davinci or GPT-3 as a whole. This is because GPT-3 Davinci has more parameters and was trained on a larger dataset.
That said, GPT-J is a good, open-source, free alternative to the costly GPT-3.
GPT-Neo is a free, open-source alternative to OpenAI’s GPT-3. GPT-Neo is a transformer-based NLP model, and three variants have been released: the original GPT-Neo, GPT-Neo-1.3B, and GPT-NeoX-20B.
Yes, GPT-J is open-source. You can join EleutherAI’s Discord channel if you want to contribute to the project.
Yes, GPT-Neo is free and open-source. It is a free alternative to GPT-3.
Yes, GPT-Neo is better than GPT-2.
EleutherAI developed GPT-Neo as an alternative to GPT-2 and GPT-3.
Other alternatives to GPT-3, though not necessarily better, include BERT, OPT, AlexaTM, BLOOM, GLaM, PaLM, Gopher, and GPT-Neo.
BLOOM is an NLP model with 176 billion parameters, 1 billion more than GPT-3. BLOOM can generate text in 46 natural languages and 13 programming languages.
GLaM by Google is a language model trained on a dataset of over 1.6 trillion tokens. A text-quality filter was used to select high-quality subsets of webpages, which were combined with books and Wikipedia for the final dataset. GLaM has over 1.2 trillion parameters.
BERT by Google is a transformer-based language model for NLP research. Two model sizes were pretrained using data from BooksCorpus and the English Wikipedia.