AI21's Jamba: Hybrid LLM Breakthrough

AI21 Labs has made waves in the AI community with the release of Jamba, a hybrid SSM-Transformer mixture-of-experts (MoE) model. Jamba has 52 billion total parameters, of which 12 billion are active during generation: it uses 16 experts, two of which are active per generation step. The architecture combines Joint Attention and Mamba components, which lets the model support a context length of up to 256K tokens. It can fit up to 140K tokens of context on a single A100 80GB GPU and delivers three times the throughput on long contexts compared to previous models such as Mixtral 8x7B.

Jamba is released under the Apache 2.0 license and is available on Hugging Face with support in Transformers (>4.38.2), and it posts competitive results on the Open LLM Leaderboard benchmarks. Details about its training data and language support have not yet been disclosed. Even so, the release marks a notable step forward for AI-driven language models, pairing strong performance with long-context versatility.
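
For readers who want to try the model, here is a minimal sketch of loading Jamba through the Transformers library, as mentioned above. The Hugging Face repository id ("ai21labs/Jamba-v0.1"), the precision, and the generation settings below are illustrative assumptions; check the model card on the Hugging Face Hub for the exact id and recommended configuration.

```python
# Minimal sketch: loading Jamba with Hugging Face Transformers (>4.38.2).
# The model id below is an assumption for illustration; verify it on the Hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/Jamba-v0.1"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce GPU memory use
    device_map="auto",           # place layers across available devices
)

# Generate a short continuation to confirm the model loads and runs.
inputs = tokenizer("Jamba is a hybrid SSM-Transformer model that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```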