Bobbie-model

If you’ve been following the open-source LLM space, you’ve likely memorized the specs of Llama 3, Mixtral, and Qwen. But a new contender has been quietly gaining traction in the "small model" category: Bobbie-7B.

The research collective has hinted at a 13B version with Mixture of Depths (MoD) later this year. Until then, Bobbie-7B deserves a spot in your evaluation pipeline.

Bobbie-7B was trained in three stages:

| Stage | Dataset | Tokens | Purpose |
|-------|---------|--------|---------|
| 1 | RedPajama (v2) | 1.2T | Base language modeling |
| 2 | SlimPajama + CodeAlpaca | 400B | Code & reasoning |
| 3 | Synthetic multi-turn chat | 50B | Instruction following |

On the headline benchmarks, Bobbie-7B is competitive with other 7–8B models and noticeably faster at inference:

| Benchmark | Bobbie-7B | Llama-3-8B | Mistral-7B |
|-----------|-----------|------------|------------|
| MMLU (5-shot) | 64.2 | 66.7 | 63.9 |
| GSM8K (8-shot) | 52.8 | 54.9 | 50.3 |
| HumanEval (pass@1) | 32.5 | 34.2 | 31.8 |
| | 82.3 | 67.1 | 71.4 |
| Inference throughput (tokens/sec) | 98 | 72 | 88 |
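If you want to spot-check these numbers yourself, EleutherAI's lm-evaluation-harness (v0.4+) is the usual route. The sketch below assumes the checkpoint is published under a Hugging Face ID like bobbie-model/bobbie-7b, which is a placeholder, not a confirmed name:

```python
# Hedged sketch: reproduce the 5-shot MMLU row with lm-evaluation-harness (v0.4+).
# "bobbie-model/bobbie-7b" is a placeholder model ID, not a confirmed checkpoint name.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=bobbie-model/bobbie-7b,dtype=bfloat16",
    tasks=["mmlu"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"]["mmlu"])  # aggregate accuracy should land near the table above
```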

A minimal transformers example (the Hugging Face model ID below is a placeholder, not a confirmed checkpoint name):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bobbie-model/bobbie-7b"  # placeholder ID for illustration
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Summarize this 20k token document..."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[1]:], skip_special_tokens=True))
```

Bobbie works out-of-the-box with vLLM 0.6.0+:
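The snippet below is a rough sketch of offline inference with vLLM's Python API; as before, bobbie-model/bobbie-7b is a placeholder model ID.

```python
# Hedged sketch: offline generation with vLLM's Python API (vLLM 0.6.0+).
# "bobbie-model/bobbie-7b" is a placeholder model ID, not a confirmed checkpoint name.
from vllm import LLM, SamplingParams

llm = LLM(model="bobbie-model/bobbie-7b")
params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Summarize this 20k token document..."], params)
print(outputs[0].outputs[0].text)
```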