Abu Dhabi's TII releases compact reasoning model Falcon H1R 7B
Open-source Falcon H1R 7B model achieves 88% on mathematics benchmark
#UAE #LLMs - Technology Innovation Institute (TII), the applied research arm of Abu Dhabi's Advanced Technology Research Council (ATRC), has released Falcon H1R 7B, an open-source AI model with 7 billion parameters that delivers advanced reasoning capabilities. The compact model outperforms larger systems from Microsoft, Alibaba and NVIDIA across mathematics, coding and reasoning benchmarks, achieving 88.1% accuracy on the AIME-24 mathematics test and processing up to 1,500 tokens per second per GPU. The release aims to make advanced AI more accessible whilst demonstrating that efficient model design can match or exceed the performance of significantly larger systems.
SO WHAT? - The release of the Falcon H1R 7B model underscores that effective AI systems need no longer rely on massive parameter counts and computational resources. The new model will potentially help lower barriers to entry for researchers and organisations with limited infrastructure. By achieving competitive performance at a fraction of the size of rival models, Falcon H1R 7B challenges assumptions about the relationship between model scale and capability.
Here are some key points about the new model announcement:
TII’s Falcon H1R 7B achieved 88.1% accuracy on the AIME-24 mathematics benchmark, surpassing ServiceNow AI’s larger Apriel 1.5 model with 15 billion parameters, which scored 86.2% on the same assessment.
On coding and agentic tasks, the model delivered 68.6% accuracy overall and scored 34% on the LCB v6 coding benchmark, ahead of DeepSeek R1-0528 Qwen3 8B at 26.9% and the larger Qwen3-32B model at 33.4%.
Falcon H1R 7B processes up to 1,500 tokens per second per GPU at batch size 64, nearly double the speed of Qwen3-8B, thanks to a hybrid Transformer-Mamba architecture designed to balance throughput with accuracy.
TII chief executive Dr Najwa Aaraj stated the model achieves near-perfect scores on benchmarks whilst maintaining exceptionally low memory and energy consumption, addressing deployment and sustainability requirements for real-world applications.
The model has been released as open-source under the Falcon TII License, available through Hugging Face with complete technical documentation detailing training strategies and benchmark performance for the global research community.
TII chief researcher Dr Hakim Hacid emphasised the model demonstrates how scientific precision and scalable design can combine to enable developers to build more accessible AI systems without sacrificing performance quality.
The release builds on TII’s Falcon programme track record, with the first four generations achieving number-one global rankings in their respective categories, establishing consistent performance in efficient AI model development.
ATRC secretary general His Excellency Faisal Al Bannai stated the model reflects the UAE's commitment to building open and responsible AI that supports economic growth, research leadership and technological resilience.
ZOOM OUT - The Falcon H1R 7B release follows the December 2025 launch of K2 V2 by Abu Dhabi AI research university Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), a 70-billion-parameter open-source reasoning model released with full transparency, including training data and code. MBZUAI's model achieved 83% on complex logic puzzles and 94.7% on mathematics datasets, demonstrating the emirate's growing concentration of AI research capability. The two institutions' releases within weeks of each other underscore Abu Dhabi's emergence as a centre for open-source AI development, with multiple organisations contributing frontier models to the global research community.
[Written and edited with the assistance of AI]
Work in progress
LINKS
Falcon H1-7B-Instruct (Hugging Face)
Falcon H1 Family code (GitHub)


