SLMs@Home

Downloads


To run SLMs@Home on Windows, open PowerShell, navigate to the directory containing the benchmark you downloaded, and run ".\(benchmark file name here)".

To run SLMs@Home on Linux, open your terminal, navigate to the directory containing the benchmark you downloaded, and run "chmod +x (benchmark file name here) && ./(benchmark file name here)".
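As a sketch, the Linux steps above might look like the following, assuming a hypothetical downloaded benchmark file named "slms_benchmark" sitting in your Downloads folder (substitute the actual file name you downloaded):

```shell
# Hypothetical walkthrough of the Linux instructions above.
# "slms_benchmark" stands in for whatever file you actually downloaded.

cd ~/Downloads            # navigate to the directory containing the benchmark
chmod +x slms_benchmark   # mark the downloaded file as executable
./slms_benchmark          # run the benchmark
```

The "chmod +x" step is needed because files downloaded from the web typically lack the execute permission on Linux.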

What is SLMs@Home?

This website hosts a leaderboard of Small Language Models benchmarked by people like you! SLMs@Home lets you measure the performance, capabilities, and obedience of various language models, and you can then upload your findings to our public leaderboard as a contribution to the community!

Why Create SLMs@Home?

At the moment, many language model benchmarks use very large datasets that are computationally expensive for the language model. We here at C4AI wish to change that and make it possible for the average user to benchmark a language model if they so wish. We do not have massive datasets. Instead, we have very sparse datasets, consisting mainly of real-world scenarios, questions, and requests for the model, so that people can run the benchmark locally and in a timely manner! The name "SLMs@Home" was inspired by other crowdsourced "@Home" projects such as Folding@Home and Minecraft@Home. The idea of an at-home crowdsourced language model benchmarking tool was inspired by PC component benchmarking tools such as Geekbench, 3DMark, and UserBenchmark.

FAQ