SLMs@Home
Downloads
To run SLMs@Home on Windows, open PowerShell, navigate to the directory containing the benchmark you downloaded, and run ".\(benchmark file name here)".
To run SLMs@Home on Linux, open your terminal, navigate to the directory containing the benchmark you downloaded, and run "chmod +x (benchmark file name here) && ./(benchmark file name here)".
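As a concrete sketch of the Linux steps above (the filename here is a placeholder, not an actual SLMs@Home download; substitute the name of the benchmark file you downloaded):

```shell
# "my-benchmark" is a placeholder name for the downloaded benchmark file.
BENCH=./my-benchmark
# Stand-in script so this sketch runs end to end; with a real download,
# skip this step and use the downloaded file instead.
printf '#!/bin/sh\necho "benchmark complete"\n' > "$BENCH"
chmod +x "$BENCH"   # mark the file executable
"$BENCH"            # run the benchmark; prints "benchmark complete"
```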
What is SLMs@Home?
This website is a leaderboard for Small Language Models, benchmarked by people like you!
SLMs@Home lets you benchmark and measure the performance, capabilities, and obedience of various language models.
You can then upload your findings to our public leaderboard as a contribution to the community!
Why Create SLMs@Home?
At the moment, many language model benchmarks rely on very large datasets that are computationally expensive to run.
We at C4AI want to change that and let the average user benchmark a language model if they so wish.
Instead of massive datasets, we use small, sparse datasets that can be run locally in a timely manner and that consist mainly of real-world scenarios, questions, and requests for the model!
The name "SLMs@Home" was inspired by other crowdsourcing "@Home" projects such as Folding@Home and Minecraft@Home.
The idea behind an at-home crowdsourced language model benchmarking tool was inspired by various PC component benchmarking tools such as Geekbench, 3DMark, and UserBenchmark.
FAQ
- Do I need to pay to benchmark models? - No! This benchmark is entirely free!
- Do I need an account to be able to benchmark models? - No! This benchmark is entirely anonymous.
- Is this benchmark open-source? - Partially. The benchmarking functionality is open: we have a GitHub repo with a stripped-down version of our internal benchmark code. Our internal code also contains networking to interact with the leaderboard, a security stack to prevent spoofing, and upload limits to keep the server from being overwhelmed.
- What information do you collect? - When you upload your results to SLMs@Home, we collect only the benchmark data you send, plus your IP address for a limited time for rate-limiting purposes.
- Where can I find the queries used in the benchmarks? - Visit the leaderboard of the model you wish to see; the queries we use to test the language models are listed below the leaderboard!
- How does this website stay afloat? - C4AI makes its earnings from other products and services. SLMs@Home is our way of giving back to the community!