EnviroLLM gives you the tools to track and optimize resource usage when running models on your own hardware.
Run one command (no installation needed):
```shell
npx envirollm start
```

Then visit the dashboard to see your metrics in real time!
Requirements: Node.js and Python 3.7+
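You can confirm both prerequisites from a terminal before running the CLI; a quick sketch (the exact version output will vary by machine):

```shell
# Check that Node.js is on the PATH and print its version
if command -v node >/dev/null 2>&1; then
  echo "node: $(node --version)"
else
  echo "node: missing"
fi

# Check for Python 3 the same way (EnviroLLM needs 3.7 or newer)
if command -v python3 >/dev/null 2>&1; then
  echo "python3: $(python3 --version 2>&1)"
else
  echo "python3: missing"
fi
```

If either line reports `missing`, install that runtime before running `npx envirollm start`.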
The CLI automatically detects most popular local LLM setups.
The project is fully open source: everything's available on GitHub.
LLMs are a fascinating technology, but running them locally can be a black box. I wanted to build a tool that gives users visibility into, and control over, the environmental impact of their AI experiments. Since I can't influence cloud-based inference, this felt like a good way to contribute to more sustainable AI practices.