Frequently Asked Questions

What is EnviroLLM?

EnviroLLM gives you the tools to track and optimize resource usage when running models on your own hardware.

What can I do with it?

  • Monitor CPU, memory, and power usage in real time (see the sketch after this list)
  • See which models are actually efficient and which are resource hogs
  • Compare running models locally vs. using cloud APIs
  • Get suggestions for optimizing your model's performance
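
For a sense of what the monitoring produces, here is a minimal Python sketch that samples CPU and memory the way a resource monitor might. It uses the psutil library and is an illustration only, not EnviroLLM's actual collector; power draw is platform-specific and omitted here.

import psutil  # pip install psutil

# Sample system-wide CPU and memory a few times, one second apart.
# This mirrors the kind of signal a resource monitor collects; it is
# a sketch, not EnviroLLM's actual collection code.
for _ in range(5):
    cpu = psutil.cpu_percent(interval=1)   # blocks ~1 s while sampling
    mem = psutil.virtual_memory()
    print(f"CPU {cpu:5.1f}%  RAM {mem.percent:5.1f}% "
          f"({mem.used / 1e9:.1f} GB used)")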

How do I start?

Run one command (no installation needed):

npx envirollm start

Then visit the dashboard to see your metrics in real time!

Requirements: Node.js and Python 3.7+
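
If you want the raw numbers programmatically rather than through the dashboard, something like the following should work, assuming the backend exposes an HTTP metrics endpoint. The port and path below are hypothetical, so treat this as a sketch and check your local setup for the real API.

import requests  # pip install requests

# Hypothetical example: with `npx envirollm start` running, poll the
# local backend for current metrics. The URL below is an assumption
# for illustration, not a documented EnviroLLM endpoint.
resp = requests.get("http://localhost:8000/metrics", timeout=5)
resp.raise_for_status()
print(resp.json())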

What technology stack does EnviroLLM use?

  • Frontend: Next.js, React, TypeScript, Tailwind CSS
  • Backend: Python, FastAPI, PyTorch/TensorFlow (see the sketch after this list)
  • CLI: Node.js, TypeScript
  • Deployment: Vercel (Frontend), Railway (Backend)
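
To make the backend half of that stack concrete, a FastAPI service that reports resource metrics can be very small. The sketch below illustrates the pattern; it is not EnviroLLM's actual backend code, and the endpoint name is an assumption.

import psutil
from fastapi import FastAPI  # pip install fastapi uvicorn psutil

app = FastAPI()

# Illustrative endpoint in the style of the stack above; not
# EnviroLLM's actual API.
@app.get("/metrics")
def metrics():
    return {
        "cpu_percent": psutil.cpu_percent(interval=0.1),
        "memory_percent": psutil.virtual_memory().percent,
    }

# Run with: uvicorn main:app --reload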

Which LLM tools does it work with?

The CLI automatically detects the most popular LLM setups (a sketch of the general detection approach follows this list):

  • Ollama
  • LLaMA/LlamaCPP
  • Python scripts
  • Text Generation WebUI
  • KoboldCPP
  • Oobabooga
  • LM Studio
  • GPT4All
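
Auto-detection like this is typically a matter of scanning running processes for known names. The Python sketch below shows that general technique with psutil; the name list and matching logic are assumptions for illustration, not EnviroLLM's actual detection code.

import psutil  # pip install psutil

# An assumed watchlist for illustration; EnviroLLM's real list may differ.
KNOWN_TOOLS = {"ollama", "koboldcpp", "lm-studio", "gpt4all"}

def find_llm_processes():
    """Return names of running processes that match known LLM tools."""
    found = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if any(tool in name for tool in KNOWN_TOOLS):
            found.append(name)
    return found

print(find_llm_processes())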

Can I contribute?

Absolutely! Everything's available on GitHub.

Why build this?

LLMs are a fascinating technology to me, but running them locally can be a black box. I wanted to create a tool that gives users visibility and control over the environmental impact of their AI experiments. Since I can't influence how cloud providers run inference, improving local workflows seemed like a good way to contribute to more sustainable AI practices.