Should I put the OpenWrt One router in my homelab? It’s really tempting me right now. It uses a MediaTek WiFi chipset, and phew, no Broadcom (hello Raspberry Pi).
For now, I’m using the router/modem provided by Orange, their Livebox. The thing is, Orange doesn’t offer a bridge mode like the Freebox does.
nico06/08/2025
Nvidia Tesla A100 SXM4 to PCIe adapter. Go train LLMs at home.
nico04/08/2025
Listening on port 3410, such memories: OptixPro 1.33, written by s13az3 in Delphi. Year 2004.
This iMac looks great for light daily use and remote SSH coding.
nico29/07/2025
AI
An overview of API providers and the proxies used to consume them.
Generally, avoid unified API platforms — they are very expensive. It's better to consume APIs directly from the source, such as the creators of frontier models like Mistral, Anthropic, etc.
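Consuming a "pure" vendor endpoint really is just one HTTP call, no platform in between. A minimal sketch against api.mistral.ai/v1 (the key and model name here are placeholders, not from the original post):

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, prompt: str,
                       base: str = "https://api.mistral.ai/v1") -> urllib.request.Request:
    # Build a chat-completions request straight against the model
    # creator's own endpoint -- no unified proxy in between.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # placeholder key goes here
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("sk-...", "mistral-small-latest", "Hello")
# urllib.request.urlopen(req) would actually send it (needs a real API key),
# so it's left commented out here.
```

Swapping the `base` URL for api.deepseek.com/v1 or another OpenAI-style endpoint is usually all it takes; Anthropic's API uses a different request shape, so check the vendor's docs.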
TUIs are also gaining popularity in the summer of 2025. The major creators of frontier models all have their own TUIs. I prefer using them because they are much lighter and more minimalist in approach. Moreover, the models are often nerfed and distilled in Cursor or Windsurf, with smaller context windows.
I find TUIs are the ideal way to consume inference APIs. Nicolas
✶ API Inference
pure 🟢 api.anthropic.com/v1 · api.mistral.ai/v1 · api.deepseek.com/v1 · api.moonshot.ai/v1 · generativelanguage.googleapis.com
unified 🔴 OpenRouter · Eden AI · LiteLLM · TogetherAI
✶ API Inference Proxy
TUI 🟢 Claude Code · Qwen Code · Gemini CLI · Crush · OpenCode · Anon Kode · Aider
Personally, I think the terminal is the best interface for a code-oriented agent like Claude or Qwen. No friction, no heavy IDE client. You just type 'claude' or 'qwen' and boom: the entire folder context is loaded behind a simple prompt interface. It's art: minimalist, simple, just the way I like it.
nico27/07/2025
The Open-Source Chinese AI Gang Qwen - Kimi - DeepSeek 🇨🇳