Running the raw Flux Kontext Dev open-weight model (23 GB) with a LoRA on a Lambda.ai cloud GPU.
I keep the models on my own hard drive and upload them directly to the Lambda instance with Filebrowser, through an SSH port forward on port 8080. I also expose the ComfyUI dashboard through the same tunnel, on port 8188.
Host LambdaComfy
    # Lambda Labs instance IP
    Hostname 192.9.251.153
    User ubuntu
    LocalForward 8080 127.0.0.1:8080
    LocalForward 8188 127.0.0.1:8188
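For reference, the same tunnels can be opened without touching `~/.ssh/config`; this one-liner is equivalent to the Host block above (IP and ports taken from it):

```shell
# Ad-hoc tunnel: Filebrowser on 8080, ComfyUI on 8188.
ssh -L 8080:127.0.0.1:8080 -L 8188:127.0.0.1:8188 ubuntu@192.9.251.153
```

With the config entry in place, a plain `ssh LambdaComfy` does the same thing.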
I'm using an NVIDIA A10 with 24 GB of VRAM. That's a tight fit for the full 23 GB model, but it works... an image takes about 1 to 2 minutes to generate.
Depending on your available VRAM:
- 32 GB VRAM --> full-speed, full-precision model (24 GB)
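As a rough sketch, that VRAM-based choice could be encoded like this; the thresholds and variant labels are my own assumptions drawn from the notes above, not official Flux guidance:

```python
def pick_flux_variant(vram_gb: float) -> str:
    """Pick a Flux Kontext Dev variant for the available VRAM.

    Assumed thresholds (from my own testing, not official docs):
    - >= 32 GB: full-precision model at full speed
    - >= 24 GB: the full 23 GB model fits, but tightly (A10-style, slower)
    - below that: a quantized variant (e.g. FP8/GGUF) is the safer choice
    """
    if vram_gb >= 32:
        return "full-precision (fp16/bf16)"
    if vram_gb >= 24:
        return "full model, tight fit"
    return "quantized variant (fp8/gguf)"
```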
The SOS-button side project for my mother is almost finished, at least on the software side. I ended up using Python with the excellent uv package manager. Now I just need to design two nice cases in Fusion 360 to house the two Raspberry Pi 3B+ units: one with its buttons, the other with the piezoelectric buzzer.
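The software side boils down to "button press, feedback beep, send alert". A minimal sketch of that flow, with the hardware and notification parts injected as plain callables (on the real Pi these would be, say, a gpiozero Button callback and a buzzer driver; every name here is hypothetical):

```python
def make_sos_handler(send_alert, beep):
    """Return a press handler: beep for feedback, then send the alert.

    `send_alert` and `beep` are injected so the flow can be shown (and
    tested) without GPIO hardware; on the Pi this handler would be wired
    to the physical button's press event.
    """
    def on_press():
        beep()          # piezo buzzer confirms the press was registered
        send_alert()    # e.g. a notification to the family
        return "sent"
    return on_press
```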
nico, 28/06/2025
uv is now my default Python package manager for all of my projects:
- A single tool to replace pip, pip-tools, pipx, poetry, pyenv, twine, virtualenv, and more.
- 10-100x faster than pip.
- An extremely fast Python package and project manager, written in Rust.
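For the record, a handful of uv commands covered the whole project; this is a typical flow rather than an exact transcript, and the project and dependency names are illustrative:

```shell
uv init sos-button       # scaffold a project (pyproject.toml, managed venv)
cd sos-button
uv add gpiozero          # add and lock a dependency
uv run python main.py    # run inside the project's environment
uv python install 3.12   # uv can also manage Python versions themselves
```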