Permalink

While attempting to enable GPU hardware acceleration in LXC containers using Incus, I encountered a significant roadblock: Debian 12's kernel 6.1 lacks support for the Intel N150 GPU, making hardware acceleration impossible.

This led me to explore bleeding-edge distributions like Arch Linux. I tested vanilla Arch Linux 2025.07.01 with kernel 6.15.4, and GPU support worked flawlessly out of the box.
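
For reference, a minimal sketch of the container side, assuming an Incus container named mycontainer (hypothetical name) with the intel-media-driver package installed inside it:

$ incus config device add mycontainer gpu0 gpu   # pass the host iGPU into the container
$ incus exec mycontainer -- vainfo               # verify VA-API sees the N150 from inside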

Permalink

BambuLab A1
State-of-the-art printer

Permalink

STEM models
Science, technology, engineering, and mathematics

- Grok 4
- o3-pro
- DeepSeek-R1

Permalink

Note: Debian 12.11 ships with the 6.1.0-37 kernel, which does not support the Intel N150's GPU. I'm considering switching to Arch Linux on this GMKtec G3+ mini PC to get a more recent kernel and, with it, GPU support.
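
A quick way to confirm from a shell on the mini PC (the version in the comment is the one reported above, not a captured log):

$ uname -r                  # 6.1.0-37-amd64
$ lspci -k | grep -iA3 vga  # shows whether a kernel driver claims the iGPU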

Permalink

Using the raw Flux Kontext Dev open-weight model (23 GB) with a LoRA on a Lambda.ai cloud GPU.

I have the models stored on my own hard drive, and I upload them directly to the Lambda instance via Filebrowser through an SSH port forward on port 8080. I also expose the ComfyUI dashboard through the tunnel on port 8188.

Host LambdaComfy
    Hostname 192.9.251.153  # Lambda Labs instance IP
    User ubuntu
    LocalForward 8080 127.0.0.1:8080
    LocalForward 8188 127.0.0.1:8188
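
With this in ~/.ssh/config, a single command opens both tunnels:

$ ssh LambdaComfy
# then browse http://127.0.0.1:8188 (ComfyUI) and http://127.0.0.1:8080 (Filebrowser)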
 
I'm using an Nvidia A10 with 24 GB of VRAM. It's a bit tight for the full 23 GB model, but it works: generating an image takes about 1 to 2 minutes.
 
Depending on your available VRAM:

- 32 GB VRAM --> full-speed, full-precision model (24 GB)
- 20 GB VRAM --> FP8 quantized (12 GB)
- ≤ 12 GB VRAM --> Int8/GGUF small version (~5 GB)
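
To see which tier an instance falls into, check the GPU before downloading weights:

$ nvidia-smi --query-gpu=name,memory.total --format=csv   # prints GPU model and total VRAM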
 
Install on Lambda.ai

$ curl -LsSf https://astral.sh/uv/install.sh | sh           # install uv
$ git clone https://github.com/comfyanonymous/ComfyUI.git   # fetch ComfyUI
$ cd ComfyUI && uv venv                                     # create a .venv for the project
$ uv pip install torch torchvision torchaudio
$ uv pip install -r requirements.txt
$ uv run main.py                                            # run the ComfyUI dash on port 8188

# in a second shell, from the home directory:
$ curl -fsSL https://raw.githubusercontent.com/filebrowser/get/master/get.sh | bash
$ filebrowser -r ComfyUI/                                   # helps upload models (port 8080)
Permalink

Side project: SOS buttons done

Permalink

Art attack

Permalink

The side project for my mother's SOS button is almost finished, at least on the software side. I ended up using Python with the excellent uv package manager. Now I just need to design two nice cases in Fusion 360 to house the two Raspberry Pi 3B+ units with their buttons, plus the piezoelectric buzzer for the second one.

Permalink

UV is now my default Python package manager for all of my projects

github.com/astral-sh/uv

- A single tool to replace pip, pip-tools, pipx, poetry, pyenv, twine, virtualenv, and more.
- 10-100x faster than pip.
- An extremely fast Python package and project manager, written in Rust.

Example: Install & usage
$ curl -LsSf https://astral.sh/uv/install.sh | sh
$ echo 'source $HOME/.local/bin/env' >> ~/.bashrc   # add uv to the shell permanently
$ source ~/.bashrc
$ uv init --name myproject .
$ uv add pygame   # example dependency
$ uv run game.py
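
uv init writes a pyproject.toml for the project; uv add records pygame there and creates the .venv on first use, so uv run game.py executes inside that environment.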

Context for LLMs:
docs.astral.sh/uv/llms.txt

Permalink

Will IDEs soon be a thing of the past?
Gemini / Claude Code / Codex

Permalink

Go to Ryzen

Permalink

Created a 20-second video clip that pulls weather information and displays it on two 55-inch, 3000-nit dynamic kiosk screens in the city center. Software used: Python with BeautifulSoup, FFmpeg (drawtext filter), Raspberry Pi 3B+, Affinity Designer, CapCut.

I tried hardware acceleration, but ran into several issues with the drawtext filter, which seems to rely solely on the CPU. Rendering a 20-second 1080p 60 FPS video takes the Raspberry Pi about 2 minutes. A cron job refreshes the data every hour.
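
A minimal sketch of the render step, with hypothetical file names (weather.txt holds the scraped text):

$ ffmpeg -y -i clip_bg.mp4 \
    -vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf:textfile=weather.txt:x=(w-text_w)/2:y=80:fontsize=64:fontcolor=white" \
    -c:v libx264 -r 60 -t 20 clip_out.mp4   # software encode; drawtext is CPU-bound

# hourly cron entry (crontab -e), hypothetical script name:
0 * * * * /home/pi/render_weather.sh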

Permalink

🧃  đŸĒĢ  Laptops with x86 architecture, hmm yeah...

Post image
Permalink

What is DNSSEC? Dive into DNSimple's fun, illustrated comic to learn all about it!

howdnssec.works
