<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Running Immich with AI-Powered Image Search on Raspberry Pi 5 + AXera NPU]]></title><description><![CDATA[<p dir="auto"><strong>TL;DR</strong>: Got Immich running with CLIP-based semantic search on a Raspberry Pi 5 using the AXera AX8850 NPU. Chinese language search works surprisingly well thanks to the ViT-L-14-336-CN model. Setup took about 30 minutes once I figured out the ML server configuration.</p>
<h2>What is Immich?</h2>
<p dir="auto">Immich is an open-source, self-hosted photo and video management platform. Think Google Photos, but you control the data. It supports automatic backup, intelligent search, and cross-device access.</p>
<h2>Why This Setup?</h2>
<p dir="auto">I wanted to test AI-accelerated image search on edge hardware. The AXera AX8850 NPU on our M5Stack development board provides hardware acceleration for the CLIP models, making semantic search actually usable on a Pi.</p>
<h2>Hardware Setup</h2>
<ul>
<li>Raspberry Pi 5</li>
<li>M5Stack AX8850 AI Module (provides NPU acceleration)</li>
<li>Standard Pi power supply and storage</li>
</ul>
<h2>Step-by-Step Deployment</h2>
<h3>1. Download the Pre-built Package</h3>
<p dir="auto">Grab the optimized Immich build from HuggingFace:</p>
<pre><code class="language-bash">git clone https://huggingface.co/AXERA-TECH/immich
</code></pre>
<p dir="auto"><strong>Note</strong>: You'll need <code>git lfs</code> installed. If you don't have it, install it first.</p>
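<p dir="auto">On Raspberry Pi OS (Debian-based), a minimal install sketch looks like this - <code>git-lfs</code> is in the standard repos:</p>
<pre><code class="language-bash">sudo apt-get update
sudo apt-get install -y git-lfs   # package from the Debian/Raspberry Pi OS repos
git lfs install                   # registers the LFS filters in your git config
git lfs version                   # sanity check before cloning
</code></pre>
<p dir="auto">Without the LFS filters registered, the clone completes but the large model and image files come down as tiny pointer stubs.</p>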
<p dir="auto"><strong>What you get:</strong></p>
<pre><code class="language-bash">m5stack@raspberrypi:~/rsp/immich $ ls -lh
total 421M
drwxrwxr-x 2 m5stack m5stack 4.0K Oct 10 09:12 asset
-rw-rw-r-- 1 m5stack m5stack 421M Oct 10 09:20 ax-immich-server-aarch64.tar.gz
-rw-rw-r-- 1 m5stack m5stack    0 Oct 10 09:12 config.json
-rw-rw-r-- 1 m5stack m5stack 7.6K Oct 10 09:12 docker-deploy.zip
-rw-rw-r-- 1 m5stack m5stack 104K Oct 10 09:12 immich_ml-1.129.0-py3-none-any.whl
-rw-rw-r-- 1 m5stack m5stack 9.4K Oct 10 09:12 README.md
-rw-rw-r-- 1 m5stack m5stack  177 Oct 10 09:12 requirements.txt
</code></pre>
<h3>2. Load the Docker Image</h3>
<pre><code class="language-bash">cd immich
docker load -i ax-immich-server-aarch64.tar.gz
</code></pre>
<p dir="auto">If Docker isn't installed, you'll need to set that up first.</p>
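<p dir="auto">One common way to get Docker onto a Pi is the official convenience script - shown here as a sketch; it's good practice to read the downloaded script before running it:</p>
<pre><code class="language-bash">curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER   # run docker without sudo; log out and back in to apply
</code></pre>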
<h3>3. Configure the Environment</h3>
<pre><code class="language-bash">unzip docker-deploy.zip
cp example.env .env
</code></pre>
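<p dir="auto">Before starting the stack, it's worth checking the storage path and database password in <code>.env</code>. These key names come from the stock Immich <code>example.env</code>; the pre-built package may differ slightly:</p>
<pre><code class="language-bash"># .env (excerpt - key names assumed from upstream Immich)
UPLOAD_LOCATION=./library     # where uploaded photos/videos land on disk
DB_DATA_LOCATION=./postgres   # PostgreSQL data directory
DB_PASSWORD=postgres          # change this for anything beyond a LAN test
</code></pre>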
<h3>4. Start the Core Services</h3>
<pre><code class="language-bash">docker compose -f docker-compose.yml -f docker-compose.override.yml up -d
</code></pre>
<p dir="auto">Success looks like this:</p>
<pre><code class="language-bash">[+] Running 3/3
 ✔ Container immich_postgres  Started                                      1.0s 
 ✔ Container immich_redis     Started                                      0.9s 
 ✔ Container immich_server    Started                                      0.9s 
</code></pre>
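<p dir="auto">If any container fails to come up, the status table and server log are the first things to check:</p>
<pre><code class="language-bash">docker compose ps              # all three containers should show as running
docker logs -f immich_server   # follow the server log for startup errors
</code></pre>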
<h3>5. Set Up the ML Service (The Interesting Part)</h3>
<p dir="auto">The ML service handles the AI-powered image search. It runs separately to leverage the NPU.</p>
<p dir="auto"><strong>Create and activate a virtual environment:</strong></p>
<pre><code class="language-bash">python -m venv mich
source mich/bin/activate
</code></pre>
<p dir="auto"><strong>Install dependencies:</strong></p>
<pre><code class="language-bash">pip install https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc2/axengine-0.1.3-py3-none-any.whl
pip install -r requirements.txt
pip install immich_ml-1.129.0-py3-none-any.whl
</code></pre>
<p dir="auto"><strong>Launch the ML server:</strong></p>
<pre><code class="language-bash">python -m immich_ml
</code></pre>
<p dir="auto">You should see:</p>
<pre><code class="language-bash">[10/10/25 09:50:12] INFO     Listening at: http://[::]:3003 (8698)              
[INFO] Available providers:  ['AXCLRTExecutionProvider']
[10/10/25 09:50:16] INFO     Application startup complete.  
</code></pre>
<p dir="auto">The <code>AXCLRTExecutionProvider</code> confirms the NPU is being used.</p>
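<p dir="auto">The stock immich-machine-learning service exposes a simple health endpoint; assuming this build keeps it, you can verify the server from a second shell:</p>
<pre><code class="language-bash">curl http://localhost:3003/ping   # a short "pong"-style response means the ML server is reachable
</code></pre>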
<h2>Web Interface Configuration</h2>
<h3>Initial Setup</h3>
<ol>
<li>Navigate to the Immich web UI at <code>http://&lt;your-pi-ip&gt;:2283</code> (Immich's default web port, e.g. <code>192.168.20.27:2283</code>) - port 3003 is the ML service, not the web UI</li>
<li><strong>First visit requires admin account creation</strong> - credentials are stored locally</li>
</ol>
<p dir="auto">&lt;img src="<a href="https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/linux/ax8850_card/images/immich1.png" target="_blank" rel="noopener noreferrer nofollow ugc">https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/linux/ax8850_card/images/immich1.png</a>" width="95%" /&gt;</p>
<h3>Configure the ML Server</h3>
<p dir="auto">This is critical - the web interface needs to know where your ML service is running.</p>
<ol>
<li>Go to Settings → Machine Learning</li>
<li>Set the URL to your Pi's IP and port 3003: <code>http://192.168.20.27:3003</code></li>
<li><strong>Choose your CLIP model based on language:</strong>
<ul>
<li>Chinese search: <code>ViT-L-14-336-CN__axera</code></li>
<li>English search: <code>ViT-L-14-336__axera</code></li>
</ul>
</li>
</ol>
<p dir="auto">&lt;img src="<a href="https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/linux/ax8850_card/images/immich4.png" target="_blank" rel="noopener noreferrer nofollow ugc">https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/linux/ax8850_card/images/immich4.png</a>" width="95%" /&gt;</p>
<h3>First-Time Index</h3>
<p dir="auto"><strong>Important</strong>: You need to manually trigger the initial indexing.</p>
<ol>
<li>Go to Administration → Jobs</li>
<li>Find "SMART SEARCH"</li>
<li>Click "Run Job" to process your uploaded images</li>
</ol>
<p dir="auto">&lt;img src="<a href="https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/linux/ax8850_card/images/immich6.png" target="_blank" rel="noopener noreferrer nofollow ugc">https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/linux/ax8850_card/images/immich6.png</a>" width="95%" /&gt;</p>
<h2>Testing Image Search</h2>
<p dir="auto">Upload some photos, wait for indexing to complete, then try semantic searches:</p>
<p dir="auto">&lt;img src="<a href="https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/linux/ax8850_card/images/immich7.png" target="_blank" rel="noopener noreferrer nofollow ugc">https://m5stack.oss-cn-shenzhen.aliyuncs.com/resource/linux/ax8850_card/images/immich7.png</a>" width="95%" /&gt;</p>
<p dir="auto">The search is semantic rather than filename-based - you can search for "sunset" or "dogs playing" and it will find relevant images even if those exact words appear nowhere in the filename or metadata.</p>
<h2>Technical Notes</h2>
<ul>
<li>The NPU acceleration makes CLIP inference fast enough for interactive search</li>
<li>Chinese language support is genuinely good with the CN model</li>
<li>The ML server runs independently, so you can restart it without affecting the main Immich service</li>
<li>Docker handles PostgreSQL and Redis automatically</li>
</ul>
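<p dir="auto">Since the ML server is just a Python process in a venv, a small systemd unit keeps it running across reboots. This is a sketch - the paths and venv name (<code>mich</code>) match the session above; adjust for your layout:</p>
<pre><code class="language-bash"># /etc/systemd/system/immich-ml.service
[Unit]
Description=Immich ML server (AXera NPU)
After=network.target

[Service]
User=m5stack
WorkingDirectory=/home/m5stack/rsp/immich
ExecStart=/home/m5stack/rsp/immich/mich/bin/python -m immich_ml
Restart=on-failure

[Install]
WantedBy=multi-user.target
</code></pre>
<p dir="auto">Then <code>sudo systemctl enable --now immich-ml</code> starts it immediately and on every boot.</p>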
<h2>Why M5Stack in This Stack?</h2>
<p dir="auto">The AX8850 NPU module provides the hardware acceleration that makes this practical on a Pi. Without it, running CLIP inference would be too slow for interactive use. We're working on more edge AI applications that leverage this acceleration - this Immich setup is a good real-world test case.</p>
<hr />
<p dir="auto">Questions about the setup or the NPU integration? Happy to dig into specifics.</p>
]]></description><link>https://community.m5stack.com/topic/7947/running-immich-with-ai-powered-image-search-on-raspberry-pi-5-axera-npu</link><generator>RSS for Node</generator><lastBuildDate>Sun, 15 Mar 2026 01:36:50 GMT</lastBuildDate><atom:link href="https://community.m5stack.com/topic/7947.rss" rel="self" type="application/rss+xml"/><pubDate>Wed, 17 Dec 2025 07:12:11 GMT</pubDate><ttl>60</ttl></channel></rss>