Your ML model cache volume is getting wiped on restart, so the model gets re-downloaded on the first search after the container comes back up. Either point the cache at a persistent path on your storage, or make sure you're not wiping the dynamic volume on restart.
In my case I changed this:
```yaml
immich-machine-learning:
  ...
  volumes:
    - model-cache:/cache
```
To this:
```yaml
immich-machine-learning:
  ...
  volumes:
    - ./cache:/cache
```
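One follow-up to this: once the service uses a bind mount, the named volume declared at the top of the stock compose file is no longer referenced by anything, so it can be dropped too. A sketch assuming the default Immich `docker-compose.yml` layout:

```yaml
# The stock compose file declares the named volume at the top level.
# After switching the service to a bind mount (./cache:/cache), this
# declaration is unused and can be removed (or left in place; Compose
# will just create an empty unused volume).
volumes:
  model-cache:
```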
I no longer have to wait uncomfortably long when I’m trying to show off Smart Search to a friend, or just need a meme pronto.
That’ll be all.
Which model would you recommend? I just switched from ViT-B/32 to ViT-SO400M-16-SigLIP2-384__webli since it seemed to be the most popular.
I switched to the same model. It's absolutely spectacular. The only extra things I did were to increase the concurrent job count for Smart Search and to give the model access to my GPU, which sped up the initial scan by at least an order of magnitude.
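For anyone wanting to do the same: the GPU wiring is done in the compose file via the hardware-acceleration override that ships alongside it, while the Smart Search concurrency is raised in the admin Jobs settings. A sketch assuming an NVIDIA GPU (the `-cuda` image tag and the `cuda` service in `hwaccel.ml.yml` follow the Immich docs; adjust for your hardware):

```yaml
immich-machine-learning:
  # The -cuda image variant bundles the GPU runtime.
  image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
  extends:
    # hwaccel.ml.yml ships alongside the stock docker-compose.yml.
    file: hwaccel.ml.yml
    service: cuda
```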
Seems to work really well. I can do obscure searches like Outer Wilds and it will pull up pictures I took from my phone of random gameplay moments, so it's not doing any filename or metadata cheating there.
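That behavior makes sense given how CLIP/SigLIP-style models work: the query text and every photo are embedded into the same vector space, and results are ranked by cosine similarity, so filenames and metadata never enter into it. A toy sketch of that ranking step (hypothetical 4-dimensional vectors; real embeddings have hundreds of dimensions):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings for illustration only.
query = [0.9, 0.1, 0.0, 0.4]       # the text "Outer Wilds"
screenshot = [0.8, 0.2, 0.1, 0.5]  # a gameplay photo
receipt = [0.0, 0.9, 0.7, 0.1]     # an unrelated photo

# The semantically related image ranks higher than the unrelated one.
assert cosine(query, screenshot) > cosine(query, receipt)
```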