🤔 Thinking about running a local LLM server. Looks like many support Apple M series chips, so

Mini PC + GPU (Nvidia?): upgradable, but higher power usage? Or a Mac Mini M4: non-upgradable 🥲

Or do I just run it on my laptop 😬 jan.ai