Pure Java powered by Llama.cpp and integrated with a native browser engine. No Electron. No Chromium. No JCEF. No compromises.
Extreme Efficiency. Only 95 MB installed on disk. Native engineering stays lean.
Built with Project Panama for a true native experience on Apple Silicon.
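Project Panama's Foreign Function & Memory API (final since JDK 22) is what makes a pure-Java llama.cpp binding possible without JNI glue. As an illustration only (this is not Alenface's actual code), here is the basic downcall pattern, binding and calling the C library's `strlen`:

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

// Minimal sketch of a Panama (FFM API) downcall: no JNI, no generated stubs.
public class PanamaSketch {
    // Calls the C standard library's strlen() on a Java string.
    static long nativeStrlen(String s) throws Throwable {
        Linker linker = Linker.nativeLinker();
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        try (Arena arena = Arena.ofConfined()) {
            // Copy the string into native memory as NUL-terminated UTF-8.
            MemorySegment cStr = arena.allocateFrom(s);
            return (long) strlen.invoke(cStr);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(nativeStrlen("llama.cpp")); // prints 9
    }
}
```

The same mechanism scales up to the full llama.cpp API: each native function becomes a `MethodHandle`, and `Arena` scopes native memory to a try-with-resources block.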
Full support for 1-bit/ternary quantization. High-quality 8-billion-parameter intelligence at just 1.14 GB.
While other local LLM tools are built on bloated frameworks and consume gigabytes of space, Alenface is engineered for speed and precision.
Seamlessly search, download, and manage GGUF model variants directly within the app.
We provide continuous support for the Bonsai architecture, ensuring you can run cutting-edge AI on a standard laptop.
Alenface is currently in its Bootstrap Phase (unsigned). To run the downloaded app, clear macOS Gatekeeper via Terminal:
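Assuming the app was dropped into /Applications (adjust the path if you installed it elsewhere), the standard command to clear the quarantine attribute is:

```shell
# Remove the quarantine attribute macOS Gatekeeper attaches to downloaded apps.
# Adjust the path if Alenface.app lives somewhere other than /Applications.
xattr -cr /Applications/Alenface.app
```

After this one-time step, the app launches normally; signed builds (see below) will make it unnecessary.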
The immediate goal is to transition from test builds to a professionally distributed app. We are raising funds for Apple Developer Code Signing, dedicated build servers, and cross-platform foundations (Windows/Linux).
Help us clear the "Phase Zero" hurdles and get your name in the "About" screen of the application.
Get early access to signed builds, join the Windows Port roadmap, and get your name in the "About" screen.
Gain access to the Linux Port insider track, get direct tech support, and receive deep-dive technical updates.
Can't support financially? Be our zero-bloat evangelist: record a video of Alenface running a 1 GB+ model at full speed, or mention us on Reddit as the lightweight alternative to Electron-based clients.
Join the Mission on Patreon