Alenface v26.2.2

The Ultralight Native LLM Client.

Pure Java, powered by llama.cpp and integrated with a native browser engine. No Electron. No Chromium. No JCEF. No compromises.

65 MB

Extreme Efficiency. Only 95 MB installed on disk. Native engineering stays lean.

Native

Built with Project Panama for a true native experience on Apple Silicon.
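As a sketch of what a Panama-based native call looks like (this is an illustrative example using the Java 22+ Foreign Function & Memory API, not Alenface's actual bindings), here is a downcall into the C standard library's `strlen`:

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class NativeCallDemo {
    /** Calls the C standard library's strlen through the Panama FFM API. */
    public static long cStrlen(String s) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // strlen is part of libc, reachable through the linker's default lookup
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        try (Arena arena = Arena.ofConfined()) {
            // Copies the string into native memory as NUL-terminated UTF-8
            MemorySegment cString = arena.allocateFrom(s);
            return (long) strlen.invokeExact(cString);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(cStrlen("Alenface")); // 8
    }
}
```

The same mechanism lets a Java process bind llama.cpp's C API directly, without JNI glue code.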

Bonsai-8B

Full support for 1-bit/ternary quantization: high-quality 8-billion-parameter intelligence at just 1.14 GB.
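To see why ternary quantization shrinks a model so aggressively, note that each weight takes one of three values {-1, 0, +1}, so five weights fit in a single byte (3^5 = 243 ≤ 256), about 1.6 bits per weight. The sketch below illustrates that packing scheme; it is a simplified illustration, not the actual GGUF/Bonsai on-disk format, which adds shared scales and metadata that change the exact file size:

```java
import java.util.Arrays;

public class TernaryPack {
    // Pack 5 ternary weights (-1, 0, +1) into one byte using base-3 digits:
    // 3^5 = 243 fits in a byte, giving ~1.6 bits per weight.
    public static byte pack(int[] trits) {
        int v = 0;
        for (int i = trits.length - 1; i >= 0; i--) {
            v = v * 3 + (trits[i] + 1); // map -1..+1 to base-3 digit 0..2
        }
        return (byte) v;
    }

    public static int[] unpack(byte b) {
        int v = b & 0xFF;
        int[] trits = new int[5];
        for (int i = 0; i < 5; i++) {
            trits[i] = (v % 3) - 1; // recover -1..+1 from each base-3 digit
            v /= 3;
        }
        return trits;
    }

    public static void main(String[] args) {
        int[] w = {-1, 0, 1, 1, -1};
        int[] roundTrip = unpack(pack(w));
        System.out.println(Arrays.equals(w, roundTrip)); // true
    }
}
```

At roughly 1.6 bits per weight, 8 billion weights occupy on the order of 1.6 GB before the format's further savings; compare that with ~16 GB for the same model in 16-bit floats.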

Not just another wrapper.

While other local LLM tools are built on bloated frameworks and consume gigabytes of space, Alenface is engineered for speed and precision.

Hugging Face Integration

Seamlessly search, download, and manage GGUF model variants directly within the app.
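Under the hood, the Hugging Face Hub serves raw repository files at a stable URL pattern, `https://huggingface.co/{repo}/resolve/{revision}/{filename}`, which is all a client needs to fetch a GGUF variant. A minimal sketch (the repository and file names below are placeholders, not a real Bonsai repo):

```java
import java.net.URI;

public class HfGguf {
    /**
     * Builds the Hub's "resolve" download URL for a file on the main branch.
     * Pattern: https://huggingface.co/{repo}/resolve/main/{filename}
     */
    public static URI ggufUrl(String repo, String file) {
        return URI.create("https://huggingface.co/" + repo + "/resolve/main/" + file);
    }

    public static void main(String[] args) {
        // Placeholder repo/file names for illustration only
        System.out.println(ggufUrl("example-org/bonsai-8b-gguf", "bonsai-8b.gguf"));
    }
}
```

Feeding such a URL to `java.net.http.HttpClient` with a file body handler is enough to stream the model to disk; no SDK dependency is required.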

PrismML Bonsai Ready

We provide continuous support for the Bonsai architecture, ensuring you can run cutting-edge AI on a standard laptop.

Installation Note

Alenface is currently in its Bootstrap Phase (unsigned). To run the downloaded app, strip the quarantine attribute that triggers macOS Gatekeeper via Terminal:

xattr -cr /Applications/Alenface.app

The Path Forward: Phase Zero

The immediate goal is to transition from test builds to a professionally distributed app. We are raising funds for Apple Developer Code Signing, dedicated build servers, and cross-platform foundations (Windows/Linux).

$5 Supporter

Help us clear the "Phase Zero" hurdles and get your name in the "About" screen of the application.

$15 Founding Dev

Get early access to signed builds, join the Windows Port roadmap, and get your name in the "About" screen.

$50 Visionary

Gain access to the Linux Port insider track, get direct tech support, and receive deep-dive technical updates.

Spread the Magic

Can't support financially? Be our zero-bloat evangelist. Record a video of Alenface running a 1 GB+ model at lightning speed, or mention us on Reddit as the ultimate lightweight alternative to Electron clients.

Join the Mission on Patreon