
Download & Install

Download the latest release from GitHub Releases. No Ollama, no GPU, no internet required after first launch.

| Platform | Download | Details |
| --- | --- | --- |
| Windows x64 | `ghost_x.x.x_x64-setup.exe` | NSIS installer · No admin required · WebView2 auto-bootstrap |
| macOS Apple Silicon | `ghost_x.x.x_aarch64.dmg` | M1/M2/M3/M4 · DMG with drag-to-install |
| macOS Intel | `ghost_x.x.x_x64.dmg` | Intel-based Macs · macOS 10.15+ |
| Linux DEB | `ghost_x.x.x_amd64.deb` | Debian · Ubuntu · Pop!_OS · Mint |
| Linux RPM | `ghost_x.x.x_x86_64.rpm` | Fedora · RHEL · CentOS · openSUSE |
| Linux Universal | `ghost_x.x.x_amd64.AppImage` | Any distro · No install needed |
| Platform | Download | Details |
| --- | --- | --- |
| Android ARM64 | `Ghost_x.x.x_android-aarch64.apk` | Min SDK 24 · Tauri v2 WebView |
| iOS | Coming soon | xcarchive ready · Requires macOS build |
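If you want to verify a download before installing, you can compare its SHA-256 hash with the one shown on the release page. A minimal sketch; the filename is an example, and the availability of published checksums for each asset is an assumption:

```shell
# Illustrative check: substitute the asset you actually downloaded.
f=ghost_0.11.0_amd64.deb
if [ -f "$f" ]; then
  sha256sum "$f"   # compare the printed hash with the release page
else
  echo "$f not downloaded yet"
fi
```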
```shell
# Debian/Ubuntu
wget https://github.com/ghostapp-ai/ghost/releases/latest/download/ghost_0.11.0_amd64.deb
sudo dpkg -i ghost_0.11.0_amd64.deb

# Fedora/RHEL
wget https://github.com/ghostapp-ai/ghost/releases/latest/download/ghost_0.11.0_x86_64.rpm
sudo rpm -i ghost_0.11.0_x86_64.rpm

# AppImage (any distro)
wget https://github.com/ghostapp-ai/ghost/releases/latest/download/ghost_0.11.0_amd64.AppImage
chmod +x ghost_0.11.0_amd64.AppImage
./ghost_0.11.0_amd64.AppImage
```
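The AppImage runs in place, but it won't appear in your application menu by itself. A hypothetical launcher entry, following the freedesktop.org desktop-entry convention (the `Exec` path is a placeholder; point it at wherever you keep the AppImage):

```shell
# Optional: register the AppImage with your desktop menu.
# The Exec path below is illustrative, not a real install location.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/ghost.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Ghost
Exec=/home/you/Apps/ghost_0.11.0_amd64.AppImage
Categories=Utility;
EOF
```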
Build from Source

To build Ghost from source you will need:

  • Rust (latest stable)
  • Bun >= 1.0 (or Node.js >= 18)
  • Platform-specific Tauri v2 dependencies (see the Tauri v2 prerequisites guide)
  • Ollama (optional — Ghost uses native AI by default)
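Before cloning, it can help to confirm the toolchain is on your `PATH`. A small check along these lines (tool names taken from the prerequisite list above):

```shell
# Report which build prerequisites are installed; "missing" means that
# tool still needs to be installed before building.
for tool in rustc cargo bun; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version | head -n1)"
  else
    echo "$tool: missing"
  fi
done
```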
```shell
# Clone the repo
git clone https://github.com/ghostapp-ai/ghost.git
cd ghost

# Install frontend dependencies
bun install

# Run in development mode
# (native AI model downloads on first run, ~23MB)
bun run tauri dev
```
```shell
# Desktop (Windows, macOS, or Linux — auto-detected)
bun run tauri build

# Android (requires Android SDK + NDK 27+)
bun run tauri android build --target aarch64
```

The desktop installer will be generated in `src-tauri/target/release/bundle/`.
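To see exactly which artifacts a build produced, you can list that directory; Tauri groups installers into per-format subdirectories (e.g. `nsis/`, `dmg/`, `deb/`, depending on platform). A small sketch:

```shell
# List generated installers after `bun run tauri build`
# (prints a hint if the build hasn't been run yet).
if [ -d src-tauri/target/release/bundle ]; then
  ls -R src-tauri/target/release/bundle
else
  echo "no bundles yet; run 'bun run tauri build' first"
fi
```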

Optional: Ollama for Higher-Quality Models

```shell
# Higher-quality 768D embeddings (vs 384D native)
ollama pull nomic-embed-text

# Agent reasoning model (for tool calling)
ollama pull qwen3:8b
```
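After pulling, you can confirm the models are present with the Ollama CLI. A small guarded check (it also reflects the fallback described above: without Ollama, Ghost keeps using its native model):

```shell
# List locally available Ollama models if the CLI is installed.
if command -v ollama >/dev/null 2>&1; then
  ollama list
else
  echo "ollama not found; Ghost will use its built-in native model"
fi
```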
System Requirements

| Requirement | Minimum | Recommended |
| --- | --- | --- |
| RAM | 2 GB | 8 GB+ (for larger chat models) |
| Storage | 100 MB | 500 MB+ (with AI models) |
| OS | Windows 10, macOS 10.15, Ubuntu 20.04 | Latest versions |
| CPU | Any x86_64 or ARM64 | AVX2/NEON SIMD support |
| GPU | Not required | CUDA/Metal for faster inference |
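To see whether your CPU has the recommended SIMD support, on Linux you can grep the CPU flag list (a generic check, not Ghost-specific; Apple Silicon and other ARM64 chips have NEON by default):

```shell
# x86_64 Linux: check for AVX2 among the CPU flags.
if grep -qw avx2 /proc/cpuinfo 2>/dev/null; then
  echo "AVX2 available"
else
  echo "AVX2 not detected; Ghost still runs, just without SIMD speedups"
fi
```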

On first launch, Ghost will:

  1. Show an onboarding wizard with hardware detection
  2. Download the native AI model (~23MB) — requires internet once
  3. Auto-discover your Documents, Desktop, Downloads, and Pictures folders
  4. Start indexing files in the background

Subsequent launches are instant (<500ms) with the cached model.