Assistive Technology

ConnectAble

AAC Communication System for Non-Verbal Users

A personal Augmentative and Alternative Communication (AAC) system that helps non-verbal users communicate through a symbol-based board. The system learns how a specific user communicates over time and predicts what they want to say next using a 3-layer AI cascade that improves nightly.

Timeline 2026
Role Full-Stack Developer
Type Personal Project

The Challenge

Non-verbal users rely on AAC devices to communicate, but most commercial tools are static — they don't learn individual patterns, context, or preferences over time. Users must manually build every phrase from scratch, and generic suggestions rarely match their actual communication needs.

The Solution

ConnectAble combines a 66-symbol communication board with a personalized AI prediction system that learns nightly from the user's phrase history. A 3-layer cascade — bigram lookup, ChromaDB vector search, and LLM fallback — ensures fast, relevant suggestions that improve the more the system is used.

Key Features

Symbol Communication Board

A 66-symbol grid organized into 5 categories (food, feelings, actions, places, people) lets users build sentences by tapping buttons. The top bar displays real-time AI phrase suggestions that update as each symbol is selected.

  • 66 symbols across 5 categories
  • Real-time suggestion updates
  • Accepted suggestions logged for future learning

3-Layer AI Prediction Cascade

Phrase predictions run through a cascade designed for speed and accuracy. Layer 1 is an instant bigram lookup from a local vocabulary store. Layer 2 performs semantic vector search over past phrases via ChromaDB. Layer 3 falls back to Ollama's phi3 LLM when the phrase database is sparse.

  • Layer 1: Bigram next-word lookup (instant)
  • Layer 2: ChromaDB semantic similarity search
  • Layer 3: Ollama phi3 LLM fallback
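The cascade above can be sketched as a single dispatch function. This is a minimal illustration, not the project's actual code: `bigrams`, `vector_search`, and `llm_complete` are hypothetical stand-ins for the local vocabulary store, a ChromaDB query, and an Ollama call.

```python
from typing import Callable

def suggest(prefix: list[str],
            bigrams: dict[str, list[str]],
            vector_search: Callable[[str], list[str]],
            llm_complete: Callable[[str], list[str]],
            k: int = 3) -> list[str]:
    """Return up to k suggestions, trying the cheapest layer first."""
    # Layer 1: instant bigram lookup keyed on the last selected symbol.
    if prefix:
        hits = bigrams.get(prefix[-1], [])
        if hits:
            return hits[:k]
    phrase = " ".join(prefix)
    # Layer 2: semantic search over past phrases (e.g. a ChromaDB query).
    hits = vector_search(phrase)
    if hits:
        return hits[:k]
    # Layer 3: fall back to a local LLM when the phrase store is sparse.
    return llm_complete(phrase)[:k]
```

Because each layer only runs when the one before it comes up empty, the common case (a known bigram) never pays for vector search or LLM inference.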

Nightly Learning Pipeline

Every night at 2 AM, the system retrains from logged phrase history — rebuilding the bigram map and re-embedding phrases into ChromaDB. The model improves continuously without user intervention.

  • Automated nightly retraining at 2 AM
  • Bigram map rebuilt from usage history
  • ChromaDB embeddings updated with new phrases
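The bigram-rebuild step of the nightly job could look something like the sketch below (the function name and shape are assumptions; scheduling at 2 AM would be handled separately, e.g. by cron or APScheduler, and the ChromaDB re-embedding pass is omitted).

```python
from collections import defaultdict

def rebuild_bigrams(phrases: list[str]) -> dict[str, list[str]]:
    """Rebuild the next-word map from logged phrase history.

    Each word maps to its observed successors, most frequent first,
    so a Layer 1 lookup stays a single dict access at runtime.
    """
    counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for phrase in phrases:
        words = phrase.lower().split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return {
        word: sorted(nxt, key=nxt.get, reverse=True)
        for word, nxt in counts.items()
    }
```

Rebuilding from scratch each night keeps the map consistent with the full history and avoids incremental-update bookkeeping during the day.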

Text-to-Speech

The system supports two TTS modes: fully offline synthesis via pyttsx3 or the macOS `say` command, or high-quality streaming audio from ElevenLabs for users who want a more natural voice.

  • Offline TTS (pyttsx3 / macOS say)
  • ElevenLabs streaming audio
  • Configurable per user

Agent Tab

A dedicated agent interface lets users send natural language messages that are classified into intents — make a call, order food, set a reminder, or general chat — and dispatched to tool handlers automatically.

  • Intent classification via Ollama phi3
  • Tool handlers for calls, food, reminders
  • General chat fallback
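Intent classification with a local LLM can be sketched as below. The `llm` callable is a hypothetical stand-in for a call to Ollama's phi3; the prompt shape and intent labels are illustrative, and anything the model returns that isn't valid JSON drops to the chat fallback.

```python
import json
from typing import Callable

INTENTS = ("make_call", "order_food", "set_reminder", "chat")

def classify_intent(message: str, llm: Callable[[str], str]) -> str:
    """Map a free-text message onto one of the agent's intents."""
    prompt = (
        f"Classify the user message into one of {list(INTENTS)}. "
        'Reply as JSON: {"intent": "..."}.\n'
        f"Message: {message}"
    )
    try:
        intent = json.loads(llm(prompt)).get("intent")
    except (json.JSONDecodeError, AttributeError):
        return "chat"  # unparseable model output degrades to general chat
    return intent if intent in INTENTS else "chat"
```

Constraining the model to a closed label set and validating its output means a hallucinated intent can never reach a tool handler.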

Private by Design

All phrase data is stored locally in an AES-256 encrypted SQLite database (SQLCipher). The LLM runs locally via Ollama. No user data leaves the device unless ElevenLabs TTS is explicitly enabled.

  • AES-256 encrypted SQLite (SQLCipher)
  • Local LLM inference via Ollama
  • No cloud dependency by default
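Opening the encrypted store follows the usual SQLCipher pattern of issuing `PRAGMA key` immediately after connecting. The sketch below is an assumption about the wiring, not the project's code: with SQLCipher bindings (e.g. passing `sqlcipher3.connect` as `connect`) the pragma enables decryption, while stock `sqlite3` silently ignores it, which keeps the sketch runnable without the extension.

```python
import sqlite3
from typing import Callable

def open_encrypted(path: str, key: str,
                   connect: Callable[..., sqlite3.Connection] = sqlite3.connect
                   ) -> sqlite3.Connection:
    """Open the phrase store, applying the encryption key before any query."""
    conn = connect(path)
    # PRAGMA key must be the first statement on a SQLCipher connection;
    # it is a no-op on stock sqlite3 (unknown pragmas are ignored).
    conn.execute(f"PRAGMA key = '{key}'")
    return conn
```

In production the key would come from the OS keychain or an environment secret rather than a literal, and the pragma value should be escaped since pragmas cannot take bound parameters.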

Technology Stack

Frontend

React 18 TypeScript Vite Tailwind CSS

Backend

FastAPI SQLite + AES-256 Python 3.11+

AI & ML

ChromaDB sentence-transformers Ollama phi3

TTS & APIs

pyttsx3 ElevenLabs 12 REST endpoints

System Overview

66
AAC symbol buttons across 5 communication categories
3
Prediction layers — bigram, vector search, LLM fallback
12
REST API endpoints covering health, logging, suggestions, TTS, analytics, and agent
0
Cloud dependencies by default — fully offline capable

Key Learnings

01

Cascade Design for Latency

Putting the cheapest operation first (bigram lookup) and the most expensive last (LLM inference) keeps suggestions fast for common phrases while maintaining quality for novel inputs.

02

Privacy as a Feature

For assistive technology, communication history is highly sensitive. Designing for offline-first with local encryption made privacy a core feature rather than an afterthought.

03

Personalization Takes Time

The nightly training pipeline means the system improves gradually. Seeding the database with starter phrases (via seed_phrases.py) was critical for useful predictions from day one.

04

Graceful Degradation

Building the frontend to work standalone with fallback data when the backend is offline ensured the communication board was always usable, even during development or connectivity issues.

Interested in Collaboration?

I'm always open to discussing assistive technology, AI projects, or internship opportunities.