AI-Powered Communication

Every voice deserves to be heard.

AACAI builds free AI tools for Augmentative & Alternative Communication (AAC). Not for profit. For the people who depend on it.

Explore Projects · Learn More

Free, browser-based tools. No installs, no accounts.

Our Vision

Until every voice is fully heard, we're not done.

Millions rely on AAC to express themselves. Too often, the tools are slow, expensive, or impersonal. We believe AI can change that. AACAI builds free, working prototypes that prove it.

Working Prototypes

Real tools you can use today, not concepts. Each project ships as a free, browser-based application anyone can open.

Built in the Open

Every project is developed publicly and improved through feedback from AAC users, clinicians, and developers.

AI-First Design

Each project explores how AI can make communication faster, more natural, and more personal for AAC users.

Projects

What we're building.

Each project tackles a real challenge in AAC - shipping as a working tool, not a whitepaper.

Live Prototype

Eye Gaze AAC Board

A full communication board with vocabulary categories, spell mode, text-to-speech, and adaptive word prediction - all navigable by eye tracking with customizable dwell time and visual feedback. This is a proof of concept to demonstrate what AI-enhanced AAC can look like - not intended as a daily communication tool.

Designed for literate adults who can read and spell but need eye gaze access because of conditions that limit hand and touch use, such as ALS, cerebral palsy, locked-in syndrome, muscular dystrophy, spinal muscular atrophy, multiple sclerosis, Rett syndrome, brainstem stroke, or spinal cord injury. Symbol-based boards for emerging literacy users are on our roadmap. It's also built for speech-language pathologists (SLPs), caregivers, and researchers evaluating AI-enhanced AAC.

Requires an eye tracking device such as the Tobii Dynavox PCEye. An explore mode is available for previewing the app with a mouse.

AI features require your own API keys. Get a Claude API key for word prediction and an OpenAI API key for AI voice output. The app still works without them, using local fallbacks.

Open Prototype

How It Works

From gaze to speech in five stages.

Each eye movement passes through a real-time pipeline - calibrated, smoothed, snapped, confirmed, and spoken.

01

Gaze Capture & Calibration

A multi-point calibration maps raw tracker coordinates to screen positions using polynomial regression.
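
To make this concrete, here is a minimal sketch of how such a fit might work in the browser. All names and parameter choices are illustrative assumptions, not AACAI's actual code; a 2nd-order fit like this needs at least six calibration points.

```typescript
// Illustrative sketch: fit a 2nd-order polynomial mapping from raw
// tracker coordinates to screen coordinates via least squares.

type Point = { x: number; y: number };

// Polynomial feature vector: [1, x, y, x*x, x*y, y*y]
function features(p: Point): number[] {
  return [1, p.x, p.y, p.x * p.x, p.x * p.y, p.y * p.y];
}

// Solve A w = b by Gauss-Jordan elimination (A is small and square).
function solve(A: number[][], b: number[]): number[] {
  const n = b.length;
  const M = A.map((row, i) => [...row, b[i]]);
  for (let col = 0; col < n; col++) {
    // Partial pivoting for numerical stability.
    let pivot = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
    [M[col], M[pivot]] = [M[pivot], M[col]];
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const f = M[r][col] / M[col][col];
      for (let c = col; c <= n; c++) M[r][c] -= f * M[col][c];
    }
  }
  return M.map((row, i) => row[n] / row[i][i]);
}

// Fit weights mapping raw gaze samples to the known screen targets
// shown during calibration, one least-squares system per screen axis.
function fitCalibration(raw: Point[], screen: Point[]) {
  const X = raw.map(features);
  const k = X[0].length; // needs raw.length >= k calibration points
  const XtX = Array.from({ length: k }, (_, i) =>
    Array.from({ length: k }, (_, j) =>
      X.reduce((s, row) => s + row[i] * row[j], 0)));
  const Xty = (ys: number[]) =>
    Array.from({ length: k }, (_, i) =>
      X.reduce((s, row, n) => s + row[i] * ys[n], 0));
  return {
    wx: solve(XtX, Xty(screen.map(p => p.x))),
    wy: solve(XtX, Xty(screen.map(p => p.y))),
  };
}

// Apply the fitted mapping to a live gaze sample.
function applyCalibration(w: { wx: number[]; wy: number[] }, raw: Point): Point {
  const f = features(raw);
  return {
    x: f.reduce((s, v, i) => s + v * w.wx[i], 0),
    y: f.reduce((s, v, i) => s + v * w.wy[i], 0),
  };
}
```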

02

Adaptive Smoothing

A velocity-adaptive filter smooths jitter when still but stays responsive during fast saccades.
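
One way to implement this, sketched in the spirit of the well-known One Euro Filter; the class name and parameter values below are illustrative assumptions:

```typescript
// Illustrative sketch of velocity-adaptive smoothing: heavy smoothing
// when the gaze is nearly still, light smoothing during fast saccades.

class AdaptiveSmoother {
  private prev: { x: number; y: number } | null = null;

  constructor(
    private minCutoff = 1.0, // Hz: cutoff when the gaze is still
    private beta = 0.01      // how strongly speed raises the cutoff
  ) {}

  // dt is the time since the last sample, in seconds.
  update(x: number, y: number, dt: number): { x: number; y: number } {
    if (!this.prev) return (this.prev = { x, y });
    // Estimate gaze speed in px/s from the raw delta.
    const speed = Math.hypot(x - this.prev.x, y - this.prev.y) / dt;
    // Higher speed -> higher cutoff -> less smoothing lag.
    const cutoff = this.minCutoff + this.beta * speed;
    const alpha = 1 / (1 + 1 / (2 * Math.PI * cutoff * dt));
    this.prev = {
      x: this.prev.x + alpha * (x - this.prev.x),
      y: this.prev.y + alpha * (y - this.prev.y),
    };
    return this.prev;
  }
}
```

The key idea is that the filter's cutoff frequency rises with gaze speed, so jitter is suppressed at rest without adding lag mid-saccade.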

03

Snap-to-Grid

The cursor magnetically snaps to the nearest target. Hysteresis prevents flickering between buttons.
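
A sketch of how snap logic with a hysteresis margin might look; the thresholds and names are illustrative assumptions, not the app's real values:

```typescript
// Illustrative sketch of snap-to-grid: the cursor snaps to the nearest
// button, but only leaves it once another target is clearly closer.

interface Target { id: string; cx: number; cy: number } // button centers

class SnapToGrid {
  private current: Target | null = null;

  constructor(
    private targets: Target[],
    private snapRadius = 120, // px: max distance at which snapping applies
    private hysteresis = 30   // px: extra margin required to switch targets
  ) {}

  update(gx: number, gy: number): Target | null {
    let nearest: Target | null = null;
    let best = Infinity;
    for (const t of this.targets) {
      const d = Math.hypot(gx - t.cx, gy - t.cy);
      if (d < best) { best = d; nearest = t; }
    }
    if (!nearest || best > this.snapRadius) return (this.current = null);
    if (this.current && nearest.id !== this.current.id) {
      // Switch only if the new target beats the old one by a clear margin;
      // this prevents flicker on the boundary between two buttons.
      const dCur = Math.hypot(gx - this.current.cx, gy - this.current.cy);
      if (dCur < best + this.hysteresis) return this.current;
    }
    return (this.current = nearest);
  }
}
```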

04

Dwell Selection

Stability detection confirms intentional holds. Adaptive timing fires faster when gaze is perfectly still.
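
A minimal sketch of dwell logic with stability-based adaptive timing; the timings are illustrative defaults, not the app's actual settings:

```typescript
// Illustrative sketch of dwell selection: a selection fires after the
// gaze holds on one target, and the required dwell time shrinks when
// recent samples are tightly clustered (i.e., the gaze is very still).

class DwellSelector {
  private targetId: string | null = null;
  private dwellStart = 0;
  private samples: { x: number; y: number }[] = [];

  constructor(
    private baseDwellMs = 800, // normal dwell time
    private fastDwellMs = 500, // dwell time when perfectly still
    private stillnessPx = 8    // max spread that counts as "still"
  ) {}

  // Returns the selected target id when a dwell completes, else null.
  update(targetId: string | null, x: number, y: number, nowMs: number): string | null {
    if (targetId !== this.targetId) {
      // Gaze moved to a new target (or off all targets): restart the timer.
      this.targetId = targetId;
      this.dwellStart = nowMs;
      this.samples = [];
      return null;
    }
    if (!targetId) return null;
    this.samples.push({ x, y });
    // Stability: spread of the recent samples around their mean.
    const recent = this.samples.slice(-20);
    const mx = recent.reduce((s, p) => s + p.x, 0) / recent.length;
    const my = recent.reduce((s, p) => s + p.y, 0) / recent.length;
    const spread = Math.max(...recent.map(p => Math.hypot(p.x - mx, p.y - my)));
    const required = spread < this.stillnessPx ? this.fastDwellMs : this.baseDwellMs;
    if (nowMs - this.dwellStart >= required) {
      this.dwellStart = nowMs; // reset so a repeat press must be deliberate
      this.samples = [];
      return targetId;
    }
    return null;
  }
}
```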

05

AI Prediction & Speech

Claude-powered word prediction suggests next words. TTS generates natural speech with local fallbacks.
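
A sketch of how the prediction call and its fallback might look, using the public Anthropic Messages API and the browser's built-in Web Speech API; the model choice, prompt, and fallback word list are illustrative assumptions, not AACAI's actual code:

```typescript
// Illustrative sketch: ask Claude for likely next words, falling back to
// a small local word list when no API key is set or the request fails.

const LOCAL_FALLBACK = ['I', 'want', 'need', 'help', 'yes', 'no', 'more', 'stop'];

async function predictNextWords(text: string, apiKey?: string): Promise<string[]> {
  if (!apiKey) return LOCAL_FALLBACK;
  try {
    const res = await fetch('https://api.anthropic.com/v1/messages', {
      method: 'POST',
      headers: {
        'x-api-key': apiKey,
        'anthropic-version': '2023-06-01',
        // Required by Anthropic for direct calls from a browser page.
        'anthropic-dangerous-direct-browser-access': 'true',
        'content-type': 'application/json',
      },
      body: JSON.stringify({
        model: 'claude-3-5-haiku-latest', // a small, fast model; illustrative choice
        max_tokens: 50,
        messages: [{
          role: 'user',
          content: `Suggest 6 likely next words for an AAC user typing: "${text}". Reply with the words only, separated by spaces.`,
        }],
      }),
    });
    if (!res.ok) return LOCAL_FALLBACK;
    const data = await res.json();
    return data.content[0].text.trim().split(/\s+/).slice(0, 6);
  } catch {
    return LOCAL_FALLBACK; // offline or API error: degrade gracefully
  }
}

// Local speech fallback via the browser's Web Speech API.
function speakLocally(text: string): void {
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}
```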

Communication Modalities

Symbol-Based Systems · Gesture Recognition · Speech-Generating Devices · Eye-Gaze Tracking · Brain-Computer Interfaces · Text-to-Speech · Predictive Language Models

Get Involved

Stay in the loop.

Get updates on new projects, prototypes, and ways to contribute. No spam, ever.


We respect your privacy. Unsubscribe anytime.