AI-Powered Communication
Every voice deserves to be heard.
AACAI builds free AI tools for Augmentative & Alternative Communication. Not for profit. For the people who depend on it.
Free, browser-based tools. No installs, no accounts.
Our Vision
Millions rely on AAC to express themselves. Too often, the tools are slow, expensive, or impersonal. We believe AI can change that. AACAI builds free, working prototypes that prove it.
Real tools you can use today, not concepts. Each project ships as a free, browser-based application anyone can open.
Every project is developed publicly and improved through feedback from AAC users, clinicians, and developers.
Each project explores how AI can make communication faster, more natural, and more personal for AAC users.
Projects
Each project tackles a real challenge in AAC - shipping as a working tool, not a whitepaper.
A full communication board with vocabulary categories, spell mode, text-to-speech, and adaptive word prediction - all navigable by eye tracking with customizable dwell time and visual feedback. This is a proof of concept to demonstrate what AI-enhanced AAC can look like - not intended as a daily communication tool.
Designed for literate adults who can read and spell but need eye gaze access - due to ALS, cerebral palsy, locked-in syndrome, muscular dystrophy, spinal muscular atrophy, multiple sclerosis, Rett syndrome, brainstem stroke, spinal cord injury, or other conditions that limit hand and touch access. Symbol-based boards for emerging literacy users are on our roadmap. Also for SLPs, caregivers, and researchers evaluating AI-enhanced AAC.
Requires an eye tracking device such as the Tobii Dynavox PCEye. An explore mode is available for previewing the app with a mouse.
AI features require your own API keys. Get a Claude API key for word prediction and an OpenAI API key for AI voice output. The app works without them using local fallbacks.
How It Works
Each eye movement passes through a real-time pipeline - calibrated, smoothed, snapped, confirmed, and spoken.
A multi-point calibration maps raw tracker coordinates to screen positions using polynomial regression.
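A minimal sketch of what a polynomial-regression calibration can look like, fitting a second-order polynomial per screen axis by least squares. All names here (`CalibrationSample`, `fitAxis`) and the choice of basis are illustrative assumptions, not the prototype's actual code.

```typescript
interface CalibrationSample {
  rawX: number; rawY: number;       // raw tracker output while looking at a dot
  screenX: number; screenY: number; // known position of the calibration dot
}

// Second-order polynomial features of a raw gaze point.
function features(x: number, y: number): number[] {
  return [1, x, y, x * x, y * y, x * y];
}

// Solve a small linear system by Gauss-Jordan elimination with partial pivoting.
function solve(A: number[][], b: number[]): number[] {
  const n = A.length;
  const M = A.map((row, i) => [...row, b[i]]); // augmented matrix
  for (let col = 0; col < n; col++) {
    let pivot = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
    [M[col], M[pivot]] = [M[pivot], M[col]];
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const f = M[r][col] / M[col][col];
      for (let c = col; c <= n; c++) M[r][c] -= f * M[col][c];
    }
  }
  return M.map((row, i) => row[n] / row[i][i]);
}

// Least-squares fit of one screen axis via the normal equations (AᵀA)w = Aᵀb.
function fitAxis(samples: CalibrationSample[], axis: 'screenX' | 'screenY'): number[] {
  const k = 6;
  const AtA = Array.from({ length: k }, () => new Array(k).fill(0));
  const Atb = new Array(k).fill(0);
  for (const s of samples) {
    const f = features(s.rawX, s.rawY);
    for (let i = 0; i < k; i++) {
      Atb[i] += f[i] * s[axis];
      for (let j = 0; j < k; j++) AtA[i][j] += f[i] * f[j];
    }
  }
  return solve(AtA, Atb);
}

// Map a raw gaze point to one screen coordinate with fitted weights.
function predict(w: number[], x: number, y: number): number {
  return features(x, y).reduce((acc, f, i) => acc + f * w[i], 0);
}
```

With a multi-point grid (e.g. nine dots), the fit can absorb the nonlinear distortions that a purely linear mapping would miss.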
A velocity-adaptive filter smooths jitter when still but stays responsive during fast saccades.
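One way to implement velocity-adaptive smoothing is a low-pass filter whose cutoff rises with estimated gaze speed, in the spirit of the One Euro filter: heavy smoothing when still, low lag during saccades. The parameter names and defaults below are illustrative assumptions, not the prototype's tuned values.

```typescript
// Exponential-smoothing coefficient for a given cutoff frequency and timestep.
function alpha(cutoffHz: number, dtSec: number): number {
  const tau = 1 / (2 * Math.PI * cutoffHz);
  return 1 / (1 + tau / dtSec);
}

class AdaptiveFilter {
  private prev: number | null = null;
  private prevDeriv = 0;
  constructor(
    private minCutoff = 1.0, // Hz: smoothing floor when the gaze is still
    private beta = 0.01,     // how quickly the cutoff rises with speed
    private dCutoff = 1.0,   // Hz: cutoff for the speed estimate itself
  ) {}

  filter(value: number, dtSec: number): number {
    if (this.prev === null) { this.prev = value; return value; }
    // A smoothed speed estimate drives the adaptive cutoff.
    const rawDeriv = (value - this.prev) / dtSec;
    const aD = alpha(this.dCutoff, dtSec);
    const deriv = aD * rawDeriv + (1 - aD) * this.prevDeriv;
    this.prevDeriv = deriv;
    // Faster movement -> higher cutoff -> less smoothing, less lag.
    const cutoff = this.minCutoff + this.beta * Math.abs(deriv);
    const a = alpha(cutoff, dtSec);
    this.prev = a * value + (1 - a) * this.prev;
    return this.prev;
  }
}
```

Run one instance per axis; during a fixation the cutoff sits near `minCutoff` and jitter is suppressed, while a saccade pushes the cutoff high enough that the cursor keeps up.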
The cursor magnetically snaps to the nearest target. Hysteresis prevents flickering between buttons.
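A minimal sketch of snapping with hysteresis: the cursor locks onto the nearest button inside an acquire radius, and must leave a larger release radius before unlocking, so jitter near a border cannot flicker the highlight between two buttons. The radii and target representation are illustrative assumptions.

```typescript
interface Target { id: string; x: number; y: number; } // button centers

const SNAP_RADIUS = 60;    // px: distance at which a target is acquired
const RELEASE_RADIUS = 90; // px: larger distance needed to release it

let locked: Target | null = null;

function dist(t: Target, gx: number, gy: number): number {
  return Math.hypot(t.x - gx, t.y - gy);
}

function snap(targets: Target[], gx: number, gy: number): Target | null {
  // Hysteresis: stay on the locked target until gaze clearly leaves it.
  if (locked && dist(locked, gx, gy) <= RELEASE_RADIUS) return locked;
  locked = null;
  let best: Target | null = null;
  let bestD = SNAP_RADIUS;
  for (const t of targets) {
    const d = dist(t, gx, gy);
    if (d < bestD) { best = t; bestD = d; }
  }
  locked = best;
  return best;
}
```

The gap between the two radii is the hysteresis band: any gaze position inside it keeps the current selection, whichever target is nominally closer.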
Stability detection confirms intentional holds. Adaptive timing fires faster when gaze is perfectly still.
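A sketch of dwell confirmation with adaptive timing: a selection fires only after gaze has stayed on one target for a hold period, and the required hold shrinks when the gaze is very stable, since steadiness signals intent. The thresholds below are illustrative assumptions.

```typescript
const BASE_DWELL_MS = 800;  // hold time for a normal dwell
const FAST_DWELL_MS = 500;  // shorter hold when gaze is perfectly still
const STILL_RADIUS_PX = 15; // gaze spread that still counts as "still"

class DwellDetector {
  private targetId: string | null = null;
  private startMs = 0;
  private maxSpread = 0;
  private anchorX = 0;
  private anchorY = 0;

  // Call every frame; returns the target id once a dwell completes, else null.
  update(targetId: string | null, x: number, y: number, nowMs: number): string | null {
    if (targetId !== this.targetId) {
      // Gaze moved to a different target (or off all targets): restart the timer.
      this.targetId = targetId;
      this.startMs = nowMs;
      this.maxSpread = 0;
      this.anchorX = x; this.anchorY = y;
      return null;
    }
    if (targetId === null) return null;
    this.maxSpread = Math.max(this.maxSpread, Math.hypot(x - this.anchorX, y - this.anchorY));
    // A very steady gaze fires sooner; a wandering one needs the full hold.
    const required = this.maxSpread <= STILL_RADIUS_PX ? FAST_DWELL_MS : BASE_DWELL_MS;
    if (nowMs - this.startMs >= required) {
      this.targetId = null; // reset so one hold fires exactly once
      return targetId;
    }
    return null;
  }
}
```

In the app this kind of timing would be user-adjustable - the "customizable dwell time" mentioned above - since comfortable hold times vary widely between users.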
Claude-powered word prediction suggests next words. TTS generates natural speech with local fallbacks.
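The local fallback for word prediction could be as simple as a bigram frequency model: suggest the words that most often followed the last typed word in a small corpus. This is an illustrative sketch of one possible fallback, not the prototype's actual implementation.

```typescript
class BigramPredictor {
  private counts = new Map<string, Map<string, number>>();

  // Count word-pair frequencies from a training text.
  train(text: string): void {
    const words = text.toLowerCase().split(/\s+/).filter(Boolean);
    for (let i = 0; i + 1 < words.length; i++) {
      const next = this.counts.get(words[i]) ?? new Map<string, number>();
      next.set(words[i + 1], (next.get(words[i + 1]) ?? 0) + 1);
      this.counts.set(words[i], next);
    }
  }

  // Top-k most frequent followers of the last typed word.
  predict(lastWord: string, k = 3): string[] {
    const next = this.counts.get(lastWord.toLowerCase());
    if (!next) return [];
    return Array.from(next.entries())
      .sort((a, b) => b[1] - a[1])
      .slice(0, k)
      .map(([w]) => w);
  }
}
```

A fallback like this keeps the board usable offline and keyless; when an API key is present, the LLM replaces it with context-aware suggestions.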
Communication Modalities
Get Involved
Get updates on new projects, prototypes, and ways to contribute. No spam, ever.
We respect your privacy. Unsubscribe anytime.