Augmentative & Alternative Communication (AAC) Gets a “Face” Lift

  • Lois Jean Brady, MA, CCC-SLP

    Matthew Guggemos, MA, CCC-SLP

    This article discusses how tablets can stimulate engagement and enjoyment while, at the same time, teaching communication and social skills to people with autism spectrum disorder (ASD). Powerful tablet features, such as video modeling and text-to-speech, hold promise as effective, evidence-based tools for teaching communication skills. Equally important, tablets can be used to stimulate other key behaviors, such as joint attention, which positively affect social interaction and communication skills in people with ASD.

    iPad & Communication

    In just a few short years, the iPad has established itself as an increasingly popular and highly effective tool for teaching communication skills. Educational software companies have responded to this wave of consumer demand by releasing numerous augmentative and alternative communication (AAC) apps for the iPad. Although each app employs its own array of symbols, gestures, formats, pictures, and speech output, most of these design features have been used for decades on other devices. In short, the majority of AAC apps take a familiar, long-standing approach to AAC: icon-based or keyboard-generated speech production.

    Yet the iPad has many features and capabilities that can create a more personal and engaging AAC experience than traditional approaches. High-definition video, clear audio, touch screens, Bluetooth connectivity, and web access make iPads (and similar tablet computers) ideal modern AAC devices.

    Current research supports the use of video modeling as a powerful teaching tool (1,2), and there’s growing evidence that iPad-based AAC apps can be an effective communication system for people who are unable to speak — particularly many children with autism.

    So we asked a question: what if we could animate images of users speaking and pair that with a speech-generation program? The answer is we can — and it works! We believe that creativity, science, and unconventional thinking are keys to developing apps that both incorporate the iPad’s modern features and improve communication skills for people who have autism, traumatic brain injuries, selective mutism, and other disorders.

    Introducing InnerVoice AAC  

    InnerVoice is the only app on the market specifically designed from the ground up to harness the powerful therapeutic value of video self-modeling (VSM) while incorporating iPad features such as Bluetooth connectivity. Inspired by the Autism Speaks Hackathons, InnerVoice immerses users in a total communication environment, where they not only hear the desired message but see it being produced. This award-winning and affordable app takes full advantage of all that the iPad has to offer.

    Video Self-Modeling (VSM)  

    InnerVoice harnesses the highly effective technique of video self-modeling (VSM), in which individuals watch themselves successfully performing a desired behavior, such as speech, thus becoming their own self-models. This positive imagery is so powerful that the effects of VSM can be seen almost instantaneously (1,2). In his book “Seeing Is Believing,” Dr. Tom Buggey writes, “One of the attributes of self-modeling is that positive results are seen almost immediately.”

    Using InnerVoice, an individual simply chooses a desired avatar (a picture of himself or of a favorite person), which can then communicate personal ideas and feelings. Speaking avatars visually and auditorily model communication behaviors for individuals who are unable to communicate. At the same time, avatars capture and hold not only the users’ attention but also the attention of their communication partners, creating an environment in which communication is internally reinforcing and self-sustaining.

    Research on video self-modeling has yielded two key findings that explain why this technique is effective. First, the best video models are those who closely resemble the viewer; that way, a viewer can watch “himself” performing a skill. Second, viewers watch themselves succeed. Self-modeling allows people to see themselves as successful, and therefore to believe that they can be successful (1,2).

    Most AAC apps are picture-based speech-generation systems: the device speaks words or phrases based on what a user taps or types. Such apps often fail, however, to engage two crucial parts of the brain, the mirror neuron system and the fusiform gyrus, which show reduced neural activity in people with autism (3,4,5,6,7).

    Remote Prompting (RP)

    Echolalia (ech-o-la-li-a) is the repetition of words, without understanding, by an individual learning to talk. Researchers have found that up to 85% of individuals with autism exhibit echolalia, which creates a problem for existing forms of prompting (see the Levels of Prompting diagram on page 20).

    Remote prompting is a new approach to teaching communicative independence using iDevices (iPad, iPhone, and iPod Touch). With InnerVoice, prompts are sent via Bluetooth from the educator’s device to the user’s iPad, helping ensure that the child performs the correct skill and reducing the probability of errors and frustration.

    Remote prompting reduces the confusing verbal explanations that interfere with the communicative intent or message. The problem of verbal prompts interfering with learning is particularly noticeable when teaching first- and second-person pronouns such as “me,” “my,” “you,” and “your.” The more one explains who “you” and “me” are, the more unclear the definitions become for beginning communicators, especially those with autism who exhibit echolalia. Using speech to describe these ambiguous language concepts can quickly turn into a confusing, and sometimes frustrating, sea of uncertainty. For example, if a speaker named Andrea says, “Hi, what’s your name?” and the responder replies, “Hi, what’s your name,” it’s uncertain whether the responder understands his role in the exchange. Correcting this response verbally quickly causes problems, particularly when saying, “You say, ‘Hi, my name is Jerry.’” This often elicits, “You say, ‘Hi, my name is Jerry.’” In short, attempting to teach these language concepts to a person with echolalia quickly becomes a nearly impossible task that devolves into an endless loop of repetition…and frustration.

    InnerVoice uses remote prompting to provide an alternative imitation model, which the person with autism can mimic by looking at the screen of his mobile device. The device simply displays the socially relevant response (auditory and/or graphic), avoiding unnecessary and confusing verbal descriptions. Remote prompting can also help with greetings, farewells, and other face-to-face interactions in the same way.
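    The remote-prompting flow described above can be sketched in code. This is a hypothetical illustration in Python, not InnerVoice’s actual implementation — its Bluetooth protocol is not public, and every class and field name here is invented:

```python
# Hypothetical sketch of the remote-prompting flow. The names Prompt,
# LearnerDevice, and EducatorDevice are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Prompt:
    """A model utterance pushed from the educator's device to the learner's iPad."""
    text: str                    # the target response, e.g. "Hi, my name is Jerry."
    animate_avatar: bool = True  # whether the speaking avatar mouths the prompt


class LearnerDevice:
    """Stands in for the learner's iPad: it displays prompts, never explains them."""

    def __init__(self):
        self.screen = None

    def receive(self, prompt: Prompt) -> str:
        # Show only the target response itself -- no verbal explanation
        # that a learner with echolalia might repeat back.
        self.screen = prompt.text
        return self.screen


class EducatorDevice:
    """Stands in for the educator's device, which pushes prompts to the learner."""

    def __init__(self, link: LearnerDevice):
        self.link = link  # in the app, this link would be a Bluetooth channel

    def send_prompt(self, text: str) -> str:
        return self.link.receive(Prompt(text=text))


learner = LearnerDevice()
educator = EducatorDevice(learner)
shown = educator.send_prompt("Hi, my name is Jerry.")
print(shown)
```

    The key design point is that the learner’s screen displays exactly the target utterance and nothing else, so there is no extra verbal scaffolding to echo.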


    Edutainment

    Edutainment is teaching through a medium that both educates and entertains. The iTherapy and MotionPortrait system emphasizes engagement and joint attention; at the same time, InnerVoice incorporates speech-pathologist-designed, evidence-based practices that improve speech, language, communication, and social skills. Recent research suggests that education, entertainment, and video self-modeling can provide tremendous benefits for individuals with autism (8,9).


    InnerVoice is the product of a partnership between iTherapy and MotionPortrait. iTherapy conducted the clinical trials and ensured that evidence-based best practices were followed in developing InnerVoice. MotionPortrait provided the best photo-based 3D avatars and synchronized mouth movements on the market today.

    The principal developers for InnerVoice are Lois Jean Brady and Matthew Guggemos. For more information on InnerVoice, please visit


    Lois Jean Brady has over 25 years of experience as a speech-language pathologist and Certified Autism Specialist (CAS), with expertise in assistive technology and computer-based intervention. Career accomplishments include: two-time winner of the Autism Speaks Hackathons, recipient of the Benjamin Franklin Award for “Apps for Autism,” and an Ursula Award for Autism Today TV.

    Matthew Guggemos is a licensed speech-language pathologist, professional drummer, and winner of the 2013 Mensa Intellectual Benefit to Society Award. Matthew specializes in treating ASD, and he has over 15 years of experience teaching literacy. He received training in the Orton-Gillingham approach, which emphasizes multi-sensory techniques for teaching individuals with dyslexia.


    1. Buggey, Tom. Seeing Is Believing: Video Self-Modeling for People with Autism and Other Developmental Disabilities. Bethesda, MD: Woodbine House, 2009. Print.

    2. Bellini, S., & Akullian, J. (2007). A meta-analysis of video modeling and video self-modeling interventions for children and adolescents with autism spectrum disorders. Exceptional Children, 73, 261-284.

    3. Bolte S, Hubl D, Feineis-Matthews S, Prvulovic D, Dierks T, Poustka F. Facial affect recognition training in autism: can we animate the fusiform gyrus? Behav Neurosci 2006; 120: 211–6.

    4. Kanwisher N, McDermott J, Chun MM. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci 1997; 17: 4302–11.

    5. Kawashima R, Sugiura M, Kato T, Nakamura A, Hatano K, Ito K, et al. The human amygdala plays an important role in gaze monitoring. Brain 1999; 122: 779–83.

    6. Pierce K, Haist F, Sedagat F, Courchesne E. The brain response to personally familiar faces in autism: findings of fusiform activity and beyond. Brain 2004; 127: 2703–16.

    7. Schumann CM, Hamstra J, Goodlin-Jones BL, Lotspeich LJ, Kwon H, Buonocore MH, et al. The amygdala is enlarged in children but not adolescents with autism; the hippocampus is enlarged at all ages. J Neurosci 2004.

    8. Paraskeva, F., Mysirlaki, S., & Papagianni, A. (2010). Multiplayer online games as educational tools: Facing new challenges in learning. Computers and Education, 54, 498-505.

    9. Van Eck, R. (2006). Digital game-based learning: It’s not just the digital natives who are restless. Educause Review, 41(2), 1-16.


    Autism Statistics

    From 2001 to 2009, the average growth rate for autism ranged from 12% to 19% across 6 bay area counties (San Mateo, Santa Clara, Alameda, Santa Cruz, San Francisco, and Marin) (PACE)

    In the past eight years, the number of students with autism in Santa Clara County has more than tripled from 1 in 348 to 1 in 104 (PACE)

    In 2009 alone, Bay Area schools recorded a 13% growth rate—down from 19% in 2001. But the total number of students with autism continued to climb to 6,218 individuals in 2009 (PACE)

    Marin County special education enrollment: 257 children with autism (2012), up from 179 (2008) (Lucile Packard)

    In 2009 alone, the number of students with autism in six Bay Area counties increased to 707—this is almost 4 new students added every day of the regular academic year (PACE)

    California Data

    From 2001-2009 the number of California students affected by Autism grew by 40,000 individuals (PACE)

    As of 2008, the peak autism cohort in California schools is around five years of age (PACE). This means that the peak cohort in 2013 would be around ten years of age. These students will be transitioning out of school in the next 8 to 10 years.

    In a survey conducted by the Autism Society of California (2012), 77% of children with autism were enrolled in special education.

    National Data

    ASDs are almost 5 times more common among boys (1 in 54) than among girls (1 in 252). (CDC)

    About 1 in 88 children has been identified with an ASD. This marks a 23% increase since the CDC’s last report in 2009 and a 78% increase since its first report in 2007. (CDC)

    1 in 50 school-aged children has autism, according to a 2012 CDC national survey of the parents of 65,556 children between the ages of 6 and 17.

    The above statistics have been compiled from various sources by occupational therapy students at Dominican University, as part of a study done at Marin Autism Collaborative. For more information, please look up

