American Speech-Language-Hearing Association

Speech Sound Disorders

About Speech Sound Disorders

Children may say some sounds the wrong way as they learn to talk. They learn some sounds earlier, like p, m, or w. Other sounds take longer to learn, like z, v, or th. Most children can say almost all speech sounds correctly by 4 years old. A child who does not say sounds by the expected ages may have a speech sound disorder. You may hear the terms "articulation disorder" and "phonological disorder" to describe speech sound disorders like this.

To learn more about what you should expect your child to be able to say, see these two resources:

  • ASHA's Communication and Feeding Milestones: Birth to 5 Years
  • Your Child's Communication Development: Kindergarten Through Fifth Grade

Adults can also have speech sound disorders. Some adults have problems that started when they were children. Others may have speech problems after a stroke or traumatic brain injury. To learn more about adult speech disorders after a stroke or traumatic brain injury, see apraxia of speech in adults and dysarthria.

Your child may substitute one sound for another, leave sounds out, add sounds, or change a sound. It can be hard for others to understand them.

It is normal for young children to say the wrong sounds sometimes. For example, your child may make a "w" sound for an "r" and say "wabbit" for "rabbit." They may leave sounds out of words, such as "nana" for "banana." This is okay when they are young. It may be a problem if they keep making these mistakes as they get older.

You and your child may also sound different because you have an accent or dialect. This is not a speech sound disorder.

The chart below shows the ages when most English-speaking children develop sounds. Children learning more than one language may develop some sounds earlier or later.

[Chart: milestones progress from cooing, laughing, and playful sounds, to speech-like babbling and longer strings of babbled sounds, to saying early-developing sounds in words with speech that familiar people understand, and then to saying later-developing sounds in words (with possible lingering mistakes on a few sounds) and speech that most people understand.]

Many children learn to say speech sounds over time, but some do not. You may not know why your child has problems speaking.

Some children have speech problems because the brain has trouble sending messages to the speech muscles telling them how and when to move. This is called apraxia. Childhood apraxia of speech is not common, but it does cause speech problems.

Some children have speech problems because the muscles needed to make speech sounds are weak. This is called dysarthria.

Your child may have speech problems if they have

  • a developmental disorder, like autism;
  • a genetic syndrome, like Down syndrome;
  • hearing loss, from ear infections or other causes; or
  • brain damage, like cerebral palsy or a head injury.

Adults can also have speech sound disorders. Some adults have problems that started when they were children. Others may develop speech problems after a stroke or traumatic brain injury, or other trauma. To learn more about adult speech disorders, see apraxia of speech in adults, dysarthria, laryngeal cancer, and oral cancer.

Testing for Speech Sound Disorders

A speech-language pathologist, or SLP, can test your child's speech. The SLP will listen to your child to hear how they say sounds. The SLP also will look at how your child moves their lips, jaw, and tongue. The SLP may also test your child’s language skills. Many children with speech sound disorders also have language disorders. For example, your child may have trouble following directions or telling stories.

It is important to have your child’s hearing checked to make sure they do not have a hearing loss. A child with a hearing loss may have more trouble learning to talk.

The SLP can also help decide if you have a speech problem or speak with an accent. An accent is the unique way that groups of people sound. Accents are NOT a speech or language disorder. 

Treatment for Speech Sound Disorders

SLPs can help you or your child say sounds correctly and clearly. Treatment may include the following:

  • Learning the correct way to make sounds
  • Learning to tell when sounds are right or wrong
  • Practicing sounds in different words
  • Practicing sounds in longer sentences

See ASHA information for professionals on the Practice Portal’s Speech Sound Disorders page.

Other resources:

  • Identify the Signs
  • Typical Speech and Language Development


Articulation Disorder: Symptoms, Causes, and Treatments

Articulation disorder begins in childhood but can last into adulthood if left untreated. This article addresses the top questions about articulation disorder, including symptoms and treatment options for both children and adults.

By Ability Central

12 February, 2024


Almost 8% of U.S. children ages 3-17 have had a disorder related to voice, speech, language, or swallowing. Of those, about 30% have more than one disorder. The differences between diagnoses can be confusing, but Ability Central is here to help.

This article will address the specifics of articulation disorder by answering:

  • What is articulation disorder?
  • What is the difference between articulation disorder and phonological disorder?
  • What are examples of articulation disorder speech errors?
  • Is articulation disorder a speech disorder or a language disorder?
  • What other speech disorders are there besides articulation disorder?
  • What does articulation disorder look like in adults?
  • How does articulation disorder affect communication?
  • What are the treatment options for articulation disorder?
  • Where can I find help for articulation disorder?

What is articulation disorder?

Articulation is the process people use to produce sounds, make syllables, and pronounce words. It includes everything from tongue placement to how the lips move. A person with articulation disorder may have trouble pronouncing words or speaking clearly. This is typically most evident in certain sounds, like replacing “th” with “s” or “r” with “w.” 

Articulation disorder is also called:

  • Functional speech disorder
  • Articulation delay
  • Functional speech sound disorder
  • Speech articulation disorder

Articulation disorder occurs in children but may last into adulthood without appropriate intervention.

What is the difference between articulation disorder and phonological disorder?

Phonological disorders are more complex. Some sounds, like the K and the G, require subtle tongue or mouth movement changes. Sometimes, children unknowingly take shortcuts as they are learning to speak, like saying “gog” instead of “dog” or substituting “wat” for “rat.”

While these shortcuts are often a normal part of speech development, overreliance on shortcuts can cause a systematic problem in a child’s speech that goes beyond one or two articulation problems. 

An articulation error only affects a single sound, resulting from difficulties moving the mouth or tongue. On the other hand, when there is a consistent pattern of these articulation errors, it is called a phonological disorder.

What are examples of articulation disorder speech errors?

With articulation disorder, a person usually makes one or two articulation errors. Articulation disorder examples include:

  • The person has a lisp, and their ‘s’ and ‘z’ sounds are distorted.
  • They substitute sounds for both the ‘r’ and ‘er.’
  • They have substitutions for multiple letter sounds, such as ‘th,’ ‘l,’ ‘sh,’ and ‘ch.’

Is articulation disorder a speech disorder or a language disorder?

While the terms “speech disorders” and “language disorders” are sometimes used interchangeably, they are two separate diagnoses.

Speech is the ability to produce specific sounds and sound combinations. It is strictly verbal. Language, however, refers to the overall system of words and symbols. It can be written, spoken, or nonverbal. To that end, a speech disorder affects the way a person speaks, while a language disorder affects their comprehension and use of language.

Therefore, articulation disorder is a speech disorder, although it often occurs alongside certain language disorders. These co-occurring language disorders might include:

  • Expressive language disorder. Someone with expressive language disorder struggles to communicate their message when speaking. It is not a speech disability, speech disorder, or speech impairment. To learn more, see Expressive Language Disorder: Top Seven Questions Answered.
  • Receptive language disorder. Someone with receptive language disorder has difficulty understanding the meaning of what others say. See Receptive Language Disorder: Top Seven Questions Answered for more information.
  • Mixed receptive-expressive language disorder.  Expressive and receptive language can both be affected in the same person. When both expressive and receptive language disorders co-exist, it is called mixed receptive-expressive language disorder.

What other speech disorders are there besides articulation disorder?

Articulation disorder is not the only diagnosis that affects speech. Other disorders that can cause speech impairments and impediments include:

  • Apraxia of speech
  • Attention deficit/hyperactivity disorder (ADHD)
  • Autism -related speech disorders
  • Orofacial myofunctional disorders
  • Receptive disorders
  • Resonance disorders
  • Selective mutism
  • Stuttering and other fluency disorders

What does articulation disorder look like in adults?

Articulation disorder most commonly occurs in children. If left untreated, the symptoms can last well into adulthood.

Some speech impairments and articulation problems can begin in adulthood as the result of:

  • Brain injury
  • Degenerative neurological or motor disorder
  • Dental issues
  • Hearing loss

How does articulation disorder affect communication?

As a speech disorder, articulation disorder makes it difficult to form spoken words that other people will understand. For this reason, kids and adults with articulation disorder might struggle to talk on the phone, form friendships, or speak up in school or the workplace.

Speech disorders can be isolating, embarrassing, or frustrating. People with articulation disorder might prefer texting or email over phone calls and video meetings. Especially for school-aged children, articulation disorder symptoms can lead to low self-esteem, avoidance of social situations, and fear of public speaking. 

It’s critical to start speech therapy for a child with articulation disorder right away. This is not only to assist with the speech symptoms themselves, but also to help the child learn social skills and compensation techniques that make them feel comfortable and confident.

What are the treatment options for articulation disorder?

A speech-language pathologist (SLP) can diagnose and treat articulation disorder. Speech and language therapy may include the following:

  • Identifying the sounds the person cannot make.
  • Correcting the way a person creates certain sounds.
  • Learning how to correctly use the tongue, lips, and mouth to form letters and words.
  • Strengthening speech muscles.
  • Practicing sound formation at home.

A mobile device with speech therapy apps may help with practice at home. Many apps “gamify” speech therapy, making treatment more fun and engaging for kids. Your SLP or pediatrician is a great place to start for recommendations.

Where can I find help for articulation disorder?

Ability Central offers a searchable database of nonprofits specializing in communication difficulties like articulation disorder. Use our Service Locator tool to find an organization near you that can help with everything from diagnosis to treatment.

In addition, Ability Central hosts a library of articles on related conditions, including:

  • Attention deficit/hyperactivity disorder symptoms (ADHD)
  • Autism spectrum disorder
  • Expressive language disorder
  • Receptive language disorder


What Is a Speech Sound Disorder?


Speech sound disorder is a blanket term for a child’s difficulty in learning, articulating, or using the sounds and sound patterns of their language. These difficulties are usually clear when compared to the communication abilities of children within the same age group.

Speech developmental disorders may indicate challenges with motor speech. Here, a child experiences difficulty moving the muscles necessary for speech production. This child may also face reduced coordination when attempting to speak.

Speech sound disorders are recognized where speech patterns do not correspond with the movements/gestures made when speaking.  

Speech impairments are a common early childhood occurrence—an estimated 2% to 13% of children live with these difficulties. Children with these disorders may struggle with reading and writing. This can interfere with their expected academic performance. Speech sound disorders are often confused with language conditions such as specific language impairment (SLI).

This article will examine the distinguishing features of this disorder. It will also review factors responsible for speech challenges, and the different ways they can manifest. Lastly, we’ll cover different treatment methods that make managing this disorder possible.

Symptoms of Speech Sound Disorder

A speech sound disorder may manifest in different ways. This usually depends on the factors responsible for the challenge and on how severe it is.

There are different patterns of error that may signal a speech sound disorder. These include:

  • Removing a sound from a word
  • Including a sound in a word
  • Replacing hard to pronounce sounds with an unsuitable alternative
  • Difficulty pronouncing the same sound in different words (e.g., "pig" and "kit")
  • Repeating sounds or words
  • Lengthening words
  • Pauses while speaking
  • Tension when producing sounds
  • Head jerks during speech
  • Blinking while speaking
  • Shame while speaking
  • Changes in voice pitch
  • Running out of breath while speaking

It’s important to note that children develop at different rates. This can be reflected in how easily and accurately they produce sounds. But when a child repeatedly makes sounds or statements that are difficult to understand, this could indicate a speech sound disorder.

Diagnosis of Speech Sound Disorders

A speech-language pathologist can determine whether or not a child has a speech sound disorder.

This determination may be made in line with the DSM-5 diagnostic criteria. These guidelines require that:

  • The child experiences persistent difficulty with sound production that affects communication and speech comprehension
  • Symptoms of the disorder appear early in the child’s developmental stages
  • The disorder limits communication, affecting social interactions, academic achievement, and job performance
  • The disorder is not caused by other conditions, like a congenital disorder or an acquired condition such as hearing loss (hereditary disorders are, however, exempted)

Causes of Speech Sound Disorders

In many cases, there is no known cause of a speech sound disorder. However, several risk factors may increase the odds of developing a speech challenge. These include:

  • Gender: Male children are more likely to develop a speech sound disorder.
  • Family history: Children with family members living with speech disorders may acquire a similar challenge.
  • Socioeconomics: Being raised in a low socioeconomic environment may contribute to the development of speech and literacy challenges.
  • Pre- and post-natal challenges: Difficulties faced during pregnancy, such as maternal infections and stressors, may increase the chances of speech disorders in a child. Likewise, delivery complications, premature birth, and low birth weight could lead to speech disorders.
  • Disabilities: Down syndrome, autism, and other disabilities may be linked to speech sound disorders.
  • Physical challenges: Children with a cleft lip may experience speech sound difficulties.
  • Brain damage: These disorders may also be caused by an infection or trauma to a child’s brain. This is seen in conditions like cerebral palsy, where control of the muscles used for speech is affected.

Types of Speech Sound Disorders

By the time a child turns three, at least half of what they say should be properly understood. By ages four and five, most sounds should be pronounced correctly, although exceptions may arise when pronouncing “l,” “s,” “r,” “v,” and other similar sounds. By seven or eight, harder sounds should be properly pronounced.

A child with a speech sound disorder will continue to struggle to pronounce words, even past the expected age. Difficulty with speech patterns may signal one of the following speech sound disorders:

Disfluency

This refers to interruptions while speaking. Stuttering is the most common form of disfluency. It is recognized for recurring breaks in the free flow of speech. After the age of four, a child with disfluency will still repeat words or phrases while speaking. This child may include extra words or sounds when communicating; they may also make words longer by stressing syllables.

This disorder may cause tension while speaking. Other times, head jerking or blinking may be observed with disfluency. 

Children with this disorder often feel frustrated when speaking, and it may also cause embarrassment during interactions.

Articulation Disorder

When a child is unable to produce sounds properly, this may be caused by inexact placement, speed, pressure, or movement of the lips, tongue, or throat.

This usually signals an articulation disorder, where sounds like “r”, “l”, or “s” may be changed. In these cases, a child’s communication may be understood by only close family members.

Phonological Disorder

A phonological disorder is present where a child is unable to make the speech sounds expected of their age. Here, mistakes may be made when producing sounds. Other times, sounds like consonants may be omitted when speaking.  

Voice Disorder

A raspy voice may be an early sign of a voice disorder. Other indicators include voice breaks, a change in pitch, or an excessively loud or soft voice.

Children who run out of breath while speaking may also live with this disorder. Likewise, children with a voice disorder may sound very nasal, or may appear to have inadequate air coming out of their nose.

Childhood Apraxia of Speech

Childhood apraxia of speech occurs when a child lacks the proper motor skills for sound production. Children with this condition will find it difficult to plan and produce movements in the tongue, lips, jaw, and palate required for speech.

Treatment of Speech Sound Disorder

Parents of children with speech sound disorders may feel at a loss for the next steps to take. To avoid further strain on the child, it’s important to avoid showing excessive concern.

Instead, listening patiently to their needs, letting them speak without completing their sentences, and showing usual love and care can go a long way.

For professional help, a speech-language pathologist can assist with improving a child’s communication. These pathologists will typically use oral motor exercises to enhance speech.

These oral exercises may also include nonspeech oral exercises such as blowing, oral massages and brushing, cheek puffing, and blowing whistles.

Nonspeech oral exercises help to strengthen weak mouth muscles, and can help with learning the common ways of communicating.

Parents and children with speech sound disorders may also join support groups for information and assistance with the condition.

A Word From Verywell

It can be frustrating to witness a child's challenges with communication. But while it's understandable to long for typical communication from a child, the differences caused by speech disorders can be managed with the right care and supervision. Speaking to a speech therapist, and showing love to children with speech disorders, can be important first steps in overcoming these conditions.


By Elizabeth Plumptre


Speech Sound Disorders

Center for Childhood Communication

What are speech sound disorders?

As children learn to speak, they make words easier to say by deleting or changing sounds. As they grow older, children say more speech sounds. This makes their words easier to understand. If your child has a speech sound disorder, they cannot say sounds and words like other children their age.

Three types of speech sound disorders include:

  • Articulation disorder: difficulty saying certain speech sounds. You may notice your child drops, adds, distorts or substitutes sounds in words.
  • Phonological process disorder: where your child uses patterns of errors. The mistakes may be common in young children learning speech skills. When the errors continue past a certain age, it may be a disorder.
  • Disorders that involve a combination of articulation and phonological process disorders.

Some sound changes may be part of your child’s accent or family dialect, and not a true speech disorder.

Causes of speech sound disorders

Speech sound disorders can be caused in a few ways:

  • Developmental (your child was born with the disorder)
  • Motor or neurological (childhood apraxia of speech)
  • Structural (cleft lip and palate)
  • Sensory or perceptual conditions (like hearing loss)

Symptoms of speech sound disorders 

Signs of a speech sound disorder can include:

  • Substituting sounds in words (saying “wain” instead of “rain”)
  • Distorting sounds in words (saying “thoap” instead of “soap”)
  • Adding sounds to words (saying “puhlay” instead of “play”)
  • Saying only one syllable in a word (saying “bay” instead of “baby”)
  • Simplifying a word by repeating a syllable (saying “baba” instead of “bottle”)
  • Leaving out a consonant sound (saying “at” or “ba” instead of “bat” or saying “tar” instead of “star”)
  • Saying words differently each time (saying “buh” for “go” the first time, then “agah” for “go” the second time)

Testing and diagnosis for speech sound disorders

One of our speech-language pathologists (SLPs) may assess your child’s speech through formal testing, language samples, play-based activities, and observations of your child’s mouth structures and movements. Our SLP will determine if your child’s sound errors are expected for their age. If not, they may have a speech sound disorder. Treatment with a CHOP SLP can help your child with their speech development.

Treatment for speech sound disorders 

Our SLP will create goals to support your child’s speech development. Goals may include recognizing speech sounds and learning how to say speech sounds and words. Each child is unique and may have different needs. The therapy approach will depend on the specific diagnosis and your child’s needs. Once your child says a sound in therapy on their own, it will take time for them to say it consistently. Our SLP will work patiently with your child toward their speech development goals.

Speech-language therapy sessions involve you, your child, their other caregiver(s), and an SLP. Sessions may be play-based or structured with tabletop activities. This will depend on your child’s needs and abilities. Sessions also incorporate your child's interests and your family's culture. This leads to better engagement, relevance, learning and fun.

Early recognition and diagnosis of speech sound disorders can help your child overcome speech problems. With proper treatment and support, your child can learn how to communicate clearly.

The Royal Children's Hospital Melbourne


Speech problems – articulation and phonological disorders


Articulation and phonology (fon-ol-oji) refer to the way speech sounds are produced. A child with an articulation disorder has problems forming speech sounds properly. A child with a phonological disorder can produce the sounds correctly, but may use them in the wrong place.

When young children are growing, they develop speech sounds in a predictable order. It is normal for young children to make speech errors as their language develops; however, children with an articulation or phonological disorder will be difficult to understand when other children their age are already speaking clearly.

A qualified speech pathologist should assess your child if there are any concerns about the quality of the sounds they make, the way they talk, or their ability to be understood.

Signs and symptoms of articulation and phonological disorders

Articulation disorders

Articulation refers to making sounds. The production of sounds involves the coordinated movements of the lips, tongue, teeth, palate (top of the mouth) and respiratory system (lungs). There are also many different nerves and muscles used for speech.

If your child has an articulation disorder, they:

  • have problems making sounds and forming particular speech sounds properly (e.g. they may lisp, so that s sounds like th )
  • may not be able to produce a particular sound (e.g. they can't make the r sound, and say 'wabbit' instead of 'rabbit'). 

Phonological disorders

Phonology refers to the pattern in which sounds are put together to make words.

If your child has a phonological disorder, they:

  • are able to make the sounds correctly, but may use them in the wrong position in a word, or in the wrong word, e.g. a child may use the d sound instead of the g sound, and so they say 'doe' instead of 'go'
  • make mistakes with particular sounds in words, e.g. they can say k in 'kite' but with certain words will leave it out, saying 'lie' instead of 'like'.

Phonological disorders and phonemic awareness disorders (the understanding of sounds and sound rules in words) have been linked to ongoing problems with language and literacy. It is therefore important to make sure that your child gets the most appropriate treatment.

It can be much more difficult to understand children with phonological disorders compared to children with pure articulation disorders. Children with phonological disorders often have problems with many different sounds, not just one.

When to see a doctor

If you (or anyone else in regular contact with your child, such as their teacher) have any concerns about your child's speech, ask your GP or paediatrician to arrange an assessment with a speech pathologist. You can also arrange to see a speech pathologist directly; however, the fees may be higher.

A qualified speech pathologist should assess your child if there are any concerns about their speech. A speech pathologist can identify the cause, and plan treatment with your child and family. Treatment may include regular appointments and exercises for you to do with your child at home.

With appropriate speech therapy, many children with articulation or phonological disorders will have significant improvement in their speech.

Brain injuries

Articulation or phonological difficulties are generally not a direct result of brain injury. Children with an acquired brain injury may have different difficulties with their speech patterns. These are generally caused by dyspraxia or dysarthria. Some children with acquired brain injuries may also have difficulties with literacy and language. See our fact sheets Dysarthria and Dyspraxia.

Key points to remember

  • Articulation and phonology refer to the making of speech sounds.
  • Children with phonological disorders or phonemic awareness disorders may have ongoing problems with language and literacy. 
  • If there are any concerns about your child's speech, ask your GP to arrange an assessment with a qualified speech pathologist.
  • With appropriate speech therapy, many children with articulation or phonological disorders will have a big improvement in their speech.

For more information

  • Kids Health Info fact sheet: Verbal dyspraxia
  • Kids Health Info fact sheet: Word-finding difficulties
  • Speech Pathology Australia: Resources for the public
  • See your GP or speech pathologist.

Common questions our doctors are asked

Could my child just catch up eventually and grow out of an articulation/phonological disorder?

Some speech disorders can persist well into teenage and adult life. When a person is older, it is much more difficult to correct these problems. Most children with a diagnosed articulation/phonological disorder will need speech therapy.

What causes articulation and phonological disorders?

In most children, there is no known cause for articulation and phonological disorders. In some, the disorder may be due to a structural problem or from imitating behaviours and the creation of bad habits. Regardless of the cause, your child's speech therapist will be able to assist with the recommended treatment.

Developed by The Royal Children's Hospital Paediatric Rehabilitation Service and Speech Pathology department. Adapted with permission from a fact sheet from the Brain Injury Service at Westmead Children's Hospital. We acknowledge the input of RCH consumers and carers. 

Reviewed July 2018.  

This information is awaiting routine review. Please always seek the most recent advice from a registered and practising clinician.



Speech Sound Disorders in Children: An Articulatory Phonology Perspective

Aravind Kumar Namasivayam

1 Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada

2 Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada

Deirdre Coleman

3 Independent Researcher, Surrey, BC, Canada

Aisling O’Dwyer

4 St. James’s Hospital, Dublin, Ireland

Pascal van Lieshout

5 Rehabilitation Sciences Institute, University of Toronto, Toronto, ON, Canada

Speech Sound Disorders (SSDs) is a generic term used to describe a range of difficulties producing speech sounds in children ( McLeod and Baker, 2017 ). The foundations of clinical assessment, classification and intervention for children with SSD have been heavily influenced by psycholinguistic theory and procedures, which largely posit a firm boundary between phonological processes and phonetics/articulation ( Shriberg, 2010 ). Thus, in many current SSD classification systems the complex relationships between the etiology (distal), processing deficits (proximal) and the behavioral levels (speech symptoms) are under-specified ( Terband et al., 2019a ). It is critical to understand the complex interactions between these levels as they have implications for differential diagnosis and treatment planning ( Terband et al., 2019a ). There have been some theoretical attempts made towards understanding these interactions (e.g., McAllister Byun and Tessier, 2016 ) and characterizing speech patterns in children either solely as the product of speech motor performance limitations or purely as a consequence of phonological/grammatical competence has been challenged ( Inkelas and Rose, 2007 ; McAllister Byun, 2012 ). In the present paper, we intend to reconcile the phonetic-phonology dichotomy and discuss the interconnectedness between these levels and the nature of SSDs using an alternative perspective based on the notion of an articulatory “gesture” within the broader concepts of the Articulatory Phonology model (AP; Browman and Goldstein, 1992 ). The articulatory “gesture” serves as a unit of phonological contrast and characterization of the resulting articulatory movements ( Browman and Goldstein, 1992 ; van Lieshout and Goldstein, 2008 ). We present evidence supporting the notion of articulatory gestures at the level of speech production and as reflected in control processes in the brain and discuss how an articulatory “gesture”-based approach can account for articulatory behaviors in typical and disordered speech production ( van Lieshout, 2004 ; Pouplier and van Lieshout, 2016 ). Specifically, we discuss how the AP model can provide an explanatory framework for understanding SSDs in children. Although other theories may be able to provide alternate explanations for some of the issues we will discuss, the AP framework in our view generates a unique scope that covers linguistic (phonology) and motor processes in a unified manner.

Introduction

In clinical speech-language pathology (S-LP), the distinction between articulation and phonology and whether a speech sound error 1 arises from motor-based articulation issues or language/grammar based phonological issues has been debated for decades (see Shriberg, 2010 ; Dodd, 2014 ; Terband et al., 2019a for a comprehensive overview on this topic). The theory-neutral term Speech Sound Disorders (SSDs) is currently used as a compromise to bypass the constraints associated with the articulation versus phonological disorder dichotomy ( Shriberg, 2010 ). The present definition describes SSD as a range of difficulties producing speech sounds in children that can be due to a variety of limitations related to perceptual, speech motor, or linguistic processes (or a combination) of known (e.g., Down syndrome, cleft lip and palate) and unknown origin ( Shriberg et al., 2010 ; McLeod and Baker, 2017 ).

The history of causality research for childhood SSDs encompasses several theoretically motivated epochs ( Shriberg, 2010 ). While the first epoch (1920s-1950s) was driven by psychosocial and structuralist views aimed at uncovering distal causes, the second epoch (1960s to 1980s) was driven by psycholinguistic and sociolinguistic approaches and focused on proximal causes. The more recent third and fourth epochs reflect the utilization of advances in neurolinguistics (1990s) and human genome sequencing (post-genomic era; 2000s) and these approaches address both distal and proximal causes ( Shriberg, 2010 ). With these advances, several different systems for the classification of SSD subtypes in children have been proposed based on their distal or proximal cause (e.g., see Waring and Knight, 2013 ). Some of the major SSD classification systems include the Speech Disorders Classification System ( Shriberg et al., 2010 ), the Model of Differential Diagnosis ( Dodd, 2014 ) and the Stackhouse and Wells (1997) Psycholinguistic Framework. However, a critical problem in these classification systems as noted by Terband et al. (2019a) is that the relationships between the different levels of causation are underspecified. For example, the links between the etiology (distal; e.g., genetics), processing deficits (proximal; e.g., psycholinguistic factors), and the behavioral levels (speech symptoms) are not clearly elucidated. In other words, even though the term SSD is theory-neutral, the poorly specified links between the output level (behavioral) speech symptoms and higher-level motor/language/lexical/grammar processes limit efficient differential diagnosis, customization of intervention, and optimization of outcomes (see Terband et al., 2019a for a more detailed review on these issues). Thus, there is a critical need to understand the complex interactions between the different levels that ultimately cause the observable speech symptoms ( McAllister Byun and Tessier, 2016 ; Terband et al., 2019a ).

There have been several theoretical attempts at integrating phonetics and phonology in clinical S-LP. In this context, the characterization of speech patterns in children either solely as the product of performance limitations (i.e., challenges in meeting phonetic requirements arising from motor and anatomical differences) or purely as a consequence of phonological/grammatical competence has been challenged ( Inkelas and Rose, 2007 ; Bernhardt et al., 2010 ; McAllister Byun, 2012 ). McAllister Byun (2011 , 2012) and McAllister Byun and Tessier (2016) suggest a “phonetically grounded phonology” approach where individual-specific production experience and speech-motor development is integrated into the construction of children’s phonological/grammatical representations. The authors discuss this approach using several examples related to the neutralization of speech sounds in word onset (with primary stress) positions. They argue that positional velar fronting in these positions (where coronal sounds are substituted for velars) in children results from a combination of a jaw-dominated undifferentiated tongue gesture (e.g., Gibbon and Wood, 2002 ; see Section “Speech Delay” for details on velar fronting and undifferentiated tongue gestures) and the child’s subtle articulatory efforts (increased linguo-palatal contact into the coronal region) to replicate positional stress ( Inkelas and Rose, 2007 ; McAllister Byun, 2012 ). McAllister Byun (2012) demonstrated that by encoding this difficulty with a discrete tongue movement as a violable “MOVE-AS-UNIT” constraint, positional velar fronting could be formally discussed within the Harmonic Grammar framework ( Legendre et al., 1990 ). In such a framework the constraint inventory is dynamic and new constraints could be added on the basis of phonetic/speech motor requirements or removed over the course of neuro-motor maturation. In the case of positional velar fronting, the phonetically grounded “MOVE-AS-UNIT” constraint is eliminated from the grammar as the tongue-jaw complex matures ( McAllister Byun, 2012 ; McAllister Byun and Tessier, 2016 ).

In the present paper, we intend to reconcile the phonetic-phonology dichotomy and discuss the interconnectedness between these levels and the nature of SSDs using an alternative perspective. This alternative perspective is based on the notion of an articulatory “gesture” that serves as a unit of phonological contrast and characterization of the resulting articulatory movements ( Browman and Goldstein, 1992 ; van Lieshout and Goldstein, 2008 ). We discuss articulatory gestures within the broader concepts of the Articulatory Phonology model (AP; Browman and Goldstein, 1992 ). We present evidence supporting the notion of articulatory gestures at the level of speech perception, speech production and as reflected in control processes in the brain and discuss how an articulatory “gesture”-based approach can account for articulatory behaviors in typical and disordered speech production ( van Lieshout, 2004 ; van Lieshout et al., 2007 ; D’Ausilio et al., 2009 ; Pouplier and van Lieshout, 2016 ; Chartier et al., 2018 ). Although other theoretical approaches (e.g., Inkelas and Rose, 2007 ; McAllister Byun, 2012 ; McAllister Byun and Tessier, 2016 ) are able to provide alternate explanations for some of the issues we will discuss, the AP framework in our view generates a unique scope that covers linguistic (phonology) and motor processes in a unified and transparent manner to generate empirically testable hypotheses. There are other speech production models, but as argued in a recent paper, the majority of those are more similar to the Task Dynamics (TD) framework ( Saltzman and Munhall, 1989 ) in that they address specific issues related to the motor implementation stages (with or without feedback) and do not, to the same extent, include a principled account of phonological principles as formulated in AP ( Parrell et al., 2019 ).

Articulatory Phonology

This section on Articulatory Phonology (AP; Browman and Goldstein, 1992 ) lays the foundation for understanding speech sound errors in children diagnosed with SSDs from this specific perspective. The origins of the AP model date back to the late 1970s, when researchers at the Haskins laboratories developed a unique and alternative perspective on the nature of action and representation called the Task Dynamics model (TD; Saltzman and Munhall, 1989 ). This model was inspired by concepts of self-organization related to functional synergies as derived from the Dynamical Systems Theory (DST; Kelso, 1995 ).

DST in general describes behavior as the emergent product of a “ self organizing, multi-component system that evolves over time ” ( Perone and Simmering, 2017 , p. 44). Various aspects of DST have been studied and applied in a diverse range of disciplines such as meteorology (e.g., Zeng et al., 1993 ), oceanography (e.g., Dijkstra, 2005 ), economics (e.g., Fuchs and Collier, 2007 ), and medical sciences (e.g., Qu et al., 2014 ). Recently, there has also been an uptake of DST informed research related to different areas in cognitive and speech-language sciences, including language acquisition and change ( Cooper, 1999 ); language processing ( Elman, 1995 ); development of cognition and action ( Thelen and Smith, 1994 ; Spencer et al., 2011 ; Wallot and van Orden, 2011 ); language development ( van Geert, 1995 , 2008 ); 2nd language learning and development ( de Bot et al., 2007 ; de Bot, 2008 ); speech production (see van Lieshout, 2004 for a review; van Lieshout and Neufeld, 2014 ; van Lieshout, 2017 ); variability in speech production ( van Lieshout and Namasivayam, 2010 ; Jackson et al., 2016 ); connection between motor and language development ( Parladé and Iverson, 2011 ); connection between cognitive aspects of phonology and articulatory movements ( Tilsen, 2009 ); and visual word recognition ( Rueckle, 2002 ); and visuospatial cognitive development ( Perone and Simmering, 2017 ).

The role of DST in speech and language sciences, in particular with respect to speech disorders, is still somewhat underdeveloped, mainly because of the challenges related to applying specific DST analyses to the relatively short data series that can be collected in speech research ( van Lieshout, 2004 ). However, we chose to focus on the AP framework, as it directly addresses issues related to phonology and articulation using DST principles related to relatively stable patterns of behaviors (attractor states) that emerge when multiple components (neural, muscular, biomechanical) underlying these behaviors interact through time in a given context (self-organization) as shown in the time-varying nature of the relationship between coupled structures (synergies) that express those behaviors ( Saltzman and Munhall, 1989 ; Browman and Goldstein, 1992 ). Some examples of studies using this AP/DST approach can be found in papers on child-specific neutralizations in primary stress word positions ( McAllister Byun, 2011 ), articulation issues related to /r/ production ( van Lieshout et al., 2008 ), apraxia of speech ( van Lieshout et al., 2007 ), studies on motor speech processes involved in stuttering ( Saltzman, 1991 ; van Lieshout et al., 2004 ; Jackson et al., 2016 ), phonological development ( Rvachew and Bernhardt, 2010 ), SSDs ( Gildersleeve-Neumann and Goldstein, 2015 ), and in children with repaired cleft-lip histories ( van Lieshout et al., 2002 ). In the next few sections we will review the concept of synergies and the development of speech motor synergies, which are directly related to DST principles of self-organization and coupling, followed by how the AP model uses these concepts to discuss linguistic/phonological contrast.

Speech Motor Synergies

The concept of speech motor synergy was derived from DST principles based on the notion that complex systems contain multiple (sub)components that are (functionally and/or physically) coupled ( Kelso, 1995 ). This means that these (sub)components interact and function as a coordinated unit where patterns emerge and dissolve spontaneously based on self-organization, that is, without the need for a pre-specified motor plan ( Turvey, 1990 ). These patterns are generated due to internal and external influences relating to inter-relationships between the (sub)components themselves, and the constraints and opportunities for action provided in the environment ( Smith and Thelen, 2003 ). Constraints or specific boundary conditions that influence pattern emergence may relate to physical, physiological, and functional/task constraints (e.g., Diedrich and Warren, 1995 ; Kelso, 1995 ; van Lieshout and Namasivayam, 2010 ). Such principles of pattern formation and coupling have already been demonstrated in physical (e.g., Gunzig et al., 2000 ) and biological systems (e.g., Haken, 1985 ), including neural network dynamics (e.g., Cessac and Samuelides, 2007 ). Haken et al. (1985) , Kelso et al. (1985) , and Turvey (1990) at the time were among the first to apply these principles also to movement coordination. Specifically, a synergy in the context of movement is defined as a functional assembly of (sub)components (e.g., neurons, muscles, joints) that are temporarily coupled or assembled in a task-specific manner, thus constrained to act as a single coordinated unit (or a coordinative structure; Kelso, 1995 ; Kelso et al., 2009 ). In the motor control literature, coordinative structures or functional synergies are typically modeled as (non-linear) oscillatory systems ( Kelso, 1995 ; Newell et al., 2003 ; Profeta and Turvey, 2018 ). By strengthening or weakening the coupling within and between the system’s interacting (sub)components, synergies may be tuned or altered. For movement control, the synergy tuning process occurs with development and learning or may change due to task demands or constraints (e.g., Smith and Thelen, 2003 ; Kelso et al., 2009 ).

With regard to speech production, perturbation paradigms similar to the ones used in other motor control studies have demonstrated critical features of oral articulatory synergies (e.g., Folkins and Abbs, 1975 ; Kelso and Tuller, 1983 ; van Lieshout and Neufeld, 2014 ), which in AP terms can be referred to as gestures. Functional synergies in speech production comprise laryngeal and supra-laryngeal structures (tongue, lips, jaw) coupled to achieve a single constriction (location and degree) goal. Perturbing the movement of one structure will lead to compensatory changes in all functionally coupled structures (including the articulator that is perturbed) to achieve the synergistic goal ( Kelso and Tuller, 1983 ). For example, when the jaw is perturbed in a downward direction during a bilabial stop closure, there is an immediate compensatory lowering of the upper lip and an increased compensatory elevation of the lower lip ( Folkins and Abbs, 1975 ). The changes in the nature and stability of movement coordination patterns (i.e., within and between specific speech motor synergies) as they evolve through time can be captured quantitatively via order parameters such as relative phase. Relative phase values are expressed in degrees or radians, and the standard deviation of relative phase values can provide an index of the stability of the couplings ( Kelso, 1995 ; van Lieshout, 2004 ). Whilst order parameters capture the relationship between the system’s interacting (sub)components, changes in order parameter dynamics can be triggered by alterations in a set of control parameters. For example, changes in movement rate may destabilize an existing coordination pattern and result in a different coordination pattern as observed during gait changes (such as switching from a walk to a trot and then a gallop) as a function of required locomotion speed ( Hoyt and Taylor, 1981 ; Kelso, 1995 ). For speech, such distinct behavioral patterns as a function of rate have not been established. However, in the coordination between lower jaw, upper and lower lip as part of a lip closing/opening synergy, typical speakers have shown a strong tendency for reduced covariance in the combined movement trajectory, despite individual variation in the actual sequence and timing of individual movements ( Alfonso and van Lieshout, 1997 ). This can be considered a characteristic of an efficient synergy. The same study also included people who stutter and reported more instances of not showing reduced covariance in this group, in line with the notion that stuttering is related to limitations in speech motor skill ( van Lieshout et al., 2004 ; Namasivayam and van Lieshout, 2011 ).
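As a rough illustration of how such coupling measures are computed, the following is a minimal sketch of a continuous relative phase estimate in the style common to the coordination dynamics literature (e.g., Kelso, 1995; van Lieshout, 2004); the exact normalization used in the specific studies cited above may differ. For two coupled articulators with centered, amplitude-normalized positions $x_1(t)$ and $x_2(t)$ and velocities $\dot{x}_1(t)$ and $\dot{x}_2(t)$, a phase angle is estimated for each movement signal,

$$\theta_i(t) = \arctan\!\left( \frac{\dot{x}_i(t)/\omega_i}{x_i(t)} \right), \quad i = 1, 2,$$

where $\omega_i$ is a normalizing (cycle) frequency, and the relative phase is the difference

$$\phi(t) = \theta_1(t) - \theta_2(t).$$

The mean of $\phi(t)$ characterizes the coordination pattern (values near 0 degrees for in-phase and near 180 degrees for anti-phase coupling), while the standard deviation of $\phi(t)$ provides the stability index referred to above: lower variability indicates a more stable coupling between the articulators.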

Recent work has provided more insights regarding cortical networks in control of this coordination between speech articulators ( Bouchard et al., 2013 ; Chartier et al., 2018 ). Chartier et al. (2018) mapped acoustic and articulatory kinematic trajectories to neural electrode sites in brains of patients, as part of their clinical treatment of epilepsy. Similar to limb control studies that discovered single motor cortical neurons that encoded complex coordinated arm and hand movements ( Aflalo and Graziano, 2006 ; Saleh et al., 2012 ), coordinated movements involving articulators for specific vocal-tract configurations were encoded at the single electrode level in the ventral sensorimotor cortex (vSMC). That is, activity in the vSMC reflects the synergies used in speech production rather than individual movements. Interestingly, the study found four major clusters of articulatory kinematic trajectories that encode the main vocal tract configurations (labial, coronal, dorsal, and vocalic) necessary to broadly represent the production of American English sounds. The encoded articulatory kinematic trajectories exhibited damped oscillatory dynamics as inferred from articulatory velocity and displacement relationships (phase portraits). These findings support theories that envision vocal tract gestures as articulatory units of speech production characterized by damped oscillatory dynamics [ Fowler et al., 1980 ; Browman and Goldstein, 1989 ; Saltzman and Munhall, 1989 ; see Section Articulatory Phonology and Speech Sound Disorders (SSD) in Children].
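To make the notion of damped oscillatory dynamics concrete, the Task Dynamics literature cited above models each gesture as a critically damped second-order (point-attractor) system acting on a tract variable (Saltzman and Munhall, 1989). The following is a minimal rendering of that standard formulation, with generic parameter names, rather than a description of Chartier et al.'s specific analysis. For a tract variable $z$ (e.g., lip aperture or tongue-tip constriction degree) with gestural target $z_0$,

$$m\,\ddot{z}(t) + b\,\dot{z}(t) + k\,\bigl(z(t) - z_0\bigr) = 0,$$

where $k$ is the gestural stiffness (governing how quickly the constriction goal is approached), $m$ is a mass parameter (typically set to 1), and critical damping, $b = 2\sqrt{mk}$, ensures that the synergy settles at its target without oscillating around it. Phase portraits (plots of $\dot{z}$ against $z$) of such a system trace the kind of damped trajectories that Chartier et al. (2018) report for the encoded articulatory kinematics.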

The notion of gestures at the level of speech perception has been discussed in the Theory of Direct Perception ( Fowler, 1986 ; Fowler and Rosenblum, 1989 ). This theory posits that listeners perceive attributes of vocal tract gestures, arguing that this reflects the common code shared by both the speaker and listener ( Fowler, 1986 , 1996 , 2014 ; Fowler and Rosenblum, 1989 ). These concepts are supported by a line of research studies which propose that the minimal objects of speech perception reflect gestures realized by the action of coordinative structures as transmitted by changes to the acoustic (and visual) signal, rather than units solely defined by a limited set of specific acoustic features ( Diehl and Kluender, 1989 ; Fowler and Rosenblum, 1989 ; Fowler, 1996 ). The Direct Perception theory thus suggests that speech perception is driven by the structural global changes in external sensory signals that allow for direct recognition of the original (gesture) source and does not require special speech modules or the need to invoke the speech motor system ( Fowler and Galantucci, 2005 ). Having a common unit for production and perception provides a useful framework to understand the broader nature of both sensory and motor involvement in speech disorders. For example, this can inform future studies to investigate how problems in processing acoustic information and thus perceiving the gestures from the speaker, may interfere with the tuning of gestures for production during development. Similarly, issues related to updating the state of the vocal tract through somato-sensory feedback (a critical component in TD; Saltzman and Munhall, 1989 ; Parrell et al., 2019 ) during development may also lead to the mistuning of gestures in production, potentially leading to the type of errors in vocal tract constriction degree and/or location as discussed in Section “Articulatory Phonology and Speech Sound Disorders (SSD) in Children.” However, for the current paper, the focus will be on production aspects only.

Development of Speech Motor Synergies

In this section, we discuss the development and refinement of articulatory synergies and how these processes facilitate the emergence of speech sound contrasts. Observational and empirical data from several speech motor studies (discussed below) were synthesized to create the timeline map of the development and refinement of speech motor control and articulatory synergies illustrated in Figure 1. Articulatory synergies in infants have distinct developmental schedules. Speech production in infants is thought to be restricted to sounds primarily supported by the mandible (MacNeilage and Davis, 1990; Davis and MacNeilage, 1995; Green et al., 2000). Early mandibular movements (∼1 year or less) are ballistic in nature and restricted to closing and opening gestures, because infants lack the fine force control required for varied jaw heights (Locke, 1983; Kent, 1992; Green et al., 2000). Vowel productions in the first year are generally limited to low, non-front, and non-rounded vowels, implying that the tongue barely elevates from the jaw and that there is limited facial muscle (lip) interaction (i.e., synergy) with the jaw (Buhr, 1980; Kent, 1992; Otomo and Stoel-Gammon, 1992; but see Giulivi et al., 2011; Diepstra et al., 2017).

Figure 1. Data-driven timeline map of the development of speech motor control and articulatory synergies.

Sound sequences that do not require complex timing and coordination within/between articulatory gestures are easier to produce and the first to emerge (Green et al., 2000; Green and Nip, 2010; Figure 1). For instance, young children are unable to coordinate the laryngeal voicing gesture with supra-laryngeal articulation and hence master voiced consonants and syllables earlier than voiceless ones (Kewley-Port and Preston, 1974; Grigos et al., 2005). The synergistic interaction between laryngeal and supra-laryngeal structures underlying voicing contrasts is acquired closer to 2 years of age (∼20–23 months; Grigos et al., 2005), and follows the maturation of jaw movements (around 12–15 months of age; Green et al., 2002; Figure 1) and/or jaw stabilization (Yu et al., 2014).

In children up to and around 2 years of age, there is limited fine motor control of jaw height (or jaw grading) and weak jaw-lip synergies during bilabial production, but relatively stronger inter-lip spatial and temporal coupling (Green et al., 2000, 2002; Nip et al., 2009; Green and Nip, 2010). A possible consequence of these interactions is that vowel production is limited to extremes (high or low: /i/, /u/, /o/, and /ɑ/), and lip rounding/retraction is only present when the jaw is in a high position (Wellman et al., 1931; Kent, 1992; Figure 1). As speech-related jaw-lip synergies are still emerging, it is not surprising that children can execute lip rounding and retraction only when degrees of freedom are reduced (i.e., when the jaw is held in a high position). Such a reduction in degrees of freedom in emerging synergies has also been observed in other, non-speech systems (Bernstein, 1996). Interestingly, although the relatively strong inter-lip coordination pattern found in 2-year-olds is facilitative for bilabial productions, it needs to further differentiate to allow independent control of the functionally linked upper and lower lips before labio-dental fricatives (/f/ and /v/) can emerge (Green et al., 2000; Figure 1). This process occurs between the ages of 2 and 3 years (Stoel-Gammon, 1985; Green et al., 2000). Green et al. (2000, 2002) suggest that upper and lower lip movements become adult-like, with an increasing contribution of the lower lip toward bilabial closure, between the ages of 2 and 6 years. Further control over jaw height (with the addition of /ε/ and /ɔ/) and lingual independence from the jaw develops around 3 years of age (Kent, 1992). The latter is evident from the production of reliable lingual gliding movements (diphthongs: /aʊ/, /ɔɪ/, and /aɪ/) in the anterior-posterior dimension (Wellman et al., 1931; Kent, 1992; Otomo and Stoel-Gammon, 1992; Donegan, 2013). Control of this dimension also coincides with the emergence of coronal consonants (e.g., /t/ and /d/; Smit et al., 1990; Goldman and Fristoe, 2000). By 4 years of age, all front and back vowels are within the spoken repertoire of children, suggesting a greater degree of control over jaw height and improved tongue-jaw synergies (Kent, 1992). Intriguingly, front vowels and lingual coronal consonants emerge relatively late (Wellman et al., 1931; Kent, 1992; Otomo and Stoel-Gammon, 1992), possibly because of the fine adjustments required by the tongue tip and blade to adapt to mandibular angles. Since velar consonants and back vowels are produced by the tongue dorsum, which lies closer to the origin of rotational movement (i.e., the condylar axis), they are less affected than front vowels and coronal consonants (Kent, 1992; Mooshammer et al., 2007). With maturation and experience, finer control over the tongue musculature develops, and children begin to acquire rhotacized (retroflexed or bunched tongue) vowels (/ɝ/ and /ɚ/) and tense/lax contrasts (Kent, 1992).

The later development of refined tongue movements is not surprising, since the tongue is a hydrostatic organ with distinct functional segments (e.g., tongue tip, tongue body; Green and Wang, 2003; Noiray et al., 2013). Gaining motor control of the tongue and coordinating it with neighboring articulatory gestures is difficult (Kent, 1992; Smyth, 1992; Nittrouer, 1993). Cheng et al. (2007) demonstrated a lower degree of, and more variable, tongue tip-to-jaw temporal coupling in 6- to 7-year-old children relative to adults (Figure 1). This contrasts with the earlier developing lip-jaw synergy reported by Green et al. (2000), wherein by 6 years of age children's temporal coupling of lip and jaw was similar to that of adults. The coordination of the tongue's subcomponents follows different maturation patterns. By 4–5 years, synergies that use the back of the tongue to assist the tongue tip during alveolar productions are adult-like (Noiray et al., 2013), while synergies relating to tongue tip release and tongue body backing are not fully mature (Nittrouer, 1993; Figure 1). The extent and variability of lingual vowel-on-consonant coarticulation between 6 and 9 years of age are greater than in adults, implying that children are still refining their tuning of articulatory gestures (Nittrouer, 1993; Nittrouer et al., 1996, 2005; Cheng et al., 2007; Zharkova et al., 2011).

These findings suggest that articulatory synergies have varying schedules of development: lip-jaw synergies develop earlier than tongue-jaw or within-tongue synergies (Cheng et al., 2007; Terband et al., 2009). Most of this work has examined intra-gestural coordination (i.e., between individual articulators within a gesture), but it is clear that the development of both intra- and inter-gestural synergies is non-uniform and protracted (Whiteside et al., 2003; Smith and Zelaznik, 2004). Variability of intra-gestural synergies (e.g., upper and lower lip, or lower lip-jaw) in 4- and 7-year-olds is greater than in adults; it decreases with age and plateaus between 7 and 12 years (Smith and Zelaznik, 2004). Adult-like patterns are reached at around 14 years and likely continue to refine and stabilize up to the age of 30 years (Smith and Zelaznik, 2004; Schötz et al., 2013; Figure 1). Overall, these findings suggest that the development of speech motor control is hierarchical, sequential, non-uniform, and protracted.

Gestures, Synergies and Linguistic Contrast

As mentioned above, within the AP model the fundamental units of speech are articulatory "gestures": higher-level, abstract specifications for the formation and release of task-specific, linguistically relevant vocal tract constrictions. The specific goals of each gesture are defined as Tract Variables (Figure 2) and relate to vocal tract constriction location (labial, dental, alveolar, postalveolar, palatal, velar, uvular, and pharyngeal) and constriction degree (closed, critical, narrow, mid, and wide; Figure 2). While constriction degree is akin to manner of production (e.g., fricatives /s/ and /z/ are assigned a "critical" value; stops /p/ and /b/ are given a "closed" value), constriction location allows for distinctions in place of articulation (Browman and Goldstein, 1992; Gafos, 2002).
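To make the Tract Variable terminology concrete, the following minimal Python sketch (our illustration only, not part of any AP/TD implementation such as TADA) encodes two gestures as constriction-location and constriction-degree goals, using the category labels listed above; the class and field names are invented for this example.

from dataclasses import dataclass

# Category labels taken from the constriction locations and degrees listed in the text.
CONSTRICTION_LOCATIONS = ("labial", "dental", "alveolar", "postalveolar",
                          "palatal", "velar", "uvular", "pharyngeal")
CONSTRICTION_DEGREES = ("closed", "critical", "narrow", "mid", "wide")

@dataclass
class Gesture:
    tract_variable: str  # e.g., lip aperture, tongue tip constriction
    location: str        # constriction location (place of articulation)
    degree: str          # constriction degree (manner-like goal)

# A stop and a fricative differ mainly in their constriction degree goal.
stop_b = Gesture("lip aperture", "labial", "closed")         # /b/, /p/: full closure
fricative_s = Gesture("tongue tip", "alveolar", "critical")  # /s/, /z/: turbulence
assert stop_b.location in CONSTRICTION_LOCATIONS and stop_b.degree in CONSTRICTION_DEGREES
print(stop_b)
print(fricative_s)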

Figure 2. A schematic representation of the AP model with key components (Nam and Saltzman, 2003; Goldstein et al., 2007). TT, tongue tip; TB, tongue body; CD, constriction degree; CL, constriction location; Vel (or V in panel 3), velum; GLO (or G in panel 3), glottis; LA, lip aperture; LP, lip protrusion (see text for more details).

The targets of each Tract Variable are implemented by specifying the lower-level functional synergy of individual articulators (e.g., the articulator set of the lip closure gesture: upper lip, lower lip, jaw) and their associated muscle ensembles (e.g., orbicularis oris, mentalis, risorius), which allows for the flexibility needed to achieve the task goal (Saltzman and Kelso, 1987; Browman and Goldstein, 1992; Alfonso and van Lieshout, 1997; Gafos, 2002; Figure 2). The coordinated actions of the articulators toward a particular value (target) of a Tract Variable are modeled using damped mass-spring equations (Saltzman and Munhall, 1989). The variables in these equations specify the final position, the time constant of constriction formation (i.e., the speed at which the constriction should be formed; stiffness), and a damping factor that prevents articulators from overshooting their targets (Browman and Goldstein, 1989; Kelso et al., 1986a, b; Saltzman and Munhall, 1989). For example, if the goal is to produce a constriction at the lips (bilabial closure gesture), the distance between the upper lip and lower lip (lip aperture) is set to zero. The resulting movements of individual articulators lead to changes in vocal tract geometry, with predictable aerodynamic and acoustic consequences.
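The damped mass-spring behavior of a Tract Variable can be illustrated with a short numerical sketch. This is only a toy simulation in the spirit of the point-attractor description above; the mass, stiffness, and initial aperture values are arbitrary choices for illustration, not parameters taken from the TD literature.

import numpy as np

# Toy model: one Tract Variable (lip aperture) driven to its target by a
# critically damped mass-spring system, integrated with a simple Euler scheme.
m, k = 1.0, 200.0                 # mass and stiffness (stiffness sets movement speed)
b = 2.0 * np.sqrt(m * k)          # critical damping: approach the target without overshoot
target = 0.0                      # bilabial closure gesture: lip aperture goal = 0
x, v = 12.0, 0.0                  # initial lip aperture (mm) and velocity
dt = 0.001                        # time step (s)

for _ in range(600):              # simulate 0.6 s of the closing movement
    a = (-b * v - k * (x - target)) / m   # acceleration from the mass-spring law
    v += a * dt
    x += v * dt

print(f"lip aperture after 0.6 s: {x:.3f} mm (target {target:.1f}, approached without overshoot)")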

The flexibility within the functional articulatory synergy implies that task-level goals can be achieved with quantitatively different contributions from individual articulatory components, as observed in response to articulatory perturbations or in adaptation to the linguistic context in which the gesture is produced (Saltzman and Kelso, 1987; Browman and Goldstein, 1992; Alfonso and van Lieshout, 1997; Gafos, 2002). In other words, the task-level goals are discrete, invariant, or context-free, but the resulting articulatory motions are context-dependent (Browman and Goldstein, 1992). Gestures are phonological primitives that are used to achieve linguistic contrasts when combined into larger sequences (e.g., segments, words, phrases). The presence or absence of a gesture, or changes in gestural parameters such as constriction location, results in phonologically contrastive units. For example, the difference between "bad" and "ban" is the presence of a velum gesture in the latter, while "bad" and "pad" are differentiated by adding a glottal gesture for the onset of "pad." Parameter differences in gestures, such as the degree of vocal tract constriction, yield phonological contrast by altering manner of production (e.g., "but" and "bus"; tongue tip constriction degree: complete closure for /t/ vs. a critical opening value that results in turbulence for /s/) (Browman and Goldstein, 1986, 1992; van Lieshout et al., 2008).

Gestures have an internal temporal structure characterized by landmarks (e.g., onset, target, release), which can be aligned to form segments, words, sentences, and so on (Gafos, 2002). These gestures and their timing relationships are represented by a gestural score in the AP model (Figure 2; Browman and Goldstein, 1992). Gestural scores are estimated from articulatory kinematic data or speech acoustics by locating kinematic/acoustic landmarks to determine the timing relationships between gestures (Nam et al., 2012). The timing relationships in the gestural score are typically expressed as relative phase values (Kelso et al., 1986a, b; van Lieshout, 2004). Words may differ by altering the relative phasing between their component gestures. For example, although the gestures are identical in "pat" and "tap," the relative phasing between the gestures is different (Saltzman and Byrd, 2000; Saltzman et al., 2006; Goldstein et al., 2007). As mentioned above, the coordination between individual gestures in a sequence is referred to as inter-gestural coupling/coordination (van Lieshout and Goldstein, 2008). Inter-gestural timing is not rigidly specified across an entire utterance but is sensitive to peripheral (articulatory) events (Saltzman et al., 1998; Namasivayam et al., 2009; Tilsen, 2009). A coupling between inter-gestural timing oscillators and feedback signals arising from the peripheral articulators was identified in experimental work by Saltzman et al. (1998): unanticipated lip perturbation during discrete and repetitive production of the syllable /pa/ resulted in phase shifts in the relative timing between the two independent gestures (lip closure and laryngeal closure) for the phoneme /p/ and between successive /pa/ syllables (Saltzman et al., 1998). This confirms the critical role of somato-sensory information in the TD model (Saltzman and Munhall, 1989; Parrell et al., 2019).
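As a minimal illustration of a gestural score, the sketch below lists the same three gestures for "pat" and "tap" with different activation windows. The time values are invented purely for illustration (actual scores specify relative phases between gestures rather than fixed absolute times), and the small helper only reports which gestures overlap in time (i.e., are co-produced).

# Toy gestural scores for "pat" and "tap": the same gesture inventory, different timing.
# Activation windows (in seconds) are invented for illustration only.
pat = [
    ("lip closure (p)",        0.00, 0.10),
    ("tongue body vowel (a)",  0.05, 0.30),
    ("tongue tip closure (t)", 0.25, 0.35),
]
tap = [
    ("tongue tip closure (t)", 0.00, 0.10),
    ("tongue body vowel (a)",  0.05, 0.30),
    ("lip closure (p)",        0.25, 0.35),
]

def coproduced(score):
    """Return pairs of gestures whose activation intervals overlap in time."""
    pairs = []
    for i, (g1, on1, off1) in enumerate(score):
        for g2, on2, off2 in score[i + 1:]:
            if on1 < off2 and on2 < off1:
                pairs.append((g1, g2))
    return pairs

print("pat:", coproduced(pat))
print("tap:", coproduced(tap))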

Dynamical systems can express different self-organizing coordination patterns, but for many systems, certain patterns of coordination seem to be preferred over others. These preferred patterns are induced by “attractors” ( Kelso, 1995 ), which reflect stable states in the coupling dynamics of such a system 2 . The coupling relationships used in speech production are similar to those identified for limb control systems ( Kelso, 1995 ; Goldstein et al., 2006 ) and capitalize on intrinsically stable modes of coordination (specifically, in-phase and anti-phase modes; Haken et al., 1985 ). These are patterns that are naturally achieved without training or learning; however, they are not equally stable ( Haken et al., 1985 ; Nam et al., 2009 ). In-phase coordination patterns, for instance, are relatively more stable than anti-phase patterns ( Haken et al., 1985 ; Kelso, 1995 ; Goldstein et al., 2006 ). Other coordination patterns are possible, but they are more variable, may require higher energy expenditure and can only be acquired with significant training ( Kelso, 1984 ; Peper et al., 1995 ; Peper and Beek, 1998 ; Nam et al., 2009 ). For example, when participants are asked to oscillate two limbs or fingers, they spontaneously switch coordination patterns from the less stable anti-phase to the more stable in-phase as the required movement frequency increases, but not vice versa ( Kelso, 1984 ; Haken et al., 1985 ; Peper et al., 2004 ). These two modes of coordination likely form the basis of syllable structure ( Goldstein et al., 2006 ). The onset consonant (C) and vowel (V) planning oscillators (see below) are said to be coupled in-phase, while the CC onset clusters and the nucleus (V) and coda (C) gestures are coupled in anti-phase mode. As the in-phase coupling mode is more stable, this can explain the dominance of CV syllable structure during babbling and speech development as well as across languages ( Goldstein et al., 2006 ; Nam et al., 2009 ; Giulivi et al., 2011 ).
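The relative stability of in-phase versus anti-phase coordination, and the switch to in-phase at higher movement rates, can be sketched with the Haken-Kelso-Bunz (HKB) relative-phase equation dφ/dt = -a sin(φ) - 2b sin(2φ), where φ is the relative phase between two coordinated components and the ratio b/a shrinks as movement frequency increases (Haken et al., 1985). The code below is an illustrative integration with arbitrary parameter values, not a fit to any speech or limb data.

import numpy as np

def settle(b_over_a, phi0=np.pi - 0.3, a=1.0, dt=0.01, steps=5000):
    """Integrate d(phi)/dt = -a*sin(phi) - 2b*sin(2*phi) and return the final relative phase."""
    b = b_over_a * a
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * np.sin(phi) - 2.0 * b * np.sin(2.0 * phi))
    return phi

# Large b/a (slow movement rate): anti-phase (phi near pi) remains a stable pattern.
# Small b/a (fast movement rate): anti-phase loses stability and the relative phase
# falls into the in-phase pattern (phi near 0), as in the limb coordination experiments.
for b_over_a in (1.0, 0.1):
    print(f"b/a = {b_over_a}: starting near anti-phase, settles at phi = {settle(b_over_a):.2f} rad")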

Using the TD framework in the AP model ( Nam and Saltzman, 2003 ), speech production planning processes and dynamic multi-frequency coupling between gestural and rhythmic (prosodic) systems have been explained using the notion of coupled oscillator models ( Goldstein et al., 2006 ; Nam et al., 2009 ; Tilsen, 2009 ; Gafos and Goldstein, 2012 ). The coupled oscillator models for speech gestures are associated with non-linear (limit cycle) planning level oscillators which can be coordinated in relative time by specifying a phase relationship between them. During an utterance, the planning oscillators for multiple gestures generate a representation of the various (and potentially competing) coupling specifications, referred to as a coupling graph ( Figure 2 ; Saltzman et al., 2006 ). The activation of each gesture is then triggered by its respective oscillator after they settle into a stable pattern of relative phasing during the planning process ( van Lieshout and Goldstein, 2008 ; Nam et al., 2009 ). In this manner, the coupled oscillator model has been used to control the relative timing of multiple gestural activations during word or sentence production. To recap, individual gestures are modeled as critically damped mass-spring systems with a fixed-point attractor where speed, amplitude and duration are manipulated by adjustments to dynamic parameter specifications (e.g., damping and stiffness variables). In contrast, gestural planning level systems are modeled using limit cycle oscillators and their relative phases are controlled by potential functions ( Tilsen, 2009 ; Pouplier and Goldstein, 2010 ).
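The idea of planning oscillators settling into a stable pattern of relative phasing specified by a coupling graph can be sketched with simple coupled phase oscillators. The sketch below is our illustration of the general idea only (coupling strengths, frequencies, and initial phases are invented, and actual implementations such as TADA are considerably richer): one oscillator per gesture of a CV-plus-coda syllable, with an in-phase onset-vowel edge and an anti-phase vowel-coda edge.

import numpy as np

rng = np.random.default_rng(1)
names = ["onset C", "vowel V", "coda C"]
theta = rng.uniform(0.0, 2.0 * np.pi, size=3)   # initial planning-oscillator phases
omega = np.full(3, 2.0 * np.pi * 4.0)           # shared intrinsic frequency (4 Hz)

# Coupling graph: (i, j, target value of theta_i - theta_j).
edges = [(0, 1, 0.0),       # onset consonant in-phase with the vowel
         (1, 2, np.pi)]     # vowel anti-phase with the coda consonant
K, dt = 5.0, 0.001

for _ in range(4000):                           # let the relative phases settle
    dtheta = omega.copy()
    for i, j, psi in edges:
        dtheta[i] += K * np.sin(theta[j] - theta[i] + psi)
        dtheta[j] += K * np.sin(theta[i] - theta[j] - psi)
    theta += dt * dtheta

wrap = lambda x: (x + np.pi) % (2.0 * np.pi) - np.pi   # map a phase difference to (-pi, pi]
print(f"{names[0]} vs {names[1]}: {wrap(theta[0] - theta[1]):+.2f} rad (target 0)")
print(f"{names[1]} vs {names[2]}: {wrap(theta[1] - theta[2]):+.2f} rad (target +/-3.14)")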

Similar to the bidirectional relationship between inter-gestural timing and peripheral articulatory state, interactions between gestural and rhythmic level oscillators have also been noted. To explain the dynamic interactions between gestural and rhythmic (stress and prosody) systems, speech production may rely on a multi-frequency system of coupled oscillators similar to that proposed for limb movements (Peper et al., 1995; Tilsen, 2009). The coupling strength and stability in such systems vary not only as a function of the type of phasing (in-phase or anti-phase), but also with the complexity of coupling (the ratio of intrinsic oscillator frequencies of the coupled structures), the movement amplitude, and the movement rate at which the coupling needs to be maintained (Peper et al., 1995; Peper and Beek, 1998; van Lieshout and Goldstein, 2008; van Lieshout, 2017). For example, rhythmic movement between the limbs has been modeled as a system of coupled oscillators that exhibit (multi)frequency locking. The most stable coupling mode is when two or more structures (oscillators) are frequency locked in a lower-order (e.g., 1:1) ratio. Multi-frequency locking for the upper limbs is possible at higher-order ratios of 3:5 or 5:2 (e.g., during complex drumming), but only at slower movement frequencies. As the required movement rate increases, the complex frequency coupling ratios exhibit transitions to simpler and inherently more stable ratios (Peper et al., 1995; Haken et al., 1996). Studies on rhythmic limb coupling show that increases in movement frequency are accompanied by decreases in coupling strength and coordination stability. The increase in movement frequency or rate may be associated with a drop in movement amplitude, which mediates the differential loss of stability across the frequency ratios (Haken et al., 1996; Goldstein et al., 2007; van Lieshout, 2017). However, smaller movement amplitude in itself (independent of duration and rate) can also decrease coupling strength and coordination stability (Haken et al., 1985; Peper et al., 2008; van Lieshout, 2017). Amplitude changes are presumably used to stabilize the output of a coupled neural oscillatory system. Smaller movement amplitudes may decrease feedback gain, resulting in a reduction of neural oscillator-effector coupling strength and stability (Peper and Beek, 1998; Williamson, 1998; van Lieshout et al., 2004; van Lieshout, 2017). Larger movement amplitudes facilitate neural phase entrainment by enhancing feedback signals, but a certain minimum sensory input is required for entrainment to occur (Williamson, 1998; Ridderikhoff et al., 2005; Peper et al., 2008; Kandel, 2013; van Lieshout, 2017). Several studies have demonstrated the critical role of movement amplitude in coordination stability in different types of speech disorders, such as stuttering and apraxia (van Lieshout et al., 2007; Namasivayam et al., 2009; for a review see Namasivayam and van Lieshout, 2011).

Such complex couplings between multi-frequency oscillators may be found at different levels in the speech system, for example between slower vowel production and faster consonantal movements (Goldstein et al., 2007), or between shorter-time-scale gestures and longer-time-scale rhythmic units (moras, syllables, feet, and phonological phrases; Tilsen, 2009). Experimentally, the interaction between gestural and rhythmic systems has been identified through a high correlation between inter-gestural temporal variability and rhythmic variability (Tilsen, 2009), while behaviorally, such gesture-rhythm interactions are supported by observations of systematic relationships between patterns of segment and syllable deletions and stress patterns in a language (Kehoe, 2001; for an alternative take on neutralization in strong positions using constraint-based theory and the AP model, see McAllister Byun, 2011). Issues in maintaining the stability of complex higher-order ratios in multi-frequency couplings (especially at faster speech rates) between slower vowel production and faster consonantal movements have also been implicated in the occurrence of speech sound errors in healthy adult speakers (Goldstein et al., 2007). We return to this aspect in the next section.

The development of gestures is tied to organs of constriction in two ways: between-organ and within-organ differentiation (Goldstein and Fowler, 2003). Empirical data support the occurrence of these differentiations over developmental timelines (Cheng et al., 2007; Terband et al., 2009; see Section "Development of Speech Motor Synergies"). When gestures correspond to different constriction organs (e.g., a bilabial closure implemented via upper and lower lip plus jaw vs. a lingual constriction), between-organ differentiation is observed at an earlier stage in development. For within-organ differentiation, children must learn that, for a given organ, different gestures may require different variations in vocal tract constriction location and degree. For example, /d/ and /k/ are produced by the same constriction organ (the tongue) but use different constriction locations (alveolar vs. velar). Within-organ differentiation is said to occur at a later stage in development via a process called attunement (Studdert-Kennedy and Goldstein, 2003). During the attunement process, the initial speech gestures produced by an infant (i.e., based on between-organ contrasts) become tailored (attuned) toward the perceived finer-grained differentiations in gestural patterns of the ambient language (e.g., similar to the phonological attunement proposed by Shriberg et al., 2005). In sum, gestural planning, the temporal organization of gestures, gestural parameter specification, and gestural coupling (between gestures, and between gestures and other rhythmic units) result in specific behavioral phenomena, including casual speech alternations (e.g., syllable deletions, assimilations), as will be discussed next.

Describing Casual Speech Alternations

The AP model accounts for variations and errors in the speech output by demonstrating how the task-specific gestures at the macroscopic level are related to the systematic changes at the microscopic level of articulatory trajectories and resulting speech acoustics (e.g., speech variability, coarticulation, allophonic variation, and speech errors in casual connected speech; Saltzman and Munhall, 1989 ; Browman and Goldstein, 1992 ; Goldstein et al., 2007 ). Browman and Goldstein (1990b) argue that speech sound errors such as consonant deletions, assimilations, and schwa deletions can result from an increasing overlap between different gestures, or from reducing the size (magnitude) of articulatory gestures (see also van Lieshout and Goldstein, 2008 ; Hall, 2010 ). The amount of gestural overlap is assumed to be a function of different factors, including style (casual vs. formal speech), the organs used for making the constrictions, speech rate, and linguistic constraints ( Goldstein and Fowler, 2003 ; van Lieshout and Goldstein, 2008 ).

The gestural processes surrounding consonant and schwa deletions can be explained by alterations in gestural overlap resulting from changes in relative timing or phasing in the gestural score. Gestural overlap has different consequences in the articulatory and acoustic output, depending on whether the gestures share the same Tract Variables and corresponding articulator sets (homorganic) or employ different Tract Variables and constricting organs (heterorganic). Heterorganic gestures (e.g., a lip closure combined with a tongue tip closure) will each produce a Tract Variable motion that is unaffected by the other concurrent gesture, and their Tract Variable goals will be reached regardless of the degree of overlap. However, when maximum overlap occurs, one gesture may completely obscure or hide the other gesture acoustically during release (i.e., gestural hiding; Browman and Goldstein, 1990b). When two homorganic gestures share the same Tract Variables and articulators, as in the case of the tongue tip (TT) constrictions for /θ/ and /n/ (e.g., during production of /tεn θimz/), they perturb each other's Tract Variable motions. The dynamical parameters of the two overlapping gestural control regimes are "blended." These gestural blendings are traditionally described phonologically as assimilation (e.g., /tεn θimz/ → [tεn̪ θimz]) or allophonic variation (e.g., the front and back variants of /k/ in English "key" and "caw"; Ladefoged, 1982) (Browman and Goldstein, 1990a, b).
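One simple way to picture the "blending" of overlapping homorganic gestures is as an activation-weighted average of their dynamic parameters on the shared Tract Variable. This is our simplified reading of Task Dynamic blending, offered as an illustration only; the activations, targets, and stiffness values below are invented, and the helper function is not from any published implementation.

def blend(active_gestures):
    """active_gestures: (activation, target, stiffness) tuples competing for one
    shared Tract Variable; returns the activation-weighted blended parameters."""
    total = sum(act for act, _, _ in active_gestures)
    if total == 0.0:
        return None  # nothing active: the tract variable relaxes toward its neutral state
    target = sum(act * tgt for act, tgt, _ in active_gestures) / total
    stiffness = sum(act * stf for act, _, stf in active_gestures) / total
    return target, stiffness

# Overlapping tongue tip gestures in "ten themes" (/n/ alveolar vs. dental fricative):
# constriction location is coded here as an arbitrary coordinate along the palate.
n_alveolar = (1.0, 0.0, 60.0)   # (activation, location target, stiffness)
th_dental = (0.6, 1.5, 40.0)
print(blend([n_alveolar, th_dental]))   # blended regime, perceived as [n̪] assimilation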

Articulatory kinematic data collected using an X-ray microbeam system (e.g., Browman and Goldstein, 1990b) have provided support for the occurrence of these gestural processes (hiding and blending). Consider the following classic examples from the literature (Browman and Goldstein, 1990b). The production of the sequence "nabbed most" is usually heard by the listener as "nab most," and the spectrographic display reveals no visible presence of /d/. The tongue tip raising gesture for /d/ can, however, be seen in the X-ray data (Browman and Goldstein, 1990b), but it is inaudible, being completely overlapped by the bilabial gestures for /b/ and /m/ (Hall, 2010). Similarly, in fast speech, words like "potential" sound like "ptential," wherein the first schwa between the consonants /p/ and /t/ seems to be omitted but is in fact hidden by the acoustic release of /p/ and /t/ (Byrd and Tan, 1996; Davidson, 2006; Hall, 2010). These cases show that the relevant constrictions are formed, but they are acoustically and perceptually hidden by another overlapping gesture (Browman and Goldstein, 1990b). Assimilations have also been explained by gestural overlap and gesture magnitude reduction. In the production of "seven plus seven," which often sounds like "sevem plus seven," the coronal nasal consonant /n/ appears to be replaced by the bilabial nasal /m/ in the presence of the adjacent bilabial /p/. In reality, the tongue tip /n/ gesture is reduced in magnitude and overlapped by the following bilabial gesture /p/ (Browman and Goldstein, 1990b; Hall, 2010). The AP model thus accounts for rate-dependent speech sound errors through gestural overlap and gestural magnitude reduction (Browman and Goldstein, 1990b; Hall, 2010). Auditory-perceptual transcription procedures would describe the schwa elision and consonant deletion (or assimilation processes) in the above examples with a set of phonological rules schematically represented as d → ∅ / C _ C (i.e., /d/ is deleted between two adjacent consonants, as in "nabbed most" → "nab most"; Hall, 2010). However, these rules do not capture the fact that the movements for the /d/ or /n/ are still present. Furthermore, articulatory data indicate that such speech sound errors are often not the result of whole-segment or feature substitutions/deletions, but are instead due to the co-production of unintended or intrusion gestures that maintain the dynamic stability of the speech production system (Pouplier and Goldstein, 2005; Goldstein et al., 2007; Pouplier, 2007, 2008; Slis and van Lieshout, 2016a, b).

The concept of intrusion gestures is illustrated by kinematic data from the Goldstein et al. (2007) study, in which participants repeated bisyllabic sequences such as "cop top" under fast and slow speech rate conditions. Goldstein et al. (2007) noticed unique speech sound errors in which both the intended and extra/unintended (intruding) gestures were produced at the same time. True substitutions and deletions of the targets occurred rarely, even though substitution errors are the most commonly reported error type in speech sound error studies that use auditory-perceptual transcription procedures (Dell et al., 2000). Goldstein et al. (2007) explained their findings based on the DST concepts of stable rhythmic synchronization and multi-frequency locking (see Section "Gestures, Synergies and Linguistic Contrast"). The word pair "cop top" differs in onset consonant but shares the syllable rhyme. Thus, each production of "cop top" contains one tongue tip (/t/) gesture and one tongue dorsum (/k/) gesture, but two labial (/p/) gestures. This places the initial consonants in a 1:2 relationship with the coda consonant. Such multi-frequency ratios are intrinsically less stable (Haken et al., 1996), especially under fast rate conditions. As speech rate increased, the authors observed an extra copy of the tongue tip gesture inserted or co-produced during the /k/ production in "cop" and a tongue dorsum intrusion gesture during the /t/ production in "top." Adding an extra gesture (the intrusion) results in a more stable harmonic relationship, in which both initial consonants (tongue tip and tongue dorsum gestures) are in a 2:2 (or 1:1) relationship with the coda (lip gesture) consonant (Pouplier, 2008; Slis and van Lieshout, 2016a, b). Thus, gestural intrusion errors can be described as resulting from a rhythmic synchronization process in which the more complex and less stable 1:2 frequency-locked coordination mode is dissolved and replaced by a simpler, intrinsically more stable 1:1 mode through the addition of gestures. Unlike what is claimed for perception-based speech sound errors (e.g., Dell et al., 2000), the addition of "extra" cycles of the tongue tip and/or tongue dorsum oscillators results in the phonotactically illegal simultaneous articulation of /t/ and /k/ (Goldstein et al., 2007; Pouplier, 2008; van Lieshout and Goldstein, 2008; Slis and van Lieshout, 2016a, b). The fact that /kt/ co-production is phonotactically illegal in English makes it difficult for a listener to even detect its presence. Pouplier and Goldstein (2005) further suggest that listeners only perceive intrusions that are large in magnitude (frequently transcribed as segmental substitution errors), while smaller gestural intrusions are not heard and targets are scored as error-free despite conflicting articulatory data (Pouplier and Goldstein, 2005; Goldstein et al., 2007; see also Mowrey and MacKay, 1990).

Articulatory Phonology and Speech Sound Disorders (SSD) in Children

In this section, we briefly describe the patterns of speech sound errors in children as they have typically been discussed in the S-LP literature. This is followed by an explanation of how the development, maturation, and combinatorial dynamics of articulatory gestures (such as phasing or timing relationships, coupling strength, and gestural overlap) can account for several of these more atypical speech sound errors. We provide a preliminary and, arguably, tentative mapping between several subtypes of SSD in children and their potential origins as explained in the context of the AP and TD framework (Table 1). We see this as a starting point for further discussion and an inspiration to conduct more research in this specific area. For example, one could use the AP/TD model (TADA; Nam et al., 2004) to simulate specific problems at the different levels of the model, systematically probe the emerging symptoms in movement and acoustic characteristics, and then verify those with actual data, similar to recent work on apraxia and stuttering using the DIVA framework (Civier et al., 2013; Terband et al., 2019b). Since there is no universally agreed-upon classification system in speech-language pathology, we will limit our discussion to the SSD classification system proposed by Shriberg (2010; Vick et al., 2014; see Waring and Knight, 2013 for a critical evaluation of current childhood SSD classification systems) and the phonological process errors described in the widely used clinical assessment tool Diagnostic Evaluation of Articulation and Phonology (DEAP; Dodd et al., 2006). We will refer to these phonological error patterns as process errors/speech sound error patterns, in line with their contemporary usage as descriptive terms, without reference to their phonological or phonetic theory underpinnings.

Table 1. Speech sound disorder classification (and subtypes; based on Vick et al., 2014; Shriberg, 2017), most commonly noted error types, examples, and proposed levels of breakdown or impairment within the Articulatory Phonology model and Task Dynamics framework (Saltzman and Munhall, 1989; Browman and Goldstein, 1992).

Classification or subtype | Error type | Examples | Proposed levels of breakdown
Speech Delay (Process Errors) | Gliding | /ræbIt/ → [wæbIt] | Tract variable, Gestural score
| Vocalization of liquids | /æpl/ → [æpʊ] | Tract variable, Gestural score
| Velar fronting | /go/ → [do] | Tract variable
| Coronal backing | /tu/ → [ku] |
| Palatal fronting (depalatalization) | /ʃ/ → [s] |
| Backing of fricatives | /s/ → [ʃ] |
| Stopping of fricatives | /zu/ → [du] | Tract variable
| Prevocalic voicing | /pIg/ → [bIg] | Gestural planning oscillators
| Postvocalic devoicing | /bæg/ → [bæk] |
| Weak syllable deletion | /tɛləfoʊn/ → [tɛfoʊn] | Gestural planning oscillators
| Vowel epenthesis | /pliz/ → [pəliz] | Gestural planning oscillators, Inter-gestural coordination
| Vowel additions | /bæt/ → [bæta] |
| Final consonant deletion | /sit/ → [si] | Gestural planning oscillators, Inter-gestural coordination
| Cluster reduction | /sneIk/ → [neIk] or [seIk] | Inter-gestural coordination, Gestural score activation
Articulation Impairment | /s/ and /r/ distortions | /sʌn/ → [ɬʌn] or [s̪ʌn] | Tract variable
Childhood apraxia of speech (CAS) | (a) Inconsistent speech errors on repeated productions; (b) lengthened and disrupted coarticulatory transitions between sounds and syllables; (c) inappropriate prosody that includes both lexical and phrasal stress difficulties (ASHA, 2007) | | Inter-gestural coupling graphs; Inter-gestural planning oscillators; Gestural score activation; Inter-gestural timing; Gesture activation durations; Dynamic gestural specifications at the level of tract variables and articulatory synergies
Speech Motor Delay (SMD) | (a) Immature motor control system; (b) higher articulatory kinematic variability of upper lip, lower lip and jaw, with larger upper lip displacements; (c) fewer accurate phonemes, errors in vowel and syllable duration, errors in glide production, epenthesis errors, consonantal distortions, and less accurate lexical stress | | Inter-gestural planning oscillators; Gestural score activation; Inter-gestural timing; Gesture activation durations; Dynamic gestural specifications at the level of tract variables and articulatory synergies
Developmental dysarthria | (a) Neuro-motor timing and execution; (b) reduced speaking rates and prolonged syllable durations; (c) decreased vowel distinctiveness and sound distortions; (d) reduced strength of articulatory contacts; (e) voice and prosodic abnormalities; (f) reduced respiratory support and/or incoordination | | Inter-gestural coordination and dynamic specifications at the level of Tract Variables and Articulatory Synergies

Speech Delay

According to Shriberg et al. (2010) and Shriberg et al. (2017), children with Speech Delay (age of occurrence between 3 and 9 years) are characterized by "delayed acquisition of correct auditory–perceptual or somatosensory features of underlying representations and/or delayed development of the feedback processes required to fine tune the precision and stability of segmental and suprasegmental production to ambient adult models" (Shriberg et al., 2017, p. 7). These children present with age-inappropriate speech sound deletions and/or substitutions, among which are the patterns of speech sound errors described below.

Gliding and Vocalization of Liquids

Gliding is described as the substitution of a liquid with a glide (e.g., rabbit /ræbIt/ → [wæbIt] or [jæbIt], please /pliz/ → [pwiz], look /lʊk/ → [wʊk]; McLeod and Baker, 2017), and vocalization of liquids refers to the substitution of a vowel sound for a liquid (e.g., apple /æpl/ → [æpʊ], bottle /bɑtl/ → [bɑtʊ]; McLeod and Baker, 2017). The /r/ sounds are acoustically characterized by a drop in the third formant (Alwan et al., 1997). In terms of movement kinematics, the /r/ sound is a complex coproduction of three vocal tract constrictions/gestures (i.e., labial, tongue tip/body, and tongue root); it requires a great deal of speech motor skill and is mastered by most typically developing children between 4 and 7 years of age (Bauman-Waengler, 2016). Ultrasound data suggest that children may find the simultaneous coordination of three gestures motorically difficult and may simplify /r/ production by dropping one gesture from the segment (Adler-Bock et al., 2007). Moreover, syllable-final /r/ sounds are often substituted with vowels that share only a subset of vocal tract constrictions with the original /r/ sound, which is better described as a simplification process (Adler-Bock et al., 2007). For example, the child may drop the tongue tip gesture but retain the lip rounding gesture, and the latter then dominates the resulting vocal tract acoustics (Adler-Bock et al., 2007; van Lieshout et al., 2008). Kinematic data derived from electromagnetic articulography (van Lieshout et al., 2008) also point to limited within-organ differentiation of the tongue parts and subtle issues in relative timing between different components of the tongue in /r/ production errors. These arguments also have support from longitudinal observational data on positional lateral gliding in children (/l/ is realized as [j]; Inkelas and Rose, 2007). Positional lateral gliding in children is said to occur when the greater gestural magnitude of prosodically strong onsets in English interacts with the anatomy of the child's vocal tract (Inkelas and Rose, 2007; McAllister Byun, 2011, 2012). Within the AP model, reducing the number of required gestures (simplification) and poor tongue differentiation would likely have their origins at the level of the Tract Variables, while issues in relative timing between the tongue gestures are likely to arise at the level of the Gestural Score (Table 1).

Stopping of Fricatives

Stopping of fricatives involves the substitution of a fricative consonant with a homorganic plosive (e.g., zoo /zu/ → [du], shoe /ʃu/ → [tu], see /si/ → [ti]; McLeod and Baker, 2017). Fricatives are another class of late-acquired sounds that require precise control over different parts of the tongue to produce a narrow groove through which turbulent airflow passes. Within the AP model, the stopping of fricatives may arise from an inappropriate Tract Variable constriction degree specification (constriction degree: /d/ closed vs. /z/ critical; Goldstein et al., 2006; see Table 1), possibly as a simplification process secondary to limited precision of tongue tip control. Alternatively, neutralization (or stopping) of fricatives, especially in prosodically strong contexts, has also been explained from a constraint-based grammar perspective. For example, the tendency to overshoot is greater in initial positions, where a more forceful gesture is favored for prosodic reasons. This allows the hard-to-produce fricative to be replaced by a ballistic tongue-jaw gesture that does not violate the MOVE-AS-UNIT constraint (Inkelas and Rose, 2007; McAllister Byun, 2011, 2012), as described in the Introduction.

Vowel Addition and Final Consonant Deletion

Different types of vowel insertion errors have been observed in children's speech. Epenthesis typically involves a schwa inserted between the two consonants of a cluster (e.g., please /pliz/ → [pəliz], CCVC → CVCVC; blue /blu/ → [bəlu], CCV → CVCV), while other types of vowel insertions have also been noted (e.g., bat /bæt/ → [bæta], CVC → CVCV) (McLeod and Baker, 2017). Final consonant deletion involves the deletion of a consonant in syllable- or word-final position (seat /sit/ → [si], cat /kæt/ → [kæ], look /lʊk/ → [lʊ]; McLeod and Baker, 2017). Both of these phenomena can be explained by the concept of relative stability. As noted earlier, the onset consonant and the vowel (CV) are coupled in a relatively more stable in-phase mode, as opposed to the anti-phase VC and CC gestures (Goldstein et al., 2006; Nam et al., 2009; Giulivi et al., 2011). Thus, maintaining relative stability in VC or CC coupling modes may become more difficult with increasing cognitive-linguistic (e.g., vocabulary growth) or speech motor demands (e.g., speech rate), and there may be a tendency to utilize intrusion gestures as a means to stabilize the speech motor system (i.e., by decreasing frequency locking ratios, e.g., from 2:1 to 1:1; Goldstein et al., 2007). We suspect that such mechanisms underlie vowel intrusion (error) gestures in children. In CVC syllables (or word structures), greater stability in the system may be achieved by dropping or deleting the final consonant and thus retaining the more stable in-phase CV coupling (Goldstein et al., 2006). Moreover, findings from ultrasound tongue motion data during the production of repeated two- and three-word phrases with shared consonants in coda position (e.g., top cop) versus no-coda position (e.g., taa kaa, taa kaa taa) have demonstrated a gestural intrusion bias only for the shared coda consonant condition (Pouplier, 2008). These findings suggest that the presence of (shared) coda consonants is a trigger for a destabilizing influence on the speech motor system (Pouplier, 2008; Mooshammer et al., 2018). From an AP perspective, the stability gained by deleting final consonants or adding intrusion gestures (lowering frequency locking ratios) can be assigned to limitations in inter-gestural coordination and/or possible gestural selection issues at the level of the Gestural Planning Oscillators (Figure 2). We argue that (vowel) intrusion sound errors are not a "symptom" of an underlying (phonological) disorder, but rather the result of a compensatory mechanism for a less stable speech motor system. Additionally, children with limited jaw control may omit the final consonant /b/ in /bɑb/ in a jaw close-open-close production task, due to difficulties with elevating the jaw. This would typically be associated with the Tract Variable level in the AP model or with later stages during the specification of jaw movements at the Articulatory level (see Figure 2 and Table 1).

Cluster Reduction

Cluster reduction refers to the deletion of a (generally more marked) consonant in a cluster (e.g., please /pliz/ → [piz], blue /blu/ → [bu], spot /spɒt/ → [pɒt]; McLeod and Baker, 2017). From a stability perspective, CC onset clusters are less stable (i.e., anti-phasic), and in the presence of increased demands or limitations in the speech motor system (e.g., immaturity; Fletcher, 1992), they are more likely to be replaced by a stable CV coupling pattern through omission of the extra consonantal gesture (Goldstein et al., 2006; van Lieshout and Goldstein, 2008; Nam et al., 2009). Alternatively, when the two (heterorganic) gestures in a cluster are produced, they may temporally overlap, thereby acoustically and perceptually hiding one gesture (i.e., gestural hiding; Browman and Goldstein, 1990b; Hardcastle et al., 1991; Gibbon et al., 1995). Within the AP model, cluster reductions due to stability factors and gestural hiding may be ascribed to the Gestural Score Activation level (a gesture may not be activated in a CCV syllable in order to maintain a stable CV structure) and to relative phasing issues (increased temporal overlap) at the level of inter-gestural coordination (Figure 2 and Table 1; Goldstein et al., 2006; Nam et al., 2009).

Weak Syllable Deletion

Weak syllable deletion refers to the deletion of an unstressed syllable (e.g., telephone /tɛləfoʊn/ → [tɛfoʊn], potato /pəteɪtoʊ/ → [teɪtoʊ], banana /bənænə/ → [nænə]; McLeod and Baker, 2017). Multisyllabic words pose a unique challenge in that they involve complex couplings between multi-frequency syllable- and stress-level oscillators (e.g., Tilsen, 2009). Deleting an unstressed syllable in a multisyllabic word may allow a reduction of complexity through frequency locking in a stable lower-order mode between syllable- and stress-level oscillators. Within the AP model, this process is regulated at the level of the Gestural Planning Oscillators (see Table 1; Goldstein et al., 2007; Tilsen, 2009).

Velar Fronting and Coronal Backing

Fronting is defined as the substitution of a sound produced in the back of the vocal tract with a consonant articulated further toward the front (e.g., go /go/ → [do], duck /dʌk/ → [dʌt], key /ki/ → [ti]; McLeod and Baker, 2017). Backing, on the other hand, is defined as the substitution of a sound produced in the front of the vocal tract with a consonant articulated further toward the back (e.g., two /tu/ → [ku], pat /pæt/ → [pæk], tan /tæn/ → [kæn]; McLeod and Baker, 2017). While fronting is frequently observed in typically developing young children, backing is rare in English-speaking children (McLeod and Baker, 2017). Children who exhibit fronting and backing behaviors show evidence of undifferentiated lingual gestures, according to electropalatography (EPG) and electromagnetic articulography studies (Gibbon, 1999; Gibbon and Wood, 2002; Goozée et al., 2007). Undifferentiated lingual gestures lack clear differentiation between the movements of the tongue tip, tongue body, and lateral margins of the tongue. For example, tongue-palate contact is not confined to the anterior part of the palate for alveolar targets, as in normal production; instead, it extends further back into the palatal and velar regions of the vocal tract (Gibbon, 1999). It is estimated that 71% of children (aged 4–12 years) with a clinical diagnosis of articulation and phonological disorders produce undifferentiated lingual gestures. These undifferentiated lingual gestures are argued to arise from decreased oro-motor control abilities or a deviant compensatory bracing mechanism (i.e., an attempt to counteract potential disturbances in tongue tip fine motor control; Goozée et al., 2007), or may represent an immature speech motor system (Gibbon, 1999; Goozée et al., 2007). Undifferentiated lingual gestures are not a characteristic of speech in typically developing older school-age children or adults (Gibbon, 1999). In children's productions of lingual consonants, there is a decrease in tongue-palate contact on EPG with increasing age (6 through 14 years), paralleled by fine-grained articulatory adjustments (Fletcher, 1989). The tongue tip and tongue body function as two quasi-independent articulators in typical and mature speech production systems (see Section "Development of Speech Motor Synergies"). However, in young children, the tongue and jaw (tongue-jaw complex) and different functional parts of the tongue may be strongly coupled in-phase (i.e., always moving together), and thus lack functionally independent regions (Gibbon, 1999; Green et al., 2002). Undifferentiated lingual patterns may thus result from the simultaneous (in-phase) activation of regions of the tongue and/or the tongue-jaw complex in young children, and may persist over time (van Lieshout et al., 2008).

Standard acoustic-perceptual transcription procedures do not reliably detect undifferentiated lingual gestures (Gibbon, 1999). Undifferentiated lingual gestures are sometimes transcribed as phonetic distortions or phonological substitutions (i.e., velar fronting or coronal backing) in some contexts, but may be transcribed as correct productions in other contexts (Gibbon, 1999; Gibbon and Wood, 2002). The perceived place of articulation of an undifferentiated gesture is determined by changes in tongue-palate contact during closure (i.e., articulatory drift; Gibbon and Wood, 2002). For example, closure might be initiated in the velar region, cover the entire palate, and then be released in the coronal or anterior region (or vice versa). Undifferentiated lingual gestures could therefore yield the perception of either velar fronting or coronal backing; the perceived place of articulation is influenced by the direction of the articulatory drift and the last tongue-palate contact region (Gibbon and Wood, 2002). Children with slightly more advanced lingual control, relative to those described as making widespread use of undifferentiated gestures, may still present with fine motor control or refinement issues (e.g., palatal fronting /ʃ/ → [s]; backing of fricatives /s/ → [ʃ]; Gibbon, 1999). Velar fronting and coronal backing can thus be envisioned as errors in relative phasing at the level of inter-gestural coordination 3 (see Table 1): for instance, the tongue tip-tongue body or tongue-jaw complex may be tightly coupled in-phase (synchronously) during constriction formation, but not during the release of the constriction. They may also reflect a problem in the Tract Variable constriction location specification (Table 1).

Prevocalic Voicing and Postvocalic Devoicing

Context-sensitive voicing errors in children are categorized as prevocalic voicing and postvocalic devoicing. Prevocalic voicing is a process in which voiceless consonants in syllable-initial position are replaced by their voiced counterparts (e.g., pea /pi/ → [bi], pan /pæn/ → [bæn], pencil /pεnsəl/ → [bεnsəl]), and postvocalic devoicing is when voiced consonants in syllable-final position are replaced by their voiceless counterparts (e.g., bag /bæg/ → [bæk], pig /pIg/ → [pIk], seed /sid/ → [sit]; McLeod and Baker, 2017). Empirical evidence suggests that in multi-gestural segments, the segment-internal coordination of gestures may differ between onset and coda positions (Krakow, 1993; Goldstein et al., 2006). When a multi-gestural segment such as a bilabial nasal stop (e.g., [m]) is produced in syllable onset, the necessary gestures (bilabial closure gesture, glottal gesture, and velar gesture) are produced synchronously (i.e., in-phase), creating the most stable configuration for that combination of gestures; this makes the addition of voicing in onset position easy. However, in coda position, the bilabial closure gesture, glottal gesture (for voicing), and velar gesture must be produced asynchronously (i.e., in a less stable anti-phase mode; Haken et al., 1985; Goldstein et al., 2006, 2007). It is thus less demanding to coordinate fewer gestures in the anti-phase mode across the oral and laryngeal speech subsystems in coda position, which would explain why children (with a developing speech motor system) may simply drop the glottal gesture (devoicing in coda position) to reduce complexity. Note that in some languages (e.g., Dutch), coda devoicing is standard, irrespective of the original voicing characteristic of the sound. Within the AP model, prevocalic voicing and postvocalic devoicing (i.e., adding or dropping a gesture) may be ascribed to gestural selection issues at the level of the Gestural Planning Oscillators (Figure 2 and Table 1).

Recent studies also suggest a relationship between jaw control and the acquisition of accurate voiced-voiceless contrasts in children. The production of a voiced-voiceless contrast requires precise timing between glottal abduction/adduction and oral closure gestures. Voicing contrast acquisition in typically developing 1- to 2-year-old children may be facilitated by increases in jaw movement excursion, speed, and stability (Grigos et al., 2005). In children with SSDs (including phonological disorder, articulation disorder, and CAS), jaw deviances/instability in the coronal plane (i.e., lateral jaw slide) have been observed relative to typically developing children (Namasivayam et al., 2013; Terband et al., 2013). Moreover, stabilization of voice onset times for /p/ production has been noted in children with SSDs undergoing motor speech intervention focused on jaw stabilization (Yu et al., 2014). These findings are not surprising given that the perioral (lip) area lacks tendon organs, joint receptors, and muscle spindles (van Lieshout, 2015), and the only reliable source of information to facilitate inter-gestural coordination between oral and laryngeal gestures comes from jaw (masseter) muscle spindle activity (Namasivayam et al., 2009). Increases in jaw stability and amplitude may provide the consistent and reliable feedback used to stabilize the output of a coupled neural oscillatory system comprising the larynx (glottal gestures) and the oral articulators (van Lieshout, 2004; Namasivayam et al., 2009; Yu et al., 2014; van Lieshout, 2017).

Articulation Impairment

Articulation impairment is considered a motor speech difficulty, and the term is generally reserved for speech sound errors related to rhotics and sibilants (e.g., derhotacized /r/: bird /bɝd/ → [bɜd]; dentalized/lateralized sibilants: sun /sʌn/ → [ɬʌn] or [s̪ʌn]; McLeod and Baker, 2017). A child with an articulation impairment is assumed to have the correct phoneme selection but to be imprecise in the speech motor specification and implementation of the sound (Preston et al., 2013; McLeod and Baker, 2017). Studies using ultrasound, EPG, and electromagnetic articulography data have shown that several aberrant motor patterns underlie sibilant and rhotic distortions. For rhotics, these include undifferentiated tongue protrusion, absent anterior tongue elevation, absent tongue root retraction, and subtle issues in the relative timing between different components of the tongue gestures (van Lieshout et al., 2008; Preston et al., 2017). Correct /s/ productions involve a groove in the middle of the tongue along with elevation of the lateral tongue margins (Preston et al., 2016, 2017). Distortions in /s/ production may arise from inadequate anterior tongue control, poor lateral bracing (sides of the tongue down), and a missing central groove (McAuliffe and Cornwell, 2008; Preston et al., 2016, 2017).

Within the AP model, articulation impairments may potentially arise at three levels: the Tract Variables, the Gestural Scores, and the dynamical specification of the gestures. Rhotic production issues at the Tract Variable and Gestural Score levels were discussed in Section "Gliding and Vocalization of Liquids" as a reduction in the number of required gestures (i.e., some parts of the tongue not activated during /r/), limited tongue differentiation, and/or subtle relative timing issues between the different tongue gestures/components. Errors in the dynamical specifications of the gestures could also result in speech sound errors. For example, an incorrect damping parameter specification for vocal tract constriction degree may result in the Tract Variables (and their associated articulators) overshooting (underdamping) or undershooting (overdamping) their rest/target value (Browman and Goldstein, 1990a; Fuchs et al., 2006).

Childhood Apraxia of Speech (CAS)

The etiology of CAS is unknown, but it is hypothesized to be a neurological sensorimotor disorder with a disruption at the level of speech motor planning and/or motor programming of speech movement sequences (American Speech-Language-Hearing Association [ASHA], 2007). A position paper by ASHA (2007) describes three important characteristics of CAS: inconsistent speech sound errors on repeated productions, lengthened and disrupted coarticulatory transitions between sounds and syllables, and inappropriate prosody that includes both lexical and phrasal stress difficulties (ASHA, 2007). Within the AP and TD framework, the speech motor planning processes described in linguistic models can be ascribed to the level of the inter-gestural coupling graphs, inter-gestural planning oscillators, and gestural score activation, while processes pertaining to speech motor programming would typically encompass the dynamic gestural specifications at the level of the tract variables and articulatory synergies (Nam and Saltzman, 2003; Nam et al., 2009; Tilsen, 2009).

Traditionally, perceptual inconsistency in the speech production of children with CAS has been evaluated via word-level token-to-token variability or at the fine-grained segmental level (phonemic and phonetic variability; Iuzzini and Forrest, 2010 ; Iuzzini-Seigel et al., 2017 ). These studies provide evidence for increased variability in the speech production of children with CAS relative to typically developing children or children with other speech impairments (e.g., articulation disorders). Data suggest that speech variability issues in CAS may arise at the level of articulatory synergies (intra-gestural coordination). Children with CAS demonstrate higher lip-jaw spatio-temporal variability with increasing utterance complexity (e.g., word length: mono-, bi-, and tri-syllabic) and greater lip aperture variability relative to children with speech delay ( Grigos et al., 2015 ). Terband et al. (2011) analyzed articulatory kinematic data on functional synergies in 6- to 9-year-old children with SSD, CAS, and typically developing controls. The results indicated that the tongue tip-jaw synergy was less stable in children with CAS compared to typically developing children, but the stability of the lower lip-jaw synergy did not differ ( Terband et al., 2011 ). Interestingly, differences in movement amplitude emerged between the groups: children with CAS exhibited a larger contribution of the lower lip to the oral closure compared to typically developing controls, while the children with SSD demonstrated larger amplitude of tongue tip movements relative to the CAS and control groups. Terband et al. (2011) suggest that children with CAS may have difficulties in the control of both the lower lip and the tongue tip, while the children with SSD have difficulties controlling only the tongue tip. The larger movement amplitudes found in these groups may indicate an adaptive strategy to create relatively stable movement coordination (see also Namasivayam and van Lieshout, 2011 ; van Lieshout, 2017 ). The use of larger movement amplitudes to increase stability in the speech motor system has been reported as a potential strategy in other speech disorders, including stuttering ( Namasivayam et al., 2009 ); adult verbal apraxia and aphasia ( van Lieshout et al., 2007 ); cerebral palsy ( Nip, 2017 ; Nip et al., 2017 ); and Speech-Motor Delay [SMD, an SSD subtype formerly referred to as Motor Speech Disorder–Not Otherwise Specified (MSD-NOS); Vick et al., 2014 ; Shriberg, 2017 ; Shriberg et al., 2019a , b ]. This fits well with the notion that movement amplitude is a factor in the stability of articulatory synergies, as predicted in a DST framework (e.g., Haken et al., 1985 ; Peper and Beek, 1998 ) and evidenced in a recent study on speech production ( van Lieshout, 2017 ). Additional mechanisms for improving stability in movement coordination were documented in the gestural intrusion error studies ( Goldstein et al., 2007 ; Pouplier, 2007 , 2008 ; Slis and van Lieshout, 2016a , b ) discussed in section “Describing Casual Speech Alternations,” and such intrusions are more prevalent in adult speakers with apraxia relative to healthy controls ( Pouplier and Hardcastle, 2005 ; Hagedorn et al., 2017 ).
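
For readers unfamiliar with how such spatio-temporal variability is quantified, the sketch below implements a generic spatiotemporal-index-style measure on synthetic data; it is meant to illustrate the logic only, and the exact procedures in the cited studies may differ. Each repetition of a movement trajectory (e.g., lip aperture over a word) is time- and amplitude-normalized, and the standard deviations across repetitions are summed, so higher values indicate less stable movement patterning.

```python
import numpy as np

def spatiotemporal_index(trials, n_points=1000, n_bins=50):
    """STI-style variability measure: time-normalize and z-score each trial,
    then sum the across-trial standard deviations at evenly spaced points."""
    normalized = []
    for trial in trials:
        trial = np.asarray(trial, dtype=float)
        t_old = np.linspace(0.0, 1.0, len(trial))
        t_new = np.linspace(0.0, 1.0, n_points)
        resampled = np.interp(t_new, t_old, trial)      # linear time normalization
        resampled = (resampled - resampled.mean()) / resampled.std()  # z-score
        normalized.append(resampled)
    stacked = np.vstack(normalized)
    idx = np.linspace(0, n_points - 1, n_bins).astype(int)
    return stacked[:, idx].std(axis=0, ddof=1).sum()

# Toy demonstration: ten repetitions of the same lip-aperture-like cycle,
# once with little jitter and once with phase and amplitude jitter.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 300)
stable = [np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal(300)
          for _ in range(10)]
variable = [np.sin(2 * np.pi * (t + 0.05 * rng.standard_normal()))
            + 0.2 * rng.standard_normal(300) for _ in range(10)]
print("STI, stable repetitions  :", round(spatiotemporal_index(stable), 2))
print("STI, variable repetitions:", round(spatiotemporal_index(variable), 2))
```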

With regard to the lengthened and disrupted coarticulatory transitions, findings suggest that abnormal and variable anticipatory coarticulation (assumed to reflect speech motor planning) may be specific to CAS and not a general characteristic of children with SSD ( Nijland et al., 2002 ; Maas and Mailend, 2017 ). The lengthened and disrupted coarticulatory transitions between sounds and syllables can be explained by possible limitations in inter-gestural overlap in children with CAS. A reduction in the overlap of successive articulatory gestures (i.e., reduced coarticulation or coproduction) may result in the speech output becoming “segmentalized” (e.g., as seen in adult apraxic speakers; Liss and Weismer, 1992 ). Segmentalization gives the perception of a “pulling apart” of successive gestures in the time domain and possibly adds to the perceived stress and prosody difficulties in this population (e.g., Weismer et al., 1995 ). Such reductions in overlap may arise from delays in the activation of the following gesture and/or errors in gesture activation durations.
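
The overlap idea can be illustrated with a toy calculation (a hypothetical example, not drawn from the cited work): given two successive gesture activation intervals from a gestural score, the proportion of the second gesture that is co-produced with the first falls toward zero as the lag between them grows, which is the segmentalized pattern described above.

```python
def overlap_proportion(g1, g2):
    """g1, g2: (onset, offset) activation intervals in ms; returns the
    overlapped duration as a proportion of the second gesture's activation."""
    onset1, offset1 = g1
    onset2, offset2 = g2
    overlap = max(0.0, min(offset1, offset2) - max(onset1, onset2))
    return overlap / (offset2 - onset2)

closure = (0.0, 150.0)                 # e.g., a tongue-tip closure gesture
for lag in (60.0, 120.0, 180.0):       # onset lag of the following gesture
    vowel = (lag, lag + 200.0)         # e.g., the following vowel gesture
    print(f"lag {lag:5.1f} ms -> coproduction {overlap_proportion(closure, vowel):.2f}")
```

All times are arbitrary illustrative values; in the AP model the relevant quantities would come from the gestural score's activation intervals.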

Inappropriate prosody (lexical and phrasal stress difficulties) in CAS is often characterized by listener perceptions of misplaced or equalized stress patterns across syllables. A potential source of this problem is that children with CAS may produce subtle and not consistently perceptible acoustic differences between stressed and unstressed syllables ( Shriberg et al., 1997 ; Munson et al., 2003 ). Unlike typically developing children, children with CAS do not shorten vowel duration in weakly stressed initial syllables as an adjustment to the metrical structure of the following syllable ( Nijland et al., 2003 ). Furthermore, syllable omissions have been particularly noted in children with CAS who demonstrated inappropriate phrasal stress ( Velleman and Shriberg, 1999 ). These interactions between syllable/gestural units and rhythmic (stress and prosody) systems were discussed earlier in the context of multi-frequency systems of coupled oscillators (e.g., Tilsen, 2009 ). We speculate that children with CAS may have difficulty with stability in coupling (i.e., experience weak or variable coupling) between stress- and syllable-level oscillators.
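
As a rough illustration of this speculation (our sketch, not a model taken from Tilsen, 2009), the code below simulates a stress(foot)-level and a syllable-level planning oscillator in a 2:1 frequency relation with noisy phase dynamics. With strong inter-oscillator coupling the relative phase stays tightly locked; with weak coupling it drifts, which is one way to think about variable or equalized stress patterning. Frequencies, coupling strengths, detuning, and noise level are arbitrary assumptions.

```python
import numpy as np

DT = 0.001          # integration step (s)
DURATION = 20.0     # simulated time (s)
STEPS = int(DURATION / DT)

def relative_phase_sd(coupling, detune=1.0, noise_sd=0.5, seed=0):
    """Two noisy phase oscillators in a 2:1 (syllable:foot) relation.
    Returns the SD of the wrapped relative phase (theta_syll - 2*theta_foot);
    a smaller SD means more stable coupling."""
    rng = np.random.default_rng(seed)
    w_foot = 2.0 * np.pi * 1.0          # ~1 Hz stress/foot cycle (assumed)
    w_syll = 2.0 * w_foot + detune      # ~2 syllables per foot, slightly detuned
    th_f = th_s = 0.0
    rel = np.empty(STEPS)
    for i in range(STEPS):
        # Euler-Maruyama step: mutual coupling pulls the pair toward 2:1 locking
        th_f += (w_foot + coupling * np.sin(th_s - 2.0 * th_f)) * DT \
                + noise_sd * np.sqrt(DT) * rng.standard_normal()
        th_s += (w_syll + coupling * np.sin(2.0 * th_f - th_s)) * DT \
                + noise_sd * np.sqrt(DT) * rng.standard_normal()
        rel[i] = np.angle(np.exp(1j * (th_s - 2.0 * th_f)))  # wrap to (-pi, pi]
    return rel.std()

for k in (8.0, 0.2):  # strong vs. weak stress-syllable coupling
    print(f"coupling={k:4.1f}  relative-phase SD={relative_phase_sd(k):.2f} rad")
```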

Speech-Motor Delay

Speech-Motor Delay (formerly MSD-NOS; Vick et al., 2014 ; Shriberg, 2017 ; Shriberg and Wren, 2019 ; Shriberg et al., 2019a , b ) describes a subpopulation of children presenting with difficulties in speech motor control and coordination that are not consistent with the features of CAS or dysarthria ( Shriberg, 2017 ; Shriberg et al., 2019a , b ). Information on the nature, diagnosis, and intervention protocols for the SMD subpopulation is emerging ( Vick et al., 2014 ; Shriberg, 2017 ; Namasivayam et al., 2019 ). Current data suggest that this group is characterized by poor motor control (e.g., higher articulatory kinematic variability of the upper lip, lower lip, and jaw, and larger upper lip displacements). Behaviorally, these children produce fewer accurate phonemes and show errors in vowel and syllable duration, errors in glide production, epenthesis errors, consonant distortions, and less accurate lexical stress ( Vick et al., 2014 ; Shriberg, 2017 ; Namasivayam et al., 2019 ; Shriberg and Wren, 2019 ; Shriberg et al., 2019a , b ). As many of the precision and stability deficits in speech and prosody in SMD (e.g., consonant distortions, epenthesis, vowel duration differences, and decreased accuracy of lexical stress) and the adaptive strategies used to increase speech motor stability (e.g., larger upper lip displacements; van Lieshout et al., 2004 ; Namasivayam and van Lieshout, 2011 ) overlap with CAS and the other disorders discussed earlier, we will not reiterate possible explanations for these within the context of the AP model. SMD is considered a disorder of execution: a delay in the development of the neuromotor precision and stability of speech motor control. Children with SMD are at increased risk for persistent SSDs ( Shriberg et al., 2011 , 2019a , b ; Shriberg, 2017 ).

Developmental Dysarthria

Dysarthria “is a collective name for a group of speech disorders resulting from disturbances in muscular control over the speech mechanism due to damage of the central or peripheral nervous system. It designates problems in oral communication due to paralysis, weakness, or incoordination of the speech musculature” ( Darley et al., 1969 , p. 246). Dysarthria may be present in children with cerebral palsy (CP) and may be characterized by reduced speaking rates, prolonged syllable durations, decreased vowel distinctiveness, sound distortions, reduced strength of articulatory contacts, voice abnormalities, prosodic disturbances (e.g., equal stress), reduced respiratory support or respiratory incoordination, and poor intelligibility ( Pennington, 2012 ; Mabie and Shriberg, 2017 ; Nip et al., 2017 ). Speakers with CP consistently produce greater lip, jaw, and tongue displacements in speech tasks relative to typically developing peers ( Ward et al., 2013 ; Nip, 2017 ; Nip et al., 2017 ). These increased displacements have been argued to arise either from a reduced ability to grade force control (resulting in ballistic movements) or, alternatively, from a strategy to increase proprioceptive feedback to stabilize speech movement coordination ( Namasivayam et al., 2009 ; Nip, 2017 ; Nip et al., 2017 ; van Lieshout, 2017 ). Further, children with CP demonstrate decreased spatial coupling between the upper and lower lips and reduced temporal coordination between the lips and between the lower lip and jaw ( Nip, 2017 ) relative to typically developing peers. These measures of inter-articulator coordination were found to be significantly correlated with speech intelligibility ( Nip, 2017 ).

Within the AP model, the neuromotor characteristics of dysarthria such as disturbances in gesture magnitude or scaling issues (overshooting, undershooting), imprecise articulatory contacts (resulting in sound distortions), slowness (reduced speaking rate and prolonged durations), and coordination issues could be related to inaccurate gestural specifications of dynamical parameters (e.g., damping and stiffness), inaccurate gesture activation durations, imprecise constriction location and degree, and inter-gestural and intra-gestural (i.e., articulatory synergy level) timing issues ( Browman and Goldstein, 1990a ; van Lieshout, 2004 ; Fuchs et al., 2006 ). Inter-gestural and intra-gestural timing issues may characterize difficulties in coordinating the subsystems required for speech production (respiration, phonation and articulation) and difficulties in controlling the many degrees of freedom in a functional articulatory synergy, respectively ( Saltzman and Munhall, 1989 ; Browman and Goldstein, 1990b ; van Lieshout, 2004 ). Overall, dysarthric speech characteristics would encompass the following levels in the AP/TD framework: inter-gestural coordination, and dynamic specifications at the level of Tract Variables and Articulatory Synergies ( Table 1 ).

Clinical Relevance, Limitations and Future Directions

In this paper, we briefly reviewed some of the key concepts of the AP model ( Browman and Goldstein, 1992 ; Gafos and Goldstein, 2012 ). We explained how the development, maturation, and combinatorial dynamics of articulatory gestures in this model can offer plausible explanations for the speech sound errors found in children with SSDs. We find that many of these speech sound error patterns are in fact present in the speech of typically developing children and, more importantly, even in the speech of typical adult speakers under certain circumstances. Based on our presentation of behavioral and articulatory kinematic data, we propose that such speech sound errors in children with SSD may arise as a consequence of the complex interaction between the dynamics of articulatory gestures, an immature speech motor system with limitations in speech motor skills, and specific boundary conditions related to physical, physiological, and functional constraints. In fact, many of these speech sound errors may themselves reflect compensatory strategies (e.g., decreasing speech rate, increasing movement amplitude, bracing, intrusion gestures, cluster reductions, segment/gesture/syllable deletions, increasing lag between articulators) that provide more stability in the speech motor system, as has been found in both typical and disordered speakers ( Fletcher, 1992 ; van Lieshout et al., 2004 ; Namasivayam and van Lieshout, 2011 ).

Based on the presented evidence, we speculate that children with SSDs may in general occupy the low end of the speech motor skill continuum, similar to what has been argued for stuttering ( van Lieshout et al., 2004 ; Namasivayam and van Lieshout, 2011 ), and that the differences we notice in speech sound errors between the subtypes of SSD may in fact reflect differences in how these individuals develop strategies for coping with the challenges of being at the lower end of the speech motor skill continuum. This is a critical shift in thinking about the (distal and proximal) causes of speech sound errors in children with SSD (or in adults, for that matter). Many of these children show similarities in their behavioral symptoms, and perhaps the traditional notion of separating phonological from motor issues should be questioned (see also Maassen et al., 2010 ) and replaced with a broader understanding of how all levels involved in speech production are part of a complex system with processing stages that are highly integrated and coupled at different time scales (see also Tilsen, 2009 , 2017 ). The AP perspective and the associated DST principles provide a suitable basis for this kind of approach, given the transparency between higher and lower levels of control afforded by the concept of gestures.

Despite the uniqueness of the AP approach in offering new insights into the underlying mechanisms of speech sound errors in children, the approach has some limitations. For example, current versions of the AP model do not include an auditory feedback channel and are unable to account for the effects of auditory feedback perturbations. Further, although there are some recent attempts at describing the neural mechanisms underlying the components of the AP model (e.g., Tilsen, 2016 ), the model generally does not explicitly specify neural structures as some other models have done (e.g., the DIVA model; Tourville and Guenther, 2011 ; for a detailed comparison between models of speech production see Parrell et al., 2019 ).

Critically, the theoretical concepts of gestures/synergies in speech production from this framework are yet to be taught widely in professional S-LP programs and related disciplines (see also van Lieshout, 2004 ). There are several reasons for this knowledge translation gap, chief among them a lack of accessible reviews and tutorials on this topic, limited empirical data on the nature of SSDs in children from an AP framework, and, most importantly, the absence of convenient, reliable, and published practical methods to assess the status of gestures and synergies in speech production in a clinical setting. Although some intervention approaches, like the Prompts for Restructuring Oral Muscular Phonetic Targets approach (PROMPT; Hayden et al., 2010 ) and the Rapid Syllable Transitions Treatment program (ReST; Thomas et al., 2014 ), aim at addressing speech movement gestures and the transitions between them, they lack empirical outcome data related to their impact at the level of gestures and articulatory synergies. It is also unclear at this point whether it is possible to provide tools for identifying differences in timing relationships in jaw-lip or tongue tip-jaw coupling that would work well in a clinical setting. Using purely sensory (visual and auditory) means to observe speech behaviors will always be subject to the errors and biases common to perception-based evaluation procedures (e.g., Kent, 1996 ). At the moment, there is a paucity of literature in this area, which opens up great opportunities for future research. With technologies like real-time Magnetic Resonance Imaging finding their way into the analysis of typical and disordered speech (e.g., see Hagedorn et al., 2017 ) and relatively low-cost automatic video-based face-tracking systems ( Bandini et al., 2017 ) starting to emerge for clinical purposes, we hope that speech-language pathologists will have the tools they need to support their assessment and intervention planning based on a better understanding and quantification of the dynamics of speech gestures and articulatory synergies. To this end, we hope that this paper provides an initial step in this direction, as an introduction to the AP framework for clinical audiences and a motivation for a larger cohort of researchers to develop testable hypotheses regarding the contribution of gestures and articulatory synergies to subtypes of SSD in children.

The foundations of clinical assessment, classification, and intervention for children with SSD have been heavily influenced by psycholinguistics and auditory-perceptual transcription procedures ( Shriberg, 2010 ; see section Articulatory Phonology and Speech Sound Disorders in Children). A major problem, as noted earlier in the Introduction, is that the complex relationships between etiology (distal causes), processing deficits (proximal causes), and the behavioral level (speech symptoms) are under-specified in current SSD classification systems ( Terband et al., 2019a ). It is critical to understand the complex interactions between these levels, as they have implications for differential diagnosis and treatment planning ( Terband et al., 2019a ). There have been some theoretical attempts toward understanding these interactions (e.g., Inkelas and Rose, 2007 ; McAllister Byun, 2012 ; McAllister Byun and Tessier, 2016 ), and we hope this paper will trigger a stronger interest in the field of S-LP in an alternative “gestural” perspective and increase contributions to the limited corpus of research literature in this area.

Author Contributions

AN: main manuscript writing, synthesis and interpretation of literature, brainstorming concepts and ideas, and creation of tables and figures. DC and AO: main manuscript writing, brainstorming concepts and ideas, references, and proofing. PL: overall supervision of manuscript, writing subsections, and original conceptualization.

Speech Sound (Articulation) Disorders

What is articulation?

Articulation is the process of making speech sounds by moving the tongue, lips, jaw, and soft palate. Children learn speech by imitating the sounds they hear as you talk about what you are doing during the day, sing songs, and read books to them.

Speech sound development

Children begin developing speech as an infant. By 6 months of age, babies coo and play with their voices, making sounds like "oo,” “da,” “ma,” and “goo." As babies grow, they begin to babble, making more consonants like "b" and "k" with different vowel sounds.

Although children begin to develop speech as infants, they do not learn to make all speech sounds at one time. Your child will continue to imitate sounds and word shapes. These imitations will turn into natural, unplanned speech.

Every sound has a different, but predictable, age range by which a child should be able to make it correctly.

General articulation milestones:

  • By age 3, speech should be understandable about 80 percent of the time.
  • By age 4, speech should be understandable almost all the time, although there may still be sound errors.
  • By age 8, children should be able to make all of the sounds of the English language correctly.

Articulation errors are a normal part of speech development. Most children will make mistakes as they learn to say new words. Not all sound replacements and omissions are considered speech errors. Instead, they may be related to a dialect or accent.

The chart below gives age ranges for when children learn to make certain speech sounds.

Speech Sounds by Age

These are general guidelines for speech sound development. Talk with a speech language pathologist or other health care provider if you have concerns about your child’s speech.

Articulation delays and disorders

An articulation delay or disorder happens when errors continue past a certain age. These errors can occur at the beginning, middle, or end of a word. The 3 most common articulation errors are:

  • Replacing one sound with another, like “bacuum” for “vacuum.”
  • Omitting a sound, like “bue” for “blue.”
  • Distorting a sound, so that the sound is recognizable but sounds off. A lisp is a distortion of the “s” sound, caused when the tongue sticks out past the teeth.

Causes of articulation delays and disorders

For many children, the causes of speech sound disorders are not known. Your child may not learn how to make the sounds correctly or may not learn the rules of speech on their own. Physical problems can also affect articulation. These physical problems include:

  • Illnesses that last a long time. Being in a hospital or having a serious illness may reduce the normal activities and interactions that help children learn speech and language.
  • Hearing loss. Speech is learned by listening. Hearing loss and ear problems such as frequent ear infections can slow down speech sound development in young children.
  • Brain tumors. Tumors may affect the speech centers of the brain. They also can weaken muscles of the lips, palate, tongue, or vocal cords.
  • Structural differences. The physical structure of the jaw, tongue, lips or palate can affect articulation. Structural differences due to injury or birth defects such as cleft lip and palate can lead to speech delays or disorders.
  • Developmental or neurological disorders. Disorders such as stroke, cerebral palsy, autism, or brain injury can cause speech sound problems.

Children with an articulation delay or disorder may:

  • Get frustrated or act out because they cannot be understood or express themselves.
  • Avoid situations where they need to speak.
  • Get embarrassed or worried about how they sound or because others make fun of the way they speak.

What you can do to help

  • Talk to your child during playtime. This is a chance to make talking fun and model correct speech sounds.
  • When talking, face your child and position yourself near eye level.

  • Do not interrupt or constantly correct your child.
  • Do not reinforce errors by imitating them. Instead, model the correct way to make the sound. For example, if your child says, “That’s a wellow duck,” you say, “Yes, that’s a yellow duck. A yellow baby duck. The sun is yellow, too.”
  • Praise your child for saying the sound correctly or give encouragement for trying.
  • Read to your child. Use reading to surround your child with the targeted sound. For example, read  Goodnight Moon  if the child is working on the /g/ sound.
  • Use meals, bath time, bedtime, playtime, and other daily routines to work on speech. These activities can be great learning moments.

If you have concerns about your child’s speech, talk to your doctor. It is important to identify and treat any physical conditions that may be contributing to articulation delays.

A speech language pathologist can help assess whether your child has an articulation disorder and develop a speech therapy plan.

  • Articulation is the process of making speech sounds. Articulation errors are a normal part of speech development.
  • Speech sound errors or articulation disorders can happen for a variety of reasons. Often, the cause is not known.
  • There are ways you can help your child with speech sounds.
  • Your doctor may refer you to a speech language pathologist for speech therapy to help with an articulation delay or disorder.

— Reviewed: August 2022

Phonological Disorder vs Articulation Disorder: What’s the Difference?

While articulation and phonological disorders may appear similar on the surface, they are distinct in several aspects, ranging from their symptoms to their management strategies.

Fortunately, there are clear indicators to differentiate between the two.

In this article, we'll dissect both articulation and phonological disorders, highlighting their fundamental differences, root causes, early indicators, and approaches to intervention.

In this article, we will discuss:

  • How can you distinguish articulation vs. phonological disorders?
  • What is an articulation disorder?
  • What is a phonological disorder?
  • How do you treat articulation disorders vs. phonological disorders?
  • When should you seek professional help?

Understanding the difference between articulation and phonological disorders is essential: although both affect speech, they do so in distinct ways.

Articulation disorders involve difficulties in physically producing speech sounds, leading to distortions, substitutions, or omissions of sounds. Phonological disorders, on the other hand, involve patterns of sound errors and a lack of understanding of the sound rules of the language.

Armed with this foundational knowledge, let’s delve deeper into the world of articulation and phonological disorders, exploring their early signs and strategies for effective communication!


What is an Articulation Disorder?

An articulation disorder is marked by difficulties in physically producing speech sounds. This disorder goes beyond mere pronunciation issues; it reflects challenges in the movement and coordination of the mouth and speech organs necessary for clear speech. Children or adults with articulation disorders might find it hard to form certain sounds correctly, leading to speech that is often difficult to understand.

What causes Articulation Disorders?

The causes of articulation disorders are multifaceted. They can arise from physical abnormalities such as structural differences in the jaw or palate, including conditions like cleft palate. Neurological issues, which affect the control and coordination of the muscles involved in speech, are also contributing factors.

Hearing loss can also play a significant role, as it limits the auditory feedback needed for developing accurate speech sounds. In some cases, these disorders may be part of a broader developmental delay or have a genetic component, especially if there is a family history of speech difficulties.

What are the Symptoms of Articulation Disorders?

An articulation disorder is primarily identified through specific types of speech errors. The symptoms can be categorized as follows:

Substitutions: One sound is consistently replaced with another.

Omissions: Certain sounds are left out of words.

Distortions: Sounds are produced in an unusual manner, often making the spoken words hard to understand.

Additions: Extra sounds are inserted into words.

What is a Phonological Disorder?

Phonological disorders are characterized by difficulty in understanding and using the sound system of a language. This disorder is not about the inability to produce sounds, but rather about the incorrect application of the rules governing sound patterns in speech. It reflects a higher-level cognitive or linguistic difficulty, indicating that the brain's processing of sound patterns is somehow disrupted or delayed.

What are the Causes of Phonological Disorders?

The origins of phonological disorders often lie in developmental issues. They may manifest during the critical periods of speech and language acquisition in early childhood. Persistent middle ear infections causing temporary hearing loss in young children can also contribute to these disorders, as they impact the child's ability to hear and thus learn sounds correctly. In some cases, phonological disorders may be a part of a broader language impairment or linked to familial predispositions.

What are the Symptoms of Phonological Disorders?

The hallmark of a phonological disorder is the presence of patterned errors in speech. These patterns can be observed in the following ways:

Systematic Sound Substitutions: Replacing certain sounds consistently with others (e.g., replacing all 'k' sounds with 't' sounds).

Simplification of Sound Combinations: Omitting consonants in blends (e.g., "pane" for "plane").

Patterned Sound Errors: Following specific patterns in errors, like omitting all final consonants.

How do you Treat Articulation Disorders vs Phonological Disorders?

Treatment for articulation and phonological disorders requires distinct approaches tailored to the specific challenges of each condition.

Articulation Disorder Treatment:

Speech Therapy: Focused on teaching correct production of the problematic sounds.

Motor Exercises: To improve the coordination and movement of speech organs.

Practice and Repetition: Regular practice of sound production in different contexts.

Phonological Disorder Treatment:

Speech Therapy: Emphasizing the understanding and use of the language's sound system based on your specific needs.

Phonological Awareness Activities: To help recognize and correct sound patterns.

Parent and Caregiver Involvement: Teaching strategies to support speech development at home.

Both disorders benefit from early intervention and individualized treatment plans. Speech-language pathologists play a crucial role in diagnosing and treating these disorders, using a variety of techniques and strategies to improve speech.

If you or someone you know is exhibiting signs of articulation or phonological disorders, such as consistent speech sound errors or difficulty in understanding sound patterns, it is important to seek professional evaluation. Early intervention is key in addressing these disorders effectively.

Our team at Better Speech  is here to assess and address a wide range of speech sound disorders. For those uncertain about the next steps, our experienced Speech-Language Pathologists offer guidance and support for a journey toward clearer and more effective communication.

At Better Speech we know you deserve speech therapy that works. Our team specializes in diagnosing and treating a variety of speech and language disorders. Reach out to our skilled Speech-Language Pathologists for guidance on managing and improving communication skills. At Better Speech, we offer online speech therapy services convenient for you and tailored to your child's individual needs. Our services are affordable and effective - get Better Speech now.

Frequently Asked Questions

Can a child have both an articulation and a phonological disorder?

Yes, it's possible for a child to have both types of disorders simultaneously. This combination requires a comprehensive approach in therapy that addresses both individual sound production and overall sound pattern understanding.

About the Author

Aycen Zambuto

I’m a seasoned educator in speech therapy with over six years of experience helping people navigate challenges in communication. Throughout this time, I’ve found joy in guiding individuals through a variety of therapeutic journeys, from toddlers with apraxia to seniors with dysphonia.

I’m passionate about demystifying this complex world of speech therapy and helping readers around the globe achieve clear and effective communication. When I’m not writing about speech, you’ll often find me reading, traveling or spending time with friends and family.

Associates in Pediatric Therapy

Articulation Disorder vs. Phonological Disorder: What’s The Difference?

So, you just received your child’s speech and language evaluation, and they were diagnosed with an articulation or phonological disorder…but, what does that mean?

Articulation and phonological disorders fall under an umbrella term, speech sound disorders, which refers to any difficulty with producing or understanding sounds.

What is an articulation disorder?

Articulation refers to your child’s ability to produce individual sounds. With an articulation disorder, these sound errors are consistent no matter where the sound occurs in a word.

The different types of articulation errors include the following:

  • Substitutions: replacing one sound with another. EX: ‘broder’ for ‘brother’
  • Omissions: deleting a sound in a word. EX: ‘poon’ for ‘spoon’
  • Distortions: sounds that are produced in an unfamiliar way; lisps are common distortions. EX: ‘thun’ for ‘sun’
  • Additions: adding an extra sound to a word. EX: ‘buhlack’ for ‘black’

Articulation errors may also be attributed to hearing difficulties or structural abnormalities, such as missing teeth, cleft palate, etc. An oral mechanism exam can be performed by your speech language pathologist to determine any structural abnormalities. It is also recommended that your child get a hearing screening.

What is a phonological disorder?

A  phonological  disorder refers to difficulty understanding the sound system and speech rules. Children may be able to say a sound in some words but not in others. For example, a child may be able to say the sound ‘b’ in the word ‘bee’ but will leave off the ‘b’ at the end of the word ‘web.’

Children with a phonological disorder will demonstrate use of one or more phonological patterns:

  • Fronting: replacing ‘K’ and ‘G’ sounds with ‘T’ and ’D’
  • Gliding: Replacing ‘R’ and ‘L’ sounds with a ‘W’.
  • Final consonant deletion: leaving off the final sound in words even though they can produce the sound in the beginning or middle of other words

This is not an exhaustive list. There are lots of phonological processes. Some of these processes are normal as a child develops, but if they persist beyond a certain age it is recommended to seek speech therapy.

So, what does therapy look like?

Therapy looks different depending on the nature of the speech sound disorder. An articulation approach is motor based, meaning that the speech language pathologist will work with the child to help them coordinate their lips, tongue, jaw, and cheeks to produce their target sound. A phonological approach focuses on the pattern the child is using and teaches the child that different sounds convey different meanings. There are several different ways to target phonological disorders, and your speech language pathologist will determine the best fit for your child.

In some children,  articulation  and  phonological  disorders can occur at the same time. A speech language pathologist will do a speech sound analysis, which is a list of all sound errors and patterns used in your child’s speech.

There are some speech errors that are appropriate depending on your child’s age. You should talk to your child’s speech language pathologist or seek consultation if you have concerns about your child’s speech and their ability to be understood by others.

– Jordan Lamblin, M.S., CCC-SLP


The Center For Speech & Language Development

What is Speech Articulation Disorder?

Articulation refers to the way people produce speech sounds to make words to communicate. Occasionally, as kids learn to talk, they have a hard time creating certain phonemes or saying specific types of words. This might be a speech articulation disorder.

About 8% of young children experience some kind of speech articulation disorder or phonological disorder. There are a variety of therapy and treatment options to help improve their sound production for communication.

What is an articulation disorder?

Very specifically, articulation is the way people create sounds. It requires someone to put their lips, tongue, and jaw in the right position and use the right amount of airflow to create the correct sound. This takes countless nerves and muscles! 

Most children follow a similar pattern when learning to talk. It’s normal for young children to make mistakes and mispronounce words when they are very young. However, kids with an articulation disorder continue to have trouble pronouncing certain words or making specific sounds beyond the age where it’s considered a normal part of speech communication development.

Some children have a hard time with placement, timing, or the direction and speed of moving their jaw, lips, tongue, or airflow. This makes it hard for them to communicate clearly. 

Common symptoms of articulation disorders

Children with articulation disorders usually have problems making certain groups of sounds and forming particular words. They might add, change, or leave off some sounds when they talk. For example, they may not be able to make an ‘r’ sound. So, they might say “wabbit” instead of “rabbit.”

Some kids have trouble pronouncing words that start with two consonants. For example, they might say “cap” instead of “clap.” Kids may also omit a sound within a sound cluster, saying “pinkle” instead of “sprinkle.” Your little one might reduce syllables by saying ‘nana’ instead of ‘banana.’

These little variations are cute when kids are very young. But they can become a problem if your child keeps making those mistakes as they get older. This can lead to difficulty communicating, teasing from peers, and other issues.

It can be difficult to understand children with an articulation disorder, especially compared to other kids their age who are talking more clearly.

How to identify articulation disorders

Most kids say sounds and words incorrectly as they learn to talk. Some sounds come earlier and more quickly–like ‘p’ and ‘m.’ Other sounds are a bit harder to master, like ‘s,’ ‘r,’ and ‘l.’

Your pediatrician will probably use a timeline of children’s speech development to see how your child is progressing compared to typical milestones.

By the time kids are three years old, strangers should be able to understand their speech at least half the time. By age 5, kids should be able to pronounce most sounds correctly, though they might still have a little difficulty with sounds like l, s, r, v, sh, ch, or th.

If a pediatrician suspects a child might have a speech sound disorder (like articulation disorder), they usually refer the child and parents to a speech-language pathologist (SLP). The SLP will listen to how the child talks and makes sounds. They will also assess the way they move their lips, jaw, and tongue when speaking. During the evaluation, a speech pathologist may also test children’s hearing to ensure they do not have any hearing loss that might contribute to the problem. 

Once the SLP completes a thorough assessment, they can determine whether a child has speech articulation disorder or another issue that may be impacting their speech and language development. From there, a treatment plan with appropriate goals will be developed.

Treatments For Speech Articulation Disorder

Speech-language pathologists use a wide range of strategies to improve kids’ articulation skills. 

Kids with speech articulation disorders might benefit from articulation therapy. A speech therapist will help your child improve their speech sound production, working on oral motor strength, coordination, and motor planning. When necessary, the therapist will also provide cues for correct sound production in all word positions.

Articulation therapy usually happens with age-appropriate tasks that are related to the child’s specific needs. That way, the exercises are fun and positive experiences. 

In addition to working with a speech-language pathologist, there are many things parents and caregivers can do at home to help children overcome articulation disorders. These tasks will be provided by your SLP through a home program. Below are a few strategies that an SLP might suggest to assist your child in speech sound development when done together with therapy.

Please keep in mind – children can easily be overwhelmed if their speech is constantly corrected, especially if they aren’t able to produce the sound correctly without some oral motor assistance. And, some children can’t even hear the difference between how they say a word and the correct pronunciation, so they either don’t understand the correction or they continue practicing an incorrect model. 

Practice revision: Revision is when you repeat what your child said, except you pronounce the words correctly. Parents and caregivers can incorporate revision techniques throughout everyday life.

Model correct speech: As you play with your child and go about your daily routine, modeling correct speech and pronunciation is an excellent way to slip in speech lessons. Whether you’re playing a game, cooking, or going for a walk, practice identifying objects and pronouncing words correctly. This is both a language and speech strategy, not just a speech sound strategy.

Read books and play games together: Reading is a powerful tool to help every child develop good language and communication skills. Listening to a story is entertaining and allows your child to hear the correct articulation of words and sounds.

If you are concerned about your child’s speech or have questions about the exercises we’ve listed, please contact The Center for Speech and Language Development. Our therapists can assess your little one’s language development and create an effective treatment plan to help your child build healthy speech communication skills.

Speak Live Play

Articulation Disorder: Key Things to Know

Speech problems – articulation and phonological disorders

Articulation and phonology are crucial aspects of speech. An articulation disorder occurs when a child struggles to form speech sounds correctly, whereas a phonological disorder involves using sounds incorrectly in context. These disorders can hinder effective communication. Addressing these issues through therapy helps children improve their speech and overcome challenges.

As children grow, speech sounds develop in a predictable sequence, and it’s normal for them to make errors while honing their language skills. But if a child’s articulation or phonological abilities hinder their clarity compared to peers, it’s worth seeking an assessment from a qualified speech therapist, who can evaluate speech sounds, communication style, and overall intelligibility.

Signs and Symptoms Of Speech Disorders

Articulation disorders

Articulation, in simple terms, is the process of creating sounds. It involves the synchronized movements of various parts like the lips, tongue, teeth, palate (the roof of your mouth), and the respiratory system, mainly the lungs. These parts work together to make the sounds we use for speech. Specifically, they help us form words and communicate effectively.

It is important to note that articulation is not only about making sounds but also about the intricate coordination of nerves and muscles involved in speech. Sometimes, individuals may face challenges in this area, leading to speech disorders such as articulation disorders. These disorders can impact the clarity and accuracy of speech.

To maintain and improve articulation, it is crucial to engage in activities that strengthen the relevant muscles and promote coordination. This may involve incorporating exercises and techniques specifically tailored to engage the speech production muscles. By incorporating these practices into your daily routine, you can improve your ability to express yourself clearly and effectively.

If your child is experiencing an articulation disorder, they may be facing challenges with pronouncing certain sounds correctly. For example, they may have a lisp, causing the “s” sound to be pronounced like “th.” Additionally, they may struggle with producing specific sounds, such as substituting “wabbit” for “rabbit” due to difficulty with the “r” sound. Supporting your child in addressing these speech challenges is crucial for their communication development.

Phonological disorders

Phonology is the study of how sounds come together to make words. It’s like figuring out the building blocks of language: imagine you’re solving a puzzle, but instead of using pieces, you’re using sounds to create words. This helps us understand how we talk and why some people may have trouble speaking, as with phonological, articulation, or other speech disorders. By studying phonology, we can better understand these challenges and find ways to help individuals who struggle with them.

If your child has a phonological disorder, they may make predictable mistakes when saying certain sounds in words. For example, they might use the wrong sound in a word or use a sound in the wrong position, such as using the “d” sound instead of the “g” sound and saying “doe” instead of “go.” They might also leave out certain sounds in certain words: they can say “k” in “kite,” but they might leave it out of a word like “like,” saying “lie” instead. A phonological disorder affects the way they use sounds, but with the right help and support, they can improve their speech and overcome these challenges.

Difficulty with sounds and the sound rules of words is sometimes described as a phonemic awareness disorder, which is connected to language and reading difficulties, so getting the right treatment is important. Children with phonological disorders can be much harder to understand than children with only articulation issues, because they struggle with many sounds rather than just one.

When To See A Doctor

If you, or anyone else who regularly interacts with your child, such as their teacher, have any worries regarding your child’s speech, it is advisable to consult your GP or pediatrician to organize an evaluation with a speech therapist. Alternatively, you can schedule an appointment with a speech therapist directly, although please note that this may incur higher fees.

A speech therapist can identify the underlying cause and collaborate with you and your family to develop a tailored treatment plan. This may involve regular appointments and targeted exercises that can be practiced with your child at home. Rest assured, seeking professional guidance can greatly contribute to your child’s speech development.

Many children with articulation or phonological disorders can experience substantial improvement in their speech through effective speech therapy.

Brain Injuries

Articulation or phonological challenges are typically not directly linked to brain injury. Children and adults with an acquired brain injury may experience distinct speech pattern difficulties, which are often associated with dyspraxia or dysarthria . Additionally, some children and adults facing acquired brain injuries may encounter literacy and language challenges. Explore further insights on adults’ Dysarthria and Dyspraxia to enhance your understanding.

Key Points To Remember

  • Articulation and phonology are key aspects of speech sound production.
  • Children experiencing phonological disorders or phonemic awareness challenges may face difficulties in language and literacy development.
  • If you have concerns about your child’s speech, consult your GP to arrange an assessment with a qualified speech therapist.
  • Effective speech therapy can lead to significant improvements in the speech of children with articulation or phonological disorders.
  • Enhance your child’s speech development with appropriate intervention.

Speak Live Play Is Here To Help

Understanding articulation and phonological disorders is crucial for effective communication in children. If your child struggles with speech sound disorders, seek early intervention from expert therapists at Speak Live Play. With personalized treatment plans and targeted exercises, significant improvements can be achieved. Feel free to consult with a qualified speech therapist for a thorough evaluation and guidance in enhancing your child’s communication skills. Take the first step today with Speak Live Play.

– Angela Pilini


Articulation vs Phonological Disorder: Understanding Speech Delays

Speech development plays a vital role in communication and overall development. However, some individuals experience speech delays or speech sound disorders that can impact their ability to express themselves effectively.

In particular, it is crucial to differentiate between articulation and phonological disorders in order to provide appropriate intervention.

In this article, we will explore the characteristics, causes, assessment, and treatment of speech sound disorders . By understanding the distinctions between the two, we can better support individuals with speech delays and facilitate their journey toward improved communication skills.

  • Causes and Risk Factors
  • Assessment and Diagnosis
  • Treatment and Intervention
  • Intervention Strategies for Articulation and Phonological Disorders
  • Support and Resources for Individuals with Articulation and Phonological Disorders
  • Beyond Baby Talk: From Speaking to Spelling: A Guide to Language and Literacy Development for Parents and Caregivers by Kathy Hirsh-Pasek and Roberta Michnick Golinkoff
  • It Takes Two to Talk: A Practical Guide for Parents of Children with Language Delays by Lynn Koegel, Ph.D., and Patricia Schreibman, Ph.D.
  • 1. How can I differentiate between a child’s normal speech development and a potential speech disorder?
  • 2. How can parents and caregivers support a child with articulation or phonological disorders?
  • Navigating Speech Challenges

What Are Articulation Disorders?

An articulation disorder refers to difficulties in producing speech sounds accurately due to problems with the coordination of articulatory muscles.

Individuals with articulation disorders may exhibit difficulties in pronouncing specific sounds or sound patterns. This can lead to unintelligible speech, affecting their oral communication abilities.

Articulation disorders can arise from various factors. Some may have a genetic predisposition to speech sound errors, while others may experience delays in their motor skill development, which then affects their ability to coordinate the muscles required for speech sound production.

Environmental factors, such as chronic ear infections or exposure to limited language input, can also contribute to articulation difficulties.

Speech-language pathologists (SLPs) play a crucial role in assessing and diagnosing articulation disorders. Through comprehensive evaluations, including speech sound assessments and analysis of speech samples, SLPs can determine the specific nature and severity of the articulation disorder.

These evaluations may involve standardized tests, informal observations, and interviews with parents or caregivers.

Early intervention is essential for addressing articulation disorders. SLPs employ various techniques and approaches tailored to the individual’s needs. Therapy may focus on improving specific speech sounds through targeted exercises and practice.

Additionally, SLPs often collaborate with parents and caregivers to provide strategies for reinforcing speech skills in daily activities. Consistent practice and support at home can significantly enhance the effectiveness of speech therapy.

What Are Phonological Disorders?

A phonological disorder involves difficulties with the phonological system, which encompasses the rules and patterns that govern speech sounds in a language.

Unlike articulation disorders that focus on individual sounds, phonological disorders affect the overall sound system, resulting in challenges with phonological patterns, syllable structures, and phonological processes. This can significantly impact intelligibility and the ability to produce age-appropriate speech.

Phonological disorders can stem from various factors. Language-based difficulties, such as delays in language acquisition or limited exposure to a rich linguistic environment, can contribute to the development of phonological disorders.

Cognitive factors, such as difficulties with auditory processing or memory, may also play a role. Genetic and familial influences can increase the likelihood of phonological disorders in some cases.

SLPs employ comprehensive assessments to evaluate and diagnose phonological disorders. These assessments involve analyzing the child’s speech sound patterns, phonological processes, and overall intelligibility.

Various tools, including standardized tests, speech samples, and parent interviews, are utilized to gain a holistic understanding of the individual’s phonological abilities and difficulties.

Intervention for phonological disorders focuses on addressing the underlying phonological patterns and processes. SLPs work with individuals to develop awareness and use of correct sound patterns, emphasizing the generalization of skills across different words and contexts.

Language-focused interventions may also be incorporated to enhance overall communication abilities. Collaborative efforts between SLPs, parents, educators, and other professionals are essential for facilitating consistent practice and supporting the generalization of skills outside of therapy sessions.


Articulation vs Phonological Disorders

Accurate diagnosis is vital for providing appropriate intervention. While these speech sound disorders may share some overlapping features, understanding the distinctions between the two is crucial.

Articulation disorders primarily focus on difficulties with individual sounds, while phonological disorders encompass broader challenges with sound patterns and processes. A comprehensive assessment conducted by a qualified SLP can help differentiate between the two, guiding the development of an effective treatment plan.

Intervention strategies for articulation and phonological disorders are tailored to each individual’s unique needs. SLPs create individualized treatment plans based on the specific difficulties identified during the assessment process.

Speech therapy activities often focus on improving speech intelligibility and promoting age-appropriate speech production. Collaboration with parents, educators, and other professionals is also vital for implementing strategies in various settings and fostering consistent progress.

Individuals with articulation and phonological disorders benefit from a supportive network. It is important to leverage available resources to access information, share experiences, and seek professional help when needed.

For instance, speech therapy resources, both online and offline, provide valuable information and guidance for individuals and their families. Online communities and organizations dedicated to speech delays offer support, advice, and opportunities for connecting with others facing similar challenges.

Book Recommendations for Supporting Your Child’s Speech

Beyond Baby Talk: From Speaking to Spelling: A Guide to Language and Literacy Development for Parents and Caregivers

This book is a comprehensive guide to helping children develop their language and literacy skills. It covers everything from the early stages of babbling and pointing to the more advanced skills of reading and writing.

The authors provide clear and concise explanations of the research on child development, as well as practical advice on how to promote language and literacy learning at home.

How This Book Can Help

One of the strengths of the book is its emphasis on the importance of early exposure to language. The authors argue that children who are exposed to a rich language environment from a young age are more likely to develop strong language and literacy skills.

They provide a number of suggestions for how parents and caregivers can create a language-rich environment. These include reading to children, talking to them about their experiences, and singing songs and rhymes.

Another strength of the book is its focus on the importance of play. The authors argue that play is essential for language and literacy development. They provide a number of suggestions for how parents and caregivers can use play to promote language and literacy learning, such as playing with blocks, dress-up, and make-believe.

Overall, Beyond Baby Talk is an excellent resource for parents and caregivers who want to help their children develop their language and literacy skills. It is well-written, informative, and practical. We highly recommend it to anyone interested in helping their child succeed in school.

It Takes Two to Talk: A Practical Guide for Parents of Children With Language Delays

It Takes Two to Talk is a comprehensive and practical guide for parents of children with language delays. The book is based on the principles of Applied Behavior Analysis (ABA), a scientific approach to teaching that has been shown to be effective in helping children with a variety of developmental delays.

The book is divided into three parts:

  • Part I provides an overview of language development and discusses the signs and symptoms of language delays.
  • Part II presents a step-by-step guide to teaching children with language delays. The guide covers a variety of topics, including joint attention, requesting, following directions, and using language in different contexts.
  • Part III provides resources for parents, including a list of books, websites, and organizations that can provide additional support.

This book is a valuable resource for parents of children with language delays. It is well-written and easy to follow. The authors provide clear and concise explanations of the principles of ABA and how they can be applied to teaching children with language delays.

The book also includes a variety of activities and exercises that parents can use to help their children learn.

It Takes Two to Talk is an excellent resource for parents looking for help with their child’s language development. It is comprehensive, practical, and easy to follow. We highly recommend it to any parent concerned about their child’s language skills.

FAQs About Articulation and Phonological Disorders

1. How can I differentiate between a child’s normal speech development and a potential speech disorder?

It is important to monitor a child’s speech development milestones to identify any potential speech disorders. While variations in speech development are common, certain signs may indicate a need for further evaluation.

If a child consistently struggles with producing a wide range of speech sounds accurately, or if their speech remains unclear and difficult to understand beyond the expected age, it may suggest a speech disorder.

Additionally, if the child’s speech significantly deviates from their peers or if they experience frustration or difficulty communicating, it is advisable to consult a speech-language pathologist (SLP) for a comprehensive evaluation.

2. How can parents and caregivers support a child with articulation or phonological disorders?

Parents and caregivers play a crucial role in supporting children with articulation or phonological disorders. Here are some tips for providing support:

Create a language-rich environment.

Expose the child to a variety of language experiences, including reading books, engaging in conversations, and encouraging verbal expression.

Model correct speech sounds.

Emphasize clear and accurate speech during conversations and provide opportunities for the child to hear and imitate correct pronunciation.

Practice speech exercises and techniques.

Work closely with a speech-language pathologist (SLP) to learn specific exercises and techniques that can help improve speech sounds and patterns. Consistency in practicing these exercises at home can reinforce progress made during therapy sessions.

Foster a positive and supportive atmosphere.

Encourage the child’s efforts and provide constructive feedback. Celebrate small achievements and provide reassurance during challenging moments.

Collaborate with professionals.

Maintain open communication with the child’s SLP and other professionals involved in their care. Seek guidance, ask questions, and actively participate in the child’s therapy sessions to reinforce progress outside of therapy.

Navigating Speech Challenges

Distinguishing between articulation and phonological disorders is essential for effective intervention and support. By understanding the characteristics, causes, assessment, and treatment approaches for both disorders, we can better assist individuals with speech delays on their journey toward improved oral communication skills.

Early identification, accurate diagnosis, and collaborative efforts between professionals, parents, and caregivers play a vital role in fostering positive outcomes for individuals with articulation and phonological disorders. With the right support and resources, individuals with speech delays can overcome their challenges and achieve successful communication.


Articulation and Phonological Disorders

What are articulation and phonological disorders?

Articulation and phonological disorders are difficulties producing speech sounds or groups of speech sounds that persist beyond the typical period of speech development and/or make a person’s speech difficult for others to understand.

Articulation disorders usually include one or two speech sound errors such as

  • lisps and ‘s’ and ‘z’ distortions
  • substitutions for ‘r’ and ‘er’
  • substitutions for ‘th,’ ‘l,’ ‘sh,’ and ‘ch’

Phonological disorders include multiple sound errors that typically cause problems with the intelligibility of the child’s speech and can include

  • deleting final consonants from a word
  • deleting consonants from a blend of two or three consonants (saying bow for blow)
  • replacing “continuing” sounds (f, s) with “stop” sounds (p, t)
  • replacing “back” sounds (k, g) with “front” sounds (t, d)

What is the evaluation procedure?

A comprehensive evaluation will be conducted to determine what aspects of communication have been affected. This evaluation is scheduled for two hours. In addition, all potential clients are asked to complete a case history form to assist in preparation of the UCF evaluation.

If you already have received a speech and language evaluation at another location within the past three months, please send us the report with your case history form. This will allow us to determine if or what additional diagnostics need to be completed.

In addition, we will need any relevant medical reports and reports from previous speech/language evaluations, as well as radiological reports (e.g., swallow study reports or written results of brain scans).

What type of treatment do we provide?

Therapeutic programs are individually developed to improve each child’s speech intelligibility. Therapy includes evidence-based approaches shown to change the speech sound production of preschool and school-age children. Therapy for these disorders may include the following:

  • discrimination training
  • traditional speech sound production training
  • auditory bombardment of target sounds in context
  • cycles training
  • minimal-pair training

A more intensive intervention program is sometimes needed to correct misarticulated sounds. The Communication Disorders Clinic is designed to provide more intensive and individualized treatment options. The frequency of therapeutic services ranges from once a week to multiple sessions per week.


Hear from our clients and their families.

Steven – A client with developmental disorders uses his communication skills in the community with success.


Steven – “I order a hamburger, french fries, fruit and two waters.”

Attention-Deficit/Hyperactivity Disorder in Children and Teens: What You Need to Know

Have you noticed that your child or teen finds it hard to pay attention? Do they often move around during times when they shouldn’t, act impulsively, or interrupt others? If such issues are ongoing and seem to be impacting your child’s daily life, they may have attention-deficit/hyperactivity disorder (ADHD).

ADHD can impact the social relationships and school performance of children and teens, but effective treatments are available to manage the symptoms of ADHD. Learn about ADHD, how it’s diagnosed, and how to find support.

What is ADHD?

ADHD is a developmental disorder associated with an ongoing pattern of inattention, hyperactivity, and/or impulsivity. Symptoms of ADHD can interfere with daily activities and relationships. ADHD begins in childhood and can continue into the teen years and adulthood.

What are the symptoms of ADHD?

People with ADHD experience an ongoing pattern of the following types of symptoms:

  • Inattention—having difficulty paying attention
  • Hyperactivity—having too much energy or moving and talking too much
  • Impulsivity—acting without thinking or having difficulty with self-control

Some people with ADHD mainly have symptoms of inattention. Others mostly have symptoms of hyperactivity-impulsivity. Some people have both types of symptoms.

Signs of inattention may include:

  • Not paying close attention to details or making seemingly careless mistakes in schoolwork or during other activities
  • Difficulty sustaining attention in play and tasks, including conversations, tests, or lengthy assignments
  • Trouble listening closely when spoken to directly
  • Finding it hard to follow through on instructions or to finish schoolwork or chores, or starting tasks but losing focus and getting easily sidetracked
  • Difficulty organizing tasks and activities, such as doing tasks in sequence, keeping materials and belongings in order, managing time, and meeting deadlines
  • Avoiding tasks that require sustained mental effort, such as homework
  • Losing things necessary for tasks or activities, such as school supplies, books, eyeglasses, and cell phones
  • Being easily distracted by unrelated thoughts or stimuli
  • Being forgetful during daily activities, such as chores, errands, and keeping appointments

Signs of hyperactivity and impulsivity may include:

  • Fidgeting and squirming while seated
  • Getting up and moving around when expected to stay seated, such as in a classroom
  • Running, dashing around, or climbing at inappropriate times or, in teens, often feeling restless
  • Being unable to play or engage in hobbies quietly
  • Being constantly in motion or on the go and/or acting as if driven by a motor
  • Talking excessively
  • Answering questions before they are fully asked or finishing other people’s sentences
  • Having difficulty waiting one’s turn, such as when standing in line
  • Interrupting or intruding on others, for example, in conversations, games, or activities

How is ADHD diagnosed in children and teens?

To be diagnosed with ADHD, symptoms must have been present before the age of 12. Children up to age 16 are diagnosed with ADHD if they have had at least six persistent symptoms of inattention and/or six persistent symptoms of hyperactivity-impulsivity present for at least 6 months. Symptoms must be present in two or more settings (for example, at home or school or with friends or relatives) and interfere with the quality of social or school functioning.

Parents who think their child may have ADHD should talk to their health care provider. Primary care providers sometimes diagnose and treat ADHD. They may also refer individuals to a mental health professional, such as a psychiatrist or clinical psychologist, who can do a thorough evaluation and make an ADHD diagnosis. Stress, sleep disorders, anxiety, depression, and other physical conditions or illnesses can cause similar symptoms to those of ADHD. Therefore, a thorough evaluation is necessary to determine the cause of the symptoms.

During an evaluation, the health care provider or mental health professional may:

  • Examine the child’s mental health and medical history.
  • Ask permission to talk with family members, teachers, and other adults who know the child well and see them in different settings to learn about the child’s behavior and experiences at home and school.
  • Use standardized behavior rating scales or ADHD symptom checklists to determine whether a child or teen meets the criteria for a diagnosis of ADHD.
  • Administer psychological tests that look at working memory, executive functioning (abilities such as planning and decision-making), visual and spatial skills, or reasoning skills. Such tests can help detect psychological or cognitive strengths and challenges as well as identify or rule out possible learning disabilities.

Does ADHD look the same in all children and teens?

ADHD symptoms can change over time as a child grows and moves into the preteen and teenage years. In young children with ADHD, hyperactivity and impulsivity are the most common symptoms. As academic and social demands increase, symptoms of inattention become more prominent and begin to interfere with academic performance and peer relationships. In adolescence, hyperactivity often becomes less severe and may appear as restlessness or fidgeting. Symptoms of inattention and impulsivity typically continue and may cause worsening academic, organizational, and relationship challenges. Teens with ADHD also are more likely to engage in impulsive, risky behaviors, including substance use and unsafe sexual activity.

Inattention, restlessness, and impulsivity continue into adulthood for many individuals with ADHD, but in some cases, they may become less severe and less impairing over time.

What causes ADHD?

Researchers are not sure what causes ADHD, although many studies suggest that genes play a large role. Like many other disorders, ADHD probably results from a combination of factors. In addition to genetics, researchers are looking at possible environmental factors that might raise the risk of developing ADHD and are studying how brain injuries, nutrition, and social environments might play a role in ADHD.

What are the treatments for ADHD in children and teens?

Although there is no cure for ADHD, currently available treatments may help reduce symptoms and improve functioning. ADHD is commonly treated with medication, education or training, therapy, or a combination of treatments.

Medication

Stimulants are the most common type of medication used to treat ADHD. Research shows these medications can be highly effective. Like all medications, they can have side effects, and a health care provider should monitor how the individual responds to the medication. Nonstimulant medications are also available. Health care providers may sometimes prescribe antidepressants to treat children with ADHD, although the Food and Drug Administration (FDA) has not approved these medications specifically for treating ADHD. Sometimes an individual must try several different medications or dosages before finding what works for them.

For general information about stimulants and other medications used to treat mental disorders, see NIMH's Mental Health Medications webpage. The FDA website has the latest medication approvals, warnings, and patient information guides.

Psychotherapy and Psychosocial Interventions

Several psychosocial interventions have been shown to help children and their families manage symptoms and improve everyday functioning.

  • Behavioral therapy aims to help a person change their behavior. It might involve practical assistance, such as help organizing tasks or completing schoolwork, learning social skills, or monitoring one’s own behavior and receiving praise or rewards for acting in a desired way.
  • Cognitive behavioral therapy helps a person to become more aware of attention and concentration challenges and to work on skills to improve focus.
  • Family and marital therapy can help family members learn how to handle disruptive behaviors, encourage behavior changes, and improve interactions with children.

All types of therapy for children and teens with ADHD require parents to play an active role. Psychotherapy that includes only individual treatment sessions with the child (without parent involvement) is not effective for managing ADHD symptoms and behavior. This type of treatment is more likely to be effective for treating symptoms of anxiety or depression that may occur along with ADHD.

For general information about psychotherapies used for treating mental disorders, see NIMH’s Psychotherapies webpage.

Parent Education and Support

Mental health professionals can educate the parents of a child with ADHD about the disorder and how it affects a family. They also can help parents and children develop new skills, attitudes, and ways of relating to each other. Examples include parenting skills training, stress management techniques for parents, and support groups that help parents and families connect with others who have similar concerns.

School-Based Programs

Children and adolescents with ADHD typically benefit from classroom-based behavioral interventions and/or academic accommodations. Interventions may include behavior management plans or teaching organizational or study skills. Accommodations may include preferential seating in the classroom, reduced classwork load, or extended time on tests and exams. The school may provide accommodations through what is called a 504 Plan or, for children who qualify for special education services, an Individualized Education Plan (IEP).

To learn more about special education services and the Individuals with Disabilities Education Act (IDEA), visit the U.S. Department of Education's IDEA website.

Complementary Health Approaches

Unlike specific psychotherapy and medication treatments that are scientifically proven to improve ADHD symptoms, complementary health approaches for ADHD, such as natural products, do not qualify as evidence-supported interventions. For more information, visit the National Center for Complementary and Integrative Health website.

How can I find help for my child?

The Substance Abuse and Mental Health Services Administration (SAMHSA) provides the Behavioral Health Treatment Services Locator, an online tool for finding mental health services and treatment programs in your state. For additional resources, visit NIMH's Help for Mental Illnesses webpage or see the NIMH Children and Mental Health fact sheet.

If you or someone you know is in immediate distress or is thinking about hurting themselves, call the National Suicide Prevention Lifeline toll-free at 1-800-273-TALK (8255). You also can text the Crisis Text Line (HELLO to 741741) or use the Lifeline Chat on the National Suicide Prevention Lifeline website.

How can I help my child at home?

Therapy and medication are the most effective treatments for ADHD. In addition to these treatments, other strategies may help manage symptoms. Encourage your child to:

  • Get regular exercise, especially when they seem hyperactive or restless.
  • Eat regular, healthy meals.
  • Get plenty of sleep.
  • Stick to a routine.
  • Use homework and notebook organizers to write down assignments and reminders.
  • Take medications as directed.

In addition, you can help your child or teen by being clear and consistent and providing rules they can understand and follow. Also, keep in mind that children with ADHD often receive and expect criticism. Look for good behavior, praise it, and provide rewards when rules are followed.

What should I know about my child participating in clinical research?

Clinical trials are research studies that look at new ways to prevent, detect, or treat diseases and conditions. Although individuals may benefit from being part of a clinical trial, participants should be aware that the primary purpose of a clinical trial is to gain new scientific knowledge so others may receive better help in the future.

Researchers at NIMH and around the country conduct many studies with patients and healthy volunteers. Clinical trials for children are designed with the understanding that children and adults respond differently, both physically and mentally, to medications and treatments. Talk to your health care provider about clinical trials, their benefits and risks, and whether one is right for your child. For more information, visit NIMH's clinical trials webpage.

Where can I find more information on ADHD?

The Centers for Disease Control and Prevention (CDC) is the nation’s leading health promotion, prevention, and preparedness agency. You can find information on CDC's website about ADHD symptoms, diagnosis, and treatment options, as well as additional resources for families and providers.

The information in this publication is in the public domain and may be reused or copied without permission. However, you may not reuse or copy images. Please cite the National Institute of Mental Health as the source. Read our copyright policy to learn more about our guidelines for reusing NIMH content.

For More Information

MedlinePlus (National Library of Medicine) (en español); ClinicalTrials.gov (en español)

U.S. Department of Health and Human Services, National Institutes of Health. NIH Publication No. 21-MH-8159. Revised 2021.


Georgetown Public Hospital partners with Smile Train Guyana for Cleft Palate Speech Therapy Training


GEORGETOWN, July 30 (GPHC) – Last week, from July 21st to 26th, the Georgetown Public Hospital Corporation (GPHC) in collaboration with Smile Train Guyana successfully hosted an intensive Cleft Palate Speech Therapy Training for local speech therapists and Rehabilitation Assistants. This initiative aimed to enhance the skills of professionals in diagnosing and treating cleft palate speech disorders across Guyana.

The training saw participation from representatives of the David Rose School for the Handicapped, Palm’s Rehabilitation Clinic, Diamond Special Need Speech Therapy and Audiology Centre, Ministry of Education Diagnostic Centre, Ptolemy Reid Center, Fort Wellington Hospital, and Lethem Regional Hospital. A total of 9 therapists and 4 Rehabilitation Assistants were trained, including 1 Rehabilitation Assistant and 2 Speech Language Therapists based at the Speech Therapy Department, GPHC.

In March 2024, four representatives from Guyana attended a similar training in Barbados. That experience inspired the idea of inviting Dr. Catherine Crowley, Speech Language Pathologist and Professor of Practice at Teachers College, Columbia University, New York City, to Guyana. Dr. Crowley, who is also a member of Smile Train’s Global Medical Advisory Board, led the training sessions, providing invaluable expertise and guidance.

The training not only focused on building the capacity of local professionals to diagnose and treat cleft palate speech disorders but also aimed to empower these newly trained therapists to further train others in regions that could not attend the session. Rehabilitation Assistants, strategically placed at various health facilities across all ten regions of Guyana, perform essential physical, occupational, and speech therapy services. Currently, regions 4, 5, and 10 have dedicated speech language therapists.

During the training, ten patients who had previously undergone cleft palate surgeries, along with their parents, participated and received two daily therapy sessions, each lasting 45 minutes. Remarkably, two patients were discharged after demonstrating significant competency in their therapy sessions. The impact of this training extends beyond children who have benefitted from cleft palate surgeries. Adults who have lost speech capacity due to conditions such as tracheostomy or swallowing disorders will also benefit from the expertise of the newly trained therapists and assistants. Speech therapy is crucial for patients with cleft palate repairs to help them utilize their new palate to produce correct sounds and overcome habitual errors caused by the previous condition.

Dr. Crowley emphasized the importance of speech therapy following surgical interventions, stating, “While the surgical repairs are life-changing, patients need support to use their new palate effectively, which is where speech therapy plays a vital role.” During the training, Dr. Crowley was supported by ten graduate students from Columbia University, who volunteered to assist with the sessions.

Ideally, patients who have undergone cleft repairs should receive quality speech therapy for 12 weeks to a year to achieve optimal speech improvement.

This collaborative effort between Smile Train Guyana and Georgetown Public Hospital Corporation underscores a shared commitment to improving the lives of individuals with cleft palate conditions and ensuring that both children and adults in Guyana receive the necessary support to enhance their speech and overall quality of life.

SOURCE: Georgetown Public Hospital Corporation (GPHC) Press Release

