Bawayan and Brown: Language sample analysis consideration and use: a survey of school-based speech-language pathologists

Abstract

Purpose

The purpose of the current study was to extend previous knowledge of the language sample analysis (LSA) practices of school-based speech-language pathologists (SLPs) by gathering information on the processes, procedures, clinical judgments, and decisions that school-based SLPs make when conducting an LSA.

Methods

School-based SLPs responded to a survey on current practices, perceived knowledge, knowledge of current recommended practices, and education and training in LSA.

Results

Results indicated that the majority of school-based SLPs (90%) used LSA during evaluations to supplement information provided by norm-referenced tests and as a naturalistic language measure. However, the results also demonstrated a lack of knowledge of current recommended practices: respondents, on average, answered only 50% of the knowledge questions correctly.

Conclusions

Participant responses to knowledge and practice questions indicated a continued gap in current LSA practice, including the context of collected samples, the transcription and recording process, and the analysis measures completed. Additionally, the results indicated a need to look closely at undergraduate and graduate curricula on LSA, as respondents reported that the largest share of their education and training came from these programs.

INTRODUCTION

During the diagnostic evaluation process, school-based SLPs make diagnostic judgments using decision-making skills to determine the presence or absence of a disorder using a variety of assessment measures. Diagnostic evaluations should include multiple measurements and a variety of assessment tools provided in the child’s native language that are not discriminatory or culturally biased and that provide information relevant to identifying the educational need for special education services [1–3]. Gathering specific information from both formal and informal measures provides the most reliable measure of the child’s language performance while increasing functional relevance [4,5]. The use of formal norm-referenced and criterion-referenced tests in conjunction with quality informal assessment measures represents a child’s true language performance for accurate disability diagnosis, eligibility for service delivery, and a functionally relevant treatment plan [6–8].
SLPs have the option to choose from a number of different language assessments to provide information supplemental to the outcomes of norm-referenced language tests. Language sampling is arguably the most historically and widely researched informal language assessment measure [4,9,10]. Language samples are versatile informal assessments through which an SLP elicits spontaneous language in a variety of contexts (e.g., conversation, free play, narration, expository discourse) to collect information and measure performance across language domains. SLPs analyze language samples using a variety of microstructural and macrostructural measures such as grammaticality, mean length of utterance (MLU), number of different words (NDW), story grammar elements, turn length, and continuity to describe a child’s language performance at the sentence or discourse level [6,11,12]. Measurements of language complexity and productivity collected in naturalistic settings provide SLPs with information distinguishing typical from disordered language. Specific weaknesses in language identified through the derived measures can be compared to specific criteria such as grade- and age-level standards or to normative data (e.g., preschool MLU normative data) to determine the presence of a disorder and eligibility for services [13–15].
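To make two of these microstructural measures concrete, the following is a minimal Python sketch (not from the article) computing MLU and NDW over pre-segmented utterances. Established LSA tools such as SALT count morphemes rather than words, so the word-based MLU (often written MLU-w) used here is a simplifying assumption, and the utterances are invented.

```python
# Minimal sketch of two common microstructural measures over a toy sample.
# Assumption: utterances are already segmented and transcribed; MLU is
# computed over words (MLU-w) rather than morphemes for simplicity.
utterances = [
    "the dog is running",
    "he wants the ball",
    "look at that big dog",
]

tokens = [u.lower().split() for u in utterances]

# MLU-w: mean number of words per utterance
mlu_w = sum(len(t) for t in tokens) / len(tokens)

# NDW: number of different words across the whole sample
ndw = len({w for t in tokens for w in t})

print(f"MLU-w = {mlu_w:.2f}, NDW = {ndw}")  # MLU-w = 4.33, NDW = 11
```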
Research on language samples provides information on the reliability of the description of a child’s language performance [16–19], identification of differences in the morphology of children with language disorders [13], inspection of the reliability of outcome measures based on age level and linguistic characteristics [18,20], and development of normative data [21–23]. Outside of norm-referenced tests, LSA remains one of the most reliable evidence-based language assessment measures when specific processes and procedures are followed [5,15,18,20–25].

Current Language Sample Analysis Practices

Despite a robust evidence base for LSA, current practice trends demonstrate variability in the use of language samples. Four surveys have explored the LSA practices of school-based SLPs. Hux et al. [26] were the first to survey school-based SLPs, from the Midwestern United States, on their LSA practices and attitudes towards the completion and reliability of language samples. Results of the survey indicated that respondents used a variety of contexts to elicit language samples. Conversational language samples were indicated most frequently (86%), followed by story-retell tasks (54%) and story-generation tasks (34%). Almost half (49%) of the respondents reported using a self-designed analysis protocol. When asked to indicate frequently used analysis measures, 81% reported using mean length of utterance (MLU) and 80% reported using qualitative language descriptors. The most frequently identified reason for selecting preferred analysis measures was familiarity and comfort level. This survey provided information on the types of elicitation contexts and analysis methods; however, it did not determine the percentage of SLPs who conducted language samples [26].
Kemp and Klee [27] surveyed 253 SLPs across the United States who worked primarily with preschool-aged children to determine their practice patterns and use of language samples. Eighty-five percent of the survey respondents reported using LSA during clinical assessments for language impairment [27]. The median number of language samples collected per year was 25 (range 12–47). The respondents predominantly favored self-designed analysis procedures over packaged procedures, in contrast to the results of Hux et al. [26]. Frequent responses on general procedures included indices such as mean length of utterance (MLU), sentence structure, semantics, and developmental norms. Of those respondents who did not use LSA, reported barriers included the time required to collect, transcribe, and analyze samples and a lack of training and knowledge about language samples.
Westerveld and Claessen [28] surveyed the language sample practices of 257 pediatric SLPs in Australia using a questionnaire based on Hux et al. [26] and Kemp and Klee [27]. Results of the survey were similar to those completed in the United States, with 91% of respondents completing language samples [28]. The context of the sample depended on the age of the student, with conversational samples used more frequently with younger students. A surprising and concerning result was that the most frequently used context for high school students was personal narratives rather than expository discourse, which would be considered more appropriate [29,30]. Additionally, over half of the respondents followed recommended practices by recording, listening to, and transcribing language samples. When asked about analysis methods, over half of the respondents used MLU and structural stages such as Brown’s morphemes. The results of Westerveld and Claessen’s [28] survey confirmed and expanded the results of Kemp and Klee [27] related to the percentage of SLPs who conduct language samples, the variety of elicitation contexts, and analysis methods.
Pavelko et al. [22] aimed to update the literature on the current practices of school-based SLPs and the use of LSA in the United States. This large-scale survey extended Kemp and Klee’s survey by focusing on school-based SLPs serving a wider age range of students. Pavelko et al. [22] included responses from 1,336 school-based SLPs across the United States. The most notable change in findings from Kemp and Klee [27] and Westerveld and Claessen [28] to the Pavelko et al. [22] survey was a reduction in the percentage of respondents who reported using LSA during a given school year, from 85% to 67%. Respondents also indicated analyzing fewer than 10 language samples a year [31]. Considering that, on average, over half of the students on US school-based SLP caseloads have goals addressing language needs across the domains of morphology, syntax, semantics, and pragmatics, and considering the time SLPs spend completing diagnostic evaluations each week, SLPs likely complete more than 10 language evaluations a year [32]. Therefore, the finding that fewer than 10 language samples are collected a year is concerning.
The surveys identified barriers to the completion of LSA: lack of time to collect and complete language samples, insufficient resources, and inadequate training and expertise. Pavelko et al. [22] extended the knowledge of current barriers to practice by inquiring about willingness to attend training on theories and principles of LSA. Seventy-one percent of respondents were willing to attend training and specified the areas of need as transcribing samples, analyzing samples, interpreting the results of samples, and goal planning using the information provided in a sample [31]. The most frequently selected areas of training were the analysis (84%) and interpretation (83%) of language samples. These results demonstrate that SLPs identify areas of weakness in analysis and interpretation, including the context of analysis, the different analysis measures that could be used, and the ability to interpret the data within the larger picture of a child’s language, and that they are open to improving their skills. Consistent barriers to implementation of LSA remain, specifically the time it takes to collect and analyze samples and limited knowledge in analyzing and interpreting information collected during a language sample. A contributing factor to the barriers in analyzing and interpreting data from language samples is the gap in SLPs’ knowledge of the language forms elicited in different contexts. Additional training for SLPs in the analysis and interpretation of data from a language sample, including the appropriateness of contexts for different ages, would improve clinical assessment and treatment planning.
Research demonstrates that SLPs are more likely to use their own clinical expertise, such as experience with certain analysis measures, when selecting analysis measures for language samples, instead of using evidence-based protocols and procedures [27,31]. However, the validity of self-designed protocols and of such selections of analysis measures remains unknown. Clinical expertise used during diagnostic decision making draws on past experiences and knowledge of language and disorders. However, clinical expertise divorced from the scientific literature introduces risks such as biases, inconsistency in identification, and the use of ineffective or inefficient procedures. When clinical expertise is relied on more heavily than current research-based practice guidelines, there is a risk of erroneously identifying language disorders [31,33]. Gathering information on school-based SLPs’ knowledge of, and confidence in, language sample transcription and analysis addresses the barriers to implementation in practice. Understanding their LSA knowledge, implementation knowledge, and confidence in their processes regarding collection, transcription, and analysis will help determine specific training needs.
The purpose of the current survey was to extend previous knowledge of the LSA practices of school-based SLPs. Through survey methodology, we gathered information on the processes, procedures, and clinical judgments current school-based SLPs make when conducting an LSA. The current study aimed to answer the following questions:
  1. What training and education do school-based SLPs receive on evidence-based LSA procedures?

  2. How often do school-based SLPs conduct LSAs?

  3. What types of contexts (e.g., narrative, conversational) do SLPs use when conducting LSAs?

  4. What procedures do SLPs use when conducting and analyzing LSAs?

  5. How do school-based SLPs interpret the data and results collected during LSA?

  6. What is the relationship between SLPs’ knowledge of LSA recommended practices and frequency of use?

METHODS

Instrument Development

The survey, entitled Language Sample Analysis Practices of the School-Based SLP, included 50 questions addressing specific LSA practices. This survey was designed to extend the scope of previous LSA surveys by exploring methods and procedures of analysis. Specifically, it addressed knowledge of procedures, knowledge of and comfort level with analysis measures across different language sample contexts, and descriptions of education or instruction previously received on LSA. Questions on demographics, workplace characteristics, frequency of conducting language samples, and general language sample practices (12 of the 50 questions) were adapted from survey questions used in Pavelko et al. [22] and Kemp and Klee [27]. The questions addressing the methods SLPs use when analyzing language samples and knowledge of current LSA recommended practices were based on a current review of the literature. The survey used a variety of multiple-choice, rating-scale, and open-ended questions to explore current practices. A panel of four experts with an average of 18 years of experience in the fields of speech-language pathology and education reviewed the content and format of the survey. Changes were made to the survey based on the feedback provided by the expert panel. Pilot testing with two school-based SLPs was completed after the panel reached consensus on the survey instrument. These SLPs reported completing the survey in 25 minutes and did not indicate issues with question clarity or relevance.
The survey was separated into four sections. The first section, composed of eight close-ended, multiple-choice questions, explored demographics and caseload characteristics, including years practicing, years as a school-based SLP, work setting, and types of informal language assessment used. In the second section, respondents rated their level of knowledge related to the collection, transcription, and analysis of conversational and narrative language samples. Knowledge was rated on a 10-point scale, with 1 indicating being unfamiliar with the topic and 10 indicating the ability to give a detailed explanation of the topic. Additionally, respondents rated their level of agreement with knowledge statements about the administration of language samples and their potential value within a comprehensive evaluation. Each of these statements either aligned with or contradicted current scientific evidence on language samples, including length of the sample, use of the sample, age range for particular contexts, validity as a measure of performance, use for goal planning, and determining the presence or absence of a language disorder (e.g., language sample analysis findings are appropriate for intervention goal planning).
The third section focused on language sample education and training using four questions: three close-ended and one open-ended. These questions addressed the origin of the respondents’ LSA knowledge and the timing of their last training on LSA. The fourth section included 11 questions (nine multiple-choice, two open-ended) on language sample processes and procedures. For those respondents who indicated using LSA, this section explored when language samples are used, why they are used, the context of the sample, the average number of collected samples, transcription practices, typical analysis procedures, and factors that would increase LSA use. Previous surveys asked one question addressing the combined method for transcription and analysis [26,27,31]. This section extended previous survey data by describing the nature of the methods used to analyze data, distinguishing procedures for transcription from procedures for analysis.

Participant Recruitment

Before recruitment began, the study was approved by the institutional review board of a large southeastern university. The survey was open for responses from October through December of 2019. Recruitment was conducted via speech-language and hearing association websites, university communication sciences and disorders programs, and social media websites. Respondents were encouraged to forward the survey to other school-based SLPs. Initial recruitment efforts were made in October 2019, with additional posts and emails sent in November and December 2019 to increase the response rate. The survey was intended for current school-based SLPs. A question in the survey asked whether respondents currently worked in a school; those who did not select the school choice were redirected to the end of the survey using skip logic. Therefore, only those who responded that they currently practiced in a school setting were allowed to continue the survey.
The survey was administered through Qualtrics (www.qualtrics.com). Recruitment emails contained a brief explanation of the origin, purpose, and content of the survey, including the approximate amount of time to complete it, and a link for the participant to follow to complete the survey. Respondents were asked to complete the survey once. Respondents completed the survey on a computer or mobile device and had the option to leave the current page and revisit previous answers. The survey questions appeared in small groups of two to three related questions per page. Questions using a rating scale followed by an open-ended description were located on the same survey page. The percentage of survey completion was displayed at the top of the survey, providing respondents with a general guide to the remaining length. When respondents completed the survey questions, they submitted their answers. All responses were anonymous.

RESULTS

A total of 116 people started the survey. Ten percent of the initial respondents were not school-based SLPs and thus did not meet criteria to participate in the study. Another 17% of the remaining respondents did not complete the survey past the first four questions. Therefore, the analytic sample included 90 participants. The demographics of the participants, including years practicing, years as a school-based SLP, full-time position at one school, caseload size, and populations served, are located in Table 1. Participants represented all of the regions of the United States, including the South, Southeast, Northeast, West, and Midwest. Additionally, there was international representation from Canada and Australia. Of the 90 participants in the analytic sample, 92% had ASHA certification (CCC-SLP n=82, 92%). Of those without ASHA certification, 63% were clinical fellows (CF-SLP n=5), 25% were international providers (n=2), and 12% (n=1) did not identify a reason. Participants were asked to rank the primary, secondary, tertiary, and quaternary populations they served. Options included preschool, elementary school, middle school, and high school. Half of the participants selected elementary school as their primary population. The secondary population served was most often preschoolers. The tertiary and quaternary populations served were most frequently middle school and high school, respectively. The participants reported a wide range of caseload sizes, with a mean of 52 students (SD=22.29, range=8–165).
Respondents were asked to select all of the informal assessment measures they used from a list including language sample analysis, dynamic assessment, systematic observation, interviewing, report measures, and curriculum-based measures. Sixty percent of the analytic sample used dynamic assessment, 54.4% used systematic observations, 66.7% used interviewing, 84.4% used report measures, and 36.7% used curriculum-based measures. Of the analytic sample, 90% chose LSA as an informal language assessment measure (n=81). Through skip logic, the nine participants who did not use LSA completed only the survey questions that addressed demographics, perceived knowledge, knowledge, and education and training; respondents who used LSA were also directed to complete the section on processes and procedures. Therefore, the results of the procedures section did not include the non-LSA participants.

Data Analysis

Descriptive and inferential statistics were used to analyze responses. Descriptive statistics were used to analyze the responses to demographic and workplace data, knowledge of LSA, frequency of collection data, and education and training information. To answer questions one through five, addressing training and education, frequency of LSA collection, LSA contexts, procedures for conducting LSA, and analysis measures, multiple-choice questions and rating scales were analyzed using frequencies for ordinal data and descriptive statistics such as mean, standard deviation, range, and standard error for scale variables. To address question six, on the relation between knowledge and use of specific sampling contexts, training, education, level of knowledge, and frequency of use, correlation coefficients were calculated. Since multiple assumptions were violated, the non-parametric Spearman’s rank correlation coefficient was used to evaluate the relationship between variables. The absolute value of the correlation coefficient was used to denote effect size with the following interpretation: 0.00–0.10 a negligible correlation, 0.10–0.39 a weak correlation, 0.40–0.69 a moderate correlation, 0.70–0.89 a strong correlation, and 0.90–1.00 a very strong correlation [34,35]. Because statistical significance demonstrates only that a relationship is not zero, and there is debate about the value of statistical significance and effect size for correlational data, confidence intervals were also presented to show the magnitude of estimation error of Spearman’s rank coefficient rs [34,35]. The results of the survey are organized by research question.
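To illustrate the statistics described above, the following Python sketch computes Spearman’s rank correlation with an approximate 95% confidence interval. The CI here uses the Fisher z-transformation with the Bonett and Wright standard error for Spearman’s rho, one common approximation; the article does not specify its CI method, and the data below are invented.

```python
import numpy as np
from scipy import stats

def spearman_with_ci(x, y, alpha=0.05):
    """Spearman's rho with an approximate CI via the Fisher z-transform
    and the Bonett & Wright standard error (an assumption; the article
    does not state which CI method was used)."""
    rs, p = stats.spearmanr(x, y)
    n = len(x)
    z = np.arctanh(rs)                       # Fisher z-transform of rho
    se = np.sqrt((1 + rs**2 / 2) / (n - 3))  # Bonett & Wright (2000) SE
    z_crit = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)
    return rs, p, (lo, hi)

# Hypothetical data: years practicing vs. knowledge score for 90 SLPs
rng = np.random.default_rng(0)
years = rng.integers(0, 30, size=90)
knowledge = np.clip(years / 3 + rng.normal(0, 3, size=90), 0, 15)

rs, p, (lo, hi) = spearman_with_ci(years, knowledge)
print(f"rs = {rs:.3f}, p = {p:.4f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```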

Research Question 1: Training and Education

The first research question examined the training and education SLPs receive on LSA procedures. Respondents were asked to indicate where they received training in LSA. There were seven options for where knowledge was acquired: undergraduate, graduate, CFY, conference, trainings, self-taught, and research journals. Respondents allocated a percentage of their knowledge (0–100%) to each option, with the seven options summing to 100%. The results are located in Table 2.
The majority of respondents indicated that they received their education and training on language samples from their undergraduate (M=19.82%, SD=24.52) and graduate programs (M=40.29%, SD=26.89). SLPs indicated that they received on average 4.40% of their knowledge from research journals. Additionally, conferences and trainings received low percentages (6.04% and 6.01%, respectively). Table 2 shows the differences between the respondents who reported completing LSA as a part of their practice and those who did not. Because of the largely unequal group sizes between LSA users (n=81) and non-users (n=8), differences between group responses were explored descriptively.
Differences were observed in the allocation of percentages for those who did not complete LSA. Respondents who did not complete LSA reported a higher percentage for undergraduate training programs (M=37.5%, SD=30) than those who completed LSAs. Additional differences for the respondents who did not complete LSA included higher percentages allocated to knowledge from conferences and no indication of receiving education from trainings specifically on language samples.
Respondents were also asked to identify when they last completed training of any kind across the SLP scope of practice and, specifically, when they last completed training on LSA. Most SLPs had completed a general training within the last 3 months (81%). However, most respondents had last completed an LSA training either a year ago (34.5%) or more than five years ago (31%). Table 3 shows the differences in trainings completed between LSA users and non-users.
There were minimal to no differences between the groups in when they last completed an overall training. However, there were notable differences in when they last attended an LSA training: more LSA users had completed an LSA training within the last year (71%) than non-users (50%).

Research Question 2: Frequency of Conducting LSA

The second research question focused on the frequency of conducting LSA. Respondents indicated the number of samples completed within the last year. The multiple-choice question presented options from one sample to 30 or more, in increments of five. The total number of language samples respondents collected over the last year varied. The highest percentage of respondents collected between five and ten language samples in the last year (24%), followed by those who collected over 30 (21.3%). Over half of the respondents completed more than 10 language samples across the last year (58.7%).

Research Question 3: LSA Context

The third research question addressed the contexts SLPs use when collecting language samples. Context refers to the type of task an SLP uses to elicit language. Respondents could select all of the contexts they use when eliciting a language sample: conversation and play-based, story-retell, story generation, expository, picture description, and observation of child communication. The most frequently selected context was conversation and play-based (58.7%), followed by story retell (26.7%). Story generation (5.3%), picture description (5.3%), and observation (4%) were selected least often. None of the respondents selected the expository context.

Research Question 4: LSA Procedures

The fourth research question examined the procedures for conducting and analyzing language samples. The survey instrument addressed collection, transcription, and analysis procedures in separate questions. Specifically, respondents answered questions on when they use LSA, recording procedures, and transcription procedures. Over half of the respondents selected all of the choices, indicating that they used LSA for initial evaluations, re-evaluations, and progress monitoring (52.6%). Of the remaining respondents, 32.9% used LSA for evaluations only, 9.2% for progress monitoring only, 2.6% for initial evaluations only, and 2.6% for re-evaluations only.
Respondents were asked whether they recorded the sample. Most indicated that they recorded language samples (74.7%). However, when asked specifically about transcription practices, only 41.3% of respondents recorded the sample and then transcribed it later; 32% used a combination of live transcription and recording for later transcription, and 26.7% transcribed the samples live.
Respondents reported their analysis procedures in two separate questions. First, participants were asked to rate how often they used analysis measures in the syntax, semantics, pragmatics, and cohesion categories on a scale from never to always. The most consistently used category was syntax, with 71.6% reporting that they always use syntax analysis measures, followed by cohesion (56.8% always); the least consistently used were semantics (43.2% always) and pragmatics (48.6% always). Semantic, pragmatic, and cohesion analysis measures were more often rated as used ‘sometimes’ or ‘half of the time.’
After answering the general category question, participants were asked to select all analysis measures they had used in the past from a list of 20 measures across syntax, semantics, structure, and discourse. The mean number of analysis measures used was 5.37 (SD=4.35, range=0–17). Table 4 depicts the frequency of use of each selected analysis measure. No respondent reported ever using Assigning Structural Stage or the Index of Productive Syntax. The most frequently used analysis measure was MLU (97%), followed by grammaticality (72.2%), tense (63.9%), and NTW (58.3%).

Research Question 5: Data Interpretation

The fifth research question examined how school-based SLPs interpret the data and results of the analysis of a language sample. Questions in this section examined why SLPs use language samples and whether they use available normative data to compare results from analysis. Respondents were asked to rate the following reasons for conducting LSA: supplementing a standard score, naturalistic language measure, goal planning, following district processes/procedures, eligibility decisions, and progress monitoring. The rating scale consisted of five options: always, almost always, half of the time, sometimes, and never. For descriptive analysis, these were collapsed into three categories: frequently (always and almost always), occasionally (half of the time and sometimes), and never. Most of the respondents indicated that they frequently use LSA as a naturalistic measure of a child’s language abilities (64.5%). SLPs frequently use LSA to supplement the standard score from a norm-referenced test (59.2%) and to assist with the planning of treatment goals (55.3%). SLPs occasionally reported using LSA to follow the processes of their district (32%) or for progress monitoring (39.5%). Respondents were asked whether they used available normative data, such as Brown’s morphological stages or the SALT normative database, to make decisions about a child’s performance on a language sample. Over half of the respondents (56.7%) reported consistently using available normative data to interpret the results of the collected language sample.
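As a small illustration of the category collapse described above, here is a Python sketch with hypothetical responses; the mapping mirrors the three collapsed categories named in the text.

```python
# Fold the five rating options into the three descriptive categories.
collapse = {
    "always": "frequently",
    "almost always": "frequently",
    "half of the time": "occasionally",
    "sometimes": "occasionally",
    "never": "never",
}

# Hypothetical ratings for "naturalistic language measure"
responses = ["always", "sometimes", "never", "almost always"]
collapsed = [collapse[r] for r in responses]
print(collapsed)  # ['frequently', 'occasionally', 'never', 'frequently']
```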

Research Question 6: Relation Between Knowledge and Frequency of Use

The sixth research question examined the relation between SLPs’ knowledge of collecting, transcribing, and analyzing language samples across different contexts and the frequency with which they used specific language sample contexts. Respondents were asked to rate their level of knowledge of collection, transcription, and analysis across the contexts of conversation, story retell, story generation, and picture description on a 10-point scale. The average perceived knowledge rating was 7.09 (SD=1.65, range=2.20–10.00). SLPs rated their knowledge of the collection of conversational language samples higher than that of any other context, and their knowledge of the tasks for the expository context lower than that of any other context. The results for perceived knowledge and knowledge of current recommended practices included respondents who indicated using LSA and those who did not. The differences in perceived knowledge for those who did not indicate using LSA are located in Table 5.
Additionally, respondents rated their agreement with 15 statements of evidence-based practice recommendations for conducting LSA. The statements that align with evidence-based practice recommendations are indicated with an asterisk in Table 6; these statements could be considered “correct.” Participant responses generally showed agreement with the correct statements and low agreement with statements that contradicted the evidence for LSA. On average, respondents answered 7.08 of the knowledge questions correctly (SD=2.741, range=0–13). No respondent answered all of the knowledge questions correctly. The differences in answers for each knowledge question and total knowledge questions correct for those who did not indicate using LSA are located in Table 6.
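A hypothetical sketch of the scoring implied above: a response counts as correct when the participant agrees with an evidence-aligned (asterisked) statement or disagrees with a statement contradicting the evidence. The statement keys and responses below are invented for illustration.

```python
# True means the statement aligns with the evidence ("*" in Table 6).
evidence_aligned = {
    "valid_measure": True,        # valid measure of performance
    "no_recording_needed": False, # contradicts recommended practice
    "goal_planning": True,        # appropriate for goal planning
}

# One hypothetical participant's responses
responses = {
    "valid_measure": "agree",
    "no_recording_needed": "disagree",
    "goal_planning": "uncertain",
}

def is_correct(statement: str, response: str) -> bool:
    expected = "agree" if evidence_aligned[statement] else "disagree"
    return response == expected

score = sum(is_correct(s, r) for s, r in responses.items())
print(f"{score} of {len(responses)} statements correct")  # 2 of 3
```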
To determine the relations between the knowledge variables, context of samples collected, and demographic characteristics, we completed correlational tests. Spearman’s rank correlation coefficient (rs) was used because the variables were rankings. The tests yielded significant correlations for the following pairs of variables: knowledge and perceived knowledge, perceived knowledge and years practicing, years practicing and knowledge, years practicing and recording, and years practicing and total analysis measures used. Statistical significance in Spearman’s rank correlation tests indicates only that the correlation is not zero. The correlational data indicate a moderate correlation between years practicing and knowledge, a weak correlation between knowledge and perceived knowledge, and a weak correlation between years practicing and recording. Additionally, there was a weak negative correlation between years practicing and total analysis measures used. The results of the correlational tests are located in Table 7.

DISCUSSION

Overall, the purpose of this survey was to gather information from school-based SLPs on their knowledge of LSA evidence-based practice recommendations, educational opportunities related to LSA, and procedures used when conducting LSA. This information was collected to understand the previously reported barrier of insufficient education in the implementation of LSA [27,31]. The results of the survey indicate that SLPs use various forms of informal language assessment, including LSA. Ninety percent of respondents to this survey indicated using LSA as an informal language assessment measure, and the majority of those respondents completed more than 10 samples a year (58%). However, the results also demonstrate a lack of knowledge of current recommended practices: the respondents, on average, answered only 50% of the knowledge questions correctly. This lack of knowledge could be due to a lack of education and training in the area of LSA. Respondents indicated that the majority of what they know about LSA came from their undergraduate and graduate training. Participant responses to knowledge and practice questions indicated a continued gap in current LSA practice, including the context of collected samples, the transcription and recording process, and the analysis measures completed.

Consideration and Use of LSA

A higher percentage of SLPs indicated using LSA, and indicated a higher number of samples completed, in this survey than reported previously in other LSA surveys [27,31]. Reasons for the higher percentage of SLPs in this survey who reported using LSA may include nonresponse bias and the lack of a time frame stipulation. Respondents self-selected to take the survey; someone who does not conduct LSA may not have responded because of the title of the survey, so the percentage of SLPs who conducted LSA may be inflated due to nonresponse bias. Additionally, respondents were not given a time frame when questioned about the types of informal language assessment used, much like the question used in Kemp and Klee [27]. The lower percentage of SLPs who indicated using LSA in the Pavelko et al. [31] study could have been due to the specified year.
The respondents in this survey also indicated collecting more language samples over the last year than the respondents in Pavelko et al. [31]. More than half of the respondents in this survey (58.7%) indicated that they completed more than 10 language samples in the last year; conversely, over half of the respondents in Pavelko et al. [31] completed fewer than 10 language samples. Reasons for this difference could include responder bias and differences in demographic characteristics. For example, if an SLP self-selected to participate in this survey on language sample practices, the data indicated there was a greater chance they used LSA on a regular basis. Therefore, there may have been differences in the demographic or workplace characteristics of the respondents in the current survey compared with the respondents in Pavelko et al. [31].

LSA Knowledge and Practice Patterns

Although the results demonstrate that SLPs use language samples more frequently and complete a larger number of samples during the course of a school year, there remains a research-to-practice gap in the use of recommended collection practices. The majority of respondents agreed that language samples are valid measures of a child’s language and can differentiate a child with a language disorder from a child with typically developing language. Additionally, respondents agreed that language samples should be recorded, are useful for goal planning, and are efficient assessment tools. However, there was variability in the level of agreement on the appropriate context to elicit the most advanced language for each age group. Over half of the respondents correctly identified the context to elicit the most advanced language for each age group; however, this knowledge was not reflected in current procedures. There remains a gap in knowledge and application of LSA recommended practices, specifically related to elicitation context, recording and transcription practices, and analysis measures.
The most frequently identified elicitation context was conversational and play-based language samples. This is concerning when considering the context of the sample relative to the population served. The majority of the respondents worked primarily in elementary settings, and current recommended evidence-based practices indicate that a conversational language sample does not elicit the highest language capabilities of students in elementary school [36,37]. Additionally, it is concerning that 30% of respondents worked in middle and high school settings, yet the expository context was not selected by any respondent as a frequently used context. One of the most frequently identified reasons for collecting language samples was as a naturalistic language measure. There appears to be a misconception that naturalistic language only refers to language used in conversations or during play-based activities. Naturalistic settings for an older elementary, middle, or high school student also include academic contexts such as narrative generation or expository discourse. A conversational or play-based sample is not recommended for eliciting the highest level of academic language [36–38].
Evaluations of language should mirror the needs of students based on age, grade level, and language expectations. If language evaluations do not assess the language that will be needed at specific academic levels, then the goals derived from the assessment may not benefit the student’s educational performance. Federal guidelines indicate a need for varied assessment, eligibility must point to an educational need, and goals should be aligned with the curriculum and related educational performance areas. Therefore, a complete and thorough evaluation needs to include valid measures of social and academic language performance. The end goal of LSA should drive the elicitation context. If discourse, topic maintenance, and appropriate topic initiation are the goals, then a conversational language sample is appropriate. However, if the goal of analysis is complex language, cohesion, flow, or a consistent theme, then contexts such as story generation or expository generation would be more appropriate [39].
Recording the sample for later transcription remains an area of practice in which SLPs are not following recommended guidelines. The majority of participants who completed language samples also recorded them (74%), an increase over the 43% of participants in Pavelko et al. [31] who audio-recorded their language samples. This increase could be due to the demographics of the current survey sample: Pavelko et al. [31] demonstrated that SLPs early in their careers were more likely to record the sample for later transcription, and half of the participants in the current survey had been working fewer than 10 years. However, although the majority of participants recorded the sample (74%), only 41% recorded and then later transcribed; the rest either transcribed live or used a combination of live transcription and later transcription. The current recommended practice is for an SLP to record the language sample first and transcribe it later, so that the transcription is accurate [40]. Transcribing in real time does not allow for the most naturalistic and uninhibited elicitation techniques. In an effort to make the process of language sampling more efficient, the SLP diminishes the accuracy of the transcription and overlooks potential opportunities to elicit more complex or productive language.
The final component of the knowledge-to-practice gap is the lack of varied analysis procedures. The respondents in this survey reported using an average of five different analysis measures out of a list of 20. This indicates that SLPs may not individualize the analysis measures for each child but instead complete the same process of analysis each time a language sample is collected, regardless of the needs of the student. The majority of the most frequently chosen analysis measures are measures of syntax. The data appear to demonstrate that SLPs conduct LSA using the same analysis measures each time they elicit a language sample. The end goal of analysis and the type of information needed should dictate the context and analysis measures. If an SLP is conducting a language sample to supplement a norm-referenced score, the SLP needs to select the analysis measures that support that goal. For example, if an SLP were interested in the language complexity of a 5th-grade student due to a low grammar score on a norm-referenced test, the SLP could analyze a language sample for complexity measures such as clausal density, subordinate clauses, number of adverbs, and number of verbs. However, if an SLP were interested in the language productivity of a preschooler, measures such as number of different words (NDW) and MLU would be appropriate within a conversational context.

Education and Training

The majority of the respondents’ training and education came from their undergraduate and graduate training programs (60.21%) rather than the other sources of information. Very few SLPs reported attending trainings or reading journal articles on LSA as the source of their information and training. After undergraduate and graduate training, the next most identified source was self-taught knowledge. Due to the nature of the question, it is unclear what the source of the self-taught information was, and the validity of that source is questionable. This is concerning because journal articles, webinars, and tutorials are available for SLPs to increase their education and training in recommended practices for LSA (e.g., [11,21,41]).
There is a need to examine the LSA information provided in personnel preparation programs, given that SLPs reported that their primary source of learning about LSA was undergraduate and graduate training programs. There are implications for preparation programs to ensure that they are preparing students for practice within the schools. Preparation includes providing education and training on general LSA evidence-based procedures as well as on how to decrease personal biases, increase cultural competence, and use reliable and systematic procedures. The education and practice provided in undergraduate and graduate training programs affect individuals long after their training is complete. Therefore, it is important that personnel preparation programs extend instruction on the knowledge and skills of LSA to emphasize differential, individualized assessment.

Limitations and Future Directions

The results of the survey should be interpreted with caution in terms of generalization, as they represent the relatively small sample of respondents who completed the survey. The limitations of this study include the small number of respondents, potential differences between respondents and non-respondents, and self-reporting biases. The analytic sample includes 90 respondents; this relatively small sample size could have limited the ability to find significant differences in the data, and analysis of a larger sample may have demonstrated clearer patterns or trends yielding additional significant correlations. Despite representing all regions of the United States, the processes and procedures, knowledge, and education of the respondents may not fully represent the population of current school-based SLPs. The recruitment process relied on self-selection through various professional websites and university email systems, so there may be selection biases between those who chose to begin the survey, those who did not, and those who completed it. The results of this survey are based on self-reports of LSA processes and procedures, with no direct observation of SLPs completing language samples. Therefore, there may be bias in the responses, as SLPs may have reported in a way that is not fully representative of their practices.
The results of this survey indicate a gap between school SLPs’ knowledge and practice, as well as a gap between their knowledge and practice and evidence-based LSA guidelines. The results also point to future directions, including exploring informal assessment practices specifically for SLPs who work with adolescents in middle and high school. This survey was open to all current school-based SLPs; however, a narrower scope would yield more specific information on the informal assessment practices of SLPs working with middle and high school students, including the specific contexts of language samples used during informal language assessments. Additionally, the results indicate a need to look closely at undergraduate and graduate curricula on LSA, as respondents indicated that the largest share of their education and training came from these programs. Future work should explore effective education, training, and practice in language sampling.

CONCLUSIONS

Informal language assessment is an important part of a complete and thorough evaluation of language. Completing a language sample provides valuable information on the naturalistic language of children and can be used for identifying language disorders, differentiating language disorder profiles, progress monitoring, and treatment planning. Currently, school-based SLPs use a variety of procedures when conducting LSAs, and there continues to be a research-to-practice gap regarding evidence-based guidelines for LSA. Continued examination of LSA knowledge, skills, and practice patterns is worthwhile to support comprehensive language evaluation practices that include naturalistic and functional measures.

Table 1
Participant Descriptions: Years Practicing, Years School-Based SLP, Full-Time in One School
Question Response choice Frequency (%)
Years practicing 0–5 yr 22.2
5–10 yr 27.8
10–15 yr 13.3
15–20 yr 10.0
20+ yr 26.7

Years school based 0–5 yr 28.9
5–10 yr 23.3
10–15 yr 18.9
15–20 yr 10.0
20+ yr 18.9

Full-time one school Yes 54.4

Population served-ranked First Choice
 Preschool 18.5
 Elementary School 50.8
 Middle School 13.8
 High School 16.9
Second Choice
 Preschool 67.7
 Elementary School 21.5
 Middle School 10.8
 High School 0
Third Choice
 Preschool 7.7
 Elementary School 21.5
 Middle School 58.5
 High School 12.3
Fourth Choice
 Preschool 6.2
 Elementary School 6.2
 Middle School 16.9
 High School 70.8
Table 2
Differences in Education and Training Percentages Between LSA Users and Non-Users
Variable LSA Use (M / SD / SE) No LSA Use (M / SD / SE)
Undergraduate Knowledge 17.93 23.323 2.693 37.50 30.000 10.607

Graduate Knowledge 41.84 27.176 3.117 26.63 20.625 7.292

CFY Knowledge 8.42 13.981 1.604 2.50 7.071 2.500

Conference Knowledge 5.59 10.707 1.228 10.25 21.259 7.516

Trainings Knowledge 6.64 14.931 1.713 0 0 0

Self-Taught Knowledge 13.67 18.768 2.167 17.50 34.641 12.247

Research Journals Knowledge 4.28 9.547 1.095 5.63 8.210 2.903
Table 3
Differences in Trainings Completed between LSA Users and Non-Users
Variable LSA Use % No LSA Use %
Last training completed
 1–3 mo ago 80.3 87.5
 6 mo ago 6.6 0
 1 yr ago 9.2 12.5
 More than 5 yr ago 3.9 0

Last LSA training completed
 1–3 mo ago 23.7 25.0
 6 mo ago 11.8 0
 1 yr ago 35.5 25.0
 More than 5 yr ago 28.9 50.0
Table 4
Analysis Measure Frequency of Use
Analysis measure Percentage of use
Content Form Analysis 11.1
Developmental Sentence Scoring 13.9
MLU 97.2
Mean Length of Response 34.7
Type Token Ratio 38.9
Clausal Density 13.9
Subordinating Clause 23.6
NDW 43.1
NTW 58.3
Mazes 29.2
Percent Mazes 27.8
Elaborated Noun Phrase 16.7
Story Grammar Components 48.6
Grammaticality 72.2
Verbs 30.6
Adverbs 19.4
Adjectives 27.8
Tense 63.9
Table 5
Differences in Perceived Knowledge for LSA Users and Non-Users
Variable LSA Use (M / SD / SE) No LSA Use (M / SD / SE)
Collection of conversational, play-based, or interview language samples 8.19 1.476 0.164 7.00 2.398 0.799

Transcription of conversational, play-based, or interview language samples 7.64 1.860 0.207 7.25 2.188 0.773

Analysis of conversational, play-based, or interview language samples 7.20 1.920 0.213 6.88 2.532 0.895

Collection of narrative (e.g., story retell, story generation, wordless picture book) language samples 7.96 1.616 0.182 6.56 2.833 0.944

Transcription of narrative (e.g., story retell, story generation, wordless picture book) language samples 7.51 1.821 0.204 7.25 2.188 0.773

Analysis of narrative (e.g., story retell, story generation, wordless picture book) language samples 6.95 1.949 0.217 6.38 2.066 0.730

Collection of expository language samples 6.54 2.271 0.261 5.75 2.964 1.048

Transcription of expository language samples 6.34 2.381 0.273 5.75 2.964 1.048

Analysis of expository language samples 5.77 2.438 0.278 5.13 2.748 0.972

Collection of picture description language samples 7.88 1.900 0.211 6.75 2.964 1.048

Transcription of picture description language samples 7.52 1.994 0.222 6.75 2.964 1.048

Analysis of picture description language samples 6.91 2.014 0.224 5.75 3.284 1.161

Total Perceived Knowledge Rank 7.21 1.480 0.166 6.07 2.560 0.852
Table 6
Differences in Knowledge for LSA Users and Non-Users
Statement LSA Use (Agree / Uncertain / Disagree) No LSA Use (Agree / Uncertain / Disagree)
Conversational language samples should be between 101–200 utterances in length to provide valid results. 61.0% 15.6% 23.4% 37.5% 50% 12.5%

Language samples are a valid measure of children’s language performance.* 90.9% 5.2% 4.7% 100% 0% 0%

Language samples do not need to be recorded for later transcription and analysis. 9.1% 9.1% 81.8% 12.5% 12.5% 75.0%

Language sample analysis findings differentiate children with and without language disorders.* 72.7% 16.9% 10.4% 87.5% 12.5% 0%

Language sample analysis findings are appropriate for intervention goal planning.* 89.6% 5.2% 5.2% 75% 25% 0%

To represent a preschool child’s most advanced language forms, conversational language samples are appropriate.* 57.9% 28.9% 13.2% 75% 12.5% 12.5%

To represent a preschool child’s most advanced language forms, narrative language samples are appropriate. 53.2% 28.6% 18.2% 50% 37.5% 12.5%

To represent a preschool child’s most advanced language forms, expository language samples are appropriate. 23.4% 42.9% 33.8% 50% 25% 25%

To represent an elementary child’s most advanced language forms, conversational language samples are appropriate. 55.8% 24.7% 19.5% 62.5% 25% 12.5%

To represent an elementary child’s most advanced language forms, narrative language samples are appropriate.* 79.2% 18.2% 2.6% 75% 0% 25%

To represent an elementary child’s most advanced language forms, expository language samples are appropriate. 59.7% 32.5% 7.8% 37.5% 37.5% 25%

To represent a secondary (middle or high school) student’s most advanced language forms, conversational language samples are appropriate. 48.1% 19.5% 32.5% 50% 50% 0%

To represent a secondary (middle or high school) student’s most advanced language forms, narrative language samples are appropriate. 64.5% 26.3% 9.2% 50% 50% 0%

To represent a secondary (middle or high school) student’s most advanced language forms, expository language samples are appropriate.* 68.8% 27.3% 3.9% 50% 50% 0%

Language samples are an efficient tool to measure progress made during therapy.* 70.1% 16.9% 13% 75% 12.5% 12.5%

Total Knowledge Correct (M, SD, SE) 7.17 2.659 0.295 6.22 3.456 1.152

The statements that align with evidence-based practice recommendations are indicated with an asterisk; these statements could be considered “correct.” Bolded values indicate correct responses.

Table 7
Correlation Test Results for Knowledge, Perceived Knowledge, and Years Practicing
Variable rs 95% Confidence Interval
Knowledge vs. Perceived Knowledge 0.253* 0.05–0.44
Knowledge vs. Context 0.166 −0.04–0.36
Perceived Knowledge vs. Years Practicing 0.190* −0.02–0.38
Perceived Knowledge vs. Context 0.097 −0.11–0.30
Years Practicing vs. Knowledge 0.438* 0.25–0.59
Years Practicing vs. Recording 0.220* 0.01–0.41
Years Practicing vs. Total Analysis Measures −0.219* −0.41–−0.01
Years Practicing vs. Total LSA Collected 0.044 −0.16–0.25
Years Practicing vs. Context 0.110 −0.10–0.31

*Significant at the p<0.05 level.

REFERENCES

1. American Speech-Language-Hearing Association. Guidelines for the roles and responsibilities of the school-based speech-language pathologist [Guidelines]. Available 2010 from: www.asha.org/policy.

3. Taylor-Goh S. Royal College of Speech and Language Therapists clinical guidelines: 5.3 school aged children with speech, language, and communication difficulties. Bicester: Speechmark Publishing Ltd, 2005.

4. Haynes WO, Pindzola RH. Diagnosis and evaluation in speech pathology. 5th ed. Needham Heights: Allyn & Bacon, 1998.

5. Owens RE. Language disorders: a functional approach to assessment and intervention. 6th ed. New York: Pearson, 2014.

6. Danahy Ebert K, Scott CM. Relationships between narrative language samples and norm-referenced test scores in language assessments of school-age children. Language, Speech, and Hearing Services in Schools. 2014;45(4):337–350.
7. Laing SP, Kamhi A. Alternative assessment of language and literacy in culturally and linguistically diverse populations. Language, Speech, and Hearing Services in Schools. 2003;34(1):44–55.
8. Robertson SA. Assessment of prelinguistic and emerging language skills of children with developmental language disorders. In : Kamhi AG, Masterson JJ, Apel K, editors. Clinical decision making in developmental language disorders. p. 23–38. Baltimore: Paul H. Brookes Publishing Co, 2008.

9. Brown R. A first language: the early stages. Cambridge: Harvard University Press, 1973.

10. Gallagher TM. Pre-assessment: a procedure for accommodating language use variability. In : Gallagher TM, Prutting CA, editors. Pragmatic assessment and intervention issues in language. p. 1–28. San Diego: College-Hill Press, 1983.

11. Eisenberg S. Using general language performance measures to assess grammar learning. Topics in Language Disorders. 2020;40(2):135–148.
12. Pezold MJ, Imgrund CM, Storkel HL. Using computer programs for language sample analysis. Language, Speech, and Hearing Services in Schools. 2020;51(1):103–114.
13. Hewitt LE, Hammer CS, Yont KM, Tomblin JB. Language sampling for kindergarten children with and without SLI: mean length of utterance, IPSYN, and NDW. Journal of Communication Disorders. 2005;38(3):197–213.
14. Costanza-Smith A. The clinical utility of language samples. Perspectives of Language Learning and Education. 2010;17(1):9–15.
15. Stockman IJ. The promises and pitfalls of language sample analysis as an assessment tool for linguistic minority children. Language, Speech, and Hearing Services in Schools. 1996;27(4):355–366.
16. Casby MW. An examination of the relationship of sample size and mean length of utterance for children with developmental language impairment. Child Language Teaching and Therapy. 2011;27(3):286–293.
17. Evans JL, Craig HK. Language sample collection and analysis. Journal of Speech, Language, and Hearing Research. 1992;35(2):343–353.
18. Tommerdahl J, Kilpatrick C. Analysing frequency and temporal reliability of children’s morphosyntactic production in spontaneous language samples of varying lengths. Child Language Teaching and Therapy. 2013;29(2):171–183.
19. Tommerdahl J, Kilpatrick CD. The reliability of morphological analyses in language samples. Language Testing. 2014;31(1):3–18.
20. Kapantzoglou M, Fergadiotis G, Restrepo MA. Language sample analysis and elicitation technique effects in bilingual children with and without language impairment. Journal of Speech, Language, and Hearing Research. 2017;60(10):2852–2864.
21. Miller JF, Andriacchi K, Nockerts A. Using language sample analysis to assess spoken language production in adolescents. Language, Speech, and Hearing Services in Schools. 2016;47(2):99–112.
22. Pavelko S, Owens RE. Sampling utterances and grammatical analysis revised (SUGAR): new normative values for language sample analysis measures. Language, Speech, and Hearing Services in Schools. 2017;48(3):197–215.
23. Rice ML, Smolik F, Perpich D, Thompson T, Rytting N, Blossom M. Mean length of utterance levels in 6-month intervals for children 3 to 9 years with and without language impairments. Journal of Speech, Language, and Hearing Research. 2010;53(2):333–349.
24. Evans JL, Miller J. Language sample analysis in the 21st century. Seminars in Speech and Language. 1999;20(2):101–116.
25. Schuele CM. The many things language sample analysis has taught me. Perspectives on Language Learning and Education. 2010;17(1):32–37.
26. Hux K, Morris-Friehe M, Sanger DD. Language sampling practices: a survey of nine states. Language, Speech, and Hearing Services in Schools. 1993;24(2):84–91.

27. Kemp K, Klee T. Clinical language sampling practices: results of a survey of speech-language pathologists in the United States. Child Language Teaching and Therapy. 1997;13(2):161–176.
28. Westerveld MF, Claessen M. Clinician survey of language sampling practices in Australia. International Journal of Speech Language Pathology. 2014;16(3):242–249.
29. Lundine JP. Assessing expository discourse abilities across elementary, middle, and high school. Topics in Language Disorders. 2020;40(2):149–165.
30. Nippold MA, Scott CM. Expository discourse in children, adolescents, and adults. New York: Psychology Press, 2010.

31. Pavelko SL, Owens RE, Ireland M, Hahs-Vaughn DL. Use of language sample analysis by school-based SLPs: results of a nationwide survey. Language, Speech, and Hearing Services in Schools. 2016;47(3):246–258.
32. American Speech-Language-Hearing Association. 2020 Schools Survey: SLP caseload and workload characteristics [PDF]. Retrieved 2020, from: https://www.asha.org/siteassets/surveys/2020-schools-survey-slp-caseload.pdf.

33. Selin CM, Rice ML, Girolamo T, Wang CJ. Speech-language pathologists’ clinical decision making for children with specific language impairment. Language, Speech, and Hearing Services in Schools. 2018;50(2):283–307.
34. Gliner JA, Morgan GA, Leech NL. Research methods in applied settings: an integrated approach to design and analysis. 3rd ed. New York: Routledge, 2017.

35. Schober P, Boer C, Schwarte LA. Correlation coefficients: appropriate use and interpretation. Anesthesia and Analgesia. 2018;126(5):1763–1768.
36. Nippold MA, Hesketh LJ, Duthie JK, Mansfield TC. Conversational versus expository discourse. Journal of Speech, Language, and Hearing Research. 2005;48(5):1048–1064.
37. Southwood F, Russell AF. Comparison of conversation, freeplay, and story generation as methods of language sample elicitation. Journal of Speech, Language, and Hearing Research. 2004;47(2):366–376.
38. Nippold MA, Frantz-Kaspar MW, Cramond PM, Kirk C, Hayward-Mayhew C, MacKinnon M. Conversational and narrative speaking in adolescents: examining the use of complex syntax. Journal of Speech, Language, and Hearing Research. 2014;57(3):876–886.
39. Petersen DB, Gillam SL, Gillam RB. Emerging procedures in narrative assessment: the index of narrative complexity. Topics in Language Disorders. 2008;28:115–130.

40. Heilmann JJ. Myths and realities of LSA. Perspectives on Language Learning and Education. 2010;17(1):4–8.

41. Timler GR. Using language sample analysis to assess pragmatic skills in school-age children and adolescents. Perspectives on Language Learning and Education. 2018;3(1):23–25.