Language sample analysis consideration and use: a survey of school-based speech-language pathologists
Abstract
Purpose
The purpose of the current study was to extend previous knowledge of the language sample analysis (LSA) practices of school-based speech-language pathologists (SLPs) by gathering information on the processes, procedures, clinical judgments, and decisions that current school-based SLPs make when conducting an LSA.
Methods
School-based SLPs responded to a survey on current practices, perceived knowledge, knowledge of current recommended practices, and education and training in LSA.
Results
Results indicated that the majority of school-based SLPs (90%) used LSA during evaluations to supplement information provided by norm-referenced tests and as a naturalistic language measure. However, the results also demonstrated a lack of knowledge of current recommended practices: respondents, on average, answered only 50% of the knowledge questions correctly.
Conclusions
Participant responses to knowledge and practice questions indicated a continued gap in current LSA practice, including the contexts of collected samples, the recording and transcription process, and the analysis measures completed. Additionally, the results indicated a need to look closely at undergraduate and graduate curricula on LSA, as respondents indicated that the largest share of their education and training came from these programs.
INTRODUCTION
During the diagnostic evaluation process, school-based SLPs make diagnostic judgments to determine the presence or absence of a disorder using a variety of assessment measures. Diagnostic evaluations should include multiple measurements and a variety of assessment tools provided in the child’s native language that are not discriminatory or culturally biased and that provide information relevant to identifying the educational need for special education services [1–3]. Gathering specific information from both formal and informal measures provides the most reliable measure of the child’s language performance while increasing functional relevance [4,5]. The use of formal norm-referenced and criterion-referenced tests in conjunction with quality informal assessment measures represents a child’s true language performance for accurate disability diagnosis, eligibility for service delivery, and a functionally relevant treatment plan [6–8].
SLPs have the option to choose among a number of different language assessments to supplement the outcomes of norm-referenced language tests. Language sampling is arguably the most widely and longest researched informal language assessment measure [4,9,10]. Language samples are versatile informal assessments through which an SLP elicits spontaneous language in a variety of contexts (e.g., conversation, free play, narration, expository discourse) to collect information and measure performance across language domains. SLPs analyze language samples using a variety of microstructural and macrostructural measures, such as grammaticality, mean length of utterance (MLU), number of different words (NDW), story grammar elements, turn length, and continuity, to describe a child’s language performance at the sentence or discourse level [6,11,12]. Measurements of language complexity and productivity collected in naturalistic settings provide SLPs with information distinguishing typical from disordered language. Specific weaknesses identified through the derived measures can be compared to specific criteria, such as grade- and age-level standards, or to normative data (e.g., preschool MLU normative data) to determine the presence of a disorder and eligibility for services [13–15].
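To make two of these microstructural measures concrete, the following minimal sketch computes word-based MLU and NDW from a list of transcribed utterances. The utterances are hypothetical, and this toy version is word-based only; clinical tools such as SALT compute MLU in morphemes and follow detailed transcription conventions.

```python
# Minimal sketch (hypothetical data): word-based MLU and NDW from a list
# of transcribed utterances. Clinical tools such as SALT compute MLU in
# morphemes and apply detailed transcription conventions; this toy
# version counts orthographic words only.
def mlu_words(utterances):
    """Mean length of utterance in words."""
    lengths = [len(u.split()) for u in utterances]
    return sum(lengths) / len(lengths)

def ndw(utterances):
    """Number of different words (case-insensitive word types)."""
    words = [w.lower() for u in utterances for w in u.split()]
    return len(set(words))

sample = [
    "the dog ran to the park",
    "he saw a big squirrel",
    "then he chased it up the tree",
]
print(f"MLU (words): {mlu_words(sample):.2f}")  # 6.00
print(f"NDW: {ndw(sample)}")                    # 15
```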
Research on language samples provides information on the reliability of the description of a child’s language performance [16–19], identification of differences in the morphology of children with language disorders [13], inspection of the reliability of outcome measures based on age level and linguistic characteristics [18,20], and development of normative data [21–23]. Outside of norm-referenced tests, LSA remains one of the most reliable evidence-based language assessment measures when specific processes and procedures are followed [5,15,18,20–25].
Current Language Sample Analysis Practices
Despite a robust evidence base for LSA, current practice trends demonstrate variability in the use of language samples. Four surveys have explored the LSA practices of school-based SLPs. Hux et al. [26] were the first to survey school-based SLPs, from the Midwestern United States, on their LSA practices and attitudes toward the completion and reliability of language samples. Results indicated that respondents used a variety of contexts to elicit language samples: conversational samples were indicated most frequently (86%), followed by story-retell tasks (54%) and story-generation tasks (34%). Almost half (49%) of the respondents reported using a self-designed analysis protocol. When asked to indicate frequently used analysis measures, 81% reported using MLU and 80% reported using qualitative language descriptors. The most frequently identified reason for selecting preferred analysis measures was familiarity and comfort level. This survey provided information on the types of elicitation contexts and analysis methods; however, it did not determine the percentage of SLPs who conducted language samples [26].
Kemp and Klee [27] surveyed 253 SLPs across the United States who worked primarily with preschool-aged children to determine their practice patterns and use of language samples. Eighty-five percent of respondents reported using LSA during clinical assessments for language impairment [27]. The median number of language samples collected per year was 25 (range 12–47). The respondents predominantly favored self-designed analysis procedures over packaged procedures, a departure from the results of Hux et al. [26]. Frequent responses on general procedures included indices such as MLU, sentence structure, semantics, and developmental norms. For respondents who did not use LSA, barriers included the time required to collect, transcribe, and analyze samples and a lack of training and knowledge about language samples.
Westerveld and Claessen [28] surveyed the language sample practices of 257 pediatric SLPs in Australia using a questionnaire based on Hux et al. [26] and Kemp and Klee [27]. Results were similar to those of the surveys completed in the United States, with 91% of respondents completing language samples [28]. Context of the sample was based on the age of the student, with conversational samples used more frequently with younger students. A surprising and concerning result was that the most frequently used context for high school students was personal narratives rather than expository discourse, which would be considered more appropriate [29,30]. Additionally, over half of the respondents followed recommended practices by recording, listening to, and transcribing language samples. When asked about analysis methods, over half of the respondents used MLU and structural stages such as Brown’s morphemes. The results of Westerveld and Claessen’s [28] survey confirmed and expanded the results of Kemp and Klee [27] related to the percentage of SLPs who conduct language samples, the variety of elicitation contexts, and analysis methods.
Pavelko et al. [22] aimed to update the literature on the current practices of school-based SLPs and the use of LSA in the United States. This large-scale survey extended Kemp and Klee’s survey by focusing on school-based SLPs serving a wider age range of students and included responses from 1,336 school-based SLPs across the United States. The most notable change in findings from Kemp and Klee [27] and Westerveld and Claessen [28] to the Pavelko et al. [22] survey was a reduction in the number of respondents who reported using LSA during a given school year; the rate dropped from 85% to 67%. Over half of the respondents indicated analyzing fewer than 10 language samples a year [31]. Considering that, on average, over half of the students on US school-based SLP caseloads have goals addressing language needs across the domains of morphology, syntax, semantics, and pragmatics, and considering the time SLPs spend completing diagnostic evaluations each week, SLPs likely complete more than 10 language evaluations a year [32]. Therefore, the finding that respondents collected fewer than 10 language samples a year is concerning.
The surveys identified barriers to completion of LSA as lack of time to collect and complete language samples, insufficient resources, and inadequate training and expertise. Pavelko et al. [22] extended knowledge of current barriers by inquiring about willingness to attend training on the theories and principles of LSA. Seventy-one percent of respondents were willing to attend training and specified areas of need as transcribing samples, analyzing samples, interpreting the results of samples, and goal planning using the information provided by a sample [31]. The most frequently selected areas of training were the analysis (84%) and interpretation (83%) of language samples. These results demonstrate that SLPs identify weaknesses in analysis and interpretation, including the context of analysis, the range of analysis measures that could be used, and the ability to interpret data within the larger picture of a child’s language, and that they are open to improving their skills. Consistent barriers to implementation of LSA remain, specifically the time required to collect and analyze samples and limited knowledge of how to analyze and interpret the information collected during a language sample. A contributing factor to these barriers is the gap in SLPs’ knowledge of the language forms elicited in different contexts. Additional training for SLPs in the analysis and interpretation of language sample data, including the appropriateness of contexts for different ages, would improve clinical assessment and treatment planning.
Research demonstrates that SLPs are more likely to use their own clinical expertise, such as experience with certain analysis measures, when selecting analysis measures for language samples than to use evidence-based protocols and procedures [27,31]. However, the validity of self-designed protocols and the resulting selection of analysis measures remains unknown. Clinical expertise used during diagnostic decision making draws on past experience and knowledge of language and disorders. However, clinical expertise divorced from the scientific literature introduces risks such as bias, inconsistency in identification, and the use of ineffective or inefficient procedures. When reliance on clinical expertise outweighs current researched practice guidelines, there is a risk of erroneously identifying language disorders [31,33]. Gathering information on school-based SLPs’ knowledge of, and confidence in, language sample transcription and analysis addresses the barriers to implementation in practice. Understanding SLPs’ LSA knowledge, implementation knowledge, and confidence in their collection, transcription, and analysis processes will help determine specific training needs.
The purpose of the current survey was to extend previous knowledge of the LSA practices of school-based SLPs. Through survey methodology, we gathered information on the processes, procedures, and clinical judgments current school-based SLPs make when conducting an LSA. The current study aimed to answer the following questions:
What training and education do school-based SLPs receive on evidence-based LSA procedures?
How often do school-based SLPs conduct LSAs?
What type of contexts (e.g., narrative, conversational) do SLPs use when conducting LSAs?
What procedures do SLPs use when conducting and analyzing LSAs?
How do school-based SLPs interpret the data and results collected during LSA?
What is the relationship between SLPs’ knowledge of LSA recommended practices and frequency of use?
METHODS
Instrument Development
The survey, entitled Language Sample Analysis Practices of the School-Based SLP, included 50 questions addressing specific LSA practices. The survey was designed to extend the scope of previous LSA surveys by exploring methods and procedures of analysis; specifically, it addressed knowledge of procedures, knowledge of and comfort level with analysis measures across different language sample contexts, and descriptions of education or instruction previously received on LSA. Questions on demographics, workplace characteristics, frequency of conducting language samples, and general language sample practices (12 of the 50 questions) were adapted from survey questions used in Pavelko et al. [22] and Kemp and Klee [27]. The questions addressing the methods SLPs use when analyzing language samples and knowledge of current LSA recommended practices were based on a current review of the literature. The survey used a variety of multiple-choice, rating-scale, and open-ended questions to explore current practices. A panel of four experts with an average of 18 years of experience in the fields of speech-language pathology and education reviewed the content and format of the survey, and changes were made based on their feedback. Pilot testing with two school-based SLPs was completed after the panel reached consensus on the survey instrument. The SLPs reported completing the survey in 25 minutes and did not indicate issues with question clarity or relevance.
The survey was separated into four sections. The first section, composed of eight close-ended, multiple-choice questions, explored demographics and caseload characteristics including work experience, years of experience, setting, and type of informal language assessment used. In the second section, respondents rated their level of knowledge related to the collection, transcription, and analysis of conversational and narrative language samples. Knowledge was rated on a 10-point scale, with 1 indicating being unfamiliar with the topic and 10 indicating the ability to give a detailed explanation of the topic. Additionally, respondents rated their level of agreement with knowledge statements about the administration of language samples and their potential value within a comprehensive evaluation. Each of these statements aligned with or contradicted current scientific evidence on language samples, including the length of the sample, use of the sample, age range for particular contexts, validity as a measure of performance, use for goal planning, and determining the presence or absence of a language disorder (e.g., language sample analysis findings are appropriate for intervention goal planning).
The third section focused on language sample education and training using four questions: three close-ended and one open-ended. The questions addressed the origin of the respondents’ LSA knowledge and the timing of their last training on LSA. The fourth section included 11 questions (nine multiple-choice, two open-ended) on language sample processes and procedures. For respondents who indicated using LSA, this section explored when language samples are used, why they are used, the context of the sample, the average number of collected samples, transcription practices, typical analysis procedures, and factors that would increase LSA use. Previous surveys asked one question addressing the combined method for transcription and analysis [26,27,31]; this section extended previous survey data by describing the nature of the methods used to analyze data, distinguishing procedures for transcription from procedures for analysis.
Participant Recruitment
Before recruitment began, the study was approved by the institutional review board of a large southeastern university. The survey was open for responses from October through December of 2019. Recruitment was conducted via speech-language-hearing association websites, university communication sciences and disorders programs, and social media websites. Respondents were encouraged to forward the survey to other school-based SLPs. Initial recruitment efforts were made in October 2019, with additional posts and emails sent in November and December 2019 to increase the response rate. The survey was intended for current school-based SLPs: a question asked whether respondents currently worked in a school, and those who did not select the school choice were redirected to the end of the survey using skip logic. Therefore, only those who responded that they currently practiced in a school setting were allowed to continue the survey.
The survey was administered through Qualtrics (www.qualtrics.com). Recruitment emails contained a brief explanation of the origin, purpose, and description of the survey, including the approximate time to complete it, as well as a link to the survey. Respondents were asked to complete the survey once and could do so on a computer or mobile device. Respondents had the option to leave the current page and revisit previous answers. The survey questions appeared in small groups of two to three related questions per page; questions using a rating scale followed by an open-ended description were located on the same page. The percentage of survey completion displayed at the top of the survey provided respondents with a general guide to the remaining length. When respondents completed the survey questions, they submitted their answers. All responses were anonymous.
RESULTS
A total of 116 people started the survey. Ten percent of the initial respondents were not school-based SLPs and thus did not meet criteria to participate in the study. Another 17% of the remaining respondents did not complete the survey past the first four questions. Therefore, the analytic sample included 90 participants. The demographics of the participants, including years practicing, years as a school-based SLP, full-time position at one school, caseload size, and average population served, are presented in Table 1. Participants represented all regions of the United States, including the South, Southeast, Northeast, West, and Midwest; there was also international representation from Canada and Australia. Of the 90 participants in the analytic sample, 92% had ASHA certification (CCC-SLP, n=82). Of those without ASHA certification, 63% were clinical fellows (CF-SLP, n=5), 25% were international providers (n=2), and 12% (n=1) did not identify a reason. Participants were asked to rank the primary, secondary, tertiary, and quaternary populations they served; options included preschool, elementary school, middle school, and high school. Half of the participants selected elementary school as their primary population, and the secondary population served was most often preschoolers. The tertiary and quaternary populations served were most frequently middle school and high school. Participants reported a wide range of caseload sizes, with a mean of 52 students (SD=22.29, range=8–165).
Respondents were asked to select all of the informal assessment measures they used from a list including language sample analysis, dynamic assessment, systematic observation, interviewing, report measures, and curriculum-based measures. Of the analytic sample, 60% used dynamic assessment, 54.4% used systematic observation, 66.7% used interviewing, 84.4% used report measures, and 36.7% used curriculum-based measures. Ninety percent chose LSA as an informal language assessment measure (n=81). Through skip logic, the nine participants who did not use LSA completed only the survey questions addressing demographics, perceived knowledge, knowledge, and education and training; only respondents who used LSA were directed to the section on processes and procedures. Therefore, the results of the procedures section do not include those nine participants.
Data Analysis
Descriptive and inferential statistics were used to analyze responses. Descriptive statistics were used to analyze responses to demographic and workplace questions, knowledge of LSA, frequency of collection, and education and training information. To answer questions one through five, addressing training and education, frequency of LSA collection, LSA contexts, procedures for conducting LSA, and analysis measures, multiple-choice questions and rating scales were analyzed using frequencies for ordinal data and descriptive statistics such as mean, standard deviation, range, and standard error for scale variables. To address question six, on the relations among knowledge, use of specific sampling contexts, training, education, level of knowledge, and frequency of use, correlation coefficients were calculated. Because multiple assumptions were violated, the non-parametric Spearman’s rank correlation coefficient was used to evaluate the relationships between variables. The absolute value of the correlation coefficient was used to denote effect size with the following interpretations: 0.00–0.09 a negligible correlation, 0.10–0.39 a weak correlation, 0.40–0.69 a moderate correlation, 0.70–0.89 a strong correlation, and 0.90–1.00 a very strong correlation [34,35]. Because statistical significance only demonstrates that a relationship is not zero, confidence intervals were also presented to convey the magnitude of estimation error of Spearman’s rank coefficient rs [34,35]. The results of the survey are organized by research question.
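As an illustration of this analytic approach, the sketch below computes Spearman’s rs with scipy, builds an approximate confidence interval via the Fisher z-transformation, and applies the effect-size labels above. The data and variable names are hypothetical, and the standard-error formula shown is one common large-sample approximation, not necessarily the method used in the present analyses.

```python
# Illustrative sketch (hypothetical data): Spearman's rank correlation
# with an approximate 95% CI via the Fisher z-transformation.
import numpy as np
from scipy import stats

def spearman_with_ci(x, y, alpha=0.05):
    """Return Spearman's rs, its p-value, and an approximate CI."""
    rs, p = stats.spearmanr(x, y)
    n = len(x)
    z = np.arctanh(rs)                     # Fisher z-transform
    se = 1.0 / np.sqrt(n - 3)              # large-sample standard error
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return rs, p, (np.tanh(z - z_crit * se), np.tanh(z + z_crit * se))

def effect_size_label(rs):
    """Map |rs| onto the interpretation thresholds given above."""
    r = abs(rs)
    if r < 0.10: return "negligible"
    if r < 0.40: return "weak"
    if r < 0.70: return "moderate"
    if r < 0.90: return "strong"
    return "very strong"

# Hypothetical pairing: years practicing vs. knowledge-question score.
years = np.array([2, 5, 8, 12, 15, 20, 25, 3, 7, 18])
knowledge = np.array([4, 6, 5, 8, 9, 10, 11, 5, 6, 9])
rs, p, (lo, hi) = spearman_with_ci(years, knowledge)
print(f"rs={rs:.2f} ({effect_size_label(rs)}), p={p:.3f}, "
      f"95% CI=({lo:.2f}, {hi:.2f})")
```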
Research Question 1: Training and Education
The first research question examined the training and education SLPs receive on LSA procedures. Respondents were asked to indicate where they received training in LSA, with seven available options: undergraduate, graduate, clinical fellowship year (CFY), conferences, trainings, self-taught, and research journals. Respondents allocated a percentage of their knowledge to each option, with the seven allocations summing to 100%. The results are located in Table 2.
The majority of respondents indicated that they received their education and training on language samples in their undergraduate (M=19.82%, SD=24.52) and graduate programs (M=40.29%, SD=26.89). SLPs indicated that they received, on average, 4.40% of their knowledge from research journals. Additionally, conferences and trainings received low percentages (6.04% and 6.01%, respectively). Table 2 presents the differences between respondents who reported completing LSA as part of their practice and those who did not. Because of the largely unequal group sizes between LSA users (n=81) and non-users (n=8), differences between group responses were explored descriptively.
Differences were observed in the allocation of percentages for those who did not complete LSA. Respondents who did not complete LSA reported a higher percentage for undergraduate training programs (M=37.5%, SD=30) than those who completed LSAs. Additional differences for respondents who did not complete LSA included higher percentages allocated to knowledge from conferences and no indication of receiving education from trainings specific to language samples.
Respondents were also asked to identify the last time they completed training of any kind across the SLP scope of practice and, specifically, the last training completed on LSA. Most SLPs had completed a general training within the last 3 months (81%). However, when looking specifically at the last LSA training, most respondents reported it was either about one year ago (34.5%) or more than five years ago (31%). Table 3 presents the differences in trainings completed between LSA users and non-users.
There were minimal to no differences between the groups in when they last completed an overall training. However, there were notable differences in the groups’ reports of when they last attended an LSA training: more LSA users had completed an LSA training in the last year (71%) than non-LSA users (50%).
Research Question 2: Frequency of Conducting LSA
The second research question focused on the frequency of conducting LSA. Respondents indicated the number of samples completed within the last year. The multiple-choice question presented options from one sample to 30 or more, in increments of five. The total number of language samples respondents collected over the last year varied: the highest percentage of respondents collected between five and ten language samples (24%), followed by those who collected more than 30 (21.3%). Over half of the respondents completed more than 10 language samples in the last year (58.7%).
Research Question 3: LSA Context
The third research question addressed the contexts SLPs use when collecting a language sample; context refers to the type of task used to elicit language. Respondents could select all of the contexts they use when eliciting a language sample: conversation and play-based, story retell, story generation, expository, picture description, and observation of child communication. The most frequently selected context was conversation and play-based (58.7%), followed by story retell (26.7%). Story generation (5.3%), picture description (5.3%), and observation (4%) were selected least often. None of the respondents selected the expository context.
Research Question 4: LSA Procedures
The fourth research question examined the procedures for conducting and analyzing language samples. The survey instrument included separate questions on collection, transcription, and analysis procedures; specifically, respondents answered questions on when they use LSA, recording procedures, and transcription procedures. Over half of the respondents selected all of the choices, indicating that they used LSA for initial evaluations, re-evaluations, and progress monitoring (52.6%). Of the remaining respondents, 32.9% used LSA for evaluations only, 9.2% for progress monitoring, 2.6% for initial evaluations, and 2.6% for re-evaluations.
Respondents were asked whether they recorded the sample. Most respondents indicated that they record language samples (74.7%). However, when asked specifically about transcription practices, only 41.3% of respondents recorded the sample and then transcribed it later; 32% used a combination of live transcription and recording for later transcription, and 26.7% transcribed the samples live.
Respondents reported their analysis procedures in two separate questions. First, participants were asked to rate how often they used analysis measures in the syntax, semantics, pragmatics, and cohesion categories on a scale from never to always. The most consistently used category was syntax, with 71.6% reporting they always use syntax analysis measures, followed by cohesion at 56.8%; the least consistently used were semantics (43.2% always) and pragmatics (48.6% always). Semantic, pragmatic, and cohesion analysis measures were more often rated as used ‘sometimes’ or ‘half of the time.’
After answering the general category question, participants were asked to select all analysis measures they had used in the past from a list of 20 measures across syntax, semantics, structure, and discourse. The mean number of analysis measures used was 5.37 (SD=4.35, range=0–17). Table 4 depicts the frequency of use of each analysis measure. No respondent reported ever having used Assigning Structural Stage or the Index of Productive Syntax. The most frequently used analysis measure was MLU (97%); other frequently used measures were grammaticality (72.2%), tense (63.9%), and number of total words (NTW; 58.3%).
Research Question 5: Data Interpretation
The fifth research question examined how school-based SLPs interpret the data and results of a language sample analysis. Questions in this section examined why SLPs use language samples and whether they use available normative data to compare against analysis results. Respondents were asked to rate the following reasons for conducting LSA: supplementing a standard score, naturalistic language measure, goal planning, following district processes/procedures, eligibility decisions, and progress monitoring. The rating scale consisted of five options: always, almost always, half of the time, sometimes, and never. For descriptive analysis, these were collapsed into three categories: frequently (always and almost always), occasionally (half of the time and sometimes), and never. Most respondents indicated that they frequently use LSA as a naturalistic measure of a child’s language abilities (64.5%). SLPs also frequently used LSA to supplement the standard score from a norm-referenced test (59.2%) and to assist with planning treatment goals (55.3%). SLPs occasionally used LSA to follow the processes of their district (32%) or for progress monitoring (39.5%). Respondents were also asked whether they used available normative data, such as Brown’s morphological stages or the SALT normative database, to make decisions about a child’s performance on a language sample. Over half of the respondents (56.7%) reported consistently using available normative data to interpret the results of the collected language sample.
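The category collapse described above is a simple many-to-one mapping; a minimal sketch is shown below, where the response strings are hypothetical labels standing in for the survey’s actual coding.

```python
# Minimal sketch of the descriptive collapse described above: five
# response options mapped onto three reporting categories.
COLLAPSE = {
    "always": "frequently",
    "almost always": "frequently",
    "half of the time": "occasionally",
    "sometimes": "occasionally",
    "never": "never",
}

responses = ["always", "sometimes", "never", "almost always"]
print([COLLAPSE[r] for r in responses])
# ['frequently', 'occasionally', 'never', 'frequently']
```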
Research Question 6: Relation Between Knowledge and Frequency of Use
The sixth research question examined the relation between SLPs’ knowledge of collecting, transcribing, and analyzing language samples across different contexts and the frequency of use of specific language sample contexts. Respondents rated their level of knowledge of collection, transcription, and analysis across the conversation, story retell, story generation, picture description, and expository contexts on a 10-point scale. The average perceived knowledge rating was 7.09 (SD=1.65, range=2.20–10.00). SLPs rated their knowledge of the collection of conversational language samples higher than any other context and rated their knowledge of tasks for the expository context lower than any other context. The results for perceived knowledge and knowledge of current recommended practices included both respondents who indicated using LSA and those who did not. The differences in perceived knowledge for those who did not indicate using LSA are located in Table 5.
Additionally, respondents rated their agreement with 15 statements about evidence-based practice recommendations for conducting LSA. The statements that align with evidence-based practice recommendations are indicated with an asterisk; these statements could be considered “correct.” A response was counted as correct when the participant agreed with an evidence-aligned statement or expressed low agreement with a statement contradicting the evidence for LSA. The mean number of questions answered correctly was 7.08 (SD=2.74, range=0–13). No respondent answered all of the knowledge questions correctly. The differences in answers for each knowledge question and in total knowledge questions correct for those who did not indicate using LSA are located in Table 6.
To determine the relations among the knowledge variables, the contexts of samples collected, and demographic characteristics, we completed correlational tests using Spearman’s rank correlation coefficient (rs), given the ranked variables. The tests yielded significant correlations for the following pairs: knowledge and perceived knowledge, perceived knowledge and years practicing, years practicing and knowledge, years practicing and recording, and years practicing and total analysis measures used. Statistical significance in Spearman’s rank correlation tests indicates only that the correlations are not zero. The correlational data indicate a moderate correlation between years practicing and knowledge, a weak correlation between knowledge and perceived knowledge, and a weak correlation between years practicing and recording. Additionally, there was a weak negative correlation between years practicing and total analysis measures used. The results of the correlational tests are located in Table 7.
DISCUSSION
Overall, the purpose of this survey was to gather information from school-based SLPs on their knowledge of LSA evidence-based practice recommendations, educational opportunities related to LSA, and procedures used when conducting LSA. This information was collected to better understand the previously reported barrier of insufficient education in the implementation of LSA [27,31]. The results indicate that SLPs use various forms of informal language assessment, including LSA: 90% of respondents indicated using LSA as an informal language assessment measure, and the majority of those respondents completed more than 10 samples a year (58.7%). However, the results also demonstrate a lack of knowledge of current recommended practices; respondents, on average, answered only 50% of the knowledge questions correctly. This lack of knowledge could be due to limited education and training in LSA, as respondents indicated that the majority of what they know about LSA came from their undergraduate and graduate training. Participant responses to knowledge and practice questions indicated a continued gap in current LSA practice, including the contexts of collected samples, the recording and transcription process, and the analysis measures completed.
Consideration and Use of LSA
A higher percentage of SLPs indicated using LSA, and indicated a higher number of samples completed, in this survey than reported in previous LSA surveys [27,31]. Reasons may include nonresponse bias and the lack of a stipulated time frame. Respondents self-selected to take the survey; an SLP who does not conduct LSA may not have responded because of the survey’s title, so the percentage of SLPs who conducted LSA may be inflated by nonresponse bias. Additionally, respondents were not given a time frame when questioned about the types of informal language assessment used, much like the question used in Kemp and Klee [27]. The lower percentage of SLPs who indicated using LSA in the Pavelko et al. [31] study could have been due to that survey’s specified one-year time frame.
Respondents in this survey also indicated collecting more language samples over the last year than respondents in Pavelko et al. [31]: more than half of the respondents in this survey (58.7%) completed more than 10 language samples in the last year, whereas over half of the respondents in Pavelko et al. [31] completed fewer than 10. Reasons for this difference could include responder bias and differences in demographic characteristics. For example, the data indicated that an SLP who self-selected to participate in a survey on language sample practices was more likely to use LSA on a regular basis. There may also have been differences in the demographic or workplace characteristics of the respondents in the current survey compared to the respondents in Pavelko et al. [31].
LSA Knowledge and Practice Patterns
Although the results demonstrate that SLPs are using language samples more frequently and completing a larger number of samples during the course of a school year, a research-to-practice gap remains in the use of recommended collection practices. The majority of respondents agreed that language samples are valid measures of a child’s language and can differentiate a child with a language disorder from a child with typically developing language. Additionally, respondents agreed that language samples should be recorded, are useful for goal planning, and are efficient assessment tools. However, there was variability in the level of agreement on the appropriate context to elicit the most advanced language for each age group: over half of the respondents correctly identified the context that elicits the most advanced language for each age group, yet the knowledge that was displayed did not transfer to current procedures. There remains a gap between knowledge and application of LSA recommended practices, specifically related to elicitation context, recording and transcription practices, and analysis measures.
The most frequently identified elicitation context was conversational and play-based language samples. This is concerning when considering the context of the sample relative to the population served. The majority of respondents worked primarily in elementary settings, and current recommended evidence-based practices indicate that a conversational language sample does not elicit the highest language capabilities of elementary school students [36,37]. Additionally, it is concerning that 30% of respondents worked in middle and high school settings, yet no respondent selected the expository context. One of the most frequently identified reasons for collecting language samples was as a naturalistic language measure, and there appears to be a misconception that naturalistic language refers only to language used in conversation or during play-based activities. Naturalistic settings for an older elementary, middle, or high school student also include academic contexts such as narrative generation or expository discourse; a conversational or play-based sample is not recommended for eliciting the highest level of academic language [36–38].
Evaluations of language should mirror the needs of the student based on age, grade level, and language expectations. If language evaluations do not assess the language needed at specific academic levels, then the goals derived from the assessment may not benefit the student’s educational performance. Federal guidelines indicate the need for varied assessment, eligibility must point to an educational need, and goals should align with the curriculum and related educational performance areas; therefore, a complete and thorough evaluation needs to include valid measures of both social and academic language performance. The end goal of LSA should drive the elicitation context. If discourse, topic maintenance, and appropriate topic initiation are the goals, then a conversational language sample is appropriate; however, if the goal of analysis is complex language, cohesion, flow, or a consistent theme, then contexts such as story generation or expository generation are more appropriate [39].
Recording the sample for later transcription remains an area of practice in which SLPs are not following recommended guidelines. The majority of participants who completed language samples also recorded them (74%), an increase over the 43% of participants in Pavelko et al. [31] who audio-recorded their language samples. This increase could be due to the demographics of the current sample: Pavelko et al. [31] demonstrated that SLPs early in their careers were more likely to record the sample for later transcription, and half of the participants in the current survey had been working fewer than 10 years. However, although the majority of participants recorded the sample (74%), only 41% recorded the sample and then transcribed it later; the remainder either transcribed live or used a combination of live transcription and later transcription. Current recommended practice is for an SLP to record the language sample for later transcription so that the transcription is accurate [40]. Transcribing in real time does not allow for the most naturalistic and uninhibited elicitation techniques: in an effort to make the process of language sampling more efficient, the SLP diminishes the accuracy of the transcription and overlooks potential opportunities to elicit more complex or productive language.
The final component of the knowledge-to-practice gap is the lack of varied analysis procedures. Respondents reported using an average of five different analysis measures out of a list of 20, suggesting that SLPs may not individualize the analysis measures for each child but instead complete the same analysis process each time a language sample is collected, regardless of the student’s needs. The majority of the most frequently chosen analysis measures are measures of syntax. The data appear to demonstrate that SLPs conduct LSA using the same analysis measures each time they elicit a language sample. The end goal of analysis and the type of information needed should dictate the context and analysis measures. If an SLP is conducting a language sample to supplement a norm-referenced score, the SLP needs to select analysis measures that support that goal. For example, if an SLP were interested in the language complexity of a 5th-grade student due to a low grammar score on a norm-referenced test, the SLP could analyze a language sample for complexity measures such as clausal density, subordinate clauses, number of adverbs, and number of verbs. However, if an SLP were interested in the language productivity of a preschooler, measures of NDW and MLU within a conversational context would be appropriate.
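To illustrate how a complexity-oriented measure differs from the productivity measures sketched earlier, the toy example below approximates clausal density. It is a crude proxy under stated assumptions, not a validated clinical measure: real clausal density (clauses per C-unit) requires syntactic parsing and trained judgment, whereas this sketch merely counts one main clause per utterance plus subordinate clauses flagged by a small, hypothetical list of subordinating conjunctions.

```python
# Toy proxy only (hypothetical data and conjunction list): approximates
# clauses per utterance by adding one main clause per utterance plus
# any words matching a small set of subordinating conjunctions.
SUBORDINATORS = {"because", "when", "if", "although", "while",
                 "since", "after", "before"}

def toy_clausal_density(utterances):
    """Approximate clauses per utterance via conjunction counts."""
    total_clauses = 0
    for u in utterances:
        words = [w.lower().strip(".,!?") for w in u.split()]
        total_clauses += 1 + sum(w in SUBORDINATORS for w in words)
    return total_clauses / len(utterances)

sample = [
    "she stayed inside because it was raining",
    "the game ended",
    "when the bell rang the students left",
]
print(f"toy clausal density: {toy_clausal_density(sample):.2f}")  # 1.67
```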
Education and Training
The majority of the respondents’ training and education came from their undergraduate and graduate training programs (60.11% combined) rather than other sources. Very few SLPs reported attending trainings or reading journal articles on LSA as sources of their information. After undergraduate and graduate training, the next most frequently identified source was self-taught knowledge. Due to the nature of the question, the underlying sources of that self-taught knowledge are unclear, and their validity is questionable. This is concerning, as journal articles, webinars, and tutorials are available to SLPs to increase their education and training in recommended practices for LSA (e.g., [11,21,41]).
Given that SLPs reported their primary source of learning about LSA was undergraduate and graduate training, there is a need to examine the LSA information provided in personnel preparation programs. Preparation programs must ensure that they are preparing students for practice within the schools, which includes providing education and training on general LSA evidence-based procedures as well as on how to decrease personal biases, increase cultural competence, and use reliable and systematic procedures. The education and practice provided in undergraduate and graduate training programs impact individuals long after their training is complete. Therefore, it is important that personnel preparation programs extend instruction on LSA knowledge and skills to emphasize differential, individualized assessment.
Limitations and Future Directions
The results of the survey should be interpreted with caution in terms of generalization, as they represent the relatively small sample of respondents who completed the survey. The limitations of this study include the small sample size, nonresponse bias, and self-reporting biases. The analytic sample included 90 respondents; this relatively small sample size could have limited the ability to find significant differences in the data, and analysis of a larger sample may have demonstrated clearer patterns or trends, yielding additional significant correlations. Despite representing all regions of the United States, the processes, procedures, knowledge, and education of the respondents may not fully represent the population of current school-based SLPs. The recruitment process relied on self-selection through various professional websites and university email systems, so there may be selection biases among those who began the survey, those who did not, and those who completed it. Finally, the results are based on self-reports of LSA processes and procedures with no direct observation of SLPs completing language samples; therefore, responses may not be fully representative of actual practices.
The results of this survey indicate a gap between school-based SLPs’ knowledge and practice, as well as a gap between their knowledge and practice and evidence-based LSA guidelines. The results also point to future directions, including exploring the informal assessment practices of SLPs who work with adolescents in middle and high school. This survey was open to all current school-based SLPs; a narrower scope would yield more specific information on the informal assessment practices of SLPs working with middle and high school students, including the specific contexts of language samples used during informal language assessments. Additionally, the results indicated a need to look closely at undergraduate and graduate curricula on LSA, as respondents indicated that the largest share of their education and training came from these programs. Future work should explore effective education, training, and practice in language sampling.
CONCLUSIONS
Informal language assessment is an important part of a complete and thorough evaluation of language. Completing a language sample provides valuable information on a child’s naturalistic language and can be used for identifying language disorders, differentiating language disorder profiles, progress monitoring, and treatment planning. Currently, school-based SLPs use a variety of procedures when conducting LSAs, and a research-to-practice gap persists with respect to evidence-based guidelines for LSA. Continued examination of LSA knowledge, skills, and practice patterns is worthwhile to support comprehensive language evaluation practices that include naturalistic and functional measures.
References
1. Individuals with Disabilities Education Act, 20 U.S.C. § 1400 (2004).