Abstract
Purpose
One purpose of this preliminary case report was to document the challenges that can arise when administering adult neurogenic tests via telepractice. Another purpose was to develop recommendations on adaptations that can be made when speech-language pathologists administer the tests employed in this report. Although one of the tests has been adapted by its authors for remote administration, little or no published guidance on remote administration exists for the other two.
Methods
A 33-year-old female and a 51-year-old male, both of whom have acquired neurogenic communication disorders, were given the Apraxia Battery for Adults-Second Edition, the Ross Information Processing Assessment-Second Edition, and the Quick Aphasia Battery-Remote via telepractice across two to three sessions per participant. The female participant used an iPad during remote testing, while the male participant used a Chromebook.
Results
All three tests presented some challenges when given remotely. Beyond slight technical issues, the research team needed to make various test adaptations, such as placing pictures into a Google Slides presentation so that patients could adequately see the target picture. In other instances, the wording of questions was changed slightly while still retaining the general integrity of the original item.
INTRODUCTION
Over 2 million Americans live with aphasia, and nearly 180,000 individuals are identified with aphasia each year [1]. Stroke is a leading cause of aphasia. In addition, approximately 1.5 million Americans sustain a traumatic brain injury (TBI) each year. According to the Centers for Disease Control and Prevention [2], an estimated 223,000 individuals were hospitalized in 2019 due to a TBI. Given the number of persons hospitalized as a result of a TBI, it is not surprising that an estimated 10.9% of adults in the U.S. have a cognitive disability involving limitations in concentration, memory, or decision making [2]. The prevalence of acquired apraxia of speech (AOS) is more difficult to discern because it commonly co-occurs with other neurologic communication disorders, including aphasia and dysarthria. Based on these reports, many people likely need the expertise that speech-language pathologists (SLPs) can provide. However, not all may be able to access the services they need.
Need for remote test administration
Individuals living in remote or rural geographic locations face many healthcare disparities. Their access to speech-language pathology services at optimal quality, frequency, and intensity is often compromised by a variety of factors including, but not limited to, location, mobility challenges, transportation limitations, restricted choice of providers, and the time required to travel to desired services. Cognitive, communication, and/or swallowing disorders already place these individuals in a vulnerable position, and geographic location adds further strain on their access to quality rehabilitation services.
Twenty percent of the U.S. population lives in rural areas [3], and 44% of the world population is classified as rural [4]. The World Health Organization's equity objective asserts that all individuals should have equal access to healthcare services regardless of geographic location [5]. However, incongruities between rural and urban populations in their access to therapeutic services are reported worldwide [6,7]. The consequences of this service disparity include higher rates of chronic disease and adversity, contributing to poorer health and quality of life [8]. Developing effective practices for improving assessment and intervention for all persons with cognitive-communicative or swallowing disorders who desire treatment is therefore essential.
Geographical barriers to healthcare span ages and disability categories. Patients in rural areas are often less likely than patients in urban areas to access rehabilitation services following a stroke [9]. Rural survivors of TBIs up to 1–1.5 years post injury and hospitalization were more likely to remain dependent and experience restricted health status than their urban counterparts [10]. Of over 1,500 individuals living with multiple sclerosis surveyed across 50 states, those residing in rural areas reported limitations in healthcare access as well as negative impressions relating to the quality of care they received [11].
Similarly, clinicians working in remote areas face challenges in providing quality skilled therapy services to their communities. Barriers facing rural practitioners include recruitment and retention issues, hospital closures, efficiency demands, travel constraints, caseload capacity, and restricted access to specialized training, specialty services, and bilingual resources [12–14]. Brems et al. [15] studied 1,500 healthcare providers in New Mexico and Alaska and found that the smaller a clinician's practice community, the greater the barriers. The most significant barriers included resource limitations, provider travel, service access, and training constraints.
While the global need to close the gap in service delivery between rural and urban areas is clear, the question of telepractice efficacy has also been investigated. Covert et al. [16] compared telepractice and in-person speech-language therapy and found that individuals receiving telepractice services had better attendance and similar expressive language outcomes. The study also suggested enhanced cost effectiveness for both patients and facilities and improved clinician efficiency when services were provided via telepractice. In a study by Rogalski et al. [17], telepractice was effective in treating individuals with progressive aphasia. Additional research suggests teleconferencing is well suited for group aphasia therapy [18]. Four studies involving telepractice for remote assessment found no significant differences in assessment scores between in-person and telepractice settings across most communication domains, regardless of aphasia severity [19]. Limitations identified included SLPs' difficulty in assessing naming accuracy and correctly identifying paraphasias across the computer screen. Technological challenges included issues with sustained internet connectivity, audio/visual delay, and reduced quality of visual cues and stimuli compared to in-person services, as well as patient concerns related to privacy and equipment. Within TBI rehabilitation, telepractice studies suggest it is effective specifically for communication partner training, discourse treatment, social communication skills training, and metacognitive therapy [20].
Another important factor to consider is telepractice's effect on patient satisfaction in rural communities. Dunkley et al. [21] compared rural SLPs' and patients' views on access to and use of technology for telepractice and found that rural patients had better access and more positive attitudes toward telepractice services than SLPs expected. A systematic review of telepractice in rural settings by Harkey et al. [8] reported high rates of patient satisfaction with occupational, physical, and speech pathology services delivered to rural communities.
Tests that can be used for adult neurogenic patients
SLPs can use various tests to evaluate their adult neurogenic patients. These include the Apraxia Battery for Adults-Second Edition (ABA-2) [22], the Quick Aphasia Battery (QAB) [23], and the Ross Information Processing Assessment-Second Edition (RIPA-2) [24].
The Apraxia Battery for Adults-Second Edition [22] was designed to provide information to determine whether a person has AOS, nonverbal oral apraxia, and/or limb apraxia, along with corresponding severity levels. The original version of the test was normed on 40 adult male veterans who had apraxia, aphasia with co-occurring apraxia, or dysarthria with co-occurring aphasia. The current version added items with increased complexity. The ABA-2 includes the following six subtests: Diadochokinetic Rate, Increasing Word Length, Limb Apraxia and Oral Apraxia, Latency Time and Utterance Time for Polysyllabic Words, Repeated Trials, and Inventory of Articulation Characteristics of Apraxia. For example, on Subtest 2, Increasing Word Length, an individual is asked to repeat words that add more syllables and complexity with each production (e.g., thick, thicken, thickening). For Subtest 5, Repeated Trials, persons repeat a word three times (e.g., flashlight, umbrella) to determine whether performance changes across the three trials. On Subtest 6, Inventory of Articulation Characteristics of Apraxia, individuals complete a variety of tasks, including describing an action picture, reading aloud the “My Grandfather” passage, and counting forward and backward from 1 to 30. Certain subtests require SLPs to time aspects of patients' responses. For example, on Subtest 4, Latency Time and Utterance Time for Polysyllabic Words, SLPs present a picture to participants, then measure latency time, which refers to the time between the picture's presentation and when individuals start the utterance. Next, SLPs record the utterance time, which is how long persons take to say the pictured item. Times are rounded to the nearest second.
Subtests are scored in various ways based on the instructions. Some subtests use a 3-point scale, others use a 6-point scale, and still others record the number of seconds for production and the number of errors that occurred. For example, Subtest 2, Increasing Word Length, uses a 3-point scale in which a score of 2 indicates the response was correct, without hesitation or errors. A score of 1 indicates clients self-corrected, had a significant delay before their answer, evidenced visual and audible searching, and/or had one or more articulatory errors, but still produced the correct number of syllables. A score of 0 means that participants did not give a response, attempted to but did not produce a word, said the incorrect number of syllables, or misarticulated the word. Subtest 3, Limb Apraxia and Oral Apraxia, uses a 6-point scale in which 5 indicates the participant gives an accurate, prompt, complete, and readable gesture, while 0 is given when participants cannot perform the correct gesture even after a demonstration. Levels of severity (i.e., none, mild, moderate, and severe) are based on raw scores.
The Quick Aphasia Battery [23] is a brief assessment of language function that still yields robust results similar to lengthier aphasia tests SLPs may use, such as the Western Aphasia Battery-Revised [25]. The QAB provides more information than a bedside screener but is time-efficient, with an administration time of approximately 15 minutes. It has parallel versions to better allow SLPs to measure progress while lessening concerns that patients will memorize the entire test. The QAB was normed on 83 participants: 28 with acute stroke and aphasia, 25 acute stroke patients with no aphasia, 16 chronic stroke patients with aphasia, and 14 healthy controls. The QAB comprises eight subtests, each containing 5–12 items, and probes various areas of language at different levels of difficulty (e.g., some subtests start more simply and become more challenging). The Quick Aphasia Battery-Remote [23] allows SLPs to give the test via teleconference and includes the following subtests: Level of Consciousness, Connected Speech, Sentence Comprehension, Repetition, Motor Speech, Extra Sentence Comprehension, Picture Naming, Reading Aloud, Word Comprehension, Extra Word Comprehension, Written Word Comprehension, and Writing. For example, in the Connected Speech subtest, individuals are asked to talk for three minutes on topics selected by the clinician, such as describing the best trip they have ever taken, the worst trip they have ever taken, and their first job. For Sentence Comprehension, persons answer yes/no questions, such as “Are babies named by parents?” and “Are people taxed by governments?” The Writing subtest has individuals write down the names of pictures or what is happening in a picture.
Scoring varies by subtest. Subtests 2 (Connected Speech) and 8 (Motor Speech) are scored by level of severity (severe, marked, moderate, mild, or normal), while most of the remaining subtests are scored using a multidimensional system. For many of these subtests, a score of 4 indicates the response was correct and completed in a timely manner. A score of 3 corresponds to a response that was correct but delayed, self-corrected, or given after the patient requested that the item be repeated. A score of 2 can indicate a partially correct response, while a score of 1 means the answer was incorrect but related. A score of 0 means the response was incorrect or unrelated, or that there was no response. There is currently no scoring system for the writing portion of the QAB-Remote.
The RIPA-2 [24] evaluates cognitive abilities in adults and was normed on 126 adults who had sustained a TBI. The test contains 10 subtests, each with 10 items arranged hierarchically from simple to more complex tasks. The ten subtests are: Immediate Memory, Recent Memory, Temporal Orientation (Recent Memory), Temporal Orientation (Remote Memory), Spatial Orientation, Orientation to Environment, Recall of General Information, Problem Solving and Abstract Reasoning, Organization, and Auditory Processing and Retention. For example, Subtest 1, Immediate Memory, prompts participants to repeat increasingly long, randomly ordered strings of numbers and words and ends by asking individuals to repeat a complex sentence. As another example, Subtest 8, Problem Solving and Abstract Reasoning, asks patients what they would do if they were driving on the freeway and ran out of gas, and also asks for three reasons one might move to a new town. Subtest 9, Organization, contains the only timed tasks, in which persons must name members of categories within 1 minute. Patients respond to all RIPA-2 items verbally.
Subtests are scored on a 3-point scale. Generally, a score of 3 represents a correct response, 2 a self-corrected response or a correct response accompanied by irrelevant information, 1 an incorrect response, and 0 no response or an unintelligible response. Additionally, a diacritical scoring system is used to qualitatively describe the participant's response behavior, such as “p” for perseveration and “r” when repetition of the stimulus was requested to complete the task. Raw scores are converted to percentile ranks and standard scores before each subtest is assigned one of the following severity levels: mild, moderate, marked, or severe.
There are few studies on the use of specific speech-language assessments via telepractice, with the exception of recent investigations of remote administration of the WAB-R [25]. The QAB is comparable to the WAB-R in purpose and reliability [23]. Dekhtyar et al. [26] investigated remote administration of the WAB-R in 20 individuals with chronic acquired aphasia and found comparable results, with no significant differences between in-person and telepractice administrations. Prior to administration, testing materials were mailed to the individuals as needed, and picture book stimuli were uploaded to a digital file for computer-based administration. A number of modifications were made for remote administration, including the use of shared screen and controls, screenshots of responses for scoring, and alternative commands that allow a clinician to view the individual's response (e.g., “Touch your left eye” vs. “Touch your left knee”). The authors reported the following recommendations to enhance ease of telepractice assessment: caregiver assistance for initial set-up of technology (e.g., assistance in accessing the telepractice platform and ensuring the microphone/sound is on), caregiver education to ensure validity of test results by limiting the caregiver's communication support during testing, and consideration of mobility needs such as the use of a mouse.
Of the three tests described above and used in the current report, only one published study was found in which one of the tests, the ABA-2 [22], was administered remotely [27]. That study was published prior to the onset of today's widely available teleconference systems, and the researchers built a custom telerehabilitation system. Eleven participants with AOS were given the ABA-2 in person and via the telerehabilitation system. Some adaptations were made for remote administration, such as scanning the images and placing the digital files into the telerehabilitation software. No significant differences were found between the two testing conditions, though the researchers suggested that persons with severe AOS may be better evaluated in person.
Purpose of this case report
Based on the ever-increasing need for and use of telepractice in the field of speech-language pathology [28–30], as well as the lack of information on the challenges and possible telepractice recommendations when giving some of the available adult neurogenic tests, one purpose of this case report was to document the challenges that can arise when giving such tests via telepractice. Another purpose of this preliminary report was to develop recommendations on adaptations that can be considered when SLPs give the tests used in the current project: the ABA-2 [22], RIPA-2 [24], and QAB-Remote [23].
METHODS
Participants
Two adults participated in this case report (University of Northern Iowa Institutional Review Board Protocol #21-0130). The first participant was a 33-year-old female who premorbidly was a nursing student. She suffered a left hemisphere middle cerebral artery ischemic stroke on June 6, 2018, after which she exhibited global aphasia and right-sided hemiplegia. She received inpatient and outpatient speech, physical, and occupational therapies until March 2019. Since then, she has received individual speech-language therapy through the local university speech and hearing clinic. Prior to the COVID-19 pandemic, she moved away from the local area to a small rural community and is unable to drive. As a result, she has received speech therapy remotely twice a week since January 2020. Her most recent individual treatment goals have focused on improving her spelling and reading aloud skills as well as studying for an online final from her nursing program. She also currently participates in an aphasia group via telepractice offered through a large metropolitan hospital located in another state, and she is a regular guest lecturer in the graduate Aphasia course. While she does not work outside of the home, she helps care for her boyfriend's elementary school-aged daughter and makes crafted items that she sells to help provide an income.
The second participant was a 52-year-old male who premorbidly owned his own insurance agency and was active in coaching local high school sports. He suffered a left hemisphere middle cerebral artery stroke related to a carotid dissection on May 25, 2017, after which he also exhibited global aphasia and right-sided hemiplegia. He received inpatient and outpatient physical and occupational therapies until May 2018, and he had inpatient and outpatient speech therapy until being discharged in November 2019 due to a plateau in his progress. He was evaluated at the local university speech and hearing clinic in March 2021. Since then, he has received individual therapy twice a week via telepractice through the university clinic; he lives a few hours away in a small rural community. His most recent goals involve improving his writing of function words (e.g., “and,” “the”) and his reading aloud abilities, and developing and practicing a father-of-the-bride speech. He attended the university's aphasia group for one semester via Zoom; however, because the other group members attended in person, he felt it was not an ideal experience for him and discontinued with the group. Though he is unable to work, he is also a regular guest lecturer in the graduate Aphasia course.
Procedures
All testing was done via Zoom. Participant 1 completed all telepractice testing using her iPad across two one-hour sessions. The ABA-2 [22] and QAB-Remote [23] were administered during the first session, and the RIPA-2 [24] was administered in the following session. Participant 2 completed all remote testing using his Chromebook across three 50-minute sessions, with one assessment administered per session in the following order: ABA-2 (first session), QAB-Remote Form 3A (second session), and RIPA-2 (last session). Both participants completed all tests in their homes.
No specific remote adaptations have been reported by the ABA-2's [22] test author or publisher, though the current team made some adaptations similar to those described by Hill et al. [27]. The ABA-2 requires a picture book for Subtest 4: Latency Time and Utterance Time for Polysyllabic Words. An action picture and the “My Grandfather” reading passage are used for Subtest 6: Inventory of Articulation Characteristics of Apraxia. The pictures and reading passage were scanned into a Google Slides presentation and then shared with each participant via Zoom's screen-share feature.
The QAB-Remote Form 3A [23] was administered using the authors' set of available presentation slides, which included pictures as well as written words and sentences. All materials were freely available as a zip file after selecting the Remote version of the test from the QAB website [31]. The researchers shared their screen during the subtests that required participants to identify pictures, name pictures, read utterances aloud, or write the names of pictures. For Subtest 3 of the QAB-Remote, individuals look at a stimulus card with objects in a field of six, with the corresponding instruction “Show me the ____.” Each participant had a different response preference. Participant 1 was given control of the screen and placed her cursor on the correct picture. Participant 2 was prompted to say the number that matched the picture, as numbers are a strength for him (e.g., “elephant” is labeled with the number 4, so the correct response to “Show me the elephant” would be “Number 4”). The QAB-Remote required pictures for Subtest 3W: Written Word Comprehension, as well as a page for participants to write down their responses. A Google document was made and shared with each participant so that they could write their responses during this subtest.
For the RIPA-2 [24], some questions required altered wording (e.g., “How long have you been living where you currently are?” instead of “How long have you been in this hospital?”). These adaptations were necessary because some questions assume the patient is in a hospital, which did not apply to either participant as both live at home. Recommended changes to other RIPA-2 [24] questions were identified after testing was completed, based on the participants' surroundings and resources. For example, when Participant 2 was asked what time it was, he looked around his home to find a clock to help answer the question.
RESULTS
Scores on the ABA-2 [22], QAB-Remote [23], and RIPA-2 [24] are in Tables 1–4. Based on the tests given, Participant 1 presented with mild aphasia, moderate anomia, and mild AOS (Tables 1 and 2). On the ABA-2 [22], Participant 1 had no difficulty with increasing word length and did not evidence any limb or oral apraxia. She had fewer errors on automatic speech tasks (e.g., counting) than on volitional tasks (e.g., picture description). She was classified as having no aphasia on the QAB-Remote comprehension and reading tasks, but evidenced difficulty on the remaining items. On the RIPA-2 [24], her responses generally placed her in the mild-moderate level of impairment (Table 3). Errors and self-corrections were most prominent on Subtest 1: Immediate Memory; however, they were also present, to a lesser extent, throughout the test. Participant 1 also demonstrated consistently delayed responses. Many of her lower scores can be attributed to self-corrections, with the exception of Subtest 1: Immediate Memory, where her lower scores can mostly be attributed to errors. Overall, her auditory comprehension was a strength, and she tended to exhibit speech that was telegraphic in nature with several restarts (e.g., g-giving, St. M-Martin) and self-corrections.
Participant 2 presented with characteristics consistent with severe aphasia, severe anomia, and mild-moderate AOS (Tables 1 and 2). On the ABA-2 [22], he demonstrated several articulatory errors and presented with mild limb apraxia. At times, his receptive language difficulties made it hard for him to understand some of the tasks based on the instructions included in the test. On the QAB-Remote, he had no difficulty with the word comprehension subtest but had difficulty with the remaining subtests. On the RIPA-2 [24], Participant 2's responses classified him as primarily having moderate impairments, though his severity levels across the test ranged from mild to severe (Table 4). Similar to Participant 1, he had severe impairment on Subtest 1: Immediate Memory, which requires persons to repeat increasingly long and complex strings of utterances. There were several notable occasions throughout the test when he received low scores on items due to anomia rather than cognitive difficulties. Overall, Participant 2's auditory comprehension was generally a strength; he also evidenced telegraphic speech and several restarts. He expressed more frustration with his communicative challenges than Participant 1 did.
DISCUSSION
Based on the tests used for this case report, both participants evidenced various deficits, though with some differing patterns. Participant 1 did not have any receptive language impairments according to the QAB-Remote [23], but did evidence mild AOS and difficulty with expressive language, specifically with word finding and producing grammatically correct responses. Her speech contained many self-corrections, restarts, and delayed verbal responses. Participant 2 did not evidence any difficulty with word-level receptive language tasks on the QAB-Remote. He had more difficulty with sentence-level comprehension tasks and presented with moderate-severe AOS and expressive language deficits. He often produced nouns and verbs but did not use grammatically correct utterances, and he also had anomia. Similar to Participant 1, he had self-corrections, restarts, and delayed verbal responses.
Both participants had cognitive impairments according to the RIPA-2 [24]. However, the research team believed both participants' co-occurring AOS and expressive language deficits made it more difficult for them to produce correct responses. Informally, both participants demonstrate abilities in their daily lives that indicate adequate cognitive skills. Participant 1 makes her boyfriend's young daughter breakfast in the morning and helps her get ready for school. She also spends the majority of the day at home alone while her boyfriend is at work and his daughter is at school. As previously noted, she makes and sells craft items. She independently keeps track of her schedule and is independent in her self-care. Participant 2, despite looking for a clock to give the specific time on the RIPA-2, knows what time it is during treatment sessions, as he regularly reminds the clinical supervisor when it is time for her to go supervise another session that runs concurrently with his. Prior to the start of one testing session, Participant 2 reminded his wife to respond to questions posed in an email. When asked how long he has lived in his home, he correctly responded by stating the specific number of years, months, and days. His wife works outside of the home, so Participant 2 spends his days home alone, and he also had to remember when the testing sessions were, which differed in day and time from his usual twice-weekly speech-language therapy sessions. Both participants logged onto their devices and into Zoom independently. Thus, while useful evaluation information was obtained for both participants, the presence of aphasia can interfere with the ability to properly evaluate cognition [32]. Even neurologically intact adults can be classified as having mild-moderate cognitive impairments on the RIPA-2 [33]. Nonetheless, the research team wanted to give the RIPA-2 via teleconference to determine whether it could feasibly be given in that manner.
Clinical significance
Clinicians need to be able to assess patient performance in various areas, including determining patients' level of difficulty, so that they can decide upon a treatment plan and prognosis. There also continues to be a need for remote testing of adult patients, yet guidelines for such testing are lacking. Even tests designed for remote delivery, such as the QAB-Remote [23], can still require careful decision-making to ensure an ideal evaluation (see Remote testing recommendations below). Prior research has shown positive patient outcomes, high patient satisfaction, improved clinician efficiency, and cost effectiveness when telepractice has been utilized, including specifically for speech-language pathology services [8,16]. Additionally, generally no significant differences have occurred on most aphasia assessment scores when comparing in-person and telepractice administration [19]. This pilot study provides initial guidance on the challenges SLPs may encounter, and the adaptations they can consider, if they decide to remotely administer the ABA-2 [22], QAB-Remote [23], and/or RIPA-2 [24] to their adult patients.
Remote testing challenges
Remote testing can be very beneficial because of its accessibility. Many individuals are unable to travel to healthcare facilities, perhaps because they cannot drive and/or facilities are too far from where they live [9]. Persons served may be immunocompromised, or there may be a shortage of SLPs [12–15], and telepractice may be the only way clinicians can see all of their patients. In addition, while many individuals may prefer in-person clinical services, clients tend to report satisfaction with remotely delivered speech and language services, find clinicians professional in their interactions, and achieve positive treatment outcomes [16,19,20,29].
Challenges may nonetheless be encountered with remote testing. One of these is typical technical issues, such as the screen freezing or the audio cutting out during test administration or while a client is giving a verbal response. Similar challenges have been reported by others [19]. When Participant 2 was given the ABA-2 [22], the sound cut out for approximately 1 second. It was easy to repeat the item Participant 2 did not hear, and the audio did not cut out again. During administration of the RIPA-2 [24] to Participant 2, which occurred on a different day than the other two tests, the screen froze three times. Testing picked up where it left off, and ultimately test scoring was not affected. Though Hall et al. [19] reported challenges in accurately evaluating naming and identifying paraphasias in persons with aphasia during telepractice, these situations did not emerge during any of the testing sessions. Nonetheless, it is possible that difficulty hearing what the clinician and/or client has said could be a significant barrier to adequately completing a test. During the QAB-Remote, there were a couple of items for which Participant 2 could not hear what was said and asked, “What?” He is not elderly, does not have a hearing loss, and does not wear hearing aids. While persons can have difficulty with speech perception as they age [34], it is possible the lack of face-to-face interaction more negatively affects the ability to hear others when teleconferencing is used [35]. Thus, it is important for SLPs to create as optimal a communication situation as possible, such as ensuring there is no background noise and that the client is sitting in a well-lit area away from other distractions. Such suggestions could be of even greater importance for elderly clients.
Another challenge of remote testing was the inability to record sessions due to university requirements pertaining to the Health Insurance Portability and Accountability Act (HIPAA). Most testing required verbal responses; however, the ABA-2 [22] contains a section on limb and oral apraxia. These sections could not be reviewed following administration because there was no screen recording. When discussing this section after the testing session, the researchers noted slight differences in their records of the responses (e.g., “throw a ball” and “snap your fingers”) but were unable to double-check them. Another area where the lack of a screen recording proved challenging was the RIPA-2 [24]. When Participant 1 was asked questions relating to the date or time, one of the clinicians noted that she may have looked at her screen or a calendar in the background. Participant 2 actually did look around his kitchen to find the clock on Subtest 3. Directions for that particular item do not clearly state “without looking at a clock,” and the research team did not expect him to search his home for one. Times are listed in the corners of various devices, so it could be easy for a client to use external aids when answering that item. Also within the RIPA-2, clients are asked to memorize three words at the start of the assessment. Since video likely shows only their upper body, some clients may be able to write down the three words and refer back to them when asked to recall them, which could skew performance on that item. This did not appear to be the case with either participant in this study; however, it is a plausible scenario with others.
When asked after each test what they thought about it being given via Zoom, both participants felt the ABA-2 [22], RIPA-2 [24], and QAB-Remote [23] could all be administered adequately via telepractice to persons with acquired neurogenic disorders. This is encouraging feedback, as other reports indicate patients and caregivers are satisfied with telepractice options for speech-language-hearing services [29].
Remote testing recommendations
General Recommendations
After administering the three assessments to both participants via telepractice, general recommendations for remote testing are as follows: 1) Participants should make sure they have a strong, reliable internet connection prior to the start of testing to reduce audio breaks or loss of connection. 2) If possible, and pending facility and/or company HIPAA compliance standards, it is also recommended that the whole assessment be screen recorded; this may be particularly beneficial during assessments that require nonverbal tasks. 3) Ensure as ideal a communication situation as possible, with no or at least limited background distractions, and have the client sitting in a well-lit area. Additional recommendations for each test are described below.
ABA-2 Recommendations
A few relatively straightforward recommendations can aid remote administration of this test. For Subtest 3: Limb Apraxia and Oral Apraxia, the limb apraxia items may necessitate that SLPs request a clear view of what patients are doing, as attempts may be made outside of the camera's view (e.g., playing the piano). That may require clients to hold up their hands/arms in view of the camera, if possible, or, if feasible, to move their device's camera so that it captures the gesture. When this subtest is scored in its typical fashion, demonstrations are not provided unless there is no response for 10 seconds or the response is unsuccessful [22]. Instead of this approach, Hill et al. [27] gave participants verbal instructions for a task, asked them to produce it, then gave a visual demonstration of the gesture and asked individuals to produce it again. If participants correctly performed the task after the verbal direction, performance after the visual demonstration was disregarded. Though this approach was not employed in the current study, it is another adaptation clinicians could consider.
For Subtest 4: Latency Time and Utterance Time for Polysyllabic Words, patients are shown pictures while the SLP times both the latency time (i.e., the time from when the person is first shown the picture to when the person begins the utterance) and the utterance time (i.e., how long it takes the patient to say the target word). The pictures are located in a spiral-bound presentation book. It is unrealistic for a clinician to hold the target picture up to a camera while simultaneously taking latency and utterance measures, then flip to the next picture and start again. Instead, SLPs should scan each picture into a Google Slides presentation, Word document, or other shareable format with one picture per page so that each item is large enough to be clearly seen. When clinicians reach this subtest, they can share their screen with patients and advance to each picture when ready while also taking the above-noted measurements. Subtest 6: Inventory of Articulation Characteristics of Apraxia includes an action picture that persons are to describe as well as the Grandfather Passage to read aloud. As described above, the action picture and reading passage should similarly be scanned, placed on their own slides or pages large enough for patients to see, and then shared via Zoom or whatever telepractice platform is being used. Dekhtyar et al. [26] also recently discussed using screen sharing when giving tests remotely.
RIPA-2 Recommendations
A specific recommendation for this test is to try to ensure that clients are not writing down the three objects they need to memorize and recall later. This is not likely to be an issue for most clients, but some are given the same tests repeatedly. If they have learned to anticipate being asked to recall these three objects, they may try to take advantage of the useful but disallowed strategy of writing down their names. Clients should also be instructed that they cannot use external aids to determine the time in Subtest 3: Temporal Orientation (Recent Memory). Clinicians will also need to adjust some of the wording to best meet the needs of the testing situation while retaining the general integrity of the test question. The wording adaptations included in Table 5 were used or could be made.
QAB-Remote Recommendations
Recommendations for the QAB-Remote [23] mirror those given for the ABA-2 [22] in terms of ensuring clinicians can see what the client is doing. More specifically, for Subtest 1, the two commands ask patients to “Close your eyes” and “Point to the ceiling.” SLPs will need to ensure that they can indeed see the patient carrying out both tasks. When asked to “Close your eyes,” patients may need to move their face close enough to the camera to be seen, especially if lighting is not ideal. For the remaining subtests, the QAB-Remote conveniently includes slides containing the target items and the capability to increase the size of the pictures or written words so that the client can see them better. For Subtest 3W, Written Word Comprehension, the researchers made a Google document that contained each of the target items and provided space for participants to write their responses. Though individuals could also write responses in the Zoom chat, using a Google document allows clinicians to see responses in real time and to better determine whether any delays or corrections occur. However, if a Google document is not viable, the chat feature will still provide SLPs with helpful data.
LIMITATIONS AND FUTURE RESEARCH
Limitations of this preliminary case report must be considered. There were only two participants, both of whom evidenced generally non-fluent, telegraphic speech, apraxic errors, and strong auditory comprehension. This small sample size does not allow the results to be generalized. It is possible that differing results would have occurred had more participants with similar profiles been included in this preliminary report. Also, the descriptions are limited because persons with fluent aphasia (e.g., Wernicke's aphasia) were not included, nor were there any participants with only cognitive impairments and no co-occurring aphasia or AOS, even though such profiles clearly differ from those of the adults in this report [36]. It is unknown how the inclusion of such individuals would have affected the test administration challenges and subsequent recommendations described above. An additional limitation is that only the ABA-2 [22], QAB-Remote [23], and RIPA-2 [24] were used. Little information was found on potential challenges and recommendations when giving these three tests via telepractice [27]. However, SLPs can give a range of neurogenic tests to their patients based on their workplace and/or personal preferences [37].
Based on these limitations, future research could replicate this study with a larger number of participants, administering the same tests via telepractice both to more individuals with similar profiles (i.e., non-fluent, telegraphic speech, good auditory comprehension) and to persons with differing profiles, such as those with Wernicke's aphasia. It is possible that differing client profiles (e.g., non-fluent vs. fluent aphasia, mild vs. severe cognitive impairments) will warrant different remote test adaptations. Future studies should also give the tests used in this report to participants both in person and remotely, then compare scores to better determine the reliability of remote administration of the ABA-2 [22], QAB-Remote [23], and RIPA-2 [24]. Prior studies have reported no significant differences in aphasia assessment scores across many communication domains regardless of participants' severity level [19]. It would also be beneficial for future researchers to determine whether other available tests, such as the Boston Naming Test [38] or the Saint Louis University Mental Status Examination [39], can feasibly be administered remotely, and then to determine whether any adaptations or recommendations need to be considered. Ideally, future research will allow SLPs to reach a consensus on the test modifications needed to properly conduct remote assessments.
Table 1
Table 2
Table 3
Table 4
Table 5
REFERENCES
1. National Aphasia Association. Aphasia frequently asked questions [Internet]. 2016 [cited 2023]. Available from: https://www.aphasia.org/aphasia-faqs/.
2. Centers for Disease Control and Prevention. National center on birth defects and developmental disabilities. Division of Human Development and Disability. Disability and Health Data System (DHDS) [Internet], 2020. Available from: https://dhds.cdc.gov.
3. U.S. Census Bureau. 2022. [Internet]. [cited 2023 Apr 7]. Available from: https://www.census.gov/newsroom/press-releases/2022/urban-rural-populations.html.
4. World Bank. 2021. Rural population: (% of total population). [Internet]. [cited 2023 Mar 20]. Available from: https://data.worldbank.org/indicator/SP.RUR.TOTL.ZS.
5. World Health Organization. Division of analysis, research and assessment. Equity in health and health care: A WHO/SIDA initiative [Internet]. World Health Organization, 1996. [cited 2023]. Available from: https://apps.who.int/iris/handle/10665/63119.
6. Gallego G, Dew A, Lincoln M, Bundy A, Chedid RJ, Bulkeley K, et al. Access to therapy services for people with disability in rural Australia: a carers’ perspective. Health Soc Care Community. 2017;25(3):1000–1010. doi: 10.1111/hsc.12399.
7. Ora HP, Kirmess M, Brady MC, Sorli H, Becker F. Technical features, feasibility, and acceptability of augmented telerehabilitation in post-stroke aphasia-experiences from a randomized controlled trial. Front Neurol. 2020;11. doi: 10.3389/fneur.2020.00671.
8. Harkey LC, Jung SM, Newton ER, Patterson A. Patient satisfaction with telehealth in rural settings: a systematic review. Int J Telerehabil. 2020;12(2):53–64. doi: 10.5195/ijt.2020.6303.
9. Iyer M, Bhavsar GP, Bennett KJ, Probst JC. Disparities in home health service providers among medicare beneficiaries with stroke. Home Health Care Serv Q. 2016;35(1):25–38. doi: 10.1080/01621424.2016.1175991.
10. Schootman M, Fuortes L. Functional status following traumatic brain injuries: population-based rural-urban differences. Brain Inj. 1999;13(12):995–1004. doi: 10.1080/026990599121007.
11. Buchanan RJ, Stuifbergen A, Chakravorty B, Wang S, Zhu L, Kim M. Urban/rural differences in access and barriers to health care for people with multiple sclerosis. J Health Hum Serv Adm. 2006;29(3):360–375.
12. Miller K, Miller KL, Knocke K, Pink GH, Holmes GM, Kaufman BG. Access to outpatient services in rural communities changes after hospital closure. Health Serv Res. 2021;56(5):788–801. doi: 10.1111/1475-6773.13694.
13. Morton ME, Gibson-Young L, Sandage MJ. Framing disparities in access to medical speech-language pathology care in rural Alabama. Am J Speech Lang Pathol. 2022;31(6):2847–2860. doi: 10.1044/2022_AJSLP-22-00025.
14. Penaloza C, Scimeca M, Gaona A, Carpenter E, Mukadam N, Gray T, et al. Telerehabilitation for word retrieval deficits in bilinguals with aphasia: effectiveness and reliability as compared to in-person language therapy. Front Neurol. 2021;12. doi: 10.3389/fneur.2021.589330.
15. Brems C, Johnson ME, Warner TD, Roberts LW. Barriers to healthcare as reported by rural and urban interprofessional providers. J Interprof Care. 2006;20(2):105–118. doi: 10.1080/13561820600622208.
16. Covert LT, Slevin JT, Hatterman J. The effect of telerehabilitation on missed appointment rates. Int J Telerehabil. 2018;10(2):65–72. doi: 10.5195/ijt.2018.6258.
17. Rogalski EJ, Saxon M, McKenna H, Wieneke C, Rademaker A, Corden ME, et al. Communication bridge: a pilot feasibility study of internet-based speech-language therapy for individuals with progressive aphasia. Alzheimers Dement. 2016;2(4):213–221. doi: 10.1016/j.trci.2016.08.005.
18. Lanyon LE, Rose ML, Worrall L. The efficacy of outpatient and community based aphasia group interventions: a systematic review. Int J Speech Lang Pathol. 2013;15(4):359–374. doi: 10.3109/17549507.2012.752865.
19. Hall N, Boisvert M, Steele R. Telepractice in the assessment and treatment of individuals with aphasia: a systematic review. Int J Telerehabil. 2013;5(1):27–38. doi: 10.5195/ijt.2013.6119.
20. Togher L, Douglas J, Turkstra LS, Welch-West P, Janzen S, Harnett A, et al. Guidelines for cognitive rehabilitation following traumatic brain injury, part IV: cognitive-communication and social cognition disorders. J Head Trauma Rehabil. 2023;38(1):65–82. doi: 10.1097/HTR.0000000000000835.
21. Dunkley C, Pattie L, Wilson L, McAllister L. A comparison of rural speech-language pathologists’ and residents’ access to and attitudes towards the use of technology for speech-language pathology service delivery. Int J Speech Lang Pathol. 2010;12(4):333–343. doi: 10.3109/17549500903456607.
22. Dabul BL. Apraxia battery for adults. 2nd ed. PRO-ED, 2000.
23. Wilson SM, Eriksson DK, Schneck SM, Lucanie JM. A quick aphasia battery for efficient, reliable, and multidimensional assessment of language function. PLoS One. 2018;13:1–29. doi: 10.1371/journal.pone.0192773.
24. Ross-Swain D. Ross information processing assessment. 2nd ed. PRO-ED, 1996.
25. Kertesz A. Western aphasia battery-revised. PRO-ED, 2006.
26. Dekhtyar M, Braun EJ, Billot A, Foo L, Kiran S. Teleconference administration of the western aphasia battery-revised: feasibility and validity. Am J Speech Lang Pathol. 2020;29:673–687. Available from: https://pubs.asha.org/doi/10.1044/2019_AJSLP-19-00023.
27. Hill AJ, Theodoros D, Russell T, Ward E. Using telerehabilitation to assess apraxia of speech in adults. Int J Lang Commun Disord. 2009;44:731–747. doi: 10.1080/13682820802350537.
28. Edwards-Gaither L, Harris O, Perry V. Viewpoint telepractice 2025: exploring telepractice service delivery during COVID-19 and beyond. Perspect ASHA Spec Interest Groups. 2023;8:412–417. doi: 10.1044/2022_PERSP-22-00095.
29. Kim S, Roman AM, Moore A. Patient and caregiver satisfaction regarding telepractice versus in-person services at a university speech, language, and hearing clinic. Clin Arch Commun Disord. 2022;7:83–93. doi: 10.21849/cacd.2021.00598.
30. Mallipeddi NV, Mehrotra A, Van Stan JH. Telepractice in the treatment of speech and voice disorders: what could the future look like? Perspect ASHA Spec Interest Groups. 2023;8:418–423. doi: 10.1044/2022_PERSP-22-00098.
31. The University of Queensland Australia. QAB [Internet]. 2023 [cited 2023]. Available from: https://aphasialab.org/qab/.
32. Fonseca J, Ferreira J, Pavão Martins I. Cognitive performance in aphasia due to stroke: a systematic review. Int J Disabil Hum Dev. 2017;16:127–139. Available from: https://doi.org/10.1515/ijdhd-2016-0011.
33. Friehe M, Schultz J. Performance of non-brain-injured adults on the ross information processing assessment-2. J Med Speech Lang Pathol. 2003;11:147–157.
34. Pichora-Fuller MK, Schneider BA, Daneman M. How young and old adults listen to and remember speech in noise. J Acoust Soc Am. 1995;97:593–608.
35. Narasimha S, Chalil Madathil K, Agnisarman S, Rogers H, Welch B, Ashok A, et al. Designing telemedicine for geriatric patients: a review of the usability studies. Telemed e-Health. 2017;23:459–472. doi: 10.1089/tmj.2016.0178.
36. Hazamy AA, Obermeyer J. Evaluating informative content and global coherence in fluent and non-fluent aphasia. Int J Lang Commun Disord. 2020;55:110–120. doi: 10.1111/1460-6984.12507.
37. Frith M, Togher L, Ferguson A, Levick W, Docking K. Assessment practices of speech-language pathologists for cognitive communication disorders following traumatic brain injury in adults: an international survey. Brain Inj. 2014;28(13–14):1657–1666. doi: 10.3109/02699052.2014.947619. PMID: 25158134.
38. Kaplan E, Goodglass H, Weintraub S. Boston naming test. Lea & Febiger, 1998.