Analysis of “Who’s Doing the Pointing: Investigating Facilitated Communication in a Classroom Setting with Students with Autism”
Today’s blog post is a continuation of a series analyzing controlled studies of Facilitated Communication (FC) using criteria from founder Rosemary Crossley’s book Facilitated Communication Training, plus two additional questions based on concerns mentioned by proponents regarding naturalistic settings and participant fatigue. You can read more about these questions in a previous blog post.
Because current-day facilitators refuse to participate in reliably controlled testing designed to rule in or rule out facilitator influence or control (e.g., cueing), it is necessary to look back at historical research to understand the documented problems with FC and the (often inadvertent) physical, verbal, and visual cues facilitators provide to their clients or loved ones that affect letter selection.
As with many studies of traditional touch-based FC, the authors of Who’s Doing the Pointing: Investigating Facilitated Communication in a Classroom Setting with Students with Autism found that their facilitator was (most likely) not cueing the clients intentionally and was unaware of the full extent to which she was doing so. I’ll note here that traditional touch-based FC does not rely as heavily on verbal cues found in Spelling to Communicate (S2C) and Rapid Prompting Method (RPM). Nor does it rely on hand signals (e.g., pointing to various parts of the board, closing and opening a hand). It would be interesting to learn how facilitators using S2C, RPM, and other forms of FC would perform under the blind conditions used in FC studies prior to 2014 when, to my knowledge, the last reliably controlled authorship study was published. (See blog post here).
For clarification, the FC technique used in this study involved a facilitator or assistant isolating the pointing finger of the participant(s) and providing backward pressure while the participant(s), purportedly, pushed toward the desired letter on a keyboard or letter board. The authors of the study did not specify where the facilitator held on to the participants, but, generally, this backward pressure is applied at the wrist, arm, shoulder, or shirt sleeve of individuals being subjected to FC. I’d suggest that, with the act of “providing backward pressure,” the facilitator increases the chances of influencing letter selection, whether the action is conscious or not. (See ideomotor response.)
By 1998, as the authors state in the article, a significant number of experimental studies “indicated that the validity and reliability of FC were ‘extremely low’” (p. 74). Several of these studies specifically attempted to assess FC with school-age students with autism and “found that the responses were accurate only when the facilitators had been exposed to the stimuli” (p. 74). Understanding the significance of these findings, the authors set up their own study to examine the question of authorship in their own students.
Perhaps it is because this study reinforced the concept of facilitator influence and control that it is often omitted from pro-FC reference lists.
We have provided a summary of this study in the Controlled Studies section of our website, but encourage you to read the full report if you have not already done so.
Was the partner (facilitator) trained and experienced with the facilitated communication method? Unknown.
The report did not mention specific FC training with the speech-language pathologist who facilitated the two clients in this study. The SLP was certified at the time by the American Speech-Language-Hearing Association (ASHA) and had approximately 16 years of experience working with “students with severe language disorders, autism, and so forth.” In addition, the person had 6 years of experience as a university instructor in speech-language pathology. (p. 74)
Did the aid user (individual being facilitated) previously communicate fluently with that partner (facilitator)? No.
The SLP/facilitator did not attempt FC with either student prior to the investigation. (p. 74)
Was the aid user (individual being facilitated) satisfied there was a genuine reason for the validation being sought and give consent to the procedure? Unknown.
The authors did not mention obtaining informed consent from the students or from their parents or guardians. The FCed activities and data collection took place during routine classroom instruction.
The authors noted “there was no formal testing procedures used in this investigation” and that the activities were “part of the students’ daily routine.” (p. 77)
Did the aid user have experience with the validation task required and demonstrate the skills required by the testing procedure? Yes.
During the week prior to data collection, the SLP worked with the two participants in this study on instructional activities (e.g., identifying pictures, objects, or words printed on index cards). (p. 75)
Were researchers responsive to proponent concerns that testing be conducted in as natural a setting as possible? Yes.
The authors included a discussion of proponent concerns that “the formality of the testing situation produces poor results.” (p. 74)
The researchers conducted their study during classroom activities and, in the week prior to data collection, acclimated participants to the SLP/facilitator. The activities were “functional” (e.g., in line with activities already being taught to the participants and in their Individual Education Plans or IEPs).
Starting the week prior to testing, the facilitator wore sunglasses when interacting with the students. The purpose of this was to let the students get used to seeing the facilitator with the glasses on. Later, during test conditions calling for the facilitator to be “blinded” from seeing the letter board, the facilitator inserted a cardboard cutout into the sunglasses to block her view. (p. 75)
“None of the students appeared to notice nor did any of them comment on the fact that the SLP wore sunglasses throughout her time in the classroom.” (p. 77)
Note: Test protocols also allowed for the facilitator to praise correct responses and/or provide prompts if the answer was incorrect. However, “second or prompted responses were not recorded for data purposes during this investigation.” (p. 75)
Were participants given opportunities to take breaks and/or end the session when tired, agitated, or simply done for the day? It appears so.
The authors noted that every effort was made to “provide physical and emotional support for the students during a typical instructional activity.” (p. 77) Even though they did not address the issue of taking breaks or ending the session directly, it is plausible that, because the information used in the data collection portion of the testing was part of the students’ daily programming, the participants were unaware that they were being tested. The authors noted: “The activities in the present investigation were part of the students’ daily routine and, therefore, should have been no more or less threatening to [the participants] than any other daily activity.” (p. 77)
Author Findings
The authors of this study found that “the students’ responses were influenced by the SLP/facilitator’s ability to see the picture or written stimuli.” (p. 76)
Conclusion
The authors mention Crossley’s explanation for failed tests, which she blamed on “controlled, unnatural situations.” In her book, Facilitated Communication Training, Crossley painted a bleak picture of evaluators aggressively questioning participants and ignoring their physical and psychological well-being. She wrote:
“People with severe communication impairments often have associated or secondary impairments that make them especially vulnerable in testing situations. The most obvious problem is lack of self-confidence together with lack of social experience—many people who can speak can be rendered mute by aggressive questioning. Anyone with spasticity could be rendered so tense as to preclude communication altogether. People with less well-known problems may have their ability to communicate deliberately sabotaged. Some people with neurological damage have hyperactive startle reflexes—they go rigid (and some may actually convulse) when there is a sudden noise, such as a click of a switch on a tape recorder. Others are visually disinhibited, that is, they cannot stop themselves from looking toward anything that moves within their field of vision. Their communication would be affected if the observers kept shuffling their papers while they were trying to type or point.” (p. 96, Crossley FCT)
No wonder proponents seem terrified of researchers and the prospect of participating in message passing tests where participants would, purportedly, be aggressively interrogated.
However, as this and other controlled studies indicate, the people conducting this controlled test were not strangers in lab coats, nor was the setting sterile or unfamiliar to the participants. They were classroom teachers, aides, and a speech-language pathologist familiar to the students who were (in my opinion, correctly) responding to critical reviews of FC (see Controlled Studies) with their own investigation, and providing their students with both physical and emotional support.
Looking at their credentials, as included at the end of the article, I find it difficult to imagine these people fit Crossley’s profile of callous researchers:
Rosemary G. Kerrin, M.S., was, at the time, a doctoral candidate in the Department of Special Education at the University of New Orleans with professional interests in “developing effective communication/language intervention strategies for students with autism and other developmental disabilities.” (p. 78)
Jane Y. Murdoch, PhD, was a professor of special education at the University of New Orleans with an interest in communication/language disorders and applied behavior analysis. (p. 78)
William R. Sharpton was a professor of special education at the University of New Orleans with an interest in inclusive education, services for transition-age youth, and systems change strategies. (p. 78)
Nichelle Jones was an educator and teacher of students with autism in the Jefferson Parish Public Schools. (p. 78)
Indeed, these educators seemed to do everything they could to make the testing situation as naturalistic as possible. They did not employ any formal testing procedures (as identified by Crossley) that could have interfered with or influenced the facilitated responses in the investigation. (p. 77) And, still, the results showed facilitator influence and control over letter selection. It seems, then, that the resistance to (and possibly the fear of failing) reliably controlled message passing tests originates with the facilitators themselves and not with the researchers or testing conditions.
I find it quite telling that these authors found it important to mention that:
1) If FC were used in the way Crossley described the procedure (e.g., with physical prompting that is gradually faded), then they would have no reason to question authorship. (p. 78) But reports of unexpected literacy skills and use of abstract symbols in individuals with profound communication difficulties being subjected to FC were, for them, a cause for concern; and
2) FC seemed to be based, at least in part, on “similar physical prompting/fading techniques employed in many applied behavior analysis programs.” But, as the authors note, when prompting and fading are employed effectively, the resulting student performance is likely to be less dramatic, and progress in mastering literacy skills much slower. (p. 78)
As we’ve seen, authorship tests can be conducted in “naturalistic settings” using information gleaned from students’ daily activities and curriculum. At this point, though, rather than conducting this type of study on their own, I think it’s best that educators seek the help of professionals familiar with the structure of controlled testing (to avoid the pitfalls of poorly designed studies) and follow ethical guidelines (recommended by Institutional Review Boards).
Having said that, kudos to the authors of this study, who responded to the red flags (e.g., unexpected literacy, lack of facilitator fading, prompt dependency) and sought to verify authorship in facilitated communication with their students.
The authors in this study described a “simple yet effective method of assessing authorship of facilitated responses.” Their choice to use glasses to “blind” the facilitator was a good one for a couple of reasons. First, glasses are commonplace and much less intrusive than, say, divided tables or other structures used in earlier authorship studies. Second, individuals with autism characteristically pay little to no attention to people’s eyes. It’s likely that one reason the individuals with autism didn’t notice or comment on the facilitator’s glasses is that they were not paying attention to that area of the facilitator’s face. Katherine has discussed the ramifications of poor eye contact and joint attention in previous blog posts. (See Is diminished Joint Attention not a problem for word learning in autism? and Is Eye Contact Really Overrated?)
The protocols used in this study, if implemented carefully, can provide a relatively quick and easy solution to authorship testing, especially since facilitator cueing is a well-documented problem in FC/S2C/RPM. No specialized equipment is necessary and data collection can take place during routine academic activities as long as the facilitator is sufficiently blinded from seeing the letter board.
It takes integrity and courage to question FC. Skeptics and critics of the technique are often accused of being against individuals with profound communication difficulties. In addition, facilitators are taught by workshop leaders not to test FC (in any of its forms), and they risk being ostracized by the FC community for expressing doubts about the technique(s). But the professionals in this study put their concerns for their students ahead of the reputation of a technique that, to date, still has no reliable evidence to back up claims that the messages produced are independent and free from facilitator control.
It may well be that facilitators are not fully aware of the extent to which they control letter selection (it’s difficult to multi-task and stay fully aware of one’s own behavior) but, as this and other authorship studies show, a lack of awareness on the part of facilitators doesn’t mean cueing is not present. Given this, I’d argue that facilitators have a professional and an ethical duty to test FC under reliably controlled conditions. Otherwise (based on the evidence to date) we need to presume that the facilitators—not those being subjected to FC—are the ones doing the pointing. (See Systematic Reviews).
Note: In current-day practice, I think “Who’s controlling letter selection?” is a more accurate question than “Who’s doing the pointing?” In 1998, facilitators were more likely to use “traditional” touch-based FC, where the facilitator holds on to the wrist, elbow, shoulder, back, etc. of the individual being subjected to FC. In 2024, facilitators also employ Spelling to Communicate (S2C) and Rapid Prompting Method (RPM), where facilitators primarily hold on to a letter board while the individual being subjected to FC extends a finger toward the board. In some cases, this makes it appear that the individual being subjected to FC is “independently” pointing to the letter board, even though the facilitator provides visual, physical, and auditory cues that, in all likelihood, are influencing and/or controlling letter selection. On this website, we use the term FC to describe all forms of facilitator-dependent techniques, regardless of whether the facilitator holds onto the person or to the letter board.