Narrowing Responses to Questions by Using Facilitator Prompts? Review of Sheehan and Matuozzi (1996)
One of our readers (thank you!) suggested I review a pro-FC 1996 study by Sheehan and Matuozzi titled “Investigation of the validity of facilitated communication through the disclosure of unknown information.” The article is already listed in the Critiques of Pro-FC Articles section of our website, but I have reread the article and reviewed it just as I would any other study.
I’m sure I read this study many years ago, maybe even before I had a good understanding of the differences between anecdotes, testimonials, evidence-based testing, and the like, but reading the article now, I am struck by how poorly controlled this study is. It is probably one of the worst I have read to date. Although the researchers seemed to understand the need to control for information during the testing (e.g., to rule in or rule out facilitator control during letter selection), they made some choices in implementing the test protocols that, in my opinion, seriously jeopardized the integrity of their study. I wonder how this study got past peer review.
During the first (presentation) stage of the testing, each participant was shown a video, read a passage, or shown a picture by an “original” facilitator outside the visual or auditory range of a second or “naïve” facilitator. I’ll explain the role of the “naïve” facilitator in a moment, but the role of the “original” facilitator was to present, then discuss, the stimuli (video, picture, or passage) with the participant using FC. The researchers posited that giving the participants the option of choosing among different activities (viewing a short video, being read a passage, or looking at pictures) would keep them more engaged with the study. Giving them options would also add a layer of randomness to the testing: the second or “naïve” facilitator would not know which of the activities (video, picture, passage) the participant had engaged with during any given FC session.
During the next (questioning) stage, the second or “naïve” facilitator would support the participant using FC to spell out the answers to questions about the video or picture seen, or the passage listened to, during the presentation stage. The “naïve” facilitator’s role, then, was to support the participants in answering questions using FC, but without the benefit of knowing which test stimulus (video, picture, or passage) was being investigated.
In theory, this sounds like a reasonable strategy for testing FC authorship. Using an “original” facilitator (one who had access to test stimuli) and a “naïve” facilitator (one with no access to test stimuli) in the two separate stages of the study seems to be a valid attempt by the researchers to minimize facilitator influence and increase the chances that the FC-generated written output was based on information participants had been exposed to, but the “naïve” facilitator had not.
Of course, using two separate facilitators complicates authorship testing. The strategy I just described would only be valid in testing the independent authorship between participants and the “naïve” facilitators, since only the “naïve” facilitators were controlled during the testing and, presumably, had no knowledge of the test stimuli. Unless the “original” facilitators were also tested under similarly blinded conditions and their influence and control over letter selection was ruled out (something proponents have, thus far, failed to do), any FC-generated messages produced during the testing sessions using the “original” facilitators could not be considered representative of the independent thoughts of the participants.
Returning to the Sheehan and Matuozzi study, their strategy to control for facilitator behavior and to prove FC authorship could only be successful if 1) it could be demonstrated that the “naïve” facilitators themselves weren’t controlling letter selection through visual, auditory, or physical cues while “supporting” the participants using FC, and 2) the “naïve” facilitators stayed naïve throughout the testing.
In other words, the presence of a “naïve” facilitator does not guarantee that FC-generated messages represent the thoughts of the individuals being subjected to the technique. “Blinding” facilitators from test stimuli, however, would give researchers the best opportunity to demonstrate that the participants both understood the materials presented to them and had the written language skills to spell out answers when supported by a facilitator who knew nothing about the content under investigation.
As I mentioned earlier, despite the seemingly reasonable (basic) structure of the study, Sheehan and Matuozzi made some serious errors in implementing their test protocols that, unfortunately, undermined the integrity of their study.
First, the authors reported that there were not enough facilitators (ideally, they needed six) to provide one “original” facilitator and one “naïve” facilitator for each of the three participants. Instead, the facilitators took turns serving as the “original” facilitator for one or more of the participants and as the “naïve” facilitator for one or more of the others. While acting as the “original” facilitator in the presentation stage of the study, each facilitator would have been exposed to the variety of activities (videos, passages, images) available to participants. These same facilitators, now acting as “naïve” facilitators, would then carry that knowledge into the questioning stage of the test, increasing the chances that they could guess the topic being discussed and (perhaps inadvertently) influence participant responses.
I shouldn’t have to point this out, but once facilitators are exposed to test stimuli, they can no longer be considered “naïve.” Sheehan and Matuozzi mentioned time constraints and a “paucity” of funds in their reporting of the study. Perhaps they should have considered these constraints before designing their study and moving forward with it. Blinding three of the facilitators (one for each participant), for example, and employing one person to present the test stimuli to the participants might have solved this problem.
Second, even though the researchers took pains to place the “naïve” facilitators out of visual and auditory range during the presentation stage of the study, the “original” facilitator remained present in the room when the “naïve” facilitator was brought in to support participants in answering questions. Even if the “naïve” facilitator had no knowledge of the test stimuli, the “original” facilitator was fully aware of the information discussed with participants during the presentation stage. This knowledge increased the chances that the “original” facilitator could provide visual or auditory cues (however inadvertently) to the “naïve” facilitator that could influence participant responses through a well-known phenomenon called the Clever Hans effect. The researchers knew about critics’ concerns regarding cueing but rejected the idea that subtle cues from the “original” facilitators could influence the actions of the “naïve” facilitators. One way to eliminate even the possibility of cueing between facilitators would have been to remove the “original” facilitator from visual and auditory range during the questioning stage of the testing and simply have the “naïve” facilitator ask the participant what s(he) talked about with the other facilitator.
But allowing the “original” facilitator to remain in the testing room during questioning isn’t even the most dubious choice the researchers made in designing their so-called “controlled” test. During the questioning stage of the test, both the “original” facilitator and the “naïve” facilitator were allowed to interact with participants to “clarify” the facilitated answers. Both facilitators were allowed to ask yes/no questions to elicit responses, manually assist the individual (e.g., by covering up certain parts of the letter board to “disallow” repeated incorrect responses), give participants multiple-choice answers to narrow down responses, and the like.
This interference by the facilitators seems strange to me since the stimulus material (pictures, books, videos) selected for the study was supposedly age-appropriate and, as was reported, “at a level at which the individual had demonstrated an understanding through facilitated communication regardless of behavioral difficulty or seeming unresponsiveness.” In other words, the materials selected for the test contained information drawn from the participants’ everyday and academic lives that should have been familiar to them. Therefore, the questions asked of participants in the second stage of the study should not have been difficult for them to answer (if, that is, participants had the requisite skills to comprehend the materials presented to them and the literacy skills to spell out their answers).
I suppose an argument could be made for a truly “naïve” facilitator to ask questions and try to draw out answers from a reluctant participant, but the “original” facilitators in this case had knowledge of the information provided to the participants in the presentation stage of the testing. They could easily have influenced the facilitated responses by providing the “naïve” facilitator with clues to the test stimuli in the questions they (the “original” facilitators) asked of participants. Disallowing questions from the “original” facilitators, along with removing them from the testing room once the second stage of the testing began (as well as ensuring the “naïve” facilitators had no access to test stimuli), would have eliminated the problem of overtly or covertly leading participants to desired responses (provided there was no overlap between “original” and “naïve” facilitators).
Even with all the questioning and narrowing down of options for the participants, the so-called “disclosures of unknown information” were limited.
The first participant (Lester), along with being facilitated, appeared to have some (limited) independent verbal and written communication skills. With support from the facilitator, he produced 49 successful responses out of 289 communicative interactions (16.9%) with “much inquiry, feedback, encouragement, redirection, several instances of written close statements, two written choice formats, and three instances of manual assists.” In other words, lots of cajoling and interference by facilitators.
Of the three participants, Lester was the only one to respond independently at times (that is, without facilitator interference) by typing, pointing, and/or providing verbal responses. The other two participants, Renee and Paul, responded only via FC and, reportedly, required a significant number of inquiries, redirections, and clarifications (up to 61 interventions) to produce the so-called “unknown” information (success rates of 8.6% and 2.2%, respectively).
The authors of this study seemed particularly invested in getting their participants to produce…anything…via facilitated communication, regardless of how much the facilitators intervened in the process. They also seemed to think that the phenomenon of FC was so complicated and mysterious that testing for authorship was simply “too elusive” to investigate.
It isn’t, though.
Authorship testing is not particularly difficult and has been used successfully to document the flaws of FC (i.e., facilitator control over authorship) since the early 1990s in the U.S., and earlier in Australia and Denmark (see Recommended Reading below), but researchers have to carefully control for facilitator behaviors (see Controlled Studies and Systematic Reviews). That means facilitators can’t be part of choosing test stimuli or be allowed to use a Twenty Questions-style investigation to narrow down (i.e., guess at) answers that the participants presumably already know and can type independently.
I think this study is a fine example of why facilitators cannot regulate themselves. The authors stated in their report that they were aware of the reliably controlled tests conducted in the early-to-mid 1990s when they designed their study but, despite that, they failed to adhere to test protocols that would separate facilitator behaviors from those of the participants. It seems strange to me that they would take the time to design a test with “blind” controls by using “original” and “naïve” facilitators, but then proceed to undermine the blind controls by (overtly and covertly) exposing the “naïve” facilitators to test stimuli.
I don’t mean to imply that anything nefarious was going on when the researchers designed or implemented the test. It is quite likely that Sheehan and Matuozzi were sincere in their belief that FC “worked.” The first author (Sheehan), for example, was so invested in the technique that in the early 1990s she appeared as a pro-FC witness in one of the first false-allegations-of-abuse cases in the United States (see Department of Social Services ex rel. Jenny S. v. Mark S.). I just think, as this study shows, that FC proponents can become so emotionally and sometimes professionally invested in the technique that their belief in it, and their desire to make it “work” despite all evidence against it, overshadows rigorous scientific inquiry.
Recommended Reading:
Heinzen, T., Lilienfeld, S., & Nolan, S.A. (2015). The Horse That Won’t Go Away: Clever Hans, Facilitated Communication, and the Need for Clear Thinking. Macmillan Learning. ISBN 978-1464145742.
Jacobson, J.W., Mulick, J.A., & Schwartz, A.A. (1995, September). A history of facilitated communication: Science, pseudoscience, and antiscience. Science Working Group on Facilitated Communication. American Psychologist, 50(9), 750-765.
Mostert, M. (2001, June). Facilitated communication since 1995: A review of published studies. Journal of Autism and Developmental Disorders, 31(3), 287-313. DOI: 10.1023/A:1010795219886
Prior, M., & Cummins, R. (1992). Questions about facilitated communication and autism. Journal of Autism and Developmental Disorders, 22(2), 331-337.
Twachtman-Cullen, D. (1997). A passion to believe: Autism and the facilitated communication phenomenon (Essays in Developmental Science). Boulder, CO: Westview Press. ISBN 978-0813390987.
Von Tetzchner, S. (1996, June 1). Facilitated, automatic and false communication: Current issues in the use of facilitating techniques. European Journal of Special Needs Education, 11(2), 151-166. DOI: 10.1080/0885625960110201
Von Tetzchner, S. (1997, January 1). Historical issues in intervention research: Hidden knowledge and facilitating techniques in Denmark. International Journal of Language and Communication Disorders, 32(1), 1-18. DOI: 10.3109/13682829709021453
Select Blog Posts
Myths about myths, validity, and natural message-passing tests, Part 1
Myths about myths, validity, and natural message-passing tests, Part 2
Putting FC/S2C/RPM to the Test
Questions to ask facilitators and yourself while observing FC/S2C/RPM sessions