24 October 2023
In response to ChatGPT’s recent influence on higher education, the Modern Language Association (MLA) and the Conference on College Composition & Communication (CCCC) joined together as the MLA-CCCC Joint Task Force on Writing and AI to provide guidance for their respective members on best practices for integrating generative artificial intelligence (AI) into the classroom. The first result of their collaboration appeared in July 2023: “MLA-CCCC Joint Task Force on Writing and AI Working Paper: Overview of the Issues, Statement of Principles, and Recommendations.” I appreciate their working paper for its clear introduction to generative AI and look forward to the future working papers they will produce.
In this review, I will summarize the sections and then conclude with a few tentative thoughts.
“Introduction”
I concur with the statement in the “Introduction” that “writing is an important mode of learning that facilitates the analysis and synthesis of information, the retention of knowledge, cognitive development, social connection, and participation in public life” (MLA-CCCC 4). The joint working group came together to write their report out of concern that these goals of teaching “could be under threat” from the widespread availability of generative AI (4). As an instructor of co-requisite English classes, Technical & Business Writing courses, and Literature classes, I find that this assertion reflects my values as an educator.
“History, Nomenclature, and Key Concepts”
I find their “History, Nomenclature, and Key Concepts” section particularly helpful. The authors emphasize that generative AI should not be understood as “the human-like, seemingly sentient AI that is still the stuff of science fiction” (MLA-CCCC 5). Rather, they note that generative AI “refers to the computer systems that can produce, or generate, various forms of traditionally human expression, in the form of digital content, including language, images, video, and music” (5).
They note that the generative AI affecting much of higher education should be understood as “large language models” (LLMs) (4). These LLMs function “by using statistics and probability to predict what the next character … is likely to be in an ongoing sequence, thereby ‘spelling’ words, phrases, and entire sentences and paragraphs” (6). The LLMs work because they are programmed with “vast bodies of preexisting content …, which, to some extent, predetermine their content” (6).
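To make that description concrete, here is a minimal, purely illustrative Python sketch of next-character prediction by statistical correlation. It is my own toy example, not the Task Force’s method or how any actual LLM is built: real systems use neural networks trained on enormous bodies of text, but the basic idea of continuing a sequence with whatever the statistics make likely is the same.

```python
from collections import Counter, defaultdict
import random

# Toy "preexisting content" (a stand-in for the vast corpora the Task Force describes).
corpus = "writing is an important mode of learning that facilitates the analysis and synthesis of information"

# Tally, for each character, how often each possible next character follows it.
transitions = defaultdict(Counter)
for current_char, next_char in zip(corpus, corpus[1:]):
    transitions[current_char][next_char] += 1

def generate(seed, length=60):
    """Extend the seed by repeatedly sampling the next character in proportion
    to how often it followed the current character in the corpus."""
    text = seed
    for _ in range(length):
        counts = transitions.get(text[-1])
        if not counts:  # no observed continuation for this character
            break
        chars, weights = zip(*counts.items())
        text += random.choices(chars, weights=weights)[0]
    return text

print(generate("wri"))  # produces plausible-looking but meaningless character strings
```

Even this crude model “spells” word-like sequences without any understanding of them, which is the Task Force’s point about statistical correlation standing in for cognition.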
The production of text where there previously was none gives the illusion that the LLM is creating content the way humans do. However, the Task Force notes that LLMs “produc[e] … word sequences that look like intentional human text through a process of statistical correlation” (6) based on the content that was previously fed into the program. The authors emphasize that although the content produced by the LLMs “mimic[s] the writing of sentient humans,” they “do not … ‘think’ in the way that … takes place in human cognition” (6). Their purpose here seems to be to defuse concerns about a 2001: A Space Odyssey-style HAL 9000 rebellion against humans!
“Risks to Language, Literature, and Writing Instruction and Scholarship”
The working paper’s next section, “Risks to Language, Literature, and Writing Instruction and Scholarship,” is of particular interest to me. The Task Force is concerned that the ease of producing summaries or paraphrases may impede “critical writing instruction or faculty assessment approaches” (MLA-CCCC 6). They are also concerned with LLMs making up sources or producing content without verifiable sources (6). LLMs also obscure the biases of the content fed into them (6).
I was impressed by the comprehensive list of 17 bullet points outlining the risks to students, teachers, programs, and the profession (7-8). The authors note that students may be adversely affected by LLMs because the generative AI takes the place of “writing, reading, and thinking” (7). They worry that LLMs devalue “writing or language study” (7).
They also note that the addition of AI detection in programs like Turnitin can “alienate” students from writing and learning because of increased technological “surveillance” (7). The Task Force worries that teachers’ focus on LLMs may be at the expense of other aspects of their jobs and at personal/professional cost to them (7).
The “Risks to the [P]rograms and the [P]rofession” section details concerns that build on challenges raised earlier in the paper:
- the devaluation of writing
- the lack of resources (time and money) for responding to changes wrought by LLMs
- the challenge to academic integrity by LLMs that fail to cite or make up sources
- the potential for inequitable access to technology
- the variety of responses to generative AI across fields
- the possibility that contingent faculty will be left behind in training
- the potential for administrators to exploit faculty workloads. (8)
“Benefits to Language, Literature, and Writing Instruction and Scholarship”
In addition to their list of concerns, the Task Force has an optimistic list of benefits that can be derived from LLMs. Most optimistically, they see the potential to “democratize writing, allowing almost anyone, regardless of educational background, socioeconomic advantages, and specialized skills, to participate in a wide range of discourse communities” (MLA-CCCC 8). They also see the potential for generative AI to speed up literary analysis in scholarship because the “LLMs can detect layouts, summarize text, extract metadata labels from unstructured text, and group similar text together to enable search” (8).
The authors provide another bulleted list of benefits for “language instruction,” “literary studies,” and “writing instruction” (8-10). Below, I highlight some of the more interesting benefits listed.
Benefits to language instruction include the “creat[ion of] translations that include explanations and wording options” (8). They also suggest that “students can develop expertise … while using generative AI … to produce a rough draft of a translation” (9).
The authors suggest that generative AI can be beneficial to literary studies because the prompts can “respond to specific literary passages as an aid to class discussions” (9). The LLMs “can [be] … use[d] as instruments of creative wordplay” (9). Generative AI can produce text in the style of authors (9). The Task Force states that “basic interpretations of literary texts” can be produced (9). The LLMs can also provide recommendations for other works similar to those under study in a course (9).
The Task Force lists potential benefits for writing instruction, such as “stimulat[ing] thought and develop[ing] drafts that are still the student’s own work” (9). LLMs can help to “develop multimodal writing projects” (9). Teachers can use AI to “demonstrat[e] … key rhetorical concepts” and “provide models of written prose that … highlight differences in genre, tone, diction, literary style, and disciplinary focus” (9). Generative AI can also help students “from diverse and various linguistic and educational backgrounds” by granting them “access to the ‘language of power,’” that is, Standard English (10).
“Principles and Recommendations”
The MLA-CCCC Joint Task Force provides a dozen recommendations for consideration as LLMs increase in influence over higher education (10-11). They recommend that all full- and part-time faculty be supported and that writing continue to be valued (10-11). They encourage discussions of academic integrity that “support[s] students rather than punish[es] them” (10). They encourage critical AI literacy at all levels in higher education (10-11).
Final Thoughts
As I grapple with the influence that LLMs like ChatGPT have brought to higher education, I continue to return to a key principle of writing instruction: the value of taking writing classes is that they focus on process over product. We teachers of writing focus on the process of writing so that students can produce better products. The entire iterative process of writing (preparation, planning, freewriting, researching, drafting, revising, editing, and proofreading) helps students become better thinkers and communicators.
My greatest concern with LLMs in my writing classes is that they pose a threat to the iterative process of writing. ChatGPT can produce a final product that bypasses the prewriting and revision that makes learning happen in a meaningful way. Tyler J. Carter reminds us that teaching process in writing helps “learners construct new forms of knowledge by integrating what they already know with the kinds of knowledge that they [are] learning in school” (404) and “that knowledge of writing comes out of the individual writer reflecting on their experiences in the world” (405). The writing process emphasizes active thinking and reflection from the first kernel of an idea through the final draft.
Any discussion of generative AI needs to acknowledge the effect that LLMs have on the process of learning. My challenge as an instructor of writing is to ensure that LLMs enhance the learning process and help students think about and understand their subjects more deeply than if they were to write without the aid of generative AI.
Works Cited
Carter, Tyler J. “Apples and Oranges: Toward a Comparative Rhetoric of Writing Instruction and Research in the United States.” College English, vol. 85, no. 5, May 2023, pp. 387-414.
MLA-CCCC Joint Task Force on Writing and AI. “MLA-CCCC Joint Task Force on Writing and AI Working Paper: Overview of the Issues, Statement of Principles, and Recommendations.” MLA-CCCC Joint Task Force on Writing and AI, July 2023, https://hcommons.org/app/uploads/sites/1003160/2023/07/MLA-CCCC-Joint-Task-Force-on-Writing-and-AI-Working-Paper-1.pdf.