Tuesday, April 25, 2023

Why (A.)I. Write

These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.

Claude E. Shannon, "The Mathematical Theory of Communication" - 1949

When I use a ChatBot to compose a text, I am using a tool to do some of the work required to pull ideas together into a coherent whole. I do this when I already know what that coherent whole should look like. I might do this for a marketing message, an insurance appeal, or a time-off request.

I use it for the kinds of texts that have a clear purpose, one I know before I start writing. These texts also have a pre-established structure (a genre) that the chatbot can imitate effectively enough for a human reader to recognize the "writing moves" the bot is making. 

It takes work to create that text, and if software can do some of the work for me, great. 

Because in such cases, the act of writing isn't going to change the way I think about what I'm trying to achieve with those routine texts.

But I'm not using a ChatBot to write this text. 

Why not?

Why am I doing the work required to pull ideas together into a coherent whole without a ChatBot?

It's because I do not yet know what that coherent whole should look like.  

That's why I write.

This is the writing I want students to learn about in our Writing Program. It is the writing required to solidify ideas, to critique an unfamiliar argument, or to inquire into new research areas. This is writing that helps an author organize ideas for themselves so those ideas can then be used later. 

I learned about teaching this kind of writing when I studied Writing Across the Curriculum, an area that promotes "writing to learn" in classrooms. The not-so-radical idea is that when information is new to students and those students want to use that information later, they need to take time to organize that information in their own minds. Writing is one of the most effective tools for such work.

I'm doing it now. I am organizing the information I have about A.I. ChatBots in my mind. I'm trying to identify contradictions in what I've learned. I'm seeking blind spots in my understanding. Every time I go back and re-read my draft, I'm making sure it jibes with the other stuff I want to say here, other information I've learned, and my core beliefs about writing and technology.

This is the mental work required to sort out how I will react professionally to a new disruptive technology.     

It's a ton of work to do in my head, though I am still using plenty of contemporary technology to compose this. I'm writing on a computer in a WYSIWYG blog editor running in my web browser, with a Grammarly plugin providing real-time feedback on my spelling, syntax, and grammar.

The tools are helping, but they are not doing the important work.

These tools don't even know what I'm trying to do. 

The screenshot of a "free Premium suggestion" from Grammarly demonstrates as much. The tool is asking if I "want to sound more positive" in one of the sentences from this post. It doesn't recognize that I am highlighting one of the things algorithmic text analysis cannot do. Attempting to sound more positive would muddy the message. The bot, however, has been programmed to help writers sound more positive no matter what.

That algorithmic misstep reminds me of something that comes up in a writing workshop I facilitate with Civil Engineering students here at Sac State. We talk about style in experimental reports, specifically the expected use of passive voice: "measurements were taken" as opposed to "we took measurements." And at every workshop, I ask if writing teachers have ever told the students to avoid the passive voice. A few hands go up every time. Because many teachers who work primarily in the Humanities have learned to avoid the passive voice no matter what.

And you know what? In a lot of settings, those writing teachers are correct. The writing formula they teach works for most students most of the time. It's only a problem when those writing teachers don't know what the engineers are trying to do.

And there it is. There's the idea I've been writing to find.

If our Writing Program is designed to teach students how to meet the writing expectations of already familiar situations, then ChatBots are an existential threat.

If, on the other hand, we are teaching students to write because it is an essential part of the process of creation, critique, and critical engagement, then we can treat ChatBots like new tools and find a place for them in our classrooms.


