Experience Co-Writing a Blog Post with ChatGPT

Like many people, I’m trying to find ways that generative AI can make my life easier. Though I don’t think ChatGPT would be a good tool for writing a research paper, at least beyond the first and last paragraphs, I wondered if it would be a good tool for more public-facing writing, like blog posts. So I co-wrote an article summary about Maton’s Legitimation Code Theory and Semantic Waves using ChatGPT. I didn’t ask it to summarize the paper; instead, I asked it about the main concepts discussed in the paper.

The first prompt that I used with ChatGPT (free version, GPT-3.5) was:

“Explain legitimation code theory at a 12th-grade reading level. Include explanations and examples of semantic gravity and semantic density.”

This prompt gave me an almost entirely wrong answer, talking about how LCT was a way of understanding how different areas of knowledge are valued in society, that math and science have higher semantic gravity and density than arts and literature, and that this was why they were valued more in society. When I said that LCT doesn’t place value on knowledge, it said I was right and that it was about the status and legitimacy of knowledge, which is also wrong (and still sounds a lot like value to me). I started a new prompt with additional information.

“Explain the theory of Legitimation Code Theory created by Karl Maton at a 12th-grade reading level. Include definitions and examples of low semantic gravity, high semantic gravity, low semantic density, and high semantic density. Also explain how semantic waves help people to build new knowledge.”

This prompt gave me correct information about semantic gravity and density but still insisted that LCT describes the value of knowledge. At this point, I bummed off my husband’s subscription to ChatGPT to use GPT-4, which gave the mostly correct answer used in the post. The example it gave of semantic gravity was not a good analogy (SG+ = knowing how to change the tire on a specific make and model of a car, which is only superficially context-dependent; SG- = knowing how the engine of any car works), so I replaced it with the algorithm example. I did ask ChatGPT “what is an algorithm” to help form the example, though. Interestingly, it started with a low semantic gravity explanation (i.e., a set of instructions) before giving examples with high semantic gravity (i.e., a machine learning algorithm).
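If you want to see the contrast in the algorithm example spelled out, here is a minimal sketch of my own (not ChatGPT’s output and not from the paper, so the specific function and scenario are hypothetical): the dictionary-style definition of an algorithm sits at low semantic gravity, while an implementation tied to one concrete situation sits at high semantic gravity.

```python
# Low semantic gravity (SG-): the abstract definition, meaningful in any context.
# "An algorithm is a finite set of instructions that turns an input into an output."

# High semantic gravity (SG+): one concrete instance of that definition, whose
# meaning is anchored in a specific context (sorting a class's exam scores).
def sort_exam_scores(scores):
    """Insertion sort applied to a single concrete context: exam scores."""
    sorted_scores = list(scores)
    for i in range(1, len(sorted_scores)):
        current = sorted_scores[i]
        j = i - 1
        # Shift larger scores one position to the right until current fits.
        while j >= 0 and sorted_scores[j] > current:
            sorted_scores[j + 1] = sorted_scores[j]
            j -= 1
        sorted_scores[j + 1] = current
    return sorted_scores

print(sort_exam_scores([88, 72, 95, 60]))  # [60, 72, 88, 95]
```

Moving between the comment at the top and the concrete function below it is the kind of unpacking and repacking that the semantic wave describes.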

GPT-3.5 initially couldn’t provide a definition of semantic waves, and when I explained the concept, it mostly just parroted back my explanation. GPT-4 provided an inexact explanation of semantic waves: “They refer to the way that knowledge is built up over time, starting with simple, concrete knowledge and gradually becoming more abstract and complex.” While this is partially true, it’s not how Maton would describe it, primarily because it implies that semantic waves are unidirectional. You can also see that I had to correct which portion of the semantic wave was tied to high or low semantic gravity and density and replace repetitive adjectives about semantic gravity with adjectives about semantic density.

Writing Process: C+

I kept the topic sentences from GPT-4 because they were easy to understand and mostly correct. The main changes I made were adding references to similar concepts, like Cognitive Load Theory, providing more detail from the paper, and making better examples. Overall, I found the process much like trying to use previously written text in a new paper (e.g., text from a grant proposal in a journal article). I typically prefer to write new text from scratch rather than re-purpose text that was written for a different audience or at a different level — a lesson learned the hard way over years of writing. I find it much easier to write all new text than try to Frankenstein together old and new text in a cohesive way. However, I used it because I thought it might help translate concepts with high semantic gravity or density into more easily understood language, and it did that well. I’ll continue to play with it for more public-facing writing, like blog posts, and maybe to help prepare presentations.

Learning Process: A-

While I didn’t enjoy co-writing with ChatGPT very much, it did create a good evaluation-based learning activity. Instead of writing something based entirely on my interpretation, I instead went line by line through GPT’s response and decided whether it was false, misleading, or insufficient. Correcting its writing to avoid all of these pitfalls helped me develop my own knowledge in a teaching-is-a-good-way-to-learn sense. For example, I thought deeply about whether I could come up with a better example of semantic density and decided that the theory of relativity compared to the distance formula is a really good example. I only give it an A- because the results were ultimately up to my discretion, and GPT isn’t a tutor or instructor that can provide trustworthy feedback, at least not about academic theories.
