Question for zn/zooey


    #147472
    nittany ram
    Moderator

    When you give your students writing assignments, how do you know they are not using AI to do the work?

    My wife is a physician. She was playing around with an AI app and asked it to write a handout on a specific medical issue geared toward Spanish-speaking patients. She said what the app produced was perfect. She would write emails to colleagues, and before sending them she would ask the app to ‘fix’ them (i.e., correct grammatical errors, make the tone more or less formal, etc.), and in moments it was done. She eventually stopped using the app because she said the wording didn’t really sound like her after modification, but if you weren’t familiar with her style, you’d never know it was written by AI.

    I suppose if you were familiar with a student’s writing, you could tell if they started using AI, but what if you weren’t?  How would you ever know?   I know there are computer programs that detect plagiarism, but I don’t know how you would detect this, especially if the student is savvy enough to go through what the AI produced and fix anything that seemed ‘off’.

    #147475
    zn
    Moderator

    AI can’t write papers. With a paper, you phrase the question in such a way that you have to be familiar with the class issues to respond. Plus I deliberately come down hard on methods and approaches and styles that don’t fit what we’re doing. I am very specific about that. AI writing in this context is very easy to recognize, and if you can’t trace it and prove plagiarism, no matter: invariably AI writing is going to do things I openly and clearly and very specifically don’t allow.

    #147512
    Zooey
    Moderator

    Determining something is AI, and proving it, are different things. AI-generated text sounds perfunctory; it’s “soulless,” it has no “voice” to it. Human writing is like a fingerprint. Everybody is different. I can tell immediately if a student is suddenly using compound/complex sentences when, in the past, they used only simple sentences in the passive voice. (I recall an incident in our board history, now that I think about it, when a poster was accused of posting under two different identities, denied it, and then I undressed him with an analysis of how he used stylistic devices and punctuation. Garlic in the shadows? What was his name?)

    What I’m saying is…I can tell. I can’t always prove it to people who don’t understand writing well, and that’s a problem. The entire issue is one that academia is wrestling with right now. Some places already have policies. Some places are wondering if Composition needs to be taught any longer. Some are working to find a framework to deal with it, since it won’t go away.

    I don’t know what zn means by “invariably AI writing is going to do things I openly and clearly and very specifically don’t allow.” I try to write prompts that require synthesizing material that isn’t (likely to be) available to AI. If that’s what he means, then we are approaching the problem the same way.

    There is a program called Turnitin, which schools can subscribe to; it searches its databases for duplicate material and flags it. That takes care of a lot of plagiarism. It has an AI detection feature that was pretty good for a while, but it seems to me it has been dialed back a lot in the past 2-3 months, possibly out of concern for litigation. That is…it is showing AI likelihood at much lower rates generally than it used to.

    This term, for the final essay (and this isn’t new, but I haven’t done it in a few years), the students will be required to handwrite an essay in a timed-writing setting, without the use of laptops or phones.

    It’s an interesting and challenging issue. But I am developing a growing resentment at being forced to police students: students to whom I am only too happy to teach knowledge and skills that will benefit them for their entire lives, but who don’t care about that knowledge, or those skills, and only want a “grade” that will usher them into the land of plenty with less effort.

    #147540
    Billy_T
    Participant

    I think we’re not too far away from AI being virtually undetectable. I’d give it 1-3 years, especially with all the money and corporate power behind it, racing for the crown. In five years, it will be the norm, a part of the woodwork. It’s a true threat to creatives, especially, and as a writer and artist, I’m worried. Actors/writers/directors feared it enough to go on strike.

    Throw in the race for quantum computing, and we’re in for a true revolution/evolution that will bring some good, but, IMO, a lot of bad, primarily for the non-rich. Literally millions of jobs will be lost to AI. In short, this is looking more and more like a Gattaca/Metropolis scenario, but primarily virtual, rather than biological, though the latter will eventually break through as well.

    AI is in its infancy. Too much profit out there for it to remain in that stage for long. If they can recreate a Hemingway short story right now, or make a short talk with Nietzsche possible, it won’t be long before students can input their own essay history, with grammar, style, and idiosyncrasies logged and processed, and then spin out whatever they want. But, again, it’s likely to start out along class lines, and go from there. Money will talk long before everyone can walk, etc.


    #147548
    Zooey
    Moderator

    Man, we are getting so close to the good stuff!

    #147581
    Billy_T
    Participant

    Bumped into this article today on AI, focusing on schools:

    (Did a copy-and-paste from reader’s mode, using Notepad++. It looks better with regular formatting. Broke it into two parts:)


    AI is forcing teachers to confront an existential question
    Molly Roberts

    https://www.washingtonpost.com/opinions/2023/12/12/ai-chatgpt-universities-learning/

    “Remember what I told you last week? Forget it.”

    This is how Marc Watkins starts many a faculty meeting in the University of Mississippi’s department of writing and rhetoric. The self-fashioned AI guru has been tracking the capabilities of the large language models, such as ChatGPT, that are already transforming how his students write and read — in some cases, by doing both for them.

    There is no better place to see the promise and the peril of generative artificial intelligence playing out than in academia. And there’s no better place to see how academia is handling the explosion in ChatGPT and its ilk than at Ole Miss.

    In the spring, after students went back to campus eager to enlist robots in their essay-writing, Watkins and his colleagues created the Mississippi AI Institute (not to be confused with the Mississippi AI School, a Mississippi State University venture focused on the artificial insemination of cattle).

    The hope is that the institute’s work can eventually be used by campuses across the country. For now, a two-day program this past June at Ole Miss might be the only one of its kind to pay teachers a stipend to educate themselves on artificial intelligence: how students are probably using it today, how they could be using it better and what all of that means for their brains.

    The only way to describe what these tools have done to the teaching of writing is to borrow a phrase any professor would mark down as a cliché. They have changed everything.

    AI is forcing educators to rethink plagiarism guidelines, grading and even lesson plans. But above all, it is demanding that they decide what education is really about — that teachers ask, in short, “What are we here for, anyway?”

    ChatGPT has become to generative AI what Kleenex is to tissues. This most mentioned of tools, however, might be the least of teachers’ worries. Boutique services geared toward composing college essays, the very task Watkins and his colleagues are trying to teach, abound.

    Some of their names jangle with techno-jargon, while others strive for the poetic, or at least the academic: Wordtune, Elicit, Fermat.

    “Help me write,” read the words atop a Google doc equipped with its AI assistant tool, presumably in the voice of whoever is staring at a blank document waiting for words to come. Watkins finds this disturbing in its vagueness. Help me how?

    Other technologies are more explicit about what they’re providing. Wordtune offers the opportunity to select a “spice” to add to your paper.

    The “rewrite” option can polish a sloppy sentence; the “explain” option can elaborate on a vague one. There’s also “make a joke” (groan-inducing at best) and “statistical fact” (somewhat more useful, if you’re not worried about AI’s documented propensity to hallucinate). “Counterargument” can — well, you get the picture.

    Do you write ad copy? White papers? Plain old emails — or dissertations? Lex, another tool, wants to know. Answer that you write op-eds, and it informs you that “with that type of writing, it can be hard to maintain objectivity while presenting a poignant argument, amidst the pressure of constant deadlines.” (Tell me about it.)

    Or you can plug in what you’ve got so far and tell the tool to critique it. Dominic Tovar, an Ole Miss freshman pursuing an engineering degree, likes feeding text into the tool and having it tell him what needs fixing: This sentence is incoherent. This paragraph is too wordy. When things get really rough, he can always type “+++,” a command that prompts Lex to generate the next paragraph — but he thinks students should consider that degree of assistance a last resort.

    Other services aim narrower.

    Perplexity AI “unlocks the power of knowledge with information discovery and sharing.” This, it turns out, means “does research.” Type something into it, and it spits out a comprehensive answer, always sourced and sometimes bulleted. You might say this is just Google on steroids — but really, it is Google with a bibliography.

    Caleb Jackson, a 22-year-old junior at Ole Miss studying part-time, is a fan. This way, he doesn’t have to spend hours between night shifts and online classes trawling the internet for sources. Perplexity can find them, and he can get to writing that much sooner.

    Speaking of bibliographies, many students have found themselves filled with despair upon realizing they aren’t actually finished with a paper until they have compiled several pages of APA-style citations complete with annotations. No more! Now, a service called Sutori will handle the pesky copy-pasting and formatting for you.

    ChatGPT is sort of in a class of its own, because it can be almost anything its users want it to be as long as they possess one essential skill: prompt engineering. This means, basically, manipulating the machine not only into giving you an answer but also into giving you the kind of answer you’re looking for.

    “Write a five-paragraph essay on Virginia Woolf’s ‘To the Lighthouse.’” Too generic? Well, how about “Write a five-paragraph essay on the theme of loss in ‘To the Lighthouse’”? Too high-schoolish? “Add some bigger words, please.” The product might not be ready to turn in the moment it is born, fully formed, from ChatGPT’s head. But with enough tweaking — either by the student or by the machine at the student’s demand — chances are the output can muster at least a passing grade.
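
    (An aside, not part of the article: the prompt-tweaking loop described above can be sketched in a few lines of code. Below is a minimal illustration only, assuming the OpenAI Python client with an API key in the environment; the model name is a placeholder, and the prompts are the article’s own examples.)

    # Minimal sketch (illustrative, not from the article) of the iterative
    # prompt-refinement loop described above. Assumes the OpenAI Python
    # client ("pip install openai"); the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [{
        "role": "user",
        "content": "Write a five-paragraph essay on the theme of loss "
                   "in 'To the Lighthouse'.",
    }]
    draft = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    ).choices[0].message.content

    # The "tweaking" step: feed the draft back with a follow-up request.
    messages += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": "Add some bigger words, please."},
    ]
    revised = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    ).choices[0].message.content
    print(revised)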

    Larry Wilson, an Air Force veteran back in school at 43, says ChatGPT and image generators such as Dall-E even aid him in creative pursuits. He crafts comic strips and graphic novels. Sometimes, it’s “difficult getting things in your head out.” But with generative AI, he can explain his vision to a system, and it turns that vision into a tangible image or video. If he sketches out a character to the AI, it returns what he calls an “abundance” of actions, utterances and more that he can insert into the opus of the hour.

    Which of these uses are okay? Which aren’t? The harnessing of an AI tool to create an annotated bibliography probably doesn’t rankle even librarians the way relying on that same tool to draft a reflection on Virginia Woolf offends the professor of the modern novel. Why? Because that kind of contemplation goes closer to the heart of what education is really about.

    Here’s the bottom line: It’s likely impossible to catch kids using AI to cheat. The detection tools lauded at first as universities’ last bulwark against a horde of scribbling machines have fallen out of favor. They do a poor job identifying cheaters where they do exist — and yet somehow often seem to identify them where they don’t.

    See, most notoriously, a professor at Texas A&M University at Commerce who threatened to fail his entire class after using ChatGPT to detect whether it had written their essays. Turns out, it didn’t work.

    Or look at Vanderbilt University. The college, in announcing its disabling of one such tool, points out that detectors are more likely to flag material written by non-native English speakers. Its bulletin notes that other companies that pounced on the demand for detectors in the spring have given up. Anyway, none of them was ever able to explain how they could distinguish man from machine — perhaps because, when it came down to it, they couldn’t.

    At the Ole Miss summer institute, faculty members see for themselves. “My mother is a fish,” one professor plugs into a service called Turnitin. This is the famous five-word chapter of “As I Lay Dying” by William Faulkner, son of Oxford, Miss. — an ingenious shift into the consciousness of a young boy. The result? Ninety-three percent AI generated. (Probably because the sentence is suspiciously simple, but it goes to show that these detection tools don’t yet appreciate modernism.)

    Of course, if the machines can’t detect other machines, that doesn’t mean humans can’t try to. Unsurprisingly, there’s a bit of a “know it when you see it” phenomenon with AI-written work in classes taught by teachers who’ve seen hundreds if not thousands of papers by human students. The trouble for these teachers is figuring out how to react when they do believe they see it.

    Sarah Campbell, presenting at the summer institute, described a student essay that appeared, as she put it, “written by an alien.” Or written in the year 1950. Or perhaps written in 1950 by an alien. She responded by asking the student to coffee, where she told the student that she had obviously let them down: “You didn’t know how desperate I am to hear your voice.”

    This practically trademarkable Good Teaching Moment cuts to the core of the question colleges now face. They can’t really stop students from using AI in class. They might not be able to notice students have done so at all, and when they do think they’ve noticed, they’ll be acting only on suspicion. But maybe teachers can control the ways in which students use AI in class.

    Figuring out exactly which ways those ought to be requires educators to determine what they care about in essays — what they are desperate to hear. The purpose of these papers is for students to demonstrate what they’ve learned, from hard facts to compositional know-how, and for teachers to assess how their pupils are progressing. The answer to what teachers want to get from students in their written work depends on what they want to give to students.

    “AI is not meant to avoid opportunities to learn through structured assignments and activities.”


    #147582
    Billy_T
    Participant

    This line comes from the AI policy for Tom Brady’s Ole Miss education class. His students discussed the strengths and weaknesses of the tools (“strong in summarizing, editing and helping to brainstorm ideas”; “poor at creating long segments of text that are both topical and personal”), put those in the context of academic honesty and devised the rules themselves.

    That one line sums up the point: AI is not meant to avoid opportunities to learn.

    What’s most important to Ole Miss faculty members is that students use these tools with integrity. If the university doesn’t have a campuswide AI honor code, and so far it doesn’t, individual classes should. And no matter whether professors permit all applications of AI, as some teachers have tried, or only the narrowest, students should have to disclose just how much help they had from robots.

    The next concern is that students should use AI in a manner that improves not only their writing but also their thinking — in short, in a manner that enhances learning rather than bypasses the need to learn at all.

    This simple principle makes for complicated practice. Certainly, no one is going to learn anything by letting AI write an essay in its entirety. What about letting AI brainstorm an idea, on the other hand, or write an outline, or gin up a counterargument? Lyndsey Cook, a senior at Ole Miss planning a career in nursing, finds the brainstorming especially helpful: She’ll ask ChatGPT or another tool to identify the themes in a piece of literature, and then she’ll go back and look for them herself.

    These shortcuts, on the one hand, might interfere with students’ learning to brainstorm, outline or see the other side of things on their own. But — here comes a human-generated counterargument — they might also aid students in surmounting obstacles in their composition that otherwise would have stopped them short. That’s particularly true of kids whose high schools didn’t send them to college already equipped with these capabilities.

    Allow AI to boost you over these early hurdles, and suddenly the opportunity for deeper learning — the opportunity to really write — will open up. That’s how Caleb Jackson, the part-time student for whom Perplexity has been such a boon, sees it: His professor, he says, wanted students to “get away from the high school paper and go further, to write something larger, like a thesis.”

    Perplexity, Lex and other AI tools showed him what he was doing wrong, so that he could do it right next time. And the tools themselves told him he was improving. One system gave critical feedback on his first paper; on the second, Jackson said, “The AI literally said, ‘That was a great paper to read.’”

    Maybe. Or maybe, as one young Ole Miss faculty member put it to me, this risks “losing the value of the struggle.” That, she says, is what she is scared will go away.

    All this invites the most important question there is: What is learning for?

    The answers are myriad. (ChatGPT, asked, counted exactly 11.) But they break down something like this: Learning, in college, can be instrumental. According to this view, the aim of teaching is to prepare students to live in the real world, so all that really matters is whether they have the chops to field jobs that feed themselves and their families. Perhaps knowing how to use AI to do any given task for you, then, is one of the most valuable skills out there — the same way it pays to be quick with a calculator.

    If you accept this line of argument, however, there are still drawbacks to robotic crutches. Some level of critical thinking is necessary to function as an adult, and if AI stymies its development, even the instrumental aim of education is thwarted. The same goes for that “value of the struggle.” The real world is full of adversity, much of which the largest language model can’t tell you how to overcome.

    But more compelling is the idea, probably shared by most college professors, that learning isn’t only instrumental after all — that it has intrinsic value and that it is the end rather than merely a means to one. The more steps along the way we skip, the shorter the journey becomes, and the less we will take in as we travel.

    This glummest of outlooks suggests AI will stunt personal growth even if it doesn’t harm professional prospects. While that doesn’t mean it’s wise to prohibit every little application of the technology in class, it probably does mean discouraging those most closely related to critical thinking.

    One approach is to alter standards for grading so that the things the machines are worst at are also the things that earn the best marks: originality, say, or depth of feeling, or so-called metacognition — the process of thinking about one’s own thinking or one’s own learning.

    Hopefully, these things are also the most valuable because they are what make us human.

    Stephen Monroe, chair of the Ole Miss writing and rhetoric department, has a theory. It involves player pianos, those mechanical instruments that send musical notes floating through fancy hotel lobbies without a musician.

    The player piano plays perfectly — yet the result is, as he puts it, “hollow and gimmicky.” You’d hardly buy a concert hall ticket to watch one of these machines perform even the most gorgeous or most technically demanding of sonatas. But you’d pay up, don a gown and sit, rapt, “to hear a human being play that very same sonata on that very same piano.”

    The beautiful might seem less beautiful when we know it comes from lines of code or vast arrays of transistors rather than from flesh, blood, heart and soul. Every triumph might seem that much less triumphant.

    If you ask the Ole Miss educators, their students know this. If you ask the students, some of them, at least, know it, too.

    Caleb Jackson only wants AI to help him write his papers — not to write them for him. “If ChatGPT will get you an A, and you yourself might get a C, it’s like, ‘Well, I earned that C.’” He pauses. “That might sound crazy.”

    Dominic Tovar agrees. Let AI take charge of everything, and “they’re not so much tools at that point. They’re just replacing you.”

    Lyndsey Cook, too, believes that even if these systems could reliably find the answers to the most vexing research problems, “it would take away from research itself” — because scientific inquiry is valuable for its own sake. “To have AI say, ‘Hey, this is the answer …’” she trails off, sounding dispirited.

    The kids are even more reluctant to cede the most personal aspects of their writing to AI, even when allowed. Guy Krueger, who teaches Writing 101, put it simply to his class: If you’ve gone on a date, would you ask ChatGPT to describe the date for you? The response was a resounding no. (Well, one kid did say yes.)

    This lingering fondness for humanity among humans is reassuring. Whether it will fade over time, however, is far from certain.

    Claire Mischker, a lecturer of composition and director of the Ole Miss graduate writing center, asked her students at the end of last semester to turn in short reflections on their experience in her class. She received submissions that she was near-certain were produced by ChatGPT — “that,” she says as sarcastically as she does mournfully, “felt really good.”

    The central theme of the course was empathy.


    #147638
    wv
    Participant

    interesting thread


    w


    v

    #148881
    Billy_T
    Participant

    Took the quiz, and didn’t do so well. It was difficult to know whether the images were real or AI-generated. I looked chiefly for impossible symmetries, but that didn’t seem to help all that much.

    https://www.nytimes.com/interactive/2024/01/19/technology/artificial-intelligence-image-generators-faces-quiz.html

    #148882
    zn
    Moderator

    I don’t know what zn means by “invariably AI writing is going to do things I openly and clearly and very specifically don’t allow.”

    You basically summed it up. You cannot prove a paper is written by AI but you can tell it is. And virtually everything AI writing does that is bad, and doesn’t work as decent writing, is something you can grade down for being poor writing.

    #148919
    Zooey
    Moderator

    I don’t know what zn means by “invariably AI writing is going to do things I openly and clearly and very specifically don’t allow.”

    You basically summed it up. You cannot prove a paper is written by AI but you can tell it is. And virtually everything AI writing does that is bad, and doesn’t work as decent writing, is something you can grade down for being poor writing.

    At the beginning of each term, I hand out a rubric I developed that I use to assess all writing. Then I pass out a student essay for them to grade, using the rubric. Then I poll the class to see how everyone scored it, and we discuss it. I do that with three papers, all pretty bad; the best one is a C-. The students often ask to see a good paper, so for the hell of it, I fed the prompt into ChatGPT and passed out the results. I was very curious what they would make of it, because the fluency and conventions were very strong: well-written and mistake-free. But it was full of lofty generalities, and it didn’t directly answer the prompt. One of my classes spotted right away that it had no ideas in it, but I had to point it out to my other class, and explain how the paper didn’t say anything, even if it said nothing pretty well.

    #148927
    zn
    Moderator

    One of my classes spotted right away that it had no ideas in it, but I had to point it out to my other class, and explain how the paper didn’t say anything, even if it said nothing pretty well.

    That’s a good example. AI is Hallmark-level empty. It has no “eyes” for seeing key details and reasoning about them. It is also prone to vague abstractions.

    I like what you said about your assignments there, it’s interesting.

    You know what might be fun in this thread, to add to the discussion? Ask AI a question about the Rams that all of us know is nuanced and important, and then post what AI says. A question like: how did Kyren Williams help the Rams improve their game in the second half of the season? People in the thread would see from such a post that no actual person would ever write that way.

    #148928
    Zooey
    Moderator

    You know what might be fun in this thread, to add to the discussion? Ask AI a question about the Rams that all of us know is nuanced and important, and then post what AI says. A question like: how did Kyren Williams help the Rams improve their game in the second half of the season? People in the thread would see from such a post that no actual person would ever write that way.

    I’m on board. Start a thread, and I will contribute to it after I think about this for a few minutes.

    #148929
    zn
    Moderator

    Just do it here! It fits the “what does AI sound like as a writer” discussion.

    #148947
    Zooey
    Moderator

    Well, this isn’t a good example, but here’s what I got on my first attempt:

    You:
    Explain how the Los Angeles Rams changed their offensive line philosophy this season as opposed to 2022, and evaluate how well the change in their scheme worked.

    ChatGPT:

    I don’t have real-time information on specific changes made by the Los Angeles Rams to their offensive line philosophy for the current season or 2022. My knowledge is up to date only until January 2022. For the latest updates on the Rams’ offensive line changes and their performance in the current season, I recommend checking recent sports news, team announcements, or official statements from the Los Angeles Rams organization.

    If you have any other general questions or if there’s a specific aspect of the Rams’ history or performance that I can provide information on, feel free to ask!

    #148948
    zn
    Moderator

    Try an historical one. Ask about the Faulk trade or something like that.

    #148977
    Zooey
    Moderator

    Try an historical one. Ask about the Faulk trade or something like that.

    Okay, I will fart around with this, but I don’t have a timeline for this project. When I read the line, “My knowledge is up to date only until January 2022,” I smelled bullshit. I’ll go back and ask, but I don’t think it could say anything intelligent about the OL scheme prior to Jan 2022, either. Sometimes AI just plain old makes shit up.

    In my first encounter, I fed it a prompt that I thought it simply couldn’t do, and it did okay with it, but it completely lied about the word count of the essay. Other people have reported that it has invented sources when required to provide them. So…it doesn’t appear to be above “making shit up.”

    I will play more with it later.
