“The Brilliance and Weirdness of ChatGPT”

At this point I can’t join in the enthusiasm for ChatGPT

I tried it out and found that it was not capable of learning from its mistakes, which to my mind would be the first measure of true AI. Here is a sample:

SCOOP: Tell me briefly why life is good.

CHAT: Life is good because it is always full of surprises.

SCOOP: That is incorrect. “Surprises” are neither inherently good nor bad. They are neutral. They can be good or bad. Try again to identify something about life that is indisputably positive.

CHAT: In addition to being full of surprises, life is always a challenge.

SCOOP: You are not understanding the concept. “Challenges” are also neutral. They can provide the satisfaction of overcoming them, or they can beat down one’s spirit. Try again to identify an inherently positive thing about life, and do so without resorting to cliches.

CHAT: In addition to being full of surprises and challenges, life is ….

I forget what the answer was, but it was similarly unresponsive and I gave up. I was trying to prompt the bot to respond with an undeniably positive concept like “rewards” or “happiness.” In that case, the answer may have been merely an opinion, and thus debatable, but it would at least have been responsive to the question. But I couldn’t get the bot to understand the concepts of “good” or “positive,” or to respond to my concerns about its answers.

So at this point it is not really capable of following a conversation as a human would do.

I then tried an essay question requiring a factual foundation of knowledge. “Why is Samuel Beckett considered an existentialist?” It began “Samuel Beckett is considered an existentialist because his work deals with themes of existentialism.”

So … roughly what a ninth grader would say if he didn’t do the assigned reading.

It did give a pretty solid elaboration of the existentialist question, but it was generic and unsupported by examples, so I’d say the chat bot still has a long way to go.

7 thoughts on ““The Brilliance and Weirdness of ChatGPT””

    1. ChatGPT gets right to the point and answers in short declarative sentences without stating any incorrect facts or making false a priori assumptions. It generally uses all words correctly and punctuates sentences authoritatively.

      Right away that’s headed for A-minus territory compared to student essays.

      To be fair, the bot is very good at some assignments, even brilliant at times, but it’s hit or miss. It failed when I asked it to write a Burma-Shave jingle in the style of James Joyce. While it did give me a four-line rhyme, the lines were much too long to fit on a sign intended to be read by a passing car, and the whole effort didn’t seem Joycean in any way I could spot. You have to give the creators a tip o’ the hat, however, for a system that instantly came up with some kind of answer to such a silly assignment.

      I would use it as a starting point if I were a student, just as I would (and do) use Wikipedia, but I wouldn’t hand in its answers verbatim. I would verify and elaborate, just as I try to verify and expand anything I read on Wikipedia.

      1. The peanut butter bible verse was pretty good.

        A couple of weeks ago I asked it to write me a story about a guy going to the grocery store in the style of Henry Fielding. It was long-winded, but the vocabulary and sentence structure were not really like Henry Fielding’s.

        I should try again but ask for the style of Thomas Hobbes. I’ll know it’s good if it comes back as a single sentence three pages long, littered with semicolons to break up the separate thoughts.

      2. One semester when I was teaching, I decided to split my final exam: 35% take-home essays and 65% in-class multiple-choice questions. One of the essays was about the impeachment of Andrew Johnson. A girl in my honors class handed in 4 typed pages about Andrew Johnson. It was all about Andrew Johnson, but it didn’t discuss the impeachment until the next-to-last paragraph. It turned out that Wikipedia had stolen the girl’s essay and made it their entry about Andrew Johnson. Well, maybe vice versa.

        I was really disappointed in her. She was certainly smart enough to do a better job of cheating, and she was actually pretty embarrassed. She was smart, and got only 1 multiple-choice question wrong, but a 0 on the essay portion dropped her average from the 90s to 85.

        It was easy to prove that essay had been plagiarized. Cheating with Chat GPT will be much harder to prove. It might not be that hard to detect if the essay or report uses language the student was unlikely to use. Of course, Chat GPT and similar programs could keep a record of all the answers they provide. They could then offer teachers a way to upload student work, and Chat GPT could tell them whether any part of it matched an answer it had previously provided. Such a system might be considered an invasion of privacy, but so long as no identifying information was linked to the stored answers, it might fly.

        1. Assuming that the bot has AI properties, it may not be easy to trace, as my interaction with it may impel it to give a slightly modified answer to the next guy. I don’t suppose they plan to store every interaction.

          1. Chat GPT is currently running in test mode but will eventually run in a more commercial mode. I doubt they would want to save all the answers it gives, but if there is enough of an outcry from teachers and universities, they might be persuaded to come up with a system like the one I described. Of course, then privacy advocates would scream, making them regret listening to the teachers. So who knows what will end up happening?
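The answer-matching scheme floated in this thread — the service keeps a record of answers it has generated, and a teacher checks a submission against that record — could be sketched as a simple word n-gram overlap test. This is purely a hypothetical illustration, not anything OpenAI actually offers; the function names and the 0.5 threshold are assumptions.

```python
# Hypothetical sketch of the answer-matching idea discussed above.
# The n-gram size (5) and the flagging threshold (0.5) are arbitrary choices.

def ngrams(text, n=5):
    """Return the set of word n-grams in a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(student_text, stored_answer, n=5):
    """Jaccard similarity of the two texts' n-gram sets (0.0 to 1.0)."""
    a, b = ngrams(student_text, n), ngrams(stored_answer, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_matches(student_text, stored_answers, threshold=0.5):
    """Return indices of stored answers whose overlap exceeds the threshold."""
    return [i for i, ans in enumerate(stored_answers)
            if overlap(student_text, ans) >= threshold]
```

On the privacy worry raised above: a service like this could store only hashed n-grams rather than the full answer text, which would let it flag matches without keeping a readable archive of everything it ever wrote.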
