Sorry Professor, Artificial Intelligence Did My Homework – The Bowdoin Orient

Alfonso Garcia

About a month ago, I started reading about OpenAI's Dall-E. OpenAI is an artificial intelligence research lab, and Dall-E is an image generation program: when prompted with specific details, it produces a new image. For example, conjuring images like "an astronaut playing basketball in space in a minimalist style" is well within reach of this program.

Reading on, I discovered that Dall-E builds on another OpenAI program called Generative Pre-trained Transformer 3, or GPT-3. GPT-3 is a large predictive language model that essentially generates human-like sentences. Because it came out back in 2020, I thought it might be more accessible.

Then I had a light bulb moment. We had been given an essay assignment in my philosophy class, Philosophy of Mind, and when I saw the topics, I thought, "Yep, I'm using AI for my essay." Instead of doing it in secret, though, I ended up emailing Professor Sehon to ask if he would be interested in grading an essay written by artificial intelligence (AI). He replied with something better: "Don't tell your classmates, and we'll review it in class!"

The assignment was to explain, evaluate, and argue against Professor Sehon's assertion that the commonsense explanation of human action is "irreducibly teleological." I don't want to lose you to unnecessary confusion, so I'll leave it at that for now. I told Professor Sehon not to expect too much from the quality of the essay; I thought there was no way AI could write a decent philosophy paper.

I got to work, typing a sentence into the program for it to run wild with. I kept writing a sentence or two, then deleting what I had written. Finally, I landed on a decent starting point and let the AI generate some text. The AI understood this to be a philosophy essay and began the first sentence of its second paragraph with "Famously, Kant argued that…" I was shocked.

The AI even started offering its own thought experiment! I thought it was one of the funniest thought experiments I've ever read. The AI wrote about the Emperor of China and his son, who has Down syndrome. The Emperor's advisers tell him that he would be happier if his son didn't have Down syndrome; if the Emperor let the doctors operate on his son, he would be cured! But the Emperor is adamant about not curing his son. He is very proud of his son's condition.

Somehow the AI related this to Immanuel Kant, writing: "If you find yourself holding the opinion that it is wrong to use medical intervention to make people with Down syndrome more similar to the emperor's other children, then ask yourself why this opinion is mistaken."

After that absolutely absurd example, I entered a sentence containing the word "Sehon." The AI finally did what I had expected it to do all along: parrot back the typed words. Because "Sehon" wasn't a familiar word to the language model, it ended up spitting the prompt back three or four times. At that point I decided the AI was done; it had written its first philosophy paper.

I sent it to Professor Sehon, and in one of our next classes we went through it paragraph by paragraph, giving feedback under the pretext that it was an essay written by a former student.

The first paragraph was okay. It did its job of introducing the topic, with some misunderstandings, but it got the general idea across. The second paragraph began explaining teleology so that any reader could understand what it means. The AI wrote: "Teleology is a form of reasoning that uses a purpose or goal to explain an action instead of using other causes." It also offered an example of teleology, and the class tended to like it. "Examples are helpful," they said. "I have a good sense of the direction the essay is taking." Overall, there weren't any big issues here, other than me trying to hold back a few laughs. Next came the AI's magnum opus: the Chinese Emperor thought experiment.

Professor Sehon started reading the passage aloud as we followed along. When we got to the part about curing the Emperor's son's Down syndrome, there were some laughs and some concern for the student who supposedly wrote the essay. Through a few laughs, some of my classmates asked, "They know Down syndrome is not curable, right?" But, on the whole, people tended to like the essay. It got its message across.

We got to the last paragraph, and everyone was confused. Each sentence started with "explain and evaluate" or something similar, but Professor Sehon said, "I think the student had problems submitting it or something," and that seemed like a reasonable enough explanation to the class.

We all have clickers in class for polls, so Professor Sehon asked us, "How would you grade it?" I gave it a D, and I figured a lot of people would give Ds and Fs. Surprisingly, most people gave Bs and Cs. The AI had passed! When people found out the essay wasn't written by a person, they were amazed and impressed. How could an AI write a seemingly compelling essay on such a specific topic?

Considering we did this in a Philosophy of Mind class, I'd say the experiment makes a somewhat stronger case for AI intelligence. The task may have been narrow, but it's always exciting to see where AI technologies could go.
