A.I. Artificial Intelligence: What We Think; What You Think
By: The Editorial Staff
April 29, 2025
Artificial intelligence. Kind of a misnomer? An oxymoron? Can intelligence be artificial? Should intelligence be artificial? Kind of a scary thought to wrap your head around.
Here at The Cyclone Chronicle, we eschew AI, steer clear of it, avoid it, and frown upon it. We comb through all articles to catch and rework any passages that sound AI-generated, even just a tiny bit. You could say AI is really a dirty word here at The Cyclone Chronicle.
But we got to thinking: AI is here to stay, and we need to find a way to deal with it, work with it even if we don’t cozy up to it. Have we forgotten that Siri and Alexa are AI assistants? Have we overlooked the fact that Google searches now start with an AI summary?
We finally had the conversation in Newspaper Production Studio, where 18 diligent student-journalists plan and write stories for The Cyclone Chronicle. We read some articles. We watched some videos.
We were particularly fond of a few snippets of advice we found: trust but verify; just because you can, doesn’t mean you should. Our favorite was “Human writing is self-discovery. AI has no self to discover.”
Chew on that one.
By and large, this is how the student-journalists at The Cyclone Chronicle feel; they want to hone their writing craft and are better writers than AI. We’ve written some editorials – our opinions – about artificial intelligence. Check them out below.
But still... AI is here to stay. So, shouldn’t we get comfortable with it, and figure out how to use it without losing our ability to think critically and write?
Some professors incorporate the use of AI in their coursework. For example, Dr. Kerry Barnett has students in WRI1002-Comp & Rhet write a traditional empirical research paper and bibliography on a topic, then use AI to generate both on the same topic and set out to discover where the discrepancies lie.
We decided to cast a wider net than just student-journalists. A campus survey seemed just the thing to see what others in the Centenary community thought about AI and its use.
The surveys were anonymous, as who would admit to writing an entire paper using AI? (You would be surprised.) We asked about frequency of use and types of tasks, including personal virtual assistant, entertainment, creating schedules and making info easier to understand.
Specifically, we wanted to know how our fellow students use AI for academics. We created a hierarchy of tasks to check on the survey; at one end was transcribing lectures and at the other, writing entire papers. In between was brainstorming ideas, writing a thesis and generating outlines for papers, to name a few.
A random sampling of 65 surveys, 55 of which came from undergraduate students, revealed some interesting findings. Please check out our survey graphs.
Perhaps the most visceral reaction was how students answered the free response question: If you are a student, how would you feel if your professor used AI to grade your work?
“Annoyed; irritated; not great. I’d feel betrayed. Please don’t. Heck no!”
“Upset as it dehumanizes the work done by students.”
“I would feel like my time would be wasted; I would feel sad because I put a lot of thought into my work.”
“I would be a little upset because I don’t know how accurate AI is with grading.”
“I would be confused, a bit disheartened because a robot would be grading my work and I would likely not receive proper feedback or human suggestions on where to improve.”
“Badly; I want a human’s opinion not a crappy robot.”
“Not happy as AI is unreliable and lacks human emotions to properly gauge the value of my writing.”
“I would not like it; the teacher knows what they want in their student’s work. AI does not.”
“I would hope the professor takes a look personally at my work. Writing is as much about personal voice as it is words and grammar.”
“AI can’t pick up on particular writing styles, personal growth/improvement, and individual capabilities, or whether or not the student’s work was best/worst/average. So, I wouldn’t be happy.”
A few respondents didn’t care. Some say professors already use it.
In the words of one senior writing major: “I wouldn’t be mad, unless they grade it wrong.”
How often do you AI? Here is the difference in daily, weekly, and monthly usage among freshmen, sophomores, juniors, and seniors. (Graph made in Google Sheets by Alexis d'Ambly)
How do you use AI? Here is how different class levels use AI - from personal use to emails to entertainment to work. (Graph made in Google Sheets by Alexis d'Ambly)
How do you use AI as a student? Here is how different class levels use AI as students - from brainstorming to notes to summaries to citations to papers. (Graph made in Google Sheets by Alexis d'Ambly)
How would you feel if your professor used AI to grade your assignments? Students weigh in. (Graph made in Google Sheets by Alexis d'Ambly)
Here's what our editorial staff thinks of AI.
Alexis d'Ambly, junior writing major
As a writing major, I’ve worked hard to develop my craft. If I need help, I’ve learned to ask professors and go to the Writing Collaboratory. When AI became popular a few years ago, my professors banned using AI in the academic setting. So, I never used it. I never wanted to fail an assignment or, worse, be suspended or expelled for using artificial intelligence to write my papers for me.
More than a few times, I’ve used AI for creative purposes and personal experiments. I’ve come up with prompts and storylines I’d like to write and put them into ChatGPT to see what it could do. Again and again, reading its work, I’ve thought, I can write way better than this.
AI is often confused by relationships between people and objects and doesn’t understand negation. It isn’t perfect, and, with creative writing, its endings become too Happily-Ever-After or sitcom-y for my taste. AI struggles to write about more serious topics, because those thoughts are based on human experience. AI gets its information and data from across the Internet, making it incapable of the pathos and ethos only real human emotion and experience can provide.
As a writer, I’ve never even considered AI for any form of publishable writing. I have learned that AI can create lists and schedules and make information easier to understand. I will admit that I’ve never tried AI for these things and never wish to.
The only time I ever used AI for school was last semester when I needed a particular image for my project. I am not an artist and can’t draw to save my life. I was also not proficient in Canva at the time. This particular professor allows generative AI, but I never even tried to use that.
So, I went to AI to create a few images I needed. It took me a long time to find the right one. The free version of this program also only lets me redo the image a limited number of times. I don’t remember the exact number, but I do know I almost ran out of tries before I found the right one. Additionally, AI is not the best at images. Complex concepts, such as clocks, time, and hands, are not the easiest for AI. I had to create an image of a superhero based on my own creative design, and there was a hand sticking out of the center of his arm. Regardless, I used the best of the bunch for my project and cited it.
While I can appreciate the capabilities of AI, I would never use it for school or work. As a writer, my work is 100% my own. My creative abilities are mine and mine only. That is one thing AI can never take from me. Hopefully, the future of writing stays human.
Tanner Sullivan, junior communication major
It can answer prompts in the blink of an eye and revise essays like it’s nothing. Yet, despite these capabilities, it has become a subject of controversy in academia. These capabilities belong to artificial intelligence (AI), which has come a long way in our technologically advanced world. Many concerns have been raised regarding the use of AI in a university setting, primarily regarding how students implement it in their schoolwork.
During my senior year of high school, AI programs like ChatGPT began rising to prominence. As such, I would occasionally overhear students mentioning how they would use AI to write their assignments. In my head, I always wondered why some peers would make this decision. I felt that even if the choice seemed convenient in the heat of the moment, there would not be much takeaway, because the final piece would not be their own writing.
This is something I maintain to this day as a junior at Centenary, while also acknowledging the benefits the software can offer. In fact, in my free time, I sometimes give AI fun prompts just to see how it will respond. For this, I primarily use the Copilot feature on Microsoft Edge, but I have experimented with Snapchat’s “My AI” feature as well.
I also appreciate the knowledge and speed of these programs. Whenever I enter a question into Microsoft Edge or Safari – Apple’s web browser – there will often be a text block at the top of the page containing useful information related to my search. These often come courtesy of built-in AI programs, which have helpful data ready to go in an instant while still linking the original sources in the text box. Whenever I enter something into a search engine or the Copilot assistant, there is a response immediately – a convenient complement to AI’s overall knowledge.
But even with these features, I still have several reservations about AI, especially regarding misuse in an academic setting. My biggest worry is when someone uses AI to write entire paragraphs or even papers for the sake of convenience. I’m a firm believer that any academic institution should prioritize the learning atmosphere of its students by encouraging hard work and new discoveries. This goal cannot be fulfilled if students use AI to complete their prompts, since they would not be gaining experience by forming the paper themselves.
I’m a passionate writer, having bettered my skills through exploring new formats and implementing feedback I’ve received on previous handwritten pieces. Making mistakes is completely normal as a writer and can greatly contribute to improving certain skills like grammar and structure. Naturally, I maintain a bit of skepticism with AI because of what it can take away if used inappropriately.
Although I respect the convenient aspects of it like knowledge and speed, I feel like AI can steal the learning atmosphere students are encouraged to embrace. The university experience is priceless and shouldn’t be traded away for convenience.
Amanda Masiello, senior English major
When it comes to AI and its uses in the creative field, I think I’m more accepting of it than most people are. It’s such a hot topic and touchy subject at the same time; the most accepted opinion on AI is that it’s bad and should never be used.
However, I have to disagree.
AI is not leaving anytime soon, and while I do agree that it has caused more harm than good so far, the best we can do now is adapt to it. Ignoring the problem and covering our ears solves nothing.
There have been various instances where AI has been helpful in the creative process, particularly in editing and revising. With our fast-paced lives, especially in a work environment, having a robot remind you to capitalize a title or put a comma in a long sentence is extremely helpful and saves a lot of time. These programs are never perfect, but I think that is good since they force you to double-check the changes you make and thereby become better writers in the future.
Personally, I always love playing with AI and seeing what it can do. I used to play AI Dungeon in high school whenever I was pretending to pay attention in class. Having an AI Dungeon master is fun when no one is nerdy enough to play with you.
I only ever really used AI for entertainment purposes.
One of my favorite pastimes as a kid was asking AI to write a creative story and laughing at the bizarre result. Remember Evie and Cleverbot? Watching the wacky stuff those robots would spit at you, based on what other people had said to them, was hilarious.
Don’t get me wrong, most of the criticism leveled at AI is perfectly valid! However, claiming it is always bad when this is not the case is just plain ignorance. AI image generation alone has caused a crisis in the art community. Particularly scummy people have been feeding AI programs real art made by people; the AI then spits out an image based on what it’s fed, which is sold and passed off as real.
Others have even been using AI solely to replace real artists or to lie about their own talents. This has been especially prevalent in the realm of journalism. With how quickly AI is advancing, it went from something almost anyone could tell was AI-generated to something that blends in easily. Some AI programs even write better than some people do.
To tell you the truth, I find little as depraved as faking talent for the purpose of tricking audiences to satisfy pride and ego. It is insulting to everyone, and you are actively doing a disservice to yourself by letting a robot write something for you instead of practicing on your own.
AI is a scourge on art, no matter what form it takes: drawing, writing, journalism, acting, game development, movies, TV shows; nothing is spared.
However, there are some ethical ways in which AI can be used. In particular, many artists in the analog horror space have ingrained AI into their creations without letting the rest of the work suffer. “Dreams of an Insomniac” and “Liminal Land” have both utilized AI to accentuate the horror.
In particular, “Dreams of an Insomniac,” created by YouTuber Pastra, uses AI to generate pictures of children for missing posters in the series. This way, no actual children would be exposed to this material, and no one’s lives and faces would be exposed for all to see.
“Liminal Land” by YouTubers Nexpo and Nick Crowley uses AI to alter real photographs taken by the two creators to make an otherwise ordinary image look unsettling or borderline uncanny.
I am fully aware that I am in the minority when it comes to opinions on AI. Most people, when they hear something was made with even a smidge of AI, immediately write it off as lazy or bad. AI can be used to enhance art, but only in certain aspects. Used in larger amounts, it destroys the identity of the art form.
AI defeats the whole purpose of art and creativity.
So what if what you create isn’t perfect? YOU made it, YOU put effort into it, and YOU put yourself into it. No robot can take that away from real art; the process of creation is what makes art special.
Carlee Nigro, sophomore writing major
The use of AI has taken the world by storm. So many of us use AI tools like ChatGPT. I do it myself.
I tend to use it often as a student because it helps me with many parts of my schoolwork. For instance, when I'm in class and listening to a lecture, I use AI to auto-generate notes.
I also use AI to make prompts for essays. It helps me structure my essay so I can put all of my information in the right places.
Also, I use AI to generate citations. I make sure to check the citations are formatted correctly and the information is correct.
One thing I will never do is have AI write an essay for me. AI writing is far too wordy, and much of it is filler. It also never includes what I would actually want in an essay. For example, if I auto-generate an essay explaining the Revolutionary War, key pieces may be missing. AI is a tricky tool.
As a student journalist here for The Cyclone Chronicle, I will never use AI for my articles. These articles come from me and my writing skills. No one can match my skills, not to toot my own horn.
I may not use it for my articles, but I love AI as a resource for schoolwork. It is beneficial for me.
I believe that AI is a very valuable resource, when done correctly, for college students everywhere!
Troy Sumpter, senior creative writing major
The rapid advancements in artificial intelligence (AI) have ushered in a new era of technological transformation, one that holds both immense promise and complex challenges for our society.
OK, that introduction? That was all AI. AI is becoming more and more relevant in today’s world, especially in the daily life of students. From my perspective, I will tell you how I use AI in my school life, personal life, and Cyclone Chronicle life.
School Life:
Whenever I’m in school, using AI has never crossed my mind (other than the introduction). When I need to find information, I use Google to find websites to get information for my work.
Personal Life:
My use of AI in my personal life is a fair balance. I use Siri almost daily, asking it how to spell words or to call people. It is hit or miss, though, because Siri sometimes won’t pick up my voice when I ask it to call someone. I have also used ChatGPT for fun, seeing if it could make a script to my liking, but what it came up with wasn’t the way I imagined.
Cyclone Chronicle Life:
The only time I use AI in my Cyclone Chronicle life is Turboscribe.ai, when I do interviews with people, because it is easier for me to record their voices and use the website to transcribe their words for my articles. However, depending on how people pronounce their words, it sometimes transcribes them incorrectly.
Conclusion
In conclusion, I use AI frequently in my personal and Cyclone Chronicle life. It has setbacks, like picking up the wrong words, but it also has benefits, such as essay layouts, grammar corrections, writing prompts, and more.
AI gets humorous. Virtual assistant Siri has a hilarious response to 0 divided by 0. (Image from X [@miravperry])
Concept and prompt by Viktoria Popova, image generated by OpenAI's DALL-E on April 4, 2025.
AI: Unscripted Intelligence
By Viktoria Popova
Director of Institutional Research and Assessment
This Op-Ed was written by Viktoria Popova, who, in addition to serving as Director of Institutional Research and Assessment, is a PhD student in Data Science at National University. Viktoria loves to tell fascinating data stories about enrollment, retention, student demographics, and more. She enjoys coming to the classroom to give AI presentations and hosts monthly AI meetings for faculty and staff.
How about we start with a definition? But that is actually part of the challenge with AI discussions – there is no neatly drawn boundary between AI and non-AI technologies. For the purpose of this conversation, though, let’s allow ourselves to draw a light grey dotted line between the two. If it (a “tool”) follows hard-coded rules, it’s not AI. If it learns from experience, it’s AI. Then, what does it mean to be rule-based or learning-based?
If you strictly follow a recipe, that is rule-based. If you learned how to cook from multiple observations (watching your grandma cook, TikTok posts, etc.) and trial-and-error practice runs (involving both burnt masterpieces-to-be and “wow, should I be a chef?” existential moments), that’s learning-based.
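The recipe analogy can be sketched in a few lines of code. This is a toy illustration in the spirit of the op-ed, not anything from a real AI system; the spam-filter example and every name in it are invented:

```python
# Rule-based: a human writes the rule down, like a recipe.
def is_spam_rule_based(subject):
    return "free money" in subject.lower()  # hard-coded rule

# Learning-based: the "rule" is inferred from labeled examples.
def learn_spam_words(examples):
    """Collect words that appear only in spam subjects, never in normal mail."""
    spam_words, ham_words = set(), set()
    for subject, is_spam in examples:
        (spam_words if is_spam else ham_words).update(subject.lower().split())
    return spam_words - ham_words

examples = [("free money now", True),     # spam
            ("meeting at noon", False),   # normal mail
            ("free lunch today", False)]  # normal mail

learned = learn_spam_words(examples)  # {"money", "now"} ("free" appears in normal mail too)

def is_spam_learned(subject):
    return any(word in learned for word in subject.lower().split())
```

Feed the learner different examples and it infers a different rule; the rule-based version never changes unless a human rewrites it. That difference is the light grey dotted line drawn above.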
These distinctions raise interesting questions about the concept of learning itself – both human learning and machine learning (ML). If we are memorizing something, are we truly learning? With ML, let’s say we train our model on thousands of flower images. We say, “Look baby ai, this is a tulip, and this is a rose” (and we show multiple pictures of roses – up close, from afar, from different angles). Then, we show an image of a rose that it has already seen and ask, “Look baby ai, what flower is it?” Then we show it again and test it again. With every pass through the training images (each full pass is called an “epoch”), baby ai probably gets better and better at accurately recognizing flowers no matter how different they may look from different angles.
But that’s not even the real test. The real test comes when we show images of roses or tulips that it has never seen (they were not shown in the training set). Can it generalize and apply its learning of the known into the recognition of the unknown and previously not seen? If it can, then LEARNING occurred. Our baby ai has grown and can now bravely march into the world of the unknown equipped with its ability to apply learning in novel unpredictable situations.
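The training-and-testing story can be made concrete with a tiny classifier. This is a hedged sketch only: a nearest-centroid model on made-up petal measurements stands in for the image model in the example (real image classifiers are far more complex, and all numbers here are invented):

```python
# "baby ai" averages what it has seen of each flower (training), then
# labels new flowers by the closest average. The real test uses
# examples it has never seen.

def train_centroids(examples):
    """Average the feature vectors for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Label a new flower by its nearest centroid (squared distance)."""
    def distance(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=distance)

# Training set: (petal length, petal width) -> species.
train = [([1.4, 0.2], "tulip"), ([1.5, 0.3], "tulip"),
         ([4.7, 1.4], "rose"),  ([4.5, 1.5], "rose")]
centroids = train_centroids(train)

# Generalization: flowers the model was never shown.
print(predict(centroids, [1.6, 0.25]))  # tulip
print(predict(centroids, [4.9, 1.3]))   # rose
```

If the model could only repeat labels for the exact flowers it memorized, no learning would have occurred; getting these two unseen flowers right is the "real test" described above.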
Let’s shift gears a bit – but still stay in the realm of academic inquiry that unites all of us in this space. Have you ever experienced a generative AI model like ChatGPT or Claude (there are quite a few others) reacting to your comment with an “Aha” moment? An “Aha” from AI! I have had a few (not a lot), where it momentarily shed its algorithmic composure and became a bouncing brainiac at spotting a fascinating concept. Have to tell you, it made my day :o)
After a few similar reactions during different chats (again, very rare occasions), I began to wonder: what prompts AI to react to some of my thoughts or questions with such jubilation, while my other intellectual musings don’t evoke these reactions? I’ve done some light exploration and probed AI itself (in this case, Claude 3.7 by Anthropic) with this question. What I learned is that such reactions are not explicitly programmed when building a baby ai. Nothing tells AI, “if a user says X, express that you are impressed.” This reaction is considered an “emergent behavior.” If a model starts showing abilities beyond what it was explicitly trained to do, that’s an emergent behavior.
Imagine that we are teaching an AI to play Candy Crush. And then, one day, we see that it discovered shortcuts and game secrets that we never trained it to do (and didn’t even know about ourselves).
Another interesting example of an emergent behavior in AI (certain AI models) is humor. And I don’t mean those cases when we explicitly ask AI to tell us a joke or be humorous. I mean if it spontaneously jokes in response to your prompt – and there is nothing funny about your prompt. What is it that prompted this reaction or this emergent behavior?
Observations have been made that AI (I think it was Claude 3.5) diverged to humor when it was pushed, over and over, by a user to perform the impossible – divide by zero. After having expressed the impossibility of this task several times and in several ways, Claude began responding with a twist of humor about the mathematical impossibility.
It appeared that humor emerged as a defense mechanism – which is actually similar to some human reactions (sometimes we deflect to humor when faced with uncomfortable situations) and has evolutionary value.
Wait... does this mean this AI model is showing signs of developing nuanced social intelligence? Without anyone explicitly programming it to reach for a humor lifejacket when drowning in impossible requests? But if consciousness itself is considered an emergent property in organic systems, and AI is clearly capable of emergent behaviors... Does it mean...? Nope, not even going there.
But then again, perhaps that's exactly where our academic and personal explorations should lead us – not only to definitive “correct” answers but also into questions that make us uncomfortable.
AI systems will continue displaying emergent behaviors, and it looks like many will mirror human adaptive mechanisms (could it be because they learn from our language, which hints at how we adapt in different situations?). While we have tried to untangle some AI concepts here, maybe our most important insight is that untangling is precisely what your generation will be tasked with doing – not just understanding the technical underpinnings, but navigating the complex implications that arise when 'unscripted intelligence' transcends the boundaries we never programmed it to cross.