When I last mentioned AI, I spoke of academia needing to adapt, but I had no idea how much evolution lay ahead for us! It began with curiosity, publications, and trepidation; then the Learning Management Systems and software vendors saw an opportunity for profit. Now, AI has become so pervasive that basic course modules offer instructors the option to use AI to “enhance” materials. Some instructors have happily welcomed their new AI overlords, all too ready to pass the work onto their new unpaid interns who won’t mindlessly scroll online watching cat videos. Other instructors… “WE HAVE TO GO BACK TO PENCIL AND PAPER!!!” Have you seen our students’ handwriting lately, my colleagues? I certainly have no time to play silly games unless they involve illustrations from The Oatmeal.
Now, I have mixed feelings about AI. I have tested the limits of ChatGPT in terms of research capabilities, and I can confirm that it cuts research time dramatically; however, you are still limited by paywalls, so trying to break down genealogical brick walls is still not possible without spending some serious cash. Of course, the AI program you use also has its own limits based on how it was built and how much data its developers allowed it to access; unless it can pull from the open web, the bot’s training data typically has a cutoff date. These limitations can produce lovely things called “hallucinations.” No, I’m not talking about seeing people jump out of the shadows or hearing voices; I’m talking about the AI fabricating information by extrapolating from the information it already has. Hallucinations lead to misinformation, which leads to… trouble! Do you know what other technological advancement came with similar issues? Wikipedia!
Ah, but why do I mention Wikipedia? Usually, people understand concepts better when you compare them to something familiar. Wikipedia isn’t a true encyclopedia; it is an open-source website where individuals can create an account and add information in real time, factual or not. Unless and until a moderator comes along to verify the information, I could literally go to the Mickey Mouse Wikipedia entry, claim he was the first president of my country, and walk away. While most people born here would know that’s incorrect, imagine a child in another country deciding to cite that information for a school project! Now, I’m being facetious for the sake of example, but the point stands that we need to be responsible about consuming and repeating information regardless of where we found it.
So, that’s just one issue with AI that makes ethical usage critical, but we haven’t even mentioned something truly alarming: the environmental impacts! If you want to market a product as saving me time and creating more efficiency in my day-to-day tasks, I would prefer that the product not consume fresh water while we’re having local conversations about unacceptable amounts of lead and PFAS in our drinking water supply. If the technology isn’t sustainable, we need to review how it functions. I may not be an environmental engineer, but I do know my institution offers that major, and its students are required to take my classes. I do know that we have the collective brain capacity to figure out how to make the technology work better for us instead of taking water from impoverished communities to generate images of people with extra limbs. Leave the art to the artists, the music to the musicians, and the writing to the writers. I’m sorry if you feel like you need AI to fit into one of these categories, but the people who practice their skills deserve better than losing out to slop.
My final issue with AI: the utter lack of critical thinking. While I have used several bots to test their limits, improve my own understanding of some topics, and inspire myself, I see too many people using AI to think for them. Yes, it is quite simple to enter the assignment guidelines into Google Gemini and then copy and paste the resulting output into an assignment dropbox, but what does a student truly gain from that experience? What did they learn during that process? If the goal of an assignment is to research their chosen career path, what does outsourcing it indicate to me and the student about said goals? Is the student truly interested in that career? The answers to those questions are beyond the scope of my expertise, but we need to seriously consider why we are depending so heavily on technology when we are fully capable creatures. Tools are meant to be used; we should not become the tools.