For those who have taught long enough, teaching used to cover familiar, predictable ground.

On Nov. 30, 2022, OpenAI launched ChatGPT for public use. ChatGPT (Chat Generative Pre-trained Transformer) is a "large language model-based chatbot" that can converse in a humanlike way and respond to a user's communication needs, handling tasks from research to writing an assignment, and producing output in the desired "length, format, style, level of detail, and language used."

ChatGPT reached one million users within five days of its public launch. In December 2022, Google issued a "code red" over ChatGPT and the threat the chatbot poses to the future of search engines.

Search engines are not the only ones feeling the heat. Communication teachers at the University of the Philippines (UP) Cebu recently met to discuss how to respond as an institution to students and even teachers using artificial intelligence (AI) products for work.

Technology has an unnerving habit of racing ahead, enabling not just research and learning but also new ways of committing, and getting away with, plagiarism and academic malpractice.

Plagiarism, the unethical practice of copying someone's work and claiming it as one's own without proper attribution, demands that teachers read and review their students' submitted manuscripts to detect it.

Software facilitates teachers' work in reviewing and marking manuscripts. Universities like UP include in their online databases Turnitin, a service that helps students and teachers check whether their work contains text that falls within the plagiarism spectrum.

Recently, Turnitin added a service for detecting the use of AI in a submitted work. The service, however, stresses the same caveat: human judgment must still oversee artificial intelligence in determining whether the percentage of similarity detected in a manuscript actually involves academic misconduct.

Plagiarism is relatively clear-cut to determine, since both student and teacher can check the wording or the context of the paragraph or composition.

AI usage is more contentious. Among faculty and students there is a diversity, even a clash, of views: is AI simply another tool, its use as permissible and inevitable as when the calculator supplanted the abacus and the computer later supplanted the calculator? Or does AI encroach on and rob learners of their agency, creativity, and right of self-expression?

Negotiating these dilemmas of standards and ethics challenges learning because AI exposes the divide between digital natives and digital immigrants. The young, initiated early to use and regard new media as essential extensions of themselves, are often pitted against their elders, who regard technology as mere tools and are loath to subordinate their agency and subjectivity to the artificial and synthetic.

The academe also realizes that industry, which prizes speed, efficiency, and profit, values AI as a necessary prop for stimulating human creativity and productivity. The perennial challenge of matching academic and industrial ethos and output is given a new twist by AI, which professors who will not yield critical thinking and creativity to the artificial view as a threat, while employers and bosses will invest in anything that boosts productivity and profit.

Should the academe prepare students to work with AI, as demanded by industry? Or is the student better prepared by harnessing innate talents and abilities and training him or her not to be held hostage to technology?

Given the pace of technology’s evolution, academic stakeholders must converse and converge in readiness for an AI-saturated future.