Technology has thrown things like ChatGPT at us lately. By now, anyone who is technologically connected and works with the English language at any level of proficiency is likely drowning in information about it. The song-and-dance about the inevitability of technologies like ChatGPT tells us a great deal about how the discourse around a technology gets framed, regardless of real issues that affect the technological world, such as the digital divide and patchy internet connectivity, not only in India but in many parts of the world.
Some of the technology’s fans in the sphere of higher education appear to accept ChatGPT’s instantly productive abilities somewhat uncritically, even though the credibility and veracity of its responses have been questioned. It gets many things wrong, and it often gives generic answers to specific questions on a subject. But it is the relative lack of ethical probing of this technology that is most galling. As a teacher of subjects such as literature, writing, and academic writing at the university level, where reading and writing are serious requirements and continuous processes of learning and reflection, I see that there are legitimate concerns over the future of written assessments. But these concerns point less to the advance of technology than to the ethical and pedagogical challenges it raises.
How should liberal arts and humanities teachers respond to ChatGPT? It has to be on grounds of ethics. In the West, some institutions have, correctly in my view, curtailed its use in class work. This has pushed educators to be judicious with technology in the classroom, so that class time can be apportioned for deeper textual engagement. It can also gradually raise the reading bar, with students guided to compare texts critically. I am no anti-technologist, but the technology can be put to better use in other subjects. There is, of course, the role of the digital humanities and the use of technology within the arts and humanities, but that field too raises questions about the ethics of technology, and about who gets to use what, how, and when in the domains of the humanities.
This ethical tilt makes one question the hurried pace of education today. Uncritical currency is granted to ideas like ‘skills’ and tangible ‘outcomes’. Education becomes an experience that pushes educators and learners toward presentable ‘artefacts’ and concrete-seeming ‘products’ across subjects, rather than creating room for an educated and meaningful pause that generates among students the ability to ask piercing questions of any text or social phenomenon. Technologies like ChatGPT offer easy solutions in a climate hungry for quick answers to quick questions, a fast-food version of thought. Sure, ‘skills’ and ‘outcomes’ are important, but that does not hold true for every course in every field under the sun. Even where it does, should they come at the cost of the means of getting there? At the cost of ethics?
In the West (and now here, too), undergraduate students are introduced to ways of textual and philosophical engagement through Great Books courses, which have themselves been moulded and remoulded over time, cancelling out or calling into question all kinds of canonical and other biases in university-level education. Essentially, a foundational course like this is based on the principles of ethics, whose complex framework sets the tone for the students’ future education. It shapes even hardcore technical subjects that involve deep ‘skilling’, ‘artefact’ production, and tangible, manifest ‘outcomes’. To counter ChatGPT, ethics, with all its attendant issues, must make a subversive return to educational thinking.