The ethical implications of using generative chatbots in higher education

Each essay ChatGPT produces is unique, even when it is given the same prompt again, and its authorship is practically impossible to detect. It looked as if ChatGPT would undermine the way we test what students have learned, a cornerstone of education. Yet some teachers think generative AI could actually make learning better. The university of the future anticipated by many scholars and policy makers has already arrived. Technology, if used ethically and strategically, can support faculty in their mission to prepare their students for the needs of our society and the future of work. While COVID-19 forced an emergency transformation to online learning at universities, learning how to teach efficiently and effectively online using different platforms and tools is a positive addition to education and is here to stay.

Since ChatGPT’s launch in 2022, AI chatbots have sparked concerns in education. While there are real risks that students’ independent thinking and language expression skills could deteriorate, banning the tool from academic institutions should not be the answer (Dwivedi et al., 2023). Teachers and professors are uneasy about potential academic fraud with AI-driven chatbots such as ChatGPT (Meckler and Verma, 2022). ChatGPT’s capabilities range from assisting with scholarly research to completing written assignments for learners (Roose, 2022; Shankland, 2022). However, students may exploit technologies like ChatGPT to shortcut essay writing, endangering the growth of essential competencies (Shrivastava, 2022).

The University of Tennessee, Knoxville, originally implemented a chatbot for its Big Orange Tix program to handle routine student inquiries about sports event ticketing. Given the complexity of the program, especially for new students, the chatbot became invaluable in fielding common questions and providing quick, accurate responses. The chatbot’s ability to assist students after hours proved especially useful. Following this initial success, the university has expanded the chatbot’s use to other areas, including the IT, Student Life, and campus store websites, with plans to incorporate it into more websites. Generative AI chatbots excel at optimizing engagement by analyzing metrics that reveal student behavior, preferences, and pain points. These metrics allow the chatbot to adapt its responses in real time, ensuring that each interaction is as effective and personalized as possible.

While the majority of researchers surveyed believe AI could lead to a “revolutionary change in society” (AI Index Steering Committee, 2023, p. 337), they also warn of the potential dangers posed by the technology’s development. Meanwhile, Kim Lepre, a seventh-grade English teacher in California, explains that when used correctly, ChatGPT can simplify and improve educators’ everyday lives. Lepre uses the program to differentiate instruction, generate quizzes, and even email parents, leaving more time to interact with students.

Recently, Shields asked ChatGPT to generate ten different project options for her sci-fi unit. Instead of a traditional essay assignment, the program suggested imaginative projects such as creating and explaining a poster of an alien. To work with the tool, users start a new chat by typing a prompt or instructions into the chat bar, and can then instruct ChatGPT to edit, adjust, or regenerate a response.
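The same prompt-then-refine loop can also be scripted against OpenAI’s chat API. The sketch below is purely illustrative; the model name and prompts are placeholders, not what any particular classroom actually used.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "user",
     "content": "Suggest three creative project options for a middle-school sci-fi unit."},
]

# First pass: the initial prompt.
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(first.choices[0].message.content)

# Second pass: ask the model to adjust its own answer, mirroring the
# "edit / adjust / regenerate" step a user performs in the chat interface.
messages += [
    {"role": "assistant", "content": first.choices[0].message.content},
    {"role": "user", "content": "Rewrite option 2 so that it requires no art supplies."},
]
revised = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(revised.choices[0].message.content)
```

The second request reuses the running message history, which is what lets the model adjust its earlier answer rather than start from scratch.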

Rowan College at Burlington County in New Jersey addressed this need through a mid-semester check-in campaign, using a simple text message asking students to reply with an emoji indicating how they were feeling. This allowed faculty and staff to gauge student sentiment and connect students with relevant resources for mental health, academic support, or campus engagement. By meeting students where they felt comfortable, via digital channels such as SMS and live chat, the college provided timely support while creating an environment that fosters student success and well-being. This personalized, data-driven approach enables institutions to allocate resources effectively and promote a culture of care and success. Finally, digital literacy training must cover the risk of plagiarism when using AI chatbots. Ghosal (2023) notes that ChatGPT lacks built-in plagiarism verification and may reproduce sentences drawn from its training data.
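The mechanics behind such a check-in are simple to sketch: map each emoji reply to a sentiment bucket and route the student to the matching office. The mapping and contact addresses below are invented for illustration and are not Rowan College’s actual workflow.

```python
# Hypothetical mapping from emoji replies to follow-up contacts.
ROUTING = {
    "😀": None,                            # doing well, no follow-up needed
    "😐": "academic-support@example.edu",  # nudge toward tutoring or advising
    "😢": "counseling-center@example.edu", # connect to mental-health services
    "😤": "student-life@example.edu",      # campus engagement outreach
}

def route_reply(student_id: str, emoji: str) -> str:
    """Decide which office, if any, should follow up with the student."""
    if emoji not in ROUTING:
        return f"{student_id}: unrecognized reply, flag for manual review"
    contact = ROUTING[emoji]
    if contact is None:
        return f"{student_id}: no follow-up needed"
    return f"{student_id}: refer to {contact}"

for student, reply in [("S001", "😀"), ("S002", "😢"), ("S003", "🤖")]:
    print(route_reply(student, reply))
```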

The hand-wringing around ChatGPT reminds him of the anxiety many teachers experienced a couple of years ago during the pandemic. With students stuck at home, teachers had to find ways to set assignments whose solutions were not too easy to Google. But what he found was that Googling well (knowing what to ask for and what to make of the results) was itself a skill worth teaching.

Indeed, Smith (2023) reports that OpenAI is working on the next major upgrade to ChatGPT, GPT-5, which is expected to launch in winter 2023. If reports about GPT-5’s abilities are correct, it could bring ChatGPT to the point of artificial general intelligence (AGI), making it indistinguishable from a human. OpenAI expects an intermediate version, GPT-4.5, to launch in September or October 2023 if GPT-5 is not ready by then (Chen, 2023).

But the text such tools produce tends to feel generic, lacking the self-revelation and reflection that admissions officers look for. Students who receive text messages from the chatbot at CSU Northridge can be alerted if their grades are slipping or be directed to university services, such as tutors or academic counselors, to get back on track, Adams said. She said the service helps students better understand what they need to do to succeed. However, like most powerful technologies, chatbots present both challenges and opportunities. Users should stay informed about the latest developments and best practices in AI ethics.

The author thoroughly analyzes ChatGPT’s application in healthcare education, considering both optimistic perspectives and legitimate concerns. Based on a comprehensive analysis of 70 research publications, the author investigates the utility of large language models in healthcare teaching, research, and practice. According to the review, ChatGPT offers several benefits, including improving research equity and variety, improving scientific writing, facilitating healthcare research and training, and encouraging individualized learning and critical thinking in healthcare education. According to the author, ChatGPT’s promising uses could lead to paradigm shifts in medical practice, study, and training. Nevertheless, the author suggests embracing this AI chatbot with care, given its existing limitations.

Thus, while chatbots can provide valuable support, educators can offer a far fuller range of educational experiences, particularly in a more traditional learning environment. Like all AI systems, chatbots learn from large amounts of data gathered from the internet, which unavoidably reflects societal biases. If the data used to train these models contains biased attitudes, the AI system will likely assimilate and reproduce those biases, even unintentionally (Bolukbasi et al., 2016). This could manifest as gender, racial, or other biases, significantly shaping a student’s learning experience and worldview when surfaced in an educational context.
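One way such bias becomes measurable is by projecting word embeddings onto a gendered direction, the kind of analysis behind Bolukbasi et al. (2016). The sketch below uses invented four-dimensional toy vectors purely for illustration; real analyses use pretrained embeddings such as word2vec or GloVe.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" invented purely for illustration.
vectors = {
    "he":         np.array([ 0.9, 0.1, 0.3, 0.0]),
    "she":        np.array([-0.9, 0.1, 0.3, 0.0]),
    "programmer": np.array([ 0.4, 0.8, 0.1, 0.2]),
    "nurse":      np.array([-0.5, 0.7, 0.2, 0.1]),
}

# Direction that (in this toy space) separates "he" from "she".
gender_direction = vectors["he"] - vectors["she"]

# A positive score means the word leans toward "he", a negative one toward "she".
for word in ("programmer", "nurse"):
    print(f"{word:>10s}: gender lean = {cosine(vectors[word], gender_direction):+.2f}")
```

In a real embedding space, systematic leanings of this kind are exactly what can resurface as gendered or racial associations in a chatbot’s answers.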

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. Recent news already provides information about the subsequent versions of ChatGPT.

Indeed, Lee et al. (2022) investigate a chatbot-assisted assessment system and find that it improves student outcomes, including academic achievement, confidence, attitude toward learning, and motivation. They conclude that chatbots can heighten learner participation in the educational process. The article is a good early illustration of the benefits of chatbots in research.

Will Chatbots Teach Your Children? The New York Times, 11 Jan 2024.

“That is an incredibly emotional experience…You wouldn’t want to automate that…And I think, right now, higher education is not really good at figuring out where human beings need to be involved,” said Jacobs. But chances are this would not be the first time you have communicated with a computer program without knowing it. In recent years, chatbots have become a common tool for banks and large companies around the world. Still, USMLE administrators are intrigued by the potential for chatbots to influence how people study for the exams and how the exam asks questions.

For years, universities have been trying to banish the plague of essay mills selling pre-written essays and other academic work to any students trying to cheat the system. But now academics suspect even the essay mills are using ChatGPT, and institutions admit they are racing to catch up with – and catch out – anyone passing off the popular chatbot’s work as their own. OpenAI’s artificial intelligence chatbot was opened to the public in November 2022, and in less than a week it surpassed the one million users mark, with people using it for things like creating code and writing essays.

And if someone confides to Ed that they are experiencing food insecurity, for instance, they might be referred to an appropriate school official who can connect them with resources. In some ways the AI system is an acknowledgement that the hundreds of edtech tools the school district has purchased don’t integrate very well with each other, and that students don’t use many of the features of those tools. Another example could be offering personalized insights and recommendations for campus life based on students’ interests and course material. Alongside ChatGPT, which is primarily used by students for help with academic tasks, universities have adopted a range of chatbots to help with other processes, including admissions and student support. Pausing in the AI development race would leave countries behind, and developed economies cannot afford to pay such a price.

In other words, chatbot-based learning allows learners to use software to learn individually, without the need for a class or a teacher (Shawar and Atwell, 2007). Learners benefit from immediate responses to questions and from being guided through complex topics at their own pace. The issue of algorithmic bias highlights the importance of taking a deliberate and critical approach to developing and implementing AI in education. If these challenges are met, chatbots may be able to contribute positively to the educational landscape without perpetuating societal biases. However, little research exists on how education professionals and policymakers can practically mitigate dataset biases. The handling of such sensitive student information immediately raises significant privacy concerns.

They acknowledge that ChatGPT can increase efficiency in the banking, hospitality, and IT sectors. However, concerns include practice disruptions, privacy and security hazards, biases, and false information. According to the paper, research is needed on knowledge, ethics, transparency, digital transformation, education, and learning. The handling of generative AI, biases in training data, appropriate implementation contexts, ideal human-AI collaboration, text accuracy assessment, and ethical and legal issues all need further study. Highlights include concerns about biases, dated data, the need for protective policies, and transformational effects on employment, teaching, and learning. Adopting AI chatbots in HEIs presents various risks, such as privacy breaches, unlawful use, stereotyping, false information, unexpected results, cognitive bias, reduced human interaction, limited accessibility, and unethical data gathering (OpenAI, 2022).

Spilling PII is not just a nightmare for the person whose information is stolen; it can also lead to lasting reputational damage for the institution and potential compliance penalties from the federal government. Much of the early panic over ChatGPT has subsided as instructors have realized the limitations of the AI, tools have been developed to detect its use, and thought leaders have encouraged colleges to embrace tools like ChatGPT.
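One common mitigation is to scrub obvious identifiers before a message ever reaches a third-party model. The patterns and the student-ID format below are hypothetical; a production system would rely on a vetted PII-detection library and institution-specific rules.

```python
import re

# Illustrative patterns only; not an exhaustive or institution-specific list.
PII_PATTERNS = {
    "email":      re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone":      re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "student_id": re.compile(r"\b[A-Z]{2}\d{7}\b"),  # hypothetical ID format
}

def redact(text: str) -> str:
    """Replace recognizable identifiers before text is sent to a chatbot."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

message = "I'm jane.doe@example.edu, ID AB1234567, call me at 555-867-5309."
print(redact(message))
```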

Check Point (2023) reports that underground hacking communities are using OpenAI to design malicious tools, and skilled threat actors will likely follow. Perry et al. (2022) conducted a large-scale study on using an AI code assistant for security tasks and found that participants with AI access produced less reliable code. The use of blockchain for data tampering prevention is also supported by academic research.
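The tamper-evidence property behind such blockchain proposals can be illustrated with a plain hash chain, where each record commits to the hash of the one before it, so altering any earlier entry invalidates everything after it. This is a minimal sketch, not any particular blockchain platform.

```python
import hashlib
import json

def chain(records):
    """Link records so that altering any earlier record changes every later hash."""
    blocks, prev_hash = [], "0" * 64
    for record in records:
        payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
        block_hash = hashlib.sha256(payload.encode()).hexdigest()
        blocks.append({"record": record, "prev": prev_hash, "hash": block_hash})
        prev_hash = block_hash
    return blocks

def verify(blocks):
    """Recompute every hash; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for block in blocks:
        payload = json.dumps({"record": block["record"], "prev": prev_hash}, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

ledger = chain(["grade: A", "grade: B+", "credential issued"])
print(verify(ledger))               # True
ledger[0]["record"] = "grade: A+"   # tamper with an early entry
print(verify(ledger))               # False
```

Real deployments layer distributed consensus and digital signatures on top of this basic chaining idea.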

A professor at Furman University told Business Insider he caught a student turning in a paper written by ChatGPT because, although it was very well written, it contained a lot of misinformation. However, not many universities have created rules against this type of cheating. Furthermore, students, parents, educators, and other relevant stakeholders should be aware of the data protection measures in place. Transparency about data handling practices can alleviate privacy concerns and ensure the trust of all stakeholders in a particular adopted chatbot. The challenge of balancing the benefits of personalised education with the privacy rights of students will continue to be a critical issue in the application of chatbots and similar machine learning in education. As scholars delve further into this AI-driven educational paradigm, prioritising data privacy will be important to creating a sustainable and ethical AI-integrated educational environment.

AI tutors have been assisting students since at least 2016, and university-branded chatbots have been around just as long. University chatbots took on even greater importance during the height of the COVID-19 pandemic, when reinforcing any kind of connection between students and their campus was a major challenge. To be effective, chatbots should provide a consistent and user-friendly customer experience. Both simple and intelligent chatbots should have easy access to data and be able to update that data based on the conversational exchange between the chatbot and the user. They should also understand the context of the screen(s) they are linked to and stay current in terms of content.
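As a rough illustration of that requirement, a chatbot session can carry the page it is embedded on and update shared student data from the conversation itself. The class, field names, and matching rule below are invented for the sketch and do not reflect any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    """Hypothetical session: knows which page the widget is embedded on and
    updates shared student data as the conversation progresses."""
    student_id: str
    page_context: str                     # e.g. "financial-aid", "it-helpdesk"
    profile: dict = field(default_factory=dict)

    def handle(self, message: str) -> str:
        text = message.lower()
        # Update stored data based on the conversational exchange.
        if "my major is" in text:
            self.profile["major"] = text.split("my major is", 1)[1].strip()
            return f"Got it, I've noted your major as {self.profile['major']}."
        # Use the page context to keep answers relevant to the current screen.
        if self.page_context == "financial-aid":
            return "This page covers aid; ask me about scholarships, grants, or loans."
        return "How can I help you today?"

session = ChatSession(student_id="S001", page_context="financial-aid")
print(session.handle("My major is computer science"))
print(session.handle("What can you help with?"))
```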

AI hallucinations can occur for various reasons, including data discrepancies in large datasets, training errors during encoding and decoding, and biased sequence generation (Ji et al., 2022). This poses a significant challenge for educators and students using generative chatbots. While this paper does not discuss the specifics of hallucination in natural language generation, the author acknowledges that it is important for educators and students to be aware of this particularly problematic issue. This level of autonomy is generally encouraged through contemporary educational strategies that promote self-directed learning, a method shown to increase student motivation, engagement, and learning outcomes (Wiliam, 2010).

For one thing, the USMLE is given in-person without test-takers having access to web-based tools to find answers, notes Alex J. Mechaber, MD, vice president of the USMLE at the National Board of Medical Examiners. Mechaber also points out that the chatbots did not take the full exam, but answered certain types of sample questions from various sources. Educators say chatbots can accelerate and deepen learning in several ways if students and teachers use them well. The Australian state of Queensland will be blocking ChatGPT from schools until the software can be “assessed for appropriateness,” according to the Guardian. It will join New South Wales, the first Australian state to declare ChatGPT banned on school grounds.

Educators can guide students in formulating practical questions and help them interpret and analyze the responses they receive. Conversations with ChatGPT encourage students to think critically, evaluate information, and refine their questioning techniques. This process enhances their ability to ask thoughtful and relevant questions and cultivates a deeper understanding of the subject matter. Privacy and data protection should be paramount when deploying ChatGPT in an educational setting. Educational institutions must prioritize students’ privacy and ensure their personal information is securely stored and protected.

For example, if a prospective student frequently asks about financial aid, the chatbot can prioritize providing detailed information on scholarships, grants, and loan options, along with tips on how to apply. Goldman Sachs (Hatzius et al., 2023) predicts that ChatGPT and other generative AI could eliminate 300 million jobs worldwide. Researchers estimate that AI could replace 7% of US employment, complement 63%, and leave 30% unaffected. Taulli (2019) suggests that automation technology will take over “repetitive processes” in fields like programming and debugging. Positions requiring emotional intelligence, empathy, problem-solving, critical decision-making, and adaptability, such as social workers, medical professionals, and marketing strategists, are difficult for AI to replicate.
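Picking up the financial-aid example above, the “prioritize what this student asks about most” behavior can be reduced to a simple intent-frequency heuristic. The intent labels and canned responses below are invented for illustration.

```python
from collections import Counter

# Hypothetical intent labels produced by whatever classifier the chatbot uses.
history = ["financial_aid", "campus_tours", "financial_aid", "financial_aid", "housing"]

RESPONSES = {
    "financial_aid": "Here is detailed info on scholarships, grants, loans, and how to apply.",
    "campus_tours":  "You can book a campus tour through the visitors office.",
    "housing":       "Housing applications open in March.",
}

def prioritized_reply(intent_history, default_intent):
    """Lead with the topic the prospective student asks about most often."""
    top_intent, _ = Counter(intent_history).most_common(1)[0]
    chosen = top_intent if top_intent in RESPONSES else default_intent
    return RESPONSES[chosen]

print(prioritized_reply(history, default_intent="campus_tours"))
```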

Perceptions and usage of AI chatbots among students in higher education. ResearchGate, 30 Jun 2024.

Meanwhile, the issue of algorithmic bias demands careful thought about training datasets in which many minority groups are underrepresented and which therefore lack diversity. The chatbot interacted with data stored by Snowflake, according to the district’s contract with AllHere, though any connection between AllHere and the Snowflake data breach is unknown.

In other words, oral presentations must be delivered solely by a human, while the benefits of AI can still be realised in student preparation. Nevertheless, this approach may be considered a short-term solution to constantly evolving AI technology, especially in the realms of online presentations and interviews. De Vries (2020) argues that deep fakes can blur the line between fact and fiction by generating fake video footage, pictures, and sounds. Similarly, AI-powered platforms such as AI Apply can quickly transcribe real-time questions posed during online presentations, formulate a rapid answer, and then vocalise it as if it were the student (Fitria, 2023). However, the author argues that this is a challenge the wider society will likewise have to grapple with, as there will be implications for political deception, identity scams, and extortion (De Vries, 2020). Moreover, there is a need for transparency about these biases and an ongoing dialogue about their implications.

Users should prioritize the privacy and data protection of individuals when using chatbots. They should avoid sharing sensitive personal information and refrain from using the model to extract or manipulate personal data without proper consent. Users should be cautious about the information generated by chatbots and not rely solely on them as sources of information. They should critically evaluate and fact-check the responses to prevent the spread of misinformation or disinformation. Ethical issues such as bias, fairness, and privacy are relevant in university settings.