ChatGPT in Education: Opportunities and Risks
*Correspondence: adarkwahmichael1@[Link]

Affiliations: 1 Smart Learning Institute of Beijing Normal University, Beijing, China; 2 Open Education Faculty, Distance Education Department, Anadolu University, Eskisehir, Turkey; 3 Indiana University Learning Sciences Program, Bloomington, IN, USA; 4 University of Wollongong, New South Wales, Australia

Abstract
Artificial Intelligence (AI) technologies have been progressing constantly and becoming more visible in different aspects of our lives. One recent phenomenon is ChatGPT, a chatbot with a conversational artificial intelligence interface that was developed by OpenAI. As one of the most advanced artificial intelligence applications, ChatGPT has drawn much public attention across the globe. In this regard, this study examines ChatGPT in education, among early adopters, through a qualitative instrumental case study. Conducted in three stages, the first stage of the study reveals that the public discourse in social media is generally positive and that there is enthusiasm regarding its use in educational settings. However, there are also voices that approach the use of ChatGPT in educational settings cautiously. The second stage of the study examines the case of ChatGPT through the lenses of educational transformation, response quality, usefulness, personality and emotion, and ethics. In the third and final stage of the study, the investigation of user experiences through ten educational scenarios revealed various issues, including cheating, the honesty and truthfulness of ChatGPT, privacy, misleading information, and manipulation. The findings of this study provide several research directions that should be considered to ensure a safe and responsible adoption of chatbots, specifically ChatGPT, in education.

Keywords: Generative AI, ChatGPT, Chatbots, Education, Artificial intelligence, Human–machine collaboration
Introduction
"Can machines think?" is a simple yet sophisticated question (Turing, 1950). In an effort to answer this question, McCarthy et al. (1955) organized a scholarly event and coined the term "artificial intelligence" (AI) in 1955 to refer to machines and processes that imitate human cognition and make decisions like humans. Before then, the term "robot" had been articulated for the first time in Čapek's (1921) science fiction play; however, it was Asimov (1942, 1950) who envisioned that these machines could evolve into intelligent forms and introduced the Three Laws of Robotics to set rules that bots must adhere to and cannot bypass. Originally known as the imitation game,
the Turing Test was proposed as a protocol for determining whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human (Turing, 1950). Once depicted as fiction, these possibilities are about to come true, and we are on the brink of a future in which we may finally know whether machines can think.
In November 2022, OpenAI, an artificial intelligence research lab, released a chatbot called ChatGPT (Generative Pre-trained Transformer). ChatGPT is a conversational artificial intelligence interface that uses natural language processing (NLP), interacts in a realistic way, and even "answers follow-up questions, admits its mistakes, challenges incorrect premises, and rejects inappropriate requests" (OpenAI, 2023). While ChatGPT's primary function is to mimic human conversation, its capabilities extend far beyond that: it can create new things, such as a poem, story, or novel, or take on almost any role within its capabilities.
With the advent of ChatGPT, there is finally an innovative AI technology that will truly challenge the Turing Test (Turing, 1950) and demonstrate whether it is capable of thinking like humans. It is uncertain whether it will pass the Turing Test in the long run, but it is clear that ChatGPT is revolutionary as a conversational AI-powered bot, and it is a visible signal of the paradigm shift that has been happening not only in the educational landscape but in every dimension of our lives. Compared to traditional chatbots, ChatGPT is based on GPT-3, the third iteration of OpenAI's GPT series, which is more advanced in terms of scale (175 billion parameters, compared to GPT-2's 1.5 billion), training data size, fine-tuning, overall capability, and human-like text generation (Brown et al., 2020). The use of natural language processing and a generative AI approach that relies on deep learning has enabled ChatGPT to produce human-like text and maintain a conversational style, allowing more realistic, natural dialogues.
Several preprints of studies and numerous blog posts and media outlets have reported
the advantages of ChatGPT in education (Zhai, 2022); some have even provided guide-
lines on using it in classrooms (Lieberman, 2023; Mollick & Mollick, 2022; Ofgang,
2022). However, the potential concerns of chatbots have not been investigated as thoroughly. Janssen et al. (2021) described reasons for chatbots' failure in practice, including insufficient resources; a wrong use case (i.e., the basic chatbot technology did not match the required task); weak legal regulation, data security, and liability concerns; ignorance of user expectations and poor conversation design; or simply poor content. Haque et al. (2022) conducted a Twitter sentiment analysis of ChatGPT adoption as a technology in general (not in education) and found that users hold divided attitudes about it.
However, concerns coming from an advanced chatbot, such as ChatGPT, were not well
investigated in the education field. Therefore, it is not clear if ChatGPT will overcome
the concerns found in previous chatbots or will even deepen them. Consequently, this
may lead to a serious and quick protective reaction to a potential opportunity, such as
New York City and Los Angeles Unified schools’ banning of ChatGPT from educational
networks due to the risk of using it to cheat in assignments (Shen-Berro, 2023; The
Guardian, 2023). It is therefore important to investigate the concerns around using this technology, ChatGPT, in education to ensure its safe use. The purpose of this study is thus to examine chatbots in education; for this purpose, the study approaches ChatGPT as a representative case of an advanced chatbot among early adopters. In this regard, this study answers the following research question: What are the concerns of using chatbots, specifically ChatGPT, in education?
Methodology
To answer the aforementioned research question, this study adopts a qualitative case study approach (Yin, 1984) and an instrumental case study research design (Stake, 1995). An instrumental design is helpful when researchers intend to understand a phenomenon in context (Stake, 1995); in our case, that phenomenon is ChatGPT, a recent and prominent example of AI-powered chatbots. To ensure the validity and reliability of the study, the research triangulates (Thurmond, 2001) the data collection tools to gain a broader and deeper understanding. In this regard, the study proceeds in three stages: social network analysis of tweets, content analysis of interviews, and investigation of user experiences. Each stage is described in the following subsections.
The coding scheme used in the content analysis was as follows:

Educational transformation: used when users talk about how ChatGPT will change education.
Response quality: used when users talk about the accuracy of the results obtained from ChatGPT.
Usefulness: used when users talk about how ChatGPT helped them in education.
Personality and emotion: used when users talk about their feelings when interacting with ChatGPT, or when they mention emotions revealed by ChatGPT.
Ethics: used when users talk about the ethical concerns of using ChatGPT in education.
Results
The obtained results are presented for each stage in the following subsections. Sample tweets from the first stage include:
• As a language model trained by OpenAI, I’m constantly amazed by the power &
potential of artificial intelligence. From natural language processing to machine
learning, AI is revolutionizing the way we think about & interact with technology.
#AI #machinelearning #openai #ChatGPT
• Here’s my problem with this line of thinking about #ChatGPT as a writing instruc-
tor. Reactionary teaching goes nowhere.
• “Teachers are talking about ChatGPT as either a dangerous medicine with amazing
side effects or an amazing medicine with dangerous side effects.” —@VicariousLee.
Stanford faculty weigh in on #ChatGPT's shake-up in education [Link] #edtech #edchat #gpt3 #ai [Link]
A word cluster of the 100 most frequent terms in the tweets (see Fig. 2) was generated using t-SNE analysis. t-SNE is an unsupervised "nonlinear dimensionality reduction technique that aims to preserve the local structure of data" (van der Maaten & Hinton, 2008, p. 2580), used for exploring and visualizing high-dimensional data. The findings revealed that most users are optimistic about the use of AI-powered chatbots, such as ChatGPT, in educational systems. The blue cluster in Fig. 2 demonstrates the future promises of using ChatGPT (e.g., see the terms: ChatGPT, learning, AI, education, future, teaching, learn); the pink cluster indicates insights regarding how to use it and its revolutionary potential (e.g., see the terms: gpt, 2023, artificial, intelligence, human, think, better, way, knowledge, technology, tools, student, teacher); and the green cluster shows critical insights (e.g., see the terms: cheating, change, ideas, create, problem, potential, ways, edtech).
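The clustering pipeline described above can be sketched in two preparatory steps: count the most frequent terms, then build a term co-occurrence matrix that a t-SNE projection would be fitted on. The sketch below implements only those counting steps with the standard library; the tweets, stop-word list, and tokenization rule are illustrative assumptions (the study's actual preprocessing is not specified here), and the 2-D projection itself would be delegated to an off-the-shelf implementation such as sklearn.manifold.TSNE.

```python
# Sketch of the term-frequency and co-occurrence steps behind a word-cluster
# figure like Fig. 2. Tweets, stop words, and tokenization are illustrative.
import re
from collections import Counter
from itertools import combinations

def top_terms(tweets, n=100, stopwords=frozenset({"the", "a", "to", "of", "is"})):
    """Return the n most frequent terms across tweets, ignoring stop words."""
    counts = Counter()
    for text in tweets:
        tokens = re.findall(r"[#@]?\w+", text.lower())
        counts.update(t for t in tokens if t not in stopwords)
    return counts.most_common(n)

def cooccurrence(tweets, vocab):
    """Symmetric counts of vocabulary terms appearing in the same tweet;
    a matrix like this could be fed to a t-SNE implementation
    (e.g. sklearn.manifold.TSNE) to obtain the 2-D clusters."""
    index = {t: i for i, t in enumerate(vocab)}
    matrix = [[0] * len(vocab) for _ in vocab]
    for text in tweets:
        present = {t for t in re.findall(r"[#@]?\w+", text.lower()) if t in index}
        for a, b in combinations(sorted(present), 2):
            matrix[index[a]][index[b]] += 1
            matrix[index[b]][index[a]] += 1
    return matrix

tweets = [
    "#chatgpt is the future of education",
    "education and #chatgpt: teachers worry about cheating",
    "cheating concerns aside, #chatgpt helps teachers",
]
terms = top_terms(tweets, n=5)
print(terms[0])  # → ('#chatgpt', 3)
```

The same shape of pipeline scales to the study's full tweet corpus: only the input list and the stop-word set change.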
The most frequently used relevant hashtags are #chatgpt, #AI, #ArtificialIntelligence, #education, #machinelearning, #deeplearning, #edtech, #openAI, and #python, which implies a need to carefully examine the AI technologies (e.g., machine learning, deep learning) underlying ChatGPT. As the sample tweets show (see Table 3), although there is an optimistic outlook on using ChatGPT in education, there are also concerns regarding the use of such technologies in the educational landscape.
Further sample tweets illustrate this range of views:

• As we develop our understanding and approaches to #AI #ChatGPT integration in #education, we should incorporate these key aspects: Critical Thinking, Ethical Considerations, Methods (language model used/data sources) & Prompt Skill Development
• As an educator who loves teaching a knowledge-rich curriculum, I think all of these responses miss the mark. The technology behind #ChatGPT will systematically change education. But will not fundamentally evolve the way humans learn
• My initial propositions: Let us change assessment practices to respond to the tech. Let us keep teaching our modern students a knowledge-rich curriculum. Let us proactively teach students how to harness the power of #ChatGPT which is scratching the surface of the potential of AI
• I get the concern… but the response is like burying heads in the sand. AI tools like this will be part of the world these children live in. They need to be taught how to use this—appropriately, ethically, safely & responsibly. #AI #Education #ChatGPT
• Whether we like it or not, AI in education is here. I asked #OpenAI #ChatGPT to help me with the early planning stages for an upper elementary 3D Design after-school club. In under a minute, I had a solid foundation to build upon. The tech is here, embrace it
• Not all of these will be massively helpful, but what #ChatGPT has done for education is made it significantly easier to create resources and activities in as little as 1 min. I will continue to play around for it, and look forward to when it can create graphs! /end
• It’s wild to think about how we’ve trained machines and now they’re teaching us! #MachineLearning #AI #ChatGPT
• AI technology may be rapidly advancing, but so is AI regulation. While a variety of state-based AI-related bills have been passed in the U.S and also to mention the EU AI Act and UK and Singapore AI and Machine Learning regulations. more AI regulations to come. #ChatGPT #AI
• So with all the focus around AI text generators like #chatGPT on student "cheating", do educators see this as cheating too? What’s good for the goose is good for the gander, surely?
• The existential crisis happening in education because of #ChatGPT is kind of ironic to me. School systems around the world are so focused on grading and busy work that they’ve forgotten the purpose of education: learning

To summarize, the findings from the Social Network Analysis of tweets revealed that positive sentiments occurred almost twice as frequently as negative ones (see Table 2). However, the example tweets show that the negative sentiments demonstrate deeper and more critical thinking than the positive ones (see Table 3). This could be explained by the fact that most of the positive sentiments are driven by the novelty effect of ChatGPT as a technology in education. The negative sentiments, on the other hand, represent more critical concerns, and hence deeper and more thorough thinking about why ChatGPT should be approached with caution.
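As a rough illustration of how positive and negative tweet frequencies like those in Table 2 could be tallied, here is a minimal lexicon-based sentiment sketch. The word lists and tweets below are invented for illustration; they are not the study's actual sentiment model or data.

```python
# Minimal lexicon-based sentiment tally: classify each tweet by counting
# hits against small positive/negative word lists, then count the labels.
# The lexicons and tweets are illustrative assumptions, not the study's data.
import re
from collections import Counter

POSITIVE = {"amazing", "love", "helpful", "great", "embrace", "potential"}
NEGATIVE = {"cheating", "dangerous", "concern", "risk", "ban", "worry"}

def sentiment(text):
    """Label a tweet 'positive', 'negative', or 'neutral' by lexicon hits;
    ties and zero hits fall back to 'neutral'."""
    tokens = re.findall(r"\w+", text.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

tweets = [
    "ChatGPT has amazing potential for education, embrace it!",
    "My concern is cheating on assignments",
    "I asked ChatGPT to plan a lesson",
    "This tool is great and helpful for teachers",
]
tally = Counter(sentiment(t) for t in tweets)
print(tally)  # → Counter({'positive': 2, 'negative': 1, 'neutral': 1})
```

Production sentiment tools use far richer models, but even this toy tally shows how a corpus-level positive/negative ratio such as Table 2's is derived from per-tweet labels.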
Educational transformation
Responses from a majority of the participants suggest that ChatGPT is efficacious in increasing the chances of educational success by affording users (teachers and students) baseline knowledge of various topics. Additionally, participants recognized ChatGPT as efficient at providing a comprehensive understanding of varied (complex) topics in easy-to-understand language. In this light, it can be argued that ChatGPT will lead to a paradigm shift in conventional approaches to instruction delivery and drive learning reform in a future rich with digital potential. For instance, one participant reported:
“I would use ChatGPT for two purposes: as a learning aid and in instructional
design within the field of education. For students, ChatGPT can provide learners
with model answers that can stimulate their understanding of various subject mat-
ters. Additionally, in terms of instructional design, ChatGPT can be a useful tool
for teachers and educators to remind them of what knowledge and skills should
be included in their curriculum, by providing an outline” (Assistant Professor of
Instructional Technology, USA, familiarity is: 2).
Conversely, a few participants held the opposing view that learners' abuse of ChatGPT can diminish their innovative capacities and critical thinking. For instance, when learners are not motivated, the probability of seeking an easy solution is high, as can be deduced from one participant's statement:
“Sometimes when I have no inspiration for writing a thesis, I will choose to use this
software to input the answers to the questions I want to know” (Student of Educa-
tion, China, familiarity is: 4).
Response quality
Response quality is vital to the success and effective adoption of Chatbots for school
operations. In this study, most of the participants evaluated the dialogue quality and
the degree of accurate information ChatGPT provides as satisfactory. However, it was
added that the conversational agent is prone to occasional errors and limited informa-
tion (presently, as reported by OpenAI, the data ChatGPT provides is limited to 2021).
That is, responses from ChatGPT were mostly reasonable and reliable, but were at times accompanied by misleading information. This indicates that the output quality of ChatGPT, though acceptable, needs to be enhanced. An example given by one participant (a programmer) was the generation of incorrect code that did not work properly when entered into programming software. Nonetheless, given its relatively few errors, some participants praised ChatGPT as an efficient virtual assistant for constructing knowledge and products. For instance, one participant stated:
“The answers from ChatGPT can be somewhat accurate but not totally. For exam-
ple, when I couldn’t figure out how to write codes for a specific problem, the answers
are vague and cannot totally solve my problem. I need to figure it out by myself using
the experience I had” (Student of Geography, China, familiarity is: 2).
Another participant elaborated that the quality of the answers obtained from ChatGPT depends on the quality of the questions asked by the user:
“It depends on the type of questions that you ask. If it is too recent, then the answers
won’t be too good, because ChatGPT lacks context, if you do not provide it with
questions that are specific enough then its answers wouldn’t be too good” (Developer,
USA, familiarity is: 3).
Personality and emotion
Several participants reflected on ChatGPT's human-like personality and the emotions it evokes. For instance, one participant noted:
“I don’t think it can be compared to a real human being, and what it offers is not
comparable to what a real person would say through genuine empathy. And in dia-
logue, it would say "As an AI, I don’t have the ability to love or feel emotions as
humans do, but I am here to assist you with any question or task you have.” (Student
of Nursing Research, UK, familiarity is: 3).
"…the first time I used it I freaked out because it is too human, the way it talks feels
like my personal tutor, after it answered a lot of my elementary questions “patiently”
I feel grateful to it, just as how I would feel if my tutor does this for me, and it makes
me creepy because I sensed that I am having an emotional attachment to it. And
another impressive experience was when I found out that it provided wrong article
information I feel frustrated, because I trusted it in my study and if it can make
something logical from nonsense, then I don’t feel safe to trust it anymore, it is kind
like lost a good teacher whom I can depend on." (Student of Education, China,
familiarity is: 4).
Usefulness
The specificity and relevant information provided by ChatGPT on diverse disciplines
(e.g. science, history, business, health, technology, etc.) or topics made many of the users
in the study perceive it as useful. A participant also mentioned that it has the capability
to lessen the instructional workload of teachers and provide students with immediate
feedback. Despite the perceived usefulness of ChatGPT, some users encountered chal-
lenges with the accuracy of responses, the provision of alternative answers or responses
which at times contradict previous answers provided on the same topic, and its limited
ability to provide certain contextual information, as one participant stated:
“ChatGPT has limited knowledge bases for searching academic resources in certain
contexts. For example, finding lists of famous researchers in specific academic fields
appears limited. …If a user needs in-depth and contextual information, ChatGPT’s
functionality is limited” (Assistant Professor of Instructional Technology, USA,
familiarity is: 2).
Another participant pointed out the need for more functionalities, such as the possibility of making annotations, to make ChatGPT more useful:
“It lacks functions like editing, making a note or searching for certain information
in the previous conversation, but I consider these functions are pretty convenient for
Ethics
The ethical concerns raised by participants in the study include encouraging plagiarism and cheating, the tendency to breed laziness among users (particularly students), and proneness to errors such as the provision of biased or fake information. Additionally, based on their experience, some participants pointed out random inaccuracies and vagueness in ChatGPT's responses on relevant topics. This made some participants at times doubt the trustworthiness of the information provided; they noted that ChatGPT's output often reads like opinion offered without references. Another ethical challenge for users in this study was ChatGPT's likelihood of reducing students' critical thinking. For instance, one participant stated:
“A major concern of ChatGPT is the creation of fake and plausible information gen-
erated by computers rather than human decision-making. There are ethical con-
cerns about students relying too heavily on answers without being aware of their
veracity. Guidelines to promote critical thinking when using ChatGPT in future
research would be necessary” (Assistant Professor of Instructional Technology, USA
familiarity is: 2).
Some participants were also concerned about exposing their private and demographic
information to ChatGPT through repetitive interactions. For instance, a participant
stated:
“There is a data security risk, which is included in the interaction with ChatGPT,
which may expose personal privacy (age, gender, address, contact information, hob-
bies, even capital account and other personal privacy). Much of this personal infor-
mation is exposed in the user’s unconscious communication process. Whether the
legality of data acquisition and data processing methods are limited by relevant
laws and regulation” (Developer, USA, familiarity is: 3).
…in education using chatbots. Therefore, someone might ask how to effectively detect and prevent cheating using ChatGPT in education.

…answer, which is a well-structured table that could be easily read and remembered (see Fig. 5a), while this was not the case for Educator 2 or 3 (see Fig. 5b, c). Therefore, someone might ask how to ensure fair access and treatment for all users (teachers, students, etc.) to the same updated, high-quality learning content.
Fig. 5 The three different answers to the exact same prompt by the three educators
Fig. 8 The responses of ChatGPT to the conversation scenarios of correcting spelling mistakes
Discussion
This study conducted a user experience investigation, supported by qualitative and sentiment analyses, to reveal users' perceptions of ChatGPT in education. It specifically focused on the concerns that different stakeholders (e.g., policymakers, educators, learners) should keep in mind when using ChatGPT as a technology in education. The results revealed that ChatGPT has the potential to revolutionize education in different ways, as also reported in several studies (Firat, 2023; Susnjak, 2022; Zhai, 2022). However, several concerns about using ChatGPT in education (the focus of the present study) were identified and discussed from different perspectives, as follows:
One of the interviewees also stated: “… the accuracy of refining the essence of concepts is relatively high. For the differences between concepts, ChatGPT can refine to a certain extent, and provide answers from some framework perspective, but it cannot compare the deep differences between the two concepts” (Consultant, China, familiarity is: 2). What is more worrying is that the exact same prompt used by different users might lead to different answers of different quality (see scenario 3). This raises concerns about fair access to the same educational material despite using the same prompt. For instance, Kung et al. (2023) found the accuracy of ChatGPT to be around 60%, demanding careful assessment of its output before use. Therefore, more research should focus on ensuring fairness, accuracy, and equity among students using chatbots generally, and ChatGPT particularly, which might be achieved through, for instance, transparent and open algorithms (Bulathwela et al., 2020). In this context, future research could investigate how to ensure that chatbots cater to the diverse needs and backgrounds of students, especially those with disabilities, and how to address issues of fairness and equity in the use of chatbots, particularly for disadvantaged or marginalized students.
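The kind of accuracy assessment that Kung et al. (2023) performed on exam questions can be sketched in miniature: grade a set of chatbot answers against a reference key and report the fraction correct. Everything below (the questions, key, answers, and exact-match grading rule) is a hypothetical simplification; grading real exam responses requires expert review rather than string comparison.

```python
# Sketch of assessing chatbot answer accuracy against an answer key.
# Questions, key, and chatbot answers are invented for illustration.

def accuracy(chatbot_answers, answer_key):
    """Fraction of questions where the chatbot's answer matches the key
    (case-insensitive exact match; a deliberate simplification)."""
    if not answer_key:
        raise ValueError("empty answer key")
    correct = sum(
        chatbot_answers.get(q, "").strip().lower() == a.strip().lower()
        for q, a in answer_key.items()
    )
    return correct / len(answer_key)

answer_key = {"q1": "Paris", "q2": "mitochondria", "q3": "1945"}
chatbot_answers = {"q1": "Paris", "q2": "ribosome", "q3": "1945"}
print(accuracy(chatbot_answers, answer_key))  # → 0.666...
```

An accuracy figure like the reported ~60% is exactly this kind of ratio computed over a large question bank, which is why per-answer verification before classroom use matters.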
Skjuve et al. (2022) stated that most of the developed chatbots are task-oriented and
do not ensure social relational qualities, such as sharing history and allowing personal
intimacy. Hudlicka (2016) further stated the importance of considering virtual rela-
tionships, where students interact with virtual agents, to enhance learning outcomes.
Future research should, therefore, focus on how to provide humanized chatbots in
education by relying, for instance, on various theories that focus on understanding relationship formation between humans, such as social exchange theory (Cook et al., 2013), Levinger's ABCDE model (Levinger, 1980), and social penetration theory (SPT; Altman & Taylor, 1973).
It is also crucial to investigate how human–chatbot relationships might impact stu-
dents’ learning outcomes.
On the other hand, some researchers took humanization to another level by treating
ChatGPT as a human, where they listed it as one of the co-authors in an article pub-
lished in an academic journal (O’Connor & ChatGPT, 2023). This raises various con-
cerns about the regulatory laws of humanizing and treating intelligent chatbots. For
example, would it be ethical for a journal to treat ChatGPT as a human and accept it
as a co-author? What if a magazine staff took credit for articles authored by chatbots?
What are the standards of personhood in academic writing? This brings to mind the monkey selfie case and the concepts of originality (Guadamuz, 2016), authorship (Rosati, 2017), and copyright (Guadamuz, 2018).
Abbreviations
AI Artificial intelligence
GPT Generative pre-trained transformer
ICT Information and communication technology
SNA Social network analysis
t-SNE T-distributed stochastic neighbor embedding
Acknowledgements
Not applicable.
Author contributions
Each author contributed evenly to this manuscript. All authors read and approved the final manuscript.
Funding
Not applicable.
Declarations
Competing interests
The authors declare that they have no competing interests.
References
Altman, I., & Taylor, D. A. (1973). Social penetration: The development of interpersonal relationships. Holt, Rinehart Winston.
Asimov, I. (1942). Runaround. Astounding Science Fiction.
Asimov, I. (1950). I, Robot. Gnome Press.
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Ben-
jamins, R., Chatila, R., & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities
and challenges toward responsible AI. Information Fusion, 58, 82–115. [Link]
Beccari, M. N., & Oliveira, T. L. (2011). A philosophical approach about user experience methodology. In International
Conference of Design, User Experience, and Usability (pp. 13–22). Springer, Berlin
Bozkurt, A. (2022). Biased binaries. Postdigital Science and Education. [Link]
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., & Amodei, D. (2020). Language models are few-shot
learners. arXiv preprint arXiv:2005.14165.
Bulathwela, S., Perez-Ortiz, M., Yilmaz, E., & Shawe-Taylor, J. (2020). Truelearn: A family of bayesian algorithms to match
lifelong learners to open educational resources. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01),
565–573. [Link]
Čapek, K. (1921). Rossum’s Universal Robots.
Cook, K. S., Cheshire, C., Rice, E. R., & Nakagawa, S. (2013). Social exchange theory. In Handbook of social psychology (pp.
61–88). Springer, Dordrecht.
Durall, E., & Kapros, E. (2020). Co-design for a competency self-assessment Chatbot and survey in science education. In
P. Zaphiris & A. Ioannou (Eds.), Learning and collaboration technologies human and technology ecosystems HCII 2020
lecture notes in computer science. Cham: Springer.
Erlingsson, C., & Brysiewicz, P. (2017). A hands-on guide to doing content analysis. African Journal of Emergency Medicine,
7(3), 93–99. [Link]
Firat, M. (2023). How chat GPT can transform autodidactic experiences and open education? [Link]
osf.io/9ge8m
Flick, U. (2009). An introduction to qualitative research (4th ed.). SAGE.
Fryer, L. K., Nakao, K., & Thompson, A. (2019). Chatbot learning partners: Connecting learning experiences, interest and
competence. Computers in Human Behavior, 93, 279–289. [Link]
Giachanou, A., & Crestani, F. (2016). Like it or not: A survey of Twitter sentiment analysis methods. ACM Computing Surveys
(CSUR), 49(2), 1–41.
Guadamuz, A. (2016). The monkey selfie: Copyright lessons for originality in photographs and internet jurisdiction. Inter-
net Policy Review. [Link]
Guadamuz, A. (2018). Can the monkey selfie case teach us anything about copyright law? WIPO Magazine, 1, 40–46.
Hansen, D., Shneiderman, B., & Smith, M. A. (2010). Analyzing social media networks with NodeXL: Insights from a connected
world. Morgan Kaufmann.
Haque, M. U., Dharmadasa, I., Sworna, Z. T., Rajapakse, R. N., & Ahmad, H. (2022). I think this is the most disruptive technology:
Exploring sentiments of ChatGPT early adopters using Twitter data. arXiv preprint arXiv:2212.05856.
Harel, D., & Koren, Y. (2001). A Fast Multi-Scale Method for Drawing Large Graphs. In Graph Drawing: 8th International Sym-
posium, GD 2000. Colonial Williamsburg, VA, USA, September 20–23, 2000, Proceedings (No. 1984, p. 183). Springer
Science & Business Media.
Herft, A. (2023). A Teacher’s Prompt Guide to ChatGPT: Aligned with ’What Works Best’. CESE NSW "What Works Best
in Practice. [Link]
ggq4zU-81FiI8j4BAOp5HqWHC_Ecy2sqKk4EiWXL0FKa5GVz5dE
Hudlicka, E. (2016). Virtual affective agents and therapeutic games. In Artificial intelligence in behavioral and mental
health care (pp. 81–115). Academic Press. [Link]
Inwood, B. (Ed.). (2003). The Cambridge companion to the Stoics. Cambridge University Press.
Janssen, A., Grützner, L., & Breitner, M. H. (2021). Why do chatbots fail? A critical success factors analysis. In International
Conference on Information Systems (ICIS), Forty-Second International Conference on Information Systems
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Kasneci, G. (2023). ChatGPT for good? On
opportunities and challenges of large language models for education. [Link]
King, M. R., & chatGPT. (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cel-
lular and Molecular Bioengineering, 16, 1–2. [Link]
Kuhail, M. A., Alturki, N., Alramlawi, S., et al. (2023). Interacting with educational chatbots: A systematic review. Education
and Information Technologies, 28, 973–1018. [Link]
Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., et al. (2023). Performance of ChatGPT on
USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health, 2(2), e0000198.
[Link]
Levinger, G. (1980). Toward the analysis of close relationships. Journal of Experimental Social Psychology, 16(6), 510–544.
[Link]
Lieberman, M. (2023). What Is ChatGPT and How Is It Used in Education?. Education Week. [Link]
ology/what-is-chatgpt-and-how-is-it-used-in-education/2023/01
McCarthy, J., Minsky, M., Rochester, N., & Shannon, C. (1955). A proposal for Dartmouth summer research project on
artificial intelligence. AI Magazine, 27, 12.
Mollick, E. R., & Mollick, L. (2022). New modes of learning enabled by AI chatbots: Three methods and assignments. SSRN
Electronic Journal. [Link]
O’Connor, S., & ChatGPT. (2023). Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Education in Practice, 66, 103537. [Link]
Ofgang, E. (2022). What is ChatGPT and how can you teach with it? Tips & tricks. Tech & Learning. [Link]
OpenAI. (2023). ChatGPT: Optimizing language models for dialogue. [Link]
Rainie, L. (2014). The six types of Twitter conversations. Pew Research Center. [Link]
Rosati, E. (2017). The monkey selfie case and the concept of authorship: An EU perspective. Journal of Intellectual Property
Law & Practice, 12(12), 973–977.
Schmid, R. F., Bernard, R. M., Borokhovski, E., Tamim, R., Abrami, P. C., Wade, C. A., & Lowerison, G. (2009). Technology’s
effect on achievement in higher education: A stage I meta-analysis of classroom applications. Journal of Computing
in Higher Education, 21, 95–109. [Link]
Shen-Berro, J. (2023). New York City schools blocked ChatGPT. Here’s what other large districts are doing. Chalkbeat. [Link]
Skjuve, M., Følstad, A., Fostervold, K. I., & Brandtzaeg, P. B. (2022). A longitudinal study of human–chatbot relationships.
International Journal of Human-Computer Studies, 168, 102903. [Link]
Smith, M., Rainie, L., Shneiderman, B., & Himelboim, I. (2014). Mapping Twitter topic networks: From polarized crowds to community clusters. Pew Research Center. [Link]
Stake, R. E. (1995). The art of case study research: Perspective in practice. Sage.
Susnjak, T. (2022). ChatGPT: The end of online exam integrity? arXiv preprint arXiv:2212.09292.
The Guardian. (2023). New York City schools ban AI chatbot ChatGPT. The Guardian. [Link]
Thurmond, V. A. (2001). The point of triangulation. Journal of Nursing Scholarship, 33(3), 253–258. [Link]
Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. [Link]
van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(2008),
2579–2605.
Yin, R. K. (1984). Case study research: Design and methods. Sage.
Zhai, X. (2022). ChatGPT user experience: Implications for education. SSRN Electronic Journal. [Link]
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.