Original Article | Open Access | Asian J. Soc. Sci. Leg. Stud., 2026; 8(2), 579-587. | doi: 10.34104/ajssls.026.05790587

Kepler College Students' Perceptions on using Generative AI in Education: A Descriptive Perspective on ChatGPT 

Christian Shema Nsenga*,
Vedaste Nsengiyumva,
Jean d'Amour Nsabimana

Abstract

This study explores Kepler College students' perceptions of using generative AI, specifically ChatGPT, in education. Through a descriptive research design, data was collected from 134 students via an online survey to assess their familiarity with ChatGPT, its application in academic settings, concerns about academic integrity, and its potential impact on future employment. The findings reveal that 74% of students (mean = 3.74) are familiar with ChatGPT, and 88% (mean = 4.31) believe they should be explicitly taught how to use it for academic purposes. Students generally view ChatGPT as a helpful tool for learning, with 88% (mean = 4.4) agreeing that it is effective for learning and 83% (mean = 4.14) stating that it allows them to study more efficiently. However, concerns about accuracy persist, as only 54% (mean = 2.72) believe ChatGPT's answers are always accurate, and 66% (mean = 3.3) worry that it could make academic cheating easier. Additionally, 72% (mean = 3.61) think employers should consider ChatGPT proficiency a valuable skill, while 65% (mean = 3.25) believe it could threaten jobs. The study concludes with a proposed framework for a Generative AI Literacy Hub, which includes components such as introduction to ChatGPT, prompt writing, evaluating AI accuracy, and integrating AI with active learning. This framework aims to address students' concerns and promote the responsible use of generative AI in education.

Introduction

The use of generative AI has raised contentious issues in contemporary education. Teachers at all levels have been debating whether artificial intelligence is a tool to embrace or to reject. Academic institutions face growing pressure to help students use AI without compromising academic integrity, and they worry that students might lean on AI instead of exercising their critical thinking skills. Research has shown that although AI holds promise for improving educational outcomes, its implementation demands meticulous attention to ethical principles (Benouachane, 2024; Nistala et al., 2024).

Similarly, Aklam (2023) points out that ChatGPT holds a great deal of promise but may also cause serious problems if it is misused. This paper investigates students' perceptions of the use of generative AI, specifically ChatGPT, in the teaching and learning process.


Problem Statement 

In Rwanda, the Ministry of ICT has developed a national AI policy (MINICT, 2022) that serves both as a guideline for using AI for development in different areas, including education, and as a means of mitigating the risks and threats of AI. In this context, high-level discussions are being held about AI in education (Primature, 2023), and researchers are studying the use of AI in fields including, but not limited to, education and medicine (Iyamuremye & Ndihokubwayo, 2024; Whitestone et al., 2023), as are some academic institutions (Rwanda Polytechnic, 2023; Mohammadiounotikandi and Babaeitarkami, 2024).

A number of authors have written about students' perceptions of the use of ChatGPT in higher education settings (Shoufan, 2023; Singh et al., 2023). Shoufan's (2023) participants were 56 senior students taking a computer engineering course at Khalifa University, United Arab Emirates; the purpose of that study was to understand how students perceive ChatGPT in learning. The study concluded that students find ChatGPT interesting and helpful for study, and potentially helpful for future work. However, some students noted that answers generated by ChatGPT are not always accurate, and they worried about academic integrity when using it. In addition, Singh et al. (2023) conducted a study on perceptions of ChatGPT among 430 Master of Science in Computer Science students taking a Research Methods module at the University of Hertfordshire, UK. The study revealed that students knew about ChatGPT but were not using it for academic purposes, remained doubtful about its use, and requested more guidelines from the university.

Given the limited knowledge about students' perceptions of ChatGPT in higher learning institutions in Rwanda, this study seeks to bridge the gap by investigating the perceptions Kepler College students hold about using ChatGPT in their learning; the results can inform AI policies and prompt further studies in the field.

Research Objectives 

This study aims to evaluate Kepler College students' perceptions of ChatGPT usage, its application in academic settings, its impact on academic integrity, and its potential effects on employment opportunities.

Review of Literature

Theories on generative AI suggest that it functions as a cognitive tool that reduces learners' extraneous cognitive load (Lovell, 2020; Knowles, 2024). The theory underpinning this view is John Sweller's Cognitive Load Theory (CLT). According to the Medical College of Wisconsin (2022), Cognitive Load Theory is based on a model of human information processing. The theory is framed on the central tenet that human cognitive architecture has an information processing system that generates various procedures designed to reduce cognitive load and facilitate learning (Howley-Rouse, 2021). Information enters the sensory memory and is transferred to working memory, where it is processed before being sent to long-term memory, where it is stored (Garnett, 2020). Long-term memory stores information in codes referred to as "schemas" (Lovell, 2020). AI's capacity to cater to learners' cognitive needs aligns with Cognitive Load Theory: it optimizes the use of working memory and reduces cognitive load as users share the load with AI tools (Knowles, 2024; Newport, 2024). Hence, this theory guides the present study, since AI can be used in education to reduce unnecessary cognitive strain and let students engage deeply with their ideas.

Artificial Intelligence

The idea of artificial intelligence (AI) emerged in the 1950s with the purpose of emulating human beings' intellectual abilities (Antebi, 2021). An early definition of AI was "the science of making machines do things that would require intelligence if done by men." AI is a branch of computer science (Data Science and Artificial Intelligence, n.d.) and is closely related to data science, given the amount of data it uses (Antebi, 2021). Chan and Colloton (2024) list three types of AI according to the extent of machine intelligence: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). This study collects students' views on using generative AI, specifically ChatGPT, to search for and generate information.

ChatGPT: a Generative AI tool

Generative AI relies on machine learning to create new content from data (Chan & Colloton, 2024). Chan and Colloton explain that generative AI has various applications, including "image and audio synthesis, and text, code and video generation." UNESCO (2023) argues that even though generative AI can generate new content, it cannot find new solutions to real-world issues because it does not understand the social meaning that real language conveys. This is echoed by OpenAI, the developer of one of the most widely used generative AI tools, ChatGPT, which states that "while tools like ChatGPT can often generate answers that sound reasonable, they cannot be relied upon to be accurate" (OpenAI, 2023). The current model at the time of writing, GPT-4 (Generative Pre-trained Transformer 4), builds on OpenAI's GPT-3.5 and is reported, though not confirmed, to use about 1,000,000 GB of training data, compared with 17,000 GB for GPT-3 (UNESCO, 2023; Singh et al., 2023); newer GPT versions continue to be released commercially.

ChatGPT in Education 

In education, views on whether to use ChatGPT diverge (Singh et al., 2023). Proponents point to its ability to help students learn to write, read, think critically, and solve problems, as well as its potential to support teachers in lesson planning, grading, and professional development. Other authors highlight challenges in using ChatGPT, including but not limited to copyright issues, growing over-reliance on it by both students and teachers, limited knowledge of how to integrate it into teaching effectively, and plagiarism (Kasneci et al., 2023). Some of these challenges need to be addressed by governmental agencies through regulation of generative AI in education (UNESCO, 2023). ChatGPT was launched in November 2022 (Knowles, 2024), and its arrival marked a new era in the history of artificial intelligence and education. Researchers have since advocated the integration of AI into education (Manke, 2023; Pohl, 2023; Wu, 2023).

Methodology

Research design

This study employs a descriptive research design to assess Kepler College students' views on the use of generative artificial intelligence tools, such as ChatGPT, for searching and generating information. The study is quantitative in nature, as the researchers coded participants' responses and converted them into numerical data for analysis. It also adopts a cross-sectional approach, with data collected at a single point in time in 2025 through a voluntary, self-administered online questionnaire sent to students' email addresses at Kepler College.

Sampling and Sample size

The study involved a population of 344 participants, comprising students from the Kepler College Project Management cohorts of 2022 and 2023, as well as the Business Analytics cohort of 2024. A sample of 185 respondents was selected using a random sampling technique to ensure equal access for all students. The sample size was calculated using Yamane's formula, shown below, to accurately represent the student body:

    n = N / (1 + N × e^2)

where:

n = sample size

N = population size (344 in this case)

e = margin of error (0.05 for a 95% confidence level)

Substituting the values into the formula:

n = 344 / (1 + 344 × (0.05)^2)

n = 344 / 1.86

n ≈ 185
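As a quick check, the calculation above can be reproduced in a few lines of Python (an illustrative sketch; the function name is ours, not part of the study):

```python
import math

def yamane_sample_size(population: int, margin_of_error: float = 0.05) -> int:
    """Yamane's simplified formula: n = N / (1 + N * e^2), rounded up."""
    n = population / (1 + population * margin_of_error ** 2)
    return math.ceil(n)

# Population of 344 students at a 5% margin of error:
print(yamane_sample_size(344))  # 344 / 1.86 ≈ 184.9 → 185
```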

Data Collection Technique and Analysis

Data were gathered through a structured online questionnaire distributed via Google Forms. The survey covered students' familiarity with ChatGPT, its use in academics, concerns about plagiarism and academic integrity, and its potential impact on future employment. The collected data were analyzed using SPSS, a statistical package commonly used for quantitative data analysis. Descriptive statistics such as the mean and standard deviation were calculated to provide an overview of the central tendency and variability in students' perceptions of the tool's usefulness in academic settings.
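The study used SPSS, but equivalent descriptive statistics can be computed with Python's standard library. The sketch below is purely illustrative: the response values are hypothetical, not the study's actual data.

```python
from statistics import mean, stdev

# Hypothetical 5-point Likert responses (1 = strongly disagree ... 5 = strongly agree)
# for a single survey item such as "ChatGPT is effective for learning".
responses = [4, 5, 3, 4, 5, 4, 2, 5, 4, 3]

print(f"mean = {mean(responses):.2f}")   # central tendency of the item
print(f"sd   = {stdev(responses):.2f}")  # variability (sample standard deviation)
```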

Results and Discussion

This chapter presents the findings of the study, analyzing the data collected from the research instruments. Table 1 below summarizes the response rate from the online survey, highlighting the percentage of participants who successfully completed and returned the survey.

Table 1: Research Instrument Return Rate.

Table 1 shows the questionnaire return rate. A total of 134 students responded to the survey out of 186 sampled, a return rate of 80%. One reason for this rate is that the KC Project Management cohort of 2023 was academically active during the research period, while the cohort of 2022 was not, and the Business Analytics cohort of 2024 was newly enrolled in Year 1. Nevertheless, the overall return rate of 80% still exceeds the average questionnaire return rate of 68% reported for research conducted in 2020 (Holtom et al., 2022).

Demographics 

In this paper, demographic data on the respondents were collected in terms of student cohort, as shown below.

Fig. 1: Demographic analysis of the respondents.

Fig. 1 presents the demographic analysis of the 134 respondents from Kepler College. With an 80% return rate from the Google Form distributed to students, the findings reveal that 54.5% of respondents belong to the Project Management cohort of 2023, indicating a robust presence of actively engaged students on campus. In contrast, 23.9% of respondents belong to the Project Management cohort of 2022, who were participating in internships off campus at the time of the survey. Finally, the KC 2024 cohort, comprising 21.6% of respondents, consists of new students at Kepler College, reflecting their initial experiences and integration into the college environment.

Familiarity and Educational Use of ChatGPT

The findings from the survey reveal several key insights into Kepler College students' perceptions of ChatGPT and its role in education, as explained below. 

Table 2: Familiarity and Educational Use of ChatGPT.

The majority of students (mean = 3.74) reported being familiar with ChatGPT, indicating that the tool is widely recognized among the student body. This aligns with the strong agreement (mean = 4.31) that students should be explicitly taught how to use ChatGPT for their studies; this high mean score underscores students' desire for training in using ChatGPT effectively in their academic work. Furthermore, students overwhelmingly view ChatGPT as a helpful and effective technology for learning (mean = 4.4), with a majority also agreeing that it allows them to study more efficiently (mean = 4.14). In summary, these findings suggest that students recognize the potential of ChatGPT to enhance their learning experiences, provided they receive proper training in its use.

Concerns about Academic Integrity and Accuracy

The study also intended to gather students' perceptions on their concerns about Academic integrity and the accuracy of ChatGPT in providing responses to prompts. 

Table 3: Concerns about Academic Integrity and Accuracy.

Despite the positive perceptions of ChatGPT's utility, students expressed concerns about its accuracy and potential impact on academic integrity. The mean score for the statement "ChatGPT answers are always accurate" was relatively low (mean = 2.72), indicating that students are skeptical about the reliability of the answers from ChatGPT. This skepticism is further reflected in the moderate agreement (mean = 3.79) that tools like Turnitin or other AI-text detection systems can distinguish between AI-generated and human-written text. Additionally, students expressed concerns that ChatGPT could make academic cheating easier (mean = 3.3) and negatively affect learning by allowing students to find answers without effort (mean = 2.93). These findings highlight the need for institutions to address ethical concerns and implement measures to ensure academic integrity while using AI tools.

Comparative Advantage and Prompt Writing

Table 4: Comparative Advantage and Prompt Writing.

When comparing ChatGPT to other search engines and databases, students were moderately positive about its advantages. The mean score for "ChatGPT is better than other search engines like Google" was 3.43, while the mean for "ChatGPT is better than online databases like EBSCO and JSTOR" was slightly lower at 3.17. This suggests that while students see ChatGPT as a useful tool, they do not necessarily view it as markedly superior to traditional research methods. Additionally, students reported moderate confidence in writing prompts for ChatGPT (mean = 3.44), with a majority acknowledging that formulating questions for the tool can be tricky (mean = 3.43). This indicates room for improvement in teaching students how to craft effective prompts to maximize the utility of ChatGPT.

Table 5: Impact on Employment.

Students also considered the implications of ChatGPT for their future careers. A majority (mean = 3.61) agreed that employers should consider proficiency in using ChatGPT a valuable skill for new hires, reflecting the growing importance of AI literacy in the job market. However, there was also concern that ChatGPT could threaten jobs (mean = 3.25), indicating that students are aware of the potential disruptions AI could bring to the workforce. This duality in perceptions suggests that while students see the value of AI skills, they are also cautious about the broader implications of AI adoption in the workplace.

Overall, the findings suggest that Kepler College students are cautiously optimistic about the use of ChatGPT in education. They recognize its potential to enhance learning efficiency and effectiveness but are also aware of its limitations, particularly regarding accuracy and academic integrity. The strong desire for explicit training on how to use ChatGPT (mean = 4.31) indicates that students are eager to integrate AI tools into their studies but need guidance to do so responsibly. The concerns about academic dishonesty and the moderate confidence in prompt writing further highlight the need for structured training programs, such as the proposed Generative AI Literacy Hub, to help students navigate the complexities of using generative AI in education. By addressing these challenges, educators can ensure that ChatGPT is used as a complementary tool that enhances, rather than undermines, the learning process.

Recommendations 

Having presented, analyzed, and interpreted the findings on Kepler College students' perceptions of generative AI in education, we propose a conceptual framework for a training program. The Generative AI Literacy Hub is built on the basic idea of Cognitive Load Theory: students learn better when their mental effort is not consumed by unnecessary tasks. By helping students ask clearer questions, use ChatGPT properly, and check its answers, the framework reduces extraneous mental "noise" and lets them focus on understanding the content. In this way, ChatGPT takes over some routine work so that students can devote their limited mental energy to deeper thinking, which is exactly what Cognitive Load Theory recommends for effective learning.

The Generative AI Literacy Hub framework presented in Fig. 1 provides a comprehensive approach to introducing and integrating ChatGPT into the classroom, informed by students' feedback.

The framework consists of four main components:

  • Introduction to ChatGPT: This component focuses on teaching students how to use ChatGPT effectively, ensuring they understand the technology's strengths and limitations. By teaching students the proper use of ChatGPT, the framework aims to align their understanding with the desired learning outcomes.
  • Prompt Writing: Because the data show that students are only moderately confident in writing prompts, this component focuses on improving their prompt-writing skills. Effective prompt design is crucial for obtaining useful responses from generative AI models.
  • Evaluating AI Accuracy: This component addresses the concern that students are unable to evaluate the accuracy of ChatGPT's responses. It emphasizes teaching students to critically evaluate the information the model provides and to verify it against other reliable sources. This skill is essential for developing students' critical thinking and information literacy.
  • AI Use with Active Learning: This component aims to mitigate the concern that ChatGPT could negatively affect learning. By promoting ChatGPT as a complementary tool rather than a replacement for critical thinking and problem-solving, the framework encourages students to engage actively with the learning process.

Conclusion

To sum up, the study found that Kepler College students view ChatGPT as an important learning tool that offers clear benefits but also raises serious questions. They report high familiarity with the tool and appreciate its usefulness for efficient studying and as a future employability skill, yet they remain uneasy about the accuracy of its answers and the risk of its facilitating cheating and shallow learning. Students do not regard ChatGPT as a replacement for traditional search engines or academic databases, but they express a strong need for guidance, especially in writing effective prompts and evaluating AI-generated information. In response, this study proposes establishing a Generative AI Literacy Hub at Kepler College to introduce students to generative AI, strengthen their prompt-writing skills, train them to critically assess AI outputs, and support staff and students in integrating AI into teaching and learning in a responsible, ethically grounded way.

Ethical Clearance

This research was conducted in accordance with the ethical standards of the institutional research committee. Ethical clearance for the study was obtained from the Kepler College Directorate of Research and Community Service. All participants were informed about the objectives of the research, and their voluntary consent was obtained prior to participation. Confidentiality and anonymity of all participants were strictly maintained throughout the study.

Author contributions

C.S.N. conceived and designed the study and collected and analyzed the data. V.N. contributed to the literature review and proofreading, while J.d'A.N. and N.U. supported the interpretation of the results, proofreading, and drafting of the manuscript. All authors reviewed, edited, and approved the final version of the manuscript.

Acknowledgment

The researchers sincerely thank the respondents for their participation and express gratitude to all individuals who provided guidance and support in the completion of this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this research article. All authors have read and approved the final version of the manuscript and have no financial or personal relationships that could have influenced the work reported in this paper.


Article References:

  1. Antebi, L. (2021). What is artificial intelligence? Artificial Intelligence and National Security in Israel. Institute for National Security Studies, pp. 31–40. https://www.jstor.org/stable/resrep30590.7  
  2. Chan, C. K. Y., & Colloton, T. (2024). Generative AI in higher education: The ChatGPT effect. Taylor & Francis. https://doi.org/10.4324/9781003459026  
  3. IU International University of Applied Sciences, (n.d.). Data science and artificial intelligence: Key differences. https://www.iu.org/blog/programme-comparison/ai-vs-data-science/   
  4. Garnett, S. (2020). Cognitive load theory: A handbook for teachers. Crown House Publishing.
  5. Holtom, B., Baruch, Y., & Ballinger, G. A. (2022). Survey response rates: Trends and a validity assessment framework. Human Relations, 75(8). https://doi.org/10.1177/00187267211070769  
  6. Howley-Rouse, A. (2021). An introduction to cognitive load theory. The Education Hub. https://theeducationhub.org.nz/an-introduction-to-cognitive-load-theory/
  7. Iyamuremye, A., & Ndihokubwayo, K. (2024). Exploring secondary school students' interest and mastery of atomic structure and chemical bonding through ChatGPT. Educational J. of Artificial Intelligence and Machine Learning, 1, 1–13. https://doi.org/10.58197/prbl/9hk37296  
  8. Kasneci, E., et al. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274  
  9. Knowles, A. M. (2024). Machine-in-the-loop writing: Optimizing the rhetorical load. Computers and Composition, 71. https://doi.org/10.1016/j.compcom.2024.102826  
  10. Lovell, O. (2020). Sweller's cognitive load theory in action. John Catt Educational. https://www.goodteaching.ca/uploads/6/0/4/9/60496921/swellers_cognitive_load_theory.pdf  
  11. Medical College of Wisconsin. (2022). Cognitive load theory: A guide to applying cognitive load theory to your teaching. https://www.mcw.edu/-/media/MCW/Education/Academic-Affairs/OEI/Faculty-Quick-Guides/Cognitive-Load-Theory.pdf       
  12. MINICT, (2022). The National AI Policy. Republic of Rwanda - Ministry of ICT and Innovation.
  13. Mohammadiounotikandi A., and Babaeitarkami S. (2024). The ethics of artificial intelligence: balancing progress with responsibility, Int. J. Mat. Math. Sci., 6(2), 30-37. https://doi.org/10.34104/ijmms.024.030037 
  14. Newport, C. (2024). What kind of writer is ChatGPT? The New Yorker. https://www.newyorker.com/culture/annals-of-inquiry/what-kind-of-writer-is-chatgpt  
  15. Nistala, S., Lu, Y., & Huang, Z. (2024). Balancing Innovation and Ethics: Transformative Potential and Ethical Challenges of Generative AI in Education. http://dx.doi.org/10.2139/ssrn.5031907 
  16. Primature, (2023). PM Ngirente received the Southern New Hampshire University (SNHU) President. https://www.primature.gov.rw/news-detail/pm-ngirente-received-the-southern-new-hampshire-university-snhu-president
  17. Rwanda Polytechnic, (2023). RP-IPRC Gishari brings to a close the 2023 AI-powered next-generation entrepreneurship training.
  18. Shoufan, A. (2023). Exploring students' perceptions of ChatGPT: Thematic analysis and follow-up survey. IEEE Access, 11, 38805–38818. https://doi.org/10.1109/ACCESS.2023.3268224  
  19. Singh, H., Tayarani-Najaran, M.-H., & Yaqoob, M. (2023). Exploring computer science students' perception of ChatGPT in higher education: A descriptive and correlation study. Education Sciences, 13(9), 924. https://doi.org/10.3390/educsci13090924  
  20. Tabassum T., and Ali MM. (2025). Artificial intelligence and fintech: redefining the landscape of financial services, Can. J. Bus. Inf. Stud., 7(4), 417-425. https://doi.org/10.34104/cjbis.025.04170425 
  21. UNESCO, (2023). Guidance for generative AI in education and research. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research  
  22. Wang JF. (2023). The impact of artificial intelligence (AI) on customer relationship management: a qualitative study, Int. J. Manag. Account. 5(5), 74-88. https://doi.org/10.34104/ijma.023.0074090 
  23. Whitestone, N., Nkurikiye, J., & Mathenge, W. (2023). Feasibility and acceptance of artificial intelligence-based diabetic retinopathy screening in Rwanda. British J. of Ophthalmology. https://doi.org/10.1136/bjo-2022-322683  

Article Info:

Academic Editor

Dr. Antonio Russo, Professor, Faculty of Humanities, University of Trieste, Friuli-Venezia Giulia, Italy

Received

March 7, 2026

Accepted

April 10, 2026

Published

April 17, 2026

Article DOI: 10.34104/ajssls.026.05790587

Corresponding author

Christian Shema Nsenga*

Assistant Lecturer and Library Manager, Kepler College, Kigali, Rwanda


Cite this article

Nsenga CS, Nsengiyumva V, Nsabimana JN, and Uwase N. (2026). Kepler college students' perceptions on using generative AI in education: a descriptive perspective on ChatGPT, Asian J. Soc. Sci. Leg. Stud., 8(2), 579-587.  https://doi.org/10.34104/ajssls.026.05790587  
