Augmenting Design Insights in Educational Qualitative Research: An Empirical Framework and Ethical Considerations for Integrating Generative AI


Fan Zhang
Rainal Hidayat Wardi

Abstract

Generative Artificial Intelligence (AI) has rapidly entered educational qualitative research to support ideation, transcription, coding, and synthesis, particularly in design-informed user research. Yet empirically grounded guidance remains limited for integrating these tools without weakening interpretive rigor or breaching ethical obligations. Using a multiple-case study of four education-related teams and 12 semi-structured interviews, supplemented by workflow observations and document analysis, this study developed the Collaborative Insight Generation (CIG) Framework. Findings showed that AI served as a "brainstorming partner," "preliminary theme explorer," and "report assistant" while introducing risks including prompt fragility, over-reliance, hallucinations, and reduced contextual nuance. The CIG Framework outlines four phases (Preparation & Scoping, Iterative & Interactive Analysis, Critical Synthesis & Validation, and Transparent Reporting & Archiving), prioritizing human-led inquiry and methodological transparency while embedding privacy protection, accountability, and audit-ready documentation across the research lifecycle. The framework advances educational qualitative research by translating human–AI collaboration into teachable practices for research training and supervision and by offering safeguards for ethically defensible insight generation.


Article Details

Section: Educational Management

References

Al-Kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., & Alfandi, O. (2024). Ethical challenges and solutions of generative AI: An interdisciplinary perspective. Informatics, 11(3), 58. https://doi.org/10.3390/informatics11030058

Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105–120. https://doi.org/10.1609/aimag.v35i4.2513

Babic, B., Gerke, S., Evgeniou, T., & Cohen, I. G. (2021). Beware explanations from AI in health care. Science, 373(6552), 284–286. https://doi.org/10.1126/science.abg1834

Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. https://doi.org/10.15779/Z38BG31

Baxter, P., & Jack, S. (2008). Qualitative case study methodology: Study design and implementation for novice researchers. The Qualitative Report, 13(4), 544–559. https://doi.org/10.46743/2160-3715/2008.1573

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922

Berger, R. (2015). Now I see it, now I don’t: Researcher’s position and reflexivity in qualitative research. Qualitative Research, 15(2), 219–234. https://doi.org/10.1177/1468794112468475

Beyer, H., & Holtzblatt, K. (1998). Contextual design: Defining customer-centered systems. Morgan Kaufmann.

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Proceedings of Machine Learning Research, 81, 149–159. https://arxiv.org/abs/1712.03586

Bowen, G. A. (2009). Document analysis as a qualitative research method. Qualitative Research Journal, 9(2), 27–40. https://doi.org/10.3316/QRJ0902027

Braidotti, R. (2019). Posthuman knowledge. Polity Press.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Brinkmann, S. (2014). Interview. In U. Flick (Ed.), The SAGE handbook of qualitative data analysis (pp. 277–291). Sage Publications. https://doi.org/10.4135/9781446282243.n19

Campbell, J. L., Quincy, C., Osserman, J., & Pedersen, O. K. (2013). Coding in-depth semistructured interviews: Problems of unitization and intercoder reliability and agreement. Sociological Methods & Research, 42(3), 294–320. https://doi.org/10.1177/0049124113500475

Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial intelligence and the 'good society': The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505–528. https://doi.org/10.1007/s11948-017-9901-7

Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6, Article 1421273. https://doi.org/10.3389/fhumd.2024.1421273

Chopra, F., & Haaland, I. (2023). Conducting qualitative interviews with AI. SSRN. https://doi.org/10.2139/ssrn.4572954

Cook, D. A., Ginsburg, S., Sawatsky, A. P., Kuper, A., & D'Angelo, J. D. (2025). Artificial intelligence to support qualitative data analysis: Promises, approaches, pitfalls. Academic Medicine, 100(10), 1134–1149. https://doi.org/10.1097/acm.0000000000006134

Corbin, J., & Strauss, A. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (4th ed.). Sage Publications.

Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61(5), 637–643. https://doi.org/10.1007/s12599-019-00595-2

Denzin, N. K. (2012). Triangulation 2.0. Journal of Mixed Methods Research, 6(2), 80–88. https://doi.org/10.1177/1558689812437186

Denzin, N. K., & Lincoln, Y. S. (2018). The SAGE handbook of qualitative research (5th ed.). SAGE Publications.

Eisenhardt, K. M., & Graebner, M. E. (2007). Theory building from cases: Opportunities and challenges. Academy of Management Journal, 50(1), 25–32. https://doi.org/10.5465/amj.2007.24160888

Evers, J. C. (2011). From the past into the future: How technological developments change our ways of data collection, transcription and analysis. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 12(1). https://doi.org/10.17169/fqs-12.1.1636

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Goodman, E., Kuniavsky, M., & Moed, A. (2012). Observing the user experience: A practitioner's guide to user research. Morgan Kaufmann.

Gustafsson, J. (2017). Single case studies vs. multiple case studies: A comparative study. Halmstad University. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-33017

Hitch, D., Richards, K., Gupta, A., Thanekar, U., Edwards, J., & Goldingay, S. (2025). The ethical implications of using AI in qualitative research. In CABI eBooks (pp. 137–149). https://doi.org/10.1079/9781800626607.0013

Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human–AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. https://doi.org/10.1016/j.bushor.2018.03.007

Jiang, T., Sun, Z., Fu, S., & Lv, Y. (2024). Human-AI interaction research agenda: A user-centered perspective. Data and Information Management, 8(4), 100078. https://doi.org/10.1016/j.dim.2024.100078

Kong, X., Fang, H., Chen, W., Xiao, J., & Zhang, M. (2025). Examining human–AI collaboration in hybrid intelligence learning environments: insight from the Synergy Degree Model. Humanities and Social Sciences Communications, 12(1). https://doi.org/10.1057/s41599-025-05097-z

Kvale, S., & Brinkmann, S. (2009). InterViews: Learning the craft of qualitative research interviewing (2nd ed.). Sage Publications.

Labedzki, R., Mikolajczyk, K., Bilyk, A., & Trojanowska, M. (2025). Understanding human–AI collaboration: A systematic review of challenges and research methods in management. Communications in Computer and Information Science, 332–348. https://doi.org/10.1007/978-3-031-94171-9_32

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. Zenodo. https://doi.org/10.5281/zenodo.3240529

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Sage.

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 220–229). Association for Computing Machinery. https://doi.org/10.1145/3287560.3287596

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679

Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Science and Engineering Ethics, 26(4), 2141–2168. https://doi.org/10.1007/s11948-019-00165-5

Nguyen, D. C., & Welch, C. (2025). Generative artificial intelligence in qualitative data analysis: Analyzing—or just chatting? Organizational Research Methods. https://doi.org/10.1177/10944281251377154

Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16(1), 1–13. https://doi.org/10.1177/1609406917733847

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing.

Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice (4th ed.). Sage Publications.

Phan, A. N. Q., & Le, C. (2025). AI as research partner: Key implications of using AI for data visualisation in qualitative research. International Journal of Social Research Methodology, 1–8. https://doi.org/10.1080/13645579.2025.2518562

Ridder, H. (2017). The theory contribution of case study research designs. BuR - Business Research, 10(2), 281–305. https://doi.org/10.1007/s40685-017-0045-z

Salmona, M., & Kaczynski, D. (2016). Don't blame the software: Using qualitative data analysis software successfully in doctoral research. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 17(3), 23. https://doi.org/10.17169/fqs-17.3.2505

Sanchez, H. S., Eski, M., & Batlle, I. C. (2024). Bricolage for innovative qualitative social science research: A perspective on its conceptual hallmarks. Qualitative Inquiry, 31(8–9), 802–816. https://doi.org/10.1177/10778004241265987

Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118

Sinha, R., Solola, I., Nguyen, H., Swanson, H., & Lawrence, L. (2024). The role of generative AI in qualitative research: GPT-4's contributions to a grounded theory analysis. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 320–331. https://doi.org/10.1145/3663433.3663456

Spradley, J. P. (1980). Participant observation. Holt, Rinehart and Winston.

Stake, R. E. (2006). Multiple case study analysis. Guilford Press.

Tene, O., & Polonetsky, J. (2013). Big data for all: Privacy and user control in the age of analytics. Northwestern Journal of Technology and Intellectual Property, 11(5), 239–273.

Than, N., Fan, L., Law, T., Nelson, L. K., & McCall, L. (2025). Updating "The future of coding": Qualitative coding with generative large language models. Sociological Methods & Research, 54(3), 849–888. https://doi.org/10.1177/00491241251339188

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6628), 119. https://doi.org/10.1126/science.adg7879

Wang, Q., Madaio, M., Kane, S., Kapania, S., Terry, M., & Wilcox, L. (2023). Designing responsible AI: Adaptations of UX practice to meet responsible AI challenges. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–16). Association for Computing Machinery. https://doi.org/10.1145/3544548.3581278

Weidinger, L., Uesato, J., Rauh, M., Griffin, C., Huang, P., Mellor, J., Glaese, A., Cheng, M., Balle, B., Kasirzadeh, A., Biles, C., Brown, S., Kenton, Z., Hawkins, W., Stepleton, T., Birhane, A., Hendricks, L. A., Rimell, L., Isaac, W., . . . Gabriel, I. (2022). Taxonomy of risks posed by language models. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 214–229). Association for Computing Machinery. https://doi.org/10.1145/3531146.3533088

Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2019). Artificial intelligence and the public sector—Applications and challenges. International Journal of Public Administration, 42(7), 596–615. https://doi.org/10.1080/01900692.2018.1498103

Yang, Y., & Ma, L. (2025). Artificial intelligence in qualitative analysis: A practical guide and reflections based on results from using GPT to analyze interview data in a substance use program. Quality & Quantity, 59(3), 2511–2534. https://doi.org/10.1007/s11135-025-02066-1

Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed.). Sage Publications.

Zhang, H., Wu, C., Xie, J., Lyu, Y., Cai, J., & Carroll, J. M. (2025). Harnessing the power of AI in qualitative research: Exploring, using and redesigning ChatGPT. Computers in Human Behavior Artificial Humans, 4, 100144. https://doi.org/10.1016/j.chbah.2025.100144
