António Pedro Costa, University of Aveiro (Portugal)
Researcher at the Research Centre on Didactics and Technology in the Education of Trainers (CIDTFF), Department of Education and Psychology, University of Aveiro, and collaborator at the Laboratory of Artificial Intelligence and Computer Science (LIACC), Faculty of Engineering, University of Porto.
Grzegorz Bryda, Jagiellonian University (Poland)
Assistant Professor at the Jagiellonian University and Head of the Summer School for Qualitative Data Analysis and Research Methods. Interested in cognitive sociology, narrative research methodology and data analysis, digital humanities and corpus linguistics, and the combination of CAQDAS with content analysis, natural language processing, and text mining in QDA.
Using Artificial Intelligence (AI), Machine Learning (ML), Natural Language Processing (NLP), and Large Language Models (LLMs) raises ethical considerations concerning data privacy, the authenticity of inquiry, and potential biases in analysis. Researchers navigate these by employing transparent methodologies, actively interpreting data, and critically assessing AI-generated insights. Ensuring data privacy and addressing biases require a combination of technological safeguards, ethical research practices, and ongoing scrutiny of AI tools’ methodologies and outcomes. In an era when AI is increasingly applied to qualitative data analysis, ethical considerations surface as a crucial component of scholarly scrutiny. As machines penetrate areas once the exclusive domain of human analytical skill, they raise profound questions regarding data privacy and the moral responsibility of researchers. Central to this discourse is the question: what are the ethical implications of using AI in qualitative data analysis, particularly concerning the protection of data privacy?
Using AI to analyse qualitative data gives rise to substantial ethical concerns, particularly around safeguarding data privacy. As Mühlhoff (2021) conceptualised it, predictive privacy is crucial for protecting individuals and groups from discriminatory practices in ML and big data analytics, and the possibility of privacy infringement is especially acute when data from numerous individuals are aggregated. Mühlhoff’s (2021) contribution is conceptual rather than empirical: it introduces “predictive privacy” and advocates a collectivist approach to privacy rather than measuring specific outcomes. Peltz and Street (2020) and Danilevskyi and Perez Tellez (2023) highlight the ongoing erosion of personal privacy and underscore the necessity of ethical principles in AI: data-driven methods pose real risks to individual privacy, weakening privacy protections and threatening autonomy and democratic ideals. Huriye (2023) examines the ethical ramifications of AI in data analysis, stressing the need for a human-centred approach and stakeholder cooperation. The author identifies bias detection, privacy protection, accountability, and transparency as the primary ethical concerns surrounding AI technology in developed nations, underscores the need to contextualise the cultural, political, and economic particularities of African countries, and advocates involving local stakeholders in shaping ethical AI guidelines. Christoforaki and Beyan (2022) highlight the ethical importance of addressing biased decision-making and ensuring openness and accountability in AI systems; they critically examine the multifaceted impact of artificial intelligence on sensitive sectors, advocating robust interdisciplinary dialogue to address the ethical challenges posed by its integration into society. Dent et al. (2019) and Huang (2023) emphasise the importance of upholding human rights, valuing privacy, and safeguarding personal data, specifically within the realm of AI in education, arguing that a comprehensive data protection framework is essential for balancing the benefits of AI in educational settings against the privacy fundamental to human dignity.
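To make the “technological safeguards” mentioned above concrete, the following is a minimal Python sketch of pseudonymising interview transcripts before they reach any external AI service. It is an illustration under stated assumptions, not a complete de-identification procedure: the participant list, the regular expressions, and the pseudonymise function are invented for this example, and pattern matching alone cannot guarantee anonymity.

```python
import re

# Hypothetical mapping from participant names to neutral codes. In practice this
# would be maintained securely and separately from the data shared with any tool.
KNOWN_PARTICIPANTS = {"Maria Silva": "P01", "John Smith": "P02"}

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def pseudonymise(text: str) -> str:
    """Replace known names and obvious direct identifiers with codes."""
    for name, code in KNOWN_PARTICIPANTS.items():
        text = text.replace(name, code)
    text = EMAIL_PATTERN.sub("[EMAIL]", text)
    text = PHONE_PATTERN.sub("[PHONE]", text)
    return text

transcript = "Maria Silva (maria.silva@example.org) said she felt unheard."
print(pseudonymise(transcript))
# -> "P01 ([EMAIL]) said she felt unheard."
```

Keeping the mapping from names to codes apart from the shared data is the point of the design: re-identification remains under the researcher’s control rather than the tool’s.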
Incorporating AI into Computer-Assisted Qualitative Data Analysis Software (CAQDAS) presents unprecedented potential and intricate ethical dilemmas in the swiftly evolving landscape of qualitative data analysis.
As researchers lean increasingly on automation to sift through vast quantities of data, the temptation to rely on AI for rapid and complex analysis kindles a pertinent debate: how can we balance the practicality of automation against the indispensability of human oversight in CAQDAS? Several approaches exist for combining automation with human oversight in AI-assisted CAQDAS. Koulu (2020) and Barmer et al. (2021) highlight the importance of human supervision in ensuring ethical and legal compliance and in upholding transparency and trust. Koulu (2020) argues that the recommendation to use human oversight as a safeguard against the risks of growing dependence on algorithms is straightforward and visible, yet may not sufficiently protect fundamental rights. Recognising the critical role of human judgment in legal decisions underscores the need to scrutinise what such oversight actually accomplishes. In European Union policy, for example, human oversight as a mechanism for regulating algorithmic systems in automated legal decision-making risks becoming a token gesture that fails to truly safeguard fundamental rights. It is therefore essential to understand and address these potential weaknesses so that human oversight remains effective. For Barmer et al. (2021), successful AI engineering requires understanding the context of use, human-machine teaming built on trust and transparency, and critical oversight to address the ethical concerns and risks in AI systems. Complementarily, Endsley (2017) and Heitmeyer et al. (2015) suggest combining artificial intelligence techniques with formal modelling to improve system performance and guarantee high reliability, proposing a methodology for designing human-centric decision systems through artificial intelligence and software engineering.
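One way to operationalise the human-machine teaming these authors describe is to stage AI-suggested codes for explicit researcher approval rather than committing them automatically. The sketch below assumes hypothetical data structures and a stand-in suggest_codes function; it illustrates the oversight pattern, not any particular CAQDAS product’s API.

```python
from dataclasses import dataclass

@dataclass
class CodeSuggestion:
    segment: str                   # excerpt of qualitative data
    code: str                      # AI-proposed code label
    confidence: float              # model confidence, used only to order review
    accepted: bool | None = None   # None until a human researcher decides

def suggest_codes(segment: str) -> list[CodeSuggestion]:
    # Stand-in for a real model call; a fixed suggestion keeps the sketch runnable.
    return [CodeSuggestion(segment, "privacy_concern", 0.82)]

def review(suggestions: list[CodeSuggestion]) -> list[CodeSuggestion]:
    """Every suggestion passes through the researcher; nothing auto-commits."""
    for s in sorted(suggestions, key=lambda s: s.confidence):
        answer = input(f"Accept code '{s.code}' for '{s.segment[:40]}'? [y/n] ")
        s.accepted = answer.strip().lower() == "y"
    return [s for s in suggestions if s.accepted]

accepted = review(suggest_codes("I worry about who can see my answers."))
print(f"{len(accepted)} code(s) committed after human review.")
```

Reviewing low-confidence suggestions first is one defensible ordering; the essential property is that no code enters the analysis without a human decision.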
Falco et al. (2021) propose independent audits as a practical governance mechanism and focus on the broader influence of these systems from an organisational and policy perspective. They recommend adopting an independent audit mechanism to ensure transparency and reliability, outlining the governing principles of Accountability, Assurance, and Adaptability (AAA). These findings offer considerable insight for policymakers navigating the complexities of integrating such advanced technologies into societal frameworks. At the same time, Heer (2019) supports the development of interactive technologies that preserve human control and support proficient decision-making. The study points to recurring pitfalls: an overemphasis on fully automated solutions that undervalues interactive systems, the limited accuracy of automated methods, evaluation criteria that vary across problems, and the difficulty of balancing user control against passive acceptance of algorithmic suggestions while combining computational assistance with human higher-level reasoning and creativity.
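The auditability Falco et al. (2021) argue for presupposes that AI-assisted analysis leaves a trace an independent reviewer can inspect. The following sketch, with an assumed file name, record format, and field set, shows one minimal way a CAQDAS tool could log automated decisions without storing raw participant data:

```python
import datetime
import hashlib
import json

AUDIT_LOG = "caqdas_audit.jsonl"  # assumed file name; one JSON record per line

def log_decision(model_id: str, input_text: str, output: str) -> None:
    """Record an automated coding decision for later independent audit."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        # Hash the input rather than storing it, so the log itself leaks no data.
        "input_sha256": hashlib.sha256(input_text.encode("utf-8")).hexdigest(),
        "output": output,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("model-v1", "interview segment text", "suggested code: trust")
```

Hashing inputs lets an auditor verify which data produced which output, without the audit trail becoming a second privacy risk.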
These authors emphasise the significance of a human-centred approach to AI in CAQDAS, in which automation is employed to augment human capabilities and improve decision-making. The incursion of AI into CAQDAS marks a turning point in qualitative research. These technological developments usher in novel capabilities for pattern recognition, predictive analysis, and the dissection of complex data sets: a horizon brimming with potential for scholarly innovation. Nevertheless, such advancements raise pertinent ethical questions that are both pragmatic and philosophical. Navigating the ethical and practical aspects of AI’s role in CAQDAS demands careful, forward-looking examination to foster the responsible development of this valuable research tool. Data privacy protection is paramount as algorithms are entrusted with sensitive and critical data analysis tasks. The ethical integrity of AI-enabled CAQDAS must be monitored closely, given the problems of implicit bias in machine-learned models and the risk of flattening the nuances inherent in qualitative data. Consequently, the growing interplay between the reflexivity of human researchers and the computational power of AI calls for a critical discussion of AI’s role across qualitative research areas and social settings. Within the burgeoning domain of data analysis technology, two critical questions arise: what are the prospective pathways for the evolution of CAQDAS in light of the relentless advance of AI and ML, and how might scholars adapt to profit from such digital transformations while conscientiously leveraging these emergent tools in their research?
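Critically assessing AI-generated insights can itself be made operational. One simple check, sketched below in pure Python with invented labels, compares human and machine coding of the same segments using Cohen’s kappa, which discounts raw agreement by the agreement expected by chance; a low kappa flags exactly the kind of implicit bias or lost nuance discussed above as needing human scrutiny.

```python
from collections import Counter

def cohens_kappa(human: list[str], machine: list[str]) -> float:
    """Chance-corrected agreement between two coders over the same segments."""
    n = len(human)
    # Observed agreement: proportion of segments coded identically.
    observed = sum(h == m for h, m in zip(human, machine)) / n
    # Expected agreement: chance overlap given each coder's label distribution.
    h_counts, m_counts = Counter(human), Counter(machine)
    expected = sum(h_counts[c] * m_counts[c] for c in h_counts) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented example labels for six coded segments.
human   = ["privacy", "trust", "privacy", "bias", "trust", "privacy"]
machine = ["privacy", "trust", "bias",    "bias", "trust", "privacy"]
print(f"kappa = {cohens_kappa(human, machine):.2f}")  # -> kappa = 0.75
```

A check like this does not replace reflexive interpretation, but it gives researchers a reproducible signal of where the machine’s coding diverges from their own.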
References
Barmer, H., Dzombak, R., Gaston, M. E., Palat, V., Redner, F., Smith, C. J., & Smith, T. (2021). Human-Centered AI. IEEE Pervasive Computing, 22, 7–8. https://api.semanticscholar.org/CorpusID:239651672
Christoforaki, M., & Beyan, O. (2022). AI Ethics—A Bird’s Eye View. Applied Sciences, 12(9), 4130. https://doi.org/10.3390/app12094130
Danilevskyi, M., & Perez Tellez, F. (2023). On the compliance with ethical principles in AI. Proceedings of the 2023 Conference on Human Centered Artificial Intelligence: Education and Practice, 50–50. https://doi.org/10.1145/3633083.3633223
Dent, K., Dumond, R., & Kuniavsky, M. (2019). A framework for systematically applying humanistic ethics when using AI as a design material. Temes de Disseny, 35, 178–197. https://doi.org/10.46467/TdD35.2019.178-197
Endsley, M. R. (2017). From Here to Autonomy. Human Factors: The Journal of the Human Factors and Ergonomics Society, 59(1), 5–27. https://doi.org/10.1177/0018720816681350
Falco, G., Shneiderman, B., Badger, J., Carrier, R., Dahbura, A., Danks, D., Eling, M., Goodloe, A., Gupta, J., Hart, C., Jirotka, M., Johnson, H., LaPointe, C., Llorens, A. J., Mackworth, A. K., Maple, C., Pálsson, S. E., Pasquale, F., Winfield, A., & Yeong, Z. K. (2021). Governing AI safety through independent audits. Nature Machine Intelligence, 3(7), 566–571. https://doi.org/10.1038/s42256-021-00370-7
Heer, J. (2019). Agency plus automation: Designing artificial intelligence into interactive systems. Proceedings of the National Academy of Sciences, 116(6), 1844–1850. https://doi.org/10.1073/pnas.1807184115
Heitmeyer, C. L., Pickett, M., Leonard, E. I., Archer, M. M., Ray, I., Aha, D. W., & Trafton, J. G. (2015). Building high assurance human-centric decision systems. Automated Software Engineering, 22(2), 159–197. https://doi.org/10.1007/s10515-014-0157-z
Huang, L. (2023). Ethics of Artificial Intelligence in Education: Student Privacy and Data Protection. Science Insights Education Frontiers, 16(2), 2577–2587. https://doi.org/10.15354/sief.23.re202
Huriye, A. Z. (2023). The Ethics of Artificial Intelligence: Examining the Ethical Considerations Surrounding the Development and Use of AI. American Journal of Technology, 2(1), 37–45. https://doi.org/10.58425/ajt.v2i1.142
Koulu, R. (2020). Proceduralizing control and discretion: Human oversight in artificial intelligence policy. Maastricht Journal of European and Comparative Law, 27(6), 720–735. https://doi.org/10.1177/1023263X20978649
Mühlhoff, R. (2021). Predictive privacy: towards an applied ethics of data analytics. Ethics and Information Technology, 23(4), 675–690. https://doi.org/10.1007/s10676-021-09606-x
Peltz, J., & Street, A. C. (2020). Artificial Intelligence and Ethical Dilemmas Involving Privacy. In Artificial Intelligence and Global Security (pp. 95–120). Emerald Publishing Limited. https://doi.org/10.1108/978-1-78973-811-720201006