Cultural diversity and artificial intelligence
Toward a pluralistic framework
DOI:
https://doi.org/10.32674/rnayrn64
Keywords:
cultural diversity, AI ethics, ethical pluralism, participatory design, language preservation, algorithmic bias, STEM education
Abstract
AI tools are reshaping classrooms, yet many default to Western norms, which can lead to bias, language exclusion, and cultural erasure. This paper presents a workable framework for STEM educators and leaders that blends ethical pluralism with inclusive, participatory design: partnering with language communities, embedding culturally grounded governance, investing in capacity building, and conducting ongoing cultural audits. In practice, this means co-creating curricula and datasets with local stakeholders, requiring multilingual and accessible AI resources, aligning policy with community values, and auditing tools both before and after adoption. The result is AI use that enhances belonging, accuracy, and fairness for diverse learners while strengthening institutional accountability.