From 'black box' to 'glass box': using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models

Authors

DOI:

https://doi.org/10.5585/13.2024.26510

Keywords:

XAI, explainable artificial intelligence, algorithmic opacity, transparency

Abstract

Artificial intelligence (AI) has been extensively employed across various domains, with increasing social, ethical, and privacy implications. As its potential and applications expand, concerns arise about the reliability of AI systems, particularly those built on deep learning techniques that can make them true “black boxes”. Explainable artificial intelligence (XAI) aims to offer information that helps explain the predictive process of a given algorithmic model. This article examines the potential of XAI to elucidate algorithmic decisions and mitigate bias in AI systems. The first part discusses AI fallibility and bias, emphasizing how opacity exacerbates these problems. The second part explores how XAI can enhance transparency, helping to combat algorithmic errors and biases. The article concludes that XAI can contribute to the identification of biases in algorithmic models; it then suggests that the ability to “explain” should be a requirement for adopting AI systems in sensitive areas such as court decisions.
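The kind of per-prediction explanation the abstract describes can be illustrated with a toy example. The sketch below is not drawn from the article: it uses a hypothetical linear credit-scoring model (the weights, feature names, and applicant values are invented) to show the simplest form of feature-attribution explanation, which methods such as SHAP generalize to opaque, non-linear models.

```python
# Hypothetical linear credit model: score = sum of weight * feature value.
# For linear models, each feature's contribution to one prediction can be
# decomposed exactly -- the elementary case of XAI feature attribution.
weights = {"income": 0.8, "debt": -1.2, "age": 0.1}    # invented coefficients
baseline = {"income": 5.0, "debt": 2.0, "age": 40.0}   # invented "average" applicant
applicant = {"income": 6.5, "debt": 4.0, "age": 30.0}  # invented individual case

def predict(x):
    """Score an applicant with the linear model."""
    return sum(weights[f] * x[f] for f in weights)

def explain(x):
    """Attribute the prediction to features: weight * deviation from baseline."""
    return {f: weights[f] * (x[f] - baseline[f]) for f in weights}

contributions = explain(applicant)
# Faithfulness check: the contributions sum exactly to the gap between this
# applicant's score and the baseline score, so the explanation accounts for
# the whole prediction rather than approximating it.
assert abs(sum(contributions.values()) - (predict(applicant) - predict(baseline))) < 1e-9
```

Here the explanation would report, for instance, that high debt lowered the score more than the higher income raised it, which is precisely the kind of information a loan applicant, or a court, could use to contest an automated decision.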

Downloads

No statistical data available.

Author Biographies

Otavio Morato de Andrade, Universidade Federal de Minas Gerais (UFMG) / Belo Horizonte, MG - Brasil

PhD candidate in Law at the Universidade Federal de Minas Gerais (UFMG), with a visiting research period at the Université libre de Bruxelles, Belgium. Master's degree in Law from UFMG. Postgraduate specialization in Civil Law from the Pontifícia Universidade Católica de Minas Gerais (PUC-MG). Bachelor's degree in Law from UFMG, and bachelor's degrees in Accounting and in Business Administration from PUC-MG. Editor-in-Chief of the Revista do CAAP.

Marco Antônio Sousa Alves, Universidade Federal de Minas Gerais (UFMG) / Belo Horizonte, MG – Brasil

Adjunct Professor of Theory and Philosophy of Law and the State at the Faculty of Law of the Universidade Federal de Minas Gerais (UFMG). Permanent member of the Graduate Program in Law (PPGD/UFMG). PhD in Philosophy from UFMG, with a research stay at the École des Hautes Études en Sciences Sociales (EHESS/Paris). Master's degree in Philosophy and bachelor's degrees in Law and in Philosophy from UFMG.


Published

2024-06-28

How to Cite

MORATO DE ANDRADE, Otavio; SOUSA ALVES, Marco Antônio. From ’black box’ to ’glass box’: using Explainable Artificial Intelligence (XAI) to reduce opacity and address bias in algorithmic models. Revista Thesis Juris, [S. l.], v. 13, n. 1, p. 03–25, 2024. DOI: 10.5585/13.2024.26510. Available at: https://uninove.emnuvens.com.br/thesisjuris/article/view/26510. Accessed: 14 Nov. 2024.

Issue

Section

Articles