KEYNOTE SPEAKER
Francisco Herrera
AFFILIATION
University of Granada, Spain
KEYNOTE TITLE
Explainable AI: A road still to travel
KEYNOTE SUMMARY
The framework for developing AI technologies under the trustworthy AI paradigm has taken shape over the last few years, producing a large number of studies with technical proposals for each of its requirements, all aimed at achieving safe and responsible AI.
If we focus on one technical requirement that appears recurrently in ethical and fundamental principles, as well as in regulatory requirements, it is explainability. Explainability can be defined from the perspective of the audience as: “Given an audience, an explainable AI is one that produces details or reasons to make its functioning clear or easy to understand.” From a transparency perspective, six types of audience can be identified: developer, designer, owner, user, regulator, and society. Transparency can also be analyzed at different levels: a first level covering algorithmic, interaction, and social transparency; and a second, from a more technological and governance point of view, covering data, algorithm, and process evaluation. It is interesting to analyze the relationship among all of them and the role that different types of explanations can play.
A different approach to XAI that has been discussed recently is the validation of the model per se, which can aid in its auditability. This view of XAI holds great opportunities and potential for the important research needed to ensure the safety of AI systems. The future will require a deeper analysis of these kinds of explanations to make explainable AI technologies a useful tool from a practical perspective.
This talk will offer an in-depth discussion of the ideas behind XAI and of the road ahead to make explainability a useful element in the design of safe and responsible AI.
PERSONAL INFO
Francisco Herrera (SM’15) received his M.Sc. in Mathematics in 1988 and Ph.D. in Mathematics in 1991, both from the University of Granada, Spain. He is a Professor in the Department of Computer Science and Artificial Intelligence at the University of Granada and Director of the Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI). He is a EurAI Fellow (2009) and an IFSA Fellow (2013). He is an academician at the Spanish Royal Academy of Engineering (2019) and a corresponding academician at the Cuban Academy of Sciences (2023).
He has supervised 65 Ph.D. students and published more than 600 journal papers, which have received more than 153,000 citations (Google Scholar, h-index 184). He has been named a Highly Cited Researcher in the fields of Computer Science and Engineering (Clarivate Analytics, 2014 to present) and ranked 11th among the “Best Computer Science Scientists” in the Research.com 2024 ranking.
His current research interests include, among others, computational intelligence, data science, information fusion and decision making, trustworthy artificial intelligence, and general-purpose artificial intelligence.
He is committed to transmitting research results to society, training new generations of researchers, and developing an ecosystem of digital innovation and artificial intelligence in Granada.