Eyke Hüllermeier
Representation and Quantification of Uncertainty in Machine Learning
Due to the steadily increasing relevance of machine learning for practical applications, many of which come with safety requirements, the notion of uncertainty has received increasing attention in machine learning research in the recent past. This talk will address questions regarding the representation and adequate handling of (predictive) uncertainty in (supervised) machine learning. A specific focus will be put on the distinction between two important types of uncertainty, often referred to as aleatoric and epistemic, and how to quantify these uncertainties in terms of suitable numerical measures. Roughly speaking, while aleatoric uncertainty is due to randomness inherent in the data-generating process, epistemic uncertainty is caused by the learner’s ignorance about the true underlying model. Going beyond purely conceptual considerations, the use of ensemble learning methods will be discussed as a concrete approach to uncertainty quantification in machine learning.
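To make the ensemble-based quantification more concrete, here is a minimal Python sketch (an illustration under standard assumptions, not necessarily the exact measures discussed in the talk) of the common entropy-based decomposition: the entropy of the averaged ensemble prediction serves as total uncertainty, the average entropy of the individual members as aleatoric uncertainty, and their difference (the mutual information between prediction and model) as epistemic uncertainty.

import numpy as np

def entropy(p, eps=1e-12):
    # Shannon entropy (in nats) of class-probability vectors along the last axis.
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def uncertainty_decomposition(member_probs):
    # member_probs: array of shape (n_members, n_classes) holding the class
    # probabilities predicted by each ensemble member for a single input.
    mean_probs = member_probs.mean(axis=0)
    total = entropy(mean_probs)               # total uncertainty
    aleatoric = entropy(member_probs).mean()  # expected entropy of the members
    epistemic = total - aleatoric             # mutual information
    return total, aleatoric, epistemic

# Hypothetical example: members that agree on a 50/50 prediction (purely
# aleatoric) versus members that confidently disagree (mostly epistemic).
agree = np.array([[0.5, 0.5], [0.5, 0.5]])
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])
print(uncertainty_decomposition(agree))     # high aleatoric, ~0 epistemic
print(uncertainty_decomposition(disagree))  # mostly epistemic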
Short bio: Eyke Hüllermeier is a full professor at the Institute of Informatics at LMU Munich, Germany, where he heads the Chair of Artificial Intelligence and Machine Learning. He studied mathematics and business computing, received his PhD in computer science from Paderborn University in 1997, and obtained a Habilitation degree in 2002. Prior to joining LMU, he held professorships at several other German universities and spent two years as a Marie Curie fellow at the IRIT in Toulouse (France). Currently, he is also a Chief Scientist at the Fraunhofer Institute for Mechatronic Systems Design. His research interests are centered around methods and theoretical foundations of artificial intelligence, with a specific focus on machine learning and reasoning under uncertainty. In addition, he is interested in the application of AI methods in other disciplines, ranging from the natural sciences and engineering to the humanities and social sciences. He has published more than 400 articles on related topics in top-tier journals and major international conferences, and several of his contributions have been recognized with scientific awards.
Pawel Zielinski
A framework of distributionally robust possibilistic optimization
In this talk, I will consider a class of optimization problems with uncertain constraint coefficients. Possibility theory is used to model the uncertainty; namely, a joint possibility distribution over constraint coefficient realizations, called scenarios, is specified. This possibility distribution induces a necessity measure on the scenario set, which in turn describes an ambiguity set of probability distributions on the scenario set. The distributionally robust approach is then applied to convert the imprecise constraints into deterministic equivalents. Namely, the left-hand side of an imprecise constraint is evaluated by using a risk measure with respect to the worst probability distribution that can occur. In this talk, the Conditional Value at Risk will be used as the risk measure; it generalizes the strict robust and expected value approaches commonly used in the literature. A general framework for solving such a class of problems will be presented, and some cases that can be solved in polynomial time will be identified.
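As a small numerical illustration of the claim that the Conditional Value at Risk (CVaR) generalizes both the strict robust (worst-case) and the expected-value approaches, the following Python sketch computes CVaR over a discrete scenario set. The scenario costs, probabilities and the parameterization by a tail level eps are assumptions made for illustration only; the talk may use an equivalent but differently parameterized definition.

def cvar(costs, probs, eps):
    # CVaR at tail level eps in (0, 1]: the expected cost over the worst
    # eps-probability mass of the distribution (discrete case).
    order = sorted(range(len(costs)), key=lambda i: -costs[i])
    remaining, acc = eps, 0.0
    for i in order:
        take = min(probs[i], remaining)
        acc += take * costs[i]
        remaining -= take
        if remaining <= 1e-12:
            break
    return acc / eps

# Hypothetical scenario costs and probabilities.
costs, probs = [10.0, 4.0, 1.0], [0.2, 0.3, 0.5]
print(cvar(costs, probs, 1.0))   # 3.7  -> expected value approach
print(cvar(costs, probs, 0.4))   # 7.0  -> intermediate risk aversion
print(cvar(costs, probs, 0.01))  # 10.0 -> strict robust (worst case)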
Short bio: Pawel Zielinski is a Full Professor of Computer Science at the Department of Fundamentals of Computer Science at Wroclaw University of Science and Technology, Wroclaw, Poland. His research addresses mathematical optimization and operations research, including optimization under uncertainty, robust optimization, scheduling, and planning in uncertain environments. It focuses mainly on the design and analysis of algorithms for hard discrete optimization problems with uncertain parameters, and he has authored a number of results on their computational properties. His scientific output includes more than 100 papers of international scope, over 50 of which were published in prestigious journals from the JCR list. These publications resulted from a series of European and national scientific projects, in which he served as principal or main investigator, and from cooperation with leading international research centers, including the University of Toulouse.
Professor Zielinski received his M.Sc. degree from the University of Wroclaw, Poland, in 1993, his Ph.D. degree from the Wroclaw University of Science and Technology in 1997, his Habilitation from the Adam Mickiewicz University, Poznan, Poland, in 2009, and the title of Professor in 2021, all in computer science.
Website: https://cs.pwr.edu.pl/zielinski/
DBLP: https://dblp.org/pid/72/5778.html
Google Scholar: https://scholar.google.com/citations?user=v7XBBAEAAAAJ
Serena Villata
Assessing trustworthiness and quality of formal and natural arguments
The field of artificial argumentation is emerging as an important aspect of Artificial Intelligence research. This is based on the recognition that if we are to develop robust intelligent systems, it is imperative that they can handle incomplete and inconsistent information in a way that somehow emulates how humans tackle such a complex task. Humans do so by using argumentation either internally, by evaluating arguments and counterarguments, or externally, for instance by entering into a discussion or debate where arguments are exchanged. In this talk, I will first introduce the issue of assessing the trustworthiness and quality of formal and natural arguments, and then discuss some solutions we have proposed to address this issue, at the crossroads of formal argumentation, fuzzy logic and natural language processing.
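For readers less familiar with formal argumentation, the following Python sketch (purely illustrative; the arguments, attack relation and choice of semantics are not taken from the talk) computes the grounded extension of a tiny Dung-style abstract argumentation framework, i.e., the set of arguments that can be defended against every attack.

def grounded_extension(arguments, attacks):
    # attacks: set of (attacker, target) pairs.
    def defended(arg, accepted):
        # arg is defended if every attacker of arg is itself attacked
        # by some already accepted argument.
        return all(any((d, attacker) in attacks for d in accepted)
                   for (attacker, target) in attacks if target == arg)
    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

# Hypothetical framework: c attacks b, and b attacks a.
args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}
print(grounded_extension(args, atts))  # {'a', 'c'}: c reinstates a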
Short bio: Serena Villata is a tenured research fellow (CR1) in computer science at the CNRS, and she pursues her research at the I3S laboratory, where she is a member of the Wimmics team. Her research area is Artificial Intelligence (AI), and her current work focuses on artificial argumentation, with a specific focus on legal and medical texts, political debates and harmful social network content (abusive language, disinformation). Her work combines argument-based reasoning frameworks with natural language arguments extracted from text. She is the author of more than 150 scientific publications in AI. Since July 2019, she has held a Chair in Artificial Intelligence at the Interdisciplinary Institute for Artificial Intelligence 3IA Côte d’Azur on “Artificial Argumentation for Humans”. She became the Deputy Scientific Director of the 3IA Côte d’Azur Institute in January 2021. Since December 2019, she has been a member of the National Pilot Committee for Digital Ethics (CNPEN).