Hector Geffner
Hector is an Alexander von Humboldt Professor at RWTH Aachen University in Germany. Before joining RWTH in 2023, he was an ICREA Research Professor at the Universitat Pompeu Fabra in Barcelona, Spain. Hector obtained a Ph.D. in Computer Science at UCLA and worked at the IBM T.J. Watson Research Center in New York and at the Universidad Simon Bolivar in Caracas. Distinctions for his work include the 1990 ACM Dissertation Award and three ICAPS Influential Paper Awards. He currently leads a project on representation learning for acting and planning (RLeap) funded by an ERC grant.
Abstract: Recent developments in AI have shown the remarkable power of deep learning, deep reinforcement learning, and LLMs. The resulting systems, however, require large amounts of data, are not transparent or reliable, and struggle in reasoning and planning tasks. In this tutorial, I'll review a top-down approach to learning, reasoning, and planning that makes a clear distinction between what is to be learned and how, and which builds on both symbolic and neural methods. Three concrete, key learning problems in planning will be considered: learning lifted world models, learning general policies and heuristics, and learning problem decompositions.
Andreas Pieris
Andreas Pieris is an Assistant Professor in the Department of Computer Science at the University of Cyprus and an Associate Professor in the School of Informatics at the University of Edinburgh. Prior to this, he was an Assistant Professor at the University of Edinburgh, a Postdoctoral Researcher at the Vienna University of Technology, and a Postdoctoral Researcher at the University of Oxford. He earned a D.Phil. in Computer Science from the University of Oxford in 2011. His research interests are database theory with emphasis on knowledge-enriched and uncertain data, knowledge representation and reasoning, and logic in computer science. He has published numerous papers, most of them in leading international conferences and journals. He is the recipient of the Ray Reiter Best Paper Award at the 22nd International Conference on Principles of Knowledge Representation and Reasoning (KR2025). He has served on the program committees of international conferences and workshops, including the top-tier database and AI conferences. He has also served as one of the general chairs of the 25th Joint International Conference on Extending Database Technology and Database Theory (EDBT/ICDT 2022) and as one of the general chairs of the 16th Reasoning Web Summer School (RW2020).
Abstract: An ontology specifies an abstract model of a domain of interest via a formal language that is typically based on logic. Tuple-generating dependencies (tgds) and equality-generating dependencies (egds), originally introduced as a unifying framework for database integrity constraints, and later on used in data exchange and integration, are well suited for modeling ontologies that are intended for data-intensive tasks. In recent years, there has been an extensive study of tgd- and egd-based ontologies and of their applications. In those studies, model theory plays a crucial role and it typically proceeds from syntax to semantics. In other words, the syntax of an ontology language is introduced first and then the properties of the mathematical structures that satisfy ontologies of that language are explored. There is, however, a mature and growing body of research in the reverse direction, i.e., from semantics to syntax. Here, the starting point is a collection of model-theoretic properties and the goal is to determine whether or not those properties characterize some ontology language. Such results are welcome as they pinpoint the expressive power of an ontology language in terms of insightful model-theoretic properties. The goal of the lecture is to present a comprehensive overview of model-theoretic characterizations of tgd- and egd-based ontology languages that are encountered in symbolic AI.
Efthymia Tsamoura
Abstract: TBA
Jonas Haldimann
Jonas is a Postdoctoral Research Fellow at the Artificial Intelligence Research Unit of the University of Cape Town (UCT). He holds a Doctorate in Computer Science from the FernUniversität in Hagen, Germany, where his doctoral research focused on non-monotonic reasoning with propositional conditionals, syntax splittings, and belief change theory. Following his PhD, he undertook a postdoctoral position at TU Wien in Vienna before joining UCT. During this post-doc he broadened the scope of his research to include defeasible reasoning in richer representation languages, with a particular interest in defeasible Description Logics.
Richard Booth
Richard Booth is a senior lecturer at the School of Computer Science and Informatics at Cardiff University, which he joined in 2015. His research interests are in knowledge representation and logic-based approaches to artificial intelligence, specifically belief revision, reasoning with conditionals, argumentation, reasoning about expertise and trust, and computational social choice. Before joining Cardiff, he worked as a post-doc or lecturer at, among others, the University of Luxembourg, Mahasarakham University (Thailand), and Leipzig University.
Non-monotonic Conditionals: A Contemporary Introduction
Abstract: In the field of knowledge representation and reasoning, rules are a convenient way of representing knowledge. But in real-life applications, every rule has its exceptions. Formally, such rules “if A then typically B” can be represented by conditionals. This course introduces reasoning with conditionals, a form of non-monotonic reasoning. Their study in AI dates back to seminal works by Reiter and McCarthy, and is still an active research field today. We will cover two core topics: (a) finding adequate semantic models for conditionals, (b) finding adequate inductive inference operators, i.e., operators determining what inferences should follow from a set of conditionals. Beginning with foundational systems and tools, such as the system P postulates and ordinal conditional functions (OCFs), we explore established and more recent inductive inference operators, including rational closure, system W, and disjunctive rational closure. Key representation results connect syntactic postulates to semantic models. Later parts of the course cover advanced properties of inference operators like syntax splitting and knowledge-based monotony. Students will gain both a historical perspective and state-of-the-art knowledge about reasoning with conditionals, enabling them to critically assess research in this area and giving them a first insight into the field of knowledge representation and reasoning.
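As a toy illustration of the kind of construction covered in the course (a sketch, not part of the course materials): the tolerance-based partition behind rational closure, often called the system Z partition, can be computed in a few lines. The snippet below uses the classic penguin example and enumerates all worlds, so it only scales to small knowledge bases.

```python
from itertools import product

# Conditionals "if A then typically B" as (antecedent, consequent) pairs,
# each a predicate over a world (a dict mapping atoms to booleans).
# Atoms: b = bird, p = penguin, f = flies (the classic example).
ATOMS = ["b", "p", "f"]

conditionals = [
    (lambda w: w["b"], lambda w: w["f"]),      # birds typically fly
    (lambda w: w["p"], lambda w: w["b"]),      # penguins are typically birds
    (lambda w: w["p"], lambda w: not w["f"]),  # penguins typically don't fly
]

def worlds():
    for vals in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, vals))

def tolerated(cond, delta):
    # cond is tolerated by delta if some world verifies cond while
    # satisfying the material counterpart of every conditional in delta
    a, c = cond
    return any(
        a(w) and c(w) and all((not ai(w)) or ci(w) for ai, ci in delta)
        for w in worlds()
    )

def z_partition(delta):
    """System Z partition: repeatedly peel off the tolerated conditionals."""
    delta = list(delta)
    parts = []
    while delta:
        layer = [c for c in delta if tolerated(c, delta)]
        if not layer:
            raise ValueError("conditional knowledge base is inconsistent")
        parts.append(layer)
        delta = [c for c in delta if c not in layer]
    return parts

parts = z_partition(conditionals)
print([len(p) for p in parts])  # → [1, 2]
```

The penguin conditionals end up in the higher layer: worlds falsifying only layer-0 conditionals are considered more plausible than worlds falsifying layer-1 conditionals, which yields the ranking used by rational closure.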
Nina Pardal
Nina Pardal is a Lecturer in Databases in AI and Data Science at the Laboratory for Foundations of Computer Science of the University of Edinburgh. Her research spans logic, knowledge representation and reasoning, and graph theory, with a special focus on formal frameworks for reasoning in the presence of inconsistency and uncertainty. She has worked on repairs and consistent query answering across different data models, semiring semantics, and complexity-theoretic aspects of knowledge representation and reasoning.
Abstract: TBA
Antonio Rago
Antonio Rago is a Lecturer in Computer Science at King’s College London, currently focused on XAI, as well as computational logic and argumentation. Within XAI, his work has mostly concerned the use of techniques from symbolic AI to define, develop, and evaluate explanations of black-box models, including various forms of neural networks. This work has ranged from the interactivity of explanations to the provision of formal robustness guarantees, and its applications include recommender systems, opinion polling in e-democracy, judgmental forecasting, and Formula 1 race strategy.
Timotheus Kampik
Timotheus Kampik is a Fellow of the Wallenberg AI, Autonomous Systems and Software Program and Assistant Professor at Umeå University, Sweden. He also serves as Principal Scientist for Business Process Intelligence at SAP. His research interests include computational argumentation, multi-agent systems, and conceptual processes.
Computational Argumentation and Machine Learning
Abstract: Computational Argumentation (CA) is a collection of approaches for dialectical reasoning, where collections of arguments are typically evaluated to ascertain their acceptability by means of semantics. Recently, CA has gained substantial traction as a research area within knowledge representation and reasoning, notably because of its potential to bridge human and machine reasoning, and symbolic (logic-based) and subsymbolic (machine learning-based) inference. This tutorial gives an overview of how CA and machine learning can be used in tandem to go beyond the boundaries of what is achievable when either of the two is used in isolation. Specifically, it focuses on: (i) how CA can be used to enhance machine learning, e.g., by improving accuracy or facilitating explainability; and (ii) how machine learning can be used for CA-based reasoning, e.g., by improving the computation time or convergence of certain semantics.
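To make the symbolic side concrete: one of the best-known argumentation semantics, the grounded semantics, can be computed as the least fixed point of the characteristic function of a framework. A minimal Python sketch over a small, made-up abstract argumentation framework (the argument names are illustrative only):

```python
# A tiny abstract argumentation framework: arguments and an attack relation.
arguments = {"a", "b", "c", "d"}
attacks = {("a", "b"), ("b", "c"), ("c", "d")}

def attackers(x):
    return {u for (u, v) in attacks if v == x}

def defended(s, x):
    # s defends x if every attacker of x is itself attacked by some member of s
    return all(any((u, y) in attacks for u in s) for y in attackers(x))

def grounded_extension():
    """Least fixed point of the characteristic function F(S) = {x | S defends x}."""
    s = set()
    while True:
        nxt = {x for x in arguments if defended(s, x)}
        if nxt == s:
            return s
        s = nxt

print(sorted(grounded_extension()))  # → ['a', 'c']
```

Here a is unattacked and therefore in, a defends c against b, and c's membership keeps d out: exactly the skeptical core of the framework.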
Emiliano Lorini
Emiliano Lorini is a Senior Researcher (“Directeur de Recherche”) at the Centre National de la Recherche Scientifique (CNRS). He is the Head of the Artificial Intelligence Department at the Institut de Recherche en Informatique de Toulouse, France. In 2014, he was awarded the CNRS Bronze Medal. He is an Associate Editor of the journal Artificial Intelligence. The general aim of his research is to develop logics and formal models that combine logic and game theory to formalize (i) the reasoning and decision-making processes of intelligent agents and their interactions, and (ii) the socio-cognitive and normative concepts underlying these interactions.
Abstract: In this tutorial, I will introduce a family of logical languages and a rule-based semantic framework for formalizing causal reasoning and causal concepts including actual causality and counterfactual conditionals. I will then present techniques for automating these languages, based on satisfiability and model checking. Finally, I will illustrate how these languages and decision procedures can be applied to causal reasoning in the legal domain.
Florent Capelli
Florent Capelli is a junior professor working at CRIL of Université d'Artois in Lens since 2023. After obtaining his PhD from Université Paris Diderot in 2016, he worked at Birkbeck, University of London, and moved to Université de Lille as an assistant professor in 2017. His research focuses on the study of aggregation problems, from enumeration to counting problems, where one aims to understand a large set of solutions without necessarily exploring it completely. He is particularly interested in problems from knowledge compilation, database theory, proof complexity, and optimization. His favorite algorithmic techniques are based on tractable circuits: data structures from the field of knowledge compilation that use syntactic restrictions to offer trade-offs between size and tractability. He regularly serves as a PC member or reviewer for theory, AI, and databases conferences.
Abstract: Knowledge compilation studies algorithms for converting between different representations of Boolean functions. This is particularly relevant when one is interested in gaining a better understanding of the models of a given Boolean function. More often than not, because it is the most natural way of doing it, the models of a Boolean function are defined implicitly as those satisfying a set of constraints. The main problem with this representation is that even finding one model is NP-hard. While SAT solvers often excel at this task, they are less efficient for tasks requiring an understanding of every model, such as counting the satisfying assignments, uniformly sampling them, or finding an "optimal" satisfying assignment with respect to a weight function. On the other hand, listing every model explicitly is often too expensive, and a successful line of research has been to study data structures that represent Boolean functions in a succinct yet tractable way.
In the first part of this tutorial, we will study a few such representations, explain why they are interesting and present the main algorithms used in practice to construct them. We will also showcase a few existing tools and their applications for real problems. In a second part, we will study lower bounds on such representations by exhibiting Boolean functions which cannot be efficiently represented using the data structures from the previous part.
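As a minimal illustration of this trade-off (a sketch, not one of the tools covered in the tutorial): the following Python snippet counts the models of a small CNF by Shannon expansion, caching repeated sub-formulas. The cache of sub-formulas is, in essence, the circuit that a compiler into decision diagrams would build explicitly, and once built it makes counting linear in its size.

```python
from functools import lru_cache

# CNF over variables 1..N as a tuple of clauses; a literal is +v or -v.
# Example formula: (x1 or x2) and (not x1 or x3) and (x2 or not x3)
CNF = ((1, 2), (-1, 3), (2, -3))
N = 3

def condition(cnf, lit):
    """Assign a literal: drop satisfied clauses, remove the falsified literal."""
    out = []
    for clause in cnf:
        if lit in clause:
            continue
        clause = tuple(l for l in clause if l != -lit)
        if not clause:
            return None  # empty clause produced: unsatisfiable
        out.append(clause)
    return tuple(out)

@lru_cache(maxsize=None)
def count(cnf, var):
    """Model count over variables var..N by Shannon expansion.
    Memoising identical sub-formulas is what turns the search tree
    into a compact decision-diagram-like structure."""
    if cnf is None:
        return 0
    if var > N:
        return 1
    return count(condition(cnf, var), var + 1) + count(condition(cnf, -var), var + 1)

print(count(CNF, 1))  # → 3
```

The same compiled structure would also support, with minor changes, uniform sampling and weighted optimisation over the models, which is precisely the appeal of the representations studied in the tutorial.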