Contrastive explanations—a contribution to the framework for social design of explainable AI

Human Interactivity and Language Lab & TRAINCREASE Seminar, online, 1.12.2022, 15:30.
Our Guest will be Prof. Katharina Rohlfing, Paderborn University, with the lecture:
Contrastive explanations—a contribution to the framework for social design of explainable AI 
(research within the framework of a German Collaborative Center)
We will hold the meeting virtually at the link: https://uw-edu-pl.zoom.us/my/hill.uw
Please find the abstract and an introductory paper linked below. During the meeting, Prof. Rohlfing will present her research in the context of the problem of explainability in AI, and we will have a chance to discuss both this topic and ideas on how to connect our research to societal issues while keeping it quite basic in nature.
Abstract
Technological advancements in machine learning affecting humans' lives on the one hand, and regulatory initiatives fostering transparency in algorithmic decision making on the other, drive a recent surge of interest in explainable AI (XAI). Explainability is discussed as a solution to sociotechnical challenges such as intelligent software providing incomprehensible decisions, or big data enabling fast learning while becoming too complex for its achievements to be fully comprehended and judged. With explainable AI, more insights into the functions, decisions, and usefulness of algorithms are expected. If an explanation is successful, it results in understanding. Current XAI research centers on one-way interaction, from which solutions for achieving understanding are derived. In the presentation, I will point to an important resource for achieving understanding that has been overlooked so far: the interaction with the addressee. The A05 project of the TRR 318 gives insights into how cognitive processes should be considered in the design of interaction with the addressee.

Relevant paper, which will facilitate participation in the discussion: https://pub.uni-bielefeld.de/download/2949334/2957341/Rohlfing-etal-2021-TCDS.pdf

Prof. Katharina Rohlfing is a renowned researcher in the domain of language development and human-robot interaction. She received her MA in Linguistics, Philosophy and Media Studies from the University of Paderborn and her Ph.D. in Linguistics from Bielefeld University, Germany, and worked as a DAAD and DFG Fellow at San Diego State University, the University of Chicago and Northwestern University. From 2008 to 2015, she headed the Emergentist Semantics Group within the Center of Excellence Cognitive Interaction Technology at Bielefeld University. She is currently Professor of Psycholinguistics at Paderborn University, where she heads the SprachSpielLabor and serves as Spokeswoman for, and Project Leader in, the Transregional Collaborative Research Centre 318 "Constructing Explainability".