Nathalie Baracaldo, PhD

Topic: Can I Keep My Data? Uncovering the Challenges and Opportunities of Vertical and Horizontal Federated Learning

Data privacy concerns and regulations prevent information from being freely transmitted and shared to a central place. While these regulations aim to ensure that data owners maintain clear control of their data, they also inhibit the training of machine learning (ML) algorithms and the analysis of processes that would benefit multiple stakeholders. Federated learning (FL) has emerged as an alternative that enables multiple data owners to collaboratively train ML models without sharing their data with each other. FL has helped mitigate some of the privacy risks of data exposure; however, multiple open challenges still need to be addressed. In this talk, I will give an overview of the current solutions and opportunities for vertical and horizontal FL and some of the salient challenges that arise in regulated environments. Topics I will discuss include the impact of FL on inference of private data, fairness, and transparency.

Nathalie Baracaldo leads the AI Security and Privacy Solutions team and is a Research Staff Member at IBM’s Almaden Research Center in San Jose, CA. Nathalie is passionate about delivering machine learning solutions that are highly accurate, withstand adversarial attacks, and protect data privacy. Her team focuses on two main areas: federated learning, where models are trained without directly accessing training data, and adversarial machine learning, where defenses are designed to withstand potential attacks on the machine learning pipeline. In 2020, Nathalie received the IBM Master Inventor distinction for her contributions to IBM intellectual property and innovation. She has published more than twenty papers in peer-reviewed conferences and journals, receiving three best paper awards. Nathalie received her Ph.D. degree from the University of Pittsburgh, USA, in 2016.

Iris Reinhartz-Berger, PhD

Topic: How Do Domain Models and Software Relate?

Domain models are representations of areas of knowledge that use common concepts for describing phenomena, requirements, problems, capabilities, and solutions. In traditional software engineering processes, domain models have been manually created and used for designing and developing software. However, domain modeling is time-consuming and error-prone. Moreover, domain models may not be fully clear and coherent for novice developers. Various teams, who may perceive the domain of discourse differently, may develop software systems that introduce inconsistencies and variabilities into the domain model.

In this talk, we will review the challenges in automating domain modeling and discuss how variability analysis can contribute. Specifically, we will concentrate on an ontological and semantic approach that examines software behaviors and uses them to analyze both the variability and the commonality of the domain of discourse. The approach may take as input different types of software artifacts, such as textual requirements, design models, code, and test cases, and results in feature diagrams specifying the main features in the domain and the dependencies among them.

Iris Reinhartz-Berger is the head of the Department of Information Systems, University of Haifa, Israel. She received her MSc and PhD in Information Management Engineering and her BSc in computer science and applied mathematics, all from the Technion – Israel Institute of Technology. Her research interests include conceptual modeling, domain analysis, modeling languages and techniques for analysis and design, and systems development processes. She co-organized a series of domain engineering workshops and co-edited a book entitled “Domain Engineering: Product Lines, Languages, and Conceptual Models”. She co-chairs the EMMSAD – Exploring Modeling Methods for Systems Analysis & Development – working conference. Her research is published in top-ranked conferences and journals, such as ER, CAiSE, IEEE TSE, JSS, and IST.



Dr. Rafael Accorsi

Topic: How to Set Up a Process Analytics Center of Excellence

With the constantly growing availability of large quantities of enterprise data, paired with the constant need to optimize workflows, more and more enterprises are setting out to establish a “competence center” or “center of excellence” (CoE) on process analytics and improvement. Whilst this idea is not particularly radical, it is also far from straightforward. In fact, most of the established CoEs eventually fail to deliver the expected value. So what should one watch for in order to succeed? This talk walks through the main critical “cruxes” to pay attention to in this journey.

Rafael is a Management & Strategy Consulting Director at the PricewaterhouseCoopers Switzerland practice, heading the global team responsible for data-driven process analytics, mining, and excellence.

He has a double-hatted role: using data to deliver large transformation projects on the one hand, and developing new data-driven approaches to tackle business problems on the other. Furthermore, he advises on the overall process mining strategy at PricewaterhouseCoopers. Prior to joining PwC, Rafael was an Assistant Professor at the University of Freiburg, researching in the areas of process mining, data protection, and applied cryptography.

Rafael obtained his PhD in Computer Science from the University of Freiburg and his MSc degree in Mathematical and Computational Logic from the University of Amsterdam.


Prof. Dr. Peter Fettke

Topic: Next-Generation Enterprise Modeling in the Era of Artificial Intelligence and Robotic Process Automation

Dr. Peter Fettke is a professor of business informatics at Saarland University and a principal researcher, research fellow, and group leader at the German Research Center for Artificial Intelligence (DFKI), Saarbrücken. Peter, with his group of about 30 people, is interested in concepts, methods, and techniques at the intersection of business informatics and artificial intelligence, namely the modeling of computer-integrated systems, automated planning, and deep learning. Peter is the author of more than 150 peer-reviewed publications. His work is among the most cited articles in leading international journals on business informatics, and he is one of the top 5 most cited scientists at DFKI. He is also a sought-after reviewer for renowned conferences, journals, and research organizations.