Hi! PARIS Summer School

Keynote Speakers

Joan BRUNA
New York University
Mathematics of Neural Networks in the Billion-parameter Age
Keynote 3
Biography Joan Bruna is an Associate Professor at the Courant Institute, New York University (NYU), in the Department of Computer Science, the Department of Mathematics (affiliated) and the Center for Data Science. He belongs to the CILVR group and to the Math and Data groups. From 2015 to 2016, he was Assistant Professor of Statistics at UC Berkeley and part of BAIR (Berkeley AI Research). Before that, he worked at FAIR (Facebook AI Research) in New York. Prior to that, he was a postdoctoral researcher at the Courant Institute, NYU. He completed his PhD in 2013 at École Polytechnique, France. Before his PhD, he was a research engineer at a semiconductor company, developing real-time video processing algorithms. Even before that, he earned an MSc in Applied Mathematics (MVA) at École Normale Supérieure de Cachan and a BA and MS at UPC (Universitat Politècnica de Catalunya, Barcelona) in both Mathematics and Telecommunication Engineering. For his research contributions, he has been awarded a Sloan Research Fellowship (2018), an NSF CAREER Award (2019), a best paper award at ICMLA (2018) and the IAA Outstanding Paper Award.
Aristides GIONIS
KTH Royal Institute of Technology
Opinion dynamics in online social networks: models and computational methods
Keynote 1
Biography Aristides Gionis is a WASP professor at KTH Royal Institute of Technology and an adjunct professor at Aalto University. He works on algorithms, data mining, graph mining, and social-network analysis.
Helen MARGETTS
University of Oxford
Keynote 4
Biography Helen Margetts is Professor of Society and the Internet and Professorial Fellow at Mansfield College. She is a political scientist specialising in the relationship between digital technology and government, politics and public policy. She is an advocate for the potential of multi-disciplinarity and computational social science for our understanding of political behaviour and the development of public policy in a digital world. She has published over a hundred books, articles and policy reports in this area, including Political Turbulence: How Social Media Shape Collective Action (with Peter John, Scott Hale and Taha Yasseri, 2015); Paradoxes of Modernization (with Perri 6 and Christopher Hood, 2010); Digital Era Governance (with Patrick Dunleavy, 2006, 2008); and The Tools of Government in the Digital Age (with Christopher Hood, 2007). Since 2018, Helen has been Director of the Public Policy Programme at The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. The programme works with policy-makers to research and develop ways of using data science and AI to improve policy-making and service provision, foster government innovation and establish an ethical framework for the use of data science in government. The programme comprises over 25 research projects involving 60 researchers across 10 universities. As well as being programme director, Helen is theme lead for criminal justice in the AI for Science and Government programme and principal investigator on the research projects Hate Speech: Measures and Counter-measures, Social Information and Public Opinion, and Political Volatility.
Balaji PADMANABHAN
University of South Florida
From Artificial Intelligence to Augmented Intelligence
Keynote 2
Biography Balaji Padmanabhan is the Anderson Professor of Global Management at USF’s Muma College of Business, where he is also the Director of the Center for Analytics & Creativity. He has a Bachelor's degree in Computer Science from the Indian Institute of Technology (IIT) Madras and a PhD from New York University (NYU)’s Stern School of Business. He has worked in data science, AI/machine learning and business analytics for 25 years. He has published in data science and related areas at premier journals and conferences in the field and has served on the editorial boards of leading journals including Management Science, MIS Quarterly, INFORMS Journal on Computing, Information Systems Research, Big Data, ACM Transactions on MIS and the Journal of Business Analytics. He also works extensively with businesses on data science problems, and has advised over twenty firms in a variety of industries through consulting, executive teaching and research partnerships.

Tutorial Speakers

Mitali BANERJEE
HEC Paris
Image Recognition Using Deep-Learning: Implementation and Application
Tutorial 3A
Topic's brief abstract This three-hour module will offer a hands-on introduction to deep-learning-based image recognition tools. Participants will gain familiarity with preparing and importing images into software (Python) and applying one of the foundational deep learning architectures to classify the images and create vector representations (see the illustrative sketch below). We will discuss different applications of the output of deep learning tools to extract managerial and scientific insights. In particular, the course will discuss applications of these tools to creating large-scale measures that have otherwise proven elusive to measure or susceptible to measurement bias. Prerequisites:
- Basic knowledge of linear algebra is helpful but not required.
- Basic knowledge of Python (e.g., libraries such as pandas and NumPy) is helpful but not required.
- Basic familiarity with standard OLS regression models: you should be familiar with what it means to estimate relationships between variables using OLS models.
- A Gmail account is required to open the Google Colab notebooks, which will be shared before the class.
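To make the workflow concrete, here is a minimal sketch, assuming PyTorch and torchvision with a pretrained ResNet-50 (the module does not prescribe a specific library, and the image path is hypothetical), that both classifies an image and extracts its vector representation:

import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing expected by pretrained torchvision models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)  # older torchvision: pretrained=True
model.eval()

img = Image.open("example.jpg").convert("RGB")   # hypothetical image file
x = preprocess(img).unsqueeze(0)                 # shape (1, 3, 224, 224)

with torch.no_grad():
    logits = model(x)                            # scores over the 1000 ImageNet classes
    # Vector representation: the penultimate layer, obtained by dropping the classifier head.
    embedder = torch.nn.Sequential(*list(model.children())[:-1])
    embedding = embedder(x).flatten(1)           # shape (1, 2048)

print(logits.argmax(dim=1).item(), embedding.shape)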
Isabelle BLOCH
Sorbonne Université
Hybrid Artificial Intelligence and Image Understanding
Tutorial 3B
Topic's brief abstract The tutorial will review several symbolic AI methods for knowledge representation and reasoning, and show how they can be combined with learning approaches for image understanding. Examples from medical image understanding will illustrate the talk.
Rémi FLAMARY
École polytechnique
Optimal Transport for Machine Learning
Tutorial 1B
Topic's brief abstract This tutorial aims at presenting the mathematical theory of optimal transport (OT) and providing a global view of the potential applications of this theory in machine learning, signal and image processing, and biomedical data processing. The first part of the tutorial will present the theory of optimal transport and its optimization problems through the original formulation of Monge and the Kantorovich formulation in both its primal and dual forms. The algorithms used to solve these problems will be discussed and illustrated on simple examples. We will also introduce the OT-based Wasserstein distance and Wasserstein barycenters, which are fundamental tools for processing histogram data. Finally, we will present recent developments in regularized OT that bring efficient solvers and more robust solutions. The second part of the tutorial will present numerous recent applications of OT in machine learning, signal processing and biomedical imaging. We will see how the mapping inherent to optimal transport can be used to perform domain adaptation and transfer learning. Finally, we will discuss the use of OT on empirical datasets, with applications in generative adversarial networks, unsupervised learning and the processing of structured data such as graphs.
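As a concrete illustration, here is a minimal sketch, assuming the POT (Python Optimal Transport) library (a package choice the abstract itself does not prescribe), that compares the exact Kantorovich solution with an entropy-regularized (Sinkhorn) one on two small empirical distributions:

import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
n, m = 50, 60
xs = rng.normal(loc=0.0, scale=1.0, size=(n, 2))   # source samples
xt = rng.normal(loc=3.0, scale=1.0, size=(m, 2))   # target samples

a = np.ones(n) / n           # uniform weights on the source points
b = np.ones(m) / m           # uniform weights on the target points
M = ot.dist(xs, xt)          # pairwise squared Euclidean cost matrix

G_exact = ot.emd(a, b, M)                 # exact Kantorovich coupling (linear program)
cost_exact = float(np.sum(G_exact * M))   # estimate of the squared 2-Wasserstein distance

G_sinkhorn = ot.sinkhorn(a, b, M, reg=0.1)   # entropy-regularized coupling
cost_sinkhorn = float(np.sum(G_sinkhorn * M))

print(f"exact OT cost: {cost_exact:.3f} | Sinkhorn OT cost: {cost_sinkhorn:.3f}")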
Alexandre GRAMFORT
Inria
Supervised learning on multivariate brain signals
Tutorial 2B
Topic's brief abstract Understanding how the brain works in healthy and pathological conditions is considered one of the major challenges of the 21st century. After the first electroencephalography (EEG) measurements in 1929, the 1990s saw the birth of modern functional brain imaging with the first functional MRI (fMRI) and full-head magnetoencephalography (MEG) systems. Today, new tech companies are developing consumer-grade devices for at-home recordings of neural activity. By noninvasively offering unique insights into the living brain, these technologies have started to revolutionize both clinical and cognitive neuroscience. The availability of such devices, made possible by pioneering breakthroughs in physics and engineering, now poses major computational and statistical challenges in which machine learning plays a major role. In this course you will discover hands-on the types of data one can collect to record the living brain. You will then learn about state-of-the-art supervised machine learning approaches for EEG signals in the clinical context of sleep stage classification as well as brain-computer interfaces. The ML techniques explored are based on deep learning as well as Riemannian geometry, which has proven very powerful for classifying EEG data. You will work with MNE-Python (https://mne.tools), which has become a reference tool for processing MEG/EEG/sEEG/ECoG data in Python, as well as the scikit-learn library (https://scikit-learn.org). For the deep learning part you will use the Braindecode package (https://braindecode.org), based on PyTorch. The teaching will be done hands-on using Jupyter notebooks and public datasets, which you will be able to run on Google Colab. Finally, this tutorial will be a unique opportunity to see what ML can offer beyond standard applications like computer vision, speech or NLP.
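To give a flavour of such a decoding pipeline, here is a minimal sketch with MNE-Python and scikit-learn on MNE's built-in sample dataset; the dataset, channel selection and classifier are illustrative assumptions, not the tutorial's own material:

import os.path as op
import mne
from mne.decoding import Vectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Load MNE's sample dataset (auditory/visual stimulation); downloads data on first run.
data_path = mne.datasets.sample.data_path()
raw_fname = op.join(str(data_path), "MEG", "sample", "sample_audvis_raw.fif")
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1.0, 40.0)  # band-pass filter

# Epoch the data around two event types and decode them from the EEG channels.
events = mne.find_events(raw, stim_channel="STI 014")
event_id = {"auditory/left": 1, "visual/left": 3}
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=0.5,
                    picks="eeg", baseline=(None, 0), preload=True)

X = epochs.get_data()      # (n_epochs, n_channels, n_times)
y = epochs.events[:, 2]    # class labels

clf = make_pipeline(Vectorizer(), StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")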
Julien GRAND-CLEMENT
HEC Paris
Decision-making Under Uncertainty
Tutorial 5A
Topic's brief abstract The goal of this tutorial is to understand how uncertainty impacts classical decision-making models, and its operational and business consequences. Any data-driven decision model may face uncertainty due to errors in the data, errors in the modeling assumptions, or the inherent randomness of the decision process. Overlooking this uncertainty may lead to decisions that are suboptimal, unreliable, or, in some crucial applications, practically infeasible and dangerous for the users. In this tutorial, we will learn to (1) estimate the uncertainty given a decision problem and a dataset, and (2) mitigate the impact of uncertainty with a robust approach. As an application, a robust portfolio management problem will be investigated in detail (see the illustrative sketch below), though we will see that the problem of uncertainty arises in many (if not most) real decision settings. This tutorial is structured as follows:
1. How to estimate the uncertainty in a decision model?
1.a. Motivating examples: what is the practical impact of uncertainty?
1.a.i. Incorrect image classification, demand variability in supply chains, artificial intelligence in healthcare, autonomous driving (e.g., Tesla), robotics, maintenance, inventory optimization, facility location, project management, etc.
1.a.ii. Introduction of the running example: portfolio management.
1.b. Understanding the origin of the uncertainty: poor data, little data, uncertainty inherent to the application? When do we need to take it into account?
1.c. Risk-sensitive decisions vs. parameter uncertainty.
1.d. How to estimate the uncertainty? Examples with Colab simulations and synthetic data for the portfolio management problem.
2. How to mitigate the impact of uncertainty in practice? Robust portfolio management.
2.a. Deterministic approach: pessimism in parameter estimation.
2.b. Robust and distributionally robust approaches: how to obtain decisions with guarantees of good performance.
2.c. Evidence from Colab simulations: trade-offs between nominal and worst-case performance for the portfolio management problem. How to deal with variability?
2.d. (Time permitting) Two-stage decision-making: how to act when uncertainty is revealed over time?
Prerequisites:
- Basic knowledge of statistics (means, confidence intervals, quantiles). Knowing linear programming is a plus.
- For the simulations, all code will be in Python, and a Colab notebook with pre-coded examples will be available for the participants.
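The following minimal sketch (synthetic data and parameters, not the tutorial's Colab notebook) contrasts a nominal portfolio choice with a robust one that maximizes the worst-case expected return over a box uncertainty set around the estimated mean returns:

import numpy as np

rng = np.random.default_rng(0)
n_assets, n_samples = 5, 60
true_mu = rng.normal(0.05, 0.03, size=n_assets)                   # unknown true mean returns
returns = rng.normal(true_mu, 0.15, size=(n_samples, n_assets))   # noisy historical data

mu_hat = returns.mean(axis=0)
delta = 1.96 * returns.std(axis=0, ddof=1) / np.sqrt(n_samples)   # ~95% CI half-width

# Candidate portfolios: random nonnegative weights summing to one (points on the simplex).
W = rng.dirichlet(np.ones(n_assets), size=5000)

nominal_obj = W @ mu_hat            # plug-in expected return
# For nonnegative weights, the worst case over the box [mu_hat - delta, mu_hat + delta]
# is attained at the lower endpoint, so a single dot product suffices.
worst_case_obj = W @ (mu_hat - delta)

w_nominal = W[np.argmax(nominal_obj)]
w_robust = W[np.argmax(worst_case_obj)]

print("True expected return of nominal choice:", w_nominal @ true_mu)
print("True expected return of robust  choice:", w_robust @ true_mu)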
Johan HOMBERT
HEC Paris
Data in Finance: FinTech Lending
Tutorial 1A
Topic's brief abstract This tutorial includes a short lecture followed by an interactive game in which participants play the role of a FinTech lender. Context: banks and insurers increasingly use alternative data and machine learning to screen consumers and price products. For example, a FinTech using digital footprints to predict default will have a competitive edge over traditional banks. However, there are important pitfalls to avoid when using alternative data and machine learning to score consumers, such as the winner’s curse, the risk of discrimination and the Lucas critique. This tutorial and its interactive game provide an introduction to these issues. Prerequisites:
- Multivariate statistical analysis, in particular OLS / logit regressions, and/or machine learning methods.
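For readers who want to see the mechanics, here is a minimal sketch (entirely synthetic data and hypothetical "digital footprint" features) of scoring borrowers' default risk with a logit model in scikit-learn:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
income = rng.lognormal(mean=10, sigma=0.5, size=n)
device_is_mobile = rng.integers(0, 2, size=n)     # illustrative digital-footprint feature
night_time_orders = rng.poisson(1.0, size=n)      # illustrative digital-footprint feature

# Synthetic default probabilities (an assumption, for illustration only).
logit = -2.0 - 0.3 * (np.log(income) - 10) + 0.4 * device_is_mobile + 0.3 * night_time_orders
default = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([np.log(income), device_is_mobile, night_time_orders])
X_train, X_test, y_train, y_test = train_test_split(X, default, test_size=0.3, random_state=0)

scorer = LogisticRegression().fit(X_train, y_train)
scores = scorer.predict_proba(X_test)[:, 1]       # predicted default probabilities
print("Out-of-sample AUC:", round(roc_auc_score(y_test, scores), 3))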
Winston MAXWELL
Télécom Paris
Operationalizing AI Regulation
Tutorial 2A
Topic's brief abstract How will Europe’s future AI regulation impact the design, testing and use of AI applications such as credit scoring, recruitment algorithms, anti-fraud algorithms and facial recognition? We will explore how AI concepts such as explainability, fairness, accuracy, robustness and human oversight will be implemented in the future regulation, and how the regulation compares to other international standards on trustworthy AI. The course will focus on two concrete use cases, facial recognition and credit scoring, to see how the European regulatory framework would apply throughout a project’s lifecycle. Students will be walked through the process of creating a risk management system, including an impact assessment of potential risks to safety and fundamental rights, the development of a list of requirements, testing, performance parameters, documentation, and human oversight mechanisms. We’ll explore the potential friction between the European AI Act and other regulatory frameworks such as the European General Data Protection Regulation (GDPR), and lead a debate on how the future regulation will impact AI innovation and research in Europe.
Klaus MILLER
HEC Paris
Impact of Privacy Regulation on Online Advertising Market: GDPR in Europe
Tutorial 4A
Topic's brief abstract We will discuss the impact of privacy regulation on the online advertising market, focusing specifically on the European Union’s General Data Protection Regulation (GDPR). Participants in this tutorial will learn:
(1) Why and how the GDPR impacts the online advertising market, particularly advertisers, publishers and users.
(2) How advertisers and publishers leverage users’ personal data to pursue their goals.
(3) Which aspects of the GDPR are most relevant for advertisers, publishers and users.
(4) How complex it is to go through the process of obtaining user permission for personal data processing, and how IAB’s Transparency and Consent Framework (TCF) intends to help.
(5) How many firms a publisher provides with access to its users’ data, and how long it takes a user to respond to all permission requests.
(6) Which developments are taking place with regard to personal data processing among players in the online advertising industry, as well as among regulators and consumer protection agencies.
Anyone interested in learning how and why the online advertising industry benefits from using personal data, and how the GDPR impacts this practice, should attend this tutorial. The tutorial is based on the book “The Impact of the General Data Protection Regulation (GDPR) on the Online Advertising Market”, available for free at www.gdpr-impact.com. Prerequisites:
- Read Chapter 1 and Chapter 2 of the referenced book, available at gdpr-impact.com.
- An installed version of base R and RStudio for the empirical analysis of cookie data.
Krikamol MUANDET
Max Planck Institute for Intelligent Systems
Reliable Decision Making and Causal Inference with Kernels
Tutorial 6B
Topic's brief abstract Data-driven decision-making tools have become increasingly prevalent in society, with applications in critical areas like health care, economics, education, and the justice system. To ensure reliable decisions, it is essential that models learn from data the genuine (i.e., causal) relationships between outcomes and decision variables. In this tutorial, I will first give an introduction to the causal inference problem from a machine learning perspective, including causal discovery, treatment effect estimation, instrumental variables (IV), and proxy variables. Then, I will review recent developments in how machine learning (ML) methods, especially modern kernel methods, can be leveraged to tackle some of these problems.
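As a small taste of these ideas, here is a naive two-stage instrumental-variable sketch using kernel ridge regression from scikit-learn on synthetic confounded data; it is a simplification for intuition only, not one of the kernel-based estimators the tutorial will cover:

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n = 2000
z = rng.uniform(-3, 3, size=n)                 # instrument
u = rng.normal(size=n)                         # hidden confounder
x = 0.8 * z + u + 0.1 * rng.normal(size=n)     # treatment, confounded by u
y = np.sin(x) + u + 0.1 * rng.normal(size=n)   # outcome; true causal effect is sin(x)

# Stage 1: predict the treatment from the instrument.
stage1 = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
stage1.fit(z.reshape(-1, 1), x)
x_hat = stage1.predict(z.reshape(-1, 1))

# Stage 2: regress the outcome on the predicted treatment.
stage2 = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
stage2.fit(x_hat.reshape(-1, 1), y)

x_grid = np.linspace(-2, 2, 5).reshape(-1, 1)
print("Estimated causal effect:", stage2.predict(x_grid).round(2))
print("True effect sin(x):     ", np.sin(x_grid.ravel()).round(2))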
Geoffroy PEETERS
Télécom Paris
Learning for audio signals
Tutorial 5B
Topic's brief abstract As in many fields, deep neural networks have enabled important advances in the processing of audio signals. In this tutorial, we review the specificities of these signals, elements of audio signal processing (as used in the traditional machine-learning approach), and how deep neural networks (in particular convolutional ones) can be used to perform feature learning, either without prior knowledge (1D convolutions, TCN) or using prior knowledge (source/filter models, auto-regressive models, HCQT, SincNet, DDSP). We then review the dominant deep learning architectures, meta-architectures and training paradigms (classification, metric learning, supervised, unsupervised, self-supervised, semi-supervised) used in audio. We exemplify the use of these for key applications in music and environmental sound processing: sound event detection, localization, auto-tagging, source separation and generation.
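As a concrete example of feature learning without prior knowledge, here is a minimal sketch (assuming PyTorch, which the tutorial does not prescribe) of a small 1D convolutional network that maps raw waveforms to tag/class scores:

import torch
import torch.nn as nn

class Audio1DConvNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        # Strided 1D convolutions learn filterbank-like features directly from the waveform.
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=64, stride=8), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=4), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=8, stride=2), nn.BatchNorm1d(128), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # global temporal pooling
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, n_samples), e.g. one second of audio at 16 kHz
        h = self.features(waveform).squeeze(-1)
        return self.classifier(h)

model = Audio1DConvNet(n_classes=10)
dummy_batch = torch.randn(4, 1, 16000)         # four random one-second clips
print(model(dummy_batch).shape)                # torch.Size([4, 10])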

Panelists

Grégory BOUTTE
Kering
Industry Panel
HI! PARIS Corporate Donor Representative of KERING
Chief Client & Digital Officer, KERING
Guillaume DUBRULE
REXEL
Industry Panel
HI! PARIS Corporate Donor Representative of REXEL
Group Purchasing and Supplier Relationship Director, Rexel
François LEMAISTRE
Vinci
Industry Panel
HI! PARIS Corporate Donor Representative of Vinci
Managing Director of VINCI Energies

Get in Touch

Prof. Gaël RICHARD

Executive Director

contact@hi-paris.fr

Phone

+33 (0)1 75 31 96 60

Copyright © 2022 • Hi! PARIS • All rights reserved