STIMULATE PhD School at Rome “Tor Vergata”, September 14-18, 2020

We are glad to announce the ONLINE PhD School on 

Machine and Reinforcement Learning, Rare Events and Tensor Networks

which will be held on the Zoom platform over the period  September 14 — September 18, 2020. The school forms part of the training activities of the European Joint Doctorate 

STIMULATE: SimulaTIon in MUltiscaLe physicAl and biological sysTEms

a multidisciplinary training network funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 765048.

The University of Rome Tor Vergata, in collaboration with the STIMULATE European Joint Doctorate programme, presents the ONLINE school "Machine and Reinforcement Learning, Rare Events and Tensor Networks", to be held on the Zoom platform from September 14 to September 18, 2020. The school provides an introductory course on deep learning and reinforcement learning methods, with applications ranging from computer vision to data processing for biological and physical systems. It also covers rare events, which underlie many natural as well as anthropogenic phenomena, and the techniques used to sample them. Finally, participants will be introduced to the theory of tensor networks, which is fundamental to understanding the role of entanglement in many-body quantum systems.

A detailed program is available on the official webpage: http://www.stimulate-ejd.eu

Listeners are welcome to join the school by sending an email to phdschool.stimulate.rome@gmail.com

STIMULATE school Program

Monday 14th at 9.15: Introduction to the school

Time         | Monday 14          | Tuesday 15         | Wednesday 16       | Thursday 17        | Friday 18
09.15/09.30  | Introduction       |                    |                    |                    |
09.30/11.00  | Antonio Celani     | Antonio Celani     | Antonio Celani     | Antonio Celani     | Antonio Celani
11.00/11.30  | Coffee break       | Coffee break       | Coffee break       | Coffee break       | Coffee break
11.30/12.15  | Bernhard Mehlig    | Bernhard Mehlig    | Bernhard Mehlig    | Michele Buzzicotti | Michele Buzzicotti
12.15/12.45  | Q & A              | Q & A              | Q & A              | Q & A              | Q & A
             | Lunch break        | Lunch break        | Lunch break        | Lunch break        | Lunch break
14.00/14.45  | Pietro Faccioli    | Pietro Faccioli    | Pietro Faccioli    | Pietro Faccioli    | Pietro Faccioli
14.45/15.15  | Coffee break       | Coffee break       | Coffee break       | Coffee break       | Coffee break
15.15/16.00  | Mari Carmen Banuls | Mari Carmen Banuls | Mari Carmen Banuls | Mari Carmen Banuls | Nazario Tantalo
16.00/16.45  | Q & A              | Q & A              | Q & A              | Q & A              | Q & A
16.45/17.00  |                    |                    |                    |                    | Conclusions

Friday 18th at 16.45: Conclusions

Celani: Introduction to Reinforcement Learning

Mehlig: Introduction to Deep Learning

Faccioli: Rare events

Banuls: Tensor networks: basic introduction

Buzzicotti: Deep Learning and RL applications

Tantalo: Inverse problems

DETAILED PROGRAM

Day 1, MONDAY Sept. 14th

09:15-09:30 INTRODUCTION TO THE SCHOOL

09:30-11:00 Lecture 1 & 2 

Speaker: Antonio Celani

Title: What is Reinforcement Learning? Basic concepts and examples

Material: see links below

11:00-11:30 COFFEE BREAK

11:30-12:15 Lecture 3

Speaker: Bernhard Mehlig

Title: Introduction To Neural Networks

Material:  Mehlig, Bernhard. “Artificial neural networks.” arXiv preprint arXiv:1901.05639 (2019). https://arxiv.org/pdf/1901.05639. Chapter 1 

12:15-12:45 Questions & Answers

12:45-14:00 LUNCH BREAK

14:00-14:45 Lecture 4

Speaker: Pietro Faccioli

Title: Stochastic dynamics in classical open systems

Material: https://pietrofaccioli.wixsite.com/physics/statistical-field-theory
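
For readers who want to experiment before the lecture, here is a minimal, self-contained sketch (not taken from the lecture notes linked above) of overdamped Langevin dynamics in a double-well potential, integrated with the Euler-Maruyama scheme; the potential and all parameters are chosen purely for illustration.

# A minimal sketch (illustrative only): overdamped Langevin dynamics
# dx = -U'(x) dt + sqrt(2/beta) dW, integrated with Euler-Maruyama,
# for the double-well potential U(x) = (x^2 - 1)^2.
import numpy as np

def grad_U(x):
    """Gradient of the double-well potential U(x) = (x^2 - 1)^2."""
    return 4.0 * x * (x**2 - 1.0)

def langevin_trajectory(x0=-1.0, beta=4.0, dt=1e-3, n_steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    noise_amp = np.sqrt(2.0 * dt / beta)   # thermal noise amplitude
    for t in range(1, n_steps):
        x[t] = x[t-1] - grad_U(x[t-1]) * dt + noise_amp * rng.standard_normal()
    return x

traj = langevin_trajectory()
# At low temperature (large beta) the trajectory stays near x = -1 for long
# stretches; hops to the other well at x = +1 are exactly the kind of rare
# events discussed in this course.
print("fraction of time in the right-hand well:", np.mean(traj > 0.0))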

14:45-15:15 COFFEE BREAK

15:15-16:00 Lecture 5

Speaker: Mari Carmen Banuls

Title: Quantum information: introductory concepts

Material: Book “Quantum Computation and Quantum Information”, M. Nielsen & I. Chuang  (Cambridge University Press) Chapter 2 

16:00-16:45 Questions & Answers

———————————————————————————————————–

Day 2, TUESDAY Sept. 15th

09:30-11:00 Lectures 6 & 7 

Speaker: Antonio Celani

Title: Computing optimal strategies: Markov decision processes (MDP), Dynamic Programming, Value Iteration

Material: see links below
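
As a concrete illustration of the value-iteration part of these lectures, here is a minimal sketch on a toy Markov decision process; the three-state transition table and rewards below are invented for the example and are not taken from the course material.

# A minimal value-iteration sketch for an invented toy MDP (illustrative only).
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
# P[a, s, s'] = probability of moving from s to s' under action a
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],   # action 1
])
# R[a, s] = expected immediate reward for taking action a in state s
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.5, 2.0]])

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman optimality update: V(s) <- max_a [ R(a,s) + gamma * sum_s' P(a,s,s') V(s') ]
    Q = R + gamma * P @ V            # shape (n_actions, n_states)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)            # greedy policy w.r.t. the converged values
print("V* =", np.round(V, 3), "policy =", policy)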

11:00-11:30 COFFEE BREAK

11:30-12:15 Lecture 8 

Speaker: Bernhard Mehlig

Title: Supervised Learning / Stochastic gradient descent

Material: 

Mehlig, Bernhard. “Artificial neural networks.” arXiv preprint arXiv:1901.05639 (2019). https://arxiv.org/pdf/1901.05639 – Chapters 5-6 
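
To make the stochastic-gradient-descent topic concrete, here is a minimal sketch of mini-batch SGD on a synthetic least-squares problem; the data and hyper-parameters are invented for illustration and are not code from the lecture notes.

# Mini-batch stochastic gradient descent on synthetic linear-regression data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # inputs
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=1000)   # noisy targets

w = np.zeros(3)
lr, batch_size = 0.05, 32
for epoch in range(20):
    perm = rng.permutation(len(X))             # reshuffle the data each epoch
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        err = X[idx] @ w - y[idx]
        grad = X[idx].T @ err / len(idx)       # gradient of the mean squared error
        w -= lr * grad                         # SGD update
print("estimated weights:", np.round(w, 3))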

12:15-12:45 Questions & Answers

12:45-14:00 LUNCH BREAK

14:00-14:45 Lecture 9

Speaker: Pietro Faccioli

Title: Rare events and path sampling problem

Material: https://pietrofaccioli.wixsite.com/physics/statistical-field-theory

14:45-15:15 COFFEE BREAK

15:15-16:00 Lecture 10

Speaker: Mari Carmen Banuls

Title: Matrix Product States: Tensor Networks in 1D

Material: Renormalization and tensor product states in spin chains and lattices, J.I. Cirac & F. Verstraete

A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States, R. Orús
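
As a small complement to the reading material, the following sketch shows one standard way (sequential singular value decompositions) to rewrite a generic state vector as a matrix product state; it is illustrative only and performs no bond-dimension truncation.

# Decompose an N-qubit state vector into MPS tensors by sequential SVDs.
import numpy as np

def to_mps(psi, n_sites, d=2):
    """Decompose a state vector of n_sites d-level systems into MPS tensors."""
    tensors = []
    rest = psi.reshape(1, -1)                 # virtual bond of dimension 1 on the left
    for site in range(n_sites - 1):
        chi_left = rest.shape[0]
        rest = rest.reshape(chi_left * d, -1) # group (left bond, physical index)
        U, S, Vh = np.linalg.svd(rest, full_matrices=False)
        tensors.append(U.reshape(chi_left, d, -1))   # site tensor A[chi_l, s, chi_r]
        rest = np.diag(S) @ Vh                # carry the remainder to the next site
    tensors.append(rest.reshape(rest.shape[0], d, 1))
    return tensors

# Example: a random 6-qubit state.
n = 6
psi = np.random.default_rng(1).normal(size=2**n) + 0j
psi /= np.linalg.norm(psi)
mps = to_mps(psi, n)
print("bond dimensions:", [A.shape[2] for A in mps[:-1]])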

16:00-16:45 Questions & Answers

———————————————————————————————————–

Day 3, WEDNESDAY Sept. 16th

09:30-11:00 Lectures 11 & 12 

Speaker: Antonio Celani

Title: Decision-making with incomplete information: MDP with function approximation, Partially Observable MDP 

Material: see links below

11:00-11:30 COFFEE BREAK

11:30-12:15 Lecture 13 

Speaker: Bernhard Mehlig

Title: Deep Learning

Material: 

Mehlig, Bernhard. “Artificial neural networks.” arXiv preprint arXiv:1901.05639 (2019). https://arxiv.org/pdf/1901.05639 Chapter 7

12:15-12:45 Questions & Answers

12:45-14:00 LUNCH BREAK

14:00-14:45 Lecture 14

Speaker: Pietro Faccioli

Title: Statistical Mechanics of transition pathways (1/2)

Material: https://pietrofaccioli.wixsite.com/physics/statistical-field-theory

14:45-15:15 COFFEE BREAK

15:15-16:00 Lecture 15

Speaker: Mari Carmen Banuls

Title: Tensor Networks in higher dimensions

Material: A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States, R. Orús

16:00-16:45 Questions & Answers

———————————————————————————————————–

Day 4, THURSDAY Sept. 17th

09:30-11:00 Lectures 16 & 17 

Speaker: Antonio Celani

Title: Learning without a model: Temporal-difference and policy-gradient algorithms

Material: see links below
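
For the temporal-difference part of these lectures, here is a minimal tabular Q-learning sketch on an invented five-state chain environment; it only illustrates the TD(0) update, not the policy-gradient methods also covered in the course.

# Tabular Q-learning (TD) on a toy chain: move left/right, reward 1 at the right end.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.95, 0.1
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    """Environment: deterministic chain, the episode ends at the right end."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy behaviour policy with random tie-breaking
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s_next, r, done = step(s, a)
        # TD(0) target: r + gamma * max_a' Q(s', a'), zero beyond terminal states
        target = r + (0.0 if done else gamma * Q[s_next].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

print("greedy policy for non-terminal states (1 = right):", Q[:-1].argmax(axis=1))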

11:00-11:30 COFFEE BREAK

11:30-12:15 Lecture 18 

Speaker: Michele Buzzicotti

Title: Generative Adversarial Neural Networks

Material: 

Buzzicotti, et al. “Reconstruction of turbulent data with deep generative models for semantic inpainting from TURB-Rot database.” arXiv:2006.09179 (2020), https://arxiv.org/pdf/2006.09179

12:15-12:45 Questions & Answers

12:45-14:00 LUNCH BREAK

14:00-14:45 Lecture 19

Speaker: Pietro Faccioli

Title: Statistical Mechanics of transition pathways (2/2)

Material: https://pietrofaccioli.wixsite.com/physics/statistical-field-theory

14:45-15:15 COFFEE BREAK

15:15-16:00 Lecture 20

Speaker: Mari Carmen Banuls

Title: Tensor Network Algorithms

Material: Matrix Product States, Projected Entangled Pair States, and variational renormalization group methods for quantum spin systems, F. Verstraete, J.I. Cirac, V. Murg

The density-matrix renormalization group in the age of matrix product states, U. Schollwöck

The Tensor Networks Anthology: Simulation techniques for many-body quantum lattice systems, P. Silvi et al.

16:00-16:45 Questions & Answers

———————————————————————————————————–

Day 5, FRIDAY Sept. 18th

09:30-11:00 Lectures 21 & 22 

Speaker: Antonio Celani

Title: Reinforcement Learning at work: case studies and the road ahead

Material: Sutton and Barto’s book (see below)

11:00-11:30 COFFEE BREAK

11:30-12:15 Lecture 23

Speaker: Michele Buzzicotti

Title: GAN Python implementation

Material: Data and code example, https://colab.research.google.com/drive/1SWTuIXQGZY7FgF-CLJo_RY26crJNH37j?usp=sharing, Chapter 16
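
The linked notebook is the reference implementation for this lecture; as a rough idea of what a GAN training loop looks like, here is a generic minimal sketch in PyTorch (fitting a 1D Gaussian). It is not the notebook's code, and all architectures and parameters are chosen only for illustration.

# Minimal GAN: a generator maps noise to samples of a target 1D Gaussian,
# a discriminator tries to tell real samples from generated ones.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # "data": samples of N(2, 0.5)
    fake = G(torch.randn(64, 1))

    # Discriminator update: push real -> 1, fake -> 0
    opt_D.zero_grad()
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_D.backward()
    opt_D.step()

    # Generator update: fool the discriminator (fake -> 1)
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(64, 1))
    loss_G.backward()
    opt_G.step()

print("generated sample mean:", G(torch.randn(1000, 1)).mean().item())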

12:15-12:45 Questions & Answers

12:45-14:00 LUNCH BREAK

14:00-14:45 Lecture 24

Speaker: Pietro Faccioli

Title: Enhanced path sampling algorithms 

Material: https://pietrofaccioli.wixsite.com/physics/statistical-field-theory

14:45-15:15 COFFEE BREAK

15:15-16:00 Lecture 25

Speaker: Nazario Tantalo

Title: A numerical approach to inverse problems

Material: Numerical Recipes in C: The Art of Scientific Computing, Chapter 18. https://arxiv.org/abs/1903.06476
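
As a small preview of why inverse problems require regularization, here is a minimal sketch comparing a plain (SVD-based) inversion of an ill-conditioned smoothing kernel with a Tikhonov-regularized one; the toy kernel, data, and regularization parameter are invented for illustration and are not taken from the lecture material.

# Inverting a smooth (ill-conditioned) kernel with and without Tikhonov regularization.
import numpy as np

n = 50
s = np.linspace(0.0, 1.0, n)
x_true = np.sin(np.pi * s)                              # "true" signal
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.01)    # smoothing kernel
K /= K.sum(axis=1, keepdims=True)
y = K @ x_true + 1e-3 * np.random.default_rng(0).normal(size=n)   # noisy data

U, S, Vt = np.linalg.svd(K)
coeffs = U.T @ y
x_naive = Vt.T @ (coeffs / S)                  # plain inverse: divides by tiny singular values
lam = 1e-3                                     # Tikhonov parameter (chosen by hand here)
x_reg = Vt.T @ (S * coeffs / (S**2 + lam))     # filtered inverse, min ||Kx - y||^2 + lam ||x||^2

print("naive inversion error:      ", np.linalg.norm(x_naive - x_true))
print("regularized inversion error:", np.linalg.norm(x_reg - x_true))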

16:00-16:45 Questions & Answers

16:45-17:00 CONCLUSIONS

Special Thanks to:

Speakers

Prof. Antonio Celani

SHORT CV

Antonio Celani earned his PhD at Politecnico di Torino, Italy, in 1998. After post-doctoral fellowships at the Max Planck Institute in Munich and at the Observatoire de la Côte d’Azur, he became a researcher at the French National Research Council (CNRS) in 2000. In 2007 he joined the Institut Pasteur in Paris as a research director, where he started working at the interface between physics and biology. Since 2014 he has also been a research scientist at The Abdus Salam International Center for Theoretical Physics (ICTP), where he works on subjects at the interface between physics, biology, and artificial intelligence.

Title of the course: An introduction to Reinforcement Learning

MATERIAL

R. Sutton and A. Barto: Reinforcement Learning: an introduction. 2nd edition

Littman, M.L. Reinforcement learning improves behaviour from evaluative feedback. Nature 521, 445–451 (2015).

Neftci, E.O., Averbeck, B.B. Reinforcement learning in artificial and biological systems. Nat Mach Intell 1, 133–143 (2019).

Online lectures from my course at ICTP and UNITS (they contain all the maths that I will skim over during my lectures here)

https://drive.google.com/drive/folders/1PdAqcRgrBXHzmbfjt8x9X-IeF7SHxPCc?usp=sharing

Prof. Bernhard Mehlig

SHORT CV:

Bernhard Mehlig, born December 6, 1964, is a German physicist and professor of complex systems at the University of Gothenburg. In 2010 he received the Göran Gustafsson Prize in Physics “for paving new paths in statistical physics. He has solved problems in complex systems and turbulence that were previously considered difficult to find analytical solutions to. This includes, for example, the question of how rain is formed in clouds.”

Title of the course: Introduction to Deep Learning

MATERIAL:

Mehlig, Bernhard. “Artificial neural networks.” arXiv preprint arXiv:1901.05639 (2019). https://arxiv.org/pdf/1901.05639

Prof. Pietro Faccioli

SHORT CV:

Pietro Faccioli graduated from Trento University and received his PhD from Stony Brook University. He was a post-doctoral associate at the European Centre for Theoretical Studies in Nuclear Physics and Related Areas and a visiting scientist at CEA-Saclay (France). Since 2005 he has been affiliated with the Physics Department of Trento University, where he is now an associate professor and a permanent member of INFN. His research spans from non-perturbative methods in quantum field theory to biophysics and dynamics in open quantum systems. He is a co-founder of Sibylla Biotech SRL, a research startup developing a novel approach to drug discovery based on computational biophysics.

Title of the course: Rare events

MATERIAL: Notes can be downloaded at:

https://pietrofaccioli.wixsite.com/physics/statistical-field-theory

Prof. Mari Carmen Banuls

SHORT CV: see CV_short_MCBP_August2020.pdf

The focus of my research is the development and application of Tensor Network methods for the numerical simulation of quantum many body systems.

The term Tensor Network States (TNS) has become a common one in the context of numerical studies of quantum many-body problems. It refers to a number of families that represent different approaches for the efficient description of the state of a quantum many-body system.

My interest lies in the further development of Tensor Network tools, in particular for the investigation of non-equilibrium dynamical problems, and in the application of this toolbox to quantum many-body problems outside the realm of condensed matter physics, for instance, lattice gauge theories and classical stochastic models.

Title of the course: Tensor networks: basic introduction

MATERIAL:

Book “Quantum Computation and Quantum Information”, M. Nielsen & I. Chuang  (Cambridge University Press) Chapter 2

Renormalization and tensor product states in spin chains and lattices, J.I. Cirac & F. Verstraete

A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States, R. Orús

Matrix Product States, Projected Entangled Pair States, and variational renormalization group methods for quantum spin systems, F. Verstraete, J.I. Cirac, V. Murg

The density-matrix renormalization group in the age of matrix product states, U. Schollwöck

The Tensor Networks Anthology: Simulation techniques for many-body quantum lattice systems, P. Silvi et al.

Prof. Nazario Tantalo 

SHORT CV:

Nazario Tantalo, born in Italy in 1978, obtained his master’s degree in Physics in 2001 at the University of Rome “La Sapienza” under the supervision of Prof. N. Cabibbo and his Ph.D. in Physics in 2005 at the University of Rome “Tor Vergata” under the supervision of Prof. R. Petronzio. He has been an Associate Professor in Theoretical Physics at the University of Rome “Tor Vergata” since 2017. His research activity is mostly focused on the non-perturbative phenomenology of strongly interacting particles. His contributions to this field range from formal theoretical developments within the framework of constructive field theory to state-of-the-art phenomenological predictions and algorithmic/methodological developments within the framework of large-scale non-perturbative lattice simulations.

Title of the course: A numerical approach to inverse problems

MATERIAL: Numerical Recipes in C: The Art of Scientific Computing, Chapter 18. https://arxiv.org/abs/1903.06476

Dott. Michele Buzzicotti

SHORT CV:

Michele Buzzicotti obtained his PhD in theoretical physics in 2017 at the University of Rome Tor Vergata, where he now holds an appointment as a researcher. His main research activity is the study of turbulent flows using numerical simulations. Working from both the Eulerian and Lagrangian points of view, he is interested in the development of non-linear, out-of-equilibrium models, such as Large-Eddy-Simulation closures for the small-scale dynamics of high-Reynolds-number or magnetohydrodynamic flows. He is responsible for the development of High-Performance Computing (HPC) codes for state-of-the-art Direct Numerical Simulations (DNS), which typically run in different supercomputing centers and can scale up to tens of thousands of processing cores. He is also interested in the application and development of Artificial Intelligence (AI) tools for fluid-flow problems, such as deep learning for the data analysis of turbulent flows and reinforcement-learning/policy-gradient methods for the development of smart swimmers able to solve optimal navigation tasks in complex environments.

Title of the course: Data reconstruction with artificial neural networks

MATERIAL:

Buzzicotti, et al. “Reconstruction of turbulent data with deep generative models for semantic inpainting from TURB-Rot database.” arXiv:2006.09179 (2020).

https://arxiv.org/pdf/2006.09179

Biferale, et al. “TURB-Rot. A large database of 3d and 2d snapshots from turbulent rotating flows.” arXiv preprint arXiv:2006.07469 (2020).

http://smart-turb.roma2.infn.it

The Organizing Committee: Luca Biferale, Michele Buzzicotti, Madeleine Dale, Roberto Frezzotti, Fabio Guglietta, Mauro Sbragaglia, Nazario Tantalo 

Email to:  phdschool.stimulate.rome@gmail.com