Tutorial

Seungjin Choi, Vice President
(BARO AI)

Education: Ph.D., University of Notre Dame (1996)
Experience: Professor, Department of Computer Science and Engineering, POSTECH (2001-2019)
Advisory Professor, Samsung Advanced Institute of Technology, Samsung Electronics (2018)
Advisory Professor, Samsung Research (2017-2018)
Advisory Professor, Shinhan Card Big Data Center (2016-2017)

Publications: Numerous papers at ICML, NeurIPS, AISTATS, AAAI, IJCAI, CVPR, and other venues
Main research: Statistical machine learning, probabilistic models and inference
Research interests: Meta-learning, Bayesian optimization, deep generative models

Talk Title

Variational Inference

Abstract

A probabilistic model is represented by a joint distribution over a set of hidden variables and a set of observed variables. Probabilistic inference involves calculating posterior distributions over the hidden variables given the observations. The two big hammers for probabilistic inference are (1) sampling methods and (2) variational inference, the topic of this tutorial. I begin with the underlying mathematical preliminaries, such as Shannon's entropy, KL-divergence, and variational methods. Then we will see how the variational method is applied to Bayesian hierarchical models, leading to 'variational inference'. I will explain how variational inference is applied to a few exemplary probabilistic models, including the variational mixture of Gaussians, variational PCA, variational linear regression, and variational logistic regression. Finally, I will discuss deep probabilistic models built on variational inference, from variational autoencoders to the neural statistician and neural processes. This three-hour tutorial is intended for those who wish to dive into the world of probabilistic models.
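As a brief preview of the central identity (standard material, stated here for orientation rather than taken from the tutorial slides): for observed variables x, hidden variables z, and any variational distribution q(z), the log evidence decomposes as

    \log p(x) = \underbrace{\mathbb{E}_{q(z)}\!\left[\log \frac{p(x,z)}{q(z)}\right]}_{\text{ELBO}\ \mathcal{L}(q)} + \mathrm{KL}\!\left(q(z) \,\|\, p(z \mid x)\right)

Since the KL term is nonnegative, the ELBO is a lower bound on \log p(x); maximizing it over q tightens the bound and pulls q(z) toward the true posterior.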

1. Hour-by-Hour Lecture Plan

1. Overview of variational inference
   - Entropy & KL-divergence
   - Variational methods
   - Evidence lower bound (ELBO; a small numerical sketch follows this table)
   - Bayesian hierarchical models and variational EM

2. Probabilistic models
   - Variational mixture of Gaussians
   - Variational PCA
   - Variational linear regression
   - Variational logistic regression

3. Deep probabilistic models
   - Amortized inference
   - Variational autoencoders
   - Neural statistician
   - Neural processes
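Before part 1, a minimal Python sketch may help make the ELBO concrete. This is a hypothetical illustration, not taken from the tutorial materials: for the toy conjugate model z ~ N(0, 1), x | z ~ N(z, sigma^2), the exact log evidence is available in closed form, so a Monte Carlo estimate of the ELBO can be checked against it.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma = 0.5    # likelihood noise standard deviation
x_obs = 1.3    # a single observation

def elbo(mu_q, s_q, n_samples=100_000):
    """Monte Carlo estimate of E_q[log p(x, z) - log q(z)]."""
    z = rng.normal(mu_q, s_q, n_samples)                    # z ~ q(z)
    log_joint = norm.logpdf(x_obs, z, sigma) + norm.logpdf(z, 0.0, 1.0)
    log_q = norm.logpdf(z, mu_q, s_q)
    return np.mean(log_joint - log_q)

# Exact log evidence: marginally, x ~ N(0, 1 + sigma^2).
log_evidence = norm.logpdf(x_obs, 0.0, np.sqrt(1.0 + sigma**2))

# The exact posterior is Gaussian; setting q to it makes the bound tight.
post_mean = x_obs / (1.0 + sigma**2)
post_std = np.sqrt(sigma**2 / (1.0 + sigma**2))

print(f"log p(x)              = {log_evidence:.4f}")
print(f"ELBO at the posterior = {elbo(post_mean, post_std):.4f}")
print(f"ELBO at a crude q     = {elbo(0.0, 1.0):.4f}")

With q equal to the exact posterior, the two quantities agree up to Monte Carlo error; any other q yields a strictly smaller ELBO, and that gap, KL(q || p(z | x)), is exactly what variational inference minimizes.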


2. References
- T. Jaakkola and M. Jordan (2000), "Bayesian parameter estimation via variational methods," Statistics and Computing.
- D. MacKay (2003), "Information Theory, Inference, and Learning Algorithms," Cambridge University Press.
- M. Beal and Z. Ghahramani (2003), "The variational Bayesian EM algorithm for incomplete data: With application to scoring graphical model structures," Bayesian Statistics 7.
- C. Bishop (2006), "Pattern Recognition and Machine Learning," Springer.
- M. Wainwright and M. Jordan (2008), "Graphical models, exponential families, and variational inference," Foundations and Trends in Machine Learning.
- K. Murphy (2012), "Machine Learning: A Probabilistic Perspective," The MIT Press.
- D. P. Kingma and M. Welling (2014), "Auto-encoding variational Bayes," ICLR.
- H. Edwards and A. Storkey (2017), "Towards a neural statistician," ICLR.
- M. Garnelo et al. (2018), "Conditional neural processes," ICML.
- M. Garnelo et al. (2018), "Neural processes," ICML Workshop on Theoretical Foundations and Applications of Deep Generative Models.

Prerequisites

Only basic knowledge of probability is required. Familiarity with machine learning helps, but anyone who wants to learn probabilistic models is welcome.

