Probabilistic Graphical Models

Start date: 04/22/2022    Duration: 11 weeks

Platform: CourseraArchive

Category: Computer Science

Institution: Stanford University

Instructor: Daphne Koller

Course homepage: https://www.coursera.org/course/pgm

Course reviews: 5 reviews


Course Details

What are Probabilistic Graphical Models?

Uncertainty is unavoidable in real-world applications: we can almost never predict with certainty what will happen in the future, and even in the present and the past, many important aspects of the world are not observed with certainty. Probability theory gives us the basic foundation to model our beliefs about the different possible states of the world, and to update these beliefs as new evidence is obtained. These beliefs can be combined with individual preferences to help guide our actions, and even to select which observations to make. While probability theory has existed since the 17th century, our ability to use it effectively on large problems involving many inter-related variables is fairly recent, and is due largely to the development of a framework known as Probabilistic Graphical Models (PGMs). This framework, which spans methods such as Bayesian networks and Markov random fields, uses ideas from discrete data structures in computer science to efficiently encode and manipulate probability distributions over high-dimensional spaces, often involving hundreds or even many thousands of variables. These methods have been used in an enormous range of application domains, which include: web search, medical and fault diagnosis, image understanding, reconstruction of biological networks, speech recognition, natural language processing, decoding of messages sent over a noisy communication channel, robot navigation, and many more. The PGM framework provides an essential tool for anyone who wants to learn how to reason coherently from limited and noisy observations.
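To make the idea of "efficiently encoding a distribution" concrete, here is a minimal sketch (not course material; the network, CPT values, and variable names are the textbook-style Sprinkler example, chosen for illustration only) of how a Bayesian network factorizes a joint distribution into small conditional probability tables, and how a query can then be answered by summing over the joint:

```python
from itertools import product

# CPTs of a tiny Bayesian network Rain -> Sprinkler -> GrassWet (Rain also
# points to GrassWet). All probabilities are illustrative, not from the course.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},   # P(S | Rain=True)
               False: {True: 0.4, False: 0.6}}    # P(S | Rain=False)
P_wet = {(True, True): 0.99, (True, False): 0.9,  # keyed by (sprinkler, rain)
         (False, True): 0.8, (False, False): 0.0}

def joint(r, s, w):
    """P(R=r, S=s, W=w) via the chain-rule factorization of the graph."""
    p = P_rain[r] * P_sprinkler[r][s]
    p_w = P_wet[(s, r)]
    return p * (p_w if w else 1 - p_w)

# Inference by enumeration: P(Rain=True | GrassWet=True).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 4))  # → 0.3577
```

The point of the factorization is that three small tables (2 + 4 + 4 numbers) replace a full joint table over all variable combinations; with hundreds of variables that gap becomes astronomical, which is what makes the PGM representation practical.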

In this class, you will learn the basics of the PGM representation and how to construct these models, using both human knowledge and machine learning techniques; you will also learn algorithms for using a PGM to reach conclusions about the world from limited and noisy evidence, and for making good decisions under uncertainty. The class covers both the theoretical underpinnings of the PGM framework and practical skills needed to apply these techniques to new problems.

Syllabus

Topics covered include:

  1. The Bayesian network and Markov network representation, including extensions for reasoning over domains that change over time and over domains with a variable number of entities
  2. Reasoning and inference methods, including exact inference (variable elimination, clique trees) and approximate inference (belief propagation message passing, Markov chain Monte Carlo methods)
  3. Learning parameters and structure in PGMs
  4. Using a PGM for decision making under uncertainty.
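As a rough illustration of the exact-inference topic above, here is a sketch of variable elimination on a three-node chain A → B → C (all variable names and numbers are invented for this example, not taken from the course): eliminating a variable means multiplying the factors that mention it and summing it out, producing a smaller intermediate factor.

```python
# CPTs for a chain A -> B -> C over binary variables (illustrative numbers).
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}  # keyed (a, b)
P_C_given_B = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.5, (1, 1): 0.5}  # keyed (b, c)

# Eliminate A: tau1(b) = sum_a P(a) * P(b | a)
tau1 = {b: sum(P_A[a] * P_B_given_A[(a, b)] for a in (0, 1)) for b in (0, 1)}

# Eliminate B: P(c) = sum_b tau1(b) * P(c | b)
P_C = {c: sum(tau1[b] * P_C_given_B[(b, c)] for b in (0, 1)) for c in (0, 1)}

print(P_C)
```

Each elimination step touches only the factors involving the current variable, so on a chain the cost stays linear in the number of variables instead of exponential; choosing a good elimination order for general graphs is one of the problems the course's clique-tree material addresses.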

There will be short weekly review quizzes and programming assignments (Octave/Matlab) focusing on case studies and applications of PGMs to real-world problems:

  1. Credit Scoring and Factors
  2. Modeling Genetic Inheritance and Disease
  3. Markov Networks and Optical Character Recognition (OCR)
  4. Inference: Belief Propagation
  5. Markov Chain Monte Carlo and Image Segmentation
  6. Decision Theory: Arrhythmogenic Right Ventricular Dysplasia
  7. Conditional Random Field Learning for OCR
  8. Structure Learning for Identifying Skeleton Structure
  9. Human Action Recognition with Kinect

To prepare for the class in advance, you may consider reading through the following sections of the textbook (discount code DKPGM12) by Daphne Koller and Nir Friedman:

  1. Introduction and Overview. Chapters 1, 2.1.1 - 2.1.4, 4.2.1.
  2. Bayesian Network Fundamentals. Chapters 3.1 - 3.3.
  3. Markov Network Fundamentals. Chapters 4.1, 4.2.2, 4.3.1, 4.4, 4.6.1.
  4. Structured CPDs. Chapters 5.1 - 5.5.
  5. Template Models. Chapters 6.1 - 6.4.1.

These will be covered in the first two weeks of the online class.

The slides for the whole class can be found here.

Course Reviews (5)


ecluzhang 2013-06-27 20:48 (0 upvotes, 0 downvotes)

Everyone already knows what this course is like; all I can say is that it is very intense. I gave up on about 1.7 of the assignments, but luckily I still scraped past at the end with a score of 80.
Personally, I think the hardest part of the assignments is understanding the provided APIs and data-structure design; the PDF instructions are never very clear, and the week 5 and week 7 assignments are hard mainly for those two reasons. So just understanding the lecture videos is not enough; you have to work out the implementation details yourself.
I will probably come back and take it again next year; masochists of all kinds are welcome to enroll...


杨柳Larry_NLP 2013-05-22 17:35 (1 upvote, 0 downvotes)

A classic course on probabilistic graphical models; the textbook of the same name by Daphne Koller is also a classic. Watching all the videos and completing every quiz and programming assignment takes a great deal of time. If you are short on time, you can study just the parts you need: the textbook includes a dependency graph between chapters, so once you have mastered the foundational early chapters, you can focus on whichever later chapters matter to you.


wzyer 2013-05-17 09:29 (0 upvotes, 0 downvotes)

What amazes me most now is that I actually finished this course on its first run. Looking back, I would probably get even more out of it if I took it again today.


wsz-bupt 2013-05-13 22:59 (1 upvote, 0 downvotes)

It really is quite difficult, a genuinely graduate-level course: spending 10+ hours a week is almost mandatory, and the accompanying textbook runs to 1000+ pages. If you want to truly understand probabilistic graphical models, this is an excellent course.


yongsun 2013-05-13 01:30 (1 upvote, 0 downvotes)

The course I gained the most from, and the hardest to finish! I constantly got stuck on the programming assignments; I spent 2-3 hours on homework almost every day...


Course Tags

PGM, probabilistic graphical models, graphical models, machine learning, Stanford University, probabilistic models, Bayesian, Bayesian networks, Markov, Markov networks, conditional random fields, CRF, Markov chains, Monte Carlo, Markov chain Monte Carlo

235 people are following this course.

Related Courses

Discrete Optimization

Computational Neuroscience

Natural Language Processing

Artificial Intelligence Planning

Natural Language Processing

Neural Networks for Machine Learning

VLSI CAD: Logic to Layout

Linear and Discrete Optimization

Cryptography II

Introduction to Engineering Mechanics