Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday life. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like? These are the questions that will …

Learning Machines 101

LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes (35:29)

This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of outcomes in a space-time continuum which characterizes our physical world. Such a set is called an “environmental event”. The machine learning algorithm uses information about the frequency of environmental events to support lea…

LM101-085: Ch7: How to Guarantee your Batch Learning Algorithm Converges (30:51)

This 85th episode of Learning Machines 101 discusses formal convergence guarantees for a broad class of machine learning algorithms designed to minimize smooth non-convex objective functions using batch learning methods. In particular, a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update …

LM101-084: Ch6: How to Analyze the Behavior of Smart Dynamical Systems (33:13)

In this episode of Learning Machines 101, we review Chapter 6 of my book “Statistical Machine Learning” which introduces methods for analyzing the behavior of machine inference algorithms and machine learning algorithms as dynamical systems. We show that when dynamical systems can be viewed as special types of optimization algorithms, the behavior …

LM101-083: Ch5: How to Use Calculus to Design Learning Machines (34:22)

This particular podcast covers the material from Chapter 5 of my new book “Statistical Machine Learning: A unified framework”, which is now available! The book chapter shows, with many examples, how matrix calculus is useful for the analysis and design of both linear and nonlinear learning machines. We discuss how to use the matrix chain rule …

LM101-082: Ch4: How to Analyze and Design Linear Machines (29:05)

The main focus of this particular episode is the material in Chapter 4 of my forthcoming book titled “Statistical Machine Learning: A unified framework.” Chapter 4 is titled “Linear Algebra for Machine Learning.” Many important and widely used machine learning algorithms may be interpreted as linear machines, and this chapter shows how to use…

LM101-081: Ch3: How to Define Machine Learning (or at Least Try) (37:20)

This particular podcast covers the material in Chapter 3 of my new book “Statistical Machine Learning: A unified framework”, with an expected publication date of May 2020. Chapter 3 discusses how to formally define machine learning algorithms. Briefly, a learning machine is viewed as a dynamical system that …

LM101-080: Ch2: How to Represent Knowledge using Set Theory (31:43)

This particular podcast covers the material in Chapter 2 of my new book “Statistical Machine Learning: A unified framework”, with an expected publication date of May 2020. Chapter 2, titled “Set Theory for Concept Modeling”, discusses how to represent knowledge using set theory notation.

LM101-079: Ch1: How to View Learning as Risk Minimization (26:07)

This particular podcast covers the material in Chapter 1 of my new (unpublished) book “Statistical Machine Learning: A unified framework”. Chapter 1 shows how supervised, unsupervised, and reinforcement learning algorithms can be viewed as special cases of a general empirical risk minimization framew…

LM101-078: Ch0: How to Become a Machine Learning Expert (39:18)

This particular podcast (Episode 78 of Learning Machines 101) is the initial episode in a new special series of episodes designed to provide commentary on a new book that I am in the process of writing. In this episode we discuss books, software, courses, and podcasts designed to help you become a machine learning expert! For more information, chec…

LM101-077: How to Choose the Best Model using BIC (24:15)

In this 77th episode of www.learningmachines101.com, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this semantic interpretation is fundamentally different from that of AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training da…

LM101-076: How to Choose the Best Model using AIC and GAIC (28:17)

In this episode, we explain the proper semantic interpretation of the Akaike Information Criterion (AIC) and the Generalized Akaike Information Criterion (GAIC) for the purpose of picking the best model for a given set of training data. The precise semantic interpretation of these model selection criteria is provided, explicit assumptions are provi…

LM101-075: Can computers think? A Mathematician's Response (remix) (36:26)

In this episode, we use the Turing Machine argument to explore what computers can do as well as what they cannot do. Specifically, we discuss the computational limits of computers and raise the question of whether such limits pertain to biological brains and other non-standard computing machines. This episode is dedicated to the …

LM101-074: How to Represent Knowledge using Logical Rules (remix) (19:22)

In this episode we will learn how to use “rules” to represent knowledge. We discuss how this works in practice and we explain how these ideas are implemented in a special architecture called the production system. The challenges of representing knowledge using rules are also discussed. Specifically, these challenges include: issues of feature repre…

LM101-073: How to Build a Machine that Learns to Play Checkers (remix) (24:58)

This is a remix of the original second episode of Learning Machines 101, which describes in a little more detail how the computer program that Arthur Samuel developed in 1959 learned to play checkers by itself, without human intervention, using a mixture of classical artificial intelligence search methods and artificial neural network learning algorithms…

LM101-072: Welcome to the Big Artificial Intelligence Magic Show! (Remix of LM101-001 and LM101-002) (22:07)

This podcast is basically a remix of the first and second episodes of Learning Machines 101 and is intended to serve as the new introduction to the Learning Machines 101 podcast series. The search for common organizing principles which could support the foundations of machine learning and artificial intelligence is discussed and the concept of the …

LM101-071: How to Model Common Sense Knowledge using First-Order Logic and Markov Logic Nets (31:40)

In this podcast, we provide some insights into the complexity of common sense. First, we discuss the importance of building common sense into learning machines. Second, we discuss how first-order logic can be used to represent common sense knowledge. Third, we describe a large database of common sense knowledge where the knowledge is represented us…

LM101-070: How to Identify Facial Emotion Expressions in Images Using Stochastic Neighborhood Embedding (32:04)

In this 70th episode of Learning Machines 101, we discuss how to identify facial emotion expressions in images using an advanced clustering technique called Stochastic Neighborhood Embedding. We discuss the concept of recognizing facial emotions in images, including applications to problems such as: improving online communication quality, identifying su…