Introduction to Machine Learning

What are the basic concepts in machine learning?
I found that the best way to discover and get a handle on the basic concepts in machine learning is to review the introductory chapters of machine learning textbooks and to watch the videos from the first module of online courses.
Pedro Domingos is a professor of machine learning at the University of Washington and author of a new book titled “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World”.
Domingos has a free online course on machine learning at Coursera, appropriately titled “Machine Learning”. The videos for each module can be previewed on Coursera at any time.
In this post you will discover the basic concepts of machine learning summarized from Week One of Domingos’ Machine Learning course.

Machine Learning

The first half of the lecture is on the general topic of machine learning.

What is Machine Learning?

Why should we care about machine learning?
A breakthrough in machine learning would be worth ten Microsofts.
— Bill Gates, Former Chairman, Microsoft
Machine Learning is getting computers to program themselves. If programming is automation, then machine learning is automating the process of automation.
Writing software is the bottleneck: we don’t have enough good developers. Let the data do the work instead of people. Machine learning is the way to make programming scalable.
  • Traditional Programming: data and a program are run on the computer to produce the output.
  • Machine Learning: data and the desired output are run on the computer to create a program. That program can then be used in traditional programming, as the sketch below illustrates.
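To make the contrast concrete, here is a minimal, hypothetical sketch of the same toy spam-filtering task solved both ways. The feature, thresholds and data are all invented for illustration; the point is only that in the second case the “program” (a threshold) comes out of the data.

```python
# Toy illustration (hypothetical, not from the course): the same spam
# task solved both ways. The feature and thresholds are invented.

# Traditional programming: a human writes the rule.
def is_spam_handwritten(num_exclamations):
    return num_exclamations > 3  # rule chosen by a programmer

# Machine learning: data plus desired outputs produce the rule.
def learn_threshold(examples):
    """examples: list of (num_exclamations, is_spam) pairs."""
    spam = [x for x, y in examples if y]
    ham = [x for x, y in examples if not y]
    # Put the threshold halfway between the two classes (a crude "program").
    return (max(ham) + min(spam)) / 2

data = [(0, False), (1, False), (5, True), (8, True)]
threshold = learn_threshold(data)  # 3.0 for this data

def is_spam_learned(num_exclamations):
    return num_exclamations > threshold

print(is_spam_learned(6))  # True
```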
Machine learning is like farming or gardening: the seeds are the algorithms, the nutrients are the data, you are the gardener, and the plants are the programs.

Applications of Machine Learning

Sample applications of machine learning:
  • Web search: ranking pages based on what you are most likely to click on.
  • Computational biology: rational design of drugs in the computer based on past experiments.
  • Finance: decide who to send what credit card offers to. Evaluation of risk on credit offers. How to decide where to invest money.
  • E-commerce: Predicting customer churn. Whether or not a transaction is fraudulent.
  • Space exploration: space probes and radio astronomy.
  • Robotics: how to handle uncertainty in new environments. Autonomous vehicles such as the self-driving car.
  • Information extraction: Ask questions over databases across the web.
  • Social networks: Data on relationships and preferences. Machine learning to extract value from data.
  • Debugging: Use in computer science problems like debugging, a labor-intensive process where machine learning could suggest where the bug is likely to be.
What is your domain of interest and how could you use machine learning in that domain?

Key Elements of Machine Learning

There are tens of thousands of machine learning algorithms and hundreds of new algorithms are developed every year.
Every machine learning algorithm has three components:
  • Representation: how to represent knowledge. Examples include decision trees, sets of rules, instances, graphical models, neural networks, support vector machines, model ensembles and others.
  • Evaluation: the way to evaluate candidate programs (hypotheses). Examples include accuracy, precision and recall, squared error, likelihood, posterior probability, cost, margin, entropy, K-L divergence and others.
  • Optimization: the way candidate programs are generated, known as the search process. Examples include combinatorial optimization, convex optimization and constrained optimization.
All machine learning algorithms are combinations of these three components, which together give a framework for understanding any algorithm. The sketch below shows the three components instantiated for one concrete algorithm.
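As a minimal, hypothetical illustration (not taken from the lecture), here is simple linear regression broken into the three components: a linear model as the representation, mean squared error as the evaluation, and gradient descent as the optimization.

```python
# Hypothetical sketch: simple linear regression split into the three
# components of a learning algorithm.

# Representation: a linear model y = w * x + b.
def predict(w, b, x):
    return w * x + b

# Evaluation: mean squared error over the training data.
def mse(w, b, data):
    return sum((predict(w, b, x) - y) ** 2 for x, y in data) / len(data)

# Optimization: gradient descent searches for good values of w and b.
def fit(data, lr=0.05, steps=2000):
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        dw = sum(2 * (predict(w, b, x) - y) * x for x, y in data) / n
        db = sum(2 * (predict(w, b, x) - y) for x, y in data) / n
        w, b = w - lr * dw, b - lr * db
    return w, b

data = [(1, 3), (2, 5), (3, 7)]  # generated by y = 2x + 1
w, b = fit(data)
print(round(w, 2), round(b, 2))  # approximately 2.0 and 1.0
```

Swapping any one component (say, squared error for likelihood, or gradient descent for a combinatorial search) gives a different algorithm with the same skeleton.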

Types of Learning

There are four types of machine learning:
  • Supervised learning: (also called inductive learning) Training data includes the desired outputs. A teacher supplies labels such as “this is spam, this is not”, so the learning is supervised.
  • Unsupervised learning: Training data does not include desired outputs. Clustering is an example. It is hard to tell what counts as good learning and what does not.
  • Semi-supervised learning: Training data includes a few desired outputs.
  • Reinforcement learning: Rewards arrive from a sequence of actions. AI researchers like it because it is the most ambitious type of learning.
Supervised learning is the most mature, the most studied and the type of learning used by most machine learning algorithms. Learning with supervision is much easier than learning without supervision.
Inductive Learning is where we are given examples of a function in the form of data (x) and the output of the function (f(x)). The goal of inductive learning is to learn the function so that it can be applied to new data (x); a short sketch follows the list below.
  • Classification: when the function being learned is discrete.
  • Regression: when the function being learned is continuous.
  • Probability Estimation: when the output of the function is a probability.
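As a minimal sketch of all three cases (this assumes scikit-learn is installed, and the toy data is invented), the setup is always the same: learn from (x, f(x)) pairs, then apply the learned function to new x.

```python
# Sketch assuming scikit-learn is installed; the toy data is invented.
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

X = [[0], [1], [2], [3]]

# Classification: the learned function outputs a discrete label.
y_discrete = ["ham", "ham", "spam", "spam"]
clf = DecisionTreeClassifier().fit(X, y_discrete)
print(clf.predict([[2.5]]))        # ['spam']

# Regression: the learned function outputs a continuous value.
y_continuous = [1.0, 3.0, 5.0, 7.0]  # generated by f(x) = 2x + 1
reg = LinearRegression().fit(X, y_continuous)
print(reg.predict([[4]]))          # [9.]

# Probability estimation: the output is a probability per class.
print(clf.predict_proba([[2.5]]))  # [[0. 1.]] for classes ['ham', 'spam']
```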

Machine Learning in Practice

Machine learning algorithms are only a very small part of using machine learning in practice as a data analyst or data scientist. In practice, the process often looks like:
  1. Start Loop
    1. Understand the domain, prior knowledge and goals. Talk to domain experts. Often the goals are very unclear. You often have more things to try than you can possibly implement.
    2. Data integration, selection, cleaning and pre-processing. This is often the most time-consuming part. It is important to have high quality data. The more data you have, the more cleaning it needs, because real-world data is dirty. Garbage in, garbage out.
    3. Learning models. The fun part. This part is very mature. The tools are general.
    4. Interpreting results. Sometimes it does not matter how the model works as long as it delivers results. Other domains require that the model is understandable. You will be challenged by human experts.
    5. Consolidating and deploying discovered knowledge. The majority of projects that are successful in the lab are not used in practice. It is very hard to get something used.
  2. End Loop
It is not a one-shot process, it is a cycle. You need to run the loop until you get a result that you can use in practice. Also, the data can change, requiring a new loop.
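As a caricature of this cycle in code (assuming scikit-learn is installed; the synthetic dataset, model family and stopping threshold are all invented for illustration), one might keep iterating over candidate models until the result is good enough to use:

```python
# A caricature of the practice loop: iterate until results are usable.
# Everything here (data, model, threshold) is invented for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for depth in (1, 2, 3, 5, 8):                    # the loop, in miniature
    model = DecisionTreeClassifier(max_depth=depth).fit(X_tr, y_tr)
    score = model.score(X_te, y_te)              # interpret the results
    print(depth, round(score, 2))
    if score >= 0.85:                            # good enough to deploy
        break
```

In a real project the loop also revisits the goals and the data, not just the model.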

Inductive Learning

The second part of the lecture is on the topic of inductive learning. This is the general theory behind supervised learning.

What is Inductive Learning?

From the perspective of inductive learning, we are given input samples (x) and output samples (f(x)) and the problem is to estimate the function (f). Specifically, the problem is to generalize from the samples and the mapping so that the output can be usefully estimated for new samples in the future.
In practice it is almost always too hard to estimate the function exactly, so we look for very good approximations of the function; the sketch after the examples below shows this in miniature.
Some practical examples of induction are:
  • Credit risk assessment.
    • The x is the properties of the customer.
    • The f(x) is credit approved or not.
  • Disease diagnosis.
    • The x are the properties of the patient.
    • The f(x) is the disease they suffer from.
  • Face recognition.
    • The x are bitmaps of peoples faces.
    • The f(x) is to assign a name to the face.
  • Automatic steering.
    • The x are bitmap images from a camera in front of the car.
    • The f(x) is the degree the steering wheel should be turned.
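Here is a minimal numpy sketch of the induction setup (the “unknown” target function and the noise level are invented): we only see samples (x, f(x)), fit an approximation, and use it on new inputs.

```python
# Minimal numpy sketch; the "unknown" target and noise are invented.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
fx = 3 * x + 0.5 + rng.normal(0, 0.05, size=x.shape)  # noisy samples of f

# Approximate f with a straight line fit to the samples.
coeffs = np.polyfit(x, fx, deg=1)
f_hat = np.poly1d(coeffs)

# Use the approximation on a new, unseen input.
print(f_hat(0.5))  # close to f(0.5) = 3 * 0.5 + 0.5 = 2.0
```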

When Should You Use Inductive Learning?

There are problems where inductive learning is not a good idea. It is important to know when to use and when not to use supervised machine learning.
4 problems where inductive learning might be a good idea:
  • Problems where there is no human expert. If people do not know the answer they cannot write a program to solve it. These are areas of true discovery.
  • Humans can perform the task but no one can describe how to do it. There are problems where humans can do things that computers cannot do, or cannot do well. Examples include riding a bike or driving a car.
  • Problems where the desired function changes frequently. Humans could describe it and they could write a program to do it, but the problem changes too often. It is not cost effective. Examples include the stock market.
  • Problems where each user needs a custom function. It is not cost effective to write a custom program for each user. Example is recommendations of movies or books on Netflix or Amazon.

The Essence of Inductive Learning

We can write a program that works perfectly for the data that we have. This function will be maximally overfit: we have no idea how well it will work on new data, and it will likely do very badly because we may never see the same examples again.
The data alone is not enough; without assumptions you could predict anything you like. It would be naive to assume nothing about the problem.
In practice we are not naive. There is an underlying problem and we are interested in an accurate approximation of the function. The number of possible classifiers is double exponential in the number of input features, so finding a good approximation of the function is very difficult.
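To see where the double exponential comes from: n boolean inputs give 2^n possible input states, and a classifier can assign 0 or 1 to each state independently, giving 2^(2^n) distinct classifiers. A quick check:

```python
# n boolean inputs -> 2**n input states -> 2**(2**n) possible classifiers.
for n in range(1, 6):
    states = 2 ** n
    classifiers = 2 ** states
    print(n, states, classifiers)
# Already at n = 5: 2**32 = 4,294,967,296 distinct boolean functions.
```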
There are classes of hypotheses that we can try. That is the form that the solution may take, or the representation. We cannot know which is most suitable for our problem beforehand. We have to use experimentation to discover what works on the problem.
Two perspectives on inductive learning:
  • Learning is the removal of uncertainty. Having data removes some uncertainty. Selecting a class of hypotheses removes more uncertainty.
  • Learning is guessing a good and small hypothesis class. It requires guessing. We don’t know the solution, so we must use a trial and error process. If you knew the domain with certainty, you would not need learning. But we are not guessing in the dark.
You could be wrong.
  • Our prior knowledge could be wrong.
  • Our guess of the hypothesis class could be wrong.
In practice we start with a small hypothesis class and slowly grow the hypothesis class until we get a good result.
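A hypothetical sketch of that strategy (the quadratic target and the noise level are invented): fit polynomials of increasing degree and keep the degree with the lowest held-out error.

```python
# Hypothetical sketch: grow the hypothesis class (polynomial degree)
# and keep the degree with the lowest held-out error.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 40)
y = x ** 2 + rng.normal(0, 0.05, size=x.shape)  # unknown target: a quadratic

x_tr, y_tr = x[::2], y[::2]    # even-indexed points for training
x_va, y_va = x[1::2], y[1::2]  # odd-indexed points for validation

best_err, best_degree = float("inf"), 0
for degree in range(0, 8):     # start with a small class, then grow it
    f_hat = np.poly1d(np.polyfit(x_tr, y_tr, deg=degree))
    err = np.mean((f_hat(x_va) - y_va) ** 2)
    if err < best_err:
        best_err, best_degree = err, degree

print("chosen degree:", best_degree)  # typically around 2 for this data
```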

A Framework For Studying Inductive Learning

Terminology used in machine learning:
  • Training example: a sample from x including its output from the target function
  • Target function: the mapping function f from x to f(x)
  • Hypothesis: approximation of f, a candidate function.
  • Concept: A boolean target function, positive examples and negative examples for the 1/0 class values.
  • Classifier: the output of the learning program, a program that can be used to classify new samples.
  • Learner: Process that creates the classifier.
  • Hypothesis space: set of possible approximations of f that the algorithm can create.
  • Version space: subset of the hypothesis space that is consistent with the observed data.
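As a tiny, hypothetical illustration of the last two terms: with two binary inputs, the full hypothesis space of boolean functions can be enumerated outright, and the version space is whatever remains consistent with the training examples.

```python
# Two binary inputs: enumerate the whole hypothesis space of boolean
# functions, then keep the ones consistent with the observed examples.
from itertools import product

inputs = list(product([0, 1], repeat=2))            # 4 input states
hypothesis_space = list(product([0, 1], repeat=4))  # 2**4 = 16 functions

observed = {(0, 0): 0, (1, 1): 1}                   # two training examples

version_space = [
    h for h in hypothesis_space
    if all(h[inputs.index(x)] == y for x, y in observed.items())
]
print(len(hypothesis_space), len(version_space))    # 16 4
```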
Key issues in machine learning:
  • What is a good hypothesis space?
  • What algorithms work with that space?
  • What can I do to optimize accuracy on unseen data?
  • How do we have confidence in the model?
  • Are there learning problems that are computationally intractable?
  • How can we formulate application problems as machine learning problems?
There are 3 concerns when choosing a hypothesis space:
  • Size: number of hypotheses to choose from
  • Randomness: stochastic or deterministic
  • Parameters: the number and type of parameters
There are 3 properties by which you could choose an algorithm:
  • Search procedure
    • Direct computation: No search, just calculate what is needed.
    • Local: Search through the hypothesis space to refine the hypothesis.
    • Constructive: Build the hypothesis piece by piece.
  • Timing
    • Eager: Learning performed up front. Most algorithms are eager.
    • Lazy: Learning performed at the time that it is needed.
  • Online vs Batch
    • Online: Learning based on each pattern as it is observed.
    • Batch: Learning over groups of patterns. Most algorithms are batch.
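To make the online/batch distinction concrete, here is a minimal sketch using a running mean as a stand-in for a learner (the data is invented): the batch version sees all the patterns at once, while the online version updates after each one.

```python
# Batch vs online learning of a running mean (a stand-in for a learner).
data = [2.0, 4.0, 6.0, 8.0]

# Batch: learn over the whole group of patterns at once.
batch_mean = sum(data) / len(data)

# Online: update the estimate as each pattern is observed.
online_mean, n = 0.0, 0
for x in data:
    n += 1
    online_mean += (x - online_mean) / n  # standard incremental-mean update

print(batch_mean, online_mean)  # both 5.0
```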

Summary

In this post you discovered the basic concepts in machine learning.
In summary, these were:
  • What is machine learning?
  • Applications of Machine Learning
  • Key Elements of Machine Learning
  • Types of Learning
  • Machine Learning in Practice
  • What is Inductive Learning?
  • When Should You Use Inductive Learning?
  • The Essence of Inductive Learning
  • A Framework For Studying Inductive Learning
These are the basic concepts that are covered in the introduction to most machine learning courses and in the opening chapters of any good textbook on the topic.
Although the course is targeted at academics, as a practitioner it is useful to have a firm footing in these concepts in order to better understand how machine learning algorithms behave in the general sense.
