Machine Learning Using Spark.ML and Python

Learn how to use Spark.ML and Python for machine learning.

About the Class

You've heard how useful classification can be for learning about your users, and you're keen to try it for yourself. However, you've been unable to find any realistic user classification datasets; such data is very difficult to find in published sources.

You're familiar with Python and Apache Spark, and you may have done some machine learning using Spark.ML. You have done parts of an end-to-end machine learning task, but you are more confident in some aspects than others. You would like to work through a user classification task from beginning to end, starting from raw transactional user log data.

You will use a realistic user classification dataset that closely emulates the type of data one might encounter in a production setting. You will create a machine learning feature set from raw user log data and use it to build a classification model. You won't stop there: you will improve the model through hyperparameter tuning and data selection.

Spark combines the power of distributed computing with the ease of use of Python and SQL. Level up today.

What will you learn in this course?

  • How to train a classification model on high-dimensional data.
  • How to do sensitivity analysis on a trained model.
  • How to assess the accuracy of a trained model.
  • How to improve model accuracy by tuning model hyperparameters.
  • How to improve model accuracy by judiciously selecting the data.
  • How to automate model tuning.
  • How to determine whether a dataset lends itself to classification. This entails analyzing the data to determine whether a classifier has any chance of attaining a useful level of prediction accuracy, including how to perform a power-law analysis on a classification dataset.
  • How to analyze raw user log data. In practice, data is rarely served up in an easy-to-use form. It is typically stored in whatever format is easiest to write, because storing the data efficiently often outweighs the need for convenient access later.
  • How to do machine learning from end to end. This has several stages: (a) loading a raw file, (b) putting the data into a format suitable for analysis and for feature set creation, (c) creating the feature data, for example vectorizing the data, properly handling missing values, and reducing the dimensionality, (d) training the machine learning model, (e) assessing the performance of the model, and (f) tuning the model.
  • Extracting, Transforming, and Selecting (ETS). How to convert raw user log data into the tabular format most useful for training a classifier. The data may not be provided in tabular form, so a machine learning engineer often needs to create a table to represent it. They are also responsible for identifying and resolving issues with the data that would otherwise hamper the accuracy of the model fit to it.
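
The power-law analysis mentioned above can be sketched in plain Python. This is an illustrative example on synthetic data (the item names and the Zipf-like generator are assumptions, not the course's actual dataset): count item frequencies in a log, then check whether log-rank vs. log-frequency is roughly linear with a negative slope.

```python
import math
import random
from collections import Counter

random.seed(42)

# Synthetic user log: item ids drawn from a heavy-tailed distribution,
# emulating the kind of usage data discussed in the course.
items = [f"item_{min(int(random.paretovariate(1.2)), 500)}" for _ in range(10_000)]

# Rank-frequency table: item counts sorted in descending order.
freqs = sorted(Counter(items).values(), reverse=True)

# Fit a line to log(rank) vs log(frequency); a roughly linear fit with a
# negative slope is the signature of a power law.
xs = [math.log(rank + 1) for rank in range(len(freqs))]
ys = [math.log(f) for f in freqs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(f"{len(freqs)} distinct items, rank-frequency slope ~ {slope:.2f}")
```

A strongly negative slope over most of the rank range suggests the dataset has the heavy-tailed structure that makes frequency-based features informative for classification.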

What are the learning objectives?

The learner will complete two machine learning projects from beginning to end.

  • For the first project, the learner will be given a dataset that may initially seem inscrutable. However, they will soon be able to extract useful information from it, using both interactive analyses and automated methods.
  • For the second project, the learner will predict the last item in a sequence. This technique is generally applicable to several types of recommendation systems.

The learner will know how to improve the accuracy of a seemingly marginal model, and will perform a pre-deployment analysis. A machine learning task does not end when the model is trained: calculating overall model accuracy is not always sufficient. Because a model can be more accurate in some regions of the input space than in others, assessing its accuracy under different constraints on the input data is important to a successful deployment.

Here are some of the specific concepts that are covered:

  • What is the first thing to do with a messy comma-delimited log file?
  • How to load raw data from a file and convert it into a tabular format.
  • What is a power-law analysis, and how can it be used to determine whether a dataset is suitable for classification?
  • How to create feature data for training a logistic regression classifier.
  • How to train a logistic regression classifier on text data.
  • How data selection can be used to optimize model accuracy, and the tradeoff between coverage and accuracy.
  • How to use sensitivity analysis to determine how well the model performs in different regions of the input space.

What technologies, packages, and functions will students use?

  • Bash
  • Python
  • Matplotlib
  • Pyplot
  • Apache Spark
  • Spark SQL
  • Spark.ML

All libraries used in this course are contained in the pyspark package, so we will not need to install additional packages. All modules are either part of the standard Python library or imported from the pyspark module.

What terms and jargon will be defined?

Here is a list of technical terms, jargon, and acronyms that will be used in the course:

Feature engineering, machine learning, model fitting, classification, logistic regression, logistic classification, power law, Extract, Transform, and Select (ETS), pipeline, cross-validation, hyperparameters, feature sets, vectorizer, vocabulary, area under the curve (AUC), data selection, grid search, automated model tuning, DataFrame, Spark.ML, StructType, StructField, CountVectorizer.

What concepts will be taught?

Active versus casual is a common way to characterize users. Active users can exhibit very different behavioral characteristics from casual users. How can you ascertain this from the data?

Sanity checks and sensitivity analysis are pro tips familiar to every seasoned machine learning engineer.  How can you quickly determine whether your model is unbiased? How can you quickly determine how your model will perform on various segments of your user base? 

Prediction accuracy vs. coverage. Students are usually taught to calculate prediction accuracy by averaging over the entire sample space. Segmenting the space can surface opportunities to improve prediction accuracy, and can also reveal a model's effective coverage: the portion of novel data for which prediction accuracy exceeds a desired threshold. A model might do extremely well on certain data, yet be no better than a random guess on other data.
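
The accuracy-vs-coverage tradeoff can be made concrete with a small sketch. The confidence scores below are made up for illustration: raising the confidence threshold shrinks coverage while (ideally) raising accuracy on the records that remain covered.

```python
# Each record carries the model's confidence in its prediction and whether
# that prediction turned out to be correct. Values are synthetic.
records = [
    # (confidence, prediction_correct)
    (0.95, True), (0.91, True), (0.88, True), (0.85, False),
    (0.70, True), (0.62, False), (0.55, True), (0.51, False),
]

def accuracy_and_coverage(records, threshold):
    # "Cover" only records whose confidence meets the threshold.
    covered = [correct for conf, correct in records if conf >= threshold]
    coverage = len(covered) / len(records)
    accuracy = sum(covered) / len(covered) if covered else 0.0
    return accuracy, coverage

for t in (0.5, 0.8):
    acc, cov = accuracy_and_coverage(records, t)
    print(f"threshold={t}: accuracy={acc:.2f}, coverage={cov:.2f}")
```

At threshold 0.5 every record is covered; at 0.8 only half are, but a larger share of those predictions are correct.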

When presented with a large amount of raw data belonging to two different classes, an analyst might initially believe there is no discernible statistical difference between the classes. However, if you know how to look more carefully, you can identify real differences, and determine whether those effects are strong enough to automate a classification task using machine learning.

What pro tips are taught?

  • Learners may be inclined to use too much data when training a machine learning model. They might give short shrift to dimensionality reduction.
  • Learners may be inclined to impose their own preconceptions on the data, rather than automating analysis and letting the data speak for itself.
  • Learners might be aware of how to tune hyperparameters, but they might not be aware of how data selection can be used to optimize model performance.
  • Learners might not know how to use simple forms of sanity checks and sensitivity analysis to more quickly ascertain whether their model is valid.

What datasets will be used?

Two datasets are used. One is for guessing user demographic class membership from usage log data. The second is a text corpus. 

The usage log dataset is special. It closely emulates data commonly found in production settings, yet such data is difficult to find in published sources. The dataset has been carefully created to emulate real demographic data, which is rarely if ever published: one reason is to protect proprietary secrets, another is to avoid privacy leaks.

This dataset looks and behaves very much like real data. This allows you to practice real techniques that are easily transferable to proprietary data you might encounter in a production setting.

The data will be transformed several times for the various stages of the task. So, while the project is based on a single dataset, the data will be manipulated such that it can seem like a completely different dataset at various stages along the way.

Although the dataset closely emulates the statistical qualities of a real-world demographic dataset, rather than use real-world labels we instead use a comical hypothetical scenario. Make no mistake, though: the data captures very realistic qualities of data encountered in a production setting.

This approach has some benefits.  Firstly, it prevents us from being biased by unwarranted assumptions. You may encounter a dataset and make assumptions about it based on your own experience. Instead, you should be data-driven.

Using this hypothetical scenario also emphasizes the generality of the technique. It can be used to guess gender, political affiliation, retiree vs. teenager, homeowner vs. renter, cancerous vs. healthy, hotdog vs. non-hotdog, just to name a few.

Additionally, this allows us to have a bit of levity along the way.

The primary dataset labels its two classes as "rabbit" and "duck".  At first, they look alike, but when the learner knows what to look for, they are easy to tell apart.

The text corpus dataset is a standard dataset commonly used to teach or demonstrate text processing. In our case, we are going to use it to demonstrate sequence prediction.  The text corpus is large enough that it poses realistic constraints on our algorithms. It is small enough that training can still be done quickly. In a production situation, the actual text corpus may be orders of magnitude larger. However, the underlying concepts taught here are applicable.

We use the text corpus as a stand-in for user session data.  Examples of this include song identifiers, topic ids, hashtags, URLs, and any other type of identifier that is consumed in a sequential manner. The task is predicting the last item in a sequence. 
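
Framing last-item prediction as supervised learning can be sketched in a few lines. The session contents below are made up for illustration: each session (a sequence of item ids) is split into a feature prefix and a target label, which is the final item.

```python
# Hypothetical user sessions: sequences of consumed item ids.
sessions = [
    ["song_a", "song_b", "song_c"],
    ["song_b", "song_c", "song_d"],
    ["song_a", "song_c"],
]

def to_examples(sessions, min_len=2):
    # Keep sessions long enough to split into (prefix, last item) pairs.
    return [(seq[:-1], seq[-1]) for seq in sessions if len(seq) >= min_len]

examples = to_examples(sessions)
for prefix, target in examples:
    print(prefix, "->", target)
```

The resulting (prefix, target) pairs can then be vectorized and fed to a classifier, just as with the demographic task.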

We could have instead created a dataset that closely mimics the statistical characteristics of a production session log dataset.  However, using a text corpus has some advantages. Working with text makes this more intuitive. It is easier to understand what is going on when working with sequences of tokens that correspond to sentences of words rather than opaque identifiers.

Author

Mark Plutowski

Data Professional
Has 20 years of experience in data-driven analysis and machine learning development. Mark has a Ph.D. in Computer Science from UCSD and a Master of Science in Electrical and Computer Engineering from USC. He worked at IBM, Sony, and Netflix, generating 29 patents. Mark is an experienced educator, having published courseware on multiple online...

School

Mark Plutowski's School

Requirements

  • You should have SQL installed on your PC/Mac
  • You should have knowledge of python lambdas
  • You should have Pyspark and Python installed on your PC/Mac
One-time Fee
$69.99
List Price:  $99.99
You save:  $30

What's Included

Language: English
Level: Intermediate
Skills: Data Analysis, Python, Pyplot, Apache Spark, Matplotlib, Machine Learning, Spark SQL, Spark.ML
Age groups: All ages
Duration: 3 hours 12 minutes
17 Videos
4 Documents