This tutorial introduces the Generalized Low Rank Model (GLRM) [[1](#references)]…
Across business and research, analysts seek to understand large collections of data with numeric and categorical values. Many entries in these tables may be noisy or even missing altogether. Low rank models facilitate the understanding of tabular data by producing a condensed vector representation for every row and column in the data set.
Specifically, given a data table A with m rows and n columns, a GLRM consists of a decomposition of A into numeric matrices X and Y. The matrix X has the same number of rows as A, but only a small, user-specified number of columns k. The matrix Y has k rows and d columns, where d is equal to the total dimension of the embedded features in A. For example, if A has 4 numeric columns and 1 categorical column with 3 distinct levels (e.g., _setosa_, _versicolor_ and _virginica_), then Y will have 7 columns. When A contains only numeric features, the number of columns in A and Y will be identical.
Both X and Y have practical interpretations. Each row of Y is an archetypal feature formed from the columns of A, and each row of X corresponds to a row of A projected into this reduced feature space. We can approximately reconstruct A from the matrix product XY, which has rank k. The number k is chosen to be much less than both m and n: a typical value for 1 million rows and 2,000 columns of numeric data is k = 15. The smaller k is, the more compression we gain from our low rank representation.
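As a toy illustration of these dimensions, consider the following R sketch. The sizes are hypothetical, chosen to match the iris-style example above (4 numeric columns plus a 3-level categorical column, so d = 7):

```r
m <- 150; k <- 2; d <- 7   # d = 4 numeric columns + 3 one-hot levels
X <- matrix(rnorm(m * k), nrow = m, ncol = k)   # one row per row of A
Y <- matrix(rnorm(k * d), nrow = k, ncol = d)   # one archetype per row
A.approx <- X %*% Y        # rank-k reconstruction of the embedded table
dim(A.approx)              # 150 x 7
```

Storing X and Y requires only (m + d) * k numbers instead of m * d, which is where the compression comes from.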
GLRMs are an extension of well-known matrix factorization methods such as Principal Components Analysis (PCA). While PCA is limited to numeric data, GLRMs can handle mixed numeric, categorical, ordinal and Boolean data with an arbitrary number of missing values. It allows the user to apply regularization to X and Y, imposing restrictions like non-negativity appropriate to a particular data science context. Thus, it is an extremely flexible approach for analyzing and interpreting heterogeneous data sets.
## Why use Low Rank Models?
**Memory:** By saving only the X and Y matrices, we can significantly reduce the amount of memory required to store a large data set. A file that is 10 GB can be compressed down to 100 MB. When we need the original data again, we can reconstruct it on the fly from X and Y with minimal loss in accuracy.
**Speed:** We can use GLRM to compress data with high-dimensional, heterogeneous features into a few numeric columns. This leads to a huge speed-up in model-building and prediction, especially by machine learning algorithms that scale poorly with the size of the feature space. Below, we will see an example with 10x speed-up and no accuracy loss in deep learning.
**Feature Engineering:** The Y matrix represents the most important combinations of features from the training data. These condensed features, called archetypes, can be analyzed, visualized and incorporated into various data science applications.
**Missing Data Imputation:** Reconstructing a data set from X and Y will automatically impute missing values. This imputation is accomplished by intelligently leveraging the information contained in the known values of each feature, as well as user-provided parameters such as the loss function.
## Example 1: Visualizing Walking Stances
For our first example, we will use data on Subject 01's walking stances…
###### Initialize the H2O server and import our walking stance data.
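A minimal setup might look like the following sketch; the file path is a placeholder rather than the tutorial's actual data location:

```r
library(h2o)
h2o.init()   # start (or connect to) a local H2O cluster

# Placeholder path: substitute the walking-stance CSV referenced above
gait <- h2o.importFile(path = "subject01_walk.csv")
dim(gait)
```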
Suppose that due to a sensor malfunction, our walking stance data has missing values randomly interspersed. We can use GLRM to reconstruct these missing values from the existing data.
###### Import walking stance data containing 15% missing values.
###### Count the total number of missing values in the data set.
```r
sum(is.na(gait.miss))
```
###### Build a basic GLRM with quadratic loss and no regularization, validating on our original data set with no missing values. We change the algorithm initialization method, increase the maximum number of iterations to 2,000, and reduce the minimum step size to 1e-6 to ensure it converges.
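Using the settings described above, the call might be sketched as follows; the frame names and the rank k are illustrative assumptions:

```r
# Quadratic loss, no regularization on X or Y; the initialization method,
# iteration cap and minimum step size are set as described above so the
# optimization converges.
gait.glrm2 <- h2o.glrm(training_frame = gait.miss, validation_frame = gait,
                       k = 10,                       # rank chosen for illustration
                       loss = "Quadratic",
                       regularization_x = "None", regularization_y = "None",
                       init = "SVD",
                       max_iterations = 2000,
                       min_step_size = 1e-6)
```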
###### Impute missing values in our training data from X and Y.
```r
gait.pred2 <- predict(gait.glrm2, gait.miss)
head(gait.pred2)
sum(is.na(gait.pred2))   # No missing values in reconstructed data!
```
###### Plot original and reconstructed data of the x-coordinate of the left acromium. Red x's mark the points where the training data contains a missing value, so we can see how accurate our imputation is.
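One way such a plot could be drawn in base R is sketched below; the column name `L.Acromium.X` and the `reconstr_` prefix on the predicted columns are assumptions:

```r
orig  <- as.data.frame(gait$L.Acromium.X)[[1]]                  # ground truth
recon <- as.data.frame(gait.pred2$reconstr_L.Acromium.X)[[1]]   # GLRM imputation
miss  <- is.na(as.data.frame(gait.miss$L.Acromium.X)[[1]])      # sensor dropouts

plot(orig, type = "l", xlab = "Time frame",
     ylab = "X-coordinate of left acromium")
lines(recon, col = "blue")
points(which(miss), orig[miss], pch = 4, col = "red")   # red x's at missing points
```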
Instead, we will use GLRM to condense ZCTAs into a few numeric columns represent…
We now build a deep learning model on the WHD data set to predict repeat and/or willful violators. For comparison purposes, we train our model using the original data, the data with the ZCTA column replaced by the compressed GLRM representation (the X matrix), and the data with the ZCTA column replaced by all the demographic features in the ACS data set.
```r
train <- whd_zcta[split <= 0.8,]
test <- whd_zcta[split > 0.8,]
```
###### Build a deep learning model on original WHD data to predict repeat/willful violators. Our response is a categorical column with four levels: N/A = neither repeat nor willful, R = repeat, W = willful, and RW = repeat and willful violator, so we specify a multinomial distribution. We skip the first four columns, which consist of case ID and location information that is already captured by the ZCTA.
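Such a model might be built as sketched below. The response column name is an assumption; skipping the first four columns and the network settings follow the text and the code fragments shown in this tutorial:

```r
dl.orig <- h2o.deeplearning(x = 5:ncol(train),   # skip case ID / location columns
                            y = "violator",      # assumed response column name
                            training_frame = train,
                            validation_frame = test,
                            distribution = "multinomial",
                            epochs = 0.1, hidden = c(50,50,50))
```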
```r
# (tail of the deep learning call on the reduced WHD data)
                            validation_frame = test_mod, distribution = "multinomial",
                            epochs = 0.1, hidden = c(50,50,50)))
```
###### Replace each ZCTA in the WHD data with the row of ACS data containing its full demographic information.
```r
# (tail of the deep learning call on the combined WHD-ACS data)
                            validation_frame = test_comb, distribution = "multinomial",
                            epochs = 0.1, hidden = c(50,50,50)))
```
###### Compare the performance of the three models. We see that the model built on the reduced WHD data set finishes almost 10 times faster than the model using the original data set, and it yields a lower log-loss error. The model built on the combined WHD-ACS data set does not improve significantly on this error. We can conclude that our GLRM compressed the ZCTA demographics with little loss of information.