Commit 3a092cb

Minor update to text in DL tutorial.

1 parent 9596402 · commit 3a092cb

4 files changed: 4 additions & 4 deletions

tutorials/deeplearning/README.md (1 addition & 1 deletion)

@@ -217,7 +217,7 @@ By default, H2O Deep Learning uses an adaptive learning rate ([ADADELTA](http://
 If `adaptive_rate` is disabled, several manual learning rate parameters become important: `rate`, `rate_annealing`, `rate_decay`, `momentum_start`, `momentum_ramp`, `momentum_stable` and `nesterov_accelerated_gradient`, the discussion of which we leave to [H2O Deep Learning booklet](http://h2o.ai/resources/).

 ### Tuning
-With some tuning, it is possible to obtain less than 10% test set error rate in about one minute. Error rates of below 5% are possible with larger models. Deep tree methods are more effective for this dataset than Deep Learning, as the space needs to be simply be partitioned into the corresponding hyper-space corners to solve this problem.
+With some tuning, it is possible to obtain less than 10% test set error rate in about one minute. Error rates of below 5% are possible with larger models. Deep tree methods are more effective for this dataset than Deep Learning, as they can more efficiently partition the space (and hence memorize), which seems to be needed here. Deep Learning is better at discovering non-linear interactions between predictors than at cutting up the space.

 ```r
 m3 <- h2o.deeplearning(
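The hunks in this commit keep referencing the manual learning-rate parameters that apply when `adaptive_rate` is disabled. As a rough sketch of what such a call could look like (all parameter values, and the names `train`, `valid`, `predictors`, and `response`, are illustrative assumptions, not taken from the tutorial):

```r
library(h2o)
h2o.init()  # assumes a local H2O cluster can be started

# Illustrative only: manual learning-rate schedule with Nesterov momentum.
# `train`/`valid` are assumed to be H2OFrames already loaded (e.g. covtype),
# `predictors`/`response` assumed column names.
m_manual <- h2o.deeplearning(
  model_id = "dl_model_manual_rate",   # hypothetical model id
  training_frame = train,
  validation_frame = valid,
  x = predictors,
  y = response,
  hidden = c(128, 128),
  epochs = 10,
  adaptive_rate = FALSE,               # disable ADADELTA; use manual schedule
  rate = 0.01,                         # initial learning rate
  rate_annealing = 1e-6,               # anneals the rate as training progresses
  momentum_start = 0.5,                # momentum at the start of training
  momentum_ramp = 1e5,                 # training samples over which momentum ramps
  momentum_stable = 0.99,              # momentum after the ramp
  nesterov_accelerated_gradient = TRUE
)
```

Per the H2O documentation, `rate_annealing` decays the learning rate roughly as `rate / (1 + rate_annealing * samples)`, so a small value keeps the rate nearly constant early in training; the booklet linked in the hunk covers the details.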
tutorials/deeplearning/deeplearning.R (1 addition & 1 deletion)

@@ -187,7 +187,7 @@ plot(m2)
 #If `adaptive_rate` is disabled, several manual learning rate parameters become important: `rate`, `rate_annealing`, `rate_decay`, `momentum_start`, `momentum_ramp`, `momentum_stable` and `nesterov_accelerated_gradient`, the discussion of which we leave to [H2O Deep Learning booklet](http://h2o.ai/resources/).
 #
 #### Tuning
-#With some tuning, it is possible to obtain less than 10% test set error rate in about one minute. Error rates of below 5% are possible with larger models. Deep tree methods are more effective for this dataset than Deep Learning, as the space needs to be simply be partitioned into the corresponding hyper-space corners to solve this problem.
+#With some tuning, it is possible to obtain less than 10% test set error rate in about one minute. Error rates of below 5% are possible with larger models. Deep tree methods are more effective for this dataset than Deep Learning, as they can more efficiently partition the space (and hence memorize), which seems to be needed here. Deep Learning is better at discovering non-linear interactions between predictors than at cutting up the space.
 #
 m3 <- h2o.deeplearning(
   model_id="dl_model_tuned",
tutorials/deeplearning/deeplearning.Rmd (1 addition & 1 deletion)

@@ -217,7 +217,7 @@ By default, H2O Deep Learning uses an adaptive learning rate ([ADADELTA](http://
 If `adaptive_rate` is disabled, several manual learning rate parameters become important: `rate`, `rate_annealing`, `rate_decay`, `momentum_start`, `momentum_ramp`, `momentum_stable` and `nesterov_accelerated_gradient`, the discussion of which we leave to [H2O Deep Learning booklet](http://h2o.ai/resources/).

 ### Tuning
-With some tuning, it is possible to obtain less than 10% test set error rate in about one minute. Error rates of below 5% are possible with larger models. Deep tree methods are more effective for this dataset than Deep Learning, as the space needs to be simply be partitioned into the corresponding hyper-space corners to solve this problem.
+With some tuning, it is possible to obtain less than 10% test set error rate in about one minute. Error rates of below 5% are possible with larger models. Deep tree methods are more effective for this dataset than Deep Learning, as they can more efficiently partition the space (and hence memorize), which seems to be needed here. Deep Learning is better at discovering non-linear interactions between predictors than at cutting up the space.

 ```{r covtype_tuned}
 m3 <- h2o.deeplearning(
tutorials/deeplearning/deeplearning.md (1 addition & 1 deletion)

@@ -217,7 +217,7 @@ By default, H2O Deep Learning uses an adaptive learning rate ([ADADELTA](http://
 If `adaptive_rate` is disabled, several manual learning rate parameters become important: `rate`, `rate_annealing`, `rate_decay`, `momentum_start`, `momentum_ramp`, `momentum_stable` and `nesterov_accelerated_gradient`, the discussion of which we leave to [H2O Deep Learning booklet](http://h2o.ai/resources/).

 ### Tuning
-With some tuning, it is possible to obtain less than 10% test set error rate in about one minute. Error rates of below 5% are possible with larger models. Deep tree methods are more effective for this dataset than Deep Learning, as the space needs to be simply be partitioned into the corresponding hyper-space corners to solve this problem.
+With some tuning, it is possible to obtain less than 10% test set error rate in about one minute. Error rates of below 5% are possible with larger models. Deep tree methods are more effective for this dataset than Deep Learning, as they can more efficiently partition the space (and hence memorize), which seems to be needed here. Deep Learning is better at discovering non-linear interactions between predictors than at cutting up the space.

 ```r
 m3 <- h2o.deeplearning(