
Commit 7d7a16f

added more information to example

1 parent 90d537e commit 7d7a16f

2 files changed: +69 −6 lines

examples/http2-aggressive-splitting/README.md

Lines changed: 46 additions & 5 deletions
@@ -1,11 +1,52 @@
This example demonstrates the AggressiveSplittingPlugin for splitting the bundle into multiple smaller chunks to improve caching. This works best with an HTTP/2 web server; otherwise there is overhead from the increased number of requests.

The AggressiveSplittingPlugin splits every chunk until it is below the specified `maxSize`. In this example it tries to create chunks with <50kB of code (after minimizing this reduces to ~10kB). It groups modules together by folder structure, on the assumption that modules in the same folder are likely to change together and to minimize and gzip well together.

The AggressiveSplittingPlugin records its splitting in the webpack records and tries to restore the splitting from the records. This ensures that after changes to the application, old splittings (and chunks) are reused. They are probably already in the client's cache, so it is highly recommended to use records!

Only chunks which are bigger than the specified `minSize` are stored in the records. This ensures that these chunks fill up as your application grows, instead of creating too many chunks for every change.
Chunks can become invalid if a module changes. Modules from invalid chunks go back into the module pool, and new chunks are created from all modules in the pool.
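The fill-until-`maxSize` idea can be sketched roughly as follows (a toy illustration only, not webpack's actual implementation, which also respects folder grouping and records):

``` js
// Toy illustration: fill a chunk with modules until adding another
// module would exceed maxSize, then start a new chunk.
function splitIntoChunks(modules, maxSize) {
	var chunks = [];
	var current = { size: 0, modules: [] };
	modules.forEach(function(m) {
		if (current.size + m.size > maxSize && current.modules.length > 0) {
			chunks.push(current);
			current = { size: 0, modules: [] };
		}
		current.modules.push(m.name);
		current.size += m.size;
	});
	if (current.modules.length > 0) chunks.push(current);
	return chunks;
}

splitIntoChunks([
	{ name: "a", size: 30000 },
	{ name: "b", size: 30000 },
	{ name: "c", size: 10000 }
], 50000);
// → two chunks: ["a"] (30000) and ["b", "c"] (40000)
```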
There is a tradeoff here:

- Caching improves with a smaller `maxSize`, as chunks change less often and can be reused more often after an update.
- Compression improves with a bigger `maxSize`, as gzip works better for bigger files and is more likely to find duplicate strings, etc.
- Backward compatibility (non-HTTP/2 clients) improves with a bigger `maxSize`, as the number of requests decreases.
``` js
var path = require("path");
var webpack = require("../../");
module.exports = {
	entry: "./example",
	output: {
		path: path.join(__dirname, "js"),
		filename: "[chunkhash].js",
		chunkFilename: "[chunkhash].js"
	},
	plugins: [
		new webpack.optimize.AggressiveSplittingPlugin({
			minSize: 30000,
			maxSize: 50000
		}),
		new webpack.DefinePlugin({
			"process.env.NODE_ENV": JSON.stringify("production")
		})
	],
	recordsOutputPath: path.join(__dirname, "js", "records.json")
};
```
# Info

## Uncompressed

```
Hash: db9e88642ccb12a1264f
-Version: webpack 2.1.0-beta.15
-Time: 1010ms
+Version: webpack 2.1.0-beta.16
+Time: 1044ms
Asset Size Chunks Chunk Names
8fcb4106a762189b462e.js 52.9 kB 7 [emitted]
42f9c68ce2db0b310159.js 55.7 kB 0 [emitted]
```
@@ -217,8 +258,8 @@ chunk {13} c3adbf94e7e39acf7373.js 33.1 kB [initial] [rendered]

```
Hash: db9e88642ccb12a1264f
-Version: webpack 2.1.0-beta.15
-Time: 2561ms
+Version: webpack 2.1.0-beta.16
+Time: 2704ms
Asset Size Chunks Chunk Names
8fcb4106a762189b462e.js 11.2 kB 7 [emitted]
42f9c68ce2db0b310159.js 10.4 kB 0 [emitted]
```
@@ -840,4 +881,4 @@ chunk {13} c3adbf94e7e39acf7373.js 33.1 kB [initial] [rendered]

```
}
]
}
```

examples/http2-aggressive-splitting/template.md

Lines changed: 23 additions & 1 deletion
@@ -1,3 +1,25 @@
This example demonstrates the AggressiveSplittingPlugin for splitting the bundle into multiple smaller chunks to improve caching. This works best with an HTTP/2 web server; otherwise there is overhead from the increased number of requests.

The AggressiveSplittingPlugin splits every chunk until it is below the specified `maxSize`. In this example it tries to create chunks with <50kB of code (after minimizing this reduces to ~10kB). It groups modules together by folder structure, on the assumption that modules in the same folder are likely to change together and to minimize and gzip well together.

The AggressiveSplittingPlugin records its splitting in the webpack records and tries to restore the splitting from the records. This ensures that after changes to the application, old splittings (and chunks) are reused. They are probably already in the client's cache, so it is highly recommended to use records!

Only chunks which are bigger than the specified `minSize` are stored in the records. This ensures that these chunks fill up as your application grows, instead of creating too many chunks for every change.

Chunks can become invalid if a module changes. Modules from invalid chunks go back into the module pool, and new chunks are created from all modules in the pool.

There is a tradeoff here:

- Caching improves with a smaller `maxSize`, as chunks change less often and can be reused more often after an update.
- Compression improves with a bigger `maxSize`, as gzip works better for bigger files and is more likely to find duplicate strings, etc.
- Backward compatibility (non-HTTP/2 clients) improves with a bigger `maxSize`, as the number of requests decreases.

``` js
{{webpack.config.js}}
```
# Info

## Uncompressed
@@ -16,4 +38,4 @@

```
{{js/records.json}}
```
