
feat(moduletemplate): Wrap function module template wrappers with parens#3658

Merged
TheLarkInn merged 2 commits into master from feature/lazy_parse_function_module_template
Dec 31, 2016

Conversation

@TheLarkInn
Member

What kind of change does this PR introduce?
Feature

Did you add tests for your changes?
N/A

If relevant, link to documentation update:

Summary
All current browser engines can be signaled to avoid lazily parsing (and then re-parsing) a function by wrapping the IIFE in parens: the parens hint that the function will be invoked immediately, so the engine parses it eagerly. After speaking with @fhinkel and a few other members of the V8 team, we were recommended this heuristic technique. The change is very minimal in that it converts the output template from:

/***/ function(module, exports, __webpack_require__) {
		var add = __webpack_require__(/*! ./math */ 1).add;
		exports.increment = function(val) {
			return add(val, 1);
		};
/***/ },

to

/***/ (function(module, exports, __webpack_require__) {
		var add = __webpack_require__(/*! ./math */ 1).add;
		exports.increment = function(val) {
			return add(val, 1);
		};
/***/ }),

Overall findings by @nolanlawson indicate load and parse performance gains of 10%-50%, depending on the browser engine. https://github.com/nolanlawson/optimize-js#benchmark-overview
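Both wrapper forms are legal as elements of webpack's modules array, and they behave identically; only the surrounding parens change, which is what triggers the engine's eager-parse heuristic. A minimal sketch (with hypothetical module bodies) showing the two forms side by side:

```javascript
// Hypothetical modules array, as emitted by the webpack runtime.
var modules = [
  // Old template: bare function expression (eligible for lazy parsing,
  // which means a pre-parse now and a full re-parse at call time).
  function (module, exports) { exports.n = 1; },
  // New template: paren-wrapped, hinting immediate invocation, so the
  // engine fully parses the body once, up front.
  (function (module, exports) { exports.n = 2; })
];

// Either form is invoked the same way by __webpack_require__.
var exportsObj = {};
modules[1](null, exportsObj);
console.log(exportsObj.n); // prints 2
```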

Does this PR introduce a breaking change?
No.

Other information

@TheLarkInn TheLarkInn merged commit c7b06ad into master Dec 31, 2016
@TheLarkInn TheLarkInn deleted the feature/lazy_parse_function_module_template branch December 31, 2016 16:08
@nolanlawson

Cool! Have you measured the impact on any bundles to see what the load time improvement might be? Note that JS engines are changing all the time, and the impact can vary from codebase to codebase (it can even be negative in some cases!), so it's good to actually be measuring this stuff.

In optimize-js I've been meaning to overhaul the benchmark system to provide better measurements, but if you'd like to use something yourself, I'd recommend marky for adding performance.marks and measures and then just mark the time before inserting a script element into a page and then mark the time after. Curious to know what you find!

@citypaul

Is there anywhere we can get an ELI5 (explain like I'm 5) explanation for this? I don't quite understand why this causes such a performance improvement?


@gavinwahl

@citypaul This description specifically addresses the performance improvement https://gist.github.com/getify/938c4a46013244cbd3cd750c0bc767ce

@TheLarkInn
Member Author

@nolanlawson, we ourselves don't have large enough example repos to warrant measurable results, but folks like @MikaAK have used the separate plugin I wrote emulating this behavior. We would love to collect a list of large applications so we can do further perf analysis; they're hard to get hands on in an easy way.

@citypaul

citypaul commented Jan 3, 2017

@gavinwahl Thanks for that gist. That's exactly what I was looking for 👍

@statianzo

We would love to collect a list of large applications so we can do further perf analysis.

@TheLarkInn A large real-world repo with webpack is
https://github.com/instructure/canvas-lms

@TheLarkInn
Member Author

Thank you @statianzo! Have you tested out a build against webpack master? I'm interested to see the gains. Will probably test it myself anyway.

@statianzo

@TheLarkInn I've been doing some internal tests at Domo with @aaronfrost using the webpack-1 branch. I'll work on getting you some numbers next week.

@TheLarkInn
Member Author

Much appreciated!


8 participants