The world's most in-depth and scientific reviews of tech gear

How We Tested Chromebooks

Monday April 9, 2018

The ideal Chromebook provides lightning-fast web browsing, a screen that can display web-based video content with respectable quality, a keyboard and trackpad that aren't distracting when you're trying to be productive, a battery that can get you through a full day, a nice touchscreen for Android apps, and does all this in a fairly portable package. We designed our testing plan around these ideals, seeing how well each model could satisfy each one of these mandates.


Performance


When possible, we like to use objective, quantitative measurements in our testing. Thus we were drawn to Octane 2.0, the standard benchmarking test for Chromebooks. However, once we ran our benchmarking tests, we realized the results did not match up at all with our experience of actually using these machines (for more on this, see the next section). Therefore we threw out the benchmarking results and formulated our scores around the actual performance we experienced when using these computers in real-life scenarios.

We started by doing some light browsing, opening no more than 5 tabs at once. If we didn't experience any lag or increased load times at that point, we would start over, with our first tab dedicated to the more demanding task of streaming music from Pandora. We would then slowly increase the number of tabs, working in a mix of websites and Google documents, until we experienced any lag. If we made it to 10 tabs without a noticeable decrease in performance, we started over again, but this time streaming yet more demanding high-definition YouTube music videos. We then assigned relative scores to each of the models we tested based on the point in that process at which we noticed an appreciable drop-off in performance, whether lag, increased load times, or choppiness in the streamed music.

Why Benchmarking Tests Don't Work Well for Chromebooks


Benchmarking, or running a predetermined set of operations in order to assess performance and speed, has long been the best way to compare one computer to another. Benchmarking tends to work well for computationally taxing operations, like rendering complex graphics or editing high-quality video. Since Chromebooks focus all of their attention on web browsing, they really only run the computationally simple JavaScript tasks related to surfing the web. Even if you find a way to run a computationally taxing program (like Photoshop) on a Chromebook, chances are all the heavy computing is being done on a remote server rather than the computer itself. Therefore, most benchmarking tests for Chromebooks (including the Octane 2.0 test that we used) simply measure a machine's ability to run this relatively basic JavaScript. While one machine may technically complete these tasks a bit faster than another, you likely won't be able to notice the difference in general web browsing. Additionally, these benchmarking tests don't mimic the demands of opening multiple tabs, or essentially completing a number of simple tasks at once, which relies more heavily on random access memory (RAM). We therefore did not use benchmarking scores to rank performance, instead opting for a more functional, real-world test of opening multiple tabs and noting any lagginess that we experienced.
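To make the distinction concrete, here is a toy sketch (our own illustration, not Octane 2.0 itself) of what a JavaScript micro-benchmark does: it times one simple task repeated many times on a single tab's worth of work, which says nothing about how the machine holds up with many tabs competing for RAM.

```javascript
// Illustrative micro-benchmark sketch: times a single, computationally
// simple JavaScript task, the same kind of work web pages perform.
// (This is a simplified stand-in for what tests like Octane 2.0 measure.)
function microBenchmark(task, iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    task();
  }
  // Total elapsed time in milliseconds; lower means "faster," but this
  // one-task number can't capture multi-tab, memory-bound performance.
  return Date.now() - start;
}

// A computationally simple task, akin to the string and number
// crunching a typical web page does while you browse.
const elapsedMs = microBenchmark(() => {
  let s = "";
  for (let i = 0; i < 100; i++) {
    s += i.toString(36);
  }
}, 10000);

console.log(`Completed 10,000 iterations in ${elapsedMs} ms`);
```

Two machines can post noticeably different numbers on a loop like this while feeling identical in everyday browsing, which is exactly the mismatch we saw between Octane scores and our hands-on testing.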

The ASUS Flip was one of the top scorers in our real-world benchmarking testing, yet received the lowest score from the software benchmark. The Lenovo Ideapad received one of the best scores from the software benchmarking test, but was one of the worst performers in our real-world testing.

Interface and Features


The Chromebook interface essentially boils down to three things: touchscreen, keyboard, and trackpad (plus a stylus, where applicable). We evaluated the usability of these components simply through use. Multiple testers used each of the models we tested for multiple full days of typing and dragging formulas around spreadsheets. This gave us plenty of ammunition with which to accurately rank the various interface attributes of each machine.

Different sizes and styles of keyboards and trackpads provide very different user experiences.

Display Quality


We evaluated displays based on relative resolution, color accuracy, and contrast ratio. We did this in a side-by-side manner, placing multiple Chromebooks next to one another and displaying the same photograph or frame from a video on all of them. This side-by-side image quality comparison is a process we are well practiced in, as it was used in our projector review. We also gave a small bump up in scoring for larger displays.

Battery Life


In testing battery life, we pushed these machines to the limit. We started by opening 5 tabs. We then cranked the screen brightness to the maximum and started streaming a 10-hour, 1080p YouTube video via a Wi-Fi connection. We then let the machines run until they died.


Portability


Here again we tested portability in the real world. We constantly threw Chromebooks into our backpacks and bags, brought them to and from work, brought them to coffee shops, and used them while waiting at the DMV. We evaluated how easy it was to slide each one into a fully stuffed bag, and how easily each one could be carried when sprinting to make a flight. Here, smaller screens, and thus smaller machines, got a bump up in the scoring.