When possible, we like to use objective, quantitative measurements in our testing, so we were initially drawn to Octane 2.0, the standard benchmarking test for Chromebooks. However, once we ran our benchmarking tests, we realized the results did not match up at all with our experience of actually using these machines (for more on this, see the next section). Therefore, we threw out the benchmarking results and based our scores on the actual performance we experienced when using these computers in real-life scenarios.
We started with some light browsing, opening no more than five tabs at once. If we didn't experience any lag or increased load times at that point, we would start over, this time with the first tab dedicated to the more demanding task of streaming music from Pandora. We would then slowly increase the number of tabs, working in a mix of websites and Google documents, until we experienced any lag. If we made it to ten tabs without a noticeable decrease in performance, we started over again, this time streaming the still more demanding high-definition YouTube music videos. We then assigned relative scores to each of the models we tested based on the point in that process at which we noticed an appreciable drop-off in performance, whether lag, increased load times, or choppiness in the streamed music.
We also tested battery performance by fully charging each model and then using it for a full workday. This amounted to about 8 hours of typing, scrolling through spreadsheets, and sorting through photos on Google Drive. All of the models we tested made it through a normal workday with some battery life left, so we did not weight battery performance heavily in our scoring.
Why Benchmarking Tests Don't Work Well for Chromebooks
The Chromebook interface essentially boils down to two things: the keyboard and the trackpad. We evaluated the usability of both simply through use. Multiple testers used each of the models we tested for multiple full days of typing and dragging formulas around spreadsheets. This gave us plenty of ammunition with which to accurately rank the various interface attributes of each machine. We also evaluated special features within this metric, specifically the touchscreen interface of the ASUS Flip.
We evaluated each display based on relative resolution, color accuracy, and contrast ratio. We did this side by side, placing multiple Chromebooks next to one another and displaying the same photograph or frame from a video on all of them. This side-by-side image quality comparison is a process we are well practiced in, as we used it in our projector review. We also gave a small scoring bump to larger displays.
Here again, we tested portability in the real world. We constantly threw Chromebooks into our backpacks and bags, brought them to and from work, took them to coffee shops, and used them while waiting at the DMV. We evaluated how easy it was to slide each one into a fully stuffed bag, and how easily each could be carried when sprinting to make a flight. Here, smaller screens, and thus smaller machines, got a bump up in the scoring.