Update: See bottom of the piece for JD Power’s partial explanation
“Samsung Ranks Highest in Owner Satisfaction with Tablet Devices” was the headline on JD Power’s latest U.S. Tablet Satisfaction Survey, with the above table showing Samsung taking the lead from Apple by two points.
Yet when you look at the ratings that make up the individual scores, as Fortune did, that isn’t what they show at all. The six scoring categories are Overall Satisfaction, Performance, Ease of Use, Physical Design, Tablet Features and Cost. Samsung beats Apple in exactly one of those categories: cost …
Here are the scores:
So, Samsung wins on cost (shocker). The two companies tie on overall satisfaction. Apple wins on everything else. Apple racks up 22 points, Samsung 18. Yet somehow that gets translated into a Samsung win?
We’ve also asked JD Power for an explanation, and will update if and when we hear back. In the meantime, it has us wondering about those smartphone scores back in August …
Update: Here’s what JD Power survey manager Kirk Powers had to say:
It’s important to note that the award is given to the brand that has the highest overall score. In this study, the score is comprised of customers’ ratings of five key dimensions or factors. To understand the relative rank of brands within each of these five dimensions we provide consumers with PowerCircle Rankings, which denote the brand that has the highest score within each factor regardless of how much higher their score is. In the case of Apple, although they did score higher on four out of five factors measured, its score was only marginally better than Samsung’s. At the same time, however, Apple’s score on cost was significantly lower than that of all other brands. As such, even though its ratings on the other factors were slightly higher than Samsung’s, Apple’s performance on cost resulted in an overall lower score than Samsung.
I could see how that might explain the Physical Design and Tablet Features scores, where a one-circle difference might indicate a very marginal difference in score, but queried how this translated to the Performance and Ease of Use scores, where a two-circle difference couldn’t indicate a tiny one. I asked if we could see the actual scores and weightings, and was told:
Can’t show you the actual scores but the process is as follows:
1. Compare index scores by brand for each factor
2. This generates an index gap score for each factor
3. The index gap score for each factor is multiplied by that factor’s weight
4. This produces the net index score difference between brands for each factor
5. Add up all net index scores and you generate the overall gap index score difference between brands
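The steps above amount to a weighted sum of per-factor gaps. Since JD Power wouldn’t share the actual scores or weights, here is a minimal sketch using entirely made-up numbers, chosen only to illustrate how a brand that trails slightly on four factors but leads heavily on one (cost) can still come out ahead overall:

```python
# Sketch of the described gap-score process. All index scores (on a
# hypothetical 1,000-point scale) and all factor weights below are
# invented for illustration -- the real values were not disclosed.

apple = {"performance": 870, "ease_of_use": 865, "design": 860,
         "features": 855, "cost": 780}
samsung = {"performance": 860, "ease_of_use": 855, "design": 858,
           "features": 853, "cost": 850}

# Hypothetical factor weights, summing to 1.0.
weights = {"performance": 0.28, "ease_of_use": 0.22, "design": 0.17,
           "features": 0.17, "cost": 0.16}

# Steps 1-5: per-factor gap scores, each multiplied by its factor
# weight, then summed into one overall gap (Apple minus Samsung).
overall_gap = sum((apple[f] - samsung[f]) * weights[f] for f in weights)

# With these invented numbers the gap comes out negative, i.e. the
# big cost deficit outweighs Apple's four small factor leads.
print(overall_gap)
```

The point of the sketch is just the mechanism: circle rankings only record *who* leads each factor, while the overall score depends on *by how much* and on the weights, which is how four narrow wins can lose to one large deficit.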
It still seems odd to me, but without sight of the actual scores all one can really say for sure is that iPad owners were more satisfied with their tablet, but Samsung owners were more satisfied with the price they paid.