Update: See bottom of the piece for JD Power’s partial explanation
“Samsung Ranks Highest in Owner Satisfaction with Tablet Devices” was the headline on JD Power’s latest U.S. Tablet Satisfaction Survey, with the accompanying table showing Samsung taking the lead from Apple by two points.
Yet when you look at the ratings that make up the individual scores, as Fortune did, that isn’t what they show at all. The six scoring categories are Overall Satisfaction, Performance, Ease of Use, Physical Design, Tablet Features and Cost. Samsung beats Apple in exactly one of those categories: cost …
Here are the scores:
So, Samsung wins on cost (shocker). The two companies tie on overall satisfaction. Apple wins on everything else. Counting circles across the five sub-categories, Apple racks up 22 points to Samsung’s 18. Yet somehow that gets translated into a Samsung win?
We’ve also asked JD Power for an explanation, and will update if & when we hear back. In the meantime, it has us wondering about those smartphone scores back in August …
Update: Here’s what JD Power survey manager Kirk Powers had to say:
It’s important to note that the award is given to the brand that has the highest overall score. In this study, the score is comprised of customers’ ratings of five key dimensions or factors. To understand the relative rank of brands within each of these five dimensions, we provide consumers with PowerCircle Rankings, which denote the brand that has the highest score within each factor, regardless of how much higher their score is. In the case of Apple, although they did score higher on four out of five factors measured, its score was only marginally better than Samsung’s. At the same time, however, Apple’s score on cost was significantly lower than that of all other brands. As such, even though its ratings on the other factors were slightly higher than Samsung’s, Apple’s performance on cost resulted in an overall lower score than Samsung’s.
I could see how that might explain the Physical Design and Tablet Features scores, where a one-circle difference might indicate a very marginal difference in score, but I queried how this translated to the Performance and Ease of Use scores, where a two-circle difference surely couldn’t indicate a tiny difference in score. I asked if we could see the actual scores and weightings, and was told:
Can’t show you the actual scores but the process is as follows:
1. Compare index scores by brand for each factor
2. This generates an index gap score for each factor
3. The index gap score for each factor is multiplied by that factor’s weight
4. This produces the net index score difference between brands for each factor
5. Add up all the net index scores and you get the overall gap index score difference between brands
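As far as I can decode it, that amounts to a weighted sum of per-factor score differences. Here’s a minimal sketch of the process in Python; the factor weights match those attributed to JD Power further down, but the per-factor index scores are pure placeholders, since the real numbers weren’t shared:

```python
# Hypothetical sketch of the gap-index process described above.
# The per-factor index scores are invented placeholders; only the
# factor weights come from JD Power's published methodology.

factor_weights = {
    "performance": 0.26,
    "ease_of_operation": 0.22,
    "styling_and_design": 0.19,
    "features": 0.17,
    "price": 0.16,
}

# Placeholder per-factor index scores (out of 1000) for two brands.
apple = {"performance": 900, "ease_of_operation": 890,
         "styling_and_design": 880, "features": 870, "price": 600}
samsung = {"performance": 880, "ease_of_operation": 870,
           "styling_and_design": 860, "features": 850, "price": 820}

# Steps 1-2: compute an index gap score for each factor.
gaps = {f: apple[f] - samsung[f] for f in factor_weights}

# Steps 3-4: multiply each gap by its factor weight to get the net
# index score difference between brands for that factor.
weighted_gaps = {f: gap * factor_weights[f] for f, gap in gaps.items()}

# Step 5: sum the net index scores to get the overall gap between brands.
overall_gap = sum(weighted_gaps.values())
print(f"Overall gap (Apple minus Samsung): {overall_gap:+.1f}")
# With these made-up numbers, four small leads for Apple are swamped
# by one big price deficit, so the overall gap comes out negative --
# a Samsung win despite Apple leading four of five factors.
```

That would explain how the mechanism Kirk Powers describes could produce the published result, but only if Apple’s price deficit was far larger than its leads elsewhere.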
It still seems odd to me, but without sight of the actual scores all one can really say for sure is that iPad owners were more satisfied with their tablet, but Samsung owners were more satisfied with the price they paid.
I’ll say it again, like I have been for over a year: “Samsung is dishing out a lot of money right now.” J.D. Power needs to understand that things like this make them lose all credibility with consumers.
JD Power has a huge reputation in the consumer satisfaction field, so I’ll certainly be very interested to hear what it has to say about this.
“JD Power has a huge reputation in the consumer satisfaction field”
Not anymore.
Equally interested.
JD Power never had a great reputation in the consumer satisfaction field – they had an OK reputation, and it continues. If you look at their historic accolades it becomes quickly apparent that they get it right only sometimes. They’ve had some real dogs and they are admittedly non-independent.
Non-independent in what way, SP?
I agree 100%.
I was very surprised to see this since JD Power is very reputable. Really hard to believe that being cheaper can be what it takes to be No. 1.
You wouldn’t be suggesting that somehow Samsung’s annual 4 billion (with a “b”) PR budget made its way into JD Power’s results, right? I would be shocked, just shocked, if a company with Samsung’s record of honesty would buy reviews for their products. Impossible. Let’s just stop the witch hunt please!!!
I think you missed the /s? :)
Who did JD interview? Samsung’s executives?
Samsung just paid some blogger in Taiwan to say good things about them. Now Samsung pays JD Power.
In the past two years, I tried really hard to like Samsung and Android. I switched to them three times, with the Note 1, Tab 7 and Note 3. I kept finding myself going back to the iPhone.
The interviews look fine – it’s the translation of the six category scores into the overall score and headline that looks deeply odd.
As a Korean, I should be proud that our national company Samsung beat Apple in a JD Power survey… but actually many Koreans, including me, are very angry about this result, because Samsung won this survey on price, yet Samsung Tabs are $100-200 more expensive than the iPad in Korea. So Korean folks feel very mixed about this result.
Samsung…. winning by any method. If copying does not work, pay bloggers, if that does not work, pay the poll takers. Winning by any method.
Making junk that makes us tons of money….. that comes naturally. LOL
P.S. I bought a large-screen TV. 376 days later it died. The tech knew exactly what parts to bring to fix it without being told what was wrong. It took two more years before we won a class-action suit to make Samsung pay for repairs. They knew the power-supply caps were too small but built and shipped anyway.
Samsung,….. screwing its customers….. any way it can.
Just saying.
Also: faking benchmarks, suing critical newspapers, pressuring bloggers to write positively, subsidizing like crazy… oh, and building an egocentric museum about themselves. Like I always say to the Sammy fandroids who obsess over gimmicky “features” and fake plastic stitching: “If only you bothered.”
If the survey is out of 1000 points and the weights (below) are how J.D. Power figures out the scores (unless I am mistaken), then how are these not the scores received? (I was assuming a 3 out of 5 garners you 3/5 of the points in that category, etc.) So the categories must not be weighted how they claim they are?
Category             Weight           Samsung     Apple
Performance          0.26 (260 pts)   3 = 156     5 = 260
Ease of Operation    0.22 (220 pts)   3 = 132     5 = 220
Styling & Design     0.19 (190 pts)   4 = 152     5 = 190
Features             0.17 (170 pts)   4 = 136     5 = 170
Price                0.16 (160 pts)   4 = 128     2 = 64
Total                1.00 (1000 pts)  704         904
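For anyone wanting to check that arithmetic, here is the same calculation as a short Python sketch. It assumes, as Ethan does, that each circle is worth one fifth of a category’s points, an inferred mapping rather than anything JD Power has confirmed:

```python
# Ethan's back-of-envelope model: the survey is out of 1000 points,
# each category gets its weight's share of those points, and a brand
# earns circles/5 of the points in each category. This mapping is an
# assumption, not JD Power's confirmed methodology.
weights = {"Performance": 260, "Ease of Operation": 220,
           "Styling & Design": 190, "Features": 170, "Price": 160}
circles = {
    "Samsung": {"Performance": 3, "Ease of Operation": 3,
                "Styling & Design": 4, "Features": 4, "Price": 4},
    "Apple":   {"Performance": 5, "Ease of Operation": 5,
                "Styling & Design": 5, "Features": 5, "Price": 2},
}
for brand, stars in circles.items():
    total = sum(weights[c] * stars[c] / 5 for c in weights)
    print(f"{brand}: {total:.0f}")  # Samsung: 704, Apple: 904
# A 200-point spread is nothing like the two-point overall gap JD
# Power reported, so the circles clearly don't map linearly to points.
```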
No, the numbers don’t add up at all. Still waiting to hear back from JD Power.
It is unlikely that there were only five possible scores within each category. More likely, the scores were calculated out of 5 – including decimals – and then rounded to the nearest integer for reporting. So a score of 3.49 would get rounded to 3 stars and a score of 3.5 would get rounded to 4 stars.
So what if the highly unlikely happened, and all the Apple scores were actually rounded up (4.5 to 5 stars and, for price, 1.5 to 2 stars) while, at the same time, all the Samsung scores were rounded down (3.499 becomes 3 stars, etc.)?
Assuming that the weights Ethan gives are all correct, Apple will win by the slimmest of margins – 4.020 for Apple to 4.019 for Samsung.
Category             Weight    Samsung    Apple
Performance          26%       3.499      4.500
Ease of Operation    22%       3.499      4.500
Styling & Design     19%       4.499      4.500
Features             17%       4.499      4.500
Price                16%       4.499      1.500
Overall              100%      4.019      4.020
But those total scores do not equal 5 stars – they are both only 4 stars. So presumably, the rounding was not actually as severe as I used in the above. In other words, it is likely that Apple’s unrounded scores were higher and Samsung’s unrounded scores were lower.
My example is the worst case for Apple and the best case for Samsung. Any other unrounded scores will give Apple a bigger win.
So as long as Ethan’s weights are correct, it looks like JD Power has managed to mess up their arithmetic.
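For completeness, here is that worst-case rounding scenario as a short Python sketch, using the same category weights:

```python
# Worst case for Apple, best case for Samsung: every Apple category
# score sits at the bottom of its rounding band (4.500 rounds up to
# 5 stars, 1.500 up to 2), and every Samsung score sits at the top
# of its band (3.499 rounds down to 3 stars, 4.499 down to 4).
weights = {"Performance": 0.26, "Ease of Operation": 0.22,
           "Styling & Design": 0.19, "Features": 0.17, "Price": 0.16}
samsung = {"Performance": 3.499, "Ease of Operation": 3.499,
           "Styling & Design": 4.499, "Features": 4.499, "Price": 4.499}
apple = {"Performance": 4.500, "Ease of Operation": 4.500,
         "Styling & Design": 4.500, "Features": 4.500, "Price": 1.500}

for name, scores in (("Samsung", samsung), ("Apple", apple)):
    overall = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: {overall:.3f}")
# Samsung: 4.019, Apple: 4.020 -- even in the most Samsung-friendly
# rounding scenario, a straight weighted average still favours Apple.
```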
Thank you for tallying up these numbers. Even after JD Power’s response, this whole issue still feels weird to me.
You have a numerical score where the average product scores 821 and the best product has a numerical score less than 2% higher than that average. Whoever might be on top, something funny was done to those numbers to produce such a result.
+1. They all basically have the same score, which seems rather unlikely. Also, what’s the justification for dishing out 3 dots to the Asus and 5 dots to Samsung/Apple?
I’m actually surprised that cost is even a category in a satisfaction report. It’s my fault for not figuring this out sooner, but still: whatever price I paid for a device doesn’t influence whether I’m satisfied using it.
I think that stems from cars, where they found many years ago that the more we pay for a car, the more we expect from it. That seems kind of obvious, but it meant you could have a fault on a cheap car and write it off as one of those things, yet be really annoyed by the same fault on an expensive car. So it made sense to factor in price, but these scores make no sense at all.
A 2 on cost? Harsh. I know this was taken before the most recent update, but the iPad Air is not expensive – not for the power, performance, ease of use and design, and not in light of the other factors.
OK, so maybe cost is purposely not weighed against those other factors, but where do you get off giving two stars for the $499/$329 starting prices of the iPads of that time period?
Seems like that’d rate a 3 on cost anyway. Now, I don’t rank Apple the highest for cost either – a Google Nexus 7 represents better value. But if you don’t weigh cost against those other factors, you reach silly conclusions, like giving the Polaroid tablet a 5 on cost for its $99 price point.
But it’s absolute junk and not worth $50 – you can’t have metrics just standing alone in the ether; it doesn’t make sense.
WAAA WAAA WAAA!!! Seriously?
Piece updated with response from JD Power
It sounds like the circles are relative, not absolute, measurements. So a one-circle gap on pricing might be a bigger gap than two circles on ease of use. Anyway, I wonder how much of this is actually perception. Samsung tablets aren’t exactly cheap, either.
Sadly, without the data we’re left guessing. Certainly seems it’s a more subjective rating system than I would have expected.
This is why I never really “got” or liked statistics: they’re not really representative of anything in reality, only of that which is being summed up. It just doesn’t make sense.
That being said, if you take the four middle categories, that is where it all counts; “overall satisfaction” and “cost” are meaningless. If a product does what you expect it to do, or just works well or beats expectations, then you get the kind of marks in those four middle categories that Apple got. Cost doesn’t matter when you get it right, which is why the “overall satisfaction” score didn’t get hit by the higher cost. This is what kept Apple going strong when everyone else in this category was heading downward during the worst of the recession a few years ago. The ones who found the Samsung satisfying because of the cost are the ones that didn’t buy any tablet in the middle of the recession, because they weren’t willing to put the money out for such a thing in hard times. They don’t care as much whether it’s a great product, only that it does OK and gets by for little money – for them, cost is everything.
In the long term (which most people don’t care to think about), the problem here is that the cheaper product will not last as long: it will have some limitations, may not be able to upgrade the OS later – which means it may also miss out on the newest apps – and it’ll be outdated by a newer model. It may also not have the same build quality and may develop issues later on. That could lead to considering another brand at a future date. I just read that most iPad Air buyers today previously owned an iPad or other iOS device. (My iPad 1 is still going strong three years later. Yeah, I can’t get beyond iOS 5, but for a first-gen tablet it’s not bad for reading e-media – news/books/mags/video – and I’m still playing games on it, etc.)
Longevity is certainly one of the reasons it’s worth paying for quality. I typically expect a Mac to still be performing well in 3-4 years, and to get back about a third of the purchase cost when I upgrade. Over its lifetime, it costs around £400 per year. As the UK has only just got 4G, it’s only now I’m upgrading from my iPad 2, which is still performing well (and running iOS 7, albeit a little sluggishly when it comes to animations).
I’m looking forward to seeing what Consumer Reports has to say about this too. I mean JD Power. Damn, it’s so easy to get them confused these days.
This is like Hyundai beating out Bentley because it’s so much cheaper.
So, when JD says Apple wins, there are no questions from the audience, but when Samsung wins, a conspiracy is afoot.
Really? It doesn’t seem strange to you that Samsung scored relatively poorly in important categories like performance (3 stars) and ease of use (3 stars), yet was declared the overall winner?
A pointless and useless presentation of a study by J.D. Power. It’s more an exercise in self-aggrandizement than anything else. Never more so than with the presentation of an award. None of the information presented should be of any surprise to anyone with even a cursory familiarity with the tablet market. The results as presented in their press release are too generalized, simplistic, and subjective to be of any use. Specifics matter. As does a survey’s methodology, transparency, design, bias control, rationale for category weightings, interpretive validity, etc. Presumably, detailed information is withheld for those companies willing to pay for it.
Not explained are the categories for comparison. Are they solely subjective, or a combination of subjective and objective? For example, is a respondent’s rating of performance based only on their usage experience, or on a combination of experience and perception influenced by what they know about a tablet’s specs? Even the wording is ambiguous. Does cost refer to price or to value? If it refers to price, isn’t value a more useful metric?
Of more worth, since this is a comparative study, would be for respondents to have useful and meaningful experience of all the tablets being compared, not just the one they own. Which models are being compared, exactly? Much more revealing would be an exploration of respondents’ expectations, perceptions, and specific reasons for their opinions – in other words, qualitative analysis. While the key findings presented are of some interest, they aren’t exactly revelatory or unexpected. The results of this customer satisfaction survey, the so-called “voice of the customer”, as presented, are just too limited. Mostly what we can glean is that J.D. Power presented a customer satisfaction award to Samsung because their tablets are cheaper.