In January I posted an "open letter" here to Consumer Reports magazine about the blatant Apple love they had displayed in two consecutive issues. The most recent issue of the magazine contains a large section on smartphones and a separate article on "chameleon computers" such as the Surface. And again, I have trouble with the magazine's standards for judging these devices as well as some of their editorial decisions.
First, the improvements. In one of the issues I criticized in January, they referred to the Surface RT as "bulky," which made me wonder if they had even touched it. In the new issue, the Surface Pro is ranked highest (tied with the Samsung Ativ) among the group of 10-to-12-inch detachable laptops. Despite this ranking, the story is accompanied by a photo of the 3rd-ranked HP Envy, and there is no text description of the Pro beyond the ranked list. The Surface RT is ranked 7th in a category of tablets with 9-to-12-inch screens (tied with the Samsung Ativ and Galaxy). The RT even gets its own callout on one page under the sub-headline "Best for Work," in which it is compared to the Ativ. The formerly "bulky" Surface RT is now "excellent" and is praised for its kickstand and the quality of its display. The only knock is the lack of GPS. Unlike the Pro, the RT scores a photo in its section. Interestingly, the magazine chose to show the RT in portrait orientation, which is not its best side in my opinion. Overall, though, a fair bit of improvement.
On the other hand, Windows Phones are shunted off to a box with the BlackBerry. The magazine ranks phones by carrier, and no Windows phones are listed for Sprint or T-Mobile; only the HTC 8X appears on Verizon, and the HTC 8X and the Nokia Lumia 920 on AT&T. These two phones don't measure up well here. The HTC ranks 12th on Verizon and 11th on AT&T, while the Lumia ranks 13th. In an otherwise balanced article about changes occurring in smartphone features, the editors refer to Microsoft's products as "Windows Mobile" while offering the back-handed compliment that "even recommended phones that use the new Windows Mobile and revamped Blackberry platforms show the kind of innovation that was once Apple's hallmark." Apple fans need not despair at even this whiff of criticism, however. The very next paragraph describes the "elegant iOS operating system." Sadly, the editors fail to tell the reader just how they have discovered the way to measure elegance in their testing.
And that gets to my biggest complaint. The article acknowledges that Consumer Reports' research has revealed that most of their subscribers use iOS or MacOS products. They also reveal that their rankings of phones and tablets come not from their testing but from satisfaction surveys of their subscribers. I've received these surveys myself as a subscriber. The bizarre result of this technique of ranking products is that the rankings are, first, not objective, and second, vague. For example, the rankings of phones for AT&T included the HTC 8X in 11th place with an overall score of 76. The 920 was in 13th place, also with a score of 76. Also at 76 points was the LG Escape. Six phones were tied at a score of 77, one at 78, and two at 79. The Galaxy S4 stood out with a score of 81. What in the world do these numbers mean? Often when ranking products, Consumer Reports will mention (in tiny type near the charts) that differences of a few points are meaningless. They don't do that here, so are we to believe, on the strength of a survey of Apple-favoring Consumer Reports subscribers, that the HTC One (ranked 3rd with a score of 79) is significantly better than the HTC 8X (way down the list in 11th place with a score of 76)? The positions on the list, which is what people will notice, would indicate a significant difference between these phones. The scores may or may not be significant. We can't know, because we aren't told.
Producing reports based solely on customer satisfaction scores like this is very problematic. First, fanboy syndrome affects Consumer Reports subscribers just as it does other people; we humans tend to be defensive of our choices. Second, people all grade differently. Just look at Amazon customer reviews: even people who love a product can score it very differently. Third, editorial biases skew conclusions. In my January post, I criticized Consumer Reports for the photos it used in two consecutive months. In the most recent issue, the photo selection was better balanced, but the editorial content continues to have issues. In addition to the "elegant" problem I mentioned above, in another part of the article the writer says "Apple all but invented the smart phone and the tablet at least in their current incarnations." Even if that were true, how is it relevant to product testing?
I think there is real value in Consumer Reports reviewing and recommending smartphones, tablets, computers, and other electronics. I am disturbed that far too much of their work in this category comes from surveys and unexplained rankings, not the kind of research they do with other consumer products. Surveys have a place, but I'd have a lot more respect for a story that told me a popular phone model has battery life issues or is touting a feature that doesn't really do anything. Tech magazines are in many cases doing a far better job of evaluating phones and tablets than the scientists and editors at Consumer Reports, yet given the magazine's large circulation, its survey-based pseudo-research is having a greater impact.