Explaining my Review Scores

This is something I perhaps should have done when I first started the blog… six years ago… but it occurs to me that I’ve never really explained my thought process when scoring my reviews.

Worst. Episode. Ever.

Better late than never.

First, I will be honest and say that they are pretty arbitrary. There’s no particular math or codified logic behind them. It’s as much about feelings as rationality.

That said, I do still put a fair bit of thought into them. I often change a score several times before a post’s publication as I go back and forth on my opinions.

The scoring system is identical regardless of whether I’m reviewing books, games, movies, or TV. Since I’m measuring the total quality of the finished product and how it left me feeling, the medium doesn’t really change the process.

I also have a pretty consistent idea of what each number range represents, which I will now outline:

10: Perfect in every way. A score I have never given and likely never will.

9-9.9: Brilliant. The item I am reviewing may have a few minor flaws but is otherwise exemplary in concept and execution. Something that everyone should experience, regardless of taste.

Examples: Lord of the Rings, Warcraft III, X-Men: Days of Future Past, Greatshadow, many Continuum episodes.

8-8.9: Excellent. Strongly above average, with strengths that significantly outweigh any weaknesses. Recommended to most people, unless it’s a genre or franchise you strongly dislike.

Examples: Mass Effect: Andromeda, The Summonstone, Remember Me, Fantastic Beasts and Where to Find Them.

7-7.9: Good. Either items with significant flaws that are offset by impressive strengths, or all-arounders that do everything decently but don’t excel at much. Recommended to all fans of the genre or franchise, and may appeal to others as well.

Examples: The Incredible Adventures of Van Helsing, a lot of Defiance episodes, most books by Lawrence Watt-Evans, X-Men: Apocalypse.

6-6.9: Imperfect. Not bad, but struggling to rise above the pack. Recommended to devoted fans of the genre or franchise, but not the general populace.

Examples: Mass Effect 2, Logan.

5-5.9: Mediocre. May have some things going for it, but usually not enough to make it worth spending time on in a world so awash in entertainment. Possibly worth it for ardent fans of the genre/franchise, but even they’re likely to come away underwhelmed.

Examples: Dungeon Siege II, Honor Amongst Thieves, Diablo: Legacy of Blood.

0-4.9: Bad to terrible. Severely flawed with few if any redeeming qualities. Entries in this range are not worth it for anyone.

Examples: Immortals, Battlestar Galactica: Blood and Chrome, Warlords of Draenor.

I realize that having such exact numbers for what I will freely admit to be an inexact science may seem a bit strange, but I think the granularity is important. There’s a difference between a 6.9, which fell just barely short, and a flat 6, which is much closer to total mediocrity.

I do not agree with the viewpoint that numbered reviews don’t serve a purpose. A score provides a helpful, at-a-glance way to organize things, and it helps provide clarity in cases where it’s difficult to fully articulate the feel of a certain product — cases where something is more or less than the sum of its parts.

The climax of the Shadowmoon Valley storyline in World of Warcraft: Warlords of Draenor

MMORPGs are a special case, as they are constantly evolving. That makes giving them a specific numbered rating less helpful, though it can still work if you’re reviewing a specific snapshot of an MMO’s lifespan (like my reviews of WoW expansions).

I have never been paid or otherwise compensated for any of my reviews. I’m not opposed to the idea, but no one has offered. If I did accept compensation for a review, I would offer disclosure of the fact in the review. I’m greedy, but I’m honest.

5 thoughts on “Explaining my Review Scores”

  1. “most books by Lawrence Watt-Evans”

    That got a laugh out of me. And then a realization that I’ve not read any books by him in far too long, so off to Amazon I went and… holy cow! I’ve got some catching up to do!

  2. I don’t understand why reviewers use those kinds of grading curves in the first place. Why have a 100 point scale if the first 49 points are the same? If a 4.9 and a 0.1 are both “not worth it for anyone”, why bother to differentiate between them at all, let alone to the value of half of the entire scale?

    If you’re going to use a points score at all then each point has to be demonstrably different to the next or there’s no reason for them not to have the same score, surely? And if you can define a difference and assign a different score then they can’t all be equally bad (or good), can they?

    This is why I think finely detailed scoring systems like this simply don’t work for reviews. Very broad categories can be useful. Making an argument that a four-star movie is either objectively or subjectively better than a three-star movie is manageable. Trying to make the case for a 7.1 over a 6.9 is a lot harder and even if you could do it what would be the benefit? Is someone going to decide to pick up everything that hits 7.0 and leave anything below it on the shelf?

    Scoring systems are fun, particularly to argue over in the pub, but I’ve never seen much of a practical value in them. On the other hand, the examples you use to illustrate each band are very instructive. A reader of your reviews could draw some very interesting conclusions from those.
