Do you consider 7/10 to be a bad score?
Have a real think about that one, and while you ponder that, here's a further, qualifying question to fit into our consumer-oriented editorial remit.
If you were looking forward to a game, would you be less likely to buy it if it received a 7/10 score or equivalent?
Converted mathematically into a star system, 7/10 equates to three and a half stars. Suddenly, it doesn't quite seem so bad. Hell, even 6/10 starts looking fairly attractive at that point. Put into pictures, the idea of suggesting that 7/10 might be something negative becomes an almost laughable notion. Sure, it isn't top marks, but it isn't bad. It isn't even average. One certainly wouldn't call it mediocre.
And yet that's exactly what this industry seems hell-bent on doing.
Part of this has to do with the pervasive and ever-growing power of review aggregation sites. The shadows that Metacritic and, to a lesser extent, GameRankings cast over the industry have been well-documented. There has been much talk of corruption in the past few weeks, but in amongst the personal attacks on industry figures, the howling banshee noises from various entrenched factions of the internet, the slandered "gamers" and the harassed journalists, and everyone caught in between, let's take a look at something simple and clear and easy: bonuses should not be predicated on aggregated review scores when the entire scoring system is broken.
Admittedly, there's a case for qualitative altruism to be made here -- surely rewarding games that enjoy an excellent critical reception is to be applauded? But there are few statistical benchmarks by which games can be judged: you have sales and you have scores and there are problems with each. The issue with rewarding teams based on sales figures is that it favours populist, conservative development in well-worn genres, making money off of well-established trends. The issue with basing rewards on review scores is that the review process can never be fully objective, especially not in an industry where so much is predicated upon personal experience. Add that onto a review aggregation service that remains opaque in its practices and methods and there's a recipe for disaster.
Kotaku's Jason Schreier recently brought up the issue of Destiny's review scores in an article that dredged up the 2010 contract between Bungie and Activision. That contract formed one of the evidential centrepieces in the legal battle, a couple of years back, over Jason West and Vince Zampella's move from Activision's Infinity Ward to EA. Here's what the contract read at the time, though rumour has it that it may have been revised since:
Activision shall pay to Licensor a quality bonus (the "Quality Bonus") in the amount of Two Million Five Hundred Thousand Dollars ($2,500,000) should Destiny Game #1 achieve a rating of at least 90 as determined by gamerankings.com (or equivalent reputable services if gamerankings.com is no longer in service) as of thirty (30) days following the commercial release of Destiny Game #1 on Xbox 360.
Given that every site has a different review scale, it perhaps seems ludicrous that millions of dollars could be wagered on the mean of a bunch of largely arbitrary numbers dispensed under wildly different conditions and editorial remits, with different weighting conducted both by the sites (usually transparently, if there's a review scoring guide) and by bodies such as Metacritic (defiantly opaque).
Don't get me wrong, I like review scores on a case-by-case basis as a consumer of criticism. I like to familiarise myself with a critical outlet's style and scale. Furthermore, there's a strength of recommendation inherent in giving a game a score. Reviews can provide many different things to many different people -- be that a suggestion on whether or not to buy a game, or an overview and appraisal of a game's features, a critical look at a game's place in culture at the time of release, an evaluation of its worth as entertainment or art or way to waste time with a friend, a value proposition in terms of money or time or quality -- and whichever perspective a review is coming from, I like to think at least that scores provide a solid indication of the critic's conclusions based upon a clear, if subjective, system.
As someone who likes review scores both as a reader and a reviewer (numerical systems are admittedly largely arbitrary, especially in an industry that actively makes individuals part of the art/entertainment/product, but they have their uses), I despair that we've become conditioned to view the application of a 7/10 score as some kind of failure state. It's something that we've long railed against here on Dealspwn, perhaps most notably in Jon's condemnation of Metacritic's unbalanced colour scoring system, which paints everything between 40% and 74% in a queasy yellow, with everything below that in danger-sign red. This is also a site that refuses to reveal its critical weighting, and whose criteria for critical inclusion remain a complete mystery.
When Schreier calls a 77% review score average "resoundingly mediocre" he becomes part of the problem. When Forbes' Erik Kain suggests that 6/10 is equal to a "D" grade, an unmistakeably bad grade, that seems like madness to me. Kain goes on to mention that "each site has its own definition of what a 6 out of 10 means, and that’s true. A three out of five stars can also mean something completely different from a 60/100 (and on a related note I think the five star model is better than the 0-10 model.)". But in the cold light of review aggregation -- a foundation upon which this industry now, sadly, rests -- nothing could be further from the truth.
There are two solutions to this: one is to abolish review scores completely, and it should be noted that neither Kotaku nor Forbes deigns to dish them out. Review scores are a little like Marmite in the wildly opposed reactions they can provoke -- some love them, some hate them. If you choose to get rid of them, good for you. But the other road is to realise how publishers and mass aggregation sites have corrupted and taken ownership of the critical conversation. To take control of that conversation back is to emancipate those scores of 6, 7, and 8, and to fight this notion that anything less than an exceptional 9 or its equivalent is tantamount to a critical write-off.
Put simply: use the entire critical scoring spectrum at your disposal.
Schreier's label of "resoundingly mediocre" for a scoring average of 77% is both laughable and lamentable. It's abjectly incorrect, certainly, but the fact that it's peddled by a top-tier games site only serves to reinforce this problematic notion that anything less than a top 20% finish is bargain bin stuff. It's in Kotaku's interests to rubbish review scores, of course, seeing as they abandoned that practice a while ago, but it also means that everything their writers say on the matter is coated in heavy bias. It's also exemplary of a fundamental failure to distinguish criticism from disappointed hype. Taken numerically, Schreier's statement makes no sense; placed in the context of Destiny's budget, its marketing, the hype that has surrounded it, and the contract as it stood in 2010... well... mediocre is still the wrong word. Ultimately, though, it's the one that sticks, and reviewers everywhere need to be less okay with that.
The sway that Metacritic in particular holds over the industry is worrying. On one hand, speaking honestly from an editorial perspective, I'd love us to be on there. Immediately, a site on Metacritic becomes an entrenched part of the critical conversation, which goes some way to addressing the imbalance in power of access between publisher and press. In terms of critical ubiquity, not to mention receiving review code reliably and in a timely fashion, it's a no-brainer. However, to be part of that is to take on unwanted responsibility, and to align reviews more closely with publishing practices. To predicate bonuses on aggregate scores is to tell developers to pander to reviewers, and to frame the development of a game before it's even begun. Moreover, it inextricably and directly tethers critical writing to the livelihoods of others in rather public fashion, and that's not healthy at all.
Much of the blame for that lies with publishers, but all of us need to do what we can to be mindful of the issues at hand, to stand against this insane notion that a 7 is somehow a bad score, and to use language that carefully distinguishes between critical practice and disappointment based upon expectation. Hype is part of the process, and getting caught up in the marketing machine can be difficult to avoid when we're so plugged in to the industry on a day-to-day basis, but simply identifying these issues and being mindful of their effects goes a long way towards reclaiming criticism as an honest practice.
We've never considered 7 or even 6 a bad score on this site, and that mindset will continue. It's possible to love deeply mediocre titles while still recognising them as such, and to find games with massive budgets and oodles of polish to be cold, bland, and uninspiring. Our scores are not the final words (or numbers) in a discussion about any of the games we cover here on Dealspwn, but rather jumping-off points and discourse starters, and we may end up reformatting our reviews to reflect that. We've been having plenty of discussions about our own review practices on this site, and we're going to be tweaking the wording of our rating system in due course, but we'd love some input from you, our readers. Do you find review scores useful? What do you look for in a review? What would you change? I can promise that any ideas you have will be seriously considered.
Let us know what you make of all of this in the box below.