“Two heads are better than one, so wouldn’t two reviews be better than one?” This is one of the more common questions posed about reviews on IGN and other similar sites: why not have two or more reviewers tackle every game, movie, TV show, and tech product, then average out their scores to arrive at the final recommendation?
On paper, it does sound good: it’s certainly true that this approach reduces the frequency and impact of outlier scores, where the individual who is assigned the review just ends up liking or disliking it significantly more than other people on staff, for whatever reason. And I can definitely understand the desire to see the score on something I personally enjoy bumped up, or something I didn’t care for taken down a notch, when I disagree with a reviewer – we all like to have our opinions validated. But in practice, this labor-intensive approach simply doesn’t scale, and the problem it attempts to fix has already been solved by the internet.
All of this applies to any sort of review, but I’m going to focus on games here because that’s my personal area of expertise. Back in the heyday of printed games magazines, it made a lot of sense for publications like Famitsu or EGM to have several people play and weigh in on the same game. Remember, magazines weren’t free – annual subscriptions were generally cheap, but picking up a single issue off the newsstand usually cost between $5 and $12 – so you were getting real value in not having to buy a whole second magazine for another reviewer’s perspective on a new game. Plus there weren’t nearly as many games coming out in the 1990s and early 2000s as there are today, so you could afford to put several people to work on each one of them without neglecting others.
Today we’re in a very different world. Not only are there far more games coming out all the time – making it much more difficult to devote multiple people to any one of them – but if you want a variety of perspectives, you need only Google them or search on YouTube. At the push of a button, you’ll be drowning in more opinions than you know what to do with – all for free. You can even visit an aggregator like Metacritic or OpenCritic to see dozens of opinions summarized all in one place.
For that reason, from a pure business perspective doing multiple reviews on a single game is a tough sell. Due to the way the internet (and specifically Google) works, producing two reviews of the same thing does not yield double the traffic. If you put both reviews on a single page, you would need double the visitors to produce double the revenue on your investment. If you put each review on its own page so that people have to visit them individually, that doesn’t solve the problem – it can even potentially hurt your position in search results as Google tries to figure out which review is the most important result from your website, and that can crater your traffic. So when you’re working out budgets and how to most effectively spend them, multiple reviews of the same thing don’t make sense. This is the main reason you don’t see IGN – or virtually any other publication – do it on a regular basis.
It’s not all about money, though: there are also logistical and perception problems that would arise from attempting to do multiple reviews on a regular basis. For example, advance review copies are often scarce. Whether you’re talking about copies of Elden Ring or seats at advance screenings of Avatar: The Way of Water, it’s often difficult to get a lot of people access to the same highly anticipated piece of media ahead of time. This is especially true with games, which usually take a lot longer to play through than a movie takes to watch, so getting them early is extremely important.
That’s because reviews have to be timely to be relevant to the many people who are searching for them on the hot new thing – and yet they’re also one of the most labor-intensive kinds of content IGN creates. Finishing more than one in time to post at embargo is often difficult or impossible, presenting us with nothing but bad choices when it comes to rolling them out. We could wait for multiple people to finish before we publish anything, but that would invariably make us late to the party a lot of the time. Alternatively, publishing one after the other as they’re ready would inherently give the first one out the door greater weight and lead to all sorts of perception problems, such as accusations that we intentionally withheld the rest of them in order to help or hurt one thing or another if the scores end up diverging.
So why not do multiple reviews only for the bigger games and special occasions? For one, it’s a question of fairness. If the entire point of multiple reviews is to mitigate outlier scores, then it would be harder for a game we reviewed multiple times to get a very high or very low score than it would be for the vast majority of games that got only one review. Also, a lot of bigger games come out at the busiest times of year (which is why those times are so busy!), and that’s when our reviewing resources are already stretched to their thinnest – and of course, getting hold of a lot of copies of these big, important games is not always easy.
Even so, we’ve experimented with multiple reviewers in the past, most notably with our Star Wars Jedi: Fallen Order Staff Review in 2019. Thanks to EA supplying multiple advance copies, eight IGN staffers were able to weigh in, each with their own score, and we were able to publish it just five days after the initial review (which had gone up on November 14, 2019). While the response to the article was overwhelmingly positive in the comments, the traffic that resulted from it was not stellar: it produced just 15% of the pageviews of the single-author review we’d posted five days earlier.
That’s still a respectable amount of traffic for a normal review, but when you take into account the fact that it required eight IGN staff members to each pour about 20 hours (for a total of 160) into one game during November, one of the busiest months of the year for new game releases, it was not a good use of the time. Even if you cut it down to four reviewers per game, that’s still 80 hours that could’ve gone into Death Stranding, Pokemon Sword & Shield, Call of Duty: Modern Warfare, The Outer Worlds, Luigi’s Mansion 3, Disco Elysium, or any of a dozen other games that had come out recently – and IGN is expected to cover all of them. We may seem like a massive organization from the outside, and relative to a lot of our competitor sites out there we are, but our resources are far from limitless and we have to use them wisely.
Lastly, I’m philosophically not a big fan of averaging out review scores because, despite often being displayed as numbers, they aren’t intended to be treated as math. For example, trying to use addition and subtraction with review scores is plainly absurd: playing two bad games (4+4) one after the other doesn’t equal one great one (8), and the experience of playing just the first half of a masterpiece (10) isn’t mediocre (5). Averaging works a little better because it produces results that don’t seem crazy – averaging one 6 and one 8 gives you a 7, which looks like a perfect compromise. However, this ignores the fact that numbers on review scales aren’t really numbers in the way we traditionally think of them: they’re actually a code that symbolizes an ordered sequence of words, and averaging words doesn’t make nearly as much sense.
Even assuming you roll with averaging scores, in a lot of cases the result is something that no human who played the game actually believes… so why would you trust it? In the example above, one person thought this hypothetical game was “okay” and the other thought it was “great,” but no one described it as “good” – and yet that would end up as the final score. It’s more artificial and less relatable than an individual’s recommendation, and at the end of the day you’re no more likely to agree with that score than any other, because you are not an average. You’re a human with opinions that are just as unique as any reviewer’s. And unless you agree with every game’s Metacritic average score, why would this system – with far fewer data points – be any different?
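The scores-as-words problem above can be made concrete with a tiny sketch. The label mapping below is hypothetical, assembled only from the scale words this article itself uses (4 bad, 5 mediocre, 6 okay, 7 good, 8 great, 10 masterpiece), and the averaging function is a plain arithmetic mean, not any publication’s actual method:

```python
# Hypothetical mapping from numeric scores to the verdict words used in this article.
# The point: scores encode an ordered sequence of words, not true quantities.
LABELS = {4: "bad", 5: "mediocre", 6: "okay", 7: "good", 8: "great", 10: "masterpiece"}

def average_score(scores):
    """Naively average numeric review scores (the approach being critiqued)."""
    return sum(scores) / len(scores)

reviewer_scores = [6, 8]               # one reviewer said "okay", another said "great"
avg = average_score(reviewer_scores)   # 7.0
verdict = LABELS[int(avg)]             # "good" -- a verdict neither reviewer gave
print(avg, verdict)
```

Run as written, the two scores of 6 (“okay”) and 8 (“great”) average to 7, which the mapping renders as “good” – a word that appears nowhere in either reviewer’s actual opinion, which is exactly the artificiality the paragraph above describes.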
In short, attempting to review things multiple times would create an unsustainable amount of work, dramatically increase the cost, and cause plenty of confusion for very little gain, especially in a world where aggregate scores already exist. We also need to bear in mind that reviews and aggregates are two different things: one is an individual’s opinion and the other is more of a poll of many – and importantly you can’t have an aggregate score at all without people creating individual reviews first.
None of this means we don’t think the varying perspectives of our staff are valuable or meaningful, or that we’re trying to sweep any disagreements under the rug. On the contrary, we encourage them: you only need to tune into one of our many weekly shows, such as Beyond, Unlocked, and Nintendo Voice Chat, to find free-flowing opinions and discussions – some of them dissenting – about the games we review. You can follow any individual IGN staff member on social media if you find their tastes align well with your own and want to get their recommendations straight from the source. On top of that, you’ll find plenty of Top 10/25/100 lists where the rankings are decided by committee rather than one person who just played something for the first time. Finally, every year we post our Game of the Year Awards, which represent the plurality opinion of our staff rather than an individual’s review score. Reviews have their specific time and place, but there will never be a shortage of opinions to be found on IGN for pretty much any significant game, movie, show, or gadget.