The role of scores in music, movies etc reviews

Started by DavidW, January 16, 2009, 04:27:24 AM



DavidW

Quote from: DavidW on January 16, 2009, 04:21:14 AM
The horse race format is becoming a serious problem.  Not only do we see polls and lists for people to just pick out one recording that is "the best", but nowadays more and more people look at Classics Today and Musicweb for numerical ratings to find the almighty 10/10 and ignore the others as if they had no merit.  People stop reading critical reviews altogether, and not just in music, but also in movies.  They go to Rotten Tomatoes, imdb or amazon and simply look at the score to tell them if it's "the best".  I can't imagine how uninteresting my music listening or my movie watching would be if I constrained myself to those with large numbers.

I like that some magazines (in every area) are starting to buck the trend by eliminating numerical scores, I'd like to see that everywhere.


This is a potentially wider topic than just music, which is why it's in the Diner.  What do you think of numerical scores?  Do you use them?  To what extent?  Would you defend the value of these scores?

I know that scoring is not new, but in this day and age, when websites such as Metacritic can collect dozens of critics' reviews or thousands of joe six-packs' ratings and average them, we face a problem: many people are tempted to see the numbers as absolute rather than as the subjective measures they are.  I also think it ends up treating art, such as music and movies, as a consumer product.  It tempts people into ranking or shopping for the finer things the same way they would, say, electronics.
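
To make the point concrete - a bare average can't distinguish a lukewarm consensus from a love-it-or-hate-it split. The scores below are invented purely for the arithmetic:

```python
from statistics import mean, stdev

# Hypothetical scores out of 10 for two films, from ten reviewers each
consensus  = [5, 5, 6, 5, 4, 5, 6, 5, 4, 5]     # everyone shrugs
polarizing = [10, 0, 9, 1, 10, 0, 9, 1, 10, 0]  # loved or hated

print(mean(consensus), mean(polarizing))     # both come out at exactly 5.0
print(stdev(consensus) < stdev(polarizing))  # True: the spread tells the real story
```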

Discuss. ;D

Maciek

Oh, you mean those scores!

karlhenning

Quote from: Maciek on January 16, 2009, 04:53:29 AM
Oh, you mean those scores!

Yes, I misread it the same way, and immediately wondered why this was in the Diner. But David means счёт (a score as in a tally), not партитура (a score as in sheet music)  8)



Dr. Dread


Sarastro

Quote from: karlhenning on January 16, 2009, 05:00:28 AM
But David means счёт (a score as in a tally), not партитура (a score as in sheet music)  8)

Now you've misaddressed that line.

Compare:

Над Варшавой смеется солнце, над Веной смеется весь мир.
with
Nad Warszawą śmieje się słońce, nad Wiedniem śmieje się cały świat.

(Both say "Over Warsaw the sun is laughing, over Vienna the whole world is laughing" - the first in Russian, the second in Polish.)

:D

DavidW

Quote from: mn dave on January 16, 2009, 08:48:13 AM
I give this thread two stars.  ;D

Well this thread will only take off when Pinkie posts, that's how these things work! :D

Dr. Dread

Quote from: DavidW on January 16, 2009, 09:30:51 AM
Well this thread will only take off when Pinkie posts, that's how these things work! :D

Oh, right. Well, let me try...

Modern music is horrible and the whole thing should have ended with Debussy.  ;D


Bulldog

Quote from: DavidW on January 16, 2009, 04:27:24 AM
I know that scoring is not new, but in this day and age, when websites such as Metacritic can collect dozens of critics' reviews or thousands of joe six-packs' ratings and average them, we face a problem: many people are tempted to see the numbers as absolute rather than as the subjective measures they are.  I also think it ends up treating art, such as music and movies, as a consumer product.  It tempts people into ranking or shopping for the finer things the same way they would, say, electronics.

I suppose there are plenty of folks who would rather look at a rating based on the average of many critics than actually read the reviews - good luck to them.  For me, these averages mean nothing.  But I'm not concerned about it.  The LAZY TYPES have rights too.

Brian

I commented on Roger Ebert's blog that his star ratings had, on average, increased over the last few years. Whereas four stars had previously been quite an achievement, more than a third of his new reviews are now four-star affairs, and even flicks like Zack and Miri Make a Porno earn high ratings, because he has arrived at the philosophy that if a movie states its ambitions and achieves them, it's a success no matter the size of the ambition. He responded to my comment by saying that the star ratings are utterly meaningless and nobody should pay attention to them. A few days later, his website stopped listing star ratings on the home page.



Brian

Quote from: Brian on January 16, 2009, 01:09:07 PM
I commented on Roger Ebert's blog that his star ratings had, on average, increased over the last few years. Whereas four stars had previously been quite an achievement, more than a third of his new reviews are now four-star affairs, and even flicks like Zack and Miri Make a Porno earn high ratings, because he has arrived at the philosophy that if a movie states its ambitions and achieves them, it's a success no matter the size of the ambition. He responded to my comment by saying that the star ratings are utterly meaningless and nobody should pay attention to them. A few days later, his website stopped listing star ratings on the home page.
As a film critic, music critic and theatre critic for my school paper, I should probably add my own views on the subject. There has been a recent proposal by my editor to "quantify" the ratings system - to create a rubric so that we can put in our opinions and it will spit out a star rating for us. I am vehemently opposed to the idea. A review, to be sure, is an entirely subjective affair, paragraphs upon paragraphs of somebody's thoughts. There's certainly never any reason to assume the review is "correct" about anything. Today I've received quite a few unusual glares - and, well, shouts - from people who read my trashing of Gran Torino (I gave it two stars and called it a good-hearted disaster). The reviewer's job is hard because it is not to judge a movie against an objective standard, but rather to guess what people want in a movie and tell them whether it delivers. Or, perhaps, to guess who wants to see this movie, and tell them whether they should. At any rate, there's a broader theme of identifying an audience for the review and relating to them (1) what happens, (2) how it happens, and (3) whether they should want to see it happen.

So (examples in parentheses) my own written reviews often divide time between a discussion of what exactly the work is (McCartney wrote and recorded a song every day for thirteen days. In the morning he would write down the lyrics and craft a melody, and in the afternoon he played all the instruments - electric and acoustic guitars, drums, piano, double bass and sometimes harmonica or mandolin - and sang the complex vocals), an analysis of the merits of the work in question (The ending is the biggest problem of Electric Arguments. "Dance 'Til We're High," the album's emotional climax and its greatest triumph, comes at the halfway point, and is followed by longer and longer tunes which also gradually become duller and duller), and either a description of the perfect audience for the work, or an assurance to the audience for my review that they will feel one way or another (A listener with different tastes could have written a review opposite to this one and given Electric Arguments the same high rating). So on to the question you're asking. What role does the star rating play in all this?

For Roger Ebert, the star rating is a message specifically directed at the film's target audience, telling them if it will satisfy their tastes. (Hence Tropic Thunder and Paul Blart: Mall Cop earn top recommendations!) For me, the star rating is a sort of "corrective" to the content of the review - an extremely subjective measure of exactly how much I liked it, regardless of any other criteria. Example: above I quoted my review of the Paul McCartney album Electric Arguments. As it happens, I gave the CD 4.5 stars out of 5. But I really dislike 5 of the 13 songs on the CD, including four of the last five. The five songs I dislike amount to a whopping 31 minutes - fully half the length of the album! Just on a mathematical breakdown of "good music" versus "bad music," or at least music I like/dislike, I would have to give Electric Arguments a 32 out of 63 (measured in minutes; 2.5 stars) or an 8 out of 13 (measured in tracks; 3 stars). The purpose of awarding it a (disproportionately) high rating was to communicate three things: (1) I really REALLY liked this!, (2) I don't care that there are some crappy songs, because the good ones are so awesome, and most importantly, (3) Despite the two paragraphs of criticism in this column, this is a four-and-a-half star album. In other words, the 4.5 out of 5 rating says "Ignore the fact that I complain about some stuff, and buy this anyways."
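
That back-of-the-envelope math can be sketched in a few lines of Python. This is only an illustration of the "purely proportional" rating I'm arguing against, with rounding to the nearest half star assumed:

```python
def star_rating(liked, total, scale=5):
    """Purely proportional rating: fraction liked, rounded to the nearest half star."""
    return round(liked / total * scale * 2) / 2

print(star_rating(32, 63))  # by minutes: 2.5 stars
print(star_rating(8, 13))   # by tracks: 3.0 stars
```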

Gamespot.com, a video game review site, has a review breakdown where they award 1-10 ratings on the basis of gameplay, how hard the game is, how good the graphics are, how cool the sound effects are, etc. Then the reviewer plugs the numbers into a calculator and gets a strict mathematical average as the game's overall score. But there is an additional category - "tilt." This is the place for the reviewer to adjust the final rating if (s)he thinks the mathematical average is wrong. For instance, if you play a game with terrible graphics (give it a 4 out of 10!) and no sound at all (0 out of 10!), but it's your favorite thing ever and you don't want to give it a really low rating, you'd add a high "Tilt Value" (say, a 9 or a 10) to skew the final average. I view star ratings as "tilt." They reflect not just my subjective view of where a work falls on an arbitrary scale, but also my attempt to clarify or distill my bottom-line personal opinion of the work, and to illustrate how meaningful or meaningless my criticisms might be.
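
The "tilt" idea reduces to including one extra, purely personal number in the average. A minimal sketch - simplified, since the actual GameSpot formula may have weighted its categories differently:

```python
def overall_score(category_scores, tilt):
    """Mathematical average of the category ratings, with 'tilt' counted
    as one more category so the reviewer can skew the result."""
    scores = list(category_scores) + [tilt]
    return sum(scores) / len(scores)

# Terrible graphics (4), no sound (0), decent gameplay (7),
# but the reviewer loves it anyway: a tilt of 9 pulls the average up
print(overall_score([4, 0, 7], tilt=9))  # 5.0, versus about 3.67 without tilt
```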

Hope this was interesting.  :)

DavidW

That was interesting, Brian.  I think it also shows how hard it is for a mean of different critics' scores to say anything meaningful.  You explained how you assign stars, which differs from how you said Ebert assigns them; so not only do you have to be aware of how each critic assigns ratings, it also implies that a straight-up average of different critics' scores could be meaningless if they used different systems for assigning them.

Brian

Quote from: DavidW on January 16, 2009, 03:06:35 PM
That was interesting, Brian.  I think it also shows how hard it is for a mean of different critics' scores to say anything meaningful.  You explained how you assign stars, which differs from how you said Ebert assigns them; so not only do you have to be aware of how each critic assigns ratings, it also implies that a straight-up average of different critics' scores could be meaningless if they used different systems for assigning them.
Yup! For a rating average like the Tomatometer to have much meaning, the person doing the math has to get the reviewers to agree on the same scale - and the same method of arriving at a score.
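
One standard way to get critics onto "the same scale" is to normalize each critic's scores against their own average and spread before combining them. A sketch with made-up numbers for two hypothetical critics rating the same three films A, B and C:

```python
from statistics import mean, stdev

def zscores(scores):
    """Put one critic's ratings on a common scale: how far each score sits
    from that critic's own average, in units of their own spread."""
    m, s = mean(scores), stdev(scores)
    return [(x - m) / s for x in scores]

generous = [8, 9, 10]  # rarely goes below 8
stingy   = [7, 3, 5]   # for whom a 7 is high praise

raw = [mean(pair) for pair in zip(generous, stingy)]
print(raw)   # [7.5, 6.0, 7.5] - films A and C look identical

norm = [mean(pair) for pair in zip(zscores(generous), zscores(stingy))]
print(norm)  # [0.0, -0.5, 0.5] - on a common scale, C comes out ahead of A
```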

Kuhlau

One of the things I was keen to avoid when starting my reviews blog was a reliance on 'star' ratings. I find the use of these in the reviewing of classical music almost completely arbitrary - only the lazy, as Bulldog rightly says, would make purchases based on such ratings. That's why International Record Review makes for a refreshing and intelligent read. It not only avoids any kind of numerical rating system, but also steers clear of that habit into which Gramophone has fallen, where reviews open with a one-sentence summary in bold so lazy people needn't read the 150 words that follow. ::)

FK