
Music Testing: The Good, the Bad and the Ugly

Sunday 29 November, 2020

Content from BPR

Perhaps one of the most challenging types of radio research to interpret is the testing of individual songs.

There are no absolute criteria for good or bad scores. Of course, songs with high negatives (above 20%) are usually considered undesirable for airplay. Songs with combined positive scores above 60% are considered acceptable for airplay. Songs with a favourite score above 20% and positive scores above 60% are usually the biggest hits. Beyond these basic assumptions, however, analysis of song test results becomes more difficult.
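These rules of thumb can be read as a simple decision procedure. The sketch below is only an illustration of the thresholds quoted above; the function name, parameter names and labels are hypothetical, and the article itself stresses that real interpretation depends on the methodology in use.

```python
def classify_song(positive: float, negative: float, favourite: float) -> str:
    """Illustrative classification of a song test result.

    Thresholds are the rough benchmarks quoted in the article:
    negatives above 20% are undesirable, combined positives above 60%
    are acceptable, and favourite above 20% plus positive above 60%
    marks a likely hit. Labels are invented for this sketch.
    """
    if negative > 20:
        return "avoid"                # high negatives: usually undesirable for airplay
    if favourite > 20 and positive > 60:
        return "potential hit"        # strong favourite and positive scores
    if positive > 60:
        return "acceptable"           # acceptable for airplay
    return "inconclusive"             # below the basic benchmarks


print(classify_song(positive=65, negative=8, favourite=25))   # strong scores
print(classify_song(positive=62, negative=10, favourite=12))  # positive but few favourites
print(classify_song(positive=55, negative=24, favourite=18))  # high negatives
```

Note that the same raw scores would be judged differently under a methodology with stricter or looser benchmarks, which is exactly the point made in the paragraphs that follow.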

Every methodology produces different results. For example, in one methodology, songs with a positive score over 60 are considered good testing songs while, in another methodology, songs with a positive score over 50 are considered good. The same applies to all of the other results (familiarity, favourite, negative, etc.).

Simultaneous tests are ones in which the same songs are tested in the same time period but with different methodologies. When comparing results of simultaneous tests with different methodologies, it is normal to see differences in the results. This does not mean that one methodology is bad and the other is good. The standards applied to one methodology may be stricter than those applied to another. A simple analogy may help. When grading a student's performance, one teacher may give the student a higher grade than another teacher. In this case, the student's performance is the same but the teachers' rating standards are different.

Of course, the problem with comparing the results of tests with different methodologies is compounded when the test parameters of each test are different. Geographic and demographic differences, for example, can dramatically affect the test results, and methodologies with different answer options may produce significantly different results.

Experienced music programmers learn to adapt to various music test methodologies and recognise the difference between them. When test parameters change, the results will change. Perhaps the best approach to music test interpretation is to examine song tests over a period of time across multiple surveys using the same methodology. Over time, practical criteria for song analysis will emerge from a series of tests performed according to stable and consistent parameters.

Andy Beaubien, BPR










Comments
30 November 2020 - 6:56pm
So what you’re saying is music testing gives variable and random results. Yep. I guess that’s it. Sad that so many PDs follow music testing slavishly, killing great radio songs because their score is a couple of points lower than plastic pop in 2-hour rotation. Music testing is like asking a kid into a supermarket to do the weekly shop. His cart would be full of burgers, chips and Tim Tams.
jack shit

2 December 2020 - 12:35pm
Too true Walt!
radioinfo ABN: 87 004 005 109  P O Box 6430 North Ryde NSW 2113 Australia.  |  All content © 1996-2021. All Rights Reserved.