Wake up call for station management about equipment maintenance and upgrades

Are you still using a 10-year-old computer?

Of course not. 

Your work would be too slow, you could not run new software, your browser would not play media properly, and you know the hardware would be likely to fail unexpectedly at any moment.

We regularly update our personal computers and laptops because we want the best performance that we can afford from our IT systems.

While this is common practice for office computers, radioinfo has found that it is not standard practice in some radio engineering installations.

The failure of triple j’s transmission last week is an example of what might happen if stations do not have a regular practice of maintaining and upgrading their digital equipment.

Failure to maintain can happen for many reasons including: financial cutbacks, lack of long term service agreements, changes in engineering staff, management not prioritising equipment, or chief financial officers trying to extend the life of their assets beyond their write-off date.

We do not know exactly what went wrong at the ABC last week. The ABC declined to comment on the issue when we asked, so we cannot analyse the specific fault that put triple j and other ABC stations off air and disrupted programming for most of the breakfast shift. We can, however, offer some general advice to all stations: commit the right amount of resources to ensuring that your transmission chain is bulletproof.

Keeping your gear in good shape is an important risk management issue for broadcasters.

Ask yourself:

  • How old is my equipment?
  • Do I have service agreements in place with the installers and/or manufacturers?
  • Do I have the latest version of the software?  Do I know if the latest release will break my operation?
  • Is there a regular routine of virus checking and server performance analysis?
  • Do I have backups and ways to route around each piece of equipment if it fails?
  • Is there a person on the Board or executive team whose portfolio responsibility requires reporting to top level management about the robustness of the studio and transmission equipment?
  • Is there an adequate capital budget assigned each year for upgrades, backups and replacements?
  • Does our depreciation schedule adequately represent the real useful life of our equipment?

Did you decline to put in a service agreement to save a few thousand dollars a year, only to find that a few years later your equipment failed and it cost you much more to replace it and fix the damage done to your reputation, advertisers and staff?

We consulted several broadcast technicians who gave us some insights on best practice in configuring and maintaining digital equipment.

Here is their advice.


Analog routers used switch modules and distribution amplifiers which were regularly rotated and refurbished by a qualified team of engineers. Analog power supplies had to be changed from time to time as well. The same service routines are possible with a properly designed digital audio router, which should have individual input and output cards to avoid single points of failure. It is also advisable not to route all connections through a single processor or switch, but to use a fully distributed system instead.


A core element in each digital audio system is the control server network, which is most likely based on PC hardware. To prevent a server PC failure from causing audio drop-outs, the control PC network can be duplicated so that no single PC becomes a point of failure. Ideally, the audio routers connected to such a network hold their last known state if a PC is disconnected, which stops control-network problems from taking audio off air.
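To illustrate the hold-last-state behaviour described above, here is a minimal Python sketch. The class, method names and timeout value are hypothetical, not any vendor's API: the router keeps its current crosspoint map when control heartbeats stop, so a control-network fault freezes routing changes rather than interrupting audio.

```python
import time

class AudioRouter:
    """Toy model of a router that holds its last state if control is lost."""

    def __init__(self, timeout=2.0):
        self.crosspoints = {}             # output -> input routing map
        self.timeout = timeout            # seconds without a heartbeat
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called periodically by the (possibly duplicated) control PCs."""
        self.last_heartbeat = time.monotonic()

    def route(self, source, destination):
        """Accept a routing change only while control is alive."""
        if not self.control_lost():
            self.crosspoints[destination] = source
        # Either way, audio keeps flowing on the existing crosspoints.

    def control_lost(self):
        return time.monotonic() - self.last_heartbeat > self.timeout
```

The key design point is that the routing state lives in the router itself, not in the control PC, so losing the PC freezes changes rather than killing the signal path.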

Some manufacturers now use PC technology to run both control and audio simultaneously, which results in a complete breakdown if the PC or audio-over-IP network fails. Ask about this before buying a new system.


Office PCs, IT servers and general purpose switches are built to last around five years, even when they are used in “non-stop” professional applications such as banking, ATMs, airline booking systems or similar industrial environments. As radio and TV operators put more and more PCs in the critical paths of their operations, they have to consider whether such life cycles are appropriate, and must be prepared to replace IT components regularly.

Also, operators have to be aware that the lifetime of PC-based infrastructure does not end at a fixed date; rather, failure rates increase as it approaches its end of life, resulting in less stable operation that requires attention.
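The rising failure rate near end of life is the right-hand side of the classic reliability “bathtub curve”. A Weibull hazard function with shape parameter greater than 1 is a standard way to model this wear-out phase; the parameter values below are illustrative only, not measured data for any particular hardware.

```python
def weibull_hazard(t, shape=3.0, scale=6.0):
    """Instantaneous failure rate h(t) for a Weibull life model.

    shape > 1 gives a failure rate that increases with age (wear-out);
    scale is the characteristic life in years (illustrative value).
    """
    return (shape / scale) * (t / scale) ** (shape - 1)

# The failure rate climbs steeply as hardware ages past mid-life:
for year in (1, 3, 5, 7):
    print(f"year {year}: hazard = {weibull_hazard(year):.3f}")
```

The practical message matches the technicians' advice: a box that has run flawlessly for five years is not evidence that it will run flawlessly for a sixth.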


A significant point of failure in modern radio and TV infrastructure comes from the newer technology of digitally switched power supplies. Thanks to extreme efficiency, low heat dissipation and compact, light-weight design, switchmode power supplies have become ubiquitous. However, these digital PSUs rely on low-leakage capacitors which can dry out completely and cause the power supply to stop working suddenly. Lifespan could be 10-15 years, or as little as a few years. Smart operators know that critical switchmode PSUs should be serviced or replaced no later than 10 years after their production date. If that is not done, PSUs can burn and may cause massive damage.
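Why the lifespan spread is so wide comes down mostly to operating temperature. A common rule of thumb for electrolytic capacitor wear-out (and hence switchmode PSU life) is the Arrhenius “10-degree rule”: rated life roughly doubles for every 10 °C the capacitor runs below its rated temperature, and halves for every 10 °C above it. A quick estimate, with illustrative numbers:

```python
def estimated_cap_life_hours(rated_life_h, rated_temp_c, operating_temp_c):
    """Rule-of-thumb electrolytic capacitor life estimate.

    Life roughly doubles for each 10 C below the rated temperature
    (and halves for each 10 C above it). Numbers are illustrative.
    """
    return rated_life_h * 2 ** ((rated_temp_c - operating_temp_c) / 10)

# A 2000-hour, 105 C rated capacitor running at 65 C in a warm rack:
hours = estimated_cap_life_hours(2000, 105, 65)
print(f"{hours:.0f} h, about {hours / 8760:.1f} years of continuous service")
```

With these assumed figures, a warm rack at 65 °C gives only a few years of 24/7 service, while the same part kept at 45 °C would last well over a decade, which is consistent with the 10-15 years versus a few years spread quoted above.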


A more insidious problem is the skills gap that has arisen as radio and TV stations backfilled retired or retrenched broadcast service engineers with IT specialists. As more and more broadcast technology is implemented in software on common off-the-shelf computer or network hardware, IT skills are vital.

But it’s not just a matter of skill. We consulted John Maizels, President of Technorama, on this point. He told radioinfo:  “The IT and Broadcast industries work 180 degrees apart.   The IT person knows that hardware and software will fail, but is focused on ensuring that the process has integrity – that is, it works as expected.  If you have to wait a few seconds to get the right result, that’s OK.  To a Broadcast technologist, that couple of seconds is FAR from OK.  It’s dead air, or black picture, and a few seconds of that is enough to cause a viewer or listener to change station and not come back.

“It’s a global problem that training specific to Broadcasting has diminished, and with that has gone the skill and understanding that old-school techs and engineers had; the skill that kept the listeners from noticing a problem even if the sky was falling at the station.  A typical radio station might have several hundred in and out points, many different ways of achieving the right program output, and a highly subjective assessment process – the sound is OK if the PD thinks it’s OK.  So broadcast maintenance can be vastly more complex than the desktop IT equivalent. Trained broadcast technologists are valuable.”

Broadcasters do not use just one audio server and one microphone – a normal radio station has several hundred inputs and outputs all coming from different sources, making it much more complex than normal desktop IT maintenance. Many problems are not related to the products; they often occur due to a lack of dedicated broadcast knowledge and manpower in the stations.


Of course broadcast stations want to reduce costs, but has the industry gone so far in cost reduction that it is risking the continuity of its operations, potentially incurring revenue losses greater than the annual salary of a suitably experienced broadcast technologist?

The former Head of Transmission at the ABC, Mark Spurway, has also contributed his thoughts on best practice approaches to technical risk management:

“The ABC has quite complex transmission and distribution systems for radio due to the large nationwide terrestrial networks broadcasting over several time zones.

“While I cannot comment on the recent outage, a network wide disruption of this size is rare and would most probably be due to a distribution issue either related to satellite delivery or a failure of equipment in the master control area.”

As a general comment there are four fundamental processes that broadcasters can address to minimise the risk of outages, which are always a threat in any 24/7 operation, according to Spurway. They are:

  1. Immediately patch around the problem if it is safe to do so – don’t try to work out the technical cause of the outage before the content has been restored.
  2. Do a risk assessment of the transmission chain from studio to transmitter tower site(s) to determine whether there are any single points of failure (SPFs) and take action to minimise them.  This includes Uninterruptible Power Supply (UPS) systems, which are becoming more critical in an era of potentially unreliable mains power.
  3. Practise annual ‘war games’ to ensure staff, including Content Directors and Management, are aware of potential scenarios that could take a station off air, and provide training to relevant staff to act decisively and quickly to restore the content.  Scenarios can include everything from studio faults to control room equipment, distribution systems, transmitters, and antenna and tower malfunctions.  Importantly, ensure staff are adequately trained to make the initial restoration of content in a 24/7 operation.  It can’t just be left to the Head of IT or Engineering to turn up.
  4. Finally, document fault and outage restoration processes so they can be easily accessed and followed, and review them annually.


We know we have many technical readers on radioinfo and AsiaRadioToday. We welcome your comments on this article.

See related article: The Elephant in the room is a time bomb: Des DeCean


Photo: Shutterstock
