Wheatstone for radio: You know what they say about radio and silence. Right. So don’t even go there. At Wheatstone, we know you have one thing and one thing only that’s going to raise you above the din of today’s multimedia world. Your sound. If it’s just pictures you want, that’s not us. Wheatstone is all about audio. We process it, route it, and cue it up for you. We get it to do stuff that only radio can fully appreciate, starting with audio IP routing (AoIP) that thinks like you do and radio consoles that are the everyday workhorses of thousands of radio studios today. Cool consoles and mixers. Intelligent audio IP studio networking or TDM routing. AM and FM on-air processors that rock. It’s all right here.

Click to download our NEW RADIO PRODUCTS FOR 2015 Brochure

Which Switch for AoIP?

IP audio networks are very different from standard enterprise or office networks in almost every way, but nowhere more dramatically than in the nature and volume of traffic they handle.

Switches in these networks need to be able to handle large, continuous streams of data.

Consider these two graphs that were taken over a one-minute period on two different networks. On the left is a simple office network of 18 PCs doing what PCs usually do – browsing the web, accessing printers, moving files around, and sending and receiving e-mail. You can see that the traffic peaks out at about 144 packets per second, and that the traffic is very “bursty.” Sometimes the network’s very busy, and sometimes it’s relatively quiet. This is typical of most computer networks.


On the right is a graph taken after three audio-over-IP channels were added to that same network. Note that the scale of the graph is different. We have gone from 144 to 25,000 packets per second, which is 173 times the peak traffic we had before. In addition, note that this traffic is steady, not bursty. That high packet rate stays high, and would go even higher if we added more channels. This is what traffic looks like on an AoIP network – very high bandwidth, all the time.
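If you want a feel for where numbers like that come from, here's a quick back-of-the-envelope sketch in Python. The packet timing below is an assumption for illustration only; every AoIP protocol (WheatNet-IP, AES67 and so on) defines its own packet sizes and rates.

```python
# Rough packet-rate estimate for continuous, uncompressed AoIP streams.
# SAMPLES_PER_PACKET is an assumed figure for illustration; real AoIP
# protocols define their own packet timing, so actual rates will differ.

SAMPLE_RATE = 48_000         # audio samples per second
SAMPLES_PER_PACKET = 12      # assume 0.25 ms of audio per packet (low latency)

pps_per_stream = SAMPLE_RATE / SAMPLES_PER_PACKET
print(f"One stream:    {pps_per_stream:,.0f} packets/s, every second, all day")
print(f"Three streams: {3 * pps_per_stream:,.0f} packets/s")
```

The exact count depends on how much audio rides in each packet, but the shape of the traffic is the point: thousands of packets per second, per stream, with no quiet moments.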

So, when we’re choosing switches for use in our IP audio networks, we look for some definite features and qualities that can handle this traffic load.

First off, the switch has to have a high-capacity fabric, which is the actual mechanism inside the switch that allows it to pass data among its ports. There are a lot of different ways that switches handle traffic – store and forward, cut-through, fragment-free, adaptive switching – but no matter what type of fabric is used, it’s got to be of sufficient capacity to handle full bandwidth traffic without blocking.
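A useful rule of thumb: a non-blocking fabric needs enough capacity for every port to run at full line rate in both directions at the same time. A quick sanity check:

```python
# Non-blocking fabric sizing: every port at line rate, full duplex.
ports = 24
port_speed_gbps = 1                         # Gigabit Ethernet ports

fabric_gbps = ports * port_speed_gbps * 2   # x2 for send + receive
print(f"A {ports}-port gigabit switch needs ~{fabric_gbps} Gbps of switching fabric")
# -> 48 Gbps; compare that against the fabric rating on the switch's data sheet
```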

Second, the switch has to be able to snoop IGMP packets and switch them appropriately. Otherwise, multicast traffic will flood every port and degrade performance across the network.

Third, the switch has to be managed. We can’t set up, monitor, or control the switch correctly without this crucial feature.

And finally, the switch has to have enough ports to support our intended use of the switch, preferably with a reasonable amount of room for expansion.

Switches as Audio Routers

Ethernet switches do just what it sounds like they do. They operate at OSI Layer 2, which is the data link layer, and they look at the MAC (Media Access Control) address in every packet's header. The switch builds a table of what MAC addresses exist on what ports, and sends packets to the right ports. This means that even during heavy traffic conditions, each port only gets traffic it's supposed to get and nothing else.
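If you like to see the mechanism spelled out, here's a toy sketch of that MAC-learning behavior. It's a simplification for illustration, not how any particular switch vendor's firmware actually works.

```python
# Toy model of a Layer 2 learning switch: learn source MACs by port,
# forward known destinations to one port, flood unknown destinations.

class LearningSwitch:
    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.mac_table = {}                 # MAC address -> port it was seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port   # learn where the sender lives
        if dst_mac in self.mac_table:       # known destination: one port only
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]   # unknown: flood

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame(0, "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))  # flood: [1, 2, 3]
print(sw.handle_frame(1, "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # learned: [0]
```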

Switches communicate in "full duplex" mode, meaning each port can send and receive at the same time. Each machine on the network can effectively “hear” while it's “talking,” which really speeds things up.

Under the control of an AoIP protocol such as WheatNet-IP, the switch carries out the actual routing and distribution of audio throughout the network. That work might be handled by a combination of core and/or edge switches, in which case they collectively act as your audio router and distribution system.

Which Switch is Which?

There are two basic types of switches: managed and unmanaged. Unmanaged switches are the off-the-shelf, sold-in-a-colorful-box switches that you find at the office supply store. They're generally used for building small, basic networks. These switches don't have the most powerful switch "fabric" (the guts that do the switching), which means they could eventually crash, flood the network with garbage, or both. For this reason, and others, unmanaged switches are not suitable for AoIP networks.

Managed switches come in two flavors: Layer 2, which are the sort of switches you’ll find in the IP audio network world; and Layer 3 switches, which are highly sophisticated IP routers in and of themselves.

Managed switches are professional-grade switches. They have a configuration interface so you can get inside the switch and set various operating parameters. They're designed for environments where reliability and high availability matter. They have advanced features like Spanning Tree Protocol and Link Aggregation built in. And you can usually monitor them in real time to see how traffic patterns are shaping up and where your bottlenecks, if any, might be. Plus, they support (at one level or another) IGMP, which is essential in the AoIP world.
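As an example of that monitoring, most managed switches expose their port counters over SNMP. Here's a minimal sketch that reads one interface's inbound byte counter; it assumes the pysnmp Python package, and the switch address and community string shown are placeholders.

```python
# Read a managed switch's inbound byte counter over SNMP. Poll it twice,
# a few seconds apart, and the difference gives you the port's traffic rate.
# Assumes the pysnmp package; host and community below are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def read_in_octets(host, if_index, community="public"):
    err, status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),             # SNMPv2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", if_index))))
    if err or status:
        raise RuntimeError(err or status.prettyPrint())
    return int(var_binds[0][1])

# bytes_in = read_in_octets("192.0.2.10", 1)   # port 1 of a hypothetical switch
```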

IGMP Required

IGMP is the Internet Group Management Protocol, the protocol that manages multicast group membership on the network. It's designed for applications like AoIP that send a lot of data across the network. It allows a source to send a stream (an audio channel, for example) out just once, and for receivers or "subscribers" to tap into that stream and receive it.

When a source is needed, a "group" is created and the source is streamed. When a destination needs that source, it sends out a special message and "subscribes" to that group, and it then receives the stream.

The switch remembers these subscriptions and routes packets accordingly, so only ports that have subscribers on them receive the stream.

How does it do this? IGMP snooping. Most Layer 2 managed switches have IGMP snooping, a feature that lets the switch “look” inside the IGMP membership messages passing through it. It "knows" when a subscriber signs up to receive a stream, and stores this information in a table. The switch then allows those multicast packets to go only to ports that are supposed to get them, so that the streams don't flood the network. When the last subscriber on a port drops out of a group, the switch "prunes" that port's traffic. This optimizes traffic on the network and keeps bandwidth usage as low as possible.
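From the receiver's point of view, that "special message" is an IGMP membership report, and an application triggers it simply by joining a multicast group. Here's a minimal Python receiver; the group address and port are placeholders for illustration, not an actual WheatNet-IP stream address.

```python
# Minimal multicast receiver. Joining the group makes the operating system
# send an IGMP membership report, which the switch's IGMP snooping sees,
# so the stream's packets start arriving on this port and no others.
import socket
import struct

GROUP = "239.192.0.1"    # placeholder multicast group address
PORT = 5004              # placeholder UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# The "subscription": join the group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)   # audio packets now land here
```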

Here is why multicasting is such a good idea for AoIP networks. Shown are three PCs that are set up to receive audio from a server. Without multicast, the server creates three streams, one to go to each PC, and sends them out onto the switch. The switch dutifully sends each stream to its intended receiver. But the switch is now handling a lot of packets -- the whole stream, times three. This isn't efficient, and if we multiplied this out, it would become unworkable.

Multicast is a much better way to deliver packets in the AoIP network. With multicasting, a single stream of packets leaves the server, carrying the audio. At the switch, the group table says that ports 1, 3, and 5 have subscribed to the group, so the packets are sent to those three ports in parallel. The switch is handling a third of the traffic through its switch fabric, and if another port subscribes, there's really not much of an impact on the traffic overall.
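A quick back-of-the-envelope comparison makes the same point with numbers. The stream rate below assumes uncompressed 48 kHz, 24-bit stereo audio with no packet overhead, just to keep the arithmetic simple.

```python
# Traffic entering the switch from one audio source, unicast vs. multicast.
stream_mbps = 48_000 * 24 * 2 / 1e6    # ~2.3 Mbps: 48 kHz, 24-bit, stereo
receivers = 3

unicast_in = stream_mbps * receivers   # server sends one copy per receiver
multicast_in = stream_mbps             # server sends one copy, period

print(f"Unicast into the switch:   {unicast_in:.1f} Mbps")
print(f"Multicast into the switch: {multicast_in:.1f} Mbps")
# Add a fourth or fifth subscriber and the multicast number doesn't move.
```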

Switch Configurations

In AoIP, as in other kinds of networking, we use switches in two roles – edge and core. Edge switches are generally small, lower-capacity switches. We still want them to have all the features we discussed before, but they’re meant to be placed on the periphery of the network, like in a studio or other area within the facility. We might, for example, bring the control surface, the audio access point, and perhaps a remote button panel into an edge switch in the studio.

We don’t need many ports on an edge switch – just what’s local, plus one or two ports to connect it to the core switch. This has two advantages: first, it concentrates traffic so we only need one or two runs back to the core switch, rather than one for each device; and second, it gives us the ability to operate the studio as an independent “island” in the event that there’s a problem with the core switch.

Core switches are big and centrally located, and represent the nexus of the facility. All of the edge switches connect back to the core switch, which generally lives in a rack room or central machine room. Devices local to that area are also often brought directly into the core switch. Core switches are often made very large by stacking multiple switch units – Cisco’s proprietary stacking and cabling system, for example, is called StackWise®. Core switches can also be designed in such a way that they offer redundancy.

Here we see a facility of edge and core switches. You can see the edge switches located in each studio, with all local devices connected to them.

From each edge switch, there’s a run back to the core switch. As you can see, if the core switch were to fail, each studio would still be able to function as an “island.”

Other Switch Considerations

We suggest keeping AoIP networks separated and isolated from normal office / enterprise networks. If the networks are not isolated, each network has the potential to adversely impact the other – the guy down the hall streaming video can occupy bandwidth that the AoIP network needs, and the AoIP network can generate enough traffic to make web browsing and other activities somewhat slow.

You can do this by using a large, managed switch to create a separate VLAN for the AoIP network; provided the switch fabric has the capacity, this is fairly safe. However, since you might not have full control of that switch if it’s “owned” by the IT department, we generally prefer to see physical separation of the networks, i.e. not sharing any hardware or infrastructure at all with an office network.

Overall, switches are integral to a larger AoIP ecosystem that includes WheatNet-IP I/O BLADEs, control surfaces, NAVIGATOR software and scripting, talent stations, and processing.

Cris Alexander On Technology Disconnect

You know that big disconnect where you have new technology on the way in and old technology on the way out, and a budget that doesn’t quite cover it?

We’ve all experienced awkward technology transitions. But there are some engineers, like Cris Alexander, the DOE for Crawford Broadcasting, who seem to manage these better than most. Cris has been using Wheatstone consoles and network systems since at least 2005, when he purchased our TDM router with G-6 consoles. He’s been known to get a budget to stretch like taffy across five major markets and several decades of technology.

 


We asked him for a few tips and got back these useful Cris-isms:

Reuse, recycle, reclaim. His solution for the big disconnect between existing TDM technology and newer IP audio networking is classic green economics: bring the most dated studios up to current technology using network hardware that can be repurposed.

Until this past fall, the three production studios for the Denver cluster were all analog. Updating these to new Wheatstone surfaces with WheatNet-IP audio network was a no-brainer. But deciding how to connect them to the four on-air studios and the newsroom that would remain with TDM routing for another five years required some strategy. “We thought about using a MADI card to bridge the WheatNet-IP with the TDM router in the interim, but we’d never be able to get the useful life out of it,” he said.

Instead of MADI, Cris tied the two systems together using the I/O in a standard BLADE access unit that could be reassigned to another studio or part of the network once the facility went AoIP throughout. “MADI for us was life limited, whereas the (WheatNet-IP) BLADE I/O unit could bridge the two easily and cost-effectively, and still serve a useful life after we converted everything to WheatNet-IP,” he explained.

Extend the life of what you have. Cris isn’t in any rush to replace the cluster’s Wheatstone TDM Gibraltar network, however. “It still works and looks like new, is in excellent condition and has years left on it,” he said of this TDM workhorse that remains in the four main studios and newsroom. Just recently he replaced the hard drives on the routing system, which reset the depreciation clock back to almost new and will give him at least another five years of useful service out of the system -- or more. “Actually, we could probably keep this system for another ten years,” he added.

Get same in upgrades. His TDM routed studios have G-6 console surfaces. When it came time to upgrade the production studios to WheatNet-IP, he looked for – and found – the IP equivalent that would give his talent the same feel and function they were used to in the G-6 console. “The E-6s were very similar and we even got the classic style E-6 that matched the appearance of the G-6s. It makes all the difference in bringing together the facility,” he said.

But get the best. In almost all cases it is best to go with the latest generation of equipment if you can afford it, according to Cris. For high-availability access points in the new AoIP network, he went with WheatNet-IP BLADE-3 I/O access units rather than the second-generation equivalent in order to gain a few helpful features that will reduce acquisition costs in the long run. For example, while second generation BLADEs had removed outboard DAs from the balance sheet because of built-in utility mixers, stepping up to third-generation BLADEs at certain access points gave him this, plus audio processing at these access points that will eliminate outboard processing in many cases – and contribute to a better sound overall.

Incidentally, for the access points that use second-generation WheatNet-IP BLADEs, Cris made sure to upgrade their CPU software in order to squeeze every ounce of performance and usability possible from these I/O units.

Look ahead for any disconnects down the road. This is where product design and technology standards in general can make a difference. For example, Cris likes that Wheatstone’s WheatNet-IP BLADE-3 I/O units are AES67 compatible, a standard that Wheatstone engineers helped ratify in 2013 as part of an industry effort to provide interoperability between systems and equipment. “That’s just another thing that helps future-proof our radio stations,” commented Cris.

Once you’ve perfected your approach, duplicate it. Cris tests and perfects new technology transitions at the group’s Denver cluster, where he’s located, and then rolls out the proven results to Crawford’s four other clusters in major markets. There are several new BLADEs and E-6 control surfaces on the way to him as we write this, all of which will be used to upgrade Crawford stations in Detroit, Birmingham, Chicago and Los Angeles.

EDGE Network Interface to Wireless IP Links

You know those inexpensive wifi IP radios everyone’s talking about for short studio-transmitter hops or for getting the signal back to the studio from the ballpark?

We have something for that, and it even won a Best of Show award from Radio World and Radio magazine.

We call it the Network EDGE, a cost-effective solution for interfacing between high-quality, low-latency studio networks such as WheatNet-IP and low-bandwidth STL connectivity options such as IP wireless radios.

This single rackspace unit can come in handy for any Part 15 wifi link, or any half-duplex system. In fact, our own Jay Tyler has found the Network EDGE to be quite useful for running audio from his covered boatlift to the gazebo at his house.

Click here for the Network EDGE product page

Banish the PC from the Studio. Virtualize IT.

Which one of these doesn’t belong? Microphone. Console. Monitor. Or, that noisy, lump-of-a-box that is the PC workstation in your on-air studio?

The PC workstation obviously needs to go, and we don’t mean to the equipment room where all the other noisy things end up. “KVMing” it from the TOC to the on-air studio just adds cabling and complexity that can mess up touchscreen controls.

The point is, you don’t need it, as Greg Armstrong, the DOE for RadiOhio, will tell you. He recently installed thin client replacements no bigger than a laptop that snap onto the back of the studio monitor, doing away with all PCs for his group’s six WheatNet-IP studios and four edit booths in Columbus, Ohio.  


Network EDGE wins TWO NAB Best of Show Awards!

We are EXCEPTIONALLY excited to have won BEST OF SHOW awards from both Radio Magazine AND Radio World Magazine for our brand new NETWORK EDGE!

Network EDGE is designed specifically as a translator between high-quality, low-latency studio networks such as WheatNet-IP and low-bandwidth STL connectivity options such as IP wireless radios.


Wheatstone-Eventide Handshaking

In celebration of Wheatstone's partnership with Eventide, Richard Factor (left), Chairman of Eventide, and Gary Snow (right), President of Wheatstone Corporation, did a bit of handshaking of their own at booth C755 at NAB 2015 in Las Vegas.

What are these two up to? WheatNet-IP integration into Eventide products, that's what. Eliminating one more network box in the studio chain, Eventide’s BD600W delay unit is now available with an optional WheatNet-IP network card for easy and seamless integration of profanity delay into the WheatNet-IP audio network. You can see this integration in action, live and up-close, at Eventide's booth #C2848.

Wheatstone At NAB

We had a WONDERFUL show at NAB. While we never set out to win these things, we always seem to and this year was no exception. FOUR, count 'em, FOUR BEST OF SHOW awards.
We've already posted a few images from our first day on the show floor.


Want to see more? The full photo galleries are here, updated as the show goes on: NAB 2015 Photos

LPFM. Going Pro.

You can’t be a professional football player without throwing around a few Wilson footballs.

In fact, the footballs that have passed from one NFL great to another have come out of Wilson’s Ada, Ohio, factory, where they’re stitched inside out, steamed and laced to exact specifications, and inflated to 13 psi before being sent off to play the game.

If you’ve just joined the broadcast big leagues and have acquired your first LPFM construction permit, you can guess where we’re going with this. In almost all cases, it’s better to go with a professional broadcast console than to try to get a music store mixer to pass as one.

 


A professional broadcast board will give you logic buttons on each fader so you can stop and start sources. It’ll provide speaker muting that mutes monitor speakers when your mic is on, eliminating the possibility of feedback. A broadcast board will have a straightforward way to output programming to air and streaming at the same time, and a means for controlling an ON AIR tally light to alert others that you are currently on the air with a live mic. It won’t have too many controls that provide opportunities for your guest operators to do harm to your program. Nor will it require you or your weekend talent to have to figure out what bus assignment goes where.

It will give you a simple interface to the task at hand: broadcasting. Broadcast consoles are made to easily handle music from a PC and to cue up mics and listener calls, which is why the broadcast console is a much more intuitive work surface for most LPFMs.

On the other hand, sound reinforcement boards are made for live sound applications requiring lots of hands-on sound shaping of source feeds. With this come the many knobs and buttons for equalizing, filtering and mixing handfuls of feeds – all of which is going to cost you in complexity.

The Curious Behavior of Radios

Louder is better! Crank it up! Well, not so fast...

Ever wonder what your listeners' FM radios sound like when your station is knee deep in the loudness race and the modulation monitor is always pegged? Our audio processing development guru, Jeff Keith, wondered about that too.


So, during one quiet week at the Wheat processing lab, he decided to find out. He selected 15 radio receivers representative of the radios most commonly in use, and got out his trusty modulation analyzers, signal generators and other assorted test gear. He ran audio sweeps of de-modulated and de-emphasized FM audio and plotted SMPTE IM distortion of the receiver’s audio output as modulation was raised, among other tests. His main goal was to discover distortion trends in radios at modulation levels of 110% or more. Here are a few of his findings, the details of which will be presented during the upcoming NAB Broadcast Engineering Conference (BEC).

  • The more recent the radio model, the more intolerant of high modulation it is likely to be.
  • Newer AM/FM/HD radio IC chips detect high deviation (over-modulation) and often, in an attempt to fix the problem, create unpleasant audio effects.
  • Many consumer receivers have restrictive intermediate frequency (IF) bandwidths, which can mean perceptibly distorted audio even when tuned to a normally modulated station. The IF bandwidth of one radio measured was barely 100 kHz wide at the 3 dB point.
  • Half of the receivers tested added significant IM distortion at modulation levels as low as 120%.

Jeff Keith’s paper “The Curious Behavior of Consumer FM Receivers During Hyper-modulation” will be published in the 2015 NAB Broadcast Engineering Conference (BEC) Proceedings and presented during the NAB Engineering Conference, Sunday, April 12.

Gigabit Ethernet. Just the Facts.

Numbers don’t lie. That’s what your friendly police officer will tell you when he clocks you going 70 in a 35 mph zone. But, this isn’t entirely true when it comes to the speed of Gigabit Ethernet networks.

Most of us assume that Gigabit Ethernet links transfer data at one gigabit/second, or 10 times faster than 100Mbps Fast Ethernet.

But, in fact, a Gigabit Ethernet (1000BASE-T) link uses all four twisted pairs in the cable, each clocked at a 125 MHz symbol rate and carrying 250 Mbps in each direction simultaneously. What the "Gigabit" actually means is that a gigabit of information (data payload plus overhead) can travel across the cable in one second. Because of the efficiency of the modulation scheme and the use of all four pairs in both directions, instead of a pair each way as is the case for Fast Ethernet, Gigabit Ethernet is effectively 10 times faster than 100BaseT (Fast Ethernet).
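The arithmetic behind those numbers is simple enough to show in a couple of lines:

```python
# How 1000BASE-T gets to a gigabit over four pairs.
pairs = 4
symbol_rate_mbaud = 125     # 125 million symbols per second on each pair
bits_per_symbol = 2         # PAM-5 line coding carries 2 data bits per symbol

per_pair_mbps = symbol_rate_mbaud * bits_per_symbol   # 250 Mbps per pair
total_mbps = per_pair_mbps * pairs                    # 1000 Mbps overall
print(per_pair_mbps, total_mbps)                      # 250 1000
```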

At an order of magnitude improvement over Fast Ethernet, Gigabit Ethernet allows the audio network to deliver many more packets, much faster, which goes a long way toward reducing congestion and latency.

 

The Gig on Latency

Take latency. Latency in an IP audio network is the delay between when audio enters the system and when it comes out. Every audio network has some latency because it takes a small but measurable amount of time to take analog audio in, convert it to digital, construct the AoIP packets, transmit them across the network and then reverse the process at the other end. In any IP system, the transit time across the network of an individual piece of data is not guaranteed or predictable. Ethernet networks are designed to avoid data collisions (which happen when different bits of information try to occupy a wire at exactly the same time) by squeezing out packets in between other packets in a multiplexing process controlled by the network switches. You just don't know when "your" packet is going to get there.

The IP audio network deals with this by using temporary storage in buffers on each end. It fills up a pool of information on the transmit side so there is a ready source of data whenever the switch is ready to send a packet. Likewise, it fills up a pool of data on the receive side so there is enough data to carry you over the breaks when the network is busy sending someone else's packets.


As long as the transmit and receive buffers fill and drain at the same rate there is no interruption in final data delivery. The buffers absorb the variance in packet delivery. The catch is that for this scheme to work, the buffers are designed to be half full of data on average, so as to be deep enough that the data in the buffer never runs out or overflows during the worst-case variance in packet timing. This means that the receive data can't start playing out until its buffer is half full or the scheme won't work. The length of time it takes to fill the initial buffer half full is a main part of latency.
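To put a rough number on that, here's a sketch of how the half-full buffer target turns into milliseconds. The buffer depth is an assumed figure for illustration, not a WheatNet-IP specification.

```python
# Latency contributed by the receive buffer's half-full target.
SAMPLE_RATE = 48_000      # samples per second
BUFFER_SAMPLES = 256      # assumed total buffer depth, for illustration

half_fill = BUFFER_SAMPLES / 2
latency_ms = half_fill / SAMPLE_RATE * 1_000
print(f"Buffer's contribution to latency: {latency_ms:.2f} ms")   # ~2.67 ms
# A smaller buffer means less latency, but less cushion against late packets.
```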

What does this have to do with Gigabit Ethernet, you might ask? Just about everything, actually.

Because a gigabit link is 10 times faster with 10 times the throughput of Fast Ethernet, packets can get to their destinations faster. Furthermore, the large capacity of the link allows for many more packets to traverse the network without risk of congestion and collisions and delays by the switches trying to find an opening on the wire for a packet. Because there is less concern with congestion, packets can be made smaller and more of them can be sent more frequently. Thus, buffers can be smaller and therefore, latency can be decreased. On the flip side, less link capacity often means larger data payloads, which can be necessary to ease congestion in lower bandwidth environments but at the unfortunate expense of increased latency.

Big Capacity

From the system perspective, the capacity of a link is all-important. As advertised, Gigabit Ethernet can reasonably handle 10 times the capacity of Fast Ethernet. For example, whereas you might push the upper limit of your Fast Ethernet link at 16 stereo audio channels, a Gigabit Ethernet link will be able to easily do 160 stereo audio channels.
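Those channel counts come straight out of simple division. The figures below assume uncompressed 48 kHz, 24-bit stereo audio, a rough allowance for packet overhead, and some headroom left unused; real-world numbers will vary with the protocol and how hard you're willing to push a link.

```python
# Rough stereo-channel capacity per Ethernet link.
stream_mbps = 48_000 * 24 * 2 / 1e6   # ~2.3 Mbps per stereo channel, raw audio
overhead = 1.3                        # assumed allowance for packet headers etc.
headroom = 0.75                       # don't plan to fill the pipe completely

def channels(link_mbps):
    return int(link_mbps * headroom / (stream_mbps * overhead))

print("Fast Ethernet (100 Mbps):", channels(100))     # a couple dozen channels
print("Gigabit (1000 Mbps):     ", channels(1000))    # ten times that
```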

One hundred sixty audio channels might seem like overkill in your studio, but it doesn’t take long for signals to add up. The more you ask of your audio network, the more capacity it will need to handle busses and foldbacks, backup sources, mixes, and headphone streams -- not to mention control and monitoring signals. If you want to automatically switch between live assist and dayparts, for example, that takes something like a utility mixer (which is part of our WheatNet-IP BLADEs) to switch them at the right time and level – plus the capacity to handle that switching. Put a few I/O devices in a studio and pipe their audio over a link to your rack room and the channel count goes up quickly.

Chances are you will need to run more than 16 audio channels through a link at one time. Any time you add more capability onto the system beyond a basic input or output channel, that’s when you need capacity. It’s also nice to have enough of it available for when you want to add something like an audio clip player or multiband audio processing to a network I/O unit (which we did recently with the introduction of our new BLADE-3 I/O units). Having the available channel capacity allows us to add in the new features and functions that enhance the power and flexibility of the system without running out of network resources.

There’s also the flip side of capacity, or what happens when you run out.

As you add more channels to a link, the possibility of dropouts increases until they become commonplace and you hear them routinely. The degradation isn’t linear, either; it builds gradually and then falls off a cliff.

In fact, there’s a lot at play in the audio network that affects the quality of the end result. IP audio networks are highly stressed, running much more traffic than initially expected. That’s why it makes sense to use a topology (Gigabit Ethernet) that is more tolerant of the workload IP audio puts on it.

For example, the bigger the switch capacity, or what is referred to as switch fabric, the more packets it’ll be able to move. Just as on the Ethernet link itself, IP audio network switches should be sized and configured to handle the amount of traffic you're going to throw at them -- both today and five to 10 years from now when you'll ask your system to handle the new features we haven't even dreamed about yet.

By using Gigabit Ethernet links and switches you'll have the highest capacity, lowest latency, most future-proofed system available today.

Checking in with iHeartMedia Portland

We dropped in on iHeartMedia in Portland recently to revisit a WheatNet-IP audio network that has been in operation since the seven-station cluster moved to Tigard, Oregon, in September 2012. Director of Engineering Chris Weiss showed us around the 17-studio, 25,000-square-foot facility and talked about life with audio over IP.

He recalled a recent remote at the Rose Quarter for a Portland Trail Blazers basketball broadcast that involved all seven stations at the same time – an impossible feat before IP audio networking. “It was more a staffing issue; could we have enough promotion and programming staff to handle all this? But from an equipment standpoint, it was easy,” he said.


At the center of the operation are the audio network’s core Cisco switches, which are bonded together on a backplane in the TOC, with gigabit/second connections to every other switch and element in the network. “Everything works better at a gig, especially NexGen (automation),” commented Weiss, who monitors network traffic on a regular basis. Normal NexGen traffic hovers around the 100 Mbps mark, whereas on the fiber connection to the hub point for all the cluster’s transmitter sites, Weiss routinely sees steady traffic at about 150 Mbps. “150 megabits. That freaked me out at first because you never see that kind of bandwidth solid on a circuit. But that’s what it takes because it’s running all this AoIP back and forth, and we run a video feed for the Trail Blazers over that,” he said.

The operation includes 56 WheatNet-IP I/O BLADEs, 49 audio drivers, 23 Wheatstone M2 dual-channel mic processors to handle 46 microphones, and 13 control surfaces all connected through a WheatNet-IP audio network.

Look for details in Radio magazine, which features the iHeartMedia Portland facility as the cover story of its February issue.


Wheatstone BLADEFEST

Enhancing System Performance

September 2014: Wheatstone's WheatNet-IP engineers get together to try to break a huge system assembled to be representative of all our control surfaces and many, many BLADEs and processors, as they'd be used in a very large installation. In the process, they make the products faster, better, and stronger. We called it BLADEFEST. And the engineers who took part were our BLADE RUNNERS...

The above video documents the process. The article below (expanded here) appears in the Jan/Feb 2015 edition of Radio Guide Magazine. 

