The end of latency?

David Mellor

David Mellor is CEO and Course Director of Audio Masterclass. David has designed courses in audio education and training since 1986 and is the publisher and principal writer of the Audio Masterclass website.

Wednesday December 9, 2020

How much would you love to be able to record without latency? In the old-fashioned analogue days you could. Now - perhaps - you can.

In the past, sometimes known as 'the good old days', there was no latency.

Hard to believe but true. The instant a signal went into a microphone in the studio, it came out through the monitors in the control room. Oh yes, that's the good old days when studios had a separate recording space and control room. Now we record in a spare room or shed at home. Economics.

But those were the analogue days. We have now moved on to digital-everything. What isn't digital these days? Not much.

The problem though is that the natural world is analogue, so things need conversion to enter the digital domain. And that's where latency comes in.

It takes time to convert an analogue electrical audio signal to digits, time to let it roam around the inside of your computer, and more time to convert it back again so that you can hear it.

It's only a matter of milliseconds or tens of milliseconds, but remember that the latency in analogue audio was stone-cold zero.

To put some numbers on this, I measured the latency in my system from input to output back in 2005, with of course more primitive equipment than we have today. It was 48 milliseconds. That is easily audible and a distinct distraction.

Today with a reasonably fast computer and a cheap USB audio interface I can get down to 10 milliseconds. But that is with my DAW's buffer set to the minimum value, which isn't practical once your session has more than a few tracks and plug-ins.
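To see where those milliseconds come from, here is a rough back-of-envelope sketch: one buffer of delay on the way in, one on the way out, plus the conversion time at each end. The figure of 1 millisecond per converter stage is an illustrative assumption, not a measurement of any particular interface.

```python
# Rough round-trip latency estimate for a buffered digital audio chain.
# The 1 ms per converter stage is an illustrative assumption, not a
# measurement of any particular interface.

def round_trip_latency_ms(buffer_samples, sample_rate_hz, converter_ms=1.0):
    """One buffer of delay in, one buffer out, plus A-D and D-A conversion."""
    buffer_ms = buffer_samples / sample_rate_hz * 1000.0
    return 2 * buffer_ms + 2 * converter_ms

# 64-sample buffer at 44.1 kHz: about 4.9 ms round trip
print(round(round_trip_latency_ms(64, 44100), 1))   # 4.9
# 512-sample buffer (more realistic with many plug-ins): about 25.2 ms
print(round(round_trip_latency_ms(512, 44100), 1))  # 25.2
```

The arithmetic makes the trade-off plain: the small buffer gives a latency you can live with, but a session heavy with tracks and plug-ins forces the buffer up, and the latency with it.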


But what if I tell you that I have a DAW system that I bought in 1999 that has a latency that I measured at a mere 1.8 milliseconds? That is so low that it really is impossible to detect subjectively.

Where did things go wrong, that I could achieve 1.8 milliseconds in 1999 but only 10 milliseconds today (and that only when the track count and plug-in count are low)?

The answer is DSP. Digital Signal Processing.

My 1999 system is (not was - I still have it and it still works) Pro Tools MixPlus with a couple of Digidesign 888/24 8-channel interfaces.

It consists of Pro Tools software, the 888/24 interface, and the Mix Core PCI card. I also had a Mix DSP card for more processing power.

The key to the low-latency performance is this...

Audio is not handled by the computer's processor. It is handled by dedicated DSP chips on the Mix Core PCI card, and on the optional additional Mix Farm or Mix DSP cards in the system. Audio is handled by specialised audio chips. That's how the latency can be so low.

But time moved on and computer processors became faster. Gradually they could handle more and more audio tasks natively, to the point where DSP was no longer a necessity for smooth, fluent multitrack recording and processing. Audio could be handled reasonably well by the computer's processor.

So many newcomers to DAW recording, from the 2000s onwards, looked at the price of systems that used native software (low) and systems that required DSP cards (high). The choice was obvious - save a lot of money in exchange for a bit of latency.

Zero-latency monitoring

The people who design DAW systems and audio interfaces are clever. Smarter than the average bear probably.

What they realized was that, for a singer to monitor their performance, the audio didn't need to go through analogue-to-digital conversion, the computer's processor, then digital-to-analogue conversion, adding latency at each stage. The audio signal going into the interface could be routed, in the analogue domain, straight to the headphone output and mixed in with the audio of the backing track coming from the DAW.

So as far as the performer was concerned it was back to the analogue days of no latency at all.
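Conceptually, direct monitoring is nothing more than a sum of two signals, performed inside the interface rather than inside the computer. A minimal sketch, with function and gain names that are illustrative rather than any real interface's API:

```python
# Sketch of zero-latency direct monitoring: the interface sums the incoming
# signal straight into the headphone feed alongside the DAW's backing track.
# Function and gain names are illustrative, not any real interface's API.

def monitor_mix(direct_input, daw_playback, direct_gain=0.7, playback_gain=0.8):
    # The direct path never enters the computer, so it carries no buffer
    # latency; only the playback path has made the round trip through the DAW.
    return [direct_gain * d + playback_gain * p
            for d, p in zip(direct_input, daw_playback)]

# A few samples of vocal mic signal mixed with the backing track
headphone_feed = monitor_mix([0.1, 0.2], [0.5, -0.5])
```

The point of the sketch is where the sum happens: because the direct input bypasses the buffers entirely, the singer's own voice arrives in the headphones effectively instantly, while the backing track's latency doesn't matter because it isn't being performed against itself.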

But this brings problems...

Singers generally don't perform well unless they can hear themselves sounding good. And the dry zero-latency signal with no EQ, compression or reverb was far from ideal.

But, as I said, the people who design these systems are clever. So audio interfaces came to be equipped with DSP processing that was purely for monitoring, and purely for the benefit of the performer. So the performer didn't have to suffer the round-trip through the computer - they could hear their performance sounding good with processing done quickly through DSP.

This sounds great, but...

There are two problems here. One is that the performer hears something different from what the engineer hears. This, according to the basic principles of recording practice that every engineer needs to learn, is never a good thing. (I would add, as an aside, that the engineer ideally should always have a pair of headphones handy that is identical to the headphones the performer wears, and carries exactly the same signal at exactly the same level. The engineer can then monitor foldback the way the performer hears it.)

The other problem is that the engineer now has two mixes to take care of. In the good old days (back there again) when production and engineering were separate roles then perhaps this would not be an issue. But now that producers commonly do their own engineering, it's one more task that takes the producer's attention away from the music, which of course is where it should be.

Enter Carbon...

And so the extremely clever engineers at Avid decided to sweep away the problems of zero-latency monitoring, and fiddly DSP not-quite-solutions.

Pro Tools | Carbon, or maybe just Carbon for short, is their new audio interface, priced within the reach of the serious project studio, that provides monitoring with DSP and latency as low as you can get. And full no-fiddliness integration with the DAW.

Avid Pro Tools Carbon audio interface

I'm not going into too much detail about Carbon itself because a) right now you need Pro Tools to get the features listed above and I respect all DAWs, and b) I don't need to reinvent the wheel because you can read about Carbon in intense detail in the December 2020 edition of Sound On Sound. (That link will probably expire at some point, but you'll be able to search on the SOS website.)

As always, it isn't particular examples of equipment or software that I'm mostly interested in, it's the concepts and how they can be applied to get better results.

Bearing that in mind, the key concept here is that Pro Tools, like all DAWs these days, comes with an excellent selection of plug-ins as standard. Previously you would use these in the conventional way for your monitor and cue mixes, and for your final mix. And of course you will still do that.

But... Wait-for-it emphasis here...

These plug-ins can now run in DSP in the Carbon interface.

I'll say that again because it's massive - Standard DAW plug-ins can now run in DSP in the Carbon interface!

So you can monitor-mix to your heart's content and, at the same time, with the click of a green flash (possibly Harry Potter-inspired) button you can give your performer exactly the same sound as you hear in your monitors. The total integration between DAW and DSP makes this trivially easy.

I could say that this is huge, Carbon is huge, DSP is huge. Yes, maybe they are. But what is really huge is that I cannot overstate the importance of your performer hearing things the exact same way that you do or, in reverse, of you hearing everything the exact same way your performer does. This makes for a massively more musical workflow in the studio and, don't forget, it isn't about the equipment and software, it's all about what you can achieve with it. And this step forward in low-latency monitoring is important.


Avid has released an interface with DSP that allows the performer and engineer to hear the same audio, with very low latency. That sentence has 22 words but it's going to make a massive impact in the studio.

P.S. In case you're wondering, Carbon doesn't work with software instruments yet, so there's still latency there. Whether that issue will be ameliorated in the future remains to be seen, but we hope.

Pro Tools Carbon Inside View
