Knowing the main leads and signals involved in the recording process is vital to understanding how recording works. This knowledge will help you capture the best recording possible with the equipment you have, and it will help you troubleshoot and find solutions when things don’t work.
A signal path, or signal chain, is “the order in which processing occurs.” This can include the full recording signal path in a studio, an effects chain within a single track, and so on.
Every studio will have a slightly different signal chain, depending on what equipment is available, but there are some basic similarities between all modern setups.
Essentially, the task of the modern studio is to convert sound vibrations into digital data which can then be manipulated within a DAW before being converted again into sound energy for playback through speakers and headphones.
The diagram below shows an example of a basic studio signal flow. Start at the acoustic guitar and work your way round to the ear…
It is important to note that not every audio interface contains a preamp; some require a separate unit.
You also need to know about effects chains. There is no right or wrong way to use and apply effects in your signal chain, but different methods can alter the tonal qualities of your signals or mix.
There are a number of ways in which effects can be added to a signal path in your DAW.
In your DAW, every track will have its own mixer channel strip, allowing you to apply effects to just that track, rather than the whole mix. Adding an effect to an individual channel like this is known as an insert.
If, however, you want to apply one effect to multiple channels, you can do this via an aux send or by creating a subgroup. If you were doing this in a DAW, you could of course just apply the effect to one channel and then duplicate the settings on every other channel that you want to be affected, but this uses up a lot of processing power and can overload your computer, causing your DAW to crash.
Aux sends and subgroups are very similar as they both allow you to send signals from multiple channels (known as using a bus) to one channel where an effect can be applied to them all at once.
An auxiliary send will allow the signal from your audio channels to be sent to the master channel, as usual. It will also make a copy of the signal from each audio channel, and this copy will be sent to an aux track. The affected signal from the aux track will then be combined with the original signal and sent to the master channel to produce the final stereo output.
A subgroup, on the other hand, will not make a copy of the signal from your audio channels; instead, it reroutes the outputs of those channels to the aux track. The signal will be affected by whatever effect is applied to the aux channel, the output of which will then become the input of the master channel, finally producing a stereo output.
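The routing difference can be sketched in a few lines of Python. The helper names and the toy “reverb” here are purely illustrative; lists of numbers stand in for audio signals.

```python
def reverb(signal, wet=0.5):
    """Stand-in 'effect': just scales the signal so we can see the routing."""
    return [s * wet for s in signal]

def mix(*signals):
    """Sum any number of equal-length signals sample by sample."""
    return [sum(samples) for samples in zip(*signals)]

vocal  = [0.2, 0.4, 0.6]
guitar = [0.1, 0.1, 0.1]

# Aux send: the dry channels still reach the master; a COPY of each
# is summed onto the aux track, effected, and mixed back in.
aux_in     = mix(vocal, guitar)
aux_master = mix(vocal, guitar, reverb(aux_in))

# Subgroup: the channel outputs are REROUTED through the effect, so
# only the effected signal reaches the master - no dry copy survives.
sub_master = reverb(mix(vocal, guitar))
```

Note how the aux-send master still contains the dry vocal and guitar, while the subgroup master contains only the effected signal.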
As well as knowing how to add effects to a signal, you also need to know the order in which effects can be applied. Again, there is no right or wrong, but the order in which you place your effects can alter the tone of the final output.
A common effects chain might go something like this: gain staging, EQ and compression, creative effects, and reverberation.
It’s common to gain stage first so that you can control what level your signal goes into the rest of your plugins or effects units. For more information on gain staging, please see this page on the capture of sound.
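Gain staging boils down to simple level arithmetic. Here is a minimal sketch (the function names and the −18 dBFS target are illustrative choices, not a rule) of finding the gain that brings a quiet take up to a sensible peak level:

```python
import math

def peak_dbfs(samples):
    """Peak level of a signal in dBFS (0 dBFS = full scale of 1.0)."""
    return 20 * math.log10(max(abs(s) for s in samples))

def gain_to_target(samples, target_dbfs=-18.0):
    """Linear gain that brings the signal's peak to the target level."""
    return 10 ** ((target_dbfs - peak_dbfs(samples)) / 20)

signal = [0.02, -0.05, 0.04]          # a quietly recorded take
g = gain_to_target(signal, -18.0)     # boost so the peak hits -18 dBFS
staged = [s * g for s in signal]      # what goes into the next plugin
```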
There is much debate over which one of these effects should be first in your signal path and there is no right or wrong answer. It all depends on how your recorded signal needs to be corrected and what sound/tone you want.
Before discussing why you might order them in one way compared to another, you need to understand how compressing a signal changes not only the dynamic range but also the tone.
When you compress a signal, the loudest component is usually the fundamental, meaning that the compressor will act on certain frequencies more than others, normally lower frequencies, because of how the harmonic series works. This can result in a perceived loss of bass frequencies.
For more information on how compression affects timbre, this page from Pro Audio Files is very informative.
EQ then compression – it can be useful to EQ before you compress because you are removing unwanted frequencies before they can be emphasised by the compressor. A disadvantage of inserting an EQ first, however, is that if you want to go back and change anything in your EQ, it will also change how the compressor works, and you will need to recalibrate it.
Compression then EQ – the main advantage of ordering them this way is that you get more control over the tone because the compressor is not getting in the way of any changes that you make to your EQ.
Subtractive EQ, compression, additive EQ – a common way of avoiding the problems with the other approaches is to remove unwanted frequencies first (subtractive EQ) so that the compressor doesn’t emphasise them, then compress the signal, and finally apply additive EQ so that the compressor never acts on the frequencies you want boosted.
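The reason the order matters is that EQ and compression don’t commute. This toy sketch (a one-number-per-sample “EQ boost” and a crude threshold compressor, both purely illustrative) shows that swapping the order produces a different result:

```python
def eq_boost(s, gain=2.0):
    """Toy 'additive EQ': boost the whole signal by a fixed gain."""
    return [x * gain for x in s]

def compress(s, threshold=0.5, ratio=4.0):
    """Toy compressor: reduce anything above the threshold by the ratio."""
    out = []
    for x in s:
        if abs(x) > threshold:
            over = abs(x) - threshold
            x = (threshold + over / ratio) * (1 if x > 0 else -1)
        out.append(x)
    return out

take = [0.2, 0.4, 0.6]
eq_then_comp = compress(eq_boost(take))   # the boost gets squashed back down
comp_then_eq = eq_boost(compress(take))   # the boost survives untouched
```

With EQ first, the compressor pulls the boosted peaks back down; with the compressor first, the EQ boost passes through unchanged, which is exactly the “more control over the tone” trade-off described above.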
For more information on compression, please see this page on dynamic processing.
For more information on EQ, please see this page.
Creative effects include chorus, flanger, phaser and tremolo. Again, there are really no rules as to where you place things in your effects chain, but the logic of having modulation effects here is that you have done the corrective work (EQ and compression) to get the right tone, and now you can add the creative element.
More information on effects can be found here.
The whole purpose of reverb is to put a sound in a space. Unless you want an experimental reverb, it is common practice to put reverb at the end of your effects chain so that it is affecting all of the signal, wet (processed) and dry (original).
Analogue cables carry analogue signals using electrical currents.
Analogue signals vary in voltage to represent the propagation of sound waves captured by a microphone.
An XLR cable is “a connector commonly found on microphones and other balanced signals.” XLR cables are used to connect microphones and DI boxes to other equipment, such as audio interfaces and mixing desks. It is a balanced cable and contains three pins: ground, positive and negative. The male end of the cable (with the pins) connects to input ports and the female end of the cable (with the holes) connects to output ports.
TRS (balanced) cables or TS (unbalanced) cables are used to connect guitars, keyboards and other instruments to recording equipment. They use a jack type connector which connects to input and output ports. A TRS cable, like the XLR cable, has three points of contact: ground (sleeve), positive (tip) and negative (ring). A TS cable, however, only has tip (positive) and sleeve (ground) connections.
In every analogue audio cable you come across, there will be a signal core and a ground core inside. The purpose of the ground cable is to “shield the cable from unwanted noise (hum from lights, TV interference etc.) and it does a pretty good job – but because it is a metal cable it can also act as an aerial and introduce noise into the recording itself."
An unbalanced cable “only has the signal cable and the connection to the ground.” This means that it does nothing to prevent the ground core acting as an aerial and introducing unwanted noise on to the output signal. One way in which you can reduce interference, apart from using a balanced cable, is to use shorter cables.
A balanced cable “has two signals in inversion to one another to reduce noise when put back into phase.” In other words, there are two signal cores (positive and negative) and a ground. The two signal cores carry the same audio signal along the cable, but one is inverted so that the two are out of phase. Once the signal reaches the other end of the cable, the phase is inverted again. The reason for this is that, if at any point there is any interference, the unwanted noise affects both signals equally. When the phase is inverted at the end of the cable, the signals are back in phase but the interference is out of phase with itself and therefore cancels out.
✔️Obviously, the main advantage of balanced cables over unbalanced cables is that interference is less of an issue. They also have a better signal-to-noise ratio than unbalanced cables.
✖️The only disadvantage of balanced cables compared to unbalanced cables is the price; unbalanced cables are a lot cheaper.
Impedance is “the amount of opposition that a circuit applies to a current when a voltage is applied to it.” It is, loosely speaking, the resistance a circuit presents to an alternating current.
It is important, when connecting the output of one device to the input of another, that the impedances are compatible. High-impedance devices include instruments (keyboards, guitars etc.) and low-impedance devices include microphones and mixers.
In order to plug a device with a high impedance output (eg. electric guitar) into a low impedance input (eg. mixer), a DI box is used. A DI box converts the high-impedance, unbalanced signal to a low-impedance, balanced signal. For more information on DI, please see this page on software and hardware.
To connect a low impedance output (eg. microphone) to a high impedance input (eg. guitar amp), you will need a converter. This will take the signal from a balanced, XLR cable and convert it to an unbalanced, high-impedance output (usually a 1/4” jack).
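One way to see why a high-impedance output into a low-impedance input is a problem is the voltage-divider model: the load impedance and the source impedance split the voltage between them. The component values below are rough, illustrative ballpark figures, not specifications for any particular device.

```python
def level_at_input(v_source, z_source, z_load):
    """Voltage-divider estimate of the level arriving at an input."""
    return v_source * z_load / (z_source + z_load)

# Guitar pickup (~10 kohm source) into a low-impedance mic input (~1.5 kohm):
bad = level_at_input(1.0, 10_000, 1_500)       # most of the signal is lost

# Same pickup into a high-impedance instrument/DI input (~1 Mohm):
good = level_at_input(1.0, 10_000, 1_000_000)  # nearly all of it arrives
```

This is why a guitar plugged straight into a mic input sounds thin and quiet, and why a DI box (or a dedicated instrument input) solves the problem.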
There are three different signal levels (referring to voltage) that you need to know:
Mic level: A balanced audio signal that has the lowest level of the three. Generated by microphones. A preamp is required to boost the signal to line level.
Instrument level: An unbalanced audio signal that sits between mic and line level in terms of voltage. Generated by instruments such as electric guitars and keyboards. Requires a preamp to be brought up to line level.
Line level: A balanced audio signal that has the highest level of all three levels. Used to transmit analog sound between audio components.
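To put rough numbers on these levels: analogue levels are often quoted in dBu, where 0 dBu is defined as 0.775 V RMS. Professional line level is +4 dBu, and mic level is typically somewhere around −60 to −40 dBu. A quick conversion sketch:

```python
def dbu_to_volts(dbu):
    """Convert a dBu level to RMS volts; 0 dBu is defined as 0.775 V RMS."""
    return 0.775 * 10 ** (dbu / 20)

mic_level  = dbu_to_volts(-60)   # a typical mic-level signal: under a millivolt
line_level = dbu_to_volts(+4)    # professional line level: roughly 1.23 V
```

The thousand-fold voltage difference between the two is exactly why a mic signal needs a preamp before it can sit alongside line-level sources.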
Digital cables carry digital signals as long strings of binary code. They allow computers and DAWs to understand and process analogue sound waves. For more information on analogue and digital signals and converting one into the other, please see this page (analogue to digital conversion section) which explains sample rate and bit depth.
Digital signals use binary to represent the amplitude and frequency of sound waves.
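A concrete way to see this is quantisation: each analogue sample is mapped to the nearest integer code the bit depth allows, and that integer is what travels down the cable as binary. A minimal sketch (the function name is illustrative):

```python
def quantize(sample, bit_depth=16):
    """Map a sample in [-1.0, 1.0] to the nearest signed integer code."""
    levels = 2 ** (bit_depth - 1) - 1    # e.g. 32767 for 16-bit audio
    return round(sample * levels)

code = quantize(0.5)        # a half-scale sample as a 16-bit integer
print(format(code, '016b')) # the binary string actually transmitted
```

Higher bit depths simply provide more integer steps, which is why 24-bit audio captures quieter detail than 16-bit.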
The first digital audio cable you need to know about is the MIDI cable. Unsurprisingly, the MIDI cable carries digital MIDI data/control information from MIDI instruments to computers/DAWs. The original 5-pin MIDI connector is much less common nowadays and has largely been replaced by USB connectors.
More information on MIDI can be found on this page on sequencing.
You’ve probably heard of and used USB cables and ports many times. USB stands for universal serial bus and it is used to connect many types of hardware devices together. In the process of audio recording, USB cables are used to carry digital audio information and MIDI information.
There are a few different types and versions of USB. Type refers to the shape of the physical connector, which is labelled with a letter (A, B or C). The version refers to the technology that allows the transfer of information; each new version offers faster transfer speeds. There are also mini and micro USB connectors to be aware of.
You don’t necessarily need to know these different types but it can be quite useful knowledge to have.
FireWire cables aren’t all that common today. The cable was popularised in the 1990s but has always been in competition with USB. Despite the fact that FireWire could offer faster transfer speeds for a long time, USB was always the preferred option. The main benefit of using FireWire-compatible recording hardware, especially in larger studios, is that it has far lower latency than USB 2.0 and some USB 3.0 connections.
There are two types of FireWire: 400 and 800. The names refer to their speeds in Mbps (megabits per second).
Thunderbolt is a cable/protocol designed by Intel and Apple. The original Thunderbolt cables (versions 1 and 2) used a Mini DisplayPort connector, with transfer speeds of 10 Gbps and 20 Gbps respectively. The more recent versions (Thunderbolt 3 and 4) use a USB-C connector and have a transfer speed of 40 Gbps.