• Open Topic: Frequency Balance over Time

    One topic that interests me is how drastically the frequency balance of mixes has changed over time. Of course, this is dependent on genre and not applicable to all music; however, there is a noticeable trend of mixes becoming less mid-heavy and much brighter.

    One reason for this could be the switch from analogue to digital gear, as DAWs have become the most common way to mix music. This affects the frequency balance because tape generally carves out some high-end information, as do many of the most commonly used analogue processors and pre-amps. When a track is recorded DI and mixed digitally, none of that high-end loss takes place, meaning the sound will naturally be brighter.

    Another reason could be related to the previously discussed ‘Loudness Wars’. Part of the competition to make mixes sound louder involves the perceived energy of the song, which the low- and high-end information has a big effect on. Boosting or adding high-end information through saturation is often referred to as adding ‘excitement’ for this reason. I believe this has played a part in the mids becoming less dominant in modern music.

    I do not believe there is anything wrong with this change, though I find it interesting to note as someone whose own work tends to put a lot of focus on the low mids and bass, as to my ear that often creates a more natural, smoother sound. I also find that brighter mixes usually work better for the types of music I make less of, which plays a role in this too.

  • Andrew Sarlo

    Andrew Sarlo is a mixing engineer best known for his work with Big Thief, Dijon and Bon Iver. He works across a variety of genres, but the most interesting of his work to me is his folk/country mixes, particularly on ‘Absolutely’ by Dijon and ‘Dragon New Warm Mountain I Believe in You’ by Big Thief. The characteristic of these albums that sticks out to me is the roominess of the sound, which was achieved through a variety of techniques.

    ‘Absolutely’ was recorded with mostly omnidirectional microphones in an untreated room, with very few studio overdubs (Handley, 2022). This meant that instead of each microphone purely picking up the instrument it was assigned to, it also picked up the space and the instruments around it and how they all interact with each other. This is a step away from most modern engineering, where recordings are made cleanly in treated rooms designed to capture as little of the space as possible, with the exception of things like room microphones.

    During the mixing stage of this album, very little processing was done beyond a hefty use of graphic EQ to attenuate the mids and some slight compression.

    A big part of Andrew Sarlo’s philosophy is a “belief in the chemistry of people working together” (Sommer, 2019) – the idea that a perfectly clean recording will never have the same energy and soul as an imperfectly captured yet authentic moment in time. This can be heard throughout all of his work, including the more ‘professionally recorded’ projects – there is a sense of every decision being made out of emotion and feel rather than how something is ‘supposed’ to sound.

    This philosophy is inspiring to me as it puts the focus on the joy and passion of music, and sets the goal of recording and mixing as capturing that feeling as clearly as possible, as opposed to capturing the instruments as clearly as possible. Using this philosophy in my own future productions will allow me to keep my workflow feeling loose and authentic, with an emphasis on the emotion of the music.

    References:

    Handley, J. (2022) ‘Dijon Talks “Absolutely” and His Omnidirectional Process’. Reverb. Available at: https://reverb.com/news/dijon-interview (Accessed: 1st September 2025).

    Sommer, R. (2019) ‘Andrew Sarlo: Sanctuary of Possibility’. Tape Op, Issue 134. Available at: https://tapeop.com/interviews/134/andrew-sarlo (Accessed: 1st September 2025).

  • Flangers

    One effect that I have not talked about yet is the flanger, which I use a lot along with other modulation effects such as chorus and phasers.

    Flangers work by duplicating a signal and delaying the copy by a very short amount, usually under 15ms. As the two waveforms play alongside each other, they drift in and out of phase, creating a wobbly, moving sound. Common features on flangers include feedback, in which the output is fed back into the flanger’s input, and an LFO, which typically modulates the delay time.
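
    To make that concrete, here is a minimal sketch of the process in Python/NumPy – a toy illustration rather than how any particular plugin is implemented, with parameter names and default values that are my own assumptions:

    ```python
    import numpy as np

    def flanger(x, fs, base_ms=2.0, depth_ms=2.0, rate_hz=0.25,
                feedback=0.5, mix=0.5):
        """Toy flanger: mix the input (x, a mono float array) with a copy
        delayed by a few milliseconds while a slow LFO sweeps that delay."""
        n = len(x)
        t = np.arange(n) / fs
        # Per-sample delay in samples, swept between base_ms and base_ms + depth_ms
        delay = (base_ms + depth_ms * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t))) * fs / 1000.0
        buf = np.zeros(n)  # delay-line contents: input plus feedback
        y = np.zeros(n)
        for i in range(n):
            j = i - delay[i]
            if j >= 0:
                j0 = int(j)            # fractional delay via linear interpolation
                frac = j - j0
                delayed = (1.0 - frac) * buf[j0] + frac * buf[j0 + 1]
            else:
                delayed = 0.0
            buf[i] = x[i] + feedback * delayed  # feedback path exaggerates the sweep
            y[i] = (1.0 - mix) * x[i] + mix * delayed
        return y
    ```

    A high-pass filter on the wet signal (like the 1kHz one I describe below) is left out to keep the sketch short.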

    Here is an example of a guitar loop I recorded with the flanger bypassed:

    Here is what that sounds like with the flanger enabled:

    To get this effect, I used MetaFlanger from Waves with these settings:

    The delay is set very short, with a slow-moving LFO modulating it. I have the feedback increased to exaggerate the sweeping effect, and the high-pass filter set at 1kHz so the lower mids aren’t affected, which avoids muddiness.

    Some artists I listen to a lot who make use of flangers as a core part of their sound include Prince, Tame Impala and D’Angelo. While Prince mostly used them on guitar, an artist like D’Angelo uses them more often on vocals. Tame Impala, on the other hand, will sometimes put a flanger across the entire mix – this shows the variety of ways flangers can be used.

    The core of why I love flangers is the sense of movement they create. It makes them great for transitions, watery sounds and anything that you don’t want to feel rigid.

  • The History of Surround Sound

    The history of surround sound can be traced back to Disney’s ‘Fantasia’ in 1940, for which multiple speakers were placed around the theatre, with some select theatres having up to 54 speakers for a single viewing.

    It was not until 1977 that the next major leap in surround sound technology was made, with the release of ‘Star Wars’, which utilised the new Dolby Stereo format consisting of four channels: left, centre, right and surround.

    Surround sound as we know it would not come into play until the 1990s, however, with the release of 5.1 surround sound. This involved left, centre and right channels in front of the audience, left and right ‘surround’ speakers behind them, and an additional subwoofer for the low end.

    This is similar to 7.1 surround sound, released in 2010, which uses the same configuration with an extra surround speaker on each side.

    Dolby Atmos was released in 2012, adding speakers above the audience to create a sense of vertical space. This technology is now increasingly being used for music due to its unique ability to deliver immersive, complex mixes that treat each element of a song as an ‘object’ in 3D space.

    I believe that, if incorporated into the production process, this technology creates room for a lot of experimentation: songs with immense depth, as well as unique psychedelic effects that utilise the space to its full potential.

    Resources:

    How Disney’s Fantasound Brought Surround Sound to Hollywood in 1940

    Dolby Stereo and Surround Sound: The Evolution of Immersive Audio in the Film Industry

  • What is mastering?

    Once a song has been mixed, a mastering engineer takes the final bounce of the track and applies whatever processing they deem necessary to put the final touches on it. This tends to include EQ, compression and, especially, limiting.

    An easy way to differentiate mixing from mastering is that mixing involves processing all the individual elements of a song, while mastering involves processing the entire track as one.

    Typically, mastering is a less elaborate process than mixing, as you rarely want to overdo it: at the mastering stage, any extreme processing can fundamentally change the track in major ways.

    In the video below (Mix with the Masters, 2022), engineer Chris Gehringer masters Lorde’s ‘Solar Power’ (2021).

    In this video, we can see that he does very minimal processing: a slight boost around 100Hz and about 2.5dB of limiting. While it’s important to keep in mind that the song was mixed by a professional mixing engineer, which makes the job a lot easier at the mastering stage, it still serves as an example of how delicate the mastering process can be.
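
    To make ‘about 2.5dB of limiting’ concrete, here is a toy peak limiter sketch in Python/NumPy – not Gehringer’s actual chain or settings, just a simple instant-attack limiter in which the ceiling and release values are my own assumptions:

    ```python
    import numpy as np

    def peak_limit(x, fs=44100, ceiling_db=-1.0, release_ms=50.0):
        """Toy peak limiter: instant attack, exponential release.
        Gain drops just enough to keep the signal under the ceiling."""
        ceiling = 10.0 ** (ceiling_db / 20.0)
        rel = np.exp(-1.0 / (release_ms * 1e-3 * fs))  # release smoothing coefficient
        gain = 1.0
        y = np.zeros_like(x)
        for i, s in enumerate(x):
            peak = abs(s)
            target = min(1.0, ceiling / peak) if peak > 0 else 1.0
            # Clamp immediately when over the ceiling, recover slowly afterwards
            gain = target if target < gain else rel * gain + (1 - rel) * target
            y[i] = s * gain
        return y
    ```

    ‘2.5dB of limiting’ then simply means the gain dips to about -2.5dB (roughly 0.75x) on the loudest peaks.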

    While it can seem that mastering is an easy process due to how little is sometimes involved, this is misleading: the delicate nature of mastering demands careful listening and decision-making that can be overwhelming for anyone not accustomed to it.

    This is why AI mastering tools such as iZotope Ozone have gained so much popularity recently. These plugins analyse the frequency spectrum, dynamic content and stereo image of a track and make adjustments based on industry standards, offering a variety of presets and the ability to go in and change anything you don’t like. Increasingly, this looks to be a big part of the future of mastering.
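
    As a greatly simplified sketch of the kind of analysis such a tool might begin with, here is some Python/NumPy that measures those three things – spectral balance, dynamics and stereo image. The band edges and metrics are my own assumptions, not how Ozone actually works:

    ```python
    import numpy as np

    def analyse(left, right, fs=44100):
        """Rough track analysis: spectral balance, dynamics, stereo image."""
        mono = 0.5 * (left + right)

        # Spectral balance: share of energy in low / mid / high bands (assumed edges)
        spectrum = np.abs(np.fft.rfft(mono)) ** 2
        freqs = np.fft.rfftfreq(len(mono), 1.0 / fs)
        bands = {"low (<250 Hz)": (0, 250),
                 "mid (250 Hz - 4 kHz)": (250, 4000),
                 "high (>4 kHz)": (4000, fs / 2)}
        total = spectrum.sum()
        balance = {name: spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
                   for name, (lo, hi) in bands.items()}

        # Dynamic content: crest factor (peak-to-RMS ratio) in dB
        rms = np.sqrt(np.mean(mono ** 2))
        crest_db = 20 * np.log10(np.max(np.abs(mono)) / rms)

        # Stereo image: correlation between channels (1 = mono, 0 = decorrelated)
        correlation = np.corrcoef(left, right)[0, 1]

        return balance, crest_db, correlation
    ```

    A real tool would compare numbers like these against reference targets before suggesting EQ, compression and width adjustments.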

    On the one hand, this has given those without the resources or experience the ability to make more professional-sounding music that can compete in an industry with an immense barrier to entry. In my books, that is always a positive, as I firmly believe music should be for everyone and not just those with the resources and finances to create without restrictions.

    On the other hand, this poses a threat to the art form of mastering, as AI cannot make the creative decisions that are unique to an individual song and instead attempts to replicate what is already out there. AI mastering also most threatens the mastering engineers who are not already well established and financially successful, as the biggest market for these tools is those who would otherwise be paying lower prices for up-and-coming engineers.

    Many mixing engineers take issue with AI mastering for these reasons, with some (Wright, 2025) arguing that AI will “never replace the human touch” and that “passion isn’t something that can be faked or emulated”.

    References:

    Lorde (2021) ‘Solar Power’, Solar Power. Available at: https://open.spotify.com/track/7s2kWabRM60W9I61HpKg8C?si=5c055a963a654420 (Accessed: 31st August 2025).

    Mix with the Masters (2022) Mastering ‘Solar Power’ by Lorde with Chris Gehringer. May 12th. Available at: https://www.youtube.com/watch?v=sDi9Bvz2wgE (Accessed: 31st August 2025).

    Wright, A. (2025) ‘For the Love of Sound: Why You Should Choose Human Mastering Over AI Tools’, AlexanderWright, 15th January. Available at: https://alexanderwright.com/blog/human-mastering-over-ai.

  • The Loudness Wars

    The term ‘The Loudness Wars’ refers to the idea that, over the years, the standard dynamic range of music has decreased in favour of more compressed mixes and masters that allow for a more consistent volume when listening to music on the radio or in a playlist, with engineers at ‘war’ over who can make the loudest-sounding track. One such engineer is Andrew Scheps, who claimed that “the loudness war is over because I won” (Griffith, 2023) in reference to his controversial engineering on Metallica’s ‘Death Magnetic’ (2008).

    This phenomenon is made clear by looking at the remastered versions of songs that were released before the ‘loudness wars’ took over. For example, the waveform of the original 1991 master of ‘Smells Like Teen Spirit’ (Nirvana, 1991) looks like this (Kuokka77, 2012):

    Meanwhile, the 2011 remaster’s waveform looks like this:

    As we can see, the original master has a lot more dynamic variation, with the peaks of the drums poking out from the rest of the mix, whereas the 2011 remaster has a much more consistent volume. But how does this translate to the sound?

    Were the original master to come on in a playlist, it might sound a little quieter and less energetic; once turned up, however, the drums would have much more impact and the song would have more movement and rhythmic force. In contrast, if the 2011 version came on, it would sound louder and more energetic off the bat, but some of the movement and feel of the song may be lost in the process.

    With this information, it might seem clear that the superior option is to maintain the contrast of volume between the instruments without over-compressing the mix, and to simply turn the volume up when necessary. However, I believe it is a little more complex than that, as just looking at a waveform doesn’t tell the whole story.

    For example, engineers have found ways to account for this problem that wouldn’t necessarily show up in a waveform, such as the creative use of sidechain compression, which can maintain the punch of drums by ducking the volume of other elements in the mix at the drums’ peaks (see the sketch below). I also believe that there is no inherently better way to do things in music; each method simply creates a different sound that can achieve a different feeling. If the artistic vision of the artist is to have the instrumental feel more like a compressed wall of noise that maintains high energy without any particular element standing out too much, there is nothing inherently wrong with that, and that sound can absolutely work and sound great when used effectively.
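
    Here is a toy illustration of that sidechain trick in Python/NumPy – my own sketch, not any engineer’s actual technique or settings: follow the level of the drum bus with an envelope and turn the rest of the mix down whenever the drums peak.

    ```python
    import numpy as np

    def sidechain_duck(other, drums, fs=44100, threshold=0.5,
                       amount=0.5, release_ms=120.0):
        """Duck `other` whenever the drum bus exceeds `threshold`,
        so the drum transients keep their punch in a dense mix."""
        rel = np.exp(-1.0 / (release_ms * 1e-3 * fs))
        env = 0.0
        y = np.zeros_like(other)
        for i in range(len(other)):
            level = abs(drums[i])
            # Envelope follower: jump up instantly, fall back over the release time
            env = level if level > env else rel * env
            over = max(0.0, env - threshold)
            y[i] = other[i] * (1.0 - amount * min(1.0, over / (1.0 - threshold)))
        return y

    # e.g. mix = sidechain_duck(instruments, drum_bus) + drum_bus
    ```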

    That being said, I believe the problem lies in the fact that artists who would otherwise have preferred a more dynamic mix – one that places emphasis on the emotions elicited by the contrast of volume between each element – are now unable to fully realise that vision, in an effort to compete with the market and the way music is listened to in the streaming era. I feel that we should create space for both forms of mixing and mastering to exist, shaped purely by the creative vision of the artist and not the expectations of the industry.

    I personally like to maintain more dynamic range in my own music than is currently standard. However, with an understanding of the industry and the loudness wars, if I were to mix and master a pop record for somebody else, I would take a different approach and match the loudness of most modern songs.

    References:

    Griffith, D. (2023) ‘Andrew Scheps talks mixing, production and the legacy of Metallica’s Death Magnetic: “My line is that the loudness war is over because I won… and that was the record that did it”’, MusicRadar. Available at: https://www.musicradar.com/news/andrew-scheps-mixing-metallica-adele-chili-peppers (Accessed: 30th August 2025).

    Kuokka77 (2012) Nirvana’s Smells Like Teen Spirit – 1991 vs. 2011 (loudness war – gain matched). August 9. Available at: https://www.youtube.com/watch?v=mf8DET7ILR4 (Accessed: 30th August 2025).

    Metallica (2008) Death Magnetic [CD]. Warner Bros. Records.

    Nirvana (1991) ‘Smells Like Teen Spirit’, Nevermind [CD]. DGC Records.

  • The History of Audio Equalization

    Audio equalization first came into play in the 1920s for radio broadcasting, in the form of large, chunky units with set frequency bands that could be adjusted to the broadcaster’s liking. At the time, the RCA 8B equalizer was the standard, and it ended up being utilised by mixing engineers and music producers over the following couple of decades.

    In 1955, the Pultec EQP-1A was released: a unit that offered a similar effect, though with increased flexibility and a unique tone and subtle distortion that came from the amplifier in the system. This piece of equipment is still used today for its warm, musical sound and has been emulated by many plugin companies such as Waves and UAD (the UAD Pultec emulations are some of my most frequently used plugins, in fact).

    From the 50s through the 70s, it became common practice for equalizers to be built into mixing desks, marking the start of an entirely new workflow that is still dominant in studio settings today. This ushered in the era of the Neve and SSL equalizers that are also commonly sought after today for their unique tones and characteristics.

    The next big leap in EQ technology came in the 70s, when parametric EQs came into fashion. These equalizers allowed frequency bands to be adjusted with previously unfathomable precision, allowing for surgical corrective work as well as creative usage. Parametric EQ is still an extremely common form of equalization for the flexibility it provides.
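
    That precision comes down to three controls per band: centre frequency, gain and bandwidth (Q). Here is a minimal single-band sketch in Python/NumPy using the widely published RBJ ‘Audio EQ Cookbook’ peaking-filter coefficients – the function and its defaults are my own illustration:

    ```python
    import numpy as np

    def peaking_eq(x, fs, f0, gain_db, q=1.0):
        """One band of parametric EQ: boost or cut `gain_db` dB at centre
        frequency `f0`, with bandwidth set by `q` (higher q = narrower)."""
        a_gain = 10.0 ** (gain_db / 40.0)
        w0 = 2.0 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2.0 * q)

        # RBJ Audio EQ Cookbook peaking-filter coefficients
        b0, b1, b2 = 1.0 + alpha * a_gain, -2.0 * np.cos(w0), 1.0 - alpha * a_gain
        a0, a1, a2 = 1.0 + alpha / a_gain, -2.0 * np.cos(w0), 1.0 - alpha / a_gain

        # Direct form I biquad
        y = np.zeros_like(x)
        x1 = x2 = y1 = y2 = 0.0
        for i, s in enumerate(x):
            out = (b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
            x2, x1 = x1, s
            y2, y1 = y1, out
            y[i] = out
        return y

    # e.g. a narrow surgical cut of a resonance poking out at 310 Hz:
    # cleaned = peaking_eq(audio, 44100, f0=310, gain_db=-6.0, q=8.0)
    ```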

    In the 1980s, digital equalization arrived as a new form of technology in the studio, both more accessible to a wider audience and more flexible and clean than its predecessors. Today, digital EQ is the most common form of equalization, as its accessibility has only increased over time and it is capable of recreating many different styles of EQ.

    That being said, the older forms of equalization never died out, as each era and unit has its own unique characteristics that keep it sought after to this day, with many mixing engineers either paying incredible amounts for older gear or using VST emulations of it in the DAW.

    I feel that understanding the history of EQ, and what made each era and unit special, has helped me tailor the types of EQ I use to the sound I want for a song. For example, using the Pultec EQ on the low end gives me a tone that I love compared to a less characterful digital EQ such as FabFilter’s Pro-Q 3; however, if I were looking to attenuate a specific frequency that was poking out, the Pro-Q 3 would always be my go-to due to its capacity for surgical equalization.

    Sources:

    https://blackroosteraudio.com/en/blogreader/a-brief-history-of-equalization

    https://vintageking.com/blog/history-of-eq/?srsltid=AfmBOopvBdI8oTfsZtYw8uI07UMRfkEjMmPZRcMNaFaKYqaKSNRL_SIg

    https://www.waves.com/9-eq-types-explained

  • The History of Stereo Sound

    Stereo sound can be traced back to developments made by the EMI engineer Alan Dower Blumlein in the 20th century. He patented his stereo sound technology in 1931; it encoded two channels of amplitude variation in a single record groove, one on each groove wall, as opposed to the previously dominant technology, in which the groove carried only a mono signal.

    Stereo sound did not rise to become the dominant mode of audio production and listening until the 60s, when the technology became more accessible to the general public. This process was encouraged by stereo becoming standard in film, television and music, as well as by the emerging popularity of other technologies such as tape machines and more elaborate mixing consoles and effects, which encouraged further experimentation with stereo sound.

    By the 70s and 80s, stereo was the standard for all devices, making space for further experimentation with what stereo sound could be. One such attempt was the quadraphonic sound system, which involved four speakers in the corners of a space; however, it did not take off due to its high cost and complex usage. One notable use of this technology was Pink Floyd’s ‘Dark Side of the Moon’, which was mixed in this format, and the band performed live at Pompeii with this kind of setup.

    This technology did not go to waste, however, as it went on to influence the creation of the surround sound format as we know it, which is used predominantly in film but can also be found in some more expensive home setups.

    Currently, the emergence of spatial audio seems to me to be the next step forward in stereo sound, and I am excited to see the technology become more accessible and fleshed out. In the future, I want to experiment more with different ways to mix and listen to music, whether through quadraphonic systems like Pink Floyd’s, spatial audio or any other technologies that emerge.

    Sources:

    https://www.abbeyroad.com/news/the-history-of-recorded-music-has-its-roots-firmly-planted-at-no-3-abbey-road-2596

    https://www.emiarchivetrust.org/alan-blumlein-and-the-invention-of-stereo/

  • Visual Representation of the Stereo Imaging in ‘Africa’ by D’Angelo

    Pictured is a visual representation of the stereo imaging for ‘Africa’ by D’Angelo (including only the main instruments, not sounds that are present for only short moments of the track).

    One interesting element of this song is that the vocals sit further back in the mix than the drums and percussion, which is atypical for most commercial mixes. This is the case with many D’Angelo tracks, as the rhythm tends to be in the driver’s seat in his music while the vocals serve more as an instrument than as the centre of the track.

    While it’s important to rely more on your ears than your eyes while mixing, visual representations such as these can help in conceptualising depth and space in mixes. Considering where each instrument sits in the stereo field and how it interacts with the space around it is a core element of mixing, whether you are attempting to simulate the realism of a tight room or create ethereal soundscapes.

    While creating a visual representation of every song you mix and every reference track you use may be unrealistic, taking the concept behind it into your thought process when mixing can help to create more immersive listening experiences. For example, picturing the instruments as actually being played in a room – what that room looks, feels and sounds like, and who and where you are in it – can help to create a sense of depth that is often forgotten. I believe that mixing in the box can make engineers lose sight of this, as staring at a flat, rigid DAW for hours can prevent you from hearing the instruments as part of a room, as opposed to the workflow of using analogue gear, which has you engaging more with the space you are in.

  • Reference Tracks

    In mixing, a reference track is a song that shares similar qualities with the track being mixed, which the engineer can use to assist them in the mixing process.

    One reason reference tracks are used is that having an industry-standard point of comparison for the song being mixed can help to inform your decision-making and allow you to assess where your mix may be falling short.

    Another reason is that a reference track allows you to refresh your ears: if you have been working on a mix for an extended period of time, you can get used to the sound of the demo or work in progress and not notice that your mix is, for example, much darker or brighter than you had originally intended.

    Additionally, sometimes engineers will use the demo of the song being mixed as a reference track to stop themselves from straying too far from the creative vision of the artist.

    I make use of reference tracks when looking to be inspired by people who make bold mixing decisions, which can influence the way I think about the process and the risks I’m willing to take. One such example is Dijon, especially on his album ‘Absolutely’ and single ‘coogie’.

    I also use reference tracks when there are songs with specific elements that I love, such as the drum tone and the separation between the elements of the mix on ‘Africa’ and the rest of the ‘Voodoo’ album by D’Angelo.

    I similarly use reference tracks when a song captures a specific emotion that I wish to convey in my own mix, such as the liberated and angelic feeling of ‘Set Your Spirit Free’ by Sault.