Sound Design

Sound design is a key aspect of music creation. From sculpting guitar tones to synthesizing new and interesting synth sounds, learning this art can take your artistry to another level.

Learning how to design sounds for music is a deep rabbit hole with a lot of moving parts. But with a little practice and experimentation, you can get surprisingly far.

If you want to learn about sound design, start with the basics we outline below.

Sound Design Basics

Sound design is all about understanding waveforms and harmonics (read more about harmonics).

They are the building blocks of any sound we create. Oscillators generate these waveforms, producing the raw tones that are later shaped into different textures and dynamics by filters, envelopes and effects.

It also takes understanding the nuances of how specific sounds… well… sound.

Once you master these parts, you can precisely craft any type of sound you want to use in your music.

Waveforms

Waveforms are the basic building blocks of sound design. They determine the initial tone and character of a sound before any further processing.

There are a handful of fundamental waveforms that form the basis of most synthesized sound:

  • Sine Wave: Pure, single frequency, smooth and mellow sound.
  • Square Wave: Rich in odd harmonics, sounds hollow and can be harsh.
  • Triangle Wave: Also contains only odd harmonics, but much weaker ones, giving a softer and more rounded sound than a square wave.
  • Sawtooth Wave: Contains all harmonics, bright and buzzy sound.
  • Pulse Wave: Similar to a square wave but with an adjustable width that alters the tone, sounds thin and nasal.

Waveform Shapes

And there are more complex types that go beyond the basic shapes:

  • Noise Wave: Contains all frequencies at once, used for sound effects like wind or static.
  • Wavetable: Series of different waveforms that can be scanned or morphed through, very versatile.
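To make these shapes concrete, here’s a minimal sketch in Python (using NumPy, with naive non-band-limited math rather than the anti-aliased oscillators real synths use) that generates any of the fundamental waveforms plus noise:

    import numpy as np

    SAMPLE_RATE = 44100  # samples per second

    def waveform(shape, freq, duration=1.0, sr=SAMPLE_RATE):
        """Generate a basic (non-band-limited) waveform at a given frequency."""
        t = np.arange(int(sr * duration)) / sr
        phase = (freq * t) % 1.0  # position within each cycle, from 0 to 1
        if shape == "sine":
            return np.sin(2 * np.pi * phase)
        if shape == "square":
            return np.where(phase < 0.5, 1.0, -1.0)
        if shape == "triangle":
            return 4 * np.abs(phase - 0.5) - 1.0
        if shape == "sawtooth":
            return 2 * phase - 1.0
        if shape == "noise":
            return np.random.uniform(-1.0, 1.0, t.size)
        raise ValueError(f"unknown shape: {shape}")

    saw = waveform("sawtooth", 220)  # one second of a bright, buzzy A3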

Oscillators, Filters and Envelopes

Oscillators are like the pulse of sound design. They create the basic sound waves that are the start of all synth sounds. Multiple oscillators can be combined or modulated to create more complex sounds.

In hardware analog synths, oscillators generate waves using electrical circuits built from physical components. In virtual (and digital) synths, the oscillators use mathematical models to generate the waves.

Some basic oscillator controls normally include:

  • Waveform Selector: This control lets you choose the shape of the waveform the oscillator produces. Common options include sine, square, triangle, and sawtooth waves, each offering a unique tone.
  • Pitch Control: This control allows you to adjust the frequency of the oscillator, thereby changing the pitch of the sound produced.
  • Detune Control: This control allows you to slightly adjust the pitch of the oscillator, creating a thicker, richer sound when used in tandem with another oscillator.
  • Pulse Width Control: This control is used to adjust the width of the pulses in a pulse or square wave. Changing the pulse width affects the timbre of the sound.
  • Amplitude Control: This control allows you to adjust the volume of the oscillator.
  • Noise Generator: This control adds a noise waveform to the synth sound, which can be used to create percussive sounds or simulate natural sounds like wind or rain.
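As a rough illustration of how those controls map onto parameters, here’s a hypothetical two-oscillator patch in Python/NumPy – the names and values are ours for the example, not any particular synth’s:

    import numpy as np

    SAMPLE_RATE = 44100

    def pulse_osc(freq, width=0.5, amp=1.0, duration=1.0, sr=SAMPLE_RATE):
        """Pulse oscillator: `width` is the fraction of each cycle spent high."""
        t = np.arange(int(sr * duration)) / sr
        phase = (freq * t) % 1.0
        return amp * np.where(phase < width, 1.0, -1.0)

    # Detune: run a second oscillator a few cents sharp and mix the two.
    # One cent is 1/100 of a semitone, i.e. a frequency ratio of 2**(1/1200).
    freq = 110.0
    osc_a = pulse_osc(freq, width=0.5, amp=0.5)
    osc_b = pulse_osc(freq * 2 ** (7 / 1200), width=0.3, amp=0.5)
    thick = osc_a + osc_b  # the slight pitch offset makes the tone thicker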

Learn all about oscillators and how they work in this complete guide.

Filters are also key in shaping sound design. You use them along with oscillators to shape what frequency content your sound will have (all sound is just vibrations – or oscillations – of air at a specific speed – or frequency).

There are different types of filters we can use – high pass, low pass, notch, band pass. Each type has a special role in either reducing or boosting certain sound frequencies.

Filter Types Shown Visually

Using multiple filters at the same time can make sounds richer, while tools like LFOs (low frequency oscillators) can be used to change the filter automatically across time.

The main control on a synth filter is the “cutoff” which sets the point in the wave’s frequency spectrum where the frequency content starts to attenuate (i.e. drop/lower/cut).
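To show what a cutoff control does under the hood, here’s a textbook one-pole low-pass filter in the same Python/NumPy style – a much simpler design than the resonant filters in most synths, but the attenuate-above-cutoff behavior is the same idea:

    import numpy as np

    def one_pole_lowpass(signal, cutoff_hz, sr=44100):
        """Textbook one-pole low-pass: attenuates content above cutoff_hz."""
        # Smoothing coefficient derived from the cutoff frequency.
        a = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)
        out = np.zeros_like(signal)
        y = 0.0
        for i, x in enumerate(signal):
            y += a * (x - y)  # each sample moves partway toward the input
            out[i] = y
        return out

    t = np.arange(44100) / 44100
    saw = 2 * ((220 * t) % 1.0) - 1.0  # a bright sawtooth at 220 Hz
    darker = one_pole_lowpass(saw, cutoff_hz=800)  # same note, less high end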

Learn more about using filters in sound design in this guide.

An envelope is a key tool in shaping the type of sound you’re trying to design. It shapes the flow of your sound – how it begins, how quickly it falls to a held level, how long it holds, and how fast it ends.

These are the Attack, Decay, Sustain and Release settings, also referred to as a sound’s ADSR envelope.

ADSR Envelope Diagram

Creative use of envelopes brings out more texture in a sound, which also makes layering sounds together more effective.
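Here’s a minimal linear ADSR sketch in Python/NumPy – real synth envelopes usually use exponential curves, so treat this as an illustration of the four stages rather than a production envelope:

    import numpy as np

    def adsr(attack, decay, sustain, release, note_len, sr=44100):
        """Linear ADSR: times in seconds, `sustain` as a level from 0 to 1."""
        a = np.linspace(0.0, 1.0, int(sr * attack), endpoint=False)
        d = np.linspace(1.0, sustain, int(sr * decay), endpoint=False)
        hold = int(sr * note_len) - a.size - d.size
        s = np.full(max(hold, 0), sustain)  # held while the key is down
        r = np.linspace(sustain, 0.0, int(sr * release))
        return np.concatenate([a, d, s, r])

    env = adsr(attack=0.01, decay=0.2, sustain=0.6, release=0.5, note_len=1.0)
    # Multiply any oscillator output by `env` to shape its volume over time.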

Learn more about how to utilize envelopes in your sound design in this guide.

Modulation Techniques in Sound Design

Sound design uses modulation methods to change sound over time. These changes can affect the pitch (how high or low a “note” is), amplitude (loudness or strength of the wave), and frequency content (timbre) of the sound.

When we change the pitch, we can make melodies sound like they’re wavering. We can also make the pitch go up and down quickly for a special effect.

Changing the loudness can make a sound seem like it’s pulsing. This can also make the sound more dynamic.

By changing the frequency, we can affect how the sound feels and how it fills a room.

Modulating Pitch

Understanding pitch modulation allows sound designers to make their sounds more dynamic and expressive.

It involves changing the frequency of a sound signal, which in turn alters the perceived pitch – a higher frequency means a higher pitch.

For beginners, it’s important to understand that modulation is often used to add richness, complexity or movement to a sound.

There are two main components in pitch modulation: the carrier, which is the original sound, and the modulator, which is the signal used to change the pitch of the carrier. Here’s what can be modified:

  • Depth of Modulation: controls how much the pitch changes. If the modulation depth is higher, the pitch range will be wider.
  • Rate of Modulation: guides the speed of pitch changes. If the modulation rate is higher, the pitch changes quicker.

Modulation appears in familiar musical effects: vibrato (a periodic change in pitch) is pitch modulation, while tremolo (a periodic change in volume) is its amplitude counterpart.

Another common method of pitch modulation is to use a low-frequency oscillator (LFO) – learn more – to modulate the pitch. The LFO produces a waveform that is used to control the frequency of the sound source, effectively changing its pitch.

LFOTool Screenshot - Low Frequency Oscillator
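Here’s a small vibrato sketch along those lines: the LFO below is just a slow sine wave, its rate sets how fast the pitch wavers, and its depth sets how far the pitch strays (the numbers are ours, chosen for a gentle vibrato):

    import numpy as np

    sr = 44100
    t = np.arange(sr * 2) / sr  # two seconds

    carrier_hz = 440.0  # the note being played
    lfo_rate = 5.0      # vibrato speed in Hz (rate of modulation)
    depth_hz = 8.0      # how far the pitch strays (depth of modulation)

    # The instantaneous frequency wobbles around the carrier...
    inst_freq = carrier_hz + depth_hz * np.sin(2 * np.pi * lfo_rate * t)
    # ...and the phase is the running integral of that frequency.
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr
    vibrato = np.sin(phase)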

Pitch modulation can create both subtle and dramatic effects. It’s important to experiment with different settings and listen carefully to the results as subtle changes can make a big difference in the overall sound.

Modulating Amplitude

Amplitude modulation is a way to change the volume of a sound (i.e. amplitude – the “strength/height” of the wave) over time.

This involves altering a carrier wave (the original sound wave) based on the features of a modulator signal (another sound wave that alters the original).

The amplitude of the carrier signal is changed in proportion to the modulator signal. This results in a complex wave that contains both the carrier and the modulator.

Changing the frequency, shape, or phase of either wave can result in dramatically different sounds. This makes amplitude modulation great for creating complex and evolving sounds.

Here are some basic concepts broken down:

  • Signal Variation: The carrier wave’s strength changes according to the modulator signal.
  • Modulation Index: Defines how much variation is applied to the original carrier signal.
  • Sidebands: In AM, two sidebands are created that contain the sum and difference of the carrier and modulator frequencies.
  • Uses: creating complex sounds, tremolo effects that vary a sound’s volume, and ring modulation for special effects and noises.
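A short sketch of both flavors mentioned in the list above: tremolo keeps the carrier and varies its volume, while ring modulation multiplies the two signals outright, leaving only the sum and difference sidebands:

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr
    carrier = np.sin(2 * np.pi * 440 * t)   # the original sound
    modulator = np.sin(2 * np.pi * 30 * t)  # a 30 Hz modulator

    # Tremolo: the gain pulses between (1 - depth) and 1.
    depth = 0.5  # a simple modulation index: 0 = no effect, 1 = full pulsing
    tremolo = carrier * (1.0 - depth + depth * (modulator + 1) / 2)

    # Ring modulation: plain multiplication, producing the sum and
    # difference frequencies (440 + 30 = 470 Hz and 440 - 30 = 410 Hz).
    ring = carrier * modulator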

Modulating Frequency Content

Adjusting the frequency of sounds can change the pitch, create an effect, and make static sounds more lively. Frequency modulation is not just for simple effects. It can turn a basic sound wave into a complicated sound.

Sound designers play around with the fundamental and harmonic frequencies of a sound wave to make new sound colors and textures.

More advanced methods often involve using multiple modulators. For example, they might use LFOs to automatically adjust filter cutoffs to create evolving soundscapes.

Here are some basic concepts broken down:

  • Frequency Deviation: The maximum amount the modulator pushes the carrier’s instantaneous frequency away from its original (center) frequency.
  • Modulation Index: The ratio of the frequency deviation to the modulator’s frequency, which describes how strongly the carrier is being modulated.
  • Bandwidth: The range of frequencies the modulated signal occupies.
  • Phase: The position of a point within a wave’s cycle, measured relative to another wave.

Amplitude and Frequency Modulation Visualization
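To make those terms concrete, here’s a two-operator FM sketch in the same style – the deviation sets how far the carrier’s frequency swings, and dividing it by the modulator’s frequency gives the modulation index (the values are arbitrary, chosen for a metallic tone):

    import numpy as np

    sr = 44100
    t = np.arange(sr) / sr

    carrier_hz = 220.0  # the original signal's frequency
    mod_hz = 330.0      # the modulator's frequency
    deviation = 440.0   # peak frequency deviation in Hz
    index = deviation / mod_hz  # FM modulation index (about 1.33 here)

    # Classic phase-modulation form of FM synthesis:
    fm = np.sin(2 * np.pi * carrier_hz * t
                + index * np.sin(2 * np.pi * mod_hz * t))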

Effects Processing in Sound Design

In musical sound design, using effects wisely is just as key to creating unique sounds as the original sound itself.

Adding harmonic distortion can make a sound warmer and more complex. Time-based effects like reverb and delay can give sound depth and motion. Equalization and compression are essential tools for shaping and controlling a sound.

Mastering all of these tools will help you design any sound you want.

Harmonic Distortion

Harmonic distortion is a helpful tool in sound design. When used right, it can make audio more engaging and interesting, turning simple sounds into richer ones.

There are many ways to use harmonic distortion creatively. One of them is harmonic saturation. This technique can add warmth and life to digital sounds that might seem dull otherwise.

You can also use distortion to add texture to sounds, or push it further to create entirely different tones.

  • What It Is: harmonics or overtones are added to the original signal. The harmonics created are multiples of the original sound’s fundamental frequency.
  • Types of Distortion: Soft-clipping, additive, multiplicative, and intermodulation distortion. Each type gives a unique sound characteristic.
  • Use of Distortion: It’s often used in music production to add richness, warmth, and depth to the sound.
  • Saturation: A mild form of distortion used to add a warm and smooth tone to the signal.
  • Overdrive: A heavier form of distortion used to add a gritty and aggressive tone. It’s commonly used in rock and metal music.
  • Frequency Response: Distortion affects the frequency response of the signal, often enhancing certain frequencies while reducing others. This can be used creatively to shape the tone of the sound.
  • Dynamic Range: Distortion can also affect the dynamic range of the signal, reducing the difference between the loudest and softest parts of the sound. This can make the sound seem louder and more powerful.
  • Gain Staging: When applying distortion, it’s important to manage the volume levels of the signal to prevent unwanted clipping or digital distortion.

Normal vs. Distorted Waveform
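As an example of the gentlest type above, soft clipping: a tanh waveshaper squashes peaks smoothly and adds odd harmonics, with the drive parameter standing in for gain staging into the distortion (this is a common textbook shaper, not any specific plugin’s curve):

    import numpy as np

    def soft_clip(signal, drive=4.0):
        """tanh waveshaper: higher drive pushes the signal harder into the curve."""
        return np.tanh(drive * signal) / np.tanh(drive)  # renormalized to +/-1

    t = np.arange(44100) / 44100
    clean = np.sin(2 * np.pi * 110 * t)
    dirty = soft_clip(clean, drive=6.0)  # same note with added odd harmonics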

Time Based Effects

Effects like reverb and delay that depend on time are crucial in sound design. They help shape and extend the space and time features of sound – like making something sound like it’s coming from next door or adding an echo that repeats.

Creative uses of reverb and delay do more than just mimic real spaces. They create a unique mood, adding emotional layers or strange elements to a sound.

Time stretching methods, on the other hand, change the speed of playback of the sound but keep the pitch the same. This can completely alter the sound itself.

  • Reverb, or reverberation, is the persistence of sound after it has been produced. It’s an effect that adds depth and richness to the sound.
  • Types of Reverb: room, hall, plate, spring, digital/impulse response
  • Reverb Controls: pre-delay, reverb time, wet/dry mix
  • Delay is a time-based effect that records an input signal and then plays it back after a set period of time, creating an echo-like effect.
  • Types of Delay: analog, digital, tape
  • Delay Controls: delay time, feedback, modulation

Waves Reverb Screenshot
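Here’s a bare-bones digital delay line showing the two core controls from the list above, delay time and feedback – with no modulation or filtering in the feedback path, which real units usually add:

    import numpy as np

    def delay(signal, time_s=0.3, feedback=0.4, sr=44100):
        """Feedback delay: each repeat comes back `feedback` times quieter."""
        n = int(sr * time_s)  # delay time converted to samples
        out = np.concatenate([signal.astype(float), np.zeros(3 * n)])
        for i in range(n, out.size):  # feed the output back into itself
            out[i] += feedback * out[i - n]
        return out

    t = np.arange(22050) / 44100  # a half-second pluck-like blip
    blip = np.sin(2 * np.pi * 440 * t) * np.exp(-8 * t)
    echoed = delay(blip, time_s=0.25, feedback=0.5)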

Equalization (EQ)

EQ is vital for tweaking different sound frequencies precisely. We can enhance or reduce certain frequencies to mold the audio.

A filter – described above – is basically a very blunt form of EQ. More complex types of EQs – parametric EQs, dynamic EQs, etc – allow a sound designer to be more precise as they sculpt a sound’s frequency content.

Creative equalization can be used as a way to create unique sound tones. Remember, all sound is just frequencies/vibrations. When you shape the frequency content, you sculpt the sound itself.

  • Understanding the Frequency Spectrum: how different frequencies correspond to different sound characteristics
  • Types of EQs: filter, graphic, parametric, semi-parametric, dynamic
  • Boosting and Cutting: Increasing or decreasing certain frequencies to achieve the desired sound.
  • EQ Controls: gain (how much you boost or cut), Q (how wide or narrow the boost/cut is), and frequency (where in the spectrum the change happens).

Parametric EQ Screenshot
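The gain/Q/frequency trio maps directly onto a standard peaking-filter recipe. This sketch follows the widely used Audio EQ Cookbook formulas, implemented naively in a Python loop:

    import numpy as np

    def peaking_eq(x, freq, gain_db, q, sr=44100):
        """One peaking EQ band: boost or cut gain_db at freq, width set by q."""
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * freq / sr
        alpha = np.sin(w0) / (2 * q)
        b0, b1, b2 = 1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A
        a0, a1, a2 = 1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A
        y = np.zeros(x.size)
        for n in range(x.size):
            xn1 = x[n - 1] if n >= 1 else 0.0
            xn2 = x[n - 2] if n >= 2 else 0.0
            yn1 = y[n - 1] if n >= 1 else 0.0
            yn2 = y[n - 2] if n >= 2 else 0.0
            y[n] = (b0 * x[n] + b1 * xn1 + b2 * xn2 - a1 * yn1 - a2 * yn2) / a0
        return y

    # Cut 6 dB of boxiness around 300 Hz with a fairly narrow band
    # (`recording` here stands for any mono NumPy array of samples):
    # cleaner = peaking_eq(recording, freq=300, gain_db=-6.0, q=2.0)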

Compression

Compression is a crucial tool in sound design. It helps manage the dynamic range of audio – the difference between the loudest and quietest parts of a sound/recording. It does this by limiting loud sounds and raising softer ones, achieving a more even – or squeezed together – sound level.

One can use compression to make sounds thicker or more punchy. There are creative methods like sidechain compression too, which creates a rhythmic pumping effect by reducing one sound whenever another signal comes in.

  1. Types of Compression: Regular, parallel, multiband, limiting and sidechain compression.
  2. Compressor Controls:
    • Threshold: The level at which compression begins to work. Any sound above this level will be compressed.
    • Ratio: This dictates how much compression is applied once the threshold is breached.
    • Attack and Release: These parameters determine how quickly the compressor responds to and releases a signal that crosses the threshold.
    • Gain Reduction: This term refers to the amount of reduction in the audio signal’s gain that a compressor applies.
    • Make-up Gain: After compression, this is used to bring the level of the audio back up.
    • Knee: This parameter controls how the compressor transitions between non-compressed and compressed states.

Compressed vs. Uncompressed Audio
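A stripped-down compressor sketch tying those controls together: it follows the signal’s level, applies the ratio above the threshold, and smooths the gain changes with the attack and release times (a hard-knee design – real compressors add a knee control and better level detection):

    import numpy as np

    def compress(x, threshold_db=-18.0, ratio=4.0, attack_ms=5.0,
                 release_ms=80.0, makeup_db=6.0, sr=44100):
        """Hard-knee compressor: level above the threshold is divided by ratio."""
        atk = np.exp(-1.0 / (sr * attack_ms / 1000))
        rel = np.exp(-1.0 / (sr * release_ms / 1000))
        env = 0.0
        out = np.zeros(x.size)
        for n, sample in enumerate(x):
            level = abs(sample)
            # Envelope follower: reacts fast when the level rises (attack),
            # slowly when it falls (release).
            coeff = atk if level > env else rel
            env = coeff * env + (1 - coeff) * level
            over = 20 * np.log10(max(env, 1e-9)) - threshold_db
            gain_db = -over * (1 - 1 / ratio) if over > 0 else 0.0  # gain reduction
            out[n] = sample * 10 ** ((gain_db + makeup_db) / 20)
        return out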

Learn more about various effects processing possibilities in sound design in this complete guide.

Sound Design Learning Strategy

Becoming good at sound design takes a learning plan that includes both theory and hands-on practice.

If you wish to be a sound designer, put yourself in settings where you can play around with sound directly. That usually means trying out new techniques hands-on as you build your skills.

All it takes is a simple DAW (digital audio workstation) on your laptop/computer and a VST synth (most DAWs come bundled with basic virtual synths).

Learning by doing is key to understanding sound design – you have to just experiment and mess around with knobs and faders on filters, oscillators and envelopes.

Doing that will give you a better understanding of how these components shape a sound than any article or video can.

What to Do Next

Thanks for reading this complete guide on sound design for beginners. Next up, deep-dive into one area you’d like to get started with:

  • All About Harmonics and Overtones – Read Guide
  • How to Use Wave Shapers –
  • How to Use Reverb –
  • How to Use Delay –
  • How to Use EQ –
  • How to Use Compression –
  • How to Use Distortion –
  • How to Use a BitCrusher –
  • How to Use a Flanger –
  • How to Use a Chorus –
  • How to Use a Phaser –
  • How to Design Any Sound You Hear –