Tutorial makers have a fear of background music.
I know, that’s a big claim. But think about it. How many tutorials have you watched online that have had background music playing? Generally, the ones with music are the ones where the instruction is all given in text on screen. Why? It’s because music competes with the vocal track. The music can be distracting, or it can overpower the vocal track, making your video unwatchable.
So we leave the music out. Sure, we’ll have music during the intro, but then we give up on it, along with any other background sound. This isn’t a bad thing. Many tutorials can be super successful without any background music. But when you listen to someone talk into a microphone for a long period of time, especially a voice that has been compressed or limited digitally, the frequencies of their voice can begin to cause ear fatigue. This is when prolonged exposure to the same few frequencies begins to cause symptoms such as discomfort, pain, and loss of sensitivity. You will lose viewers.
The answer is a secondary audio track with either sound effects or music: something to fill in the frequencies that your voice and microphone aren’t covering, preventing listener fatigue.
In the video below I’ll be outlining two essential audio mixing skills for working with at least two tracks of audio: ducking and notching.
To review: ducking is when you reduce the volume of your secondary audio (like a soundtrack or background noise) to leave volume space for your primary audio. Notching is when you remove the audible frequencies from your secondary audio that may conflict with your primary audio.
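If you’re curious what ducking looks like under the hood, here’s a minimal Python sketch. It builds a synthetic “voice” burst and a continuous “music” tone, then lowers the music’s gain wherever the voice’s amplitude envelope is above a threshold. All signal values, the threshold, and the window size are illustrative assumptions, not settings from the video:

```python
import numpy as np

# Sketch of ducking: lower the music track's gain wherever the
# voice track's amplitude envelope rises above a threshold.
# All signals and parameter values here are illustrative.

sr = 44_100                                   # sample rate (Hz)
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)

# Synthetic stand-ins: a half-second "speech" burst and continuous music.
voice = np.zeros_like(t)
voice[sr // 2 : sr] = 0.8 * np.sin(2 * np.pi * 220 * t[sr // 2 : sr])
music = 0.5 * np.sin(2 * np.pi * 440 * t)

def duck(music, voice, window=2048, threshold=0.05, duck_gain=0.3):
    """Attenuate `music` to `duck_gain` while `voice` is active."""
    # Moving RMS envelope of the voice track.
    envelope = np.sqrt(np.convolve(voice**2, np.ones(window) / window, mode="same"))
    gain = np.where(envelope > threshold, duck_gain, 1.0)
    # Smooth the gain curve so the volume change doesn't click.
    gain = np.convolve(gain, np.ones(window) / window, mode="same")
    return music * gain

ducked = duck(music, voice)
```

A real editor does the same thing with attack and release times instead of a fixed smoothing window, but the idea is identical: the voice track drives the music track’s volume down.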
Ducking can be done in Camtasia, but Notching requires a dedicated audio editor like Adobe Audition. An example of a video where notching was used can be found here.
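For the curious, here’s a rough Python sketch of what a notch filter does, using SciPy. The 250 Hz center frequency and the Q value are illustrative assumptions (in practice you’d notch wherever your own voice carries most of its energy):

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

# Sketch of notching: carve a narrow band out of the music track
# around a frequency where the voice has most of its energy.
# The 250 Hz center and Q value are illustrative assumptions.

sr = 44_100          # sample rate (Hz)
f_notch = 250.0      # frequency to remove (where the voice sits)
q = 5.0              # quality factor: higher = narrower notch

b, a = iirnotch(f_notch, q, fs=sr)

# Synthetic "music": one component at the voice frequency, one well above it.
t = np.linspace(0, 1.0, sr, endpoint=False)
music = np.sin(2 * np.pi * 250 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Zero-phase filtering removes the 250 Hz component and leaves 1000 Hz intact.
notched = filtfilt(b, a, music)
```

The result is music that stays audible overall but no longer fights the voice in the narrow band the filter removed.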
Let us know in the comments how you like to use background audio in your eLearning courses!