Rade Kutil
Lehrveranstaltungen
VO+PS Audio Processing (SS26)

Documents for the VO

lecture notes

list of questions

An interactive transfer function simulator with z-transform visualization

PS-Exercises

Here is some guitar sound to use as test input, and also some speech sound.

Look at this demo program for how to program the exercises in Python. Send the solutions via the upload page (you will get a personal link via email) before Tuesday 22:00. Please do not send Jupyter notebooks or zip-files, just send .py-scripts.

  1. Implement the bandpass filter with configurable \(f_c\) and \(f_d\).
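    The lecture presumably specifies the exact filter design; as a starting point, here is a sketch using a generic two-pole resonator, where the pole radius is set by the bandwidth \(f_d\) and the pole angle by the centre frequency \(f_c\) (the normalization so that the gain at \(f_c\) is 1 is an assumed choice):

```python
import numpy as np
from scipy.signal import lfilter

def bandpass(x, fs, fc, fd):
    """Two-pole resonator bandpass with centre frequency fc and bandwidth fd (Hz)."""
    R = np.exp(-np.pi * fd / fs)            # pole radius from the bandwidth
    theta = 2 * np.pi * fc / fs             # pole angle from the centre frequency
    a1, a2 = -2 * R * np.cos(theta), R * R
    z = np.exp(1j * theta)                  # normalize so that the gain at fc is 1
    b0 = abs(1 + a1 / z + a2 / z**2)
    return lfilter([b0], [1, a1, a2], x)
```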

  2. Implement a three-way equalizer by first splitting the input signal with a low- and a high-pass filter with the same cut-off frequency, and then splitting the high-pass signal again in the same way. Multiply each channel by some (maybe time-varying) factor and add them back together. Check (and maybe prove) whether the input signal would be unchanged if the factors are all equal to 1.
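    One way to make the reconstruction exact is to define each high-pass part as the complement of a first-order low-pass, hp = x - lp; then the sum of the three channels with unit factors is x by construction (the first-order low-pass used here is an assumed choice):

```python
import numpy as np
from scipy.signal import lfilter

def lowpass1(x, fs, fc):
    # first-order lowpass; its complement hp = x - lp reconstructs perfectly
    a = np.exp(-2 * np.pi * fc / fs)
    return lfilter([1 - a], [1, -a], x)

def equalizer3(x, fs, f1, f2, g_low=1.0, g_mid=1.0, g_high=1.0):
    low = lowpass1(x, fs, f1)       # first split at f1
    rest = x - low
    mid = lowpass1(rest, fs, f2)    # split the high-pass part again at f2
    high = rest - mid
    return g_low * low + g_mid * mid + g_high * high
```

    With all factors equal to 1, low + mid + high = low + rest = x exactly, which answers the check/proof part.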

  3. Implement a phaser with only one allpass. Modulate \(f_c\) with a low-frequency oscillator.
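    A minimal sketch, assuming a first-order allpass with coefficient \(c=(\tan(\pi f_c/f_s)-1)/(\tan(\pi f_c/f_s)+1)\) and a sine LFO sweeping \(f_c\) between assumed limits:

```python
import numpy as np

def phaser(x, fs, f_lfo=0.3, fc_min=200.0, fc_max=2000.0):
    """Single first-order allpass; fc swept by a sine LFO (assumed ranges)."""
    n = np.arange(len(x))
    fc = fc_min + 0.5 * (fc_max - fc_min) * (1 + np.sin(2 * np.pi * f_lfo * n / fs))
    tn = np.tan(np.pi * fc / fs)
    c = (tn - 1) / (tn + 1)                 # allpass coefficient per sample
    y = np.empty(len(x))
    x1 = ap1 = 0.0
    for i in range(len(x)):
        ap = c[i] * x[i] + x1 - c[i] * ap1  # first-order allpass
        x1, ap1 = x[i], ap
        y[i] = 0.5 * (x[i] + ap)            # mix dry and allpassed -> moving notch
    return y
```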

  4. Extend the phaser to four parallel allpasses (instead of sequential ones as in the lecture). This means the input to all allpasses should be \(ph_2\), and their outputs must be averaged (summed and divided by 4). There are separate parameters for each allpass. Modulate the \(f_c\)-parameters independently with non-harmonically related low frequencies. Also, implement the feedback loop. The result should sound like this.
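    A sketch of the parallel structure; the exact definition of \(ph_2\) and the placement of the feedback tap follow the lecture, so the version below (the averaged allpass output, delayed by one sample, fed back into the shared input) is an assumption, as are the LFO rates and sweep range:

```python
import numpy as np

def phaser4(x, fs, fb=0.5, rates=(0.11, 0.17, 0.23, 0.31),
            fc_min=200.0, fc_max=2000.0):
    """Four parallel first-order allpasses with a shared input ph2 and feedback."""
    n = np.arange(len(x))
    cs = []
    for k, r in enumerate(rates):           # non-harmonic LFO rate per allpass
        fc = fc_min + 0.5 * (fc_max - fc_min) * (1 + np.sin(2 * np.pi * r * n / fs + k))
        tn = np.tan(np.pi * fc / fs)
        cs.append((tn - 1) / (tn + 1))
    x1 = [0.0] * 4
    ap1 = [0.0] * 4
    avg = 0.0                               # previous averaged output (feedback)
    y = np.empty(len(x))
    for i in range(len(x)):
        ph2 = x[i] + fb * avg               # shared input of all four allpasses
        s = 0.0
        for k in range(4):
            ap = cs[k][i] * ph2 + x1[k] - cs[k][i] * ap1[k]
            x1[k], ap1[k] = ph2, ap
            s += ap
        avg = s / 4.0                       # sum and divide by 4
        y[i] = 0.5 * (x[i] + avg)
    return y
```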

  5. Implement an \(m\)-fold Wah-Wah effect with increasing \(m\). Set \(f_c\) to 3500Hz and use \(q=0.5\). Start with \(m=1\), and increase \(m\) by 1 every 0.5 seconds. The result should sound like this.
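    Assuming "\(m\)-fold" means applying the same bandpass \(m\) times in series, a sketch using a standard biquad bandpass (an assumed design; resetting the filter state at the block boundaries, as done here for simplicity, can cause clicks):

```python
import numpy as np
from scipy.signal import lfilter

def bandpass_bq(fs, fc, q):
    # standard biquad bandpass with unit peak gain at fc (an assumed design)
    w = 2 * np.pi * fc / fs
    alpha = np.sin(w) / (2 * q)
    b = np.array([alpha, 0.0, -alpha])
    a = np.array([1 + alpha, -2 * np.cos(w), 1 - alpha])
    return b / a[0], a / a[0]

def wahwah(x, fs, fc=3500.0, q=0.5, step=0.5):
    """Apply the bandpass m times, with m growing by 1 every `step` seconds."""
    b, a = bandpass_bq(fs, fc, q)
    y = np.empty(len(x))
    block = int(step * fs)
    m = 1
    for start in range(0, len(x), block):
        seg = np.asarray(x[start:start + block], dtype=float)
        for _ in range(m):                  # m-fold application
            seg = lfilter(b, a, seg)
        y[start:start + len(seg)] = seg
        m += 1
    return y
```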

  6. Generate a 3 second rectangular wave (e.g. 50 times 1.0, 50 times -1.0, and repeat) and resample it (from e.g. fs=20000Hz) to a slightly different sampling rate (e.g. 20002Hz) using linear, Lanczos, and allpass interpolation. See if you can hear any amplitude fluctuations.
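    A sketch of the signal generation and of the linear and Lanczos resamplers (the allpass interpolation is specific to the lecture and is omitted here; the Lanczos window size a=3 is an assumed choice):

```python
import numpy as np

def make_rect(fs, seconds=3.0, half=50):
    """Rectangular wave: `half` samples of +1, `half` of -1, repeated."""
    n = int(fs * seconds)
    period = np.concatenate([np.ones(half), -np.ones(half)])
    return np.tile(period, n // len(period) + 1)[:n]

def resample_linear(x, fs_in, fs_out):
    t = np.arange(int(len(x) * fs_out / fs_in)) * fs_in / fs_out
    i = np.clip(np.floor(t).astype(int), 0, len(x) - 2)
    f = t - i
    return (1 - f) * x[i] + f * x[i + 1]

def resample_lanczos(x, fs_in, fs_out, a=3):
    t = np.arange(int(len(x) * fs_out / fs_in)) * fs_in / fs_out
    y = np.zeros(len(t))
    for j, tj in enumerate(t):
        i0 = int(np.floor(tj))
        for i in range(max(0, i0 - a + 1), min(len(x), i0 + a + 1)):
            u = tj - i                      # Lanczos kernel sinc(u)*sinc(u/a)
            y[j] += x[i] * np.sinc(u) * np.sinc(u / a)
    return y
```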

  7. Implement a chorus effect, i.e. add 3 copies of the input sound, each delayed between 0 and 100 samples, modulated by LFOs with non-harmonic frequencies between 1 and 2 Hz. The result should sound like this.
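    A minimal sketch with linear interpolation for the fractional delays; the three LFO rates are assumed non-harmonic values between 1 and 2 Hz:

```python
import numpy as np

def chorus(x, fs, rates=(1.1, 1.4, 1.9), max_delay=100):
    """Add 3 copies with LFO-modulated delays of 0..max_delay samples."""
    x = np.asarray(x, dtype=float)
    n = np.arange(len(x))
    y = x.copy()
    for ph, r in enumerate(rates):
        d = 0.5 * max_delay * (1 + np.sin(2 * np.pi * r * n / fs + ph))
        pos = n - d                          # fractional read position
        i = np.clip(np.floor(pos).astype(int), 0, len(x) - 2)
        f = pos - i
        tap = (1 - f) * x[i] + f * x[i + 1]  # linear interpolation
        tap[pos < 0] = 0.0                   # before the signal starts
        y += tap
    return y / (1 + len(rates))
```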

  8. Implement single-sideband modulation. By using the Hilbert transform (do NOT use the Fourier transform to calculate it), modulate the input sound by a sinusoid with increasing frequency, e.g. \( \cos(20\cdot 2\pi t^2)\) (\(t\) in seconds). The result should sound like this.
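    A sketch computing the Hilbert transform with a windowed FIR approximation (impulse response \(2/(\pi n)\) at odd lags, zero at even lags), which avoids the Fourier transform as required; the tap count and the Hamming window are assumed choices:

```python
import numpy as np

def hilbert_fir(x, taps=201):
    """Windowed-FIR Hilbert transformer (no Fourier transform involved)."""
    m = taps // 2
    n = np.arange(-m, m + 1)
    h = np.zeros(taps)
    odd = (n % 2) != 0
    h[odd] = 2.0 / (np.pi * n[odd])         # ideal response at odd lags
    h *= np.hamming(taps)
    return np.convolve(x, h, mode='same')   # 'same' keeps the delay aligned

def ssb(x, fs):
    t = np.arange(len(x)) / fs
    phi = 20 * 2 * np.pi * t ** 2           # phase of cos(20*2*pi*t^2)
    return x * np.cos(phi) - hilbert_fir(x) * np.sin(phi)
```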

  9. Implement a noise gate/expander that uses a squarer as detector and reduces levels below -25dB by 1 dB per dB. (Hints: The maximum level is 0dB. Above -25dB, r=0dB (no change), below -25dB, r is linear; r(-25)=0 and r(-35)=-10, and so on.) To test it, read guit3.wav and fox.wav, and mix them together with guit3.wav divided by 10. Choose pretty short attack- and release-time parameters. The result should sound like this.

    Caveat: The output of the squarer is converted to dB with 10*log10, because its values are already squared; for plain amplitudes the factor would be 20*log10.
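    Putting the hints together, a minimal sketch (the one-pole envelope smoother with separate attack and release coefficients is an assumed detector design; the time constants are placeholder values to tune by ear):

```python
import numpy as np

def expander(x, fs, threshold=-25.0, attack=0.002, release=0.01):
    """Squarer detector + downward expansion (1 dB per dB) below `threshold`."""
    ga = np.exp(-1.0 / (fs * attack))
    gr = np.exp(-1.0 / (fs * release))
    env = 0.0
    y = np.empty(len(x))
    for n in range(len(x)):
        p = x[n] * x[n]                     # squarer detector
        g = ga if p > env else gr           # attack on rise, release on fall
        env = g * env + (1 - g) * p
        level = 10 * np.log10(env + 1e-12)  # 10*log10: env is already squared
        r = min(0.0, level - threshold)     # e.g. r(-35) = -10 dB
        y[n] = x[n] * 10 ** (r / 20)
    return y
```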

  10. The distortion function \(g(x)\) should be designed so that it is \(+1\) for \(x\ge 1\), \(-1\) for \(x\le-1\), and \(g(x)=ax+bx^3\) for \(-1\le x\le 1\), where \(a, b\) are chosen so that \(g(1)=1\) and \(g'(1)=0\). Then create a harmonic signal of 3 seconds with a fundamental frequency of \(f=163\text{Hz}\) and 7 harmonics, where the amplitude of the \(k\)-th harmonic (at frequency \((k+1)f\)) is \(1/\sqrt{k+1}\). Use a sampling frequency of 3000Hz. Normalize the signal to \(\pm 1\), then multiply it by an increasing gain from 0.5 at the beginning to 3.0 at the end. Feed this into the distortion function. But before the distortion, upsample by a factor of 3 (you can use scipy.signal.resample_poly), and, after the distortion, downsample again (\(y_u\)). Also, compare this to applying the distortion without up-/downsampling (\(y_r\)). Concatenate \(x, y_u, y_r\) for easier comparison.
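    From \(g(1)=a+b=1\) and \(g'(1)=a+3b=0\) it follows that \(a=3/2\) and \(b=-1/2\). A sketch of the whole pipeline:

```python
import numpy as np
from scipy.signal import resample_poly

def g(x):
    # a = 3/2, b = -1/2 follow from g(1) = 1 and g'(1) = 0
    x = np.clip(x, -1.0, 1.0)               # +-1 outside the interval
    return 1.5 * x - 0.5 * x ** 3

fs, f0, dur = 3000, 163.0, 3.0
t = np.arange(int(fs * dur)) / fs
x = sum(np.cos(2 * np.pi * (k + 1) * f0 * t) / np.sqrt(k + 1) for k in range(7))
x /= np.max(np.abs(x))                      # normalize to +-1
x *= np.linspace(0.5, 3.0, len(x))          # increasing gain 0.5 -> 3.0
yu = resample_poly(g(resample_poly(x, 3, 1)), 1, 3)  # distortion at 3x rate
yr = g(x)                                   # direct distortion (aliases)
out = np.concatenate([x, yu, yr])
```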

  11. Implement an octaver and apply it on fox.wav. First, produce a low-pass filtered signal \(l\) (first order, cut-off 50Hz). Then, when \(l\) has a positive zero-crossing, change a sign \(s\) (\(-1 \longleftrightarrow +1\)). Finally, mix \(3\cdot l\cdot s\) into the source signal. The result should sound like this.
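    A minimal sketch (mixing \(3\cdot l\cdot s\) additively into the unmodified input; the exact mixing convention follows the lecture):

```python
import numpy as np
from scipy.signal import lfilter

def octaver(x, fs, fc=50.0, gain=3.0):
    a = np.exp(-2 * np.pi * fc / fs)
    l = lfilter([1 - a], [1, -a], x)        # first-order lowpass, cut-off fc
    s = np.ones(len(x))
    sign = 1.0
    for n in range(1, len(x)):
        if l[n - 1] < 0 <= l[n]:            # positive zero crossing: flip sign
            sign = -sign
        s[n] = sign
    return x + gain * l * s                 # mix the halved-pitch signal in
```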

  12. Implement denoising based on STFT. The signal to denoise is fox.wav, the noise signal is guit3.wav, shortened to the length of fox.wav and multiplied by \(0.1\). The denoising coefficients \(c_w\) should be learned from the noise signal as the average of the absolute value of the coefficients in bin \(w\) over all frames, multiplied by \(2\). Concatenate the noisy and denoised signals for comparison. The result should sound like this.

    The STFT (forward and inverse) is available in the scipy-library:


    from scipy import signal

    frameSize = 512
    hopSize = frameSize // 4  # integer division, since noverlap must be an integer
    _, _, X = signal.stft(x, fs, window='hann', nperseg=frameSize, noverlap=frameSize-hopSize)

    _, y = signal.istft(X, fs, window='hann', nperseg=frameSize, noverlap=frameSize-hopSize)
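    Building on this snippet, a sketch of the denoising itself; whether \(c_w\) is used as a hard gating threshold (as assumed here) or subtracted from the coefficient magnitudes is a design choice:

```python
import numpy as np
from scipy import signal

def denoise(x, noise, fs, frameSize=512):
    hop = frameSize // 4
    nov = frameSize - hop
    _, _, N = signal.stft(noise, fs, window='hann', nperseg=frameSize, noverlap=nov)
    c = 2 * np.mean(np.abs(N), axis=1)      # c_w: 2x average |coefficient| per bin
    _, _, X = signal.stft(x, fs, window='hann', nperseg=frameSize, noverlap=nov)
    X[np.abs(X) < c[:, None]] = 0           # gate coefficients below c_w
    _, y = signal.istft(X, fs, window='hann', nperseg=frameSize, noverlap=nov)
    return y
```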

2026-04-25 16:27