Rade Kutil
Courses
VO+PS Audio Processing (SS22)

Documents for the VO

lecture notes

list of questions

PS Exercises

Here is some guitar sound to use as test input, and also some speech sound.

  1. Implement the bandpass filter with configurable \(f_c\) and \(f_d\).
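
    A minimal sketch of one possible implementation, assuming the common "audio EQ cookbook" biquad coefficients (the formulas derived in the lecture notes may differ in detail); the file guit3.wav and the values \(f_c=1000\)Hz, \(f_d=200\)Hz are just example choices:

    import numpy as np
    from scipy import signal
    from scipy.io import wavfile

    def bandpass(x, fs, fc, fd):
        # second-order bandpass with center frequency fc and bandwidth fd (Hz)
        w0 = 2 * np.pi * fc / fs
        alpha = np.sin(w0) * fd / (2 * fc)          # sin(w0)/(2*Q) with Q = fc/fd
        b = np.array([alpha, 0.0, -alpha])
        a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
        return signal.lfilter(b / a[0], a / a[0], x)

    fs, x = wavfile.read('guit3.wav')
    y = bandpass(x.astype(float), fs, fc=1000.0, fd=200.0)
    wavfile.write('bandpass.wav', fs, (y / np.abs(y).max()).astype(np.float32))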

  2. Implement a three-way equalizer by first splitting the input signal with a low-pass and a high-pass filter with the same cut-off frequency, and then splitting the high-pass signal again in the same way. Multiply each channel by some (possibly time-varying) factor and add them back together. Check (and maybe prove) whether the input signal remains unchanged if all factors are equal to 1.
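
    One way to make the check work by construction is to take each high-pass band as the complement of the corresponding low-pass band, so that the two always sum to the input exactly. A sketch along those lines, reusing x and fs from the previous sketch; the one-pole low-pass and the cut-off frequencies 300Hz and 3000Hz are assumptions, not prescribed by the exercise:

    import numpy as np
    from scipy import signal

    def lowpass1(x, fs, fc):
        # simple one-pole low-pass with unit DC gain
        c = np.exp(-2 * np.pi * fc / fs)
        return signal.lfilter([1 - c], [1, -c], x)

    low = lowpass1(x, fs, 300.0)
    rest = x - low                        # complementary high-pass: low + rest == x
    mid = lowpass1(rest, fs, 3000.0)
    high = rest - mid                     # split the high-pass signal again
    g_low, g_mid, g_high = 1.0, 0.5, 2.0  # (time-varying factors also possible)
    y = g_low * low + g_mid * mid + g_high * high   # equals x when all factors are 1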

  3. Implement a phaser with only one allpass. Modulate \(f_c\) with a low-frequency oscillator.
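
    A sketch, using the standard first-order allpass \(y[t] = c\,x[t] + x[t-1] - c\,y[t-1]\) with \(c = (\tan(\pi f_c/f_s) - 1)/(\tan(\pi f_c/f_s) + 1)\); the LFO rate and sweep range are arbitrary choices:

    import numpy as np
    from scipy.io import wavfile

    def phaser(x, fs, f0=800.0, depth=600.0, rate=0.5):
        n = np.arange(len(x))
        fc = f0 + depth * np.sin(2 * np.pi * rate * n / fs)   # LFO-modulated f_c
        c = (np.tan(np.pi * fc / fs) - 1) / (np.tan(np.pi * fc / fs) + 1)
        y = np.zeros(len(x))
        x1 = y1 = 0.0
        for t in range(len(x)):
            y[t] = c[t] * x[t] + x1 - c[t] * y1               # first-order allpass
            x1, y1 = x[t], y[t]
        return 0.5 * (x + y)               # dry + allpass gives a sweeping notch

    fs, x = wavfile.read('guit3.wav')
    y = phaser(x.astype(float), fs)
    wavfile.write('phaser.wav', fs, (y / np.abs(y).max()).astype(np.float32))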

  4. Extend the phaser to four allpasses with separate parameters for each allpass. Modulate the \(f_c\) parameters independently with non-harmonically related low frequencies. Also try to implement the feedback loop. The result should sound like this.

  5. Implement a 4-fold Wah-Wah effect. Modulate \(f_c\) with a low-frequency oscillator, and calculate \(f_d\) in a constant-Q manner. The result should sound like this.

  6. Generate a 5 second sine tone (e.g. 2200Hz) and resample it to a nearly identical sampling rate using linear, Lanczos, and allpass interpolation. See if you can hear any difference for high frequencies. (Test with a low sampling frequency, e.g. 5100Hz resampled to 5102Hz.)
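
    A sketch of the linear-interpolation case with the suggested test parameters (Lanczos and allpass interpolation use the same fractional read positions, only the interpolation changes):

    import numpy as np
    from scipy.io import wavfile

    fs_in, fs_out = 5100, 5102
    t = np.arange(5 * fs_in) / fs_in
    x = np.sin(2 * np.pi * 2200 * t)                    # 5 second sine at 2200Hz

    pos = np.arange(int(5 * fs_out)) * fs_in / fs_out   # fractional read positions
    i = np.clip(np.floor(pos).astype(int), 0, len(x) - 2)
    frac = pos - i
    y = (1 - frac) * x[i] + frac * x[i + 1]             # linear interpolation
    wavfile.write('resampled.wav', fs_out, y.astype(np.float32))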

  7. Implement a stereo rotary speaker effect. The result should sound like this.

  8. Implement a primitive vocoder based on the Hilbert transform: Read two sounds, e.g. guit3.wav and fox.wav. Transform both by a truncated Hilbert transform. Calculate the instantaneous amplitude of both (i.e. \(\sqrt{x^2+y^2}\), where \(x\) is the original signal, and \(y\) is the Hilbert transform). Then substitute the amplitude of guit3 by the amplitude of fox (divide by the one and multiply by the other). The result should sound like this (not great, but the idea counts …).
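
    A sketch, assuming the ideal Hilbert impulse response \(h[n] = 2/(\pi n)\) for odd \(n\) (and zero otherwise), truncated to ±256 taps; the truncation length and the small constant guarding the division are arbitrary:

    import numpy as np
    from scipy.io import wavfile

    def inst_amplitude(x, N=256):
        n = np.arange(-N, N + 1)
        h = np.zeros(2 * N + 1)
        odd = (n % 2) != 0
        h[odd] = 2.0 / (np.pi * n[odd])       # truncated Hilbert transform
        y = np.convolve(x, h, mode='same')
        return np.sqrt(x**2 + y**2)           # instantaneous amplitude

    fs, g = wavfile.read('guit3.wav')
    _, f = wavfile.read('fox.wav')
    L = min(len(g), len(f))
    g, f = g[:L].astype(float), f[:L].astype(float)
    y = g / (inst_amplitude(g) + 1e-9) * inst_amplitude(f)   # swap the amplitudes
    wavfile.write('hilbert_vocoder.wav', fs, (y / np.abs(y).max()).astype(np.float32))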

  9. Implement a compressor (limiter) that uses a squarer as detector and limits the level at -30dB. (Hints: The maximum level is 0dB. Below -30dB, \(r=0\)dB (no change); between -30dB and 0dB, \(r\) is linear, with \(r(-30)=0\) and \(r(0)=-30\); you should be able to find the formula yourself.) To test it, read guit3.wav and fox.wav, and mix them together with guit3.wav divided by 10 (radio host situation). Normalize the result so that it is not too quiet. Experiment with the attack and release time parameters. The result should sound like this.

    Some caveats: (1) The output of the squarer is converted to dB by 10*log10 because it is squared; otherwise it would be 20*log10. (2) For the second averager, the roles of attack and release are reversed.
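
    A per-sample sketch with a single attack/release averager on the squarer output (the second averager mentioned in the caveats is omitted for brevity); the time constants are arbitrary starting points:

    import numpy as np
    from scipy.io import wavfile

    def limiter(x, fs, thresh=-30.0, attack=0.005, release=0.1):
        ca, cr = np.exp(-1 / (attack * fs)), np.exp(-1 / (release * fs))
        y = np.zeros(len(x))
        env = 0.0
        for t in range(len(x)):
            p = x[t] * x[t]                        # squarer detector
            c = ca if p > env else cr              # attack rising, release falling
            env = c * env + (1 - c) * p
            level = 10 * np.log10(env + 1e-12)     # 10*log10 because env is squared
            r = min(0.0, -(level - thresh))        # gain in dB: r(-30)=0, r(0)=-30
            y[t] = x[t] * 10**(r / 20)
        return y

    fs, g = wavfile.read('guit3.wav')
    _, f = wavfile.read('fox.wav')
    L = min(len(g), len(f))
    mix = g[:L].astype(float) / 10 + f[:L].astype(float)   # radio host situation
    y = limiter(mix / np.abs(mix).max(), fs)
    wavfile.write('limited.wav', fs, (y / np.abs(y).max()).astype(np.float32))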

  10. Implement the distortion transforms from the lecture notes (hard clipping, soft clipping, distortion), and test them on the guitar sound.
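
    A sketch with common textbook transfer curves standing in for the exact formulas from the lecture notes (which may differ):

    import numpy as np
    from scipy.io import wavfile

    def hard_clip(x, c=0.5):
        return np.clip(x, -c, c)

    def soft_clip(x):
        y = np.clip(x, -1.0, 1.0)
        return y - y**3 / 3                  # cubic soft clipper, one common choice

    def distortion(x, g=5.0):
        return np.sign(x) * (1 - np.exp(-np.abs(g * x)))   # exponential shaping

    fs, x = wavfile.read('guit3.wav')
    x = x.astype(float) / np.abs(x).max()    # normalize before waveshaping
    for name, f in [('hard', hard_clip), ('soft', soft_clip), ('dist', distortion)]:
        y = f(x)
        wavfile.write(name + '.wav', fs, (y / np.abs(y).max()).astype(np.float32))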

  11. Implement an octaver and apply it to fox.wav. To correctly find positive zero-crossings, use a negative amplitude follower \(x_n\): if the signal is less than \(x_n\) (a negative peak is reached), set \(x_n\) to the signal value; otherwise multiply \(x_n\) by 0.999. Also use two state variables: \(r\) (negative peak reached) and \(s\) (sound on). \(r\) is set when the signal becomes less than \(x_n\), and unset when it is set and the signal becomes positive; in the latter case, also flip \(s\) (on to off or off to on). Pass the signal to the output only when it is positive and \(s\) is set. The result should sound like this.
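
    A sketch that transcribes the state machine above directly:

    import numpy as np
    from scipy.io import wavfile

    def octaver(x):
        y = np.zeros(len(x))
        xn = 0.0                        # negative amplitude follower
        r = False                       # negative peak reached
        s = False                       # sound on
        for t in range(len(x)):
            if x[t] < xn:
                xn = x[t]               # follow a new negative peak
                r = True
            else:
                xn *= 0.999             # let the follower decay
            if r and x[t] > 0:          # positive zero-crossing after a peak
                r = False
                s = not s               # flip: pass only every other period
            if s and x[t] > 0:
                y[t] = x[t]
        return y

    fs, f = wavfile.read('fox.wav')
    y = octaver(f.astype(float))
    wavfile.write('octaver.wav', fs, (y / np.abs(y).max()).astype(np.float32))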

  12. Implement the vocoder effect (mutation, morphing) based on the STFT. Use the same approach as with the Hilbert transform (real and imaginary part instead of \(x\) and \(y\)), or peek ahead in the lecture notes. The STFT (forward and inverse) is available in the scipy library:


    from scipy import signal

    # x, fs: input signal and sampling rate, e.g. from scipy.io.wavfile.read
    frameSize = 512
    hopSize = frameSize // 4   # integer division: noverlap must be an integer
    _, _, X = signal.stft(x, fs, window='hann', nperseg=frameSize,
                          noverlap=frameSize - hopSize)

    _, y = signal.istft(X, fs, window='hann', nperseg=frameSize,
                        noverlap=frameSize - hopSize)

    See what happens if you change the frame size to 128 or 4096.
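
    Given the STFTs X1 and X2 of the two (equally long) sounds, computed as above and with numpy imported as np, the amplitude swap itself mirrors the Hilbert version; the small constant guarding the division is an arbitrary choice:

    Y = X1 / (np.abs(X1) + 1e-9) * np.abs(X2)   # keep X1's phase, take X2's magnitude
    _, y = signal.istft(Y, fs, window='hann', nperseg=frameSize,
                        noverlap=frameSize - hopSize)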

  13. Implement time-stretching based on STFT. Use fox.wav.
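
    A sketch of a standard phase vocoder built on the STFT call from the previous exercise; the stretch factor 1.5 is arbitrary, and magnitudes are taken from the nearest input frame rather than interpolated:

    import numpy as np
    from scipy import signal
    from scipy.io import wavfile

    fs, x = wavfile.read('fox.wav')
    frameSize, hopSize = 512, 128
    _, _, X = signal.stft(x.astype(float), fs, window='hann',
                          nperseg=frameSize, noverlap=frameSize - hopSize)

    stretch = 1.5
    nOut = int(X.shape[1] * stretch)
    omega = 2 * np.pi * hopSize * np.arange(X.shape[0]) / frameSize  # phase advance per hop
    Y = np.zeros((X.shape[0], nOut), dtype=complex)
    Y[:, 0] = X[:, 0]
    phase = np.angle(X[:, 0])
    for j in range(1, nOut):
        i = min(int(j / stretch), X.shape[1] - 2)
        dphi = np.angle(X[:, i + 1]) - np.angle(X[:, i]) - omega
        dphi = (dphi + np.pi) % (2 * np.pi) - np.pi    # wrap to [-pi, pi)
        phase += omega + dphi                          # accumulate the true advance
        Y[:, j] = np.abs(X[:, i + 1]) * np.exp(1j * phase)

    _, y = signal.istft(Y, fs, window='hann', nperseg=frameSize,
                        noverlap=frameSize - hopSize)
    wavfile.write('stretched.wav', fs, (y / np.abs(y).max()).astype(np.float32))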

  14. Implement pitch-shifting directly (keeping the hop size the same). For a pitch factor \(k\) (range 0.5 to 2.0), multiply \(\Delta\varphi\) by \(k\), and also move each coefficient up (or down) in the frequency bins to position \(kw\) (rounded). Bonus: amplitude interpolation to avoid holes in the array.

  15. Implement an oscillator based on the digital resonator. Control the frequency with an LFO (low-frequency oscillator). For large and fast frequency variations there should be audible amplitude variations. Now determine the amplitude with a squarer detector and an averager (equal attack and release). Correct the amplitude by dividing \(x[t]\) and \(x[t-1]\) by \((a-\bar{a})/10+1\), where \(a\) is the detected amplitude and \(\bar{a}\) is the desired amplitude.
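
    A sketch of the resonator \(x[t] = 2\cos(\omega)x[t-1] - x[t-2]\) with per-sample LFO control and the correction described above; since the squarer average of a unit-amplitude sine is 0.5, the amplitude estimate is \(a = \sqrt{2\bar{p}}\), with desired amplitude \(\bar{a}=1\) (the LFO parameters and averager time constant are arbitrary):

    import numpy as np
    from scipy.io import wavfile

    fs = 44100
    f0, depth, rate = 1000.0, 800.0, 4.0           # large, fast frequency variation
    c = np.exp(-1 / (0.01 * fs))                   # averager, equal attack/release
    y = np.zeros(3 * fs)
    s1, s2 = np.sin(2 * np.pi * f0 / fs), 0.0      # resonator state x[t-1], x[t-2]
    env = 0.5                                      # squarer average of a unit sine
    for t in range(len(y)):
        w = 2 * np.pi * (f0 + depth * np.sin(2 * np.pi * rate * t / fs)) / fs
        s0 = 2 * np.cos(w) * s1 - s2               # digital resonator
        env = c * env + (1 - c) * s0 * s0          # squarer detector + averager
        a = np.sqrt(2 * env)                       # detected amplitude
        corr = (a - 1.0) / 10 + 1                  # gentle correction toward abar = 1
        s0 /= corr
        s1 /= corr                                 # divide x[t] and x[t-1]
        y[t] = s0
        s2, s1 = s1, s0
    wavfile.write('resonator.wav', fs, (0.5 * y).astype(np.float32))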

  16. No Python this time. For a signal similar to $$x=(\ldots,0,0,1,2,1,0,-1,-2,-1,0,0,\ldots)\, ,$$ calculate the optimal linear prediction coefficients with the Levinson-Durbin algorithm by hand. No window function is used, i.e. it is constant 1. Calculate also the predictions. You will get a link via email to a web page with an individual signal, where you have to enter all the calculated values.

  17. Implement the vocoder effect (mutation, morphing) with LPC. For blocks of, say, 1024 samples, calculate the first \(m\) autocorrelation values. From those, solve the Toeplitz matrix system to calculate the prediction coefficients \(p\). Do that for both signals, calculate the prediction error signal of one signal, and apply the \(p\) of the other signal recursively. The \(p\)s have to be recalculated for each block. Find a good-sounding prediction filter size \(m\).

    Some hints:
    Let the first block start at \(m+1\) and the last at len(x)-blocksize at the latest.
    Consider: scipy.signal.correlate
    Consider: scipy.linalg.solve_toeplitz(...)
    Consider: np.inner(x[t-1:t-m-1:-1], p)
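
    A sketch following the hints; block size 1024 and filter order \(m=32\) are starting points (note that the recursive filter can grow large when the prediction filter is near instability, so normalizing at the end is advisable):

    import numpy as np
    from scipy import linalg, signal
    from scipy.io import wavfile

    def lpc(block, m):
        n = len(block)
        r = signal.correlate(block, block, mode='full')[n - 1:n + m]  # r[0..m]
        return linalg.solve_toeplitz(r[:m], r[1:m + 1])               # solve R p = r

    fs, x1 = wavfile.read('guit3.wav')
    _, x2 = wavfile.read('fox.wav')
    x1, x2 = x1.astype(float), x2.astype(float)
    m, blocksize = 32, 1024
    L = min(len(x1), len(x2))
    y = np.zeros(L)
    for start in range(m + 1, L - blocksize, blocksize):
        p1 = lpc(x1[start:start + blocksize], m)
        p2 = lpc(x2[start:start + blocksize], m)
        for t in range(start, start + blocksize):
            e = x1[t] - np.inner(x1[t - 1:t - m - 1:-1], p1)  # prediction error of x1
            y[t] = e + np.inner(y[t - 1:t - m - 1:-1], p2)    # resynthesize with p2
    wavfile.write('lpc_vocoder.wav', fs, (y / np.abs(y).max()).astype(np.float32))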

  18. Implement a pitch detector based on autocorrelation. For each block of a signal, calculate the autocorrelation with scipy.signal.correlate (only the second half is used). Then find the first positive zero crossing of the autocorrelation. From there to the end, find the positive maximum. Finally, find the leftmost peak to the right of the first positive zero crossing that is higher than 80% of the maximum. From the position of that peak, calculate the frequency. As input signal, start with a single sine with linearly increasing frequency (hint: sin(np.pi*a*t**2) has frequency a*t at time t), then also include up to 7 harmonics with arbitrary phases and amplitudes. Maybe also add noise. Compare the detected pitches to the correct ones in a plot.
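
    A sketch of the per-block detector, reading "positive zero crossing" as an upward (negative-to-positive) crossing; the block size, chirp rate, and duration are arbitrary test choices:

    import numpy as np
    from scipy import signal
    import matplotlib.pyplot as plt

    def detect_pitch(block, fs):
        n = len(block)
        ac = signal.correlate(block, block, mode='full')[n - 1:]  # lags 0..n-1
        up = np.where((ac[:-1] < 0) & (ac[1:] >= 0))[0]           # upward zero crossings
        if len(up) == 0:
            return 0.0
        z = up[0] + 1
        mx = ac[z:].max()
        for t in range(z + 1, n - 1):             # leftmost peak above 80% of the max
            if ac[t] >= 0.8 * mx and ac[t - 1] <= ac[t] >= ac[t + 1]:
                return fs / t                     # period in samples -> frequency
        return 0.0

    fs, block, a = 44100, 2048, 100.0
    t = np.arange(4 * fs) / fs
    x = np.sin(np.pi * a * t**2)                  # frequency a*t at time t
    starts = range(0, len(x) - block, block)
    est = [detect_pitch(x[s:s + block], fs) for s in starts]
    true = [a * (s + block / 2) / fs for s in starts]
    plt.plot(true, label='true'); plt.plot(est, label='detected')
    plt.legend(); plt.show()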

  19. Demonstrate panning on the guitar sound. Modulate the panning parameter \(p\) at 1Hz between -1 and +1.
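
    A sketch using constant-power panning, one common panning law (the lecture notes may define the mapping differently):

    import numpy as np
    from scipy.io import wavfile

    fs, x = wavfile.read('guit3.wav')
    x = x.astype(float) / np.abs(x).max()
    n = np.arange(len(x))
    p = np.sin(2 * np.pi * 1.0 * n / fs)           # panning parameter at 1Hz, -1..+1
    theta = (p + 1) * np.pi / 4                    # map -1..+1 to 0..pi/2
    y = np.stack([np.cos(theta) * x, np.sin(theta) * x], axis=1)
    wavfile.write('panned.wav', fs, y.astype(np.float32))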

  20. Convolve the input signal with two different white-noise signals (length about 4000 samples) to get a decorrelated stereo output. Test it with our test signals and also white noise as input signal. Play the output followed by a convolved but non-decorrelated (use the same white-noise for left and right) signal to hear the difference.
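
    A sketch; the fixed random seed and the normalization are arbitrary choices:

    import numpy as np
    from scipy.io import wavfile

    fs, x = wavfile.read('guit3.wav')
    x = x.astype(float)
    rng = np.random.default_rng(0)
    hL, hR = rng.standard_normal(4000), rng.standard_normal(4000)
    stereo = np.stack([np.convolve(x, hL), np.convolve(x, hR)], axis=1)
    mono = np.stack([np.convolve(x, hL)] * 2, axis=1)   # same noise left and right
    y = np.concatenate([stereo, mono])                  # decorrelated, then not
    wavfile.write('decorrelated.wav', fs, (y / np.abs(y).max()).astype(np.float32))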

  21. Implement Moorer's reverberator for a 2D room of 10m width and 15m depth. The sound source is at 2m from the left wall and 3m from the back wall, and the listener is 7m from the left and 6m from the back wall. Add the direct sound, three early reflections (one from left, right and back wall), and three IIR comb filters with a delay according to three modes of the room (n=(1,0),(0,1),(1,1)) and no low-pass filtering. Also, no all-pass filter. Choose the feedback of the comb filters at about 0.5 (also try 0.9), and mix their outputs to \(y_1\) with the same factor. The speed of sound is 343m/s.
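
    A sketch; the early reflections come from first-order mirror-image sources, the mode frequencies from \(f = \frac{c}{2}\sqrt{(n_1/W)^2 + (n_2/D)^2}\), and the comb delays are taken as \(f_s/f\) (rounded). The \(1/d\) distance attenuation and the output mix are assumptions:

    import numpy as np
    from scipy import signal
    from scipy.io import wavfile

    c, W, D = 343.0, 10.0, 15.0                    # speed of sound, room width/depth
    src, lis = np.array([2.0, 3.0]), np.array([7.0, 6.0])

    fs, x = wavfile.read('guit3.wav')
    x = x.astype(float)
    y = np.zeros(len(x) + fs)                      # leave room for the tail

    mirrors = [src,                                # direct sound
               [-src[0], src[1]],                  # reflection at the left wall
               [2 * W - src[0], src[1]],           # right wall
               [src[0], -src[1]]]                  # back wall
    for mpos in mirrors:
        d = np.linalg.norm(np.array(mpos) - lis)
        k = int(round(d / c * fs))
        y[k:k + len(x)] += x / d                   # delay plus 1/d attenuation

    def comb(x, delay, g=0.5):                     # IIR comb: 1/(1 - g z^-delay)
        a = np.zeros(delay + 1)
        a[0], a[delay] = 1.0, -g
        return signal.lfilter([1.0], a, x)

    xt = np.concatenate([x, np.zeros(fs)])
    for n1, n2 in [(1, 0), (0, 1), (1, 1)]:
        f = c / 2 * np.sqrt((n1 / W)**2 + (n2 / D)**2)   # room mode frequency
        y += comb(xt, int(round(fs / f))) / 3            # same factor for all three
    wavfile.write('moorer.wav', fs, (y / np.abs(y).max()).astype(np.float32))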

  22. Ok, this reads long, but the solution is actually really short. Find the impulse response of your own room. To do so, create an MLS with a shift register of size 17. You can use scipy.signal.max_len_seq(17)[0]. Do not forget to map it to +1/-1 instead of 1/0, and make sure it is converted to float. Then create a sound of twice that size with the repeating MLS in it, but starting in the middle of the MLS (so: the last half of the MLS, the full MLS, and the first half of the MLS concatenated).

    Play that sound and record with the microphone at the same time to capture the room response to the MLS. This works best with the loudspeakers far away from, or turned away from, the microphone, for a more audible effect. You can try to use sounddevice.playrec (and sounddevice.wait) for this. However, for me this did not work (I got only zeros), so I wrote the sound to a wav file and did the playback and recording in an audio application (I used Audacity). If anyone finds the reason for that, it would be appreciated!

    Then cross-correlate the recording with the MLS (with option 'valid'). Search for the onset of the impulse response (the first value greater than half of the maximum, then go a few samples back), and cut out 200ms of impulse response. Then convolve the Fox sound with the impulse response. Additionally, write the impulse response to a wav file and send it to me as well, so we can compare rooms in the PS.
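
    A sketch of the processing around the external playback/recording step; the recording file name rec.wav, the 20-sample onset back-off, and the normalizations are assumptions:

    import numpy as np
    from scipy import signal
    from scipy.io import wavfile

    fs = 44100
    mls = signal.max_len_seq(17)[0].astype(float) * 2 - 1   # map 1/0 to +1/-1
    n = len(mls)                                            # 2**17 - 1
    play = np.concatenate([mls[n // 2:], mls, mls[:n // 2]])
    wavfile.write('mls.wav', fs, (0.5 * play).astype(np.float32))

    # ... play mls.wav and record it simultaneously (e.g. in Audacity),
    # save the (longer) recording as rec.wav, then continue here ...

    _, rec = wavfile.read('rec.wav')
    h = signal.correlate(rec.astype(float), mls, mode='valid') / n
    onset = max(int(np.argmax(h > 0.5 * h.max())) - 20, 0)  # a few samples back
    ir = h[onset:onset + int(0.2 * fs)]                     # 200ms impulse response
    wavfile.write('ir.wav', fs, (ir / np.abs(ir).max()).astype(np.float32))

    _, fox = wavfile.read('fox.wav')
    y = np.convolve(fox.astype(float), ir)
    wavfile.write('room_fox.wav', fs, (y / np.abs(y).max()).astype(np.float32))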
