waves.js — forked from IDMIL/DigitalAudioWorkbench
669 lines (538 loc) · 26.8 KB
/*
<!-- note to maintainers:
This document serves as both the README for the project and as the source
code for the heart of the simulation. This is done since certain aspects of the
documentation of the project can only be adequately precise by including source
code inline; rather than duplicate the code across the documentation page and
the source document, they are kept together in one place. As such, the prose
block at the beginning and the code block at the end are carefully enclosed in
interlocking delimiters so that javascript ignores the README text and the
README pretty-prints the javascript source. Take care not to disturb these
block delimiters.
Furthermore, take care to limit the scope of the source code in this document
to only that which is essential for understanding the core of the simulation.
-->
# The Digital Audio Workbench
https://idmil.gitlab.io/course-materials/mumt203/interactive-demos
## Introduction
The purpose of the digital audio workbench is to illustrate key concepts in
digital audio theory with interactive visualizations of each stage of the
analog-to-digital conversion (ADC) and digital-to-analog conversion (DAC)
processes. These visualizations are inspired by demonstrations using
oscilloscopes and spectrum analyzers to compare the analog signal input into
the ADC process with the analog signal output by the DAC process, e.g.
https://youtu.be/cIQ9IXSUzuM
By experimenting with the settings of the simulation, numerous key concepts in
digital signal theory can be nicely illustrated, such as aliasing, quantization
error, critical sampling, undersampling and oversampling, and many others. The
interactive interface allows the simulation to be explored freely; users can
examine the signals visually through numerous graphs, or by listening to the
test signals directly.
## Implementation
Since our demonstration takes place purely in the digital domain, we
unfortunately cannot use real continuous time analog inputs and outputs.
Instead, we simulate the ADC-DAC processes in the discrete time domain. The
analog input and output are represented as discrete time signals with a high
sampling rate; at the time of writing, the maximum sampling rate supported
by WebAudio is 96 kHz.
The ADC process consists of several steps, including antialiasing, sampling,
and quantization. All of these are simulated in our model: antialiasing is
achieved with a windowed sinc FIR lowpass filter of order specified by the
user; sampling is approximated by downsampling the input signal by an
integer factor; and quantization is achieved by multiplying the sampled
signal (which ranges from -1.0 to 1.0) by the maximum integer value possible
given the requested bit depth (e.g. 255 for a bit depth of 8 bits), and then
rounding every sample to the nearest integer. The DAC process is simulated
in turn by zero stuffing and lowpass filtering the sampled and quantized
output of the ADC simulation.
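As a concrete illustration, the quantization step described above can be
sketched in a few lines. This is a simplified model for exposition only; the
actual `quantize()` function later in this file also distinguishes mid-tread
from mid-rise quantization and supports dithering.

```javascript
// Simplified sketch of the quantization step: scale a [-1.0, 1.0] signal by
// the maximum integer value for the bit depth, round each sample to the
// nearest integer, and scale back. Illustrative only; not the exact
// implementation used below.
function quantizeSketch(samples, bitDepth) {
  const maxInt = Math.pow(2, bitDepth) - 1; // e.g. 255 for 8 bits
  return samples.map(x => Math.round(x * maxInt) / maxInt);
}
```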
In summary, the continuous time input is simulated by a 96 kHz discrete time
signal, the sampled output of the ADC process is simulated by a downsampled
and quantized signal, and the continuous time reconstruction output by the
DAC is simulated by upsampling the "sampled" signal back to 96 kHz. In our
tests we have found this model to be reasonable; many key concepts, such as
critical sampling, aliasing, and quantization noise are well represented in
our simulation.
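The downsampling and zero-stuffing halves of this model can be sketched as
follows. These helper names are illustrative and do not appear in the
implementation below, which fuses these steps into the rendering stages.

```javascript
// "Sample" by keeping every kth point of the high-rate signal, then
// "reconstruct" by stuffing zeros back in between the kept samples. In the
// real pipeline the stuffed signal is then lowpass filtered and scaled by k
// to restore the original amplitude.
function downsampleSketch(signal, k) {
  const out = new Float32Array(Math.ceil(signal.length / k));
  for (let n = 0; n < out.length; n++) out[n] = signal[n * k];
  return out;
}

function zeroStuffSketch(sampled, k, originalLength) {
  const out = new Float32Array(originalLength); // zero-initialized
  for (let n = 0; n < sampled.length; n++) out[n * k] = sampled[n];
  return out;
}
```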
For more details, the reader is encouraged to peruse the rest of the source
code in this document. Many comments have been included to aid readers who
are unfamiliar with javascript. Questions about the implementation of the
simulation can only be definitively answered by reading the source code, but
please feel free to contact the project maintainers if anything remains
unclear.
```javascript
*/
// `renderWavesImpl` returns an anonymous function that is bound in the widget
// constructor. This is done in order to separate the implementation of the
// simulation from the other implementation details so that this documentation
// can be more easily accessed.
const soundTimeSeconds = 1.5;
const fadeTimeSeconds = 0.125;
let audioSources = {}
async function loadAudioSources() {
let audioCtx = new AudioContext({sampleRate: 96000});
const sourceFiles = [
["/wav-samples/bach_cello.wav", "cello"],
["/wav-samples/drums.wav", "drums"],
["/wav-samples/sweep_20_4000hz.wav", "sweep"]
]
for (let i = 0; i < sourceFiles.length; i++) {
try {
const response = await fetch(sourceFiles[i][0]);
audioSources[sourceFiles[i][1]] = await audioCtx.decodeAudioData(await response.arrayBuffer());
} catch (e) {
console.error("tried to fetch " + sourceFiles[i][0], e);
}
}
}
function formantFrequencyStrength(freq, formant1, formant2, decayPerOctave) {
if (freq < 1) {
return 0;
}
const f1Decay = (formant1 > 1) ? Math.pow(decayPerOctave, Math.abs(Math.log2(formant1) - Math.log2(freq))) : 0;
const f2Decay = (formant2 > 1) ? Math.pow(decayPerOctave, Math.abs(Math.log2(formant2) - Math.log2(freq))) : 0;
return Math.max(f1Decay, f2Decay);
}
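// For example (illustrative): when freq equals formant1, the octave distance
// is zero and Math.pow(decayPerOctave, 0) === 1, so the strength peaks at 1
// on a formant; one octave away from an isolated formant it falls to
// decayPerOctave itself (e.g. 0.2).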
function calculateHarmonics(settings) {
let harmonic_number = 1;
let harmonic_amplitude = 1;
let invert = 1;
let harmInc = (settings.harmType === "Odd" || settings.harmType === "Even") ? 2 : 1;
// data from SHARC dataset: https://web.archive.org/web/20090226034059/http://www.timbre.ws/sharc/
const clarinetHarmonics = [
1.0, 0.020330578512396693, 0.5368506493506493, 0.045386658795749706,
0.39042207792207795, 0.13839728453364816, 0.49614521841794573, 0.038146399055489964,
0.10071428571428573, 0.05957201889020071, 0.0363370720188902, 0.08095926800472256,
0.03358028335301062, 0.046177685950413216, 0.008293978748524203, 0.026933293978748524,
0.011124557260920896, 0.008400236127508854, 0.0048524203069657615, 0.011481700118063754,
0.008500590318772138, 0.008288075560802834, 0.0031316410861865407, 0.0030991735537190084,
0.0025974025974025974, 0.004126328217237308, 0.000655253837072019, 0.00017709563164108617,
0.00012101534828807555, 0.0004309327036599764, 0.000678866587957497, 0.0006434474616292798,
0.0004929161747343566, 0.0006463990554899646, 0.00035419126328217233, 0.00037190082644628097,
0.0001180637544273908, 0.0005814639905548997
]
for (let i = 0; i < settings.numHarm; i++) {
// the amplitude of each harmonic depends on the harmonic slope setting
if (settings.harmSlope === "lin") harmonic_amplitude = 1 - i / settings.numHarm;
else if (settings.harmSlope === "1/x") harmonic_amplitude = 1 / harmonic_number;
else if (settings.harmSlope === "1/x2") harmonic_amplitude = 1 / harmonic_number / harmonic_number;
else if (settings.harmSlope === "flat") harmonic_amplitude = 1;
else if (settings.harmSlope === "log") {
harmonic_amplitude = Math.exp(-0.1 * (harmonic_number - 1));
} else if (settings.harmSlope === "clarinet") {
harmonic_amplitude = i < clarinetHarmonics.length ? clarinetHarmonics[i] : 0;
} else if (settings.harmSlope === "vowel a") {
harmonic_amplitude = formantFrequencyStrength(harmonic_number * settings.fundFreq,
850, 1610, 0.2);
} else if (settings.harmSlope === "vowel e") {
harmonic_amplitude = formantFrequencyStrength(harmonic_number * settings.fundFreq,
390, 2300, 0.2);
} else if (settings.harmSlope === "vowel i") {
harmonic_amplitude = formantFrequencyStrength(harmonic_number * settings.fundFreq,
240, 2400, 0.2);
} else if (settings.harmSlope === "vowel o") {
harmonic_amplitude = formantFrequencyStrength(harmonic_number * settings.fundFreq,
360, 640, 0.2);
} else if (settings.harmSlope === "vowel u") {
harmonic_amplitude = formantFrequencyStrength(harmonic_number * settings.fundFreq,
250, 595, 0.2);
}
// In case the harmonic slope is 1/x^2 and the harmonic type is "odd",
// by inverting every other harmonic we generate a nice triangle wave.
if (settings.harmSlope === "1/x2" && settings.harmType === "Odd") {
harmonic_amplitude = harmonic_amplitude * invert;
invert *= -1;
}
// the frequency of each partial is a multiple of the fundamental frequency
settings.harmonicFreqs[i] = harmonic_number * settings.fundFreq;
// The harmonic amplitude is calculated above according to the harmonic
// slope setting, taking into account the special case for generating a
// triangle.
settings.harmonicAmps[i] = harmonic_amplitude;
// With harmonic type set to "even" we want the fundamental and even
// harmonics. To achieve this, we increment the harmonic number by 1 after
// the fundamental and by 2 after every other partial.
if (i === 0 && settings.harmType === "Even") harmonic_number += 1;
else harmonic_number += harmInc;
}
}
function getAdditiveSynthSample(settings, n) {
let sample = 0;
for (let harmonic = 0; harmonic < settings.numHarm; harmonic++) {
if (settings.harmonicFreqs[harmonic] >= 96000 / 2) {
// Our input signal is not truly analog, but is sampled at 96 kHz, the maximum sample rate supported by WebAudio.
// If we generate partials at frequencies above that Nyquist limit, they will alias in the input.
return sample;
}
let fundamental_frequency = settings.harmonicFreqs[0];
let frequency = settings.harmonicFreqs[harmonic];
let amplitude = settings.harmonicAmps[harmonic];
// convert phase offset specified in degrees to radians
let phase_offset = Math.PI / 180 * settings.phase;
// adjust phase offset so that harmonics are shifted appropriately
let phase_offset_adjusted = phase_offset * frequency / fundamental_frequency;
let radian_frequency = 2 * Math.PI * frequency;
let phase_increment = radian_frequency / WEBAUDIO_MAX_SAMPLERATE;
let phase = phase_increment * n + phase_offset_adjusted;
// accumulate the amplitude contribution from the current harmonic
sample += amplitude * Math.sin(phase);
}
return sample;
}
function getSamples(settings, destination) {
let sample = 0;
if (settings.inputType === "Additive Synth") {
destination.forEach((_, n, arr) => {
arr[n] = getAdditiveSynthSample(settings, n);
});
} else {
for (const [name, buffer] of Object.entries(audioSources)) {
if (settings.inputType === name) {
buffer.copyFromChannel(destination, 0, 0);
}
}
}
if (settings.noiseFloor > -96) {
const noiseGain = Math.pow(10, settings.noiseFloor / 20);
destination.forEach((x, n, arr) => {
arr[n] = x + (Math.random() * 2 - 1) * noiseGain;
});
}
}
function normalize(arr, targetAmplitude) {
const amp = Math.max(Math.max(...arr), -Math.min(...arr));
// normalize and apply amplitude scaling
arr.forEach((x, n, y) => y[n] = targetAmplitude * x / amp);
}
function filterSignal(signal, frequency, order, mode, filterKernel, fs=WEBAUDIO_MAX_SAMPLERATE) {
// specify the filter parameters; Fs = sampling rate, Fc = cutoff frequency
// The cutoff for the antialiasing filter is set to the Nyquist frequency
// of the simulated sampling process. The sampling rate of the "sampled"
// signal is WEBAUDIO_MAX_SAMPLERATE / the downsampling factor. This is
// divided by 2 to get the Nyquist frequency.
if (mode === "FIR") {
let firCalculator = new Fili.FirCoeffs();
let filterCoeffs = firCalculator.lowpass(
{
order: order
, Fs: fs
, Fc: frequency
});
// generate the filter
let filter = new Fili.FirFilter(filterCoeffs);
// apply the filter
// filter.multiStep(signal);
signal.forEach((x, n, y) => y[n] = filter.singleStep(x));
// time shift the signal by half the filter order to compensate for the
// delay introduced by the FIR filter
const shift = order / 2;
for (let i = 0; i < signal.length - shift; i++) {
signal[i] = signal[i + shift];
}
for (let i = signal.length - shift; i < signal.length; i++) {
signal[i] = 0;
}
if (filterKernel) {
for (let i = 0; i < filterCoeffs.length; i++) {
filterKernel[i] = filterCoeffs[i];
}
}
} else if (mode === "Butterworth") {
let iirCalculator = new Fili.CalcCascades();
let characteristic = "butterworth";
let filterCoeffs = iirCalculator.lowpass({
order: order, // number of cascaded biquad stages
characteristic: characteristic,
Fs: fs, // sampling frequency
Fc: frequency, // cutoff frequency / center frequency for bandpass, bandstop, peak
preGain: false // adds one constant multiplication for highpass and lowpass
// k = (1 + cos(omega)) * 0.5 / k = 1 with preGain == false
});
let filter = new Fili.IirFilter(filterCoeffs);
signal.forEach((x, n, y) => y[n] = filter.singleStep(x));
if (filterKernel) {
filterKernel[0] = 1;
let filter = new Fili.IirFilter(filterCoeffs);
filterKernel.forEach((x, n, y) => y[n] = filter.singleStep(x));
}
}
// return filterCoeffs;
}
function getDither(ditherType) {
switch (ditherType) {
case "Rectangular" :
return (2 * Math.random() - 1);
case "Triangular" :
return (Math.random() - Math.random());
case "Gaussian" :
// box muller transform, mean=0 std=0.5
return 0.5 * Math.sqrt(-2.0 * Math.log(1 - Math.random())) * Math.cos(2.0 * Math.PI * Math.random())
}
}
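// Note on the distributions above (illustrative): "Rectangular" is uniform on
// (-1, 1); "Triangular" is the difference of two uniforms, giving a
// triangular density on (-1, 1) (TPDF dither); "Gaussian" is unbounded, with
// mean 0 and standard deviation 0.5 via the Box-Muller transform.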
function addDitherToHistogram(settings, dither) {
const bin = Math.floor(dither / settings.ditherHistogramBinSize) * settings.ditherHistogramBinSize;
if (bin in settings.ditherHistogram) {
settings.ditherHistogram[bin]++;
} else {
settings.ditherHistogram[bin] = 1;
}
}
function quantize(y, quantizationType, stepSize) {
switch (quantizationType) {
case "midTread" :
// clamp the input to [-1, 0.99] so a full-scale sample maps to the top code
return stepSize * Math.floor(Math.min(Math.max(-1, y), 0.99) / stepSize + 0.5);
case "midRise" :
return stepSize * (Math.floor(Math.min(Math.max(-1, y), 0.99) / stepSize) + 0.5);
}
}
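// Worked example (illustrative): with stepSize = 0.5 and y = 0.3,
//   midTread: 0.5 * Math.floor(0.3 / 0.5 + 0.5) = 0.5 * Math.floor(1.1) = 0.5
//   midRise:  0.5 * (Math.floor(0.3 / 0.5) + 0.5) = 0.5 * 0.5 = 0.25
// Mid-tread has a level at zero; mid-rise straddles zero with levels at
// half-step offsets.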
function applyFade(arr, normalize) {
let fade = (_, n, arr) => {
let fadeTimeSamps = Math.min(fadeTimeSeconds * WEBAUDIO_MAX_SAMPLERATE, arr.length / 2);
// The conditional ensures there is a fade even if the fade time is longer than the signal
if (n < fadeTimeSamps)
arr[n] = (n / fadeTimeSamps) * arr[n] / normalize;
else if (n > arr.length - fadeTimeSamps)
arr[n] = ((arr.length - n) / fadeTimeSamps) * arr[n] / normalize;
else arr[n] = arr[n] / normalize;
};
arr.forEach(fade);
}
// Rendering steps ----------------------------------------------------------
function renderOriginal(settings, fft, playback) {
let original = playback ? settings.buffers.originalUnfiltered.playback : settings.buffers.originalUnfiltered.display;
// calculate harmonics ------------------------------------------------------
// The signal is generated using simple additive synthesis. Because of this,
// the exact frequency content of the signal can be determined a priori based
// on the settings. We generate this information here so that it can be used
// not only by the synthesis process below, but also by several of the graphs
// used to illustrate the frequency domain content of the signal.
// We only calculate the harmonics for the simulation; it is assumed they will
// already have been calculated earlier when rendering for playback
if (!playback) {
calculateHarmonics(settings);
}
// render original wave -----------------------------------------------------
// initialize the signal buffer with all zeros (silence)
original.fill(0);
// For the sample at time `n` in the signal buffer `original`,
// generate the sum of all the partials based on the previously calculated
// frequency and amplitude values.
getSamples(settings, original);
normalize(original, settings.amplitude);
settings.reconstructionFilterFrequency = (settings.sampleRate / settings.downsamplingFactor) / 2;
}
function getInterpolatedSample(array, i) {
if (i <= 0) {
return array[0];
}
if (i >= array.length - 1) {
return array[array.length - 1];
}
let lowIndex = Math.floor(i);
let highIndex = lowIndex + 1;
return array[lowIndex] * (highIndex - i) + array[highIndex] * (i - lowIndex);
}
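// Example (illustrative): for the array [0, 10] and i = 0.25, lowIndex = 0
// and highIndex = 1, so the result is 0 * (1 - 0.25) + 10 * (0.25 - 0) = 2.5,
// i.e. plain linear interpolation between neighbouring samples.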
function renderDeltaSigma(settings, fft, playback) {
let originalUnfiltered = playback ? settings.buffers.originalUnfiltered.playback : settings.buffers.originalUnfiltered.display;
let deltaSigma = playback ? settings.buffers.deltaSigma.playback : settings.buffers.deltaSigma.display;
let reconstructed = playback ? settings.buffers.reconstructed.playback : settings.buffers.reconstructed.display;
let step = settings.deltaSigmaStep;
if (settings.deltaSigmaSamplingRate <= (WEBAUDIO_MAX_SAMPLERATE/2)) {
if (!playback && deltaSigma.length !== settings.displaySignalSize) {
deltaSigma = new Float32Array(settings.displaySignalSize);
}
let samplePeriod = Math.floor(WEBAUDIO_MAX_SAMPLERATE / settings.deltaSigmaSamplingRate);
let ds_state = 0;
for (let i = 0; i < originalUnfiltered.length; i += samplePeriod) {
if (ds_state > originalUnfiltered[i]) {
ds_state -= step;
} else {
ds_state += step;
}
for (let j = 0; j < samplePeriod; j += 1) {
deltaSigma[i+j] = ds_state;
reconstructed[i+j] = ds_state;
}
}
} else if (!playback) {
// Simulate a higher sample rate
if (settings.buffers.deltaSigma.display.length !== settings.displaySignalSize * settings.deltaSigmaSamplingRate / WEBAUDIO_MAX_SAMPLERATE) {
settings.buffers.deltaSigma.display = new Float32Array(settings.displaySignalSize * settings.deltaSigmaSamplingRate / WEBAUDIO_MAX_SAMPLERATE);
settings.buffers.reconstructed.display = new Float32Array(settings.displaySignalSize * settings.deltaSigmaSamplingRate / WEBAUDIO_MAX_SAMPLERATE);
}
let upsampledOutput = new Float32Array(settings.buffers.deltaSigma.display.length);
let ds_state = 0;
let scale = originalUnfiltered.length / settings.buffers.deltaSigma.display.length;
for (let i = 0; i < settings.buffers.deltaSigma.display.length; i += 1) {
if (ds_state > getInterpolatedSample(originalUnfiltered, i * scale)) {
ds_state -= step;
} else {
ds_state += step;
}
settings.buffers.deltaSigma.display[i] = ds_state;
settings.buffers.reconstructed.display[i] = ds_state;
}
} else {
// playback at higher sample rate
let fullBuffer = new Float32Array(Math.floor(originalUnfiltered.length * settings.deltaSigmaSamplingRate / WEBAUDIO_MAX_SAMPLERATE));
let ds_state = 0;
let scale = originalUnfiltered.length / fullBuffer.length;
for (let i = 0; i < fullBuffer.length; i += 1) {
if (ds_state > getInterpolatedSample(originalUnfiltered, i * scale)) {
ds_state -= step;
} else {
ds_state += step;
}
fullBuffer[i] = ds_state;
}
filterSignal(fullBuffer, WEBAUDIO_MAX_SAMPLERATE / 2, 200, 'FIR', undefined, settings.deltaSigmaSamplingRate);
scale = 1 / scale;
for (let i = 0; i < reconstructed.length; i += 1) {
let s = getInterpolatedSample(fullBuffer, i * scale);
reconstructed[i] = s;
deltaSigma[i] = s;
}
}
}
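// Illustrative trace of the tracking loop above: with a constant input of
// 0.5 and step = 0.25, a state starting at 0 moves through
// 0.25, 0.5, 0.75, 0.5, 0.75, ... — it ramps up to the input and then
// oscillates around it, with the tracking error bounded by one step.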
function applyAntialiasingFilter(settings, fft, playback) {
let originalUnfiltered = playback ? settings.buffers.originalUnfiltered.playback : settings.buffers.originalUnfiltered.display;
let original = playback ? settings.buffers.original.playback : settings.buffers.original.display;
let filterKernel = playback ? settings.buffers.filterKernel.playback : settings.buffers.filterKernel.display;
// apply antialiasing filter if applicable ----------------------------------
// The antialiasing and reconstruction filters are generated using Fili.js.
// (https://github.com/markert/fili.js/)
// Fili uses the windowed sinc method to generate FIR lowpass filters.
// Like real antialiasing and reconstruction filters, the filters used in the
// simulation are not ideal brick wall filters, but approximations.
// apply antialiasing only if the filter order is set
for (let i = 0; i < originalUnfiltered.length; i++) {
original[i] = originalUnfiltered[i];
}
filterKernel.fill(0);
if (settings.antialiasing > 1) {
let cutoff = (WEBAUDIO_MAX_SAMPLERATE / settings.downsamplingFactor) / 2;
let order = settings.antialiasing;
let firCalculator = new Fili.FirCoeffs();
let filterCoeffs = firCalculator.lowpass(
{
order: order
, Fs: WEBAUDIO_MAX_SAMPLERATE
, Fc: cutoff
});
filterSignal(original, cutoff, order, settings.filterType, filterKernel);
} else {
filterKernel[0] = 1;
}
}
function downsampleWithQuantization(settings, fft, playback) {
// generate new signal buffers for the downsampled signal and quantization
// noise whose sizes are initialized according to the currently set
// downsampling factor
let original = playback ? settings.buffers.original.playback : settings.buffers.original.display;
if (playback && settings.buffers.downsampled.playback.length !== Math.round(original.length / settings.downsamplingFactor)) {
settings.buffers.downsampled.playback = new Float32Array(Math.round(original.length / settings.downsamplingFactor));
settings.buffers.downsampledWithQuantization.playback = new Float32Array(Math.round(original.length / settings.downsamplingFactor));
settings.buffers.quantNoise.playback = new Float32Array(Math.round(original.length / settings.downsamplingFactor));
} else if (settings.buffers.downsampled.display.length !== Math.round(original.length / settings.downsamplingFactor)) {
settings.buffers.downsampled.display = new Float32Array(Math.round(original.length / settings.downsamplingFactor));
settings.buffers.downsampledWithQuantization.display = new Float32Array(Math.round(original.length / settings.downsamplingFactor));
settings.buffers.quantNoise.display = new Float32Array(Math.round(original.length / settings.downsamplingFactor));
}
let reconstructed = playback ? settings.buffers.reconstructed.playback : settings.buffers.reconstructed.display;
let stuffed = playback ? settings.buffers.stuffed.playback : settings.buffers.stuffed.display;
let downsampled = playback ? settings.buffers.downsampled.playback : settings.buffers.downsampled.display;
let downsampledWithQuantization = playback ? settings.buffers.downsampledWithQuantization.playback : settings.buffers.downsampledWithQuantization.display;
let quantNoise = playback ? settings.buffers.quantNoise.playback : settings.buffers.quantNoise.display;
let quantNoiseStuffed = playback ? settings.buffers.quantNoiseStuffed.playback : settings.buffers.quantNoiseStuffed.display;
// downsample original wave -------------------------------------------------
// zero initialize the reconstruction, and zero stuffed buffers
reconstructed.fill(0);
stuffed.fill(0);
quantNoiseStuffed.fill(0);
// calculate the maximum integer value representable with the given bit depth
let maxInt = Math.pow(2, settings.bitDepth) - 1;
let stepSize = (settings.quantType === "midTread") ? 2 / (maxInt - 1) : 2 / (maxInt);
// generate the output of the simulated ADC process by "sampling" (actually
// just downsampling), and quantizing with dither. During this process, we
// also load the buffer for the reconstructed signal with the sampled values;
// this allows us to skip an explicit zero-stuffing step later
if (!playback) {
settings.ditherHistogram = {};
}
downsampled.forEach((_, n, arr) => {
// keep only every kth sample where k is the integer downsampling factor
let y = Math.min(Math.max(-1, original[n * settings.downsamplingFactor]), 1);
let quantized;
if (settings.bitDepth === BIT_DEPTH_MAX) {
quantized = y;
} else {
let dither = getDither(settings.ditherType) * settings.dither;
if (!playback) {
addDitherToHistogram(settings, dither);
}
quantized = quantize(y + dither, settings.quantType, stepSize);
}
// sparsely fill the reconstruction buffer to avoid having to zero-stuff
reconstructed[n * settings.downsamplingFactor] = quantized;
arr[n] = y;
downsampledWithQuantization[n] = quantized;
stuffed[n * settings.downsamplingFactor] = quantized * settings.downsamplingFactor;
// record the quantization error
quantNoise[n] = quantized - y;
quantNoiseStuffed[n * settings.downsamplingFactor] = quantNoise[n];
});
// To retain the correct amplitude, we must multiply the output of the
// filter by the downsampling factor.
reconstructed.forEach((x, n, arr) => arr[n] = x * settings.downsamplingFactor);
}
function antiImagingFilter(settings, fft, playback) {
let reconstructed = playback ? settings.buffers.reconstructed.playback : settings.buffers.reconstructed.display;
// render reconstructed wave by low pass filtering the zero stuffed array----
const freq = (settings.reconstructionFilterFrequency >= 0)
? settings.reconstructionFilterFrequency
: (WEBAUDIO_MAX_SAMPLERATE / settings.downsamplingFactor) / 2;
filterSignal(reconstructed, freq, settings.reconstructionFilterOrder, 'FIR'); // TODO: slider for order, start at 200
}
function renderWavesImpl(settings, fft) {
return (playback = false) => {
for (const stage of settings.renderStages) {
stage(settings, fft, playback);
}
// render FFTs --------------------------------------------------------------
// The FFTs of the signals at the various stages of the process are generated
// using fft.js (https://github.com/indutny/fft.js). The call to
// `realTransform()` performs the FFT, and the call to `completeSpectrum`
// fills the upper half of the spectrum, which is otherwise not calculated
// since it is a redundant reflection of the lower half of the spectrum.
if (!playback) {
for (const [key, value] of Object.entries(settings.buffers)) {
fft.realTransform(value.freq, value.display);
fft.completeSpectrum(value.freq);
}
for (let i = 0; i < settings.buffers.filterKernel.freq.length; ++i) {
settings.buffers.filterKernel.freq[i] *= 452;
}
}
// fade in and out and suppress clipping distortions ------------------------
// Audio output is windowed to prevent pops. The envelope is a simple linear
// ramp up at the beginning and linear ramp down at the end.
// This normalization makes sure the original signal isn't clipped.
// The output is clipped during the simulation, so this may reduce its peak
// amplitude a bit, but since the clipping adds distortion the perceived
// loudness is relatively the same as the original signal in my testing.
if (playback) {
let normalize = settings.amplitude > 1.0 ? settings.amplitude : 1.0;
for (const [key, value] of Object.entries(settings.buffers)) {
applyFade(value.playback, normalize);
}
}
}
}