I have an extensive blueprint for an instrument I've been prototyping for years, and I need to start thinking specifically about how I'm going to code certain things (with C++ and JUCE). I'm trying to learn as much as I can about FFTs and spectral manipulation because they're very relevant to the instrument, and CPU is a major consideration. Below are some questions I need to figure out. They apply regardless of whether it's an audio sample or an oscillator being played. I'm sure some of these questions don't make sense -- I'm very new to this and still learning.
1. How is it that I can just add partials to a sound and then IFFT without it sounding mangled? Say I have a sine wave, do an FFT, add a partial one octave below, then IFFT: won't the synchronization/phase stuff get messed up because the lower partial takes twice as long to complete a cycle as the original? Wouldn't the lower partial effectively become a half-sine waveform and sound totally different? I'm probably just confused about how the FFT works. (There's a small sketch of what I mean after the questions.)
2. With an oscillator wave shape that you want to spectrally manipulate, can you do the FFT ahead of time, when the shape changes in the UI, instead of repeatedly doing an FFT in the processor? That way you can just recall the saved FFT and start in the frequency domain, so the processor only has to do an IFFT to resynthesize instead of both an FFT and an IFFT (second sketch below). If this is possible for an oscillator, is there any way it can also be done for an audio sample? I assume the answer is no, because there's so much information in an audio sample that it would require an unrealistic amount of storage, whereas an oscillator is just a single FFT of a single cycle.
3. I've seen instruments that "simulate" hard sync (a time-domain process) in the frequency domain. Is it possible to simulate all the typical time-domain processes in the frequency domain? For example, can stuff like FM and PD be done after the FFT? The FM sketch below is my attempt at what that might look like. If so, where would I learn about doing that?
4. Is it faster or slower to do stuff in the frequency domain vs. the time domain? If it's slower in the frequency domain, would avoiding an FFT/IFFT per voice for, say, 100 simultaneous voices make it faster overall? I'm wondering if I should just do everything in the frequency domain until the very end of the process chain, sum all the spectra, and then do a single IFFT (last sketch below).
5. What are the main things that actually eat up CPU in a typical instrument? I feel like it's possible I'm focusing too much on avoiding FFTs when maybe they aren't as significant to speed as other things. Of course I will be profiling things extensively regardless.
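To make a few of these concrete, here are rough sketches of what I'm picturing. They're illustrations of my mental model, not proposals for real code, so please correct me where the model is wrong.

For question 1, this is the experiment I have in mind: a naive DFT/IDFT (a real app would use juce::dsp::FFT), a sine that completes 4 cycles in the window, and a partial added an octave below at bin 2. My (possibly wrong) understanding is that each bin is a sinusoid completing a whole number of cycles over the window, so the phases line up at the boundaries -- but that also seems to mean the window has to be long enough that the lower octave still lands on an integer bin (one cycle at bin 1 has no "bin 0.5" below it).

```cpp
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

static const double pi = std::acos (-1.0);

// Naive DFT/IDFT, just to make the bin math visible.
static std::vector<std::complex<double>> dft (const std::vector<double>& x)
{
    const size_t n = x.size();
    std::vector<std::complex<double>> X (n);
    for (size_t k = 0; k < n; ++k)
        for (size_t t = 0; t < n; ++t)
            X[k] += x[t] * std::polar (1.0, -2.0 * pi * (double) (k * t) / (double) n);
    return X;
}

static std::vector<double> idft (const std::vector<std::complex<double>>& X)
{
    const size_t n = X.size();
    std::vector<double> x (n);
    for (size_t t = 0; t < n; ++t)
    {
        std::complex<double> acc;
        for (size_t k = 0; k < n; ++k)
            acc += X[k] * std::polar (1.0, 2.0 * pi * (double) (k * t) / (double) n);
        x[t] = acc.real() / (double) n;
    }
    return x;
}

int main()
{
    const size_t n = 512;
    std::vector<double> x (n);
    for (size_t t = 0; t < n; ++t)
        x[t] = std::sin (2.0 * pi * 4.0 * (double) t / (double) n);   // 4 cycles -> bin 4

    auto X = dft (x);

    // Add a partial one octave below, at bin 2. A unit-amplitude sine at bin k
    // shows up as -i*n/2 in bin k and +i*n/2 in bin n-k (conjugate pair).
    const std::complex<double> amp (0.0, -0.5 * (double) n);
    X[2]     += amp;
    X[n - 2] += std::conj (amp);

    auto y = idft (X);
    // Expect y[t] == sin(2*pi*4*t/n) + sin(2*pi*2*t/n): both partials complete
    // whole cycles over the window, so nothing turns into a half-sine.
    std::printf ("y[1] = %f, expected = %f\n",
                 y[1],
                 std::sin (2.0 * pi * 4.0 / (double) n) + std::sin (2.0 * pi * 2.0 / (double) n));
}
```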
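For question 2, here's the shape of what I'm imagining. All the names are made up, I've ignored the UI-thread/audio-thread handoff (which I know needs to be lock-free in reality), and buffers are allocated inline for brevity, which you'd obviously never do on the audio thread:

```cpp
#include <juce_dsp/juce_dsp.h>
#include <complex>
#include <vector>

// Hypothetical: FFT the single-cycle shape once on the message thread, cache
// the spectrum, and keep only the inverse transform in the audio path.
class CachedSpectrumOsc
{
public:
    static constexpr int order = 11;                 // 2048-sample table
    static constexpr int size  = 1 << order;

    // Message thread: called whenever the UI edits the wave shape.
    void shapeChanged (const std::vector<float>& singleCycle)
    {
        std::vector<std::complex<float>> time (size);
        for (size_t i = 0; i < (size_t) size; ++i)
            time[i] = { singleCycle[i], 0.0f };

        baseSpectrum.resize (size);
        fft.perform (time.data(), baseSpectrum.data(), false);   // forward FFT, once
    }

    // Audio thread: spectral tweaks plus a single inverse FFT per table rebuild.
    void renderTable (std::vector<float>& table)
    {
        auto spectrum = baseSpectrum;                // work on a copy

        // Placeholder manipulation. For a real (non-complex) output, gains must
        // respect conjugate symmetry: gain(k) == gain(size - k).
        for (size_t k = 0; k < (size_t) size; ++k)
            spectrum[k] *= someSpectralGain ((int) k);

        std::vector<std::complex<float>> time (size);
        fft.perform (spectrum.data(), time.data(), true);        // inverse only

        table.resize (size);
        for (size_t i = 0; i < (size_t) size; ++i)
            table[i] = time[i].real();   // check your FFT engine's 1/N scaling here
    }

private:
    float someSpectralGain (int) const { return 1.0f; }          // placeholder

    juce::dsp::FFT fft { order };
    std::vector<std::complex<float>> baseSpectrum;
};
```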
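For question 3, the one case where I can sort of see how a time-domain process maps to the frequency domain is FM, because the spectrum is known in closed form: sin(wc*t + beta*sin(wm*t)) expands to a sum of sidebands at wc + n*wm with amplitudes J_n(beta), the Bessel functions of the first kind. So "FM after the FFT" would presumably mean writing those sidebands straight into the bins instead of modulating in time. A toy version (std::cyl_bessel_j is C++17 but missing from some standard libraries, e.g. libc++):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    const double beta      = 2.0;   // modulation index
    const int carrierBin   = 64;    // carrier frequency, expressed as an FFT bin
    const int modBin       = 16;    // modulator frequency, expressed as an FFT bin
    const int numSidebands = 8;

    std::vector<double> bins (256, 0.0);

    for (int n = -numSidebands; n <= numSidebands; ++n)
    {
        const int bin = carrierBin + n * modBin;
        if (bin <= 0 || bin >= (int) bins.size())
            continue;   // negative-frequency sidebands fold back with a sign flip; skipped here

        // J_{-n}(beta) == (-1)^n * J_n(beta)
        double a = std::cyl_bessel_j ((double) std::abs (n), beta);
        if (n < 0 && (n & 1))
            a = -a;

        bins[(size_t) bin] += a;
    }

    for (size_t k = 0; k < bins.size(); ++k)
        if (bins[k] != 0.0)
            std::printf ("bin %3zu: amplitude %+.4f\n", k, bins[k]);
}
```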
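And for question 4, the frequency-domain-until-the-end idea relies on the FFT being linear: ifft(X1 + X2 + ...) == ifft(X1) + ifft(X2) + ..., so 100 voices could share a single inverse transform per block. Hypothetical sketch (`inverseFFT` stands in for juce::dsp::FFT or whatever engine, and this assumes every voice uses the same frame size and alignment, which I suspect is the hard part):

```cpp
#include <complex>
#include <cstddef>
#include <functional>
#include <vector>

// Sum all voice spectra, then run one IFFT for the whole block instead of
// one per voice. The per-voice cost drops to cheap complex adds.
std::vector<float> mixVoicesSpectrally (
    const std::vector<std::vector<std::complex<float>>>& voiceSpectra,
    const std::function<std::vector<float> (const std::vector<std::complex<float>>&)>& inverseFFT)
{
    std::vector<std::complex<float>> mix (voiceSpectra.front().size());

    for (const auto& voice : voiceSpectra)
        for (size_t k = 0; k < mix.size(); ++k)
            mix[k] += voice[k];

    return inverseFFT (mix);   // one IFFT instead of N
}
```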
Just spitballing here. I'm an experienced C++ programmer (not in audio), so feel free to use whatever programming lingo. Any response to any of these questions is appreciated, thanks.