Who invented the FM synthesis algorithm in 1967?

It wasn't worth giving up. I was in a digital world, but the whole process was intensely musical. Programming and creating sounds, and figuring out how the two relate — all of that was, for me, the point. As if on cue, the famous French composer Pierre Boulez contacted Chowning and offered him an advisory position in Paris.

Chowning jumped at the opportunity, and soon found himself back in Paris, assisting in the development of the program. As Chowning revelled in his new role, Stanford struggled to license the FM synthesis patent: every company it approached said no. As a last resort, Stanford contacted Yamaha, a Japanese musical instrument manufacturer which, at the time, had already begun investigating the digital domain with an eye toward the distant future.

And as luck would have it, the company gave the patent a chance. With its tail between its legs, Stanford approached Chowning and extended an offer to return, this time as a research associate. Chowning agreed. Except there was a difference: while his efforts had once been disregarded and underappreciated, they were suddenly of the utmost importance to Stanford and its new patent licensee. For years, analog synthesizer instruments had ruled the market. But they came with their fair share of shortcomings (or unique quirks, depending on whom you ask).

From the 1940s to the early 1960s, electronic instruments were limited by their dependence on magnetic tape as a means of producing and recording sounds. Even newer developments in the mid-1960s, like the Moog or the Mellotron, were fickle: notes sounded different each time they were played, and there were often variations in pitch and amplitude.

Efforts to digitize the synthesizer had been made in previous decades, but were thwarted by the gargantuan size of computers and memory cards, and the fact that it took up to 30 minutes just to hammer out a few measures of music.

This level of commitment is evident in an excerpt from a July letter from Yamaha to Chowning. While Yamaha tinkered with digital synthesizer technology, it made great strides with its analogue synths, releasing two machines, the GX1 and the CS80, in limited runs of 10 units. By then, Yamaha was fully invested in the belief that the technology could make it millions of dollars, and it negotiated a licensing agreement with Stanford, securing rights to the technology for the next 17 years.

One of 17 schematics submitted in the FM synthesis patent application, which was filed in 1975 and approved in 1977.

But during the next several years, the company hit a number of roadblocks in rolling out its digital technology.

But the instruments were universally touted as great-sounding, which boded well for Yamaha. From the outset, the DX7, released in 1983, was a massive success — one that both Stanford and Yamaha minted money from.

Under the original licensing agreement, Stanford received royalties on every unit sold. The synthesizer exploded in popularity in a variety of markets (the U.S., Japan, Great Britain, and France) and continued to bring in substantial profits through its discontinuation in 1989. The company, once focused on diversification, now intensely drove its synthesizer production, producing some 1,000 electronic organs per day.

The patent was making more money than ever before — and at the height of this second wave, Stanford and Yamaha signed a new royalty agreement. The FM chip, which Yamaha had developed with Chowning, was selling in enormous volumes each year. When he retired in 1996, Chowning had enough influence to secure two professorships in his program — one for Julius O. Smith, and the other for Chris Chafe.

Above all else, Chowning loves to learn. Of course the technology at the time—the LSI technology—was evolving rapidly, and while they could not have envisioned a commercial instrument in 1971 or '72, they could see that maybe in a decade the technology would have evolved such that they could—with some special skills, and the development of their own LSI capability—build an affordable instrument.

And they did! I visited probably 15 or so times in those ten years and worked with their engineers developing tones, and they developed various forms of the algorithm. All the while, of course, large-scale integration was doubling its computational capacity and halving its size every year or so. So there was a lot of research trying to anticipate the convergence point of what they knew and what the technology would allow.

And that moment was when they introduced the GS-1, a large keyboard instrument, which was famously used by one group. I think Toto was the name of the group. Do you think the fact that Yamaha implemented a great deal of touch sensitivity and finger control in those instruments was as much responsible for their success as the FM synthesis technique itself? That was one of the things that was so very attractive—that they could couple the strike force on a key to the synthesis algorithm such that it not only affected intensity but also spectral bandwidth.

And that was very important because the spectral bandwidth, or shifting the centroid of the spectrum up in frequency with increased strike force, is critical to our notion of loudness. So they saw on the DX-7 that good musicians would make use of that in a way that not-so-good keyboard players would not.
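The coupling Chowning describes, strike force driving both intensity and spectral bandwidth, maps naturally onto the simple FM equation, where the modulation index controls how far energy spreads into sidebands. Below is a minimal NumPy sketch; the velocity-to-index scaling is an illustrative assumption, not Yamaha's actual mapping.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def fm_note(f_c, f_m, velocity, dur=0.5):
    """Simple FM voice (a sketch, not Yamaha's hardware): key velocity
    scales both the amplitude AND the modulation index, so harder
    strikes widen the spectrum as well as raising the level."""
    t = np.arange(int(SR * dur)) / SR
    amp = velocity           # intensity follows strike force
    index = 5.0 * velocity   # spectral bandwidth follows it too (assumed scaling)
    return amp * np.sin(2 * np.pi * f_c * t
                        + index * np.sin(2 * np.pi * f_m * t))

def centroid(x):
    """Spectral centroid in Hz: a rough proxy for perceived brightness."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / SR)
    return (freqs * mag).sum() / mag.sum()

soft = fm_note(440.0, 440.0, velocity=0.2)
hard = fm_note(440.0, 440.0, velocity=1.0)
# A harder strike shifts the spectral centroid upward.
print(centroid(soft) < centroid(hard))  # True
```

Because the index grows with velocity, the hard strike pushes energy into higher sidebands at f_c ± k·f_m, which raises the centroid: exactly the brightness cue Chowning says good players exploited.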

And I guess one of the British rock bands of the day had David Bristow—who was one of the factory voice people [11] for the DX-7—kind of advising them. And that was a revelation for this musician. That coupling happens on all musical instruments except for the organ.

And deprived of that, one loses a lot in the notion of loudness, and loudness, of course, is one of the basic musical parameters. What kind of encouragement did you get? Well, I was working on music, on compositions, and my whole environment was not one that was developing technology for the music industry but trying to make use of computers in a way that was expressive for my own interests in composition. At that moment I knew myself that I was on to something, without any confirmation from anyone else—although of course I played it for musician friends and computer science friends in the A.I. Lab.

And everyone was astounded that we could get so close with such a simple algorithm. What did you feel about FM synthesis being yoked to a keyboard? What did you feel about it being used in that way?

I do now, because I guess it defines an era in a way. But I was not so interested in the DX-7 as a musical instrument for the reasons that you just mentioned. One lacked a kind of control—over pitch frequencies especially—that I had been very interested in. Ligeti was very interested in that, and we made the case to Yamaha. Of course their sales force had little interest in microtuning because most of their world was in common-practice tuning—the pop music world. But we explained to them that they could improve the piano tones if they had microtuning because of the stretched octave phenomenon etc.

So in the DX-7II they provided it, and Ligeti was a very strong voice in that effort at persuading them. Ligeti was a friend of Simha, and a couple of years later Arom dedicated a paper—that paper, I think—to Ligeti.

Do you think his [Max Mathews's] achievements are slightly underrated, in that he was able to see that so early—at a time when actually implementing those ideas was so time-consuming and costly? Well, we certainly credit him with enormous insight. So he not only saw that this was powerful and that sampling was a general way—I mean that was what caught my attention: his assertion that any sound that can be produced by a loudspeaker can be synthesized using a computer.

And that was such a stunning statement because we know that loudspeakers have an enormous range of possible sounds. But it was not only that. It was his insight that psychoacoustics would be an important discipline in the evolution of this as an artistic medium.

And indeed it was. Perception, psychoacoustics turned out to be critical in a way that few people foresaw. Typically, if an electronic musician today wants things to be louder, they just push up the pots on the gain. So it gets pretty complicated. You can tell the difference between a very distant, loud trumpet and a close-up quiet flute even though they may have the same sound pressure level when they reach your ear. Their RMS intensity could be the same except that we perceive one as being loud and the other soft because we know something about auditory context.
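The distinction Chowning draws, equal sound pressure but unequal perceived loudness, is easy to demonstrate numerically: two signals can have identical RMS levels while their spectra, and hence their spectral centroids, differ widely. A rough sketch follows; the "bright" and "dark" test tones are invented stand-ins for the distant trumpet and close flute, not measured instrument data.

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR  # one second of samples

# "bright" stands in for a loud brass tone heard at a distance
# (energy spread into upper harmonics); "dark" for a quiet,
# close-up flute (energy mostly in the fundamental).
bright = sum(np.sin(2 * np.pi * 440 * k * t) / k ** 0.5 for k in range(1, 9))
dark = np.sin(2 * np.pi * 440 * t)

def rms(x):
    """Root-mean-square level: the physical 'sound pressure' proxy."""
    return np.sqrt(np.mean(x ** 2))

bright *= rms(dark) / rms(bright)  # equalize the two RMS levels exactly

def centroid(x):
    """Spectral centroid in Hz: where the energy sits in frequency."""
    mag = np.abs(np.fft.rfft(x))
    return (np.fft.rfftfreq(len(x), 1 / SR) * mag).sum() / mag.sum()

print(np.isclose(rms(bright), rms(dark)))  # same physical level
print(centroid(bright) > centroid(dark))   # very different spectra
```

The meter reads the same for both signals; only the spectral distribution, which the ear uses as a loudness and distance cue, tells them apart.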

Could you talk a little about that? Well, I think that was in his work on the simulation of the piano tones. So he separated that out and included it in one of his versions of the piano tone. Some noises would increase with key velocity as if one were hitting the key harder.

And he played this example for a keyboard player who had played the DX-7, I think it was, and then showed him the new one with this new voicing and the player thought it was a different keyboard—it was more responsive to velocity, or strike force. There are two models for sound synthesis these days, I guess.

It looks like signal flow, and one can change or modify the signal flow as one is listening. So this is more like an analogue synthesizer in that sense, because one has real-time control over the sound as one listens to it.

Musical complexity, that is. Well, both. And I'm using frequency modulation synthesis, but in a pretty complicated way. In this case it produces sounds that are quite un-FM-like, if one thinks about typical DX-7 sounds.

The idea in this piece for soprano is to create what I call structured spectra: inharmonic spectra that are orderly and predictable, unlike most natural inharmonic spectra, which have lots of partials that are not determined by the people who build the instruments (bells, gongs, things of that sort).

And in this case they're based upon carrier-to-modulator frequency ratios that are powers of the golden section, or golden ratio—much like I used in a 1977 piece, Stria. You were saying that the sounds are not obviously FM-like. Well, I use FM much like additive synthesis. That is, I create complex spectra by using iterations of a fairly simple FM algorithm added together. So to achieve a kind of nuance, treating FM as if it were additive synthesis, building up a complex timbre by multiple iterations of the algorithm at different frequencies and adding them together, seems to be a very effective way.
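The approach described above, running a simple FM pair several times at related frequencies and summing the results, can be sketched as follows. The golden-ratio carrier spacing and the index value are illustrative guesses, not the actual parameters of Stria or the soprano piece.

```python
import numpy as np

SR = 44100
PHI = (1 + 5 ** 0.5) / 2  # the golden ratio

def fm_pair(f_c, ratio, index, t):
    """One simple carrier/modulator FM pair with C:M ratio `ratio`."""
    return np.sin(2 * np.pi * f_c * t
                  + index * np.sin(2 * np.pi * f_c * ratio * t))

def structured_spectrum(base=200.0, voices=4, dur=1.0):
    """FM treated additively: stack several simple FM pairs whose
    carriers sit at successive powers of the golden ratio, with the
    C:M ratio also tied to phi. Parameters are illustrative only."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    for k in range(voices):
        out += fm_pair(base * PHI ** k, ratio=PHI, index=1.5, t=t) / voices
    return out

tone = structured_spectrum()
print(np.max(np.abs(tone)) <= 1.0)  # True: each voice scaled by 1/voices
```

Because every pair shares the same irrational C:M ratio, the sidebands of all four voices fall into one consistent, predictable inharmonic family rather than a random clangor: the "orderly" quality Chowning describes.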

And perhaps using the horn spectrum in there somewhere—the particular pitch possibilities inherent in the horn? Yes, which are unique of course. I gather your new pieces are not the only works of yours that have benefitted from advances in technology—you now have cleaner versions of some of your older works. Yes, the Samson Box has been reconstructed, so we have a pristine version of Turenas, for instance. Could you talk us through that?

I think the version that was released on CD [13] was generated at a relatively low sample rate. Yes, and also that version on Wergo is a digitized version that was made from an analogue tape recording, so I redid Turenas, I think, for the Samson Box. Well, no, the DX-7 was greater.

But the Samson Box was lower by an octave. Bill Schottstaedt built a software version of the Samson Box such that all of our input files for the original Samson Box could be regenerated. Tape hiss and digital sampling noise, the low-order bit flipping, like in reverberation tails. What do you make of people who hanker after early analogue synthesizer sounds—and indeed early digital ones? Exactly, exactly. And it was kind of a natural part of the analogue world.

Do any of the synthesis techniques that followed FM, such as waveguide synthesis and physical modelling, have any interest for you? Well, they do. One gets what one gets. And with slight modifications one can own it. With synthesis we had to understand a lot of very basic stuff that is no longer required with sampling.

But in struggling with those synthesis issues we learned to understand aspects of music perception that we would otherwise not have understood. But that some of us did is certainly good. It was an important payoff. That was first done by Jean-Claude Risset in his piece Mutations. Only with a computer could one do that. Right, and the fact that the spectral components are absolutely ordered. That means also, from your point of view, that the compositional intent and the technological means to achieve them are closely knit.

You mentioned earlier that in Stria the compositional procedure was analogous to a computational procedure—could you tell us something about that?

Stria is where I first used recursion. Somebody explained it to me. And so yes, the idea of algorithmic composition was very interesting to me, and it still is. You've mentioned these relatively recent pieces that are combinations of a solo performer and real-time electronics.
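Recursion as a compositional device can be illustrated with a toy event generator in which each section spawns smaller sections whose durations and frequencies are scaled by the golden ratio, echoing Stria's golden-section proportions. This is purely an illustrative sketch, not Chowning's actual Stria program.

```python
PHI = (1 + 5 ** 0.5) / 2  # golden ratio

def events(start, dur, freq, depth):
    """Recursively subdivide a section into sub-events, scaling
    durations and frequencies by the golden ratio at each level.
    Returns a flat list of (start_time, duration, frequency) tuples."""
    ev = [(round(start, 3), round(dur, 3), round(freq, 2))]
    if depth > 0:
        child_dur = dur / PHI
        # one child at the section's start, pitched up by phi...
        ev += events(start, child_dur, freq * PHI, depth - 1)
        # ...and one ending with the section, pitched down by phi
        ev += events(start + dur - child_dur, child_dur, freq / PHI, depth - 1)
    return ev

score = events(0.0, 60.0, 1000.0, depth=2)
print(len(score))  # 7 events: 1 + 2 + 4
```

Each recursion level reuses the same rule, so a single small procedure generates the whole nested score: the sense in which a compositional procedure can be "analogous to a computational procedure."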

What do you see as being fertile ground for experimentation or research? Oh, well I think there are all sorts of combinations that I—given a long enough life—would like to explore. Not just solo instruments but combinations of instruments. So yes, I find it all very interesting. Who do you think is doing especially interesting work or work that seems to have potential?

The instantaneous pressure is sensed and run through a computer processor, and then, using a voice coil, it forces the drum head back up, so it can produce a response at various rates depending on the program and the strike force, etc. Sound was produced by simple analog circuits, not the digital sampling used by professional electronic drums.

Many inventors sought ways to incorporate electricity into music-making in the late 19th century. This add-on piano keyboard capitalized on the remarkable sound chip in the Commodore 64 computer.

Three independent tone generators with programmable waveforms and filters created a distinctive sound that even some professional musicians used. (From the CHM Revolution exhibition, Computer Graphics, Music, and Art: Producing Music Electronically.)


