Oral-History:Harry F. Olson

[[Image:Olson.jpg|thumb|left|Harry Olson]]


[[Harry Olson|Harry Olson]], a pioneer in musical sound reproduction, received his B.E. degree from the University of Iowa in 1924. He continued his graduate studies at Iowa, taking a master's in 1925 and a Ph.D. in atomic physics in 1928. Olson joined [[RCA (Radio Corporation of America)|RCA]] in 1928, immediately tackling the problem of poor quality sound in the new "talking pictures." In 1935, Olson was placed in charge of RCA's Camden acoustic laboratory, where he went on to develop the [[Electronic Music Synthesizer|electronic synthesizer]] with Herbert Belar. He moved his work to the newly opened [[RCA Laboratories at Princeton, New Jersey|RCA Laboratories]] in Princeton, NJ in 1941.


The interview covers Olson's groundbreaking work in acoustic research over nearly forty years with RCA. Olson discusses his work with [[Microphone|microphones]], including the development of the velocity microphone and the unidirectional microphone for use in movies. He was also instrumental in work done on RCA's second-order gradient microphone. The interview offers a comprehensive discussion of Olson's work with Herbert R. Belar to develop an electronic music synthesizer, including both technical discussion and the implications of musical aesthetics. The interview continues with comments on Olson's work with RCA's underwater sound project for the Navy in the 1940s and his subsequent work on [[Loudspeakers|loudspeakers]]. Olson also discusses the potential of [[Quadraphonic Stereo|quadraphonic sound]], the development of the phonetic typewriter, as well as his work on sound reinforcement systems and the music composing machine. The interview concludes with a brief discussion of current limitations in [[Loudspeakers|loudspeakers]] and the potential of air-suspension speaker systems.


== About the Interview  ==
'''Olson:'''


My name is Harry F. Olson. I received the B.E. degree from the University of Iowa in 1924, the M.S. degree in 1925, and the Ph.D. degree in 1928. I was introduced to the science of acoustics through my contacts as a student at the University of Iowa with Dean [Carl Emil] Seashore, who pioneered in the field of psychology of musical sounds, and with Professor [George W.] Stewart, the inventor of the acoustic wave filter. My master's thesis was on solid mechanical wave filters. However, my Ph.D. degree was in the field of atomic physics.


==== Unidirectional Microphone  ====
'''Olson:'''


Sound motion pictures were commercialized in the mid-1920s. I came with RCA in 1928. RCA had acquired sound studios in Hollywood, which they named [RKO] Radio Picture Studios. One of the problems was sound pick-up with the [[Microphone|microphone]] out of the picture. This required rather long sound pickup distances. As a result, there was a lot of ambient room noise and reverberation in the recorded sound. The obvious solution was a directional microphone which would discriminate against noise and reverberation. At that time there were no directional microphones. Some microphones had a little directivity in the very high frequency range. I developed the velocity microphone, which had a bi-directional figure eight characteristic that was uniform over the entire audio frequency range. This microphone [BK-44A, BK-44B, BK-44BX] solved the problem of distant sound pickup. Later it became apparent the unidirectional microphone would be more appropriate. Therefore I started to work on this project and developed the unidirectional microphone with a cardioid unidirectional pattern [BK-77A, BK-77B, BK-77C, BK-77D]. This microphone was found to be exactly what was required. The cardioid unidirectional microphone is still today a microphone that is used for a boom or long-distance pickup, for sound pickup in motion pictures, television, and sound reinforcement.


==== Psychology of Sound  ====
Following World War II, several investigators claimed that the average listener preferred a restricted frequency range in the reproduction of speech and music, with a top frequency of 5,000 Hertz. There were three reasons for this state of affairs, namely: 1) the average listener, after listening to the restricted frequency range of [[Radio|radio]] and [[Phonograph|phonographs]], had been conditioned to this state of affairs and did not want a wider frequency range; 2) the reproduction of musical instruments was more pleasing with the higher overtones eliminated; and 3) the distortions and deviations in sound reproduction were less objectionable with the restricted frequency range because the restricted frequency range eliminated the harmonics which were generated by the distortion.


We set out to perform what is now considered a classical experiment. We arranged an acoustic filter between a live orchestra and the listeners. The acoustic filters were in the form of doors that could be turned in and out. In other words the filter could be placed in or out of operation. A light-opaque, sound-transmitting curtain was placed between the listeners and the acoustic filters so that the listeners could not see the filters or what transpired behind the screen. The high-frequency cut-off of the filters was 5,000 hertz. When the filters were turned out, the listeners received the full frequency range from the orchestra or speech. Tests were performed with people from all walks of life. The experiments indicated a preference of 70 percent for the full frequency range. This showed that there was something wrong with reproduced music and speech. An investigation indicated that it was indeed distortion which brought the people to prefer a limited frequency range. When the distortion was eliminated, high-frequency sound reproduction took off and burgeoned during the following year. Today, reproduction of sound occurs over the entire audio frequency range.


'''Olson:'''


Radio and phonographs. Television had not taken hold yet [Postwar television production began 1946].


'''Olson:'''


That’s exactly right.


=== The Synthesizer  ===
'''Heyer:'''


Did your early work on the psychology of sound with [Carl] Seashore have any influence on what you did later?


'''Olson:'''


Yes. That had an influence on the [[Electronic Music Synthesizer|electronic music synthesizer]]. Seashore had the idea, too, that if you could produce an instrument that had no limitations, you would indeed have great applications. This is exactly what turned out with the electronic music synthesizer. But the other situation was that studies that we carried out on musical instruments, with the object of improving the recording of sound, indicated that the musical instruments had limitations in what one could do with ten fingers and one's mouth and feet in performing on the musical instrument. Also, the fundamental range of musical instruments is indeed quite limited. Another factor is that the quality is not altogether what musicians would like in the case of musical instruments.


To overcome these limitations, Herbert Belar and I started work on an electronic music synthesizer in 1952. The idea was to develop a musical instrument with no limitations whatsoever. Seashore had indicated an instrument that could produce any musical tone, regardless of whether it had ever been produced before or not.


'''Olson:'''


<p><flashmp3>026 - olson - clip 1.mp3</flashmp3></p>


Yes, it was. Because no one had really produced a programmed electronic music synthesizer. We used a digital punched record, which looked of course like a record that is used for a player piano. However this was in a digital form, so that we could indeed perform all the functions of a musical tone, i.e. the amplitude, frequency, harmonic content, the growth and decay, and so on with this punched paper record. Another advantage of this instrument is the fact that a man does not have to have great physical dexterity in order to play the instrument; he does not have to have any physical dexterity at all. However, in order to play traditional musical instruments, the musician must indeed have great physical dexterity. On this, the musician punches out what he thinks is right, he listens to it, and then he can make changes. He can punch more holes or he can plug up some holes in the digital record and obtain exactly what he wants. When we had finished the construction of the instrument, we wanted to prove that it could indeed produce great music because [[David Sarnoff|General Sarnoff]] said that, “A synthesizer is of no value if it does not provide the possibilities of producing great music."
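[The sketch below, in modern terms, illustrates the kind of information one entry of such a punched record had to carry: frequency, amplitude, harmonic content, growth, decay, and spacing. The field names and values are hypothetical, not RCA's actual record coding.]

<pre>
from dataclasses import dataclass

@dataclass
class ToneStep:
    # One hypothetical entry of the punched paper record: the
    # properties that define a single tone in Olson's description.
    frequency_hz: float   # pitch of the tone
    amplitude: float      # relative loudness, 0.0 to 1.0
    harmonics: list       # relative strengths of the overtones
    growth_s: float       # growth (attack) time, in seconds
    decay_s: float        # decay time, in seconds
    spacing_s: float      # spacing to the next tone, i.e. the tempo

# Punching more holes, or plugging some up, amounts to editing these values:
step = ToneStep(frequency_hz=440.0, amplitude=0.8, harmonics=[1.0, 0.5, 0.25],
                growth_s=0.02, decay_s=0.3, spacing_s=0.5)
</pre>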


To prove this we analyzed piano recordings of "Polonaise" by [Frederic] Chopin and "Clair de Lune" by [Claude] Debussy played by [José] Iturbi, [Arthur] Rubinstein and [Vladimir] Horowitz. Also "The Old Refrain" by [Fritz] Kreisler played on a violin by Kreisler. The analysis was then synthesized and recorded and we intermixed short excerpts of the synthesized and original recordings for a test. We had fourteen excerpts, seven original and seven synthesized. Professional musicians and laymen were unable to distinguish the original from the synthesized versions. This proved that the electronic music synthesizer could produce great music.

This test so impressed Howard Taubman, the music critic of The New York Times, that he wrote an article on our electronic synthesizer that appeared on the front page of The New York Times. This was indeed, at that time, a revolutionary development. Later, Charles Wuorinen produced his composition "Time's Encomium" on our electronic synthesizer, and this was released as a record. Wuorinen received the Pulitzer Prize for his work. This was the first time that a Pulitzer Prize had ever been given for electronic music of any kind, regardless of how it was produced, whether on an electronic organ or any other electronic instrument. Electronic music synthesizers are, of course, commonplace today, ranging from small keyboard instruments to programmed computers similar to the one we developed two decades ago.


'''Olson:'''


It was built in the [[RCA (Radio Corporation of America)|RCA Laboratories]]. We had [Richard] Maltby, who has a large, popular band, work on the synthesizer and [James "Jim"] Timmens, who worked on the electronic music synthesizer at the laboratories. We built a second synthesizer, which is now located at Columbia University in the Princeton-Columbia Music Center in New York. Many compositions have been synthesized on this instrument in addition to the work by Wuorinen.


'''Olson:'''


Yes. It is a paper record with punched holes. The one in New York has two punched records so you can punch out two tones at a time, which of course speeds up the process. When you do this, for example, you punch out a series of tones, you record those. This paper record is synchronized with the tape recording system. The tape-recording system uses a sprocket type of tape, so it can be synchronized. You can then record seven series of tones on one tape, combine these seven into one tone; then continue again, making seven more tones and so on, so you can have any number you please — up to a thousand if you want to.


==== Testing  ====
'''Olson:'''


What we did was this: we analyzed first the amplitude and frequency range of each tone of the original, the growth and decay and the tempo, i.e. the space in between the tones. An interesting thing happened. We had an Iturbi recording, which we analyzed so much we had worn out the record. We got a new record, but it was a different release and it sounded much more mechanical. We had to use that in the original excerpt and everybody said, “That has to be the synthesized version”.


'''Olson:'''


Yes. [Laughter]. No one could really tell. It was really just guessing because even Taubman said when he came down, “I can tell electronic music a mile away.” But when he started to take the test, he said, “Well, this has me stumped.”


'''Olson:'''


The excerpts were about fifteen to thirty seconds, so it took about a day to analyze a record. Then it took another day to synthesize it. So it took at least a month to do this. We had synthesized versions before this, such as "Blue Skies" and some other popular versions. When General [David] Sarnoff heard that, he said he would bring a music director of NBC down, and he did. This man said “Engineers should not be fooling around with this sort of thing because it could never produce great music,” so then we decided to do this test. As a result, when General Sarnoff heard it again, he said to this music critic, who couldn’t tell which was which, “I think you've proved that it can indeed produce great music in the hands of someone who knows how to operate the machine.”


==== Sound Quality  ====
'''Olson:'''


I think it really didn’t come about until the 1960s. The move started in the years around 1960, and it took hold fairly well. But until they had this Bach selection [Switched-On Bach (Columbia Masterworks, 1968)], it didn’t really catch hold. That was one of the top records. Since then, there have been many top records. As a matter of fact, a lot of these rock bands now have several manual synthesizers in their combinations.


'''Heyer:'''


It seems to me kind of amazing that you were able to produce very realistic sounds. People associate synthesizers or crude synthesizer music as being very obviously like the critics think, like all electronic music. But it's interesting to me that in 1948 [1955] you were able to produce something that fairly exactly duplicated the real sound.


'''Olson:'''


If you can produce the overtone structure, the amplitude, the growth, and the decay, you can duplicate the instrument exactly. You can also very easily duplicate the spacing between the tones and the amplitude of the tones, which of course, in the case of the piano are important. The fingers do not all have the same strengths, so the tones don't all have the same amplitude. We simulated Iturbi, Rubinstein, and Horowitz in the way they play it. They play it, of course, differently.
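[A minimal additive-synthesis sketch of the duplication principle Olson describes: sum the overtones at their measured strengths, then shape the sum with the measured growth and decay. All parameter values are illustrative, not taken from the RCA analyses.]

<pre>
import math

def synthesize(freq, harmonics, growth, decay, duration, rate=44100):
    """One tone: overtones summed, then shaped by a growth/decay envelope."""
    samples = []
    for n in range(int(duration * rate)):
        t = n / rate
        # overtone structure: the (k+1)-th harmonic at relative amplitude a
        tone = sum(a * math.sin(2 * math.pi * (k + 1) * freq * t)
                   for k, a in enumerate(harmonics))
        # linear growth (attack) followed by an exponential decay
        envelope = min(1.0, t / growth) * math.exp(-max(0.0, t - growth) / decay)
        samples.append(envelope * tone)
    return samples

# e.g. a 440-cycle tone with three overtones, 20 ms growth, 300 ms decay
tone = synthesize(440.0, [1.0, 0.5, 0.25], growth=0.02, decay=0.3, duration=1.0)
</pre>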


'''Olson:'''


Yes, it is. There are very, very subtle differences. In the [http://www.stradivarius.org/stradivarius-violins Stradivarius violin], as I understand it, the first overtones are quite strong, but the very, very high overtones are not as strong. In the cheaper violins, the higher overtones are probably stronger. In addition, the fundamental is stronger in the Stradivarius than it is in the others. But these higher overtones tend to produce dissonance and sounds which are not too desirable, and this is one of the reasons why the Stradivarius is so popular.


'''Olson:'''


Yes. That's right. Of course, in the case of Kreisler, he had, I imagine, a Stradivarius or [http://www.guarnieri.com/violin.htm Guarneri] violin. We synthesized that. The violin was the most difficult to synthesize; the piano was quite easy.


'''Olson:'''


No. We built up each part separately. We first got the tone generators and they were tuning fork generators. We had no problem with that. We had not only the equally tempered scale, but we also had the so-called "just scale" in the instrument. We proved in the case of the violin, when it plays solo, it can play in the just scale, which is more pleasing than the tempered scale. There is a clash between the various tones and the overtones in the tempered scale, whereas in the just scale we have a ratio of 2:3:4 and 3:4:5 and so on, which, of course, does not occur in the tempered scale.
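[For illustration, the arithmetic behind the clash Olson mentions: in the just scale consonant intervals are exact small-integer ratios, so the overtones of simultaneous tones coincide; in the equally tempered scale every semitone is <math>2^{1/12}</math>, and the small departures from the integer ratios make the overtones beat. Compare the fifth and the major third in the two scales:]

: <math>\tfrac{3}{2} = 1.5000 \text{ versus } 2^{7/12} \approx 1.4983, \qquad \tfrac{5}{4} = 1.2500 \text{ versus } 2^{4/12} \approx 1.2599</math>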


=== Velocity and Shot-Gun Microphones  ===
'''Olson:'''


Yes, we had studios that RCA bought there. They bought the [http://en.wikipedia.org/wiki/RKO_Pictures RKO] studios and named them the "The Radio Pictures Studios." It became quite apparent that the long-distance pickup required in order to keep the microphone out of the picture led to many difficulties, predictably the reverberant sounds. They could keep things fairly quiet, but there were still the noises of cameras, which also gave some problems because they would get into the microphone. So they built sound stages with a tremendous amount of absorbing material, several inches thick. But that was not enough to reduce the reverberation because the set, itself, had a reverberant characteristic. Obviously, if we had a more directional microphone it would discriminate against the sound, which was bouncing around in all directions. We started out to develop a directional microphone. The obvious solution was a velocity microphone. There are two components in a sound wave, a pressure component and a velocity component, which is analogous to the voltage and current in an electrical system. The pressure microphone is not directional and responds to pressure in a sound wave, whereas a velocity microphone is directional because the particle velocity is a vector quantity and is therefore a directional quantity.


So I decided that the velocity microphone would have directivity, and I proceeded to develop a velocity microphone. It is a microphone that responded to particle velocity in a sound wave. This had a bidirectional characteristic, a cosine characteristic of a figure eight type, and this indeed did discriminate against noise. Later on, they decided that the two lobes were a disadvantage in some cases and they wanted a microphone that would pick up only in one direction, so we started to work on that. This is really a combination of a pressure and a velocity microphone because, when you add the two, you obtain a cardioid pattern. That is indeed a unidirectional pattern. This microphone has been used ever since that time in sound motion pictures. With the advent of television it has been used exclusively for distance pickup in television on the boom. It has also been used in sound reinforcement systems and all other applications where directional microphones are required.
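[In terms of the polar patterns: the pressure element responds equally in all directions and the velocity element responds as the cosine of the angle of incidence, so adding the two in equal parts gives the cardioid, here normalized to unity on the axis:]

: <math>s(\theta) = \tfrac{1}{2}\left(1 + \cos\theta\right)</math>

[which is 1 at the front (θ = 0°) and 0 at the rear (θ = 180°).]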


'''Heyer:'''


How does the shotgun microphone differ?


'''Olson:'''


The shotgun microphone is very much like a wave antenna. It has a series of pickup points along a line and is sometimes called a "line microphone" or a "wave microphone." When sound originates from the side, the outputs of these pickup points are out of phase and there is no pickup. Even at fairly small angles the pickup points are out of phase. So it indeed has a high directivity. But since the wavelength at 100 cycles is around 11 feet, if you are going to go down to 100 cycles, a microphone must be around 10 feet in length. Most of these microphones pick up speech. This can be limited to around 200 cycles, so the microphone can be around 5 feet in length and still obtain very high directivity. We also have another microphone that has very high directivity which has been used in many applications where there are difficulties in the pickup. It is what we call "a second ordered gradient." It is really a cosine multiplied by a cardioid, which provides a very highly directive microphone in a very small space. This has been used. It is a fairly complicated and expensive microphone, but it has been used where there are difficulties in the pickup.
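[The lengths quoted follow from the wavelength relation <math>\lambda = c/f</math>, with the velocity of sound c of roughly 1,130 feet per second:]

: <math>\lambda_{100} = \tfrac{1130}{100} \approx 11.3 \text{ feet}, \qquad \lambda_{200} = \tfrac{1130}{200} \approx 5.6 \text{ feet}</math>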


'''Heyer:'''


I'm thinking back to the movies I have seen; the velocity microphones are the ones with the heavy grille work around them.


'''Olson:'''


That's right. They had the shaped case. That was functionally designed that way.


'''Olson:'''


Yes. That's right. They were pressure microphones. Condenser microphones were used because they have a very high quality. They operate over the entire audio frequency range, but they are omnidirectional or non-directional. They picked up in all directions.


=== Acoustic Laboratory at RCA  ===
'''Heyer:'''


Let me ask you a little about your situation. What was your position at RCA at the time you were developing the velocity microphone? Was the Acoustic Laboratory well under way at that point?


'''Olson:'''


I started out at Van Cortlandt Park [RCA Laboratories in New York City], and I was associated there with Dr. [Irving] Wolff and Abraham Ringel. Three of us worked in the field of acoustics. I was a staff engineer. Then we moved to [RCA Victor in] Camden [NJ]. Julius Weinberger was in charge of the acoustic laboratory in Camden until around 1935, when he transferred to New York and I was placed in charge of the acoustic research. We moved to [RCA Laboratories in] Princeton [NJ] in 1942, but from 1942 to the beginning of 1946 we were engaged in underwater sound work.


'''Heyer:'''


For the [U.S.] Navy?


'''Heyer:'''


Was this a lot of work?


=== Superdirectivity  ===

'''Olson:'''

Well, of course, yes. But that was a beat note that you hear. That is a beat between another oscillator that beats with the incoming wave and produces the audible tone.


=== Loudspeakers ===


'''Heyer:'''


I see. How about your [[Loudspeakers|loudspeaker]] work?


'''Olson:'''


The first [[Loudspeakers|loudspeaker]] work we did was in connection with loudspeakers for the theater. Originally, we used loudspeakers very similar to what you had in radios and phonographs. The difficulty there was that these speakers were fairly wide in directivity and the sound would bounce around from the walls. So we started work on horns, which indeed have very good directivity. We did all-horn loudspeakers for the theater, and that solved the problem of the sound bouncing around. One other advantage of the horn is the high efficiency. In a direct radiator loudspeaker, the type that you have in radios, phonographs, and televisions today, the efficiency is less than 5%; it is somewhere around 2%. With a well designed horn loudspeaker, you can get 25% to 50% efficiency. Since the theater requires a lot of power, it is important to have a high efficiency loudspeaker so that the amplifier won't be so large. In those days we used vacuum tubes so that it was difficult to obtain high power from the amplifier. Today, with solid-state systems, there is no problem obtaining a kilowatt of power.
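[A worked example of the efficiency point: the electrical power required is the acoustic power divided by the loudspeaker efficiency, so for one acoustic watt]

: <math>P_{\text{amplifier}} = \frac{P_{\text{acoustic}}}{\eta}: \quad \frac{1\,\text{W}}{0.02} = 50\,\text{W} \quad \text{versus} \quad \frac{1\,\text{W}}{0.25} = 4\,\text{W}</math>

[which is why the horn could be driven by a far smaller vacuum-tube amplifier.]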


But in those days 10 and 25 watt amplifiers were somewhat difficult to build. So it was important to have a high-efficiency speaker, and we developed the high efficiency horn. Later we started work on improving the frequency range of loudspeakers. We ran into the difficulty that the people did not prefer this due to distortions in the system. We did develop these for NBC, for monitoring loudspeakers. From the microphone through the amplifier we had very low distortions, so there was no problem there. But in the case of the phonograph and the radio, in order to produce instruments of low cost, the distortion was indeed high. These high-fidelity loudspeakers came into play after we had performed this experiment on frequency preference. The wide-range loudspeakers we developed were indeed used in the instruments which we produced with the wide frequency range. We also developed the air suspension loudspeaker, which is a direct radiator loudspeaker with the back completely enclosed. The enclosed air behind the cone then supplies the stiffness of the system instead of the surround of the cone. This reduces the distortion very much because the surround in the loudspeaker is inherently non-linear and produces distortion, whereas the air in a cabinet is not non-linear and does not produce distortion.
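[The standard small-signal expression for the stiffness the enclosed air presents to the cone, added here for illustration and not stated in the interview, is]

: <math>k = \frac{\gamma P_0 A^2}{V_0}</math>

[where γ ≈ 1.4 is the ratio of specific heats of air, P<sub>0</sub> the atmospheric pressure, A the cone area, and V<sub>0</sub> the enclosed volume; for small displacements this air spring is very nearly linear, which is why it adds so little distortion.]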


'''Olson:'''


It was mostly in the amplifiers. The amplifiers produced most of the distortion. In the radio receivers, the distortion occurred in the detection system for the most part. As a result, these linear detectors were developed. [Charles Stuart] Ballantine developed the linear detector, which had very low distortion. This was then followed by amplifiers with very low distortion. This required more money, because to produce a system with low distortion, great care has to be taken in all elements of the system, i.e. in the pre-amplifiers and the power amplifiers, in order to reduce distortion. One of the largest sources of distortion was the pentode [electron tube]. At about that same time the feedback systems came in, which made it possible to reduce distortion by the use of feedback. This was a big help in obtaining systems of low distortion.
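[The standard negative-feedback relation, added for illustration: with forward gain A and feedback fraction β, distortion generated inside the amplifier is reduced by the loop gain,]

: <math>D_{\text{with feedback}} = \frac{D_{\text{without}}}{1 + A\beta}</math>

[so, for example, 5% distortion in an amplifier with 1 + Aβ = 50 comes down to about 0.1%.]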


'''Olson:'''


Yes, but they did have the class B because of the fact that they produced high power at very low cost. As you know, they introduced a great deal of distortion. This, I think, was one of the reasons why the pentodes became very popular at that time, after the war. Distortion was very high in these. If you would cut off the frequency ranges at 5,000 cycles, then you tended to reduce the higher components.


'''Olson:'''


Yes. As a matter of fact, following this experiment with the acoustic filters, we then performed an experiment with the orchestra. We had the listeners in another room and we carried the sound there by means of two channels, i.e. really stereophonic sound — though it was long before stereophonic sound was used. We used a very low distortion system in the amplifier. We repeated the experiment and found that the listeners preferred the full frequency range, the same as they had with the original orchestra. As a matter of fact, the tests indicated an even greater preference for the full frequency range than in the case of the orchestra direct. We attributed that to the fact that there were some noises in the orchestra that the people heard which were a little distracting. The orchestra was not as careful as they are when they know they have a microphone around.


=== High Fidelity  ===
'''Heyer:'''


After the war, that's when the whole idea of high fidelity comes around?


'''Olson:'''


Yes, we did. Others had experimented with that. As a matter of fact, the Bell [Telephone] Laboratories had carried out experiments on [http://www.stokowski.org/Harvey%20Fletcher%20Bell%20Labs%20Recordings.htm stereophonic sound with the Philadelphia Orchestra] [in 1932]. They picked up the Philadelphia Orchestra in Philadelphia and reproduced the performance in Washington D.C. by using two and three channels. Around 1920, [Ernst?] Alexanderson actually had the stereophonic sound. He used two microphones and two loudspeakers, and two different rooms. This was the first instance I know of stereophonic sound.


'''Olson:'''


It was around 1920. I have not been able to find any record of that, but Alexanderson told me that he performed an experiment like that.


'''Olson:'''


That's right. The conditions have to be right in order for a development to take hold. There are a lot of factors that conspire to make a system successful or unsuccessful. Besides the actual commercial aspects of it, there are some technical aspects that have to be right before it can be successful.


=== Quadraphonic Sound  ===
'''Olson:'''


In the early 1960s we carried out experiments in quadraphonic sound. The RCA record division did indeed record in quadraphonic sound quite early because they felt that it would be something that would be coming along. They recorded not only in two-channel stereo, but also in four-channel quadraphonic sound. We carried out many experiments starting in the 1960s on quadraphonic sound. Of course, there are two aspects of quadraphonic sound in the classical field. You have the stereophonic sound, that is the auditory perspective, where you can pick out the instruments in the orchestra. Then you have the envelope, that is the reflective sound. Stereophonic sound cannot produce the envelope properly in a small room, such as a room in a home. But the use of four [[Loudspeakers|loudspeakers]], with the loudspeaker supplying the reverberation envelope, makes this very realistic from the standpoint of reproduction of symphonic music. With more popular music, four-channel sound has other great possibilities. You could make the sound go around, switch back and forth, which of course provides artistic aspects that are impossible in two-channel sound. Another thing about four-channel sound is that it can carry twice as much information as two-channel sound in the same way that two-channel sound carries twice as much information as monophonic sound — you take full advantage of the four- or the two-channel system. So the four-channel system has tremendous advantage from the standpoint of transmission of information.


'''Heyer:'''  
'''Heyer:'''  
Line 427: Line 429:
'''Olson:'''  
'''Olson:'''  


In speech there is a tremendous amount of redundancy, which we found in our work with the phonetic typewriter. You can indeed compress speech in many different ways and still transmit the information because of the great redundancy in speech.  
In speech there is a tremendous amount of redundancy, which we found in our work with the phonetic typewriter. You can indeed compress speech in many different ways and still transmit the information because of the great redundancy in speech.


=== The Phonetic Typewriter and Speech Compression  ===
=== The Phonetic Typewriter and Speech Compression  ===
Line 433: Line 435:
'''Heyer:'''  
'''Heyer:'''  


<P><flashmp3>026 - olson - clip 2.mp3</flashmp3></p>
<p><flashmp3>026 - olson - clip 2.mp3</flashmp3></p>


Why don't you tell me a little about the phonetic typewriter?  
Why don't you tell me a little about the phonetic typewriter?  
Line 441: Line 443:
We felt that we could develop a system which would provide the possibility of speaking into a microphone and have the output on a phonetic typewriter, which would type out on a page what is spoken into the microphone. In the case of speech, in the words you have syllables and in the syllables you have phonemes. There are around forty phonemes in the English language, some 2,000 syllables, and about 100,000 words. The phoneme is very difficult to analyze out of context because one phoneme runs into another one. So we decided to work on the syllable approach. We also used phonemes when we could.  
We felt that we could develop a system which would provide the possibility of speaking into a microphone and have the output on a phonetic typewriter, which would type out on a page what is spoken into the microphone. In the case of speech, in the words you have syllables and in the syllables you have phonemes. There are around forty phonemes in the English language, some 2,000 syllables, and about 100,000 words. The phoneme is very difficult to analyze out of context because one phoneme runs into another one. So we decided to work on the syllable approach. We also used phonemes when we could.  


We analyzed the syllables. We first divided out the syllables in a word. This is not too difficult to do because there is indeed a spacing of a type between the syllables in the word. There may be an amplitude spacing or frequency spacing. So it's possible to separate a word into syllables. Then when you have the syllable, we have the frequency, time, and amplitude pattern for each syllable. It is different for each particular syllable. We decided that if we had 200 syllables, we could do pretty well in the English language. These are then analyzed by the phonetic typewriter as you speak into the microphone by means of the logic system and the storage system. It types out on the typewriter the syllable that was spoken into the microphone. Of course, it is a phonetic thing, so that if you pronounce "Hoyle" as "herl", it would indeed type out "erl" and not "oil". It types out what it hears, so that it could not be used for a letter to be sent out. But it could be used for a memorandum and it could be used instead of dictation. The secretary could type from the phonetic typewriter output and put it in the proper spelling.  
We analyzed the syllables. We first divided out the syllables in a word. This is not too difficult to do because there is indeed a spacing of a type between the syllables in the word. There may be an amplitude spacing or frequency spacing. So it's possible to separate a word into syllables. Then when you have the syllable, we have the frequency, time, and amplitude pattern for each syllable. It is different for each particular syllable. We decided that if we had 200 syllables, we could do pretty well in the English language. These are then analyzed by the phonetic typewriter as you speak into the microphone by means of the logic system and the storage system. It types out on the typewriter the syllable that was spoken into the microphone. Of course, it is a phonetic thing, so that if you pronounce "Hoyle" as "herl", it would indeed type out "erl" and not "oil." It types out what it hears, so that it could not be used for a letter to be sent out. But it could be used for a memorandum and it could be used instead of dictation. The secretary could type from the phonetic typewriter output and put it in the proper spelling.  


'''Heyer:'''  
'''Heyer:'''  
Line 449: Line 451:
'''Olson:'''  
'''Olson:'''  


We built one with a memory of 200 syllables, and it worked fairly well. There is one other problem with the phonetic typewriter, which is true of all analysis of speech, namely that you have to have a memory for each particular person. Each person has to have a personal memory. This can be done by a person speaking in each of the syllables, say ten times, and loading up the memory with these syllables. Then when the person speaks into the phonetic typewriter and his particular memory is loaded in, the machine types out his speech. Otherwise, the machine would operate on 75% of the time for a voice that is similar to a loaded one, and as little as 25% if the voices are entirely different, e.g. a man and a woman.  
We built one with a memory of 200 syllables, and it worked fairly well. There is one other problem with the phonetic typewriter, which is true of all analysis of speech, namely that you have to have a memory for each particular person. Each person has to have a personal memory. This can be done by a person speaking in each of the syllables, say ten times, and loading up the memory with these syllables. Then when the person speaks into the phonetic typewriter and his particular memory is loaded in, the machine types out his speech. Otherwise, the machine would operate on 75 percent of the time for a voice that is similar to a loaded one, and as little as 25 percent if the voices are entirely different, e.g. a man and a woman.  


'''Heyer:'''  
'''Heyer:'''  
Line 501: Line 503:
'''Heyer:'''  
'''Heyer:'''  


Have you seen the speech compressors that are on the market now? You make a dub from one tape to another essentially. Then there is one type that chops out at regular intervals.  
Have you seen the speech compressors that are on the market now? You make a dub from one tape to another essentially. There is one type that chops out at regular intervals. Then there are other ones, supposedly, which are more discriminatory, that chomp out spaces in reference to words.  
 
'''Heyer:'''
 
Then there are other ones, supposedly, which are more discriminatory, that chomp out spaces in reference to words.  


'''Olson:'''  
'''Olson:'''  


One of the problems in handling all of these systems is the fact that you have this chopping frequency that comes in the picture and is somewhat annoying at times. WOR tried this several years ago on some of its newscasts in order to speed up the newscasts, and people did not seem to object to it. Of course, it was not speeded up very much, perhaps 5%.  
One of the problems in handling all of these systems is the fact that you have this chopping frequency that comes in the picture and is somewhat annoying at times. WOR tried this several years ago on some of its newscasts in order to speed up the newscasts, and people did not seem to object to it. Of course, it was not speeded up very much, perhaps 5 percent.  


'''Heyer:'''  
'''Heyer:'''  
Line 517: Line 515:
'''Olson:'''  
'''Olson:'''  


I guess they can double the speed, alright.  
I guess they can double the speed, alright.
 
'''Heyer:'''
 
I have gotten proficient at listening at double speed. I concentrate and turn the volume a little bit so I can hear. You set bias at the high end of the frequency spectrum. It is possible to listen, but you really have to concentrate.


=== Auditorium Sound Systems  ===
=== Auditorium Sound Systems  ===
Line 523: Line 525:
'''Heyer:'''  
'''Heyer:'''  


I have gotten proficient at listening at double speed. I concentrate and turn the volume a little bit so I can hear. You set bias at the high end of the frequency spectrum. It is possible to listen, but you really have to concentrate. Do you want to say something about auditoriums and the sound-reinforcement systems that you worked on?  
Do you want to say something about auditoriums and the sound-reinforcement systems that you worked on?  


'''Olson:'''  
'''Olson:'''  


We worked on sound-reinforcing systems all the time that we were involved in theater work, from the early 1930s up to the present time. One of the developments which we carried out at the RCA laboratories was on microphones located above the stage so that anyone could walk around the stage and perform an experiment on the stage and it would be picked up by these microphones. We had loudspeakers distributed all through the ceiling, so that we had complete coverage over the entire listening area. In addition to that, we had a delay between the microphone and the loudspeakers so that the first sound a person heard was a sound which originated on the stage even though it may have been very weak. If you have a delay so that the sound that emanates from the loudspeaker that is behind the sound that comes from the stage directly, then all sound appears to be coming from the stage. This is a psychological phenomenon, which says that the first sound that you hear determines the direction of the sound. If you delay the sound in the loudspeakers, the sound will appear to come from the stage even though it’s weaker from the stage. As you travel down the auditorium, each loudspeaker is delayed so it tends to send a wave from the stage down to the audience. This contributes even more to the fact that the sound appears to originate on the stage. This tends to reduce feedback in the system as well on the delay. A little bit, perhaps, 3 dB or so improvement in the acoustic feedback. The fact that you pick up on the stage at a large distance and that we use a second-ordered gradient microphone hidden in the ceiling made it possible to perform an experiment without having a microphone hanging around one's neck. He had perfect freedom to move around. Since then, many auditoriums have been built of this type.  
We worked on sound-reinforcing systems all the time that we were involved in theater work, from the early 1930s up to the present time. One of the developments which we carried out at the RCA Laboratories was on microphones located above the stage so that anyone could walk around the stage and perform an experiment on the stage and it would be picked up by these microphones. We had [[Loudspeakers|loudspeakers]] distributed all through the ceiling, so that we had complete coverage over the entire listening area. In addition to that, we had a delay between the microphone and the loudspeakers so that the first sound a person heard was a sound which originated on the stage even though it may have been very weak. If you have a delay so that the sound that emanates from the loudspeaker that is behind the sound that comes from the stage directly, then all sound appears to be coming from the stage.  
 
This is a psychological phenomenon, which says that the first sound that you hear determines the direction of the sound. If you delay the sound in the loudspeakers, the sound will appear to come from the stage even though it’s weaker from the stage. As you travel down the auditorium, each loudspeaker is delayed so it tends to send a wave from the stage down to the audience. This contributes even more to the fact that the sound appears to originate on the stage. This tends to reduce feedback in the system as well on the delay. A little bit, perhaps, 3 dB or so improvement in the acoustic feedback. The fact that you pick up on the stage at a large distance and that we use a second-ordered gradient microphone hidden in the ceiling made it possible to perform an experiment without having a microphone hanging around one's neck. He had perfect freedom to move around. Since then, many auditoriums have been built of this type.  


'''Heyer:'''  
'''Heyer:'''  
Line 539: Line 543:
'''Heyer:'''  
'''Heyer:'''  


I was interested in reading about this system, but never really thought about it very much. It certainly seems that if you were designing an auditorium, it is the obvious thing to do. You say you used a second-ordered gradient microphone?  
I was interested in reading about this system, but never really thought about it very much. It certainly seems that if you were designing an auditorium, it is the obvious thing to do. You say you used a second-order gradient microphone?  


'''Olson:'''  
'''Olson:'''  
Line 567: Line 571:
'''Olson:'''  
'''Olson:'''  


There are many university auditoriums that use that sort of a system. I think Purdue and Indiana University do in some of their auditoriums.  
There are many university auditoriums that use that sort of a system. I think Purdue and Indiana University do in some of their auditoriums.


=== The Music Composing Machine  ===
=== The Music Composing Machine  ===
Line 593: Line 597:
'''Olson:'''  
'''Olson:'''  


That was done around the late 1950s and the early 1960s, I believe.  
That was done around the late 1950s and the early 1960s [late 1940s and early 1950s], I believe.  


'''Heyer:'''  
'''Heyer:'''  
Line 601: Line 605:
'''Olson:'''  
'''Olson:'''  


Timmons used it a little bit. I think it has some limitations, and of course some of the composers would rather start from scratch.  
[James "Jim"] Timmens used it a little bit. I think it has some limitations, and of course some of the composers would rather start from scratch.  


'''Heyer:'''  
'''Heyer:'''  
Line 609: Line 613:
'''Olson:'''  
'''Olson:'''  


I think so. I have seen in the New York Times where composers have used something similar to that a couple of times. Their aides of this type were not necessarily like what we had, but something similar to that.  
I think so. I have seen in the New York Times where composers have used something similar to that a couple of times. Their aids of this type were not necessarily like what we had, but something similar to that.  


'''Heyer:'''  
'''Heyer:'''  
Line 625: Line 629:
'''Olson:'''  
'''Olson:'''  


Seems to be perfectly random.  
Seems to be perfectly random.


=== Future and Problems of Loudspeakers  ===
=== Future and Problems of [[Loudspeakers|loudspeakers]] ===


'''Heyer:'''  
'''Heyer:'''  
Line 691: Line 695:
[[Category:People and organizations|Olson]] [[Category:Engineers|Olson]] [[Category:Inventors|Olson]] [[Category:Signals|Olson]] [[Category:Acoustics|Olson]] [[Category:Components, circuits, devices & systems|Olson]] [[Category:Electronic equipment manufacture|Olson]] [[Category:Culture and society|Olson]] [[Category:Leisure|Olson]] [[Category:Music|Olson]] [[Category:News|Olson]]

Revision as of 15:39, 8 May 2012

About the Interview

HARRY F. OLSON: An Interview Conducted by Mark Heyer, IEEE History Center, 14 July 1975

Interview # 026 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, 39 Union Street, New Brunswick, NJ 08901-8538 USA. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:

Harry F. Olson, an oral history conducted in 1975 by Mark Heyer, IEEE History Center, New Brunswick, NJ, USA.

Interview

Interview: Dr. Harry F. Olson

Interviewer: Mark Heyer

Place: Princeton, New Jersey

Date: July 14, 1975

Early Career

Educational Background

Olson:

My name is Harry F. Olson. I received the B.E. degree from the University of Iowa in 1924, the M.S. degree in 1925, and the Ph.D. degree in 1928. I was introduced to the science of acoustics through my contacts as a student at the University of Iowa with Dean [Carl Emil] Seashore, who pioneered in the field of psychology of musical sounds, and with Professor [George W.] Stewart, the inventor of the acoustic wave filter. My master's thesis was on solid mechanical wave filters. However, my Ph.D. degree was in the field of atomic physics.

Unidirectional Microphone

Olson:

Sound motion pictures were commercialized in the mid-1920s. I came with RCA in 1928. RCA had acquired sound studios in Hollywood, which they named [RKO] Radio Picture Studios. One of the problems was sound pick-up with the microphone out of the picture. This required rather long sound pickup distances. As a result, there was a lot of ambient room noise and reverberation in the recorded sound. The obvious solution was a directional microphone which would discriminate against noise and reverberation. At that time there were no directional microphones. Some microphones had a little directivity in the very high frequency range. I developed the velocity microphone, which had a bi-directional figure-eight characteristic that was uniform over the entire audio frequency range. This microphone [BK-44A, BK-44B, BK-44BX] solved the problem of distant sound pickup. Later it became apparent that the unidirectional microphone would be more appropriate. Therefore I started to work on this project and developed the unidirectional microphone with a cardioid unidirectional pattern [BK-77A, BK-77B, BK-77C, BK-77D]. This microphone was found to be exactly what was required. The cardioid unidirectional microphone is still today a microphone that is used for a boom or long-distance pickup, for sound pickup in motion pictures, television, and sound reinforcement.

Psychology of Sound

Olson:

Following World War II, several investigators claimed that the average listener preferred a restricted frequency range in the reproduction of speech and music, with a top frequency of 5,000 Hertz. Three reasons were given for this state of affairs, namely: 1) the average listener, after listening to the restricted frequency range of radio and phonographs, had been conditioned to this state of affairs and did not want a wider frequency range; 2) the reproduction of musical instruments was more pleasing with the higher overtones eliminated; and 3) the distortions and deviations in sound reproduction were less objectionable with the restricted frequency range because it eliminated the harmonics which were generated by the distortion.

We set out to perform what is now considered a classical experiment. We arranged acoustic filters between a live orchestra and the listeners. The acoustic filters were in the form of doors that could be turned in and out. In other words, the filter could be placed in or out of operation. A light, opaque, sound-transmitting curtain was placed between the listeners and the acoustic filters so that the listeners could not see the filters or what transpired behind the screen. The high-frequency cut-off of the filters was 5,000 Hertz. When the filters were turned out, the listeners received the full frequency range from the orchestra or speech. Tests were performed with people from all walks of life. The experiments indicated a preference of 70 percent for the full frequency range. This showed that there was something wrong with reproduced music and speech. An investigation indicated that it was indeed distortion which brought people to prefer a limited frequency range. When the distortion was eliminated, high-frequency sound reproduction took off and burgeoned during the following years. Today, reproduction of sound occurs over the entire audio frequency range.

Heyer:

Approximately what year was this?

Olson:

Around 1948, I think.

Heyer:

What was RCA manufacturing at that time?

Olson:

Radio and phonographs. Television had not taken hold yet [Postwar television production began 1946].

Heyer:

What kind of recording equipment and playback equipment?

Olson:

The playback equipment was limited to a frequency range of around 5,000 cycles. The reason it was limited to 5,000 cycles turned out to be that, in that way, the distortion was reduced and, as a consequence, people preferred this to a wider frequency range with distortion. This was the reason for the preference for 5,000 Hertz.

Heyer:

When they did tests with people using the high-fidelity equipment of the day, people said they preferred eliminating the upper frequency range because of what they were hearing?

Olson:

That’s exactly right.

The Synthesizer

Origins

Heyer:

Did your early work on the psychology of sound with [Carl] Seashore have any influence on what you did later?

Olson:

Yes. That had an influence on the electronic music synthesizer. Seashore had the idea, too, that if you could produce an instrument that had no limitations, you would indeed have great applications. This is exactly what turned out with the electronic music synthesizer. But the other situation was that studies that we carried out on musical instruments, with the object of improving the recording of sound, indicated that the musical instruments had limitations in what one could do with ten fingers and one's mouth and feet in performing on the musical instrument. Also, the fundamental range of musical instruments is indeed quite limited. Another factor is that the quality is not altogether what musicians would like in the case of musical instruments.

To overcome these limitations, Herbert Belar and I started work on an electronic music synthesizer in 1952. The idea was to develop a musical instrument with no limitations whatsoever. Seashore had indicated an instrument that could produce any musical tone, regardless of whether it had ever been produced before or not.

Heyer:

Was that a revolutionary thought at the time?

Olson:

<flashmp3>026 - olson - clip 1.mp3</flashmp3>

Yes, it was. Because no one had really produced a programmed electronic music synthesizer. We used a digital punched record, which looked of course like a record that is used for a player piano. However, this was in a digital form, so that we could indeed perform all the functions of a musical tone, i.e. the amplitude, frequency, harmonic content, the growth and decay, and so on with this punched paper record. Another advantage of this instrument is the fact that a man does not have to have great physical dexterity in order to play the instrument; he does not have to have any physical dexterity at all. However, in order to play traditional musical instruments, the musician must indeed have great physical dexterity. On this, the musician punches out what he thinks is right, he listens to it, and then he can make changes. He can punch more holes or he can plug up some holes in the digital record and obtain exactly what he wants. When we had finished the construction of the instrument, we wanted to prove that it could indeed produce great music because General Sarnoff said, “A synthesizer is of no value if it does not provide the possibilities of producing great music." To prove this we analyzed piano recordings of "Polonaise" by [Frederic] Chopin and "Clair de Lune" by [Claude] Debussy played by [José] Iturbi, [Arthur] Rubinstein and [Vladimir] Horowitz. Also "The Old Refrain" by [Fritz] Kreisler played on a violin by Kreisler. The analysis was then synthesized and recorded and we intermixed short excerpts of the synthesized and original recordings for a test. We had fourteen excerpts, seven original and seven synthesized. Professional musicians and laymen were unable to detect the original from the synthesized versions. This proved that the electronic music synthesizer could produce great music. This test so impressed Howard Taubman, the music critic of The New York Times, that he wrote an article on our electronic synthesizer that appeared on the front page of The New York Times. This was indeed, at that time, a revolutionary development. Later, Charles Wuorinen produced his composition "Time's Encomium" on our electronic synthesizer, and this was released as a record. Wuorinen received the Pulitzer Prize for his work. This was the first time that a Pulitzer Prize had ever been given for electronic music of any kind, regardless of how it was produced, whether on an electronic organ or any other electronic instrument. Electronic music synthesizers are, of course, commonplace today, ranging from small keyboard instruments to programmed computers similar to the one we developed two decades ago.

Heyer:

Where was the first synthesizer built?

Olson:

It was built in the RCA Laboratories. We had [Richard] Maltby, who has a large, popular band, work on the synthesizer and [James "Jim"] Timmens, who worked on the electronic music synthesizer at the laboratories. We built a second synthesizer, which is now located at Columbia University in the Columbia-Princeton Electronic Music Center in New York. Many compositions have been synthesized on this instrument in addition to the work by Wuorinen.

Heyer:

They both used the disk with the punch holes?

Olson:

Yes. It is a paper record with punched holes. The one in New York has two punched records so you can punch out two tones at a time, which of course speeds up the process. When you do this, for example, you punch out a series of tones, you record those. This paper record is synchronized with the tape-recording system. The tape-recording system uses a sprocket type of tape, so it can be synchronized. You can then record seven series of tones on one tape, combine these seven into one tone; then continue again, making seven more tones and so on, so you can have any number you please — up to a thousand if you want to.
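
To make the punched-record idea concrete, here is a minimal sketch of how one row of holes can carry the parameters of a single tone event in digital form. The bit layout and field names are invented for illustration; they are not RCA's actual encoding.

```python
# A sketch of a digitally punched score record: each row of holes is a
# binary word whose bit fields set the parameters of one tone event.
# The field layout here is hypothetical, invented for illustration.
FIELDS = [("frequency", 4), ("octave", 3), ("envelope", 3),
          ("timbre", 3), ("volume", 3)]

def decode_row(holes):
    """Decode one row; holes is a string of '1' (punched) and '0' (blank)."""
    assert len(holes) == sum(bits for _, bits in FIELDS)
    params, pos = {}, 0
    for name, bits in FIELDS:
        params[name] = int(holes[pos:pos + bits], 2)  # read one bit field
        pos += bits
    return params

# Two rows of a hypothetical record; the composer punches, listens,
# and re-punches until the tone is exactly what he wants.
for row in ("1010011001010110", "0110101010100011"):
    print(decode_row(row))
```

Punching or plugging a hole flips one bit of one field, which is why the composer could revise a tone simply by re-punching the record.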

Testing

Heyer:

I’m interested in the first demonstration you did when you analyzed the performances and then duplicated them with the synthesizer. Could you give me an idea of the process that was involved in that?

Olson:

What we did was this: we analyzed first the amplitude and frequency range of each tone of the original, the growth and decay and the tempo, i.e. the space in between the tones. An interesting thing happened. We had an Iturbi recording, which we analyzed so much we had worn out the record. We got a new record, but it was a different release and it sounded much more mechanical. We had to use that in the original excerpt and everybody said, “That has to be the synthesized version”.

Heyer:

So you fooled everyone with a live performance?

Olson:

Yes. [Laughter]. No one could really tell. It was really just guessing because even Taubman said when he came down, “I can tell electronic music a mile away.” But when he started to take the test, he said, “Well, this has me stumped.”

Heyer:

How long did it take you to do those tests?

Olson:

The excerpts were about fifteen to thirty seconds, so it took about a day to analyze a record. Then it took another day to synthesize it. So it took at least a month to do this. We had synthesized versions before this, such as "Blue Skies" and some other popular versions. When General [David] Sarnoff heard that, he said he would bring a music director of NBC down, and he did. This man said, “Engineers should not be fooling around with this sort of thing because it could never produce great music,” so then we decided to do this test. As a result, when General Sarnoff heard it again, he said to this music director, who couldn’t tell which was which, “I think you've proved that it can indeed produce great music in the hands of someone who knows how to operate the machine.”

Sound Quality

Heyer:

It seems synthesizers really haven’t been accepted as a commonplace until fairly recently.

Olson:

I think it really didn’t come about until the 1960s. The move started in the years around 1960, and it took hold fairly well. But until they had this Bach selection [Switched-On Bach (Columbia Masterworks, 1968)], it didn’t really catch hold. That was one of the top records. Since then, there have been many top records. As a matter of fact, a lot of these rock bands now have several manual synthesizers in their combinations.

Heyer:

It seems to me kind of amazing that you were able to produce very realistic sounds. People associate synthesizers with crude synthesizer music that sounds very obviously electronic, which is how the critics think of all electronic music. But it's interesting to me that in 1948 [1955] you were able to produce something that fairly exactly duplicated the real sound.

Olson:

If you can produce the overtone structure, the amplitude, the growth, and the decay, you can duplicate the instrument exactly. You can also very easily duplicate the spacing between the tones and the amplitude of the tones, which of course, in the case of the piano are important. The fingers do not all have the same strengths, so the tones don't all have the same amplitude. We simulated Iturbi, Rubinstein, and Horowitz in the way they play it. They play it, of course, differently.

Heyer:

Do you think, when people talk about the difference between a good guitar and a great guitar that really sounds good, the difference they are hearing is very subtle overtones?

Olson:

Yes, it is. There are very, very subtle differences. In the Stradivarius violin, as I understand it, the first overtones are quite strong, but the very, very high overtones are not as strong. In the cheaper violins, the higher overtones are probably stronger. In addition, the fundamental is stronger in the Stradivarius than it is in the others. But these higher overtones tend to produce dissonance and sounds which are not too desirable, and this is one of the reasons why the Stradivarius is so popular.

Heyer:

So you would have a strong fundamental every octave?

Olson:

Yes. That's right. Of course, in the case of Kreisler, he had, I imagine, a Stradivarius or Guarneri violin. We synthesized that. The violin was the most difficult to synthesize; the piano was quite easy.

Heyer:

Because you have a percussion?

Olson:

Yes. And the tone dies out, so you have a discrete signal. The violin has the portamento, the sliding from one tone to another. We have this sliding of one tone to another in the synthesizer as well. You have to have that in the case of the violin because oftentimes they just slide from one tone to another in a continuous glide.

Heyer:

Had you done any experiments previous to that?

Olson:

No. We built up each part separately. We first got the tone generators, and they were tuning-fork generators. We had no problem with that. We had not only the equally tempered scale, but we also had the so-called "just scale" in the instrument. We proved that in the case of the violin, when it plays solo, it can play in the just scale, which is more pleasing than the tempered scale. There is a clash between the various tones and the overtones in the tempered scale, whereas in the just scale we have ratios of 2:3:4 and 3:4:5 and so on, which, of course, do not occur in the tempered scale.
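
As an editorial aside, the clash Olson describes can be put in numbers: the sketch below compares equally tempered intervals with the corresponding just-intonation ratios. The choice of intervals is illustrative.

```python
import math

# Compare equal-tempered intervals with just-intonation ratios.
# In the just scale, chords fall in small whole-number ratios such as
# 4:5:6, so the overtones of simultaneous tones line up; the tempered
# intervals miss those ratios slightly, producing the clash described.
just = {"major third": 5 / 4, "perfect fourth": 4 / 3, "perfect fifth": 3 / 2}
tempered = {"major third": 2 ** (4 / 12),     # 4 equal semitones
            "perfect fourth": 2 ** (5 / 12),  # 5 equal semitones
            "perfect fifth": 2 ** (7 / 12)}   # 7 equal semitones

for name in just:
    cents = 1200 * math.log2(tempered[name] / just[name])
    print(f"{name}: just {just[name]:.4f}, tempered {tempered[name]:.4f}, "
          f"off by {cents:+.1f} cents")
```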

Velocity and Shot-Gun Microphones

Heyer:

I am interested in the interaction between the Hollywood producers and you, working here in New Jersey. Did they come to you? RCA was involved, I guess, in the very first sound development.

Olson:

Yes, we had studios that RCA bought there. They bought the RKO studios and named them "The Radio Pictures Studios." It became quite apparent that the long-distance pickup required in order to keep the microphone out of the picture led to many difficulties, particularly the reverberant sound. They could keep things fairly quiet, but there were still the noises of the cameras, which also gave some problems because they would get into the microphone. So they built sound stages with a tremendous amount of absorbing material, several inches thick. But that was not enough to reduce the reverberation because the set itself had a reverberant characteristic. Obviously, if we had a more directional microphone it would discriminate against the sound, which was bouncing around in all directions. We started out to develop a directional microphone. The obvious solution was a velocity microphone. There are two components in a sound wave, a pressure component and a velocity component, which are analogous to the voltage and current in an electrical system. The pressure microphone is not directional and responds to the pressure in a sound wave, whereas a velocity microphone is directional because the particle velocity is a vector quantity and is therefore a directional quantity.

So I decided that the velocity microphone would have directivity, and I proceeded to develop a velocity microphone. It is a microphone that responds to the particle velocity in a sound wave. This had a bidirectional characteristic, a cosine characteristic of a figure-eight type, and this indeed did discriminate against noise. Later on, they decided that the two lobes were a disadvantage in some cases and they wanted a microphone that would pick up only in one direction, so we started to work on that. This is really a combination of a pressure and a velocity microphone because, when you add the two, you obtain a cardioid pattern. That is indeed a unidirectional pattern. This microphone has been used ever since that time in sound motion pictures. With the advent of television it has been used exclusively for distance pickup in television on the boom. It has also been used in sound reinforcement systems and all other applications where directional microphones are required.
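
The pattern arithmetic behind the cardioid is compact enough to verify directly. The sketch below is an editorial illustration of the standard pattern sum, not RCA's design equations.

```python
import math

# A pressure element is omnidirectional (response 1), a velocity element
# has a cosine figure-eight pattern, and their equal sum is a cardioid.
def pressure(theta):
    return 1.0                      # omnidirectional

def velocity(theta):
    return math.cos(theta)          # bidirectional figure eight

def cardioid(theta):
    return 0.5 * (pressure(theta) + velocity(theta))  # normalized sum

for deg in (0, 90, 180):
    t = math.radians(deg)
    print(f"{deg:3d} deg: velocity {velocity(t):+.2f}, cardioid {cardioid(t):.2f}")
# 0 deg -> 1.00 (full pickup); 90 deg -> 0.50; 180 deg -> 0.00 (rejected)
```

At 180 degrees the pressure and velocity terms cancel, which is the rear rejection that made the microphone useful on a boom.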

Heyer:

How does the shotgun microphone differ?

Olson:

The shotgun microphone is very much like a wave antenna. It has a series of pickup points along a line and is sometimes called a "line microphone" or a "wave microphone." When sound originates from the side, the outputs of these pickup points are out of phase and there is no pickup. Even at fairly small angles the pickup points are out of phase. So it indeed has a high directivity. But since the wavelength at 100 cycles is around 11 feet, if you are going to go down to 100 cycles, the microphone must be around 10 feet in length. Most of these microphones pick up speech. This can be limited to around 200 cycles, so the microphone can be around 5 feet in length and still obtain very high directivity. We also have another microphone that has very high directivity which has been used in many applications where there are difficulties in the pickup. It is what we call the "second-order gradient." It is really a cosine multiplied by a cardioid, which provides a very highly directive microphone in a very small space. It is a fairly complicated and expensive microphone, but it has been used where there are difficulties in the pickup.
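
Olson's length figures follow from the wavelength relation, as this quick editorial check shows (using a round value for the speed of sound in air):

```python
# Wavelength = speed of sound / frequency. A line microphone has to be
# on the order of a wavelength long to stay directive at that frequency.
SPEED_OF_SOUND_FT_PER_S = 1130      # in air at room temperature, roughly

def wavelength_ft(freq_hz):
    return SPEED_OF_SOUND_FT_PER_S / freq_hz

print(f"100 cycles: {wavelength_ft(100):.1f} ft")   # about 11 ft
print(f"200 cycles: {wavelength_ft(200):.1f} ft")   # about 5.7 ft
```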

Heyer:

I don't think I have ever seen one.

Olson:

They are about a foot in length.

Heyer:

Is it like the Sennheiser?

Olson:

No. No one but RCA has the second-order gradient.

Heyer:

I'm thinking back to the movies I have seen, the velocity microphones are the ones with the heavy grille work around them.

Olson:

That's right. They had the shaped case. That was functionally designed that way.

Heyer:

The previous ones were the ones that were always on a suspension.

Olson:

Yes. That's right. They were pressure microphones. Condenser microphones were used because they have a very high quality. They operate over an entire audio frequency range, but they are omnidirectional or non-directional. They picked up in all directions.

Acoustic Laboratory at RCA

Heyer:

Let me ask you a little about your situation. What was your position at RCA at the time you were developing the velocity microphone? Was the Acoustic Laboratory well under way at that point?

Olson:

I started out at Van Cortlandt Park [RCA Laboratories in New York City], and I was associated there with Dr. [Irving] Wolff and Abraham Ringel. Three of us worked in the field of acoustics. I was a staff engineer. Then we moved to [RCA Victor in] Camden [NJ]. Julius Weinberger was in charge of the acoustic laboratory in Camden until around 1935, when he transferred to New York and I was placed in charge of the acoustic research. We moved to [RCA Laboratories in] Princeton [NJ] in 1942, but from 1942 to the beginning of 1946 we were engaged in underwater sound work.

Heyer:

For the [U.S.] Navy?

Olson:

That's right. We worked on that, and I had a group of about twelve in the laboratory. Then it expanded from then on.

Heyer:

Do you want to say more about this?

Olson:

Do you mean expansion in the laboratory?

Heyer:

Was this a lot of work?

Superdirectivity

Olson:

Yes. One of the first problems was to obtain a projector that would have a very high directivity so that the sound would be concentrated in a very narrow beam. Then, when you sent out the beam, it would be reflected from the submarine in such a way that you would have a very good bearing on the submarine. We worked on highly directional projectors and developed one with what in antennas is termed "superdirectivity." This projector did indeed incorporate superdirectivity.

Heyer:

Was the beam like a beam from a parabolic antenna?

Olson:

Very similar to that. It had an angle of perhaps about plus or minus 5 degrees at the 3 dB points. It was a very narrow beam.

Heyer:

Was the generator some kind of tube?

Olson:

No, that was a diaphragm type of system, with magnetostriction drivers to drive the diaphragms. There were 100 magnetostriction rods which were surrounded by coils. These rods would resonate at 25 kc. The entire diaphragm moved as a piston at 25 kc because of the large number of these magnetostriction rods on the diaphragm.

Heyer:

They were parallel, in and out?

Olson:

That's right, yes.

Heyer:

That does not seem to be quite that high in frequency, does it?

Olson:

Well, some of them were lower. At a higher frequency you got the greater directivity. So it was a compromise. There was some attenuation at the higher frequencies. Since then, they used much larger projectors so they could go down lower in frequency and obtain larger ranges.

Heyer:

Also in making movies, you have to be able to hear it.

Olson:

Well, of course, yes. But that was a beat note that you hear. That is a beat between another oscillator and the incoming wave, which produces the audible tone.
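
The beat-note arithmetic can be shown in a couple of lines; the local-oscillator frequency below is an assumed value for illustration, not a figure from the interview.

```python
# The inaudible 25 kc echo is mixed with a local oscillator; the ear
# hears the difference frequency. The oscillator value is assumed.
echo_kc = 25.0        # returning sonar echo
local_osc_kc = 24.0   # local oscillator, chosen for illustration

beat_hz = abs(echo_kc - local_osc_kc) * 1000
print(f"audible beat note: {beat_hz:.0f} Hz")   # a 1000 Hz tone
```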

Loudspeakers

Heyer:

I see. How about your loudspeaker work?

Olson:

The first loudspeaker work we did was in connection with loudspeakers for the theater. Originally, we used loudspeakers very similar to what you had in radios and phonographs. The difficulty there was that these speakers were fairly wide in directivity and the sound would bounce around from the walls. So we started work on horns, which indeed have very good directivity. We did all-horn loudspeakers for the theater, and that solved the problem of the sound bouncing around. One other advantage of the horn is the high efficiency. In a direct-radiator loudspeaker, the type that you have in radios and phonographs and televisions today, the efficiency is less than 5 percent; it is somewhere around 2 percent. With a well-designed horn loudspeaker, you can get 25 to 50 percent efficiency. Since the theater requires a lot of power, it is important to have a high-efficiency loudspeaker so that the amplifier won't be so large. In those days we used vacuum tubes, so it was difficult to obtain high power from the amplifier. Today, with solid-state systems, there is no problem obtaining a kilowatt of power.
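
The practical force of those efficiency figures is easy to see. In this editorial sketch, the acoustic power requirement is an assumed value; the efficiencies are the ones Olson quotes.

```python
# Electrical power needed = acoustic power / efficiency. The acoustic
# requirement is an assumed figure; the efficiencies are Olson's.
acoustic_watts = 2.5   # assumed output needed to fill a theater

for name, efficiency in (("direct radiator", 0.02), ("horn", 0.25)):
    print(f"{name}: {acoustic_watts / efficiency:.0f} W of amplifier power")
# direct radiator: 125 W (hard with tubes); horn: 10 W (practical)
```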

But in those days 10 and 25 watt amplifiers were somewhat difficult to build. So it was important to have a high-efficiency speaker, and we developed the high-efficiency horn. Later we started work on improving the frequency range of loudspeakers. We ran into the difficulty that the people did not prefer this due to distortions in the system. We did develop these for NBC, for monitoring loudspeakers. From the microphone through the amplifier we had very low distortion, so there was no problem there. But in the case of the phonograph and the radio, in order to produce instruments of low cost, the distortion was indeed high. These high-fidelity loudspeakers came into play after we had performed this experiment on frequency preference. The wide-range loudspeakers we developed were indeed used in the instruments which we produced with the wide frequency range. We also developed the air-suspension loudspeaker, which is a direct-radiator loudspeaker with the back completely enclosed. The enclosed air then supplies the stiffness of the system instead of the surround of the cone. This reduces the distortion very much because the surround in the loudspeaker is inherently non-linear and produces distortion, whereas the air in the cabinet is linear and does not produce distortion.

Heyer:

In these instances, where was the distortion originating? Was it in the records, the recording techniques, or the electronics?

Olson:

It was mostly in the amplifiers. The amplifiers produced most of the distortion. In the radio receivers, the distortion occurred in the detection system for the most part. As a result, the linear detectors were developed. [Stuart] Ballantine developed the linear detector, which had very low distortion. This was then followed by amplifiers with very low distortion. This required more money because, to produce a system with low distortion, great care has to be taken in all elements of the system, i.e. in the pre-amplifiers and the power amplifiers, in order to reduce distortion. One of the largest sources of distortion was the pentode [electron tube]. At about that same time the feedback systems came in, which made it possible to reduce distortion by the use of feedback. This was a big help in obtaining systems of low distortion.

Heyer:

Those were mostly class A amplifiers they had?

Olson:

Yes, but they did have the class B because of the fact that they produced high power at very low cost. As you know, they introduced a great deal of distortion. This, I think, was one of the reasons why the pentodes became very popular at that time, after the war. Distortion was very high in these. If you cut off the frequency range at 5,000 cycles, then you tended to reduce the higher components.

Heyer:

Did your work on speakers demonstrate to people that if the distortion was at a low level people preferred to hear the full range?

Olson:

Yes. As a matter of fact, following this experiment with the acoustic filters, we then performed an experiment with the orchestra. We had the listeners in another room and we carried the sound there by means of two channels, i.e. really stereophonic sound — though it was long before stereophonic sound was used. We used a very low-distortion system in the amplifier. We repeated the experiment and found that the listeners preferred the full frequency range, the same as they had with the original orchestra. As a matter of fact, the tests indicated an even greater preference for the full frequency range than in the case of the orchestra direct. We attributed that to the fact that there were some noises in the orchestra that the people heard which were a little distracting. The orchestra was not as careful as they are when they know they have a microphone around.

High Fidelity

Heyer:

After the war, that's when the whole idea of high fidelity comes around?

Olson:

That's right. Exactly. After our tests, which we published, everyone set out to see what the reason was. We had come to the conclusion, and all the others came to the same conclusion, so people started to develop systems with low distortion. The vinyl record came in shortly thereafter. It had very low surface noise. The surface noise of the shellac records was another reason for the restricted range in phonographs. The advent of the vinyl record reduced that noise, and it was no longer a problem.

Heyer:

It sounds like all the factors were coming together?

Olson:

That's right.

Heyer:

Everything was getting better. Since you did that test with two speakers, you must have had an understanding of the stereo effect and sound perspectives?

Olson:

Yes, we did. Others had experimented with that. As a matter of fact, the Bell [Telephone] Laboratories had carried out experiments on stereophonic sound with the Philadelphia Orchestra [in 1932]. They picked up the Philadelphia Orchestra in Philadelphia and reproduced the performance in Washington D.C. by using two and three channels. Around 1920, [Ernst?] Alexanderson actually had the stereophonic sound. He used two microphones and two loudspeakers, and two different rooms. This was the first instance I know of stereophonic sound.

Heyer:

What was the date for that?

Olson:

It was around 1920. I have not been able to find any record of that, but Alexanderson told me that he performed an experiment like that.

Heyer:

Interesting. The ideas are always there; it is just a matter of finding a way to make them work.

Olson:

That's right. The conditions have to be right in order for a development to take hold. There are a lot of factors that conspire to make a system successful or unsuccessful. Besides the actual commercial aspects of it, there are some technical aspects that have to be right before it can be successful.

Quadraphonic Sound

Heyer:

What were the earliest experiments in quadraphonic?

Olson:

In the early 1960s we carried out experiments in quadraphonic sound. The RCA record division did indeed record in quadraphonic sound quite early because they felt that it would be something that would be coming along. They recorded not only in two-channel stereo, but also in four-channel quadraphonic sound. We carried out many experiments starting in the 1960s on quadraphonic sound. Of course, there are two aspects of quadraphonic sound in the classical field. You have the stereophonic sound, that is, the auditory perspective, where you can pick out the instruments in the orchestra. Then you have the envelope, that is, the reflected sound. Stereophonic sound cannot produce the envelope properly in a small room, such as a room in a home. But the use of four loudspeakers, with the loudspeakers supplying the reverberation envelope, makes this very realistic from the standpoint of reproduction of symphonic music. With more popular music, four-channel sound has other great possibilities. You can make the sound go around, or switch back and forth, which of course provides artistic aspects that are impossible in two-channel sound. Another thing about four-channel sound is that it can carry twice as much information as two-channel sound, in the same way that two-channel sound carries twice as much information as monophonic sound, provided you take full advantage of the four- or the two-channel system. So the four-channel system has a tremendous advantage from the standpoint of the transmission of information.
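
The go-around effect Olson mentions lends itself to a short sketch. The following is a generic four-corner panning law, an editorial illustration rather than RCA's actual quadraphonic technique.

```python
import math

# Pan a source around four corner loudspeakers by weighting each channel
# with the cosine of the angle between source and speaker (nearest
# speakers only). A generic panning law, not RCA's encoding.
SPEAKERS_DEG = {"left front": 45, "right front": 135,
                "right back": 225, "left back": 315}

def quad_gains(source_deg):
    return {name: max(0.0, math.cos(math.radians(source_deg - spk)))
            for name, spk in SPEAKERS_DEG.items()}

for angle in (45, 90, 135):   # source sweeping across the front
    print(angle, {k: round(v, 2) for k, v in quad_gains(angle).items()})
```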

Heyer:

What are the limits of hearing in terms of being able to hear things? I have been interested in speech compressors.

Olson:

In speech there is a tremendous amount of redundancy, which we found in our work with the phonetic typewriter. You can indeed compress speech in many different ways and still transmit the information because of the great redundancy in speech.

The Phonetic Typewriter and Speech Compression

Heyer:

<flashmp3>026 - olson - clip 2.mp3</flashmp3>

Why don't you tell me a little about the phonetic typewriter?

Olson:

We felt that we could develop a system which would provide the possibility of speaking into a microphone and have the output on a phonetic typewriter, which would type out on a page what is spoken into the microphone. In the case of speech, in the words you have syllables and in the syllables you have phonemes. There are around forty phonemes in the English language, some 2,000 syllables, and about 100,000 words. The phoneme is very difficult to analyze out of context because one phoneme runs into another one. So we decided to work on the syllable approach. We also used phonemes when we could.

We analyzed the syllables. We first divided out the syllables in a word. This is not too difficult to do because there is indeed a spacing of a sort between the syllables in a word. There may be an amplitude spacing or a frequency spacing. So it's possible to separate a word into syllables. Then, when you have the syllable, you have the frequency, time, and amplitude pattern for each syllable. It is different for each particular syllable. We decided that if we had 200 syllables, we could do pretty well in the English language. These are then analyzed by the phonetic typewriter as you speak into the microphone, by means of the logic system and the storage system. It types out on the typewriter the syllable that was spoken into the microphone. Of course, it is a phonetic thing, so that if you pronounce "Hoyle" as "herl", it would indeed type out "erl" and not "oil." It types out what it hears, so that it could not be used for a letter to be sent out. But it could be used for a memorandum and it could be used instead of dictation. The secretary could type from the phonetic typewriter output and put it in the proper spelling.

Heyer:

You actually built these devices?

Olson:

We built one with a memory of 200 syllables, and it worked fairly well. There is one other problem with the phonetic typewriter, which is true of all analysis of speech, namely that you have to have a memory for each particular person. Each person has to have a personal memory. This can be done by a person speaking in each of the syllables, say ten times, and loading up the memory with these syllables. Then, when the person speaks into the phonetic typewriter and his particular memory is loaded in, the machine types out his speech. Otherwise, the machine would operate correctly only about 75 percent of the time for a voice that is similar to the loaded one, and as little as 25 percent if the voices are entirely different, e.g. a man and a woman.
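
The enroll-then-recognize procedure Olson describes can be sketched compactly. In this editorial illustration, short number lists stand in for the frequency-time-amplitude patterns, and the syllable names and values are invented.

```python
# "Load the memory" by averaging repeated examples of each syllable,
# then match an incoming pattern against that personal memory. Short
# number lists stand in for frequency-time-amplitude patterns.
def enroll(samples_per_syllable):
    memory = {}
    for syllable, samples in samples_per_syllable.items():
        memory[syllable] = [sum(v) / len(v) for v in zip(*samples)]
    return memory

def recognize(pattern, memory):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(memory, key=lambda s: dist(pattern, memory[s]))

# Toy enrollment: two invented syllables, a few repetitions each.
memory = enroll({
    "hoy": [[1.0, 0.8, 0.2], [0.9, 0.7, 0.3], [1.1, 0.9, 0.2]],
    "erl": [[0.2, 0.5, 1.0], [0.3, 0.4, 0.9], [0.2, 0.6, 1.1]],
})
print(recognize([0.95, 0.75, 0.25], memory))   # -> hoy
```

A speaker whose voice differs from the loaded templates mismatches more often, which is the 75 percent versus 25 percent behavior Olson describes.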

Heyer:

When were you working on that?

Olson:

We worked on that between about 1952 and 1960.

Heyer:

That sounds pretty reasonable because you would have to have memory devices available at that point.

Olson:

Yes.

Heyer:

What kind of memory did you use? Did you draw upon computer technology for this?

Olson:

In general, that's what it was, computer technology, very similar to that for the memory.

Heyer:

It seems to me that advances in solid-state memories and processors would make a device like this somewhat more feasible.

Olson:

Yes. RCA developed, not in our laboratory but in Camden, a device for the post office so that a man could read out the zip code and the device would separate the letters. I think this has been fairly successful, and of course you only have the ten digits for the memory, and each particular person can speak in the ten digits to load the memory and operate that way. There is, of course, a question of whether that's more desirable than a keypunch system. They are trying to determine whether it is better or not.

Heyer:

Did you consider applying these ideas to translation devices?

Olson:

Yes, but we didn't get into that. That's very difficult because, as you well know, there are sentences for which you get an entirely different meaning depending on the context.

Heyer:

I have heard of all the problems that translating systems have. Does the typewriter have trouble keeping up with someone if they talked fast or did you have to pace yourself?

Olson:

You had to pace yourself with ours. The memory was such in the analyzing system that you had to speak fairly slowly, much more slowly than I am speaking now. You had to enunciate rather clearly and speak fairly slowly.

Heyer:

Have you seen the speech compressors that are on the market now? You make a dub from one tape to another, essentially. There is one type that chops out at regular intervals. Then there are other ones, supposedly, which are more discriminatory, that chop out spaces in reference to words.

Olson:

One of the problems in handling all of these systems is the fact that you have this chopping frequency that comes into the picture and is somewhat annoying at times. WOR tried this several years ago on some of its newscasts in order to speed up the newscasts, and people did not seem to object to it. Of course, it was not speeded up very much, perhaps 5 percent.
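
The chop-at-regular-intervals compressor is easy to sketch. The block length and speed-up factor below are illustrative assumptions, not figures from the interview; the block rate corresponds to the chopping frequency Olson says can become audible.

```python
# Keep a fixed fraction of every short block of samples and splice the
# remainder together; dropping 5% of each block gives a 5% speed-up.
def chop_compress(samples, block=300, keep_fraction=0.95):
    kept = []
    for start in range(0, len(samples), block):
        piece = samples[start:start + block]
        kept.extend(piece[:int(len(piece) * keep_fraction)])  # drop the tail
    return kept

speech = list(range(3000))             # stand-in for audio samples
faster = chop_compress(speech)
print(len(speech), "->", len(faster))  # 3000 -> 2850, about 5% shorter
```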

Heyer:

I listen to hundreds of hours of tapes in the process of doing these programs, and it would speed up my process a lot if I could double the speed.

Olson:

I guess they can double the speed, alright.

Heyer:

I have gotten proficient at listening at double speed. I concentrate and turn the volume up a little bit so I can hear. The sound is shifted toward the high end of the frequency spectrum. It is possible to listen, but you really have to concentrate.

Auditorium Sound Systems

Heyer:

Do you want to say something about auditoriums and the sound-reinforcement systems that you worked on?

Olson:

We worked on sound-reinforcing systems all the time that we were involved in theater work, from the early 1930s up to the present time. One of the developments which we carried out at the RCA Laboratories was on microphones located above the stage, so that anyone could walk around the stage and perform an experiment on the stage and it would be picked up by these microphones. We had loudspeakers distributed all through the ceiling, so that we had complete coverage over the entire listening area. In addition to that, we had a delay between the microphone and the loudspeakers so that the first sound a person heard was a sound which originated on the stage, even though it may have been very weak. If you have a delay, so that the sound that emanates from the loudspeaker is behind the sound that comes from the stage directly, then all sound appears to be coming from the stage.

This is a psychological phenomenon, which says that the first sound that you hear determines the direction of the sound. If you delay the sound in the loudspeakers, the sound will appear to come from the stage even though it’s weaker from the stage. As you travel down the auditorium, each loudspeaker is delayed so it tends to send a wave from the stage down to the audience. This contributes even more to the fact that the sound appears to originate on the stage. The delay tends to reduce feedback in the system as well; a little bit, perhaps 3 dB or so, of improvement in the acoustic feedback. The fact that you pick up on the stage at a large distance, and that we use a second-order gradient microphone hidden in the ceiling, made it possible to perform an experiment without having a microphone hanging around one's neck. The speaker had perfect freedom to move around. Since then, many auditoriums have been built of this type.
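
The delay arithmetic behind this system can be made concrete. In the sketch below, the hall geometry and the precedence margin are assumed values for illustration only.

```python
# Delay each ceiling loudspeaker so the direct stage sound arrives
# first; by the precedence effect, the sound then appears to come from
# the stage. Geometry and margin are assumed for illustration.
SPEED_OF_SOUND_M_PER_S = 343
MARGIN_MS = 15   # assumed extra delay so the stage sound clearly leads

def loudspeaker_delay_ms(stage_to_listener_m, speaker_to_listener_m):
    direct_ms = 1000 * stage_to_listener_m / SPEED_OF_SOUND_M_PER_S
    speaker_ms = 1000 * speaker_to_listener_m / SPEED_OF_SOUND_M_PER_S
    return max(0.0, direct_ms - speaker_ms) + MARGIN_MS

# Rows farther down the hall get progressively longer delays.
for row_m in (5, 15, 30):
    print(f"listener {row_m:2d} m back: {loudspeaker_delay_ms(row_m, 3):.1f} ms")
```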

Heyer:

I keep thinking that sound systems in auditoriums are usually so bad.

Olson:

That's true, yes.

Heyer:

I was interested in reading about this system, but never really thought about it very much. It certainly seems that if you were designing an auditorium, it is the obvious thing to do. You say you used a second-order gradient microphone?

Olson:

Yes, a highly directional microphone.

Heyer:

That would cut down on feedback, too?

Olson:

Yes, that's right. It cuts down the feedback very much.

Heyer:

Did you use that kind of a set-up for recording symphonies?

Olson:

That sort of thing has been used, but we haven't used it in our small auditorium. It has been used in the Philadelphia Academy of Music, for example, in some of the tests down there.

Heyer:

Are there any other big auditoriums that have used it?

Olson:

There are many university auditoriums that use that sort of system. I think Purdue and Indiana University do in some of their auditoriums.

The Music Composing Machine

Heyer:

Let's discuss the music composing machine.

Olson:

Yes. We started off on that project with the idea of analyzing the music of Stephen Foster. We would feed his songs into a memory and then extract randomly from the memory with the system, so that we would get a few bars that sounded like Stephen Foster, which we would record. Then we would listen again; we were always recording. We would get a few bars that sounded like Stephen Foster, but it wasn't any existing Stephen Foster music. We finally ended up with a new Stephen Foster selection. We would play this for audiences and ask them who the composer was, and they would immediately say Stephen Foster. But when you asked them which composition of Stephen Foster, they couldn't say. So a man who is composing can use a few of his compositions to get ideas for new compositions by feeding everything he has into this machine and then listening to it.
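
The behavior Olson describes, drawing short runs at random from a memory of a composer's music so the output shares its local patterns without copying any one piece, resembles a low-order Markov chain. This is a sketch of that analogy with a made-up toy corpus, not the machine's actual circuitry.

```python
import random
from collections import defaultdict

def train(melodies, order=2):
    """Build a transition table: each `order`-note context maps to the
    notes that followed it somewhere in the source melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for i in range(len(melody) - order):
            table[tuple(melody[i:i + order])].append(melody[i + order])
    return table

def compose(table, length=16, order=2, seed=None):
    """Draw notes at random from the table, so the output sounds like
    the source composer without repeating any one piece."""
    rng = random.Random(seed)
    out = list(rng.choice(list(table)))
    while len(out) < length:
        choices = table.get(tuple(out[-order:]))
        if not choices:  # dead end: restart from a random context
            choices = table[rng.choice(list(table))]
        out.append(rng.choice(choices))
    return out

# Toy "corpus": two made-up tunes as note names.
tunes = [["C", "E", "G", "E", "C", "D", "E", "C"],
         ["G", "E", "C", "D", "E", "G", "E", "C"]]
print(compose(train(tunes), length=12, seed=1))
```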

Heyer:

Sort of randomly putting it back together.

Olson:

Yes, that's right.

Heyer:

That's interesting, like the electronic free associations for a composer. When was that done?

Olson:

That was done around the late 1950s and the early 1960s [late 1940s and early 1950s], I believe.

Heyer:

Has anything more happened with it?

Olson:

[James "Jim"] Timmens used it a little bit. I think it has some limitations, and of course some of the composers would rather start from scratch.

Heyer:

I can see that a composer might see it as an insult to his musical intelligence.

Olson:

I think so. I have seen in the New York Times where composers have used something similar a couple of times. Their aids of this type were not necessarily like what we had, but something similar.

Heyer:

I get the impression that today some composers let the synthesizer do the composing.

Olson:

Yes, I guess so, especially some of the rock music today.

Heyer:

Some of the randomness is...

Olson:

Seems to be perfectly random.

Future and Problems of Loudspeakers

Heyer:

Are you going to make any more predictions for the future of sound?

Olson:

I think one of the problems is the case of the loudspeaker, in that in order to reproduce low frequencies it is necessary to use a very large cabinet for the loudspeaker. One objection to using a quadraphonic system is that you have to have four large loudspeakers, because reproduction of the low-frequency range requires loudspeakers of large size. We worked on one system in which we had a throttled air stream. This did produce a good low-frequency response. We had an air stream and we would throttle it at the audio frequency rate. This would then produce the sound because, after all, all a loudspeaker does is push out and withdraw air. The throttled-air loudspeaker is very much like the vocal cords: it throttles a steady air stream and converts that into sound vibrations. Although it worked fairly well, the big problem we had was that we couldn't get rid of the air noise in the loudspeaker.

There are other ways to obtain low frequencies, for example by means of ionized air between two plates. The plates are actuated at audio frequencies, which then pull the ionized air first in one direction, toward one plate, and then in the opposite direction when the polarity changes. This would indeed make a very small low-frequency loudspeaker. There are many possibilities here, although none of them has been very successful so far. In any case, I think the large volume occupied by the loudspeaker is one of the things limiting the reproduction of sound today.

Heyer:

Did you get adequate bass response?

Olson:

That's right.

Heyer:

I know I have seen electrostatic tweeters on the market.

Olson:

Yes. Of course, they are in the high-frequency range. The electrostatic loudspeaker has not been very successful in the low-frequency range because fairly large amplitudes are required, and this is difficult in the case of the electrostatic loudspeaker. You have to push out a certain amount of air in order to produce a certain sound level, and of course the amount becomes greater in the low-frequency range than in the high-frequency range. That is one of the big problems. You have to have a fairly large-diameter cone, and if you do, you have to have a large cabinet, or the stiffness must be so great that the response cuts off before attaining the low-frequency range. Many attempts have been made in various ways. One way is to use a smaller-diameter cone with a very long travel, which of course does fairly well, too, in reproducing low frequencies.
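
Olson's point that the required air displacement grows toward low frequencies can be put in rough numbers with the standard simple-source approximation (far-field pressure p = pi * rho * f^2 * S * x / r for a small piston of area S and excursion x). The SPL target and cone size below are assumptions for illustration, not values from the interview.

```python
import math

RHO = 1.2      # air density, kg/m^3
P_REF = 20e-6  # reference pressure for SPL, Pa

def peak_excursion_m(spl_db, freq_hz, cone_radius_m, distance_m=1.0):
    """Approximate cone excursion needed for a given SPL at a given
    distance, modeling the woofer as a small simple source.

    Since x = p * r / (pi * rho * f^2 * S), the excursion grows as
    1/f^2 as frequency falls -- Olson's low-frequency problem.
    """
    p = P_REF * 10 ** (spl_db / 20.0)        # target pressure
    area = math.pi * cone_radius_m ** 2      # piston area
    return p * distance_m / (math.pi * RHO * freq_hz ** 2 * area)

# 90 dB SPL at 1 m from a 10 cm radius cone:
for f in (30, 50, 100, 1000):
    x_mm = 1000 * peak_excursion_m(90, f, 0.10)
    print(f"{f:4d} Hz -> {x_mm:6.2f} mm excursion")
```

At 30 Hz the sketch calls for roughly a thousand times the excursion needed at 1000 Hz, which is why a small cone must travel very far, or a large cone and cabinet must be used.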

Heyer:

That's a small diameter cone.

Olson:

Yes, but having a large travel, so that it could still push out and withdraw an adequate amount of air in the low-frequency range.

Heyer:

A long travel is one of the attributes of the air suspension system?

Olson:

Yes, that's right. You can have a fairly large travel there because the distortion introduced by the suspension system is quite small.

Heyer:

In your system, what was the mechanism of the throttle?

Olson:

We had two cylinders with apertures. When these apertures coincided, you got the maximum amount of air coming out. When the inner cylinder moved up, it could close off the air completely. It was positioned so that with no signal the apertures were half-closed. This cylinder was actuated by a voice coil. The voice coil moved the cylinder up and down and opened and closed the apertures, or partially opened and partially closed them. The air was fed into the cylinder and emanated from these apertures. We had about 100 apertures like this. The diameter of the cylinder was about four inches, and its height was about four inches. This produced very good low-frequency response, but the air noise through these apertures was not very desirable.
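
A toy numerical model of the throttling mechanism as described: only the aperture count and the half-closed rest position come from Olson's account; the aperture area and supply-air velocity are assumed figures.

```python
import numpy as np

RATE = 44100
N_APERTURES = 100       # Olson: about 100 apertures
APERTURE_AREA = 1e-5    # m^2 per aperture, an assumed figure
SUPPLY_VELOCITY = 10.0  # m/s steady air stream, an assumed figure

def throttled_volume_velocity(audio):
    """Volume velocity of the modulated air stream.

    `audio` is in [-1, 1]; 0 leaves the apertures half-open (the rest
    position), +1 opens them fully, and -1 closes them, matching the
    voice-coil-driven cylinder.
    """
    open_fraction = np.clip(0.5 + 0.5 * audio, 0.0, 1.0)
    total_area = N_APERTURES * APERTURE_AREA * open_fraction
    return total_area * SUPPLY_VELOCITY  # m^3/s

# Modulate the air stream with a 40 Hz tone. The radiated pressure is
# proportional to the rate of change of volume velocity; the steady
# (DC) component is just the mean airflow.
t = np.arange(RATE) / RATE
u = throttled_volume_velocity(np.sin(2 * np.pi * 40 * t))
p = np.gradient(u, 1.0 / RATE)
print(f"mean flow {u.mean():.4f} m^3/s, peak dU/dt {p.max():.2f}")
```

Nothing in this sketch models the turbulence at the apertures, which is exactly the air-noise problem Olson says they could not eliminate.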

Heyer:

I can imagine. What occurs to me immediately is a boundary-layer switching device, where you can feed in a little bit from either side. I don't know whether they are strictly bistable switches or whether you could switch them in an appropriate way, but I think you could switch the air stream.

Olson:

Yes. You probably could do that. You probably would have the noise of the air stream again.

Heyer:

Yes, anything you do with the moving air stream, you're going to have that problem.