== About Mischa Schwartz  ==


<p>[[Image:Mischa Schwartz 4170.jpg|thumb|left|Mischa Schwartz]] </p>


<p>Schwartz received his bachelor’s degree from Cooper Union, his master’s from Brooklyn Polytechnic, and his PhD (1951) from Harvard, the last thanks to a Sperry Graduate Fellowship. He worked at Sperry from 1947 to 1952, largely on problems in signal detection theory (also the subject of his dissertation). He was a professor at Brooklyn Poly from 1953 to 1973 (head of the EE department 1961-66), where he established a telecommunications group, and since then has been a professor at Columbia, where he helped found the Center for Telecommunications Research (CTR) in 1985 and served as its director until 1988. His research included coincidence detection and sequential detection through the mid-1960s; then, with the development of SABRE, [[SAGE (Semi-Automatic Ground Environment)|SAGE]], and ARPANet, he switched focus to computer networks, particularly performance analysis and queuing theory. He worked on setting standards for networks with the CCITT, CCIR, ISO, and NRC. He has been involved with the IEEE and its Information Theory Group and Communications Society for much of his career, including stints as president of the Communications Society. He has published at least three textbooks: ''Information Transmission, Modulation, and Noise'' (1959), ''Computer Communication Network Design'' (1977), and ''Telecommunications Networks: Protocols, Modeling, and Analysis'' (1987). In the interview he discusses several of his doctoral students, the achievements of the field in general and of institutions with which he is affiliated, such as the CTR, identifies central topics in the field, and speculates on its future. </p>


== About the Interview ==


<p>MISCHA SCHWARTZ: An Interview Conducted by David Hochfelder, IEEE History Center, 17 September 1999 </p>


<p>Interview # 360 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc. </p>


== Copyright Statement  ==


<p>This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center. </p>


<p>Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, 39 Union Street, New Brunswick, NJ 08901-8538 USA. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. </p>


<p>It is recommended that this oral history be cited as follows: </p>


 
<p>Mischa Schwartz, an oral history conducted in 1999 by David Hochfelder, IEEE History Center, New Brunswick, NJ, USA. </p>
 


== Interview  ==


<p>Interview: Mischa Schwartz </p>


<p>Interviewer: David Hochfelder </p>


<p>Date: 17 September 1999 </p>


<p>Place: Columbia University, New York </p>


=== World War II influences on radar, communication theory ===
 


<p>'''Schwartz:''' </p>


<flashmp3>360 - schwartz - clip 1.mp3</flashmp3>


<p>As my first job, fresh out of school in 1947, I was lucky to get a [[Radar|radar]] systems job at Sperry Gyroscope Company, which had pioneered in [[Radar|radar]] during the war. I had a wonderful group to work with, and in the process of doing that, I got heavily involved in communication theory, so I come at the field of telecommunications from communication theory. Before World War II, communications was broadly considered to be the two areas of radio and telephony. Much of it was “seat-of-the-pants” engineering. For example, one of the textbooks we used in those days was the book Radio Engineering by [[Frederick Terman|Frederick Terman]], who was one of the real pioneers going back to the ‘30s. That’s strictly circuit after circuit, with very little analysis or overall systems orientation. I think World War II changed that. At least from my perspective it did. </p>


<p>In our work, many of us learned from the work at the [[MIT Rad Lab|Radiation Laboratory at MIT]] during World War II, which put out a whole series of books on [[Radar|radar]]. While working in [[Radar|radar]] and communication theory, post-1947, I used a lot of that material. At the same time there were people at Bell Labs also working in that area. [[Claude Shannon|Claude Shannon]], for example, the founder of, and a giant in information theory, did a lot of work during World War II on problems related to communication and control for the military. [[Norbert Wiener]] at MIT was also doing work of that type. So, this all came out of problems during World War II, much of it having to do with trying to improve [[Radar|radar]], communications, and system control technology. There were many physicists and mathematicians working on [[Radar|radar]] at the [[MIT Rad Lab|MIT Rad Lab]]. </p>


<p>That’s my own view. They developed a systems unit that had been missing before. You had very good physicists working, good mathematicians, and very good engineers. The systems concepts had been slowly developing, but they really came together there. For example, such problems like detecting signals in noise—a critical issue in communications—came out of work on [[Radar|radar]]. </p>


=== Education; Sperry Gyroscope Company employment  ===


<p>'''Schwartz:''' </p>


<p>I worked at this group at Sperry for two years in 1947-1949, coming in as a young kid, fresh out of school. I had just received a degree at Cooper Union. I started Cooper Union early on during the war and got drafted. I came back after the war, and luckily I had a good advisor who pushed me out in a hurry. He said I didn’t have to take certain courses, and so I got out in ‘47. I started a master’s degree at night at Brooklyn Poly and I was teaching at Cooper Union, too. When you’re young you have a lot of energy to do all kinds of things simultaneously. </p>


<p>At any rate, I got fascinated. Now, here I am fresh out of Cooper Union, I get involved with the [[Radar|radar]] systems group at Sperry, and the first thing I had to learn was probability. I never took a course in probability theory, but here I’m being asked to study signals in noise. What is noise? What are signals? How do you represent these things? What do you mean by detecting signals in noise? Because this was a critical issue in [[Radar|radar]]. So I did a quick learning process, and I must say in all honesty that sometimes I feel that I’m not as good in probability as I should be! All my work has been in statistical communications technology. But I never had a formal course in the subject, so it’s all “seat-of-the-pants” self-learning. I have taught numerous courses in statistical concepts, but somehow or other I still feel I missed the rigorous approach needed to truly learn probabilistic concepts. Anyway, it turned out one of the key areas that we had to learn quickly was noise representation. That was a very hot topic. </p>


<p>First of all, in [[Radar|radar]], how do you pull up signals from noise? I used two primers at that time. One book of the Radiation Laboratory’s series was on the detection of signals in noise, so I went through that. The other was a classic series of papers written by [[Stephen Rice|Steve Rice, S. O. Rice]], at [[Bell Labs|Bell Laboratories]] in 1944 on mathematical noise representation. I found it very difficult as a young kid without enough background in probability when these guys are tossing this stuff at me. For instance, spectral analysis—I didn’t know what that was. The problem is in school it was all seat-of-the-pants kinds of things. Design electronic circuits—there’s no system orientation of any kind. Now suddenly you have to learn about spectrum analysis. Some of this had appeared earlier in the literature but really hadn’t been done in the schools. The whole field has changed now. This came out of World War II where people were dealing with noise representation, representation of signals, spectral analysis. </p>


<p>At the same time, in 1948 (the year after I started at Sperry) hot in the air was [[Claude Shannon|Shannon’s]] work on information theory that came out at that time. That was an eye-opener. I didn’t quite understand all of it either, and that took a lot of learning. So everything was really very exciting in those days, I must say. The whole idea of communication theory: how to improve transmission of signals in the presence of noise; how you cope with this in radio, how you cope with this in [[Radar|radar]]-related ventures. But starting with [[Radar|radar]] and moving into radio, now, how do you handle that? How do you carry out telephone transmission in the presence of noise? All of these ideas began to gel and come together based on work during the war by [[Claude Shannon|Shannon]], Wiener, [[Stephen Rice|Steve Rice]] at [[Bell Labs|Bell Labs]], and the physicists who worked at [[MIT Rad Lab|Rad Lab]]. There was a lot of work being done. Some of this work was still being done in the classified area. There was a very famous report being done by a man named Marcum, who was one of the first to do some basic work on detection of signals in noise for [[Radar|radar]] problems. This was declassified a long time ago and it has been published in a classic set of papers. Our work at Sperry was sort of similar. We used some of his work, and we went beyond his to some extent. So this was “hot in the air” in those days, and that was the most exciting thing. </p>


<p>Nowadays, we look back and we say that’s only part of the communications process, what you call the physical layer. We didn’t call it that in those days. But nobody thought about going up higher or anything like that; just get those signals across and noise is your big problem. So, how do you handle it? How do you characterize the noise? How do you characterize the signals? How do you handle problems of this type? So that’s where things were gelling in those days. </p>


=== Ph.D. studies, Sperry Graduate Fellowship  ===


<p>'''Schwartz:''' </p>


<p>Early in 1949, after I had been there almost two years, Sperry Gyroscope introduced a new doctoral program for people working at Sperry. I applied and was lucky enough to get the award. The most amazing thing is they say to pick any school I want in the country. “So, how much will I get?” “Well, you figure out what it’s going to cost.” I mean, nobody had a price, believe it or not. I could have gone anywhere, assuming I had been accepted. Luckily, I’m an honest guy. I had good grades in my evening Master’s program at Brooklyn Polytech, and as a young kid I had always heard Harvard was the best university in the country. Maybe not the best place for electrical engineering, but I decided to go to Harvard, and I’m glad I did. </p>


<p>So I went to Harvard in ‘49. Harvard had had an engineering school, but they closed it down just as I arrived there, so I went into the Applied Physics program. I was really doing communications work as well as applied physics, but you could do anything you pleased. I submitted a proposal to Sperry to cover my costs there and I thought I was limited in the amount of money, so I really didn’t ask for much. But as the “richest” graduate student on campus, I bought an old car—nobody else could do that. I wasn’t making much money, but it was still better than most students were getting in those days. I was very thankful to Sperry for the Sperry Graduate Fellowship. It was very nice. Summers I came back to work at Sperry. </p>


<p>I went to Harvard and looked around for a thesis topic. I got an advisor, a man named Pierre LeCorbeiller, who was a physicist. He said to me, “I’ve got a nice problem on the double pendulum that I’d like you to solve. Solve it, you get the degree.” I said, “I’m not really interested.” I took some reading courses with him on nonlinear mechanics, stuff like that. So I went scouting around for a thesis topic. I even went over to MIT to talk to some well-known electrical engineers there. One guy was a doctoral student, Bill Huggins, from Johns Hopkins, whose work I’d known about earlier. I couldn’t get a topic, so I decided to extend the work that I’d done at Sperry. I’m happy that I did, because it turned out to be wonderful; it expedited my getting the doctorate. My first year at Harvard, I took courses and I took the doctoral qualifying exam. The second year, I spent a few months on the thesis and I finished up in two years, very quickly. They tell me the thesis was very good. Somebody once said it’s the most dog-eared thesis at Harvard in that field, because it was a hot topic. It was on the detection of signals in noise as applied to [[Radar|radar]], but extending the work that Marcum had done and the work we had done at Sperry. So it really was detection theory, if you wish. </p>


=== Signal detection in noise; thesis and publications  ===


<p>'''Schwartz:''' </p>


<p>I tell my students I believe in serendipity. I like browsing, and I suddenly find something that can be useful. This was the case with my doctoral work. You realize the problem—the detection of signals in noise—is statistical, so you look through the statistics literature. It turns out statisticians had done work similar to this. They may not have used the term noise, they may not have used the term signal, but instead talked of detecting something in the presence of interference. Something like that. In particular, both Marcum and I came across this. Two statisticians named Neyman and Pearson had done work on the theory of statistical detection that I applied to the problem of [[Radar|radar]]. It enables you to find optimization techniques. For example, if you have a signal in the presence of Gaussian noise, the model everybody uses says that the optimum thing to do (which is what people were doing all along) is to take these signals, one after the other, and just add them up. After a certain period of time, you stop and say that if the sum of those signals exceeds a certain threshold based on the noise, then you call it a signal present. Otherwise, you call it noise present. </p>


<p>See, they came up with the nice idea that you have what’s called the probability of success and the probability of making a mistake—you have to incorporate both. A lot of books have been written on this. One of my books talks about that too. What you do is to say, “I want the probability of a mistake to be less than a specified amount.” In our case, mistaking noise for signal is to be less than a certain value. That sets a threshold. Then you maximize the probability of success of a signal when it does appear. The procedure tells you the way to handle this. If it turns out noise is Gaussian, just add up the signal samples and set a level, depending on the probability of false alarm, the probability of mistake. Some of this work had already been done at the [[MIT Rad Lab|Rad Lab]], but not in this clear form. They had done similar things, because, if you think about it awhile, it’s almost an obvious thing to do. </p>
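<p>The Neyman-Pearson procedure described above can be sketched numerically. The following is a minimal illustration, not anything taken from the interview itself: the sample count, signal level, and false-alarm probability are hypothetical, chosen only to show how the threshold is set from the allowed false-alarm rate and how the detection probability then follows. </p>

<syntaxhighlight lang="python">
# Sketch of the "add the samples and threshold" rule under the Neyman-Pearson
# criterion.  All parameter values are hypothetical, for illustration only.
from scipy.stats import norm

N = 10        # number of radar returns added together (hypothetical)
A = 1.0       # signal amplitude per sample, in units of the noise std dev (hypothetical)
P_fa = 1e-3   # allowed probability of mistaking noise for signal (false alarm)

# Under noise alone, the sum of N unit-variance Gaussian samples is Gaussian with
# mean 0 and standard deviation sqrt(N); set the threshold so that the sum exceeds
# it with probability exactly P_fa.
threshold = norm.ppf(1.0 - P_fa, loc=0.0, scale=N ** 0.5)

# With a signal present, the sum has mean N*A and the same spread, so the
# probability of correctly declaring "signal present" is:
P_d = 1.0 - norm.cdf(threshold, loc=N * A, scale=N ** 0.5)

print(f"threshold = {threshold:.2f}, detection probability = {P_d:.3f}")
</syntaxhighlight>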


<p>So I pursued that for my doctoral thesis. I extended this work in my thesis to handle a signal that is fading, or fluctuating in amplitude. I also came up with another detection technique when I said, “But, gee, I might try other techniques instead of adding the signals up. Maybe there are simpler techniques.” These apply to communications obviously, because it’s the same problem of detecting a signal in the presence of noise. So I developed a technique called coincidence detection, taking a number of these [[Radar|radar]] signals coming in to see if each signal separately exceeds a certain level. If it does, you count it, then after a certain interval of time, you count the number of incidences above that level. So it’s a counter rather than an adder. I thought it might be easier to implement, and I compared that with the optimum Gaussian adding procedure and showed that it does very well. </p>


<p>You have a bunch of signal plus noise samples coming in; they may be fluctuating because the signal is fluctuating. So if each signal comes in, it could be noise, so you set a threshold. As each one comes in, you ask is it above the threshold? You set a counter. So every time it comes in, a counter keeps advancing. The [[Radar|radar]] can only handle a certain number, so you set a certain fixed number to come in, in a certain period of time. You say, “I’ll call it a signal rather than noise,” if K of N samples exceeds that level, and the value K depends on the noise level and on the signal level. Not only that, I found by just playing around it turns out when you do a study of a lot of these in large numbers, that there is an optimum value for that value K, in terms of maximizing the probability of success, for example. So that was a chapter of my thesis, and later published as a paper. The [[Radar|radar]] people picked up on this paper, and it was later cited as a classic paper in the field. </p>
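<p>The K-of-N coincidence detector described here can be sketched in a few lines. In the sketch below, the per-sample level, the number of samples, and the signal amplitude are hypothetical; the point is only that the count of threshold crossings is binomial, so scanning K exposes the trade-off between false alarms and detections and the optimum value of K that Schwartz mentions. </p>

<syntaxhighlight lang="python">
# Sketch of coincidence (K-of-N) detection: threshold each sample, count how many
# of N samples exceed the level, and declare "signal" if the count reaches K.
# All parameter values are hypothetical, for illustration only.
from scipy.stats import binom, norm

N = 10        # samples per decision (hypothetical)
level = 2.0   # per-sample threshold, in noise standard deviations (hypothetical)
A = 1.5       # signal amplitude per sample (hypothetical)

p_noise = 1.0 - norm.cdf(level)           # a noise-only sample exceeds the level
p_signal = 1.0 - norm.cdf(level, loc=A)   # a signal-plus-noise sample exceeds it

# The number of exceedances in N samples is binomial, so each choice of K gives a
# false-alarm probability and a detection probability directly.
for K in range(1, N + 1):
    P_fa = 1.0 - binom.cdf(K - 1, N, p_noise)
    P_d = 1.0 - binom.cdf(K - 1, N, p_signal)
    print(f"K={K:2d}  P_fa={P_fa:.2e}  P_d={P_d:.3f}")
</syntaxhighlight>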


<p>Not only that. I found to my amazement later on, people saying this was one of the first papers on non-parametric detection, because the coincidence detection technique was non-parametric. It didn’t count on the optimum Gaussian statistics; it simply said count the number above a level. That’s all I did. It was very nice. </p>


<p>So I published these papers. I came back to Sperry after two years. I had nobody to guide me; my advisor didn’t really guide me in this. Somebody said, “Why don’t you publish this?” I said, “Where should I publish it? I don’t know anything about publishing.” “''The Journal of Applied Physics''.” I sent the whole thesis (180 pages long) to the ''Journal of Applied Physics''. It goes over there. Back comes a letter from the editor: “It sounds interesting, but maybe you ought to send in separate articles.” It took me a while. I ended up finishing my thesis in ‘51. In ‘54, two papers were finally published. </p>


<p>Another example of serendipity was the chapter in my thesis on sequential detection applied to [[Radar|radar]]. While at Harvard, I was browsing through the statistical literature on signal detection. I came across a book by a man named Abraham Wald, who was a world-famous statistician at Columbia, called Sequential Decision Theory. He had the concept that one could speed up the process of determining whether a product that you were looking for is ok. All of this is small sample theory. Maybe I can do better with fewer samples, if instead of setting a criterion and seeing if my samples exceed that criterion, I do it sequentially. I start looking at each sample as it comes along. I have a decision region that changes depending on what the sample was. For example, if the first few samples I get are above a certain level, maybe that means it is a signal coming in. Let me reduce the threshold or do something. Adapting Wald’s work to [[Radar|radar]], I developed a double threshold scheme. If something fell below a lower threshold it was noise; above a second threshold it was counted as signal; between the two, you continue the process, and start narrowing that double threshold region, the one between the two thresholds. </p>
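<p>The double-threshold scheme follows the pattern of Wald’s sequential test: accumulate evidence one sample at a time, declare "noise" if it falls below a lower threshold, "signal" if it rises above an upper threshold, and keep sampling in between. The sketch below is a minimal illustration of that idea for Gaussian samples; the assumed signal amplitude and error probabilities are hypothetical, not values from the interview. </p>

<syntaxhighlight lang="python">
# Minimal sketch of a Wald-style sequential (double-threshold) detector.
# Hypothetical parameters; unit-variance Gaussian noise is assumed.
import math
import random

def sequential_detect(samples, A=1.0, alpha=1e-3, beta=1e-2):
    """Return ('signal' | 'noise' | 'undecided', number of samples used).

    A     -- assumed signal amplitude when a signal is present (hypothetical)
    alpha -- allowed false-alarm probability
    beta  -- allowed miss probability
    """
    upper = math.log((1.0 - beta) / alpha)   # crossing above: declare signal
    lower = math.log(beta / (1.0 - alpha))   # crossing below: declare noise
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio of one sample: mean-A Gaussian vs. mean-0 Gaussian
        llr += A * x - 0.5 * A * A
        if llr >= upper:
            return "signal", n
        if llr <= lower:
            return "noise", n
    return "undecided", len(samples)

random.seed(1)
print(sequential_detect([random.gauss(0.0, 1.0) for _ in range(200)]))  # noise only
print(sequential_detect([random.gauss(1.0, 1.0) for _ in range(200)]))  # signal present
</syntaxhighlight>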


<p>'''Hochfelder:''' </p>


<p>So it’s an adaptive measure? </p>


<p>'''Schwartz:''' </p>
 


<p>It’s an adaptive technique. I said, “My God. I can use that for [[Radar|radar]]!” I have a chapter in my thesis on sequential decision theory, which I never published, unfortunately. I wish I had, because other people published and got the credit for it. I tell my students this all the time now, “Publish as quickly as you can. Otherwise, you’re going to find that everything is in the air at that time. Everybody’s working on these problems. Things don’t come out of a vacuum.” If I’m working on this problem, other people are working on this problem. Don’t be afraid if somebody has similar ideas. You are going to have your own ideas, always different enough from somebody else’s ideas that it will be all right. </p>

<p>Reminiscing a little bit, the day of my doctoral exam at Harvard, I got a call from my advisor to come into his office. He says, “When you went over to MIT, who did you speak to?” I said, “Why?” Well, it turned out that I had a fright, but it worked out okay. My advisor had invited [[Jerome B. Wiesner|Jerome Wiesner]], a very well known professor from MIT, whose work was in this area, to serve on the committee. (He became the president of MIT later on.) He accepted because there was nobody—really, there were very few people at Harvard working in the field. My advisor was a physicist. He didn’t know anything initially about this field, but he took me on as a student, which was nice. So I worked on my own, based on my experience at Sperry. At any rate, my advisor says, “Well, he [Wiesner] looked at your thesis, and said, ‘A thesis at MIT was finished last year, exactly in that area.’” I said, “I never saw this before.” He gives me a copy of the other thesis. I look through it. The first chapter was very similar to mine. Quantitatively he was doing the same kind of thing, but happily, mine was theoretical. He had built a [[Radar|radar]] system and did studies, which took me off the hook. I’d never met the guy. When I went over to MIT, it had nothing to do with that. I was looking for other ideas in other fields. </p>


<p>Generally, when you’re working on something, other people are working on the same thing. That’s the way life is. I mean, I worked on [[Radar|radar]] because I was at Sperry and Sperry was a [[Radar|radar]] company, among other things. Work had gone on at [[MIT Rad Lab|Rad Lab]]. The [[MIT Rad Lab|Rad Lab]] books had been published. The Bell Labs people were doing work in this area. It was all over the place. Marcum had done this work at Rand Corporation out in California. So everything was going on at that time. Then you have your own ideas. So, at any rate, both sequential detection and Neyman-Pearson detection theory as applied to [[Radar|radar]] were in my thesis. But by the time I got around to wanting to publish this work, somebody had already published papers on them. I got two nice papers when I could have had four nice papers. But, okay. How could I complain? </p>


=== Teaching; information theory  ===


<p>'''Schwartz:''' </p>


<p>Anyway, I went back to Sperry for a year and did more work. Then I decided I wanted to go into the academic world. I started doing teaching at night. I taught at Adelphi on Long Island and I gave a course at City College in statistical communication theory, which was just beginning to gel. Now, while I was at Harvard as a graduate student, one of the pioneers in information theory was a man named [[Peter Elias]], who had done his thesis on this at Harvard. </p>


<p>'''Hochfelder:''' </p>


<p>He was associated with [[Norbert Wiener]] for some reason? </p>


<p>'''Schwartz:''' </p>


<p>Yes. He became a professor at MIT. He had one of these special fellowships at Harvard, I think, to do anything you please, really. He may have gotten his Ph.D. at MIT. I don’t remember where it was, but his thesis was in information theory, coding, those things. I listened to some talks that he gave, and it was very exciting. So I got very interested in information theory as well. I never really worked in information theory. Coding was not my field of interest. I was much more fascinated by noise. Signals in noise, detection theory—that excited me. </p>


<p>I spent another year at Sperry, and then decided to develop some graduate courses at Adelphi University on Long Island and a graduate course at City College, trying to pull together some of these ideas on statistical communication theory. I enjoyed those courses. I enjoyed pulling them together and I loved teaching. When I was doing my master’s degree at Brooklyn Poly at night and working at Sperry, Friday nights I was also handling a physics laboratory at Cooper Union. (When you’re a kid, you can do anything, obviously!) At Cooper Union as a senior, I enjoyed helping students with their course in applied mathematics, stuff like that. I enjoyed teaching very much. That’s how I decided to become a teacher. </p>


<p>How I finally made the switch from Sperry to teaching is very interesting. On my plane coming back from Chicago from the National Electronics Conference, the year after I got my doctorate, was [[Ernst Weber|Ernst Weber]], one of my teachers at Brooklyn Poly, who was a pioneer in the field of electrical engineering. He was very well known, and had published books, including one much later with the IEEE Press. He has now passed away, but was an outstanding giant in the field. He was later the president of Brooklyn Poly and was head of the department when I was there. One of my teachers, and a wonderful teacher. So, I see him on the plane. I go over to him and say, “Dr. Weber, do you happen to have any openings?” It turned out he had, so he offered me a job at Brooklyn Poly in September of 1952 as assistant professor. Nowadays people complain about the one course that they have to teach; I had 18 contact hours. I had two three-hour lectures and twelve hours of laboratory. I said to myself, “Gee, what do I do with all the free time that I’ve got?” Because I was used to working a forty-hour week at Sperry and maybe overtime as well. </p>


<p>One of my colleagues that joined the same time as I did, Athanasius Papoulis, went on to become very well-known in the field. He’s published many books. We even shared a desk. They had no room for us when we first joined. [[Ernst Weber|Dr. Weber]] was also president of the Polytechnic Research and Development Corporation, which was marketing some of the stuff done at the Microwave Research Institute at Poly, of which he was also the director at that time. Poly had done pioneering work in microwaves under his direction during World War II. He was never in his office, so he gave us this desk to share the first half a year or year, or something like that. </p>


<p>Papoulis had five courses to teach, three graduate and two undergraduate courses. Maybe he had ten to twelve contact hours. Young faculty only teach three contact hours now, but there’s more pressure on them now. We didn’t have that much pressure. </p>


<p>Brooklyn Poly had pioneered in microwaves, electromagnetic theory applied to [[Radar|radar]], and applied to a lot of other problems. Weber had built up this large group, and they had set up a separate department called the ElectroPhysics Department. </p>
 


<p>I joined the Electrical Engineering Department. It turned out, for whatever the reason, the Electrical Engineering Department was more of an undergraduate department at the time. We taught graduate courses, but there wasn’t that much research going. The ElectroPhysics Department was strictly a research department teaching graduate courses. So my first courses that first year were standard electrical engineering undergraduate courses. </p>


<p>This is 1952, and I had this interest in communication theory. In 1952, one of the first books published in the area was by Davenport and Root called ''Random Signals'' based on pioneering work at MIT and elsewhere, again, during World War II. It was on the detection of signals in noise and written at the graduate level. Davenport was from MIT, and I guess Root was also from MIT. Root went on to be a professor at Michigan. I, of course, looked through the book and liked it. I think I may have even taught a course at that time using that book. I don’t recall anymore; my mind is not clear about this. </p>


=== Communication systems text  ===


<p>'''Schwartz:''' </p>


<p>I decided that it might be interesting to develop a modern undergraduate course in communications systems, because the only books available were books like [[Frederick Terman|Terman’s]] book on radio engineering, which was strictly radio. Maybe a little of telephony, but very little on this. When I joined Brooklyn Poly, I decided that I’d start focusing on developing that, and I started teaching a course in that area. I developed some notes for that and handed notes out to students. It took a number of years. I don’t remember the exact timing, but it was probably about the mid-’50s—’55, ‘56, something like that, during which I put the book together. </p>


<p>The head of the EE department at Brooklyn Poly at the time was an outstanding man named John Truxal, who was a pioneer in the control area. He had published an outstanding book called ''Automatic Feedback Control System Synthesis''. He came out of MIT. He joined Brooklyn Poly as head of the department in ‘54, ‘55, something like that. He began to build a strong control group, which became one of the world’s outstanding groups on controls. </p>


<p>I put these notes together, and Truxal encouraged me to have it published as a book. I remember saying to him, “What title should I use?” He said to me, “Take a long title. Long titles sell well.” (Note that his best-selling book was called ''Automatic Feedback Control System Synthesis''!) There were other books available related to my proposed book, but not quite the same. In particular, there was a book that had been written by Goldman of Syracuse University called ''Frequency Analysis, Modulation and Noise''. A classic book at that time, very nice, but it really didn’t focus on statistical communication theory and the information theory aspects. It was more frequency analysis. [[Ernst Guillemin|Guillemin]] at MIT had published some nice books as well on frequency analysis, which was beginning to pervade the curriculum. However, there was very little on noise, very little on signals and noise and very little on information theory, especially on the undergraduate level. Graduate courses were developed using the Davenport and Root book. </p>


<p>My notes covered these topics on the undergraduate level, so I decided to proceed with book publication. I borrowed part of my title from Goldman’s book from Syracuse on frequency analysis and called my book ''Information Transmission, Modulation, and Noise''. You can see the first edition here on my bookcase shelf. This whole period in the ‘50s had communication theory pervading the field of communication, at least in my mind. I come at it through radio, much less through telephony. At that time we had salesmen (they called them travelers) coming around from various publishing companies. A guy from McGraw Hill came in and said to me, “How about writing a book?” So, I said, “I have notes for a book on statistical communication theory for undergraduates.” He said, “No, no. You don’t want to do that. Davenport and Root’s book has just come out. Why don’t you try another area?” He tried to dissuade me. Luckily, he didn’t persuade me, because the book turned out to be a bestseller. </p>


<p>In 1959 this book was published. Independently, John Truxal, together with McGraw Hill, had set up a series called the Brooklyn Polytechnic Series, and this was published as part of the Brooklyn Polytechnic Series. It might have been the first or second in the series. </p>


<p>It was the first undergraduate textbook to cover modern communication systems from a statistical point of view. It talked about AM, [[FM Radio|FM]] and digital communications from a unified view, and brought in some of the statistical stuff that had appeared in other books, as well as spectrum analysis. It starts with frequency analysis, after an introductory chapter on information theory, presented in a very qualitative way. What do you mean by information? This was [[Claude Shannon|Shannon’s]] great idea, which he actually put into mathematical form. If I’m sending a signal, which is on continuously, it carries no information. So why send it? When you send something it should be unknown. So he quantified this. </p>


<p>Then I went on, in the book, to write about AM and [[FM Radio|FM]] signals. Then I went on to discuss digital communication systems, PCM, starting with pulse amplitude modulation. Then I said, “Okay. Now we can try to understand how these systems all function from a systems point of view. Because they all have to function in the presence of noise.” [[FM Radio|FM]], AM—all of these get swamped by noise. Why is PCM better? Why is [[FM Radio|FM]] better than AM as far as noise is concerned? We all knew this. In fact, Armstrong pointed this out many years before and he did this in a very nice graphical way. Actually, in the ‘30s, people began to quantify this, so I tried to put all of this together in the book: the analog stuff that came out of pre-World War II, the digital stuff that came out of signal communication theory during World War II, and information theory during World War II. Students weren’t expected to know anything about probability. I hadn’t had it as a student, so I put in a chapter on statistical analysis. I applied that to [[FM Radio|FM]], AM, and PCM. First, I study them without noise. I talk about statistics. Then I introduce simple analysis of noise on the undergraduate level using the work of [[Stephen Rice|Rice]] in 1944 that I’d learned a few years before. By now, fifteen years after [[Stephen Rice|Rice’s]] work had appeared, it was classic stuff, and I put it together for student use. </p>
 


<p>The book turned out to be very successful; many schools picked up on it. I was very pleased. A number of years back, the University of Wisconsin Department of Electrical Engineering had their hundredth anniversary celebration and they invited me as one of the guest lecturers there. I was very pleased to do this. They had published a book summarizing activities in their department for the last hundred years. They gave me a copy and other people copies. When they talk, in their book, about the ‘50s, they discuss introducing courses in communications. No suitable books were available, so in ‘59-’60, they began using my book, which is very nice to read. It was the first book in the field. It held the field for about six years or seven years. Then other people began publishing and it lost sales. The first five or six years it was the only book out in the field. I’m very pleased that it was a pioneering book; this made me feel very good. </p>


<p>I left Poly in ‘73. Len Shaw, one of my former colleagues there, was rummaging through the files a number of years back and found a mimeographed copy of my original notes. He was at Brooklyn Poly for many years and was one of the Deans there (he’s just retired). So he sent them to me with a comment saying, “Some of your ideas still hold up.” It was very nice. Very pleasant. </p>


=== Brooklyn Poly Teaching and Research  ===


<p>'''Schwartz:''' </p>
 
I must say in all honesty that when I joined Brooklyn Poly in ‘52, I focused on the undergraduate level. Then I began to teach graduate courses also. So I taught courses in a variety of areas, not just communications, because we were encouraged to do this kind of thing. I developed this book, then I began thinking about going back to research. I don’t have time for doing that sort of thing now. That’s why life was a lot easier in those days. Nowadays, the minute a young man or woman joins the university, he or she has got to start running. I guess I just take back what I said before. It’s much harder for them now. They can’t afford to teach eighteen contact hours and do research at the same time. But I managed somehow to begin to try to do this.

I remember going to Bell Laboratories, thinking maybe I can work with them or get some ideas from them. I was introduced to David Slepian, who retired from Bell Labs a long time ago. He was a pioneer in this whole field of mathematical representation of communication signals. At this time, the Army was doing work in RF and they encountered a lot of work with fading channels, fading signals. I thought that might be an interesting idea, so I began doing work on fading channels.
 
I had a number of very fine doctoral students at the time: Don Schilling, Ray Pickholtz, Bob Boorstyn, Ken Clarke, Don Hess, and a number of others, and they were doing their theses for me. Actually, Ken Clarke was the first one to do his thesis for me. There was a lot of interest in FM as well, so we began introducing a research program in the studies of FM and noise, and at the same time in fading signals and noise because the Army was pushing things like that. It was in the air—papers were being published. I put some students to work on that.

Rice at Bell Labs had continued to do his work. He was the pioneer in noise. He tried to apply some of his noise ideas to FM, and he developed the concept of click analysis. Of course, everybody knew why FM provided an improvement over AM in the presence of noise, above a threshold. Armstrong, as well as Crosby at RCA, had pointed this out in the early ‘30s. But then, suddenly, below a threshold, FM goes to pot. Why? Why does the noise suddenly get larger? Rice, through the click analysis, was able to put this on a firm footing, and so FM studies were in the air too.
 
My first doctoral student was Ken Clarke. He finished up in 1959. He had been an instructor. All these guys were appointed instructors on the staff, which was one way of getting teaching out of them and getting them a little extra money. Don Schilling and Don Hess, and, a number of years later, Pickholtz and Boorstyn, were instructors. In 1961, Truxal moved on to become Dean and I became the EE Department Head at Brooklyn Poly. I really started pushing telecommunications. These guys were there; they were very good guys and we had them all appointed assistant professors. We set up a group that eventually had seven of us working in telecommunications. I must have begun the group in 1959, I guess, but when I became the department head I began to push it.

At that time, Jack Wolf, an outstanding graduate of Princeton, joined us. I heard about him. He had been in the Air Force and then joined NYU. I appointed him to the staff also, and with him we had seven faculty members in the telecommunications area. Poly had a big engineering school, and the EE department had about forty-five people in it at that time. We had a group of about seven faculty members, including myself, working in the field. It was a glorious time.

Ken Clarke and Don Hess were experimentalists—they tested. When we were talking about fading channels and fading signals, they developed an underwater fading channel simulator. If you send signals through the water, water also has some of the same effect on them. They built a water tank for this purpose. Sid Deutsch, another faculty member in the EE department, had done work on his own in television. He was doing work on low bit-rate television. He worked closely with our group as well. We began to publish in FM, fading signals, and noise. Jack Wolf was a specialist in information theory and coding, and was doing work in coding. We covered the gamut from information theory and coding to statistical communications theory to communication systems like FM, both in teaching and in research that we were carrying on at the time. I think it was probably one of the largest groups in the country, seven faculty members at that time, plus a sizable graduate enrollment. It was a wonderful time.
 
As I said, the department started off as an undergraduate department when I joined it. When Truxal joined the department, he brought in graduate research and teaching in the control area, but we built a large group in communications as well, following up on that. So, it was a real wonderful experience.

That was the tenor of the times at Brooklyn Poly when I was there. Unfortunately, Brooklyn Poly fell into financial problems and went through a difficult time. I was the Department Head until 1965. I left on sabbatical to go to France, came back a year later and continued my work in this area. That’s when, in 1966-68, Ray Pickholtz and Bob Boorstyn and two other graduate students finished up, followed by a flock of others. I had a lot of students from Bell Laboratories come to work with me in those days as well. I put them to work on problems in digital technology, communication theory, and things of that type. In fact, Poly set up a special program jointly with Bell Labs to have them come spend a year with us full time working on their doctoral thesis and go back. It was a wonderful time.
 
 
 
=== Communication networks and computers ===
 
'''Schwartz:'''
 
This took us to about the late ‘60s, I guess. About 1970 or so, I began to sense that there was a change taking place. We focused on what we now call the physical layer, and people were now beginning to talk about communication networks—machines talking to one another.

I’ll go back a little bit historically because this has emerged now, using computers and communications, as the Information Age. Back in the mid-’60s, the way I look at it, GE and MIT and other places began to experiment with time-sharing computers. Before that time, everybody who worked in a large establishment or university had access to a large machine. You would bring your cards in and have them loaded in and a couple of hours later you would go pick up your cards and maybe some printed-out papers. It was exciting, but young people nowadays have no idea how difficult this was. They began experimenting with the idea of time-sharing computers because there had been some studies done that indicated that the machine was not operating most of the time, or you could do multiple operations simultaneously. MIT experimented with this, and GE joined them on this, and other companies too. Once you start time-sharing, you start being concerned about communicating with that machine. A few years or so ago, I wrote a paper which discusses the history of some of this. Would you like to have a copy of that?
 
'''Hochfelder:'''
 
Sure. That would be great.
 
'''Schwartz:'''
 
I’ll tell you how to access it. You can get it at the Journalism School. I wrote the paper for our Journalism School. It’s called "Telecommunications, Past, Present, and Future," written specifically for the non-engineer. It has an introductory chapter which talks about the history of computing and communications in that period. It is very simple. IBM and other companies, for example, had, in the ‘50s, developed communication systems for airlines—airline agent terminals connected to a central computer. IBM was selling the concentrators and the terminals.
 
'''Hochfelder:'''
 
Is that SABRE?
 
'''Schwartz:'''
 
Yes. SABRE came out of that. SABRE was the first system. IBM pioneered in that. I forget the details, but I have it in this paper of mine. I went back and checked through the old literature on that. There had been work in that era on terminals communicating with a central computer. For the military, Bell Labs had done some work on some systems, even radar. You had terminals away from the central system that you wanted to communicate with the central system, back and forth. The military and the commercial world had already begun to develop the concept of terminals communicating to a computer somewhere over lines.
 
'''Hochfelder:'''
 
So, for the military that would be the SAGE system.
 
'''Schwartz:'''
 
SAGE, yes. I mention SAGE in my paper too. So, all of these things were in the air.

Then in the late ‘60s, it became apparent to IBM and other organizations that you are asking these large computers to do a lot of communication tasks. Once you start to do more and more of this, you’re tasking the computers with this, and in a sense you’re undoing what you started doing. You want them to do more computational work, and now you’re doing this other work. So they decided to off-load the communication tasks to special purpose communication systems they built called communication concentrators—computers which just do communications work. The concept was to have terminals connected to these; they concentrate the activity and they send them to the same central computer. In a way, this is the same thing the SABRE system does. The SABRE system already operates under that premise: connect terminals through a concentrator. The concentrator then interrogates the central system and sends messages back. So that was already there many years before that idea, but now they decided to do it more generally. Once you start doing this, you have to develop what are called protocols—ways of having two machines that are a distance from one another interrogate one another and send messages back to one another and understand one another. This has been done for a long time now. Time-sharing, as well as airline reservation systems and military communication systems, among others, helped develop this.
 
=== ARPANET; commercial computer utilities ===
 
'''Schwartz:'''
 
At the same time (the mid-’60s now), there was pioneering work going on at the then Advanced Research Projects Agency (ARPA) of the Department of Defense. Work has been published on this very recently. In fact, in this paper I mentioned, I sort of explore the ARPA, the IBM company thing; I explore the predecessors, the work of the airline reservation systems. The primary work at ARPA at that time, in the mid-’60s, looked to see if people could communicate with computers in some better way.
 
A man named Larry Roberts joined ARPA in the late ‘60s. He had the concept of developing what he called a computer utility. Since people were beginning to do all kinds of time-sharing activity at the time, why should every organization, every university, every commercial organization, have replicas of the software? Why not develop something special? For example, the University of Utah had a specialty in graphics capability. UCLA was doing its own work. Why couldn’t people all over the country access those universities for their software, rather than having to duplicate it in your own place? It made sense. So he had the idea of building a computer utility, like an electric utility, distributed, and ARPA began to fund the ARPA Network project. I think the first one went online in 1969 with four nodes or computers interconnected. So, you visualize all these things are coming together.
 
Now, in order for ARPA to operate, they had to have communication protocols to handle the messages back and forth. They had the concept of a router—a message processor that handles signals and routes them appropriately in some way. ARPA began to develop routing algorithms that came out of this.
 
IBM, in the same time period, 1969, had begun work on something they called Systems Network Architecture, SNA, which is probably the first commercial network architecture designed to handle messages between computer systems. The ARPA network had a distributed topology because you could be anywhere, and their interface message processors (IMPs) were scattered all over the country and you fed into them and they connected with one another in a distributed fashion. IBM wanted people to access their main hosts, their large machines, so the concept was to have terminals connected to concentrators connected to the main host, passing messages back and forth, and you wanted an architecture for this.
 
In 1969 we also saw the first commercial computer utility come out. A company called Tymshare was set up, its name coming from the word time-sharing. Their idea was, if you don’t have access to your own computer, you have a terminal, which you use to access its computers. Tymshare had a bank of computers scattered all over the country, large computer centers, where they would process your information. You would pay for this, so your company didn’t have to own a computer of its own. They developed a network called Tymnet, which also began to operate in the late ‘60s, early ‘70s. Everything was coming together now: ARPA’s pioneering work on the computer utility, mostly for universities communicating in a distributed fashion, Tymnet, IBM’s SNA. GE set up a network called GE Information Services Network, which did the same thing that Tymnet did. It offered services. They began to go abroad also and offered links to Europe and other places. Tymshare had their computers scattered all over the country because they felt that it was more reliable that way. GE had its computers all in the one center in Rockville, Maryland. It felt the system was more reliable that way. In fact, I attended a conference later on, where two guys, one from each company, were debating which one was more reliable. One is distributed, and if one system fails, you still have others. GE said if they are all together, we can make them more secure, so who knows? But anyway, very interesting.
 
=== Networks research, teaching, and publication ===
 
'''Schwartz:'''  
 
All of this was starting to happen, and I began to feel that networking was an exciting area to work in. I began to develop some activity at Brooklyn Poly in this area. The Poly Microwave Research Institute (MRI) annually had a large workshop, covering various topics, not just microwaves. We ran one on computer communications and the integration of computers with communications, which is part of the same thing.

I remember talking to Paul Green, at the time at IBM, who had come to IBM from Lincoln Laboratories. He was a real pioneer in this field. He was at IBM Research and had done some pioneering research work in SNA. He said to me, “If you really want to learn about the field, why don’t you go out and find out what some of these companies are doing?” I think he might have suggested this for a journal. I said sure, and got together two of my colleagues, Bob Boorstyn and Ray Pickholtz at Brooklyn Poly, and the three of us picked four networks that were ongoing in this country. We went and talked to the people and learned what they were doing. It was a new field. We wrote a nice paper. That started us going. Once you learn what companies are doing, it gives you some exciting problems to work on.
 
'''Hochfelder:'''
 
It would be a good paper to have.
 
'''Schwartz:'''
 
Yes. I have it in my files here. I’ve got a copy of it. One of the IEEE journals, I forget which one, published it. So that was a tutorial paper. Nothing new, but we three interviewed people at different companies. One was the NASDAQ system, for example, that they had set up in those days. One might have been Tymnet too; I don’t remember anymore.
 
I began to develop a program on this subject and began to teach a course at Brooklyn Poly in the subject of networking. I had some notes. I left Brooklyn Poly in 1973, just when all of this was coming to a head. Poly had been having financial problems, as I pointed out, and it was sort of sad in a way because some of the leading people had left. Jack Wolf had left by that time. Ray Pickholtz decided to leave and go to George Washington, and two of the other key people decided to go into industry by themselves.
 
'''Hochfelder:'''
 
Is that when Don Schilling left Poly?
 
'''Schwartz:'''
 
No. Don Schilling had gone to City College. He may have gone before this time. But Don Hess and Ken Clarke organized a company of their own. They were experimentalists. They still have their company functioning. I think Bob Boorstyn and I might have been the only ones left at this time.
 
Columbia asked me to spend a year with them as a visiting professor, which I did. I gave a couple of courses in computer networking, because that was the thing I was really pushing at this time. They asked me to stay on and I stayed on at Columbia. I came to Columbia officially full time in 1974, and, again, continued developing a program in networks. Out of this came some notes and the first textbook in the field called ''Computer Communication Network Design and Analysis'', published in 1977. A very nice book was published much earlier by Len Kleinrock, who was one of the pioneers in the field. That was his doctoral thesis, which he did in 1961, I guess at MIT. Even years later, it is a classic book with wonderful stuff in it. He had it reprinted maybe by Dover Press; I forget who did it, and I’ve got a copy somewhere. That was really the first book in the field. There are other books that have been published too, but mine was a textbook with problems and exercises, stuff like that, for students to use based on the course we developed first at Poly and then at Columbia. I have continued in that field ever since. I’ve done work personally with students at Bell Labs, at Brooklyn Poly, at Columbia, and at other places. First at Poly and then at Columbia, I worked in what is called congestion control.
 
=== Performance analysis ===
 
'''Schwartz:'''

Now, preferring work in analytical areas, I gravitated more towards performance analysis. How do you see if these systems are performing properly? By doing analysis. It turns out you have to learn queuing theory and things of that type. I tend to be oriented more in that direction, in more quantitative approaches. Bob Kahn, who was one of the pioneers of ARPAnet, one of the giants in the field, had published a paper on flow control for the ARPAnet. He was an electrical engineer by training, out of Princeton. He had worked in communication theory originally too, and had moved into this field. But he has his own organization now. He published his paper on congestion and flow control for the ARPAnet and I read it and I said, “Gee, maybe we could quantify these; maybe come up with some numbers.”

I put a student named Mike Pennotti to work on this, a guy from Bell Labs who knew nothing about the field. This guy is sharp, so he picked up and finished his thesis in a year’s time. He had been working in, I think, underwater acoustics at Bell Labs and Navy work, or something like that, and switched fields completely, and boom. Sharp guy and good work. So we worked together on this, and he came up with what I call a pioneering paper on a virtual connection, a connection from point to point along a network which consists of a source terminal connected through routing nodes or routing switches to a destination node with buffering and queuing. That’s the way that networks operate. You store and transmit information in packets. We were able to model this. We came up with some concepts. We compared two different strategies. Do you want control over the virtual connection end-to-end, or do you want control at each node separately? We found that, by proper tweaking, both gave us the same performance.
 
But then, which one is easier to implement? It was the first such quantitative study and it has since become a paper that has been cited a lot of times, because it gave rise to a lot of other work in the field of congestion control and performance analysis. Again, the way engineering always works, somebody invents an idea, you develop the software (in the old days, it wasn’t the software, but nowadays the software), you develop the hardware for it, and then somebody comes along and thinks, maybe I can study and analyze it and get improvements on it. That’s the way it usually works.

So, this paper was published in ‘75, and it was very good early work in the area. Pennotti’s work was from Poly, but I had moved to Columbia, so he worked with me there. He got his doctoral degree at Poly, but he came to see me at Columbia at that time. There were other students who I had had at Poly whom I carried with me. They got their degrees at Poly but they worked with me at Columbia.
 
=== Routing protocols; communication links ===
 
'''Schwartz:'''

I began to work at that time with a colleague at Columbia here, Tom Stern, on routing protocols. Bob Gallager at MIT was doing some very nice work on routing protocols. Again, the ARPAnet had focused on that work. They had a routing procedure as part of the ARPAnet, which was pioneering. It had a lot of problems, because they tried to have it react too quickly and it was unstable, as it turned out. They began to change their routing protocol. The question arose, what are good routing protocols? Bob Gallager worked on this problem. Can you distribute the routing algorithms in some way? There were many routing protocols developed years before for work in transportation networks. How do you route trucks and things like that? Some of those ideas were picked up on in this case. So, Gallager did some pioneering work in distributed routing controls. Tom Stern did some fine, related work, which also stimulated work in the area. A lot of work was going on in this area. Harry Rudin, an American who went to Switzerland and is still living there, who had been active for years, was working in this field at IBM Zurich Research Laboratory. He’s now retired, I understand. He also did some pioneering work in routing.
 
We had now left the physical layer behind and were moving into what is now called the network layer. IBM had done work on Synchronous Data Link Control (SDLC). When the world’s standards bodies in the ‘70s began to pick up on this, they changed it a little bit; they called it High-level Data Link Control (HDLC), but it is based on IBM’s SDLC. So, that’s the second layer, data link control. People began to work on this, and papers began to be published on that layer. Now, we are moving up to what we now call the network, or third, layer. The network layer involves congestion control and routing. If you go higher to the fourth layer, now called the transport layer, that also involves congestion control. TCP came along about that time. Later on, people began to develop a sophisticated control, called flow control, at that layer. We do congestion control at the network layer, we do flow control at the transport layer. But they are all very related. How do you keep receiving systems from being overrun by packets arriving? Now, you can do it end-to-end; you can do it hop-by-hop at the network layer. You can do it end-to-end on the transport protocol, but the ideas are very similar. Sometimes the layers get mixed up.
 
By this time the ARPAnet was developing into a full-fledged network and giving rise to a lot of work all over the country in these areas, so we were no longer among just a few working on networking. Everyone was beginning to work in this area. I just mentioned Gallager did pioneering work. Kleinrock, from the beginning, at UCLA, did pioneering work on the ARPAnet. People from other universities did the same thing as well too.
 
I personally focused on performance analysis. Kleinrock did too, by the way. He’s a broad guy. He does systems and software work, and he’s published classic books on queuing theory, giving courses in that regularly. So, he does work in everything.
 
We’re now in a real hot period in the network area. The work that I’m doing has become fully focused on networks. My book came out in 1977 as the first textbook in the field, although other books have been published on this as well too. What I do in this book is based on the literature, as well as some work that we had been doing. The examples of routing and flow control in networks that I give in this book are based on those of GE Information Services Network, Tymnet, and the ARPA Network from a qualitative point of view to try to understand what everything is all about. A big topic in those days (as now) had to do with being connected together with links of various kinds. So I also have in the book work on how you assign capacities to communication links, things like that. A lot of it was based on work that Kleinrock had done in his original thesis studying end-to-end delay. Since you now have queuing delay, this is very different from the telephone network. This is a packet-switched network with routers—you have buffering, so one of the performance objectives is to reduce the queuing delay as much as possible. How do you route according to queuing delay? He had done work in that, and so I discuss, how do you assign capacity to reduce delay? I have a chapter there on queuing theory, because people hadn’t done this before. I have a chapter here applied to store-and-forward buffering. I have a chapter on routing and flow control. All these things were in the air in those days. Other books have been published since, of course.
 
=== Modems and data networks ===
 
'''Schwartz:'''

I have a much bigger textbook covering much more material that came out in 1987 called ''Telecommunications Networks: Protocols, Modeling, and Analysis'', published by Addison-Wesley. That also uses a quantitative approach, but I do treat protocols and things like that. Now, what was happening in the world in those days? Well, networking is becoming really significant to the world. It is known to everybody now. We had telephone networks that covered voice messages only. Suddenly, you find data becoming important now. People were shipping data over modems. Modems were being developed in those days because the telephone people realized early on that you want to ship data.
 
 
'''Hochfelder:'''
 
Bob Lucky’s work?
 
'''Schwartz:'''
 
Yes. Bob Lucky had a group at Bell Laboratories. Actually, his group came out of an earlier group, started by Bill Bennett, who passed away a long time ago. He had some of the best people working on modems. But other people were doing this work too. A guy named Dave Forney set up a company called Codex, and they developed a data modem. So, other people were doing work too. Bob Lucky’s group did pioneering work. Steve Weinstein, as well as others, worked for him. They began to develop modems early on for handling data over telephone networks. As a matter of fact, all of these networks we talk about use telephone facilities. How does someone get into those telephone networks in some way? This is for terminal, low bit-rate modems, things of that type. So that was the modem work that was going on. The CCITT in the early ‘70s was aware, not only of the modem work going on…
 
=== Standards ===
 
'''Hochfelder:'''
 
CCITT?
 
'''Schwartz:'''
 
Yes. There is something called the International Telecommunications Union, ITU, housed in Geneva. That’s a standards-making body for telecommunications administrations all over the world. Only administrations can belong to this. In the United States, it’s the State Department and the FCC that jointly work together on this. This has now changed. The ITU had, at the time, two separate standards bodies, one called CCITT, the other called CCIR. They are French acronyms.
 
'''Hochfelder:'''
 
One is for telegraphy and telephone, and the other one is for?
 
'''Schwartz:'''


The Comité Consultatif International Télégraphique et Téléphonique is that CCITT international standards organization, and the other one is called the Comité Consultatif International des Radiocommunications, the CCIR. One is for radio standards and one is for telephone and telegraph standards. The CCITT has standardized a lot of modem work, things like that. But let’s focus now on the networking area that I am more familiar with.


They were well aware that data networks were now becoming significant. We already had IBM; we had Tymnet developed; GE Information Services Network. The French had a network set up called the Cyclades network. ARPAnet was here. Data networks were developing worldwide. The telephone industry was very aware of this, and they were very aware of networking. So the idea developed to try to standardize some data networks in some way. They set groups to work. I wasn’t involved, so I don’t know the details of it. But in 1976 they came out with a different kind of standard, an interface standard, called X.25.


DEC had developed DECnet, Burroughs had developed its Burroughs network architecture; all of these architectures in the United States were proprietary—they were developed for their own equipment only, although the companies weren’t specifically in the data networking area. The CCITT was talking about developing some kind of open networking, but across interfaces. Another organization, the International Standards Organization (ISO), was beginning to develop activities as well too, so the two were going on simultaneously now.  


Let’s focus first on CCITT. They came out with a standard called X.25. Now remember, they have working with them mostly telephone administrations, so they are catering mostly worldwide to the telephone companies which, except for the United States, were all government organizations in those days. AT&T was quasi-governmental. It was a monopolistic organization in those days. They had the standard called X.25, which enabled users to interconnect to any network. Networks would have their own protocols, but by using this interface you could get into that protocol. On your side, you could see this protocol. That network on this side sees the same protocol, so you can get into it and then handle it any way you wish, and on the other side, it gets back out again. It is strictly an interface protocol; it’s not an end-to-end protocol. (It has subsequently also been adopted in some places as an end-to-end protocol.) There was an interface defined between a terminal on the user’s side and the network side. The same on the other side of the network: between the network side and the user side. What happens in the network in between, they weren’t concerned with. They weren’t going to tell people how to handle their networks. X.25 is a three-layered protocol. People knew about layers now: physical layer; data link layer, which is HDLC; and a third layer, normally a networking layer, but, in the X.25 case, an interface layer, handling X.25 packets going across it. That came out of the CCITT activity. There was a lot of work published.


One of the first administrations to latch onto this was that of Canada, with Bell Northern Research developing products. In fact, I attended a communications meeting in 1976 and they presented some of their work at that time too. So they developed some of their products. The Canadians were one of the first in this area. Contemporaneous with this, the computer manufacturers and the public using computers began to feel the need for interconnecting computers in a non-proprietary sense. I just mentioned before IBM, DEC, and Burroughs in the United States. The same for Japanese companies. Each developed their own protocols for their own equipment.


The International Standards Organization, ISO, is also housed in Geneva. It is made up of companies rather than administrations. They felt the need to do this, so they set up some bodies and they began to develop pioneering activity in standardizing protocol architectures. They developed the idea of a seven-layered computer communications architecture, a physical layer, data link layer, network layer, transport layer, session layer, all the way up to the application layer. Why seven? Well, you don’t want too many, you don’t want too few, so they came up with that. They began a massive effort in each layer to really standardize this. Independently in the United States, the ARPAnet community had developed protocols, so they had their own.


'''Hochfelder:'''  


The TCP?


'''Schwartz:'''


Yes. So they had developed a network layer called IP, actually Internet Protocol, interconnecting different networks. It told you how to route packets using the ARPA routing algorithms. TCP, Transmission Control Protocol, was the layer above, a transport protocol architecture. They had a data link layer too, et cetera. They had, of course, layers on top of that for handling various kinds of transfers, file transfers, e-mail, things like that.


Kleinrock, in one of his early papers, summarizing traffic usage early on for the ARPAnet, showed the most amazing result. Despite the initial ARPAnet idea of a computer utility, with the network used for accessing other Hosts’ software, it turned out that people were using the network to talk to their own friends! The study showed the first elementary use of email, where you’re talking to the guys in the office next to you. So most of the activity was local activity. People never know what things are going to be used for. That was what happened with ARPAnet of course. It never became a computer utility. It developed for other communications purposes.


ISO began to develop these standards and they came up with a seven-layered model and the ARPAnet community developed its own layered architecture. I innocently got involved in this. A standardization conflict had developed in the United States between the National Bureau of Standards, which is now NIST, and the Department of Defense. NBS was concerned with all commercial, scientific, and technological activity in the United States, in helping the commercial sector as part of the Department of Commerce. They had a Computer Laboratory Division. I happened to be a member of a Visiting Committee to that, to look at the Division activities once a year. They had a large computer communications activity and they were studying how to determine whether the ISO protocols, in their commercialization, were “correct” in their operation or not. They were heavily involved in this. They were concerned that American companies not lose out worldwide if the world were to adopt the international computer communication standards being developed by ISO. The Europeans were moving in the direction of ISO. In the United States, there was a push on by the Department of Defense to standardize the TCP suite, the ARPAnet suite, because they already had it going. Why bring in new protocols? So there was sort of a conflict between the Department of Defense and the Commerce Department, the National Bureau of Standards.

They went to the National Academy of Sciences/National Academy of Engineering, which runs the National Research Council, NRC, and asked them to adjudicate. NRC set up an expert panel made up of people from industry and universities to try to determine which is the better way to go. Unfortunately (in retrospect!), I was asked to join that panel. On the panel were people from other universities: Dave Farber, a well-known guy in communications; Larry Landweber from Wisconsin; experts from IBM, from DEC, from Burroughs, and others. It was a broad-sweeping panel. We met for a long period of time in Washington with long meetings where we had people come in. We had Vint Cerf, who was with ARPA, speaking on behalf of TCP and that suite. We had people speaking on the other side. Interestingly enough, Dave Farber and Larry Landweber kept pushing for TCP. The rest of us, myself included, said no, wait a while. TCP is just United States; we have to go worldwide. The ISO suite, the seven-layer suite, is much newer; it has the new features in it. It is based on TCP to some extent. It doesn’t have the segmentation TCP has and in some areas it is much better. The guys from industry were pushing for it. So why not go for that? We finally overrode their objections and they reluctantly agreed. We pushed for the ISO suite.
<p>'''Hochfelder:''' </p>


<br>  
<p>So, for the military that would be the [[SAGE (Semi-Automatic Ground Environment)|SAGE]] system. </p>


<p>'''Schwartz:''' </p>
<p>[[SAGE (Semi-Automatic Ground Environment)|SAGE]], yes. I mention [[SAGE (Semi-Automatic Ground Environment)|SAGE]] in my paper too. So, all of these things were in the air. </p>


<flashmp3>360 - schwartz - clip 2.mp3</flashmp3>
<p>Then in the late ‘60s, it became apparent to IBM and other organizations that you were asking these large computers to do a lot of communication tasks. Once you start to do more and more of this, you’re tasking the computers with this, and in a sense you’re undoing what you started doing. You want them to do more computational work, and now you’re doing this other work. So they decided to off-load the communication tasks to special purpose communication systems they built called communication concentrators—computers which just do communications work. The concept was to have terminals connected to these; they concentrate the activity and send it on to the same central computer. In a way, this is the same thing the SABRE system does. The SABRE system already operates under that premise: connect terminals through a concentrator. The concentrator then interrogates the central system and sends messages back. So that idea was already there many years earlier, but now they decided to apply it more generally. Once you start doing this, you have to develop what are called protocols—ways of having two machines that are a distance from one another interrogate one another and send messages back to one another and understand one another. This has been done for a long time now. Time-sharing, as well as airline reservation systems and military communication systems, among others, helped develop this. </p>


=== ARPANET; commercial computer utilities  ===
<p>'''Schwartz:''' </p>


<p>At the same time (the mid-’60s now), there was pioneering work going on at the then Advanced Research Project Agency (ARPA) of the Department of Defense. Work has been published on this very recently. In fact, in this paper I mentioned, I sort of explore the ARPA, the IBM company thing; I explore the predecessors, the work of the airline reservation systems. The primary work at ARPA at that time, in the mid-’60s, looked to see if people can communicate with computers in some better way. </p>


<p>A man named Larry Roberts joined ARPA in the late ‘60s. He had the concept of developing what he called a computer utility. Since people were beginning to do all kinds of time-sharing activity at the time, why should every organization, every university, every commercial organization, have replicas of the software? Why not develop something special? For example, the University of Utah had a specialty in graphics capability. UCLA was doing its own work. Why couldn’t people all over the country access those universities for their software, rather than having to duplicate it in your own place? It made sense. So he had the idea of building a computer utility, like an electric utility, distributed, and ARPA began to fund the ARPA Network project. I think the first one went online in 1969 with four nodes or computers interconnected. So, you visualize all these things are coming together. </p>


<p>Now, in order for ARPA to operate, they had to have communication protocols to handle the messages back and forth. They had the concept of a router—a message processor that handles signals and routes them appropriately in some way. ARPA began to develop routing algorithms that came out of this. </p>


<p>IBM in the same time period, 1969, had begun work on something they called Systems Network Architecture, SNA, which is probably the first commercial network architecture designed to handle messages between computer systems. The ARPA network had a distributed topology because you could be anywhere: its interface message processors (IMPs) were scattered all over the country, you fed into them, and they connected with one another in a distributed fashion. IBM wanted people to access their main hosts, their large machines, so the concept was to have terminals connected to concentrators connected to the main host, passing messages back and forth, and you wanted an architecture for this. </p>


<p>In 1969, we also saw coming out the first commercial computer utility. A company called Tymshare was set up, coming from the word time-sharing. Their idea was, if you don’t have access to your own computer, you have a terminal, which you use to access its computers. Tymshare had a bank of computers scattered all over the country, large computer centers, where they would process your information. You would pay for this, so your company didn’t have to own a computer of its own. They developed a network called Tymnet, which also began to operate in the late ‘60s, early ‘70s. Everything was coming together now: ARPA’s pioneering work on the computer utility, mostly for universities communicating in a distributed fashion, Tymnet, IBM’s SNA. GE set up a network called [[General Electric (GE)|GE]] Information Services Network, which did the same thing that Tymnet did. It offered services. They began to go abroad also and offered links to Europe and other places. Tymshare had their computers scattered all over the country because they felt that it was more reliable that way. [[General Electric (GE)|GE]] had its computers all in the one center in Rockville, Maryland. It felt the system was more reliable that way. In fact, I attended a conference later on, where two guys, one from each company, were debating which one was more reliable. One is distributed, and if one system fails, you still have others. GE said if they are all together, we can make them more secure, so who knows? But anyway, very interesting. </p>


=== Networks research, teaching, and publication  ===
<p>'''Schwartz:''' </p>


<p>This thing started to happen now, and I began to feel that networking was an exciting area to work in now. I began to develop some activity at Brooklyn Poly in this area. The Poly Microwave Research Institute (MRI) annually had a large workshop, covering various topics, not just microwaves. We ran one on computer communications and the integration of computers with communications, which is part of the same thing. </p>


<p>I remember talking to [[Oral-History:Paul Green|Paul Green]], at the time at IBM, who had come to IBM from Lincoln Laboratories. He was a real pioneer in this field. He was at IBM Research and had done some pioneering research work in SNA. He said to me, “If you really want to learn about the field, why don’t you go out and find out what some of these companies are doing?” I think he might have suggested this for a journal. I said sure, and got together two of my colleagues, Bob Boorstyn and [[Oral-History:Raymond Pickholtz|Ray Pickholtz]] at Brooklyn Poly, and the three of us picked four networks that were ongoing in this country. We went and talked to the people and learned what they were doing. It was a new field. We wrote a nice paper. That started us going. Once you learn what companies are doing, it gives you some exciting problems to work on. </p>


<p>'''Hochfelder:''' </p>
<p>It would be a good paper to have. </p>


<p>'''Schwartz:''' </p>
<p>Yes. I have it in my files here. I’ve got a copy of it. One of the IEEE journals, I forget which one, published it. So that was a tutorial paper. Nothing new, but we three interviewed people at different companies. One was the NASDAQ system, for example, that they had set up in those days. One might have been Tymnet too; I don’t remember anymore. </p>


<p>I began to develop a program on this subject and began to teach a course at Brooklyn Poly in the subject of networking. I had some notes. I left Brooklyn Poly in 1973, just when all of this was coming to a head. Poly had been having financial problems, as I pointed out, and it was sort of sad in a way because some of the leading people had left. [[Jack Wolf]] had left by that time. [[Oral-History:Raymond Pickholtz|Ray Pickholtz]] decided to leave and go to George Washington, and two of the other key people decided to go into industry by themselves. </p>


<p>'''Hochfelder:''' </p>
<p>Is that when [[Oral-History:Donald Schilling|Don Schilling]] left Poly? </p>


<p>'''Schwartz:''' </p>
<p>No. [[Oral-History:Donald Schilling|Don Schilling]] had gone to City College. He may have gone before this time. But Don Hess and Ken Clarke organized a company of their own. They were experimentalists. They still have their company functioning. I think Bob Boorstyn and I might have been the only ones left at this time. </p>


<p>Columbia asked me to spend a year with them as a visiting professor, which I did. I gave a couple of courses in computer networking, because that was the thing I was really pushing at this time. They asked me to stay on and I stayed on at Columbia. I came to Columbia officially full time in 1974, and, again, continued developing a program in networks. Out of this came some notes and the first textbook in the field called ''Computer Communication Network Design and Analysis'', published in 1977. A very nice book was published much earlier by [[Oral-History:Leonard Kleinrock|Len Kleinrock]], who was one of the pioneers in the field. That is his doctoral thesis that he did in 1961, I guess at MIT. Even years later, it is a classic book with wonderful stuff in it. He had it reprinted maybe by Dover Press; I forget who did it, and I’ve got a copy somewhere. That was really the first book in the field. There are other books that have been published too, but mine was a textbook with problems and exercises, stuff like that, for students to use based on the course we developed first at Poly and then at Columbia. I have continued in that field ever since. I’ve done work personally with students at [[Bell Labs|Bell Labs]], at Brooklyn Poly, at Columbia, and at other places. First at Poly and then at Columbia, in what is called congestion control. </p>


=== Performance analysis  ===
<p>'''Schwartz:''' </p>


<p>Now, preferring work in analytical areas, I gravitated more towards performance analysis. How do you see if these systems are performing properly? By doing analysis. It turns out you have to learn queuing theory and things of that type. I tend to be oriented more in that direction, in more quantitative approaches. [[Robert Kahn|Bob Kahn]], who was one of the pioneers of ARPAnet, one of the giants in the field, had published a paper on flow control for the ARPAnet. He was an electrical engineer by training, out of Princeton. He had worked in communication theory originally too, and had moved into this field. But he has his own organization now. He published his paper on congestion and flow control for the ARPAnet and I read it and I said, “Gee, maybe we could quantify these; maybe come up with some numbers.” I put a student named Mike Pennotti to work on this, a guy from [[Bell Labs|Bell Labs]] who knew nothing about the field. This guy is sharp, so he picked it up and finished his thesis in a year’s time. He had been working in, I think, underwater acoustics at [[Bell Labs|Bell Labs]] and Navy work, or something like that, and switched fields completely, and boom. Sharp guy and good work. So we worked together on this, and he came up with what I call a pioneering paper on a virtual connection, a connection from point to point along a network which consists of a source terminal connected through routing nodes or routing switches to a destination node with buffering and queuing. That’s the way that networks operate. You store and transmit information in packets. We were able to model this. We came up with some concepts. We compared two different strategies. Do you want control over the virtual connection end-to-end, or do you want control at each node separately? We found that, by proper tweaking, both gave us the same performance. </p>
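<p>The kind of quantitative model being described can be sketched very simply: treat the virtual connection as a chain of store-and-forward nodes and approximate each node as an independent M/M/1 queue, so the mean end-to-end delay is just the sum of the per-hop delays. The short Python sketch below only illustrates that textbook simplification with made-up numbers; it is not the actual Pennotti–Schwartz model.</p>

<pre>
# A minimal sketch (illustrative only): mean end-to-end delay of a virtual
# connection modeled as a chain of store-and-forward nodes, each treated as
# an independent M/M/1 queue (the usual independence approximation).

def mm1_delay(arrival_rate, service_rate):
    """Mean time spent in one M/M/1 queue (waiting plus service)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

def virtual_connection_delay(arrival_rate, service_rates):
    """Sum the per-hop delays along the source-to-destination path."""
    return sum(mm1_delay(arrival_rate, mu) for mu in service_rates)

# A four-hop connection carrying 80 packets/s over links that can each
# serve between 100 and 200 packets/s:
print(virtual_connection_delay(80.0, [100.0, 150.0, 200.0, 120.0]))
</pre>

<p>Comparing end-to-end control with node-by-node control then amounts to comparing how each strategy limits the arrival rates and buffer occupancies seen by these queues.</p>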


<p>But then, which one is easier to implement? It was the first such quantitative study and it has since become a paper that has been cited a lot of times, because it gave rise to a lot of other work in the field of congestion control and performance analysis. Again, the way engineering always works, somebody invents an idea, you develop the software (in the old days, it wasn’t the software, but nowadays the software), you develop the hardware for it, and then somebody comes along and thinks maybe I can study and analyze it and get improvements on it. That’s the way it usually works. </p>


<p>So, this paper was published in ‘75, and it was very good early work in the area. Pennotti’s work was from Poly, but I had moved to Columbia, so he worked with me there. He got his doctoral degree at Poly, but he came to see me at Columbia at that time. There were other students who I had had at Poly whom I carried with me. They got their degrees at Poly but they worked with me at Columbia. </p>


=== Routing protocols; communication links  ===
<p>'''Schwartz:''' </p>


<p>I began to work at that time with a colleague at Columbia here, Tom Stern, on routing protocols. [[Robert G. Gallager|Bob Gallager]] at MIT was doing some very nice work on routing protocols. Again, the ARPAnet had focused on that work. They had a routing procedure as part of the ARPAnet, which was pioneering. Had a lot of problems with it, because they tried to have it react too quickly and it was unstable, as it turned out. They began to change their routing protocol. The question arose, what are good routing protocols? [[Robert G. Gallager|Bob Gallager]] worked on this problem. Can you distribute the routing algorithms in some way? There were many routing protocols developed years before for work in transportation networks. How do you route trucks and things like that? Some of those ideas were picked up on in this case. So, [[Robert G. Gallager|Gallager]] did some pioneering work in distributed routing controls. Tom Stern did some fine, related work, which also stimulated work in the area. A lot of work was going on in this area. Harry Rudin, an American who went to Switzerland and is still living there, who had been active for years, was working in this field at IBM Zurich Research Laboratory. He’s now retired, I understand. He also did some pioneering work in routing. </p>
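<p>The routing work being described here is, in spirit, a distance-vector computation: each node repeatedly improves its distance estimates using the estimates reported by its neighbors, the Bellman-Ford idea that the early ARPAnet procedure and the older transportation-network algorithms drew on. The sketch below is only an illustration of that general style, with an invented four-node topology; it is not the ARPAnet's actual algorithm, which worked from measured delays and periodic updates.</p>

<pre>
# Minimal distance-vector (distributed Bellman-Ford) sketch on a made-up topology.

def distance_vector(nodes, links, rounds=10):
    """links maps each directed neighbor pair (a, b) to its cost."""
    INF = float("inf")
    # Each node initially knows only the distance to itself.
    dist = {n: {m: (0 if n == m else INF) for m in nodes} for n in nodes}
    for _ in range(rounds):
        updated = False
        for (a, b), cost in links.items():
            # Node a hears b's current vector and relaxes its own estimates.
            for dest in nodes:
                candidate = cost + dist[b][dest]
                if candidate < dist[a][dest]:
                    dist[a][dest] = candidate
                    updated = True
        if not updated:      # no estimate changed this round: converged
            break
    return dist

nodes = ["A", "B", "C", "D"]
links = {("A", "B"): 1, ("B", "A"): 1, ("B", "C"): 2, ("C", "B"): 2,
         ("C", "D"): 1, ("D", "C"): 1, ("A", "D"): 5, ("D", "A"): 5}
print(distance_vector(nodes, links)["A"])   # A's shortest-path estimates
</pre>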


<p>We had now left the physical layer behind and were moving up toward what is now called the network layer. IBM had done work on Synchronous Data Link Control (SDLC). When the world’s standards bodies in the ‘70s began to pick up on this, they changed it a little bit; they called it High-level Data Link Control (HDLC), but it is based on IBM’s SDLC. So, that’s the second layer, data link control. People began to work on this, and papers began to be published on that layer. Now, we are moving up to what we now call the network, or third, layer. The network layer involves congestion control and routing. If you go higher to the fourth layer, now called the transport layer, that also involves congestion control. TCP came along about that time. Later on, people began to develop a sophisticated control, called flow control, at that layer. We do congestion control at the network layer, we do flow control at the transport layer. But they are all very related. How do you keep receiving systems from being overrun by packets arriving? You can do it hop-by-hop at the network layer, or you can do it end-to-end at the transport protocol, but the ideas are very similar. Sometimes the layers get mixed up. </p>
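<p>Whatever the layer, the basic mechanism for keeping a receiving system from being overrun is the same: a window, or credit, limit on how many packets may be outstanding (sent but not yet acknowledged) at one time. The sketch below illustrates just that generic idea with invented numbers; it follows no particular protocol, whether applied hop-by-hop at the network layer or end-to-end at the transport layer.</p>

<pre>
# Minimal window (credit) flow-control sketch; parameters are illustrative only.
from collections import deque

def send_with_window(num_packets, window, receiver_rate):
    """Return (time_steps, peak_outstanding). The sender may have at most
    `window` unacknowledged packets in flight; the receiver drains
    `receiver_rate` packets per time step and thereby returns credit."""
    outstanding = deque()
    sent = acked = steps = peak = 0
    while acked < num_packets:
        # Send as long as window credit remains.
        while sent < num_packets and len(outstanding) < window:
            outstanding.append(sent)
            sent += 1
        peak = max(peak, len(outstanding))
        # Receiver absorbs a limited number of packets and acknowledges them.
        for _ in range(min(receiver_rate, len(outstanding))):
            outstanding.popleft()
            acked += 1
        steps += 1
    return steps, peak

# The window bounds how much the receiver must be prepared to buffer:
print(send_with_window(100, window=4, receiver_rate=2))   # small window, small buffer
print(send_with_window(100, window=16, receiver_rate=2))  # larger window, larger buffer
</pre>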


<p>By this time ARPAnet was developing a full-fledged network and giving rise to a lot of work going on all over the country in these areas, so we were not among the few working on networking now. Everyone was beginning to work in this area. I just mentioned [[Robert G. Gallager|Gallager]] did pioneering work. [[Oral-History:Leonard Kleinrock|Kleinrock]], from the beginning, at UCLA, did pioneering work on the ARPAnet. People from other universities did the same thing as well too. </p>


<p>I personally focused on performance analysis. [[Oral-History:Leonard Kleinrock|Kleinrock]] did too, by the way. He’s a broad guy. He does systems and software work, and he’s published classic books on queuing theory, giving courses in that regularly. So, he does work in everything. </p>


<p>We’re now in a real hot period in the network area. The work that I’m doing has become fully focused on networks. My book came out in 1977 as the first textbook in the field, although other books have been published on this as well too. What I do in this book is based on the literature, as well as some work that we had been doing. The examples of routing and flow control in networks that I give in this book are based on those of GE Information Services Network, Tymnet, and the ARPA Network from a qualitative point of view to try to understand what everything is all about. A big topic in those days (as now) had to do with being connected together with links of various kinds. So I also have in the book work on how you assign capacities to communication links, things like that. A lot of it was based on work that [[Oral-History:Leonard Kleinrock|Kleinrock]] had done in his original thesis studying end-to-end delay. Since you now have queuing delay, this is very different from the telephone network. This is a [[Packet Switching|packet-switched]] network with routers—you have buffering, so one of the performance objectives is to reduce the queuing delay as much as possible. How do you route according to queuing delay? He had done work in that, and so I discuss, how do you assign capacity to reduce delay? I have a chapter there on queuing theory, because people hadn’t done this before. I have a chapter here applied to store and forward buffering. I have a chapter on routing and flow control. All these things were in the air in those days. Other books have been published since, of course. </p>
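<p>The capacity-assignment question mentioned here has a classic closed-form answer in the early queuing literature: to minimize the average store-and-forward delay, give each link enough capacity to carry its own flow and split the leftover capacity among the links in proportion to the square root of their flows. The few lines below sketch that square-root rule with made-up numbers; they illustrate the result usually credited to Kleinrock's early work rather than reproduce the book's own examples.</p>

<pre>
# Square-root capacity assignment sketch (illustrative numbers only).
from math import sqrt

def square_root_capacity_assignment(flows, total_capacity):
    """Split total capacity C among links carrying flows lambda_i so as to
    minimize sum_i lambda_i / (C_i - lambda_i): each link first gets enough
    capacity for its own flow, and the excess is shared in proportion to
    sqrt(lambda_i). Flows and capacity are in the same units (e.g. packets/s)."""
    excess = total_capacity - sum(flows)
    if excess <= 0:
        raise ValueError("total capacity must exceed the total carried flow")
    norm = sum(sqrt(f) for f in flows)
    return [f + excess * sqrt(f) / norm for f in flows]

# Three links carrying 50, 100 and 200 packets/s, with 500 packets/s of capacity:
print(square_root_capacity_assignment([50.0, 100.0, 200.0], 500.0))
</pre>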


=== Modems and data networks  ===
<p>'''Schwartz:''' </p>


<p>I have a much bigger textbook covering much more material that came out in 1987 called ''Telecommunications Networks: Protocols, Modeling, and Analysis'' published by Addison Wesley. That also uses a quantitative approach, but I do treat protocols and things like that. Now, what was happening in the world in those days? Well, networking is becoming really significant to the world. It is known to everybody now. We had telephone networks that covered voice messages only. Suddenly, you find data becoming important now. People were shipping data over modems. Modems were being developed in those days because the telephone people realized early on that you want to ship data. </p>


<p>'''Hochfelder:''' </p>


<p>[[Robert W. Lucky|Bob Lucky’s]] work? </p>
<p>'''Schwartz:''' </p>


<p>Yes. [[Robert W. Lucky|Bob Lucky]] had a group at [[Bell Labs|Bell Laboratories]]. Actually, his group came out of an earlier group, started by Bill Bennett, who passed away a long time ago. He had some of the best people working on modems. But other people were doing this work too. A guy named [[G. David Forney, Jr.|Dave Forney]] set up a company called Codex, and they developed a data modem. So, other people were doing work too. Bob Lucky’s group did pioneering work. Steve Weinstein, as well as others, worked for him. They began to develop modems early on for handling data over telephone networks. As a matter of fact, all of these networks we talk about use telephone facilities. How does someone get into those telephone networks in some way? This is for terminal, low bit rate modems, things of that type. So that was the modem work that was going on. The CCITT in the early ‘70s was aware, not only of the modem work going on… </p>


=== Standards  ===
<p>'''Hochfelder:''' </p>


<p>CCITT? </p>
<p>'''Schwartz:''' </p>


<p>Yes. There is something called the International Telecommunications Union, ITU, housed in Geneva. That’s a standards making body for telecommunications administrations all over the world. Only administrations can belong to this. In the United States, it’s the State Department and the FCC that jointly work together on this. This has now changed. The ITU had, at the time, two separate standards bodies, one called CCITT, the other called CCIR. They are French acronyms. </p>


<p>'''Hochfelder:''' </p>
<p>One is for [[Telegraph|telegraphy]] and [[Telephone|telephone]], and the other one is for? </p>


<p>'''Schwartz:''' </p>
<flashmp3>360 - schwartz - clip 3.mp3</flashmp3>


<p>Comité Consultatif International Télégraphique et Téléphonique is the CCITT, the international telephone and telegraph standards body, and the other one is called the Comité Consultatif International des Radiocommunications, the CCIR. One is for radio standards and one is for telephone and telegraph standards. The CCITT has standardized a lot of modem work, things like that. But let’s focus now on the networking area that I am more familiar with. </p>


<p>They were well aware that data networks were now becoming significant. We already had IBM’s SNA; we had Tymnet; we had the GE Information Services Network. The French had a network set up called the Cyclades network. ARPAnet was here. Data networks were developing worldwide. The telephone industry was very aware of this, and they were very aware of networking. So the idea developed to try to standardize data networks in some way. They set groups to work. I wasn’t involved, so I don’t know the details of it. But in 1976 they came out with a different kind of standard, an interface standard, called X.25. </p>


<p>DEC had developed DECnet, Burroughs had developed its Burroughs network architecture; all of these architectures in the United States were proprietary—they were developed for their own equipment only, although the companies weren’t specifically in the data networking area. The CCITT was talking about developing some kind of open networking, but across interfaces. Another organization, the International Standards Organization (ISO), was beginning to develop activities as well too, so the two were going on simultaneously now. </p>


<p>Let’s focus first on CCITT. They came out with a standard called X.25. Now remember, they had working with them mostly telephone administrations, so they were catering mostly to the telephone companies worldwide, which, except for the United States, were all government organizations in those days. AT&amp;T was quasi-governmental. It was a monopolistic organization in those days. They had the standard called X.25, which enabled users to interconnect to any network. Networks would have their own protocols, but by using this interface you could get into the network. On your side, you see this protocol; the network on its side sees the same protocol, so you can get into it, the network can handle your traffic any way it wishes, and on the other side it comes back out again. It is strictly an interface protocol; it’s not an end-to-end protocol. (It has subsequently also been adopted in some places as an end-to-end protocol.) There was an interface defined between a terminal on the user’s side and the network side, and the same on the other side of the network, between the network side and the user side. What happens inside the network they weren’t concerned with. They weren’t going to tell people how to handle their networks. X.25 is a three-layered protocol. People knew about layers now: a physical layer; a data link layer, which is HDLC; and a third layer, normally a networking layer but, in the X.25 case, an interface layer, handling X.25 packets going across it. That came out of the CCITT activity. There was a lot of work published. </p>


<p>One of the first administrations to latch onto this was that of Canada, with Bell Northern Research developing products. In fact, I attended a communications meeting in 1976 and they presented some of their work at that time too. So they developed some of their products. The Canadians were one of the first in this area. Contemporaneous with this, the computer manufacturers and the public using computers began to feel the need for interconnecting computers in a non-proprietary sense. I just mentioned before IBM, DEC, and Burroughs in the United States. The same for Japanese companies. Each developed their own protocols for their own equipment. </p>


<p>The International Standards Organization, ISO, is also housed in Geneva. It is made up of companies rather than administrations. They felt the need to do this, so they set up some bodies and they began to develop pioneering activity in standardizing protocol architectures. They developed the idea of a seven-layered computer communications architecture: a physical layer, a data link layer, a network layer, a transport layer, a session layer, all the way up to the application layer. Why seven? Well, you don’t want too many, you don’t want too few, so they came up with that. They began a massive effort in each layer to really standardize this. Independently in the United States, the ARPAnet community had developed protocols, so they had their own. </p>


<p>'''Hochfelder:''' </p>
<p>The TCP? </p>


<p>'''Schwartz:''' </p>
<p>Yes. So they had developed a network layer called IP, actually Internet Protocol, interconnecting different networks. It told you how to route packets using the ARPA routing algorithms. TCP, Transmission Control Protocol, was the layer above, a transport protocol architecture. They had a data link layer too, et cetera. They had, of course, layers on top of that for handling various kinds of transfers, file transfers, e-mail, things like that. </p>
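<p>That division of labor is still the one in use today: IP routes packets between networks, TCP gives the two end points a reliable byte stream, and the layers above simply read and write data over it. The short sketch below shows an application exchange riding on TCP/IP using Python's standard socket interface; the loopback address and port number are arbitrary choices made for the illustration.</p>

<pre>
# Minimal sketch of application data riding on TCP/IP (loopback, arbitrary port).
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)            # TCP delivers the bytes in order
            conn.sendall(b"echo: " + data)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                               # give the server a moment to start

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))                 # end-to-end connection set-up
    cli.sendall(b"hello over TCP/IP")
    print(cli.recv(1024))
</pre>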


<p>[[Oral-History:Leonard Kleinrock|Kleinrock]], in one of his early papers, summarizing traffic usage early on for the ARPAnet, showed the most amazing result. Despite the initial ARPAnet idea of a computer utility, with the network used for accessing other Hosts’ software, it turned out that people were using the network to talk to their own friends! The study showed the first elementary use of email, where you’re talking to the guys in the office next to you. So most of the activity was local activity. People never know what things are going to be used for. That was what happened with ARPAnet of course. It never became a computer utility. It developed for other communications purposes. </p>


<p>ISO began to develop these standards and they came up with a seven-layered model and the ARPAnet community developed its own layered architecture. I innocently got involved in this. A standardization conflict had developed in the United States between the National Bureau of Standards, which is now NIST, and the Department of Defense. NBS was concerned with all commercial scientific and technological activity in the United States, in helping the commercial sector as part of the Department of Commerce. They had a Computer Laboratory Division. I happened to be a member of a Visiting Committee to that, to look at the Division activities once a year. They had a large computer communications activity and they were studying how to determine whether the ISO protocols, in their commercialization, were “correct” in their operation or not. They were heavily involved in this. They were concerned that American companies not lose out worldwide if the world were to adopt the international computer communication standards being developed by ISO. The Europeans were moving in the direction of ISO. In the United States, there was a push on by the Department of Defense to standardize the TCP suite, the ARPAnet suite, because they already had it going. Why bring in new protocols? So there was sort of a conflict between the Department of Defense and the Commerce Department, the National Bureau of Standards. They went to the National Academy of Sciences/National Academy of Engineering, which runs the National Research Council, NRC, and asked them to adjudicate. NRC set up an expert panel made up of people from industry and universities to try to determine which is the better way to go. Unfortunately (in retrospect!), I was asked to join that panel. On the panel were people from other universities: Dave Farber, a well-known guy in communications; Larry Landweber from Wisconsin; experts from IBM, from DEC, from Burroughs, and others. It was a broad-sweeping panel. We met for a long period of time in Washington with long meetings where we had people come in. We had [[Vinton Cerf|Vint Cerf]], who was with ARPA, speaking on behalf of TCP and that suite. We had people speaking on the other side. Interestingly enough, Dave Farber and Larry Landweber kept pushing for TCP. The rest of us, myself included, said no, wait awhile. TCP is just United States; we have to go worldwide. The ISO suite, seven-layer suite, is much newer, it has the new features in it. It is based on TCP to some extent. It doesn’t have the segmentation TCP has and in some areas it is much better. The guys from industry were pushing for it. So why not go for that? We finally over-rode their objections and they reluctantly agreed. We pushed for the ISO suite. </p>


<p>We issued this report and the Department of Defense said okay, they’re going to ask all of their contractors to move to the International Standards Organization’s suite and move away from TCP as soon as practical. Well, the rest is history. Despite this, TCP took over and the ISO suite never came in, and I regret my decision to this day. I tell my students any time I give a talk, don’t ask me to predict what is going to happen in this world anymore. That was a real goof on my part. I was wrong. You never know. </p>


<p>'''Hochfelder:''' </p>


<p>The computer keyboard, the typewriter keyboard, is almost the same sort of thing. </p>


<p>'''Schwartz:''' </p>


<p>Is that right? </p>
=== IEEE, Communications Society  ===


<p>'''Hochfelder:''' </p>


<p>We can talk about that off tape. Please talk about your involvement with the [[IEEE History|IEEE]] and with the [[IEEE Communications Society History|Communications Society]], and also your involvement here at Columbia at the Center for Telecommunications Research. </p>


<p>'''Schwartz:''' </p>


<flashmp3>360 - schwartz - clip 4.mp3</flashmp3>
<p>Yes. The IEEE one I’ll make brief. As a young fellow in the early ‘50s, I got very involved in information theory and communication theory. I became active in the then Information Theory Group before it was called a Society of the IEEE and attended a lot of meetings. It is hard to recollect the details of it. I don’t follow the [[IEEE Information Theory Society History|Information Theory Society]] activities anymore. But somebody told me a year or two ago that they had read the newsletter and somebody had mentioned my name in the newsletter because he had found it in studying the old society records of the ‘50s. I was active then in the Information Theory Group and served as Chairman in about 1964 or 1965. </p>


<p>I was simultaneously active in the Communication Technology Group, or whatever it was called before the [[IEEE Communications Society History|IEEE Communications Society]]. That was when I was head of the department at Brooklyn Poly. I was head of the department from 1961 to 1965 and living on Long Island. I became chair of the Long Island section of the Communication Technology Group. Then I got active in the overall Communications Group itself. I was one of the original people involved with the change from Group to Society. There was a fellow named Dick Kirby, Richard Kirby, who I guess was the first president of the new Communications Society which came out of the old Communication Technology Group. He invited me among others to join the committee to come up with the first constitution. So I was on that committee. </p>


<p>I became active. I was on the Board of Governors for years. I don’t remember the dates anymore. I got elected Vice President. Then in 1978 I was elected Director of the IEEE representing the Division of which the Communication Society was then a part. I was there for two years as part of that. </p>


<p>I am very proud of one incident while IEEE Director. People have forgotten this by now, but I’m the guy who proposed the idea of President-Elect. It’s not mentioned anywhere. I’m not sure that anybody would recognize it. But when I first became a Director, it became apparent to me that it was difficult to be a President for one year. You come and go and that’s it. You need some training. I knew other organizations had that, like the AAAS of which I was a member. So I proposed that at one of our meetings. We instituted the idea of President-elect and that was accepted. So now a guy comes in, is trained for a year as President, and then is able to go on as President the year after. I think I might have even proposed having a President for two years. </p>


<p>'''Hochfelder:''' </p>
<p>Isn’t there also like a Past President? </p>


<p>'''Schwartz:''' </p>
<p>Yes. Stay on for President, Past President. See, really they are committed for three years, which is very difficult at that level. Anyway, I was proud of that. </p>


<p>I kept up, obviously, my interest in [[IEEE Communications Society History|Communications Society]]. First I was active in the Communication Theory Committee for a long time, then I helped organize a new Technical Committee on Computer Communications, which has now grown considerably. That was the committee devoted to networking and things like that. Some of my students came in. [[Oral-History:Raymond Pickholtz|Ray Pickholtz]], my former student, later also became active in that committee as well following that. </p>


<p>In 1984-’85, I was elected president of [[IEEE Communications Society History|ComSoc]]. I might have been vice president before that. I was president for two years. One of our meetings was held abroad in Amsterdam. I think it might have been one of the first meetings we held abroad and that worked out very nicely. We had a couple of anecdotes. In Amsterdam, they threw out the red carpet—they opened up the city for us. We had a dinner engagement at the municipal hall, whatever they call it. The Queen came down to greet us. My wife tells a funny story where she came into this room where the [[IEEE Communications Society History|ComSoc]] governing group was gathered to meet with her, and somebody had said to us, “Number one, be careful how you greet her—she is a Queen, remember. Number two, just these people here, nobody else.” So I’m very different, I guess. She walks in, I shook her hand rather than bowing or something like that. In America, we don’t do things like that, right? She’s a Queen—so what? Secondly, I beckoned for my wife, “Come on. Come meet the Queen.” I wasn’t supposed to do that by protocol either. So what! Why can’t my wife meet the Queen? Anyway, it was very nice. </p>


<p>The nicest thing was the Chair of our Awards Committee at that time was a man named Ralph Schwarz, who was also at Columbia. He has since retired. He’s older than I am and a very nice guy. He was to give awards at the awards luncheon that we run annually. He and his wife were both refugees from Hitler Germany. They fled to Holland and he spent a year or two in Holland. They didn’t meet there; they met back in the United States, by coincidence. Ralph still retained some of his Dutch, so when he got up to give the awards, he started speaking Dutch. The Dutch hosts were amazed. This was wonderful. Imagine that- [[IEEE Communications Society History|ComSoc]] comes to Holland, and this guy is actually speaking Dutch, which was wonderful. I think people respect you for that. So thank God for Ralph. It was very nice. </p>


<p>It was a good two years. Then I stayed on, of course, as past president, as you normally do, to run the Nominations Committee. The last couple of years I have cut back and I haven’t been involved as much because I feel it is time for young people to take over. But I was active during that period of time in these committees. </p>


<p>In other IEEE things, I’ve always been involved in various IEEE award groups. I won the IEEE Education Medal in 1983. So, as always happens, I was invited to become a member of the Education Medal Award Committee to select new candidates. I did that for a number of years. The same for the [[Koji Kobayashi|Kobayashi]] award. One of our own past presidents and good friend, [[Eric E. Sumner|Eric Sumner]], who passed away some years ago, who had been an executive at [[Bell Labs|Bell Labs]]: we have an award for him. I’m on the award committee in his name. So I’ve been on a number of [[IEEE Awards|IEEE awards]] committees. </p>


<p>'''Hochfelder:''' </p>


<p>What impact do you think that the [[IEEE Communications Society History|IEEE Communications Society]] has had on advancing the state of the art? </p>


<p>'''Schwartz:''' </p>


<p>It’s hard to judge. Obviously, like every IEEE society, it enhances education of its members. It has its publications. It has its conferences. So I think through the conferences and the publications, this is where we mostly advance the state of the art. We could argue that it is the engineers at companies like [[Bell Labs|Bell Labs]] and IBM who advance the state of the art. But they are the guys who also do attend meetings and conferences and publish papers. Other companies learn from one another that way. I think the educational mission of the IEEE is reflected in [[IEEE Communications Society History|ComSoc]]. </p>


<p>Unfortunately, the whole business of competition coming in has made things a lot more difficult now. I used to love to go to the [[IEEE Communications Society History|ComSoc]] meetings, like the International Conference on Communications, ICC, and Globecom, the Global Communications Conference, and listen to papers from people from industry. They would talk about new systems, whether it was ITT talking about a new switching system or [[Bell Labs|Bell Labs]] engineers talking about a new system. I would learn a lot from that. As an academic guy, that is important to me. Otherwise, academics talk to one another. Unfortunately, now, with competition you get very little of this. They give you very little information. I can’t blame them, but it is very hard now. You find more academic papers now. I used to like the other papers from industry where you would learn from these guys. It’s important. I think [[IEEE Communications Society History|ComSoc]] as the leading communications society in the world has contributed to that in a broader sense. </p>


<p>I see more and more the need for bringing societies together. The communications field now spans many organizations. Steve Weinstein, who was president of [[IEEE Communications Society History|ComSoc]] a couple of years ago, has done a lot of this. He tried to bring different societies together, and was very successful. When I was president of [[IEEE Communications Society History|ComSoc]], I tried bringing the Communication and Computer Societies together in the area of computer communications. It was difficult because once you are part of an organization, you don’t want anybody intruding on your turf. You have the right to go ahead. The computer communications area belongs to both fields, computers and communications. I remember trying to get the [[IEEE Computer Society History|Computer Society]] to join with us, meeting with their president. It was a difficult situation because we were pushing ahead in computer communications and it might be better for the two Societies to work together. It is difficult. Steve and other people have managed to do that; I did not. Maybe I laid the groundwork; I really don’t know. But I couldn’t accomplish that much. We just went ahead with our own journals. </p>


<p>Now, for example, the leading journal on networking is the IEEE/ACM ''Transactions on Networking''. That was jointly set up, through Steve’s efforts, by the IEEE Computer Society, the [[IEEE Communications Society History|IEEE Communications Society]], and the ACM SIGCOMM. (I’m proud that the first Editor-in-Chief was my former student Jim Kurose, from Columbia, now a faculty member at UMass. He was the editor for the first three or four years of that journal.) Steven Weinstein has done a lot of work in trying to bring societies and groups together—ACM, and the [[IEEE Computer Society History|IEEE Computer]] and [[IEEE Communications Society History|Communications Societies]]—so we do a lot more work together. We have the leading conference on computer communications, INFOCOM, and that’s a joint Computer Society and Communications Society conference. We’ve had that for a number of years now. I’m very pleased. I’m still active in that conference. I’m currently on its Program Committee. I’ve got fifteen papers to review for the conference in the next two weeks, unfortunately. </p>


<p>That’s my activity, in a nutshell. Anything else you want to ask about that? </p>


<p>'''Hochfelder:''' </p>


<p>No. I think that covers it. </p>


<p>'''Schwartz:''' </p>


<p>Details appear in the IEEE files. I’m a [[IEEE Fellow Grade History|Life Fellow]] now. I was a Fellow before that. So I’m proud of the IEEE. A great organization. You want to talk about Columbia? </p>


=== Center for Telecommunications Research, Columbia  ===


<p>'''Hochfelder:''' </p>


<p>Yes. If you can talk about your involvement with CTR. </p>


<p>'''Schwartz:''' </p>


<p>What happened was that I came to Columbia in, say ‘73 unofficially; officially in ‘74. I started teaching courses in communications and computer communications. I set up the first graduate course in computer communications. I set up a course in signal processing. In fact, I published a book on that jointly with a former colleague at Brooklyn Poly. Signal processing came out of work at Brooklyn Poly; I set up a course in that. I taught a variety of courses. </p>


<p>I started working closely with my colleague Tom Stern here at Columbia, who is a wonderful person. He just retired, too. He’d been an old systems and control guy. He published a book years ago, on nonlinear networks, and he moved into the communications area, communication networking in particular. The two of us began to work together. We published a joint paper on routing in networks. He and I got together and we started organizing a little computer communications research group besides teaching courses. We developed the area. We got some industrial funding. Places like GTE and other companies gave us grants. I have to give the then Dean, Bob Gross, a lot of credit. He said, “Why don’t you guys organize yourselves as a Center and try to get more funds? Go out and get more funds from companies.” So we did. We set up a small center. I didn’t want to be the director, so I said, “Tom, you be the director.” So he was the director of this small center. </p>


<p>I was on sabbatical at IBM Research in 1980, and I still remember the day we hired a young man named Aurel Lazar. He’d gotten his degree at Princeton in point processes, a very theoretical subject. He joined us and we said to him, “Look Aurel, we’re doing work in networks now. So, how about doing that?” He switched over. </p>


<p>During my year at IBM Research in 1980, I worked on the IBM networking architecture, SNA, and other topics such as routing protocols, while meeting some of the people there. I did some work on congestion control. I took the SNA congestion control system and analyzed it. I published a paper that showed how one could generalize other kinds of congestion control techniques. We began doing this work. Beginning in 1980, Tom Stern, Lazar, and I developed this computer communications group. </p>


<p>In 1984, NSF sent out a notice saying they were setting up a new concept called Engineering Research Centers. These were to be multi-purpose centers with special funding in particular areas of engineering where the United States faced a competitive threat, and where there was a lot of basic research to be done. The Centers were to bring together faculty of different disciplines in that one field, work with graduate students, and bring undergraduates in as well, to do research. </p>


<p>They were to work closely with industry and try to move ahead into that field. I remember saying to Tom that we had no choice but to do this. “We have to apply for this because, if we don’t, somebody else is going to do that”. We got together a group of people here from Electrical Engineering, our Operations Research Department, and faculty and students in applied physics. We had a concept of looking into telecommunication systems of the future, starting from the basic VLSI hardware level, the device and chip levels, all the way up to the systems level. Electrical Engineering was broad enough to encompass all of these. We tried to involve our faculty in the solid state area and the optics area. We had multiple activities going on. We had queuing theorists from the Operations Research Department; we covered device physics; and, of course, we had systems guys, Tom Stern, myself, Lazar, and others. </p>


<p>I organized a group of interested faculty and I put together a position paper on our concept. By the way, in all honesty, the guys, aside from Tom Stern, myself, and Lazar, knew very little about communications. The guys working in VLSI and solid state were not really knowledgeable in that area. Once we got the award, we started training them. It was interesting. </p>


<p>Anyway, we put this proposal together. I wrote it with Tom’s help and submitted it. I guess there were 42 proposals submitted from all over the country in all fields of engineering. (Seven were finally selected, ours being the only one in telecommunications.) There was, initially, a site visit. They came down to visit us. We were then selected as among the top fourteen. </p>


<p>Then I had to go to Washington to present our case. I remember that was a difficult time. I don’t know who the other finalists are, because they don’t tell you this. You’re sitting in an anteroom and there are some other people who you don’t even recognize. (I might have recognized one from another university, but you’re not supposed to talk to each other.) They invite me in. Then you have what seems like a hostile audience in front of you. This is the selection committee, and they started firing questions at me, all kinds of questions. In particular, one guy was really firing hostile questions. Perhaps not hostile, but tough questions at me. Later on, I mentioned his name to one of my colleagues, and he says, “He is really a friend.” He may have been a friend, but not inside that meeting. </p>


<p>Anyway, I came out saying forget it, we’re not going to get it. Yet we got the award despite these very tough questions! We were awarded this grant, one of seven, the only one in telecommunications. With NSF support, we started the Center for Telecommunications Research going. We got the award officially in May of 1985, and we began to build. </p>

<p>One of the problems was that it was supposed to have been long range. NSF had said it would be long range, which means that you start taking on graduate students with the funding they require. The initial year the funding might have been a million and a half or two million. They had told us they were going to go up to five million a year. We began hiring graduate students on that basis. Of course, a couple of years into it, it turned out they leveled off at three million, and we had already hired all these graduate students, so, for a while, we had a real problem. We went up to a maximum of eighty-five doctoral students supported on this, plus twenty-seven faculty from these different disciplines. Not full time support for the faculty, but some support for each person. Lots of equipment. A tremendous thing. We began working and I think we did a lot of wonderful things. </p>


<p>We got industry involved. The biggest job I had in those days was convincing industry to join us, and a lot of my time as director was spent on the telephone, or going in person to meet people. I didn’t curtail my teaching activities at all. I kept teaching. I kept doing research. I had a book published two years later. It was just tiring, working longer hours. We managed to get a sizable number of companies involved. The major companies in the United States and elsewhere, actually. We had ATT, [[Bell Labs|Bell Labs]], IBM, GTE, Bellcore, Timeplex, and many others. Bellcore had been set up in ‘84, so we had Bellcore as part of us. We had at that time NYNEX, which is now Bell Atlantic. We had Southwestern Bell. We had Southern Bell from the southeast. We couldn’t get all of the then RBOCs, but we did get quite a number. We tried hard to get a lot of companies from the financial industry because they are heavy users. We said we must have the big users. We got Merrill Lynch to join us, and, through them, we got a company called Teleport, which they had acquired, which has now been picked up by AT&amp;T as a carrier. We never managed to get any banks, even though the banks always said to us, “We welcome you. Please come give talks to us. We like the seminars, but no money.” They didn’t give us any money. But they sent students to the programs we had. The only major company from the financial industry that supported us was Merrill Lynch. I’m very pleased about that. That was the most difficult thing, bringing some of the users onboard, but we had, maybe twenty to twenty-five companies join with us, big and small. </p>


<p>We set up an industrial affiliates program. Once a year we ran a big, open two-day forum on what we had done, talks and seminars on what we had accomplished. We did it the first year in 1986 and continued doing it every year. We also had Japanese companies joining us. We got the award in 1985; our first open large meeting was in 1986. One of the first questions asked was a hostile question from the audience: “This is an American Center, funded by the National Science Foundation, to advance American industry in a competitive environment. How can you tolerate having foreign companies as part of you?” </p>


<p>'''Hochfelder:''' </p>


<p>Especially the Japanese. </p>


<p>'''Schwartz:''' </p>


<p>The Japanese. My answer was very simple—we’re at a university; we’re open to the world. Now, everything we publish is above board and published in all kinds of journals. We have a lot to learn from the Japanese. It’s a two-way street, remember. They’re not going to steal us blind. We learn from them as much as they learn from us. It’s important to have these companies participate. In fact, some of our best defenders at that meeting were people from [[Bell Labs|Bell Labs]] and places like that who recognized this. We had Japanese companies; we had a Korean company joining us; some European companies. But the bulk were American. When I went back and spoke to the then Director of NSF, he said, “No, by all means, you’re free to bring other countries aboard.” They were very supportive of us because they recognized that as well, too. It was a great time. </p>


<p>Of course, our colleagues at other universities were very jealous of us. I remember being in a swimming pool at a Communication Theory Workshop in Palm Springs, California, and a well-known colleague from a well-known university comes up to me and he says, “Mischa, we have a big communications group, even bigger than your communications group. How come you got it and we didn’t?” I said, “I don’t know. Go ask NSF. We applied for it. We did our work. We’re doing good work, we think. I’m not going to argue.” There was jealousy there. </p>


<p>It was a lot of work, because NSF, in order to support this program, had to convince Congress that it was worthwhile as a heavy investment of money. Other universities were very jealous. They thought that this would come out of individual-investigator funding. NSF had assured them that it didn’t, that it came from extra money. It was a very difficult and trying time in all respects. So it was difficult for NSF. </p>


<p>NSF kept asking us for different kinds of identifiers of activity. We had to hire staff. We had five full-time administrative people. We also hired about ten post-docs, called Associate Research Scientists here at Columbia, to work with us too. We had to indicate over the years how this Center had helped American industry and helped our society. Well, one obvious way we kept pointing out was that a lot of our students did go into American industry. They were welcomed with open arms. They were properly trained because we had good research facilities we set up. We covered everything from VLSI all the way up to systems, signal processing, and image processing. A lot of good work came out of this, a lot of activity. Patents were issued. Papers were published. I was proud that in 1987, three years after we got the funding, I put a book out on telecommunications networks, which was well accepted. NSF used that to show people some of our accomplishments. In fact, in the preface of the book I give credit to NSF and the NSF Center for helping us do the research that led to a book like this. So it was very useful in many respects. </p>


<p>I stepped down as director in 1988 after three years. I had had it. I’m not really an administrator; I just don’t like that kind of thing. I managed to hire away from [[Bell Labs|Bell Labs]] a top-notch researcher named Tony Acampora, Anthony Acampora. A very able guy. He worked at the director level at [[Bell Labs|Bell Labs]]. He had always wanted to go to a university. He happened to be visiting us, and Tom Stern and I cornered him one day in my office and I said, “Tony, how about becoming director of CTR?” He said, “What!” Never even thought of it. Well, he agreed to come and he became director. He led CTR for the last eight years of CTR’s existence. I maintained my activity in it, of course. So, that’s the genesis of CTR. </p>


<p>By the way, we found out later on, because we had finally built up to about a five million dollar budget (three and a half million from NSF and maybe a million and a half, two million from industry, which is good), we had to cut back the number of students because we were using a lot of money to support the activities, meaning new research, equipment. By the way, out of this came the building you are sitting in now. We are sitting in what is called the Schapiro Research Center. Our Dean, Bob Gross, once we had the Center going and showed that we were really moving along and becoming well-known worldwide, went to the State of New York and Governor Cuomo and sold them on the idea that New York State had a lot to gain in terms of industrial activity in New York State by supporting the building of a new building with research activities in it. I think we got a sixty-million-dollar loan, or something like that, interest-free, from the New York State Dormitory Authority. We got a ten-million-dollar gift from a man named Morris Schapiro, who was a Columbia alumnus who had been a mining engineer here. He made a lot of money as a financier and has funded the Schapiro Dormitory here as well. His brother, Meyer Schapiro, is a well-known art historian, who was at Columbia too. This man, Morris Schapiro, I think he’s still living if I’m not mistaken. He’s in his late 90s by now. Wonderful man. Gave ten million dollars. So we built this new state-of-the-art building for research only. There is no teaching done here. Research facilities and seminar rooms. It came out of the fact that our Center had given the Engineering School the ability to go to the governor of New York and say, “Look, there’s something really going on here.” So I’m very proud of that too. This building houses all kinds of research activities from the Engineering School and the Columbia Physics department. </p>


=== Image processing ADVENT group  ===


<p>'''Hochfelder:''' </p>


<p>Could you talk about some of the technical advances or spin-offs, or more to the point, any of the ideas that came out of this Center that some companies picked up and actually… </p>


<p>'''Schwartz:''' </p>


<p>Well, a very important one: one of the Center leaders in the field of signal processing, a man named Dimitris Anastassiou, developed a large group on image processing. That’s his specialty. It’s called ADVENT, which is part of CTR. He worked as part of the MPEG team to develop the MPEG standard. Columbia is listed as one of the patent holders on the MPEG standard and, I just learned last week, derives a million dollars a year in revenue from that MPEG standard. </p>


<p>'''Hochfelder:''' </p>


<p>The MPEG standard is for image transmission? </p>


<p>'''Schwartz:''' </p>


<p>Movie information. Compressed movies over low bit rates. There are various versions of MPEG. This was, I think, MPEG 2. There is now an MPEG 4. There’s an MPEG 1. So that’s the official standard. MPEG stands for the Moving Picture Experts Group. That’s used worldwide now for compressed digital TV. </p>


<p>We developed a sizable software activity. Lazar had built one of the first high speed local area networks called Magnet. In fact, it was before we got the award. He had the Magnet II and this was one of the things we pointed to in preparing our proposal. Lazar was both a theoretician and a practitioner. A very rare combination. </p>


<p>As the years went by, Lazar got very heavily involved in all kinds of higher-level software activities. He developed a group called the Comet Group, which still exists now, together with the Advent Group. Comet was more devoted to networking as such, compared to Advent, which focuses on signal processing for data to go over networks. Comet handles networking issues, both software and hardware issues, protocols, standards of various kinds. Lazar took leave from Columbia last year to set up a company here in New York to develop and market some of his ideas. I understand he has some Columbia backing behind this. So Columbia is trying hard to do things like that. </p>


<p>Our students have gone to most companies you can think of. Not just [[Bell Labs|Bell Labs]]. They’re at Microsoft, at Cisco. They’re all over the place now. Not only did we have, and currently have, doctoral students; we have a lot of Master’s students and undergraduate students working and they love it. The undergraduate students are working in facilities with Master’s and doctoral students. </p>


=== ATM switching; Wave Division Multiplexing  ===


<p>'''Schwartz:''' </p>


<p>We did a lot of pioneering work in ATM switching, which came along at that time. In fact, we just mentioned Tony Acampora who was here at that time. He talks of a switch we developed which is one of the first distributed ATM switches. We built a network here in this laboratory connecting a bunch of terminals scattered throughout the building with switches, using our own homegrown protocol, but it was ATM-based. That has had an impact indirectly. There’s been no direct spin-off of that. We moved into the optical arena. That became a hot topic. </p>


<p>Tom Stern, who was our Technical Director, began to develop an activity in optical networking. Tony Acampora joined him in that. I helped out a little bit, but they were the two guys in this area. It was a natural, because we had people in the solid state area working in optics as well, with a strong optical activity. The idea, though, was, can we build an all-optical network in the future that will far surpass in capability any existing network? It’s all-optical, because you don’t have to convert from optics to electronics and back to optics. Do everything optically; the one drawback then was that this requires an optical switch. Tony and Tom and others with them began working on problems like that. We had some good activity in that area. A lot of good work came out of that. Tom Stern, as a matter of fact, has just published a very fine book on optical networking. This has, in the last few years, become a hot area commercially. A number of Tom’s students are currently involved in leading companies dealing with WDM, Wave Division Multiplexing. </p>


=== Educational and international influences of NSF Engineering Center  ===


<p>'''Schwartz:''' </p>


<p>A lot of us also led in developing courses. As I say, my textbook came out of courses we taught, and it’s been adopted by other schools. Tom Stern developed a course in optics. Our solid state people moved more into the optical area, developed and taught courses in that area, and that gets spun off to other schools as well. So a lot of activity in different areas. </p>


<p>I might say also that many other countries emulated this NSF Engineering Center concept. In fact, the first couple of years, not only was I busy with research, teaching, running the organization and trying to get new companies involved, and trying to satisfy NSF by going to meetings, but we also had to host a whole bunch of visitors all the time from Canada, the U. K., China, Australia. You name it. Every one of them ended up setting up centers like ours. There are a lot of schools that set up centers. Canada also developed a center for networking and for communications that covers the entire country. They took our model and developed it into a center where fourteen different universities and other institutions are combined with high-speed networks to work jointly. In fact, I was asked to serve on the original site team and I became chair of that site team, a visiting team, for a while. Australia had centers like that set up. I think the U. K. set up a center. So we’re very proud that once we did this, these other centers were set up. The United States also has a lot of centers now from all different universities too. Some existed before we set ours up. We had the biggest center, I think, in the country for a while working in that area. So it was a very successful venture. </p>


<p>Initially, we didn’t have the Columbia Computer Science department involved. But at one of our meetings we had industrial affiliates meeting with us. We had set up two boards, an Industrial Affiliates Board that was made up of top-flight people from each of the major companies that joined us. They had to commit to a certain amount, give us a certain amount of money—fifty thousand dollars a year or above. They helped us set policy and provide long-range direction for the Center. Then we also set up a Technical Advisory Board. That was made up of invited people. We invited the most outstanding people in the country in different areas to come join us on the board. We were free to pick whom we pleased. Sandy Fraser, who was one of the pioneers at [[Bell Labs|Bell Labs]] in data networking, was on our board. Dave Forney, who was an outstanding person in coding and modems, and one of the founders of Codex, which became part of Motorola, was on our Technical Advisory Board. Outstanding people joined us. </p>


<p>At any rate, at one of the meetings with industrial representatives, I remember Sandy Fraser, who was a software person who headed up Computer Science activities at [[Bell Labs|Bell Labs]], saying to us, “You know, you’re doing good work in software, but you really ought to focus more on software. Have the Computer Science department join you.” So we approached them and they did join us. So we began to broaden the activities because software became more and more significant, as you are well aware. Any other questions? </p>


=== Internet, wireless, and multimedia networking ===


<p>'''Hochfelder:''' </p>


<p>Not on the Center for Telecommunications Research. By way of wrapping up, if you could give your thoughts on the future of telecommunications. Perhaps some of the technical challenges that might be on the horizon. </p>


<p>'''Schwartz:''' </p>


<p>Well, as I said before, I don’t like to predict things, because I am always wrong! I can’t tell what’s going to happen. But clearly, the Internet is driving the show now. That and wireless. Those are the two major activities. Of course, Internet is moving now to the wireless domain as well too. As a matter of fact, before I retired three years ago, about five or six years ago, I began to get heavily interested in wireless, because I saw that it was an area with great promise for the future, with very interesting research challenges. I like to move into new areas as they come along. People are different. Some people like to stay in an area and really delve more deeply. I like to go on to new challenges. So five or six years ago I saw wireless communications research becoming a challenge to the academic world. A lot of activity in the academic world had been going on in the propagation aspects and physical layer aspects. But, to my knowledge, very little in the networking area, so I began to try to do work in that area. </p>


<p>When I was on sabbatical at University College, London, in 1995, I wrote a paper that was published in the ''IEEE Personal Communications Magazine'' on network challenges for wireless that has gotten a lot of play. I got a lot of nice comments on that and I have given a lot of talks all over the world on that. On the higher layer challenges, not for current wireless, which is what we call second generation, the digital wireless that everybody uses, but third generation and beyond, which is now coming along. I see that as one of the major challenges. Wireless terminals will be expected to carry, using limited bandwidth, multimedia traffic, which means video, voice, and images from the Internet. Now, how do you do this with battery-operated devices, which have limited power? Devices of this type are already starting to appear. More are expected. There is currently a lot of work going on in this area. I find wireless networking incorporating such devices one of the major engineering challenges now. </p>


<p>'''Hochfelder:''' </p>


<p>Especially in the mobile environment. </p>


<p>'''Schwartz:''' </p>


<p>Yes. Yes. So, how do you do that? Multimedia. There are a lot of network management issues involved with that, so there are a lot of people working on this now. I find it very exciting, and I am personally involved in that. </p>


<p>I think one of the big challenges that is coming up now is the whole optical area. I’m not really as involved in it as I used to be. Fiber is now allowing multiple-wavelength transmission over one system. Tom Stern has written a lot about that. For a while, we did a lot of basic work here on optics and it wasn’t going anywhere because people didn’t have the optical switches. But then, suddenly, breakthroughs began to develop through which you can handle multiple colors on the same fiber. That’s now increased the use of optics tremendously. Suddenly, it’s become a real hot industry that is obviously affecting the Internet because it means now that you can really drive much more high-speed traffic over it. This technique is called wave division multiplexing, WDM. It has become a real hot technology area now, with both companies and universities involved. Wireless networking and WDM: those are two major technical challenges. </p>
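<p>To make the capacity argument concrete: with WDM, the aggregate rate of a fiber is roughly the number of wavelengths times the per-wavelength bit rate. The short sketch below uses made-up round numbers (40 wavelengths at 2.5 Gbit/s each); they are illustrative only and are not figures from the interview. </p>

<pre>
# Illustrative back-of-the-envelope WDM capacity calculation.
# The wavelength count and per-channel rate are hypothetical round numbers.

def wdm_aggregate_gbps(num_wavelengths, rate_per_channel_gbps):
    """Aggregate fiber capacity when each wavelength carries its own channel."""
    return num_wavelengths * rate_per_channel_gbps

# e.g. 40 wavelengths, each carrying a 2.5 Gbit/s channel
print(wdm_aggregate_gbps(40, 2.5), "Gbit/s on a single fiber")   # -> 100.0
</pre>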


=== Internet access technologies  ===


<p>'''Schwartz:''' </p>


<p>The key question now is, what is the impact of the Internet and how is that going to manifest itself in the future? One clear thing is access technologies that have fallen behind. We all have our 28.8 kilobits per second or 56 kilobits per second access to the Internet. That’s too slow at home. We have no problem at a place like Columbia because we use our [[Ethernet|Ethernet]] facility, right into the Internet. But at home, you don’t have that kind of thing. So, access technology is important. </p>


<p>You’ve heard of XDSL. That stands for Digital Subscriber Line, “X” meaning different versions of that: ADSL, HDSL, for example. ADSL, the asymmetric version, seems to be taking off to some extent. Some telephone companies are now beginning to push that. It enables you to ship signals downstream to the user at megabit-per-second bit rates and upstream at slower bit rates. Asymmetric. That’s what you want for accessing the Internet. You want high speed coming down. </p>
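<p>A quick worked example of why the asymmetry suits web access, assuming a hypothetical 1 MB page and a 1.5 Mbit/s ADSL downstream rate (illustrative values, not rates quoted above): </p>

<pre>
# Rough comparison of download times for the same page over a dial-up modem
# and an ADSL downstream channel. Page size and ADSL rate are example values.

def transfer_seconds(size_bytes, rate_bits_per_second):
    return size_bytes * 8 / rate_bits_per_second

PAGE_BYTES = 1_000_000                       # a hypothetical 1 MB web page
print("56 kbit/s modem :", round(transfer_seconds(PAGE_BYTES, 56_000), 1), "s")    # ~142.9 s
print("1.5 Mbit/s ADSL :", round(transfer_seconds(PAGE_BYTES, 1_500_000), 1), "s") # ~5.3 s
</pre>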


<p>Cable modems are being pushed now too, and of course you read that AT&amp;T has bought cable companies. They are going to be pushing cable modems. They’re also high bit rate, the difference being that they’re like Ethernet, with multiple users using the same cable. So you have to compete with other users. If you get the facility for yourself, you go high speed; otherwise you have to share it. You have to slow down a little bit sometimes. Whereas with ADSL, systems like that, you have your own dedicated wire. There are tradeoffs. Access technology is a hot topic now. </p>

<p>I still think that one big difficult area that hasn’t been decided yet is how you should run the Internet. Is there a place for Asynchronous Transfer Mode (ATM), for example? That’s been a topic of much discussion lately. ATM was touted as a broadband integrated networks concept years ago by the CCITT, the organization mentioned before. (The names have changed, by the way. We now have ITU-T and ITU-R, no longer CCITT and CCIR, as part of the ITU. Telephone and Radio Telecommunications Committees, respectively.) They developed the concept of broadband integrated service, BISDN networks. Broadband Integrated Service Digital Networks. ATM was being touted by the ITU-T (the CCITT initially) as the networking system of the future that would enable us to bring broadband ISDN into use. It’s geared specifically to multimedia. It enables different kinds of service to be provided, whether it’s video, voice, data, images. All integrated, so it is multimedia in nature. There is an organization called the ATM Forum made up of, by now, hundreds of companies trying to develop standards for this. The one basic idea that’s been driving ATM is Quality of Service, which is a new concept. </p>


=== Internet quality of service ===
<flashmp3>360 - schwartz - clip 5.mp3</flashmp3>


'''Schwartz:'''
<p>In the telephone industry, you talk of Grade of Service. You don’t want to have too many calls blocked. Voice-based wireless, the current cellular wireless, is the same way. In the data networking world, you talk about packet delay; you talk about packet loss probability. You can’t lose data packets because they contain important information. So TCP had built into it the concept that if it does not receive an acknowledgement for a packet within a certain period of time, it repeats the packet, because every packet has to be received correctly. </p>


<p>ATM has Quality of Service built in, right from the beginning, depending on the kind of service to be provided. ATM is a packet-switched service using constant-size packets called cells. (Not to be confused with the wireless cell; that is different.) The ATM cell is a small packet, forty-eight bytes of data and five bytes of overhead going through the network—very short. The whole point of the standards body, the ITU-T, and the ATM Forum following, was to develop standards of Quality of Service and have this built in. So they have a concept that different kinds of traffic will be transmitted. For example, continuous bit rate traffic, such as voice, may incur delay in transmitting these packets, but voice traffic has the property that it must arrive at the destination within a certain interval of time; otherwise it’s not tolerable in real time. On the other hand, you can drop some voice packets, voice cells; the ear won’t notice it too much. A little bit of noise is heard. So its Quality of Service is maximum packet delay as well as packet jitter, because voice cells are buffered as they move along. Successive packets in the same conversation will change their spacing. That’s not good for the ear; you have to reduce that. So, you have these Qualities of Service for voice. </p>
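
<p>To make the cell format concrete, here is a minimal sketch (in Python, purely illustrative) of chopping a packet into fixed 53-byte ATM cells of five header bytes plus forty-eight payload bytes; the header contents are a placeholder, not the real ATM header layout: </p>

<pre>
# Minimal sketch: segment a packet into fixed-size ATM cells
# (5-byte header + 48-byte payload = 53 bytes per cell).
# The header here is a placeholder, not the actual ATM header fields.
HEADER_LEN, PAYLOAD_LEN = 5, 48

def segment_into_cells(packet: bytes, header: bytes = b"\x00" * HEADER_LEN):
    cells = []
    for i in range(0, len(packet), PAYLOAD_LEN):
        payload = packet[i:i + PAYLOAD_LEN].ljust(PAYLOAD_LEN, b"\x00")  # pad the last cell
        cells.append(header + payload)
    return cells

cells = segment_into_cells(b"x" * 120)  # a 120-byte packet becomes 3 cells
assert all(len(cell) == HEADER_LEN + PAYLOAD_LEN for cell in cells)
</pre>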


<p>Video, by contrast, is transmitted at variable bit rates. It starts off being continuous bit rate; when you compress it, it becomes variable bit rate (VBR) traffic. Real-time video is similar to voice in Quality of Service: it has to be delivered, just as voice does, within a certain interval of time, but you can’t lose too many video cells because this might wipe out a whole screen. Data packets, on the other hand, can be delayed to some extent, but you can’t drop any of them. So all of these have different kinds of Quality of Service, and that’s built into ATM from the beginning. </p>
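
<p>The requirements just described can be tabulated per traffic class. The sketch below is only illustrative; the numeric targets are invented placeholders, and only the qualitative pattern (voice tolerates some loss but not delay or jitter, data tolerates delay but not loss) comes from the discussion above: </p>

<pre>
# Illustrative Quality-of-Service targets per traffic class.
# The numbers are placeholders; the qualitative pattern follows the text above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class QoSClass:
    name: str
    max_delay_ms: Optional[float]   # None means delay-tolerant
    max_jitter_ms: Optional[float]  # None means jitter-tolerant
    loss_tolerated: bool

CLASSES = [
    QoSClass("voice (constant bit rate)", max_delay_ms=150.0, max_jitter_ms=30.0, loss_tolerated=True),
    QoSClass("real-time video (variable bit rate)", max_delay_ms=150.0, max_jitter_ms=30.0, loss_tolerated=False),
    QoSClass("data", max_delay_ms=None, max_jitter_ms=None, loss_tolerated=False),
]
</pre>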


<p>For a long time, people thought ATM was the networking standard of the future and many companies were set up building ATM switches. Interestingly enough, they originally touted them as being switches for the wide area networks. They first got their sales, however, in local area networks for companies and academic institutions. Now they are being deployed again by wide area networks running over a system called SONET. It’s a high bit rate optically-based protocol for the physical layer, also called SDH. But people in the Internet world are now saying, “But why should we take our Internet packets, TCP/IP packets, and chop them into ATM cells and transfer them to SONET? Why can’t we have TCP/IP over SONET directly?” So there’s a sort of a little struggle going on now. I wouldn’t quite call it a war, but a struggle between two different factions. In fact, I’ve attended meetings where there are proponents of both. So it’s not clear what’s going to happen with ATM at this point, whether it will support the Internet all over the world or not. A lot of the telephone administrations are deploying ATM switches. </p>




<p>'''Schwartz:''' </p>


<p>Talking about new issues of the future, another issue is the Quality of Service on the Internet. If you don’t have ATM, how do you guarantee user Quality of Service over the Internet? TCP was developed as a data protocol—packets for data. Now, you want to run voice over the Internet and that voice has to have that same time guarantee mentioned above. Very difficult, particularly now with the Internet. So, they’ve been tussling with Quality of Service. Because if you really want to have real-time voice, you have to guarantee it will get there in time, and it’s very difficult with the Internet. When you go through various Internet service providers over different networks, nobody knows what’s happening to your packets. You can’t guarantee anything. So they’ve had their IETF, which is their standards making body, tussling with this. </p>


<p>Quite a number of years back they proposed a technique called RSVP, a receiver-based protocol, that would try to bring some Quality of Service into this. I’m not really that active in the area, so I’ll just give you my judgment on what I’ve heard. People say it doesn’t scale to large numbers of users and large numbers of sessions going on. So there are other techniques being developed now by various people. One is called Differentiated Services. I said that’s the thing the Internet communications people are tussling with now. How do you guarantee Quality of Service in the Internet environment if you don’t use ATM, which might have that built in? So, that’s another technical challenge in the future that people are working right now on. </p>
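
<p>Differentiated Services is only named here, not described. As a generic illustration of the underlying idea (and not the actual DiffServ specification), one way to give some traffic better treatment without per-flow reservations is simple class-based priority queuing: </p>

<pre>
# Generic illustration of class-based forwarding, not the DiffServ standard:
# packets are marked with a class, and the higher-priority queue is always
# served first whenever the link is free.
from collections import deque

queues = {"priority": deque(), "best_effort": deque()}

def enqueue(packet, traffic_class):
    queues[traffic_class].append(packet)

def dequeue():
    for cls in ("priority", "best_effort"):  # strict priority order
        if queues[cls]:
            return queues[cls].popleft()
    return None  # nothing waiting

enqueue("ftp-1", "best_effort")
enqueue("voice-1", "priority")
assert dequeue() == "voice-1"  # the marked voice packet goes out first
</pre>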


<p>'''Hochfelder:''' </p>


<p>Okay. Sounds good. That’s all I have. Do you have any concluding thoughts? </p>


<p>'''Schwartz:''' </p>


<p>Actually, I’ve run out. </p>


<p>'''Hochfelder:''' </p>


<p>Thanks very much. </p>


<p>'''Schwartz:''' </p>


<p>My pleasure. </p>


[[Category:People and organizations|Schwartz]] [[Category:Engineers|Schwartz]] [[Category:Universities|Schwartz]] [[Category:Corporations|Schwartz]] [[Category:Communications|Schwartz]] [[Category:Communication systems|Schwartz]] [[Category:Telecommunications|Schwartz]] [[Category:Computers and information processing|Schwartz]] [[Category:Computer networks|Schwartz]] [[Category:Information theory|Schwartz]] [[Category:IEEE|Schwartz]] [[Category:Culture and society|Schwartz]] [[Category:Defense & security|Schwartz]] [[Category:World War II|Schwartz]] [[Category:Standardization|Schwartz]] [[Category:Communication equipment|Schwartz]] [[Category:Modems|Schwartz]] [[Category:Image processing|Schwartz]] [[Category:Distributed computing|Schwartz]] [[Category:Internet|Schwartz]] [[Category:News|Schwartz]]

Revision as of 18:23, 29 March 2012


About the Interview

MISCHA SCHWARTZ: An Interview Conducted by David Hochfelder, IEEE History Center, 17 September 1999

Interview # 360 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, 39 Union Street, New Brunswick, NJ 08901-8538 USA. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:

Mischa Schwartz, an oral history conducted in 1999 by David Hochfelder, IEEE History Center, New Brunswick, NJ, USA.

Interview

Interview: Mischa Schwartz

Interviewer: David Hochfelder

Date: 17 September 1999

Place: Columbia University, New York

World War II influences on radar, communication theory

Schwartz:

<flashmp3>360 - schwartz - clip 1.mp3</flashmp3>

As my first job, fresh out of school in 1947, I was lucky to get a radar systems job at Sperry Gyroscope Company, which had pioneered in radar during the war. I had a wonderful group to work with, and in the process of doing that, I got heavily involved in communication theory, so I come at the field of telecommunications from communication theory. Before World War II, communications was broadly considered to be the two areas of radio and telephony. Much of it was “seat-of-the-pants” engineering. For example, one of the textbooks we used in those days was the book Radio Engineering by Frederick Terman, who was one of the real pioneers going back to the ‘30s. That’s strictly circuit after circuit, with very little analysis or overall systems orientation. I think World War II changed that. At least from my perspective it did.

In our work, many of us learned from the work at the Radiation Laboratory at MIT during World War II, which put out a whole series of books on radar. While working in radar and communication theory, post-1947, I used a lot of that material. At the same time there were people at Bell Labs also working in that area. Claude Shannon, for example, the founder of, and a giant in information theory, did a lot of work during World War II on problems related to communication and control for the military. Norbert Wiener at MIT was also doing work of that type. So, this all came out of problems during World War II, much of it having to do with trying to improve radar, communications, and system control technology. There were many physicists and mathematicians working on radar at the MIT Rad Lab.

That’s my own view. They developed a systems unit that had been missing before. You had very good physicists working, good mathematicians, and very good engineers. The systems concepts had been slowly developing, but they really came together there. For example, problems like detecting signals in noise—a critical issue in communications—came out of work on radar.

Education; Sperry Gyroscope Company employment

Schwartz:

I worked at this group at Sperry for two years in 1947-1949, coming in as a young kid, fresh out of school. I had just received a degree at Cooper Union. I started Cooper Union early on during the war and got drafted. I came back after the war, and luckily I had a good advisor who pushed me out in a hurry. He said I didn’t have to take certain courses, and so I got out in ‘47. I started a master’s degree at night at Brooklyn Poly and I was teaching at Cooper Union, too. When you’re young you have a lot of energy to do all kinds of things simultaneously.

At any rate, I got fascinated. Now, here I am fresh out of Cooper Union, I get involved with the radar systems group at Sperry, and the first thing I had to learn was probability. I never took a course in probability theory, but here I’m being asked to study signals in noise. What is noise? What are signals? How do you represent these things? What do you mean by detecting signals in noise? Because this was a critical issue in radar. So I did a quick learning process, and I must say in all honesty that sometimes I feel that I’m not as good in probability as I should be! All my work has been in statistical communications technology. But I never had a formal course in the subject, so it’s all “seat-of-the-pants” self-learning. I have taught numerous courses in statistical concepts, but somehow or other I still feel I missed the rigorous approach needed to truly learn probabilistic concepts. Anyway, it turned out one of the key areas that we had to learn quickly was noise representation. That was a very hot topic.

First of all, in radar, how do you pull up signals from noise? I used two primers at that time. One book of the Radiation Laboratory’s series was on the detection of signals in noise, so I went through that. The other was a classic series of papers written by Steve Rice, S. O. Rice, at Bell Laboratories in 1944 on mathematical noise representation. I found it very difficult as a young kid without enough background in probability when these guys are tossing this stuff at me. For instance, spectral analysis—I didn’t know what that was. The problem is in school it was all seat-of-the-pants kinds of things. Design electronic circuits—there’s no system orientation of any kind. Now suddenly you have to learn about spectrum analysis. Some of this had appeared earlier in the literature but really hadn’t been done in the schools. The whole field has changed now. This came out of World War II where people were dealing with noise representation, representation of signals, spectral analysis.
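
For a reader unfamiliar with what "spectral analysis" of noisy signals involves, here is a minimal numerical sketch (not anything from the Rice papers themselves; the sample rate, tone, and noise level are arbitrary) that estimates the power spectrum of a sinusoid buried in Gaussian noise:

<pre>
# Minimal sketch of spectral analysis: estimate the power spectrum
# (a periodogram) of a 50 Hz tone buried in Gaussian noise.
import numpy as np

fs = 1000.0                                   # sample rate in Hz (arbitrary)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50.0 * t) + np.random.normal(scale=2.0, size=t.size)

spectrum = np.abs(np.fft.rfft(x)) ** 2 / x.size  # periodogram estimate
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
peak = freqs[1:][np.argmax(spectrum[1:])]        # ignore the DC bin
print("strongest component near", peak, "Hz")    # typically near 50 Hz
</pre>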

At the same time, in 1948 (the year after I started at Sperry) hot in the air was Shannon’s work on information theory that came out at that time. That was an eye-opener. I didn’t quite understand all of it either, and that took a lot of learning. So everything was really very exciting in those days, I must say. The whole idea of communication theory: how to improve transmission of signals in the presence of noise; how you cope with this in radio, how you cope with this in radar-related ventures. But starting with radar and moving into radio, now, how do you handle that? How do you carry out telephone transmission in the presence of noise? All of these ideas began to gel and come together based on work during the war by Shannon, Wiener, Steve Rice at Bell Labs, and the physicists who worked at Rad Lab. There was a lot of work being done. Some of this work was still being done in the classified area. There was a very famous report being done by a man named Marcum, who was one of the first to do some basic work on detection of signals in noise for radar problems. This was declassified a long time ago and it has been published in a classic set of papers. Our work at Sperry was sort of similar. We used some of his work, and we went beyond his to some extent. So this was “hot in the air” in those days, and that was the most exciting thing.

Nowadays, we look back and we say that’s only part of the communications process, what you call the physical layer. We didn’t call it that in those days. But nobody thought about going up higher or anything like that; just get those signals across and noise is your big problem. So, how do you handle it? How do you characterize the noise? How do you characterize the signals? How do you handle problems of this type? So that’s where things were gelling in those days.

Ph.D. studies, Sperry Graduate Fellowship

Schwartz:

Early in 1949, after I had been there almost two years, Sperry Gyroscope introduced a new doctoral program for people working at Sperry. I applied and was lucky enough to get the award. The most amazing thing is they say to pick any school I want in the country. “So, how much will I get?” “Well, you figure out what it’s going to cost.” I mean, nobody had a price, believe it or not. I could have gone anywhere, assuming I had been accepted. Luckily, I’m an honest guy. I had good grades in my evening Master’s program at Brooklyn Polytech, and as a young kid I had always heard Harvard was the best university in the country. Maybe not the best place for electrical engineering, but I decided to go to Harvard, and I’m glad I did.

So I went to Harvard in ‘49. Harvard had had an engineering school, but they closed it down just as I arrived there, so I went into the Applied Physics program. I was really doing communications work as well as applied physics, but you could do anything you pleased. I submitted a proposal to Sperry to cover my costs there and I thought I was limited in the amount of money, so I really didn’t ask for much. But as the “richest” graduate student on campus, I bought an old car—nobody else could do that. I wasn’t making much money, but it was still better than most students were getting in those days. I was very thankful to Sperry for the Sperry Graduate Fellowship. It was very nice. Summers I came back to work at Sperry.

I went to Harvard and looked around for a thesis topic. I got an advisor, a man named Pierre LeCorbeiller, who was a physicist. He said to me, “I’ve got a nice problem on the double pendulum that I’d like you to solve. Solve it, you get the degree.” I said, “I’m not really interested.” I took some reading courses with him on nonlinear mechanics, stuff like that. So I went scouting around for a thesis topic. I even went over to MIT to talk to some well-known electrical engineers there. One guy was a doctoral student, Bill Huggins, from Johns Hopkins, whose work I’d known about earlier. I couldn’t get a topic, so I decided to extend the work that I’d done at Sperry. I’m happy that I did because it turned out to be wonderful; it expedited my getting the doctorate. My first year at Harvard, I took courses and I took the doctoral qualifying exam. The second year, I spent a few months on the thesis and I finished up in two years, very quickly. They tell me the thesis was very good. Somebody once said it’s the most dog-eared thesis at Harvard in that field, because it was a hot topic. It was on the detection of signals in noise as applied to radar, but extending the work that Marcum had done and the work we had done at Sperry. So it really was detection theory, if you wish.

Signal detection in noise; thesis and publications

Schwartz:

I tell my students I believe in serendipity. I like browsing, and I suddenly find something that can be useful. This was the case with my doctoral work. You realize the problem—the detection of signals in noise—is statistical, so you look through the statistics literature. It turns out statisticians had done work similar to this. They may not have used the term noise, they may not have used the term signal, but instead talked of detecting something in the presence of interference. Something like that. In particular, both Marcum and I came across this. Two statisticians named Neyman and Pearson had done work on the theory of statistical detection that I applied to the problem of radar. It enables you to find optimization techniques. For example, if you have a signal in the presence of Gaussian noise, the model everybody uses, the theory says that the optimum thing to do (which is what people were doing all along) is to take these signals, one after the other, and just add them up. After a certain period of time, you stop and say that if the sum of those signals exceeds a certain threshold based on the noise, then you call it a signal present. Otherwise, you call it noise present.

See, they came up with the nice idea that you have what’s called the probability of success and the probability of making a mistake—you have to incorporate both. A lot of books have been written on this. One of my books talks about that too. What you do is to say, “I want the probability of a mistake to be less than a specified amount.” In our case, mistaking noise for signal is to be less than a certain value. That sets a threshold. Then you maximize the probability of success of a signal when it does appear. The procedure tells you the way to handle this. If it turns out noise is Gaussian, just add up the signal samples and set a level, depending on the probability of false alarm, the probability of mistake. Some of this work had already been done at the Rad Lab, but not in this clear form. They had done similar things, because, if you think about it awhile, it’s almost an obvious thing to do.
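
As a hedged sketch of the "add the samples and set a threshold from the false-alarm probability" procedure described above (the sample sizes, signal level, and false-alarm rate below are arbitrary illustrations, not values from the thesis):

<pre>
# Sketch of a Neyman-Pearson style detector for a constant signal in
# Gaussian noise: fix the false-alarm probability, derive the threshold
# from the noise statistics, then compare the summed samples against it.
import numpy as np
from scipy.stats import norm

def detect(samples, noise_sigma, p_false_alarm):
    n = len(samples)
    # The sum of n independent N(0, sigma^2) noise samples is N(0, n * sigma^2),
    # so the threshold giving the desired false-alarm probability is:
    threshold = norm.isf(p_false_alarm, loc=0.0, scale=noise_sigma * np.sqrt(n))
    return np.sum(samples) > threshold

rng = np.random.default_rng(0)
print(detect(rng.normal(0.0, 1.0, 20), 1.0, 1e-3))        # noise only
print(detect(0.8 + rng.normal(0.0, 1.0, 20), 1.0, 1e-3))  # signal plus noise
</pre>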

So I pursued that for my doctoral thesis. I extended this work in my thesis to handle a signal that is fading, or fluctuating in amplitude. I also came up with another detection technique when I said, “But, gee, I might try other techniques instead of adding the signals up. Maybe there are simpler techniques.” These apply to communications obviously, because it’s the same problem of detecting a signal in the presence of noise. So I developed a technique called coincidence detection, taking a number of these radar signals coming in to see if each signal separately exceeds a certain level. If it does, you count it, then after a certain interval of time, you count the number of incidences above that level. So it’s a counter rather than an adder. I thought it might be easier to implement, and I compared that with the optimum Gaussian adding procedure and showed that it does very well.

You have a bunch of signal plus noise samples coming in; they may be fluctuating because the signal is fluctuating. Each sample that comes in could be just noise, so you set a threshold. As each one comes in, you ask whether it is above the threshold, and you keep a counter; every time a sample exceeds the threshold, the counter advances. The radar can only handle a certain number, so you set a certain fixed number of samples to come in, in a certain period of time. You say, “I’ll call it a signal rather than noise,” if K of N samples exceed that level, and the value K depends on the noise level and on the signal level. Not only that, I found by just playing around, studying a lot of these in large numbers, that there is an optimum value of K in terms of maximizing the probability of success, for example. So that was a chapter of my thesis, and later published as a paper. The radar people picked up on this paper, and it was later cited as a classic paper in the field.
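
A minimal sketch of the coincidence (K-out-of-N) detector just described; the per-sample threshold and the value of K below are arbitrary illustrations rather than the optimized values worked out in the thesis:

<pre>
# Coincidence detection sketch: count how many of the N samples individually
# exceed a per-sample threshold and declare "signal" if at least K of them do.
import numpy as np

def coincidence_detect(samples, per_sample_threshold, k):
    count = int(np.sum(np.asarray(samples) > per_sample_threshold))
    return count >= k

rng = np.random.default_rng(1)
print(coincidence_detect(rng.normal(0.0, 1.0, 10), 1.0, k=4))        # noise only
print(coincidence_detect(1.0 + rng.normal(0.0, 1.0, 10), 1.0, k=4))  # signal plus noise
</pre>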

Not only that. I found to my amazement later on, people saying this was one of the first papers on non-parametric detection because the coincidence detection technique was non-parametric. It didn’t count on the optimum Gaussian statistics; it simply said count the number above a level. That’s all I did. It was very nice.

So I published these papers. I came back to Sperry after two years. I had nobody to guide me; my advisor didn’t really guide me in this. Somebody said, “Why don’t you publish this?” I said, “Where should I publish it? I don’t know anything about publishing.” “The Journal of Applied Physics.” I sent the whole thesis (180 pages long) to the Journal of Applied Physics. It goes over there. Back comes a letter from the editor: “It sounds interesting, but maybe you ought to send in separate articles.” It took me a while. I ended up finishing my thesis in ‘51. In ‘54, two papers were finally published.

Another example of serendipity was the chapter in my thesis on sequential detection applied to radar. While at Harvard, I was browsing through the statistical literature on signal detection. I came across a book by a man named Abraham Wald, who was a world-famous statistician at Columbia, called Sequential Decision Theory. He had the concept that one could speed up the process of determining whether a product that you were looking for is ok. All of this is small sample theory. Maybe I can do better with fewer samples, if instead of setting a criterion and seeing if my samples exceed that criterion, I do it sequentially. I start looking at each sample as it comes along. I have a decision region that changes depending on what the sample was. For example, if the first few samples I get are above a certain level, maybe that means it is a signal coming in. Let me reduce the threshold or do something. Adapting Wald’s work to radar, I developed a double threshold scheme. If something fell below a lower threshold it was noise; above a second threshold it was counted as signal; between the two, you continue the process, and start narrowing that double threshold region, the one between the two thresholds.
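
The double-threshold idea can be sketched as a Wald-style sequential test. The version below keeps the two thresholds fixed, which is the textbook form; the thesis adaptation for radar, as described above, also narrowed the in-between region over time. All numerical settings are illustrative assumptions:

<pre>
# Wald-style sequential test sketch: accumulate a log-likelihood ratio and
# stop as soon as it crosses the upper ("signal") or lower ("noise") threshold.
import math, random

def sequential_test(sample_stream, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    upper = math.log((1.0 - beta) / alpha)   # decide "signal" above this
    lower = math.log(beta / (1.0 - alpha))   # decide "noise" below this
    llr, n = 0.0, 0
    for n, x in enumerate(sample_stream, start=1):
        # log-likelihood ratio increment for N(mu1, sigma) versus N(mu0, sigma)
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return "signal", n
        if llr <= lower:
            return "noise", n
    return "undecided", n

random.seed(2)
print(sequential_test(random.gauss(1.0, 1.0) for _ in range(1000)))
</pre>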

Hochfelder:

So it’s an adaptive measure?

Schwartz:

It’s an adaptive technique. I said, “My God. I can use that for radar!” I have a chapter in my thesis on sequential decision theory, which I never published, unfortunately. I wish I had, because other people published and got the credit for it. I tell my students this all the time now, “Publish as quickly as you can. Otherwise, you’re going to find that everything is in the air at that time. Everybody’s working on these problems. Things don’t come out of a vacuum.” If I’m working on this problem, other people are working on this problem. Don’t be afraid if somebody has similar ideas. You are going to have your own ideas, always different enough from somebody else’s ideas that it will be all right. Reminiscing a little bit, the day of my doctoral exam at Harvard, I got a call from my advisor to come into his office. He says, “When you went over to MIT, who did you speak to?” I said, “Why?” Well, it turned out that I had a fright, but it worked out okay. My advisor had invited Jerome Wiesner, a very well known professor from MIT, whose work was in this area, to serve on the committee. (He became the president of MIT later on.) He accepted because there was nobody—really, there were very few people at Harvard working in the field. My advisor was a physicist. He didn’t know anything initially about this field, but he took me on as a student, which was nice. So I worked on my own, based on my experience at Sperry. At any rate, my advisor says, “Well, he [Wiesner] looked at your thesis, and said, ‘A thesis at MIT was finished last year, exactly in that area.’” I said, “I never saw this before.” He gives me a copy of the other thesis. I look through it. The first chapter was very similar to mine. Quantitatively he was doing the same kind of thing, but happily, mine was theoretical. He had built a radar system and did studies, which took me off the hook. I’d never met the guy. When I went over to MIT, it had nothing to do with that. I was looking for other ideas in other fields.

Generally, when you’re working on something, other people are working on the same thing. That’s the way life is. I mean, I worked on radar because I was at Sperry and Sperry was a radar company, among other things. Work had gone on at Rad Lab. The Rad Lab books had been published. The Bell Labs people were doing work in this area. It was all over the place. Marcum had done this work at Rand Corporation out in California. So everything was going on at that time. Then you have your own ideas. So, at any rate, both sequential detection and Neyman-Pearson detection theory as applied to radar were in my thesis. But by the time I got around to wanting to publish this work, somebody had already published papers on them. I got two nice papers when I could have had four nice papers. But, okay. How could I complain?

Teaching; information theory

Schwartz:

Anyway, I went back to Sperry for a year and did more work. Then I decided I wanted to go into the academic world. I started doing teaching at night. I taught at Adelphi on Long Island and I gave a course at City College in statistical communication theory, which was just beginning to gel. Now, while I was at Harvard as a graduate student, one of the pioneers in information theory was a man named Peter Elias, who had done his thesis on this at Harvard.

Hochfelder:

He was associated with Norbert Wiener for some reason?

Schwartz:

Yes. He became a professor at MIT. He had one of these special fellowships at Harvard, I think, to do anything you please, really. He may have gotten his Ph.D. at MIT. I don’t remember where it was, but his thesis was in information theory, coding, those things. I listened to some talks that he gave, and it was very exciting. So I got very interested in information theory as well. I never really worked in information theory. Coding was not my field of interest. I was much more fascinated by noise. Signals in noise, detection theory—that excited me.

I spent another year at Sperry, and then decided to develop some graduate courses at Adelphi University on Long Island and a graduate course at City College, trying to pull together some of these ideas on statistical communication theory. I enjoyed those courses. I enjoyed pulling them together and I loved teaching. When I was doing my master’s degree at Brooklyn Poly at night and working at Sperry, Friday nights I was also handling a physics laboratory at Cooper Union. (When you’re a kid, you can do anything, obviously!) At Cooper Union as a senior, I enjoyed helping students with their course in applied mathematics, stuff like that. I enjoyed teaching very much. That’s how I decided to become a teacher.

How I finally made the switch from Sperry to teaching is very interesting. On my plane coming back from Chicago from the National Electronics Conference, the year after I got my doctorate, was Ernst Weber, one of my teachers at Brooklyn Poly, who was a pioneer in the field of electrical engineering. He was very well known, and had published books, including one much later with the IEEE Press. He has now passed away, but was an outstanding giant in the field. He was later the president of Brooklyn Poly and was head of the department when I was there. One of my teachers, and a wonderful teacher. So, I see him on the plane. I go over to him and say, “Dr. Weber, do you happen to have any openings?” It turned out he had, so he offered me a job at Brooklyn Poly in September of 1952 as assistant professor. Nowadays, people complain about one course that they have to teach; I had 18 contact hours. I had two three-hour lectures and twelve hours of laboratory. I said to myself, “Gee, what do I do with all the free time that I’ve got?” Because I was used to working a forty-hour week at Sperry and maybe overtime as well.

One of my colleagues that joined the same time as I did, Athanasius Papoulis, went on to become very well-known in the field. He’s published many books. We even shared a desk. They had no room for us when we first joined. Dr. Weber was also president of the Polytechnic Research and Development Corporation, which was marketing some of the stuff done at the Microwave Research Institute at Poly, of which he was also the director at that time. Poly had done pioneering work in microwaves under his direction during World War II. He was never in his office, so he gave us this desk to share the first half a year or year, or something like that.

Papoulis had five courses to teach, three graduate and two undergraduate courses. Maybe he had ten to twelve contact hours. Young faculty only teach three contact hours now, but there’s more pressure on them now. We didn’t have that much pressure.

Brooklyn Poly had pioneered in microwaves, electromagnetic theory applied to radar, and applied to a lot of other problems. Weber had built up this large group, and they had set up a separate department called the ElectroPhysics Department.

I joined the Electrical Engineering Department. It turned out, for whatever reason, the Electrical Engineering Department was more of an undergraduate department at the time. We taught graduate courses, but there wasn’t that much research going on. The ElectroPhysics Department was strictly a research department teaching graduate courses. So my first courses that first year were standard electrical engineering undergraduate courses.

This is 1952, and I had this interest in communication theory. In 1952, one of the first books published in the area was by Davenport and Root called Random Signals, based on pioneering work at MIT and elsewhere, again, during World War II. It was on the detection of signals in noise and written at the graduate level. Davenport was from MIT, and I guess Root was also from MIT. Root went on to be a professor at Michigan. I, of course, looked through the book and liked it. I think I may have even taught a course at that time using that book. I don’t recall anymore; my mind is not clear about this.

Communication systems text

Schwartz:

I decided that it might be interesting to develop a modern undergraduate course in communications systems, because the only books available were books like Terman’s book on radio engineering, which was strictly radio. Maybe a little of telephony, but very little on this. When I joined Brooklyn Poly, I decided that I’d start focusing on developing that, and I started teaching a course in that area. I developed some notes for that and handed notes out to students. It took a number of years. I don’t remember the exact timing, but it was probably about the mid-’50s—’55, ‘56, something like that, during which I put the book together.

The head of the EE department at Brooklyn Poly at the time was an outstanding man named John Truxal, who was a pioneer in the control area. He had published an outstanding book called Automatic Feedback Control System Synthesis. He came out of MIT. He joined Brooklyn Poly as head of the department in ‘54, ‘55, something like that. He began to build a strong control group, which became one of the world’s outstanding groups on controls.

I put these notes together, and Truxal encouraged me to have it published as a book. I remember saying to him, “What title should I use?” He said to me, “Take a long title. Long titles sell well.” (Note that his best-selling book was called Automatic Feedback Control Systems Synthesis!) There were other books available related to my proposed book, but not quite the same. In particular, there was a book that had been written by Goldman of Syracuse University called Frequency Analysis, Modulation and Noise. A classic book at that time, very nice, but it really didn’t focus on statistical communication theory and the information theory aspects. It was more frequency analysis. Guillemin at MIT had published some nice books as well on frequency analysis, which was beginning to pervade the curriculum. However, there was very little on noise, very little on signals and noise and very little on information theory, especially on the undergraduate level. Graduate courses were developed using the Davenport and Root book.

My notes covered these topics on the undergraduate level, so I decided to proceed with book publication. I borrowed part of my title from Goldman’s book from Syracuse on frequency analysis and called my book Information Transmission, Modulation, and Noise. You can see the first edition here on my bookcase shelf. This whole period in the ‘50s had communication theory pervading the field of communication, at least in my mind. I come at it through radio, much less through telephony. At that time we had salesmen (they called them travelers) coming around from various publishing companies. A guy from McGraw Hill came in and said to me, “How about writing a book?” So, I said, “I have notes for a book on statistical communication theory for undergraduates.” He said, “No, no. You don’t want to do that. Davenport and Root’s book has just come out. Why don’t you try another area?” He tried to dissuade me. Luckily, he didn’t persuade me, because the book turned out to be a bestseller.

In 1959 this book was published. Independently, John Truxal, together with McGraw Hill, had set up a series called the Brooklyn Polytechnic Series, and this was published as part of the Brooklyn Polytechnic Series. It might have been the first or second in the series.

It was the first undergraduate textbook to cover modern communication systems from a statistical point of view. It talked about AM, FM and digital communications from a unified view, and brought in some of the statistical stuff that had appeared in other books, as well as spectrum analysis. It starts with frequency analysis, after an introductory chapter on information theory, presented in a very qualitative way. What do you mean by information? This was Shannon’s great idea, which he actually put into mathematical form. If I’m sending a signal, which is on continuously, it carries no information. So why send it? When you send something it should be unknown. So he quantified this.
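
The quantification he is referring to is Shannon's measure of information: a message that occurs with probability p carries log2(1/p) bits, so a signal that is always on (p = 1) carries none. A short illustration:

<pre>
# Shannon's self-information: a message of probability p carries log2(1/p) bits.
import math

def self_information_bits(p):
    return math.log2(1.0 / p)

print(self_information_bits(1.0))      # a certain, always-on signal: 0 bits
print(self_information_bits(0.5))      # a fair coin flip: 1 bit
print(self_information_bits(1.0 / 26)) # one of 26 equally likely letters: about 4.7 bits
</pre>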

Then I went on, in the book, to write about AM and FM signals. Then I went on to discuss digital communication systems, PCM, starting with pulse amplitude modulation. Then I said, “Okay. Now we can try to understand how these systems all function from a systems point of view. Because they all have to function in the presence of noise.” FM, AM—all of these get swamped by noise. Why is PCM better? Why is FM better than AM as far as noise is concerned? We all knew this. In fact, Armstrong pointed this out many years before and he did this in a very nice graphical way. Actually, in the ‘30s, people began to quantify this, so I tried to put all of this together in the book: the analog stuff that came out of pre-World War II, the digital stuff that came out of signal communication theory during World War II, and information theory during World War II. Students weren’t expected to know anything about probability. I hadn’t had it as a student, so I put in a chapter on statistical analysis. I applied that to FM, AM, and PCM. First, I study them without noise. I talk about statistics. Then I introduce simple analysis of noise on the undergraduate level using the work of Rice in 1944 that I’d learned a few years before. By now, fifteen years after Rice’s work had appeared, it was classic stuff, and I put it together for student use.

The book turned out to be very successful; many schools picked up on it. I was very pleased. A number of years back, the University of Wisconsin Department of Electrical Engineering had their hundredth anniversary celebration and they invited me as one of the guest lecturers there. I was very pleased to do this. They had published a book summarizing activities in their department for the last hundred years. They gave me a copy and other people copies. When they talk, in their book, about the ‘50s, they discuss introducing courses in communications. No suitable books were available, so in ‘59-’60, they began using my book, which is very nice to read. It was the first book in the field. It held the field for about six years or seven years. Then other people began publishing and it lost sales. The first five or six years it was the only book out in the field. I’m very pleased that it was a pioneering book; this made me feel very good.

I left Poly in ‘73. Len Shaw, one of my former colleagues there, was rummaging through the files a number of years back and found a mimeographed copy of my original notes. He was at Brooklyn Poly for many years and was one of the Deans there (he’s just retired). So he sent them to me with a comment saying, “Some of your ideas still hold up.” It was very nice. Very pleasant.

Brooklyn Poly Teaching and Research

Schwartz:

I must say in all honesty that when I joined Brooklyn Poly in ‘52, I focused on the undergraduate level. Then I began to teach graduate courses also. So I taught courses in a variety of areas, not just communications, because we were encouraged to do this kind of thing. I developed this book, then I began thinking about going back to research. I don’t have time for doing that sort of thing now. That’s why life was a lot easier in those days. Nowadays, the minute a young man or woman joins the university, he or she has got to start running. I guess I just take back what I said before. It’s much harder for them now. They can’t afford to teach eighteen contact hours and do research at the same time. But I managed somehow to begin to try to do this.

I remember going to Bell Laboratories, thinking maybe I can work with them or get some ideas from them. I was introduced to David Slepian, who retired from Bell Labs a long time ago. He was a pioneer in this whole field of mathematical representation of communication signals. At this time, the Army was doing work in RF and they encountered a lot of work with fading channels, fading signals. I thought that might be an interesting idea, so I began doing work on fading channels.

I had a number of very fine doctoral students at the time: Don Schilling, Ray Pickholtz, Bob Boorstyn, Ken Clarke, Don Hess, and a number of others, and they were doing their theses for me. Actually, Ken Clarke was the first one to do his thesis for me. There was a lot of interest in FM as well, so we began introducing a research program in the studies of FM and noise, and at the same time in fading signals and noise because the Army was pushing things like that. It was in the air—papers were being published. I put some students to work on that.

Rice at Bell Labs had continued to do his work. He was the pioneer in noise. He tried to apply some of his noise ideas to FM, and he developed the concept of click analysis. Of course, everybody knew why FM provided an improvement over AM in the presence of noise, above a threshold. Armstrong, as well as Crosby at RCA had pointed this out in the early ‘30s. But then, suddenly, below a threshold, FM goes to pot. Why? Why does the noise suddenly get larger? Rice through the click analysis was able to put this on a firm footing and so FM studies were in the air too.

My first doctoral student was Ken Clarke. He finished up in 1959. He had been an instructor. All these guys were appointed instructors on the staff, which was one way of getting teaching out of them and getting them a little extra money. Don Schilling and Don Hess, and, a number of years later, Pickholtz and Boorstyn, were instructors. In 1961, Truxal moved on to become Dean and I became the EE Department Head at Brooklyn Poly. I really started pushing telecommunications. These guys were there; they were very good guys and we had them all appointed assistant professors. We set up a group that eventually had seven of us working in telecommunications. I must have begun the group in 1959, I guess, but when I became the department head I began to push it.

At that time, Jack Wolf, an outstanding graduate of Princeton, joined us. I heard about him. He had been in the Air Force and then joined NYU. I appointed him to the staff also, and with him we had seven faculty members in the telecommunications area. Poly had a big engineering school, and the EE department had about forty-five people in it at that time. We had a group of about seven faculty, including myself, working in the field. It was a glorious time. Ken Clarke and Don Hess were experimentalists—they tested. When we were talking about fading channels and fading signals, they developed an underwater fading channel simulator. If you send signals through the water, water also has some of the same effect on them. They built a water tank for this purpose. Sid Deutsch, another faculty member in the EE department, had done work on his own in television. He was doing work on low bit-rate television. He worked closely with our group as well. We began to publish in FM, fading signals, and noise. Jack Wolf was a specialist in information theory and coding, and was doing work in coding. We covered the gamut from information theory and coding to statistical communications theory to communication systems like FM, both in teaching and in research that we were carrying on at the time. I think it was probably one of the largest groups in the country, seven faculty members at that time, plus a sizable graduate enrollment. It was a wonderful time.

I say the department started off as an undergraduate department when I joined it. When Truxal joined the department, he brought in graduate research and teaching in the control area, but we built a large group in communications as well, following up on that. So, it was a real wonderful experience.

That was the tenor of the times at Brooklyn Poly when I was there. Unfortunately, Brooklyn Poly fell into financial problems and went through a difficult time. I was the Department Head until 1965. I left on sabbatical to go to France, came back a year later and continued my work in this area. That’s when, in 1966-68, Ray Pickholtz and Bob Boorstyn and two other graduate students finished up, followed by a flock of others. I had a lot of students from Bell Laboratories come to work with me in those days as well. I put them to work on problems in digital technology, communication theory, and things of that type. In fact, Poly set up a special program jointly with Bell Labs to have them come spend a year with us full time working on their doctoral thesis and go back. It was a wonderful time.

Communication networks and computers

Schwartz:

This took us to about the late ‘60s, I guess. About 1970 or so, I began to sense that there was a change taking place. We focused on what we now call the physical layer, and people were now beginning to talk about communication networks—machines talking to one another.

I’ll go back a little bit historically because this has emerged now, using computers and communications, as the Information Age. Back in the mid-’60s, the way I look at it, GE and MIT and other places began to experiment with time-sharing computers. Before that time, everybody who worked in a large establishment or university had access to a large machine. You would bring your cards in and have them loaded in and a couple of hours later you would go pick up your cards and maybe some printed-out papers. It was exciting, but young people nowadays have no idea how difficult this was. They began experimenting with the idea of time-sharing computers because there had been some studies done that indicated that the machine was not operating most of the time, or you could do multiple operations simultaneously. MIT experimented with this, and GE joined them on this, and other companies too. Once you start time-sharing, you start being concerned about communicating with that machine. A few years or so ago, I wrote a paper which discusses the history of some of this. Would you like to have a copy of that?

Hochfelder:

Sure. That would be great.

Schwartz:

I’ll tell you how to access it. You can get it at the Journalism School. I wrote the paper for our Journalism School. It’s called "Telecommunications, Past, Present, and Future," written specifically for the non-Engineer. It has an introductory chapter which talks about the history of computing and communications in that period. It is very simple. IBM and other companies, for example, had in the ‘50s, developed communication systems for airlines—airline agent terminals connected to a central computer. IBM was selling the concentrators and the terminals.

Hochfelder:

Is that SABRE?

Schwartz:

Yes. SABRE came out of that. SABRE was the first system. IBM pioneered in that. I forget the details, but I have it in this paper of mine. I went back and checked through the old literature on that. There had been work in that era on terminals communicating with a central computer. For the military, Bell Labs had done some work on some systems, even radar. You had terminals away from the central system that you wanted to communicate with the central system, back and forth. The military and the commercial world had already begun to develop the concept of terminals communicating to a computer somewhere over lines.

Hochfelder:

So, for the military that would be the SAGE system.

Schwartz:

SAGE, yes. I mention SAGE in my paper too. So, all of these things were in the air.

<flashmp3>360 - schwartz - clip 2.mp3</flashmp3>

Then in the late ‘60s, it became apparent to IBM and other organizations that you are asking these large computers to do a lot of communication tasks. Once you start to do more and more of this, you’re tasking the computers with this, and in a sense you’re undoing what you started doing. You want them to do more computational work, and now you’re doing this other work. So they decided to off-load the communication tasks to special purpose communication systems they built called communication concentrators—computers which just do communications work. The concept was to have terminals connected to these; they concentrate the activity and they send them to the same central computer. In a way, this is the same thing the SABRE system does. The SABRE system already operates under that premise: connect terminals through a concentrator. The concentrator then interrogates the central system and sends messages back. So that was already there many years before that idea, but now they decided to do it more generally. Once you start doing this, you have to develop what are called protocols—ways of having two machines that are a distance from one another interrogate one another and send messages back to one another and understand one another. This has been done for a long time now. Time-sharing, as well as airline reservation systems and military communication systems, among others, helped develop this.

ARPANET; commercial computer utilities

Schwartz:

At the same time (the mid-’60s now), there was pioneering work going on at the then Advanced Research Project Agency (ARPA) of the Department of Defense. Work has been published on this very recently. In fact, in this paper I mentioned, I sort of explore the ARPA, the IBM company thing; I explore the predecessors, the work of the airline reservation systems. The primary work at ARPA at that time, in the mid-’60s, looked to see if people can communicate with computers in some better way.

A man named Larry Roberts joined ARPA in the late ‘60s. He had the concept of developing what he called a computer utility. Since people were beginning to do all kinds of time-sharing activity at the time, why should every organization, every university, every commercial organization, have replicas of the software? Why not develop something special? For example, the University of Utah had a specialty in graphics capability. UCLA was doing its own work. Why couldn’t people all over the country access those universities for their software, rather than having to duplicate it in your own place? It made sense. So he had the idea of building a computer utility, like an electric utility, distributed, and ARPA began to fund the ARPA Network project. I think the first one went online in 1969 with four nodes or computers interconnected. So, you visualize all these things are coming together.

Now, in order for ARPA to operate, they had to have communication protocols to handle the messages back and forth. They had the concept of a router—a message processor that handles signals and routes them appropriately in some way. ARPA began to develop routing algorithms that came out of this.

IBM, in the same time period, 1969, had begun work on something they called Systems Network Architecture, SNA, which is probably the first commercial network architecture designed to handle messages between computer systems. The ARPA network had a distributed topology because you could be anywhere, and their interface message processors (IMPs) were scattered all over the country and you fed into them and they connected with one another in a distributed fashion. IBM wanted people to access their main hosts, their large machines, so the concept was to have terminals connected to concentrators connected to the main host, passing messages back and forth, and you wanted an architecture for this.

In 1969, we also saw coming out the first commercial computer utility. A company called Tymshare was set up, coming from the word time-sharing. Their idea was, if you don’t have access to your own computer, you have a terminal, which you use to access its computers. Tymshare had a bank of computers scattered all over the country, large computer centers, where they would process your information. You would pay for this, so your company didn’t have to own a computer of its own. They developed a network called Tymnet, which also began to operate in the late ‘60s, early ‘70s. Everything was coming together now: ARPA’s pioneering work on the computer utility, mostly for universities communicating in a distributed fashion, Tymnet, IBM’s SNA. GE set up a network called GE Information Services Network, which did the same thing that Tymnet did. It offered services. They began to go abroad also and offered links to Europe and other places. Tymshare had their computers scattered all over the country because they felt that it was more reliable that way. GE had its computers all in the one center in Rockville, Maryland. It felt the system was more reliable that way. In fact, I attended a conference later on, where two guys, one from each company, were debating which one was more reliable. One is distributed, and if one system fails, you still have others. GE said if they are all together, we can make them more secure, so who knows? But anyway, very interesting.

Networks research, teaching, and publication

Schwartz:

This thing started to happen now, and I began to feel that networking was an exciting area to work in now. I began to develop some activity at Brooklyn Poly in this area. The Poly Microwave Research Institute (MRI) annually had a large workshop, covering various topics, not just microwaves. We ran one on computer communications and the integration of computers with communications, which is part of the same thing.

I remember talking to Paul Green, at the time at IBM, who had come to IBM from Lincoln Laboratories. He was a real pioneer in this field. He was at IBM Research and had done some pioneering research work in SNA. He said to me, “If you really want to learn about the field, why don’t you go out and find out what some of these companies are doing?” I think he might have suggested this for a journal. I said sure, and got together two of my colleagues, Bob Boorstyn and Ray Pickholtz at Brooklyn Poly, and the three of us picked four networks that were ongoing in this country. We went and talked to the people and learned what they were doing. It was a new field. We wrote a nice paper. That started us going. Once you learn what companies are doing, it gives you some exciting problems to work on.

Hochfelder:

It would be a good paper to have.

Schwartz:

Yes. I have it in my files here. I’ve got a copy of it. One of the IEEE journals, I forget which one, published it. So that was a tutorial paper. Nothing new, but we three interviewed people at different companies. One was the NASDAQ system, for example, that they had set up in those days. One might have been Tymnet too; I don’t remember anymore.

I began to develop a program on this subject and began to teach a course at Brooklyn Poly in the subject of networking. I had some notes. I left Brooklyn Poly in 1973, just when all of this was coming to a head. Poly had been having financial problems, as I pointed out, and it was sort of sad in a way because some of the leading people had left. Jack Wolf had left by that time. Ray Pickholtz decided to leave and go to George Washington, and two of the other key people decided to go into industry by themselves.

Hochfelder:

Is that when Don Schilling left Poly?

Schwartz:

No. Don Schilling had gone to City College. He may have gone before this time. But Don Hess and Ken Clarke organized a company of their own. They were experimentalists. They still have their company functioning. I think Bob Boorstyn and I might have been the only ones left at this time.

Columbia asked me to spend a year with them as a visiting professor, which I did. I gave a couple of courses in computer networking, because that was the thing I was really pushing at this time. They asked me to stay on and I stayed on at Columbia. I came to Columbia officially full time in 1974, and, again, continued developing a program in networks. Out of this came some notes and the first textbook in the field, called Computer Communication Network Design and Analysis, published in 1977. A very nice book was published much earlier by Len Kleinrock, who was one of the pioneers in the field. That was his doctoral thesis, which he did in 1961, I guess at MIT. Even years later, it is a classic book with wonderful stuff in it. He had it reprinted maybe by Dover Press; I forget who did it, and I’ve got a copy somewhere. That was really the first book in the field. There are other books that have been published too, but mine was a textbook with problems and exercises, stuff like that, for students to use based on the course we developed first at Poly and then at Columbia. I have continued in that field ever since. I’ve done work personally with students at Bell Labs, at Brooklyn Poly, at Columbia, and at other places, first at Poly and then at Columbia, in what is called congestion control.

Performance analysis

Schwartz:

Now, preferring work in analytical areas, I gravitated more towards performance analysis. How do you see if these systems are performing properly? By doing analysis. It turns out you have to learn queuing theory and things of that type. I tend to be oriented more in that direction, in more quantitative approaches. Bob Kahn, who was one of the pioneers of ARPAnet, one of the giants in the field, had published a paper on flow control for the ARPAnet. He was an electrical engineer by training, out of Princeton. He had worked in communication theory originally too, and had moved into this field. But he has his own organization now. He published his paper on congestion and flow control for the ARPAnet and I read it and I said, “Gee, maybe we could quantify these; maybe come up with some numbers.” I put a student named Mike Pennotti to work on this, a guy from Bell Labs who knew nothing about the field. This guy is sharp, so he picked it up and finished his thesis in a year’s time. He had been working in, I think, underwater acoustics at Bell Labs and Navy work, or something like that, and switched fields completely, and boom. Sharp guy and good work. So we worked together on this, and he came up with what I call a pioneering paper on a virtual connection, a connection from point to point along a network which consists of a source terminal connected through routing nodes or routing switches to a destination node with buffering and queuing. That’s the way that networks operate. You store and transmit information in packets. We were able to model this. We came up with some concepts. We compared two different strategies. Do you want control over the virtual connection end-to-end, or do you want control at each node separately? We found that, by proper tweaking, both gave us the same performance.

But then, which one is easier to implement? It was the first such quantitative study and it has since become a paper that has been cited a lot of times, because it gave rise to a lot of other work in the field of congestion control and performance analysis. Again, the way engineering always works, somebody invents an idea, you develop the software (in the old days, it wasn’t the software, but nowadays the software), you develop the hardware for it, and then somebody comes along and thinks, maybe I can study and analyze it and get improvements on it. That’s the way it usually works.
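A rough way to see how a comparison like this can be quantified, sketched here as a generic model rather than the specific formulation of that paper: picture the virtual connection as a tandem of queuing nodes and let a window of size $W$ cap the number of packets allowed in flight between source and destination. Little's law then ties the connection's throughput $\lambda$ to its end-to-end delay $T$:

$$\lambda = \frac{W}{T}.$$

Enlarging the window raises throughput only until the slowest node on the path saturates; beyond that, extra packets simply queue up and inflate $T$. Hop-by-hop control imposes an analogous cap at each node, and choosing the per-node limits so that roughly the same number of packets is admitted along the path gives comparable throughput and delay, which is the sense in which the two strategies can be tuned to equal performance.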

So, this paper was published in ‘75, and it was very good early work in the area. Pennotti’s work was from Poly, but I had moved to Columbia, so he worked with me there. He got his doctoral degree at Poly, but he came to see me at Columbia at that time. There were other students who I had had at Poly whom I carried with me. They got their degrees at Poly but they worked with me at Columbia.

Routing protocols; communication links

Schwartz:

I began to work at that time with a colleague at Columbia here, Tom Stern, on routing protocols. Bob Gallager at MIT was doing some very nice work on routing protocols. Again, the ARPAnet had focused on that work. They had a routing procedure as part of the ARPAnet, which was pioneering. They had a lot of problems with it, because they tried to have it react too quickly and it was unstable, as it turned out. They began to change their routing protocol. The question arose, what are good routing protocols? Bob Gallager worked on this problem. Can you distribute the routing algorithms in some way? There were many routing protocols developed years before for work in transportation networks. How do you route trucks and things like that? Some of those ideas were picked up on in this case. So, Gallager did some pioneering work in distributed routing controls. Tom Stern did some fine, related work, which also stimulated work in the area. A lot of work was going on in this area. Harry Rudin, an American who went to Switzerland and is still living there, who had been active for years, was working in this field at IBM Zurich Research Laboratory. He’s now retired, I understand. He also did some pioneering work in routing.

We had now left the physical layer behind and were moving into what is now called the network layer. IBM had done work on Synchronous Data Link Control (SDLC). When the world’s standards bodies in the ‘70s began to pick up on this, they changed it a little bit; they called it High-Level Data Link Control (HDLC), but it is based on IBM’s SDLC. So, that’s the second layer, data link control. People began to work on this, and papers began to be published on that layer. Now, we are moving up to what we now call the network, or third, layer. The network layer involves congestion control and routing. If you go higher to the fourth layer, now called the transport layer, that also involves congestion control. TCP came along about that time. Later on, people began to develop a sophisticated control, called flow control, at that layer. We do congestion control at the network layer, we do flow control at the transport layer. But they are all very related. How do you keep receiving systems from being overrun by packets arriving? Now, you can do it end-to-end; you can do it hop-by-hop at the network layer. You can do it end-to-end on the transport protocol, but the ideas are very similar. Sometimes the layers get mixed up.

By this time ARPAnet was developing into a full-fledged network and giving rise to a lot of work going on all over the country in these areas, so we were no longer among the few working on networking. Everyone was beginning to work in this area. I just mentioned that Gallager did pioneering work. Kleinrock, from the beginning, at UCLA, did pioneering work on the ARPAnet. People from other universities did the same thing as well.

I personally focused on performance analysis. Kleinrock did too, by the way. He’s a broad guy. He does systems and software work, and he’s published classic books on queuing theory, giving courses in that regularly. So, he does work in everything.

We’re now in a real hot period in the network area. The work that I’m doing has become fully focused on networks. My book came out in 1977 as the first textbook in the field, although other books have been published on this as well. What I do in this book is based on the literature, as well as some work that we had been doing. The examples of routing and flow control in networks that I give in this book are based on those of GE Information Services Network, Tymnet, and the ARPA Network from a qualitative point of view to try to understand what everything is all about. A big topic in those days (as now) had to do with how systems are connected together with links of various kinds. So I also have in the book work on how you assign capacities to communication links, things like that. A lot of it was based on work that Kleinrock had done in his original thesis studying end-to-end delay. Since you now have queuing delay, this is very different from the telephone network. This is a packet-switched network with routers—you have buffering, so one of the performance objectives is to reduce the queuing delay as much as possible. How do you route according to queuing delay? He had done work in that, and so I discuss, how do you assign capacity to reduce delay? I have a chapter there on queuing theory, because people hadn’t done this before. I have a chapter applying it to store-and-forward buffering. I have a chapter on routing and flow control. All these things were in the air in those days. Other books have been published since, of course.
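To give the flavor of that capacity-assignment problem, here is the standard M/M/1-style formulation of the kind Kleinrock used, sketched from the general literature rather than quoted from any of the books mentioned. If link $i$ carries packet traffic at rate $\lambda_i$, packets have mean length $1/\mu$, and the link is assigned capacity $C_i$, the average delay seen by a packet, weighted over all the traffic, is

$$T = \sum_i \frac{\lambda_i}{\gamma}\,\frac{1}{\mu C_i - \lambda_i}, \qquad \gamma = \sum_i \lambda_i .$$

Minimizing $T$ subject to a fixed total capacity $\sum_i C_i = C$ leads to the well-known square-root assignment

$$C_i = \frac{\lambda_i}{\mu} + \left(C - \sum_j \frac{\lambda_j}{\mu}\right)\frac{\sqrt{\lambda_i}}{\sum_j \sqrt{\lambda_j}},$$

in which each link first gets just enough capacity to carry its own load and the excess is divided in proportion to the square roots of the link flows.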

Modems and data networks

Schwartz:

I have a much bigger textbook covering much more material that came out in 1987 called Telecommunications Networks: Protocols, Modeling, and Analysis published by Addison Wesley. That also uses a quantitative approach, but I do treat protocols and things like that. Now, what was happening in the world in those days? Well, networking is becoming really significant to the world. It is known to everybody now. We had telephone networks that covered voice messages only. Suddenly, you find data becoming important now. People were shipping data over modems. Modems were being developed in those days because the telephone people realized early on that you want to ship data.

Hochfelder:

Bob Lucky’s work?

Schwartz:

Yes. Bob Lucky had a group at Bell Laboratories. Actually, his group came out of an earlier group, started by Bill Bennett, who passed away a long time ago. He had some of the best people working on modems. But other people were doing this work too. A guy named Dave Forney set up a company called Codex, and they developed a data modem. So, other people were doing work too. Bob Lucky’s group did pioneering work. Steve Weinstein, as well as others, worked for him. They began to develop modems early on for handling data over telephone networks. As a matter of fact, all of these networks we talk about use telephone facilities. How does someone get into those telephone networks in some way? This is for terminal, low bit rate modems, things of that type. So that was the modem work that was going on. The CCITT in the early ‘70s was aware, not only of the modem work going on…

Standards

Hochfelder:

CCITT?

Schwartz:

Yes. There is something called the International Telecommunications Union, ITU, housed in Geneva. That’s a standards making body for telecommunications administrations all over the world. Only administrations can belong to this. In the United States, it’s the State Department and the FCC that jointly work together on this. This has now changed. The ITU had, at the time, two separate standards bodies, one called CCITT, the other called CCIR. They are French acronyms.

Hochfelder:

One is for telegraphy and telephone, and the other one is for?

Schwartz:

<flashmp3>360 - schwartz - clip 3.mp3</flashmp3>

Comité Consultatif International Télégraphique et Téléphonique is the CCITT, the international standards organization, and the other one, the CCIR, is called Comité Consultatif International des Radiocommunications. One is for radio standards and one is for telephone and telegraph standards. The CCITT has standardized a lot of modem work, things like that. But let’s focus now on the networking area that I am more familiar with.

They were well aware that data networks were now becoming significant. We already had IBM; we had Tymnet developed; GE Information Services Network. The French had a network set up called the Cyclades network. ARPAnet was here. Data networks were developing worldwide. The telephone industry was very aware of this, and they were very aware of networking. So the idea developed to try to standardize some data networks in some way. They set groups to work. I wasn’t involved, so I don’t know the details of it. But in 1976 they came out with a different kind of standard, an interface standard, called X.25.

DEC had developed DECnet, Burroughs had developed its Burroughs network architecture; all of these architectures in the United States were proprietary—they were developed for their own equipment only, although the companies weren’t specifically in the data networking area. The CCITT was talking about developing some kind of open networking, but across interfaces. Another organization, the International Standards Organization (ISO), was beginning to develop activities as well too, so the two were going on simultaneously now.

Let’s focus first on CCITT. They came out with a standard called X.25. Now remember, they have working with them mostly telephone administrations, so they are catering mostly worldwide to the telephone companies which, except for the United States, were all government organizations in those days. AT&T was quasi-governmental. It was a monopolistic organization in those days. They had the standard called X.25, which enabled users to interconnect to any network. Networks would have their own protocols, but by using this interface you could get into that protocol. On your side, you could see this protocol. That network on this side sees the same protocol, so you can get into it and then handle it any way you wish, and on the other side, it gets back out again. It is strictly an interface protocol; it’s not an end-to-end protocol. (It has subsequently also been adopted in some places as an end-to-end protocol.) There was an interface defined between a terminal on the user’s side and the network side. The same on the other side of the network: between the network side and the user side. What happens inside the network they weren’t concerned with. They weren’t going to tell people how to handle their networks. X.25 is a three-layered protocol. People knew about layers now: a physical layer; a data link layer, which is HDLC; and a third layer, normally a networking layer, but, in the X.25 case, an interface layer, handling X.25 packets going across it. That came out of the CCITT activity. There was a lot of work published.

One of the first administrations to latch onto this was that of Canada, with Bell Northern Research developing products. In fact, I attended a communications meeting in 1976 and they presented some of their work at that time too. So they developed some of their products. The Canadians were one of the first in this area. Contemporaneous with this, the computer manufacturers and the public using computers began to feel the need for interconnecting computers in a non-proprietary sense. I just mentioned before IBM, DEC, and Burroughs in the United States. The same for Japanese companies. Each developed their own protocols for their own equipment.

The international standards organization, ISO, is also housed in Geneva. It is made up of companies rather than administrations. They felt the need to do this, so they set up some bodies and they began to develop pioneering activity in standardizing protocol architectures. They developed the idea of a seven-layered computer communications architecture: a physical layer, data link layer, network layer, transport layer, session layer, all the way up to the application layer. Why seven? Well, you don’t want too many, you don’t want too few, so they came up with that. They began a massive effort in each layer to really standardize this. Independently in the United States, the ARPAnet community had developed protocols, so they had their own.

Hochfelder:

The TCP?

Schwartz:

Yes. So they had developed a network layer called IP, actually Internet Protocol, interconnecting different networks. It told you how to route packets using the ARPA routing algorithms. TCP, Transmission Control Protocol, was the layer above, a transport protocol architecture. They had a data link layer too, et cetera. They had, of course, layers on top of that for handling various kinds of transfers, file transfers, e-mail, things like that.

Kleinrock, in one of his early papers, summarizing traffic usage early on for the ARPAnet, showed the most amazing result. Despite the initial ARPAnet idea of a computer utility, with the network used for accessing other Hosts’ software, it turned out that people were using the network to talk to their own friends! The study showed the first elementary use of email, where you’re talking to the guys in the office next to you. So most of the activity was local activity. People never know what things are going to be used for. That was what happened with ARPAnet of course. It never became a computer utility. It developed for other communications purposes.

ISO began to develop these standards and they came up with a seven-layered model and the ARPAnet community developed its own layered architecture. I innocently got involved in this. A standardization conflict had developed in the United States between the National Bureau of Standards, which is now NIST, and the Department of Defense. NBS was concerned with all commercial scientific and technological activity in the United States, in helping the commercial sector as part of the Department of Commerce. They had a Computer Laboratory Division. I happened to be a member of a Visiting Committee to that, to look at the Division activities once a year. They had a large computer communications activity and they were studying how to determine whether the ISO protocols, in their commercialization, were “correct” in their operation or not. They were heavily involved in this. They were concerned that American companies not lose out worldwide if the world were to adopt the international computer communication standards being developed by ISO. The Europeans were moving in the direction of ISO. In the United States, there was a push on by the Department of Defense to standardize the TCP suite, the ARPAnet suite, because they already had it going. Why bring in new protocols? So there was sort of a conflict between the Department of Defense and the Commerce Department, the National Bureau of Standards. They went to the National Academy of Sciences/National Academy of Engineering, which runs the National Research Council, NRC, and asked them to adjudicate. NRC set up an expert panel made up of people from industry and universities to try to determine which is the better way to go. Unfortunately (in retrospect!), I was asked to join that panel. On the panel were people from other universities: Dave Farber, a well-known guy in communications; Larry Landweber from Wisconsin; experts from IBM, from DEC, from Burroughs, and others. It was a broad-sweeping panel. We met for a long period of time in Washington with long meetings where we had people come in. We had Vint Cerf, who was with ARPA, speaking on behalf of TCP and that suite. We had people speaking on the other side. Interestingly enough, Dave Farber and Larry Landweber kept pushing for TCP. The rest of us, myself included, said no, wait awhile. TCP is just United States; we have to go worldwide. The ISO suite, seven-layer suite, is much newer, it has the new features in it. It is based on TCP to some extent. It doesn’t have the segmentation TCP has and in some areas it is much better. The guys from industry were pushing for it. So why not go for that? We finally over-rode their objections and they reluctantly agreed. We pushed for the ISO suite.

We issued this report and the Department of Defense said okay, they’re going to ask all of their contractors to move to the International Standards Organization’s suite and move away from TCP as soon as practical. Well, the rest is history. Despite this, TCP took over and the ISO suite never came in, and I regret my decision to this day. I tell my students any time I give a talk, don’t ask me to predict what is going to happen in this world anymore. That was a real goof on my part. I was wrong. You never know.

Hochfelder:

The computer keyboard, the typewriter keyboard, is almost the same sort of thing.

Schwartz:

Is that right?

IEEE, Communications Society

Hochfelder:

We can talk about that off tape. Please talk about your involvement with the IEEE and with the Communications Society, and also your involvement here at Columbia at the Center for Telecommunications Research.

Schwartz:

<flashmp3>360 - schwartz - clip 4.mp3</flashmp3>

Yes. The IEEE one I’ll make brief. As a young fellow in the early ‘50s, I got very involved in information theory and communication theory. I became active in the then Information Theory Group before it was called a Society of the IEEE and attended a lot of meetings. It is hard to recollect the details of it. I don’t follow the Information Theory Society activities anymore. But somebody told me a year or two ago that they had read the newsletter and somebody had mentioned my name in the newsletter because he had found it in studying the old society records of the ‘50s. I was active then in the Information Theory Group and served as Chairman in about 1964 or 1965.

I was simultaneously active in the Communication Technology Group, or whatever it was called before the IEEE Communications Society. That was when I was head of the department at Brooklyn Poly. I was head of the department from 1961 to 1965 and living on Long Island. I became chair of the Long Island section of the Communication Technology Group. Then I got active in the overall Communications Group itself. I was one of the original people involved with the change from Group to Society. There was a fellow named Dick Kirby, Richard Kirby, who I guess was the first president of the new Communications Society which came out of the old Communication Technology Group. He invited me among others to join the committee to come up with the first constitution. So I was on that committee.

I became active. I was on the Board of Governors for years. I don’t remember the dates anymore. I got elected Vice President. Then in 1978 I was elected Director of the IEEE representing the Division of which the Communication Society was then a part. I was there for two years as part of that.

I am very proud of one incident while IEEE Director. People have forgotten this by now, but I’m the guy who proposed the idea of President-Elect. It’s not mentioned anywhere. I’m not sure that anybody would recognize it. But when I first became a Director, it became apparent to me that it was difficult to be a President for one year. You come and go and that’s it. You need some training. I knew other organizations had that, like the AAAS of which I was a member. So I proposed that at one of our meetings. We instituted the idea of President-Elect and that was accepted. So now a guy comes in, is trained for a year as President-Elect, and then is able to go on as President the year after. I think I might have even proposed having a President for two years.

Hochfelder:

Isn’t there also like a Past President?

Schwartz:

Yes. You stay on after President as Past President. So really they are committed for three years, which is very difficult at that level. Anyway, I was proud of that.

I kept up, obviously, my interest in the Communications Society. First I was active in the Communication Theory Committee for a long time, then I helped organize a new Technical Committee on Computer Communications, which has now grown considerably. That was the committee devoted to networking and things like that. Some of my students came in. Ray Pickholtz, my former student, later became active in that committee as well.

In 1984-’85, I was elected president of ComSoc. I might have been vice president before that. I was president for two years. One of our meetings was held abroad in Amsterdam. I think it might have been one of the first meetings we held abroad and that worked out very nicely. We had a couple of anecdotes. In Amsterdam, they rolled out the red carpet—they opened up the city for us. We had a dinner engagement at the municipal hall, whatever they call it. The Queen came down to greet us. My wife tells a funny story. The Queen came into this room where the ComSoc governing group was gathered to meet with her, and somebody had said to us, “Number one, be careful how you greet her—she is a Queen, remember. Number two, just these people here, nobody else.” So I’m very different, I guess. She walked in, and I shook her hand rather than bowing or something like that. In America, we don’t do things like that, right? She’s a Queen—so what? Secondly, I beckoned for my wife, “Come on. Come meet the Queen.” I wasn’t supposed to do that by protocol either. So what! Why can’t my wife meet the Queen? Anyway, it was very nice.

The nicest thing was that the Chair of our Awards Committee at that time was a man named Ralph Schwarz, who was also at Columbia. He has since retired. He’s older than I am and a very nice guy. He was to give awards at the awards luncheon that we run annually. He and his wife were both refugees from Hitler’s Germany. They fled to Holland and he spent a year or two in Holland. They didn’t meet there; they met back in the United States, by coincidence. Ralph still retained some of his Dutch, so when he got up to give the awards, he started speaking Dutch. The Dutch hosts were amazed. This was wonderful. Imagine that: ComSoc comes to Holland, and this guy is actually speaking Dutch. I think people respect you for that. So thank God for Ralph. It was very nice.

It was a good two years. Then I stayed on, of course, as past president, as you normally do, to run the Nominations Committee. The last couple of years I have cut back and I haven’t been involved as much because I feel it is time for young people to take over. But I was active during that period of time in these committees.

In other IEEE things, I’ve always been involved in various IEEE award groups. I won the IEEE Education Medal in 1983. So, as always happens, I was invited to become a member of the Education Medal Award Committee to select new candidates. I did that for a number of years. The same for the Kobayashi award. One of our own past presidents and a good friend, Eric Sumner, who passed away some years ago and had been an executive at Bell Labs, has an award named for him; I’m on the committee for that award. So I’ve been on a number of IEEE awards committees.

Hochfelder:

What impact do you think that the IEEE Communications Society has had on advancing the state of the art?

Schwartz:

It’s hard to judge. Obviously, like every IEEE society, it enhances education of its members. It has its publications. It has its conferences. So I think through the conferences and the publications, this is where we mostly advance the state of the art. We could argue that it is the engineers at companies like Bell Labs and IBM who advance the state of the art. But they are the guys who also do attend meetings and conferences and publish papers. Other companies learn from one another that way. I think the educational mission of the IEEE is reflected in ComSoc.

Unfortunately, the whole business of competition coming in has made things a lot more difficult now. I used to love to go to the ComSoc meetings, like the International Conference on Communications, ICC, and Globecom, the Global Communications Conference, and listen to papers from people from industry. They would talk about new systems, whether it was ITT talking about a new switching system or Bell Labs engineers talking about a new system. I would learn a lot from that. As an academic guy, that is important to me. Otherwise, academics talk to one another. Unfortunately, now, with competition you get very little of this. They give you very little information. I can’t blame them, but it is very hard now. You find more academic papers now. I used to like the other papers from industry where you would learn from these guys. It’s important. I think ComSoc as the leading communications society in the world has contributed to that in a broader sense.

I see more and more the need for bringing societies together. The communications field now spans many organizations. Steve Weinstein, who was president of ComSoc a couple of years ago, has done a lot of this. He tried to bring different societies together, and was very successful. When I was president of ComSoc I tried bringing the Communication and Computer Societies together in the area of computer communications. It was difficult because, once you are part of an organization, you don’t want anybody intruding on your turf. You feel you have the right to go ahead on your own. The computer communications area belongs to both fields, computers and communications. I remember trying to get the Computer Society to join with us, meeting with their president. It was a difficult situation because we were pushing ahead in computer communications and it might be better for the two Societies to work together. It is difficult. Steve and other people have managed to do that; I did not. Maybe I laid the groundwork; I really don’t know. But I couldn’t accomplish that much. We just went ahead with our own journals.

Now, for example, the leading journal on networking is the IEEE/ACM Transactions on Networking. That was jointly set up, through Steve’s efforts, by the IEEE Computer Society, the IEEE Communications Society, and the ACM SIGCOMM. (I’m proud that the first Editor-in-Chief was my former student Jim Kurose, from Columbia, now a faculty member at UMass. He was the first editor of that journal for three or four years.) Steven Weinstein has done a lot of work in trying to bring societies and groups together—ACM, and the IEEE Computer and Communications Societies—so we do a lot more work together. We have the leading conference on computer communications, INFOCOM, and that’s a joint Computer Society and Communications Society conference. We’ve had that for a number of years now. I’m very pleased. I’m still active in that conference. I’m currently on its Program Committee. I’ve got fifteen papers to review for the conference in the next two weeks, unfortunately.

That’s my activity, in a nutshell. Anything else you want to ask about that?

Hochfelder:

No. I think that covers it.

Schwartz:

Details appear in the IEEE files. I’m a Life Fellow now. I was a Fellow before that. So I’m proud of the IEEE. A great organization. You want to talk about Columbia?

Center for Telecommunications Research, Columbia

Hochfelder:

Yes. If you can talk about your involvement with CTR.

Schwartz:

What happened was that I came to Columbia in, say ‘73 unofficially; officially in ‘74. I started teaching courses in communications and computer communications. I set up the first graduate course in computer communications. I set up a course in signal processing. In fact, I published a book on that jointly with a former colleague at Brooklyn Poly. Signal processing came out of work at Brooklyn Poly; I set up a course in that. I taught a variety of courses.

I started working closely with my colleague Tom Stern here at Columbia, who is a wonderful person. He just retired, too. He’d been an old systems and control guy. He published a book years ago, on nonlinear networks, and he moved into the communications area, communication networking in particular. The two of us began to work together. We published a joint paper on routing in networks. He and I got together and we started organizing a little computer communications research group besides teaching courses. We developed the area. We got some industrial funding. Places like GTE and other companies gave us grants. I have to give the then Dean, Bob Gross, a lot of credit. He said, “Why don’t you guys organize yourselves as a Center and try to get more funds? Go out and get more funds from companies.” So we did. We set up a small center. I didn’t want to be the director, so I said, “Tom, you be the director.” So he was the director of this small center.

I was on sabbatical at IBM Research in 1980, and I still remember the day we hired a young man named Aurel Lazar. He’d gotten his degree at Princeton in point processes, a very theoretical subject. He joined us and we said to him, “Look Aurel, we’re doing work in networks now. So, how about doing that?” He switched over.

During my year at IBM Research in 1980, I worked on the IBM networking architecture, SNA, and other topics such as routing protocols, while meeting some of the people there. I did some work on congestion control. I took the SNA congestion control system and analyzed it. I published a paper that showed how one could generalize other kinds of congestion control techniques. We began doing this work. Beginning in 1980, Tom Stern, Lazar, and I developed this computer communications group.

In 1984, NSF sent out a notice saying they were setting up a new concept called Engineering Research Centers. These were to be multi-purpose centers with special funding in particular areas of engineering where the United States faced a competitive threat, and where there was a lot of basic research to be done. The Centers were to bring together faculty of different disciplines in that one field, work with graduate students, and bring undergraduates in as well, to do research.

They were to work closely with industry and try to move ahead into that field. I remember saying to Tom that we had no choice but to do this. “We have to apply for this because, if we don’t, somebody else is going to do that”. We got together a group of people here from Electrical Engineering, our Operations Research Department, and faculty and students in applied physics. We had a concept of looking into telecommunication systems of the future, starting from the basic VLSI hardware level, the device and chip levels, all the way up to the systems level. Electrical Engineering was broad enough to encompass all of these. We tried to involve our faculty in the solid state area and the optics area. We had multiple activities going on. We had queuing theorists from the Operations Research Department; we covered device physics; and, of course, we had systems guys, Tom Stern, myself, Lazar, and others.

I organized a group of interested faculty and I put together a position paper on our concept. By the way, in all honesty, the guys, aside from Tom Stern, myself, and Lazar, knew very little about communications. The guys working in VLSI and solid state were not really knowledgeable in that area. Once we got the award, we started training them. It was interesting.

Anyway, we put this proposal together. I wrote it with Tom’s help and submitted it. I guess there were 42 proposals submitted from all over the country in all fields of engineering. (Seven were finally selected, ours being the only one in telecommunications.) There was, initially, a site visit. They came down to visit us. We were then selected as among the top fourteen.

Then I had to go to Washington to present our case. I remember that was a difficult time. I don’t know who the other finalists are, because they don’t tell you this. You’re sitting in an anteroom and there are some other people who you don’t even recognize. (I might have recognized one from another university, but you’re not supposed to talk to each other.) They invite me in. Then you have what seems like a hostile audience in front of you. This is the selection committee, and they started firing questions at me, all kinds of questions. In particular, one guy was really firing hostile questions. Perhaps not hostile, but tough questions at me. Later on, I mentioned his name to one of my colleagues, and he says, “He is really a friend.” He may have been a friend, but not inside that meeting.

Anyway, I came out saying forget it, we’re not going to get it. Yet we got the award despite these very tough questions! We were awarded this grant, one of seven, the only one in telecommunications. With NSF support, we started the Center for Telecommunications Research going. We got the award officially in May of 1985, and we began to build. One of the problems was that it was supposed to have been long range. NSF had said it would be long range, which means that you start taking on graduate students with the funding they require. The initial year the funding might have been a million and a half or two million. They had told us they were going to go up to five million a year. We began hiring graduate students on that basis. Of course, a couple of years into it, it turned out they leveled off at three million, and we had already hired all these graduate students, so, for a while, we had a real problem. We went up to a maximum of eighty-five doctoral students supported on this, plus twenty-seven faculty from these different disciplines. Not full-time support for the faculty, but some support for each person. Lots of equipment. A tremendous thing. We began working and I think we did a lot of wonderful things.

We got industry involved. The biggest job I had in those days was convincing industry to join us, and a lot of my time as director was spent on the telephone, or going in person to meet people. I didn’t curtail my teaching activities at all. I kept teaching. I kept doing research. I had a book published two years later. It was just tiring, working longer hours. We managed to get a sizable number of companies involved. The major companies in the United States and elsewhere, actually. We had AT&T, Bell Labs, IBM, GTE, Bellcore, Timeplex, and many others. Bellcore had been set up in ‘84, so we had Bellcore as part of us. We had at that time NYNEX, which is now Bell Atlantic. We had Southwestern Bell. We had Southern Bell from the southeast. We couldn’t get all of the then RBOCs, but we did get quite a number. We tried hard to get a lot of companies from the financial industry because they are heavy users. We said we must have the big users. We got Merrill Lynch to join us, and, through them, we got a company called Teleport, which they had acquired, which has now been picked up by AT&T as a carrier. We never managed to get any banks, even though the banks always said to us, “We welcome you. Please come give talks to us. We like the seminars, but no money.” They didn’t give us any money. But they sent students to the programs we had. The only major company from the financial industry that supported us was Merrill Lynch. I’m very pleased about that. That was the most difficult thing, bringing some of the users onboard, but we had maybe twenty to twenty-five companies join with us, big and small.

We set up an industrial affiliates program. Once a year we ran a big, open two-day forum on what we had done, talks and seminars on what we had accomplished. We did it the first year in 1986 and continued doing it every year. We also had Japanese companies joining us. We got the award in 1985; our first open large meeting was in 1986. One of the first questions asked was a hostile question from the audience: “This is an American Center, funded by the National Science Foundation, to advance American industry in a competitive environment. How can you tolerate having foreign companies as part of you?”

Hochfelder:

Especially the Japanese.

Schwartz:

The Japanese. My answer was very simple—we’re at a university; we’re open to the world. Now, everything we publish is above board and published in all kinds of journals. We have a lot to learn from the Japanese. It’s a two-way street, remember. They’re not going to steal us blind. We learn from them as much as they learn from us. It’s important to have these companies participate. In fact, some of our best defenders at that meeting were people from Bell Labs and places like that who recognized this. We had Japanese companies; we had a Korean company joining us; some European companies. But the bulk were American. When I went back and spoke to the then Director of NSF, he said, “No, by all means, you’re free to bring other countries aboard.” They were very supportive of us because they recognized that as well, too. It was a great time.

Of course, our colleagues at other universities were very jealous of us. I remember being in a swimming pool at a Communication Theory Workshop in Palm Springs, California, and a well-known colleague from a well-known university comes up to me and he says, “Mischa, we have a big communications group, even bigger than your communications group. How come you got it and we didn’t?” I said, “I don’t know. Go ask NSF. We applied for it. We did our work. We’re doing good work, we think. I’m not going to argue.” There was jealousy there.

It was a lot of work, because NSF, in order to support this program, had to convince Congress that it was worthwhile as a heavy investment of money. Other universities were very jealous. They thought that this would come out of individual investigator funding. NSF had assured them that it didn’t, that it came from extra money. It was a very difficult and trying time in all respects. So it was difficult for NSF.

NSF kept asking us for different kinds of identifiers of activity. We had to hire staff. We had five full-time administrative people. We also hired about ten post-docs, called Associate Research Scientists here at Columbia, to work with us too. We had to indicate over the years how this Center had helped American industry and helped our society. Well, one obvious way we kept pointing out was that a lot of our students did go into American industry. They were welcomed with open arms. They were properly trained because we had good research facilities we set up. We covered everything from VLSI all the way up to systems, signal processing, and image processing. A lot of good work came out of this, a lot of activity. Patents were issued. Papers were published. I was proud that in 1987, three years after we got the funding, I put a book out on telecommunications networks, which was well accepted. NSF used that to show people some of our accomplishments. In fact, in the preface of the book I give credit to NSF and the NSF Center for helping us do the research that led to a book like this. So it was very useful in many respects.

I stepped down as director in 1988 after three years. I had had it. I’m not really an administrator; I just don’t like that kind of thing. I managed to hire away from Bell Labs a top-notch researcher named Tony Acampora, Anthony Acampora. A very able guy. He worked at the director level at Bell Labs. He had always wanted to go to a university. He happened to be visiting us, and Tom Stern and I cornered him one day in my office and I said, “Tony, how about becoming director of CTR?” He said, “What!” Never even thought of it. Well, he agreed to come and he became director. He led CTR for the last eight years of CTR’s existence. I maintained my activity in it, of course. So, that’s the genesis of CTR.

By the way, we found out later on that, even though we had finally built up to about a five-million-dollar budget (three and a half million from NSF and maybe a million and a half to two million from industry, which is good), we had to cut back the number of students because we were using a lot of money to support the activities, meaning new research and equipment. Out of this came the building you are sitting in now. We are sitting in what is called the Schapiro Research Center. Our Dean, Bob Gross, once we had the Center going and showed that we were really moving along and becoming well-known worldwide, went to the State of New York and Governor Cuomo and sold them on the idea that New York State had a lot to gain in terms of industrial activity by supporting the building of a new building with research activities in it. I think we got a sixty-million-dollar loan, or something like that, interest-free, from the New York State Dormitory Authority. We got a ten-million-dollar gift from a man named Morris Schapiro, who was a Columbia alumnus who had been a mining engineer here. He made a lot of money as a financier and has funded the Schapiro Dormitory here as well. His brother, Meyer Schapiro, is a well-known art historian, who was at Columbia too. This man, Morris Schapiro, I think he’s still living if I’m not mistaken. He’s in his late 90s by now. Wonderful man. Gave ten million dollars. So we built this new state-of-the-art building for research only. There is no teaching done here. Research facilities and seminar rooms. It came out of the fact that our Center had given the Engineering School the ability to go to the governor of New York and say, “Look, there’s something really going on here.” So I’m very proud of that too. This building houses all kinds of research activities from the Engineering School and the Columbia Physics department.

Image processing ADVENT group

Hochfelder:

Could you talk about some of the technical advances or spin-offs, or more to the point, any of the ideas that came out of this Center that some companies picked up and actually…

Schwartz:

Well, a very important one: one of the Center leaders in the field of signal processing, a man named Dimitris Anastassiou, developed a large group on image processing. That’s his specialty. It’s called ADVENT, which is part of CTR. He worked as part of the MPEG team to develop the MPEG standard. Columbia is listed as one of the patent holders on the MPEG standard and, as I just learned last week, derives a million dollars a year in revenue from that MPEG standard.

Hochfelder:

The MPEG standard is for image transmission?

Schwartz:

Movie information. Compressed movies over low bit rates. There are various versions of MPEG. This was, I think, MPEG 2. There is now an MPEG 4. There’s an MPEG 1. There are various versions of this now. So that’s the official standard. MPEG stands for Motion Pictures Expert Group. That’s used worldwide now for compressed digital TV.

We developed a sizable software activity. Lazar had built one of the first high speed local area networks called Magnet. In fact, it was before we got the award. He had the Magnet II and this was one of the things we pointed to in preparing our proposal. Lazar was both a theoretician and a practitioner. A very rare combination.

As the years went on, Lazar got very heavily involved in all kinds of higher-level software activities. He developed a group called the Comet Group, which still exists now, together with the Advent Group. Comet was more devoted to networking as such, compared to Advent, which focuses on signal processing for data to go over networks. Comet handles networking issues, both software and hardware issues, protocols, standards of various kinds. Lazar took leave from Columbia last year to set up a company here in New York to develop and market some of his ideas. I understand he has some Columbia backing behind this. So Columbia is trying hard to do things like that.

Our students have gone to most companies you can think of. Not just Bell Labs. They’re at Microsoft, at Cisco. They’re all over the place now. Not only did we have, and currently have, doctoral students; we have a lot of Master’s students and undergraduate students working and they love it. The undergraduate students are working in facilities with Master’s and doctoral students.

ATM switching; Wave Division Multiplexing

Schwartz:

We did a lot of pioneering work in ATM switching, which came along at that time. In fact, we just mentioned Tony Acampora who was here at that time. He talks of a switch we developed which is one of the first distributed ATM switches. We built a network here in this laboratory connecting a bunch of terminals scattered throughout the building with switches, using our own homegrown protocol, but it was ATM-based. That has had an impact indirectly. There’s been no direct spin-off of that. We moved into the optical arena. That became a hot topic.

Tom Stern, who was our Technical Director, began to develop an activity in optical networking and communications. Tony Acampora joined him in that. I helped out a little bit, but they were the two guys in this area. It was a natural, because we had people in the solid state area working in optics as well, with a strong optical activity. The idea, though, was, can we build an all-optical network in the future that will far surpass in capability any existing network? It’s all-optical, because you don’t have to convert from optics to electronics and back to optics. Do everything optically; the one drawback then was that this requires an optical switch. Tony and Tom and others with them began working on problems like that. We had some good activity in that area. A lot of good work came out of that. Tom Stern, as a matter of fact, has just published a very fine book on optical networking. This has, in the last few years, become a hot area commercially. A number of Tom’s students are currently involved in leading companies dealing with WDM, Wave Division Multiplexing.

Educational and international influences of NSF Engineering Center

Schwartz:

A lot of us also led in developing courses. As I said, my textbook came out of courses we taught, and that’s been adopted by other schools. Tom Stern developed a course in optics. Our solid state people moved more into the optical area, developed and taught courses in that area, and that gets spun off to other schools as well. So a lot of activity in different areas.

I might say also that many other countries emulated this NSF Engineering Center concept. In fact, the first couple of years, not only was I busy with research, teaching, running the organization, trying to get new companies involved, and trying to satisfy NSF by going to meetings, but we also had to host a whole bunch of visitors all the time from Canada, the U.K., China, Australia. You name it. Every one of them ended up setting up centers like ours. There are a lot of schools that set up centers. Canada also developed a center for networking and for communications that covers the entire country. They took our model and developed it into a center where fourteen different universities and other institutions are combined with high-speed networks to work jointly. In fact, I was asked to serve on the original site team and I became chair of that site team, a visiting team, for a while. Australia had centers like that set up. I think the U.K. set up a center. So we’re very proud that once we did this, these other centers were set up. The United States also has a lot of centers now at all different universities too. Some existed before we set ours up. We had the biggest center, I think, in the country for a while working in that area. So it was a very successful venture.

Initially, we didn’t have the Columbia Computer Science department involved. But at one of our meetings we had industrial affiliates meeting with us. We had set up two boards, an Industrial Affiliates Board that was made up of top-flight people from each of the major companies that joined us. They had to commit to a certain amount, give us a certain amount of money—fifty thousand dollars a year or above. They helped us set policy and provide long-range direction for the Center. Then we also set up a Technical Advisory Board. That was made up of invited people. We invited the most outstanding people in the country in different areas to come join us on the board. We were free to pick whom we pleased. Sandy Fraser, who was one of the pioneers at Bell Labs in data networking, was on our board. Dave Forney, who was an outstanding person in coding and modems, and one of the founders of Codex, which became part of Motorola, was on our Technical Advisory Board. Outstanding people joined us.

At any rate, at one of the meetings with industrial representatives, I remember Sandy Fraser, who was a software person who headed up Computer Science activities at Bell Labs, saying to us, “You know, you’re doing good work in software, but you really ought to focus more on software. Have the Computer Science department join you.” So we approached them and they did join us. So we began to broaden the activities because software became more and more significant, as you are well aware. Any other questions?

Internet, wireless, and multimedia networking

Hochfelder:

Not on the Center for Telecommunications Research. By way of wrapping up, could you give your thoughts on the future of telecommunications? Perhaps some of the technical challenges that might be on the horizon.

Schwartz:

Well, as I said before, I don’t like to predict things, because I am always wrong! I can’t tell what’s going to happen. But clearly, the Internet is driving the show now. That and wireless. Those are the two major activities. Of course, Internet is moving now to the wireless domain as well too. As a matter of fact, before I retired three years ago, about five or six years ago, I began to get heavily interested in wireless, because I saw that it was an area with great promise for the future, with very interesting research challenges. I like to move into new areas as they come along. People are different. Some people like to stay in an area and really delve more deeply. I like to go on to new challenges. So five or six years ago I saw wireless communications research becoming a challenge to the academic world. A lot of activity in the academic world had been going on in the propagation aspects and physical layer aspects. But, to my knowledge, very little in the networking area, so I began to try to do work in that area.

When I was on sabbatical at University College, London, in 1995, I wrote a paper that was published in the IEEE Personal Communications Magazine on network challenges for wireless that has gotten a lot of play. I got a lot of nice comments on that and I have given a lot of talks all over the world on that. These were on the higher-layer challenges, not for current wireless, which is what we call second generation, the digital wireless that everybody uses, but for third generation and beyond, which is now coming along. I see that as one of the major challenges. Wireless terminals will be expected to carry, using limited bandwidth, multimedia traffic, which means video, voice, and images from the Internet. Now, how do you do this with battery-operated devices, which have limited power? Devices of this type are already starting to appear. More are expected. There is currently a lot of work going on in this area. I find wireless networking incorporating such devices one of the major engineering challenges now.

Hochfelder:

Especially in the mobile environment.

Schwartz:

Yes. Yes. So, how do you do that? Multimedia. There are a lot of network management issues involved with that, so there are a lot of people working on this now. I find it very exciting, and I am personally involved in that.

I think one of the big challenges that is coming up now is the whole optical area. I’m not really as involved in it as I used to be. Fiber now allows multiple wavelength transmission over one system. Tom Stern has written a lot about that. For a while, we did a lot of basic work here on optics and it wasn’t going anywhere because people didn’t have the optical switches. But then, suddenly, breakthroughs began to develop through which you can handle multiple colors on the same fiber. That’s now increased the use of optics tremendously. Suddenly, it’s become a real hot industry that is obviously affecting the Internet because it means now that you can really drive much more high-speed traffic over it. This technique is called wave division multiplexing, WDM. It has become a real hot technology area now, with both companies and universities involved. Wireless networking and WDM: those are two major technical challenges.

Internet access technologies

Schwartz:

The key question now is, what is the impact of the Internet and how is that going to manifest itself in the future? One clear thing is that access technologies have fallen behind. We all have our 28.8 kilobits per second or 56 kilobits per second access to the Internet. That’s too slow at home. We have no problem at a place like Columbia because we use our Ethernet facility, right into the Internet. But at home, you don’t have that kind of thing. So, access technology is important.

You’ve heard of xDSL. That stands for Digital Subscriber Line, the “x” meaning different versions of it: ADSL and HDSL, for example. ADSL, the asymmetric version, seems to be taking off to some extent. Some telephone companies are now beginning to push it. It enables you to ship signals downstream to the user at megabit-per-second bit rates and upstream at slower bit rates. Asymmetric. That’s what you want for accessing the Internet. You want high speed coming down.
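
A minimal sketch of why the downstream rate dominates the web-browsing experience described here; the file size and link rates are illustrative assumptions only.

<pre>
# Minimal sketch: time to pull a file over dial-up versus ADSL downstream.
# File size and link rates are illustrative assumptions.
file_size_bits = 5 * 8 * 10**6   # a 5-megabyte download

for name, rate_bps in [("56 kb/s modem", 56_000),
                       ("1.5 Mb/s ADSL downstream", 1_500_000)]:
    seconds = file_size_bits / rate_bps
    print(f"{name}: about {seconds:.0f} s")   # ~714 s vs. ~27 s
</pre>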

Cable modems are being pushed now too, and of course you read that AT&T has bought cable companies. They are going to be pushing cable modems. They’re also high bit rate, the difference being that they’re like Ethernet, with multiple users using the same cable. So you have to compete with other users. If you get the facility to yourself, you go high speed; otherwise you have to share it and slow down a little bit sometimes. Whereas with ADSL and systems like that, you have your own dedicated wire. There are tradeoffs. Access technology is a hot topic now.

I still think that one big difficult area that hasn’t been decided yet is how you should run the Internet. Is there a place for Asynchronous Transfer Mode (ATM), for example? That has been a topic of discussion lately. ATM was touted as a broadband integrated networks concept years ago by the CCITT, the organization mentioned before. (The names have changed, by the way. We now have ITU-T and ITU-R, no longer CCITT and CCIR, as parts of the ITU: its telecommunication and radiocommunication standardization bodies, respectively.) They developed the concept of broadband integrated services networks, BISDN: Broadband Integrated Services Digital Networks. ATM was being touted by the ITU-T (the CCITT initially) as the networking system of the future that would enable us to bring broadband ISDN into use. It’s geared specifically to multimedia. It enables different kinds of service to be provided, whether video, voice, data, or images. All integrated, so it is multimedia in nature. There is an organization called the ATM Forum, made up by now of hundreds of companies, trying to develop standards for this. The one basic concept that has been driving ATM is the concept of Quality of Service, which is a new concept.

<flashmp3>360 - schwartz - clip 5.mp3</flashmp3>

In the telephone industry, you talk of Grade of Service. You don’t want to have too many calls blocked. Voice-based wireless, the current cellular wireless, is the same way. In the data networking world, you talk about packet delay; you talk about packet loss probability. You can’t lose data packets because they contain important information. So TCP had built into it the concept that if it does not receive an acknowledgement for a packet within a certain period of time, it repeats the packet, because every packet has to be received correctly.

ATM has Quality of Service built in, right from the beginning, depending on the kind of service to be provided. ATM is a packet-switched service using constant-size packets called cells. Not to be confused with the wireless cell; that is different. The ATM cell is a small packet, forty-eight bytes of data and five bytes of overhead, going through the network, very short. The whole point of the standards body, the ITU-T, and of the ATM Forum following it, was to develop standards for Quality of Service and have this built in. So they have a concept that different kinds of traffic will be transmitted. For example, constant bit rate traffic, such as voice, may incur a delay in transmitting these packets. Voice traffic has the property that it must arrive at the destination within a certain interval of time; otherwise it’s not tolerable in real time. But you can drop some voice packets, voice cells. The ear won’t notice it too much; a little bit of noise is heard. So its Quality of Service is maximum packet delay as well as packet jitter, because voice cells are buffered as they move along, and successive packets in the same conversation will change their spacing. That’s not good for the ear; you have to reduce that. So you have these Qualities of Service for voice.
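
A minimal sketch of the fixed-size ATM cell described above (five bytes of header plus forty-eight bytes of payload, fifty-three bytes in all); the zero-filled header and the padding of the last cell are simplifications for illustration.

<pre>
# Minimal sketch of the ATM cell format: 5-byte header + 48-byte payload.
# The all-zero header and zero padding are illustrative simplifications.
HEADER_BYTES = 5
PAYLOAD_BYTES = 48
CELL_BYTES = HEADER_BYTES + PAYLOAD_BYTES  # 53 bytes per cell

def segment_into_cells(data: bytes, header: bytes = b"\x00" * HEADER_BYTES):
    """Chop a message into fixed-size cells, padding the last payload."""
    cells = []
    for i in range(0, len(data), PAYLOAD_BYTES):
        payload = data[i:i + PAYLOAD_BYTES].ljust(PAYLOAD_BYTES, b"\x00")
        cells.append(header + payload)
    return cells

cells = segment_into_cells(b"x" * 1000)
print(len(cells), "cells,", len(cells[0]), "bytes each")  # 21 cells, 53 bytes each
</pre>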

Video is transmitted at variable bit rates. Video starts off at a constant bit rate; when you compress it, it becomes variable bit rate (VBR) traffic. Real-time video is similar to voice in Quality of Service: it has to be delivered, just as voice does, within a certain interval of time, but you can’t lose too many video cells because that might wipe out a whole screen. Data packets, on the other hand, can be delayed to some extent, but you can’t drop any of them. So all of these have different kinds of Quality of Service, and that’s built into ATM from the beginning.
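
A minimal sketch of the per-class Quality of Service targets just described for voice, compressed video, and data; the numeric values are illustrative assumptions, not figures from any standard.

<pre>
# Minimal sketch: per-traffic-class QoS targets as described above.
# The numeric targets are illustrative assumptions only.
qos_profiles = {
    "voice (constant bit rate)":    {"max_delay_ms": 150, "max_jitter_ms": 30,   "loss_tolerated": True},
    "real-time video (VBR)":        {"max_delay_ms": 150, "max_jitter_ms": 30,   "loss_tolerated": False},
    "data":                         {"max_delay_ms": None, "max_jitter_ms": None, "loss_tolerated": False},
}

for traffic, profile in qos_profiles.items():
    print(traffic, "->", profile)
</pre>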

For a long time, people thought ATM was the networking standard of the future, and many companies were set up building ATM switches. Interestingly enough, they were originally touted as switches for wide area networks. They first got their sales, however, in local area networks for companies and academic institutions. Now they are being deployed again in wide area networks running over a system called SONET, a high bit rate, optically based protocol for the physical layer, also called SDH. But people in the Internet world are now saying, “Why should we take our Internet packets, TCP/IP packets, chop them into ATM cells, and transfer them over SONET? Why can’t we have TCP/IP over SONET directly?” So there’s sort of a little struggle going on now. I wouldn’t quite call it a war, but a struggle going on between two different factions. In fact, I’ve attended meetings where there are proponents of both. So it’s not clear what’s going to happen with ATM at this point, whether it will support the Internet all over the world or not. A lot of the telephone administrations are deploying ATM switches.

=== Internet Quality of Service ===

Schwartz:

Talking about new issues for the future, another issue is Quality of Service on the Internet. If you don’t have ATM, how do you guarantee user Quality of Service over the Internet? TCP was developed as a data protocol: packets for data. Now you want to run voice over the Internet, and that voice has to have the same timing guarantee mentioned above. Very difficult, particularly with the Internet. So they’ve been tussling with Quality of Service, because if you really want to have real-time voice, you have to guarantee it will get there in time, and that’s very difficult with the Internet. When you go through various Internet service providers over different networks, nobody knows what’s happening to your packets. You can’t guarantee anything. So they’ve had the IETF, which is their standards-making body, tussling with this.

Quite a number of years back they proposed a technique called RSVP, a receiver-based reservation protocol, that would try to bring some Quality of Service into this. I’m not really that active in the area, so I’ll just give you my judgment on what I’ve heard. People say it doesn’t scale to large numbers of users and large numbers of sessions going on. So there are other techniques being developed now by various people. One is called Differentiated Services. That’s the thing the Internet communications people are tussling with now: how do you guarantee Quality of Service in the Internet environment if you don’t use ATM, which might have that built in? So that’s another technical challenge for the future that people are working on right now.
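
A minimal sketch of the Differentiated Services idea mentioned here: packets are marked at the network edge with a class codepoint (the DSCP bits of the IP header), and routers then forward by class rather than by per-flow reservation as RSVP would. The class names and the mapping below are illustrative assumptions.

<pre>
# Minimal sketch: Differentiated Services marking at the network edge.
# The class-to-codepoint mapping is an illustrative assumption.
DSCP = {"expedited": 0b101110, "assured": 0b001010, "best_effort": 0b000000}

def mark_packet(packet: dict, traffic_class: str) -> dict:
    """Set the 6-bit DSCP value an edge router would write into the IP header."""
    packet["dscp"] = DSCP[traffic_class]
    return packet

pkt = mark_packet({"payload": b"voice sample"}, "expedited")
print(bin(pkt["dscp"]))  # 0b101110
</pre>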

Hochfelder:

Okay. Sounds good. That’s all I have. Do you have any concluding thoughts?

Schwartz:

Actually, I’ve run out.

Hochfelder:

Thanks very much.

Schwartz:

My pleasure.