Oral-History:Paul Baran

==== Packet Technologies, Broadband Digital Services to the Home Via TV Cable ====


The next spinout company I co-founded was Packet Technologies. Packet Technologies had been originally conceived 12 years earlier when setting up and naming the original company “Cabledata Associates”. Packet Technologies was to be a company dedicated to providing equipment to deliver high-speed data services to the home via TV cable.

TV cable started in the 1950s as an extension of a commonly shared TV antenna in the rural areas of the country. At that time the cities didn’t need cable. City users had antennas that provided good TV coverage for the few TV networks then available. By the 1980s HBO and other additional channels changed the economics of cable in the cities, as there were now channels that could only be received if you were connected to the cable.

To win franchises each cable operator promised more and more features, with wonderful new services to the home being a big attraction. However, there was no technology available at that time able to deliver this promised future capability. That gave us the opening we were waiting for. The market for this technology appeared ripe for the first time as a result of the cable companies’ promises to deliver all sorts of “blue sky” services. These promises were made in competitive fashion to win the cable franchises for the cities that were to be wired. The potential market was now so large that it could justify the scale of development work needed to create the missing technology.

The new company was originally called Packetcable, and later Packet Technologies after it had a non-cable product as well. After an initial funding round by local investors, AMOCO became the major funding source, saying that their plan was to be in “a major size new business before the oil ran out in the 21st Century”. The technology development proceeded well, aside from the usual problem of taking a bit longer than initially planned.

Two different cable systems were modified for two-way operation. An outdoor unit hanging from the TV cable, and powered off the cable, delivered TV viewing control for pay TV, data and videotext access to a cluster of up to eight houses. Each house was connected with conventional TV drop cables that also carried the normal TV. Each of the remote units was served by a high-speed two-way data connection over the cable to the cable head end. At the cable head end was a connection to a time-sharing data service provider. Each six MHz TV channel could support two 1.544 Mbps (T-1) rate channels. The equipment worked, and it worked well.




===== Packetized Voice =====
 
 
 
All the data in the system was handled in short packets, and since the data rate was roughly similar to the telephone T-1 rate, I had the idea of also sending telephony over the same cable. At that time, prior to the fiber optics era, T-1 circuits were very expensive. We believed that we could send telephony more cheaply over the TV cable, given the telephone company tariffs of the time.

My basic idea was to use the 192-bit frames of the T-1 system as separate, very short and very fast packets. This would allow us to make statistical use of the channel, and the short packet would allow us to avoid any significant delay, important for maintaining high quality voice. By sending packets only when the user was talking, and using 32 kbps ADPCM in lieu of the older 64 kbps PCM approach, we were able to carry 96+ voice channels. This may be compared to a conventional T-1 circuit, which could carry a maximum of 24 voice channels: a factor of four improvement.

We described this proposed cable telephone system technology on a white board to visitors from Michigan Bell. We said, “Our technology will allow TV cable to transmit telephone voice at a lower cost than conventional alternatives.” Their reply was, “Could you also do that packetization trick over our existing T-1 twisted pair circuits, and get the same 4X efficiency improvement?” We said, “Yes, I guess we might be able to do that.” And they were interested in proceeding.

The telephone industry’s equipment for rearranging the connections of T-1 trunks from different central offices is called the Digital Access Cross Connect System (DACCS). What Michigan Bell wanted was in essence a DACCS using the compression approach that we described. We built a pair of prototype units for Michigan Bell, which we named PacketDax. They were highly efficient and flexible, and able to remotely control automatic cross-connection switches with excellent remote monitoring and setup. Parenthetically, it was mandatory that the voice quality be indistinguishable from conventional toll quality voice circuits, which it was. And the units met all the telephone plant requirements, with redundant components, etc. This interesting project represented only about five percent of the entire Packet Technologies effort, but it would have an important place in the company’s future.
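
The channel arithmetic behind that factor of four can be checked with a quick calculation, assuming (purely for illustration, since the interview gives only the resulting 96+ figure) that a talker sends packets roughly half the time.

```python
# Back-of-envelope check of the voice-channel gain described above. The ~50%
# talk-activity figure is an illustrative assumption; the interview states
# only the resulting 96+ channel capacity.

T1_PAYLOAD_BPS = 24 * 64_000   # a conventional T-1: 24 channels of 64 kbps PCM
ADPCM_BPS = 32_000             # 32 kbps ADPCM per active talker
TALK_ACTIVITY = 0.5            # assumed fraction of time a talker is actually sending

pcm_channels = T1_PAYLOAD_BPS // 64_000        # 24 conventional channels
adpcm_channels = T1_PAYLOAD_BPS // ADPCM_BPS   # 48 if every talker sent continuously
packetized_channels = int(adpcm_channels / TALK_ACTIVITY)  # ~96 with talk-spurt packets

print(pcm_channels, adpcm_channels, packetized_channels)   # 24 48 96
```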


===== Amoco’s Remarkable Intransigence =====


About Paul Baran

Baran received his BS in electrical engineering from the Drexel Institute of Technology (now Drexel University) in 1949. He worked for the Eckert-Mauchly Computer Company on the UNIVAC, 1949-50, for Raymond Rosen Engineering Products on magnetic tape error correction and the Cape Canaveral telemetering system, 1950-53, and for Hughes Aircraft on radar data processing, 1955-59. He took classes at UCLA, 1955-62, and received an MS in engineering in 1959. He worked on missile command and control from the late 1950s. He went to the RAND Corporation as a researcher, 1959-68, working on defense against a nuclear first strike, particularly on communications and how to get messages through after a military strike. In the early 1960s he worked on high data rate distributed communications: digital, redundant means for the military to communicate even after a nuclear first strike, i.e., packet switching. He later worked on gun detection and computer privacy, and then helped set up the ARPANET. He then went to work for the Institute for the Future, working on quality control. From 1973 he worked at Cabledata Associates, recommending divestiture of the ARPANET (though this was not carried out for a while) and working on low-cost computer printers, satellite transponders, telephone modems, packet technologies, remote electric metering, automated response to faxes, and ATM-Cable TV combinations.

About the Interview

PAUL BARAN: An Interview Conducted by David Hochfelder, IEEE History Center, 24 October 1999


Interview #378 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc., and Rutgers, The State University of New Jersey

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, Rutgers - the State University, 39 Union Street, New Brunswick, NJ 08901-8538 USA. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:
Paul Baran, an oral history conducted in 1999 by David Hochfelder, IEEE History Center, Rutgers University, New Brunswick, NJ, USA.

Interview

Communications History

Hochfelder:
Would you begin by sketching the last fifty years or so of communications history? What do you think were the important communications technologies and innovations?


Baran:
There are so many important new technologies created during this last half century that it is impossible to enumerate them all. During this period transistors and computers became real. Out of the transistor came the microprocessor, which opened unimagined applications in business, in homes, and even in automobiles. The miniaturization of the integrated circuit permitted a level of signal processing that changed our entire concept of communications. And information theory provided some clues to better understand what we were doing. This era also saw the development of satellites, of modems and digital transmission, and of packet switching. Cable TV, which started as a modest extension of the TV antenna, with time and access to new technology evolved into a high-speed two-way transmission channel for voice and data, and now even digital TV. Fiber optics, a remarkable and non-obvious breakthrough, changed the economics of long distance transmission. The follow-on development of wave division multiplexing is creating an almost infinite bandwidth channel for longer distances. Even that old technology from the last turn of the century, “wireless”, had by mid-century morphed into broadcast radio. And the old quaint name “wireless” was recycled to describe the new categories of cordless telephones, cellular and radio data transmission that have become commonplace.

Given this endless stream of technologies that together comprise “communications technology”, no single technology can be said to be “the most important”. More accurately, we are dealing not with a single technology, but rather with myriad networked combinations of these separate technologies, so merged together that an almost indefinable family of information links is being created, bringing humanity closer together around the world. As engineers who may spend our days working at the forefront of a single narrow technology, it is appropriate to take the time to step back and, from this vantage point, consider the larger picture that has evolved over these fifty years, in which the combination of these technologies is far greater than the sum of its parts.

Today, the all-pervasive worldwide communications networks weave together the strands of commerce, ideas, and shared common interests. The payoff to society from the development of the new communications technology during the last fifty years cannot be overestimated. And, let us not forget that the IEEE has been a major factor in the development of communications technology.

Hochfelder:

Would you give a thumbnail sketch of the evolution of communications technology since you began your career?


Career

Baran:
As my life touched on only a small portion of the vast panoply of communications technology that unfolded over the last fifty years, I would like to focus on technologies where I have had first-hand experience. As I chat about my career and touch on the different technologies that I personally encountered along the way, I remind the reader that no modern technology is ever the product of any single individual. So, I am also talking about the efforts of many others as well. I have tried to add references to relevant papers and patents as pointers to previously published information.

Drexel, 1945 - 49; and Eckert-Mauchly, First Commercial Computer, the Univac, 1949 - 50

Baran: I started my engineering career by attending what was then Drexel Institute of Technology, now Drexel University, receiving a BS in EE in 1949.

Jobs for engineers were scarce that year, particularly the choice ones at the larger, well established companies. So, I had to settle for working as a technician at a new “start-up”. This was the Eckert-Mauchly Computer Company, created by J. Presper Eckert and John Mauchly, two of the key individuals who built the first large electronic computer, the ENIAC, at the University of Pennsylvania. The new company was created not as the result of detailed planning, but in anger over a disagreement with the University over patent rights. Their product would be the UNIVAC, which was to be the first large commercial general-purpose electronic computer.

At the time I knew nothing about computers. But as few others did at the time either, I was not concerned by my ignorance. My job was to determine the lifetime of each of the potential components that might be used, operating under different voltages and temperatures. This was critical, as the composite failure rate for the very many components comprising this large machine limited its ability to operate successfully. Otherwise failures would be so frequent that the computer would be down for repairs most of the time, and so be unusable.

Vacuum tubes and the other electronic components of the time were developed for entertainment radios, where the number of parts is small and the failure of a single component is not serious. The radio would be brought into the shop for tubes to be changed, or an electrolytic capacitor to be replaced, and the radio would be back in service. But in the case where a very large number of components all had to work at the same time, component failure rates became critical. The available components proved to be highly unreliable. For example, vacuum tubes suffered from a mysterious malady called “sleeping sickness.” When a tube was in the “0” state for a long time, it would occasionally miss the first “1” -- disastrous in a computer where every bit is mandatory. The diodes were also unreliable. Every semiconductor in a batch had different properties. We had to measure the characteristics of each diode, and then determine where in the circuit it could be used. I recall that in the first UNIVAC, a hole was left for a dual-diode 6H6 vacuum tube for every two semiconductor diodes, just in case they didn’t prove reliable enough.

By calculating the mean time to failure, and considering the time to find and repair a component failure, I convinced myself that a commercial computer would likely be an economic failure, as the components of the day were so unreliable. And, after only seven months I doubted whether the damn thing would work. And, if it did, not well enough. Time to get another job.
Raymond Rosen Engineering Products (RREP), 1950-53
Radio Telemetering, 1950 - 53
My next job was with Raymond Rosen Engineering Products, a block away from Drexel. It was a company that found itself in the telemetering and remote control business by accident. The primary business was that of an RCA TV distributor that also serviced RCA police and taxicab radios. But in 1948, a year also marked by an oversupply of graduate engineers as in 1949, the company had hired a few highly overqualified engineers (whose salaries were less than technicians’ pay). This small team of engineers had, under government contract, modified these early radios for use in telemetering and remote control. And then they went on to design the products from scratch, “doing it right.” These products worked well, and were relatively inexpensive. I joined this company in about March 1950. On June 24th 1950 the Korean War started, and the company’s products became of increased interest to the military. And, in a very short time a lot of unexpected business descended on this company.
Magnetic Tape Error Correction, 1951
At Raymond Rosen Engineering Products I designed electronic equipment for use with magnetic recording equipment for recording aircraft flight and other test data. Magnetic tape recording of the raw data was initially used only as a backup in case of failure of the complex telemetering receiving station. If a channel failed, critical data could be lost. The magnetic tape equipment was not accurate enough for primary recording of data, as there were large unexplained errors in the played back data.
The problem that I was working on was how to correct errors caused by the magnetic tape stretching and other impairments. The amplitude response of magnetic tape is highly variable - it could be +/- 30% or more - so frequency modulation (FM) is used instead of amplitude modulation. The telemetering data was carried on a number of FM subcarriers modulating an FM radio carrier. FM had its own set of problems. Magnetic recording tape stretched and caused FM errors, which were amplified by the narrow frequency swings necessary to jam many subchannels onto the same RF channel within the limited tape recording bandwidth. The slow tape stretching was easy to correct by the magnetic recorder manufacturer (Ampex), who added a reference tone on the tape and used this known frequency tone to servo drive the tape playback motor.

While this took care of the low frequencies, I was surprised to encounter all sorts of higher frequency modulation noise. No one had ever seen this phenomenon before, because no one had ever recorded precision FM data on magnetic tape. Magnetic tape recording was a new art at the time. We thought that we could use a reference tone, pass it through an FM discriminator, and use the resulting output signal to cancel the errors in each of the many separate FM subcarrier discriminators. I was surprised to find that the correction signals arrived too early. Each FM channel used a constant percentage bandwidth filter, so there was a linear phase delay, or constant time delay, passing through each discriminator channel. But the time delay required was different for each sub-channel. The FM channel bandwidths were part of a fixed standard. To get around this problem, I came up with the idea of a bank of variable all-pass-filter delay lines that held back the correction signal to arrive exactly in time and amplitude to correct the errors. This worked well. But there was still more residual noise than expected, which I found came from the magnetic recording tape itself. This was a phenomenon that no one had encountered earlier, because magnetic tape wasn’t used this way before.
There were three manufacturers of magnetic recording tape at that time. But I was only able to get Dr. Wetzel of 3M interested in working on this problem with us. Wetzel had the 3M tape manufactured by different methods, and sent us samples. We then tested each sample and reported back our results. After a few iterations, the problem was tracked to microscopic protrusions on the tape that would catch on the heads and cause the tape to resonate like a bow across a violin string. When these protrusions were removed, there was a marked improvement in performance. As these mounds looked like miniature breasts, I referred in our correspondence to the tape with the protrusions scraped off as “deteatified tape”.
This deteatified tape was a major improvement over what was previously available, so it became a new commercial product for 3M. Before the formal product release someone in the 3M marketing department asked where the term “deteatified” came from, and the tape was immediately renamed “Instrumentation Grade Recording Tape”. It proved to be a very highly profitable product. Those who record data on tape are not price sensitive; they write only once and never recycle the tape.
Cape Canaveral First Telemetering System, 1952 - 53
The first customer for this work was the Cape Canaveral Long Range Test Facility then being created. This was to be a series of island tracking stations out from Cape Canaveral along a 135-degree path, for testing long-range air-breathing missiles and later ballistic missiles entirely over water. The improvements in the state of the art of recording telemetering data had now reached the point where it was feasible to record primary data on tape at the island bases, and then play the tape back at the main facility at a later time without significant loss of accuracy.
By now the Korean War had intensified, and the first telemetering test there was to be for the Matador, an air-breathing missile that would fly at 40,000 feet. I was given the task of getting our breadboard equipment out of the lab, set up, and operational within two months at Cape Canaveral and Grand Bahama Island. The recording station in Grand Bahama was built in a trailer, while the one at Cape Canaveral was to be set up in a literally commandeered beach house.
Experiences as a Field Engineer
The Banana River Naval Station, a predecessor to Patrick Air Force Base south of Cocoa Beach, had been in operation for some time. The Cape Canaveral site, about twenty miles to the north, was a remote and lightly inhabited place, primarily palmetto bush and some beach houses recently claimed by the government under Eminent Domain. A few hold-out owners found their beach houses bordered by barbed wire.
The most challenging impediment to meeting this tight schedule was the handful of bureaucratic military officers assigned to shuffling papers who had no notion of what telemetering was about, nor any sense of urgency. Telemetering appeared in the organization chart somewhere beneath the motor pool. Obtaining the needed priority treatment to meet the schedule was difficult, with an excessive amount of time spent on red tape. We somehow were able to work around the impediments and get all our equipment working in time for the first scheduled launch.
This first Matador took off cleanly, gained altitude nicely and flew at 40,000 feet out to its target near Grand Bahama Island. But when it got there, it embarrassingly missed its target. The telemetering equipment all worked as planned. The recorded tape on Grand Bahama was flown back to the Cape and played back to drive the multichannel oscillographs that traced each of the measured quantities. Reading these traces on paper told the story.
Missing the target was immediately known to all at Cape Canaveral. The immediate question was, “What went wrong? Who screwed up?” This presented a wonderful target of opportunity to teach the bureaucracy why telemetering was important and should be raised in the organization’s pecking order. When asked, “What did the telemetering data show?”, we just smiled and said, “Yes, we believe that we know what the problem is, but of course we can’t tell you. The data belongs to the Telemetering Branch.” This small civil service group milked it far longer than the 15 minutes in the spotlight to which one is normally entitled. Instead of saying what the problem was, they religiously followed the rules and sent their information up the chain of command. This delay further increased the mystique. In the end the data suggested that the guidance system worked fine, but Grand Bahama Island wasn’t where it was supposed to be. While we had highly accurate maps for single landmasses, remote island locations relative to the mainland were only approximations at that time.
Consulting, 1954 - 55
After RREP, I worked in field engineering for Audio Video Products Co. in New York, and I later developed some magnetic recording equipment on a consulting basis. I met my future wife, Evelyn Murphy, a transplanted Californian, in New York. But as we all know, Californians don’t transplant well. They tend to complain about how much better the weather is in California. I’m sure that most Easterners must be tempted to say, “Why don’t you go back where you came from?” So, shortly after we married, she returned to California with me in tow. And here is where we live to this day.
Hughes Aircraft Ground Systems, 1955 - 59
I was recruited to work for the Hughes Aircraft Ground Systems Department while attending the IEEE Annual Conference and Show held in New York City in March. Hughes found this to be a perfect time to recruit young engineers from the East – after long months of gray skies, flashing pictures of palm trees and blue California skies proved irresistible to many of us.
Radar Data Processing, 1955 - ~ 57
In that era military inter-service rivalries were fierce. Hughes Aircraft’s relationship to the Air Force was a deterrent to its receiving Army or Navy contracts. So Hughes formed a talented group of six, headed by Dick Barlow, to write proposals for the other services. I asked Barlow, “Why the odd name, Ground Systems?” “So I can grab all requests for proposals that come in the front door not specifically earmarked ‘aircraft.’”
When I joined Hughes Ground Systems it had just received its first contract in response to some imaginative proposal writing. The department had only 28 people, but within a few years Ground Systems would grow to over 6,000 people in an era of rapid expansion of the defense electronics industry.
My first job there was in the Systems Group, on a radar data processing system for the Army (if I recall correctly) -- the AN/SRS-2. At that time the Air Force was building a vast computer system -- the Sage System -- combining computers, radar and communications links to form an air defense command and control system. The Ground Systems proposal was for a “vest pocket” version of the Sage System -- a system with much of the capability of the huge Sage System but all packed inside an Army trailer. The concept was that by using transistors instead of large hot radio tubes, tremendous miniaturization and improved reliability would result.
The trailer connected to a radar set attached at the end of a long cable. These radar units were conventional mechanical scanning radars. The equipment’s purpose was to mark the location of airplanes seen on a radar screen. As the search radar scanned around about every 15 seconds, the detected airplane would be in a slightly different position, which would be marked. After a few such marks, the computer would pick up the task by extrapolating from the previous measurements. This allowed a few operators to monitor a large number of aircraft.
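
The interview does not say how the extrapolation was computed; the sketch below is only the simplest possible illustration of the idea, predicting the next radar mark by straight-line extrapolation from the two most recent ones. The coordinates, scan period and function name are hypothetical.

```python
# Hypothetical illustration of picking up a track by extrapolation: given two
# successive radar marks, predict where the airplane should appear on the next
# scan by assuming straight-line, constant-speed flight.

def extrapolate(prev_mark, curr_mark, scan_period_s, lookahead_s):
    """Predict an (x, y) position lookahead_s seconds after the current mark."""
    vx = (curr_mark[0] - prev_mark[0]) / scan_period_s
    vy = (curr_mark[1] - prev_mark[1]) / scan_period_s
    return (curr_mark[0] + vx * lookahead_s, curr_mark[1] + vy * lookahead_s)

# Two marks taken one 15-second scan apart; predict the position one scan ahead.
print(extrapolate((10.0, 5.0), (12.0, 5.5), 15.0, 15.0))  # (14.0, 6.0)
```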
About six months after the work on the first unit started, another contract arrived from the Navy that used much of the original development. This system would become the Naval Tactical Data System (NTDS), versions of which are still in use today, perhaps the most successful of any military electronic system. Dr. Nicholas Begovitch, the associate head of the department, came up with some new concepts in phased array radars. The Navy version particularly benefited from phased array radars, which were novel at the time, by allowing the beam to be electronically stabilized while the ship platform rolled in heavy seas.
Another novel idea that I recall seeing there for the first time was a track ball invented by mechanical engineer John Bozeman. He used a small bowling ball with two orthogonal pickups. In tracking airplanes, slight rotation of the ball allowed precision marking of the position. And, if a new airplane showed up from another direction, the ball would be quickly spun and then fine position corrections made. In other words, it had properties similar to today’s ball-type computer mouse, but it seems to have been lost to history.
UCLA, 1955 - 62
At the time I joined Hughes I knew little about radar, less about computers and nothing at all about transistors. I was temporarily ensconced in “system engineering”, which considered the macro design issues, sparing me from having to know the details of what was going on in the guts of the system. But I knew that I would need to come up to speed quickly to be more effective. A bit of on-the-job training and a lot of reading made me a less dangerous person to have around. But there was so much that I should know, but didn’t. So, I started to take after-hours classes, and continued to do so for many years.
UCLA had an excellent Extension Division with a wide range of leading edge engineering courses taught by first-rate people in their fields, such as Dr. Willis Ware of RAND. I started off with courses in computers and transistors and later biotechnology. Along the way I accumulated enough courses to be eligible to work toward an MS in engineering degree at UCLA. I had the good luck to draw Prof. Gerald Estrin as my advisor. I attended his seminars as well as working with him on a one-on-one basis on my MS thesis on automated character reading. Jerry Estrin had a wonderful knack for finding out what you knew and what you didn’t, and focusing on your gaps. I learned many things from this fine gentleman, such as how to really read a technical paper. (“It isn’t read until you have also read all the footnotes, which by inclusion form part of the paper.”) In 1959 I received an MS in Engineering degree and continued to take classes at night. A sad fact of life in leading edge engineering is that your technology knowledge becomes obsolete very quickly, so you always have to be learning just to stay in the same place.
Missile Command and Control, ~1958
In the late ’50s the Cold War was heating up, and the Air Force was beginning to move from the early liquid-fueled ballistic missiles into the era of solid-fuel missiles. The major difference between liquid and solid fuel missiles was that the liquid-fueled missile might take eight hours for fueling, while the solid-fuel missile could be fired in minutes. As a result of my working on radar information processing issues I became increasingly concerned about issues of vulnerability, counter-measures and command and control, and I moved over to the system vulnerability analysis group in Ground Systems.
The Minuteman solid-fuel missile command and control system was being designed at the time. Many of us felt that unless the control system was very carefully designed it would be far more dangerous than anything the world had ever seen. It would require, as a minimum, layer after layer of safeguards lest any panicked launch control officer inadvertently start World War III. Prof. Warren McCulloch of MIT, a brain researcher who was a combination of electrical engineer and psychiatrist, was a consultant on this project. He brought a range of insights into the risk factors arising from the extreme fallibility of human beings. My interest in the issues of command and control, and in techniques to ameliorate these risk factors, increased further.
Not-for-Profit Period
The RAND Corporation, 1959 - 68
As Hughes Ground Systems continued to grow, I felt that my effectiveness was increasingly constrained by the even more rapidly growing bureaucracy. I thought that I could be more effective in a smaller organization, preferably working as an individual researcher. And, I was lucky in ending up at the RAND Corporation, a not-for-profit organization set up at the end of World War II to preserve the Operations Research capability created and developed during the war. I had visited RAND a few times while at Hughes, and was struck by the freedom and effectiveness of its people. And I was further impressed by Willis Ware, who taught a computer course at UCLA that I took.
RAND received its money once a year from the Air Force, and the RAND people had a remarkable freedom to pursue the subjects that the researcher believed would yield the highest payoff to the Nation. The individuals used that freedom well, and this privilege was rarely abused. RAND was by far the most effective organization in the defense sector that I ever encountered. I am honored to be able to continue an affiliation with RAND to this day as a member of its President’s Council.
Cold War Threat, ~ 1959
In late 1959, when I joined the RAND Corporation, the Air Force was synonymous with National Defense. The other services were secondary. The major problem facing the Country and the World was that the Cold War between the two super powers had escalated by 1959 to the point where both sides were starting to build highly vulnerable missile systems prone to accidents. Whichever side fired its thermonuclear weapons first would essentially destroy the retaliatory capacity of the other. This was a highly unstable and dangerous era. A single accidentally fired weapon could set off an unstoppable nuclear war. A preferred alternative would be to have the ability to withstand a first strike and the capability of returning the damage in kind. This reduces the overwhelming advantage of a first strike, and allows much tighter control over nuclear weapons. This is sometimes called Second Strike Capability. If both sides had a retaliatory capability that could withstand a first-strike attack, a more stable situation would result. This situation is sometimes called Mutually Assured Destruction, also known by its appropriate acronym, MAD. Those were crazy times.
Communications: the Achilles Heel, 1960+
The weakest spot in assuring a second strike capability was the lack of reliable communications. At the time we didn’t know how to build a communication system that could survive even collateral damage from enemy weapons. RAND determined through computer simulations that the AT&T Long Lines telephone system, which carried essentially all the Nation’s military communications, would be cut apart by relatively minor physical damage. While essentially all of the links and the nodes of the telephone system would survive, a few critical points of this very highly centralized analog telephone system would be destroyed by collateral damage alone from missiles directed at air bases, and the system would collapse like a house of cards. This rendered critical long distance communications unlikely. Well, what about high frequency radio, i.e. the HF or short wave band? The problem here is that a single high altitude nuclear burst destroys sky wave propagation for hours. While propagation would continue via the ground wave, the sky wave badly needed for long distance radio would not function, reducing usable radio ranges to a few tens of miles.
The fear was that our communications were so vulnerable that each missile base commander would face the dilemma of either doing nothing in the event of a physical attack, or taking action that would mean an all out irrevocable war. A communications system that could withstand attack was needed that would allow reduction of tension at the height of the Cold War.
Broadcast Station Distributed Teletypewriter Network, 1960
At that time the expressed concern was for a system able to support Minimum Essential Communications -- a euphemism for the President authorizing a weapons launch.
In 1960 I proposed using broadcast stations as the links of a network. Broadcast stations during the daytime depend solely on the ground wave, and so are not subject to the loss of the sky wave. This is the reason that AM broadcast stations have such a short range during the day. I was able to demonstrate using FCC station data that there were enough AM broadcast stations in the right locations and of the right power levels to allow signals to be relayed across the country. I proposed a very simple protocol: just flood the network with the same message.
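
A minimal sketch of what such a flooding relay could look like, assuming each station simply passes along any message it has not heard before. The duplicate-suppression step, the station names and the links are illustrative assumptions; the interview only describes flooding the same message through the stations.

```python
# Sketch of the "just flood the network" idea: each station relays a message
# to every neighboring station within ground-wave range the first time it
# hears it. Suppressing already-heard copies is an assumption added here so
# the flood terminates; station names and links are purely illustrative.

from collections import deque

def flood(adjacency, origin):
    """Return the set of stations eventually reached by simple flooding."""
    heard = {origin}
    queue = deque([origin])
    while queue:
        station = queue.popleft()
        for neighbor in adjacency.get(station, ()):
            if neighbor not in heard:      # relay only if not already heard
                heard.add(neighbor)
                queue.append(neighbor)
    return heard

stations = {
    "Washington": ["Pittsburgh"],
    "Pittsburgh": ["Washington", "Chicago"],
    "Chicago": ["Pittsburgh", "Omaha"],
    "Omaha": ["Chicago"],
}
print(flood(stations, "Washington"))  # the message reaches every station
```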
When I took this briefing around to the Pentagon and other parts of the defense establishment, I received the objection that it didn’t fix the military’s problem: “OK, a very narrow band capacity may take care of the President issuing the orders at the start of a war, but how do you support all the other important communications requirements that you need to operate the military during such a critical time?”
High Data Rate Distributed Communications, 1961 - 64
The response was unambiguous. What I proposed wouldn’t fully hack it. So it was “back to the drawing board” time. I started to examine which military communications needs were regarded as essential, by reading reports on the subject and asking people at various military command centers. The more that I examined the issues, the longer the list grew. So I said to myself, “As I can’t figure out what essential communications are needed, let’s take a different tack. I’ll give those guys so much damn bandwidth that they wouldn’t know what in Hell to do with it all.” In other words, I viewed the challenge to be the design of a secure network able to send signals over a network being cut up, and yet have the signals delivered with perfect reliability. And, with more capacity than anything built to date. When one starts a project, aim for the moon. Reality will cut you back later. But if you don’t aim high at the outset you can never advance very far.
Why Digital? Why Message Blocks?
I knew that the signals would have to find their way through surviving paths, which would mean a lot of switching through multiple tandem links. But at that time long distance telephone communications systems transmitted only analog signals. This placed a fundamental restriction on the number of tandem-connected links that could be used before the voice signal quality became unusable. A telephone voice signal could pass through no more than about five independent tandem links before it would become inaudible. This ruled out analog transmission in favor of digital transmission. Digital signals have a wonderful property: as long as the noise is less than the signal’s amplitude, it is possible to reconstruct the digital signal without error.
The future survivable system had to be all-digital. At each node the digital signal would be verified as correctly received by the next node, and if not, the signal would be retransmitted. As one day the network would also have to carry voice as well as teletypewriter and computer data, all traffic would be in the same form – bits. All analog signals would first be digitized. To keep the delay times short, the digital stream would be packaged into small message blocks, each with a standardized format. Work on time division multiplexing of digital telephone signals was at an early stage at Bell Labs. Their experimental equipment used a data rate of about 1.5 Megabits/sec. I then started with the premise that it would be feasible to use digital transmission at 1.5 Megabits/sec, at least for short distances, since the signals could be reconstructed at each node. A big problem blocking long distance digital transmission was transmission jitter buildup. Every mile a repeater amplifier chopped the tops off the wave and reconstituted a clean digital signal. But noise caused a cumulative shifting of the zero crossing points. This limited the span distance. I thought that a node terminating each link in a non-synchronous manner should effectively clean up the accumulated jitter. This would provide a de facto way of achieving long distances by such jitter cleanup. And I felt that if that didn’t work, then our fallback technology would be the use of extremely cheap microwave links, feasible in this noise-margin-tolerant application.
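
As a rough sketch of the kind of standardized message block just described, the hypothetical structure below carries the source, destination and a handover count as housekeeping data alongside the digitized payload. The field names, sizes and relay method are illustrative assumptions, not the format specified in the RAND Memoranda.

```python
# Hypothetical rendering of a standardized message block: housekeeping data
# (source, destination, handover count) plus a payload of bits. A node keeps
# a copy until the next node acknowledges correct receipt; on relay, the
# handover count is incremented.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MessageBlock:
    source: int          # originating station address
    destination: int     # final destination address
    handover_count: int  # number of times the block has been retransmitted
    payload: bytes       # digitized voice, teletypewriter or computer data

    def relayed(self) -> "MessageBlock":
        """The copy sent onward by an intermediate node."""
        return replace(self, handover_count=self.handover_count + 1)

block = MessageBlock(source=3, destination=17, handover_count=0,
                     payload=b"digitized speech sample")
print(block.relayed().handover_count)  # 1
```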
On Parallelism
By this time it was beginning to become clear that the new system’s overall reliability would be significantly greater than the reliability of any one component. Hence I could think in terms of building the entire system out of cheap parts – something previously inconceivable in the all-analog world.
Hochfelder:
Because it is in parallel?
Baran:
Yes. In parallelism there is strength. Many parts must fail before no path can be found through the network. It took a redundancy level of only about three times the theoretical minimum to build a very tough network. If you didn’t have to worry about enemy attacks, then a redundancy level of about 1.5 would suffice to build a very reliable network out of very inexpensive and unreliable parts. And, it would later be shown that it would be possible to reduce the cost of communication by almost two decimal orders of magnitude. The saving in part came from being able to design the long distance transmission systems as links of a meshed network with alternative paths, without the huge fade margins needed when all the links are connected in tandem. With analog transmission every link of the network must be “gold plated” to achieve reliability.
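
The survivability claim can be illustrated with a toy Monte Carlo experiment along the lines below, assuming a grid mesh whose link count is roughly four times the minimum needed for connectivity and a purely random pattern of node destruction. The topology, failure model and grid size are assumptions for illustration only, not the RAND simulations.

```python
# Toy experiment on "in parallelism there is strength": build a redundant grid
# mesh, destroy half of the nodes at random, and see what fraction of the
# survivors can still reach one another. The 18x18 eight-neighbor grid (about
# four times the minimum number of links) and the random-failure model are
# illustrative assumptions, not the original RAND study.

import random
from collections import deque

def grid_mesh(n):
    """n x n grid in which every node links to all of its (up to) eight neighbors."""
    adj = {(i, j): [] for i in range(n) for j in range(n)}
    for i, j in adj:
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if (di, dj) != (0, 0) and (i + di, j + dj) in adj:
                    adj[(i, j)].append((i + di, j + dj))
    return adj

def largest_component_fraction(adj, survivors):
    """Fraction of surviving nodes contained in the largest connected piece."""
    if not survivors:
        return 0.0
    unseen, best = set(survivors), 0
    while unseen:
        queue, size = deque([unseen.pop()]), 1
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr in unseen:
                    unseen.remove(nbr)
                    queue.append(nbr)
                    size += 1
        best = max(best, size)
    return best / len(survivors)

random.seed(1)
mesh = grid_mesh(18)
survivors = [node for node in mesh if random.random() > 0.5]  # ~50% destroyed
print(f"{largest_component_fraction(mesh, survivors):.2f} of survivors still connected")
```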
Hot-Potato Routing
A key element of the concept was that it would be necessary to keep a “carbon copy” of each message block, using computer technology, until the next station successfully received the message. The next challenge was to find a way for the packets to seek their own way through the network. This meant that some implicit path information had to be contained as housekeeping data within the message block itself. The housekeeping includes data about the source and destination of the packet, together with an implied time measurement such as the number of times the message block had been retransmitted. This small amount of information allowed creation of an algorithm that did a very effective job of routing dynamically changing traffic, always finding the best instantaneous path through the network.
Basic Concepts Underlying Packet Switching, 1960
I had earlier discovered that very robust networks could be built with only modest increases in redundancy over that required for minimum connectivity. And then it dawned on me that the process of resending defective or missing packets would allow the creation of an essentially error-free network. Since it didn’t make any difference whether a failure was due to enemy attack or poorly reliable components, it would be possible to build systems where the system reliability is far greater than the reliability of any of its parts. And, even with inexpensive components, a super-reliable network would result.
Another interesting characteristic was that the network’s learning property would allow users to move around the network, with each person’s address following them. This would allow separating the physical address from the logical address throughout the network, a fundamental characteristic of the Internet.
Another thing that I learned was that in building self-learning systems it is as important to forget as it is to learn. For example, when you destroy parts of a network, the network must quickly adapt to routing traffic entirely differently. I found that using two different time constants, one for learning and the other for forgetting, provided the balanced properties desired. And, I found it helpful to view the network as an organism, as it had many of the characteristics of an organism in the way it responds to overloads and sub-system failures.
Dynamic Routing, 1961
I first thought that it might be possible to build a system capable of smart routing through the network after reading about Shannon’s mouse-through-a-maze mechanism. But instead of remembering only a single path, I wanted a scheme that not only remembered, but also knew when to forget if the network was chopped up. It is interesting to note that the early simulation showed that after the hypothetical network was 50% instantly destroyed, the surviving pieces of the network reconstituted themselves within half a second of real-world time and again worked efficiently in handling the packet flow.
Hochfelder:
How would the packets know how to do that?
Baran:
Through the use of a very simple routing algorithm. Imagine that you are a hypothetical postman and mail comes in from different directions: North, South, East and West. You, the postman, would look at the cancellation dates on the mail from each direction. If, for example, our postman was in Chicago, mail from Philadelphia would tend to arrive from the East with the latest cancellation date. If the mail from Philadelphia had arrived from the North, South, or West, it would arrive with an older cancellation date, because it would have had to take a longer route (statistically). Thus, the preferred direction to send traffic to Philadelphia would be out over the channel connected from the East, as mail from that direction carried the latest cancellation dates. Just by looking at the time stamps on traffic flowing through the post office you get all the information you need to route traffic efficiently.
Each hypothetical post office would be built the same way. And each would have a local table that recorded the statistics of traffic flowing through the post office. With packets, it was easier to increment a count in a field of the packet than to time stamp. So, that is what I did. It’s simple and self-learning. And when this “handover number” got too big, then we knew that the end point was unreachable and dropped that packet so that it didn’t clutter the network.
Hochfelder:
Always searching for the shortest path.
Baran:
Yes, that is the scheme. We needed a learning constant and a forgetting constant, as no single measurement could be completely trusted. The forgetting constant also allows the network to respond to rapidly varying loads from different places. If the instantaneous load exceeded the capacity of the links, then the traffic was automatically spread through more of the network. I called this doctrine “Hot Potato Routing.” These days this approach is called “Deflection Routing.” By the way, the routing doctrine used in the Internet differs from the original Hot Potato approach, and is the result of a large number of improvements over the years.
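
A compact sketch of the doctrine described above: each node keeps, per destination and per link, a smoothed estimate of the handover counts seen on arriving packets; the link with the lowest estimate is preferred, and traffic deflects to the next-best link when that one is busy. The exponential-smoothing form of the learning and forgetting constants, the numeric values and the class layout are illustrative assumptions rather than the original RAND algorithm.

```python
# Sketch of handover-number ("hot potato") routing. A packet from Philadelphia
# arriving on the east link with a low handover count is evidence that east is
# the short way toward Philadelphia; estimates are smoothed (learning) and
# aged back toward "unknown" (forgetting) so the table adapts when links die.

LEARN = 0.3     # learning constant: weight given to each new observation
FORGET = 0.02   # forgetting constant: per-tick drift back toward "unknown"
UNKNOWN = 99.0  # estimate used for links with no recent evidence

class Node:
    def __init__(self, links):
        self.links = list(links)
        self.table = {}  # table[destination][link] -> smoothed handover estimate

    def observe(self, source, arrival_link, handover_count):
        """Learn from the housekeeping data of a packet passing through."""
        est = self.table.setdefault(source, {l: UNKNOWN for l in self.links})
        est[arrival_link] += LEARN * (handover_count - est[arrival_link])

    def forget(self):
        """Age every estimate so stale routes fade after damage or rerouting."""
        for est in self.table.values():
            for link in est:
                est[link] += FORGET * (UNKNOWN - est[link])

    def route(self, destination, busy=()):
        """Best known outgoing link, deflecting around busy links if needed."""
        est = self.table.get(destination, {l: UNKNOWN for l in self.links})
        candidates = [l for l in self.links if l not in busy] or self.links
        return min(candidates, key=lambda l: est[l])

node = Node(["north", "south", "east", "west"])
node.observe(source="Philadelphia", arrival_link="east", handover_count=3)
node.observe(source="Philadelphia", arrival_link="north", handover_count=7)
print(node.route("Philadelphia"))                 # 'east'
print(node.route("Philadelphia", busy={"east"}))  # deflected to 'north'
```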
Basic Properties of Packet Switching, 1960 - 62
The term “packet switching” was first used by Donald Davies of the National Physical Laboratory in England, who independently came up with the same general concept in November 1965.
Essentially all the basic concepts of today’s packet switching can be found described either in the 1962 paper or in the August 1964 RAND Memoranda, in which such key concepts as the virtual circuit are described in detail.
The concept of the “virtual circuit” is that the links and nodes of the system are all free, except during those instances when actually sending packets. This allows a huge saving over circuit switching, because 99 percent of the time nothing is being sent so the same facilities can be shared with other potential users.
Then there is the concept of “flow control”, which is the mechanism to automatically prevent any node from overloading. All the basic concepts were worked out in engineering detail in a series of RAND Memoranda (between 10 and 14 volumes, depending on how they are counted). What resulted was a realization that the system would be extremely robust, with the end-to-end error rate essentially zero, even if built with inexpensive components. And it would be very efficient in traffic handling in comparison to the circuit-switching alternative.
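
The saving from the virtual-circuit idea can be illustrated with a quick calculation along the lines below, taking the “99 percent of the time nothing is being sent” remark at face value and assuming, purely for illustration, that users transmit independently (a simple binomial model).

```python
# Quick illustration of the statistical sharing behind the virtual circuit: if
# each of 1000 users transmits only 1% of the time (independently, by
# assumption), the chance that more than a few dozen transmit at once is tiny,
# so far fewer shared channels than users are needed. Circuit switching would
# tie up one channel per user for the whole session.

from math import comb

def prob_more_than(k, n, p):
    """P(more than k of n independent users are transmitting at the same time)."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

users, activity = 1000, 0.01
for shared_channels in (10, 20, 30):
    overflow = prob_more_than(shared_channels, users, activity)
    print(f"{shared_channels} shared channels -> overflow probability {overflow:.4f}")
```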
Economic Payoff Potential Versus Perceived Risks
This combination of economy and capability suggested that, if built and maintained at a cost of $60,000,000 (1964 dollars), it could handle the long distance telecommunications within the Department of Defense that were then costing the taxpayer about $2 billion a year.
At the time, the claimed saving in cost was so great that it made the story intuitively unbelievable. It violated the common sense instincts of the listener, who would say in effect: “If it were ever possible to achieve such efficiencies, the phone company (AT&T) would have done it already.”
Another understandable objection was “This couldn’t possibly work. It is too complicated.” This perception was based on the common view, correct at the time, that computers were big, taking up large glass-walled rooms, and were notoriously unreliable. When I said that each switching node could be a shoebox-sized unit with the required computer capabilities, many didn’t believe it. (I had planned on doing everything in miniaturized hardware in lieu of using off-the-shelf minicomputers.) So I had the burden of proof, to define the small box down to the circuit level to show that it could indeed be done.
Another issue was the separation of the transmission network from the analog-to-digital conversion points. This is described in Vol. 8 of the ODC series. This RAND Memorandum describes in detail how users are connected to the switching network. The separate unit that is described connects up to 1024 users and converts their analog signals into digital signals. This included voice, teletypewriters, computer modems, etc. One side of the box would connect to the existing analog telephones, while the other side, which was digital, would connect to the switching network, preferably at multiple points to eliminate a single point of failure.
This constant increase in the desire for engineering details caused a great deal of paper to be written at the time, cluttering up the literature. On a positive note, it left us with a very detailed description of the packet switching proposed at that time. This record has been helpful in straightening out some of the later misrepresentations of who did what and when, as found in the popular press’s view of history.
Opposition and Detailed Definition Memoranda, 1961+
The enthusiasm that this early plan encountered was mixed. I obtained excellent support from RAND (after a cool and cautious initial start). Others, particularly those from AT&T (the telephone monopoly at the time), objected violently. Many of the objections were at the detail level, so the burden of proof was then on me to provide proposed implementation descriptions at an ever finer level of detail. Time after time I would return with increasingly detailed briefing charts and reports. But each time I would hear the mantra, “It won’t work because of (some new objection).” I gave the briefing in many places: to various government agencies, to research laboratories, to commercial companies, but primarily to the military establishment. I gave the briefing at least 35 times. It was hard for a visitor with an interest in communications to visit RAND without being subjected to a presentation. My chief purpose in giving these presentations so broadly was that I was looking for reasons that it might not work. I wanted to be absolutely sure that I hadn’t overlooked anything that could affect workability. After each encounter where I could not answer the questions quantitatively, I would go back and study each of the issues raised and fill in the missing details. This was an iterative process constituting a wire brush treatment of a wild set of concepts.
In fairness, much of the early criticism was valid. Of course the burden of proof belongs to the proponent. Among the many positive outcomes of the exercise were that: 1) I was building a better understanding of the details of such new systems; 2) I was building a growing degree of confidence in the notions; and 3) I had accumulated a growing pile of paper, including simulation data, to support the idea that the system would be self-learning and stable.
Broad Open Publication, 1964
Most of the work was done in the period 1960-62. As you can imagine, old-era analog transmission engineers were unable to understand in detail what was being contemplated. And, not understanding, they were negative and intuitively believed that it could not possibly work. However, I did build up a set of great supporters as I went along. My most loyal supporters at RAND included Keith Uncapher, my boss at the time, and Paul Armer and Willis Ware, co-heads of the Computer Science Department. RAND provided a remarkable degree of freedom to do this controversial work, and supported me in external disagreements. By 1963 I felt that I had carried this work about as far as appropriate to RAND (which some jokingly say stands for “Research And No Development”). And, as I had completed the bulk of my work, I began wrapping up the technical development phase, publishing in 1964 the set of memoranda that had primarily been written on airplanes in the 1960 to 1962 era. There were some revisions in 1963, and the RAND Memoranda came out in 1964. I continued to work on some of the non-technical issues and gave tutorials in many places, including summer courses at the University of Michigan in 1965 and 1966.
In May 1964 I published a paper in the IEEE Communications Transactions which summarizes the work and provides a pointer to each of a dozen volumes of RAND Memoranda for the serious reader who wanted the backup material. Essentially all this work was unclassified, in the belief that we would all be better off if the fate of the world relied on more robust communications networks. Only two of the first twelve Memoranda were classified. One dealt with cryptography and the other with weak spots that were discovered and the patches to counter them. A thirteenth classified volume was written in 1965 by Rose Hirshfield on the real-world geographical layout of the network. And there was a fourteenth, co-authored with Dr. Rein Turn, describing a secure telephone that could be used with the system; it had possible applications outside of the network and so wasn’t included in the numbered series.
Receiving the Word
Getting a new idea out to a larger audience is always challenging, perhaps more so if it is a departure from the classical way of doing things. The IEEE Spectrum, which is sent to all IEEE members, picked up the article in its “Scanning the Transactions” section. I looked to this short summary to be a pointer to the IEEE Transactions article for those who didn’t normally read the Communications Society Transactions. That article in turn pointed to the RAND Memoranda, readily available either from RAND or its depositories around the world. In those days RAND publications were mailed free to anyone who requested a copy.
But no matter how hard one tries, it seems that it is impossible to get the word out to everyone. This is not a novel problem. And it contributes to duplicative research, made more common by the reluctance of some to take the time to review the literature before proceeding with their own research. Some even regard reviewing the literature as a waste of time. I was surprised many years later to find a few key people in closely related research say that they were totally unaware of this work until many years later. I recall describing the system in detailed discussions, only to find out at a later date that the listener had completely forgotten what was said, and didn’t receive his epiphany until much later, ostensibly through a different channel.
Conceptual Gap Between Analog and Digital Thinking
The fundamental hurdle in acceptance was whether the listener had digital experience or knew only analog transmission techniques. The older telephone engineers had problems with the concept of packet switching. On one of my several trips to AT&T Headquarters at 195 Broadway in New York City I tried to explain packet switching to a senior telephone company executive. In mid-sentence he interrupted me: “Wait a minute, son. Are you trying to tell me that you open the switch before the signal is transmitted all the way across the country?” I said, “Yes sir, that’s right.” The old analog engineer looked stunned. He looked at his colleagues in the room while his eyeballs rolled up, sending a signal of his utter disbelief. He paused for a while, and then said, “Son, here’s how a telephone works….” And then he went on with a patronizing explanation of how a carbon button telephone worked. It was a conceptual impasse.
On the other hand, the computer people over at Bell Labs in New Jersey did understand the concept. That was insufficient. When I told the AT&T Headquarters folks that their own research people at Bell Labs had no trouble understanding and didn’t have the same objections as the Headquarters people, their response was, “Well, Bell Labs is made up of impractical research people who don’t understand real world communication.”
Willis Ware of RAND tried to build a bridge early in the process. He knew Dr. Edward David, Executive Director of Bell Labs, and he asked for help. Ed set up a meeting at his house with the chief engineer of AT&T and myself to try to overcome the conceptual hurdle. At this meeting I would describe something in language familiar to those who knew digital technology. Ed David would translate what I was saying into language more familiar in the analog telephone world (he practically used Western Electric part numbers) for our AT&T friend, who responded in a like manner. Ed David would then translate it back into computer nerd language.
I would encounter this cultural impasse time after time between those who were familiar only with the then state of the art of analog communications – highly centralized circuit switching with highly limited intelligence – and myself, talking about all-digital transmission, smart switches and self-learning networks. But, all through this process of erosion, more and more people came to understand what was being said. The base of support strengthened in RAND, the Air Force, academia, government and some industrial companies – and parts of Bell Labs. But I could never overcome the objections of AT&T Headquarters, which at that time had a complete monopoly on telecommunications. It would have been the perfect organization to build the network. Our initial objective was to have the Air Force contract the system out to AT&T to build the network, but unfortunately AT&T was dead set against the idea.
Hochfelder:
Were there financial objections as well?
Baran:
AT&T Headquarters Lack of Receptivity
Possibly, but not frontally. They didn’t want to do it for a number of reasons and dug their heels in looking for publicly acceptable reasons. For example, AT&T asserted that there were not enough paths through the country to provide for the number of routes that I had proposed for the national packet-based network, but refused to show us their route maps. (I didn’t tell them that someone at RAND had already acquired a back-door copy of the AT&T maps containing the physical routes across the US, since AT&T refused to voluntarily provide these maps, which were needed to model collateral damage to the telephone plant from attacks on the US strategic forces.) I told AT&T that I thought that they were in error and asked them to please check their maps more carefully. After a month’s delay in which they never directly answered the question, one of their people responded by grumbling, “It isn’t going to work, and even if it did, damned if we are going to put anybody in competition to ourselves.”
I suspect the major reason for the difficulty in accommodating packet switching at the digital transmission level was that it would violate a basic ground rule of the Bell System -- everything added to the telephone system had to work with all previous equipment presently installed. Everything had to fit into the existing plan. Nothing totally different could be allowed except as a self-contained unit that fit into the overall system. The concept of long distance all-digital communications links connecting small computers serving as switches represented a totally different technology and paradigm, and was too hard for them to swallow. I can understand and respect that reason, but can also appreciate the later necessity for divestiture. Competition better serves the public interest in the longer term than a monopoly, no matter how competent and benevolent that monopoly might be. There is always the danger that the monopoly can be in error, and there is no way to correct this.
On Bell Labs' Response
While the folks at AT&T Headquarters violently opposed the technology, there were digitally competent people at Bell Labs who appreciated what it was all about. One of the mysteries that I have never figured out is why, after packet switching was shown to be feasible in practice and many papers had been published by others, it took so many years before papers on packet switching emerged from Bell Labs.
The first paper on the subject that I recall being published in the Bell System Technical Journal was by Dr. John Pierce. This paper described a packet network made up of overlapping Ballantine rings. It was a brilliant idea, and his architecture is used in today’s ATM systems.
Hochfelder:
What is a Ballantine ring?
Baran:
Have you ever seen the Ballantine Beer logo? It is made up of three overlapping rings. Since a signal can be sent in both directions on a loop, no single cut of the loop need stop communications, as traffic can flow from the other direction. Because the signal can go both ways, any single cut can be tolerated without loss, allowing time for repair. It is a powerful idea.
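To make that robustness concrete, here is a minimal sketch in Python. It is my own illustration for this write-up, not Pierce's design: it cuts each link of a small bidirectional ring in turn and checks that every node can still reach every other node the long way around.

```python
# A minimal sketch (not Pierce's actual architecture): on a
# bidirectional ring, removing any single link still leaves every
# pair of nodes connected, because traffic can flow the other way.
from collections import deque

def ring_links(n):
    """Return the links of an n-node bidirectional ring."""
    return {(i, (i + 1) % n) for i in range(n)}

def reachable(n, links, start):
    """Breadth-first search over undirected links."""
    adj = {i: set() for i in range(n)}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen

n = 8
for cut in ring_links(n):                  # try cutting each link in turn
    surviving = ring_links(n) - {cut}
    assert reachable(n, surviving, 0) == set(range(n))
print("every single-link cut leaves the ring fully connected")
```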
The RAND Formal Recommendation to the Air Force, 1965
In 1965 the RAND Corporation issued a formal Recommendation to the Air Force (which it does very rarely) for the Air Force to proceed to build the proposed network. The Air Force then asked the MITRE Corporation, a not-for-profit organization that worked for the government, to set up a study and review committee. The Committee, after independent investigation, concluded that the design was valid, that a viable system could be built, and that the Air Force should immediately proceed with implementation.
As the project was about to launch, the Department of Defense said that since this system was to be a national communications system, it would, in accordance with the Defense Reorganization Act of 1949 (finally being implemented in 1965), fall into the charter of the Defense Communications Agency.
The choice of DCA would have been fine years later when DCA was more appropriately staffed. But at that time the DCA was a shell organization staffed by people who lacked strength in digital understanding. I had learned through the many briefings I had given to various audiences that there was an impenetrable barrier to understanding packet switching by those who lacked digital experience.
Putting the Program on Ice
At RAND I was essentially free to work on anything that I felt to be of most importance to national security. This allowed me, for example, to serve on various ad hoc DDR&E (Director of Defense Research and Engineering) committees. I sometimes consulted with Frank Eldridge in the Comptroller’s Office of the Department of Defense, helping him to review items in the command and control budgets submitted by the services. Frank Eldridge was an old RAND colleague initially responsible for the project on the protection of command and control. He was among the strongest supporters of the work that I was doing on distributed communications. He had gone over to the Pentagon, working with McNamara’s “whiz kids.” Frank Eldridge had undergone many of the same battles with AT&T and understood the issues of the RAND, and thence Air Force, proposal.
Approval of the money for the Defense Communications Agency (DCA) to undertake the RAND distributed communications system development was under Frank Eldridge’s responsibility. Both Frank and I agreed that DCA lacked the people at that time who could successfully undertake this project and would likely screw up the program. An expensive failure would make it difficult for a more competent agency to later undertake the project. I recommended that this program not be funded at that time and that it be quietly shelved, waiting for a more auspicious opportunity to resurrect it.
The Cold War at this time had cooled from loud threats of thermonuclear warheads to the lower level of surrogate small wars. And, we were bogged down in Viet Nam.
Some Other Projects at RAND
The Doorway Gun Detector, 1964
By 1963-4 I had completed most of my work on the packet switching technology and was engaged in a number of other activities at RAND. In 1964 I came up with the idea of the doorway gun detector (like the ones used in airports today) and built one together with Dr. Harold Steingold at RAND. Unlike packet switching, this was a classified project at the RAND Corporation. No stray metallic object other than a gun was to set off the alarm. But every real gun should be detected and pinpointed to the part of the body where the gun was concealed. The objective of the design was to minimize false alarms, which would be regarded as an unacceptable embarrassment should they occur.
We received excellent cooperation from the Los Angeles Police Department at the time. If I recall correctly, our contact with them was through Inspector Gates, who arranged to let us borrow for test purposes a large collection of concealed weapons captured by the Los Angeles Police Department over the previous year. These weapons ranged from small Derringer pistols to sawed-off shotguns. This provided us with a sample of the weapons that we wished to detect. And it provided a test set to be sure that we didn’t miss any weapon types.
My objective was to build something totally surreptitious, unlike the doorway gun detectors used at airports these days. Our first gun detector (essentially a simple metal detector) was put together within a few weeks. But it took about six months to come up with a method of detecting the signature of a gun without triggering an alarm for pocket knives and other normally carried metal objects. And the device could have an indicator arrangement to help locate where on the body the gun was hidden.
In the next two years the hijacking of airplanes to Cuba grew rapidly in frequency and had become a significant problem. As those incidents kept increasing, I suggested through RAND, in a letter to the FAA, that they consider the use of doorway gun detectors, which we deemed to be feasible technology. We then sent the FAA the technical details. They apparently followed that guidance; we soon saw doorway gun detectors in the airports, and the hijackings essentially dropped to near zero.
It’s interesting that all the effort put into developing signatures for the separation of guns turned out to be counterproductive. The FAA found it preferable to stop and search any person carrying any metal at all including knives and other objects. Instead of surreptitious monitoring, what worked best was the very opposite, announcing to everyone that they are being watched. Personally I never thought the public would stand for it. But I was wrong. Only the simplest of metal detection devices were needed. I read somewhere that there are three thousand people a year arrested at those airport metal detectors. So many disputes occur at the metal detectors that a security post is generally located nearby.
Gun Barrel Marking
Following my interest in the misuse of guns, I proposed a technique to uniquely identify the source of spent bullets at a crime scene. Every rifle leaves a unique impression on its bullet. If every rifle manufactured were intentionally inscribed with a set of binary scratches along its rifling, it would uniquely mark each bullet; and if a test bullet were fired and the marks read, then each bullet found at a possible crime scene could immediately identify the rifle and its history. A somewhat related concept, now being tested with the cooperation of gun manufacturers, appears to be underway as part of the Federal gun control program.
Computer Privacy
I was the first computer expert to testify to Congress on the coming threat to computer privacy. I had delivered a keynote speech at the Fall Joint Computer Conference in 1965(?) describing this coming problem. Reprints of my paper were widely circulated, and a copy ended up with Congressman Cornelius Gallagher. Gallagher was about to initiate hearings into a proposed National Data Bank, and he extended the scope of the hearing to bring in the larger problem that occurs when many personal files are tied together using the Social Security number, permitting the building up of a dossier. Other hearings then followed. I gave many talks over the next two years around the country on this potential problem. Having contributed to sowing the seed, and being delighted at the number of competent legal and computer people joining the discussion, I felt my part was done and tapered out.
1967- RAND, Redefinition of National Security
The RAND Corporation is a not-for-profit organization, initially set up to preserve the Air Force operations research capability created in World War II. It is chartered to work on behalf of national security. But by the mid-1960’s the definition of “national security” was changing. The Watts riots, civil disobedience, and increasingly violent anti-war behavior growing on the campuses were new threats to social stability. Thus, as we went forward in time, the definition of national security became increasingly internal relative to international security issues.
Some colleagues and I actively worked to broaden the definition of RAND’s national security charter to include other dimensions of security, including problems of social unrest and law and order issues. Over time this area of research has been broadened so that now about 50% of RAND’s work is on social issues such as aging and health delivery economics, with the remainder continuing to support defense activities.
ARPANET History Sources, 1966+
As the history of the ARPANET is so well described in the literature I won’t cover it here. In brief, the ARPANET was proposed by Robert Taylor of the ARPA IPTO office as a network to connect terminals to multiple computers. Dr. Lawrence G. Roberts was selected to lead the project. He chose to use packet switching in lieu of circuit switching, and as a result the network had far more powerful features than initially considered. My relationship with the ARPANET came about through its choice of packet switching.
I have been embarrassed on occasion by people improperly giving me credit for creating the ARPANET. Of course I did not create the ARPANET or the Internet. Yes, I do seem to have invented packet switching as far as I can tell. And, yes, packet switching was used in the ARPANET and in the Internet. And, yes, packet switching did give the Internet some of its novel properties. But I only did this one piece of the underlying technology. Another person [Davies] came along later and independently came up with much of the same stuff, so I don’t feel that I deserve any excessive credit.
The most accurate sources of information describing this portion of the history of the ARPANET and the Internet and my relation to this work have been prepared by competent historians. The four best books on the history of the ARPANET and the Internet (where the authors have taken the time to review the contemporaneous documentation) are in my opinion:
1. Arthur L. Norberg and Judy E. O’Neill, Transforming Computer Technology: Information Processing for the Pentagon, 1962-1986, Johns Hopkins University Press, 1996.
2. George Dyson, Darwin Among the Machines: The Evolution of Global Intelligence, Addison-Wesley, 1997.
3. Janet Abbate, Inventing the Internet, MIT Press, 1999.
4. John Naughton, A Brief History of the Future: The Origins of the Internet, Weidenfeld & Nicolson, London, 1999, p. 283.
I was pleased to see that Janet Abbate, who did a fine job, had worked at the IEEE History Center. This is an indication to me of the value that the History Center serves. And the Charles Babbage Institute, where Arthur Norberg and Judy O’Neill reside, is another place of excellence in this field.
Hochfelder:
Right.
Baran:
Having said that, I believe that there is a growing problem of the popular media interviewing only individuals who believe that history started the day they were first exposed to the field. The news reporter with a short deadline, a predetermined story line, and a cute title to maximize the audience unfortunately creates Oliver Stone-type views of history. Going back to unravel the inevitable mess and the piling up of later work on top of earlier work takes time and effort, a role that places like the Charles Babbage Institute and the IEEE History Center serve so well. I was tempted to add an appendix to this “oral” history to reconcile the different views, but this paper is too long already. I have to say that contemporaneous documentation is the only data that a historian can completely trust in this domain.
In any field no one ever starts from scratch. There are always predecessor activities that form the foundation for later work. Vol. V of the ODC series attempted to describe all known (by me) alternative approaches suggested up to 1963.
Hochfelder:
Is this (Vol. V) a RAND publication?
Baran:
Yes. It is one of the ODC series on line at the RAND web site. I believe that anyone contemplating research in a field should always first review the literature in that field, particularly if being paid by taxpayer funds. This minimizes research that duplicates prior work, and that is why Vol. V was written. While Vol. V lists relevant work going on up to the 1963 cutoff date, I am afraid that there are gaps in the literature from about that period up to the early start of the ARPANET, which is excellently documented.
I vaguely recall work in Canada and Japan that doesn’t seem to have made it into widely available literature. I don’t recall the details. Maybe it will be lost in history.
Institute For the Future, 1967+
A number of us at RAND were struck by the realization that the lead time for effecting the solution of the major problems facing the country was far longer than the time before the problems needed to be solved. For example, there were major questions just beginning to be raised about global change. Were we moving into another ice age, or would carbon dioxide buildup lead to global warming? And what might we do that would be economically feasible today to deal with these problems tomorrow? We lacked the tools to even talk intelligently about these longer-term issues.
“Could we do a better job of very long range forecasting?” Long range forecasting has historically been the domain of the soothsayer and the entrails reader, a highly disrespectable business at best. There were few fields of lower repute.
Our objective was to start by considering the basic process and methodology of longer range forecasting. We sought to consider likely futures more realistically, and earlier in time. We obtained a small grant from the Ford Foundation to think about these issues. We soon became concerned about doing this work at RAND, as RAND had a great reputation guarded by a powerfully effective review mechanism. What we planned to do could easily come to nothing. We didn’t know what would work and what wouldn’t. In the interest of prudence, we chose to create an entirely new not-for-profit organization not in any way connected to RAND, so that if the project was a failure (and the probability of this was high), the venture could be quietly buried without leaving a negative mark on RAND’s reputation.
Hochfelder:
Right.
Baran:
We then set up the Institute for the Future, initially at Wesleyan University in Middletown, Connecticut, with a few people from RAND and from other places as well. Its initial President was Frank Davidson, a lawyer whose main interest was forming a consortium to build a tunnel under the English Channel. (As we know, he finally did it, decades later.) He maintained an office in New York. And we had a Vice President, Arnold Kramisch, a former RAND physicist who kept an office in Washington. Lacking credibility, we first gathered a distinguished Board of Trustees to provide enough instant respectability to allow us to be eligible for foundation, government and business support.
I initially wore two hats there, as a Senior Fellow doing research and as Treasurer, as I was suspected of being the only one in the group who had ever balanced a checkbook. The organization groped to find out what worked and what didn’t with regard to very long range forecasting. The plan was to conduct studies for government, for industry and for foundations, and then go back years later to see which tools worked and which did not, to establish effectiveness. One study that I did together with Andrew Lipinski, formerly of SRI, was a study of the future of the telephone company, sponsored by AT&T, for the period 1970 to 1985. AT&T’s cooperation with the study was excellent, and the study turned out to be surprisingly accurate fifteen years later, suggesting that it really is possible to use very long range planning.
The reader might be interested in why AT&T would ever allow me to work on a study of their future after being at such odds with them five years earlier. Although I strongly disagreed with AT&T with regard to packet switching, in all our discussions we were able to maintain a civil discourse, and I remained friends with many key people there in spite of our major disagreements.
After the Institute was set up in Connecticut I felt that I had accomplished my objective and wished to return to California. Since the AT&T study was not complete I planned to finish it in California and to do so I opened a small office in Menlo Park. Thirty-two years later, the Institute for the Future continues operation on Sand Hill Rd. in Menlo Park.
Forecasting Quality Control, 1967+
One of the early ideas of the Institute was to maintain quality control of forecasts. The major value of a study is to be able to go back to find out what worked and what didn’t. My first attempt at a forecasting paper for this purpose was written in late 1967, shortly before starting the Institute for the Future in 1968. I sent you (Hochfelder) a copy, partially for your amusement and partially to show you what a 32-year-old forecast looks like in retrospect. This paper was presented at the 1967 Annual Meeting of the American Marketing Association and is entitled “Marketing in the Year 2000.”
Thirty-two years ago I forecast that we would be shopping via a TV screen to a virtual department store. I described a number pad used to enter multilevel choices. For example in this paper the hypothetical buyer interested in buying a power drill might enter the virtual store comprising a set of departments. The user would first select the hardware department, which would be seen on the screen. Then, the user would select power tools, and those items would appear on the screen. Next, if the user selected power drills, they would be displayed with their description including price. Assuming that the user was still interested he or she might also select Consumer’s Union to look at the rating of competing tools.
The paper went into the issue of “push” vs. “pull” selection and forecast the technology allowing the world to move toward the “pull” selection process. What is described is very much like Web TV.
Hochfelder:
It’s pretty accurate.
Baran:
Frankly, I too was surprised when I dusted off this old paper to review its predictive accuracy as the Year 2000 would be coming up shortly. I mention this third of a Century old forecast for two reasons. The first is that it suggests that we can do a better job of long range forecasting than we realize. And, secondly that at least some of the applications appearing on today’s Internet were not completely anticipated.
I tapered out of the Institute for the Future over a few-year period, doing a little consulting on and off. One task was a “D-Net.” Much of the work that the Institute was doing at that time was based on pencil-and-paper Delphi studies, invented by Drs. Olaf Helmer and Norm Dalkey of RAND. This is a parallel process in which iterative questionnaires are sent to experts and the feedback is used to focus on areas of disagreement. Helmer’s idea was to try to automate this process on line. The conferencing software that existed at the time was a serial process, as is Robert’s Rules of Order. What was wanted was a parallel approach, including voice conferencing. The original software was done by Richard Miller and Dr. Hubert Lipinski. Later the work was extended by Dr. Jacques Vallee. The next generation was spun out by Hubert Lipinski as a commercial product, which was later sold to Lotus and became Lotus Notes.

Silicon Valley Period

Cabledata Associates: to Create New Technologies and Launch New Companies, 1973+

My plan for the next stage of my life was to develop new communications technology to the home. For example, consider the communications technology required to support the process described in the American Marketing Association paper mentioned above. The idea was to create high technology start-up companies in the new field of digital communications, based on new products made possible by the decline in the cost of semiconductor electronics anticipated by Moore’s Law. Once the initial highest-risk start-up activity was out of the way – with the new products defined, patents filed and demonstration prototypes built – the new company would, according to the plan, be spun out as an independent venture.

We eventually did exactly that. But it was very slow going. Getting started took twice as long as planned. In the mid-70s the nation had a shortsighted tax policy keeping the capital gains rate very high. This was the era of outrageous tax shelters that were far more attractive than venture capital. Capital flowed into non-productive tax shelters to beat the taxes, rather than into honest risk capital, which was taxed so heavily as to be prohibitive.
In this environment, the new company Cabledata Associates initially financed itself by performing study contracts. After the first 40 hours, the facilities and people were free to work toward the Company’s real goals. The initial product goal was to use the TV cable to the house as the delivery vehicle for broadband data transmission. At the outset we knew that this would be totally premature and that we would not be able to get into the cable business for many years. So we intentionally picked the name “Cabledata Associates” to remind ourselves that one day, when the time was ripe, this would be the business we wanted to be in. We just didn’t know when the “right time” would occur.
Our Cabledata Associates group was of remarkable quality, primarily a part-time collection of people from Stanford University – a few faculty members and some graduate students – plus a few permanent staff. Over the years several companies were created and launched, and over the longer term the plan proved to be highly successful economically.
Divestiture of the ARPANET Study, 1974-5
The first contract that the Cabledata Associates group undertook was one for ARPA on the possible divestiture of the ARPANET. In the early 1970’s the Department of Defense didn’t quite know what to do with the ARPANET. It was obvious that one day it would likely evolve into a good business. The government is comfortable with activities that lose money; it didn’t quite know what to do in the case of a potential profit maker.
Among the recommendations of this study was that the then-existing ARPANET continue to be dedicated to research, and that a new parallel network be encouraged to be built using the same ARPANET technology for commercial use. Under the laws at that time the Government owned all intellectual property in development contracts. The early days of the ARPANET were marked by low usage. The concept chosen was that any business starting off should ideally be able to use minimum capital. The software the government paid for should be openly available to all. And rather than having each entity build its own national network, anyone should be allowed to connect to the network provided three conditions were met: 1) common standards used throughout the network, 2) an agreement on an industry standards-enforcing agency in place, and 3) a separations agreement mechanism as used by the telephone industry. A separations agreement is a standard telephone company transfer payment mechanism to allocate revenues in proportion to the traffic generated and delivered, as only the originator of the traffic generates revenues.
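As an illustration of the separations idea, here is a minimal sketch in Python. It is my own simplification for this write-up, not the formula from the study, and the network names, traffic counts and revenue figures are invented.

```python
# Illustrative sketch only, not the actual study's settlement formula:
# the originating network collects all the revenue from its own users,
# and a settlement then redistributes it among the participating
# networks in proportion to the traffic each one carried or delivered.
def settle(revenue_by_origin, traffic_carried):
    """revenue_by_origin: dict net -> revenue it billed its own users.
    traffic_carried: dict net -> traffic it carried for everyone.
    Returns dict net -> settled revenue share."""
    total_revenue = sum(revenue_by_origin.values())
    total_traffic = sum(traffic_carried.values())
    return {net: total_revenue * carried / total_traffic
            for net, carried in traffic_carried.items()}

# Hypothetical example: network A bills its users $100, B bills $20,
# but B carries 40% of the total traffic, so settlement shifts revenue to B.
print(settle({"A": 100.0, "B": 20.0}, {"A": 60, "B": 40}))
# -> {'A': 72.0, 'B': 48.0}
```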
The Cabledata Associates study was primarily written by a group at Stanford University, professors and graduate students covering law, economics, and the technology. The conclusion was that allowing ownership of different portions of the network by different entities reduces the cost of entry for new entrants. Later, multiple ownership would create a competitive environment to accelerate the evolution of common user public packet switching networks. The study concluded that such division of ownership among smaller entities would be feasible – and this, in lieu of a single network owner, is the model of today’s Internet. But it would be decades before we reached this model of ownership fundamental to the Internet.
The results of this study were not what the sponsors of the study, ARPA-IPTO, wished to hear, so this potential direction was ignored. Instead a commercial version of the ARPANET, Telenet, was spun out of BBN, with Larry Roberts, who left ARPA-IPTO at this time, serving as its CEO.
Comprint, Inc., Low Cost Computer Printers
The first of Cabledata Associates’ product developments was a low-cost, high-speed (120 characters per second) keyboard terminal printer for timesharing applications. The mechanical printers at the time operated at 10 and 30 characters per second using plain paper. The first generation of the new printer used electroresistive printing that required a frankly unattractive paper. A second-generation product was in early development based on transferring higher resolution writing from a ribbon onto plain paper. New startup companies find that their venture capital money comes with strings. Generally the venture capital investor wants their own chosen CEO, with a track record in a related business. After a painful development cycle, the product was developed and sales took off in the absence of any better product on the market at that time. The printers were low cost and sold through computer stores.
The venture-capital-selected CEO, seeing a very high initial order rate, stopped all development on the plain paper version and put all the Company’s capital into buying parts in anticipation of a killing. He felt that the original product was adequate to garner more than enough orders for a fast success, after which he would sell the company as he had just done with his last company.
Unfortunately, although an expert in large company sales, he neglected to remember that the printers were sold through computer stores, so he confused pipe-filling of the sales channel with the number of units actually being sold. Competitors’ impact-ribbon plain-paper units began arriving in the marketplace to compound this problem. And the Company, Comprint, Inc., lacking a second-generation product in the pipeline, blew its opportunity. The Company lasted a number of years and was eventually sold at a low price to Chaparral Communications.
Equatorial Communications Co., First VSAT Company
The next Company to be started by the Cabledata Associates group was Equatorial Communications. This was the first small-dish satellite ground station company. These small-dish ground stations are now called VSATs, or Very Small Aperture Terminals. At this time the FCC allowed only very large antennas to be used with equatorial satellites, to ensure that the energy from one ground transmitter did not inadvertently illuminate the adjacent orbital slot.
I came up with the idea of using spread spectrum modulation so that, even with small antennas and broad beam widths, the energy density landing in the adjacent orbital slot was so low that it would be below the allowable noise floor. It did, however, mean giving up data rate. But it allowed applications that could not tolerate the larger antennas. This was probably the first non-military application of spread spectrum technology.
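The tradeoff can be seen with a back-of-the-envelope calculation. The sketch below, in Python with illustrative numbers that are not Equatorial's actual link parameters, shows how spreading a low-rate data stream over a much wider bandwidth lowers the power spectral density by the spreading factor, which is what keeps the energy spilling into the adjacent orbital slot below the noise floor.

```python
# Back-of-the-envelope sketch (illustrative numbers, not an actual
# Equatorial link budget): spreading a fixed transmit power over a
# bandwidth N times wider lowers the power spectral density by a
# factor of N, i.e. 10*log10(N) dB, at the cost of delivered data rate.
import math

def psd_reduction_db(data_rate_bps, chip_rate_bps):
    """Spreading factor (processing gain), expressed in dB."""
    return 10 * math.log10(chip_rate_bps / data_rate_bps)

# e.g. a 9.6 kbps data stream spread to a 1 Mchip/s transmitted signal
print(round(psd_reduction_db(9_600, 1_000_000), 1), "dB lower spectral density")
# -> about 20.2 dB
```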
Again, in order to obtain funding, another venture-capital-chosen CEO ran the company. The Company did very well for several years, going public and its stock rising to high valuations. (I’m glad that I sold a significant amount at a good price.) Then the CEO became convinced that there would be a shortage of satellite transponders for lease and signed long-term leases for many of the world’s satellite transponders. The gamble backfired. Instead of a shortage, there was a glut of transponders on the world market. This quick money-making scheme caused the company’s stock price to fall, and the company was then sold to Contel, which was later acquired by GTE.
Telebit, Inc., Highest Speed Telephone Modems for Bad Lines
The next company to come out of our small Cabledata Associates activity was Telebit, Inc. This company developed by far the most robust and fastest telephone modem of its day. The product concept was that everyone else was designing and building telephone modems for good quality telephone lines. Instead, I wanted to design a modem optimized for rotten telephone lines, since there are so many of them around. This was particularly true at that time, and especially of the telephone lines in the underdeveloped parts of the world.
Instead of using the usual single or dual tone modulation, I chose to use orthogonal frequency division multiplexing. Instead of a single modulated tone carrying all the information, I used an ensemble of tones, each carrying the maximum number of bits that the measured noise level for that tone would support. Unlike the conventional modem that places all its energy toward the center of the telephone pass band, this one spread its energy across the entire band, in proportion to what each frequency was able to carry as a separate signal.
The modem’s performance was remarkable. Since each tone was so narrow, equalization was not required. It could operate with noise signals right in the center of the band. One could play a trumpet on the same phone line and the Telebit modem would operate error free, with a slight loss of data rate. These modems were for a time the fastest modems in the world. And by far the most robust, allowing links in the under-developed nations previously usable at a maximum of 300 or 600 bits per second to operate beyond 10,000 bits per second.
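A simplified sketch of the per-tone bit-loading idea is shown below in Python. It is only in the spirit of what was described, not Telebit's proprietary algorithm, and the gap margin, tone count and noise figures are hypothetical.

```python
# A simplified sketch of per-tone bit loading -- not Telebit's actual
# proprietary algorithm. Each narrow tone is assigned as many bits per
# symbol as its measured signal-to-noise ratio will support; noisy
# tones (say, one sitting under a trumpet note) carry fewer bits or
# none, and the link keeps running at a slightly reduced rate.
import math

def bits_per_tone(snr_db, gap_db=9.8, max_bits=6):
    """Shannon-style loading with an implementation margin ('gap')."""
    snr_linear = 10 ** ((snr_db - gap_db) / 10)
    return max(0, min(max_bits, int(math.log2(1 + snr_linear))))

def total_rate(snr_per_tone_db, symbol_rate_hz):
    """Aggregate bit rate, each tone sending symbol_rate_hz symbols per second."""
    return symbol_rate_hz * sum(bits_per_tone(s) for s in snr_per_tone_db)

# Hypothetical line measurement: most tones clean, a few wiped out by noise.
measured_snr_db = [30] * 100 + [5] * 12 + [22] * 16
print(total_rate(measured_snr_db, symbol_rate_hz=40), "bits per second")
```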
When modems talk to one another they exchange information as to their feature sets. If the modem on the other end was not a Telebit-type modulation modem, it would fall back to one of the conventional standard single-tone modulations. The Telebit modem using OFDM modulation was a proprietary standard. This is effective in maximizing short-term revenues, and Telebit was a successful public company. But over time the concept of a proprietary standard tends to be counterproductive.
As the quality of telephone lines improved the advantage offered by the Telebit modem declined. With the number of people using dial-up modems exploding, modems had become a commodity product. With the rapid improvements in digital signal processing higher data rates from the conventional single carrier modems became feasible over good quality telephone lines. And, the competitive advantage of this modem was lost except in the underdeveloped parts of the world. Telebit was a successful public company for many years. The company did develop other products along the way to fill the gap, but none with the high margins of its initial product. The company was eventually sold to Cisco primarily for its later ISDN products and its patent collection.


Packet Technologies, Broadband Digital Services to the Home Via TV Cable

The next spinout company I co-founded was Packet Technologies. Packet Technologies had been originally conceived 12 years earlier when setting up and naming the original company “Cabledata Associates”. Packet Technologies was to be a company dedicated to providing equipment to deliver high-speed data services to the home via TV cable.
TV cable started in the 1950’s as an extension of a commonly shared TV antenna in the rural areas of the country. At that time the cities didn’t need cable. City users had antennas that provided good TV coverage for the few TV networks then available. By the 1980’s HBO and other additional channels changed the economics of cable in the cities, as there were now channels that could only be received if you were connected to the cable.
To win franchises each cable operator promised more and more features, with wonderful new services to the home being a big attraction. However, there was no technology available at that time able to deliver this promised future capability. That gave us the opening we were waiting for. The market for this technology for the first time appeared ripe as a result of the cable companies’ promises of the delivery of all sorts of “blue sky” services. These promises were made in competitive fashion to win the cable franchise for the cities that were to be wired. The potential market was now so large it could justify the scale of development work needed to create the missing technology.
The new company was originally called Packetcable and later Packet Technologies after it had a non-cable product as well. After an initial funding round by local investors, AMOCO became the major funding source saying that their plan was to be in “a major size new business before the oil ran out in the 21st Century”. The technology development proceeded well, aside from the usual problem of taking a bit longer than initially planned.
Two different cable systems were modified for two-way operation. An outdoor unit hanging from the TV cable and powered off the cable delivered TV viewing control for pay TV, data and videotext access to a cluster of up to eight houses. Each house was connected with conventional TV drop cables that also carried the normal TV. Each of the remote units was served by a high-speed two-way data connection over the cable to the cable head end. At the cable head end was a connection to a time-sharing data service provider. Each six MHz TV channel could support two 1.544 Mbps (T-1) rate channels. The equipment worked and it worked well.


Packetized Voice

All the data in the system was handled in short packets, and since the data rate was roughly similar to the telephone T-1 rate, I had the idea of also sending telephony over the same cable. At that time, prior to the fiber optics era, T-1 circuits were very expensive. We believed that we could send telephony more cheaply over the TV cable, given the telephone company tariffs of the time.
My basic idea was to use the 192-bit frames of the T-1 system as a separate, very short, and very fast packet. This would allow us to make statistical use of the channel. And the short packet would allow us to avoid any significant delay, important for maintaining high quality voice. By sending packets only when the user was talking, and using 32 kbps ADPCM in lieu of the older 64 kbps PCM approach, we were able to carry 96+ voice channels. This may be compared to a conventional T-1 circuit, which could carry a maximum of 24 voice channels, a factor of four improvement.
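The rough arithmetic behind that factor of four can be written out as follows; the 50% talk-activity figure is an assumed round number for illustration, not a measured one.

```python
# Rough arithmetic behind the "factor of four" above (a sketch with
# round numbers, not the product's exact engineering figures).
t1_payload_bps = 24 * 64_000   # a conventional T-1 carries 24 channels of 64 kbps PCM
adpcm_bps = 32_000             # 32 kbps ADPCM halves the bits per voice channel
talk_activity = 0.5            # assume a talker is speaking roughly half the time,
                               # so sending packets only during talk spurts
                               # roughly doubles capacity again
channels = t1_payload_bps / (adpcm_bps * talk_activity)
print(int(channels), "voice channels")   # -> 96, versus 24 on a plain T-1
```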
We described this proposed cable telephone system technology on a white board to visitors from Michigan Bell. We said, “Our technology will allow TV cable to transmit telephone voice at a lower cost than conventional alternatives”. Their reply was, “Could you also do that packetization trick over our existing T-1 twisted pair circuits, and get the same 4X efficiency improvement?” We said, “Yes, I guess we might be able to do that.” And they were interested in proceeding.
The telephone industry’s equipment for rearranging the connections of T-1 trunks from different central offices is called Digital Access Cross Connect Systems (DACCS). What Michigan Bell wanted was in essence a DACCS using the compression approach that we described. We built a pair of prototype units for Michigan Bell, which we named PacketDax. They were highly efficient and flexible and able to remotely control automatic cross connection switches with excellent remote control monitoring and set up. Parenthetically it was mandatory that the voice quality be indistinguishable from the conventional toll quality voice circuits, which it was. And the units met all the telephone plant requirements with redundant components etc. This interesting project represented only about five percent of the entire Packet Technologies efforts but would have an important place in the Company’s future.

Amoco’s Remarkable Intransigence

The overall development was going along fine, except for the inevitable schedule slips common in most large complex projects. The only major surprise was that the price of oil fell to $10 a barrel, and Amoco’s “support into the 21st Century” moved up the date of arrival of the 21st Century and forced the conversion of their stock into debt. We were not overly concerned about Amoco’s loss of support, as there were other organizations that had expressed strong interest in investing in the Company. These included IBM, CBS and AT&T. However, out of the blue, CBS was taken over by Laurence Tisch. This changed CBS’s direction and interest. IBM came to the very edge of a commitment, but an internal disagreement at IBM prevented them from proceeding. AT&T was interested, and a handshake agreement was reached with the AMOCO representatives.
Then a senior vice president at AMOCO, known for his macho, take-no-prisoners negotiation style, reneged on the handshake agreement reached by his own people. He demanded that AT&T pay a higher price than previously agreed. AT&T concluded that it could not do business with such a person and walked. Conventional venture capital was interested, but not at the price that AMOCO demanded.
Then AMOCO undertook a series of steps whose only purpose would be to bankrupt the Company at the expense of the smaller creditors. The Packet Technologies key people felt that AMOCO’s behavior was unfair and continued to work without pay to protect the interest of the smaller creditors. Meanwhile AMOCO kept pushing for bankruptcy instead of agreeing to sell the Company at a reduced fair market price. But the creditors stuck together with the Packet Technologies management so AMOCO was blocked in this move.
A big mystery to us was why AMOCO was not acting in its own best interest. Certainly half a loaf is better than none. The only explanation that anyone was able to come up with was a story that came via a back channel in AMOCO, saying that the AMOCO senior vice president involved in the deal became concerned that if they sold Packet Technologies and at a later date the company became highly successful, then “…it wouldn’t be career enhancing. Kill it off so it isn’t around to haunt us at a later date!”

Stratacom, a Leveraged Buyout of the PacketDax

If this wild hypothesis were true, then AMOCO was only concerned with the risk of the TV Packetcable system and not the telephone PacketDax product (which they considered to be a highly specialized product that, as we expected, did not pose a “career-enhancing risk”). After much niggling, AMOCO allowed the people working on the PacketDax project to engage in a leveraged buyout, in payment for part of the debt. Venture capital was allowed in, but solely in the PacketDax part of the business. Stock in the new company went to the Packet Technologies shareholders as well as to the employees of the new company. Some stock also went to AMOCO and to the major creditors. We had enough cash to pay off all the smaller ones.
This new company became Stratacom. AMOCO sold its stock in this company at an early date. Stratacom went on to be a huge success and was sold to Cisco for $4 Billion. This stock converted to Cisco stock, now worth about $20 Billion. So even with the dilution along the way, Packet Technologies turned out to be a highly profitable venture in the end.

Metricom, Remote Electric Power Meter Reading Via Packet Radio

Baran:

The next company I started was Metricom. Metricom started off in the remote electric metering business, developing an all-electronic power meter which measured many more parameters than the ordinary electric meter, such as voltage, VARs, power quality, line imbalance, etc. The meter sent its signals over carrier current. The carrier current signals from all the houses on a common transformer were relayed by a pole-mounted packet transceiver to a central site.

The reason for starting Metricom was that I believed that the power companies would be increasingly reluctant to add new capital equipment (at roughly $1,000 per kW, or per newly connected house) in an environment moving toward deregulation. Instead it would be far more economical for the electric utilities to sell electricity the way we sell airplane seats: charge higher prices to those with the least flexibility in when they must travel, and give bargains to keep all the seats filled. Time-of-use billing would be coming to the electric industry.

Our first customer was Southern California Edison, whose research department had fine quality people who appreciated the implications of this new technology. We thought that this company had plenty of licensed radio frequencies that could be used. But the communications people informed us that none could be spared for meter data transmission. Therefore I came up with the idea of using unlicensed frequencies, and a routing approach that could tolerate many different users on the 902-928 MHz ISM (Industrial, Scientific, and Medical) band. This band was originally set aside for diathermy machines and was in effect a garbage band. A key idea was using geographical coordinates to route the traffic to its end destination in a network shared by many different users.
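As an illustration of routing by geographic coordinates, here is a deliberately simple greedy sketch in Python. It is not Metricom's actual algorithm, which had to cope with far messier topology; each radio just forwards the packet to whichever in-range neighbor is closest to the destination's coordinates, and the example topology is made up.

```python
# Illustrative greedy geographic routing -- a simplification, not
# Metricom's actual algorithm: each pole-top radio hands the packet to
# whichever neighbor within radio range is closest to the destination.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(positions, neighbors, src, dst):
    """positions: node -> (x, y); neighbors: node -> list of nodes in radio range."""
    path, here = [src], src
    while here != dst:
        nxt = min(neighbors[here], key=lambda n: dist(positions[n], positions[dst]))
        if dist(positions[nxt], positions[dst]) >= dist(positions[here], positions[dst]):
            # plain greedy forwarding can get stuck, e.g. in a dead-end canyon
            raise RuntimeError("stuck in a local minimum; a real network needs patches")
        path.append(nxt)
        here = nxt
    return path

# Tiny made-up topology: four radios in a line, one hop apart.
pos = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}
nbr = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(greedy_route(pos, nbr, "A", "D"))   # -> ['A', 'B', 'C', 'D']
```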

The major problem we encountered was that electric meters were a 100-year-old technology. And, being 100 years old, they had become institutionalized in stone. That meant that any new electronic meter had to look and behave exactly the same as the old electromechanical meters. The constraints of having to look and be tested exactly like the old meters significantly increased the price of an electronic meter relative to the inexpensive electromechanical meter. But after a lot of work, and many missteps, a highly reliable and effective meter was built. And the radio network, originally tested in Valencia, California, in an area with a lot of long dead-end canyons, posed a great challenge for the routing algorithms in the packet radios. But with enough trials and patches, a reliable system emerged.

To be installed in quantity it was necessary to show that the system would pay for itself within a single department of the company. While there are benefits to many departments, it was necessary to find one that generated enough savings to pay the entire cost of the system. We were surprised as to where the big savings actually occurred. It was our ability to measure voltage in real time at the house entry point. I was surprised to find that the electric utilities didn’t know the voltage at the house entry point in real time. In California each utility is required by the Public Utility Commission to deliver power within a narrow voltage band. To play it safe, the utilities tend to run one or two volts higher, using remotely controlled capacitor switching banks. As the voltage could now be measured at each house, the delivered voltage could be safely reduced. Power consumed in generating electricity is directly proportional to voltage, so each volt not needed saves about 1% of the fuel required. The system built for the Southern California Edison System, which uses about 25,000 radios, paid for itself in about a year’s time.
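A quick sanity check of that roughly-one-percent-per-volt figure, assuming a nominal 120-volt residential service (my assumption, not a number from the discussion):

```python
# Back-of-the-envelope check of the "about 1% per volt" figure above,
# assuming a nominal 120 V residential service (an assumption added
# here for illustration, not a number from the interview).
nominal_volts = 120.0
reduction_volts = 1.0
savings_fraction = reduction_volts / nominal_volts   # load taken as roughly proportional to voltage
print(f"{savings_fraction:.1%} of delivered energy per volt shaved off")  # -> about 0.8%
```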

Evolution to the Support of Lap Top Computers

Baran:

The CEO of Metricom had formerly worked for an early company in the laptop computer business and felt that a better business would result if the packet radio network were used to connect laptop computers to the Internet. That is the Metricom Ricochet system. A small radio transceiver connects to the laptop computer, which in turn is connected to the same type of pole-mounted packet radio network used to support remote meters. The service has proved to be very economical and attractive to its present installed base of about 25,000 users in a few cities and airports.

Metricom left the metering business to concentrate solely on the packet radio network for laptop computers. A major multi-city rollout is anticipated in late summer of the Year 2000. This will be for a much higher speed product than the original product and will operate at roughly ISDN rates in lieu of telephone modem rates.

Hochfelder:
Is that stationary wireless?

Baran:
No. It is a totally portable two-way radio that attaches to your laptop computer. The signals go directly from your laptop to small radio transceivers mounted on electric light poles. The signals from the pole-top transceivers are collected at a number of terminating locations that connect the user directly to the Internet. Ricochet’s major feature is a low fixed monthly cost for as much use as desired, and an always-on type of connection to the Internet.
Metricom has been a public company since May 1992. Its stock price languished until recently, when it rose to well over 15 times its offering price, and the company now has a valuation of over $2 Billion.

Interfax, Interactive Facsimile ~1989+

A few years after Metricom’s permanent CEO was on board, I co-founded Interfax, Inc. Fax had become a widespread channel for manual person-to-person text and graphic communications. In many applications, the information requested resided on a computer. The idea behind Interfax was to automate the process by having a computer read the incoming fax and take whatever action was required. Among the earliest applications was handling reader response cards. These are those nasty pieces of cardboard included in magazines that drop out and flutter onto the floor. The reader fills in the card and mails it. Perhaps in a month or so, the response to a request for literature is mailed back. With the faster pace of modern communications, the idea was to fax a sheet of paper, mark off what was requested, and then have the information returned by fax.

At Interfax we developed software that automated the reading and response cycle down to a fraction of a minute rather than waiting for the mail. The Company sold its hardware/software system to customers and also operated a service business to allow customers to try the service before buying systems. This venture-capital-funded company was doing very well with its proprietary automatic reading software. Part of the next-round funding arrangement with the venture capital firms called for veto control over replacing the initial temporary CEO with a permanent CEO. By a bit of bad luck the two senior and highly experienced venture capital people on the board were out of the country, and their replacements were younger representatives from each firm. One was a junior associate who threw his weight around, rejecting a pretty good candidate as “unfundable” in favor of a “fundable” salesman type who spoke well, looked great in a suit, but lacked any technical understanding of the business. The new “suit” brought some friends into the company and decided to change the direction of the business by discontinuing the existing business, which was nearing breakeven. They were successful in seizing defeat from the jaws of victory, requiring the company to be sold to another company in the field, Cardiff Software, to preserve what little value was left.

The moral is that it is not enough to have good technology alone. Good management is essential, as an incompetent CEO is capable of doing more damage in a short time than can ever be corrected. Hence, the turnover rate for CEOs in Silicon Valley is about 18 months. A change is made as soon as serious problems are noted. Any delay in changing pitchers in this game can be deadly. Start-up companies are very fragile organizations.

Com21, ATM Based Cable TV System 1992+

Baran:

While in semi-retirement in 1992, I started, part time, detailing a new concept for an ATM-based communications system to allow very low cost communications from any person or machine to any other. It would take advantage of the large global ATM fiber networks then being built. This network would use cable modems for the local tails and be able to support video, data and voice communications from any point to any other point.

Together with my colleagues William Houser and Scott Loftesness, I launched this company, Com21, Inc., in about 1994. The Company’s first challenge was creating the first missing piece -- an ATM-based cable modem. We came out with an extremely good cable modem system, including the head end (cable modem terminating system, CMTS). Its performance was so good, and met with such good customer acceptance, that the Company decided to focus, at least for the time being, on being a cable modem system supplier. Today Com21 is the third largest supplier of cable modems, and Number One in Europe. It is running at a rate of over $100 Million per year and growing almost 100% per year.
More recently the US cable industry has come up with industry wide standards. Regardless of how good a proprietary product is, it is mandatory to support the new industry standards. Whatever the customer wants is what the successful company must do. So Com21 is selling a wide variety of cable modem systems today, both standards based units and its own proprietary products for voice and data using the two-way cable plant as well as having other goodies in development.

I’m involved in other activities as well, but this gives you an idea of what I’ve been up to over the years.

Relevance of the IEEE Communications Society

Hochfelder:
What involvement, if any, have you had with the IEEE Communications Society?

Baran:
I have been an IEEE member for a long time, starting as a Student Member, and am now a Life Fellow. The Communications Society has been important to me in many ways, primarily being able to read their literature to stay current in the field. My first publication outside of RAND about packet switching was a 1964 paper published in the Transactions of the Communications Society. At a later date I received the Communications Society Edwin Howard Armstrong Award.

I have served as session chairman for a number of IEEE Communications Society events along the way, but don’t recall holding an actual Communications Society office. There is one important contribution by the Communications Society to the Federal Communications Commission that I did recall after our meeting.

In the late 1960's computer technology was rapidly evolving but was limited by telecommunications policy constraints dating from an earlier, lower technology era. The FCC was troubled by the conflicting statements about the new technology and its implications being made by the adversaries appearing before it.

I was involved with an ad hoc group within the ACM Committee on Communications and Computers and the Communications Society that undertook a set of educational seminars for the FCC Commissioners in September 1986. I don't recall exactly who took the lead on this. I do recall Lou Feldner being involved, but am not sure of the names of the other members of the group. These tutorials were probably the first detailed description presented to the Commissioners of what was going on in digital communications technology and what this might mean for the FCC in the future.

I recall one particularly sharp FCC Commissioner, Nicholas Johnson, asking Hank McDonald of Bell Labs, "How much would you estimate is invested in the Nation's present switching system?" "About $30 Billion." Commissioner Johnson then asked, "Now suppose you were to build it from scratch using the latest digital technology that you have described. How much might that cost?" Hank McDonald responded, "I would guess about $3 Billion.” I thought I heard a gasp as the mouths of the AT&T lobbyists at the back of the room dropped open.

Everyone understood that the rates the telephone company could charge for its services were based on the unamortized cost of its plant. The openness, the honesty, and the impartiality of the information presented were extremely well received. A warm working relationship then developed between the technical participants and the FCC Commissioners and staff in helping them understand the coming new world of digital communications. This series of educational seminars was a factor leading to the First Computer Communications Inquiry.

I later served as a part time consultant to Bernard Strassburg, the Head of the FCC Common Carrier Bureau. One of the more amusing aspects of the assignment was to act as a language and culture translator between the lawyers at the FCC and the SRI people working on a contract for the Common Carrier Bureau. It took some mutual hand holding, but it was the first time that these two different cultures had interacted at the FCC.

This FCC story is one of the many "untold stories" about members' contributions. Many of these stories will be forgotten over time. Some may be remembered, but only as a peripheral remark in an oral interview. Much preferred would be a trained historian of technology sifting through the FCC archives to get this story’s details right and to better evaluate the long-term effects of such contributions.

I remain an active Communications Society member to this day, looking forward to the arrival of the Transactions and the Special Topics in Communications. It is my way of staying abreast of this rapidly changing field. However, the field has become so specialized that I must confess I understand fewer and fewer of the papers as I grow older. I don’t think that it is me. The field is getting so specialized that each article can be understood only by a smaller and smaller group as time moves on. I feel fortunate to have joined the profession in an earlier era, when it was more fun and one could read all the papers and understand most everything being said. The number of pages was much smaller than today.

Hochfelder:
It’s become very theoretical, from what I understand.

Baran:
Yes. The titles of papers are great and the abstracts are fine, but I just can’t understand the rest of the paper without taking lots of time. There’s so much going on in the field that one gets overwhelmed, but it’s an important activity.


The Impact of Communications on Economic Development

Hochfelder:
What about communications for the future? What do you predict?

Baran:
I thought that my 1967 paper on shopping in the Year 2000 was fun. There’s also the one on AT&T with the fifteen-year predictions that I described. That one also turned out to be surprisingly accurate. It is possible to look forward to the future. We do know about some things. For one thing, it will be an all-digital future. Analog is on its way out. It’s probably going to be primarily packet and cell switching, and the Internet is going to be everywhere. It won’t be our present version of the Internet, but higher speed, more reliable and ubiquitous. Everything is going to flow in synergy with that. Worldwide access to all information will have an important impact.

My paper points out the increasing share of the economy that is based on information. You can see from the trends where that is leading. About half of the economy is manufacturing, mining and distribution. The rest is sales and other things that can be automated. This is a big change. It’s probably the reason that we’re seeing a period of high economic growth, and it’s probably a byproduct of computer technology and the Internet finally beginning to be accessed on a significant scale. The payoff for society will in the long term be significant. With material goods like oil or steel, if you have it, I cannot also have it. But with information, the incremental cost of duplicating it for the next person is near zero. It’s a different model. It means that poor children in underdeveloped countries can have access to all the world’s information and education through the Internet at essentially just the cost of delivery. God sprinkled the brains pretty uniformly. With access to the Internet becoming widespread and pervasive and reaching the underdeveloped parts of the world, it can greatly speed up the process of raising the standard of living for the rest of the world.

On International Stability and Peace

Baran:

I believe that we are not going to see greater stability in this world until we have greater uniformity of income in the underdeveloped countries relative to the developed countries. This may take a hundred years or more. Almost all of the small wars that we are seeing around the world are in the underdeveloped countries.

Hochfelder:
That’s true.

Baran:
Developed countries are pretty stable, so it’s the less developed countries that we have to bring along and get up economically as quickly as we reasonably can. The basic Internet has that potential. One might say, “What about cost? These poor people can barely afford to eat.” But looking at what we’re spending for weapons and at the declining cost of communications, we could well afford to subsidize that. Looking at the long term, we would come out ahead. The present cost barrier is not going to be there forever. It’s already being eroded. We could help that with satellite delivery. There are a lot of ways of doing this. It’s a great challenge and a great goal for the future.
Looking at the technologies, we’ll see fiber for long-haul transmission. We will have wide-area coverage from spot beam satellites. These will be particularly important to the underdeveloped parts of the world where they have no communications at all. Radio will serve the short tails, with wireless for a few-mile range. There is no big built-in infrastructure there, so a new technology infrastructure can be built quickly. The capacity of the delivery system to the home will include a near infinite number of radio and TV channels. There is no limitation with streaming video channels on fiber. Today’s mass media, with a relatively small number of stations and everyone listening to these same few stations, will give way to more of a magazine type of programming. There will be great diversity in channel access, with highly specialized groups finding commonality of interests. As time goes on, our capacity will grow for narrower and narrower interest-level channels. We have the resources coming along to allow us an infinite number of TV-like channels on the future Internet.
As we look to the future, the History Center’s clients will increasingly turn to its web site via their browsers for communication. Later generations may do most of their museum touring over the Internet in virtual museums. There is no point in just locking up one copy of the first telegraph machine in a single building. A three-dimensional view under viewer control may be the way we look at museum objects. There is so much that we will be able to do in the future. The key characteristic of the future may be the ability of anyone and everyone to access all the world’s information at any time and at near zero cost.

Hochfelder:
You seem very optimistic.

Baran:
Oh yes, I’m naturally optimistic, but I think with good reason. We are about as economically well off as the world has ever been, and our life span is increasing. In this last decade and a half we have come far from the depths of the Cold War fears, a mad and dangerous itchy-trigger period. Today, relatively sane people rule the major countries. The residual danger is primarily in the developing countries, where a sprinkling of crazy despotic leaders still are in power. As an old optimist, I think we’ll make it just fine into the future, other than a few inevitable bumps along the way.
This optimistic statement assumes that the developed countries fully appreciate that it is in their own self-interest to reduce the information gap between the developing and developed countries, and that it is in all our interest to aid the economic development of the third world and reduce the large disparity that still exists today.

We techies can look toward the future, envisioning a never-ending stream of wonderful new technologies to help make this world a better place to live.

Hochfelder:
Thank you very much.