Oral-History:Meir Lehman

About Meir M. Lehman

Dr. Lehman grew up in England and worked at Murphy Radio as a youth. His early interest in mathematics and science led him to obtain Ordinary and Higher National Certificates in electronics, and in 1949 he won a Technical State Scholarship which enabled him to attend Imperial College. He obtained a mathematics degree and then began graduate work at that institution, working on early computer projects. He completed his thesis while doing computer research at Ferranti's London laboratory and was awarded the Ph.D. in 1957. He then moved to Haifa, Israel, where he worked for the Israeli Ministry of Defense, designing a low-budget digital computer with magnetic core memory. He helped develop second address registers (modifiers) and the use of printed circuit boards.

Lehman then moved to the United States to work for IBM at its Yorktown Heights laboratory. There he designed arithmetic units for the supercomputer project and researched parallel processing. Between 1965 and 1968 he managed the Yorktown Heights Integrated Multi-Processor Project, which used simulation in hardware design. In 1972 Lehman began teaching at Imperial College, designing undergraduate courses in computing and control. In 1979 he became Head of the Department, and he helped found the Imperial Software Company in 1982. At the time of the interview, Lehman was on a part-time contract with Imperial College after taking early retirement in the mid-1980s.

The interview spans Lehman's multifaceted career, concentrating upon his years with IBM and at Imperial College. Lehman discusses his early and collegiate education as well as the progression of his computing experience. He explains his projects with the Israeli Ministry of Defense as well as his decision to move to IBM. Lehman describes his early computer research, including his work with serial and parallel processing, magnetic core memory design, arithmetic unit design, and software improvement. He recalls colleagues in the computer science and control fields, and gives his opinions on ways to improve programming productivity. Lehman also discusses his 1979 study tour to Soviet Russia, significant British developments in the software field, early British computer projects, and the foundations of software engineering. The interview closes with Lehman's opinions about theoretical trends within logic programming and changing criteria for software and programming quality.


About the Interview

Meir M. Lehman: An Interview Conducted by William Aspray, IEEE History Center, 23 September 1993

Interview # 178 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc. and Rutgers, The State University of New Jersey


Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of the IEEE History Center.

Requests for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, Rutgers - the State University, 39 Union Street, New Brunswick, NJ 08901-8538 USA. They should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:

Meir M. Lehman, an oral history conducted in 1993 by William Aspray, IEEE History Center, Rutgers University, New Brunswick, NJ, USA.


Interview

Interview: Meir M. Lehman
Interviewer: William Aspray
Place: London, England
Date: 23 September 1993

Background and Early Career

Lehman:

I think I ought to start at the tender age of sixteen, since what happened then had a direct impact on the rest of the story. I will try to give you the facts as they are, not to be unduly boastful on the one hand or too modest on the other. I'll just give you the facts. That seems to be fair in this case.

Aspray:

Very good.

Lehman:

My father died when I was ten. My mother was left as a widow at the age of thirty-five with six children. I was the second child, and the oldest son. The significance of that was that I had to leave school at the age of sixteen to help support the family. At that time my favorite subject was mathematics, but I was also pretty good at physics and subjects of that sort. I was the first person in my school ever to sit matriculation examinations in advanced mathematics in addition to the normal mathematics taken by all students. As it happened, the same year that I took the advanced mathematics test, the headmaster's son, who had matriculated one or two years earlier, came back to sit the advanced exam. I think the Head was a bit jealous that I was taking the advanced math and chose to put him through as well. Overall, my matriculation results were good with the exception of history. This I fluffed completely because I was never any good at topics where I had to learn things rather than reason them out for myself. The way history was taught in those days left no room for reasoning in it.

On the basis of those results, I could have proceeded to University and would have loved to do so, but as described, family circumstances were such that I had no option but to leave school. I started work in 1941, during the second year of the war. Since I had for a number of years been very interested in radio — I had built myself a radio — I looked around and found a job with a company called Murphy Radio, where I joined their Service Department. That Service Department, although it was wartime, was responsible for the repairs of civilian radios.

Aspray:

Where was this located?

Lehman:

In Welwyn Garden City. I was living at the time in a town called Letchworth, in Hertfordshire, and Murphy's was in Welwyn. When the radio sets came in for repair, my job was to remove the metal chassis from its wooden or plastic case. I was on the "boxing and unboxing bench." In addition to this I had to sweep the floor twice a day, and make tea in the morning and the afternoon. I was a very junior boy. I did this job for about a year and I felt fairly frustrated, wanting to do something more technical. Finally I was allowed to go on to the benches and begin soldering and repairing sets. I wasn't allowed to test them, but the so-called "testers" would write on a sheet of paper "Change the following parts...", which instructions I would then follow. Very occasionally, I would be given a set which was completely burned out and have to rebuild it from scratch, and that was always a challenge. In those days, rebuilding was an economic proposition. Of course today you would just throw it away.

I did this job for two, three, or four years, and I was getting desperate because I really felt that I knew how to solder, knew the sets inside out in terms of where the components were, and had by that time rebuilt several from scratch. On one memorable occasion, my foreman, who was in charge of the whole shop, of thirty or forty people, was away sick. His deputy — I can't remember whether I asked or he volunteered — said, "Why don't you start testing instead of just soldering components?" He knew of my interest and my technical background. Soon I was given a multimeter, maybe an oscilloscope, I don't remember, and I started actually testing radio sets. I spent two weeks supremely happy, because I had finally reached the height of my ambition — I really got what I wanted to do. I must have been 18 or 19 at the time. After two weeks of enjoying myself — I had begun to get the hang of things — the foreman came back, and he saw me sitting at my test bench and said, "What are you doing there?" So I said, "I thought I had been promoted to go on tests." He said, "Well, you're not paid to think. Go back." Those were his words exactly. I was desperately unhappy, but of course I had no choice.

Another lad was also unhappy about our treatment. Because it was considered important that the civilian population have their radio sets, we were exempt from the army, although I was of military age. I upped and volunteered to go into the army because I hated the work I was doing, but was turned down. "Sorry, the work you are doing is too important." It was a reserved occupation, and there was no way out, at least for me. That went on for two or so years.

I went to evening classes at the time and took courses in electronics. At a certain point in time, the government allowed the company to take some incomplete chassis from storage, and I was given the job of completing them, wiring and testing them. I was located in a fairly large room, all on my own, where I spent much of my time listening to classical music instead of getting on with the job. At least that's how it remains in my memory. I've always had a taste for classical music, and I took the opportunity. Anyway, I spent another few years with the company, Murphy. The same foreman who had told me I was not paid to think persuaded me to take off one day a week to go to London to attend a day release course. On the basis of that course, I obtained an Ordinary and a Higher National Certificate in Electronics. I didn't do terribly well, but I managed to get through the exams. In my final year at Murphy's, I moved from the Service Department to the Test Instrument Department, where my hope was to join the laboratory and be allowed to design the test gear that was the Department's product. In fact I was again assigned to building the instruments, but then one day I was given a test instrument to design. Even then, I was not promoted nor permitted to join the lab staff, but remained an assembly technician. I designed my first instrument, a coil comparator, for testing production tuning coils for tolerance. By this time, 1949, it was already well after the end of the war and I had been with Murphy for eight years. My life seems to have gone in seven-to-eight year cycles, as you'll see as we go on.

Imperial College

On the basis of my examination results, and my record, I was awarded a Technical State Scholarship, which was meant for people who had missed out on their education because of the war. This allowed me at last to go to university. I gave a lot of thought — should I take electrical engineering? Should I go back to my first love of mathematics? I decided to go back to do mathematics, and applied to Imperial College, a constituent college of the University of London, to read for a mathematics degree. Because I had left school at sixteen, I wasn't qualified, so I had to take entry examinations before being accepted. I came up to the College and had an interview with the Head of the Department of Mathematics, who at the time was Professor Levy, a very famous figure in numerical analysis.

I'll tell you an interesting story there because it's fun. I'm not sure whether it's relevant at all. It's probably for you to filter out. He interviewed me and said, "You'll have to sit four exams, pure and applied mathematics, physics, and chemistry." So I said, "Pure and applied mathematics is no problem, nor is physics, although I haven't done any work in that area for the last seven years. But chemistry is an impossibility." At school I had had to take a foreign language. I was hopeless in French, but having been born in Germany, I felt that I was reasonably fluent in German, despite the fact that I left Germany in 1931 at the age of six, had never been to school there, and we only spoke English at home from the day we moved here. So I reluctantly chose German as my foreign language. But at our school, timetabling problems meant that one could only take German or chemistry, so I never learnt any chemistry, and had no background in the subject. In this context there is a further connection with my having been born in Germany. In November 1930, the Nazis won 37% of the seats in the Reichstag. When my father heard the results, he said, "This is no place for a Jew to be; we're leaving." So we left Germany and came to England two years before the Nazis came to power. It is only because of his foresight that my family and I did not suffer the fate of European Jewry.

Back to my interview with Professor Levy. "Okay," he said, "under the circumstances I'll admit you if you can pass your pure and applied math and physics." And that was that. I started working for the entrance exams in all three topics. In fact, I started a correspondence course. In due course I got notice of the exam. When I got the letter, I noticed that one of the papers, I've forgotten which one, was to be sat on the Jewish holiday of Pentecost. There was no way as an orthodox Jew that I'd be willing to sit a paper on that day. So I wrote a letter to Professor Levy saying that I had received notice of the exams and that as an orthodox Jew, there was no way I could sit the examination on that day. I asked, "Would it be possible to sit the paper on the day before or maybe on the day after and have someone supervise me to guarantee that I had no contact with others?" I got back a very sharp letter from him saying, "Terribly sorry, Lehman, but you should have been aware of the date of this examination for a long time. I've already compromised and I've let you off chemistry. I'm afraid you'll have to just sit this paper or forget about Imperial College."

So I wrote back and said, "Dear Professor Levy, thank you for your letter. I well understand your reaction and cannot argue with it. On the other hand I am afraid, when the chips are down, that my religion comes first. There's no way that I can compromise, and whilst I'm desperately sad about it, we'll have to let it go." By return of mail I got another letter from Levy. I should mention here that apart from his fame as a numerical analyst, Levy was also the chairman of the British Communist Party and a confirmed atheist. He left the party in 1956 in protest at the Hungarian invasion, but at the time when I met him, he was still in the party. In any event, whatever his personal beliefs, by return of post I got another letter from him saying, "Thank you for your letter. To be quite honest, I was merely testing to see how sincere you are. Since you are quite obviously very sincere, it will be perfectly all right, no problem, you can sit the exam the day before." I felt this was wonderful. One of my great regrets in life is that when I came back to Imperial College as a professor, he was still alive, yet I never went to visit him.

I joined Imperial College in October 1950 to start a three-year math degree course. I became very depressed during those years because I really was not coping with the studies at all well. I managed in the end to get my degree, but as far as I remember I obtained only a lower second, way down on the scale. It was tough. I just thought that I wasn't very bright. At this time, between 1949 and 1953, I was well into my mid-twenties — and it was only many years later that I realized what was really happening: here was I, a person in his mid-twenties, relatively mature, sitting with young men in their teens. When one teaches, one teaches at a certain level. One tells lies and half-truths, and in fact what was happening was that at the beginning of every lecture, after about five minutes, something would be said that didn't seem clear to me. I would find some problem in what I was being told. Now I have the sort of mind that can cope with one thing at a time, so I lost much of the lecture while I was trying to resolve the problem. I was losing track — not because I was stupid, but because I was seeing things that I wasn't intended to see. On top of that I had another problem as well — one which I didn't discover until fifty years later — that I had certain food allergies. That made me lethargic and "weary unto death." I went to the doctor and said, "I'm always tired. I fall asleep very easily." His response was, "You're clearly working too hard. Why don't you take up shooting or fishing or, at the very least, go on a cruise in the next summer vacation."

Anyway, I managed to survive the course and get my degree. On the basis of my background, I was awarded the Ferranti Research Scholarship from the Institution of Electrical Engineers. I had really intended to go into quantum mechanics. I was living at the time in a hostel. A friend of mine in the hostel had a brother who was a sailor on the first Israeli cargo vessel to come into a British port since the declaration of the State of Israel. So on the night in 1953 before sitting my final quantum mechanics paper, we went to the Docks to visit his brother. We were all very excited. Unfortunately on the way back we got lost. I didn't get into bed before about two o'clock in the morning, so the next morning I was just completely washed out. I completely flunked the quantum mechanics paper. Thus when I went to Professor Jones and asked, "Can I be your research student?" his response was, "With the results you got on your paper?"

Digital Relay Computer and ICCE 2

Lehman:

At that time, there was a Dr. Keith Tocher, who together with Mr. (now Professor) Sydney Michaelson [later at the University of Edinburgh] was designing and building a digital relay computer at Imperial College. Unfortunately, their work never became well known in the world of computing. If anyone writes a book about the history of computing, they ought to take a good look at this particular machine. It was called ICCE (Imperial College Computing Engine). It was the first work on computing within the college, and they had built it together with Tony Brooker, a professor at the University of Essex and previously at Manchester. They built a very interesting relay machine, with some very, very special features. For example, they did not store instructions and numbers in von Neumann fashion in the same store; they had separate instruction and number storage. They also had variable-length instructions. The length of an instruction was made to depend on the complexity of the instruction. It was almost a RISC approach to instructions.

The relay machine was housed in the Imperial College mathematics building, known as the Huxley Building, opposite the Science Museum, a building which is now an extension of the Victoria and Albert Museum. The relay machine was just beginning to be operational on the first floor mezzanine, and they were then considering whether to build an electronic machine. I went to see Tocher and said, "I'd like to do my Ph.D. with you on the building of this machine." Given my electronics background plus my mathematics background, which were now coming together, he said, "By all means." That's how I got into computing. That was in 1953, so you'll notice that this year is my 40th anniversary in the field.

Aspray:

At the time, did you know much about the machines?

Lehman:

No, nothing at all. I was really starting from the ground floor. My job was to design the arithmetic unit for that machine. I spent what was supposed to be two years — I only had a two-year scholarship — studying. But I had trouble with my eyes. I had eye surgery for a squint. If you look at my glasses you will see that they are very heavy prisms. Because of this squint, I tend to see double. That was causing quite a lot of trouble. The net result was that my scholarship was suspended for a year — the college paid for me. So I spent three years doing my Ph.D. By the end of that period I had designed the arithmetic unit for ICCE 2, which was to have been an electronic machine.

The principal innovation of the arithmetic unit design was that I came up with a new multiplication algorithm. I intuitively felt this must be the optimum algorithm. I'm pretty sure it is the basis of the algorithm that is used in all multipliers these days, because it is quite clearly the fastest. My supervisor Tocher subsequently published a paper in which he proved that it was in fact the optimum multiplication algorithm. The mathematics was beyond me, but he did a very nice job there. Interestingly enough, I was subsequently able to trace no fewer than thirteen different publications that described this multiplication algorithm in the ten years after I first publicized it.

I remember writing a letter to the ACM Communications pointing out that everybody was republishing this algorithm, and that the original had in fact been published in my Ph.D. thesis in 1956. Anyway, that's neither here nor there. I claim to be the author of that algorithm, but lots of other people have claimed the same. The normal multiplier at the time simply added the multiplicand, appropriately shifted, to the accumulator whenever a "one" was encountered in the multiplier. My algorithm added the multiplicand when encountering a zero after a sequence of ones (scanning the multiplier from the right) and subtracted it when encountering a one following a zero. But this is not the proper place to describe the algorithm in detail. There is a letter in the ACM Communications describing it in detail, and as I have already said it has also been publicized by at least 13 other authors. If you are interested in tracking it down, I can give you a copy of my publications list.
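[Illustrative sketch, not Lehman's original design: the recoding he describes scans the multiplier from its least-significant end, subtracts the shifted multiplicand where a run of ones begins, and adds it back just past the point where the run ends, so a long run of ones costs only two additions instead of one per bit. A minimal Python model of that idea, for non-negative operands:]

def run_recoded_multiply(multiplicand, multiplier, bits=36):
    # Scan the multiplier from the least-significant end.  At a 0->1
    # transition (a one following a zero) subtract the shifted multiplicand;
    # at a 1->0 transition (a zero after a run of ones) add it back.
    acc = 0
    prev = 0                          # imaginary bit to the right of bit 0
    for i in range(bits):
        bit = (multiplier >> i) & 1
        if bit == 1 and prev == 0:    # start of a run of ones
            acc -= multiplicand << i
        elif bit == 0 and prev == 1:  # just past the end of a run
            acc += multiplicand << i
        prev = bit
    if prev == 1:                     # a run still open at the top bit
        acc += multiplicand << bits
    return acc

assert run_recoded_multiply(123, 456) == 123 * 456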

Aspray:

Fine.

Lehman:

When we started on the project, Professor Levy was still the Head of the Mathematics Department. By the time we were halfway through, Professor Jones, the quantum mechanics man, who had rejected me as a graduate (Ph.D.) student, was then Head. Incidentally, I think I should perhaps point out, I'm using this term, "Head of the Department" instead of chairman. You're aware of the difference between the UK and the States with regard to Departmental administration?

Aspray:

No, I don't think I am.

Lehman:

When I became Head of the Department of Computing (and Control) in 1979, the Rector, Dr. (now Lord) Brian Flowers, FRS, who appointed me, said, "Firstly, you ought to understand the difference between a U.S. chairman and a U.K. Head of Department. A U.S. chairman represents his colleagues to the University administration. The administration is the boss, and the Department representative negotiates with them. A British Head of Department represents the administration to his colleagues and is lord and master of all he surveys." He really is the boss, and whatever he wants goes. The only way that the university can in fact control him or her is through budgets. The money still comes down through the institution as a whole.

Aspray:

I see.

Lehman:

Back to the subject. By that time I was close to completing my Ph.D., and Jones was Head of Department. Jones firmly believed that mathematicians shouldn't get their hands dirty, and he effectively killed the ICCE 2 project. From my point of view, it didn't matter because I had more or less finished the work on my thesis, although I hadn't written it up yet. I had already applied to and got a job with Ferranti, the very people who were responsible for my research scholarship. So I moved over to Ferranti's London laboratory. Tocher and Michaelson, who were to have built the electronic machine, were so disgusted that they both resigned. Tocher went to British Steel and worked in the cybernetics area with Stafford Beer, working mainly as a mathematician. Michaelson eventually moved out as well, and went to Edinburgh. In fact he became a leading figure on the British computer scene, a position he maintained till his unfortunate death two or three years ago.

Work on Mercury Computer

Aspray:

I vaguely remember, but I may remember incorrectly, that A. D. Booth had some connection with Imperial College?

Lehman:

No, A.D. Booth, who was incidentally my external examiner for my Ph.D., was at Birkbeck College. Of course he was one of the earliest computing people in British university life.

So I joined the London computer laboratories of Ferranti, although I had not yet written up my Ph.D. dissertation. My principal responsibility was to do a feasibility study for the use of what was then the new Mercury computer. I also became familiar with the Pegasus computer, but my main assignment was to analyze the Mercury computer to determine its suitability for the control of Blue Streak, Britain's first ballistic missile. At that time Blue Streak was under development and being tested in Australia. I spent my time commuting between London and Manchester.

Move to Israel and SABRAC Machine

Lehman:

Whilst I had been at Imperial I met a middle-aged individual called Cederbaum who worked for the Israeli Ministry of Defense. He was here doing his Ph.D., also in mathematics. He was actually an analog computing man, but he was doing a mathematics degree. In discussing my interests with him, he said, "Would you like to join us? We don't have a digital computer group, but you could start up a section — start up some digital computer work." He was in what was known as the Scientific Computing Department of the Israeli Defense Ministry, which later on became known as the Weapons Development Authority (RAFAEL) of the Israeli Ministry of Defense. I readily agreed to this.

I had joined Ferranti in October 1955, but in January or February of 1956 my wife and I decided that we would emigrate to Israel around August or September of that year and that I would join the Ministry of Defense in their Scientific Department. In June or July, the Suez War broke out. I had been living on a scholarship — a very minimal income, which had not permitted me to accumulate any savings, and we were relying on the Ministry to provide us with suitable accommodation in Haifa. They then asked me to postpone my coming until the following year because of the war. After the Suez War, things were in a mess. They couldn't have me till later. My reaction was to say to my wife, "Once we go to Israel, I'll never finish my thesis. If I don't get my Ph.D. now, I'll never get it." So between September 1956 and February 1957, when we were due to leave for Israel, I wrote up my thesis. In due course I had my oral examination, and subject to the correction of a few spelling errors and one or two minor errors, I was awarded my Ph.D. in 1957.

We went to Israel as planned in February 1957, and I joined the Israeli Ministry of Defense. I went there with the idea of building a machine. I found two other people who were interested in working with me. I couldn't get any official project to build the machine, but we started working anyway. Basically, because I didn't have a budget, I said that we were going to build a minimum cost machine. The idea of the whole project was not primarily to build the best possible machine, but to learn and demonstrate the technology. So there were, on that project, besides myself, two engineers who were going to do the circuit design. Because it was to be a minimum cost machine, it had to be a serial rather than a parallel machine. For reasons which are not now important, we decided on a 36-bit number representation. The only affordable storage would be a magnetic drum, and economics forced us to settle on a 4,096-word capacity.

We started work on the design. In 1959, for the first UNESCO Computer meeting in Paris, which was the forerunner of IFIP, I prepared and presented a paper called "A Minimum Cost Machine." The paper presented an analysis of the design and described how we were going to build this machine for ten thousand dollars. I remember getting into a furious row with an IBM representative because my analysis asserted that 50% of the cost of a machine at that time reflected selling costs and things like that. The actual component cost was only a small part of the total. With hindsight, I must admit that it was a very naive analysis. In any event, IBM didn't like what I said at all. I presented the paper and also talked about it briefly at a press conference at the UNESCO conference.

At the same time, I had permission to go out and recruit a programmer. So I found a young lady who had been working at Elliott Brothers as a programmer, machine language programming, of course, in those days. She was also interested in coming to Israel, so I recruited her, and within the next few months we became a four-person team. You'll find their names in two of the references, because subsequently, when the machine was designed and operational, we published two papers, one in the ACM Journal and one in the IEEE Transactions on Electronic Computers. The name of the machine was SABRAC. Precise references appear on my reference list. Incidentally, I can pull together copies of any papers that you want. I have most of my papers still, and I am sure I can put my hands on them.

Aspray:

Thank you. At the time, in Israel, what other computing machines were there?

Lehman:

There was only one other computer effort in Israel at the time, and that was at the Weizmann Institute, where they were building WEIZAC 1.

Aspray:

Oh, that's Gerry Estrin.

Lehman:

That's right, Gerry Estrin and Co. That's when I first met Gerry. Professor Pekeris was then Head of the Department of Mathematics, but he was a mathematician. Gerry was the engineer. Their machine was a copy of the JOHNNIAC 1, whereas our SABRAC machine was an entirely new design. SABRAC was indeed a very interesting machine, having a number of innovations in it, as can be seen from the relevant papers. Many design aspects were brought about by our budgetary problems but proved ultimately important in themselves. We had to think hard about how to get the most out of the machine while being severely restricted financially. If we had patented two or three of the ideas which we put into the machine, we would have been well ahead financially. Other concepts came from the Ferranti Pegasus machine. Some of the philosophy came from ICCE 2. So the SABRAC pedigree is traceable. For example, we had multiple-length instructions with individual bits controlling what happens. That made instruction decoding much cheaper and simpler, almost a form of microprogramming.

SABRAC Innovations

Lehman:

We also had two features in particular that were innovative at the time. In one case they were definitely new, and in the other case I'm not sure timing-wise. After we were already committed to a magnetic drum memory, magnetic core memories became available. We decided that we didn't want a machine based on optimum coding, which gets over the access time problem on magnetic drums. We couldn't afford a 1,000-word magnetic-core memory. That was an unheard-of luxury. We decided that we would put in just 128 words of magnetic core, which was just about the maximum that we could afford. But how could we use that effectively? We came up with a scheme, which was in fact paging and indirect addressing, or what the Manchester people called B registers, later termed modifiers. We reasoned as follows: "Supposing, instead of making the magnetic drum the main memory of the machine, we have 128 locations of core as main memory. We can then keep only a small amount of data and instructions in the memory, and bring in more from the drum as needed."

But then we realized, "The unit will still have to wait for the drum transfer." The solution? "Have 256 locations split into two sets of 128 locations each, and have them switchable such that at any given moment one is acting as the control and data memory for execution in the computer, while the other is being unloaded or loaded from the drum." But then we reasoned, "This will not permit continuity in execution from one memory block to the other." So the final scheme which we implemented was a seven-page memory, each page of 32 words, organized as two 128-word memory systems. Addresses 0 to 31 addressed the same physical core and were shared between the two systems. Addresses 32 to 127 were duplicated in the two systems. Thus systems I and II each had an address range of 0 to 127, with 0 to 31 in common. We then had a "change system" command to switch the roles of I and II, such that at any one time one system of 128 locations served as the main memory while the remaining 96 locations were available for autonomous memory/drum transfers in parallel with continued execution. That is, we had achieved parallel operation. We then reasoned further, "If one can have autonomy of drum transfers, one can also do that for input/output, so why not permit transfer between one or the other of the memory systems and the paper tape input/output devices, in parallel with program execution?" I believe that we invented this concept of autonomous or parallel operation in 1960. We must have been the first with this concept of autonomous parallel input/output: conceived in 1960, implemented and in use by 1962.
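[Illustrative model, not SABRAC's actual circuitry: the arrangement Lehman describes amounts to double-buffering the core store. Words 0 to 31 form a single shared page, words 32 to 127 exist in two copies, and the "change system" command swaps which copy the processor sees, leaving the other copy free for autonomous drum transfers. A minimal Python sketch of that behaviour:]

class SwitchableCore:
    """Two 128-word memory systems sharing their first 32 words."""
    def __init__(self):
        self.shared = [0] * 32             # addresses 0-31, common to both systems
        self.copies = [[0] * 96, [0] * 96] # two copies of addresses 32-127
        self.active = 0                    # which copy the processor currently sees

    def read(self, addr):
        if addr < 32:
            return self.shared[addr]
        return self.copies[self.active][addr - 32]

    def write(self, addr, value):
        if addr < 32:
            self.shared[addr] = value
        else:
            self.copies[self.active][addr - 32] = value

    def drum_load(self, block):
        # Autonomous transfer: fill the *inactive* copy from the drum
        # while execution continues against the active copy.
        self.copies[1 - self.active][:] = block[:96]

    def change_system(self):
        # The "change system" command: swap the roles of the two copies.
        self.active = 1 - self.active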

The concept led directly to a second innovation. We argued, "Given such a small memory space and a switchable page, absolute instruction addresses must not be embedded in the program text. We need to be able to place the program into any location of that memory. Why don't we have a register in which we put a base reference, so that by updating that number we can in fact place information, data, and programs at any free location within the total memory space, i.e. in any subset of the 128 locations?" So we invented the concept of B lines or modifiers. We called the modifier the "second address register." I believe that the same concept had been devised and implemented by Professor Williams at about the same time at the University of Manchester. They had named it the "B" line (on the CRT memory), where the A line was the accumulator. But we were not aware of their work.
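[Illustrative sketch only: the "second address register" Lehman describes is what would now be called a base or index register. Instructions carry only an offset, and the hardware adds the register's contents to form the effective address, so code and data can be relocated anywhere in the 128-word space without embedding absolute addresses. A toy Python illustration:]

MEMORY_WORDS = 128

def effective_address(offset, second_address_register):
    # The instruction holds only 'offset'; the hardware adds the modifier
    # (base) register, so the same program text runs wherever it is loaded.
    return (second_address_register + offset) % MEMORY_WORDS

base = 32                                # program relocated to start at word 32
assert effective_address(0, base) == 32  # first word of the program
assert effective_address(5, base) == 37  # sixth word, no absolute address in the text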

Aspray:

What kinds of literature did you have available to you from the States, or the UK?

Lehman:

If I remember correctly, we received both the Proceedings and the Transactions of the IEEE as well as Communications of the ACM. Thus we were fairly well in touch with the world.

Autonomous transfers, paging, and address modification were not the only innovations in SABRAC. The machine was the first in Israel to use transistors. WEIZAC 1, for example, still used thermionic valves. Tubes, you would say. We were also the first in Israel to use printed circuits. Within the Scientific Department there was a chemist commonly known as Pushkin — I didn't understand why at the time, but I soon learned. He was beginning to experiment with etching copper-clad circuit boards, so we designed our own printed circuit boards and placed our orders as the boards were designed. Their return was a different matter, with delivery taking many months. He always had a good excuse why a particular batch of printed cards wasn't delivered on time. By the end of the project we came to the conclusion that he was just totally incapable of delivering anything on time. But in the end we obtained them, and the machine was built and commissioned.

Amongst other things, this machine became a test-bed for the multiplier algorithm described above, which we first debugged using software simulation, as we did the remainder of the machine.

David Ben Gurion and SABRAC

Eventually SABRAC started operation. Even before that, Ben Gurion, the Prime Minister and Minister of Defense, came, as was his wont, once a year to visit the lab. In 1961 his visit included my lab and SABRAC. During that visit there were only three people in the lab: Ben Gurion, Shimon Peres, then Director General of the Ministry of Defense, now Foreign Minister (1993), and I. Ben Gurion asked me three questions. The first was, "How many people worked on building this machine?" And I answered, "Four people plus occasional technician's help." Then he asked, "What will we be able to do in twenty years as a result of having built this machine?" He clearly understood completely what the objective of building the machine was.

I said that I thought digital technology was the technology of the future, and that in twenty years' time everything would be computerized. He seemed satisfied with my answers, and then asked the third question, "Why did you come to Israel?", Peres having told him that I had come to Israel from England. Before I could reply, Peres said, "Oh, Lehman is religious. It must be his religious feelings that brought him here." Ben Gurion looked me up and looked me down, and asked, "But if you're religious, where's your yarmulke?" He was so small he couldn't see what was on top of my head! [Laughter] So that's my pleasant memory of Ben Gurion, from the only occasion that I met him.

The visit had direct consequences. Up to that point, although SABRAC was almost operational, we had worked without a budget. Everything we had got we had stolen, begged, or borrowed. But within three weeks of Ben Gurion's visit I got a letter saying I had been given a budget to finish the machine. We finally had a proper budget to work with, and not long afterward the machine entered service. A number of people started using it, particularly a scientist named Dr. Menat, who used it to design the infrared optical system of Israel's first land-to-sea missile, the Gavriel. So SABRAC was not just an experimental digital computer; it was actually used.

Some time in late 1962, Ben Gurion had retired, and the new Prime Minister, Levi Eshkol, came to visit the laboratory, a visit that once again included SABRAC. But this time the entire High Command came into my lab to see the machine. I had thirty or forty people crowded into the room. By that time the machine was in operation, and we had written a program to play the Israeli national anthem, which was one of the standard demos. Eshkol too asked me two questions. The first was, "How much did this machine cost to build?" The second was, "For how much could we have bought it in the United States?" End of story. Two or three months later, a high-ranking officer of the Defense Ministry in Tel Aviv came to tell me that it had been decided at top level that all work on digital computers was to cease. I was to be given the choice of either switching to the Operations Research group or resigning from the Scientific Department. Now I didn't feel able to switch to Operations Research. On the other hand, I had at the time an offer to join IBM Research at Yorktown Heights. So with a heavy heart, since neither my wife nor I wished to leave Israel, I decided to resign and join IBM.

Joining IBM at Yorktown Heights

Lehman:

The offer from IBM came about in the following way. In 1963, as a result of a brief discussion at the first IFIP (Paris) with Professor Beltrano of the University of Mexico, I was invited to visit the University and to give a three-month course on computer design. I took the opportunity of my life to pay a visit to the United States. When I requested permission to go, the representative of the Ministry said, "Yes, with pleasure, under the condition that you don't go job hunting." So I promised I wouldn't go job hunting.

I had a cousin at that time who was working in the Chemistry Department at Yorktown Heights. When I arrived in the States he suggested that I visit him there and that he would also arrange for me to visit the Computer Science Department. So I went to Yorktown Heights. When I got there my cousin said, "I suppose I ought to tell the Director of the Computer Science Department you are here. Then I'll try and find out who might be interested to meet you." So he called through to the Director and brought me up to his office. After the preliminary introductions, he said to my cousin, "Come back in fifteen minutes to pick Dr. Lehman up." However, after ten minutes he picked up the phone, called my cousin and said, "Forget it. I'll look after him for the rest of the day. Pick him up at 5.30 from my office."

Then I did the rounds, meeting various managers. At the end of the day the Director said, "I'm going to offer you a job here." I replied, "I'm on my honor not to go job hunting. I would have liked to work here, but under the circumstances I can't consider coming." He replied, "Look, anyway, we'll send you an offer." And indeed, I received the offer letter from IBM while in Mexico. I responded by saying that since I was on my honor not to go job hunting I had, reluctantly, to refuse the offer. However, shortly after my return I received the ultimatum described above: to switch to Operations Research or resign. I discussed the situation with my wife, who had absolutely no desire to go to the U.S., but I pointed out, "We really have no choice. I haven't got another job."

Israeli Abandonment of Digital Research

Lehman:

I should point out here that there was more behind the Ministry's decision than just shortsightedness.

Aspray:

Would you explain that? Please.

Lehman:

There are three separate explanations. I'll just talk about one of them. There were units within the Israeli defense organization who were at that time setting up a major computer center. They were hoping to buy a Philco 212, a machine just coming to market at the time. Those people feared that developing local skills to build computers would jeopardize their chance to obtain a budget to buy their machines in the States. This was, of course, nonsense because there was no way that we were about to compete with Philco or any other manufacturer. Nevertheless, they were a powerful force in their opposition to continuing digital computer development work in the Scientific Department, wishing themselves to be the focus of computer activity within the Ministry of Defense. Thus they played a part in forcing me out.

Arithmetic Unit Design & Parallel Computing

Lehman:

After lengthy discussion, my wife and I decided that we would join IBM Research at Yorktown Heights for two or three years if their offer was still open. After that time we would return to Israel, because there was no question in our minds that that was really where we wanted to be. So I wrote back to IBM and asked, "Is your offer still open?" and the answer was, "Yes." And in August of 1964, we sailed to the USA as immigrants and went to live in Monsey, New York, the town where my cousin and his parents also lived. We chose this location, some 30 minutes' drive from Yorktown where I was to work, because we needed ready access to Jewish schooling for our four children. Thus, I joined the Computer Science Department of IBM Research at Yorktown. It turned out that they were interested in my background in arithmetic unit design. The late Jack Bertram, who later became an IBM vice-president, was at that time Head of the Computer Science Department. He, together with Herb Schorr, now with ISI in Los Angeles, wanted to build a supercomputer, and wanted me to design the arithmetic unit for it.

What was this background in arithmetic units? To explain this, I must backtrack once again. I have already mentioned the first UNESCO conference in Paris in 1959. I also went to its follow-up, the first IFIP Congress, held in Munich in 1962. For this conference, I had prepared a paper, "A Comparison of Carry Speed-Up Circuits," which reflected my continued interest in arithmetic units. By that time there were eight or ten different ways of speeding up the basic addition operation in electronic digital computers. The basic problem in parallel arithmetic unit design is to achieve high-speed propagation of the carry through a long chain of bit positions; in the worst case the carry has to propagate from the least-significant to the most-significant end of the addition mechanism. Ultimately it is the speed of the carry propagation mechanism that determines the speed of the entire arithmetic unit.

At the Munich IFIP Congress, I presented a paper comparing and evaluating different methods of carry speed-up. On the way home to Haifa, I remember thinking, "Hey, this is crazy! If I look at the most effective parallel arithmetic units, then only some 33% of the hardware is actually doing arithmetic and 66% of the hardware is concerned with speed-up. In other words, the fastest arithmetic unit is at least three times as expensive as the slowest unit simply because of the carry speed-up cost." So I said to myself, "Hey, wouldn't it make more economic sense if, instead of having an n-bit-wide parallel arithmetic unit in a parallel machine, one had n serial machines and built a high-level parallel machine, that is, a machine capable of executing up to n instruction streams simultaneously?" Remember, of course, that at that time machines were still built from discrete components — not from integrated circuits. The same argument would not hold true today.
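[Illustrative aside, not taken from Lehman's paper: the worst case he alludes to is adding 1 to a word of all ones, where a ripple carry must travel through every bit position. Carry-lookahead hardware removes that serial chain by computing per-bit generate and propagate signals and forming all the carries from them, at the cost of the extra logic Lehman is objecting to. A small Python model of the idea, with bits given least-significant first:]

def add_with_lookahead(a_bits, b_bits):
    # generate: this position produces a carry regardless of carry-in
    g = [a & b for a, b in zip(a_bits, b_bits)]
    # propagate: this position passes an incoming carry along
    p = [a ^ b for a, b in zip(a_bits, b_bits)]
    c = [0]                                  # carry into bit 0
    for i in range(len(a_bits)):
        # c[i+1] = g[i] | (p[i] & c[i]); in hardware this recurrence is
        # unrolled so no carry has to wait for the one below it
        c.append(g[i] | (p[i] & c[i]))
    total = [p[i] ^ c[i] for i in range(len(a_bits))]
    return total, c[-1]                      # sum bits and carry out

# 0111 + 0001 = 1000: the carry crosses every bit position
assert add_with_lookahead([1, 1, 1, 0], [1, 0, 0, 0]) == ([0, 0, 0, 1], 0)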

Thinking about the problem, I realized that if you look at the problem of parallelism, whether it be at the bit level at one extreme or at the job level at the other extreme, then a major part of what you're paying for is the cost of the communication overhead. And that overhead is not proportional to the level of parallelism: the higher the level of parallelism exploited, the smaller the relative amount of communication needed. The correct way of exploiting parallelism is not to start at the bottom, at the bit end of the design, but to start at the top, at the job end, the workload. Then when you've got as much parallelism as you can running jobs in parallel — no one had ever heard of multiprogramming in those days — you go one level down and see if you can break up individual jobs into tasks, and so on to lower levels. Only as a last resort, if you still need that extra speed, do you parallelize at the bit level. I decided that the path of the future was toward parallel machines. So I came to IBM in New York with a desire to investigate parallel computing, parallel processing, not to design ever faster, high-speed, parallel arithmetic units.

Now I was not the first to conceive of parallel computing. Slotnick had already proposed his Solomon machine. But the Solomon machine was a vector processor, not a parallel processor. My objective was to build a parallel processor, a system that could break down a large computation into (almost) independent tasks and execute those concurrently, in parallel. So I came to IBM with this objective. Only when I got to Yorktown did I discover that I had been hired to design a high-speed, pipelined, parallel arithmetic unit. I was hostile to the whole idea, since I was convinced that the whole approach to higher-speed computing had to be top-down, starting with multiple processors and multiple job and task streams. So I fell out with Jack Bertram and was sort of left to myself, waiting for people to give me work.

Gradually I became friendly with Don Senzig, who was also interested in parallel computing. Together with Jack Rosenfeld, we wrote a number of papers on the mathematical aspects of parallel computing. Meanwhile, Bertram and his "right hand," Herb Schorr, had taken a large section of the Department to California to build a highly parallel machine. I was left behind because I wasn't interested in designing the ACS (Advanced Computing System) arithmetic unit, believing it to be the wrong way to go in search of performance. Incidentally, the ACS project failed for a number of reasons, including the fact that Gene Amdahl, the chief architect on the project, strongly disagreed with Bertram over the best way forward. So he, Gene, left IBM and went on to set up his own company to build machines the way he believed they should be designed and built.

Let me recap. Gene Amdahl was initially the chief architect of the ACS project, but he and Bertram quarreled over what was better, pipelining or some other technique. I differed with both and argued that neither technique represented the best way forward, for several reasons. These included the problem of compiling for a pipelined machine so as to keep the pipe full and fully utilized, which was an unsolved, and probably unsolvable, problem. Thus the pipeline could not be exploited effectively, and the discrete components used before the advent of integrated circuits to construct high-speed parallel arithmetic units were exploited very inefficiently. In pursuit of my own vision, I took on a Ph.D. student to investigate parallelization in FORTRAN compiling, and at the same time I continued work on a project proposal to build a parallel processing system.

Project IMP

Lehman:

In 1965, I was given the go-ahead to start a project called Project IMP, an acronym standing for Integrated Multi-Processor. Between 1965 and 1968, I was manager of Project IMP. That project ran through three phases. The first phase was designing the hardware for a machine whose operation we simulated, both to evaluate the designs and to gain real experience in parallel computing. The idea was to build a machine with a variable number of processors, probably 16 or 32, accessing some 64 memory modules. But they had to access these concurrently, so we looked at the memory interference problem and various other things.
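[A toy model, purely illustrative and not Project IMP's actual simulation: the memory interference question is how many processor requests actually get served per cycle when several processors pick memory modules at random and collisions force all but one requester at each module to wait. A short Python sketch:]

import random

def served_per_cycle(processors=16, modules=64, cycles=10_000):
    # Each processor issues one request per cycle to a random module;
    # only one request per module can be served in that cycle.
    total_served = 0
    for _ in range(cycles):
        requests = [random.randrange(modules) for _ in range(processors)]
        total_served += len(set(requests))   # distinct modules hit = requests served
    return total_served / cycles

print(served_per_cycle())   # noticeably below 16 once collisions are accounted for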

After about one year's work, we had achieved what we considered a satisfactory design. For example, we proposed the use of a cross-bar switch for processor/memory communication, and also an "interaction" communication device for processor/processor communication to facilitate workload allocation. We also proposed a mechanism, addressed at the instruction set level, for task generation, allocation, and merging. I would claim that most of the concepts of modern parallel machines were in that architecture. At the end of the first year, we realized that we had designed a machine on paper that could not be built with the technology then available. But we also realized that we had solved the wrong problem. Given the challenge, any fool could put together hardware to build a parallel machine. The problem is not building the parallel machine, but driving it to keep it busy.

So we spent the next year or so designing what we termed an Executive System. We created an executive system to drive the hardware that we had designed. At the end of a year or a year and a half, we realized that we had an Executive architecture, but that once again we had solved the wrong problem. Given the challenge, and recognizing that one needs an executive system to drive, control, and exploit the hardware, any (intelligent) fool can specify or design it. The real problem is how to hand on the design experience from one system to the next. So we spent the next year or so looking at design methodology in the context of parallel machines. This work produced one or two rather original papers, in particular the IFIP 68 paper by Zurcher and Randell. The IMP project as a whole was first publicized in a 1966 paper, "On Parallel Computing and Parallel Computers," in a Special Issue of the Proceedings of the IEEE on digital computers and computing. It must have been one of the first parallel computing papers ever published, but it is never referenced to this day. Nobody remembers it any more. Ah, there it is, yes, Proceedings of the IEEE, Volume 54, Number 12, 1966.

1968 Simulation Symposium

Lehman:

An interesting innovation came about during that project. I had originally used simulation in Haifa to debug the multiplier on SABRAC. Subsequently I benefited from that experience by using simulation in Project IMP to look at the problem of memory interference and at other aspects of the Executive system. About 1968 I came across a publication by somebody elsewhere in IBM who had also used simulation in hardware design. I called the author up and I said, "I see you use simulation for hardware debugging and so do we in Project IMP. Why don't we get together and talk about what we do — maybe we've got a lot to learn from each other." He said, "That's fine, but I know another group which is also doing the same thing. Why don't we involve them as well?" So we called them up, and they said, "Ah but we know a fourth group."

Eventually we identified over a hundred different groups in IBM, all of whom were using simulation in one way or another, and none of whom had talked to anybody else. As a result, I organized a four-day symposium called SIMSYMP 1968, which was a get-together of all the groups in IBM using simulation in some way or another. There were three volumes of proceedings. The papers included the one by Brian Randell and Frank Zurcher called "Multi-Level Modeling," mentioned above. This most original concept subsequently became more and more important. It was a precursor of Niklaus Wirth's concept of stepwise refinement as a methodology of program design.

In 1968, the then Director of the Department of Computer Science, Henry Ernst, resigned. Herb Schorr, whom I had antagonized by opposing his concept of arithmetic unit design, came back as Director of the Department. He had learned his management skills from Jack Bertram, and one of his principles was that when one takes over a Department, one gets rid of all the managers then in place. Schorr, in effect, closed down Project IMP. He told me about the closing down of the project on the day I came back from that symposium, and I was essentially told to find another job for myself within Research.

Programming Productivity Research

Lehman:

Two or three days later, in November 1968, Dr. Arthur Anderson — a physicist, not an accountant — who was then Director of Research, came up to me. He told me that he had just come across a recent paper in which the authors Sackman, Erikson, and Grant of SDC claimed that Bell Labs, in its Indian Hill work on ESS-1, had increased their programming productivity by a factor of three by switching from paper-and-pencil programming to interactive programming using IBM's TSS-67. Anderson's remarks to me were, and I quote, "Look, there's this paper by Sackman et al. Anything that Bell Labs can do, IBM can do better. Would you please spend several months, or maybe even a year, looking at programming within IBM and coming up with some research projects to help IBM improve its productivity, its capability in programming?"

It was this assignment that changed my career path and set me off in a direction which I have now followed for over 25 years. I started the investigation by going to Indian Hill and talking to some of the people involved. I came away convinced, at one level, that the claim to have increased programming productivity by a factor of three was correct. But what I also realized was that while they had achieved this threefold increase in the rate at which locally correct code was being generated and tested, the consequence of each programmer's local focus and concentration on the code on his screen was that the system as a whole was being destructured. They were producing program text and code in exquisite detail that appeared perfect when tested on the spot. But they were losing sight of the forest for the trees. Their joint effort was creating a system which would ultimately become unmaintainable, because individual programmers took little notice of what others were doing. While the net productivity measured in terms of lines of code per programmer per day had gone up, the gross productivity for the project as a whole, and particularly the gross productivity as measured over the lifetime of the software product, had probably gone down.

The visit to Indian Hill initiated a nine-month study that was summed up in a report, "The Programming Process." Every word of that report is as true today as it was when I wrote it over twenty-five years ago. This is not the only time I have had the fortune — good fortune, I'm not sure — of being ten to twenty years ahead of my time and, as a result, of having the work ignored. Later on, what I have said has been rediscovered by others. I shouldn't say these things, but I will say them. Are you aware of this book?

Aspray:

I don't know this book.

Published Work on Programming

Lehman:

Okay. This is a book by Les Belady and me. You know the name of this book?

Aspray:

Yes, I do.

Lehman:

It's called Program Evolution: Processes of Software Change. One of the papers in there is that 1969 report, "The Programming Process," inspired by Sackman's paper and my subsequent study. So this year, 1993, is my silver jubilee in Process. It was a remarkably insightful paper. I myself didn't understand everything I was seeing until many years later. I spent nine months, doing the work, talking to hundreds of people, thinking a lot, writing it all up, producing a report, and then — silence. Nobody was interested. It was simply before its time. People didn't understand what I was saying, and it just didn't sink in any way. By the time the report was completed, there had been a change of director at IBM, and the new man, Dr. Ralph Gomory, a mathematician, just wasn't interested in the software problem and software technology. So, I was then shifted into the Computer Systems Department to undertake a number of planning assignments.

At the same time, on the basis of my findings during the Programming Process study, I started working, first on my own and then together with Les Belady, on a discovery which we called "Programming Growth Dynamics." This came directly out of my observations and data on the growth of the IBM OS/360 operating system. It showed that the growth of a piece of software seemed to have a disciplined dynamics of its own, which could be analyzed and used to plan the further development of the system and its evolution. Eventually we changed the name of the study area to "Evolution Dynamics." We conceived of the word "evolution" in relation to software. Today the phenomenon is widely recognized, but at that time it was still fairly unknown. I expect that we were the first people to use the term. It was actually Les, while he was visiting here at Imperial College, who said that we had to change the name from program dynamics. He said, "Why don't we just call it program evolution, because that's what it is." So that idea belongs to him.
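A minimal sketch of the kind of release-by-release analysis this implies, using invented module counts rather than the actual OS/360 data, might look like the following: take the size of the system at successive releases, look at the incremental growth, and use the historical trend as a rough check on plans for the next release.

```python
# Illustrative sketch in the spirit of growth-dynamics analysis.
# The module counts are invented, not measured data.

sizes = [900, 1250, 1600, 1850, 2000, 2150, 2250]   # system size at releases 1..7

increments = [b - a for a, b in zip(sizes, sizes[1:])]
mean_inc = sum(increments) / len(increments)

print("incremental growth per release:", increments)
print(f"mean increment: {mean_inc:.0f} modules")

# A plausible planning rule of thumb: flag a planned release whose growth is
# well above the historical mean, since the trend suggests it may be over-ambitious.
planned_growth = 500
if planned_growth > 1.5 * mean_inc:
    print("planned increment is well above the historical trend")
```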

Job Search and Imperial College Chair

Lehman:

Reverting to the historical review, in 1971 or thereabouts Ted Climis wrote a letter to the Director of Research asking whether he could borrow me for three months because the Poughkeepsie Development Labs were interested in some of the work that I was doing on program evolution and on the growth of OS/360. In a letter that was later leaked to me, Gomory wrote back to Climis to say, "Of course, you may have Lehman, and while you have him, why don't you keep him?" So I came to the conclusion that IBM had no future for me. Once a letter like that goes on your file — forget it.

By chance, about that time my wife and five children visited England for the summer, for the first time since we had left there for Israel in 1957. My wife had never been happy in America because she had been wonderfully happy and relaxed in Israel. On several occasions I tried to get back to Israel, and in particular to return to the Ministry of Defense Laboratories in Haifa. But I was essentially told that they didn't have a job vacant which was senior enough for me. My answer was, "I'm so anxious to go back to Israel that I will take anything, whether it is senior enough or not." Their answer to that was, "Sorry, if you come back to a job which is below where you ought to be, you won't be happy, and your boss won't be happy. Your boss will feel threatened, and you will be unhappy because you haven't got the responsibility you want, so no, I'm sorry, we can't take you back."

I also applied to the Technion in Haifa to join their faculty. The computing science activity was then in the Department of Mathematics, and their response was "No, you are a hardware man, and we are not interested in hardware." Computer scientists have always been plagued by this concept of computer science as a branch of mathematics. So I was turned down by them. The Weizmann Institute was also not interested in me. I had previously worked for the Weizmann Institute while with the Ministry of Defense on the design of the arithmetic unit for the WEIZAC-2 machine. Again, at that particular time my ideas were different from those of — what's his name?

Aspray:

Estrin?

Lehman:

No, it wasn't Jerry Estrin, it was Shmill Ruhman. Estrin was acting as a consultant, but it was Ruhman who was not happy with my concept that a computer was only as good as the software that drove it. We just didn't see eye to eye. So I couldn't get a job in Israel. Thus when in 1971 my wife and children visited London, where I joined them briefly in August on my way to the IFIP '71 meeting in Dubrovnik, Yugoslavia, she said, "Manny, I'm not going back to America. I'm so happy here in England I want to stay." I said, "I'm terribly sorry, I've got no job here." After some friendly discussion, we agreed that she would stay in England for a year. I would go back to IBM in the States, and if during the course of the year I found a job in England, then I would join her in England. Otherwise she would come back to the States with the children.

While I was in Dubrovnik, someone mentioned that there was a chair vacant at Imperial College. So when I got back to London I said, "Look, I may be closer to a job in England than we thought. I will see whether I could be considered for that chair." So I called up Imperial College and was told that the Head of what was then the Department of Computing and Control was away on holiday, but that he would be back in two weeks' time; I could call him then and talk to him. I said, "Well, that happens to be the day that I am going back to New York, but I'll call him before I leave for the airport." When I called him he said, "Thank you for calling. When I went away on holiday I thought that the vacancy had been filled, that we had made an appointment. I come back today to find that we still have a vacancy."

That chair had originally been filled by Dr. Stanley Gill. As already said, the first digital computing activity at Imperial College had been by Dr. Keith Tocher, joined by (later Professor) Sydney Michaelson and (now Professor) Tony Brooker, working in the old Huxley Building in the Mathematics Department. I joined them in 1953 to do my Ph.D. on the design of a new electronic computer, the ICCE 2. That work came to an ignominious end in 1957, when Professor Jones, then Head of the Department of Mathematics, said mathematicians shouldn't get their hands dirty, and closed down the project. Fortunately for me, I had by then completed my Ph.D. There was no further work with computers at Imperial until 1962 or 1963. In that year IBM presented a 709 to the College on condition that they open a Center for Computing and Automation. This was opened under the directorship of Dr. Stan Gill, who was there for two or three years and then resigned to go into consultancy.

His place was then taken by Professor John Wescott, a control engineer. He transferred the Control Engineering activity from Electrical Engineering, brought it into the Center for Computing and Automation, and renamed it the Department of Computing and Control. However, there was only limited computer science activity from 1967 to 1971-1972, since they couldn't find anyone willing to fill the chair left vacant by the resignation of Professor Gill. In 1972 it was decided to fill the vacancy by appointing Professor Steven Goldsack, then a Reader in the Physics Department, who had significant experience in the use of computers in physics. When Wescott came back from his holiday, he found that the Department had been awarded a grant from the Control Data Corporation for a new Chair. Thus he had two chairs to fill and not one.

So I was appointed to the Department chair and Steven Goldsack to the Control Data Chair. But this did not happen immediately. That morning on the phone Wescott said to me, "Come back and I'll find out. We may have a vacancy after all. Why don't you send me your CV?" I said, "Well, I'm going to be back in England in six weeks' time. I'll send you my CV as soon as I get to New York, and then, if you're interested, we can talk when I am back." I had my interviews and in due course was told that they were interested, but of course it had to go up through the University Senate. In fact, at Imperial College at that time the appointment had to be approved by the Privy Council. In due course I was told that I had been appointed to the Chair.

There were thus a number of factors that caused us to return to the U.K. One was the desire to go back to Israel, and at least England was halfway. Second was my wife's unhappiness in America. Third was the realization that I was unwanted at IBM.

I returned to London and joined Imperial College on Monday, March 7th, 1972. I remember that date because I had been asked by Ted Climis, who was then the Director of Software in the SDD division of IBM, to give a final presentation of my work to the software managers before I left. That presentation was on Monday the twenty-ninth of February, 1972, in San Jose, California. It was a fairly important presentation because I was able to show that OS/360, or rather OS/370 as it was known by then, had become top-heavy, and that they would not be able to continue developing it in the way they had been doing. They would have to split it somehow, or it would "break under its own weight," as it were. Nobody admitted this to me, but several months later they announced that instead of releasing release twenty-two or twenty-three of OS/370 they would in fact have two new releases, called VS1 and VS2, which separated the larger, faster machines from the smaller machines in the 370 range. Now, whether that decision was taken as a result of my presentation, or merely from evidence that management had itself deduced, I don't know. I was no longer at IBM, but certainly my analysis was accurate.

So two days after the San Jose meeting I returned to England, and I started at Imperial the following week. I then became Head of the Computing Section. The Department had started a master's course two or three years earlier, and had started on the design of an undergraduate course, 60% of which was mathematics. I tore that up and said, "We're starting again, I don't like that at all." In accordance with British tradition, as Head of the Section I was "the boss," so we redesigned the course. Wescott had already decided that the Department was not about to enter into competition with the Manchester hardware people, and that we would therefore concentrate on software, methodology and so forth.

This of course was completely in accord with my own views and interests. We designed an undergraduate course which was quite original in many different ways, and that undergraduate course began in 1973. Wescott's appointment came up for renewal in 1975. There were really only two competitors for the position. One was Wescott, who wanted a renewal, and the other was I. Essentially my position was undermined because, while Head of the Section, I had instituted a system of appraisals that reviewed everybody's performance once a year. I had learnt the practice at IBM and had come to respect it. People thought that I was trying to introduce industrial IBM attitudes to management, and they resented that. So one or two people in particular went to the Rector, Sir Brian (now Lord) Flowers, and said, "Nothing doing, we don't want Lehman." While the decision was the Rector's prerogative, in the event Wescott was reappointed, and I continued my work.

In 1979, the Headship of the Department was up for appointment again. This time I was appointed. In asking me to accept the position, Flowers said, "Look, you're running the Department, but I would advise you to get rid of the Control Section people. Send them back to Electrical Engineering. We need a Department of Computing if this college is to be established in the computing field. We don't want this mixed business. And the control people are needed in Electrical Engineering." So a year later we became the Department of Computing, and the Control staff went back to where they had come from.

Imperial Software Technology & ISTAR

Lehman:

One of my primary objectives on assuming the Headship was to design and start an undergraduate degree in Software Engineering. This was in 1979-1980. Again, fairly innovative — absolutely innovative. We spent a year or two designing that course, although it didn't in fact begin until September 1985, by which time I was no longer Head of Department. Now, while I was preparing for that course, I and one colleague in particular concluded that we couldn't claim to produce software engineers purely on a theoretical basis. Software engineering as an engineering course has to include industrial experience. But at the same time I also realized that there was no company in the UK sufficiently advanced in its software engineering approach that I would send our students there with any confidence. The only thing they would learn would be how not to do things.

Therefore I went to the Rector to propose that the College set up an Advanced Software Technology company. At the Heads of Department meeting the day before, he had described a recent visit to MIT, where he had observed that university-owned companies played an important role in advancing academic status, both financially and technically. He felt that maybe we should also be looking at setting up various companies. So I went to Lord Flowers and said to him, "Hey, look. Funny you should have said that at yesterday's meeting, because I believe that we ought to be setting up a software engineering company, both to exploit our own research and also to act as an employer to our software engineering students when we eventually get off the ground." Much to my surprise, he said, "Well, prepare a proposal. I'll put it to the Governors to consider." In due course I had a proposal ready. I put it to him, it got approved, and he then said, "Okay, go on. Set it up." I thought they would take it off me, but they didn't.

Over a two-year period we laid the groundwork for the company. I got three major organizations as investors. One was the National Westminster Bank, one was Plessey, who were then an independent company in defense electronics, and the third was a consulting systems house called P.A. International. They became the investors in a company which we eventually called Imperial Software Technology, or IST for short, and which started operation in 1982. We very quickly got into the idea of building what in today's terminology we would call a support environment. In European terminology it's an integrated project support environment, or IPSE.

That led to the design of a system called ISTAR, which was important because it was the first major programming or project support environment and, in its time, the most advanced. When ISTAR started to work, we eventually got four or five customers, including British Telecom, Motorola in the States, and Plessey themselves. But each sale, instead of going through in two or three months, took one to two years. While technical people were quickly convinced that ISTAR was the best thing since sliced bread, their management, the executives, responded by saying, "For years you have been telling us that all you need for good software development is a few prima donnas, a few brains. Now you're telling us to spend a million dollars or a million pounds or something on software?" It was a very difficult selling job, and the typical sales cycle was more like three years than the six months predicted in the business plan.

There were other problems with selling ISTAR, which represented a totally new concept. In addition, our first sale had been made to British Telecom. BT was then being privatized and was changing its management every six months.

This led to a continual change of direction, and we had to change with them. The net result was basically a disaster from a business point of view. By that time one of the original investors, PA, had sold its share back to the company, and two venture capital investors had come in. The investors then decided on a change of direction and brought in an American, whom I won't name because I'm about to say that if there was ever a Mafia type, he was it. He claimed to have a very good track record in "saving companies that were headed for disaster." But he came in not understanding the business, which subsequent experience showed was a real problem. He didn't want academics around, nor did he want the founder around, hence one of his first acts was to fire me. I was then a member of the IST Board and a consultant to the company.

As founder of the company I had been its first chairman until, after some three years, the financial troubles became dominant, at which time it was realized that someone more experienced in finance than I was required at the helm. But I remained a director until the individual who was supposed to turn IST around was brought in. He immediately forced my resignation from both the Board and the company. I then returned to the Department of Computing, in the sense that I was given an office but no pay. At the same time, the other members of the IST senior management also had no wish to stay with the company, and all resigned. But in ISTAR, IST had a lead technology, although some of its weaknesses were by then becoming apparent. It should have been able to recover. But this individual who had taken over was not able to achieve this, and in his year with IST nearly drove it into liquidation. Thus after one year he was, in turn, dismissed. Someone else came in, turned the company around, and IST still exists, though rather smaller than originally envisioned. It has been rather successful in producing a product called X-Designer, which is very widely used for X Windows interface design.

Return to Dept. & Academic Politics

Lehman:

In any event, I came back to the Department in 1988. But I have glossed over what happened in 1984. When my first term as Head of Department expired, I had hoped to get a second term because I wanted to finish the job I had started. My main battle had resulted from the fact that the Department had been viewed as a paper-and-pencil, mathematics-type Department. Budgets were based primarily on so much per student head, and the rate per student head in a mathematics Department was about half of what engineering Departments were getting and a quarter of the physics rate.

Aspray:

I see.

Lehman:

I had to convince the College that computer science, as we ran it, was an experimental engineering science, and that our needs corresponded to those of, say, a physics department, and were certainly greater than those of the engineering Departments, because the lifetime of our equipment was much shorter than the lifetime of engineering equipment.

Aspray:

Right.

Lehman:

I battled with all the Department Heads, because any more money that we got meant they would get less. So, inevitably, I incurred the antagonism of the other Departments, because we were seeking to grow too fast, both in size and financially. College politics is very much one Department against the other, as it is in most universities. A second problem that proved fatal to my ambition for a second term was the fact that the Prolog, or Logic Programming, group wanted to set up a School of Logic Programming. My stand was that this was too narrow a topic to become an independent Department. The then Head of that particular group went to the Rector and said, "If Lehman is reappointed Head of the Department, I'm going to take this whole group out of Computing, because he won't agree to our independence." Now the Rector, while agreeing with my view, interpreted that as a sign of division within the Department. This was totally untrue; the Department had never been so solid as it was at that time. Morale was tremendously high. But he was concerned that I had apparently failed to hold on to a group which he saw as nonconformist, at a time when the Japanese fifth-generation plan, largely centered around the use of Prolog, had not yet been discredited.

The Rector further recognized that the Department was being held back by the Imperial College Mafia, the large Departments (Physics, Chemistry, Electrical Engineering, and Mechanical Engineering), because, as they saw it, our growth would act to their detriment. As the Rector said to me, the only way to fight the Mafia is to appoint a Mafioso to lead the fight. As if all this was not enough, I had further antagonized other Heads by arguing that programming was a specialized topic requiring specialist teaching. The Department of Computing, now being mature, should be given the responsibility for all programming and computer familiarization teaching within the College, just as all mathematics was taught by the Mathematics Department. Other Departments didn't want that, because it meant they would lose income and because they felt we didn't understand their computing needs. Thus they were perfectly satisfied to teach the use of computers by giving their students FORTRAN manuals and saying, "Go learn FORTRAN."

The net result of all these problems was that I was not reappointed. Instead, the new appointee was the then Head of the Department of Electrical Engineering. Now, this gentleman had on two separate occasions said to me that he believed firmly that there was neither an intellectual nor an academic justification for a Department of Computing, because computers were just electronic components and therefore ought to be a part of Electrical Engineering. So when he was appointed as Head in my place, I said, "No, thank you very much. I can't work with a man like that. He should never have been appointed."

I therefore took "early retirement," much too early, because I had no intention of retiring. My successor did give me the special 40%, two-year contract that is quite standard in the U.K. for early retirees. At the end of that time he said, "Nothing for you, I don't want you in the Department any more. Good-bye, it was nice having you." I then moved over full time to IST, and took over responsibility for the company's consulting activity in addition to my initial responsibilities as Chairman, until the break-up came in 1988-1989. In 1984 I had been appointed an Emeritus Professor of London University, in the Department of Computing, and this entitled me to office space. So I rejoined the Department in the sense that I had an office and a title but neither duties nor salary. Thus I was free to follow my research interests.

My successor's five-year term of office as Head of Department came to an end, and he was not reappointed. One of his successor's first acts, even before taking up the post, was to ask me whether I would rejoin the Department formally. I said, "Yes I will. I'd be delighted to." So in 1989 I came back into the Department on a 40% basis, continuing my research but also doing a small amount of teaching. That was the situation until about two years ago. In 1991 a group within the Department became part of an EEC research program called ESF, the Eureka Software Factory program. I became the director of that research group. Then I went back up from a 40% salary to a 100% salary, because what wasn't paid by the Department was paid by the research project.

Aspray:

Right.

Lehman:

This lasted for about one year, and then in late 1992 my Department contract was terminated. As a result of financial squandering, the previous Head of Department had badly overspent, and the Department was very much in the red. The Pro-Rector told the present Head that he would not permit any new appointments in the Department as long as Lehman was around, because he had been there too long. He saw me as redundant from a financial point of view. Once again I was chucked out, but as an Emeritus Professor I was allowed to keep my office. Moreover, my ESF contract is still in force, so that I remain on a 60% salary until March 1994, when the ESF project comes to an end. Then, unless I find a new source of income, I will have a problem. But be that as it may, that's irrelevant to history. I always appear to come to a disastrous end. I don't know why; there must be something about me, because I'm really a friendly sort of soul; I don't think I have ever had an enemy in my life! Anyway, things have always picked up again, and in the long run each such disaster has proved to have been for the good after all.

Software Process Engineering

Lehman:

So that really brings us to the present day. Now, let me just say a few more words. I'm sure that there are bits of history that I've forgotten, but maybe in our subsequent discussion we'll bring them out. As I mentioned, I started off in arithmetic unit design (my Ph.D.). From there I advanced to the design of an entire computer, the SABRAC. From machine design I went to system design, first with SABRAC, in terms of its operating environment, and later with IMP, designing the whole system. I passed on to methodology — the whole concept of it — and in particular software technology, through my "Software Process" study at IBM in 1968-69. For the last twenty-five years the study of the software process has really been my thing. What is interesting is that in my own evolution, my own development, I have gone from the inside of the system right up to the top level, where today my main interest is what I would call the expanding-universe theory of computing science.

I see today's computing science as being a technology without a scientific framework, which is fatal. I believe that a lot of the lack of progress, particularly in software technology, has been due to the fact that we don't have a significant scientific framework that gives us the invariances and principles to guide the advance of the technology. For a number of years now I have had the ambition to try to bring them together — because I think that all of the elements of the science are there. They need to be brought together. That was one aspect of my interest which I hadn't really actively explored, except that I'm right in the middle of a paper, which I plan to call, "Software Technology: Theory and Practice." It brings that together, and could eventually, if God gives me the strength and the life, end up as a book. But I don't know whether it will ever get to that.

In the meantime, I've been an initiator of, and an active participant in, the software process revolution. In fact, in 1983 one of my first activities at Imperial Software Technology was to conceive the International Software Process Workshop. My colleague in planning and organizing the first workshop was Vic Stenning. Does that name mean anything to you?

Aspray:

No.

Lehman:

You should learn something about him. Vic Stenning is an interesting person. He has a degree in computing science and has worked in industry since graduating in the late '70s. In 1980 he came to Washington for a year, brought over by the Department of Defense to work with John Buxton on Stoneman. They co-authored the Stoneman document, which laid the foundations for support environments in the Ada context. Stenning is very, very bright, one of the brightest people I know. Very, very clever. We're now very close friends as well, and I am working together with him on Project FEAST, which I will mention in a few minutes.

In 1983 we set up a workshop, which we called the Software Process Workshop. It was the first of what has become a periodic event sponsored by the ACM, the IEEE, the IEE, and maybe also the BCS. It's a very useful workshop. It has been held at annual or eighteen-month intervals ever since; this year's will be the eighth or ninth. For some reason I stopped going to them after the third or fourth, and I believe that, unfortunately, the series has been heading in the wrong direction. It has been hijacked by a couple of individuals and by the concept of process programming. There was a famous debate between Leon Osterweil and myself at the ninth ICSE, at which Leon gave his first public presentation of the concept of process programming and I tried to show that it wasn't what he claimed it was. Those two viewpoints have existed ever since.

So I have been seen to be interested in the software process since about 1982. In fact, I've been interested and involved in Process since 1969. My original IBM study and report, which brought me into this whole business, was called "The Programming Process." And everything I have done since then has been along those lines, but it has only become fashionable in the last five or six years. Vic Stenning and I played a major part in making it fashionable by creating the Process Workshop. By now there's also an International Conference on the software process; I think there have been two so far.

As for me, I have continued thinking about these problems and making progress. Earlier this year there was a workshop in Montreal on process evolution, at which I gave one of the keynote lectures. That, together with a request from John Marciniak that I write an article on evolution in software technology for an Encyclopedia of Software Engineering that he is preparing, to be published by Wiley, got me much more actively involved in thinking about evolution per se again. Just in the last few months I've suddenly come to a very profound realization. As far as I'm concerned it's a shattering realization, something which I've been blind to for twenty-five years and should have seen twenty-five years ago. I'm now really bringing together twenty-five years — forty years of experience. All of that is culminating in what I hope will be a very major project, although I'm a little bit desperate at the moment because I haven't got funding yet. If we — you're a historian. What is your academic background?

Aspray:

Mathematical logician.

Lehman:

Mathematical logician. Do you know Dov Gabbay in this Department?

Aspray:

No. I don't.

Role of Feedback in Software Development

Lehman:

Oh, okay. You should do. He's also a logician, very good. Anyway, every engineer worth his salt, and a lot of other people, knows what happens in a feedback system, say a hi-fi amplifier, whose main characteristics are determined by the forward amplification path. You then provide positive and negative feedback to flatten the frequency response and to obtain the other response characteristics desired. Once the system has been designed, if you subsequently try to change its characteristics by merely changing the forward path, one of two things will happen. Either the thing will blow up because it goes unstable, or nothing will happen because the feedback is designed to keep the characteristics stable. Now that general characteristic, stability, is true for all systems with negative feedback. That's one of the properties of feedback systems.
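The point can be made numerically with the textbook closed-loop gain relation; the sketch below is standard feedback theory rather than anything specific to the interview, with the feedback fraction chosen arbitrarily for illustration.

```python
# Closed-loop gain of a negative-feedback amplifier: G = A / (1 + A*B),
# where A is the forward-path gain and B the feedback fraction.
# With large A, G is set almost entirely by the feedback path (about 1/B),
# so even large changes to the forward path barely show up externally.

def closed_loop_gain(A, B):
    return A / (1 + A * B)

B = 0.01                              # feedback fraction -> target gain of about 100
for A in (10_000, 30_000, 100_000):   # forward gain varied by a factor of ten
    print(f"A = {A:>7}: closed-loop gain = {closed_loop_gain(A, B):.2f}")
```

A tenfold change in the forward gain moves the closed-loop gain by less than one percent, which is the invariance the amplifier analogy is pointing at.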


Now, what I have been saying for more than twenty years is that the programming process is a feedback system. Let me just show you the charts here. The programming process produces a program that is a model of some process or events in the real world which you are trying to control. By running your model and reading its outputs, you may then apply those outputs in the real world to control or guide the process. Therefore, if you look at the process of program development, you start off with an application concept in some sort of environment. I won't give you the full spiel now, there's a lot more — there's a reason why these lines are dotted and not solid. Right? You develop views, then you evolve some sort of understanding, and this must lead you to restructure your views. From that you develop theories, models and procedures of both the application domain and of the execution system domain. You go through the requirements analysis, you define your program, you compute — you identify your computational procedures and algorithms, you create the program, and when that is finally done, what do you do? You install the program in the original operational environment.

The moment you install that program, the environment changes. The act of installation changes the domain of the computation. In other words, when you develop your views of the application in its domain, they must not be views of the domain as it is now, but views of the domain as it will be when your system is operational. But since you don't know what the characteristics of your system are, you don't know what that domain is going to be like. As in the classical engineering feedback loop, you have a tightly closed loop, in that the inputs to the development process (the application and domain descriptions) are changed by the output of that process. Hence evolution is an essential property of real-world software. This is something that I realized twenty-five years ago — I mean, I first drew this particular picture, in a more primitive form, in the early 1970s. You'll see it in this book, in some of the very earliest papers. So, as I said, I've been blind for twenty-five years. Essentially, software development is a closed-loop system, because the act of installing the program changes things, which then causes you to go back and change your program to correct for the changes.
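A deliberately contrived sketch of that closed loop, assuming a single numeric "domain property" purely for illustration: each release the program is built to the domain as observed, but installing and using it shifts the domain, so a gap reopens every cycle and the program must keep evolving.

```python
# Toy closed-loop model of development, installation, and domain change.
# All quantities and coefficients are hypothetical.

domain = 100.0        # some property of the operating environment
program = None        # the model of the domain embedded in the program

for release in range(1, 6):
    program = domain                  # adapt the program to the domain as observed
    domain += 0.15 * program + 5      # installing and using the program changes the environment
    gap = domain - program            # mismatch that drives the next round of change
    print(f"release {release}: program models {program:.0f}, "
          f"domain now {domain:.0f}, gap {gap:.0f}")
```

The gap never closes; it reappears after every release, which is one way of seeing why evolution is built into this kind of software rather than being a symptom of poor work.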

Aspray:

I see.

Lehman:

Therefore, the fact that 70% of the life-cycle costs of a program are incurred through maintenance is not something which is due to shortsightedness and incompetence on the part of the programmers. It is built into the very act of being a computing application, the very act of being a program.

Aspray:

Hmm. Interesting thought.

Lehman:

It's a fact, it's not a thought.

Aspray:

Okay.

Lehman:

A subtle difference. But this is a very simplistic view. The process as I have described it is a sequential process. In actual fact, as your views develop, your understanding increases, and therefore your concept changes. There's feedback from that step. So you not only get the overall feedback as the program is installed and operated in the domain that defines it, but also feedback at each step: as understanding evolves, views change. As views change, concepts change, you become more ambitious, you have to do things differently. So there is an in-built feedback which changes what it is you're trying to do and how you are going to do it. But even this picture is simplistic. To provide more detail, I have a diagram here that shows one fellow sitting in the middle, responsible for all of these activities. Whatever he learns in one activity is fed back into the others.

This is one representation, but it's not a very useful one except to convey the idea that one gets feedback from each of the activities in the process to every activity that precedes it. Right? But even that picture is not complete, because you never develop large-scale, real-world software with one person. You have a whole team, or many cooperating teams.

Aspray:

Right.

Lehman:

Many different users, many developers. So we realize that software development is a very complex business. It is non-sequential, and it involves heavy feedback. Therefore the software process will have all the characteristics of a feedback system. Now, as we first observed, one of these characteristics is "external invariance relative to internal forward-path changes." For the last twenty-five years people have come up with all sorts of innovative ideas for improving programmer productivity, for example interactive programming, high-level languages, object-oriented programming, formal methods, CASE and so on. Each of those techniques was regarded as a panacea when it was conceived. When people invented high-level languages, they thought, "Now we've solved the programming problem." Had they, heck! Right! And the same for every one of them: formal methods, CASE, even CASE. CASE has been a major flop, and I think now that the reason is not that these concepts lack value but that all of them represent changes to the forward path. In a feedback-dominated system, forward-path changes have little external effect.

In order to really improve the programming process, it is not sufficient to improve individual steps. The introduction of a CASE tool, a new technique, or a new method must be accompanied by changes to the feedback mechanisms that encompass it if real progress is to be achieved. This goes right back to the observation that I made about Bell Labs, Indian Hill, and the ESS-1 project. While they had locally increased productivity, globally it had remained invariant, because they were destructuring the system. That's a typical example of the sort of thing that happens when we confine improvement to the forward path. There has certainly been enormous progress in programming. We are building systems far more complex and far larger than they have ever been before, which means that the technology must have advanced. But we need another one or two orders of magnitude of advance before we've really mastered the software development process.
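One crude way to see why forward-path improvements alone disappoint, again with invented numbers: if every unit of new function drags feedback-driven rework behind it, then once that rework dominates, tripling the coding rate changes the delivered function far less than shrinking the rework does.

```python
# A toy cost model, not from any published source.  Delivering one usable unit
# of function costs forward coding effort (1 / coding_speed) plus the
# feedback-driven rework that the unit triggers (rework_per_unit).

def delivered(effort, coding_speed, rework_per_unit):
    cost_per_unit = 1.0 / coding_speed + rework_per_unit
    return effort / cost_per_unit

EFFORT = 100.0   # person-days available per release (hypothetical)

print("baseline:                  ", delivered(EFFORT, 1.0, 2.0))   # ~33 units
print("3x faster forward path:    ", delivered(EFFORT, 3.0, 2.0))   # ~43 units
print("rework halved via feedback:", delivered(EFFORT, 1.0, 1.0))   # 50 units
```

Under these assumptions a threefold forward-path speed-up buys roughly a 30% gain, while attacking the feedback-driven rework directly buys more, which is the shape of the argument for treating the feedback paths rather than the individual steps.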

Project FEAST

Lehman:

At the last ICSE, John Major, not the Prime Minister but a senior vice-president at Motorola, gave a keynote address. One of the points he made was that over the next six years Motorola wants to increase its productivity and the quality of its software products by two orders of magnitude. So they believe it's possible. Now I'm telling Motorola, and everybody else who will listen to me, that I know how to do that. The answer is to stop treating the process as a collection of individual sequential steps. It must be treated as a feedback system, in which you look not so much at the forward path (because we've got enormous untapped technology already available), but at identifying the feedback paths and learning to control and direct them, to give the process the system treatment. Now, whether that would be a panacea or not, whether it can be done or not, I don't know. I want someone to give me a million dollars to set up a team. We have at least got a name for the project; it's Project FEAST, for "Feedback, Evolution, And Software Technology." I have now sent not so much a proposal as an initial feeler out to a number of funding agencies, both here and in the United States. I haven't gone to Japan; perhaps I ought to, because I think it is an important project. Vic Stenning would also be involved. There's a Professor Wlad Turski, who's a Professor of Computing — there's a man you ought to talk to if you want to talk —

Aspray:

I know him by name.

Lehman:

Turski is a very close friend of mine. There is in fact a paper by Lehman, Stenning and Turski, in reverse order of contribution to the content of the paper! Turski is a very remarkable individual, a very extraordinary person, and one of my closest friends, and I certainly would recommend that you try to meet him. He was trained as an astronomer. In the early 1960s he came to Manchester to work on the machine there and fell in love with computing. He is the prototype of what I would call the middle-European intellectual. He's got very wide knowledge and very wide interests. He's had a lot of industrial experience as well as a lot of academic experience. He certainly would have a lot to tell, and would certainly tell you lots of things about computing in Eastern Europe, which is not unimportant.

Aspray:

Right.

Lehman:

I was in Russia in 1979 and was taken to three places: Moscow, Kiev, and Akademgorodok, which is near Novosibirsk. I gave lectures at each. I was hosted by a man called Ershov, whom you may have heard of.

Aspray:

Oh yes.

Lehman:

While I was in Kiev at one of the computing centers — they told me that they had an operating system, and they showed it to me running on their Russian machines — I asked them very innocently, "Could you tell me why a Russian operating system is printing error messages in English?" Deathly silence and a quick diversion to another subject! They had managed to get their hands on OS/360, or DOS, or something. They had converted a lot — externally the error messages were all in Russian, but internally, in the body of the code, they were in English! Anyway, be that as it may, that was just another story. So Turski would get involved in the FEAST project, as would Vic Stenning, and I'm telling you off the record that I've talked to Larry Druffel at the SEI and to Vic Basili at the University of Maryland. So it would really be a top-level investigation. Such is my view of the significance of this investigation for making real progress. My only need now is to persuade the funding authorities that this is so. I don't know what my chances are. If I don't succeed in doing that, then after next April I will be unemployed, in the sense of having no team to work with and no income. That's a slight additional incentive, but it's not really the driving force. I've done an awful lot of talking. Two hours and twenty minutes or thereabouts. I hope it has been useful to you.

Aspray:

Almost certainly.

Lehman:

Now, we can talk for another ten minutes or so, and then we'll go across to lunch, and we'll continue talking over lunch.

Landmarks in British Computer Science

Aspray:

Maybe, for the sake of my colleagues, who are going to have to plot out what few events in the UK hardware and software fields they should explore in a survey book, you can tell me what you think are the most significant things over the past forty years.

Lehman:

Well, undoubtedly you have to go back more than forty years. You have to go back forty-four to forty-five years. Undoubtedly British computing began when Wilkes, who is still alive by the way —

Aspray:

I know him. Quite well.

Lehman:

You know him, Maurice Wilkes. I know him as well. Of course he's in Cambridge. In fact I spoke to him a few weeks ago. He's an employee of Olivetti, at their research center in Cambridge. Wilkes went to Princeton and worked under Von Neumann on JOHNNIAC, came back to England, to Cambridge, to the Maths Lab, set up a computing unit there, and they began work on the EDSAC.

Aspray:

EDSAC, yes.

Lehman:

So certainly Wilkes and EDSAC is one seed. As far as I know, independently of that, Williams, who was of course one of the pioneers of radar, went to the University of Manchester, or perhaps was already at the University of Manchester after the war, and set up, also in the late 1940s or early 1950s, the Manchester School. They built the various versions of MADAM. Cambridge was responsible for what later became known as microprogramming, or maybe they even called it microprogramming at the time. From the beginning they were working with magnetic drums and later delay lines and cores. The Manchester School worked very hard on storage-tube memories, and because of that they invented the B-line and the concept of modification, and so on and so forth. My introduction to computing came in 1953, when I had finished my undergraduate work and started on my Ph.D. About two months later there was the first computing conference at the National Physical Laboratory, where they had also been working on a machine. What was it called?

Aspray:

Was this ACE?

Lehman:

ACE! Of course. ACE was later transferred to English Electric and renamed DEUCE. They were all present at the 1953 conference: Manchester, Cambridge, and NPL. One of the speakers was — this business of remembering names is terrible! Who is the famous man in logic?

Aspray:

Turing.

Lehman:

Turing, again, of course. How could I forget that name? I heard Turing speak. I don't think I understood very much of what he said, but of course he was one of the people behind the ACE design. I think afterwards he went to Manchester and influenced the Manchester machine as well. So certainly go back to that 1953 conference, where all of the British people came together. I should have the proceedings — I may have passed them over to the Science Museum.

Aspray:

I have seen them. I know how to get hold of them.

WG2.3 Group &amp; Programming Methodology

Lehman:

That is certainly an important event. I would regard the ICCE work at Imperial College as important in its own right, although I don't think that it had an impact anywhere else. That work certainly influenced me; it was my introduction to digital computing. But I don't think it got much publicity anywhere else. Britain played a fairly important role. On the hardware side I am not sure that I can pick out anything else which made a fundamental contribution, but certainly on the software and programming languages side there were several people who played a very important role in the development of programming methodology, particularly in the work of the WG 2.3 group. Have you ever heard of them?

Aspray:

Yes.

Lehman:

IFIP has got working groups. There was WG 2.1, the group that designed the original ALGOL. The group involved people like Dijkstra and Dines Bjørner.

Aspray:

Right. I know who you mean.

Lehman:

And also Dahl from Norway, the father of the simulation languages —

Aspray:

SIMULA.

Lehman:

SIMULA. There were a number of British participants, including Tony Hoare and Brian Randell, and Wlad Turski was also one of that original group. When they finished work on ALGOL, van Wijngaarden decided that what was needed was ALGOL 2. It's what later became ALGOL 67, or ALGOL 68, which was an omnibus language. There was a group of people, all the people I mentioned above and others, who said, "No, that's a lot of nonsense. We need to keep the language simple." They walked out of 2.1, which was the official languages working group, and at a foundation meeting in Norway they formed 2.3, which they called the Programming Methodology Working Group.

Strangely enough, because of my work on the programming process, although I was not at the foundation meeting in Norway, in Trondheim, I did in fact attend the first WG2.3 meeting in Copenhagen in January 1972, and at that very first meeting I was elected a member as well, which means I was the first non-programmer member of WG2.3, and perhaps the only one ever. I'm not too sure about that. So I have been a member of WG2.3 ever since, although I don't attend too many meetings, primarily at the moment because I don't have a travel budget. The British contribution to the work of WG2.3, which had a profound influence on the development of formal methods, on programming languages, and so on and so forth, was fairly significant. Obviously the power behind the throne was ultimately Dijkstra, but Hoare was an equally important member of that group, and so was, what's his name, the Swiss guy I mentioned before — successive refinement — Niklaus Wirth.

Aspray:

Oh right, yes.

Lehman:

But he was elected after me, two or three years later, when he came up with the Modula language. I would have thought that that is certainly something worth exploring — the work of WG2.3 as laying the foundations for any systematic, methodological approach to programming and program development. John Backus was there as well, another member of the original group and then later of WG2.3. This same group played an important role in the very important 1968 Garmisch software engineering conference, which is again something that you ought to look at as laying the foundations for the whole software engineering movement. What else can I pick out? Well, software engineering as an academic discipline started, I believe, in this College.

I think that we were the first people to propose a software engineering degree. That was done when I was Head of Department, back in 1981, although the course didn't begin until 1984 or 1985, by which time I had already taken early retirement. What else can I mention? Logic programming. For that whole movement there were really two prime movers. There was a Frenchman from Marseilles, Alain Colmerauer, of Prolog fame. And there was Bob Kowalski, who is still in this Department and who has always been seen as one of the main protagonists of Prolog. We also have in this Department John Darlington, who did very basic work on program transformation, which formed the basis of his Edinburgh Ph.D. and which he brought with him to the College here at a later date. There is of course the work of Tony Hoare — have you talked to Tony Hoare at all?

Aspray:

I haven't talked to him. I certainly know of his work.

Lehman:

Yes. Tony Hoare is an outstanding personality, a Classics man. His Ph.D. was in Greek, or Latin, or something like that, or History maybe even. His first job after graduation was as a programmer, and once he got hooked he went on from there. He is, however, not as extreme in his views as Dijkstra, who sees programming and software development purely as a mathematical science. This is a view with which I have to disagree profoundly, since it is relevant to only one aspect; software development is a very complex process involving many people with different skills. While I disagree, I can see what he is saying and recognize it as very important.

S-, P- and E-type Programs

Lehman:

My program classification scheme, first published in one of my early papers, defines three types of program: S, P, and E. S-type programs are programs where the criterion of success is that the program satisfies its specification. This is a mathematical concept, and in my view the relevance of Dijkstra's approach is restricted to S-type programs.

Aspray:

Right.

Lehman:

For such programs one's obligations are restricted to a demonstration that the program is correct relative to the specification. It says nothing at all about the specification. The criterion of success in creating an S-type program is that it is correct in a strict mathematical sense.

The methods needed to achieve this are the domain of Dijkstra's work. You then have an intermediate type, which I called the P-type program. Here you are solving a problem, a very well specified problem, in which the criterion of success is that your solution is correct. Never mind whether the program is coded well — have you obtained the correct solution? If in fact you have specified the problem precisely, then this is an S-type program. But sometimes the specification is imprecise, and when you get your solution you realize that you made a mistake in your specification, and you correct it. The criterion of success is a demonstration that the solution is correct.

The third type of program I called E-type, where E stands for "evolutionary." It is the study of E-type programs that has been my concern for 25 years. E-type programs address a problem or an application in the real world. In this domain the concept of correctness has no meaning whatsoever, for two reasons. Firstly, because the real world is infinite and continuous, and therefore you cannot come up with an exact or complete specification. That's why the lines on the chart that I showed you were dotted.

Aspray:

Yes.

Lehman:

That's why the domain is delineated by a cloud. How far the domain goes is a question of judgment, a judgment that may change at each moment in time. As time passes, things change, and the domain represented by the cloud expands, growing ever bigger. Correctness can have no meaning because you have nothing relative to which to demonstrate correctness. Secondly, even if one were able to provide a specification relative to which one could show an E-type program to be correct, this would not be of major import, since the user is not concerned with abstract properties of correctness. What the user requires is that when the program executes, he is satisfied with the results of the execution. User satisfaction is the criterion of success. Now, user satisfaction changes as you get more experience, as your views change.

As your needs change, your criteria for satisfaction change. So satisfaction is a transient state. That is why the whole concept of maintenance is not really applicable to software. When one maintains one's car, scratches are removed and worn-out tires are replaced. When software is maintained there are no worn-out parts whose replacement will bring it back to its pristine beauty. What one does is change it to satisfy new conditions. Maintenance is an inappropriate word to use in connection with software. What is being maintained in the process commonly termed maintenance is user satisfaction. Alternatively, you may see it as the maintenance of software quality. Like hardware, software quality deteriorates with time, not because the software changes — it doesn't — but because the application and the domain change. What you are actually doing in the maintenance phase is evolving the program to maintain user satisfaction.

When you build an E-type program, you construct a finite model of an unbounded or infinite real world. The program has to be finite because you have only a finite time to produce it and a finite memory in which to store it. Moreover, the program and the process it controls are discrete, since that is the only way we can represent information and execute instructions in a digital computer. But the real world is essentially continuous. Thus the programmer's model is an approximation of the real world. It is not a precise representation of the real world. It cannot be. Inherently not. In order to represent an infinite, continuous world by a discrete, finite model, one has to make assumptions. And those assumptions are embedded in the design and implementation of the program. But the real world is always changing. In fact, creating or using the program accelerates that rate of change. Therefore some of the assumptions that are made and embedded in the program will become invalid in the course of time, and you may not know that. These observations lead to a principle known as Lehman's Uncertainty Principle, which says, "No E-type program can ever be relied upon to be correct," in the sense that one cannot know that all the assumptions made explicitly or implicitly while creating the program, and which are embedded in the design, the code and the documentation, are still valid, even though they may have been valid when you made them. So you cannot know that the program is correct. It may be correct, but you can't know it. One has to accept that the outcome of any execution of any E-type program is, at an absolute level, unpredictable. That's only one of three aspects of uncertainty. But that really hasn't got much to do with the history of computing anyway. I don't know how we got there!
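A contrived example of such an embedded assumption, invented here purely for illustration: a tax rate that was true of the world when the program was written is baked into the code, and when the world changes the program still runs happily while quietly ceasing to satisfy its users.

```python
# Hypothetical illustration of an assumption embedded in an E-type program.

VAT_RATE = 0.175   # "known" to be true of the world when the program was written

def price_with_tax(net_price):
    # Correct relative to the world as it was at design time.
    return round(net_price * (1 + VAT_RATE), 2)

# If the rate later changes, no execution fails visibly -- every call still
# returns a number -- yet the results no longer satisfy the user.  That is
# the sense in which an E-type program cannot be relied upon to be correct.
print(price_with_tax(100.00))   # 117.5, whether or not the world still agrees
```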

Aspray:

Well, why don't we quit it at this point?

Lehman:

Yes.

Aspray:

Thank you.