Oral-History:Daniel Siewiorek

About Daniel Siewiorek

[Image: Daniel Siewiorek]

Daniel Siewiorek received the B.S. degree in Electrical Engineering from the University of Michigan, Ann Arbor, in 1968, and the M.S. and Ph.D. degrees in Electrical Engineering (minor in Computer Science) from Stanford University, in 1969 and 1972, respectively. At Carnegie Mellon University, he helped to initiate and guide the Cm* Project, which culminated in an operational 50-processor multiprocessor system. He has designed or been involved with the design of nine multiprocessor systems and has been a key contributor to the dependability design of over two dozen commercial computing systems. Dr. Siewiorek leads an interdisciplinary team that has designed and constructed 20 generations of mobile computing systems. He has served as a consultant to several commercial and government organizations, while serving on six technology advisory committees. Dr. Siewiorek has also written eight textbooks in the areas of parallel processing, computer architecture, reliable computing, and design automation, in addition to over 400 papers. He has served as Chairman of the IEEE Technical Committee on Fault-Tolerant Computing and as founding Chairman of the IEEE Technical Committee on Wearable Information Systems.

The interview focuses on the role the National Science Foundation (NSF) played in funding some of his key projects: register transfer modules, an ISP compiler, EXPL, Cm*, Register Transfer CAD, SAW, PIE, C.VMP, and C.Fast. He also discusses the role of graduate students and their support by NSF grants.

About the Interview

DANIEL P. SIEWIOREK: An Interview Conducted by Andrew Goldstein, IEEE History Center, October 24, 1991

Interview #135 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of the IEEE History Center.

Requests for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, 39 Union Street, New Brunswick, NJ 08901-8538 USA. They should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:

Daniel P. Siewiorek, an oral history conducted in 1991 by Andrew Goldstein, IEEE History Center, New Brunswick, NJ, USA.

Interview

INTERVIEW: Dr. Daniel P. Siewiorek

INTERVIEWER: Andrew Goldstein

DATE: 24 October 1991

PLACE: Telephone Interview

Circuit and Architecture Grants

Goldstein:

I don't know if you're acquainted with our project. The idea is that we're writing a history of the National Science Foundation and its role in the development of computer science in this country. We're working on one chapter describing some of the research that has been done under NSF grants, and what we'd like to do is describe the research, the principal investigators' involvement with the foundation, and what they achieved. Now, when we were going through the grants we did some categorization, and we identified your grant as being related to circuits and architecture. The first thing I'd like to do is include you in the section on circuits, if that was an accurate categorization. If it was, then I'd like to discuss your work on circuitry and the influence that you've had there.

Siewiorek:

Yes. First of all, we sent off a timeline a couple months ago.

Goldstein:

I'm looking at that, and I was hoping you could just take me through it. This is phenomenal information and I'm just not familiar with all the different projects.

Siewiorek:

No, that was where I wanted to start. We could structure it based on that. I think we're more at the register transfer level and above. When people talk about circuits, they typically think about transistors and SPICE simulation, and we're not that. We're digital synthesis and multiprocessor architecture.

Goldstein:

Yes, it may mean reconfiguring our layout for the book, and just including you in the section on architecture, which is fine. I was hoping that you could talk about the extent to which you were involved in circuits, but if it's really inappropriate then we shouldn't force it.

Siewiorek:

Yes, I think it is inappropriate. I think the best thing to do is follow through this timeline here, and I could just give you a couple sentences about each one, and if you decide you'd like more information we can dive in.

Goldstein:

Right.

Register Transfer Modules

Siewiorek:

The pink is the grants, and the purple is the industrial offshoots from grants. Some are very direct, others are much less direct. It turns out that when I first came to Carnegie Mellon University in '72, Gordon Bell had just received an award from NSF on modular design. He went back to Digital Equipment Corp, and so I took over that NSF grant.

Goldstein:

Did you come straight out of your dissertation?

Siewiorek:

Yes, I came directly from Stanford. I guess I turned in my thesis something like the twenty-third of December and started teaching something like the third of January. So, it was certainly direct.

Goldstein:

Right, and you took over as the principal investigator in Bell's grant?

Siewiorek:

Right. And at that time, he had just introduced the PDP-16, which was something called register transfer modules. This was a way of rapidly prototyping by wiring patch panels, like you might have done with analog computing in earlier days. Now you could take digital modules and sequentially step through different register transfers, so you could effectively do something like a PDP-8 mini-computer in maybe eight hours.

Goldstein:

What components would you patch together?

Siewiorek:

Basically, this was a commercial product. We had actually talked DEC into developing them. You had a backplane, and into the backplane you plugged cards that were typically about four or five inches wide and maybe eight inches long. A data path card might be composed of, for example, arithmetic logic units, an I/O unit, a lights-and-switches unit for interfaces, and a parallel interface so that you could talk to another computer; that would be the data part. The control parts were called Kevokes. These were controllers that would be fired one state at a time; they would handshake and hand control from one to another, so that would take care of the sequencing.

Then you would plug the output of the Kevoke into the left-hand side and right-hand side of your register transfer. And so there's a little protocol there that says, "When you see this signal and you're a right-hand side, put your stuff on the bus; and when you're a left-hand side and you see the signal from the right-hand side that says you've been stimulated, take what's on the bus and raise another signal." And that would complete the bus cycle.
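
To make that handshake concrete, here is a minimal sketch in Python (all names are illustrative; the actual PDP-16 signal names and timing differed):

```python
# Minimal sketch of the RTM bus handshake described above.
# Signal and register names are illustrative, not the PDP-16's.

class Bus:
    def __init__(self):
        self.data = None
        self.rhs_ready = False   # right-hand side has put its value on the bus
        self.lhs_done = False    # left-hand side has taken the value

def evoke_transfer(bus, rhs_register, lhs_register):
    """One register transfer, lhs <- rhs, sequenced by a control evoke."""
    # Control fires the right-hand side: put your stuff on the bus.
    bus.data = rhs_register["value"]
    bus.rhs_ready = True
    # The left-hand side sees the signal, takes what's on the bus,
    # and raises its own signal, completing the bus cycle.
    if bus.rhs_ready:
        lhs_register["value"] = bus.data
        bus.lhs_done = True
    # Clear the handshake for the next bus cycle.
    bus.rhs_ready = bus.lhs_done = False

bus = Bus()
a = {"name": "A", "value": 42}
b = {"name": "B", "value": 0}
evoke_transfer(bus, a, b)   # B <- A
print(b["value"])           # 42
```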

It was a very modular way of doing things. It was very good for education, because we could build much bigger things than we could before. I started teaching with that lab as soon as I came here. Gordon was in the process of writing a little DEC Press — well, he started up DEC Press, and one of the first things that came out of DEC Press was a little design book on modular design based on RTMs. He and John Grason wrote the book and I wrote a chapter in it, so that was my start.

That register transfer level work predated the NSF grant. So the idea was originally sponsored by DEC. Let me rethink this. Gordon was a VP for DEC, and was on our faculty for about six years, from '66 to '72. He went back as a VP, and obviously had a lot of influence on convincing DEC to make a product out of the register transfer modules. The educational component was paid for by more traditional grants. That was before my time, so I'm not aware whether there were grants of equipment from DEC or not. There was a full-fledged laboratory going, and we used it for several years as the core of our teaching curriculum in computer engineering.

EXPL and ISP Compiler

Siewiorek:

So the idea then was, "With that as a start, where do you go from there?" If you could put together a PDP-8 mini-computer in eight hours, what next? That got us started on two branches. One branch asked, "Can we automate design?" Probably one of the first theses that came out of that was something called EXPL, which was done by Mario Barbacci. It turns out that just in that period of '66 to '72, Gordon Bell and Allen Newell had collaborated on a textbook called Computer Structures: Readings and Examples. This was one of the first high-level architectural books.

They introduced two notations. One was called PMS, for processor-memory-switch, which was a way of concisely defining computer structure at the memory, I/O, processor, and bus level. The other was a language called ISP, for instruction set processor, which was meant to be a register transfer level language. It was designed to concisely describe computer systems. Those were write-only languages at the time; there was no software or anything for them, so we started working on an idea: if we took something like ISP, could we understand it and then, using register transfer modules, automatically synthesize a high-level structure? That's what EXPL was.

Goldstein:

Yes, the idea was to synthesize a structure out of the RTMs?

Siewiorek:

Yes, the target hardware was the RTMs. Input was ISP, which was a behavioral language and very much like, say, what Verilog is today. This was, as far as I could tell, the first high-level synthesis program, what people are calling silicon compilers or behavioral synthesis.

Goldstein:

Yes.

Siewiorek:

That happened back in about '73 or '74, and an offshoot from this was work on the ISP compiler, which became something called ISPS. Then ISPS really got a kick in about 1975, when people at the Naval Research Lab were looking at trying to select a standard computer for the Army and Navy, to replace this plethora of AN/UYK military computers and things like that. They wanted to come up with a very solid methodology based on evaluation and measurement, and wanted it to be implementation-independent.

They convened a series of about eight meetings, over about a two-year period, with Army and Navy personnel from various labs participating in the process, so it would be bought into by these organizations. They had selected ISPS as the basic evaluation tool, so what came out of that was three or four really detailed descriptions of things like the PDP-11, the IBM System/360, and the Interdata 8/32. These were in enough detail — as a matter of fact, I had one person tell me that the PDP-11/70 ISP that I developed was more accurate than the DEC manuals. When they had a question, they looked at the DEC manual and they looked at the ISP code, and when they ran it on a real machine they found out the ISP code was more accurate. I mean, we basically ran all the manufacturer's diagnostics on the simulator to demonstrate its quality.

Goldstein:

Right. So ISPS was developed before the military interest, under NSF funds?

Siewiorek:

Yes. What happened was that it was developed, but it was nowhere near the robustness or shaken-down form that came out of the user community.

Goldstein:

I see. Did the military approach you as clients or as sponsors?

Siewiorek:

No, they asked us to participate in the process of evaluating, so we basically became part of that committee and we were the evaluation section. We also participated in helping them set up the criteria, but we were the people writing the ISPs. We did the measurements, and we reported the data to the people who then did statistical analysis and so on. But it turns out that without the code turning over and working, the Naval Research Lab would not have been interested.

It was a collaboration here, where they provided the impetus for us to really take this a step further and make it more robust. For about a decade, it was a core of our internal teaching. I don't know how many places got ISP tapes. I do know that the last time somebody told me a number, it was over 80 places that got the software.

Goldstein:

What was the mechanism for distribution?

Siewiorek:

People would call up and we just sent it to them.

Goldstein:

So, it was informal?

Siewiorek:

Yes, it was informal. The software was developed on DEC-10s and 20s, and when DEC started de-emphasizing those lines, and commercial software like Verilog was becoming available, we stopped evolving it.

Goldstein:

When was that?

Siewiorek:

The PDP-10 basically died, I would say, in the early eighties. So I'd say we were actively involved in ISP for about ten years, and then, as I said, it was overtaken by efforts that could put more resources into it than we could.

Goldstein:

When you say "we," the composition of "we" may change from project to project. Correct?

Siewiorek:

Yes.

Goldstein:

Right.

Siewiorek:

There were certainly different professional faculty collaborators over the years, and there were certainly lots of graduate students.

Goldstein:

Right. I just want to be sure I understand that you are describing projects in which you had a leading role. Is that right?

Siewiorek:

Yes, or at least I was highly influential.

Goldstein:

Yes.

Military Computer Family (MCF)

Siewiorek:

MCF (Military Computer Family), for example, was started by the Naval Research Lab. It was their thing, and it was our technology that was supporting it. But what came out of that then was Computer Structures, a new version of the Bell & Newell textbook, which now had something like thirty or forty computers described in it in ISP. Every one of them at this point was an executable version and had been used in classrooms, and they became part of the distribution tape. So, I think by the end we probably had about forty or fifty computers that were described in ISP, and if nothing else, it allowed people to study a lot of different computer instruction sets in a uniform manner.

Also at that time there was the military computer family work, an implementation-independent approach to trying to quantitatively analyze instruction sets. There was some follow-up research funded by the military that looked at other instruction sets like the VAX, and this led to the current craze of quantitative computer engineering design. So, we were just trying to put numbers down and get uniform ways of comparing things, and that certainly has been picked up by lots of folks. In some sense, I would like to think that the Hennessy and Patterson books are a many-levels-removed offspring of some of that work, in terms of the textbooks' and the software's influence on people to start thinking more quantitatively.

Goldstein:

Did you develop any new metrics?

Siewiorek:

Out of the military computer family, metrics were proposed. Sam Fuller was on the faculty at the time, heading up the metrics committee, and we all had some interaction with it. One metric proposed was memory traffic — it was called the M-measure. There was an S-measure, which was the static size of the program; some instruction sets had denser encoding than others.

Memory traffic represented how many things had to be fetched to and from memory to execute the same semantic content of a program. Then there was the R-measure, which was an attempt to measure the register transfer activity, which would then indicate some type of efficiency that was going on there.

We had about twenty kernel benchmark programs — benchmarks that came out of that group, which people voted on and supplied the code for. One of the difficulties was how you combine all of those numbers together into a single number to compare computers. Today we see a lot of that: SPECmarks and all of that other stuff. You get a lot of these numbers, and you use geometric means, or arithmetic means, or what have you.
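
To make that aggregation problem concrete, here is a small Python sketch (the benchmark ratios are invented for illustration) showing how arithmetic and geometric means of the same numbers can suggest different conclusions:

```python
# Combining per-benchmark ratios into one figure of merit, as discussed
# above. The two means can rank the same pair of machines differently.
import math

# Hypothetical machine-A-to-machine-B time ratios on four kernel benchmarks.
ratios = [0.5, 0.8, 1.2, 2.5]

arith = sum(ratios) / len(ratios)
geo = math.exp(sum(math.log(r) for r in ratios) / len(ratios))

print(f"arithmetic mean: {arith:.3f}")   # 1.250 -> A looks 25% slower
print(f"geometric mean:  {geo:.3f}")     # ~1.047 -> nearly a wash
```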

So, a lot of that stuff had its kernel in the whole MCF project, which had roughly forty people involved. It certainly wasn't a single-person show, but in this early work Sam Fuller and I were the major collaborators. I was doing a lot of the CAD [Computer Aided Design] stuff and Sam was taking some of the lead in the multiprocessor stuff.

Endot

Goldstein:

I guess you can decide what the optimal organization is. If you just want to talk about CAD because you feel responsible for that, please do.

Siewiorek:

Let me run through ISPS. There is the purple on the timeline: Endot. One of the undergraduate students who had worked on the implementation of ISPS went up to Case Western Reserve, and from there Chuck Rose started Endot, which is basically an ISP derivative. That, therefore, is directly related and traceable to the undergraduate who worked on it.

Goldstein:

I see. Was Endot the name of the company?

Siewiorek:

Yes, that was the name of the company. It was a takeoff on PMS and ISP. They provided effectively a commercial ISP simulator. Endot stood for — in the PMS notation, you would write things like "P.c," that is, P dot c, where P stood for processor, the major function. The attributes were separated from the major function by a period, and "c" would stand for central processor. Their notation described a network, and in "Endot" the "En" stood for "N" and the "dot" was the period.
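
As a toy illustration of the dot notation described here (a sketch; the attribute strings are invented, and only a few of the PMS major-function letters are shown):

```python
# Toy parser for PMS-style names such as "P.c" (P dot c): the major
# function comes first, with attributes separated by periods.
MAJOR_FUNCTIONS = {"P": "processor", "M": "memory", "S": "switch", "K": "control"}

def parse_pms(name):
    major, *attrs = name.split(".")
    return MAJOR_FUNCTIONS.get(major, major), attrs

print(parse_pms("P.c"))        # ('processor', ['c'])  i.e., central processor
print(parse_pms("M.primary"))  # ('memory', ['primary'])
```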

Anyway, it was a direct derivative of ISP, and used quite a bit in the military. I don't know, I mean the company's been bought. I don't know how large it got. I normally don't follow financial type things.

Goldstein:

Were there any intellectual property issues here? Was he at liberty to run off with this brain work?

Siewiorek:

I never paid attention to that. I mean, I feel I'm in the public domain, so I don't know; I just never pursued it. Another thing: in those days people weren't as protective. The universities weren't quite as protective about intellectual property. CMU certainly didn't at that point prosecute or protect as much per se.

Goldstein:

You say Endot was a direct descendant of the ISPS work. Were there other research facilities? Was anybody doing anything like this?

Siewiorek:

At the time, no. I mean, it's an ISP dialect. All you have to do is take a look at it and you can recognize ISP.

Goldstein:

Right. I’m not trying to challenge the idea that it was a derivative of ISP, but I'm just trying now to get a sense of what the climate of activity was.

Siewiorek:

Let's see, the only other things that existed at the time that I was aware of were Maryland's CDL, which was a computer design language, and Wisconsin's Duley-Dietmeyer language (DDL). There had been papers and textbooks based on description languages, and as far as I could tell the Bell and Newell book was the first time anyone really tried to use one very heavily in describing computers. They tried to get into what I would call industrial-quality situations.

What I mean is that we described real machines. I mean, we went and described the VAX. So, the other ones were toys, or something like that. There was also AHPL, a dialect of APL, from Arizona.

Goldstein:

Yes.

Siewiorek:

I think a lot of people were looking at ISP when doing Verilog and VHDL. Mario Barbacci, who did EXPL, was involved in something called CONLAN, which was a consensus language that had European as well as American participation. They published a lot on that, and I'm sure it had a lot of influence on the people who finally did VHDL.

Simulator-Injected Faults

Siewiorek:

[Audio: 135_-_siewiorek_-_clip_1.mp3]

After Endot things become fuzzy; it's hard to track how much direct influence things have after the first generation. Then, at about the same time, we got the concept of trying to use simulators to inject faults. It's a way of testing. People were starting to talk about fault-tolerant machines. How do you anticipate the response of something in a controlled environment? Then, starting around '82-'83, we got involved with IBM, as they were bidding on the air traffic control system. At the time my collaborator was Zary Segall.

With this project the idea was, "How do you validate something that is supposed to have a three-second downtime per year, and a probability of surviving a year of point nine, nine, nine, nine, nine, nine, type of thing?" The idea here was to try to stress the system by purposely seeding in faults, and to do that in an automated fashion.
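
As a rough, back-of-the-envelope check on numbers like these (a sketch, not from the interview itself), three seconds of downtime per year works out to an availability of roughly seven nines:

```python
# Downtime-to-availability arithmetic for the reliability targets above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
downtime_seconds = 3.0
availability = 1 - downtime_seconds / SECONDS_PER_YEAR
print(f"{availability:.9f}")   # ~0.999999905
```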

Goldstein:

You mean you would simulate the system and you would simulate the hardware of the system and introduce faults?

Siewiorek:

Yes; that is what we did with ISP. The problem with that is, if you are already simulating at a 100-to-one or 1000-to-one simulation ratio, what are you going to do if you have to inject lots and lots of faults? Therefore, we created a monitor on the real system which sits alongside it and creates hooks inside the actual running software, so that you could program, for example, "On the 29th time through this loop, change this byte to all zeros."

We ran at speed and put in some probes that allowed you to set flags. Then at the start of the experiment we could tell each flag what it should do that might make the system shoot itself in the foot. We could then draw from a library of failures and the actual code, and with this we could create a fault script, generated semi-automatically, for injecting those failure types into different locations in the code. This way, in an eight-hour evening shift we could inject maybe 1,000 or 1,500 faults, then collect the statistics and find out which ones caused failures.
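
A minimal sketch of that fault-script idea in Python (the probe names, fault library, and script format are invented for illustration; the real monitor hooked an actual running system, not a Python loop):

```python
# Sketch of automated fault injection: instrumented code calls probe(),
# and a fault script fires a chosen failure on a programmed trigger count.

memory = bytearray(b"\x5a" * 16)          # stand-in for the target's state

fault_library = {
    "zero_byte":  lambda mem, addr: mem.__setitem__(addr, 0x00),
    "stuck_high": lambda mem, addr: mem.__setitem__(addr, 0xFF),
}

# One script entry: (probe name, trigger count, fault type, target address).
fault_script = [("loop_top", 29, "zero_byte", 7)]

probe_counts = {}

def probe(name):
    """Called from instrumented code; fires any fault scheduled here."""
    probe_counts[name] = probe_counts.get(name, 0) + 1
    for pname, count, fault, addr in fault_script:
        if pname == name and probe_counts[name] == count:
            fault_library[fault](memory, addr)

for _ in range(40):       # the instrumented loop under test
    probe("loop_top")     # on the 29th pass, byte 7 becomes all zeros

print(memory[7])          # 0
```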

Goldstein:

Did that start as an academic interest or did you always have practical application in mind?

Siewiorek:

Our concern was practical. Some of my other funding, not from NSF but from ONR, had been for reliability. The idea is that we had been measuring systems and trying to find out how they fail naturally. It turns out that that is awfully slow: very often you can't go back to the root cause, and you can't repeat the problem. So, we ended up saying this was a way of repeating it.

If we are going to validate something, and do it before we ship it, we have to be able to repeat what is going on. We also developed it with NASA money; the space station program is using it now. It turns out the ISPS fault injector was actually done by a master's student, but it got us thinking about how to do fault injection, so that mental set existed at the time.

Goldstein:

Yes, you are right. I see it's out of the NSF loop there.

Siewiorek:

As a matter of fact I think that it's had a major impact.

Goldstein:

Right, it's the fruit of the NSF grant.

Siewiorek:

I think it helped IBM win their contract. It turns out that DEC studied the methodology and built it into their fault-tolerant line of computers. The space station program is using it now for validating the software it uses. We used it out there on a space-qualified piece of software, and literally in a day or two it found five unknown errors. It's a very effective approach.

Register Transfer CAD

Siewiorek:

Let's go back to EXPL. So now we were dealing with register transfer modules. Register transfer modules happen to have a nice level of granularity, and it's probably pretty close to what a compiler could produce. What if you were trying to do this with real chips, however? That interjected some other problems. This was the question that we were confronted with. We created what then became known as the CMU RTCAD project, which was register transfer computer aided design.

Goldstein:

I'm afraid that I don't follow everything that you are saying here. You would create an architecture using the RTMs, and then I'm not sure what you mean when you say you compile?

Siewiorek:

What I was saying was that if you take a look at register transfer modules, at the type of information needed to build a register transfer module system, it would be very close to what somebody might call assembly language. The step from assembly language to the actual hardware was a rather straightforward syntactic translation. Therefore, we were doing behavioral synthesis, but we were dealing with a very structured — or maybe even, some might consider, artificial — hardware target set.

It was commercially available and viable. I know that DuPont used it for many, many years to rapidly prototype their chemical process controllers. It met its goal of letting non-hardware experts build relatively complicated hardware. But if we were going to get to people who wanted to do competitive cost-performance types of things, we would have to go down to chips, not to what effectively were boards, or fairly large modules. That's what we decided at that point: "Yes, let's kick off and let's change our target to real chips." That became the CMU register transfer computer aided design project. And at this point, the CAD area at CMU started to grow.

One of my students, Don Thomas, joined the faculty. He had worked on this, and then Steve Director came, I think in about 1979, and Alice Parker was here for part of that period. We went from one person interested in CAD to about four or five. The project became larger, in the sense that other people were doing other parts of things. But at this point we had defined the set of things.

As a matter of fact, Don Thomas's thesis, which was NSF funded, defined what you might call behavioral synthesis. It defined what the various steps would be, and how one would do the equivalent of code optimization. In other words, he was using ISP again as the input, but then you would have to convert it into an internal form. We came up with something very much akin to single assignment languages, which people use to try to generate code for parallel computers, as an internal form.
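
To illustrate the single-assignment internal form (a generic sketch, not Thomas's actual representation), each reassignment of a variable gets a fresh version number, which makes the data dependencies explicit:

```python
# Convert straight-line register transfers to single-assignment form.
def to_single_assignment(ops):
    """ops: list of (destination, source_variables) for straight-line code."""
    version = {}
    result = []
    for dest, srcs in ops:
        # Read the current version of each source operand.
        renamed = [f"{s}{version.get(s, 0)}" for s in srcs]
        # Writing creates a fresh version of the destination.
        version[dest] = version.get(dest, 0) + 1
        result.append((f"{dest}{version[dest]}", renamed))
    return result

# A := A + B ; B := A + C  (the second transfer must see the *new* A)
for dest, srcs in to_single_assignment([("A", ["A", "B"]), ("B", ["A", "C"])]):
    print(dest, "<-", srcs)
# A1 <- ['A0', 'B0']
# B1 <- ['A1', 'C0']
```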

System Architect Workbench (SAW)

Siewiorek:

There were like six or seven different steps in going from an ISP description to actual gates and actual chips. But it was defining that whole sequence of steps, and laying out the whole road map for the area, that I think the industry is basically following. We obviously couldn't go back at that time and do it all. As a matter of fact, Don has been working for about ten years on something called SAW, a system architect workbench. Now, SAW has implemented a lot of those things that were conceived at least on paper, and a fair number of commercial companies are using it.

We did have a string of NSF proposals, like the grants that Don Thomas got while Steve Director and I were the co-PIs. Steve would be working down on the circuits; he would be trying to optimize the speed-power product of circuits, and Don and I would be working at the register transfer level and above. Our goal was to go from ISP, for example, down to the actual transistors. Don at that point started focusing on the design of ASICs (application-specific ICs, single integrated circuits), trying to take their behavior and completely generate the chip.

Don produced a number of things that were very interesting. He, for example, has the ability to show the textual ISP on one half of a workstation screen, while the other half shows the graphic structure that has been synthesized from the ISP: in other words, the multiplexors and the buses and the ALUs. You can then click on a part of the actual description, a plus sign for example, and see that it may have several adders in the actual implementation. The question was, which adders implement that plus sign? Conversely, what part of the behavior does a given component synthesize or support?

They have gone into things like pipelining, and you get into things like scheduling: it turns out that once you have the data path you then have to figure out the control steps, and you want as few control steps as possible. Then once you have all of that, you have to decide how you implement the design in the target technology, using what is called module binding.

In other words, you have a library of physical components and you have these abstract components. How do you figure out which physical components correspond to which abstract components?
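
A toy version of that module-binding step in Python (the library parts and areas are invented; real binders also weigh speed, power, and sharing of components across operations):

```python
# Bind abstract data-path operators to physical library parts, greedily
# choosing the smallest part that implements each operator.
library = {                   # part name -> (operator implemented, area)
    "ripple_adder": ("+", 150),
    "cla_adder":    ("+", 400),
    "array_mult":   ("*", 2000),
}

def bind(abstract_ops):
    bound = []
    for op in abstract_ops:
        candidates = [(area, part)
                      for part, (fn, area) in library.items() if fn == op]
        if not candidates:
            raise ValueError(f"no physical component implements {op!r}")
        area, part = min(candidates)
        bound.append((op, part, area))
    return bound

print(bind(["+", "+", "*"]))
# [('+', 'ripple_adder', 150), ('+', 'ripple_adder', 150),
#  ('*', 'array_mult', 2000)]
```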

Goldstein:

Is this a tool to help design ASICs? Was he working on his own or did other people use the tool?

Siewiorek:

No, it had been distributed in about 1985 or '86 by Semiconductor Research Corporation. I don't know if you are familiar with that.

Goldstein:

No, I'm not.

Siewiorek:

It's a consortium of semiconductor manufacturers who have basically banded together to create a pool of resources, to get the universities to produce research in computer aided design. It deals with semiconductor processing and everything that the chip manufacturers are interested in. So, SRC picked Berkeley and CMU as CAD centers, centers of excellence in CAD.

We were still getting some NSF funding, but Don was now starting to get SRC funding. Then SRC companies started to pick up and use the software. GM has been using it and found it was very good for first-order approximations. In other words, people could very quickly write out an algorithm in ISP, and then get an implementation within ten or twenty percent of what a good designer might do. From that they could quickly explore design alternatives: "Should I do this in software on a standard microprocessor, or should I try to do an ASIC with it? How big is that ASIC?" After that they were able to try to figure out what the cost would be.

So, it's very good for helping explore the design space. There have been people using SAW outside of CMU; it lets, say, a mid-career designer work quickly.

Goldstein:

But the point is that SAW was developed as a tool to help you guys with your own ASICs. It wasn't an end in itself, correct?

Siewiorek:

Actually, I would say it was pretty much an end in itself. I mean, at this particular point, even at CMU, CAD was becoming a focus. The interesting thing about CMU was we had people all the way from the circuit level up to the super-architectural level working together. Ron Rohrer came, of course; he brought SPICE and analog simulation.

CAD therefore was an end in itself, and we periodically tried to do real designs as a way to corroborate how well it was doing. On the other hand, we are very concerned about technology transfer. I'm not sure about Don's stuff, but the things I have been personally involved with include Trimeter and Omniview.

SAW is a complicated thing, so RTCAD was a transitory phase where we wrestled with what the structure of these things might be. But each of those projects, when you have four or five faculty members and maybe ten graduate students working on it, takes more description than the whole account I've given here.

There is actually a tutorial paper in Micro, I think, where seven or eight authors try to describe what CMU RTCAD was in the early '80s.

Demeter Project

Siewiorek:

Then I started working on something called Demeter. Demeter stood for Design Methodology Environment; Demeter is also the Greek goddess of the harvest. The idea here was to try to move the level of design above the ISP level. We had been at the ISP level for about a decade. Initially, when we first talked to people in industry back in '78-'79 about some of these things — and I had worked at DEC for about a year on a sabbatical — people were saying, "Well, gee, we don't believe you can synthesize from a behavioral description." So, we never thought about going higher. But we got approached by Siemens, who realized that time-to-market was a big issue, and they asked us to create a project that would go above the behavioral level. But what would that look like? We actually took about a year, and after talking to a bunch of people we came up with the concept for a systems design that would take what we call specifications.

Look on the back of a product brochure and what you will see is, "Well, I've got a 33 megahertz Intel 486 with a megabyte of memory on the motherboard, expandable to 16 megabytes, an 80 megabyte hard disk, and two parallel I/O ports. It weighs 20 pounds, gives off 800 watts, and costs $2000." In other words, people don't normally start with a clean sheet of paper all of the time. Therefore, what Demeter did was lay down specifications.

In other words, it was a conceptual phase. We asked, "What are the various pieces of this whole thing? What types of databases do you need? Do you need a historical database? Do you need a design-reuse database? Or do you need a database of components?" That was what the paper was about. We also did two prototypes.

We tried to demonstrate that we could actually operate at this level, and then through the mid-1980s we worked on that very heavily. I should also mention that we were very much users of artificial intelligence. In this whole process, I have had students work on about seven or eight AI-based programs, from the circuit level, in terms of place and route of transistors, all the way up. The next one, Micon, was actually an AI program too. Being close to a school of computer science which built some very excellent AI tools makes you a natural, and it's very easy to teach the engineers the necessary AI.

Goldstein:

I see. In a project like Demeter or SAW, what are you working toward? Was it a machine, or was it the paper? What was its content like?

Siewiorek:

Do you mean the overview?

Goldstein:

Yes.

Siewiorek:

The overview paper tries to do two things. One was to talk about the various pieces. That gives you a multi-layer view of what design is: it starts at the top and works its way down until you finally get to physical wires.

Goldstein:

In terms of its structure, is it a textual description?

Siewiorek:

It's a textual and graphic description. At this layer we are doing ISP textual descriptions, with boxes for the software we produced. We tell what the software does in a sentence or two.

Goldstein:

What kind of system did Demeter run on?

Siewiorek:

Demeter was actually a distributed system, in the sense that it actually ran on several machines. As a matter of fact, that is more of a standard now than an exception. With UNIX and C, for example, the database ran on one machine. Some of the tools ran in a native environment. We had some tools still on a PDP-10 back then.

Consequently, it looked like present-day workstations: as a user clicked on an icon or something like that, it would fire up a tool somewhere else and pass it the necessary information, and the results came back out on the workstation. It was really a distributed environment.

Goldstein:

It looks like you began in around 1982?

Siewiorek:

Yes. Three of my students started a company called Trimeter in Pittsburgh that is looking at the use of AI to do rule-based systems which optimize down at the gate level. They were eventually bought out by Mentor. Now it's part of the Mentor product line.

Micon

Siewiorek:

Anyway, we started Micon, which was a microcosm of Demeter. It stood for Microprocessor Configurator. Demeter was trying to look at a wide variety of these things, so a Demeter system would know how to design PBXs. You could think about a Demeter system as one that had a range from, say, microprocessor controllers to automobiles.

Goldstein:

Now, would those things demand total overhauls of Demeter, to acquaint it with the specifications of a PBX system?

Siewiorek:

The concept was that when you went up higher in the hierarchy you needed more of what we call domain-specific knowledge. For example, give it the problem that I need the capacity for a thousand simultaneous calls and a busy-hour blocking probability of .98. If you gave that to somebody who designed telephone systems, they would know what that means. They would know what that implies in terms of, "Can I scan for digits in software, or do I have to do it in hardware?" The idea was that there would have to be domain experts who somehow interacted with or trained the system.

What we did was say, "Yes, let's take the Demeter concepts, and let's build ourselves a system that really does well at designing things like workstations." Therefore, we took one single domain, and that was what Micon was. It has gone through about three generations, and right now it is frozen at 1989. It's got a knowledge-acquisition front end, so you can train it to do things. Consequently, it doesn't become obsolete.

We have had about eight different designers train it, so it's got over 3500 rules. It has, we estimate, the equivalent of 15 man-years of design experience. Starting out, the first thing it designed was a little five-inch by five-inch board, which was a 68008 microprocessor controller. Then we went to an Intel 386, 20 megahertz workstation. You give it the specifications by answering some questions, say the 30 questions that it prompts you for. It will then, on a one-MIPS MicroVAX II, create a design for you composed of over 300 chips and 700 wire nets in less than four hours.

Goldstein:

Is there much iteration? Did you look at that design, do some changes, and then resubmit it?

Siewiorek:

No. We gave it goals like board area, power consumption, and reliability, and if it couldn't do the design, it would come back and say, "I can't do that," and tell you what was wrong. Then you could iterate, and give it enough resources to work with. You could also tell it how important performance, cost, board area, and reliability are.
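
A small sketch of that goal-driven iteration in Python (the budget names and numbers are invented; Micon's actual rule base was far richer than a simple limit check):

```python
# Check a candidate design against user-supplied goals and report back
# which constraints it violates, in the spirit of "I can't do that."
def check_goals(design, goals):
    violations = [name for name, limit in goals.items() if design[name] > limit]
    return violations

design = {"board_area_in2": 130, "power_watts": 45, "failure_rate": 2e-5}
goals  = {"board_area_in2": 100, "power_watts": 50, "failure_rate": 1e-5}

bad = check_goals(design, goals)
if bad:
    print("I can't do that:", bad)   # ['board_area_in2', 'failure_rate']
else:
    print("All goals met")
```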

Goldstein:

I just want to make sure that I understand this. Is Micon — was it one module of Demeter or is it something else?

Siewiorek:

I would say Demeter was a project that was a thought piece, and it had some prototypes that implemented segments of it. So, Micon in some sense is a microcosm of Demeter, and in another sense it focused on one application domain. It has all of the attributes of what we were looking for in Demeter itself, although if it were to try to do all of Demeter, you would be working for many years without anything to show for it. We tried to narrow down its focus.

Goldstein:

Does Micon work only on microprocessors?

Siewiorek:

Well, it turns out that because we were worried about information becoming obsolete, we actually did the knowledge-acquisition front end. So, we trained Micon about the mechanical design. We also trained Micon about itself. It has a set of about 12 programs, and for each we dictated what the program is called and what its inputs and outputs are. It depends on what you are trying to do with it.

We trained it about itself, so it can execute inside itself. Doing that makes it a software CAD environment. So, we have focused on the microprocessor design area, because we wanted to show that we could rival humans at this level of specification.

Goldstein:

Is using it as easy as you suggest, where it simply prompts you with questions that you answer? Does it only take a few minutes to specify?

Siewiorek:

Yes, and what we have done is try to design something with it every year or so, and then build it and see what problems we run into.

In a very early version we built a wire-wrap Z80 board, and that interjected some interesting issues and changed our thinking. Then we did the 68008. We actually started collaborating with mechanical engineers: we designed a housing for the 68008 and did rapid prototyping of the housing. After that, on the 386, it was a six-layer board, 13 inches by 10 inches, which was comparable to an Intel commercial product at the time.

One of the things that we found out, not surprisingly, is that at 20 megahertz, timing became an issue. That created some follow-up research; a Ph.D. student is working on that.

What we try to do is build systems, see what problems arise, and then suggest where the new research focus could be. We have an annual Micon users meeting, where people who use Micon, as well as people from industry or industrial sponsors, come. The software is in about six or eight places right now. We are not a commercial company; this is one of the problems with technology transfer. Our product is to produce and train students, not to use them as cheap employees. So, we are limited in a number of ways. We can send software and tell people it's unsupported. Or, in the case of Micon, we give out user manuals. We also have a two-day workshop that we have given about eight or ten times, and that allows people to come in and see how it's used.

Engineering Research Centers (ERCs)

Goldstein:

Is this NSF funded? This isn't considered development work, is it?

Siewiorek:

It is still considered research. There is a fine line there. It turns out that we do get some funding from industrial affiliates, and some of that money goes toward the training of people, and things like that. But a lot of times when we run training courses, we use them to also train our own students as part of a class. Now, what we are doing here is collaborating with mechanical engineers to do thermal analysis, structural analysis, and things like automatic generation of the enclosing housing.

What happens is that NSF, in its attempt to foster interdisciplinary research, has actually offered people — well, not offered, I shouldn't say that. What they do is run a competition each year where they award three to five centers. A center gets a five-year grant of anywhere from a million and a half dollars on up.

Goldstein:

Is this the ERC program? It would help if I knew what that stood for.

Siewiorek:

They (ERCs) are called Engineering Research Centers, under the Engineering Directorate. Now, they have been copied. There have been some Science and Technology Centers that have come from the computer science directorate. But they all spawned from the Engineering Research Center concept, where they tried to get a group of cross-disciplinary faculty to do joint research.

In 1986 CMU got an Engineering Research Center devoted to design. So, Micon and SAW, for example, became a supporting part of that center, and I am now director of the design-for-manufacturing lab there. What we are doing now is working with mechanical engineers, getting them to synthesize housings and enclosures and to do rapid prototyping of those things, while we wire-wrap the prototype for the electronics. The idea is to do a whole product in a very short period of time.

Applications of Micon

Siewiorek:

This summer, for example, we actually used these CAD tools in a course where we had people from industry come in and do the work. In 12 weeks we went from a product concept to manufacturing 30 working prototypes with hardware, software, and mechanical enclosures. We made a map-reading device and a blueprint-reading device. Part of the blueprint-reading device used something called the Private Eye, which is commercially available and sits on your head in front of your glasses like a bifocal. The blueprint-reading device provides a whole-screen image of the data.

We had a little IBM PC that Micon designed from specification. Mechanical people did the thermal analysis. They did the packaging design, and then we manufactured all of those things. We also did the software, assembled the units, and gave them to the students to take home with them.

What I want to do is go from those high-level specs, which include hardware, software, and system packaging specs, to a completed working prototype. I would like to do that in less than two weeks. As a matter of fact, I would like Micon to do the electronic design, and eventually even the fabrication, in less than a week.

Goldstein:

Can Micon specify a design that has holes in it? Can it say, "We need a network now to perform this function, which isn't commercially available because it's not in my database," and then describe it for some other module?

Siewiorek:

There is a branch that is combining Micon and SAW: SAW does chips and Micon does boards. Toward this end, Micon is presently synthesizing what is called glue logic. You have a lot of VLSI components and you have to make them talk to each other, so you need some protocol translation. What happens is that that ends up as a pile of NAND gates.

We then take that description and give it to SAW. SAW comes back with an ASIC, and then SAW trains Micon about that ASIC. That way Micon will be able to design the board with that ASIC in mind.

Goldstein:

I see.

Thermal Spraying & Stereolithography

Siewiorek:

Now there is a by-product of this that I find very exciting, the branch called thermal spraying. Thermal spraying is one of the rapid-prototyping techniques that the mechanical engineers have come up with. You take a plastic model form, for example a fan blade for a computer fan. You spray steel on it, and then you can take that steel mold and use it as an injection mold to mass-produce the fan blade. So you have a solid model of what you are trying to make in the computer, and that allows you to slice it up into planes.

Then you feed it into something called a stereolithography apparatus, one plane at a time. What this is, is a laser that writes into a vat of photosensitive plastic. Layer by layer they build up the three-dimensional object, which they can then spray.
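
As an illustration of that slicing step, here is a minimal sketch in Python; the shape (a sphere), the layer spacing, and the printed output are assumptions for the example, not details from the interview.

 import math
 
 RADIUS = 20.0  # sphere radius in mm (assumed for the example)
 LAYER = 5.0    # plane spacing in mm (assumed; real SLA layers are much thinner)
 
 # Walk up the solid model plane by plane; each horizontal slice of a
 # sphere is a circle whose radius follows from the Pythagorean theorem.
 z = -RADIUS
 while z <= RADIUS:
     r = math.sqrt(max(RADIUS ** 2 - z ** 2, 0.0))
     print(f"plane z = {z:+6.1f} mm -> circle of radius {r:5.1f} mm")
     z += LAYER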

Goldstein:

Could you repeat the name of the apparatus?

Siewiorek:

It's called SLA, for stereolithography apparatus. Now what we have done is say, "Well, why do the model? Why don't we just spray it directly?" Think of the ball-and-socket joint your fist forms inside your other hand. If I put a plane through that, what I could do is take the flesh, for example, and create a mask that I could spray metal onto. After this, I would take the complementary mask, say the gap between your fist and your hand, and spray on a sacrificial material like plastic. I can build this up layer by layer, and then when I melt the plastic away I have a ball and socket, completely assembled. You don't even have to assemble it.

They sprayed strain gauges this way, and now we say, "Let's do away with computer boards and let's just spray. We can take the chips and build up artifacts layer by layer. We can bury chips in them and connect them by metal lines that we spray. We will keep pipes close to the chips to take the heat out. We will spray electromagnetic shields. We will spray micro-switches and small mechanical devices, so we can create what we call smart structures." Because of all of this we are able to go directly from the solid model in the computer to a manufactured one in literally less than a day.

Goldstein:

I see. It can spray the chips?

Siewiorek:

No, at this point we will embed the chips. We take regular chips and interconnect them in this layering process. At some point, we will say, "This layer is the metallization layer; let's spray the metal." And then we might spray insulator on top of it. But the idea here is that we are not only trying to go to high levels of specification as input; we also want to take care of all of the nuances of the manufacturing process downstream, and bring that back up into the design process.

That way you not only get it right the first time, but you can now think about taking small things, like the two-pound PC that I talked about, and reducing their volume by a factor of five to ten. This allows you to go from specification to a working artifact, maybe in a couple of days. So there you have it; that is what rapid prototyping is about.

Funding Sources for ERCs

Goldstein:

Was the thermal spraying outside the NSF funding?

Siewiorek:

No. Well, that's the future. I mean, it turns out that some of the funding came from the Engineering Research Center, which is an NSF-funded center. But the project requires heavy usage and big money for robotics and things like that, which will have to be a collaborative funding effort. We haven't established that yet.

Goldstein:

How did the funds work with the Engineering Research Center? Are you free to use the resources there that are NSF-funded, or are you limited by materials and other budget guidelines?

Siewiorek:

What happens is that there is a directors' committee, which basically worries about the purse strings. At present I think that NSF funds are about two point seven million, but it's about a seven-million-dollar center.

Goldstein:

I see. Where does the rest of it come from?

Siewiorek:

It comes in thirds, I would say. About one third — maybe a little more than a third — comes from the basic NSF grant to the center. Then there is maybe a third to 40% or so from industry. Grants from other government agencies make up the rest.

Goldstein:

I see. Where does the military figure in all of this?

Siewiorek:

DARPA, ONR, and other agencies like that are solicited. As a result, there is a fair amount of leveraging going on across the projects. Of course, the industrial people have a vested interest in technology transfer, so they are susceptible to pressure.

Goldstein:

But then you are also responsible for periodically preparing and submitting a grant of your own, correct?

Siewiorek:

Yes, I get my other funding sources there. I mean, I'm on the directors' committee at the Engineering Research Center. In the committee, we are responsible for raising the external funds, as well as writing proposals to the NSF and things like that.

We then get people to cooperate and plan what we should be doing next, because we are not just serving as a miniature NSF. What they want us to do is create a strategic plan, figure out where the research should go, encourage people to go into those areas, and collaborate more than you would with a single-investigator grant.

Don Thomas and I have an external NSF grant to combine Micon and SAW, and that synthesis work is concurrently funded by the NSF center.

Goldstein:

Yes.

Negotiating with NSF for Grants

Siewiorek:

The other aspect of this at the time was the feedback from NSF. The actual core of the NSF proposal was the original Bell proposal, and that focused on a modular design, even though we had to get into CAD. At that point NSF didn't see CAD as a fundable thing in and of itself.

Goldstein:

You mean, not promising or not in their purview?

Siewiorek:

No, except we asked them at one point, "Well, could we just turn this into a CAD contract?" And they said, "No. You can do CAD insofar as it supports the basic things that you are trying to do in terms of the architecture. But we don't think CAD is something in and of itself at this point." Of course, it's one of the major things that they are funding now.

Goldstein:

Was there often negotiation with the Foundation about what you could do and what you should do?

Siewiorek:

It turns out that core grant was very unusual, because it was a five-year one. Most of them were only for three years. And, as is the very nature of research, as you progress it's hard to say what you are going to do five years from now.

Goldstein:

I've been hearing praise for the way the NSF is flexible with researchers when they reapply.

Siewiorek:

Yes. They gave me the latitude to open up the CAD area, but not to make it the sole focus of the grant. It had to be a secondary focus. It's interesting that they did recognize the significance of it. Basically, CAD is now one of the major areas that they support.

Anyway, I will tell you why we got into that issue. Sam Fuller and I started looking at modular design. I brought him in to start looking at things because he was a computer architect. One of the questions that came out of this was: "Were there modules beyond the register-transfer level that it made sense to make computer systems out of?" We looked around, and in the original proposal they had a floating-point unit and a new data type that sounded really good.

We couldn't find anything better than that. Every time we looked around, it turned out it had a program counter in it. So he said, "All right, microprocessors are coming along. They're toys now, but at some point they're going to be non-trivial." He asked: "How do we tie together large numbers of microprocessors?" From this we wrote a paper in '72 on so-called computer modules.

We assumed that the basic module is a computer, much like a Tinkertoy set, something that we can snap together. Then, in 1974, Gordon let us know, before it was announced, about the LSI-11, which was DEC's first microprocessor implementation. We worked very closely with the DEC engineers trying to design that into a multiprocessor, which became something called Cm* ["CM Star"].

Cm* was built into a fifty-processor system, which ran two different operating systems, and it was probably the forerunner of all of these microprocessor-based multiprocessors now. We were able to design it under NSF funding, but the building was not funded by the NSF. That is why the line is halfway through the build.

Goldstein:

Yes, I see.

Siewiorek:

We got a lot of equipment donations from DEC, but that still wasn't enough, so we had to go out to DARPA and get more. At that point, it was clear that the typical NSF grant was not going to be big enough.

Goldstein:

Was it just a question of amount, or was the activity that you could pursue circumscribed by NSF?

Siewiorek:

I don't know. At one time (and I'm not sure; you might have to check with NSF people) the NSF was reacting to ILLIAC IV, and questioned whether universities could build hardware. There was a shying away from the amount of money that could be spent on equipment.

Goldstein:

Well, there was the facilities grant and I think things were supposed to be done that way.

Siewiorek:

Well, I think those came much later. Anyway, there seemed to be a feeling that you could build small hardware things, but if you wanted to build something like this, it was hard to find funding initially. I mean, there were experimental computer science grants that came later, but universities found it very difficult to create the infrastructure to build something. So, we had to go to external sources, such as DARPA, to finish it off.

There are a lot of things that computer companies don't donate that need to be used. When you are building a fifty-processor system, you have to build on the order of 100 boards. Even if a board is rather moderately priced at a couple of thousand dollars, you can see that that is more than the NSF budget could afford in those days.

Performance Instrumentation Environment (PIE)

Siewiorek:

Anyway, we built the Cm*, and we did some performance evaluation of it. We did a lot of work in parallel processing and parallel programming. Cm* has a book which documents all of the experiments we did on it. People said it was a really good place to see what it takes to build and design these things, as well as to carefully plan experiments and evaluate things.

That culminated in something called PIE, a performance instrumentation environment, which I am now working on in collaboration with somebody named Zary Segall. PIE is a visualization of where software spends its time. In other words, you could give it a C program, for example, and it will create for you a structure on the screen. We designed it so that it will create a graph that shows you the various pieces of your program and how they talk to each other. You can go in with a mouse and click on some of those to get the information.

Goldstein:

Now, I'm aware of a project called Parafrase that David Kuck was doing at Illinois. Have you heard of that?

Siewiorek:

Wasn't that an optimizing compiler?

Goldstein:

Yes, well, there were a couple of different applications they found, but I think originally it was supposed to analyze FORTRAN programs in an effort to help synchronize them. But you described yours as a program where graphs were actually generated. I was just wondering if it was related.

Siewiorek:

No. I mean, certainly, there are some traceable origins here, but the goals are fairly different. Rather than an automatic thing, it was meant to be visual feedback to the programmer. Programmers of single-stream FORTRAN programs don't know where they are spending their time, let alone the fact that this thing is being chopped up by an operating system, so the operating system was instrumented also. That way a programmer could see where they were spending their time. You could then see where it might be idling, and where you might want to decompose your programs.

One of the things that we had looked at was how to predict performance from the serial code, and to try to predict what the maximum speedup might be. What we found, surprisingly, is that the shape of the speedup curve, which is the actual performance versus the number of processors used, was actually dictated by two functions. One was the decomposition function, which is how much extra work you have to do when you chop things up to run in parallel. The other was a contention function, which is how much the processors step on each other when they go to a common resource or something.

In these applications on Cm* we found that there is a very small number of these: linear, log N, N log N, and square root of N. All of the curve shapes that we came up with were an amalgam of five or six functions. Some were contention functions, and some were decomposition functions. But we only found about six functions for each one, so out of that we got on the order of 36 different curves.
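
To make the idea concrete, here is a minimal sketch in Python. It assumes a simple additive-overhead model for how the two functions combine; the serial time, the weighting constants, and the five shapes chosen are illustrative assumptions, not the actual functions measured on Cm*.

 import math
 
 T1 = 1000.0  # serial execution time, arbitrary units (assumed)
 
 # Candidate overhead shapes of the kind described: constant, log N,
 # square root of N, linear, and N log N.
 shapes = {
     "1":     lambda n: 1.0,
     "logN":  lambda n: math.log2(n) if n > 1 else 1.0,
     "sqrtN": lambda n: math.sqrt(n),
     "N":     lambda n: float(n),
     "NlogN": lambda n: n * math.log2(n) if n > 1 else 1.0,
 }
 
 def speedup(n, decomp, contend, k_d=5.0, k_c=5.0):
     """Speedup on n processors under an assumed additive model:
     parallel time = T1/n + k_d*decomp(n) + k_c*contend(n)."""
     return T1 / (T1 / n + k_d * decomp(n) + k_c * contend(n))
 
 # Every pairing of a decomposition shape with a contention shape gives
 # a different speedup curve; report where each curve peaks, i.e. the
 # point of diminishing returns.
 for d_name, d in shapes.items():
     for c_name, c in shapes.items():
         best_n = max(range(1, 51), key=lambda n: speedup(n, d, c))
         print(f"decomp={d_name:5s} contention={c_name:5s} "
               f"peak at n={best_n:2d}, speedup={speedup(best_n, d, c):4.1f}")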

From this it was possible to start by measuring a small-processor version, in an effort to see what the maximum number of processors was that you could use before you got to the point of diminishing returns. PIE would pop up a template for you to fill in, for several different parallel implementations: master-slave, queue scheduling, and pipeline. Then you could fill in the blanks with your code, and it took care of all of the inter-processor communication. After that, you could select which variables you wanted to observe. This allowed you to get a profile of where your program spent its time, including the operating system functions and the I/O functions. From this visualization you could tune your program.
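
For instance, the master-slave case might look like the following minimal sketch; this is an illustrative reconstruction in present-day Python, not PIE's actual template syntax. The framework supplies the communication plumbing, and the programmer fills in the blank, here the work() function.

 from multiprocessing import Pool
 
 def work(item):
     """The user-supplied blank: the per-task computation."""
     return item * item
 
 if __name__ == "__main__":
     tasks = range(100)                    # the master's queue of work items
     with Pool(processes=4) as pool:       # four "slave" processes
         results = pool.map(work, tasks)   # the framework handles communication
     print(sum(results))                   # -> 328350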

That is what PIE was about. It has now spawned a fair number of commercial products and been applied in various places. That covers all of those things: ProParasite, IBM Trace, and PIE.

Transfer to Commercial Purposes

Goldstein:

How does the transfer work from your effort to the commercial ventures?

Siewiorek:

Typically they come by to look at it. They want a copy of the code, or they might want us to work with them.

Goldstein:

This is a public-domain thing, so is it just available to them?

Siewiorek:

Yes. If they are going to commercialize it, then they have to come and negotiate a license with the university. The license apparently will get the university some money, based on what they do.

Goldstein:

Is there a lot of competition there?

Siewiorek:

Not right now, because we do an internal evaluation and give them a non-exclusive, internal-use-only license. If they want to commercialize, then they have to come back and negotiate with the university. The university does that on a case-by-case basis. A lot of times, what people will do is take the code, look at it, and decide they can do slightly better. They then go implement it themselves.

Scope of NSF Grants

Goldstein:

Was the work on the Cm* done under a separate grant from the CAD work? Or was that in the same grant and you somehow distinguished it?

Siewiorek:

EXPL and the initial parts of Cm* were in that same core grant. Then, by the time that grant ran out, I think in about '76 or so, there was separate NSF funding.

Goldstein:

Were you the PI for both?

Siewiorek:

In the CAD branch we had co-PIs. By the time we started expanding out with Don Thomas and Steve Director, we had something like two or three co-PIs and we started working as a group. I would say the same thing was happening down on the lower branch. You would typically get one or two other faculty members, because some of these things really needed more graduate students than one person could supervise, and maybe even a range of expertise beyond what one person would normally come up with.

Anyway, I think there are a number of things that probably had some inspiration from Cm*. The Lucid project in Washington and some of the hypercube work are examples. There was also a series of workshops that NSF sponsored during the early '80s on parallel processing, trying to figure out where the future direction would be. They asked Zary Segall to be the PI in this case.

Connection to Commercial Products

Goldstein:

What is the connection between Cm* and Hypercube?

Siewiorek:

If you take a look at Cm*, you will see two operating systems. What Cm* allowed you to do was share memory. You could make a memory request, and if the address was not in your processor's local memory, there would be a local switch that could put it out onto a bus where it would be picked up by other processors on that bus. Or, if it were off that bus, it would be picked up by a mapping processor to go to another bus.

We did that on an address-by-address basis, so it looked like a shared address space to the programmer, even though the time to retrieve things varied by a factor of three as you went from one level to the next. Later, this was called NUMA, non-uniform memory access architecture. In the worst case, if it was the furthest processor away, it was a factor of nine. Anyway, there were a lot of people, particularly Intel people, that were buzzing around here, and they had a product in the early '80s called the 432 that, even though it wasn't commercially successful, drew very heavily on the object-oriented programming and the Tinkertoy architecture that we had developed. Those were people up in Oregon.
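
A small sketch makes that cost structure concrete. It uses the factor-of-three-per-level ratio from the interview; the locality mix in the example is an assumption.

 # Relative access times: local memory, same cluster via the map bus,
 # and a remote cluster -- the 1:3:9 ratio cited in the interview.
 COST = {"local": 1, "cluster": 3, "remote": 9}
 
 def mean_access_time(f_local, f_cluster, f_remote):
     """Average relative cost of a memory reference for a locality mix."""
     assert abs(f_local + f_cluster + f_remote - 1.0) < 1e-9
     return (f_local * COST["local"]
             + f_cluster * COST["cluster"]
             + f_remote * COST["remote"])
 
 # A program keeping 90% of references local, 8% in-cluster, and 2%
 # remote pays only about 1.3x the local access time on average.
 print(mean_access_time(0.90, 0.08, 0.02))  # -> 1.32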

Again, it's not something we had at that particular time, Intel was very conservative about what they would talk about. We just knew they were around picking our brains a lot. In the case of 432, they invited us up and took the covers off the product. We realized that if we ever had designed it ourselves, we could not have done a design philosophy that was closer to what we wanted! We didn't think we would have changed anything.

Anyway, with those things it is hard to document whether there was a direct or indirect influence. Certainly there was lots of discussion with people involved at the time. There were also two operating systems with the ability to do what we wanted. StarOS, the Star operating system for Cm*, had a direct impact on the Mach operating system that came out of this. We looked at the Encore multiprocessor, the Sequent, which is very similar, and also a DEC multiprocessor. Certainly Encore and DEC had some very direct contact with us through this whole period. With Sequent it's hard to say, but the architectures look very similar to that class of architectures.

Goldstein:

And when you say Encore and DEC, are you again basing it on the similarity, or on the fact that there was contact? Can you discuss this explicitly?

Siewiorek:

I was a consultant to both. I have been involved in architecting at least eight commercial multiprocessors. So, I don't know how you want to trace that back to NSF funding. I certainly got my experience with Cm*.

C.VMP and C.Fast

Goldstein:

I want to talk about grad students at some point. You mentioned them a couple of times, and I would just like to find out their names. At least the people who you feel are important, or who were supported by the Foundation.

Siewiorek:

There is a line that says "C.VMP and C.Fast," which was outside the NSF funding. This is again reliability, and reliability is at the center of that diagram. C.VMP was a triplicated processor which could trade performance for reliability. C.Fast was a microprocessor that was fault tolerant. August Systems built a triplicated version of their highly reliable system, and Tandem's Integrity is now based on the triplicated approach. Many people reference that early processor work.

The only reason I mention it here is that it was made out of LSI-11s. We used a lot of the concepts in Cm* to see if we could take off-the-shelf components and make a highly reliable system, running standard software without having to do modifications.

Goldstein:

Who has supported that project since the NSF?

Siewiorek:

ONR and DEC in terms of equipment.

Goldstein:

How would you describe the performance-versus-reliability trade-off?

Siewiorek:

Yes, the trade-off was that you could break it up and treat it as three processors. C.VMP stands for voted multiprocessor. Under software control you could either say, "This is important; I'm going to bring them together and have all three working on the same thing, so if there is an error it's automatically corrected," or decide, "I want them to break apart and get three times the performance."

So, under software control you could decide whether you want them to work on individual tasks or all on one. The critical thing was that you had to synchronize them and bring them back together.
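
Here is a minimal sketch of the underlying triple-modular-redundancy idea in the voted mode; the step function and the injected transient fault are assumptions for illustration, not C.VMP's actual hardware voter.

 from collections import Counter
 
 def vote(results):
     """Majority vote over three redundant results; masks one bad copy."""
     value, count = Counter(results).most_common(1)[0]
     if count < 2:
         raise RuntimeError("no majority: more than one processor failed")
     return value
 
 def step(x):
     return x + 1  # the computation all three copies perform (assumed)
 
 state = 0
 for cycle in range(3):
     outputs = [step(state), step(state), step(state)]
     if cycle == 1:
         outputs[0] ^= 0xFF        # inject a transient fault in one processor
     state = vote(outputs)         # the error is automatically voted out
 
 print(state)  # -> 3: the fault never propagated into the state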

Role and Training of Grad Students

Goldstein:

Earlier you said that your product was supposed to be graduate students. I wonder, was this a philosophy that governed the operation of the lab? Were you and the NSF very conscientious about training students?

Siewiorek:

Let's see. I know NSF believes their product is the trained students. Particularly when we are in system building, we are trying to avoid dragging people along. We will either try to hire a programmer or a technician, or get some undergraduates involved. You don't want to have a ninth-year graduate student! But if you are doing system building, it does take a while. And sometimes you have to orchestrate it in phases, where you might have some Ph.D.'s, a couple of Master's students, and some undergraduates working.

In the case of Micon for example, we had one student whose Ph.D. thesis was knowledge acquisition, but in order to do that he needed the synthesis engine to be redesigned. So that became a very nice Master's project. In order to be able to demonstrate this whole thing, we needed to have a database of parts. We did that with a senior undergraduate, as an honors project. So, we tried to fit pieces together like that. It's a pipeline effect, and you also have to be able to support multiple students at the same time, in order to make that come off.

Of course, we try to build on work that went before. Sometimes there are gaps, but usually you get a really good graduate student to start the project, and then he graduates and you have to get somebody to replace him. Continuity, therefore, is always an issue. We have been relatively successful in that.

Goldstein:

Well, does CMU have a policy about hiring your own graduate students as faculty? Is that an ideal way to preserve continuity?

Siewiorek:

We don't have a policy for or against it. Obviously, you are very limited in how much of that you can do, so you use more graduate students than faculty. You just don't have that kind of turnover. Don Thomas was supported by the NSF as a graduate student, and he is now actually acting department head. So, he has not only stayed on, he is a fellow of the IEEE and very well known in the CAD arena. He has been program chairman, I believe, of the Design Automation Conference, which is the premier conference in the area.

Al Dunlop, who heads the CAD area at Bell Labs, is another example. One of the people that came out of the Cm* project was John Ousterhout, who is at Berkeley and well known in the CAD and operating systems areas. He's now a member of the National Academy of Engineering. He may have actually been supported by DARPA, but without the Foundation's funding of the design and the people building the hardware, that wouldn't have happened.

Goldstein:

Are you suggesting that the project was more or less a team effort?

Siewiorek:

It was definitely a team effort. You don’t do all of the things by yourself. The contributions of the students are very important, because very often they also are sources of innovation. What I'm saying is that without the NSF funding at the root, the project wouldn't have been there.

Goldstein:

In terms of the funding, if the NSF wasn't there, who would have supported it? DARPA?

Siewiorek:

The thing is, there was a nice collaboration there, in the sense that we had to do enough initial research to convince people that this was a big-bucks project, and that it was worthy of pursuing. DARPA tended to fund certain types of projects. I mean, it's not as if DARPA doesn't fund basic research, but they are also somewhat mission-oriented and like to see products and prototypes. But where do the ideas for those things come from? They have to be evolved.

I think in this case the set of funding agencies really meshed very nicely in pulling off the project. They were like a three-legged stool. If you try to knock out one leg, it might topple or certainly be unstable. In this particular case we are out there trying to anticipate the future, and we knew that NSF couldn't fund the building of something like this, but we had enough ideas and other things lined up that we could write proposals to other places that could fund it.

I don't believe Ousterhout, for example, was directly funded by NSF. But he certainly worked on that base. Another guy who got his master's degree working on Cm*, and I would have to go check whether we actually paid him from NSF funds or not, was Andy Bechtolsheim. He was one of the founders of Sun, and is a principal in Cisco. So he certainly was around here and got some of his hardware inquisitiveness from that project.

Mario Barbacci, who was at the Software Engineering Institute, also worked on the project. He is a fellow of the IEEE and past president of the IEEE Computer Society. We listed 50 students that had gotten degrees based on Cm* work. I guess I hadn't prepared for that; I probably should have thought about it, to come up with the half dozen biggest luminaries from that.

Goldstein:

I just want to get a sense; when you say that number, 50, it indicates the magnitude of the project.

Siewiorek:

There are a lot of other people, such as Bill Brantley, who was one of the major people on the IBM RP3. That project is now rising up in IBM Research. Some of these early people have 15 years of experience now, and they are starting to rise in their companies.

Relationship with NSF

Goldstein:

Yes, is there anything else you want to cover? I want to give you an opportunity to call attention to some particular aspect of the work or a highlight that you have, or to give some general comments about your research, the NSF, and the relationship that you've had with it.

Siewiorek:

If I had to mention highlights, I would think things like Cm*, ISP, and Micon would be the ones that are most important. NSF has just been very good to work with. I was on the advisory board for the Engineering Directorate for, I guess, three or four years.

Goldstein:

When was that?

Siewiorek:

From '84 to '88. They are very good at trying to anticipate things. I enjoyed a very good working relationship with Bernie Chern; he is always asking hard questions. And people like John Lehman. I don't know why they work for the NSF. Sometimes I would think there are headaches in that, but I think the country owes people like that a debt, because they do a very good job. They had me come in one time to head up a committee to look at their reviewing process, to see how fair it was.