# Oral-History:C. Sidney Burrus

## About C. Sidney Burrus

C. Sidney Burrus was born on 9 October 1934 in Abilene, Texas. After earning his bachelor’s degree at Rice University, he served two years in the Navy, stationed in New London, Connecticut, where he taught electronics. He received his Ph.D. from Stanford University in 1965 and accepted a teaching position at Rice. Burrus originally specialized in nonlinear analysis but later changed his area of work. He and Tom Parks decided to go into digital signal processing and started the first course in DSP at Rice University in 1968; they worked on digital filters and algorithms. In 1975, Burrus spent a year in Erlangen, Germany, and in the interview he shares his experiences and the understanding of the German educational and research system he gained during that period. In the mid-1980s, Burrus and Tom Parks published two books, which included a unified theory of FFTs and FORTRAN programs.

Over the years, Burrus took an interest in FFTs, digital filter design, and wavelets, and more recently in the use of technology in teaching. He is enthusiastic about undergraduate teaching with MATLAB, which enables self-education through experimentation. He also pays attention to students’ learning processes, something he feels has been neglected by engineering faculty. In the interview, Burrus describes the changes in his career over the last few decades, his students at Rice, and collaborations with other faculty members and their students. The interview concludes with Burrus’s thoughts on IEEE and its functions and his plans as Dean of Engineering at Rice University.

Other interviews detailing the emergence of the digital signal processing field include Maurice Bellanger Oral History, James W. Cooley Oral History, Ben Gold Oral History, Robert M. Gray Oral History, Alfred Fettweis Oral History, James L. Flanagan Oral History, Fumitada Itakura Oral History, James Kaiser Oral History, William Lang Oral History, Wolfgang Mecklenbräuker Oral History, Russel Mersereau Oral History, Alan Oppenheim Oral History, Lawrence Rabiner Oral History, Charles Rader Oral History, Ron Schafer Oral History, Hans Wilhelm Schuessler Oral History, and Tom Parks Oral History.

## About the Interview

C. SIDNEY BURRUS: An Interview Conducted by Frederik Nebeker, IEEE History Center, 12 May 1998

Interview #340 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.

## Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, 39 Union Street, New Brunswick, NJ 08901-8538 USA. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:

C. Sidney Burrus, an oral history conducted in 1998 by Frederik Nebeker, IEEE History Center, New Brunswick, NJ, USA.

## Interview

Interview: C. Sidney Burrus

Interviewer: Frederik Nebeker

Date: 12 May 1998

Place: Seattle, Washington

### Family, childhood, and education

**Nebeker:**

I'm talking with C. Sidney Burrus. Could we start by hearing a little bit about when and where you were born and a little bit about your family?

**Burrus:**

I was born on October 9th, 1934 in Abilene, Texas, a medium-sized city in the northwest of Texas. At a pretty early age I moved to northeast Texas, to McKinney, and grew up there, going through elementary school and high school in a small farm-town community.

**Nebeker:**

What did your father do?

**Burrus:**

My father was a salesman, but he died when I was in junior high school, so in many ways I never knew him. And other people ended up being mentors. My mother was a high school mathematics teacher. I think she ended up having a pretty profound effect on me and my intellectual curiosity.

**Nebeker:**

Were you interested in math and science from an early age?

**Burrus:**

Yes. I was quite interested. I was always fascinated by electricity. There was something magical about it: you couldn't see it, and yet it did things. I was involved with ham radio, and it was just absolutely fascinating to me.

**Nebeker:**

As a youngster?

**Burrus:**

Yes. It amazed my mother, but she encouraged it. I thought I was quite good, but that was because I was in a small rural environment where I had never come across really bright people. So, going to college ended up being a rather shocking experience. I was the greatest thing that had come out of McKinney, Texas, but when I went to Rice in 1953 I was absolutely astonished that there were a lot of people smarter than me.

**Nebeker:**

Well maybe they had better high school courses.

**Burrus:**

It was probably a mixture of a lot of things. But for the first couple of years at Rice I was running scared.

**Nebeker:**

Did you major in electrical engineering?

**Burrus:**

Yes. I never have wished to do anything else. I was quite comfortable with that. It's a place where I could do mathematics. It was a place where I could be involved with electricity. And, it was a place where I could be involved with people.

**Nebeker:**

And what was your boyhood vision? That you would be a practicing electrical engineer?

**Burrus:**

I didn't have much of a vision of that. I mainly was interested in just studying things that intrigued me, and I was kind of waiting to see where that carried me.

**Nebeker:**

And so you got the bachelor's degree in electrical engineering?

**Burrus:**

At Rice we had a kind of a complicated system, so we got a bachelor of arts, which is supposedly in general education. Then, in an additional year, we received a bachelor of science degree. So it took five years to get a bachelor of science in electrical engineering. I stayed on for two more years, teaching and finishing a master's degree. So I managed to stretch out seven years' worth of university study.

**Nebeker:**

But you got three degrees.

**Burrus:**

Oh yes.

**Nebeker:**

What particularly interested you in those years in engineering?

**Burrus:**

I had a professor in my junior year who was teaching information theory. This was back in the '50s. He taught information theory, and linear system theory using Laplace transforms, when that was hot off the press, and it was just fascinating. And that's when I decided I'd like to be an academic. I looked at him, and I thought: I like his life, I like what he does, I like the way he thinks, I like the influence he has on people.

**Nebeker:**

What was his name?

**Burrus:**

Paul Pfeiffer. An interesting person in his own right. He had an undergraduate degree in electrical engineering and a master's degree in theology. He was a Methodist minister for several years, then came back and earned a Ph.D. in pure mathematics. So his eclectic background fascinated me, and he was a bit of a father figure for me.

**Nebeker:**

So what happened after those seven years at Rice?

**Burrus:**

I then served my two years in the Navy, since I had been in the NROTC.

**Nebeker:**

I see.

**Burrus:**

Because of the Korean War and the Vietnam War, people were a little skittish about being drafted. If I'd thought it through more carefully I probably could have avoided the draft, but I was in the NROTC, and I got an interesting assignment teaching in the Naval Nuclear Power School, Admiral Rickover's nuclear power school, in New London, Connecticut.

**Nebeker:**

How did you land that job?

**Burrus:**

I'm not exactly sure. I heard about it through the local NROTC program, and decided that I would apply for it. It turned out because I had done some teaching and I had a master's degree I got the position with absolutely no trouble at all. So this little Texas boy and his wife and daughter headed to the northeast. So my life went through this enormous set of changes.

**Nebeker:**

Going off to the northeast.

**Burrus:**

And going off to a foreign country. Yes.

**Nebeker:**

Were you teaching nuclear engineering?

**Burrus:**

No, I was teaching electronics, and it turned out to be a very valuable experience because I was teaching people senior to me. So there I was, a newly commissioned officer, teaching more senior officers, people older, more experienced. And so I learned how to interact with people in a respectful but commanding situation. Later when I took oral exams at Stanford it was a piece of cake. And I was accustomed to saying, you know, "You're wrong, sir," in a way that they heard what I was saying and didn't get caught up in the military part but could concentrate on the fact that we were trying to get an idea squared away. So I got some very valuable early training in teaching.

**Nebeker:**

And was it valuable also from the standpoint of the subject matter, or was that at a low enough level that it wasn't a useful experience for you?

**Burrus:**

No, it was actually very good. It gave me a chance to go back to some relatively basic things and understand them in a more thorough way than I had before. Which again, when I took my qualifying exams, I was you know second or third out of a hundred. And part of the reason is a lot of the other people hadn't had my kind of experience and were frightened by the examining process. I wasn't. I could think about the questions and not get shaken by the fact that some senior person was asking me questions. I came across better than I would have under truly fair circumstances. As it was, I was delighted to do well, but it wasn't very fair.

**Nebeker:**

Okay. So you were committed to two years in the Navy.

**Burrus:**

Yes.

**Nebeker:**

And you served that whole time in New London then?

**Burrus:**

Yes.

**Nebeker:**

And so you had already decided that you wanted to go on for a Ph.D.

**Burrus:**

Yes. And that was a good preparation, because all of the other teachers were in almost identical situations. These were people who had come from all over the country, all intending to go to graduate school, and serving their two years while teaching. And so we talked about graduate school, and I ended up applying to Syracuse, Brooklyn Polytechnic Institute, and Stanford. I got wonderful letters from Syracuse and Brooklyn, and a snotty form letter from Stanford. And that so irritated me, I knew I had to go to Stanford. The one that acted like I was no great shakes is of course the one I had to attend, to prove that they were wrong.

**Nebeker:**

Well, have you wondered how your life would be different if it was the Stanford letter that had gotten into the book?

**Burrus:**

Oh yes. Numerous times I had the thought. Life is a series of lucks.

### Ph.D. studies, Stanford; nonlinear network analysis

**Nebeker:**

So how was Stanford as a graduate student?

**Burrus:**

Perfect. It would not have been appropriate for me as an undergraduate, but it was absolutely perfect for me as a graduate student. I was mature enough and ready to deal with the much larger program Stanford had compared to Rice. When I received my Ph.D., I was one of 55 in electrical engineering. So, it was a huge program. You didn't know all of your classmates. On the other hand, it gave you a community, an intellectual community that was terrific. My thesis advisor was David Tuttle, a network synthesis person who had done his degree under Guillemin at MIT. So there's an interesting sort of family tree starting back with Vannevar Bush and down through Guillemin and then to Tuttle and then to me and so forth. Bush was an interesting person at MIT, and he and Guillemin and some of their ideas ended up propagating all over the United States, so that at one time I think most of the good universities in this country had department chairs that had come out of some connection with the MIT experience. MIT was a very powerful influence on electrical engineering, especially on the systems theory part of electrical engineering.

**Nebeker:**

But you also were at a place that was very influential because of Frederick Terman?

**Burrus:**

Oh yes. But see Terman had pretty much modeled his ideas on Bush out of MIT, so Terman was strongly influenced by MIT. And Terman was also part of the reason I went out there, especially his book on radio engineering. It was very impressive. Actually, I was far more influenced by some of the younger faculty members like Tom Cover, Gene Franklin, Bill Linvill and David Luenberger. They were people who were classmates of mine, but they finished and then went on to the faculty. I took a course from David Luenberger, who is an absolutely outstanding teacher, and probably learned more from him than I did from my own advisor. Gene Franklin was a very important person to me out there in control theory. Bill Linvill was an absolutely superb person also out of the MIT school in system theory. It was an interesting time.

**Nebeker:**

What did you specialize in?

**Burrus:**

Nonlinear network analysis. I was interested in mathematical methods for solving nonlinear differential equations.

**Nebeker:**

And how did you select that topic, nonlinear network analysis?

**Burrus:**

Just again kind of curiosity.

**Nebeker:**

Was that something that had come into the fore for some reason?

**Burrus:**

No, it was more that linear theory was so powerful because it said what a system was. I wanted to see what could be done when you said what a system wasn't. And so when you say a system is nonlinear, you're not really saying what it is, you're saying what it's not. And can you do anything with that. So it was kind of a frustrating thesis topic. And a few years after graduation, I switched into DSP.

**Nebeker:**

I see. Before we leave that though, was that a matter of intellectual curiosity, this desire to look at nonlinear networks, or some idea that this would be important to the power industry or somebody else?

**Burrus:**

No, it was more intellectual curiosity. There are nonlinear phenomena that are just very counter-intuitive. I'm sure hundreds of people could say this, but I could just kick myself a thousand times, because I saw the roots of chaos theory and just basically didn't recognize that it was something of interest. I viewed it, as most people did at that time, as an anomaly that you had to get rid of, and not as an effect that was actually interesting in its own right. My nonlinear differential equation would break up into these kind of bizarre solutions. I assumed we'd done something wrong, so I went in and readjusted the parameters to quit doing that. It was chaos. It was there right in front of my nose, and I didn't do anything with it. It taught me to look at anomalous solutions in a very different way. Ever since then I've been much more careful before I throw out ideas that I consider to be irrelevant.

### Employment at Rice

**Nebeker:**

So you completed your Ph.D. in '65 and went back to Rice?

**Burrus:**

Yes. In '65 the academic scene was absolutely phenomenal. People came to me at Stanford and asked if I would be interested in going to a university, because at that time there was an explosion in higher education. Every engineering school in the country decided they needed a Ph.D. program. Before that there were only a handful of Ph.D. programs in the United States. Engineering was not a Ph.D. discipline, but it became one. People just, they came to Stanford, they came to MIT, they said please replicate what those schools have done at my school. Which of course is impossible, but that's what everyone was trying to do. Well when I was trying to finish my Ph.D., schools from all over were coming in. Industry was also blossoming, so the mid-'60s was a very good time. I narrowed it down to academics. I wanted to go to Rice because I saw it as a place that had great potential, and it would be fun to go someplace and try to develop potential that hadn't been realized.

**Nebeker:**

So they didn't have a Ph.D. program at that time?

**Burrus:**

They had one, but it was small, and they were interested in enlarging it. On the industry side, I decided that Bell Labs was the place to be. So I narrowed it down to Bell Labs, but the Bell Labs I was looking at was in North Andover, Massachusetts, because when we were living in Connecticut we had enjoyed it. My wife especially thought it would be fun to live in New England, and I really wanted to stay in California, so we compromised on Texas. So we went back to Rice in 1965.

### Digital signal processing research and teaching

**Nebeker:**

I see. You said that you changed your area of work?

**Burrus:**

A few years after I went to Rice, I realized that nonlinear analysis was not going to be easy. People had been trying to solve nonlinear problems for a couple hundred years, and the techniques were all sporadic, they were a bag of tricks. If you modified the problem slightly the methods fell apart, there were no robust techniques for dealing with nonlinear phenomena. And here I was as a beginning faculty member trying to tackle problems that really good mathematicians had been beating on for hundreds of years. So I probably wasn't going to get much, and I probably wasn't going to get tenure, so that was not an area to go into. Cooley and Tukey's paper on the FFT fascinated me. I got my Ph.D. the same year Cooley and Tukey wrote that paper.

**Nebeker:**

When did you learn of it?

**Burrus:**

Oh, just shortly after it was done. Sort of through the grapevine. I got hold of a copy of the paper. The paper was not particularly well written, because they didn't think it was as important as it turned out to be. They wrote it up because they were told to write it up, and then went on to something they thought was more interesting. A bunch of us around the country read that paper and thought, "Oh my goodness, this is really something exciting, that some mathematical techniques could make a dramatic difference in the way technology performs."

**Nebeker:**

Why were you in a position to recognize that would make any difference?

**Burrus:**

I took a lot of courses in system theory and control theory and communication theory at Stanford, on top of circuit theory. I had a long interest in these techniques; most of them used Fourier techniques as a kind of root tool, and calculating Fourier transforms without the FFT is a real pain.

**Nebeker:**

Had you tried to do that?

**Burrus:**

Oh yes.

**Nebeker:**

Had that been something you had done yourself?

**Burrus:**

Yes.

**Nebeker:**

Okay.

**Burrus:**

But it was actually through Bill Linvill at Stanford. We would do what was equivalent to a discrete Fourier transform, but it was an order N-squared calculation. Suddenly we had an order N log N calculation.
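The difference Burrus describes, between the direct order-N-squared DFT and the order-N-log-N FFT, can be seen in a minimal sketch (pure Python, a recursive radix-2 Cooley-Tukey for power-of-two lengths; an illustrative toy, not an optimized implementation):

```python
import cmath

def dft(x):
    # direct O(N^2) evaluation of the DFT sum
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    # recursive radix-2 Cooley-Tukey, O(N log N), for N a power of two
    N = len(x)
    if N == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + twiddled[k] for k in range(N // 2)] +
            [even[k] - twiddled[k] for k in range(N // 2)])
```

For N = 1024 the direct sum costs about a million complex multiplies, while the recursive version costs on the order of ten thousand, which is the dramatic difference the Cooley-Tukey paper revealed.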

**Nebeker:**

Why were you doing that particular calculation with Bill Linvill?

**Burrus:**

I was just looking at spectra of signals, how signals pass through linear systems, and the relationship of the frequency response of the system to the spectrum of the signal. But it was mainly the idea that a mathematical trick had such power. And I thought okay, I've got to do that. So Tom Parks, who had joined the faculty at Rice and who had come out of a communications background, and I, coming out of a signals background, decided together we would go into signal processing, digital signal processing.

**Nebeker:**

Did you think of it as that phrase at the time? Digital Signal Processing?

**Burrus:**

Yes. I knew enough about digital computers and their programming to know that this was a very important development. And then when I saw the FFT it all kind of came together, and that's when Tom and I sat down and made the conscious decision that the future of DSP looked very bright and intellectually challenging. It drew on our backgrounds in system theory, and it had a future in the digital part of system theory that the analog stuff just didn't have. So off we went. That was in 1968.

**Nebeker:**

When you two decided to introduce a course?

**Burrus:**

We taught our first DSP course in 1968 at Rice.

**Nebeker:**

Was it called that?

**Burrus:**

Yes.

**Nebeker:**

This was a graduate course?

**Burrus:**

Yes, a first year graduate course. I'm not sure whether Gold and Rader's book was out then, but it may well have been.

**Nebeker:**

Right about that time.

**Burrus:**

But see, there was a lot going on around the country. Jim Kaiser's work. Neither Tom nor I went to the first Arden House workshop; we both went to the second, in '70. And it was absolutely wonderful. The people you wanted to talk to were there. It was in January, up in Harriman, New York, so you wouldn't do anything else except hang around with the other signal processing people. And they were all there, from all over the world.

**Nebeker:**

Yes.

**Burrus:**

I met Hans Schuessler from Germany at that meeting, and that later resulted in my spending a year in Erlangen in 1975.

### Filters, algorithms, number theory transforms

**Nebeker:**

What were the first things that you or you and Tom looked at there in the late '60s, early '70s? What sort of things in DSP?

**Burrus:**

Filters and algorithms. We were both interested in digital filters: what do they do, how do you design them, how do you implement them. And we were interested in algorithms, variations on fast Fourier transforms. One of the realizations I had early on is that the Cooley-Tukey radix-2 FFT was near its maturity; just refining it wasn't going to get you much. So we looked at other types of transforms, and I got involved with number theory transforms, another fascinating topic.

**Nebeker:**

Can you give me a quick summary of that area?

**Burrus:**

I asked a graduate student, Ramesh Agarwal, to take a look at the Walsh transform to see, if you multiplied two Walsh transforms together and took the inverse Walsh transform, what you had done. It wasn't going to be convolution, because I knew that it wouldn't implement convolution. However, I wasn't sure what it would implement. I knew that it had to implement something linear, therefore it must be some kind of time-varying convolution. So I had enough intuition to know that it could be interesting. I told Ramesh, "Look into this, because I think we might be able to modify it to do a better form of convolution. So please look at that." He came back a few weeks later and said, "Well, bad news. You can't do that. There is a unique transform, the Fourier transform, and that's the only one that will support convolution using normal arithmetic." And so we thought about that, and it suddenly dawned on me that he had said something key about using normal arithmetic. I said, "Well, what if we don't use normal arithmetic? What if we use some other arithmetic system?" That lit a spark in him. He had taken a course in information theory and knew a little bit about coding theory, and he said, "Oh. That's interesting. Maybe we could do it over some kind of a finite field or finite ring." And it just unfolded. He came up with using a finite set of integers and defining the arithmetic over this finite field, and later we saw that it could be done over certain finite rings, so that you could formulate a completely new transform.

Well, about the same time that we were doing this, Charlie Rader came up with a number theory transform based on Mersenne numbers. And then we came up with one based on Fermat numbers and developed its properties. It turns out that even though these are remarkable transforms, they are not very practical, and so they've never turned out to be all that useful. But they were quite interesting, because we had come up with a transform that did exact convolution, unlike the regular Fourier transform, where you have to use approximations to the sine and cosine functions. There is no approximation in the number theory transform. It's exact. You end up with exact convolution, and it's very fast. It uses some peculiar operations that normal floating point computers are not very well suited for, so it's better done in special hardware or special designs, and it never became all that practical. But okay, the number theory transform was one thing. We also did work on prime factor algorithms, where you have data lengths that are products of numbers which are relatively prime. So you'd have 5 x 7 x 8. Even though 8 is not a prime number, it's relatively prime to 7 and 5, and so that length could be broken down into an FFT algorithm. About that time Winograd's work came out, around '75. We then came up with a way of doing a prime factor algorithm. Tom Parks did some work on that as well. It turns out that Jim Cooley had actually done some work on that too that he had never published, and we learned about that after the fact. But there was a little flurry of work on these prime factor algorithms.
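The exact-convolution property Burrus describes can be illustrated with a small number theoretic transform over the integers modulo the Fermat prime F3 = 257 (a naive order-N-squared sketch for clarity; the length and the two short sequences are made up for illustration). Because everything is exact integer arithmetic, there is no rounding error anywhere:

```python
P = 257                         # Fermat prime F_3 = 2^8 + 1

def ntt(x, root):
    # naive O(N^2) number theoretic transform mod P
    N = len(x)
    return [sum(x[n] * pow(root, k * n, P) for n in range(N)) % P
            for k in range(N)]

N = 8
g = pow(3, (P - 1) // N, P)     # primitive N-th root of unity mod P
g_inv = pow(g, P - 2, P)        # inverses via Fermat's little theorem
N_inv = pow(N, P - 2, P)

# zero-padded sequences: length-8 circular convolution equals their
# linear convolution, since 4 + 3 - 1 <= 8
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]

A, B = ntt(a, g), ntt(b, g)
C = [(u * v) % P for u, v in zip(A, B)]
c = [(v * N_inv) % P for v in ntt(C, g_inv)]   # exact convolution
```

Here `c` comes out as [5, 16, 34, 52, 45, 28, 0, 0], exactly the integer convolution of [1, 2, 3, 4] with [5, 6, 7]; there are no sines, cosines, or floating point anywhere, which is the property that made these transforms so intriguing despite their practical limitations.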

**Nebeker:**

And have they found application?

**Burrus:**

Oh, yes. The application is the same as for the traditional Cooley-Tukey. It's just that another algorithm has come up for a different set of lengths that you can't do with the traditional Cooley-Tukey. And the efficiency of these algorithms is on the same order as the best Cooley-Tukey, so it's neither better nor worse. But we've got a more flexible and more versatile algorithm.

**Nebeker:**

I see.

**Burrus:**

It was always aggravating to me that most folks, when you say FFT, think power-of-two lengths. Well, that's not true. It actually is a much more general algorithm than folks normally think, and I find it irritating that even today books will be written on the FFT that just assume the Cooley-Tukey algorithm is the FFT. And it's not. I mean, it's a much richer and broader class.

I had a very good graduate student, must have been in the '80s, who worked with me, and we came up with a technique for designing algorithms. In other words, we had a program that wrote programs. Now here is the philosophy, and I think it's a pretty important one. In ancient times what you would do is calculate sines and cosines and put the results in a book. You would have tables of sines and cosines. Well, today we would never consider doing that. What we do is we have an algorithm and a calculator. Punch a button, out comes the value of sine or cosine. So why do we write programs? Why don't we have an algorithm that will write a special program for your computer? So there won't be an FFT. What there will be is many FFTs. And I have a program that will write an FFT for your computer. You tell me how much RAM you have, you tell me how much cache, you tell me how much disk, you tell me what the transfer times are, the multiply times, the addition times. Then I punch a button on my program, and out comes a special FFT written for your computer. It's optimal. Oh, you enlarge the cache in your computer? I redesign your FFT. It's really a very different philosophy.
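The philosophy Burrus describes, a program that writes a program tailored to a particular machine or length, can be sketched in miniature. Here a generator emits straight-line code specialized to one transform length, with the twiddle factors precomputed and baked in as constants (a toy direct DFT generator invented for illustration, not the actual generator from his group):

```python
import cmath

def generate_dft(N):
    # emit Python source for a straight-line DFT specialized to length N,
    # with the twiddle factors precomputed and baked in as constants
    lines = [f"def dft_{N}(x):", "    return ["]
    for k in range(N):
        terms = []
        for n in range(N):
            w = cmath.exp(-2j * cmath.pi * k * n / N)
            terms.append(f"x[{n}] * complex({w.real!r}, {w.imag!r})")
        lines.append("        " + " + ".join(terms) + ",")
    lines.append("    ]")
    return "\n".join(lines)

namespace = {}
exec(generate_dft(4), namespace)        # "compile" the generated program
y = namespace["dft_4"]([1, 2, 3, 4])    # and run it
```

A real generator of this kind would also choose among radix-2, radix-4, split-radix, or prime factor structures based on the machine parameters the user supplies, which is the recursion of "programs that write programs" Burrus has in mind.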

**Nebeker:**

Is that the Sorensen FFT algorithm paper?

**Burrus:**

No, it was Johnson.

**Nebeker:**

Did the paper on the design of FFT algorithms come out in April '82?

**Burrus:**

That sounds like it.

**Nebeker:**

I see.

**Burrus:**

But it's an idea that still hasn't been fully exploited, programs that write programs. And I think you could even recurse that further and have programs that write programs that write programs. Using the analogy of calculating values with an algorithm rather than storing numbers in a table, we do the same thing for algorithms, and we have a framework for a rich set of algorithms that makes that possible. I'm convinced it's really a powerful idea that has not been fully exploited.

### Digital filter applications; Prony's method

**Nebeker:**

You said that there were these two areas in which you worked in the early days of DSP, the algorithms and the digital filters. And I just wanted to ask what specifically your objectives and motivations were with the digital filter work. Were you trying to achieve the same kind of filters digitally that you'd worked with in analog form?

**Burrus:**

Yes, initially. In fact, almost everyone in DSP was doing that. Which actually is an interesting idea that I've only thought about in the last handful of years, talking to some colleagues in the history of science and technology: when new ideas first come along, almost always the first thing they do is mimic old ones. Once they go through a cycle of that, then they start generating completely new applications. At first digital computers mimicked analog computers. Then they started coming up with applications of their own. The laser mimicked flashlights, powerful flashlights. Then came bar code readers and CD players and things like that. And in the case of digital filters, they first mimicked analog filters. You have what are called recursive filters, or IIR filters. I was doing precisely that. Parks and I came up with a terrific method of time domain design of recursive filters. We got a very nice paper published back in the '70s.

**Nebeker:**

Yes, that was the time domain design of recursive digital filters paper, in June of 1970.

**Burrus:**

Yes.

**Nebeker:**

And that one I think was reprinted in the DSP volume.

**Burrus:**

Yes. Well, about a year or two after that paper came out one of my colleagues said, "You know, that reminds me of Prony's method, which is used for calculating volumes of gases in chemistry and so forth." So I looked it up, and I'll be damned if we hadn't reinvented a method that goes back to about 1790 and Prony. We did it in a completely different way, using different mathematics and so forth, so it wasn't immediately obvious, but if you were careful and went back, as we did, you'd discover that we had rediscovered Prony's method.

**Nebeker:**

What specifically did Prony’s method do?

**Burrus:**

If you have a signal that's a sum of exponentials with different exponents, can you take that summation and identify the exponents of the component parts? What Prony did was look at a mixture of gases. He took a mixture of gases, compressed and heated them or did something to them, I don't know exactly what, took the results of that experiment, and was able to determine the properties of the constituent gases of that combination. And here I had done something similar on these components of signals, not realizing that this other theory even existed. Once I went back, I mean, we had a much more clever solution, but the fact is it was the same problem, and our answers were the same as his. It's a little bit demoralizing to discover something that was 200 years old, but that's part of the game.
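The problem Burrus describes can be made concrete. Prony's idea, in modern linear-prediction form, is that a sum of p exponentials obeys a p-term linear recurrence; solving for the recurrence coefficients and taking the roots of the characteristic polynomial recovers the exponents. A minimal two-mode sketch (the signal and its mode values are made up for illustration):

```python
import math

# samples of x[n] = 0.9**n + 0.5**n, a sum of two exponential modes
x = [0.9**n + 0.5**n for n in range(4)]

# Prony / linear-prediction step: find c1, c2 such that
#   x[n] = c1*x[n-1] + c2*x[n-2]
# two equations (n = 2, 3) in two unknowns, solved by Cramer's rule
det = x[1] * x[1] - x[0] * x[2]
c1 = (x[2] * x[1] - x[0] * x[3]) / det
c2 = (x[1] * x[3] - x[2] * x[2]) / det

# the exponential modes are the roots of z**2 - c1*z - c2
disc = math.sqrt(c1 * c1 + 4 * c2)
modes = sorted([(c1 - disc) / 2, (c1 + disc) / 2])   # recovers [0.5, 0.9]
```

With noisy data the same recurrence is solved in a least-squares sense over many samples, which is where the connection to time domain filter design comes in.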

**Nebeker:**

That's fascinating. Yes.

**Burrus:**

But anyway, that was our first filter design experience. Tom Parks and Jim McClellan did some work on Chebyshev filter design that was revolutionary. In fact, from my perspective there have been two classical happenings in deterministic signal processing. One was Cooley and Tukey's FFT; the other is Parks and McClellan's Remez exchange algorithm. Those were milestones. Before Parks and McClellan's algorithm, you couldn't design big filters, you couldn't design optimal filters. You had all kinds of crummy approximate methods, but you could not do it. After it, you could do it on desktop computers. It was just a landmark.

### Parks-McClellan method

**Nebeker:**

Can you explain to me in layman's terms what the Parks-McClellan method does?

**Burrus:**

Well, it's a technique for designing optimal Chebyshev filters, and it does it in a very efficient way. If I say the filter is going to have 100 coefficients, and it has a pass band of a certain size and a stop band of a certain size, and I would like to minimize the maximum error in the pass band and stop band, the Remez exchange algorithm will do that. Now in a sense what Tom and Jim did was very straightforward. They posed the filter design problem such that the exchange algorithm of Remez, a Russian mathematician, would solve it. I mean, that sounds pretty trivial. Well, it turned out to be incredibly insightful and powerful, because nobody else had done that, and after Tom and Jim did it, it just changed the world. So it's, again, one of these kind of subtle things.
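
The Remez exchange idea is easiest to see on plain minimax polynomial approximation: force the error to be equal-ripple with alternating sign on a small reference set, solve a linear system, then move the reference to the extrema of the actual error and repeat. A sketch in Python (NumPy assumed; this is the generic exchange algorithm, not the Parks-McClellan filter code):

```python
import numpy as np

def remez_poly(f, degree, a=-1.0, b=1.0, iters=8):
    """Minimax polynomial approximation of f on [a, b] by Remez exchange."""
    n = degree + 2                        # size of the alternation reference set
    x = (a + b) / 2 - (b - a) / 2 * np.cos(np.pi * np.arange(n) / (n - 1))
    grid = np.linspace(a, b, 4000)
    for _ in range(iters):
        # Solve sum_k c_k x_i^k + (-1)^i E = f(x_i) for the c_k and ripple E
        A = np.hstack([np.vander(x, degree + 1, increasing=True),
                       ((-1.0) ** np.arange(n))[:, None]])
        sol = np.linalg.solve(A, f(x))
        c, E = sol[:-1], sol[-1]
        # Exchange: move the reference points to the extrema of the error
        err = f(grid) - np.polyval(c[::-1], grid)
        idx = np.where(np.diff(np.sign(err)) != 0)[0]
        segs = np.split(np.arange(len(grid)), idx + 1)
        peaks = [seg[np.argmax(np.abs(err[seg]))] for seg in segs]
        if len(peaks) < n:
            break                         # fewer alternations than needed: stop
        x = grid[np.array(peaks[:n])]
    return c, E

c, E = remez_poly(np.exp, 3)              # cubic minimax fit to e^x on [-1, 1]
g = np.linspace(-1.0, 1.0, 4000)
max_err = np.max(np.abs(np.exp(g) - np.polyval(c[::-1], g)))
# max_err is about 0.0055 and essentially equals the ripple |E|
```

Parks and McClellan's insight was to pose the linear-phase FIR design problem so that exactly this kind of exchange applies, with the reference points living in the pass and stop bands.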

**Nebeker:**

In the sense that it made it very much easier for people to design digital filters.

**Burrus:**

Oh, yes. I mean, you could do it on a little computer in a few minutes instead of on giant computers in several days. Before that, it was just impossible. What happened is people used other techniques that weren't optimal but suboptimal. They went to a bunch of trouble with things like window designs, which I dislike immensely. But that was one of the standard methods of designing filters. It was suboptimal: you couldn't have a different error in the pass band and stop band. It's a poor way to design filters. And we came up with a method that is optimal, that can be done on small computers and allows you independent control of the pass and stop band. It's just very different.

**Nebeker:**

And the result in sort of macroscopic terms was that digital filters came to be more widely used?

**Burrus:**

Oh yes. And you knew they were optimal. We were no longer saying, "Can you do any better than this?" So it just changed the whole world of filter design. Probably eighty percent of the design techniques that had been used up to that point were totally worthless after that. Like the Cooley and Tukey paper, it just changed the world. And it was Jim McClellan's master's thesis. It wasn't even a Ph.D. But it was amazing. You know, it slowly dawned on Jim and Tom what they had achieved. That was done at Rice, and while I wasn't a major player in that, I was there and it was part of our filter design group. After that we worked on a variety of filter design problems.

### Block implementation of digital filters; applications

**Nebeker:**

I see you had a number of papers on block implementation of digital filters.

**Burrus:**

Yes. Do you know what block processing is?

**Nebeker:**

Yes, in general terms.

**Burrus:**

The reason block processing is important is the FFT. The FFT works on finite-length sequences of numbers. Most signals are essentially infinite in length, so we have to chop them up into blocks and process them in blocks. Can you come up with a method that gives exactly the same results as if the signal had been processed with the infinite-length techniques, but doing it in blocks where you can use the FFT for efficiency? The answer is yes. So you use the block techniques to get the same results as if it were done the infinite-length way, but you have the advantages of using the FFT. We came up with ways of doing it in general for FIR filters, and of doing it in a completely new way for recursive, or IIR, filters. Another very interesting and kind of curious result is that these block-implemented filters were more robust in terms of round-off error. They were more efficient in terms of computational requirements. Their downside was that they had more delay between the input and output than a traditional filter. That's because you had to start up the block, so you had to bring in one block of data before you could start the processing at all. That caused a delay that in some applications was not acceptable. If it was acceptable, then it was the way to do it.
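
The standard way to get block-by-block FFT processing that exactly matches ordinary convolution is overlap-add: filter each block with an FFT long enough to hold the block plus the filter tail, then add the overlapping tails into the output. A minimal sketch in Python (NumPy assumed; block size and signals are illustrative):

```python
import numpy as np

def overlap_add(x, h, block=64):
    """FIR filtering in blocks via the FFT; matches direct convolution."""
    m = len(h)
    nfft = 1
    while nfft < block + m - 1:          # FFT long enough for block + filter tail
        nfft *= 2
    H = np.fft.rfft(h, nfft)
    y = np.zeros(len(x) + m - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        out = np.fft.irfft(np.fft.rfft(seg, nfft) * H, nfft)
        # Add this block's result, including its tail, into the output
        y[start:start + len(seg) + m - 1] += out[:len(seg) + m - 1]
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = rng.standard_normal(31)
y = overlap_add(x, h)
# y agrees with np.convolve(x, h)
```

The one-block start-up delay he mentions is visible here: no output sample is ready until a full block has arrived.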

**Nebeker:**

I see. So that was influential.

**Burrus:**

Yes. And there was another interesting dimension that came out of this. You have a string of numbers, and you chop it up into blocks. Then you take the blocks and chop them up into sub-blocks. And then you take those and chop them until finally you're down to the individual numbers. Now listen carefully: I am now going to take those numbers and chop them up too. So even when I get down to the individual numbers in a string of numbers, I think, okay, I've gone as far as I can. No, that's not true. What you can do is take that number, which is say an 8-bit number, and break it down into two 4-bit pieces, and then take those 4-bit pieces and break them down into 2-bit pieces, until finally I get down to the bit level. Now that is as far as you can go. I mean, the atom of a signal is a bit.

Well, when you've done that, you get what we call distributed arithmetic, where you've broken down even the individual numbers. In a filter, then, a particular sample will come in and the different bits of it will spread out and go through the filter. That's completely different from your traditional filter structure, where you come in with the samples, the samples go through the filter, get recombined, and come out the other end. Now not only do the samples break up, but the bits break up and go through the filter. And those that are less important you simply don't do. So it's kind of like a variable-resolution implementation. Over here it's not very sensitive, so I just do a 4-bit calculation. Over here it's very sensitive, so I'll do an 8-bit, well, maybe a 16-bit calculation. So I have a variable-precision arithmetic implementation.

**Nebeker:**

Is that what distributed arithmetic is?

**Burrus:**

That's part of it. It also allows table look-up. It enables you to precalculate part of the implementation and then look it up on the fly as you implement the filter. As you are running a signal through the filter, instead of taking these numbers, multiplying them by coefficients, and adding them up, I take these numbers, use them to address other, precalculated numbers, which I then fetch, and I can do that much more efficiently than actually calculating. Once again, I'm trying to precalculate as much as I can and then fetch it, because memory is cheap. It turns out in modern hardware to be a pretty effective way of implementing filters. But you look at it and say, oh, it's kind of crude, you precalculate it by partial products, I thought we were trying to get away from that. Well, we're trying to get away from wasting money, and if that's the cheaper way to do it, do it that way.

Well, these are only semi-applications. It's more implementation. Am I saying anything about speech or seismic or medical applications? No. I'm not talking about real applications. I am talking about implementation. So I've got a filter, and I want to implement it in hardware. What's a clever way of doing it? Distributed arithmetic? Block processing? Those are clever ways of doing it.
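
The table-lookup version of distributed arithmetic can be sketched in a few lines. Assuming unsigned inputs and a 4-tap filter (real hardware handles signed data with offset-binary tricks that this toy omits), the inner product is computed with no multiplies at all:

```python
def da_dot(samples, coeffs, nbits=8):
    """Distributed arithmetic: the inner product sum(samples[k] * coeffs[k])
    computed with only table lookups and shift-adds, no multiplies."""
    n = len(coeffs)
    # Precompute, for every n-bit pattern, the sum of coefficients whose bit is set
    table = [sum(c for k, c in enumerate(coeffs) if (pattern >> k) & 1)
             for pattern in range(1 << n)]
    y = 0
    for b in range(nbits):
        # Bit b of each input sample forms one table address
        addr = sum(((samples[k] >> b) & 1) << k for k in range(n))
        y += table[addr] * (1 << b)      # shift-add the looked-up partial sum
    return y

samples = [200, 17, 5, 99]               # unsigned 8-bit inputs
coeffs = [3, 1, 4, 2]
y = da_dot(samples, coeffs)
# y equals the direct inner product 200*3 + 17*1 + 5*4 + 99*2
```

The table has 2^taps entries, which is why distributed arithmetic favors short filters or filters split into short sections.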

**Nebeker:**

I'm wondering though whether you are looking at the ultimate applications and that's guiding what you're doing?

**Burrus:**

It is in the sense that some of these ultimate applications are real-time signal processing, and speed then becomes extremely important. You are willing to sacrifice a bit of accuracy in order to get the speed. Maybe it's speech processing, maybe it's seismic, maybe it's sonar. Then there are other applications that are batch, where real time is not important and you are willing to grind on that data for a long time. Seismic, for example, doesn't need real time. You go out and record the data, bring it in, process it.

**Nebeker:**

Well, there are also many applications where hardware cost may not be so crucial as it was.

**Burrus:**

Back in those days there were three industries that had money. The defense industry, they didn't care how much something cost, they wanted to do it the best possible way. So they bought a lot of DSP. The oil industry, they wanted to find oil. And the medical industry, they were willing to spend money to do tomography and exotic diagnostic tools. The commercial world, the consumer product world, they weren't willing to spend the money. So the initial applications were in defense, oil exploration and medicine. And so the types of filters, the types of specifications, the real-time-ness or non-real-time-ness of the problem drove our filter design.

### Research with H. W. Schuessler

**Nebeker:**

I see. Is this a good point for you to comment on this year at Erlangen? You said that was '75?

**Burrus:**

Yes. I received an Alexander Von Humboldt Senior Award to spend one year in Germany. And that was with Hans Schuessler in Erlangen.

**Nebeker:**

You met him at Arden House.

**Burrus:**

I met him at Arden House. Tom Parks had also spent a year in Germany, and so Schuessler got connected to Rice through Parks and through me. There were two major research groups in Germany at that time, Schuessler's in Erlangen and Alfred Fettweis's group in Bochum. Schuessler's work was more interesting to me. It was more like mine.

**Nebeker:**

Could you describe the sort of the state of the art in Germany at that time? Was Schuessler doing similar things to what was being done in the United States?

**Burrus:**

Schuessler had been very interested in digital filters, in communication applications, and was in many ways ahead of what was going on in the United States, but his work was not as well known. He didn't publish as much and didn't present as much. He did learn early on that it should be done in English, but nevertheless a lot of his work didn't get known. And so both Tom and I got kind of a special preview by going over there and seeing what his group was up to.

**Nebeker:**

I see.

**Burrus:**

And he learned something about what we at Rice were doing, and he spent some time at Rice, so we ended up with a nice relationship.

**Nebeker:**

Were there any differences in facilities, maybe ease of implementing things in one place or the other?

**Burrus:**

Yes. The German educational and research systems are quite different from the American ones. He had technicians, and he had shops and laboratories that were much better than at most American universities. It would be more like Bell Labs or Lincoln Labs than what you would find in a typical university. So he was able to build filters that we would never attempt to build at Rice or at many universities. It was a surprise. I had always thought that the Germans were the theoreticians and the Americans were the practitioners. It turns out it's just the other way around: the Americans were more theoretical and the Germans were more practical. That was an interesting realization, and it also meant that we could learn a lot about practice from them, and in some cases we could contribute some on the theory.

### Graduate students at Rice

**Nebeker:**

That's very interesting. And then you were of course back at Rice. You seem to have had quite a few students, master's, Ph.D. and postdoctoral.

**Burrus:**

Almost any academic person owes a large part of whatever success they have to their graduate students. If you have good graduate students, you are going to have good ideas. The interplay between your work and the graduate students' work is very productive. So you end up appearing to be smarter if you have smart graduate students. I was lucky. I had some really very bright students.

**Nebeker:**

We clearly don't have time to talk about all of them. Are there one or two on this list of early students that you would comment on?

**Burrus:**

Well, in particular Ramesh Agarwal; he went on to become a Fellow of the IEEE. He's at the Watson Research Laboratories at IBM, a brilliant researcher. He's not all that well known, because he often works on problems that are interesting to IBM that don't really get seen as much, but he is still really very creative, and his ideas on these number-theoretic transforms were wonderful. Also, Howard Johnson is an incredibly creative young man who was more interested in becoming a California entrepreneur, I think, than being an academic, but his ideas on writing programs that wrote programs and on the structures of FFT algorithms were extremely good, and are still cited and used. And Mike Heideman, a Stanford undergraduate and a Rice Ph.D., did some really good work on prime-length FFTs. More recently, Ramesh Gopinath did some just really good work on wavelets. Haitao Guo, who is here at the conference as a matter of fact, worked on wavelets. And Ivan Selesnick, who is probably one of my smartest students, has done work on FFTs, on filter design, and on wavelets. Selesnick's record in terms of publications, when he finished his Ph.D., was better than most people going up for associate professor. And there are a bunch of other good ones, but those are some of the kind of stellar folks.

### Collaborations with Tom Parks; FORTRAN programs

**Nebeker:**

So, not only were you working with graduate students, but I know you've collaborated with Tom Parks a good deal.

**Burrus:**

Yes. And we collaborated with each other's students. One of the nice things about Rice is that it's a very collaborative community, and it's something I particularly appreciate about Rice. I published papers with his students, and his students published papers with my students, and that's very unusual. I mean, normally that just isn't done; competition says you don't do that. But we were comfortable working with each other's students, and the students were comfortable working on each other's problems, with folks not being competitive and not trying to steal ideas. So it caused us to be more productive than we would have been otherwise. You would end up seeing a bunch of multiple-authored papers with students that weren't necessarily my advisees, and my students were publishing with people that weren't their advisor.

**Nebeker:**

Could you comment on these two books that you and Tom did in the mid-'80s, the DFT, FFT and convolution algorithms?

**Burrus:**

Well, those represent the two areas that I have been talking about. Our views about what the FFT is and how it should be described are pretty well laid out in that book, and a more up-to-date version is in a chapter I later wrote for a book that Lim and Oppenheim edited, a more modern version of what appeared in that earlier Wiley book. I like that book, even though it's not all that well written, because it gives a unified theory of FFTs that I don't think any other book really gives. And that was my main interest. And it also had FORTRAN programs.

**Nebeker:**

Yes. Both of these books were tied to the programming.

**Burrus:**

Well, part of my own reason for being an academic is that I like to make complicated ideas accessible to practicing engineers and to students who are not as abstract as some faculty members are. To take a complex idea and make it accessible, that's the challenge, I think, of engineering faculty and of engineering researchers. If I write papers that are only understandable to other academics, then I haven't really done engineering research. So what I want to do is write about a complex concept like the FFT. I don't want people to think that it's just the simple-minded Cooley-Tukey radix-2 FFT. It's really a much richer theory than that, and it is accessible to normal people. It's not that exotic.

**Nebeker:**

Yes. I see.

**Burrus:**

The programs are written in a very educational manner, even in the way they are indented and structured. They don't use any bizarre characteristics of FORTRAN. I wrote them in a very simple-minded FORTRAN that looks like Pascal, that looks like Basic, that looks like anything. There are some special commands in FORTRAN that are strange and don't fit the modern paradigm of programming, and I tried not to use any of those. If I were doing it today I'd write in C. But at that time FORTRAN was the best language to use. And I realized that you ought to write it in a form that was transportable and not weird.

**Nebeker:**

Yes, I see. It would then further the understanding of a person using it.

**Burrus:**

Yes. And not use trickery in the program. So the programs are perhaps not quite as efficient as they could be, but they are understandable. And I use the same notation in the program as in the theory that's developed in the book. So you can hop to the back of the book, look at the program, and say, "Ah, so that's how you implement that." And then you hop back and read the theory, hop back to the program, and say, "Oh, okay, so you could do it this way." Nowadays I use MatLab or C. In those days it was FORTRAN.

**Nebeker:**

I see.

**Burrus:**

The filter design book was a similar approach. We wanted to teach that the FIR filter, in light of modern applications, is more fundamental than the IIR filter. Prior to that, the IIR filter was sort of the paradigm, not the FIR. We also wanted to teach a more modern concept of optimality, so the book tends to de-emphasize windowing and to emphasize optimality. And again, with FORTRAN programs, it is written for readability. The MatLab filter design toolbox programs are mainly just transliterations from FORTRAN to MatLab, and they came right out of that book.

### MatLab; wavelets

**Nebeker:**

Is that right? Can you tell me a little about MatLab though.

**Burrus:**

Well, the company is MathWorks, in Massachusetts. There's a long history of MatLab. It was a public-domain piece of software that MathWorks turned into a very nice commercial package. One of the things they wanted was to have some signal processing tools in it. So, with our agreement of course, they took a lot of our programs and just translated them from FORTRAN into MatLab, and they suddenly had a nice package of filter design programs, which was good for them. It made us feel good, because our ideas were being used.

**Nebeker:**

I see. And I know you have always been interested in educational questions and made use of MatLab.

**Burrus:**

Yes. In 1988, Ingrid Daubechies wrote a paper on wavelets. She is an absolutely remarkable woman, and that paper had the same impact as the Cooley-Tukey paper and the Parks-McClellan paper. Daubechies' paper just galvanized people's interest. There had been some other wavelet papers written, but they didn't really speak to the important points. Her paper pulled it all together and the whole scientific and technical community said, "Wow. This is something special." And so in 1988, very much like the FFT, I heard about this paper. A colleague of mine at Rice, Ronny Wells, who is a pure mathematician, said, "This is important. You ought to study it." And I said, "No, I've got too much to do." And he said, "No, no, you really ought to do it." And so he pushed and pulled, until finally he and I worked on these wavelets. Then I realized he was right, that there was something new. It wasn't just a reformulation of old ideas; it was something new. And so for the last ten years I've been sort of up to my ears in wavelets, and they are more fun than all the rest of it put together. I've sort of had three careers within DSP, the FFT career, the filter design career and now the wavelet career. And in some ways the wavelet career is the most fun of all. The wavelets themselves are a new way of looking at things. It's not a new implementation the way the FFT was. It wasn't a better design of something we already knew how to do like the filter design. It's truly a new tool. To break signals down in terms of wavelets rather than in terms of polynomials or sine waves is different. It is qualitatively different. The third book that I wrote on wavelets is once again trying to make this rather complicated mathematical structure available to practicing engineers and to students. The book has done extremely well. But wavelets right now are just hot as a pistol.

**Nebeker:**

Yes. I've heard a lot about them. Have they reached the point of making a difference in some applications?

**Burrus:**

Yes. The best example of that is that several years ago the FBI decided they needed to compress their fingerprint files. They had these horrendous numbers of fingerprints that they just couldn't store. They wanted to digitize them and compress them, but they didn't want to lose any detail, so what's the best way to do it? They weren't stuck with standards. They could do anything they wanted, because they were not on the open market; they were a closed operation. So they did an extensive evaluation of various compression algorithms, and what they came up with was a wavelet compression method. I just said, okay, I like that. They looked at everything. They looked at the various discrete cosine methods, they looked at fractal methods, they looked at wavelet methods, they looked at everything that was available, and the wavelets won. So a relatively new technology, wavelets, won out over these much more mature technologies. Well, if that's the case, wait until wavelets get mature.

You know, they are going to get even better, and these others are not going to get that much better, because they are already at their plateau. The wavelets are a new animal. And they've got two special characteristics. First of all, wavelets, unlike Fourier and unlike polynomial transforms, were designed by saying what you wanted the transform to do and then working backwards to see what the basis functions look like. In all these other transforms you started with the basis functions and went forward to get the tool, and then you analyzed what the transform gives you. In the case of wavelets you say, "What kind of transform do I want?" and then, "What kinds of functions satisfy that?" So it's sort of working backwards. It makes all the sense in the world once you think about it: rather than see what you get, say what you want. And that's the way the wavelets came into being. So in 1988 Ingrid Daubechies plotted some functions that no human had ever seen before. And they were super fundamental.

The second thing about the wavelets that's extraordinary is that the defining equation has no calculus in it. Almost all of our standard functions are solutions of differential equations. Not the wavelet. It's a solution of an equation that has only multiplications and additions, exactly what a digital computer does. So the wavelet is much more suited for digital computation than the Fourier transform is. The Fourier transform doesn't really fit a digital computer; you have to kind of jam it in. It's not natural. The wavelet transform is natural to the digital computer. So those two things really make it special. And we've had just tons of fun applying it to medical signals, seismic signals, doing some work for DARPA on automatic target recognition, a lot of application-driven problems. But I'm more interested, as I've always been, in kind of the theoretical structure of these things and how to explain it to other people.
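
That multiply-and-add defining equation is the refinement relation phi(t) = sqrt(2) * sum_k h[k] * phi(2t - k), and it can be solved numerically with nothing but convolutions via the cascade algorithm. A sketch in Python (NumPy assumed), using the Daubechies-4 scaling coefficients:

```python
import numpy as np

# Daubechies-4 scaling coefficients; they sum to sqrt(2)
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

# Cascade algorithm: iterate the two-scale relation by convolving
# with successively upsampled copies of h; no calculus involved
J = 7
p = h.copy()
for j in range(1, J):
    up = np.zeros((len(h) - 1) * 2**j + 1)
    up[::2**j] = h                       # h upsampled by 2^j
    p = np.convolve(p, up)

phi = 2**(J / 2) * p                     # samples of phi(t) at spacing 2**-J
dt = 2.0**-J
# phi is supported on [0, 3] and integrates to 1
```

The result is the famously jagged Daubechies scaling function: supported on [0, 3], integrating to 1, and never once differentiated to get there.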

**Nebeker:**

That is fascinating.

**Burrus:**

So, most of what I've been doing for the last several years has been based on wavelets.

**Nebeker:**

Okay. In looking at your career then, well, an early career, and then DSP where for many years you were looking both at algorithms and digital filter design, and then the last ten years or so looking mainly at wavelets?

**Burrus:**

And in the same time period I got interested in the use of technology for teaching. And that's what you were asking about a few minutes ago.

**Nebeker:**

Yes, why don't we turn to that now.

**Burrus:**

MatLab is a very nice tool, because it's easy to learn and it's easy to program. It's highly visual, and you can plot and display data in ways that kind of don't get in your way. You can see the concept and not have to get tangled up in the language or the system you're using. So I chose MatLab as kind of my standard tool. I haven't written a FORTRAN program or a C program in years. If it's not efficient enough in implementation, I get a student to translate it to C; I don't do that myself. But what I'm trying to do is figure out ways of having a more experimentally based educational system, having a student run an experiment and learn something from the results of that experiment, rather than me telling him the answer and then him running an experiment to verify the answer that I told him. Part of my reasoning was, how do I learn right now? The work I do is based on ideas that didn't exist even when I was a Ph.D student. Digital signal processing came into being after I got my Ph.D., and certainly wavelets, much, much later. How do I learn wavelets? Well, I sit down and run experiments. Then I do math based on the results of the experiments. Then I go back and run more experiments, and then I go back and do more math. That's the way I really learn: self-education. I wanted my students to learn how to do that, so they don't have to come to the professor for the answer. They don't just verify; they actually discover.

### Evolution of DSP education

**Nebeker:**

That also changes their approach to the mathematics.

**Burrus:**

It means the math becomes something they want to do to make sense out of things, rather than something they do because the professor tells them to do it. So it's just a much healthier approach, I think. It wasn't possible earlier, because we didn't have things like MatLab and desktop computers. The cheap desktop computer and MatLab changed the game. We now have a mathematical laboratory. We didn't use to have that.

**Nebeker:**

Is this also possible on an undergraduate level?

**Burrus:**

Oh yes, in high school. I mean, it's just changed the world. I told my class that what I'd like for them to do was to learn how MathWorks implemented the FFT. I said, "Now, MathWorks is a company. They're in business. They've implemented an FFT, and they're not going to tell you how they did it, because that's proprietary. Can you run some experiments and figure out how they did it? Because you're clever. You know some theory. Now can you do that?" And I said, "Here is what I want you to do. I want you to plot the number of operations necessary to calculate an FFT versus the length of the FFT. Just set up a little loop and increment the length, getting longer and longer, calculate the flops, put them in a table, then plot it with a dot plot, not a connected plot, with just a dot for each point. Look at the structure of that plot, and from that deduce how they implemented the algorithm." The students were absolutely fascinated by that. For one thing, it's a puzzle. It was reverse engineering. They learned more about the FFT than I could ever have forced them to by traditional carrot-and-stick methods. And then I started doing the same thing in teaching filter design. I said, "Take the Remez command in MatLab and design a bunch of filters with different band edges. Now can you tell me what the optimal property is for these filters?" So rather than telling them the property and then asking them to do it, I asked them to do it and then tell me what the properties of these filters were. That was another puzzle. I got together with Al Oppenheim, Jim McClellan, Hans Schuessler, Ron Schafer and Tom Parks, and we wrote an exercise book. That turned out not to be easy at all. Working with six hard-headed, opinionated full professors, who had all been teaching a long time and considered themselves good teachers, was not an easy process. However, we got the thing written, and it's been a good success.
I would like to spend more time, if I had it, on pursuing that philosophy. Most faculty members know remarkably little about how students learn. It's too bad that we know a great deal about the topics we teach and so little about the learning process.
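
The flop-counting exercise above is easy to mirror with a toy transform that counts its own work. This is not the MathWorks implementation (that was the point of the puzzle); it is an instrumented textbook radix-2 FFT in Python, whose counts follow the (3/2) n log2 n pattern for power-of-two lengths:

```python
import cmath

def fft(x, ops):
    """Radix-2 decimation-in-time FFT that counts its own operations."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2], ops)
    odd = fft(x[1::2], ops)
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
        ops[0] += 3          # one complex multiply and two complex adds
    return out

# Tabulate operation counts versus length, as in the class exercise
counts = {}
for m in range(1, 9):
    n = 2**m
    ops = [0]
    fft([1.0] * n, ops)
    counts[n] = ops[0]
# counts[n] follows (3/2) * n * log2(n): 3, 12, 36, 96, 240, 576, 1344, 3072
```

Plotted as dots against length, these counts trace the n log n curve for powers of two; a real library's counts show different branches for different length factorizations, which is the kind of structure the exercise asks students to read off.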

**Nebeker:**

Yes. How does this look in practice? Is it still the case that you have traditional lectures, but instead of a list of problems to be solved you give them these larger puzzles that they work at?

**Burrus:**

Yes. That's exactly it. What I would like eventually is a rather different structure for the entire course, so that rather than giving the lectures and then having these exercises, you would have the exercise and then give the lectures, kind of reversing the order, so that the lecture would be almost a Socratic discussion of the results of the experiment.

**Nebeker:**

Of course you always have to get them to the point where they are able to solve the puzzle.

**Burrus:**

Yes. And that's a challenge. But you see, rather than the challenge being, can I present truth to them in a logically sequential fashion?, the challenge is, can I pose this problem in a way that's accessible to them? Is this a person who likes to go from the general to the particular, or vice versa? I wish I could set up exercises in which both of those styles would be challenged equally. Your traditional lecturer doesn't do that. Your traditional lecturer assumes the student is like the professor, and the professor always assumes that their way of learning is the normal way, and that's the way they teach.

**Nebeker:**

Yes. Can you comment on what's happened with DSP education or DSP curriculum over the last few decades that it's existed?

**Burrus:**

Well, until recently, not a lot. Ken Steiglitz tried something back, I don't know, maybe in the '70s: teaching a very elementary DSP course as sort of the first course. And it failed. But most of the courses are kind of traditional. If you look at the standard DSP books, they all look alike. Whether it's Oppenheim's book or Proakis's book or Mitra's book, they all have the same material with mild variations. What Jim McClellan is doing at Georgia Tech is teaching DSP as a first course in engineering. Now that's different. Don Johnson is teaching signal processing. It's not as digitally oriented, but he is teaching signals first. When you open up a box you no longer see Rs, Ls, and Cs; you see chips. So it's kind of inappropriate to teach your first course on Rs, Ls, and Cs when no student ever sees one. Students do encounter signals, so perhaps that should be the first course. Students in high school mess around with computers, so maybe computers should be involved in the very first course, not a later course. That's taking place now. But for many years there wasn't really much innovation.

### Circuits and Systems Society, Signal Processing Society

**Nebeker:**

I wonder if I could throw in another question. I know we have gone on here a while. About the relationship between the Circuits and Systems Society and the Signal Processing Society.

**Burrus:**

Early on, I was involved with the Circuits and Systems Society because of my background in circuit theory, but I found the Signal Processing Society to be a bit more open. They were focused, but they were open within that focus in a way that I found more productive and more exciting, whereas Circuits and Systems is fragmented; they kind of go in every direction. Circuits and Systems didn't end up being as satisfying to me, so I shifted more toward being involved with the Signal Processing Society, although I'm still a member of Circuits and Systems. I still publish there, and, you know, some of my best friends are members of Circuits and Systems.

**Nebeker:**

Is there a problem with the IEEE technical structure, in that a great many people feel they have to belong to both societies?

**Burrus:**

No. I think it's healthy. I think it's competition. And for the same reason I don't like seeing Microsoft run the whole software world. I don't think the Signal Processing Society should be the only people who do signal processing. So I think it's healthy that several people do it from slightly different points of view, and this keeps the system somewhat competitive, but not in an unhealthy way.

**Nebeker:**

Okay. So there's not something you've often thought, "Oh, IEEE really should be doing this or should be organized this way"?

**Burrus:**

No. I mean, some of their implementations, I think, are inefficient, in that we need to move more toward some kind of electronic publishing. If I had an answer I would be pushing it. I just know we need to be doing it, and it will eventually happen, but we're kind of stumbling along. We don't use our own technology.

**Nebeker:**

Is there anything I haven't asked you about that you'd like to comment on?

### Dean of Engineering appointment, Rice

**Burrus:**

Well, something has just come up, really in the last month, that's going to make a big difference in my life, and that is that I will soon become Dean of Engineering at Rice. It will be interesting to see how I take some of my signal processing experience and apply it in a larger format.

**Nebeker:**

Thank you very much.