Oral-History:Wanda Gass

From ETHW

About Wanda Gass

Wanda Gass

Wanda Gass, a 2006 IEEE Fellow "for contributions to digital signal processors and circuits," made crucial contributions to digital signal processing chip development through her work at Texas Instruments. Before joining TI, Gass studied electrical engineering at Rice University and earned a master's degree in biomedical engineering at Duke University (1980).

At Texas Instruments, Gass participated in the group development of the Signal Processing Chip. Gass' work in circuit design, as well as in logic design and verification, was represented in an influential 1982 paper at the International Solid State Circuits Conference (ISSCC) and in the 32010 chip produced by Texas Instruments. In 1982, Gass moved from product design to corporate engineering, where she conducted research and development on speech coding. Gass then worked on speech processing synthesis, ASICs, the Odyssey board multiprocessor system, silicon compilers, and DSP synthesis.

The interview describes Gass' diverse projects at TI, including 32010 chip development, marketing, and applications. Gass describes her transition to research management roles and details her work as a project leader on DSP architecture for third generation cellular phones. Gass assesses the reception of women in engineering and management, with attention to change over time, and she describes the evolution of DSP architecture and applications. She concludes by considering the role of the Signal Processing Society.

About the Interview

WANDA GASS: An Interview Conducted by Frederik Nebeker, IEEE History Center, 13 March 1998

Interview #341 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of the IEEE History Center.

Requests for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA or ieee-history@ieee.org. They should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:

Wanda Gass, an oral history conducted in 1998 by Frederik Nebeker, IEEE History Center, Piscataway, NJ, USA.

Interview

INTERVIEW: Wanda Gass

INTERVIEWER: Rik Nebeker

DATE: 13 March 1998

PLACE: Seattle, Washington

Family and education

Nebeker:

Could we start by your telling me a little about your family and your education?

Gass:

My dad was a petroleum engineer. Since my dad didn't have any sons, he actually encouraged me to go into engineering, which was nice. Actually I was interested in the medical field. I went to Rice University as an undergraduate and specialized in double E and biomedical engineering. I took all the pre-medicine classes that were required.

Nebeker:

Were those two separate programs, the biomedical engineering and the EE?

Gass:

Well, I just took enough biomedical. I didn't have a major in biomedical. But, in electrical engineering there were like four areas of specialization. You had to take all the fundamental engineering classes, and then on top of those you could specialize in certain areas. In addition to the normal electrical engineering curriculum I took all the pre-med classes that would allow me to go to medical school. However, I didn't have a degree outside of electrical engineering.

Nebeker:

So you were thinking about going to medical school?

Gass:

Yes.

Nebeker:

Did you like engineering?

Gass:

Yes. I think it was because I felt like the education process for a medical career was so long that I wasn't quite ready to commit my life to being in school for so long.

Nebeker:

But you did go on for a master's degree.

Gass:

Yes, I earned a master's degree in engineering at Duke University. They actually had a degree in biomedical engineering that combined electronics and mechanics and several engineering disciplines to focus more on biomedical problems.

Nebeker:

So both as an undergraduate and at Duke you weren't interested in signal processing?

Gass:

Well, the biomedical aspect of it pulled me very much into the signal processing side of things at Duke, because a lot of the things you do in medicine are related to analyzing signals. In my undergraduate study, there was a heavy emphasis on cardiology, where you study EKGs and things like that. I also did a senior year project at the medical center that was across the street from Rice. It was kind of a biotelemetry project, where signals were transmitted from an animal that was being tested and sent to a receiving station where the data was collected. This was instrumentation oriented, so the strong electrical engineering was aimed at the instrumentation side of it. But pretty much everything in that area had to do with signals and processing the signals, so I got a much better signal processing background when I was at Duke, even though I had the basic aspects of it when I attended Rice.

Nebeker:

Was there an undergraduate course in signal processing at Rice when you went?

Gass:

Yes. There was basic signal processing.

Nebeker:

I believe Sid Burrus was there and Tom Parks.

Gass:

Yes, they were there.

Nebeker:

Very, very good program.

Gass:

Yes. And I guess my senior year we probably studied the discrete time thing and how continuous time signal processing related to sample signal processing and how those overlap. And then I took another class in DSP when I was at Duke. So, I actually had a pretty strong DSP background.

Nebeker:

You saw yourself as a biomedical engineer?

Gass:

Yes.

Texas Instruments

Nebeker:

Had you decided that you wanted to just earn the master's and then go to work?

Gass:

Yes. I never did consider really doing a Ph.D. I wanted to do a master's and go into industry from there. I looked at several opportunities. I wanted to come back to Texas. So I looked at jobs at the medical center in Houston, and there were some opportunities in Galveston. I interviewed with several different groups before taking a job at Texas Instruments, TI. One of the groups I interviewed with was responsible for designing microprocessors. And I thought, "Wow. Not too many people can say they worked on designing a microprocessor." So it just seemed like a better career move and then an exciting job at the same time, so I opted for a double E job rather than a biomedical engineering job coming out of school.

Nebeker:

I see. But you had looked at both types.

Gass:

Oh, for interview, before I selected which job I wanted to go with?

Nebeker:

Yes, and you selected a very good company. It must have been another factor, I mean a leading company like that.

Gass:

Yes.

DSP development; chip circuit design

Nebeker:

Yes, and you went right to work on one of the most famous projects?

Gass:

It was kind of an interesting sequence of events. They were just beginning to hire people to do the circuit design for the chip. Most of the architecture had been pretty well defined at the time, although it definitely evolved after I got there.

Nebeker:

I seem to recall that the design started in '78, something like that.

Gass:

Probably. The actual circuit design wasn't started until maybe the early '80s.

Nebeker:

And you came when?

Gass:

In the middle of '80. So our tape out of the processor probably occurred in the summer of '81, because we were trying to get a submission into a conference, and the conference we wanted to get the paper into required that you have work that you [inaudible word]. That was the deadline that was pushing us for our tape out.

Nebeker:

I don't know that term "tape out."

Gass:

Tape out? That's where you take the information of all the mask levels that are needed for the fabrication process, and tape out actually used to be a magnetic tape. You take the tape and you hand it to the mask department. The mask department creates what are called the reticles, and the reticles are then used in the fabrication process. So it's the end of the design and the beginning of the fabrication process.

Nebeker:

Okay. The actual fabrication.

Gass:

You can't change anything after you tape out.

Nebeker:

Okay.

Gass:

It goes in the manufacturing side and it pops out the other end and you can't change it.

Nebeker:

So you wanted to have something working for this conference? Do you remember what conference it was?

Gass:

Yes. It was ISSCC, the International Solid State Circuits Conference. And the paper was presented in 1982. It actually won the best paper award for that conference. So it was pretty exciting.

Nebeker:

Did you make the presentation?

Gass:

No. Surrendar Magar did the presentation. He was very influential in the architecture definition of the instruction set, and my job was not only circuit design but also logic design and verification. It involved a lot of testing and validation, functional validation [inaudible word] specified with [inaudible phrase].

Nebeker:

Can you tell me the kind of division of labor that produced that chip? I mean, what were the teams and where were you exactly?

Gass:


Audio File
MP3 Audio
(341 - gass - clip 1.mp3)


Compared to some of the teams that do microprocessors today, in fact even to other microprocessors that were being done in the group, the team was quite small. There were, I think, like three people doing layout. Layout is where you actually draw the geometries of all the transistors. The way it was done way back then was you drew it on paper, and then that drawing got transferred into the computer representation, and all those things were tiled together, and eventually you had a layout of the whole chip. The verification has to happen at the logic level, the functional level, and at the circuit level to make sure that what you implemented was what was specified at the beginning. The circuit design has to do with sizing the transistors and making sure they meet all the speed paths and [specifying] what the power dissipation looks like. And so I was involved in circuit design, logic verification and functional verification. During the development of the processor, it was originally specified to have the program memory on chip in what's called Read Only Memory or ROM. And halfway through the development of the chip, Tony Leigh was concerned that if we couldn't read the program memory, we would have no way of testing the data paths. And so as kind of a [inaudible phrase] actually had, we came up with the idea that if you fed instructions from [inaudible word] chip into the data path, at least you could test that part of the chip if the ROM code wasn't working for some reason. Then people began to think about that, and they said, "Maybe instead of this being a testing feature, this could actually be a real feature that we can exploit." So we had the ability to switch: we added a pin to be able to switch from what we called microcontroller mode to microprocessor mode, and in the microcontroller mode the program memory came out of on-chip ROM, and in the microprocessor mode it came from off-chip.
And there were many changes to the architecture from when it was first specified to when it actually was fabricated or manufactured. But I think that was definitely a key change that allowed it to be a commercial success.

Nebeker:

I see. And that started as an idea to test.

Gass:

Yes. And it turned out that a key element of that was used much more as a processor than as kind of an embedded application. The problem with having the program execute out of ROM is it was programmed one time and it only did that one function, and so it was more of a hard wired kind of effect, very function specific type of thing. And what made DSP successful was the fact that it was more like a processor than it was like a dedicated function. And that was definitely a turning point for DSPs historically too.

Nebeker:

Yes. And when did that occur in this process of the developing chip?

Gass:

Of the development of the chip? It probably happened about six months after I got there, so about halfway into the design phase.

Nebeker:

You said there were three people doing layout. What are the other groups?

Gass:

Surrendar was the main person who defined the architecture. Ed Cadell helped define the instruction set. I didn't know Harvey Cragon very well, but apparently it was Harvey Cragon who really pushed the idea of TI actually doing a DSP. And initially the concept was for us to copy a processor or a chip that Intel had done. Strangely enough, Intel had done a DSP that obviously wasn't a commercial success. Of course they were very successful in another area.

Nebeker:

Yes.

Gass:

No. As fate would have it, we made several major modifications to what they had started out with, and a lot of those modifications ended up being something that made our DSP a success. So even though the first idea of doing a DSP was inspired by some things Intel had done, it just kind of set the stage for us to build on that. In fact Intel's signal processor didn't even have a hardware multiplier on it. But it did have an A to D converter on it, interestingly enough.

Nebeker:

Okay. Who else is involved in this thing?

Gass:

Surrendar, Tony Leigh was the program manager and he did a lot of the RAM and the ROM design. Richard Simpson was on the team, and he was mainly a RAM designer. There was a little bit of support from our design automation department that helps with the tool side of things. And there was another person I think working on verification besides me, Jim Tiller. That was a pretty small team.

Nebeker:

It's amazing to me, no more than eight or so?

Gass:

Yes. But there was some product engineering support that came in later when we were getting closer to tape out. The size of the team was very, very small compared to processors that are done today. And in fact, in parallel with this, there was a 16-bit microprocessor development going on that was kind of a competitor to the 8086 and National's chip, I can't remember, the 16,000. We called ours the 99,000. And that team was probably two to three times the size of the team working on this chip, and that microprocessor was not a commercial success although the DSP was.

TI management and DSP applications

Nebeker:

What was the attitude, do you think, or what did you sense, of the top management of TI?

Gass:

I think they were kind of interested in what might come out of it. Actually still out in my file is a letter of congratulations from I think the chairman of the board of TI. But it didn't have a lot of visibility. There wasn't a lot of pressure from a commercial standpoint. There wasn't really a market out there yet.

Nebeker:

Was there a particular application you were aiming at?

Gass:

Well, actually a lot of it was motivated by the speech group at Texas Instruments. TI has a very long history in speech, and speaker verification and speech recognition had a very heavy influence on the architecture of this chip. It was to some extent an attempt to do something much more programmable than what a lot of people were doing in speech recognition work. That was probably the strongest motivator.

Nebeker:

But it was thought that this would be a more general purpose DSP.

Gass:

Yes.

Nebeker:

What was it actually called?

Signal Processing Chip part number

Gass:

Its code name was SPC, Signal Processing Computer or Signal Processing Chip.

Nebeker:

I guess there wasn't the concept of the DSP chip.

Gass:

No, not really. Then, all of a sudden, we were getting ready to ship parts and somebody says, "We need a part number for this." We respond, "Oh yeah, a part number." And so they suggested several numbers. Well the people that were responsible for the design were very interested in getting the number 32 in the numbering scheme.

Nebeker:

Because it was a 32-bit.

Gass:

Because it was a 32-bit data path. Even though the memory was only 16 bits wide, the data path was 32 bits wide. But there was already the 32,000 from National out on the market, so we had to be careful, because we didn't want to get confused with what National was doing. So somebody came up with the idea of breaking the number in a different place. We thought about the 32 O [inaudible word], but that wasn't going to work. So they made it into 32010, and that's how the 320 series got started.

Nebeker:

Okay.

Gass:

And then that kind of caught on, and all the parts that came out of that design group started having some kind of a 3x0 number. We had a microcontroller line, and we had a LAN, a local area network kind of processor. We had a graphics processor line, and all of them had a three and then another number.

Nebeker:

So I understand the 32 and you wanted to separate it, and the 10 part was because you saw this as the first in some series.

Gass:

Yes.

Nebeker:

I see. And then it became a more general style of naming at TI.

Gass:

At TI, yes.

Reception of the 32010 chip; marketing and applications

Nebeker:

That's always interesting to hear how these things get their names. And so tell me about the fabrication and initial reception of this chip.

Gass:

Yes. I think a lot of niche people liked the idea. I remember going back to Rice and showing this concept to some of the people. Some people in the audience were saying, "Wow, you can multiply in 200 nanoseconds." You know, this was just like phenomenal to them, because a multiply back then just took so long. And they thought 200 nanoseconds was really fast. They couldn't imagine that you could do a multiply in 200 nanoseconds. It's pretty funny. Now that seems so slow.

Nebeker:

What was the attitude of the design team? Did you all feel that you really had a fantastic product here?

Gass:

No. We had no idea that it would be commercially important at all. There were normal schedule pressures to get things done so that you just didn't take forever, but there wasn't a customer waiting at the end of the line. It was much more of a speculative effort than it was something where a specific customer was in mind.

Nebeker:

But there was a target in the sense of these speech processing applications.

Gass:

Programmable speech audio kinds of things, yes.

Nebeker:

And it was thought that it would be a large enough market to justify manufacturing such a chip. Was there any mass produced device that was in mind at that time?

Gass:

Well, it was a little bit of an extension of two factors. We had a TMS-1000 family line that was actually a 4-bit microcontroller, and Ed Cadell was very influential in helping define that TMS-1000 product line, which was aimed at toys and these little games that had these little controller chips in them. So people would come and order millions of these things with a specific program in them. And so that's why this idea of having the program in ROM was not so strange. And you combine that with the fact that TI had done the Speak & Spell earlier, which was probably a little bit more of a dedicated kind of an architecture, but the idea was it went into a toy. And our consumer products division was pretty substantial at the time. We had calculators, we had watches, we had all kinds of consumer electronics, and toys were one of our main things. So that mentality probably could justify some kind of a demand from a toy market. They probably had a lot of influence in that.

Nebeker:

Okay. Did the thing work as intended when it was first made?

Gass:

Yes. We were correct the first time. We had pretty good speed, and you know, we did a few revs of it.

Speech recognition, Julie Doll

Gass:

Then what was interesting was one of the most famous applications of the first generation processor was the Julie Doll. And the Julie Doll was produced by Worlds of Wonder. And Worlds of Wonder went bankrupt right before Julie hit the market.

Nebeker:

Oh?

Gass:

Yes. But there was a concerted effort. We did variations of the ten, with different ROM sizes and on-chip RAM sizes, not really affecting the instruction set or the data path so much as changing the memory and what kind of IO peripherals, we call them, like UARTs and timers and stuff that were on the chip, to try to integrate I guess some stuff. So there were different flavors of the ten's derivatives. And one of those was aimed specifically at the Julie Doll. But I always sat very close to the speech group, and there were several people in the speech group working on the Julie Doll project. Julie had speech recognition as part of her task, and that was all done on the 32010.

Nebeker:

I've never seen one. So what happened with Worlds of Wonder? Did they get this? They must have gotten out.

Gass:

I think the semiconductor group of TI didn't suffer too bad, but we had a board manufacturing group. The board manufacturing part of TI had contracted to buy the chips from the semiconductor group, put them on the boards and build the components that went inside the Julie Doll. They were the ones that got stuck with this inventory that never really was sold off. So it wasn't a commercial success, but it was an engineering success.

Nebeker:

And the Julie Doll really functioned as intended?

Gass:

Oh, it was great. It did a pretty good job of speech recognition. It was clever how they engineered that human factor. When you turn the doll on, if she doesn't have your templates in her memory, she will prompt you to start talking to her. She'll get you to say her name first. "What's your name? My name is Julie. Can you say Julie?" You know, she goes through this dialogue and she gets you to say these secret words, and then she remembers the secret words and saves them in the template. Then you have to know how to talk to her after that. But it's so funny, because you'll say "Julie," and she'll go, "Yes." It's real freaky. If she says yes, she's just recognized her name, so now she's looking for one of the four key words that tell her to go into one of the other modes she can go into. So if she says "Yes" after you say her name, and you say, "Sing me a melody," and she hears the word "melody," she will actually sing you a song. She has two different songs she can sing. If you say "let's pretend," she will talk about make-believe things, and if you say you're hungry, she'll say, "Do you like chocolate ice cream or vanilla?" You can even tell her to be quiet. And she'll say, "Oh. Are we making too much noise?" She's quite cute. She has light sensitivity and she has a motion detector, so if you turn off the light, she'll go, "It's really dark in here." If she goes out in the sun, she says, "It's really bright. We need sunglasses." If you pick her up and start moving her, she says, "Are we going someplace?" She also has heat and cold sensitivity; she'll sneeze if she gets cold.

Nebeker:

And when was that first marketed?

Gass:

Oh, probably '84.

Nebeker:

Were you involved with that special version of the 32010?

Gass:

Not directly. I was working on another project. I worked a lot with the speech group, but I didn't actually program it for that specific application. I did program it for some other applications, but not that one.

32010 communications, medical, and military applications

Nebeker:

So what were the first applications?

Gass:

Interestingly enough, General Instruments used it in some kind of a communications device. And we actually collaborated with General Instruments. Our first part was done in what was called NMOS technology, and we collaborated with General Instruments and came out with a CMOS version. I think they became a second source for us for a period of time when, you know, the 32010 wasn't very big. I can't remember the application they were using it for. And Tektronix used it for their test and measurement equipment and found a variety of uses that were way outside the realm of speech.

Nebeker:

Were there people at TI promoting this in these other areas, or was it more a matter of people hearing of this and deciding maybe they could use?

Gass:

Yes. At the time it was so new and different that people were using it for hearing aids. They were using it for medical applications as well as a lot of military applications when it first came out. It's amazing how many kinds of markets it ended up being in. We had a special department set up just to do the military applications.

32010 in hard disk drives

Nebeker:

Well you know you have these terms of invention push or market pull, and I'm just wondering what's going on there. It sounds more like certain applications are reaching in and grabbing this thing rather than that TI is pushing it out.

Gass:

Oh yes. And it wasn't really considered a moneymaker for probably eight years, and even then the revenue wasn't really big enough for people to notice it. I think the turning point was when we got our first market where we were pretty successful, in hard disk drives. We became kind of the platform of choice for the servo control for hard disk drives.

Nebeker:

Is that controlling the reading head?

Gass:

Yes. The position of the reading head, not actually the information coming off the channel. That was a market that was just changing so rapidly that they really wanted the programmability.

Nebeker:

Why was this preferred over some controllers that were around?

Gass:

I think it was faster, and there were some multiplies involved in the computation, so it could do multiplies a lot faster. And power was obviously somewhat of a factor. We didn't have a lot of the overhead, so we were cheaper.

Nebeker:

When did that happen? When did these start being used?

Gass:

I'll have to send you a copy. I did a little bit of history on DSPs that went into the January issue of SP magazine, something our committee, the TCs, did. There's one section that specifically talks about the history of DSP processors and its kinds of applications. Gene Frantz would be a lot better at telling you what markets there were. Also Mike Hanes would be a good interview on the market aspects of DSP, because he is much more involved in the business side. In fact, I left the DSP group after being in it for two years and joined the research organization. So I was in what we call the product group, the business side, for the first two years at TI, which was when the chip was being designed and first being fabricated. All the extensions of the chip design, the derivatives of the ten, and then the beginning of the twenty, the 32020, and then the twenty-five were all done after I left Houston.

Corporate engineering, speech coding

Nebeker:

Okay. So you moved to Dallas, is that right, in '82, August of '82?

Gass:

Right.

Nebeker:

And I'm sorry, what was it you took up there in Dallas?

Gass:

It was much more of a personal motivation for moving to Dallas than it was a career move. I wanted to be closer to my parents, they lived in Dallas. But it ended up being good for my career in the long run because I got more involved on the application side. I definitely enjoy being in R&D.

Nebeker:

What exactly were you assigned to, what group there?

Gass:

Yes. There was an organization called TEC.

Nebeker:

Corporate engineering.

Gass:

Yes, corporate engineering. And they had several facets of things going on. One of them was development of a speech board. So one of my first tasks when I moved to Dallas was to do actual coding, writing the assembly language for a speech coding algorithm. I had also participated in the specification of the emulator function that went with the 32010; that was when I was still in Houston. And I helped out with some user guide information and developed a speech coding application. Then I did some multiprocessor hardware design for a speech recognition board.

Signal processing synthesis, ASIC

Gass:

After the speech stuff I got involved in what was called SP synthesis. This involved the idea of taking fundamental signal processing algorithms and trying to [inaudible phrase] them in hardware. You would do this not in a processor, but in a very dedicated circuit, which would give you a speed and power ratio significantly better than a processor type implementation. This was a dedicated hard wired approach to DSP.

Nebeker:

The ASIC?

Gass:

Yes, ASIC. And there was a lot of political opposition inside TI.

Nebeker:

How did it work out?

Gass:

Well, the processor guys won.

Nebeker:

For a good reason.

Gass:

Yes, definitely. For some applications the computational load was at least 10x over what a processor could do at the time, sometimes 100x, so you needed some dedicated hardware to do a special function. But if it was real close to what a processor could do, only 2x better or maybe even 4x better, there was so much effort and time spent on doing the CPUs that the CPUs could take advantage of the technology road map, the advances on the silicon side, and they would catch up with the hardware before the hardware stuff got out the door.

Nebeker:

And that was clear at the time?

Gass:

Yes, I guess one of the biggest battles was when the new standard in Europe for cellular phones came out, called GSM. There were a lot of people that felt like people might use the programmable DSP initially to prototype these systems, but long term, once the standards were fixed, everybody would go to fixed function ASICs, and the DSPs would never be used again. But it turned out that the standards never quite stabilized, and the processors got faster and more powerful quicker than the hardware designs. The other aspect of it, probably the hardest part to deal with, was that people found that the algorithms were so complicated that you never could get all the bugs out of the hard wired solution. It was a lot more complicated than they expected it to be, and so the fixed function kind of dedicated parts never really made it to market on time. So that was definitely a battle where processing [inaudible phrase].

Nebeker:

Is that typical for that period?

Gass:

Yes. I think definitely outside of TI there were areas where ASICs continued to have market share, like modems. It was a very classic case of ASICs versus DSPs. Rockwell is very well known for all of its modem chips, and those were in some sense a very dedicated function. And just within the last five years or so, that market has been taken away from [inaudible phrase] and converted into DSPs. Even the low end modems can be done with DSPs.

Nebeker:

Is that a general trend over the last ten or fifteen years, that more general purpose DSP processors are taking over?

Gass:

Oh yes. Now, one aspect of modems is that they're getting more complicated as time goes on. The 56K is pretty much the limit of what theory can give you from a telephone line as it is today, so we're going to other things like ISDN and ADSL and those technologies. And those technologies are requiring many more MIPS. And again, when you get to the cable modems and those kinds of things, hard wired dedicated ASIC chips can give them a more cost effective, lower powered solution than a programmable processor. But I think that eventually processors are going to take over that domain too.

Nebeker:

As you were saying with GSM, people thought that the programmable processor would be used in the prototyping stage and then the function specific chips would take over. What you are saying is sort of the opposite of that: in the initial stage, when the speeds required are just beyond what the DSP processors can do, you have the function specific ones. But I mean, when an application is mature, do you then return to a fixed function one?

Gass:

A fixed function again. It all depends on the market.

Nebeker:

Maybe that doesn't happen very often.

Gass:

It all depends on the market. It's almost on a case-by-case basis. There's always the tension of does it work better in an ASIC or does it work better in a programmable processor. Then you have that same tension again of DSP versus a general purpose processor. So there are these three categories of design space, and which one is more appropriate for which market segment is always up for grabs. And with technology advances you change the equation all the time, so the answer a year ago may not be the same answer as what you give today, and it may not be the same answer in two years. Our objective is to try to push out so that we actually take on more of the microprocessor stuff, and we can then extend our domain into the control side and more general purpose stuff. So we're always looking for ways to get a little bit more general purpose with our DSPs. At the same time, we want to make sure we can take our DSPs and tailor them to applications so that we can in some sense combat the hard-wired dedicated chips. So we're kind of trying to push out on two different fronts. One is to be a little bit more generic and take on a broader range of applications than what is traditionally called DSP, and the other front is where we're trying to actually encroach on the ASIC market and take market share there.

Evolution of chip architecture; VLIW architectures

Nebeker:

And what about the members of the family, these chips? Is that a case of new capabilities?

Gass:


Audio File
MP3 Audio
(341 - gass - clip 2.mp3)


At the beginning it was a realization that the first architecture had some improvements that could be made, so it was kind of an evolution of an architecture that got started. So it ends up that the 5x family were all fixed-point, and they were kind of extensions of the first generation; it was just more enhancements on the original architecture. It wasn't driven by an application domain. We were always trying to take advantage of advances in silicon technology, but it was much more a question of what architecture improvements we could make to this specific DSP. Then after that the development of the floating-point line came out, and the definition of that product, which had floating-point capability, started right after the 2x family came out. So Ray Simar claimed the 3x and 4x names for the floating-point devices right after the 2x family came out. And then the people that were working on extensions to the 2x had to skip over those two numbers. So they were left with the 5x as their only choice.

In fact the 5x, because it was a derivative of the 2x, came out before the 3x and the 4x did. It doesn't even make chronological sense; it's just when the definition of the project started. The same thing happened with the 6x. Ray Simar again was the architect of that series, and that architecture definition started after the 4x came out. But in the meantime, the multiprocessor single-chip DSP solution, the C-80, came along. Chronologically it doesn't make any sense. In TI's history there was an attitude of "let's try this DSP architecture." But recently there's been a significant push to keep the architecture line a little bit more consistent. We have three families of processors now, and we'll have derivatives of those family lines going forward, so we probably won't create anything [inaudible phrase].

Nebeker:

The family lines are what? 16-bit?

Gass:

The 2x family line is still supporting hard disk drives today. A derivative of the 5x family line evolved into supporting a wireless communication product line in cellular phones, mainly the GSM market. Then the third family that will be promoted forward is the 6x family. So it's the 2x, the 5x and the 6x family that sort of [inaudible phrase].

Nebeker:

And what's the 6x family?

Gass:

It mainly was aimed at base stations and central office modem banks. It has found applications in ADSL on the client side as well as the server side. And people are always coming up with new ideas. It was kind of a phenomenon when it first came out, but the C6x is a high-performance-oriented type of device, as opposed to ultra low power, which is kind of what the 5x is aimed at. But I think what's most significant about the 6x is that it's the most compiler-friendly DSP architecture that's come out in a long time.

Nebeker:

Meaning that it's easier to program the thing?

Gass:

No, that's actually not the case. We would like that to be the case, but it allows the C compiler to exploit the parallelism and is not constrained in any way. So people can actually write their code in C and get a fairly efficient version of assembly language, or executable code, generated by the code generator. I think that's definitely a point in history where it's going to change the way DSP happens in the future, because as long as everybody's relegated to this assembly-language way of programming, the number of people programming DSPs is very limited. When we can actually break through that barrier of saying people can write in C and have an efficient implementation on a DSP, that's going to change the way it's being done in the future. I think it's a very big breakthrough.

Nebeker:

Do you think it's happening?

Gass:

I think it will. In fact, the VLIW approach to processors has been adopted by Intel in their new Merced line that they promoted, and even high-performance microprocessors are going more to VLIW architectures. I think it's the trend. And I'm sure they are going to get the compiler efficiency as well as what our DSPs are going to get. It will take time. I think our compilers are already at 80 percent of the efficiency of optimized assembly, and so close to the best results from our benchmarks and everything. But I think it will continue to improve over time. When that gap closes to near zero, then you will be able to program in C, which will be good. For DSP it will be very good.

Nebeker:

That's a real change.

Odyssey board

Nebeker:

Maybe we can get back to your own specific assignments away from these bigger trends.

Gass:

Right.

Nebeker:

So you're in Dallas and you're working for the corporate engineering center. Did we cover what you did those four years?

Gass:

Right. Well, it was partly the assembly-language programming of speech coding applications, and debugging of that on the hardware system, which had a speech board that plugged into a PC and handled speech coding, speaker recognition, and synthesis to some extent. So all those applications were running on the board. Then I was still in the corporate engineering center when we picked up this project to do an Odyssey board. The Odyssey board was a multiprocessor system aimed at facilitating speech recognition for a very large vocabulary.

Nebeker:

Yes. Can you summarize that?

Gass:

Actually the project was motivated by DARPA funding. They needed a hardware platform to be able to do very large vocabulary recognition. We knew that one processor couldn't do a very good job, so we created a multiprocessor system. In fact the board was designed so that you could plug in multiple boards, and if you needed something more than four processors you could plug in two or three boards and increase the size of the vocabulary.

Nebeker:

This is speech recognition?

Gass:

Yes.

Nebeker:

Okay.

Gass:

So there were several years that went into the development of that board. TI at the time was making a symbolic Lisp machine, which we sold for a short period of time. It's called the Explorer. About the time we had been marketing the Explorer, the Lisp machine kind of declined in popularity, because conventional [inaudible word] [inaudible word] just as well as the Lisp machine. And so C kind of took over, and C code displaced Lisp in popularity. The Odyssey board was designed to go into our Lisp machine. I think we did some multiprocessor boards that plugged into PCs as well. But the life span of that board was limited.

Nebeker:

Was it important in showing that kind of capability?

Gass:

Oh yes. Then people came up with a lot of other applications that were outside of speech recognition. I guess we could talk about some of the applications. We did image tracking, or object target tracking, and microphone array applications. A lot of people in the research community used the board for development of applications that required more than 90 [inaudible word]. So its popularity was moderate.

Nebeker:

Okay. And I see you were working on that somewhere in the period '86 to '91; perhaps '87 was the early part of that.

Silicon compilers, DSP synthesis

Gass:

Well, my involvement with the Odyssey board probably ended shortly after that paper. That's when I got involved in what we called silicon compilers; DSP synthesis was another term we used for that. There had always been these filter compiler programs around, where you'd feed in the design parameters for what you wanted the filter to look like, and it would come out with an equation that implemented the characteristics in digital form. There was always this concept of filter design packages, and we took that concept another step: instead of implementing it in a program that could run on a machine, we'd take it to the point of [inaudible word] in silicon as a hard-wired function that would give you performance close to 100x what a DSP could give you. So for military applications it was interesting. There was a similar project for FFTs, which were really high performance. FFTs could be done by giving it parameters for what kind of constraints you needed for your application, and it would generate a hard-wired function to do those things. So it was very limited, in that it couldn't do anything other than an FFT or something like that.

Nebeker:

Have these found much use?

Gass:

We used the filter compiler in an application for digital TV when it was kind of under consideration. Right before the HDTV things came along, there were some people pushing a more standard-definition digital TV standard. So we implemented a feature which allowed you to do what's called de-ghosting. Ghosting is when in a TV picture you see a double image because of the multipath. What the filter compiler was there to do was estimate the [inaudible word] coefficients and then cancel out the multipath. It was a big transversal [inaudible word], and you would just get a clean signal from that. And there were some military applications, but they were real low volume. So from there we kind of evolved into looking specifically at video. So there was a significant amount of time where we were focusing on dedicated architectures for MPEG-2 and MPEG-1 video decoding.

Nebeker:

And what products came from that?

Gass:

We were working on the MPEG-1 decoder. But TI formed an alliance with C-Cube, so we actually went to market with C-Cube's version of MPEG-1 decoders. We had an agreement with them to also use their MPEG-2 design. The research group was working on an improvement to the architecture for MPEG-2, and it got canceled because they decided they wanted a programmable approach instead of a hard-wired approach. That's when we went from doing dedicated designs to doing more programmable architecture of DSPs. That's when I got back into DSP architecture.

Research management

Nebeker:

Okay. Now in these last couple of positions you had the title of research manager?

Gass:

Yes.

Nebeker:

Is that right?

Gass:

Yes.

Nebeker:

How have you liked that?

Gass:

I liked it. The thing that I liked most about management was the ability to set the direction for what research was being conducted and define the projects. I was able to work with the people on the business side to make sure that there was an interest in the projects and find a home or a transfer point for the technology. And I liked the people side of it too. So there was a lot that I liked about being a manager.

Project leadership, DSP architecture for third generation cellular phones

Nebeker:

And fairly recently you're the project leader?

Gass:

Yes, I made a switch two weeks ago.

Nebeker:

Can you tell me about that project?

Gass:

Yes. This is a good opportunity for me, because the next generation of architecture was about to begin. We're still probably going to stay within the scope of the 5x family and probably won't deviate from that family too much. But there are significant challenges in how to get, in a programmable processor, the compute power that's needed. Third-generation cellular phones are very challenging on the application side. They support multitasking and a wide variety of applications, from speech recognition to speech coding to image compression to video decoding.

Nebeker:

Okay. So although you mentioned cellular phones as an integral area, you want this to be a general purpose DSP?

Gass:


Audio File
MP3 Audio
(341 - gass - clip 3.mp3)


Well, it's got to be able to run the applications that people expect to run on their terminals, their handsets, for third generation. First-generation phones were analog; second-generation phones were digital. Most of them were what's called TDMA, and there are some CDMA phones out there too. Almost everyone that's going to third generation, in Europe, Japan and the U.S., is looking at wideband CDMA as the next step for wireless communications. That will allow you to have significantly more bandwidth and will allow you to extend beyond just voice services. So the ability to get data will become more common, and so you have to deal with a whole set of different user interfaces now, with web browsing and all kinds of things that you didn't have to deal with in phones before. So that's one constraint. The DR-6 [inaudible phrase]. The other constraint is actually the modem part, the physical layer of doing modulation. Modulation is much more compute intensive for wideband CDMA than it is for TDMA phones, which are common [inaudible phrase] in today's standards. So the computational burden on the modem side has gone up significantly, as well as the variety of applications. Those two things combine to make it a tough problem to solve.

Well, I suppose it's pretty preliminary, but the architecture definition should be complete by the end of this year, and then the design team will get started at the beginning of 1999. It's going to probably take eighteen to twenty-four months for the design work. So the first chip will be out in the late 2000, early 2001 kind of time frame. And that's when the third generation [inaudible word] will probably first be deployed around the world. What's ironic about this is that something like 250 man-years is estimated for the development of the chip. I'm thinking of how different that is from that first chip in complexity and manpower.

Women in engineering and management

Nebeker:

Well, that should be exciting. Maybe this is a good point to turn to the issue of women in engineering?

Gass:

Sure.

Nebeker:

What thoughts do you have, either on a personal level, about places where you felt things were harder than they would have been for a man in the same position, or more general thoughts?

Gass:

Well, actually I felt like I didn't have a lot of barriers at TI. My biggest challenge was trying to balance family life with work life. What's interesting about being a woman at TI was that it gave me more visibility. So if I did a bad job, everybody knew it. And if I did a good job, everybody knew it as well. People had a tendency to remember me after a meeting. So I haven't had too many issues in my career where I felt like being a female held me back. I felt like every manager I've had has been very supportive of women. However, I remember one boss found out I was pregnant with my second child, and at that point I was a manager. He kind of just looked at me in disbelief and said, "Oh. I guess we'll figure out how to deal with this somehow." And he never said, "Congratulations." So struggling with family responsibilities while handling a full-time career, which often spilled over into personal time, is definitely the hardest issue I've had to deal with.

Nebeker:

Has it meant that either you didn't take on or weren't allowed to take on some project that if you had been a man you might have?

Gass:

No, I don't think so. I think what happened is I probably ran myself too [inaudible word] trying to. It seemed like every time I turned around somebody was asking me to do something, and it was much more up to me to say no than it was that opportunities were denied me.

Nebeker:

Isn't there very often in industry, in the business world, this kind of expectation that the manager or the person who is responsible for some project, you know, has decided to devote his or her life to that, and you know if you need to put in 100 hours that week you'd better be able to do it. So I can imagine that that's even more difficult for a woman with a family than for a man with a family.

Gass:

Yes. Well, I have somewhat of an advantage over most people, because I had a full-time nanny the whole time. It was actually my cousin, which was great, because having a family member take care of the kids is very different from having a total stranger do it. So that part of childcare was definitely a blessing for me. My husband also gets a lot of credit for this. He has a full-time, very highly qualified career, probably not as time-consuming as my job, but he was more than willing to take on more than fifty percent of the responsibility for the kids and the house.

Nebeker:

So you have a supportive family in a couple of ways.

Gass:

Yes.

Nebeker:

You know of course twenty years ago there weren't many women in signal processing, like most fields, and it seems that there are a lot more now. Any thoughts on that?

Gass:

Well, I think that women in general are a lot more accepted in engineering today. It's definitely true at TI, I know. There have been some efforts to try to sensitize people, to make sure that people aren't discriminating against people with different cultural backgrounds or genders. We've been trying to raise people's sensitivity to these issues. There are several initiatives at TI that address people with English as a second language. There are also efforts to assist African-Americans in getting more visibility and levels of responsibility. So there's been a concerted effort to try to make sure that special arrangements are provided so that they get leadership responsibilities. Managers receive an extra level of training, and in some cases mentoring, to help compensate for the fact that people tend to like people that look and do things like themselves better than somebody who is different in some way.

Nebeker:

Yes.

Gass:

So that's been a positive thing at TI.

Signal Processing Society, IEEE

Nebeker:

Do you have any comments on the Signal Processing Society? We're always interested in strictly IEEE history. Anything good or bad in what you've seen in your years of involvement with it?

Gass:

Well, my involvement started probably in '85 or '86; that was the first time I came to ICASSP. That was also when people not only from TI but from AT&T and Motorola were starting to do DSPs. Then Analog Devices came along later. I was involved both from the hard-wired dedicated-function standpoint as well as from the processor standpoint. I got active in one of the technical committees. We have since changed the technical committee's name to Signal Processing Systems, with an emphasis on design and implementation issues. So I've only been there for a short period of time compared to some people who have been in the society. I kept in contact with people from Rice quite a bit.

Nebeker:

So many of the people that I end up talking to, and so many of the leaders in IEEE are academics.

Gass:

Yes.

Nebeker:

And from the working engineers I know, I've heard the comment that it's sort of academic dominated.

Gass:

It is. I saw that there was an opportunity for industry to play a major role in signal processing, but the opportunity came and went. I don't think the Signal Processing Society was ready to endorse areas of the engineering discipline outside of what's been more traditionally considered research. So I think it's much more research oriented, and that's probably why there's a stronger academic flavor here.

Nebeker:

Yes.

Gass:

And that is limiting. It was kind of a feedback system, because the only people coming to ICASSP were the researchers, not the lower-level engineers or the engineers that were more practically oriented, the ones involved in the implementation and the practical side of engineering as opposed to the theoretical research side of it. The people from industry like Motorola, AT&T, TI and ADI felt like being at the exhibit was not cost effective for them, because the people that came to ICASSP would never buy more than one or two; ten was a big order for them. So the amount of money they were putting into the conference versus the amount of benefit they were getting from it was not profitable enough for them to stay here. Those companies actually got together and created another conference outside of IEEE that would be geared toward the implementation side of things, ICSPAT. Now all the new processor announcements happen at this other conference, without the IEEE involved, because the industry guys got together and said, "We don't want to go to ICASSP anymore."

Nebeker:

Yes. Well, this is very interesting. I know that at the highest level, and overall for IEEE, there are efforts to appeal to the practicing engineer, and Spectrum features practical engineers now.

Gass:

Yes.

Nebeker:

Do you see anything that can be done to make the Signal Processing Society more relevant to the concerns of the practicing engineers out there?

Gass:

I don't know. Unless there's some way that it could in some sense sponsor this other conference that's already going on and somehow pull that into the Society. Because I'm sure with an IEEE endorsement it would probably do better.

Nebeker:

Yes.

Gass:

Trying to change ICASSP is never going to work.

Nebeker:

It must have some mission. I know that IEEE would not like to become a mainly academic organization with papers that are not of interest to the practicing engineer. So maybe you have to have some specialized conferences, and of course the publications, and then you'd have enough in the magazine and some other publications that all the people out there doing DSP would want to read. Any sign that the Society is trying to move in that direction?

Gass:

Yes. I think this year's new technical committees are definitely a move in that direction, trying to address the new excitement out there in DSP. Communications is such a huge field, and a lot of it's DSP oriented now; for DSP not to do more in communications would not be a good idea. So those two new technical committees should help. But again, I don't know that it's going to dramatically influence the type of papers selected at ICASSP. But you could see how, if we could play a role in another conference that's already going on, a lot of the disciplines could contribute. Technical committees could support both conferences; that would be fine.

Nebeker:

Yes. Anything I haven't asked about you'd like to comment on?

Gass:

No.

Nebeker:

Thank you very much.