Oral-History:David M. Young Jr.

From ETHW

About David M. Young Jr.

Following service in the U.S. Navy, David M. Young Jr. went to Harvard University to study mathematics. He was awarded an M.A. in 1947 and a Ph.D. in 1950, working under the direction of Professor Garrett Birkhoff. David Young's Ph.D. research established the mathematical framework for the Successive Overrelaxation (SOR) method. In 1954, Professor Young began his academic career in the Mathematics Department at the University of Maryland, College Park, where he was the first to teach a numerical analysis course focusing on computer programming. In 1958, he became a Professor of Mathematics at The University of Texas at Austin, where he established the university Computation Center and was its director until 1970, when he became the founding director of the research Center for Numerical Analysis. He was a founding member of the Institute for Computational Engineering and Sciences (ICES). He passed away on December 21, 2008.

The interview focuses on work supported by the National Science Foundation, as well as by the Department of Energy and other government and private sources. He discusses different iterative methods, including SOR. He describes a shift, due largely to increased computing power, to methods for dealing with non-symmetric systems — encountered, for instance, when pumping oil out of the ground — as well as methods tailored to parallel and vector computer architectures.


About the Interview

DR. DAVID M. YOUNG JR.: An Interview Conducted by Andrew Goldstein, Center for the History of Electrical Engineering, July 22, 1991

Interview #110 for the Center for the History of Electrical Engineering, IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.


Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of the IEEE History Center.

Requests for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, Rutgers - the State University, 39 Union Street, New Brunswick, NJ 08901-8538 USA. They should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.

It is recommended that this oral history be cited as follows:

Dr. David M. Young, an oral history conducted in 1991 by Andrew Goldstein, IEEE History Center, New Brunswick, NJ, USA.


Interview

INTERVIEW: Dr. David M. Young Jr.
INTERVIEWER: Andrew Goldstein
DATE: 22 July 1991
PLACE: Telephone Interview

Overview of Work

Goldstein:

Hello, Professor Young?

Young:

Yes.

Goldstein:

Hi. This is Andy Goldstein from the IEEE Center for the History of Electrical Engineering.

Young:

Right.

Goldstein:

We have an appointment to talk today.

Young:

Right, right.

Goldstein:

Alright. I just want to remind you that I am recording the conversation, and be sure that that's agreeable to you.

Young:

Right, yes it is.

Goldstein:

Okay. I know I have explained the purpose of this phone call and our project, but let me just tell you again. What I am interested in is your research, the things that motivated your research and the impact it has had, and NSF's involvement in promoting your research. So if we could start by just having you summarize what it was that you've worked on.

Young:

Okay. The work really has been devoted to methods of solving partial differential equations using discretization methods such as finite difference methods. But one of the main stumbling blocks — okay, there were two things. One is to set up a discretization to represent the differential operator by a difference equation. I've done some work in that, not very much. I've done some work on higher order methods for deriving difference equations, but the primary work has been in — you have to solve large systems of linear algebraic equations. Usually the matrices are very sparse. That is, they have very few non-zero elements. And then iterative methods are often used instead of direct methods to solve large sparse linear systems. So my main activity since I got my Ph.D. around 1950 at Harvard has been on the problem of developing rapidly converging iterative methods for solving large linear systems.
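
[Editorial note: a minimal Python sketch of the setup described above: a finite difference discretization of the one-dimensional model problem -u'' = f on (0,1) with zero boundary values, which yields a large sparse (tridiagonal) linear system that a basic iterative method can solve. The grid size and right-hand side are illustrative assumptions, not from the interview.]

    import numpy as np

    n = 50                                   # interior grid points
    h = 1.0 / (n + 1)
    # Central differences turn -u'' = f into a sparse tridiagonal system A u = b.
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    b = np.ones(n)                           # f(x) = 1 at every grid point

    # A basic iterative solve (Jacobi): u <- D^{-1} (b - (A - D) u).
    u = np.zeros(n)
    d = np.diag(A)
    for _ in range(5000):
        u = (b - (A @ u - d * u)) / d
    print("residual norm:", np.linalg.norm(b - A @ u))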

Goldstein:

Now, do you have specific applications in mind? Are there particular differential equations, perhaps from physics or some other discipline, that you're interested in solving? Or are you working on general methods?

Young:

Well, okay, I'm interested primarily in elliptic partial differential equations, where normally the matrices have a certain amount of — they have a structured sparsity as opposed to just random sparsity, and they usually are diagonally dominant. That is to say, the element on the main diagonal is larger in absolute value than the sum of the absolute values of the other elements in its row.
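
[Editorial note: a small Python check of the diagonal dominance property defined above; the 3-by-3 test matrix is an illustrative assumption.]

    import numpy as np

    def is_diagonally_dominant(A):
        # strict dominance: |a_ii| > sum of |a_ij| over j != i, for every row i
        diag = np.abs(np.diag(A))
        off = np.abs(A).sum(axis=1) - diag
        return bool(np.all(diag > off))

    A = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    print(is_diagonally_dominant(A))   # True for this model matrix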

Goldstein:

Okay.

Young:

And so then you know certain things about the matrices being non-singular and things like that. So they are fairly special linear systems. A lot of the work would apply in more general cases, but the emphasis is on these special kinds of systems. We used to consider primarily symmetric systems, where the matrix was symmetric and positive definite. And more recently we've — well, I say recently; for the last ten or twelve years we've also been working on non-symmetric systems. It's quite a different ballgame when you go from symmetric to non-symmetric systems, since the eigenvalues of the matrices may become complex, and also the Jordan canonical form of the matrices may have a special form, and that really complicates the analysis.

Goldstein:

Okay, I want to get back to that, but first let me ask: what physical systems are described by these elliptic partial differential equations?

Young:

Well, diffusion problems, steady state heat flow problems, anything to do with — you know, where you have the equation of continuity of a system. I guess fluid dynamics, nuclear reactors, things like that, where you have a conservation condition of some sort or other.

Articles and Books

Goldstein:

Okay, I've looked over the paper that you sent me, "A Historical Overview of Iterative Methods," and for the most part it was over my head, but I could see that —

Young:

It didn't emphasize the applications enough, I have to admit.

Goldstein:

You identified several classes of iterative methods, and I'm just wondering which of these you've been focusing on?

Young:

Okay. Well let's see now. I forget now which classes —

Goldstein:

For instance, the SOR Method.

Young:

Okay, a lot of the work, especially in the early stages, was on the SOR Method. In fact, I guess my advisor and I developed the name SOR, but that was done back in 1950. By the way, I should say this: much of my work can be summarized in two books. And these are mentioned in that paper there. The one book was written in 1971 by me alone, and then in 1981 there was a book by Louis Hageman and myself. So a great deal of the work was sort of — it was not just a question of the papers, but then we sort of put it all together in these two books.

Goldstein:

All right, let me make sure I know which ones. Would they be in your bibliography? Would they be under your name?

Young:

Well, the first one, the one that's called Iterative Solution of Large Linear Systems, let's see if that's in there — oh yes, there's a section on books — let's see now, where is that?

Goldstein:

Yes, right at the end. There are the books on iterative methods. Right, and so that's Hageman and Young, [Applied Iterative Methods].

Young:

And then there is also Young, 1971. Those two. I also had a textbook with Gregory, which is not listed in that one, but that also contained some of the work; I think it's at a more elementary level. The book with Gregory, Young and Gregory, [A Survey of Numerical Mathematics, Volumes I and II], I don't know if that's listed in the longer bibliography or not. Let's see. I guess it's not as critical. I can give you the exact reference to that book. I don't know why it's not here. But basically, the main results are in the two books: the one by myself and the one with Hageman. And the 1971 book is largely devoted to the SOR method and variations of it.

Iterative Methods

Goldstein:

So maybe then in this conversation you could tell me some of the developments in that area since 1971 that would be missed by the book. Or does it work that — is the dynamic that one method becomes preeminent and most of the attention focuses there? Or are all methods developed more or less simultaneously?

Young:

No, okay. I guess I'd say the SOR Method was pretty much, not the only method, but very widely used in the 1950s anyway, and early 1960s. And other methods were developed, such as the Alternating Direction Implicit method, the one we called the ADI Method. Then there were the Chebyshev-related methods. The Jacobi method, which is a very slow method by itself, was combined with Chebyshev acceleration. That is to say, acceleration by Chebyshev polynomials speeds things up. And then also the symmetric SOR method was developed, I guess in about 1955.
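
[Editorial note: a Python sketch of one SOR sweep, the successive overrelaxation update analyzed in Young's thesis; with omega = 1 it reduces to Gauss-Seidel. The model matrix and the omega formula (known to be optimal for this tridiagonal problem) are editorial assumptions.]

    import numpy as np

    def sor_sweep(A, b, x, omega):
        # One forward sweep: each x[i] is relaxed toward its Gauss-Seidel value.
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]          # uses already-updated entries
            x[i] += omega * ((b[i] - sigma) / A[i, i] - x[i])
        return x

    n = 50
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = np.zeros(n)
    omega = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))      # optimal for this model problem
    for _ in range(200):
        x = sor_sweep(A, b, x, omega)
    print("residual norm:", np.linalg.norm(b - A @ x))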

Anyway, our work included — there was a problem with determining various iteration parameters. The SOR Method involves a relaxation parameter ω, and for the Chebyshev methods you need estimates for the bounds of the eigenvalues of the iteration matrix. Actually, you can apply Chebyshev acceleration to a wide variety of iterative methods, but you do need to be able to estimate the largest and smallest eigenvalues of the matrices for those methods. Hageman and I developed some automatic methods for determining these iteration parameters. Before that, you could estimate them, but that was it. You couldn't get enough accuracy by just estimating, using analytical techniques.

But we developed some schemes, we call them adaptive schemes, where you start with an initial guess for the parameters, iterate awhile, and then modify the parameters based on how fast the method is converging. You could call them adaptive methods. We also developed procedures for determining when the process should be stopped. It's not enough to just look at the residual vectors, the amount by which the equation is not satisfied, because even if it looks like the equation is nearly satisfied, the problem may be ill-conditioned, so that the actual error may be very much larger than you would think it is by looking at the residuals.
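
[Editorial note: a Python illustration of the point about residuals and ill-conditioning: for a symmetric positive definite matrix the error norm can be as large as the residual norm divided by the smallest eigenvalue, so a small residual by itself proves little. The constructed spectrum and error direction are editorial assumptions.]

    import numpy as np

    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
    eigs = np.linspace(1e-6, 1.0, 50)            # ill-conditioned spectrum
    A = Q @ np.diag(eigs) @ Q.T                  # SPD, smallest eigenvalue 1e-6
    x_true = rng.standard_normal(50)
    b = A @ x_true

    x = x_true + 1e-3 * Q[:, 0]                  # error along the "bad" eigenvector
    r = b - A @ x
    print("residual norm:", np.linalg.norm(r))                 # about 1e-9
    print("actual error :", np.linalg.norm(x - x_true))        # about 1e-3
    print("bound ||r|| / lambda_min:", np.linalg.norm(r) / eigs[0])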

So anyway, we developed some schemes for estimating the actual error involved. When you get to a certain point and you think you've converged, then you have an estimate of the actual error. And that's related to the estimate of the eigenvalues of the iteration matrix, which we also wanted to determine automatically. So with those two things, it then became practical to develop iterative algorithms and actually software. We have some software which we distribute throughout the country, and the world actually, under the ITPACK Project. I-T-P-A-C-K. I think that's discussed in that paper.

Goldstein:

So what you have been describing then, using this, and I'm not going to pronounce it right, the Chebyshev?

Young:

Chebyshev.

Goldstein:

The Chebyshev acceleration. That was in the mid 1970s under NSF support?

Young:

Right. It started in the mid 1970s, and we then expanded to include conjugate gradient acceleration as well as Chebyshev acceleration. There are advantages: you don't need to estimate parameters. On the other hand, a few more calculations are required per iteration with conjugate gradient. It is, I guess, primarily used now; conjugate gradient is more widely used than Chebyshev. But then, we have a choice. And besides the work on developing iterative algorithms and software, we then started working on non-symmetric systems. In the papers it was Jea, J-E-A, and other people, on developing generalized conjugate gradient methods, sometimes called "Krylov space methods," for solving non-symmetric systems. A lot of other people have been working on these non-symmetric systems also, of course.
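
[Editorial note: a textbook conjugate gradient loop in Python, as a sketch of the acceleration mentioned here; it needs no eigenvalue estimates, at the cost of a couple of extra inner products and vectors per iteration. This is generic CG, not the ITPACK implementation.]

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, maxiter=500):
        x = np.zeros_like(b)
        r = b - A @ x                      # residual
        p = r.copy()                       # search direction
        rs = r @ r
        for _ in range(maxiter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    n = 50
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD model matrix
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    print("residual norm:", np.linalg.norm(b - A @ x))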

Non-Symmetric Systems & Computing Power

Goldstein:

Now, why did you start pursuing the non-symmetric systems? This question is actually more general than this particular case, but I'm curious to what extent you would determine your research agenda and to what extent the computer power that was available to you would determine it. I guess what I'm asking is: could you start working on non-symmetric matrices because you had more computer power than before, or because new techniques were developed?

Young:

Okay, well, back in the mid 1970s, I guess, it was hard enough to get enough computer power to handle symmetric systems, and so people sort of shied away from non-symmetric systems. I don't guess they really knew too much about iterative methods for these systems. I mean, the one method would be solving the normal equations. You take a non-symmetric system and multiply by A transpose, and then the matrix A transpose times A is symmetric positive definite, and that's great, except that the condition number is greatly magnified. It's generally regarded as not too red hot an idea to use the normal equations most of the time.
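
[Editorial note: a short Python check of the drawback named here: forming the normal equations squares the condition number. The random test matrix is an editorial assumption.]

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 100))                 # non-symmetric
    print("cond(A)    :", np.linalg.cond(A))
    print("cond(A^T A):", np.linalg.cond(A.T @ A))      # roughly cond(A) squared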

But then a man by the name of P. K. W. Vinsome [1976], he was an oil man, got the idea of going ahead and using the conjugate gradient method for non-symmetric systems, even though it was only supposed to work for symmetric systems. Just go ahead and try it anyway, maybe adding a few extra terms, and see how that would work. And that seemed to work out fairly well. And then later on people tried to make this more rigorous, in what are called "Krylov space methods." K-R-Y-L-O-V space. You sort of work in a Krylov space; you generate the Krylov vectors b, Ab, A²b, and so on. And you determine the best approximation to the solution in these Krylov spaces according to certain criteria, like minimizing the residual vectors, direction vectors, orthogonalization, and things like that. And in our work, we introduced two terms: ORTHORES and ORTHODIR. I think we were credited with introducing the terminology that goes along with ORTHOMIN. Not that that means that we did all of the work by any means, but in any case we did develop the general framework for considering some of these methods [which are called "generalized CG-acceleration procedures."]
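
[Editorial note: a deliberately naive Python sketch of the Krylov space idea: build the vectors b, Ab, A²b, and so on, then take the combination minimizing the residual norm, the criterion behind minimal-residual methods such as ORTHOMIN. Production methods orthogonalize as they go; the raw power basis and the random test matrix here are editorial simplifications.]

    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 60, 20
    A = np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # non-symmetric
    b = rng.standard_normal(n)

    # Krylov vectors b, Ab, A^2 b, ... (normalized to avoid over/underflow).
    V = np.empty((n, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        V[:, j] = w / np.linalg.norm(w)
    Q, _ = np.linalg.qr(V)                 # orthonormal basis of the Krylov space

    # x = Q y, with y chosen to minimize ||b - A Q y|| by least squares.
    y, *_ = np.linalg.lstsq(A @ Q, b, rcond=None)
    x = Q @ y
    print("||b||            :", np.linalg.norm(b))
    print("residual after", k, "steps:", np.linalg.norm(b - A @ x))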

We also showed how these methods relate to the "Lanczos method," which is kind of interesting in that it requires fewer calculations per iteration. But you don't ever know whether it's going to break down or not; you may just get a zero denominator. More recently there's some work being done to bridge that gap, so if the method does break down then you can do some things to bridge over the gap and get back on track again. But so anyway, we claim some credit, along with a lot of other people, in this particular area of non-symmetric systems.

Goldstein:

So what you are saying is you developed methods that could take advantage of increased computer power?

Young:

Right. And, of course, increased knowledge of the methods themselves. But that's right, that was the objective. Most of the problems, the realistic problems that you want to solve, are non-symmetric, it turns out.

Goldstein:

What sort of problems were you able to tackle then, either in terms of the physical system that was being modeled or the dimensions of the matrix that you were able to solve?

Application: Oil Recovery

Young:

Well, I tell you, the biggest incentives of the thing were oil recovery problems. Those problems were sort of inherently non-symmetric.

Goldstein:

I'm not sure why. When you say oil recovery, I'm not sure I know what you're referring to.

Young:

Well okay, you're trying to — let's just say you're pumping water into one part of the reservoir and that forces the oil out the other part, so that the whole interface between the oil and the water moves. And when you start getting water out at the oil end, then it's time to stop pumping. Anyway, that whole front between the oil and the water moves, and it obeys the diffusion equation. The more water you pump in, the more it forces the oil out the other side. You have a rectangular field: you pump the water in at one corner, and the oil comes out at the other corner. So the more water you pump in, that sort of forces the oil toward the oil end. And the diffusion process — you hopefully have a front between the water and the oil that doesn't get broken. If it does, then it fouls things up.

But anyway, that's a complicated system of several pairs of differential equations. And you get symmetry if you greatly simplify, but if you try to make a more realistic model then it no longer simplifies. I should say, we actually worked in collaboration with some engineers and oil people; we developed the software and we let them actually do the applications. But it was motivated by the need to solve these problems.

Goldstein:

When you say people who work for oil companies, Exxon —

Young:

No, actually they were in the Petroleum Engineering Department, although there are applications — I mean, this faculty person at the University [Professor Kamy Sepehrnoori] is in the Petroleum Engineering Department. He works with industry and has all kinds of programs for solving oil recovery problems. So one of our biggest motivations is the need that he has to solve these problems. [Professor Mary Wheeler, who did her master's thesis for Dr. Young, also works in this area.]

Goldstein:

Do you discuss this in applications to NSF, or one of your other —

Young:

Yes, well, The Department of Energy —

Goldstein:

Department of Energy is another principal funder?

Young:

The Department of Energy. They've also been supporting us.

Goldstein:

So when you are discussing the research project, do you describe it in terms of its mathematical content, or its application?

Young:

Well, primarily, I have to say I'm a mathematician. I describe it mostly in terms of the mathematical aspect, but strongly motivated by the applications. But I must admit, these codes have a lot more in them than just the mathematical solution of the linear system. That's not an insignificant part. In fact, it takes a large part of the computer time, but nevertheless the actual program has got a lot more in it than just the solver. They call this the solver.

Goldstein:

When in your career did you become interested in this diffusion problem?

Young:

Well, I guess I've been interested in this type of problem ever since I started doing research, and I guess I'm really saying that this sort of gives a justification for the research. I myself have not solved oil recovery problems.

Goldstein:

I see. You provided the techniques.

Young:

Especially on the Department of Energy grant, we worked closely with people who do that. We get motivation from them, but we don't actually get out there and solve the problems ourselves.

Goldstein:

I see. When I'm writing about this, what I'm going to try to do is describe the significance of the research. And so although I recognize there is a mathematical significance, it's also important to identify where it's been applied.

Young:

Oh, certainly. I understand that, right. Non-symmetric systems are — most physical systems, as I said before, if you simplify you can perhaps get symmetric systems, but when you try to get more and more realistic — Of course, that also works in the case of going from two dimensions to three dimensions. In the early days, the reason you did two dimensions was that it was simply too expensive to try to solve a three dimensional problem on the machine — three dimensional plus time dependence. It's only when you get bigger machines, with millions of operations per second and huge memories, which you didn't have years ago. You couldn't even think about doing three dimensional, time-dependent problems then. Now they're getting close to that.

Algorithms for Parallel & Vector Machines

Goldstein:

What machine do you work on? What environment do you develop in?

Young:

We have access to a Cray, and we also expect to get access to a Connection Machine and to a Hypercube, to use these massively parallel devices. That, by the way, is another area that I want to mention. Starting around the early eighties, we have been working on extending our software, considering the use of vector machines and also parallel machines, trying to develop methods that make effective use of parallel computers.

Goldstein:

Are some methods more amenable to parallelization, and are there algorithms that exploit parallelism?

Young:

Right, some of the methods are very good for serial machines and may not be so good for parallel machines.

Goldstein:

Which ones?

Young:

Well, as an example, there's the symmetric SOR method. It's quite good for a certain class of problems. It's very good using the so-called natural ordering, but you can also use the red-black ordering. If you have a square grid of points, you label every other grid point red and the other ones black; that's possible using the SOR method. You can do all of the red points, treat them all at once in parallel, then all of the black points [using a parallel computer]. Well, if you do that with the symmetric SOR method, the convergence is not at all as good as it would have been if you had used the so-called natural ordering [on a serial computer], where you just go from left to right and up. But that so-called natural ordering is not parallel, because you can't do half the points at once and then do the other half; it's one point right after the other. So you have a dilemma: if you use the red-black ordering you get tremendous parallelism, but the convergence is all shot; whereas if you use the natural ordering, the convergence is very good, but the parallelism is not very good. [Clearly, one needs to use a parallel computer to take advantage of a parallel algorithm.] That's just one example.
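
[Editorial note: a Python sketch of the red-black ordering on a one-dimensional model grid: all points of one color are updated simultaneously from neighbors of the other color, so each half-sweep is a single vectorized (in principle, parallel) operation. The grid size and omega are editorial assumptions.]

    import numpy as np

    n = 51                       # interior points of 2x_i - x_{i-1} - x_{i+1} = b_i
    b = np.ones(n)
    xp = np.zeros(n + 2)         # solution padded with the two zero boundary values
    omega = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))
    for _ in range(300):
        for start in (1, 2):     # "red" (odd) points first, then "black" (even)
            i = np.arange(start, n + 1, 2)
            gs = (b[i - 1] + xp[i - 1] + xp[i + 1]) / 2.0   # Gauss-Seidel value
            xp[i] += omega * (gs - xp[i])                   # whole color at once
    i = np.arange(1, n + 1)
    print("residual norm:", np.linalg.norm(b - (2 * xp[i] - xp[i - 1] - xp[i + 1])))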

One of the crudest methods is the Jacobi method. That's a really simple method. And that appears to be pretty good; it will parallelize completely, so you can do every point at the same time. The idea with the Jacobi method is that you take each point on the grid and compute the value there, but you can do every point at the same time. Each one is independent; you only change the values at the end of the iteration. And so that's an example of a method that's very slow on a serial computer, but on a parallel machine it parallelizes perfectly. And if you use the right kind of acceleration, it's very competitive. In fact, we made some studies around 1985, and among the methods we considered it was just about as good as any. Even though prior to that, if anyone ever mentioned using Jacobi, people would say you're crazy, that method is obviously no good. But when you get to parallel machines, things change.
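
[Editorial note: a Python sketch of Jacobi with Chebyshev acceleration on the model problem: each Jacobi step updates every point independently, which is what makes it perfectly parallel, and the standard three-term Chebyshev recurrence supplies the acceleration. The eigenvalue bound sigma is known analytically for this model; in general it would be estimated, adaptively in ITPACK-style schemes.]

    import numpy as np

    n = 50
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    d = np.diag(A)

    def jacobi_step(u):
        # Every component depends only on the previous iterate: fully parallel.
        return (b - (A @ u - d * u)) / d

    sigma = np.cos(np.pi / (n + 1))     # spectral radius of the Jacobi matrix here
    u_old = np.zeros(n)
    u = jacobi_step(u_old)              # first step is plain Jacobi (rho_1 = 1)
    rho = 2.0 / (2.0 - sigma**2)        # rho_2 = 1 / (1 - sigma^2 / 2)
    for _ in range(150):
        u, u_old = rho * jacobi_step(u) + (1.0 - rho) * u_old, u
        rho = 1.0 / (1.0 - 0.25 * sigma**2 * rho)       # rho_{k+1}
    print("residual norm:", np.linalg.norm(b - A @ u))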

We're also working on (of course, this may be beyond the time period that you're interested in) the so-called high-level parallelism, where in order to make a method suitable for a vector or parallel machine, we try to completely change the method. In other words, there are several levels of using a parallel machine. One is just to rewrite the code a little bit. Another would be to change the code a lot, and that would be the second step. And the third step would be to change the algorithm itself to make it more adaptable to a parallel machine. So one of the things we're working on is trying to develop different kinds of iterative methods that are more adaptable to parallel machines.

Goldstein:

Now, is this the large-scale trend in all of numerical analysis, the move to reformulate techniques and algorithms for the newer machines, or is this something that's particularly well suited to your research?

Young:

No, well, a lot of people are working on it, but many people seem to be concentrating on a slightly lower level. They'll break the thing up into blocks. We're actually trying to get at some different mathematical techniques, a deeper thing. I'm not saying they're any better, but we're aiming at deeper changes, changes of methods. There are things like domain decomposition and block methods that are perhaps not quite as deep, although they're very important, and there's a lot of good mathematics there.

Goldstein:

I think I saw that Golub had been working with the block decomposition. I saw in one paper that he said that that was particularly well suited to parallel treatment.

Young:

Right. They take an airplane: the wings, the fuselage, and things like that, and consider each part separately, then sort of combine them in some way. What we're trying to get at is some different schemes, something a little bit more fundamental. We're still working on this, so I don't know how it's going to turn out.

Goldstein:

Right, now I see that you work on developing general mathematical methods, and in your particular case, you're working with oil men on these diffusion problems. But is your work so specific that it's limited to application on those problems, or are there others — ?

Young:

Well, one of our collaborators, Professor Graham Carey, works in computational fluid dynamics. And again, we're actually working on the numerical analysis, but he's using the methods and software for his problems.

Goldstein:

I see. Are you contacted by other scientific research groups?

Young:

Well, to some extent, people can order these codes. In other words, they'll send us a request to send them a tape that has about five software packages.

Goldstein:

Could you name them?

Young:

Well, ITPACK 2C [with Dr. David Kincaid]. Let's see if it's in that paper. In fact, that's one version; that's mainly for symmetric systems. Then we have ITPACKV 2C, the vectorized version of ITPACK 2C. Then we have two packages that include non-symmetric systems, ITPACK 3A and ITPACK 3B [with Tsun-Zeo Mai]. These include methods for non-symmetric systems. Then we have a thing called NSPCG [with Tom Oppe, Wayne Joubert, and David Kincaid]. That's designed for vector computers and also allows for different storage formats. That is to say, you don't want to store the whole matrix including all of the zeros, for example; it's prohibitive. And the simplest sparse storage would just be to store each non-zero element and the corresponding row number and column number. That would be one way of doing it. Well, there are other ways, including no format at all, where the user has the means of computing A times x, the matrix A times the vector x; for any vector x it's just an operational process. You cannot [?] the matrix in several formats. And then some of the physicists say that's the only way they can — they couldn't use the package unless we had three or more formats, including no matrix storage.

Goldstein:

Because of the memory savings?

Young:

Well, you may never actually want to generate the matrix. All you need is the means to compute A times x. You wouldn't actually form the matrix; who knows what you do to get that product. Of course, with finite element methods you generate these small sub-matrices. For each element, you generate a little matrix, and then you sort of combine the whole thing together to give you the full matrix. Well, even there you may not wish to assemble the entire matrix; you may just want to keep the element matrices as they are.

So in any case, there are cases where you do not want to ever generate the entire sparse matrix; you simply have the means of computing the operator applied to any vector. That's this package, N-S-P-C-G, which stands for "Non-Symmetric Preconditioned Conjugate Gradient." And that's a pretty widely distributed package, as I say, efficient for use on vector computers. When we try to extend that to parallel computers, that's going to be kind of a hard process, because there are so many different computers and things of that type. But we're going to try. There's also a project called LAPACK, which you may have heard of. That's for direct methods; the object is to try to generate efficient subroutines for applying direct methods. Jack Dongarra and other people are involved in that. We're working on iterative methods for parallel computers.
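
[Editorial note: a Python sketch of two storage options mentioned above: the simple format storing each nonzero with its row and column numbers, and the matrix-free option where the user supplies only a routine computing A times x. The tridiagonal operator is an editorial assumption.]

    import numpy as np

    # Coordinate format: parallel arrays of values, row indices, column indices.
    n = 5
    vals, rows, cols = [], [], []
    for i in range(n):
        for j, v in ((i, 2.0), (i - 1, -1.0), (i + 1, -1.0)):
            if 0 <= j < n:
                vals.append(v)
                rows.append(i)
                cols.append(j)
    vals, rows, cols = map(np.array, (vals, rows, cols))

    def matvec_coo(x):
        y = np.zeros(n)
        np.add.at(y, rows, vals * x[cols])    # y[r] += v * x[c] for each nonzero
        return y

    # Matrix-free: no stored matrix at all, just a routine applying the operator.
    def matvec_stencil(x):
        y = 2.0 * x
        y[:-1] -= x[1:]
        y[1:] -= x[:-1]
        return y

    x = np.arange(1.0, n + 1.0)
    print(matvec_coo(x))          # the two operators agree
    print(matvec_stencil(x))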

Support from NSF and Other Sources

Goldstein:

Now, have you found that NSF has been interested in reformulating your work for the different computing technologies?

Young:

Well, yes. We're still getting support from them; our most recent grant started about two years ago. It very heavily emphasized vector and parallel computing. So they are definitely interested, particularly their office of new technologies, the Office of Advanced Scientific Computing. They're very interested in the work. Actually, we're getting support from that group, plus an Applied Mathematics group, plus a group in Computer Science.

Goldstein:

Did you apply independently to each of those, or did you make one general application?

Young:

One proposal, but they all contributed to it.

Goldstein:

Besides NSF, who have your principal funders been? Or rather, could you rank your funders in terms of the significance of their support?

Young:

Well, I guess the Department of Energy would be the other. We really have two major ones, the NSF and the Department of Energy. And then we've had some in the past from the Army and Cray Computer.

Goldstein:

Working on similar problems, or did the problems have more military application?

Young:

Not really, no; these were grants, not contracts. That was the Army grant, and they were very good also; it lasted until about the late 1970s.

Goldstein:

When did you have Cray support?

Young:

We had Cray support about three or four years ago. We had it for about two or three years. The Army was much longer ago, from about 1960 to about 1977. And that was very good, but I think their program was being cut back; at least we have not had an occasion to go back to them. Although if they were expanding and interested, we'd be glad to see them. It's always a question of how much effort you can expend on writing proposals and how much you —

Goldstein:

That starts getting in the way.

Young:

Yes. And also how much effort you can spend yourself if you're writing proposals and doing a number of different projects. So we're right now just with NSF and the Department of Energy, and also some support from industry, Cray and so on.

Software Packages

Goldstein:

You said you had the packages that you make available.

Young:

It's ITPACK, with IT meaning "iterative."

Goldstein:

Right, right, well there are a number of them. Who do you distribute them to? Who's interested in these things?

Young:

Well, just people in various laboratories or research places. I don't really — actually, we should keep a record of this for just such a question. But you just get all kinds of requests, you know; every so often a couple of requests come in. Any laboratory that has occasion to solve linear systems, throughout the world, even Venezuela.

Goldstein:

Right. Do they work with you very closely? That is, do you make it available and then you're not married to it anymore, or does it require close communication in order to get it up and running in the way the laboratory wants?

Young:

Well, we've not found too much difficulty; we send the tape and the instructions, and, of course, if they have any problems they will call us. And it apparently has been quite successful, at least as far as we know. People have been able to run the programs fairly well. You know, they are written in FORTRAN 77, which is probably not the best or most modern language, but it was pretty widely available at the time.

[Sets down phone. Interview ended.]

Additional note by Dr. David Kincaid (2009):

Professor Young had NSF grants throughout his career, from the 1960s until the late 1990s. In 1966, as Director of the Computation Center, he obtained the first million-dollar grant from the NSF toward the purchase of a CDC 6600, which made the University of Texas the first university to have a "supercomputer." Young had Army research grants from the late 1960s until the late 1970s and DOE grants in the 1980s and early 1990s. These grants totaled many millions of dollars and supported numerous graduate students and researchers.