Oral-History:Jan Rajchman and Albert S. Hoagland
About Jan Rajchman and Albert S. Hoagland
Dr. Jan Rajchman's pioneering career in electronics contributed enormously to computer technology and data processing. An EE and Ph.D. graduate of the Swiss Federal Institute of Technology, he became a vital innovator in computer hardware and software. During World War II he applied electronics to computers and developed key memory systems. Subsequently he contributed much to digital computer research involving magnetic, optical, superconductor, and other techniques. Rajchman holds more than one hundred patents on his work.
Dr. Rajchman describes his views on data storage and computer memory. He sees these factors as related parts of a single problem and speculates on research and development intended to make electronic memory more efficient, economical, and comprehensible to the user. He concludes by answering questions from the audience. For a detailed interview with Jan Rajchman, covering his career and research, see the Jan Rajchman Oral History.
Dr. Albert Hoagland has contributed a great deal to computer data recording. A Ph.D. from the University of California at Berkeley, Hoagland then became part of its electrical engineering faculty. In 1956 he joined the IBM Advanced Systems Development Laboratory, and his research was crucial to the development of IBM computer magnetic data-recording products such as the 1301 disk file. Hoagland also led early technology efforts toward a high-density, replaceable disk file.
Dr. Hoagland's speech explores future options and developments in the management of data-storage devices. He describes ways of achieving higher storage density and predicts declining memory costs that will make larger memories inevitable and desirable. He also discusses new techniques for coding and modulating magnetic recording surfaces. He then answers audience questions about storage density, disk materials, and improvements in memory cost and design.
About the Interview
JAN RAJCHMAN AND ALBERT S. HOAGLAND: An oral history recorded for the IEEE History Center, 24 March 1971
Interview # 002 for the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.
This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of the IEEE History Center.
Requests for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, 39 Union Street, New Brunswick, NJ 08901-8538 USA, and should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user.
It is recommended that this oral history be cited as follows:
Jan Rajchman and Albert S. Hoagland, an oral history conducted in 1971, IEEE History Center, New Brunswick, NJ, USA.
SPEAKERS: Jan Rajchman and Albert S. Hoagland
DATE: March 24, 1971
Summary of Jan Rajchman's career
We are speaking of Jan Rajchman, affectionately known as "Mr. Memory." I don't know whether he knew that, but we used to always call him that at IBM. Dr. Rajchman joined RCA in the summer of 1935 and has been in research there since that time. In '51 he became director of the Computer Research Laboratory at RCA; in '67 he became staff vice president in data-processing research; and in '69 he was appointed staff vice president of information systems. During World War II Dr. Rajchman was among the first to apply electronics to computers. He worked chiefly with memory devices for computers, and developed the Selectron electrostatic storage tube, core memory systems, and the transfluxor aperture-plate memory. He's responsible for a broad-spectrum program of research in digital computers involving magnetic, cryogenic, semiconductor, and optical techniques; input/output devices; computer theory; computer applications; and automated design. In 1960 Dr. Rajchman received the Morris Liebmann Award for his contributions to the development of magnetic devices for information processing. He is a Fellow of the IEEE and the AAAS. He was elected to the National Academy of Engineering. He holds 105 patents.
Jan Rajchman's comments
I would like to look at the entire problem of storage and memory as one single problem. As the first slide shows, we tend presently to consider the problem of memory and the problem of storage separately: the memory being the random-access device, addressed by a coded address from the computer, from which you can extract information at high speed. We consider that part of the computer. And then we consider the mass storage, the drum, the tape, or the disk, into which you put information from the cells and from which you take it back, which I think is in itself a very important part of the processing system, as the input-and-output device. Actually, philosophically, both these devices and the core and the transistor all store information. So I'd really like for us to look at this as a whole problem. We've already heard the influence that LSI is likely to have in the area of very fast storage. That is to say, given the disparity of speed between the core memory and the processor, there is talk about installing a buffer, which tends to make this ensemble look as though it has the speed of the small memory and yet the capacity of the large memory. With the further improvements of transistor memory technology and LSI, we can look for the replacement of the core memory, perhaps with an arrangement of this type, where we would have one very fast memory and another, slower memory, depending on the technologies involved. These are likely to be cheaper. It's also physically easier to make one small memory that's very fast than a bigger one that's slower. So it still looks something like this.
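The buffer idea above can be made concrete with a small arithmetic sketch. This is a minimal illustration, not part of the talk; the access times and hit ratio are hypothetical numbers chosen only to show how a fast buffer makes the ensemble look nearly as fast as its small memory while retaining the large memory's capacity.

```python
def effective_access_time(t_fast, t_slow, hit_ratio):
    """Average access time of a two-level hierarchy in which a fraction
    `hit_ratio` of references is satisfied by the fast buffer."""
    return hit_ratio * t_fast + (1.0 - hit_ratio) * t_slow

# Hypothetical numbers: a 100 ns buffer in front of a 1 microsecond core memory,
# with 95 percent of references caught by the buffer.
t = effective_access_time(0.1e-6, 1.0e-6, 0.95)
print(t)  # 1.45e-07 seconds: close to the buffer's speed, at the core's capacity
```

The point of the arithmetic is that the ensemble's average speed is dominated by the fast level as long as the hit ratio is high.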
Memory's discrepancy in time scale
Looking now at the input-and-output devices, we find a tremendous discrepancy in speed between the core and the drum and then the disk, let's say. The random-access times of the disk are typically in tens of milliseconds. Now I suppose we'll get down to perhaps one millisecond. But nevertheless, we go from the millisecond world to the microsecond world. There are two discrepancies actually. One discrepancy is in time, in the random access. The second discrepancy really works the other way: in electromechanical devices, the bits tend to come at an extremely high bit rate, sometimes so high that the electronic memories have a hard time coping with them. So the bit rates are very high, and on the other hand the random-access times are very slow. We have this great discrepancy in time scale, which is one of the main problems that computer architects have to address. Well, the typical solution, of course, is to put the drum between the disk and the core. And many modern installations use extended core. This is a very expensive solution nowadays because this extended core is still very expensive. But it is my feeling that even as the semiconductor industry improves, the core industry will not die; it is probably going to the very large capacities, in which it will soon be more economical than the transistor memories, and will provide a buffer between the tape and the transistor memory. So therefore we can look at this sort of an organization: tape, disks, extended core, transistor memory, and transistor processor. This is the kind of thing that is being thought about in development shops and so forth. We can also look at this type of development, which is the type of parallel development where parallel computers can be made. I will not dwell on this.
I would mostly like to put my emphasis rather on this area, which I think is an exceedingly important one: how to really do something about mass memories.
Replacing disks with magnetic bubbles
The second slide shows the same discrepancy that we have today. And now there is one important newcomer there, which is the magnetic bubbles that are being worked on, which might be a replacement for the disk. Magnetic bubbles are a device in which, in certain sheets of magnetic materials, cylindrical domains of magnetization form, magnetized in one direction within the cylinder and in the opposite direction in the material around the cylinder. What is interesting is that that little cylindrical domain of opposite magnetization is extremely stable; that is to say, its size is very stable with respect to the applied magnetic field. However, it can be moved very easily in the plane of the sheet by putting magnetic fields in the plane. Therefore, these little cylindrical domains, or bubbles, as they are called, can be moved in the plane at rates that can be a million cycles per second or even ten million cycles per second. They also can be packed very densely; approximately a million per square inch is a goal. I don't think that the million per square inch and the million per second are two parameters that go together yet. I think they have been attained separately in the laboratory. But I do think that they will be attained together rather soon because a great deal of work is being done, initiated in many laboratories in this country and abroad. And therefore the chances are that it will be possible to make a little plate that might contain a million bits. Then you can put many of these together and put them in a queue. And you will have a device with no moving parts that will perform the same electrical function as the disk. And so this is certainly a possibility. Whether it will occur or not, of course, I don't know. It is extremely dangerous to be a prophet, although perhaps not as dangerous as people think because most people tend to forget prophesies.
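The two figures quoted above, a million bits and a million shifts per second, can be combined in a back-of-the-envelope access-time estimate. The sketch below is illustrative only; it assumes the bits are organized as shift-register loops past a read station, an organizational detail not specified in the talk.

```python
def avg_access_seconds(bits_in_loop, shift_rate_hz):
    """Average random-access time for a bubble shift-register loop:
    on average a stored bit must be shifted halfway around its loop
    before it reaches the read station."""
    return bits_in_loop / (2.0 * shift_rate_hz)

# One million bits shifted at one million positions per second:
print(avg_access_seconds(1_000_000, 1_000_000))  # 0.5 s for a single large loop

# Splitting the same capacity into 1,000 loops of 1,000 bits each
# cuts the average access to 0.5 ms, at the cost of more read stations.
print(avg_access_seconds(1_000, 1_000_000))
```

This is why the density figure and the shift-rate figure have to be considered together: the loop organization, not either number alone, sets the effective access time.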
But in any case, this is a distinct possibility and one on which many millions of dollars are being invested by the industry in general.
However, I would like to say that the general aspect of the architecture, even if this were to succeed, would not change. In other words, we would still be in a regime in which the bit rate will be the dominating factor to the left, and access time will be the dominating factor to the right. The architect will still have the problem of how to deal hierarchically with access times and bit rates in the same way that we have today. The only thing we will have gained is probably a more compact device and one that's more reliable because there will be no moving parts. But architecturally I don't think there will be a very fundamental change. Now this is both a great asset, because it means we can introduce the device without upsetting the tremendous investment in cores and the general investments we have in our machines; but it's also likely to produce less of an improvement.
Now, another thing about which I'd like to say a few words, and about which I can't help but be rather enthusiastic, is the possibility of changing the architecture altogether with an optical memory, which would look like this, in which all of the storage and recording devices would be put into one, so to speak. All the problems of how to transfer information from one to the other would be considered inherently in the device to start with. This would then be a device that communicates within itself, in which you put the record on one end and the processor on the other, and it would include within itself the transistor technology. The next slide shows the principle of how this might be done. Let me say right away that this is at the moment completely a paper proposal. There are a number of laboratories working on things of this general type, including your own, but none of them has the thing working yet. I would like to present this more as a proposal from an architectural point of view rather than as a hardware solution at the moment.
The idea would be this: You would have an array of light valves, each one of these little points in here being a valve that can let light go or not go. These light valves are electronically controlled by flip-flops in an array like an ordinary transistor memory. In fact, this could be a transistor memory that controls light valves. This communicates with the computer, with the processor. A laser illuminates this array of light valves in such a way that the image of the entire array is focused on a small area of a storage medium. There are some optics that reduce the size, so that the size of the record here is much smaller than the array. Simultaneously, some of the light from the laser, by means of mirrors or what have you, is siphoned off and made to interfere with the picture that comes from the array of light valves, and produces a little holographic picture of the pattern that was on the array of light valves. So now we have a little record on the storage medium of what the pattern of information was in this transistor memory. We have, so to speak, photographed the contents of the transistor memory into the storage medium. We think of the storage medium as one in which we can photograph things instantaneously without development, and then erase and rewrite them. I'll come in a moment to what that medium might be.
Now, when we want to read, we direct the reference beam by means of the same deflecting scheme at exactly the same hologram, and now we utilize the property that holograms have: if you make the angle of the reference beam supplementary to that of the writing beam, the image will appear exactly where it came from. Therefore the image comes back exactly on the array of light valves, and now you put at each position a light sensor, and the light sensor controls the flip-flop. So now you can photograph from the storage medium into the transistor memory in one flash of light. What you have done is to keep the advantage of the very high speed of semiconductor technology and LSI. With one flash of light we can photograph data into mass storage, and with another photograph it back from the mass storage. We avoid all wiring, as I pointed out at the beginning of the discussion, and utilize the very essence of what optics possesses; namely, that it provides millions of channels of communication in parallel for free. That's what lenses are. And so that utilizes the part of optics which is very, very good.
I said I would eliminate all the hierarchy of storage, but here I have, in a sense, a hierarchy. This obviously is one set of elements that you set electrically, and then you illuminate them with light; so it's a type of hierarchy. If the arrangement of the composer and light sensor is combined directly with the memory function, as shown here, there is a hierarchical set. But I would like to propose that there is none from a systems point of view, for the following reasons. First, let me say why one adopts holography at all. There are lots of reasons for wanting holography, not the smallest of which is that it permits you to store redundantly; therefore the record doesn't need to be perfect. But there are also many other practical reasons that simplify the optics a great deal. The point is that there's one price to pay when you use holography; namely, that you must have the whole image composed before you can make a photograph of it. You cannot make it partially. You cannot, say, compose half the page and make a photograph and then compose the other half. By definition, holography is the photography of the whole, and you have to have it all there. Therefore, if I wished to write only one word rather than a whole page, you would think that this system, which forces you to write an entire page instead of a word, would thereby slow you down. Well, what I'd like to propose is that this is really not so if you have this arrangement. Suppose I want to write a word in a page that is otherwise already written. What I can do is read the information of that whole page by turning on the reference beam. Simultaneously, when I do this, I override electrically the information at that one word position by putting the right signals on the digit and word lines. So I have all the words but one coming from the storage medium, and the one word I want to write comes in electrically. Then I immediately rewrite the page.
That means that in two steps, one reading and one writing, which is what we're already used to in core memory, I can write one word. And the same thing about the read. And so therefore the whole system is completely random-access, and I don't need to know that I have pages and bits within the page if I can do this.
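The page-level read-modify-write just described can be sketched in a few lines. This is an illustrative model, not the speaker's hardware: the page is simulated as a list of words, and an ordinary assignment stands in for driving the digit and word lines electrically.

```python
def rewrite_word(page, word_index, new_word):
    """Write one word into a holographic page via read-modify-write:
    1. read the whole page back onto the light-valve array,
    2. electrically override the one word position,
    3. rewrite the whole page as a fresh hologram."""
    composed = list(page)            # step 1: whole page read optically
    composed[word_index] = new_word  # step 2: one word set electrically
    return composed                  # step 3: page rewritten as a whole

print(rewrite_word(["alpha", "beta", "gamma"], 1, "DELTA"))
# ['alpha', 'DELTA', 'gamma']
```

To the processor this looks like a word-addressed random-access write, even though the medium is only ever written a whole page at a time.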
After having said all this, this is true if the rate at which I could write the whole page and the rate at which I could deflect the beam were the same as the rate at which I could switch transistors. The state of the art at the moment is such that this is not true. At the moment it takes longer to deflect light, and generally longer to record light, than it does to switch transistors. At the moment we in fact do have a hierarchical set. It takes longer to go through the page than it does to read within the page. What I'd like to propose is this is only the happenstance of the historical technology at the moment. As the technology of deflecting and controlling light is improved, this will disappear, and there will be no hierarchy.
Equipment requirements for optical memory
Now, I'll just say a few words, because my time is running out, about how this can be done physically. This is only one solution, which probably will not be the final one, but it gives a little bit of reality to all of this. We start with a laser, and we have good deflectors. These are the deflectors in which you send sound waves into a medium, which produces alternate layers of higher and lower refractive index, and thereby a grating. The spacing of the grating depends on the wavelength of the sound, and when you send light through it, the angle of deflection depends on the frequency of the sound. You use this to deflect the beam. Specifically, you can deflect to one position out of thirty-two, say, in microseconds, and deliver, say, ten percent of the light through it. These are practical numbers after all. A very important point is to make the beam hit what we call the "hololens," which is an array of holograms, each of these holograms being a picture of the pixel array. When the beam strikes it, it replays the image that was recorded, and therefore the light falls exactly on the array. This light, in this particular arrangement, is reflected. Let's assume at the moment that the light valve is a perfect mirror, so the image is reflected and comes to this point because there is a lens here. We can think of this point as being simply the image of this. However, the light has a chance to be modulated by the mirror, and I'll say in a moment how we do that. Now, part of the light, in fact most of it, is not deflected by the hololens but goes straight through. It's impossible to make hololenses that are 100 percent efficient; in fact, if you make them ten percent efficient, you are doing very well. So most of the light goes straight through. By means of mirrors, we make it strike the same position.
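The acousto-optic deflector described above follows the standard small-angle diffraction relation: a sound wave of frequency f travelling at velocity v forms a grating of period v/f, and the first-order deflection angle is roughly the light wavelength divided by that period. The sketch below uses hypothetical figures not taken from the talk, chosen only to show the scale involved.

```python
def deflection_angle_rad(light_wavelength_m, sound_freq_hz, sound_velocity_m_s):
    """First-order deflection angle (small-angle approximation) of an
    acousto-optic deflector: angle ~ wavelength / grating_period,
    where the grating period is sound_velocity / sound_frequency."""
    grating_period_m = sound_velocity_m_s / sound_freq_hz
    return light_wavelength_m / grating_period_m

# Hypothetical figures: 633 nm laser light, 50 MHz sound in a medium
# where sound travels at 4000 m/s (grating period 80 micrometers).
theta = deflection_angle_rad(633e-9, 50e6, 4000.0)
print(theta)  # roughly 7.9e-3 radians
```

Since the angle is proportional to the sound frequency, stepping the frequency through a set of values steps the beam through a set of discrete positions, which is exactly how the one-of-thirty-two addressing works.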
Now, no matter where you deflect the light, at that position the beam that comes this way and the beam that goes like so cross each other and produce a hologram on the storage medium.
We use an integrated circuit that has the flip-flops and what have you, and also has a photo sensor in each flip-flop circuit. But between the integrated circuit and a glass plate that we put in its neighborhood, we put liquid crystal. The liquid crystal, of course, is a liquid that is transparent when you put no field on it, but gets milky and scatters light when you put a voltage on it. We arrange the circuitry in such a way that when the flip-flop is one way, it produces no voltage on the diode, and when it flips the other way, it produces the voltage. So it is a perfect mirror in one case and a scatterer in the other. If the light is scattered from any point in here, it has no chance to interfere with the reference beam, and therefore doesn't record the hologram. So that's the way we control the light. The way we read is to shine the light again this way, and, because of the nature of the hologram, the light is read from the storage medium right back onto the matrix; it then falls on the diode detector and sets the flip-flop directly on the integrated circuit.
My time is running so short that I'll just say one word about the storage medium, which is the crux of the whole matter. We need a storage medium on which you can write with light and read with light. One of the possible ways is to use a very thin film of manganese bismuth as a magnetic film, which can be magnetized perpendicular to the surface. Then you use an extremely powerful laser beam such that at the places where the light is intense, it momentarily heats the film over the Curie temperature, and yet leaves it below the Curie temperature at the places where the light is weak. When the film cools, the places that were heated remagnetize in the opposite direction, while the rest remain magnetized as they were, and thereby we plot the magnetic pattern that corresponds to the light pattern of the laser beam. Thereby we plot the hologram. Now, to read out, we simply turn on the laser again, but with a lesser intensity, so the film is no longer heated to the Curie temperature, and the phase of the light reflected from the film is changed depending on the magnetization. This then works like an ordinary phase hologram, and we read out the hologram onto the plate.
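The Curie-point writing mechanism lends itself to a simple threshold model: wherever the light intensity is high enough to heat the film past the Curie temperature, the spot cools back magnetized in the reversed direction; everywhere else the old magnetization survives. The sketch below is a toy model with made-up units, not a physical simulation of manganese bismuth.

```python
def curie_point_write(intensity, magnetization, threshold):
    """Toy Curie-point writing model: each spot whose light intensity
    exceeds the heating threshold remagnetizes in the reversed (-1)
    direction; cooler spots keep their previous magnetization."""
    return [-1 if i > threshold else m
            for i, m in zip(intensity, magnetization)]

film = [+1, +1, +1, +1]       # uniformly magnetized film
light = [0.9, 0.2, 0.8, 0.1]  # interference pattern (arbitrary units)
print(curie_point_write(light, film, threshold=0.5))  # [-1, 1, -1, 1]
```

The magnetization pattern that remains after cooling is the stored copy of the light pattern, which is why a weaker read beam can later replay it without disturbing it.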
Well now, I said all this extremely fast just to give you an idea that the proposal I made before was not a completely paper proposal. It has flesh and bone attached to it. I would hasten to add that this entire system is not one that is ready to be a product tomorrow. There is a tremendous amount of work to be done to do this on the scale at which it would be interesting. And the scale at which it would be interesting would be a capacity, we think, of the order of 10^10 bits on the storage medium; for example, 10^5 pages and 10^5 bits per page, or some other division between pages and bits, but of that general order. These are rather formidable numbers when you look at them these days. So there's a tremendous amount of work to be done. No doubt many solutions other than this particular one I'm showing will probably be necessary. Nevertheless, we think that this already represents a fairly significant step forward. In fact, we are proposing to build a little system like this just to see how it would work, and we hope to have it done by the end of '71. Thank you very much. [Applause]
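The capacity figures just quoted factor as stated; a trivial check, included only to make the page/bit split and the addressing burden explicit.

```python
pages = 10**5          # holograms addressable by the beam deflector
bits_per_page = 10**5  # light valves (bits) composed per page
total_bits = pages * bits_per_page
print(total_bits == 10**10)  # True: 10^5 pages x 10^5 bits/page = 10^10 bits
```

Any other split with the same product (say 10^4 pages of 10^6 bits) trades deflector positions against light-valve count while keeping the same total capacity.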
Relative value of read-only and read/write memories
I would like to know, Dr. Rajchman, whether in optical memory you feel that there will be a real dramatic impact on computer architecture if the recording medium is non-reversible. In other words, all you described, the way I understand it, is a non-reversible medium. Except for fixed holography, or some image processing, or perhaps archival applications, do you foresee a full replacement of bulk memories?
Well, it's a very important question, obviously. Obviously it's much easier to make the read-only memory than the read/write memory; by orders of magnitude it's easier to do it. Therefore, we've pondered the question quite a bit. I would be the last to say that we have an absolutely definite answer of yes or no to your question. However, you have to give an operational answer; that is, on which are you going to work? We've decided not to work on the read-only memory. What made us decide that is that applications where we saw the read-only memory would be useful have been attempted in other forms, other than holographic forms, by other people already, and they have not been great successes. Not so much because of the technology, we think, but because of the fact that the read-only feature was difficult to use. Now, it's very difficult to know whether, if some other things were done, some more software or this or the other, it might have worked out. Therefore this is why I'm saying that I don't want to give an absolute answer; I really don't know the absolute answer. The point is that after we chewed on that question a great deal, we decided that it was much more important to work on the read/write memory, however more difficult it is.
I was wondering if there was also an economy of scale there. To amortize some of those expensive parts, it would mean you'd have to have a very large memory, and thus you couldn't see a read-only memory that large, but perhaps a read/write memory that large. Is that possibly so?
You mean that a read-only memory would have to be larger than a write/read memory to be useful? Is that what you're saying?
No, possibly they'd both have to be very large, and I wonder if we know how we could use a very large read-only. Whereas we certainly could use a very large read/write. That's what I meant.
Yes. I think there is some validity to that argument. No question about that.
I don't agree with that. For example, most tapes are used in archival fashion...in mass quantity these days. It's basically read-only. I do believe that the read-only would have a place, a particular use if it had a particularly attractive performance or cost or something.
The point is that tapes are written in situ, whereas in most read-only memories that people have proposed, the manner of writing in them is a different mechanism than that used for reading. In fact, that's what makes them read-only, really. That is where a lot of the inconvenience lies, because you have to assume another technology to prepare the mask or whatever it is.
Have you considered producing a read-only memory of this type, which could be used in the optical system?
Yes, we have. And that is certainly a very tempting thing to do. But simply making the step of making a very large memory that's accessible by a code, rather than having to [?], is the very first step to make. We think that compatibility of some sort would come rather easily afterwards. The main thing is to make the random-access memory to start with.
Could you tell me the kind of data you would store invariantly for long periods of time on tape machines where you didn't have to write in situ? One class of user that uses vast quantities of read-only data would be the Atomic Energy Commission. They run experiments, for example, and digitize all the data. In other words, that data gets fixed there forever on tapes in ever larger sized rooms, until someone dreams up a new item to test for.
What part of the total data-processing needs for large-capacity storage do you think these kinds of requirements are?
I think they're significant. The whole database, if you will, the whole concept of a database, involves not only the current status of that base, but also the past history. I would venture to say that this is more significant than most people might credit.
Excuse me. I think that in any database, if it doesn't have the capability of updating continuously, inserting, I don't think it's going to have as much large commercial or other use. Of course your example that you brought, I agree. But how many of that type of example can you provide for large use?
Well, there again, what do you do with the tape? You keep the records as they are for some period of time, and every now and then you write a new record. If the read-only record is basically cheap in the beginning or has a lot of intrinsic advantages (I'm making that premise now, which may not be quite accurate), what you do is just reproduce the records over again. You know, redevelop your file, recreate the new file with the updated material.
The whole thing, I think, is that with optical memories specifically, the recording medium is the negligible part of the cost. The whole cost is going into the optics, which is maybe not true in the magnetic tape case. So I think maybe we have to give second thoughts about really evaluating optical memories for that purpose.
Our experience has been that wherever you have a large database, the kinds of things we've worked with, it's not necessary to go in at random and erase words. And in that sense the kind of memory that Dr. Quinn was talking about is something that maybe the industry can look a lot toward: a memory that can be written en masse, in situ, but then read thereafter without ever being written again.
Summary of Al Hoagland's career
Our next speaker is Dr. Al Hoagland. Al received his Ph.D. from the University of California at Berkeley in 1954 and became assistant professor of electrical engineering there. He joined IBM's ASDL, or Advanced Systems Development Laboratory, in San Jose in 1956, and subsequently became manager of Engineering Science for the San Jose Research Laboratory. During this period Dr. Hoagland was instrumental in the development of the company's magnetic data-recording products, notably the RAMAC and the 1301 disk file. He also initiated and led the early technological efforts on a high-density, replaceable disk file. He was the principal contributor to the basic theory and design underlying digital magnetic recording. He's a Fellow of the IEEE and author of the book Digital Magnetic Recording. He's held numerous positions in the IEEE and is currently vice president of the IEEE Computer Society. Dr. Hoagland-
Al Hoagland's comments
Significance of storage to the computer industry
I thought I would start by telling you what you should get out of this talk, what message this talk contains. There are a few themes I'd like to get into your minds. First of all, when I think of storage, I'm not sure you think of the same thing. But I'm certain the question of where and what kind of progress we make in the storage area is going to be key to the computer business, particularly with the growing trend toward database-type applications. It isn't just a technological question, because the question of how you manage storage devices in order to effectively provide a user a facility is one that is probably further from solution than the technology. There's no sense having huge stores if you don't have an economical way to get data into them, so data entry is also vital. I hope you'll end up with the idea that storage is a rapidly growing industry. That may be obvious at least to people on the west coast. Now, I'm going to talk a little bit about magnetic recording, which is a mature technology in that it's about 73 years old. It has an amazing amount of vitality, and I suspect it will see quite a number of further years. I think it's pretty important to put in perspective how you visualize this particular technology, which, in a sense, is older than most things that the IEEE encompasses in the context of technology.
I think the future depends on getting a lower cost per bit of storage if you're going to be able to move a lot of applications toward the idea of on-line storage. In order to get these lower costs, I think it's pretty apparent you have to get much higher storage densities, because there's a direct relation between bit density and cost per bit. The way in which all this is apparently going to come about is not very subtle or complex. It has some of the same elements you see in ordinary transistor or logic technology, and that's basically to scale down dimensions. The last message is that one recognizes that the cost of memory will go down, and therefore it will be economical to have a lot more electronic memory, monolithic memory, if you will. That changes, therefore, the character of the memory that you may have in storage. Basically it's a function of how rapidly it's necessary to make access to what kinds of data. So there could be a change in the type of devices we see with time as both these areas progress in capability.
Comparison of magnetic tape storage and disks
If I can have the next chart. This is an orientation-type chart. For convenience, I identify memory primarily as non-mechanical. Shown over on the left (the microsecond scale doesn't come through), it has predefined bit cells that are electrically connected, so no mechanical motion is required for access. On the storage side you have mechanical memory of one form or another, and it's not strictly a function of whether it's magnetic recording being used or anything else. Basically you're talking about storage of such capacity that you have some sort of motion involved. We have a lot of trade-offs there. We have head per track, which is in the millisecond range. Then in ten to 100 milliseconds, you can go one head per surface and position the transducer over a surface. That gives you a little better cost per bit with a little longer access. Then you can go to devices like tapes, where you have a transducer assembly in a read/write station, and the media all have to be brought by the read/write stations. You can also divide it somewhat by rigid versus flexible substrate, [?] tapes, removable media, and then you have the access time scale. This gap is obviously that which relates to the difference between electronic and mechanical. Therefore there's an access gap of a couple of orders of magnitude time-wise. There's also a gap of a couple of orders of magnitude in cost per bit in that area, because when you go away from predetermined bit cells to a transducer in motion, you have economies that work in your favor. So this cost-per-bit gap may be the most important one to fix on as you look at the shifts in technology over time.
Now that gap doesn't bother me. It's something that seems inherent in a way. You could try and design something to fill it, but on the other hand, we have a tremendous speed range between an airplane and a car, and it's a question of whether anyone would feel compelled to have a uniform distribution of speed. It really depends on how you operate in a systems sense as to whether or not that in itself would be a concern. Now, at the very bottom, just to show you a little difference in the disk area versus the tape, the media cost per square inch is about a factor of a couple hundred more for disk versus tape, which suggests tape tends to be attractive particularly where you have off-line storage. With storage density, on the other hand, in disk versus tape applications or hardware you've got a ratio of about twenty-five to one, which means that on a disk you do a much more intensive job of exploiting the capability of the media for storage. These differences are sort of inherent in how these kinds of structures get used.
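The two ratios quoted here combine into a rough cost-per-bit comparison. A minimal sketch, assuming only the ballpark figures given in the talk (a couple-hundred-to-one media cost and twenty-five-to-one density):

```python
# Rough disk-versus-tape comparison from the ratios quoted above
# (illustrative only; both figures are the ballpark numbers from the talk).
media_cost_ratio = 200.0    # disk media cost per square inch vs. tape: ~200x
areal_density_ratio = 25.0  # bits stored per square inch on disk vs. tape: ~25x

# Cost per bit scales as (cost per unit area) / (bits per unit area).
cost_per_bit_ratio = media_cost_ratio / areal_density_ratio
print(cost_per_bit_ratio)  # 8.0
```

So even with disk exploiting the media far more intensively, disk media still comes out roughly eight times the cost per bit of tape, which is why tape stays attractive for off-line storage.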
Could I show the next slide? These are just uses of magnetic recording: backing storage, database storage, archival storage. Key parameters may be the dollars per megabyte per month rental one would identify with the unit.
Implications of growth in storage capacity for computer industry
You have, again, the question of making trade-offs. I don't want to get so much into this. In my particular presentation here I'd like to give you some feeling as to how one could view the future in terms of technology. So could I have the next slide? This is sort of to show how important storage has been and is becoming. It's a pretty big market area in its own right. The growth of storage is going up rather dramatically, as you can see from the chart. Of course these are all ballpark figures to give you a sense as to the way the industry is going. In the large growth in storage capacity, I have identified three components: One is the market for systems as the computer industry grows. Second, the percentage of the systems dollar, the hardware cost, allocated to storage is a much larger fraction of the total cost than it has been. So again, you'd expect a shift into more of the storage area. The third factor is that as the cost per bit goes down with time, it means you can get a great deal more capacity at any given cost you're willing to pay. So from those points of view, this kind of growth may not seem that extreme.
Technological innovations in storage, 1956-1971
Could I have the next slide? Now I thought I'd give you a short snapshot of time versus progress. So I took 1956 for the first disk file, which had 2,000 bits per square inch, made up of 100 bits per linear inch and twenty tracks per inch. Now if you go down there, you can see the progress that's been made over a period of fifteen years, ending in 1970. I used IBM units because I had no difficulty getting that kind of data, and it made a more coherent presentation, but it more or less typifies the technical state of the art. You have a product announced last year at 800,000 bits per square inch, composed of about 4,000 bits per linear inch and 200 tracks per inch. So in a fifteen-year time span, you have about a factor of 400 gain in storage density, which is composed of about a forty-to-one gain in bit density and a ten-to-one gain in track density. It may not be quite as impressive as in the memory and logic area, but in most technical activities that would be considered a fairly dramatic rate of progress. Of course there's no feeling among anyone at this point where we may be on the curve of progress. There is no question, at least from one point of view, that there is the largest investment in R&D to advance the state of the art that has ever pertained at any point in the past. One would certainly expect the rate of progress to be fairly significant. Now the precursor to all this, which was really kicked off by the recognition of the value of random-access or direct-access storage in the computing-system area, was the magnetic drum, which was down at about 1,000 bits per square inch. This means, in a sense, that from the day when we first started to apply magnetic recording to direct storage for computers, we've got an advance of roughly 1,000 to one in storage density.
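The density arithmetic in this passage can be checked directly; a small sketch using the figures quoted above (areal density as linear bit density times track density):

```python
# Checking the density figures quoted in the talk (all numbers from the passage).
def areal_density(bits_per_inch, tracks_per_inch):
    """Areal density in bits per square inch: linear density times track density."""
    return bits_per_inch * tracks_per_inch

d_1956 = areal_density(100, 20)     # first disk file, 1956: 2,000 bits/sq in
d_1970 = areal_density(4000, 200)   # product announced 1970: 800,000 bits/sq in

print(d_1956, d_1970, d_1970 // d_1956)  # 2000 800000 400
```

The 400:1 overall gain is exactly the product of the 40:1 bit-density gain and the 10:1 track-density gain, which is why the two component gains multiply rather than add.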
Could I have the next slide? This is to show that you get this trade-off as you go to higher storage densities with cost. So roughly you can see over time--and again these tend to be relative--that as you go to higher storage densities, you get significant reductions in cost per bit, which opens up new market applications that otherwise you wouldn't be able to touch because you couldn't afford to store that data on-line. It sort of shows that if you look to the future, one obvious path to follow is to continue along this curve by looking for higher storage densities. Well, the question is, is that feasible?
Can we see the next slide? The name of the game here, I think, is like LSI, as I understand it. You go to smaller and smaller dimensions. The key dimension normally in recording is the spacing. You can see over this period of progress I have cited, we've roughly reduced spacing by a factor of ten down to fifty micro-inches. At the same time, surface thicknesses, recording films, have gone down from 500 to about fifty, or another factor of ten. The same applies to the head design, the gap, and so forth. Again, the factor that swings it tends to be spacing. Now, if you look at that chart in an absolute sense, you'd say you're pretty close to bottoming out. But you really don't look at charts in the absolute sense. You should look at them in terms of what relative kinds of gains can be made. You can always divide fifty by ten, and you could divide that again by ten. The curve in itself only gives you the range of dimensions you should be thinking of; the potential gain for progress obviously relates to the scaling. You have room to make the same kind of progress if you can do the same kinds of things over and over again.
Could I have the next slide? I thought I'd briefly show what had been done to get where we are, and then we could talk about what could be done. In technological innovations, the name of the game is the registration problem between the transducer and surface. In 1956 a pressurized air-bearing, dual-element head device and special recording techniques allowed you to precisely handle the mechanical problem of holding a little head element on a disk that was spinning with a great deal of run-out. You chose to use a two-dimensional head-positioning device to get a low cost per bit, which gave you a relatively long access time. But in '56 the key approach was the pressurized air bearing, and it led to the disk file itself. In '61 you were able to get rid of having to have pressurized air by developing a slider bearing, or a gliding head. That also simplified the design of the structure, the fact that you didn't end up with a compressor required and so forth, so you could afford the luxury of putting a head on every surface. With one-dimensional positioning, then, you dramatically reduced access time and moved into a new game, where we have the kind of files you associate today with a disk pack. 1967 saw gains in the actuator, and in 1970 we were beginning to recognize that mechanical tolerances weren't all that great, and you ought to use some sort of feedback system to maintain registration. So we now have servo access where, in a sense, you sense how far off track you are in read position, depending on the signal you sense. This opened up room to go considerably further with the track end of it. I don't know where we'll go, and I don't think anyone really knows, but it's sort of assured that people will try and get closer to the surface because the benefits are so high. A lot of people have been recording on the surface, some beneath it, in some way. But that's an area people are going to stress very hard.
Again, if you could go from fifty to five, you've got another factor of ten there that could be very dramatic in terms of performance.
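The scaling argument running through these passages can be sketched numerically. A minimal sketch, assuming the spacing started around 500 micro-inches (the factor-of-ten reduction down to fifty cited earlier implies it); the further step to five is the speaker's hypothetical, not an announced result:

```python
# Head-to-surface spacing in micro-inches, where each generation divides
# the spacing by roughly ten. 500 -> 50 is the historic step implied above;
# 50 -> 5 is the hypothetical further step the speaker describes.
spacing = 500.0
steps = [spacing]
for _ in range(2):
    spacing /= 10
    steps.append(spacing)

print(steps)  # [500.0, 50.0, 5.0]
```

The point of the chart, as the speaker says, is relative rather than absolute: each division by ten is the same kind of engineering step, regardless of where on the curve you start.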
Another thing that evolved recently is the recognition that if you could make magnetic heads more like the way you make other solid-state devices, and less like assembling a precision watch, you'd have great advantages, those being particularly related to some of the items I've mentioned. You could perhaps build lower-cost head structures of much higher precision and lower mass. By going to very thin-film-type devices, you could end up with much higher frequency response. So that's certainly an area also which people will be aggressively looking at to make further progress. If you do that, you then can talk about having more head units than normally you'd economically think sensible. Well, those things lend themselves to further progress on track density, because if you can get low-mass head systems, then you have a greater possibility to get high-precision mechanisms to do registration. When you get to these high densities, you face the fact that while the quality of magnetic surfaces is extremely good, as you go further and further, you need to use sophisticated coding techniques to account for imperfections and defects. So we end up with a lot more work on new techniques for coding and modulation. These are the sort of things that will be pushed to make further progress.
Could I have the next slide? This is to summarize the message that I started with. A day or so ago I looked at the Diebold Report on data storage devices, and it discussed magnetic recording and so forth. It also discussed optical recording. It had a comment in there that I thought I would quote because I sort of feel it would stimulate the audience. In assessing other storage mechanisms that could be competitive with magnetic recording, it said: "There's still room in the cemetery." I think you have a moving target here to shoot at, and I hope a few people try. Thank you. [Applause]
Development of higher storage densities
I did notice that the cost curve was getting kind of flat. It seems like you were suggesting that there were plenty of places we could make improvements. Are these improvements going to do something for the cost curve, or are we just going to get better performance along with slightly lower costs?
I suspected someone would make that comment. First of all, that cost curve was not plotted against time. It was plotted against storage density. But plotted against time, it wouldn't have looked like it was getting flat at all. So the real question you have to ask is how much higher you feel density will go, and whether that will come slower or faster. Over a period of time, I could foresee densities being achieved such that costs will move down in a very steady fashion. But I did plot it against storage density because that's the key area you have to progress in to really reduce costs. The reason it tends to stretch out is the way storage density has been related to the technical innovation.
I have a question. What materials do you intend to use for the contact recording?
I don't intend, really, to use any. You're asking what materials may people try and use? I think they'll try and use the materials they've used for some time, such as particulate surfaces. I think they'll explore other materials. As you probably know, Data Disc for several years has had a video buffer which has used a cobalt-nickel plating in contact recording. So I haven't seen anything that would keep one from believing there are several ways one can go, as far as materials are concerned.
That track record you showed, Al, for density, I think was impressive. I'm not sure I knew how to correlate it directly with your cost curve. Also, where does access time come in with that density? What has access time over that period of time been, and where is it now? And if you've got competition with new technologies: I believe your cost curve ended up somewhere in the five to ten milli-cent per bit range, for example. Suppose someone comes up with a solid-state equivalent at five milli-cents per bit, with two orders of magnitude improvement in access time? What's this going to do to you?
Well, is your question along that line?
Mine was the same question, only I wouldn't have worded it so kindly.
Well, maybe I can hear your words. What was that comment?
My observation was that we're working on the density, and it seems to me that's the wrong problem to work on. The problem you didn't show was access time, and the fact that that didn't keep up in the same way. I felt that that was the key parameter creating a lot of problems in operating systems and for the rest of the uses.
Access time showed up earlier in one of my foils actually, where you can design and choose your trade-off. It's a mechanical structure you're talking about, from head-per-track or fixed-head files all the way to tape-type devices. You get this broad spectrum from one millisecond to several seconds where you can put your design point, if you wish, for access time. We have products which are called drums, we have products called disk files, we have products that are called tapes. The trade-off there is the cost per bit and access-time kind of performance that makes sense from a systems point of view. When you talk about addressing access time, I don't foresee us really moving into the electronics frame of reference. Certainly you could visualize moving into improvements in access time by putting more heads on a disk and making it more of a fixed-head-file type design. When you do that, you're paying a little more in cost per bit. So there is this sort of point where you want to do this. I would be one person that would acknowledge, if you could get enough storage made out of bubbles or monolithic memory or what have you so that you didn't really have to face the kind of capacity issues I'm discussing here, that would probably be the way to do it if you could buy electronic storage at the same price. That's not foreseen by anyone as happening in the foreseeable future. Therefore you have to accept that, for the future, you are going to have storage other than just main memory. When you go past that point, you're going to be into the kind of trade-off I've alluded to here. It may be that some of the product structures you see today may not be the same as you will have in the future. But I think the demands for storage are such that it's hard to meet the requirements with the mechanical technologies we have.
What would normally be the best device to use in, say, the 100 megabyte range? I don't know really where disks and drums are, but say you had a 10 to 100 megabyte arrangement?
I'll give you a better one than that. How about 100 megabytes to about 1,000 megabytes? That's a good range, right? Fairly large?
Some real-time machines don't need that much, but okay, let's take that. Well, I believe we have a pretty good product in that range. This pack has got 100 megabytes, and you can get eight spindles and get almost 1,000 megabytes with an access time in the range of thirty milliseconds. Now I don't know what that would cost if you bought it in extended core, but I daresay you wouldn't buy it. So there is a market for these very large capacities. The only way they can be satisfied now is by some electromechanical kind of device.