Oral-History:Noel Sharkey


About Noel Sharkey

Noel Sharkey was born in Belfast, Northern Ireland, in 1948. When he was seven, his family moved to Coleraine, Northern Ireland, the town his mother was from. He was educated there but felt he never quite belonged because he was a "city boy". His first experience of university was as an apprentice electrician in Coleraine, helping to build what is now the University of Ulster. Sharkey then moved to England, where he worked as a psychiatric nurse from 1970 until he returned to education, studying English and mathematics at O-level and music, psychology, and English literature at A-level. He went on to a degree in psychology, during which he became interested in artificial intelligence and, from there, robotics. He holds a Ph.D. in experimental psychology, a DSc, and an honorary doctorate in Informatics from the University of Skövde, and holds the designations FIET, FBCS, CITP, FRIN, and FRSA.

Noel Sharkey has taught many university subjects, including engineering, philosophy, psychology, artificial intelligence, and computer science. He has held positions in the U.S. at Yale and Stanford Universities and in the UK at the Universities of Essex, Exeter, and Sheffield, serving as Director of the Centre for Cognitive Science at the University of Essex and Director of the Centre for Connection Science at the University of Sheffield. Sharkey is best known for his television appearances as an expert on robotics and for his more than 150 scientific articles and books. He has appeared in more than 300 episodes of the BBC television series Robot Wars and Techno Games, and was a co-presenter of Bright Sparks. His research interests now centre on the ethical application of robotics and artificial intelligence in areas such as the military, child care, elder care, policing, surveillance, medicine and surgery, education, and criminal activity.

Sharkey is an advisor to the National Health Service think tank Health 2020, a director of the European branch of the Center for Policy on Emerging Technologies, a co-founder of the International Committee for Robot Arms Control, and a member of a Nuffield Foundation working group on the ethics of emerging biotechnologies. He is active in many professional organizations and has served on boards and committees for magazines and journals; he was Editor-in-Chief of the journal Connection Science for twenty-two years and is an editor of Robotics and Autonomous Systems and Artificial Intelligence Review. His honors include designation as a Chartered IT Professional (CITP), and he is a Fellow of the UK Institution of Engineering and Technology, the British Computer Society, the Royal Institute of Navigation, and the Royal Society of Arts.

About the Interview

NOEL SHARKEY: An Interview Conducted by Peter Asaro, IEEE History Center, 24 March 2013

Interview #808 for Indiana University and the IEEE History Center, The Institute of Electrical and Electronics Engineers, Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA or ieee-history@ieee.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Šabanović, selmas@indiana.edu.

It is recommended that this oral history be cited as follows:

Noel Sharkey, an oral history conducted in 2013 by Peter Asaro, Indiana University, Bloomington, Indiana, for Indiana University and the IEEE.

Interview

Interviewee: Noel Sharkey

Interviewer: Peter Asaro

Date: 24 March 2013

Location: Sheffield, UK

Education, Psychology, Robotics, AI

Asaro:

So we'll just start by having you introduce yourself, tell us where you were born and where you grew up and went to school.

Sharkey:

<chuckles> My name's Noel Sharkey, and I'm a professor of robotics and artificial intelligence at the University of Sheffield in the U.K. In 1948, I was born in Belfast in Northern Ireland at the Royal Victoria Hospital. I lived in Belfast probably until I was seven years old. My father became ill when I was four, and so we moved – eventually moved to the country to a little town called Coleraine, where my mother was from, and my father's mother lived there as well, but it was a little hick town. I always felt that I'd never really quite fitted in because I was a city boy and I was with all these hick kids.

Asaro:

Yeah. Where did you go to university?

Sharkey:

I went to university – well, I went to university in England, but that was a lot later. I was educated in Coleraine, and then my first experience of university was working as an apprentice electrician at the University of Coleraine, where I helped to build the University of Ulster, as it's called now. And then I came to England and became a psychiatric nurse in 1970, when I was twenty-one years old, and I did that for a couple of years, but I found it really awful. It was a thousand-bed hospital, and it introduced me to an awful lot of psychology, but it was kind of a cruel place, really, in those days. It was still padded cells and you had to jump on people and trail them into padded cells and inject them with things, and it got too much for me, and I eventually dropped out for a couple of years, and then went back to high school – did high school all over again for two years – and then went to university. And as soon as I went back to high school – or college, it was, A-level college in England – as soon as I went in there, I realized it was nice to be in a warm place, and the work wasn't too hard, so I never left again.

Asaro:

What did you study when you went back?

Sharkey:

Well, first of all, I had to do what we call O-levels and A-levels, and I did English, mathematics, and then at A-level I did music, psychology, and English literature.

Initial Projects in Robotics

Asaro:

How did you eventually get interested in robotics?

Sharkey:

Well, that's a long – that was a long, long story <chuckles> into that. It was because I had been working in – I was a psychologist to begin with. I went and did a degree in psychology, and then I started getting interested in artificial intelligence during that degree. And so when I did my Ph.D., the Department of Psychology bought me a computer, and so I started writing poetry generators because I was president of the Poetry Society. So I wrote my first poetry generator, began writing AI programs, and I connected those with models of word recognition. So I was writing – and then I started working on AI and natural language, and I went to Yale University then as a postdoc and worked in the AI lab with Roger Schank, developing programs that could understand language. And during this time, from 1981 onwards, I was really interested in machine learning, and then had to go into a whole training program in mathematics, so I did calculus and linear algebra, but that was in the background while I was doing psychology and AI. And then of course the natural step for me was to develop machine learning algorithms to do parts of language, language parsing, language understanding. But after a while – it must have probably then got to about 1990, somewhere round about there – before there, actually – 1988 – I started thinking that there was something wrong with doing natural language understanding by machine, and I began to think about the idea of using robot arms, because they could interact with the world. I was very influenced by Lakoff, and I went to Berkeley for a couple of months and worked in Lakoff's group and got into the whole idea of grounding my language in the world. So I thought putting pencils into things, learning metaphors – so it was Metaphors We Live By – all that kind of stuff with language. But I'm somebody who gets very involved in things, so I started building – got a robot, started working on robotics using my neural network learning algorithms, and within a couple of months I'd completely lost interest in language altogether and was just caught up with the problems of robotics, and from then on it was building robots, learning electronics, programming robots using machine learning algorithms and so on for a very long time – and no language work.

Asaro:

What was your first robot?

Sharkey:

My first robot was called Frank, and it was – it looked like a biscuit tin. It was about that size, like this – so a little biscuit tin. And it was very, very mechanically unsound. It was on tracks, and it had two sensors at the front – there was an infrared sensor and a sonar sensor at the front, so it sort of was like this, and you could move them or twist them, and if it hit the wall, the sensor could be knocked to one side. And I still remember the – I've got a video of the very first robot experiment I did. Because the thing was, I was very interested in this idea – in those days, when we were doing PDP or neural networks, as it was called, you always had a teaching signal. So you'd put in an input, which could be sensors, you get an output which could be completely wrong, but it would do adjustments, error adjustments, using the teaching signal. But you know, if you're driving a robot around, a free-roaming robot, where do you get the teaching signal from? So I thought about this quite a long time, and I was thinking about this before the robot actually arrived, so I was waiting for the robot – it took about six months to get it built and get it right – and I was thinking about this, and I strangely met this guy who worked with chickens, and you'd wonder what that had to do with <chuckles> – with language or robotics.

But he had this thing, because chickens when they're born, they just go peck, peck, peck in a very, very rough way, picking up little bits of corn or whatever. And then as time goes on, they get really smooth, and they can peck and do it all in a sort of nice dancey little way, a very smooth way. And he had a problem with, "How is it that they" – they have this genetic beginning bit, and they peck, peck, peck, peck, and then later on it gets all smooth, so it develops, and how does that happen? What are the sort of neural correlates of that? So taking that into mind, I thought, "I know. What I'll do is I'll write a very, very, very simple program to drive the robot down the corridor and avoid obstacles." So it will go down the corridor, go to the bottom, do a sort of turn and come back up again. And in the meantime, the neural network is kind of looking over the shoulder of the program, so it's sort of looking over the shoulder and it's getting – it's being trained all the time. So it's getting the teaching signal from this program. But the program was going to be very crude – I knew that. It was going to be jerking about all over the place. But the idea is that you've got – what you've got here essentially with a neural network is an adaptive filter. So when you take an adaptive filter, what it will do is there's a lot of twisting and turning, but it's going to smooth it out. It's going to take out the outliers and be very smooth. That's the idea. So first experiment – and I had this research assistant, and you can hear him on the video saying, "Oh, get this ready," and things.

And so we send it down the corridor, and sure enough it's moving over to the wall and avoiding the wall, coming back to the next wall, and is sort of zigzagging down the corridor. And it gets to the bottom and it does a sort of 12-point turn where it's trying to get round the corridor and things, comes back up again. We switch it round so the neural network is now driving it. All it's done is trained going down the corridor zigzag. It goes down the corridor perfectly smoothly, gets to the bottom, does a nice, near 3-point turn, comes back up again. A student comes out of a room, and it drives around the student, and on the video you can hear everybody cheering, because it was beyond our wildest belief, and I wrote a paper on that for machine learning. So it was a very, very exciting experiment done in two hours – actually worked, which is not that usual.
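The training scheme Sharkey describes – a neural network "looking over the shoulder" of a crude hand-coded driver, using the program's commands as the teaching signal and then smoothing them out like an adaptive filter – can be sketched in a few lines of Python. This is a minimal illustration under assumed details (a two-sensor robot, a tiny tanh network, made-up thresholds), not a reconstruction of his code:

    import numpy as np

    rng = np.random.default_rng(0)

    def crude_controller(left, right):
        """Hand-coded teacher: jerky bang-bang obstacle avoidance.
        Takes two proximity readings in [0, 1]; returns motor speeds."""
        if left > 0.6:            # obstacle close on the left: swing hard right
            return 1.0, -1.0
        if right > 0.6:           # obstacle close on the right: swing hard left
            return -1.0, 1.0
        return 1.0, 1.0           # otherwise drive straight ahead

    # A tiny two-layer network trained online while the crude program drives.
    W1 = rng.normal(0, 0.5, (4, 2)); b1 = np.zeros(4)
    W2 = rng.normal(0, 0.5, (2, 4)); b2 = np.zeros(2)
    lr = 0.05

    def forward(x):
        h = np.tanh(W1 @ x + b1)
        return np.tanh(W2 @ h + b2), h

    for step in range(20000):
        x = rng.uniform(0, 1, 2)                  # simulated (left, right) sensors
        target = np.array(crude_controller(*x))   # the program is the teaching signal
        y, h = forward(x)
        dy = (y - target) * (1 - y**2)            # backprop through the tanh units
        dh = (W2.T @ dy) * (1 - h**2)
        W2 -= lr * np.outer(dy, h); b2 -= lr * dy
        W1 -= lr * np.outer(dh, x); b1 -= lr * dh

    # The net now drives: near the teacher's hard threshold it produces graded
    # turns rather than all-or-nothing lurches -- the "adaptive filter" effect.
    for left in (0.55, 0.60, 0.65):
        y, _ = forward(np.array([left, 0.1]))
        print(f"left sensor={left:.2f} -> motors={np.round(y, 2)}")

Because the smooth units interpolate between the teacher's all-or-nothing commands, the trained network steers gradually where the crude program lurched, which is the smoothing behaviour the corridor video showed.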

Asaro:

Where were you working at that time?

Sharkey:

I was working in Exeter University in the computer science department, and that's in Devon – lovely district – and I'd gone to university there as a psychology student and as a Ph.D. student. So I was both psychology – experimental psychology Ph.D. student with AI, cognitive science-y thing. But I didn't go straight from – I went off – from working as a Ph.D. student there – it was quite funny, because I got very depressed because the problem is that if you're working in psychology, psychology is a very, very tight experimental subject, and my Ph.D. advisor, Don Mitchell from Exeter, was a really good experimentalist, and I chose him as my Ph.D. advisor because I knew he would keep my feet on the ground. And what we were working on together was models of word recognition – so how do you recognize words – and it's something called the lexical decision task, and so you show somebody a word, and they have to say, "Yes, it's a word," or "No, it's not a word," as fast as possible. So a word might be "chart" or "chest," and a non-word might be something like "asymtart" [ph?] and you have to balance them for word frequency and imagery value, that kind of thing. So that's what people do, they press a button, and it's working on very detailed stuff. But I have this funny thing where – in those days, the models of word recognition were generally bottom-up. The idea was you orthographically parse the word and you see that it's a word, you get the wordship [ph?] and the orthographic parsing. You pass that up to a module called the Lexicon, and you look up all the meanings of the word – all of the meanings – so you're holding all of the meanings in suspense here, and then you go to the next word, and you're holding all the meanings in suspense, until you get to the end of a sentence. Then you put the propositions together, and then you go to the next level with all the meanings and the propositions, and then you bring pragmatics in at that point.

And it struck me, from my own intuitions and research I had done with word recognition, that that wasn't the case, that I thought that knowledge of the word really came in much earlier. It happened in the perception process itself. And that's all right saying that, but you're doing experimental psychology. How do I write materials to do this? How am I going to be able to do this? So I went – I turned – looked around artificial intelligence, because I was very keen on that, and I found Roger Schank's and Abelson – Schank and Abelson's work on scripts. And so the idea of scripts is you have – it's the assumptions underlying language. So you walk into a restaurant, you sit down, and you order food. But what it's not telling you is that you – when you walk in, you don't say to somebody, "What an interesting story," right? "Walked into a restaurant last night, and I looked around, found a table, and then I went and sat down, and I picked up the menu, and the menu had writing on it, and I looked through the menu, and the waiter came." Pretty dull story. So we leave that bit out. That's the background default knowledge. And so what I did was I collected norms from hundreds and hundreds of people on lots and lots of these little scenarios, like restaurants.

One I really liked was children's birthday party. So then I developed the model of word recognition, which was, "The children sat round the table prepared to sing. The children at the birthday party sat round the table and were prepared to sing 'Candles' or 'Fliggles' [ph?]." And people were much quicker to recognize the word "candles" in that context than they were to recognize the word "rabbits." So I was demonstrating that word recognition was top-down like that. So it was quite an interesting sort of idea, and I developed it with scripts. But the problem was that in those days psychologists in Britain certainly did not like artificial intelligence. So when I went to a conference – I remember my very first conference, which was a British Psychological Society conference, and people really didn't like that work. "What are you doing AI for? What's that nonsense? That's not experimental psychology, that's the gut philosophy stuff," or whatever. They didn't like philosophy either much. <chuckles> Psychology is hardcore. And then I went and gave another paper at the Experimental Psychology Society, same sort of response, although a few people – there were a few good people around who thought, "This is an interesting new avenue of research." But it was not – it was kind of slightly taboo at the time. But I persisted with it and got on with my Ph.D., and I was about – pretty much on with the Ph.D., and finished it all, and I had written two chapters and I started applying for jobs in the U.K., psychology jobs. Didn't even get a reply.

Nobody was even the slightest bit interested. So I was a bit depressed and I hadn't – I had a couple of months where I didn't write anything. And then out of the blue, I had a letter from Abelson of Schank and Abelson from Yale, and he just said, "I've read a little paper of yours from a conference. Coming to England. Can I talk to you?" So I said, "Okay, sure," and I met up with him in Oxford. And I really hit it off with Bob. He's a really funny man. We giggled a lot together, because we'd talk about script research with these defaults, and he would come up with stories about Lefty, and we'd end up just giggling. And anyway, he went back to the States, and about a month later he wrote to me and said, "I'd like to offer you a job. Come and work in my lab."

And so that was a big change. It was going from couldn't get any jobs in England to getting into probably the – well, certainly the best AI lab in the world at that time for natural language processing. So I didn't actually work with Abelson in psychology, I worked with Schank in computer science, so I joined the AI lab there. And I could program a bit and things, and had written these little programs and poetry and things, but when I went there, I really started learning how to program natural language and use Lisp, really. So that was quite a great effort. And then from – but my real hero in psychology – my Ph.D. advisor – when I was starting out in psychology, my Ph.D. advisor said to me, "Well, my Ph.D. advisor said to me, 'Pick a psychologist that you really like and then try and emulate them.'" So he asked me, "Who's your favorite psychologist?" and I went off, and it wasn't very hard to work out. It was Gordon Bower from Stanford, who'd written most of – he'd written about a third of cognitive psychology, so he was brilliant, and I had studied him in a lot of different fields. I'd done my undergraduate projects in memory, and Gordon was the head of memory. But he had also written these papers, the first psychology papers on the scripts that I was working on, but in a different kind of way. He was looking at story understanding. And so his work was really exciting.

So when I was at Yale working there as a postdoc, I saw an advert that Gordon Bower had put up for Stanford, and I looked at it, and I was very nervous because, well, it wanted a cognitive psychologist who understood memory and – but it didn't say anything about the Yale theories I was working on, and it also said you had to be a hypnotist, <chuckles> – okay? <break in recording> – hypnotist at all, and I wasn't really that interested in becoming one. So I kept walking past and looking at this advert – it was up for a couple of months – and thinking, "Well, maybe I should apply anyway, and I'll just tell them I don't do hypnosis." And after a while, I just gave up and I didn't think any more about it, so the time lapsed and the advert had expired and – well, gone. About a month later, six weeks later, I was sitting in my office and I get a phone call, and it was Gordon Bower. "Whoa, Gordon Bower! You're on the phone to me." So he says, "Well" – he said, "Did you see my job advert?" I said, "Yeah, yeah, I did, but I'm not a hypnotist." He said, "Never mind about that." He said, "I really need somebody who understands the Yale theories, and it has to be a psychologist." And he said, "There are only two of you, and the other guy's taken a job at Carnegie Mellon. So can I come and interview you for a job?" "Yeah, certainly."

So he came, interviewed me, and I guess it was – in all my career after that and before that, Gordon gave me the toughest interview I ever had in my life. My wife, Amanda – Gordon was very good at this. He always got you to do things. So he called us and said, "I'll tell you what, I'll interview you in the car if you give me a ride to the airport in New York. <chuckles> I'm coming to New Haven." So we drove him through New York with my wife at the front driving, and she had only been driving for about two months, and Gordon giving her directions. He would say things to her like, "If you just turn left round here," and the other way around – and then, "Well, Noel, give me the exact difference between the logogen model and" – whatever the other models were; I can't remember. <chuckles> But he just kept plunging these – he's very famous for this. He was called the Pitcher, because he just punches you in the stomach with questions, and it was <chuckles> really rigorous. He's sitting in the front seat of the car doing navigation; I'm sitting in the back getting beaten about. So then he gets us to stop, go into a little restaurant, and he starts drawing diagrams of networks and things and asking me a lot of questions, and then he says, "Thank you very much. I'll let you know."

And about three days later he calls me and offers me the job, and we start negotiating and – I was a bit stupid in those days. I didn't realize what I was going to, how good it was, so I was negotiating quite hard – like, I got a really good office that was at the front of Stanford, looking down through the palm trees, and talked up my salary, got my wife a half-time job there. But you know, Gordon is one of those people that if he wants you – the more I was kind of tough – he doesn't like yes-men, Gordon. And the tougher I was in negotiating, the more he liked me. <chuckles> So he really wanted me in the end. And he was just – he just changed my whole academic life, really, because he took it very seriously. I had to work very hard for him, and I did work very hard for him, doing experimental work, doing computational modeling. But he took it upon himself – his responsibility was to make sure that when I left his lab, I knew an awful lot more than when I went there. He spent a long time teaching me how to talk. He taught me how to write CVs. He taught me how to do mathematical modeling. I remember he taught me how to do interviews. He practiced me to death before I went for job interviews, and he taught me how to negotiate a good deal and get a good lab budget. I'll be grateful to him forever.

And I was there – I could have had a five-year contract there, but he let me out of it because I said, "I want to go back to England." There was a kind of reverse brain drain going on. So they said up to the age – I was 35 by then – so if you were 35 – it was called a New Blood Lectureship – you could go back to England and get a really good post, because you were on a half teaching load and half admin load for five years, because your salary was paid for by the research council. So he said, "Go back." And I went back and did all my negotiating, got a good lab budget, got all these things that other people don't usually get because they don't get trained on negotiation by Gordon Bower. <chuckles> And in the linguistics department – because I was doing language. So I started in the linguistics department at Essex, and again had to go into a very sharp <chuckles> learning curve because they made me teach linguistics. So I had to learn transformational grammar, I had to learn orthographic parsing of Medieval English – <chuckles> it was some ridiculous stuff I had to learn. And I was there for probably – after two years I got tenure because other people were offering me positions. So they gave me tenure to keep me, and then after four years – three years – I got promoted to Reader and Director of Cognitive Science there. But then Exeter's computer science department started chasing after me, and I had this nostalgic attachment to my alma mater, so I went to computer science and took my robots, and it was better.

Well, the attraction for me of computer science was – linguistics was great to begin with. I was really into language and language processing by computer, etcetera. But as time went on, I was getting more and more and more into machine learning. And the one thing linguistics students are really bad at is mathematics. So I couldn't teach the things I wanted to teach, but I could with the computer science students at Exeter – that was very easy. So that was brilliant. But in many ways, I regret it. I mean, computer science was very good for me there then because I was young. I like hard problems. I worked. I had to teach computer science then and learn a lot, and that was great and very enjoyable, and I had to set up a psychology lab there. So I was doing computational modeling, AI modeling with neural nets, and also running psychology experiments. But as I went on in computer science – I stayed there, and then I got sort of called for a job at Sheffield – called for an interview there. They invited me to come for an interview for a full professor job in computer science. And at that point, I started getting more computer science-y. I got a really big grant in safety critical systems, so I was moving away from my interest in mind, really. And then we started working with diesel engines and that kind of thing. My wife always traveled with me, changed discipline with me. So we moved to Sheffield, and I became much – I gave up the psychology altogether at that point and became a pure computer science, machine learning guy, because it was good – it was exciting at the time, and hard. But then later on I started regretting it a bit, because I had come out of humanities, I had come out of the arts. I even went to art college for a while, for a year. I had done so – and music was my sort of thing, and here I am in hardcore computer science with all mathematicians round me and stuff, and I had nobody to talk to about the kind of work I was doing. So that was a little bit frustrating and – but at the same time, if I'd stayed in psychology – I was just a nomad, really, because if I'd stayed in psychology at the soft end of things, that wouldn't have satisfied me either. So it was kind of difficult.

Asaro:

So after the robot that navigated the hallway, what was your next robotics project? Next most significant one.

Sharkey:

Well, the next robotics – straight after that, I fiddled around with that and I wrote some papers, and then I got a fairly large grant on robotics, and bought a Nomad robot from the United States for 25 thousand dollars, and that was a – you used to see them about a lot in the '90s. It was a big tower about that size, covered with infrared and sonar sensors, and had a laser on it, and very sophisticated by comparison to this thing that used to rattle around. It was very precise, precise navigation, etcetera. So then I started doing proper robotics experiments, and by that point I had a whole lab working in robotics. I had about three research assistants, postdocs, and I had 12 undergraduate – no undergraduate, sorry – 12 graduate students. So it was quite a thriving lab there, all doing robotics stuff.

And we got little Khepera robots as well in the grant – so lots of robotics experiments, and started working in many different areas. We worked on navigation, localization – not GPS. So the robot would have to work out where it was in the room and navigate around the room, and so I started learning things about odometry, for instance, which doesn't work at all. I mean, you have an odometer, and you have an angle detector, but over the period of about two or three minutes, with just tiny little slippages on the carpet or little bumps or whatever, it just gets out by a tiny bit, and then you find it's 90 degrees out – it's perpendicular to where it thinks it is. It just loses it completely. So then we started using landmark navigation techniques and developing – using self-organizing nets for localization, so a lot of work like that. Then some robot arms, and we started working on a lot of projects – we had a big project for pharmacy using a lot of vision systems, and so we had cameras inside the thing, in the glove box, but the cameras were helping find little tiny jars. So the robot arm would pick up the jar, another arm would stick a syringe into it and suck stuff out of it, and then do things with it – so a pharmacy project. And that was fun, because in the pharmacy there's a lot of moving around, and you've got these two cameras, so the idea – I mean, it's not that important, but we just wanted to do it for the sake of it. The idea was that you knock a camera out of place, it has to relearn its position, so how do you do that? So back-propagation learning is what we're using – multilayered learning – so the camera gets knocked out, it knows it's got knocked out, so it's a continuous learning program. But the thing – this turned into a really interesting piece of work. A guy called Kevin Rathbone did it with me, an Oxford mathematician. And what we did was – because we started using genetic algorithms a lot in those days – so we wanted a back-propagation network that would evolve really – sorry – that would learn really quickly. So it had to be a fast learner. Because we did a lot of research on the initial conditions of networks, and that affected how fast it would learn. How could you optimize all the learning parameters?
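As an aside, the odometry drift described above is easy to reproduce in a toy dead-reckoning simulation – all the noise levels here are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    true_pose = np.array([0.0, 0.0, 0.0])   # x (m), y (m), heading (rad)
    estimate = true_pose.copy()             # the robot's dead-reckoned belief
    dt, speed = 0.1, 0.2                    # 10 Hz updates, 0.2 m/s

    for step in range(1800):                # three minutes of driving
        turn = 0.1 * np.sin(step * dt)      # a gently wandering commanded path
        # Reality: carpet slip adds small per-step errors to the motion.
        v = speed * (1 + rng.normal(0, 0.02))
        w = turn + rng.normal(0, 0.1)
        true_pose += [v * dt * np.cos(true_pose[2]),
                      v * dt * np.sin(true_pose[2]),
                      w * dt]
        # The odometer integrates the commanded motion, not the real one.
        estimate += [speed * dt * np.cos(estimate[2]),
                     speed * dt * np.sin(estimate[2]),
                     turn * dt]

    print(f"position error: {np.linalg.norm(true_pose[:2] - estimate[:2]):.2f} m")
    print(f"heading error:  {np.degrees(abs(true_pose[2] - estimate[2])):.1f} deg")

The per-step errors are tiny, but they compound: within a few simulated minutes the heading estimate is typically tens of degrees off, which is why the lab moved to landmark-based localization.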

So we started using genetic algorithms to optimize a back-propagation learning algorithm. So the test would be you would get the thing to learn, and the fitness function was the speed of learning. So these things would just keep generating new back-propagation parameters, initial conditions, and keep learning until we really got it down. And it got really, really good. And it got so good, I was really, really impressed with our work at one point because we had the TV program "Tomorrow's World" come to film us. Anybody in robotics will tell you as soon as a TV company walk in the door with a camera, everything breaks straight away. That's the standard rule, rule of thumb. Be sure it will all break. So they arrive. One of the cameras breaks, so we've only got the one camera now, a color camera. And so we run around in a panic all around the department, because they're waiting. It's going to be on BBC2, primetime, and they're waiting to film our experiment. And all we can get is this old security – black and white security camera. I said, "Oh god, this is never going to work." Put on the old black and white security camera, run the back-prop algorithm – learns immediately. The whole thing works perfectly. That's the point when I was really impressed with this student's work. It was excellent. So that was the kind of robotics projects we were doing – lot of learning, lot of genetic algorithms work, and that sort of thing.
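A compact sketch of that idea – a genetic algorithm whose fitness function is the speed of back-propagation learning – follows. It is illustrative only: XOR stands in for the real task, and the genome holds just two hyperparameters (learning rate and initial-weight scale), both assumptions:

    import numpy as np

    rng = np.random.default_rng(2)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    T = np.array([[0], [1], [1], [0]], float)       # XOR as a stand-in task

    def epochs_to_learn(log_lr, log_scale, max_epochs=2000):
        """Train a 2-2-1 net by back-propagation; return the number of
        epochs needed to reach a loss threshold. Fewer epochs = fitter."""
        lr, scale = 10**log_lr, 10**log_scale
        W1 = rng.normal(0, scale, (2, 2)); b1 = np.zeros(2)
        W2 = rng.normal(0, scale, (1, 2)); b2 = np.zeros(1)
        for epoch in range(max_epochs):
            H = np.tanh(X @ W1.T + b1)
            Y = 1 / (1 + np.exp(-(H @ W2.T + b2)))
            E = Y - T
            if np.mean(E**2) < 0.01:
                return epoch
            dY = E * Y * (1 - Y)
            dH = (dY @ W2) * (1 - H**2)
            W2 -= lr * dY.T @ H; b2 -= lr * dY.sum(0)
            W1 -= lr * dH.T @ X; b1 -= lr * dH.sum(0)
        return max_epochs

    # Simple generational GA over (log learning rate, log init-weight scale).
    pop = rng.uniform([-3, -3], [0, 0], (20, 2))
    for gen in range(15):
        fitness = np.array([-epochs_to_learn(*g) for g in pop])
        parents = pop[np.argsort(fitness)][-10:]      # keep the fastest learners
        children = parents[rng.integers(0, 10, 10)] + rng.normal(0, 0.1, (10, 2))
        pop = np.vstack([parents, children])

    best = pop[np.argmax([-epochs_to_learn(*g) for g in pop])]
    print("evolved lr=%.3g, init scale=%.3g" % (10**best[0], 10**best[1]))

Selection keeps the genomes whose networks reach the loss threshold in the fewest epochs – precisely the "fast learner" criterion described above.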

Robotics on Television

Asaro:

Was that your first experience with robotics on television?

Sharkey:

Yes, it was actually. It was, yeah. The robot arms – that was my very first. But it certainly wasn't my last. That was the first of very many that came in a row with science programs on television after that.

Asaro:

How did that evolve?

Sharkey:

Well, what happened then was I had a funny kind of thing where I was – I ran a conference on bio-inspired robotics in London, which is a standard thing nowadays, but this was in the early '90s. And I got up to London and stayed in my hotel, and I had just a string of phone calls from my university telling me to contact different media people, because they were all really interested in, "What is this bio-inspired robotics?" because it was like about evolution and animal-like things and stuff. And so <chuckles> – and this was actually before the robot arm thing, when I think about it now. So my first experience in television, the first time I was ever filmed on television – right? – was a really odd experience, and it had nothing to do with robotics. So that was on my way to do bio-robotics. So they said, "Come to the TV studio." So I got up in the morning and they had a car waiting for me to take me to the TV studio, the BBC TV studio. And I arrive and I say, "Where do I go? Where do I go?" And they say, "The studio is down at the bottom and turn right."

And so I walk down to the bottom and there's these two people sitting at a desk chatting – big lights on them, sitting at a desk chatting. And so I walk up to them and I say, "Excuse me, can you tell me where the studio is? I've got to go and get filmed." And they say to me, "You're standing in front of the cameras live on the news." <laughs> So my first experience on TV was a complete klutz walking into the news. <chuckles> So I went and did – so then I went and did a sensible interview about biologically inspired robotics. But a TV company called Mentorn saw me doing this and thought, "Oh, he'd be a guy we'd want for this new TV show we're doing" – it was just a strange kind of coincidence – called "Robot Wars." And so they sent this guy to see me, a guy – this silver-tongued Irishman to see me and persuade me to do this TV show, and I was very reluctant. I wasn't keen on doing some sort of popular TV show. I mean, I was an academic. I don't do that kind of thing.

<chuckles> But anyway, he talked me into it and said, "Oh, it'll be very good. It'll be very educational. Children will learn a lot from this, and it's very educational." So off I went and did a series – a six-part series for "Robot Wars." And they were pretty pathetic robots, really, and I was just a judge. I was sitting there judging and <chuckles> – it was quite – I had no idea what was going on. It was a very odd experience. But then 16 series later, I was still doing it. <chuckles> But it was 16 series – there were only 7 series in the U.K. There were world championships. I did it for Holland. I did it for the United States, for Germany, for Australia. So I was chief judge for a long time, and I – a very experienced judge, and really learned – I learned an incredible amount myself about engineering from this, because I was hanging out with these guys all the time, and they were brilliant engineers, non-university engineers. I mean, the thing about it was, <chuckles> university teams would come and do "Robot Wars," and they would have no chance against these guys. They'd have these sophisticated robots, and these guys would just come in with some crude thing with a hatchet on it and just go – an axe – and just chop them to pieces. So it was that kind of show.

But what it was was, after the first series, I thought, "Hmm, I'm not very sure about this." It was a lot of comedy and things, and it wasn't serious enough for me, I thought. So I wasn't sure. I thought, "I'll probably not do this again." And then I started hearing all these kids talking about it, and kids were asking questions about torque and about engines, and I just thought, "This is really good educationally," and that got me committed to it. But it also got me into public engagement. I mean, it put me into a position where – it was dreadful for a while. Because from "Robot Wars," I went on to do another show at the same time for the BBC called "Techno Games," and in that one I was a commentator. It was a robot Olympics show, and I did three or four series of that, and there were high-jumping robots, racing robots, solar robots, rocket cars – and I learned so much from that, and I had to learn really quickly as well, because they just thought you could infuse knowledge from the atmosphere – a TV company. So they would say to me, "Right, I want you to talk about the rocket cars. We're showing you them now." <chuckles>

And I don't know anything about rocket cars. So I say, "Excuse me, I'm just going to the bathroom," and I'd pelt downstairs and grab all the rocket car guys for about five minutes and get a really strong briefing, and then I'd go upstairs and I'd sound like a real expert on rocket cars. <chuckles> But then – I began to learn all about it, but a lot of it was bluff, really, at the time, because you only had to speak for a few minutes, but it had to be commentary. But it was like, it was really good for me because – in terms of communication, because I had to learn to work out a communication very, very quickly, explain very complex concepts about kinetic energy and storing of energy, and elastic bands and springs, and I had to be able to start to do all that quite quickly, and it took me away from this very jargon-y academic, who always spoke – you have to speak in jargon in academia, but it's almost like poetry, a complex thing where – when the media would call me at that point, I would say, "Ooh, I'm not sure how to explain this to you. What sort of degree did you get?" And I'd try and – really try to – very patronizing explanations. But after a few years of this TV stuff, I could actually speak about my work clearly and in English, and that was a very good training, I thought. But it was an odd experience for me as an academic because it was too highlighted. I just couldn't go anywhere in England without being surrounded by kids asking me for my autograph <chuckles> everywhere I went. Walk into a bookshop, and I'd have a queue of kids all wanting – so I was hiding from people all the time. Thankfully it stopped when I stopped the programs. It went on for a couple of years, then it wore off. But it was an odd – I mean, that wasn't really what you expect as a professor, really. <chuckles> But it was good for me, and it taught me a lot about – it changed my motivations quite a lot about the way I thought about my work, and it also changed my motives about my responsibility as a scientist to explain to the public.

Museums

Asaro:

You've also done work with museums. When did that start, and how did it evolve?

Sharkey:

Well, that started – that was very good, because I was still – while I was doing this "Robot Wars" stuff, I was still running a good robotics lab, quite strongly. I mean, it was part-time, the other stuff, but it highlighted me in the public eye. So a chance came along – the millennium came along – 2000 – and a lot of these little new museums and action adventure centers all started up, and there was one out at Rotherham, which is – it's a sort of very working class, poor city with a lot of unemployment, so they wanted to pump money into this a bit. So they took this fantastic steelworks – it had been the largest building in England at one time; it was a third of a mile long – and they decided to turn it into a museum. But they'd only just begun the project, and this guy, Stephen Feber, who was a really creative guy, was charged with the job of setting it all up. And so he – it hadn't even been built. He called me – well, emailed me, actually – and said, "I'm starting up this museum, and I've got a lot of money." He had a lot of money to do it. And he said, "I'm very interested in your robotics. Why don't you come, and why don't we chat a bit and see if you could do some sort of exhibition for me.

What would you like to do?" So I went out there, and it was a pile of rubble, really, and we talked quite a lot, and he was a good guy to talk to – very creative, as I say. And what he wanted was something very exciting and dramatic, like "Robot Wars," but with autonomous robots. And what I wanted was to get some research done with genetic algorithms. And I was also quite interested in showing the public some science in the raw – that it wouldn't always work. You have a hypothesis, you do the science, and it will quite often go down a blind alley and you have to start again. So that was my kind of motivation: to show the public science, my science, biologically inspired robotics, teach them about sensors and that kind of thing – but nonetheless it had to be dramatic.

So I really racked my brains really hard for quite a while and came up with this idea of predators and prey and the idea of an artificial food chain. And it was very inspired by W. Grey Walter, because the year I was born, in fact – 1948 – he produced his first robots, Elmer and Elsie. The idea was they would leave the hutch, but when the battery started to run down they would go back into their hutch and recharge. And they were very, very simple. Keep it simple, stupid – the KISS philosophy. So these robots had to be very simple. They had to be very dramatic. And very simple neural networks, so they would learn, because that's the way it worked. So an artificial food chain. So what I came up with was this idea of these prey robots.

And they were about this size. I mean, they weren't any size at the time, because we didn't just buy robots, we did the whole project. I was given five post-docs, so I got really good people – a really good multidisciplinary team: a mechanical engineer, a mathematician, an electronics expert, computer scientists – a really good team, a bunch of enablers, and we designed these robots. So we had these round ones – they're like saucepans really, with great big tops on them with solar panels on them. And the solar panels were split into four, so the solar panels were sensors. And what they had to do, using a genetic algorithm, was learn how to find these big lights. So we had stage lights, light trees, and they would just drive around – these infrared sensors wired to a neural network for avoiding obstacles, avoiding each other – and they would go on to the light trees and sit there, "Ah," and get in the sun and get a lot of energy. But that's not very dramatic. <laughs> So what we did is we had these predators, and I really enjoyed designing that. That was the really fun bit 'cause they were big things and they had tusks and they had a big spikey nose. And we had put flashing LEDs on them. So they looked really menacing by the time we finished designing them. And what they would do is they would come out hunting for the little ones. Genetic algorithm again, so they're learning. A lot of simulation going on and then coming back and putting the simulations back in again. And what they had to do is go out, they had to spot the little ones, pick them up with the gripper, stick a fang into the center of them, suck out a third of their battery power. So that's how they charged themselves.
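The prey's light-seeking can be pictured as a Braitenberg-style controller over the four solar-panel quadrants. The weights below are hand-set for illustration; in the exhibit the mapping from panels to wheels was learned by the genetic algorithm:

    import numpy as np

    def prey_motor_command(quadrant_light):
        """Steer toward light using the four solar-panel quadrants as sensors.
        quadrant_light: [front_left, front_right, rear_left, rear_right] in [0, 1].
        Crossed excitatory wiring: more light on one side drives the opposite
        wheel harder, turning the robot toward the brighter side."""
        fl, fr, rl, rr = quadrant_light
        left_light, right_light = fl + rl, fr + rr
        base = 0.5
        left_wheel = base + 0.5 * (right_light - left_light)
        right_wheel = base + 0.5 * (left_light - right_light)
        return np.clip([left_wheel, right_wheel], -1.0, 1.0)

    # Light off to the right: the left wheel speeds up, veering the prey right.
    print(prey_motor_command([0.2, 0.6, 0.1, 0.5]))   # -> [0.9, 0.1]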

And it was quite an exciting project 'cause you'd come up with all this and – how the hell do you do it, you know? Because we wanted all the predators to know that the other – well, I'm using the word "know" here in a very loose sense. But one predator has to know that this other object coming at it is another predator. And you want the prey to know that this other little object is a prey. But you want them to recognize that that one's a predator and get away. But all we're using is infrared sensing – well, to keep it really, really simple. I don't want to go any further.

No cameras. Keep it simple. So what we did was we started developing – we looked at the infrared sensors themselves. And we looked at the kind of waves they were producing, because what they're using is square waves. So you send a square wave as a chirp, that hits a wall and you look at the intensity. So what we did was we gave the predators bigger square waves than the prey. And they learned this using the genetic algorithm, so the prey would run away from the predators and they would not run away from each other. So this was a big, big thing. And we had 15 of them. We had 10 of the little prey. And it was a bit more popular than we were expecting, in fact. <laughs> Because it was really fun building them. It took us 18 months. And as usual, you know, they had the big launch and they did a PR thing. And the TV companies were coming out of the woodwork and everybody wanted a piece of this. So we had this big auditorium and this big arena packed with all of these TV companies and cameras. My guys and myself had been up all night for two nights. We had had hardly any sleep, and of course the robots weren't ready. You know, we were supposed to have them ready well in advance, test them. The first test of the robot was in front of the TV and media. So we were pretty much, you know, shitting ourselves, if you don't mind me swearing. <laughs> Like, my heart was just going "pa-choom, pa-choom, pa-choom." So you know, they're waiting and they had been waiting for two hours. We're really late and I'm looking at me watch and saying, "Come on, come on, come on! Let's go, let's go, let's go!" So we get them in there and we run in and put the things there, and we just have a couple of prey and the predator. And the predator breaks. But it breaks in the most interesting possible way you could imagine. So the predator goes after the prey. <imitates predator sound> It picks one up in the air. But instead of sucking out the battery power, right, it goes into a spin with this thing and ends up slinging it straight at the camera.

Brilliant shot. I mean, it was really good visually. <laughs> And they applauded. But after that then we just had documentaries made on it all the time and quite a lot of stuff, and half a million people came to see it in the first six months. And that was great – great media stuff and everything else – but that's not what I was looking for, you know, and it really annoyed me because it messed up the science. We didn't get as much experimental work done. I never got a journal article out of it. And it went on – because of the popularity, the museum insisted on doing six shows a day. So it turned into this whole thing where there was dry ice and the predators came out of the dry ice with the flashing eyes and picked things up. So we still used the genetic algorithms and we still did the simulations and got it all going in that way. But we never managed to get it taken as far as we wanted to take it scientifically. We wanted to know if the prey would start clustering together and doing things together, but it got knocked out by the show. But nonetheless, it was a very good experience for me.
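A minimal sketch of the square-wave trick described above – predators emitting larger-amplitude IR chirps than prey, with a fixed threshold standing in for the decision boundary that the genetic algorithm actually learned:

    import numpy as np

    rng = np.random.default_rng(3)

    PREDATOR_AMP, PREY_AMP = 1.0, 0.4     # predators chirp "bigger" square waves

    def make_chirp(amplitude, samples=64, period=16):
        """An emitted IR chirp: a square wave of the given amplitude."""
        t = np.arange(samples)
        return amplitude * (((t // (period // 2)) % 2) * 2 - 1)

    def classify_emitter(received):
        """Threshold the received amplitude midway between the two chirp sizes."""
        amp = np.max(np.abs(received))
        return "predator" if amp > (PREDATOR_AMP + PREY_AMP) / 2 else "prey"

    def prey_reaction(received):
        return "flee" if classify_emitter(received) == "predator" else "ignore"

    # At close range with a little sensor noise, amplitude alone separates them.
    for amp, name in [(PREDATOR_AMP, "predator"), (PREY_AMP, "prey")]:
        signal = make_chirp(amp) + rng.normal(0, 0.02, 64)
        print(f"approaching {name} -> prey reacts: {prey_reaction(signal)}")

In reality the received amplitude falls off with distance, which is one reason the robots had to learn the boundary rather than have it hand-fixed.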

Asaro:

And so when you went back to the lab what kinds of experiments did you take up?

Sharkey:

Well this was all in the lab. This was a lab.

Asaro:

The academic.

Sharkey:

I didn't go back, because I stayed there. I stayed there another two years beyond that, because what they did was they paid the university my salary and the university carried on hiring me. So the next project was a flying robots project – so, like, surveillance robots, which I turned out not to be so keen on later on. So I developed – there were 14-foot-long blimps, because this place was a third of a mile long. And they were big white ones. We had them specially made by Per Lindstrand, and they were covered with sensors because they were autonomous. But that was an extraordinary project to do. How do you get an autonomous blimp? And we had these big top hat fans. And the fans could turn in response to things. And if one fan stopped, the other fans would turn in the air. And again the TV companies came to the very first experiment of obstacle avoidance. So they could do obstacle avoidance and fly in a flock. And that was really good scientific work. I really, really enjoyed that project. And they went all over the place as well. But I was hopeless with them, because we had them remote controlled as well. I've wrecked the café with them.

I've completely wrecked the book shop. <laughs> All the toys and things, shelves, everywhere. So in the end, my staff would never let me remote control them if they ever needed remote controlling. But that was a fun project as well, and again a lot of TV stuff with it. But back in the lab – I still had the lab back in the academic world 'cause I still had all the Ph.D. students, who were still doing a lot of genetic algorithms work and that kind of thing. But what happened to me then was quite different, because the Engineering and Physical Sciences Research Council, the EPSRC, had these things called senior media fellowships. They had just started it. And they had one senior media fellow, and he was at the same university as me, a chemist called Tony Ryan. And they said to me, "Well, are you interested in this?" And what they do is they want you to work with the public. And what they wanted was somebody who could manage public expectations in terms of engineering, sell the science, enthuse young people, and try and get some balance of truth rather than sci-fi out there.

And so I went for an interview for that. And Lord Winston was one of my interviewers, and they had the head of the EPSRC and the chief editor of, I think it was "The Times" or "The Guardian." And they gave me a really hard time. They kept asking me questions about nuclear science. And I know nothing about nuclear science. And they would say, "If a bomb was found in some, you know, university somewhere and the press rang you, what would you say?" I said – I had to be honest – well, I would have to say, "I'm sorry, you're talking to the wrong person. I have no idea. You'll have to speak to a physicist. I'll put you in touch with somebody." It was this kind of question. So every answer I gave them was the same. And I just left there despondent. They rang me two weeks later and said, "You've got the fellowship." And I said, "Why? I couldn't answer any of the questions." And they said, "We were deliberately putting this to you because we wanted to make sure that you were going to be honest and not bullshit and say things like, 'Oh, I know all about nuclear physics,' when you don't." I'm not sure that they were telling the truth, but that's what they told me anyway. And so I became a senior media fellow. While running the lab, that meant I had to do a lot of public talks. So I had to go out and talk to the public a lot, do a lot of media interviews, press conferences, explain to the public about artificial intelligence. And after three years of that I got to apply for an extension.

But during that period, journalists started asking me – because I was working with journalists a lot – journalists started asking me a lot about military robotics, you know, what's going on with the military with robots. And I knew – I'd heard a little bit. I'd heard something about drones and bomb disposal robots. But I didn't know much. It wasn't my field really. This was about 2005, I think. 'Round about there, 2006. And very little had been written about it as well. Though I did run into a paper by this funny guy, Peter Asaro, and I read some of his work, and Rob Sparrow's. And then I went into this sort of really, really deep study period where I read nothing but plans from the U.S. Military. So I read all the sort of road maps, all those things.

And there was Ron Arkin starting right then. We were all starting up together, really probing this for the first time. And I was pretty shocked by what I read. Because the military were talking about autonomous robotics – which is my field – in a way that made me think these guys really don't understand the limitations of what they're talking about. If they use these robots in the way they're saying, they're going to kill a lot of civilians. And what is this nonsense? How can they be talking about this? So then I thought, well, what are the protections? So I started reading international humanitarian law. I read the Geneva Conventions. I just went into full-time study for a while, because this fellowship allowed me to do that. And so then I applied for an extension of the fellowship for two years and told them, "This is important. I think I need to be looking at this area." And so they agreed with me and they let me go on for two years. And I got into that full time, really.

And in 2007 I wrote an article for "The Guardian," and because of my work with "Robot Wars" they titled it "Robot Wars are a Reality." That was the first time I talked about the principles of how robots can't discriminate between combatants and non-combatants and how they can't be proportional. So I was beginning to get quite sophisticated about the military stuff, though I knew very little about military affairs. I knew very little about anything to do with the military. I mean, when I was born in 1948, I was born into an estate where it was all military people. They had all been in the War, so you know, Mr. McKinsky across the road with one arm and the guy with one leg. So it was all like servicemen. And when they talked about the War – all the men would get together and have a chat about the War, but they never talked about the nasty side of it. They never talked about the killing. It was, "Oh, we met up and had beers and drinks." Just like a bunch of young guys. But it was all war, and it was very serious when it came to November the 11th – Remembrance Sunday.

And you'd go down to the town centre, and because it wasn't long after the War, all these men you respected – all my uncles and things – would be standing there with tears running down their faces. But they never really talked about why. So you know, I never had much to do with the war at all. And then in the sixties, of course, I was a peace and love hippie.

<laughs> You know, Vietnam War, peace and love. But not really paying much attention apart from thinking we shouldn’t have wars. But suddenly I was reading all this stuff about the military. And after I wrote “The Guardian” article I started getting invitations from the military to go and talk to them. And I found the military quite different than what I expected. They weren’t these evil guys with guns, you know, especially the officers. They were quite intelligent. They could discuss the ethical questions with you. And so I started learning an awful lot about the kinds of weapons they were using and what their ideas were and writing about it and thinking about it a lot. Just pausing there because I think you might want a question.

Committee for Robot Arms Control

Asaro:

Well I guess the next logical question is how did that lead to the Committee?

Sharkey:

Well, I guess that after quite a while of working at this – and what I said in "The Guardian" article – I was calling for international, you know, legislation about it. Because as far as I was concerned, robots couldn't do the job of a human soldier. People like Ron Arkin were saying they could be more ethical in the battlefield than soldiers. And I thought this was ludicrous. How could a robot be more ethical? This is nonsense. It's sci-fi. It's fantasy. And I have a lot of respect for Ron's work, but this was a sort of a fantasy thing about ethics – and how does a robot interpret these kinds of things? It takes a human thinking about the laws of war.

It's a really interpretive human thing. I won't go on about that too much here. I mean, I've written enough papers about it and people can read about it. But I was bothered by it, and I thought, you know, how come I'm reading about all this drone stuff and all these plans and there's no international discussion whatsoever? Why isn't there international discussion? And one of the things I found, from the 1950s, was an article by a professor whose name I can't remember. Not a journal article – an article in "The New York Times," 1950, calling for a robot commission to look at robotics in warfare. And I thought that's what we need now.

Why have we not got an international commission and discussion? And then I contacted you, Peter, and Jürgen Altmann contacted me. He had read some of my stuff. And I was in contact with Rob Sparrow and John Canning. These were the sort of people. And Ron Arkin. And we had various discussions together on email. But I saw that Rob was coming to England, and so I said, “Why don’t you come down to Sheffield and talk to me?” And Jürgen Altmann, a German physicist who works in arms control, was just beginning to get interested in this UAV stuff; he was interested in how UAVs would be used as delivery systems for nuclear weapons. So I said, “Why don’t you come over? Rob is coming. Come over, Jürgen.” So he came over, and the three of us met here in this very room that the camera is in. And we had three days of really intense discussion together. And we talked about your work as well. And so the three of us decided, well, you know, let’s start something up. Let’s call it the International Committee for Robot Arms Control. The first name we thought of, really. And we agreed to do that. And then we fought for ages <laughs> over the mission statement, and then we contacted you, Peter, and sent you the mission statement and got you to fiddle with it as well. And I think working with philosophers, you have to get it exactly worded, you know? “Oh, that ‘or’ is in the wrong place.” But we got it. It was a very good mission statement and we’ve still got it today. So that’s how the International Committee for Robot Arms Control started.

Influence on Graduate/Ph.D. Students

Asaro:

So going back, could you talk about who some of your graduate students and Ph.D. students have been, especially any who are continuing work in robotics, where they’ve gone off to, and what they’ve been up to?

Sharkey:

Well, I had a lot of very good students over time, but I’ve lost touch with most of them. <laughs> Several of them are professors. Apart from I think about two of my Ph.D. students who ended up as researchers, all the rest ended up as faculty. But none of them really famous. But there’s, you know, Tom Ziemke, who is very good. He’s in Sweden, at the University of Skövde. <laughs> I’m hopeless at pronouncing names, so my Swedish pronunciation is not good, and I’ve spent a lot of time in Sweden. And Lars Niklasson was another one from Sweden. Those two were excellent. I’m slightly disappointed in Lars because he was a brilliant Ph.D. student and he worked on representation. So he worked on language and reusable representations: looking at how you develop representations with backprop, take them out of the net, and use them in other tasks. But the reason he was a slight disappointment to me was that he went back to Sweden, set up a lab, became a professor, and now he’s second in charge of the university, and he’s more an administrator now than a researcher. And you know, that’s not what I trained them in. I trained them to be academics. But Tom Ziemke, we worked a lot together on autopoiesis and robot embodiment, and Tom is still working in that. He’s got large European grants. Tom was remarkable in the sense that he went from Ph.D. student to full professor in two years. <laughs> That is really an achievement. That was a great achievement. But he was very sophisticated; he was already a faculty member before he started his Ph.D., so he was very good. And then another Ph.D. student of mine who was really good was Niall Griffith. He retired before me, even though he’s about the same age as me; he retired a bit younger. And he was very successful. He was the same age as me, and he started doing his Ph.D. when he was 40. He had been a historian at Oxford and then an archaeologist, so he’d spent a lot of time out digging up Egyptian stuff and things. But he was also a musician.

So he came and worked with me on music perception, and he’s written an awful lot on that: music perception by machine, doing timbre and things. And we became very good friends.

Whenever I write I use him all the time: when I write papers I send them to him and he looks at them and proofs them, points out my bad grammar, or points out, “Oh, you mean this concept.” He’s very good at that. So that’s another Ph.D. student. And he went to the University of Limerick with my very first Ph.D. student, Richard Sutcliffe, who is still a professor at the University of Limerick. So those are the kind of highlights, I think. I mean there have been plenty of others, but those are the ones that come to my mind. I mean essentially it’s great: you train them up, you get them skilled, you send them off to a job, and then I forget about them. Because, well, I keep moving disciplines, and they were never into this military stuff, so when I moved into that I lost touch with all the old conference people I used to know. And so I don’t see my students in the same way. I used to examine their Ph.D. students and stuff, but now I’ve moved on to doing completely different things.

Robotics Conferences, Collaborations

Asaro:

So what were the conferences you went to regularly around robotics?

Sharkey:

Well, I used to go to Cognitive Science a lot and AAAI, the artificial intelligence conference. And then ICANN, I think, was one of the ones. I can’t even remember it. I must be going senile. But I went to a lot of conferences. I used to do quite a lot of conference keynotes, so I didn’t really choose to go to conferences. You know, I would be doing enough keynotes that that was enough conferences for the year. But you know, in those days I would have thought that if I did seven or eight keynote talks a year, that was a lot. That was all, because it was taking me away from my lab and my research. But now it’s not unusual for me to do maybe 30, 40 talks a year. <laughs> So it’s a lot more. And that’s sort of a big strain. But in those days I was doing lab work. Because I’m doing conceptual work nowadays, it’s all thinking, so I can do all my writing on planes and trains and wherever. Hotel lobbies.

Asaro:

Okay. I’m just going to run through some wrap-up questions so we get everything covered. One is about other roboticists you’ve collaborated with over the years, the significant collaborations. You mentioned some of those names; what kind of projects did you work on with them?

Sharkey:

I didn’t collaborate.

Asaro:

Okay. That’s a quick one.

<laughter>

Sharkey:

I mean I did collaborate with a number of people, but I don’t really remember now. One guy I collaborated with was Ulrich Nehmzow. Ulrich was a very good roboticist and we worked together, but unfortunately he died very young. Cancer. So I tend to put him out of my head because that is a sort of sadness, really. But mainly, you know, I was running a big lab and so I didn’t go out. You have to remember I was in England, and at this point there wasn’t that much robotics going on. There were a few people scattered ’round. I mean there’s a lot going on now, but then there was Owen Holland and Alan Winfield. And I’ve collaborated quite a lot with Alan Winfield and Owen Holland on the Walking with Robots project. I’m just remembering now. The Walking with Robots project was a public engagement project, so we were supposed to be going out a lot to the public with robotics. And so we took our robots and we went to Parnham. We did all kinds of things together. Owen Holland is an exceptionally good roboticist and so is Alan Winfield. And so we’d get together and talk a lot, but we didn’t actually do physical projects together. And there was Barbara Webb working on cricket robots. And there was a little bit going on at Edinburgh. So there weren’t massive amounts going on around the country, and there wasn’t a lot of option to collaborate with people.

And the way things used to work in England in those days (it’s changing now because the research councils, the funding bodies, are saying you’ve got to have these big centres of excellence and collaboration) was that we’d leave each other’s work alone. It was kind of like niche stuff. So you would work on your little niche, you know. My niche was neural network learning with robots and a bit of genetic algorithms. And other people would have their niche, like the cricket robot or whatever. Alan’s was swarms. So we would all have our own little niches, and we knew each other and talked together, but there wasn’t a lot of collaboration going on. I’ve never been a really big collaborator. I mean I’ve written a lot with people who worked in my lab over time. But I’m somebody who really likes to work on their own, and that’s why I don’t like management, and that’s why eventually I got rid of my lab and my students and got into more of the ethics side, because I could actually think by myself or with my wife. We work together. I mean we’re good collaborators, of course.

Asaro:

What are some of the robot ethics collaborations you’ve done with your wife?

Sharkey:

Oh, well, I think one of the first ones we worked on was “The Crying Shame of Robot Nannies,” where we looked at child care with robotics. And I really enjoyed that. We had a great time with that. We’ve been writing together since we were first-year undergraduates all the way through, you know, so this was great to work on together because it brought together a lot of skills that we had from the past. Because as you move on, I mean, I’ve always moved on through things. When I was a psychology undergraduate I did an awful lot in developmental psychology and that kind of stuff. And when I was a Ph.D. student I was working in AI and natural language understanding. I was also doing work with visual illusions. I did experimental work with that and with child development. So we had quite a lot of experience in these things.

So when we worked on the child care thing, we were both working from what we knew about robotics. And I had been working quite a lot in ethics at the time. But we were also brought back again to all of the psychological work we had done on child care. And my wife had worked at Stanford in child care with some really great names there, like Ellen Markman. So she was really well versed in that. We looked at robotics, and we were looking at child care in the sense of what would happen with near-exclusive care by robot, and at the kinds of things, like the subtle sorts of movements, that robots don’t do and humans do. And we’re talking about very small children. Because we found 14 companies in Japan and South Korea that were developing child care robots. And we thought, are these guys crazy? And then we started looking back at our old attachment work and looking at the kinds of attachment disorders that they could possibly give you. It’s speculative, of course. But it’s speculative based on, you know, sound empirical work that was out there, and it’s just putting it together. It was fun to put together. It’s like a jigsaw of all these little pieces from all the different disciplines that we’ve been involved in.

So we did that together. And then we got asked to write a book chapter on companion robots. And so we were looking at some of my criticisms of AI and Amanda’s criticisms of AI: anthropomorphism and deception with companion robots. But you can’t look at companion robots without looking at elder care, because that’s who they’re aimed at. So the idea was, you know: you’re going to leave elderly people in the hands of machines, but they need human contact. I know, let’s give them companion robots. That will give them contact. And it didn’t seem right to us, really. But writing it got us thinking about elder care. And as I said earlier, I used to be a psychiatric nurse, so I had worked with geriatrics, psycho-geriatrics. And my wife Amanda had also, independently of me, worked with geriatrics when she was young.

So we were thinking about the practice of care and the horror of, you know, how could a robot do the practice of care? Yes, it could do tasks of care, but not the practice of care. So we began writing about that. You know, we like these sorts of titles that are a bit dramatic. We wrote one for the journal “Gerontology” called “The Eldercare Factory.” <laughs> You know, it’s about automating elder care. And sometimes we kind of use scare tactics, so you’re taken out to the extreme of it. But the idea is not to promote ourselves as experts in these areas at all, but to offer warnings and try to get the proper experts in care of the aged to look at this and say, “Look at this, for goodness’ sake. Don’t be fooled by a roboticist telling you things, robotics salesmen coming with their suitcase full of, you know, aspirations, telling you all these things, and then you put financial investment into it and go down the wrong path and then it’s too late.

“Why don’t you sit down now, not get caught off guard, and write guidelines? You know, you’re the experts. Write the guidelines. I’ll help you by telling you about the technology. And by the way, I’m not looking for funding from you with this. I’m an ethicist and I’m giving you ethical advice, and there’s nothing coming back to me for it.” So it’s trying to give people objective ethical advice. And you might think, oh, why is he such a good guy? Well, it’s not so much that. It’s that, you know, for any young professor there are some areas that I would say are career-dangerous. If you’re a young professor in engineering and you’re a roboticist, you know it’s a bit risky going into ethics too much. It can disrupt your funding and it can maybe spoil your career a bit. You know, you might not be able to get on and get tenure quite as quickly. So I think once your hair turns white and you’ve got your reputation, you’ll be able to do more. But you know, what is your research going to be? We could go on forever doing the same kind of research over and over again. It’s time to pay back your society and look at ethical responsibility. You’ve got a career of experience in this technology, so you’ve got some sense of what you could do with it. And you can stop the hype. If you stop taking funding, you stop the hype and stop going on about, you know, robots will be able to do this, that and the other thing. Think about it seriously: what can they actually do? Let’s think about what we actually did and what the objective truth of that is, rather than, you know, some pie-in-the-sky dreaming for my next grant. And so I think it’s incumbent on people like myself to do it. And I think we should all be doing it. I’ll get off the soapbox now and stop preaching. <laughs>

Ethical Problems for Robots

Asaro:

Well, I’ll give you one more soapbox to stand on, which is: what do you see as the biggest ethical problems facing robotics in the near- and long-term future?

Sharkey:

Oh, that’s easy. I look at a lot of issues, so let me just say I’ve looked at policing; I’m worried about that kind of thing, privacy with policing. I’ve looked at criminality. I’ve looked at transport. We’ve written papers on robot surgery. We’ve written papers on all of these things. And they’re all ethical worries. But the biggest thing, really, is the automation of weaponry, without question. You know, the military. I had a two-year fellowship from the Leverhulme Trust, where they paid my salary just to look at the ethics of battlefield robots, and there’s no question that is the biggest problem facing us. I mean, it’s not the biggest problem facing humanity, but in robotics it’s the biggest problem facing us. We’ve really got to get to grips with it. There are many reasons, but I really don’t think that machines should be given the power to decide to kill people. They’re not up to the task of doing it, and it’s going to be disastrous, I think. And that to me is the biggest problem in robotics, and I will work on it till I die to try to get it stopped in any way I can.

Greatest Achievements

Asaro:

Alright. And looking back at your career, what do you see as your greatest achievements or accomplishments or contributions?

Sharkey:

Well, I think my biggest contribution is yet to come, because I’m going to get these robots stopped from killing people, and that’s going to be my main contribution. And all of my scientific work, you could throw it out the window as far as I’m concerned, compared to that. You know, I’ve made contributions to machine learning. I’ve made contributions to safety-critical systems and methods for multi-layer nets and all sorts of stuff in language. And I was one of the first people to use schema theory with neural nets. But I don’t really care about that now. You know, academia is wonderful, and it’s great when you’re young and you’re using your brain; they’re all great puzzles. But they’re games really, you know, to me. That was all gaming. It was all great fun, intellectual puzzles. I made contributions to the field, but now I’m really concerned about the next 10 years of my career, which is about trying to get these weapons stopped, and that’s what I really think is going to be my greatest contribution.

Even if I fail it’s going to be my greatest contribution. I actually feel as if I’m doing something useful now rather than what I was doing before which was getting funding for the university and running big labs and thinking I was great, but you know, my research, my research, in my lab we do this, in my lab we do that. And working with campaigners is just wonderful because it’s all we, we, we. It’s what we’re doing. Working with the International Committee for Robot Arms Control is extraordinary and it meets many of the goals of my early career actually because it’s incredibly multi-disciplinary. We’ve got political people. We’ve got philosophers. We’ve got law people. We’ve got good engineers, good AI people. We’ve got child development psychologists. We’ve got anthropologists. We’ve got campaigners, you know, and advocates.

And advocacy is a completely new thing to me. I’m learning how to do that. I’m learning how to work with political people. And you know, I’m 64 now and nearly ready to retire from my job in England, and I didn’t think that at this point I would be on probably the steepest learning curve of my career. And it feels to me that it’s a more important one: how I can facilitate, how I can join in and work with other people to meet a goal. It’s wonderful for once because, as I’ve said earlier, I’ve never really been a team player.

I’ve not done a lot of collaboration. And now I’m collaborating on a massive scale and it’s incredible. I’m really, really enjoying it.

Advice for Young People

Asaro:

This is the wrap-up question we ask everybody: for young people who might be interested in a career in robotics, what kind of advice would you give them?

Sharkey:

Do something else.

<laughter>

Sharkey:

Do you want me to do that properly?

Asaro:

Sure. So what kind of advice do you give to young people that are interested in a career in robotics?

Sharkey:

You see, they’d have to think about what they’re really interested in, because there are so many ways you can contribute to robotics. If somebody says to you, what can I do for a career in robotics, you can say to them, well, there are three things you can do. You could go into a department of mechanical engineering, you could go and do a degree in control theory, or you could go and do a degree in computer science. Those are the three real things. But you know, in my experience some of the best people I’ve had working in robotics with me have been physicists, because they really understand sensing and how to use sensors and how to do that kind of control. But there’s a lot of other work in robotics. You know, these are very male-dominated subjects, engineering and stuff. And there’s a lot of really good work to be done nowadays, because robotics has changed so much and there’s so much off-the-shelf robotics that you can start thinking now about how robots are going to interact with humans in the human world. Service robots. That’s going to come online very strongly. I could say to you that within the next 20 years there are going to be robots doing every household chore, but that would be like people saying in the 1950s that within 20 years robots would be doing the household chores. I think it’s going to take somewhat longer, but it’s definitely happening a lot quicker now. So you could be doing a degree in psychology or sociology, and it’s what you do afterwards that counts. It really is. You know, if you want to get into industry in robotics, do mechanical engineering, do control engineering, do computer science. Do one of them first and do a master’s in one of the others. But if you want to do control theory with human-level control, or look at human-robot interaction, do a degree in psychology so that you can actually do it properly. Because a lot of the work, to be honest, <laughs> a lot of the work I see as an experimental psychologist in human-robot interaction is amateurish. I mean, you get roboticists doing experimental work, but they don’t really understand the control conditions properly and stuff. So training as a psychologist with a minor in computer science would be a really good way in as well. There are multiple ways in. I know artists who are great roboticists; they make beautiful-looking robots. So you know, it depends on what kind of career in robotics, and it depends on what kind of person you are and the kind of things you like. No matter what you do, though, I guess English Literature is not such a good route. <laughs> But use your imagination. Be creative and just do what you like.

Asaro:

Great. Well thank you. Is there anything that we forgot or you would like to add?

Sharkey:

I’ve forgotten most of my career so I won’t add anything, thank you. <laughs>

Asaro:

Thanks a lot.