Oral-History:Manuela Veloso

From ETHW

Manuela Veloso

Manuela Veloso was born in Lisbon, Portugal. She completed a Licenciatura degree in Electrical Engineering and an M.Sc. degree in Electrical and Computer Engineering at the Instituto Superior Técnico in 1980 and 1984, respectively. She then attended Boston University, receiving an M.A. in Computer Science in 1986, and Carnegie Mellon University, where she received a Ph.D. in Computer Science in 1992. Following graduation she joined the CMU faculty as an Assistant Professor in the School of Computer Science, and was later promoted to Associate Professor in 1997 and full Professor in 2002. In 2006 she became the Herbert A. Simon Professor at CMU, and in 2014 she was promoted to University Professor. She has also served as a Visiting Professor at MIT from 1999 to 2000 and as the Sargent-Faull Fellow at the Radcliffe Institute for Advanced Study at Harvard University from 2013 to 2014. She is currently President-Elect of AAAI and Past President of the International RoboCup Federation.

Veloso's research interests focus on robotics, artificial intelligence, and autonomous agents. Her contributions to the field have earned her numerous awards and honors, including the NSF CAREER Award in 1995 and the 2009 ACM/SIGART Autonomous Agents Research Award.

In this interview, Veloso discusses her career in robotics, focusing on her activities at CMU. Outlining her contributions to the robotics community, she recounts her involvement in various robotics projects, including CoBots and the RoboCup. Reflecting on her career decisions, and commenting on the challenges of robotics and on the human-robot relationship, she provides advice for young people interested in the field.

About the Interview

MANUELA VELOSO: An Interview Conducted by Peter Asaro, IEEE History Center, 16 September 2014.

Interview #712 for Indiana University and IEEE History Center, The Institute of Electrical and Electronics Engineers Inc.

Copyright Statement

This manuscript is being made available for research purposes only. All literary rights in the manuscript, including the right to publish, are reserved to Indiana University and to the IEEE History Center. No part of the manuscript may be quoted for publication without the written permission of the Director of IEEE History Center.

Request for permission to quote for publication should be addressed to the IEEE History Center Oral History Program, IEEE History Center, 445 Hoes Lane, Piscataway, NJ 08854 USA or ieee-history@ieee.org. It should include identification of the specific passages to be quoted, anticipated use of the passages, and identification of the user. Inquiries concerning the original video recording should be sent to Professor Selma Sabanovic, selmas@indiana.edu.

It is recommended that this oral history be cited as follows:

Manuela Veloso, an oral history conducted in 2014 by Peter Asaro, Indiana University, Bloomington, Indiana, for Indiana University and the IEEE.

Interview

INTERVIEWEE: Manuela Veloso
INTERVIEWER: Peter Asaro
DATE: 16 September 2014
PLACE: Chicago, IL

Early Life and Education

Peter Asaro:

So if we could just start by having you tell us where you were born and where you grew up and went to school.

Manuela Veloso:

I see. So I was born in Lisbon, in Portugal, and I went to school in Lisbon. First I attended a French school for my primary school, and then I was in the public school for high school, and then I did my undergraduate degree also in Portugal, in Lisbon, at the Instituto Superior Técnico, which is the school of engineering in Lisbon, and I studied Electrical Engineering.

Early Robotics Work

Peter Asaro:

And what was the first kind of robotics work that you did?

Manuela Veloso:

Okay, so this is an interesting story, so I actually only did robotics work much later, after my Ph.D. at CMU. I studied Electrical Engineering and then I did a Master's thesis in Portugal, this is back in the '80s, on the problem of actually automating information, so databases and all sorts of things like the generation of lists of parts for a company. It was very much just data processing. And in those days I started getting interested in the automation problem, just the actual question of how do you have something that comes out of information by itself. Then in 1984 I came to the States and I did a Master's degree in computer science, which was new to me, at Boston University for two years. And after that, in '86, I joined the Ph.D. program at Carnegie Mellon University in Pittsburgh. And that's where I started, actually. My interests have always been at the artificial intelligence level, so my thesis was on the problem of how do we generate plans automatically by reusing information. So again, this automation problem: you would build on experience, you would solve problems and save the solutions with the rationale for why this was the right solution. And my thesis was about this planning by analogical reasoning: how to use a past case, adapt it, and generate solutions to problems that were very complex by reusing solutions to simpler problems rather than doing it all from scratch; in the end, it is automation. So my thesis was all in planning. By this time it's 1992, and you have to understand that at Carnegie Mellon, since '86 when I arrived, I became very exposed to robotics. I remember in '86 there were these efforts on autonomous navigation, and there was this robot that I would see when I walked to my office. It was outdoors, and it was at some position, and it would move really slowly by itself, snow or no snow, and there was this project that was led by, I believe, Hans Moravec and Chuck Thorpe.
And they actually had this robot bravely outside, and I always loved to see how far it had gone.

But just to say, in those days when I would go home at the end of the day, the robot had barely moved, what, three meters, five meters? It was a whole other story. And at CMU also from that beginning, even during my thesis, it was also all about sensors in the physical space; there was Marc Raibert, who did the hopping robot there, there was all sorts of exposure to the automation reaching the physical creature. So when I finished my thesis, which was all about planning, and planning is about thinking, I actually decided to delve into the world of, what about the execution? So if you plan, how does it get executed? These were also the days of Deep Blue and Deep Thought and HiTech, all the chess-playing part, but there was no robot that actually moved the pieces on the board. So it became this passion for, "What if things get executed, what happens in the execution part, the planning and execution?", and I was fortunate that at CMU there was Reid Simmons, who had Xavier, which was this robot that moved around roaming in the corridors, and I was very lucky to have this first grad student, Karen Haigh, who was very interested in planning, execution, and learning in a real robot. So that was the beginning of this passion, which became a passion for actual robots. And you have to understand that historically this is also something that was interesting: in those days, the early '90s, it was also the beginning of all the Internet stuff. And many grad students that would come to Carnegie Mellon wanted to work on the Internet, like text and reading and all online. And I was moving out from the symbolic world and the text world into the physical world myself. I remember having at my door a sign that said, "Here I do not advise theses on the web; I only advise on robots." And it was hard to find people in those days interested in robots; everybody was just interested in the web.
But I was lucky to have these great students, Karen, Peter Stone, Astro Teller in the beginning, with this openness for also including this planning and execution in their work.
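The planning-by-analogical-reasoning idea described above, saving solved problems together with the rationale for their solutions and then adapting them to new problems instead of planning from scratch, can be sketched roughly as follows. The class names, the similarity metric, and the pickup/putdown steps are illustrative assumptions, not Veloso's actual thesis system:

```python
# A toy sketch of case-based (analogical) planning: retrieve the stored
# case most similar to a new problem and adapt its plan by substituting
# the new problem's objects, rather than planning from scratch.
# All names and the similarity metric are illustrative assumptions.

class Case:
    def __init__(self, features, plan, rationale):
        self.features = features    # problem description, e.g. {"obj": "A"}
        self.plan = plan            # the stored solution steps
        self.rationale = rationale  # why the plan worked (guides reuse)

def similarity(case, problem):
    """Count shared feature keys -- a stand-in for a real metric."""
    return sum(1 for k in case.features if k in problem)

def retrieve(library, problem):
    """Return the stored case that best matches the new problem."""
    return max(library, key=lambda c: similarity(c, problem))

def adapt(case, problem):
    """Reuse the retrieved plan, substituting new objects for old ones."""
    mapping = {old: problem[k] for k, old in case.features.items() if k in problem}
    return [tuple(mapping.get(tok, tok) for tok in step) for step in case.plan]

# One previously solved problem in the case library.
library = [Case({"obj": "A", "dest": "table"},
                [("pickup", "A"), ("putdown", "A", "table")],
                "direct transfer")]

# A new, analogous problem is solved by adapting the old plan.
problem = {"obj": "B", "dest": "shelf"}
plan = adapt(retrieve(library, problem), problem)
# plan is [("pickup", "B"), ("putdown", "B", "shelf")]
```

The point of the sketch is the division of labor: retrieval finds a structurally similar past case, and adaptation maps its solution onto the new objects.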

Soccer Robots

Manuela Veloso:

So let me just say what happened then. So we had Xavier, and Karen did a great thesis on learning from execution, improving the planning from execution; she introduced situation-dependent costs on that first robot. In some sense it was Reid Simmons' robot, but he was generous; we could use it all the time, it was great. But at the same time, in 1994, my student Peter Stone and I were both at the AAAI conference, and at the AAAI conference there was a demonstration of soccer with two little cars, one on one, by Alan Mackworth, a professor at the University of British Columbia, and his student Michael Sahota, and they were demonstrating this one-on-one robot soccer. And my student Peter Stone was really a lover of soccer, human soccer, and I remember him coming down this escalator, and I was down in some hall, and he told me, "I found my thesis; I want to do robot soccer." And for me it was the first time I had heard about robot soccer, through Peter, and I thought, robot soccer? I didn't even care about soccer myself, and it kind of seemed like, what is this all about? But on the other hand it was a really amazing point, because it made these robots available in our lab. So what robot soccer started with was really these small robots: we could build a field in our lab and we could really delve into the research of this planning and execution in a very complex environment with a lot of uncertainty, in which we actually had to close the loop of the perception problem, where is the ball, where are the other robots, where's the goal, where are the lines; the cognition problem, where should I position myself, what is the strategy, what am I learning from this game; and the action, they needed to move. So you cannot imagine the fascination that started then to get these robots to play soccer. So yes, there were the space robotics, there were the underground robotics, there were the ground robotics, the outdoor robots, and we started this robot soccer.

So in '94, that's how it started, and then I didn't have any robots to play robot soccer, and Peter had to go on with his thesis, so there was this simulation robot soccer, again very compelling, these 11-on-11, very interesting software agents, and Peter started working for his thesis on this problem of multi-agent coordination in this simulation environment, while in parallel I was trying to develop these robots that would eventually move. But you have to realize something: in those days it was so new to build small robots that our first robots, in 1995 and 1996, were built on a ball of glue. We didn't even know how to mount such small things, so we had this melting glue, and Sorin Achim, one of the engineers who worked in my lab, would put in all these motors, all these things, and then when we needed, for example, to change batteries, we would melt the glue, change the battery, put the battery in, and blow so that it would dry.

And that's how it all started in '95, '96, and we got first involved with the competition in Korea, and then we got involved with RoboCup, which started, in '96, with a fantastic workshop with Hiroaki Kitano, Minoru Asada, Dominique Duhaut, Enrico Pagello, Silvia Coradeschi, and myself, somehow the founders of RoboCup. And in those days, this is '96, I remember that we set out to have a competition in '97 at IJCAI. We had no clue if anyone would have any robots there to play soccer, but we started it. And I think that one of the major things in that start was the fact that Kitano and myself and Minoru were very much trying to reach the whole world. So I remember a discussion in Japan in which we were trying to decide what the small-size league was going to look like. And we were trying to see that it had to be something that eventually the whole world could understand, and we were thinking, do we give dimensions in meters or in inches? I mean, the whole paradigm of how do we make it universal. And I remember coming up with this idea and I told Kitano, "That's it, the field is a ping pong table." Everybody knows what a ping pong table is; everybody can buy a ping pong table. So it became a ping pong table: the small-size field was literally the size of a ping pong table, the same material as a ping pong table; you could just buy it anywhere and put it on the floor in your lab. And then we had also another league, which was the middle-size league, much bigger robots, and we were debating how to do it, and I also said, "Okay, let's make it nine ping pong tables," because the field had to be larger, and there you go, the size was nine ping pong tables, and everybody could buy it, could understand what we were talking about.
And it made a big difference, I believe, this type of trying to come up with a problem that was reachable by everyone. And we decided that there was a camera overhead, and that was one league; in the other one, the robots would have cameras on board, so someone who had more interest in the strategy could work on one, and someone who cared more about the actual perception problem and communication would work on the other. Very, how do you say, very engaging; it was really a unique moment, and since then I've been completely dedicated to robots, the fascination of having robots that move, that move. Because it was not a robot receptionist, or it was not like a remote-controlled robot, or it was not teleoperation; it was really, they would move by themselves. And I don't know how to explain it, but the fact that there is this opponent makes us develop solutions for very uncertain worlds, because you don't know what's going to happen. And that's it, how it all started, and I've been in this business of autonomous robots since. Autonomous, you have to realize, this was the condition of the robot soccer framework, and also Xavier was autonomous, the mobile robot at CMU, all of them, but you could not have input from humans; that's it, you were supposed to just let them do everything.

And at the beginning it was hard to do the perception, the lighting conditions, the color, the detection, all of these in real time, which meant that these things had to be processed at the speed of the ball. And the whole paradigm was extremely challenging at the beginning, and if you look at videos from 1997, from our robots, the CMUnited robots, all the way to our current robots, the CMDragons, I mean, you are going to see the gap. So everything went on from there, and a lot of research on understanding these different robots. We were very lucky also to have been part of the Sony AIBO robots from the very beginning. And, through Masahiro Fujita, we had an opportunity to have these AIBO robots at CMU since '97; we participated since then. And then that opened our minds to legged robots, I mean, four-legged robots. I remember they would fall, and they didn't come with any way to stand back up, and I kept telling my student Will Uther in those days, "Will, for autonomy we cannot go and pick them up, so we have to actually have a mechanism, to do the whole controller, to have them stand back up by themselves." And it was beautiful, on top of the strategy and the perception, this conviction, "Okay, they need to stand up by themselves." And so all the understanding of not just the walk itself but also bringing the body back up by itself. And the controller we had developed was actually quite complex, to move the legs who knows where, and then defeat gravity and put the forces in the right places and get the robot to stand up by itself. So we were very lucky also to have that, and that led us to a lot of problems: there was no overhead camera, so the Sony AIBOs had their own cameras, their own sensors, their own motion, and only in 2002 were they able to communicate with each other.
Until 2002, all the teamwork was actually through vision and through seeing each other and eventually they would all go to the ball if they were at the same distance from the ball. I want to point out one thing that was also crucial about the Sony AIBOs and these major landmarks in terms of researching robotics for us.

So when we were in a game in '98, we had this sophisticated probabilistic reasoning about the localization of the robot, being able to decide its motion based on localization within the field. We were in the middle of a game and the robots, ours and the opponents' team, got all entangled with each other trying to get to the ball, and legs were entangled with other legs, and everybody was like, a nightmare. And the referees of the game decided that the only way to solve this problem was really to pick these robots up and put them apart. Now the problem was that <laughs> in our algorithm, the robot would use its own odometry and its own sensing to figure out where it was in the field. The moment they were picked up and put somewhere else, my God, a nightmare. And I remember they were completely lost; they could barely find their own goal. I remember telling Will, my student, Will Uther, "Do not let these referees pick up our robots, <laughs> do not let them pick up our robots, please." But there was nothing we could do. And then another big research question. Yes, people had talked about the kidnapped robot problem, but having lived it there live, seeing these people picking up your robots... I never saw anyone pick up Xavier, which is very heavy, or someone pick up a robot on Mars, but these little things. And then all our algorithms fell apart, because the robots were just teleported somewhere, not through their own motion, and they would wake up in the middle of the field thinking that they were in the other corner, getting this sensing and saying, okay, I'm seeing a yellow goal in front of me, but this is noise, because the probability that I'm there given that I was somewhere else is zero, and therefore our Bayesian probabilities would all fall apart.

And then we had to solve this research problem: how do you localize a robot that actually is teleported from time to time from wherever it is to another place? And that led, for example, to a very nice algorithm, sensor resetting localization by Scott Lenser, which actually tries to reset your localization based on the sensors, to believe the sensors, so we use the a posteriori to actually update where the robot is supposed to think it is. But just so you understand, it was this thing, having this referee picking up the robot, <whispering> that made us, like, understand that this is not like a beautiful mathematical exercise, but it's something that is really needed if you have a small robot. You'll carry the robot somewhere, so within the autonomy there is also this carrying part, very beautiful. And so we went on, and then we had a very big landmark also when Mike Licitra, Stefan Zickler and James Bruce engaged at CMU in the small-size robots with new robots, a fantastic mechanical and electrical hardware platform; we have had these robots since 2005, 2006. We are in 2014 and these robots still work with barely any maintenance; currently Joydeep Biswas does all the maintenance. Mike Licitra is a genius who generated that hardware. So in those days we were able to do, for example, in 2006, the very first kind of passes and receptions of balls at 60 hertz, which means that in 16 milliseconds we made all the predictions of where the robots were supposed to go, with balls at eight meters per second; we would intercept a moving ball at eight meters per second, all based on Jim Bruce's navigation, Jim Bruce's planning for the multi-robot world. So those were amazing; these were the big days, my lab had, I don't know, 30 people in those days, in 2005, 2006, 2007, 2008, those years. It was the CMDragons, this time of the small-size robots being amazing.
And then these AIBO robots, with Sonia Chernova and Doug Vail being amazing also in terms of their abilities to walk fast. Sonia did the learning algorithm to have them learn how to walk really fast on four legs. Doug did activity recognition using conditional random fields, and then Scott was doing this sensor resetting and environmental change detection, and who knows what else. So all of those were completely focused on these robots.
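The sensor resetting localization that Veloso describes can be caricatured as a particle filter that, when the observation likelihood collapses (as when a referee "teleports" the robot), injects fresh particles drawn directly from the sensor model instead of trusting the motion history. This is a minimal one-dimensional sketch with invented noise models and thresholds, not Lenser's original implementation:

```python
import math
import random

# A 1-D sketch of sensor-resetting localization: when the average
# observation likelihood of the particle set collapses (the robot was
# kidnapped), replace a fraction of particles with samples drawn
# straight from the sensor posterior. Noise models and the threshold
# are illustrative assumptions.

def sensor_likelihood(particle, observation, noise=1.0):
    """Gaussian-shaped likelihood of `observation` given pose `particle`."""
    return math.exp(-((particle - observation) ** 2) / (2 * noise ** 2))

def sensor_reset_update(particles, observation, reset_threshold=0.1):
    weights = [sensor_likelihood(p, observation) for p in particles]
    avg = sum(weights) / len(weights)
    if avg < reset_threshold:
        # Kidnapped-robot case: the motion history is useless, so draw
        # half of the particles directly from the sensor model.
        n_reset = len(particles) // 2
        particles = particles[n_reset:] + [
            random.gauss(observation, 1.0) for _ in range(n_reset)
        ]
        weights = [sensor_likelihood(p, observation) for p in particles]
    total = sum(weights) or 1.0
    # Standard importance resampling by the (possibly refreshed) weights.
    return random.choices(
        particles, weights=[w / total for w in weights], k=len(particles)
    )
```

With all particles stuck at the old pose and an observation placing the robot far away, ordinary resampling would never recover; the reset step lets the filter jump to the sensor-supported pose in one update.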

Humanoid Robots

Manuela Veloso:

And then we moved into the humanoid robots, and there we were also extremely fortunate to have access to the QRIOs, the Sony QRIOs, the early robots. Again, Masahiro Fujita was very influential in having Carnegie Mellon get access to these QRIOs, and these robots were used by Sonia for learning by demonstration, and very successfully we were able to teach them tasks. And also in parallel we had the Segway robots; Brett Browning and Brenna Argall were doing learning by demonstration there too, very interesting. So it was the Segway robots, the AIBO robots, the small-size robots, the QRIO robots, all of these in my lab and all dedicated to just autonomous robots. So except for the small-size robots we were not really building robots; we were using the AIBOs, the QRIOs and the Aldebaran Naos, but the small-size robots were the platform we were building. And then finally, this is the stage where I am now, which is also a major change, and I'll explain in a second: around 2011, I started thinking that we had done so much for autonomy in the soccer field, and I was wondering how we could build upon so much research on indoor robotics and try to build also robots for our indoor environments, let's say at CMU, moving around. And in those days Joydeep Biswas, another one of my best students and a genius in terms of hardware and strategy and software and all sorts of skills, had joined the lab. And I told Joydeep, "Joydeep, we have to get these robots moving." So we wanted to have robots now, wheeled robots, tall robots, service robots, and Mike Licitra very generously built the base for those robots with four omnidirectional wheels, really beautiful, low clearance so the robot would not tip over, and a column up on top, and we were ready to just have it move. And the base of the robot was in some sense a scaled-up version of the small-size soccer robots, also built by Mike Licitra.

But anyway, so what happened is that now we had these service robots and we were able to navigate in our environments, and many people had done museum tour-guide robots and indoor robotics, so it was fun; we had SLAM, or we could use a map, we could learn the map, we could use a map and we would do localization. And that turned out to be, well, a very good start, but things did not work as expected. People had previously tried limited spaces indoors, and we wanted the robot to move in the whole building, and there were many things that started to come out as challenges: glass bridges that had a lot of light for our Kinects, which also did not see the legs of chairs that were metallic and very thin, or God knows what. We had a lot of work to do, and Joydeep did a marvelous thesis on this non-Markovian localization and navigation. And believe it or not, just to say where we are now, and I'll explain what's in between: our CoBot robots, these collaborative robots, we have four of them, have navigated at CMU close to 1,000 kilometers by themselves. And the reason why they are able to do this is because of another breakthrough that I think we had. In the middle of, I don't know, maybe the fall of 2011, I couldn't stand anymore this concept that they were never able to do it all; either it was something they couldn't detect, or a door they couldn't open, or something; there were all these problems. So unlike the beautiful scenario in robot soccer, in which we could close the loop and it was always working, these robots in the building were never always working, never; there were always problems.
And one day I said to myself, and this was a big breakthrough for me, I said, this is it, we have to accept the fact that robots have limitations and always will: perceptual limitations, they cannot see everything; cognitive limitations, they might not understand all the tasks they are supposed to do, they might not be able to plan for all the tasks, they might not be able to actually schedule all the tasks that are required within the time limit; and actuation limitations.

So our CoBot robots do not have arms, don't have legs, so they don't go upstairs, they don't open doors, they don't pick up objects. And so now the question was, given all these limitations, what do we do? Do we wait until I have money to buy arms for the robots and better sensors and I don't know what, and equip these robots with all these things and better algorithms, or do we just accept that they have limitations? And it was a major breakthrough, this thinking about saying, it's okay for them to have limitations. And what we introduced and what we started was this concept of symbiotic autonomy, in which they are still autonomous but they can proactively ask for help; they can ask for help from humans. If they were lost in the old days, somehow in some hallway where they could not see the features of the map, they would just stop and say, "I'm lost, can you tell me where I am?" and they would pop up a map of the building on the screen, and some generous human would come by and say, "You are here," and as soon as you say, "You are here," the robot says, "Thank you," and goes. If they were actually having problems with opening a door, pressing an elevator button, picking up an object, they would just say, "Would you mind pressing the elevator button for me so I can get to the eighth floor?" Again, a human would generously press the button, hold the elevator door, and they could go. And so it changed the paradigm, this concept that the robots would still be autonomous: nobody follows them at CMU, they are not chaperoned, nobody goes with them. But we know that if they find a situation outside their abilities, or if their uncertainty is too high, or if for a step of the task they don't have the actuators, they are just planned to ask for help. And they can actually also go to the web, using the OpenEval architecture, in which they actually can query, "What's the most probable place to have coffee in the building?"
And the web says, "83 percent, the kitchen; 25 percent, the printer room," I don't know what, and the robot happily plans a route to go and pick up coffee in a kitchen it had never entered before: "Well, for kitchen, go to location 7605." So this symbiotic autonomy enables our robots at CMU to go everywhere, and Brian Coltin, for example, one of my students, addressed the problem of scheduling and even introduced this algorithm in which the multiple robots transfer items between them to optimize the schedule of their tasks. So that's where we are.
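The symbiotic-autonomy loop described above, in which the robot executes the steps it can actuate itself and proactively turns the rest into requests for human help, might be sketched like this. The capability set and the step vocabulary are hypothetical, not CoBot's actual task language:

```python
# A toy sketch of symbiotic autonomy: the robot carries out the steps it
# can actuate itself and converts every other step (it has no arms, so no
# pressing buttons or opening doors) into a proactive request for help.
# The capability set and the step names are hypothetical.

ROBOT_CAPABILITIES = {"navigate", "speak", "wait"}

def execute_step(action, detail):
    if action in ROBOT_CAPABILITIES:
        return "robot: {} {}".format(action, detail)
    # Actuation limitation: ask a nearby human instead of failing.
    return "ask human: please {} {} for me".format(action, detail)

def run_task(plan):
    """Execute a plan, interleaving autonomous steps and help requests."""
    return [execute_step(action, detail) for action, detail in plan]

log = run_task([
    ("navigate", "to the elevator"),
    ("press", "the button for floor 8"),
    ("navigate", "to the kitchen"),
])
# log[1] == "ask human: please press the button for floor 8 for me"
```

The design point is that limitations are represented explicitly in the plan, so asking for help is an ordinary planned step rather than a failure mode.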

What I work on a lot currently is to just keep pushing forward this autonomy, this autonomy of more than one robot. So if you think about, "What am I going to do for the next 20 years, if I make it," it's really about understanding in further depth the autonomy problem of more than one robot in environments that are very challenging, uncertain and changing, in a way that they also can interact with and service humans. So for this interaction we now have a way to talk with the robot: Thomas Kollar, Robin Sutton, Vittorio Perera and Yichao Sun. Oh, and let me tell you, the student who actually did her thesis on this symbiotic autonomy and understanding how humans would relate with robots was Stephanie Rosenthal, really a major person in my lab, who introduced the concept of human-robot interaction when I had spent all my life thinking about robots by themselves, and then magically now there are humans in the picture when we are servicing our environments, and we have to know how to interact with people. And it became a big new phase in my lab to have this human-robot interaction, not the learning by demonstration that Sonia was doing before, but really this interaction with humans. And we are still in the infancy, from my point of view, of understanding all the human-robot interaction for mobile robots, functional robots, all the way. So we are passionate about this, deploying robots. So we are very hands-on robotics: we have beautiful algorithms, we have beautiful approaches, but I love things that actually work, so our robots move in our buildings. I mean, if you come to CMU, you will be escorted by a robot to my office. I never give directions to my Gates Hillman Center office, 7002; I just say, "CoBot will meet you at the elevator." And there's CoBot; you say, "I'm here," and then CoBot says, "Follow me," and there it goes, all the way.
So it's a part of our daily life; everybody at CMU knows that these CoBots move around. It's the platform for research for many undergrads, for many grad students. We are not done; I mean, there are many things that are not working yet. Fine, so we have a big focus now on safety, guarantees for safety, and Pablo Mendoza is working on this. We have a lot of work. So one of the fascinations is, what do they do when they are not tasked to do anything? If they are so autonomous, what do they do? And Max Korein is working on this problem of generating their own goals, I mean, "I'm going to go there because I've never been there and I want to learn what's happening there. And I want to just do this task; I think it's coming, so I'm going to do it before it comes." So what is this thing about our own goals? So it's a fascination for autonomy, robots that can do things by themselves and learn from experience and become better and know what they don't know.

So in summary, I really care about the AI problem, the artificial intelligence problem. And in fact, the robots are just for the sake of having a platform that moves and services. So then I delve into all the problems of having something that's not human. For me, these robots are always going to be these creatures that are artificial; they are these entities that are not dogs, not cats, not humans, but they still move. And they're not refrigerators that don't move, or smartphones that are in your pocket; it's something that moves like all these other creatures that have life, and this thing is going to be artificial, moving and helping: "Hold this for me. Take me there. Help me push this thing forward. Arrange this room. You better clean up this particular mess that's here. Receive my guests at the door." Doing all sorts of tasks that require mobility; they are not just plain things that you can have on your cell phone. That's all.

Robotics Challenges

Peter Asaro:

So you talked about the challenge of closing this loop in real time of perception, cognition and strategy and actuation. I just wonder, was one of those tasks a lot more difficult than the others and how has that changed over time as technology's improved?

Manuela Veloso:

So definitely the perception task is the nightmare, and to understand an environment from pixels is quite difficult. I think there was a major breakthrough when the Kinect came out, when depth cameras that were accessible from a price point of view became available. In two weeks we had this robot segmenting the image in terms of depth easily, and therefore there was no more need for these two cameras; I mean, they are there in the Kinect, but I'm just saying, it was a major breakthrough, these 150-dollar Kinects, from a perception point of view. I keep thinking that perception is a very difficult problem, but I tend to believe that it's not just, how can I say, general learning about objects in the world that may help; what I think is that perception is purposeful perception. So in a sense, we want to understand a scene when we have a goal. So I'm very much goal-oriented. Imagine you and I coming out of an airplane at some random airport: I have no clue what the chairs at my gate looked like when I landed here, but I found my way out of the airport because I just followed "baggage claim." So look at my perception: it's completely driven by my goal of getting out of the airport. It's not really that I'm processing this whole scene and modeling everything and finding where everything is; you just perceive and understand a scene based on your goals. So I think perception will continue to advance more and more if we actually also include these goals, like we do in robot soccer, like we do for CoBot. CoBot is looking at scenes finding walls so that it moves in the building, finding people or shapes of people so that it can avoid them, and obstacles, but it's not really understanding everything about the scene. And it gets to the kitchen, it has no way yet to find where cups are, and it just says, "Where's the cup? Put the cup on me," done.
So I think that perception has evolved a lot after this Kinect, and we also have these learning algorithms for objects that are being used by robots more and more. Perception made a big advance.

In terms of planning and representations – the reasoning about probabilistic states and probabilistic representations, the choice of actions that can be done in real time through sampling techniques – that is important for us too. Actuators: we have some legs now, we are getting better, we have Atlas, we have the Aldebaran Nao robot, we have that. But I'm not sure that we have made as much progress in actuators. We have manipulators, but I mean, our hands are still a mystery – how they are so good – or in terms of cost; maybe the actuators are still further away than the perception devices. And of course when the laser range finder, the laser scanner, came, it was also a big deal, because we could have robotics without just vision. But now we really need it all in order to have these robots that are intelligent, so hopefully we continue making progress along all three dimensions.

Peter Asaro:

And do you think a RoboCup team is ever going to challenge the World Cup teams? <overlapping conversation>

Manuela Veloso:

Oh, I have no doubt, I have no doubt. So our goal from the beginning was, by 2050, to have these soccer players beat the human World Cup champions. And I tell you something – yes, if you look at our games now, they are still... even you had kind of a smile there. And it's true, we are far, but the gradient has been dramatic, it has been dramatic. You have to realize that in '97 when we started this, I remember that the robots barely moved. There was a reporter standing by me looking at the middle-size field and the robots, and he asked me, "Tell me when the game starts," and we were already like seven minutes into the game. The robots basically just, cha, cha, cha – as soon as they saw the ball, they could barely go to the ball; they lost the ball again, searched for the ball. We were only in the search-for-the-ball business. Nothing of that now. All of them can see the ball, all of them can communicate, they can strategize. Yes, we are short on legs and we don't play outdoors yet, but I have to tell you that since 2007, the RoboCup trustees play a game of five humans against the five winning robots in the middle-size league, which is this league where the robots have wheels but they are big and move fast, and they play on some kind of large field, like an indoor soccer field. And every year we do this, and this year, in 2014, the humans were struggling. They were not really – if Peter Stone or Gerardo or Daniela were here, they would say, "No, we were not struggling," but they were. And Claude and Itsuki and all of them, they played great soccer, but the robots were actually moving really fast to intercept their passes. So they have to run more than they used to – maybe of course they can still beat the robots and pass, and they are smarter, but they have to run more, and that they would acknowledge is right. So who knows by 2050, if we just have better legs – and next year we are going to start playing outdoors. 
So RoboCup is an amazing, amazing enterprise in which every year we change the conditions, we make it more and more challenging. So RoboCup is not going to win until eventually we get there; we keep doing more and more, and that's it, eventually we'll get there. Whether I'll be there, I don't know – 2050, I'm not going to be here anymore – but from a robotics point of view it's a very interesting example of having a grand challenge out there, and they are not giving up, see. We even have RoboCup@Home now, which is robots that interact with people, so on that side we are also understanding what these soccer robots will be like when they also have to interact with people; currently they ignore people, they just see balls and goals and players. So we are seriously thinking about this complete challenge, what's going to happen. But I don't know.

Carnegie Mellon

Peter Asaro:

So who was your thesis advisor for your Ph.D.?

Manuela Veloso:

So my thesis advisor was Jaime Carbonell, and Jaime had also done this derivational analogy – this planning, this reasoning through reuse of analogy, reuse of experience, reuse of reasoning. So Jaime is still at CMU; he doesn't do robotics per se, but he does planning, and he also does a lot of language technology now. So my thesis was core AI planning, but because it was the planning issue and choices of actions, it then turned into robotics – but it was Jaime. But I was also very fortunate to have on my Ph.D. thesis committee Herb Simon, from whom I also learned a lot – I can send you the video clip in which Herb Simon talks about robot soccer and how much he thought it was a fascinating avenue of research. So Herb always cared about more than one decision maker, like organizational science. So when robot soccer came about, with more than one robot having to coordinate, having to exchange strategies, having to decide which actions to take, all this problem of coordinating a team was something he really thought was fundamental. So I got a lot of support, in terms of scientific advice, in terms of exploration and understanding of the depth of this problem, from Herb Simon.

Peter Asaro:

And you decided to stay at Carnegie Mellon after your Ph.D.…

Manuela Veloso:

Yeah.

Peter Asaro:

So how did that work out?

Manuela Veloso:

It's a very good question. I believe I interviewed at 16 places when I finished my thesis. And in those days I could have gone to excellent places, and I'm sure I would have been very happy wherever I went, but there was this fascination of Herb Simon and Allen Newell at CMU. And Allen Newell died the year that I graduated, but I still remember, at the end of May, Allen Newell was already, I believe, quite sick. And when CMU made me the offer, he gave me a call, and I still remember his voice on the phone saying, "Hey, Manuela, you have to stay here, and what can I do to make you stay?" I can remember his voice. And then he died in July. But I'm saying – so it was very hard for me. I had developed all my love for AI, all my love for machine learning, all this love for planning at CMU, and Raj Reddy was there, and also Takeo Kanade, who then gave me all these cameras to actually start. And if it weren't also for Takeo believing you can do it – it was hard to believe that I was going to do robot soccer, you know. It's really hard at CMU because people were doing other things, and I remember some faculty saying, "Oh, it's so hard to get one robot moving, let alone five." Everybody was skeptical – I mean, some encouraging, but others quite skeptical that five robots would move anywhere. And so there were a lot of people that supported – so I was happy I stayed. I don't know if I would have been able to push so far away from where I was toward the robot world of autonomy in other places, but probably yes. But maybe I would not have had these fantastic Ph.D. students that I had – Astro Teller there with all his neural computation. And many of my students – Mike Bowling later, and many others – were involved in this robot soccer and robotics. 
And it was beautiful to work with these students who really – and it's not that we were working on the web, we were not, but this passion for the physical world, to try to do AI with vision and cognition – not just images, not vision per se, but vision for robots that need to make decisions and need to act, always this vision in the service of other things. And having students that had that passion, or developed that passion, in my lab, and post-docs – Jay Modi, <inaudible>, and Tucker Balch – I mean, all of them working there, I was very lucky. So I don't know, we cannot go back and try, "What if she had gone somewhere else?" right? I don't know, I just know that I made that decision. And I have to confess that had Allen Newell not given me that call, or Herb Simon not told me anything, or Raj Reddy not walked down the corridor and said, "Manuela, stay," I might have gone somewhere else. But the three of them, and then Jim Morris and Takeo – all of them were so influential – especially that call from Allen Newell. So, I guess…

Sabbaticals

Peter Asaro:

Did you ever do any sabbaticals …?

Manuela Veloso:

Yes. So in 1999, I had this amazing sabbatical at MIT, and that was, I think, a fantastic time with Patrick Winston and Rodney Brooks. I was in Rodney Brooks' lab, and I got to teach 6.034, I believe was the number, with Patrick Winston and Brian Williams and Tommi Jaakkola. And it was fascinating because, first of all, I had been all this time at CMU, always at CMU, and being in Rodney Brooks' lab, getting to know Cynthia Breazeal and Brian Scassellati, was quite a treat. And again, I was always pushing my planning, you know, trying to get robots to do things – to have them move, to do things – and with Patrick Winston we talked a lot about the vision problem, connecting vision with this purposeful-perception kind of problem and understanding the scene. So that was really good; it was a great year for me, yeah. And then in 2006, seven years later, I went to Harvard, to Radcliffe, and interestingly, I actually chose that sabbatical to learn more about social issues, about the world, about life. We were 50 fellows from all different disciplines – from the social sciences, doctors and biologists and astronomers and artists, all sorts of disciplines. And it was interesting to understand how to talk about robotics with that audience. And I remember an interaction with someone when I gave a talk, when I dared to say, "And the robots think that" something, and a philosopher in the group really questioned me: "What do you mean, they think?" And I was like, "They're like thinking." What do I mean, they think? For me it was obvious to talk about these concepts – robots doing these things – without thinking about the actual meaning of what we are saying in terms of the depth of what a human is. 
And so that made me learn how to talk with doctors, how to talk with social scientists, how to talk with historians. It was also a fascinating year from a different perspective, and then being exposed at Harvard to all the talks, the phenomenal environment. So also a great sabbatical there, and I got to work with Radhika Nagpal at Harvard on the multi-robot problem, and Avi Pfeffer and Barbara Grosz – so a lot of multi-agent work there. And then finally, last year, I was at NYU, in 2013-'14, and again, in this learning-different-things mode, I was at the CUSP center, the Center for Urban Science and Progress, trying to understand data about cities.

And believe it or not, I believe that cities should all be digital <laughs> because there is no reason to have these static patterns of traffic. My view is that you get out of your house and the road would tell you in which direction the traffic is going that morning, at that time of the day, and everything would be optimized so that in the morning, when there is rush hour, Fifth Avenue would be going up, or Sixth Avenue would – all of them would be going up. Why always Fifth Avenue down and Sixth Avenue up, why? It depends on the time of the day and what you need. And imagine the whole city being automated. So I have a big passion for automation, and of course I worked with a lot of data, trying to understand camera data, 311 call data. So I've been delving into understanding more and more data about the cities, and also data about the physical world – not so much data about the news, but again. So that was what I did last year, and I'm back at CMU now, again completely convinced that robots can actively get data. For example, this big data issue: you get a lot of data, but if you want more than that data, unless there is a human with some cell phone that collects the data at some position, you don't get that data. But if you had robots, you could send them actively wherever you want, and they can traverse your space to gather whatever you need. And you are not just in the hands of the humans collecting the data – you can send robots, you can send drones, you can work on many things. It's like this physical space can be handled by other machines, robots on the ground or drones. Anyway, so that was my third sabbatical, and I am back at CMU now.

Robotics and Social Problems

Peter Asaro:

And so have you thought more about the relationship between robotics and social problems apart from traffic?

Manuela Veloso:

That's true, yeah. So we need to understand better how robots can be part of social life, but I do think that these robots are such a rich platform that it would be a shame if humans don't figure out how to use them for the improvement of our life. And so I put more of the burden on the humans to figure this out than on the development of the technology, which is what I really do – to have them move. And I have to tell one final thought, one thing, which is this: I really think that these robots – ground robots, drone robots, autonomous robots – will eventually win people over through their functionality, instead of having people be afraid of them. Imagine if the dishwasher had not been invented, right? People would still be washing dishes by hand. Imagine if the washer hadn't – you would probably still be washing clothes in a river by hand. The functionality that that machine brought to humans is so good, so rewarding, so accepted. If we have robots that actually help people – help the blind cross roads, help the people in nursing homes, help escort people, help capture data about the environment. They are data collectors: when they move they can get the temperature everywhere, the humidity, the Wi-Fi signals – they are just carrying, they are just moving. If they help you, if you need some kind of help for all these tasks, then you start thinking, oh, it's so good to have something that moves around, and I can ask it for help for something; it can go somewhere instead of me, while I'm holding something, it can go and get me something back.

It becomes – it's the function. I think I'm going to spend more and more of my life thinking about having these robots do something useful for people. And then people will understand that they don't have to be afraid – they are just things that will be useful for people. And we are not there yet, because you and I are here in this conference center, and do you see any robots moving around? I don't. I don't. Why not? So we have to get to a point where we get to a place and there are these robots moving. Where are they? We keep talking about robots and this and that. Yes, at CMU, if you go to the Gates Hillman Center, they will be there – they are there, but nowhere else. Even in our lab – they are in the labs – I mean at CMU, I don't want to say only, but it's true that they move in the building by themselves and we do not chase them. And in fact, just to finish, a story: in August of 2012 or so – maybe '11 – I remember entering the lab, and Joydeep was there, and CoBot was not there. We only had one CoBot, CoBot 1, and the robot was not there. And I very casually asked Joydeep, "So, where is CoBot?" And Joydeep told me, "I don't know." I said, "What do you mean, you don't know?" And he said, "I thought you had called the robot." And I said, "I didn't." "Well, I didn't send it anywhere either." This feeling, for the first time, of a creature that was autonomous. Where did it go? Then we found out that Brian Coltin, one of my students, had used it, or I don't know, it had gone somewhere. But at that moment in the lab: "Where is the robot?" "I don't know." And we are saying, "What do you mean, you don't know?" And these debates – should we go and find the robot? Where should we go and find it? We didn't have any logging capabilities yet, we didn't have any way of knowing where it was in the building, nothing; it was just gone. 
And I remember this feeling, about three minutes later, of listening to the robot, the wheels of the robot coming down the corridor, and we said, "Oh God, thank goodness, it's here." It just arrived magically. But it's this – yeah, we have to accept that these creatures will be there in the world doing their stuff, and they need to be of help to us, and we have to figure out how to make them useful ourselves, because that's what we are good at: knowing how to use technology for our own good and for our own improvement. And that's a burden on us – if we choose to use it for poor goals, that's our problem, but we have to make the technology that really is available. And robots are about moving, moving around by themselves – let them go.

Robot and Human Autonomy

Peter Asaro:

So you said also when you were at Harvard that some philosophers challenged…

Manuela Veloso:

Yeah.

Peter Asaro:

…you on this notion of thought. I want to as a philosopher challenge you a little bit on autonomy.

Manuela Veloso:

Yes.

Peter Asaro:

Roboticists talk about autonomy primarily as autonomous navigation.

Manuela Veloso:

Yes.

Peter Asaro:

But you said some of your students are also working on goal planning…

Manuela Veloso:

Yes.

Peter Asaro:

…or goal creation, so it would actually be kind of generating its own goals…

Manuela Veloso:

Yes.

Peter Asaro:

…and tasks like that, so I wondered if you could tell us a bit about what you think about robot autonomy and whether that might encourage something more like human autonomy?

Manuela Veloso:

Well, it's a good question, right? I don't know the limits of what their own goals will be. I do believe that robots have an extremely special purpose, and I tell you, what I find really impossible to reproduce about humans is the breadth of things we do. I scramble eggs, I play squash, I speak five languages, I can read a book, I can walk – come on. These robots – when we get a robot to kick a ball, or when we get a robot to avoid an obstacle and move in the building, my God, that's already half a million lines of code, or a million lines of code. To get the same creature to be able to do all these things is kind of complicated. I think maybe 100 years from now we will be able to integrate everything into a single body. So the goals that CoBot creates, its own goals, are all navigational goals; they are goals within that, how do you say, pickup-and-delivery task, that service space in which the robot is, and nothing else. And actually we don't have a representation that allows it to create goals in general, from a philosophy point of view; they are goals that are basically within the task, goals that are very constrained to the actions. The robot has a portfolio of actions, and so, from a philosophical point of view, you can see that they are in fact creating their own goals, but they are also very limited in the things that they do. And I'm not very good at talking about, you know – I have a very engineering mind and I really like to have things that work, and therefore this is a little bit – I plan on collaborating with people that care about these issues to understand better how to do the right thing, and that's my goal. But other than that, I'm not an expert on these philosophical questions, unfortunately.

Advice to Young People

Peter Asaro:

Okay. So we usually wrap up with a question about, what's your advice to young people who are interested in a career in robotics?

Manuela Veloso:

Well, my advice is that it's a fascinating kind of field, in which they are able to understand how these creatures, these artificial creatures, can really help people, and to devise ways to help people. And I say, you know, we have civil engineers that build fascinating bridges and doctors that solve extremely important problems in health, and we have all these other pillars of our society that are fundamental, and I think that robotics is still a curiosity kind of field. Can machines do this? It's not really that we need them yet. And so, if people are curious about machines that can do things in the environment, that can handle physical space – that's the advice now, in 2014; maybe later it would be that they are also fundamental pillars of what we need in a society. Currently it's more of a curiosity goal. And I do believe that they will be useful to help the elderly and the handicapped, filling in for the limitations of humans, but we are still not there yet. So people who want to join robotics just have to have a big heart for something crazy, and they have to be curious about the technology, they have to learn their math, and be able to delve into this space that is still very unknown from a societal point of view, which is robotics. So that's vague, but it's not that – I don't think that robotics will cure cancer as of now, so you know you cannot have those big goals, and I don't think that they should do robotics because by 2050 they are going to beat the World Cup soccer players. It's more the curiosity about what these devices can do, putting everything together: cell phones talking with machines that move, static cameras, drones. What is this all about? The physical space – they have to have a passion for autonomy, for machines in their physical space.

Peter Asaro:

Great. Thank you very much.

Manuela Veloso:

Thank you. Anyway, I've never told my story for so long here.