Q&A: Bruce Maxwell


Computer Science Chair Bruce Maxwell on robots (big and small) and ways they are proving useful in his classroom on Mayflower Hill.

By Rob Clockedile
Photography by Fred Field

Bruce Maxwell

Bruce Maxwell is a computer programmer, roboticist, violinist, and swimmer. He talked with Colby’s Managing Editor for the Web, Rob Clockedile, about opportunities that come with teaching computer science at a liberal arts college.

So, you’re relatively new to Colby?
I came a year ago fall. This is my thirteenth year teaching, and ten of those have been at small liberal arts colleges. I knew what to expect, and I’ve been very pleased with the students.

Do you ever feel marginalized by your big university peers?
I don’t. I went to Cambridge University for a master’s and Carnegie Mellon for a graduate degree [Ph.D.]. I maintain lots of contacts with people there. When you come to a place like Colby, you understand your research isn’t going to move as fast. You’re not going to have graduate students working full-time on multiyear projects. That doesn’t mean you can’t be cutting edge and do very good work.

You end up building a large family of former students who have gone on to be graduate students. I have former students who are becoming peers. We’re reading each other’s papers and I’m starting to work with them.

We hear about the unique nature of the relationships at Colby, relationships that go beyond the classroom and beyond students’ stay here.
It’s one of the nice things about being at a small place in a small department. At a big university your only contact with students might be standing in front of a course for fifty people. Here I’ve got fourteen in one intro course and that’s big. It’s fantastic.

I also play violin in the Colby orchestra and train with the swim team. I have a lot of informal contact, even with students outside of the major, and that’s really nice. On Monday nights we get out of orchestra at ten. I’ll come up to the lab, and the students know I’m going to be here, so we have a big programming party here on Mondays between ten and one.

There’s a lot of value in that impromptu, out-of-classroom contact.
Last spring in my intro course I started using iChat, because I live twenty minutes off campus and, when I go home for the day, I tend not to come back. I’d get on iChat at nine p.m. and students would get on and ask me questions. At first they were hesitant to use it that way; I think they feel it’s their communication mode, not something to use with a professor. But the really nice thing is that I can usually help solve their problems in five or ten minutes. They don’t have to spend two hours getting frustrated, they feel better about the course, and neither of us has to move.

You mentioned the intro to programming class. That course has more than just CS majors in it?
We call it Computational Thinking. We focus on multimedia processing. It’s a little more interesting, a little more fun, a little more immediately gratifying. We integrate a lot of graphics and image and sound processing, which appeals to a wider variety of students. We’re getting art students interested in digital art. We’re getting students who are interested in video games.

When I first taught it, the students implemented a system that models the way plants grow, making trees and fractal patterns. They ended up with very nice, very sophisticated programs, and it gave them confidence in their ability to work with a computer.

What else is going on in the CS program?
We’re trying to focus the CS program more on interdisciplinary applications of CS. That’s where my interests really lie. I enjoy knowing how computers work and can certainly teach that stuff, but at the end of the day the purpose of computer science is to enable other people to be more productive.

Stephanie Taylor [assistant professor in computer science]—her Ph.D. is in modeling biological systems at the cellular level—was looking at how collections of cells can, with regular exposure to light, be fairly accurate clocks. So she’s tying CS in with the biology part of the curriculum.

I’ve also been working with Frank Fekete in biology on a system that uses computer vision to analyze bacteria colonies. We watch the colonies with time-lapse photography, then analyze their properties as they grow.

Philip Nyhus, in environmental studies, has colleagues who want to know what types of habitat elk like. They have GIS [geographical information systems] data about the geographic characteristics of where the elk are and want to use it to find other places where elk would like to be. So a student integrated a machine-learning package with the GIS package to create something more powerful than either one by itself.
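The integration he describes might look something like this minimal sketch: score candidate map cells by how similar their terrain is to cells where elk were observed. All feature names and numbers here are hypothetical; the actual student project combined a real machine-learning package with a real GIS package.

```python
# Minimal sketch of the idea: score candidate map cells by how similar
# their terrain features are to cells where elk were actually observed.
# Feature names and values are made up; a real system would pull them
# from GIS layers and use a proper machine-learning package.

def suitability(candidate, observations):
    """Average similarity (inverse distance in feature space) between a
    candidate cell and known elk sites. Features per cell:
    (elevation_km, slope_deg, forest_fraction)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(1.0 / (1.0 + dist(candidate, o)) for o in observations) / len(observations)

elk_sites = [(1.2, 10.0, 0.7), (1.4, 12.0, 0.8), (1.1, 8.0, 0.6)]
meadow = (1.3, 11.0, 0.75)   # terrain much like the observed sites
desert = (0.2, 2.0, 0.05)    # very different terrain

assert suitability(meadow, elk_sites) > suitability(desert, elk_sites)
```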

Those are the sorts of things that I find interesting because we can leverage things that we do well to enable people to be more productive and discover new things.

That’s the beauty of the liberal arts approach?
That’s one of the reasons I love to be at a small liberal arts college. Computer science at a place like this has so many possibilities. It’s a lot of fun.

Where do you see your students heading when they leave Colby?
A lot of them eventually do some graduate work, but most of them get out there and get jobs in a variety of places—they might work at a small company doing database stuff, or a financial firm doing market predictions. Some do go directly to grad school, but they’re also interested in getting away from school for a little while.

I’ve seen students turn down multiple offers from big firms to take a less lucrative offer where they get more responsibility doing something more interesting to them. That’s a very good thing. Many of the students who go to work for the big firm doing some sort of pigeonhole job get out of it pretty quickly.

There’s a Colby student, Katelyn Mann ’03, on the team in charge of Google’s home page. She’s part of the team that manages the code behind the page that pops up when you type google.com. That’s just fun. You’re there, front and center.

You’re working on a multi-college, full-sized humanoid project?
It’s a five-institution project. The robot was built by the KAIST lab—a major government-funded research institution in Korea—and the proposal was put together by a colleague, Paul Oh, at Drexel University. The University of Pennsylvania, Virginia Tech, Bryn Mawr, and Colby are involved. Doug Blank at Bryn Mawr has done a lot of robotics education, and he’s working on how we would build simulators. My experience has been in building social robots and computer vision, so our piece is the vision system for HUBO and the social interaction controller-manager.

HUBO is the four-foot humanoid robot. I’m not quite sure what the name stands for.

UPenn is involved with the actual mechanics of getting the robot to move, run, and walk. VT is building a miniature version that will be cheaper and more freely available but have the same capabilities as big HUBO. And then Drexel is working with big HUBO directly, so big HUBO is at their lab now.

Is big HUBO ever going to make it up to Colby?
I don’t think it’ll ever end up at Colby. It’s going to stay down in the Philadelphia area. But this summer we built a test bed where we can try out programs. We have a one-foot-tall humanoid robot that you can buy off the shelf, a $1,000 robot that emulates a lot of the things HUBO can do. We’re able to test the programs and the vision system without doing anything in simulation. We have a real robot, real cameras, real people, but it’s all small-scale. The hope is that we can take that work and transfer it to the full-scale robot.

So how are the students involved? How many and at what level?
This is a five-year grant and there’s money in the grant to cover two students every summer. The program is called PIRE (Partnerships for International Research and Education) through NSF (National Science Foundation) and the purpose is to create collaborations between U.S. and overseas institutions, to take American expertise in social robots and combine it with the Korean expertise in humanoid robots.

What is the end goal of the project? A development platform, or the robot itself?
A couple of things. There’s a hope for more knowledge about how to build these things here in the States. The Drexel team is coming up with a recipe for building your own HUBO. It’ll still cost a hundred thousand dollars, because the parts are very expensive, but that’s not an unreasonable investment for a major research lab.

We’re also going to be doing some demonstrations—you may have heard of the Please Touch Museum in Philadelphia. It’s a very nice museum intended for kids, where all the exhibitions are interactive. The hope is that we can bring HUBO into that situation and actually have kids interact with the robot. Whether they’ll be within touching distance depends on how safely we can guarantee the robot’s actions. At the very least they might be able to stand on the other side of a table and play blocks or something.

How does your past robotics work feed into this?
We’ve spent the last eight or nine years developing a vision system for social robots. It can do a lot of things like find faces, find people and analyze their shirt colors, track blobs, and find text. We had it trying to read nametags at one point. These are all things you want a social robot to be able to do quickly, accurately, and in real time.

Each new generation of students has to learn what came before and how to put the basics together and how to integrate the systems. That’s part of undergraduate education. For them it’s always new. With all of those things, it’s the experience of building systems, but the continuous thread has been the vision system.

Two summers ago we had a vision system running continuously for two weeks. That’s a level of stability the students aren’t used to from their classes, where things just have to run once given the right parameters. This system can’t crash, and it has to handle a lot of unpredictable situations.

Does Colby have any plans to get involved in the many robotic competitions out there?
I’ve been involved in the urban search-and-rescue competition for five years, and I was involved in the hors d’oeuvres-serving competition for five or six years. It takes a lot of work. It’s what you dedicate your summer to, and it takes some time during the year to get that up and running. I’m much more interested in robots interacting with people, which is why, instead, I’m working on projects like robot avatars for the art museum. We just had a conversation with Sharon Corwin [director of the Colby Museum of Art] this morning.

Explain a little bit more about what that is.
This is another NSF grant, which I have with a colleague at Washington University in St. Louis, William Smart. So it’s the Maxwell Smart proposal.

Wash. U has a nice art museum; we have a nice art museum. So the plan is to set up a kiosk and a robot in each museum. You can go to [a kiosk in] the Colby museum and control a robot in St. Louis and wander around the St. Louis museum.

The Maxwell-Smart robot is going to be wandering among millions of dollars worth of art?
We’re going to be mapping the museums. We’re going to have no-go areas, and the user’s not going to have direct control over the robot. They’re going to pick places on the map, but the robot’s going to be in charge of itself. This is something you instill in the students from day one: that line of code that tells the robot “go this fast in this direction” you bound with tests. So if the system tells the robot to go really fast, or to go somewhere it shouldn’t, the robot says, “No.” Period.
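That bounding idea can be sketched in a few lines. The speed limit, the zone list, and the function name here are all illustrative, not from the actual museum-robot code:

```python
# Sketch of "bound every command with tests": clamp requested speeds
# and refuse targets inside exclusion zones. Limits are made up.

MAX_SPEED = 0.5            # meters/second, hypothetical safety limit
NO_GO = [(2.0, 3.0, 1.0)]  # (x, y, radius) exclusion zones, hypothetical

def safe_command(speed, target, no_go=NO_GO):
    """Return a (target, speed) pair the robot is allowed to execute."""
    # Clamp the requested speed into the safe range.
    speed = max(-MAX_SPEED, min(MAX_SPEED, speed))
    # Refuse any target inside a no-go zone: the robot says "No."
    for (x, y, r) in no_go:
        if ((target[0] - x) ** 2 + (target[1] - y) ** 2) ** 0.5 < r:
            return None, 0.0
    return target, speed

assert safe_command(5.0, (0.0, 0.0)) == ((0.0, 0.0), 0.5)  # speed clamped
assert safe_command(0.3, (2.1, 3.0))[0] is None            # inside no-go zone
```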

We’re working on the pieces of that right now. We’re working on the mapping piece, and we’re thinking about what the interface may look like. A lot of these pieces we’ve done before; it’s just a matter of integrating them in the right way. We’re going to be making a big push on that this coming summer.

With the popularity of Lego Mindstorms, Robosapien, and other commercially available robotic toys, some of which are programmable, do you see more students coming in having played with robots?
I haven’t. Maybe it’s because the students who had the opportunity to play with Lego Mindstorms are just now coming in as first-years.

In fact, it’s interesting. In computer science I feel like we’re seeing fewer students who’ve opened up the hood on a computer and looked inside. Every student uses a computer, and a lot of their social life is based on the computer, but if you ask how many have cracked a case and inserted a memory module or a PCI card or changed a hard drive, it’s very few. They can install software if it involves point and click, but how many have downloaded GIMP and UNIX tools onto their machines? Very few.

Why do you think that it is?
I think it’s because they don’t have to. The Windows machines are more user-friendly, and with Macs it’s all right there. You don’t have to know what’s going on under the hood, whereas in my generation, to program a computer you really had to dig down into the details. After you owned a computer for two years you cracked the case and stuck more memory in, or put a different hard drive in. It was just something you did.

Did you start messing around with robots back then too, or did that wait until you were in school?
That waited until I was in school, but I started programming when I was ten or eleven, and I was teaching Logo at a summer camp when I was thirteen. I remember sixth grade, when our teacher brought in an Apple II and was showing us how to do things. And the whole time I…wanted to know how the thing worked…how I could do what I wanted to do. And it just hurt.

Movies like A.I. and Bicentennial Man lead us to expect domestic helper robots in our homes someday. Do you see a future with a humanoid robot in every home?
I definitely see applications for robots in a home. In Japan they’re really concerned about elder care, so they’re looking at humanoid robots. This is why they’ve put a lot of money into the mechanical side of humanoid robots. They need robots that can function in a human environment and do the sorts of things people do, to provide a first line of elder care for an aging population.

There’ve been several large projects in the States looking at nurse-bots and public care robots. Getting a robot to be functional inside a person’s house has certainly been a goal of the robotics field and with computation continuing to increase exponentially we can do things today that we couldn’t imagine ten years ago.

What are the constraints that keep us from being there now?
Processing power is a constraint. When you have more processing power it opens up new techniques that aren’t possible without it. Things like building a map. You have all these landmarks you’re trying to keep in memory, then you change one landmark and that changes the relative location of all the others and now you have to solve these very large complicated matrices.
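That coupling can be shown with a toy one-dimensional example (all the measurement numbers here are made up): every position estimate comes out of one joint least-squares solve, so adjusting any single measurement shifts all the others.

```python
# Toy 1-D illustration of why changing one landmark measurement couples
# all the others: the estimates come from solving one joint
# least-squares system. All numbers are made up.

def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] @ [x, y] = [e, f]
    by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

# Robot starts at x0 = 0 (fixed). Equally weighted measurements:
#   odometry:            x1 - x0 = 1.0
#   landmark seen at x0: l  - x0 = 2.0
#   landmark seen at x1: l  - x1 = 1.1   (slightly inconsistent)
# Minimizing the total squared error gives the normal equations
#   2*x1 -   l = -0.1
#   -x1  + 2*l =  3.1
x1, l = solve2(2.0, -1.0, -1.0, 2.0, -0.1, 3.1)
print(round(x1, 3), round(l, 3))  # prints: 0.967 2.033
```

The solver spreads the 0.1-meter inconsistency across both estimates rather than trusting any one measurement; with thousands of landmarks, that same joint solve becomes the very large matrix problem he mentions.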

Today I can buy a one-foot-tall humanoid robot for a thousand dollars that has fifteen or twenty degrees of freedom and that I can control wirelessly. Ten years ago, no way. But now it’s off the shelf.


