The Future of Computer Science: An Interview with Ken Calvert and Jim Griffioen

Posted in: Future of Computer Science

Computer science is a dynamic field where, as Ken Calvert, Ph.D. and chair of the Department of Computer Science, states, “The only way to stay on the leading edge is to invent everything.” Consider that 10 years ago, Facebook, Twitter and iPhones didn’t exist and iPods and digital internet were just coming into play. Ten years from now, what will serve as our technological staples—and have the ideas for those creations even been conceived? Thinking about the future of computer science necessitates a short-term perspective because the industry sheds its skin with increasing frequency. Nonetheless, questions about the opportunities and perils abound: Are innocent, everyday folks who simply want to catch up with high school friends or purchase a cookbook at the mercy of malevolent, identity-thieving hackers? How will sea changes in the industry, such as the advent of cloud computing, affect traditional computer science jobs? And how can anyone hope to keep up with waves of technology hailed as cutting-edge one year and disregarded as antiquated the next? To sort through these questions, we sat down with Calvert and Jim Griffioen, Ph.D., professor of computer science and director of the Laboratory for Advanced Networking.

Q. What are shaping up to be the greatest areas of opportunity in the computer science field over the next few years?

K.C. I think this is an exciting time in computer science. Hardware has become so cheap that both compute cycles and storage bytes have essentially become commoditized. We’re seeing this right now with the cloud computing model. A company can now pay someone a relatively low monthly fee to run their web server instead of shelling out thousands of dollars for hardware, software and maintenance. It’s basically the same transition that happened with electric power 100 years ago. Nicholas Carr’s book, The Big Switch, describes how, back then, factories had to be located next to big streams because that’s where they got the power to run their machines. When electric power grids came along, generation of power became centralized. The same exact centralization is happening with the advent of cloud computing. It makes a lot more sense to have one big centralized data center run by people who know what they’re doing than for every little company to run its own.

J.G. Historically, computer scientists have created technology without fully knowing how it’s going to play out. The internet was built so machines could communicate back and forth and share information. Well, then users came along and said, “I need this to be easy to use. I need a web interface. I need a browser.” None of those uses were part of the original design. Now we have virtualization through cloud computing as well as ubiquitous networking—you can be on the network at all times. We also have a very mobile society. Devices that can maximize the benefits of the cloud will need to be developed. I think we’re on the edge of some of these things just exploding, and once they do, we’ll have a whole new set of issues to address—how to secure such a world, etc.

K.C. What virtualization also means is that software is going to be king. Everything is going to be about software because hardware is so cheap. I think the opportunities in software are tremendous. However, as Jim mentioned, we now have to consider questions such as: How do I keep control of my information? How do I know what information people are collecting about me? Businesses already know a lot about us, and they are going to try to monetize that any way they can. Why do Facebook and Twitter have such astronomical valuations? I believe it’s because they know who is talking to whom and what they’re saying. Privacy is a huge issue going forward, and it’s not just “old people” who are concerned about it. We need to understand how to maximize the benefits of virtualization without the Big Brother risks.

 

Q. What does the future look like on the security front?

J.G. When everyday users weigh the prospective gain of a new application against the possible security risks, they almost always accept the tradeoff. It is difficult to keep up with potential threats and understand the risks because the landscape changes so quickly. On the positive side, though, industry has finally recognized that security cannot be an afterthought. In the past, companies created products and tacked security onto the back end of the development process. Often, that made security hard to add because it wasn’t part of the design from the start. Now, computer scientists are asking, “How do I design the architecture so that if it doesn’t have security now, it is amenable to it later?” There are discussions going on right now about the next generation of the Internet. Naturally, security is a central topic.

K.C. As long as we have the Internet architecture we have, we’re not going to solve many of the current problems. The architecture doesn’t have the things we need to solve them, and there’s just too much inertia to counteract. So it’s hard to say what the future is going to look like there. But again, almost as important as security is privacy. When it comes to the leaders in software and social media, people aren’t given a choice to use the product and still maintain their privacy. Those companies say, “Here are our policies, take them or leave them.” And people agree, even though the policies are not in their favor, because they want to use the product. I printed out the iTunes license agreement once. It was 29 pages of 9-point font. No one is going to read that! That’s why I think we really need more collaboration between experts in computer science and experts in psychology. As systems get more and more complex and everyday people have to make decisions about privacy settings on their computer or home router, we need to design systems and educate users so that the consequences of each decision they have to make are much clearer. That is certainly not the case right now. Unfortunately, until software providers accept accountability for their products—until they have an incentive to change—the situation will remain challenging.

 

Q. What areas in the field besides security and privacy need attention? 

K.C. We need to focus on parallelism. You often hear that Moore’s Law is running out of gas. On the contrary, Moore’s Law is still going strong, but its dividends are now being paid in parallelism, not in faster sequential computation. Rather than doing each step of the computation faster, you can do multiple steps at once in the same amount of time.

J.G. As far as teaching parallelism in the classroom, we have to change our approach. We’ve been teaching students a step-by-step process; basically, that’s how computer scientists have always conceived of writing programs. Well, now we have multiple processors running on a chip, and we have to start thinking, “How do I write a program that does three things at once? Eight things at once?” What happens when the chips allow us to do hundreds of things at once? We need to start changing the mindset of everyone in our program and challenge them to think, “I’m going to do lots of things at once.”

K.C. If you’re only doing one thing at a time, you cannot take advantage of the additional power that Moore’s Law is giving you. So, like Jim said, we have to be able to figure out how to do multiple things at once, like putting meat on the stove to brown and, while that’s happening, mixing other ingredients. That’s the way we need to think about things all the time. It’s not trivial. We want to turn out graduates who can master doing things in parallel because this is the way it’s going to be from now on. Right now, though, the tools we have for taking advantage of Moore’s Law and parallelism aren’t very good, so it’s definitely an area that needs attention.
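
To make the “several things at once” idea concrete, here is a minimal sketch (not part of the interview) in Python: the same independent steps are run one at a time and then concurrently using the standard-library thread pool. The step names and timings are invented for illustration.

```python
# A minimal sketch (not from the interview) contrasting sequential and
# parallel execution of independent steps; names and timings are invented.
import time
from concurrent.futures import ThreadPoolExecutor

def step(name, seconds):
    """Stand-in for one independent piece of work (e.g., browning the meat)."""
    time.sleep(seconds)  # simulate the work
    return f"{name}: done"

steps = [("brown the meat", 1), ("mix the other ingredients", 1), ("set the table", 1)]

# Sequential: each step waits for the previous one, so this takes about 3 seconds.
start = time.time()
for name, secs in steps:
    print(step(name, secs))
print(f"sequential total: {time.time() - start:.1f}s")

# Parallel: the independent steps overlap, so this takes about 1 second.
start = time.time()
with ThreadPoolExecutor(max_workers=len(steps)) as pool:
    for result in pool.map(lambda s: step(*s), steps):
        print(result)
print(f"parallel total:   {time.time() - start:.1f}s")
```

The same shape applies whether the parallel units are threads, processes, or cores on a chip; the hard part, as the professors note, is deciding which steps are truly independent and structuring programs accordingly.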

 

Q. How much of a challenge is it to stay on the leading edge of an industry where technology changes so rapidly, let alone translate those changes into your curricula? 

K.C. It’s almost impossible. We could spend all of our time just trying to keep up. It’s a catch-22: we have to show our students technology and let them get their hands dirty, but the reality is that whatever we show them as freshmen will have changed, and might even be obsolete, by the time they are seniors. Five years ago, everybody was using Perl and CGI scripts on the web. Now those tools have been replaced by a new generation of languages and platforms. So our task is to teach fundamental principles, and I think we do a good job of that. Fortunately, students adapt quickly to the rate of change. They’re fearless about picking up new technology and playing with it. I consider that a good thing, and we need to try to leverage it in the classroom.

J.G. At the same time, we faculty have to make the purpose of learning fundamental concepts and principles clear to them. They have to know that whatever programming language we teach them their freshman year will probably be out of date by the time they graduate. The turnaround times really are that short.

K.C. That actually seems to make it easier to motivate our students to learn the fundamentals, though, because incoming students have seen the short life cycles of various technologies several times already.  It’s pretty obvious to them now that if they don’t focus on the stuff that doesn’t change, they’re not going to be able to adapt when they’re forced to.

J.G. Even though I’m a longstanding faculty member, I often learn from the students. There is so much software out there, so many programs, so many computing languages, that I can’t play with them all. Students will come to me and tell me about a program and I’ll say, “Explain it to me. How does it work? What does it do?” I learn a lot from interacting with them.

K.C. The only way to stay on the leading edge is to invent everything. We have a weekly “Keeping Current” seminar, where students share what they’ve learned or some new technology they’ve discovered. They’re always coming in and telling us about stuff we’ve never heard of. It’s a volunteer thing, very informal, but a lot of fun. There are so many tools around, it’s just unbelievable.

 

Q. How does the future of computer science look from the perspective of college students choosing it as a career?

K.C. It couldn’t be better. In the early 2000s, people were afraid all the computer science jobs were going to be outsourced overseas. That hasn’t happened. In fact, the Bureau of Labor Statistics projects that software engineering jobs will grow by 38% over the next ten years—one of the fastest-growing professions. Our students are in demand and will continue to be in demand for a long time. I am constantly being contacted by people wanting to hire our graduates. It’s clear there are more jobs than people to do them, and I don’t see that changing.

J.G. I was contacted the other day by a mid-sized company that had decided to get into the mobile world but didn’t have a clue how to go about it, and wanted to know if any of our students or graduates could help them figure it out. Companies need people who know how to take advantage of the technology, not just throw around terms. One aspect that will change with the switch to cloud computing, however, is the kinds of jobs available. There won’t be as much need for systems administrators if everything is run through a centralized data center. So what graduates do once they’re in the marketplace may change, but the demand is still very high.

K.C. Our goal is to equip students to be able to adapt to change. We teach them how to think and how to learn because that’s the only way they’re going to survive. If they think they’re going to learn C++, graduate and be a C++ programmer all their lives, it’s just not going to happen.

 

Q. What are some myths and misconceptions about the computer science industry?

J.G. One myth I often hear is that all the exciting stuff is happening in industry. “Companies are where the exciting things are happening,” someone will say, downplaying the need for education in the field. While it’s now true that bright high school kids can get programming jobs with big companies right away, we still believe in the importance of developing a skill set based on the fundamentals that will last a long time.

K.C. I think another myth is that computer science is all about programming. Computing professionals need to have an understanding of programming, but it’s even more important to have a broad understanding of the business you’re in: social networking, data mining, business concepts, etc. The future is about applications and applying computing to problems in biology, medicine, engineering, the environment, business, entertainment and other industries—it’s a great time to be a software entrepreneur! Another myth is that computer science is something only guys would want to do. The stereotypical image of scruffy-haired guys with beards staring at computer screens needs to be replaced by one that illustrates the openness of the field to anyone who wants to get in on the opportunities available.

 

--

Kel Hahn