Joy After Sun

With his corporate ties cut, the "Edison of the Net" speaks freely on the challenges facing Sun, the Net, and, of course, Microsoft.
By Bill Joy; Brent Schlender

(FORTUNE Magazine) – What's next? There's no one in IT better equipped to answer that question than Bill Joy, a man FORTUNE once dubbed the "Edison of the Internet." Nor could there be a better time to ask him than now, as the 48-year-old computer whiz embarks on his next gig after resigning as chief scientist of Sun Microsystems, the company he helped found 21 years ago. As you will see in the next few pages, the breadth of Joy's interests and expertise is exceeded only by the piquancy of his opinions. He's a curious, often self-contradictory blend of philosopher, doomsayer, aesthete, Microsoft basher, political idealist, and, naturally, geek.

The day after he resigned, he dropped by the office of FORTUNE's Brent Schlender for a three-hour chat covering everything from how best to scan old photos into a computer to the high jinks of Orson Welles. He talked about the fallout from his April 2000 article in Wired, which concluded that robotics, nanotech, and genetic engineering were emerging so quickly that--if we weren't careful--they could endanger the human species. But mainly he talked about the Internet, a hotbed of innovation that he--perhaps more than any other individual--has helped shape. His musings tumbled forth, bordering on rambling before being saved by just-in-time epiphanies as he discovered ever more parallels in the workings of nature, markets, human behavior, and computer science. There was one question about the future he couldn't really answer, however, and that was what he would be doing later that evening ...

WHAT'S ON YOUR TO-DO LIST NOW THAT YOU'VE LEFT SUN?

Yesterday I had my tech guy come to the house and disconnect my Sun network, and tomorrow I'm having them shut down my company e-mail account.

MAN, YOU'RE REALLY CUTTING THE CORD COMPLETELY.

Well, limbo is not a good place to be.

Seriously, though, I'm interested in figuring out how we can build a Net that is a lot less prone to viruses and spam, and not just by putting in filters and setting up caches to test things before they get into your computer. That doesn't really solve anything. We need an evolutionary step of some sort, or we need to look at the problem in a different way.

I'm not convinced there's not something modest we can do that would make a big difference. You have to find a way to structure your systems in a safer way. Writing everything in Java [a programming language created by Sun] will help, because stuff written in antique programming languages like C [a widely used language created by Bell Labs in the early 1970s] is full of holes. Those languages weren't designed for writing distributed programs to be used over a network. Yet that's what Microsoft still uses. But even Java doesn't prevent people from making stupid mistakes.
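A minimal sketch of the safety difference Joy is pointing to, in Java; the class and variable names are illustrative, not from any real system. In C, copying past the end of a buffer silently overwrites adjacent memory, the classic flaw most viruses exploit. The Java runtime checks every array access, so the same mistake fails loudly instead of corrupting anything:

    public class BoundsCheckDemo {
        public static void main(String[] args) {
            byte[] buffer = new byte[16];
            byte[] input = new byte[32];   // attacker-sized "input"
            try {
                for (int i = 0; i < input.length; i++) {
                    buffer[i] = input[i];  // overruns at i == 16
                }
            } catch (ArrayIndexOutOfBoundsException e) {
                // In C this overflow would silently trample memory;
                // here the runtime stops it before any damage is done.
                System.out.println("Overflow caught: " + e.getMessage());
            }
        }
    }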

My own biggest mistake in the last 20 years was that sometimes I designed solutions for problems that people didn't yet know they had. That's why some of the things that could've made a difference couldn't find a market. When one of these viruses hits people between the eyes like a two-by-four, they know they have a problem. Still, the right time to address it would have been a while ago. The hardest part isn't inventing the solution but figuring out how to get people to adopt it.

DO YOU HAVE ANY SPECIFIC IDEAS FOR SOLVING THE VIRUS PROBLEM? AFTER ALL, MOST COMMERCIAL ACTIVITY DEPENDS ON THE INTERNET.

Nature deals with breakdowns in a complex system with evolution, and a very important part of evolution is the extinction of particular species. It's a sort of backtracking mechanism that corrects an evolutionary mistake. The Internet is an ecology, so if you build a species on it that is vulnerable to a certain pathogen, it can very well undergo extinction. By the way, the species that go extinct tend to have limited genetic diversity.

ARE YOU IMPLYING THAT MICROSOFT WINDOWS IS VULNERABLE TO EXTINCTION PRECISELY BECAUSE IT IS SO DOMINANT?

I wasn't thinking of any particular piece of software, but if you're running a monoculture of software--duh, this is not good. People have studied how to make software systems more reliable by running three distinctly diverse implementations at the same time and then comparing the results. That's what they used to do in the space program, when not only were redundant systems built for, say, guidance, but each of them also ran on different computers with different software.

It may seem like a big effort to write programs several times, but not if you do it in a modular way. That's because, if a program is built out of 20 modules and you write two versions of each, you've now got 2^20--more than a million--possible combinations. Then, if you test each combination to see how "fit" it is in some fitness landscape, you're basically doing what evolution does.
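A rough Java sketch of the multi-version idea Joy describes; the names and the voting rule are illustrative assumptions, not taken from any real system. Independently written implementations of the same module run on the same input, and any disagreement is flagged, much as the space program's redundant guidance computers cross-checked one another:

    import java.util.List;
    import java.util.function.Function;

    public class NVersionRunner {
        // Runs every version on the same input and returns the result
        // only if all of the diverse implementations agree.
        static <I, O> O runAll(List<Function<I, O>> versions, I input) {
            O first = versions.get(0).apply(input);
            for (Function<I, O> v : versions.subList(1, versions.size())) {
                O result = v.apply(input);
                if (!first.equals(result)) {
                    throw new IllegalStateException(
                        "Implementations disagree: " + first + " vs " + result);
                }
            }
            return first;
        }

        public static void main(String[] args) {
            // Two independently written "versions" of the same checksum module.
            Function<int[], Integer> v1 = a -> { int s = 0; for (int x : a) s += x; return s; };
            Function<int[], Integer> v2 = a -> java.util.Arrays.stream(a).sum();
            System.out.println(runAll(List.of(v1, v2), new int[]{1, 2, 3})); // prints 6
        }
    }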

This is not something I thought of. People have been publishing papers about it for years. But the fact that the standard industry practice is to do none of this shows that software engineering as a discipline is in the Dark Ages compared with something like mechanical engineering. We shouldn't really build servers or operating systems that are genetically inferior, but we do.

WHY CAN'T WHAT'S BEEN GOING ON IN THE LABS FOR DECADES WORK ITS WAY INTO THE REAL WORLD?

Because the business is such a horserace. I've tried to get people at Sun to completely re-architect Solaris [Sun's version of Unix] in a more modular form. We did provide people with tools like Java to build safer, more reliable services on the network. But Java has been underappreciated because, once again, it was a solution to a problem people had heard about but had not felt viscerally--whereas the perceived cost of not choosing Microsoft or IBM was felt much more measurably and emotionally.

IS THE RECENT VIRUS EPIDEMIC WAKING DEVELOPERS TO THE NEED TO DESIGN THEIR SOFTWARE DIFFERENTLY?

People still don't recognize the scope of what we have to do. You can't simply write a new, multimillion-line program in C and expect it to be reliable unless you're willing to work on it for 20 years. It takes such a long time because that language doesn't support the easy detection of the kinds of flaws most viruses exploit to bring down systems. Instead, you need to use a programming language with solid rules so that you can have the software equivalent of chemistry: the predictable interaction of code as it runs. But on the network, where part of the software works here and part of it works there, programs also behave in emergent ways that are more biological and difficult to predict. So until you have a science of doing distributed computing, software developers will continue to just throw stuff out there. That's why the Net is not going to be secure.

Also, distributed software systems have to be a lot simpler than they are now for us to have any hope of understanding even the mechanistic consequences, much less the nonlinear, biological consequences. You may not want to print this, but why have we been so fortunate that no one has done a Sobig virus that wipes your hard disk clean? It's just one more line of code. Just one line.

That said, I suspect some of these virus writers never expected their bugs to replicate quite the way they did. The fact that a virus goes hypercritical doesn't necessarily mean it was intended to. You could take a loop of code that is perfectly functional and add or delete a single character and unintentionally turn it into an exponential. Then again, perhaps they were just curious what would happen.
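A toy simulation, not anything Joy wrote, of why that one changed line matters: each "generation," every infected machine forwards a message onward. Forwarding to a single contact spreads linearly; change one line so it forwards to every contact and the growth goes exponential. The numbers here are assumed for illustration:

    public class SpreadDemo {
        public static void main(String[] args) {
            long infected = 1;
            int contactsPerMachine = 10;
            for (int generation = 1; generation <= 6; generation++) {
                // Intended behavior: each machine forwards to ONE contact,
                // so the count grows linearly:
                //     infected += 1;
                // One changed line: each machine forwards to ALL contacts,
                // and the count explodes exponentially:
                infected *= contactsPerMachine;
                System.out.println("Generation " + generation + ": "
                    + infected + " machines");
            }
            // After six generations: 1,000,000 machines, not 7.
        }
    }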

Spam is a different matter. It is mainly the result of the Internet having no friction. As long as e-mail is free, you're going to get a lot of spam because there's no disincentive to send it. A simple thing like requiring every Internet service provider to charge for sending mail could be a limiting factor.
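A back-of-the-envelope sketch of that friction argument; the 0.1-cent price is an assumed figure for illustration, not one Joy proposes. Even a tiny per-message charge barely touches a normal user while wrecking the economics of a spam run:

    public class SpamEconomics {
        public static void main(String[] args) {
            double pricePerMessage = 0.001;    // assumed: $0.001 per e-mail
            long normalUser = 50 * 365;        // ~18,250 messages a year
            long spammer = 10_000_000;         // one spam run
            System.out.printf("Normal user pays $%.2f a year%n",
                normalUser * pricePerMessage); // ~$18.25
            System.out.printf("Spammer pays $%.2f per run%n",
                spammer * pricePerMessage);    // $10,000.00
        }
    }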

Another reason spam is so bad is that so many companies use Microsoft Outlook for reading e-mail. Again, because that program is written in C, it's quite easy to design a virus to go through your e-mail address book and broadcast spam to all the people you know. As soon as your company starts using Outlook, you can see emergent, horrible, almost biological things start to happen. So by using Outlook, you're not practicing safe e-mail. We need a "condomized" version of it.

IS IT REALLY FAIR TO BLAME MICROSOFT FOR SO MANY OF THE NET'S WOES?

The problem with Windows isn't so much that it's insecure, but that it is stale. The company has flailed away, making changes mainly to protect its monopoly. So lately, instead of getting better with each new release, Windows is just getting different.

Also, Windows isn't well architected. There's a simple way to find out if an operating system has been well designed: when you get an error message, go to the help system and look up the exact words in that message. If the system was built around a coherent architecture, the help and the error messages will share a consistent vocabulary for talking about what's broken.

All you have to do is try it on a Mac and on a PC to see the difference. Apple took the time to come up with a concise vocabulary, but in Windows the designers of the help system used different terminology from the programmers. That reflects a lack of design discipline, which means that as the system grows, so does the ambiguity of the software itself. The result is a system encrusted with multiple layers of things that weren't really designed in so much as bolted on. Plus there are inessential artifacts of DOS from 20 years ago that still peek through and make trouble.

Now Microsoft's working on a new version of Windows called Longhorn. But there are so many people working on it that it can't be conceptually simple. Bill Gates is a very smart person and is very dedicated, but you can't change the fact that it is human nature for people to carve up a problem and try to own things, for the complexity to accrete in corners, and for the vocabulary of the project not to make it all the way across.

DESCRIBE THE TRAJECTORY OF YOUR CAREER AND WHERE IT MIGHT LEAD NEXT.

I'd divide it into six chunks. As an undergraduate at the University of Michigan, I did numerical supercomputing and got to program some of the early Crays. Then I went to Berkeley and started working on Unix and building Internet protocols into it. My third stage was when we started Sun and built workstations and a distributed network file system and the Sparc microprocessor.

I was all set to leave Sun in 1987 when the company entered into a contract with AT&T--which actually owned Unix--and asked me to completely rewrite it in a modular way. But I couldn't find the right programming language, so my fourth career didn't really go anywhere. Then, after the San Francisco earthquake in 1989, I moved to Aspen and started a research lab for Sun called Smallworks, where I messed around some more with the Sparc chip and some other odds and ends.

In 1994, when a large block of ten-year options vested, I was thinking about leaving Sun again, but then the web came along and [CEO Scott McNealy] asked me to stick around a little longer. So I re-enlisted for the second time. That turned out to be the fifth stage, when I worked on the Java programming language, the Jini and JXTA concepts [networking and peer-to-peer technologies, respectively], Java chips for cellphones and smart cards--all that J-stuff. And finally, what really sucked up a lot of time the past couple of years was the aftermath of my Wired article, when I decided to try to expand it into a book that warns about why biotech, nanotechnology, and robotics have the power to render human beings extraneous. That is what I'd have to call the sixth phase of my career.

But I also did other things on the side. I had a gallery in San Francisco that sold the work of untrained, "primitive" artists. I was on the board of the Oregon Shakespeare Festival for four or five years. I'm also really into architecture and architectural modeling on computers. I've worked with Christopher Alexander [the renowned Berkeley professor, artist, and author of A Pattern Language] and Richard Meier [who designed the Getty Center in Los Angeles]. Great architects are the last of the purists. What they do is not derivative.

When I think of my own work, most of it is built upon the efforts of others. The Unix work I did was derived from the work of Bell Labs and was more like a remodel than new construction. I'd really like to go and do something that's more like Java--that starts from a clean sheet and that isn't required by its compatibility with something else to be so complicated. Unfortunately, too few people get to do that in our industry.

TELL ME A LITTLE MORE ABOUT THAT SIXTH PHASE, AFTER YOU WROTE THE WIRED ARTICLE.

I came to view that article as my version of either public service or public penance. I eventually found that it's not fun talking about not-fun stuff all the time and trying to think of ways to shoot down arguments people use to make themselves happy but which aren't true. It's a very negative energy, and I just couldn't do it anymore. I mean, what good did Orson Welles do scaring the bejesus out of people with War of the Worlds?

You see, the book initially was to be a warning book. But Sept. 11 rang a bell louder than any bell I could possibly ring about the perils of the world. And we've had other warning shots that have raised people's awareness of some of the things I was going to write about: We've had SARS, we've had mad cow disease, we've had "weapons of mass destruction" as the word of the year. Just saying there's a problem is no longer sufficient, so now I'm thinking more of a prescriptive book. And I have some ideas.

If I were to propose one thing that we as the human race need to do, I'd say we can't let the future just happen anymore. If too many of the possible futures are catastrophes, we have to try to steer down the less dangerous paths. That implies that you somehow have to manage markets, geopolitics, and human behavior in the way we have become able to manage the scientific process. Those are almost inconceivable tasks.

So what does it mean to apply design to the choice of our future? I don't have a good answer for that. It's an existential question: If we don't choose, the choice will be made for us in a way we won't likely want. But it's so much more convenient to go on pretending that the bad guys aren't out there, and not acknowledging that all it would take is some teenager making a minor modification to a virus like Sobig to shut down all of corporate America.

OR EVEN SOMETHING RANDOM. LOOK AT THE BIG POWER BLACKOUT IN THE NORTHEAST.

That's another result of flawed design, because the grid wasn't really designed at all; it just evolved into what it is. Why did the state of Michigan, which had plenty of generating capacity to supply its own needs, go black because there was a problem in Ohio? Can you even build a grid like we have now that is reliable? It may not be possible. Can you secure it against people blowing up the power poles? That's what they did in Iraq. Pipelines and everything else are vulnerable too.

So the real question is, What does it take to get past one of these evolutionary dead ends? One answer is an extinction, which is the kind of catastrophe nobody wants. The alternative is to design a whole new approach to ultimately replace the old one.

Amory Lovins, a neighbor of mine, talks a lot about relying more on local generation of power than on these large centralized plants and the big power grid. If you generate power locally and renewably with wind or solar, use more energy-efficient devices, adopt hydrogen fuel cells or some other noncarbon technology that doesn't generate greenhouse gases, and develop some kind of power-storage technology so that you have reserves instead of being so dependent, then in many places you wouldn't even need the grid.

THE GRID WE HAVE IS PARTIALLY A RESULT OF REGULATION. DO YOU CONSIDER THAT TO BE THE ANTITHESIS OF DESIGN?

It's a very clumsy form of design. It's about preventing bad things, but it also often makes presumptions about what is possible that turn out to be limiting--presumptions that prevent better solutions from emerging. So instead of prohibiting things, a much better way is to provide economic feedback reflecting true cost, so that the things you don't want to happen cost more than the things you do want to happen.

What is the actual cost of greenhouse gases, for instance? If you create a marketplace mechanism to solve that problem, you will probably end up creating wealth, and people will stop doing the stupid things they do now only because those things cost them nothing. The Soviet Union collapsed not because of communism or central planning, but because of corrupt accounting. They couldn't organize the means of production because everybody was lying about everything. It was a game of fake numbers, and when you do that, you get crap for answers.

WHY, REALLY, DID YOU LEAVE SUN? TO BECOME MORE INVOLVED IN PUBLIC ISSUES LIKE THESE?

There's no ideal time to leave a company, but I feel now that all the projects and strategies at Sun are in good hands. Sure, I could've found another project that needed incubation. We had one, involving 20 or 30 people, to design a new kind of network data-storage architecture that I could've stayed involved in. It doesn't have a code name that starts with a J, though.

The problem with big projects like Java or rewriting Unix or designing the Sparc chip is that they require a five-year commitment. So when you come right down to it, I had to decide, "Do I want to push this big rock up a hill again?" Not this time.

Bill Gates faced a similar choice with his Longhorn project. He probably has a lot of great ideas and all these brilliant people, but he also has this antecedent condition he has to take into account--keeping it somewhat in sync with the old Windows. So the beautiful vision may fail because it has to be compatible. I've often wondered why they can't, for once, do something new--I mean really, really new. But then, when I asked myself that same question, that's when I knew I had to leave Sun.