R. W. Johnson Jr., the founder of Johnson & Johnson, liked to say about his company: “Failure is our most important product.” He meant that J&J learned how to create useful products by attempting to create many unsuccessful ones as well.
There is no way to predict the outcome of research: that is what makes it research in the first place (as opposed to mere search). You are looking for something that you don’t know yet.
Given the hard constraints imposed by the real world, it should come as no surprise that most research results are actually failures, in the sense that their immediate results are not really useful or applicable.
The unfortunate fact is that, in order to get your results published, you have to make the research look good. Almost no conference committee likes to publish negative results. To have a paper accepted, you have to beat everybody out there, by some metric. The consequence is that many authors spend a great deal of effort obfuscating the results they have actually obtained, packaging them in such a way as to make them look good.
For example, this can involve a “careful” selection of benchmarks that do not really represent reality, but merely highlight the positive features of the ideas. There’s always a benchmark that will make your research look good. If there is no good benchmark, you can always tweak the baseline against which you are comparing: why not compare against unoptimized code, or some obsolete implementation, or perhaps use some favorable unit of measure (e.g., plot everything in clock cycles, forgetting to mention that a cycle is 1ns for the adversary and 30ns for the evaluated system)?
But good research provides more results than just an artifact. A very important result of research, which is often neglected, is the lesson that has been learned by doing the research. Even if the results are bad, the lesson can be extremely valuable.
Let me give you a concrete and dramatic example to illustrate: starting in the ’70s and tapering off in the mid ’90s, there was a substantial amount of research on dataflow machine architecture. One of the most active groups investigating this topic was at MIT, led by Jack Dennis and Arvind. To be more specific, I will just focus on Arvind. Dataflow was trampled by superscalar microprocessors, and most of that research does not seem to have much commercial applicability nowadays. However, Arvind’s students who worked on dataflow machines, designing them, building their compilers, and building their hardware, have become virtually a mini “who’s who” of computing personalities. You can see some of them at the web page about Arvind’s 60th birthday.
Here’s a sample: Bob Iannucci is head of Nokia’s research center, Greg Papadopoulos is Chief Technology Officer and Executive Vice President of Research and Development at Sun, Keshav Pingali and Derek Chiou are Professors at the University of Texas at Austin, David Culler is a Professor at the University of California at Berkeley, James Hoe is a Professor at Carnegie Mellon, and there are many others. The point I want to make is: what you learn from your work may be more important than the work itself. What Arvind taught these people is much more than dataflow machine architecture; it is how to think critically about research, and how to explore new fields by asking the right questions and seeking a deep understanding of the solution. This has enabled them to be successful researchers themselves.