Predicting success of early-stage biotechnology

Madison, Wis. – After a session at the recent Wisconsin Early Stage Symposium, I was talking with an investor from Indiana and explained to him that I analyze biotechnology. He then asked me the $64,000 question every investor would like answered: how do you determine whether an early-stage biotechnology company will be successful?
It is a great question but, I submit, the wrong one to ask, because there are no unambiguous or concrete criteria one can use to determine whether a promising technology will succeed scientifically. One can, however, look for warning flags that provide a measure of that technology's risk of failure.
Therefore, a better question to ask is this: are there reasons that a particular technology might not succeed? By asking the question this way, you avoid the impossible task of trying to predict success and instead look to reduce risk, improving your odds by investing in technologies with a lower chance of failure.
Invariably, when considering a promising technology at this early stage of development, the investor is presented with very exciting laboratory and perhaps animal studies. The inventor claims that his product has tremendous market potential, and there may be a lot of press hype surrounding it (remember Interferon?).
What savvy investors do
The savvy investor does his due diligence and makes his own assessment of the market potential, the business plan, company structure, etc. All of these are certainly important considerations, but even with the best structure, financing, and business plan in place, the whole enterprise ultimately rests on the success of the technology, which usually is not fully tested.
Let me turn to a case study to illustrate the technological pitfalls that can derail even the most promising science.
Several years ago, a colleague published very exciting results showing that a simple plant oil – the oil that makes oranges taste “orangey” – could both prevent and cure advanced breast cancer in rats. Better yet, the compound, perillyl alcohol, or POH, showed no toxicity in the animals. POH already was approved for human consumption as a food flavoring, was cheap to produce, and was readily available, so there was high hope that POH would become the first cancer chemotherapy and chemoprevention agent devoid of side effects.
Laboratory studies showed that POH stops cancer cells from growing and causes them to self-destruct. Studies in the rat breast cancer model confirmed this and further revealed that normal tissues were not affected. Other research suggested that POH interfered with a biochemical pathway that often is abnormal in human breast cancer. All of these pieces of evidence fit into a convincingly coherent picture of an exciting and novel anti-cancer agent. Based on these findings, clinical trials began.
The early phase I trial revealed that POH is metabolized in humans precisely as it is in rats and also confirmed that POH was non-toxic in humans. These results added to the enthusiasm for the product.
Phase II trials were then undertaken in an attempt to treat human breast cancer. In these trials, POH showed no anti-cancer effect at all, and it was removed from the experimental therapeutic pipeline. What went wrong?
POH lessons
The first lesson from the POH failure is this: It always is risky to extrapolate experimental results from rodents to humans. Simply because a rodent malignancy occurs in the same tissue as human cancer does not mean that it is the same type of cancer in both species. Rodent cancer models, like the one employed in the POH experiments, use genetically homogeneous inbred animals and the experimental cancer arises from a single, artificial genetic cause. In contrast, human cancers occur in a genetically diverse population and are initiated by many different genetic events. Thus, there is significant risk of failure when human trials are based on the results of a single animal disease model.
Second, the mechanism of action of POH was insufficiently established before the clinical trials were initiated. The data had not been adequately replicated and were weak to begin with. In fact, while the clinical trials were underway, another lab found that POH actually affects a completely different biochemical mechanism than originally believed, and that the original results were wrong. Importantly, the correct mechanism of POH's anti-cancer activity may be relevant only to a small subset of human breast cancers and may be more important in other malignancies.
Since the proper mechanism of action of POH was not accurately established and the rat cancer model could not be generalized to human breast cancer, the human trials were not targeted at the appropriate malignancy and thus were doomed to fail.
Yet the risks of failure were discernible before the POH clinical trials began. Critical laboratory data were weak, the rat cancer model was too narrowly focused and untested, and the clinical trials were initiated too early. These warning flags could have been picked up by an objective reviewer who understood the science.
Evaluating science
I sometimes am called upon to evaluate the science behind products and technologies at a stage of development similar to that of POH when it entered clinical trials: the technology shows great promise based on lab and animal studies, but no one knows whether it will work in humans. This is a high-risk, make-or-break juncture in the long process of taking a scientific idea to market.
The difficulty in identifying warning flags at this critical stage of development is that each technology will have its own unique flags that portend possible failure. Indeed, there likely are at least as many types of warning flags as there are technologies to be developed.
Therefore, the first and obvious requirement in any technology analysis is to seek input from a professional with good knowledge of the science. But doesn't this raise the question: who has a better understanding of the technology than the scientists who developed it, and aren't they already telling you it is sound?
This brings me to the second, and equally important, requirement for any technology analysis — it must be objective.
An objective, informed opinion is critical for thorough due diligence, and I submit that such an assessment is almost impossible for a non-scientist, or even for a scientist who is invested in the success of the technology. It is as hard to see flaws realistically in one's pet project as it is in one's own children.