
Man Versus Machine: Predictive Analytics in Emerging Technology

Posted by Evan Kodra and Chris Hartshorn on Aug 10, 2015 12:10:43 PM

In 2012, a report came out showing that public stock markets had outperformed VC funds for a decade. It has been widely noted that almost all VC returns come from less than 5% of deals – take out the one outlying, top-performing investment and a fund’s average performance can completely deflate.

Why is it so hard to predict success in innovation and startups?

In the era of “Big Data,” predictive analytics might seem like the way to go: take top-performing investments and throw them in one bucket; take failed startups and throw them in another. Have a data scientist build a model that discriminates the two classes from each other. That model should map out the features that make an idea or startup successful. In fact, a very small number of companies out there claim to do exactly this – the likes of Growth Science and Trendify both (somewhat vaguely) claim to have accurate models.
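To make the framing concrete, the “two buckets” exercise is just binary classification. Here is a minimal sketch, with entirely invented features and labels (it is not a reconstruction of what the companies named above actually do):

```python
# Purely illustrative sketch of the "two buckets" idea: label past
# startups as successes or failures and fit a classifier to separate
# the two classes. The features, labels, and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical numeric features (e.g., funding raised, team size, patents held)
X = rng.normal(size=(200, 3))
# Hypothetical labels: 1 = "success," 0 = "failure"
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200) > 0).astype(int)

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5)

# With so few genuine successes in the real world, a score like this
# can look far better in-sample than it ever generalizes.
print("mean cross-validated accuracy:", scores.mean())
```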

This all sounds well and good, but the reality is simple and has been described well: this is not actually Big Data in any sense. There are not enough examples of successful startups to recover a general pattern for what works. The world tends to overanalyze the rare success stories and infer patterns from those successes that are not at all generic. How many times have you heard the phrase, “the next Google/Facebook/Climate Corporation/Uber/AirBnB/Zillow”? How many VCs have you heard talking about the next shiny object while being unable to define it beyond banal generalities? There may be Big Data-enabled startups and use cases for Big Data (as users of Lux Research’s Big Data services already know), but there is no Big Data about startups, only web scrapers and aggregators that can tell you what exists, not what will succeed.

In reality, the next big success is usually completely distinct from the previous one – meaning not only are there few data points, but those few data points don’t even have much in common besides the fact that they worked.

Measure what matters

Search the web using the phrase “predicting startup success” if you’d like and you’ll find plenty of articles and blog posts out there claiming to know what matters for startup success. How unique is the technology or idea? Does the business model make sense? Does the company have a big enough addressable market? Does the management team have the experience and acumen to translate the idea to revenue?

Virtually nobody would dispute that these facets are important.

But what matters is consistently measuring these features, over and over again, year after year – and becoming skillful in calibrating that measurement so that it works well across an extremely broad spectrum of startups.

Our recent research paper details the quantitative validation of an approach that does just that. For a decade, Lux Research’s subject matter expert analysts have been interviewing startups across the emerging technology space – companies inventing solutions in nanotech, water, solar power, energy storage, advanced materials, and many more.

What we found in evaluating hard and soft attributes of startup companies was not particularly surprising; it was pretty intuitive and, importantly, statistically significant. Companies with:

  • strong management are ~ four times more likely to succeed than those with poor management
  • strong partnerships are ~ two times more likely to succeed than those with weak partnerships
  • lower identified barriers to entry are ~ three times more likely to succeed than those facing high barriers to entry
  • …and the list goes on

Being able to extract this insight quantitatively, with the statistical significance it demands, only comes with the ability to measure these attributes consistently across all startups. This is possible across the widest variety of startups, but it can be isolated and refined even further in a sector-by-sector analysis. As a recent TechCrunch article documented, even notoriously tough investment sectors – such as water – are goldmines with the right methodology.
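For readers who want to see the arithmetic behind a statement like “roughly four times more likely to succeed,” here is a hedged sketch using an invented two-by-two table of counts – not our actual data – to compute a relative likelihood and check it for statistical significance:

```python
# Illustrative arithmetic behind a "roughly four times more likely to
# succeed" claim, using an invented 2x2 table of counts (not Lux
# Research's actual data).
from scipy.stats import chi2_contingency

# Rows: strong vs. poor management; columns: succeeded vs. did not.
table = [[40, 60],   # strong management: 40 successes out of 100
         [10, 90]]   # poor management:   10 successes out of 100

rate_strong = table[0][0] / sum(table[0])   # 0.40
rate_poor = table[1][0] / sum(table[1])     # 0.10
print("relative likelihood of success:", rate_strong / rate_poor)  # ~4x

# A chi-squared test of independence gives a sense of statistical significance.
chi2, p_value, dof, expected = chi2_contingency(table)
print("chi-squared p-value:", p_value)
```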

There is no substitute for hours’ worth of expert time spent evaluating a startup’s idea and converting an assessment into a quantitative metric. This is something that you can’t get by scraping conventional numeric data on startups. The human element is still a key ingredient.

So where is the ceiling and what’s next?

Certainly we have seen data science change a number of industries in the past couple of decades – and with the continual explosion of computing power, data storage, and more user-friendly advanced analytics tools, that trend is here to stay. In the case of predicting the success of innovation, ideas, and companies, however, there is likely no substitute for expert judgment.

But data science is not necessarily a dead-end here – even if it never totally replaces human expert-based assessment of startups, it could play an important augmenting role. Consider the emerging possibility of machine learning-based prioritization of radiology images – this type of advance could never replace a radiologist, but it could certainly enhance his or her workflow.

Disciplined experts can be relatively good at making subjective assessments in the presence of soft, fuzzy knowledge and uncertainty (“…does it sound like that CTO really understands what neural networks can do?...” – or – “…this company might do well if it had a good partner, but it just doesn’t have the right network yet; plus, it’s running out of money…”) – but we are not at all effective at taking in large reams of complex information all at once. That’s where data science can be leveraged.

Contemporary data scientists and software are well equipped to quickly scrape streams of numeric and text data from sources like academic publications, patent applications, venture investment, social media, and more. Can we crunch that information and consolidate it into metrics that can complement the primary research that has already lent us significant insight? Can those metrics help us do a better job contextualizing the companies? Can they help us quantify achievable market size or identify adjacent market opportunities? Can we use those data streams to find prospect companies that we weren’t even aware of that are similar to the ones that expert analysts have identified as more likely to be successful – and could that in turn help experts spend more hours evaluating and less time searching?
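As one illustration of that last question, a simple way to surface prospects that “look like” companies experts have already flagged is text similarity over company descriptions. The sketch below uses TF-IDF and cosine similarity on invented descriptions; a real pipeline would draw on far richer data streams:

```python
# A rough sketch of one way to surface prospect companies that "look
# like" ones experts have already rated highly: TF-IDF over company
# descriptions plus cosine similarity. The descriptions are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

expert_rated = [
    "membrane technology for industrial water treatment and reuse",
    "solid-state battery materials for grid-scale energy storage",
]
prospects = [
    "low-energy desalination membranes for municipal water utilities",
    "consumer social app for sharing restaurant photos",
    "novel electrolyte chemistry for long-duration storage batteries",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(expert_rated + prospects)

# Similarity of each prospect to its closest expert-rated company.
sims = cosine_similarity(matrix[len(expert_rated):], matrix[:len(expert_rated)])
for prospect, row in zip(prospects, sims):
    print(f"{row.max():.2f}  {prospect}")
```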

These are some of the questions that data-driven approaches could help us answer – flipping Man versus Machine into Man + Machine. In the many domains Lux Research covers, and in the very industry Lux Research is in, the companies that see past the “versus” and monetize the “+” will be positioned to win.

Capturing and unlocking your venturing and scouting data

All corporate functions need to develop and demonstrate quantifiable value to the organizations that they serve. This is easy for business units, given the direct financial tools at their disposal. Operations can point to metrics around utilization rates, inventories, waste, and on-time delivery to show their performance.

Functions that feed the new business pipeline are far less mature and rigorous in measuring value, let alone in getting better at predicting tangible outcomes. The R&D function will track patents, technology pipelines, and product launches, but falls back on (often fuzzy) new product introduction financial metrics when its tangible value to the business needs to be validated. The innovation-focused functions that look outside the organization – corporate venturing and technology scouting being the obvious two – do not even reach this standard.

In the case of corporate venturing, the vast majority of groups claim to focus more on strategic value and outcomes than on financial return. More specifically, in our most recent survey, over 50% of corporate venturing respondents claimed that strategic goals dominated their objectives, with a further 28% indicating at least an even balance between strategic and financial metrics. Yet very few have quality metrics for measuring strategic value, let alone the data to see how well they actually delivered it. Technology scouting functions are no better off. Surveys we have done in this space show that metrics vary but generally relate to transitioning technologies to the businesses. Rigor is, again, lacking: in measuring impact on the business, in assessing whether the right technologies were selected, and in any analysis that would lead to better predictions.

There is little doubt that these teams have the content expertise required for their roles. Again, our survey data on these functions shows a predisposition toward industry veterans, with the vast majority of individuals having at least 5 years’ experience, and many bringing 10 or more years’ experience to the table. What’s missing is the data and analytics to layer on top of that deep expertise and to maximize its ongoing impact. The man, in this case, has fundamentally failed to develop the machine.

 

For more on Lux’s research on predictive analytics, contact Chris at chris.hartshorn@luxresearchinc.com.