Predicting earthquakes with big data
We know the quakes are coming. We just don’t know how to tell enough people early enough to avoid the catastrophe ahead. Around the world, more than 13,000 people are killed each year by earthquakes, and almost 5 million have their lives affected by injury or loss of property. Add to that $12 billion a year in losses to the global economy (the average annual toll between 1980 and 2008). Understandably, scientists have long been asking whether earthquakes can be predicted more accurately.
Unfortunately, the conventional answer has often been “no”. For many years, earthquake prediction relied almost entirely on monitoring the frequency of quakes and of natural events in the surroundings, and on using this to estimate when they were likely to recur. A case in point is the Haicheng earthquake that struck eastern China on February 4, 1975. Just prior to the quake, temperatures were unusually high, pressure readings were abnormal, and many snakes and rodents emerged from the ground as a warning sign. With this information, the State Seismological Bureau (SSB) was able to issue a prediction that helped save many lives. However, the warning came only on the day the earthquake occurred, so it could not prevent heavy loss of property. Had the quake been predicted a few days earlier, it might have been possible to completely evacuate the affected cities, and this is exactly where big data fits in.
Nature is always giving cues about the occurrence of events; it is simply up to us to tune in to these cues so that we can act accordingly. Since these cues are widespread, it is best to use big data techniques to bring them together in a central location, so that analysis, and the resulting predictions, become more accurate. Common signals that big data can track include the movement of animals and the atmospheric conditions preceding earthquakes.
Scientists today predict where major earthquakes are likely to occur based on the movement of tectonic plates and the location of fault zones. They calculate quake probabilities by examining the history of earthquakes in a region and detecting where pressure is building along fault lines. These estimates can go wrong, because strain released along one section of a fault line can transfer strain to another section. This is also what happened in the recent quake, say French scientists, noting that the 1934 quake on one segment of the fault had transferred part of the strain to the neighbouring section where the latest quake was triggered.
Academics have often argued that accurate earthquake prediction is inherently impossible: conditions for potential seismic disturbance exist along all tectonic fault lines, and a build-up of small-scale seismic activity can trigger larger, more devastating quakes at any point. However, all this is changing. Big Data analysis has opened up the game to a new breed of earthquake forecasters who combine satellite and atmospheric data with statistical analysis. And their striking results seem to be proving the naysayers wrong.
One of these innovators is Jersey-based Terra Seismic, which uses satellite data to predict major earthquakes anywhere in the world with a claimed 90% accuracy. Using its satellite Big Data technology, the firm says it can in many cases forecast major (magnitude 6+) quakes from one to 30 days before they occur in all key seismically prone countries. It uses open-source software written in Python, running on Apache web servers, to process large volumes of satellite data taken each day from regions where seismic activity is ongoing or seems imminent. Custom algorithms analyze the satellite images and sensor data to extrapolate risk, based on historical records of which combinations of circumstances have previously led to dangerous quakes.
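Terra Seismic’s actual algorithms are proprietary, but the general approach described above, comparing each day’s satellite-derived readings for a region against that region’s historical baseline and flagging anomalous combinations, can be sketched in Python. Every function name, metric, and threshold here is an invented illustration, not the company’s method:

```python
# Hypothetical sketch of anomaly detection over daily regional readings.
# Metrics ("thermal", "radon"), values, and the threshold are made up.
from statistics import mean, stdev

def anomaly_score(history, today):
    """Standard score of today's reading versus the region's history."""
    mu, sigma = mean(history), stdev(history)
    return 0.0 if sigma == 0 else (today - mu) / sigma

def flag_regions(readings, history, threshold=2.5):
    """Return regions whose average absolute anomaly exceeds threshold.

    readings: {region: {metric: today's value}}
    history:  {region: {metric: [past values]}}
    """
    flagged = {}
    for region, metrics in readings.items():
        scores = [anomaly_score(history[region][m], v)
                  for m, v in metrics.items()]
        combined = sum(abs(s) for s in scores) / len(scores)
        if combined >= threshold:
            flagged[region] = round(combined, 2)
    return flagged

# Made-up example: both readings sit far outside their baselines.
history = {"region_a": {"thermal": [20, 21, 19, 20, 20],
                       "radon": [5, 6, 5, 5, 6]}}
today = {"region_a": {"thermal": 27, "radon": 11}}
print(flag_regions(today, history))
```

A production system would of course use far richer features and learned thresholds; the point is only that each region is judged against its own history rather than a global rule.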
Of course, plenty of other organizations have monitored these signs, but it is big data analytics that is now providing the leap in accuracy. Monitored in isolation, these particular metrics might be meaningless, given the huge number of factors that determine where a quake will hit and how severe it will be. But with the ability to monitor all potential quake areas, and to correlate any data point on one quake with any other, predictions become far more precise, and far more accurate models of likely quake activity can be constructed, based on statistical likelihood.
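A simple way to see why correlating many weak signals beats monitoring any one of them is an odds update in the style of naive Bayes. The prior and the likelihood ratios below are invented for illustration, not derived from real seismic data:

```python
# Illustrative only: combining independent signals multiplicatively.
def posterior(prior, likelihood_ratios):
    """Update a prior event probability with independent signal evidence."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# A single modest signal barely moves a low base rate...
print(posterior(0.001, [3.0]))
# ...but several signals observed together shift it materially.
print(posterior(0.001, [3.0, 4.0, 5.0]))
```

With a 0.1% base rate, one signal leaves the probability well under 1%, while three together push it above 5%: individually meaningless metrics become informative in combination.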
So once again we see Big Data being put to use to make the impossible possible – and hopefully cut down on the human misery and waste of life caused by natural disasters across the globe.
Picking the hits in venture capital
In the venture capital world, it’s all about the “hits.” A hit is a startup that makes it big, returning many multiples of a venture fund’s initial investment. Hits are great for everyone—investors, entrepreneurs, job seekers—but the problem is they don’t happen very often. The odds of a big hit are about one in 10.
Boosting the odds for VCs
But what if venture capital could boost its odds to 50-50, or even two out of three? With $48 billion in VC investment in 2014, such an improvement would prevent huge amounts of money from being lost on startups that never had much of a chance of surviving the harsh competitive environment. The challenge is to identify those likely laggards well before the market rejects their idea and, perhaps more importantly, to see the big hits before anyone else. Venture capital has long relied on subjective, intuitive methods of assessing startups, but that’s changing as more firms are bringing data science and consistency into their decision-making.
The next step is to do the science, build the tools, and focus the research on one question: how can we better predict whether innovations will survive or fail, both for startups and for corporations launching new products or making acquisitions? There is no human subjectivity involved anywhere along the line; the algorithms converge on a discrete yes or no. That yes or no depends mainly on two areas: factors inside the startup, and factors external to it. Only around 20% of the predictive value comes from details specific to the startup itself, while 80% comes from things outside the startup, such as the market, customers, competitors, technology trends, and timing.
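As a rough illustration of that 20/80 split, here is a minimal sketch of a scoring model that weights internal and external feature groups and converges on a discrete yes or no. The feature names, weights, and decision threshold are hypothetical, not the researchers’ actual model:

```python
# Hypothetical yes/no survival predictor weighting internal features
# at 20% and external (market, competition, timing) features at 80%.
def predict_survival(internal, external, threshold=0.5):
    """internal/external: dicts of feature scores in [0, 1]."""
    internal_score = sum(internal.values()) / len(internal)
    external_score = sum(external.values()) / len(external)
    combined = 0.2 * internal_score + 0.8 * external_score
    return "yes" if combined >= threshold else "no"

startup = {
    "internal": {"team_experience": 0.9, "burn_discipline": 0.8},
    "external": {"market_growth": 0.3, "competitive_intensity": 0.2,
                 "timing": 0.4},
}
# Strong team, weak market: the external weighting dominates the call.
print(predict_survival(startup["internal"], startup["external"]))
```

The example startup scores 0.85 internally but only 0.3 externally, so the 80% external weighting pulls the combined score below the threshold and the answer is “no”, mirroring the claim that what is outside the startup matters most.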
In this scenario, managers are trying to predict the future before investing money in it. According to analysis of data from the Small Business Administration, together with data on startups, between 20% and 30% of new businesses survive to their 10th birthday. Startups with VC backing aren’t doing much better than spin-offs from large corporations. Has the entrepreneurship economy gotten any better at picking winners? The data suggest not, at least not by any statistically significant amount. But investment analytics is turning that around. When it comes to predicting which companies will survive a 10-year period, today’s investment analytics can point to the right answer 67% of the time, and be wrong on the remaining third. So if investment analytics has vastly improved the odds of investing in new technologies and businesses, why isn’t the entire VC world knocking on its door? Because a lot of venture capitalists, like the scouts in Moneyball reacting to sabermetrics, are skeptical. Most people don’t see quants taking over VC, even in the distant future, but they do see the potential of using data to help venture capitalists make decisions.
Data analytics is undoubtedly creeping into venture capital: Google Ventures uses an algorithm to help with investment decisions, and a Silicon Valley firm called Correlation Ventures is built upon an algorithmic investing strategy. But the old-fashioned process of detailed research and human judgment still has a lot going for it. Just ask the people at Lux Research, an emerging-technology consulting firm in Boston. For the past 10 years, Lux’s science-trained analysts have been scouring the business landscape for new technology firms, interviewing employees of those firms, and slowly compiling their own database of companies that succeeded or failed. Lux rates each company it profiles according to nine key factors, which are described publicly on its website in a report called “Measuring and Quantifying Success in Innovation.” The result of that rating is a company profile with a “Lux Take,” which ranges from “strong positive” to “strong caution.”
The company recently looked back at five years’ worth of profiles and found that 50% of the companies that earned a “positive” rating went on to be successful, an outcome which Lux defines as an IPO, acquisition, or transition to standalone profitability. Given the usual odds of new business survival, the Lux system seems to inject a significant amount of certainty into the process of evaluating startups. The company’s high rate of accuracy is attributable to two things: capability and methodology.
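Lux does not publish its scoring formula in detail, so the following is only a sketch of how nine factor scores could be mapped onto a “Lux Take”-style rating band. The 1–5 scale and the cut-offs are invented for illustration:

```python
# Hypothetical mapping from nine factor scores to a rating band.
# Band cut-offs are invented; only the band labels echo the article.
TAKES = [(4.0, "strong positive"), (3.0, "positive"),
         (2.5, "wait and see"), (2.0, "caution"), (0.0, "strong caution")]

def lux_take(factor_scores):
    """factor_scores: nine scores on a 1-5 scale, one per key factor."""
    assert len(factor_scores) == 9
    avg = sum(factor_scores) / 9
    for cutoff, take in TAKES:
        if avg >= cutoff:
            return take

# A company scoring well on most factors lands in the top band.
print(lux_take([5, 4, 4, 5, 4, 3, 4, 5, 4]))
```

The value of such a scheme is less in the arithmetic than in the consistency: every company is scored on the same nine dimensions by analysts following the same methodology.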
People talk a lot about the importance of innovation to economic growth. In a recent survey of voters in swing states by the Economic Innovation Group, 75% of those surveyed agreed that America needs more entrepreneurs and investors in order to improve long-standing economic problems. Yet the innovation economy has an information problem: the information that drives it isn’t good. How can countries become innovation economies more efficiently? By getting better at funding the startup companies that will grow and drive employment. Every dollar that goes to the wrong place is a wasted dollar.
Better Investment Strategy for PEs
Data, and specifically the analysis of data, is becoming a critical component in enabling private equity firms to create and maintain value for their investors. I know some of you are thinking, “What does a data management geek know about private equity?” One thing to note about private equity firms is that there has recently been a greater focus on IT and a growing need for data analytics services, especially for deals that involve middle-market companies and/or are add-ons to existing platforms. Rising acquisition valuations and the dearth of available debt financing are making it increasingly difficult for private equity firms to generate outsized returns for their investors. This is causing private equity firms to focus more on operational improvements within their portfolio companies as a means of driving growth and value creation. Given the strong correlation between operational improvement and higher returns for limited partners, private equity managers’ hands-on involvement is, and will continue to be, critical to the future success of the fund. For private equity managers, this means identifying key drivers of EBITDA, understanding customer behavior, developing competitor analyses, determining the best place to invest capital, managing budgets, and controlling costs, for each of their portfolio companies. And to do it effectively and efficiently, managers must leverage data.
Nobody said it was going to be easy, and the fact is: it’s not. Unfortunately, most private equity managers are not in a position to dig deep into the recesses of their portfolio company systems and extract the data tables necessary to produce key operational metrics. They will therefore have to rely on the current workforce at each of their portfolio companies, or hire an outside party, to do the “dive into the data” for them. Many portfolio companies struggle with capturing, extracting, and ultimately analyzing the operational data within their businesses. This is largely because, historically, middle-market companies have rarely made technology a priority, leaving them with unsophisticated IT platforms. As a result, a variety of challenges arise when attempting to harmonize, integrate, and analyze data held in legacy and perhaps “home-grown” solutions. Additionally, many middle-market companies are experiencing data volumes that are growing exponentially and are held in disparate systems throughout the organization. In short, private equity managers are faced with massive amounts of “dirty” data that must be converted into meaningful metrics.

This leaves private equity firm managers asking themselves, “How much time and money do I need to commit to information technology and data analytics?” Answer: more than you have in the past. The key is to invest wisely, in a way that will get you the greatest returns in the shortest amount of time. There is a near-infinite number of technology solutions out there. From off-the-shelf, in-the-box products to massive enterprise resource planning (ERP) suites, software companies have done their best to provide an answer to your “big data” problems. But although there are many great tools available, there is no cookie-cutter solution when trying to capitalize on the presumed synergies of integration.
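As a concrete, hypothetical illustration of the harmonization problem: suppose two portfolio-company systems export the same operational data in different shapes and units, and the manager wants one clean monthly operating-margin metric. All field names, systems, and figures below are invented:

```python
# Hypothetical harmonization of two disparate source systems.
def normalize_erp(row):
    """Legacy ERP exports dollar amounts as strings with commas."""
    return {"month": row["period"],
            "revenue": float(row["rev"].replace(",", "")),
            "opex": float(row["cost"].replace(",", ""))}

def normalize_pos(row):
    """Point-of-sale system reports amounts in cents."""
    return {"month": row["month"],
            "revenue": row["revenue_cents"] / 100,
            "opex": row["expenses_cents"] / 100}

def monthly_margin(rows):
    """Combine harmonized rows into revenue minus opex per month."""
    totals = {}
    for r in rows:
        m = totals.setdefault(r["month"], {"revenue": 0.0, "opex": 0.0})
        m["revenue"] += r["revenue"]
        m["opex"] += r["opex"]
    return {month: t["revenue"] - t["opex"] for month, t in totals.items()}

erp_rows = [{"period": "2015-01", "rev": "1,250,000", "cost": "900,000"}]
pos_rows = [{"month": "2015-01", "revenue_cents": 35000000,
             "expenses_cents": 21000000}]
clean = [normalize_erp(r) for r in erp_rows] + \
        [normalize_pos(r) for r in pos_rows]
print(monthly_margin(clean))
```

Real portfolio data is far messier, with missing periods, duplicate records, and inconsistent account codes, but the pattern is the same: one normalization step per source system, then a single aggregation over the cleaned rows.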
Private equity managers must understand their specific business needs and work to develop strategic analytical solutions driven from the data that already exists in their organizations. It is critical, especially in today’s competitive market, to “leverage your data” and apply these targeted solutions in a way that will provide the greatest returns for your investors.
Big data may indeed be able to help, but it’s more likely to be a piece of the puzzle than the whole solution. For instance, academic studies have shown that serial entrepreneurs who were successful in the past are more likely to do well in new ventures, which implies there is some explanatory power in looking backwards for guidance on what’s ahead. “But the nature of entrepreneurship is always changing,” says Josh Lerner, the Jacob H. Schiff Professor of Investment Banking at Harvard Business School. “Most regressions predicting entrepreneurial success in the literature have very low goodness of fit, which suggests the limits of a ‘Moneyball’ approach here. Predicting which startup is going to be successful is much harder than [predicting] which baseball player is. It is as if the baseball rules are being changed every year in unpredictable ways.”