I recently attended a conference where the keynote speaker said that it seems as if the Fourth Industrial Revolution has crept up on the world.
Upon reflection, he was correct. It seemed as if we were just getting used to the growth of the internet and online industry when social media and predictive analytics began to shape every aspect of it.
As pointed out before, there are many benefits associated with Industry 4.0 if we know what we are looking for and are prepared to embrace a certain measure of chaos.
I recently read an article on forbes.com that helps us look for signs that technology is transforming the industry we work in.
The article points out that ten years ago, you were lucky if you could apply for insurance online. Today, insurance technology is a major industry force, offering data-driven improvements in underwriting, applying for insurance and filing claims for health, renters, homeowners, auto and other types of insurance.
The article adds that the transformation is especially beneficial for consumers, who now enjoy more competitive prices and easier claims processes. For unprepared industry giants, though, the pressure is on to adapt or flounder.
The article points out that thanks to companies like Geico, customers went from being wary about applying for insurance online to expecting to handle almost everything online.
The article also highlights an expansion of funding options. It used to take a lot of capital to make an insurance company work, meaning it wasn’t an accessible industry for startups. In the last few years, though, funding has become available thanks to the increased use of reinsurers to distribute risk.
The article adds that the availability of new technology and data has allowed insurance companies to live or die by the accuracy of their underwriting. Today’s data processing technology, combined with the wealth of publicly available data, means that insurers can underwrite more accurately than ever, control their loss ratios and pass the savings along to customers.
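The loss ratio mentioned above is a standard insurance metric: claims paid out as a fraction of premiums earned. A minimal sketch of the idea, with purely hypothetical figures:

```python
def loss_ratio(claims_paid: float, premiums_earned: float) -> float:
    """Return claims paid as a fraction of premiums earned."""
    return claims_paid / premiums_earned

# Hypothetical book of business: accurate, data-driven underwriting
# is what keeps this ratio predictable and near its target.
ratio = loss_ratio(claims_paid=620_000, premiums_earned=1_000_000)
print(f"Loss ratio: {ratio:.0%}")  # 62%
```

The better an insurer can price risk from data, the tighter it can control this ratio, and the more margin it can pass back to customers as lower premiums.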
Maybe the most visible example of tech disruption in recent years was the way Netflix displaced Blockbuster (and the ways it has maintained dominance over the years).
These changes are easy to diagnose in retrospect. But how can you determine in the moment if your industry is on the brink of a significant transformation? Here are a few signals to pay attention to.
The article points out that the internet upended our lives and expectations as customers. Then the smartphone changed everything again. Now voice-based search is leading another revolution, as will (we’re told) artificial intelligence in the near future.
The article adds that it’s easy to write these changes off as relevant only to those in e-commerce, but it’s smart to understand what they mean for customer expectations in any industry.
On the flip side, all the digitization means companies can win loyalty by offering a genuine human touch. As always, standing out from the crowd is important; what’s changed is what the crowd is doing and what it takes to set yourself apart.
The article points out that thirty years ago, the idea of growing a user base before proving profitability would have sounded like an unfeasible way to build a business. But for many of today’s VC-funded startups, it’s the norm.
Similarly, technology was such that Netflix couldn’t have existed 30 years ago and Uber couldn’t have existed 15 years ago.
The article adds that you can embrace this reality in a few ways:
Invest in what used to be called R&D. This is now often referred to as an innovation lab: a sector of your business dedicated entirely to dreaming up new ways to solve problems with the best resources available. This will give you the edge of a startup with the strength of an industry giant.
If you’re paying attention to the right signals and making decisions based on the current realities of your industry, you should be able to survive and thrive when tech comes for your industry.
Possibly one of the biggest drivers of change is Big Data. I recently read an article on epmmagazine.com which pointed to the importance of Big Data.
The article points out that Big Data can provide scientists with valuable insights that might otherwise be inaccessible, but it is crucial to capture the outcome of the analysis in a way that it can be reused. Choosing the right technology ecosystem is essential for scientists aiming to leverage Big Data analytics.
The article adds that the time-consuming task of aggregating and analysing data can keep scientists and engineers from focusing on high-value tasks that require greater human input; instead, they spend valuable time copying and pasting data between applications.
Some obvious and perennial scientific questions present well-known challenges, and they are now routinely filed under the term ‘Big Data’. Scientists have continually asked tricky questions such as ‘is there a relationship between these two apparently disparate things?’ or ‘give me a data set that aggregates all the data we have collected over the past 15 years so I can do some analysis’. These questions are starting to become tractable thanks to advances in cloud-based storage and high-performance computing (HPC and GPU technologies), which can hold large data sets in memory and run enormous numbers of compute cycles per second over them, producing results tantalisingly quickly: what took weeks just a few years ago now takes seconds.
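The ‘aggregate 15 years of data’ request above can be sketched with a toy example, assuming the per-year results have already been parsed into records (the year keys and compound names are hypothetical, purely for illustration):

```python
from collections import defaultdict

# Toy stand-in for years of experiment records; in practice these
# would be loaded from per-year files or a scientific data warehouse.
yearly_records = {
    2008: [("compound_a", 0.41), ("compound_b", 0.55)],
    2015: [("compound_a", 0.44)],
    2022: [("compound_b", 0.58), ("compound_c", 0.12)],
}

# Aggregate every measurement per compound across all years, so that
# downstream analysis sees one unified data set instead of yearly silos.
aggregated = defaultdict(list)
for year, records in sorted(yearly_records.items()):
    for compound, value in records:
        aggregated[compound].append((year, value))

print(dict(aggregated))
```

The real engineering challenge is not this aggregation logic but doing it at scale, across inconsistent formats, which is exactly where the storage and HPC advances described above come in.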
The article points out that working with ‘current data’ requires that all the data (contextual and structured) is captured and tagged as well as possible at the point of creation: getting the user to make sure that the data is fully described with metadata, or at least getting the user to confirm what is correct when algorithms do the tagging. While this increases the level of trust in the data, reaching 100% is unachievable. The same requirement exists for both historical and new data: an organisational semantic taxonomy and ontology of data types and tags that describes the science being done in a way that allows both humans and computers to interrogate the data and consume it effectively (search, aggregate, analyse and so on).
The article adds that whilst this sounds easy, it is not. Why? Because the science and the data landscape are continually evolving, and so the systems used to aggregate, tag and distil data need to evolve with them. These corporate scientific data ecosystems should be viewed as living systems: ones that need to be fed, curated and managed in the same way as any other ‘live system’.
The article points out that using Big Data analytics, researchers can now begin to explore and expand their data sets and the types of analysis they use. A key to all of this, however, is choosing the right ecosystem of applications, tools and data management infrastructures to manage these tasks. Making Big Data analytics, machine learning and other algorithm-based AI techniques available to all is a barrier that still needs to be overcome.
We have seen similar hype over the years with other computational techniques, and these are only now becoming ‘democratised’ and integrated into scientists’ working practices, some 20 years after their initial introduction to the masses. We may see a similar trajectory with Big Data analytics and AI in science: early adopters and pioneers are already looking at how to leverage these technologies, and most large pharma companies and biotechs have programmes of work examining these problems.
The article adds that the majority of problems people are tackling with Big Data and AI are in the clinical trials, drug repurposing and real-world evidence space, not so much in early research and development. But given the amount of investment and the continual simplification of deployment, it is almost inevitable that more and more case studies will show how Big Data analytics and AI tools can help research and development.