By most estimates, ours included, 2018 is going to be a milestone year for the insurance industry’s adoption of big data. Even though current adoption rates are still well below the halfway mark, the acceptance of big data analytics as the tool of choice to combat fraud has passed the tipping point. Intensive use by large insurers is already delivering significant benefits and is set to change the competitive landscape. The rest of the industry knows it has to catch up before its offerings become outmoded.

The challenges in adopting big data, unfortunately, are as well known as the benefits. The technology and infrastructure requirements pose serious funding challenges for insurers. Data variety, specifically unstructured data, is a key challenge: beyond structured, company-held data, almost 80 percent of claims data is unstructured, such as hand-written notes, videos and images. The growing role of social media and social network analytics in fraud prevention further adds to the complexity. Data sources are another challenge, since legacy, disparate and third-party systems are not easy to integrate. Once the costs of upgrading data infrastructure (storage and processing) and data sources are tallied, building a case for a consistent return on big data investments is usually the hardest part.

Getting the required results from a big data setup involves close attention to data accuracy, integrity and relevance. For example, most insurers analyze historical data, but pattern analysis based on past fraud cases loses value as new fraud patterns emerge. Predictive analytics is therefore playing a bigger role in calling out potentially fraudulent claims. Excessive false positives are another major concern when designing an analytical solution: with legacy and disparate systems across business lines, there is no single view of the customer, and that fragmentation is itself a major source of false positives.
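To make this concrete, here is a minimal sketch of what a claim propensity-scoring step might look like in Python, using a gradient-boosted classifier from scikit-learn on synthetic stand-in data. The feature names, fraud rate and review threshold are illustrative assumptions, not details from this article; the point is that flagged claims are a ranked subset sent for review, which is how teams trade detection against the false-positive load on investigators.

```python
# Minimal sketch of a claim fraud-propensity model (illustrative only).
# Feature names, the ~5% fraud rate and the 0.8 review threshold are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for historical claims: each row might represent features such as
# claim amount, days since policy start, prior claims count and sales channel.
# Label 1 = claim previously confirmed as fraudulent.
X = rng.random((5000, 4))
y = (rng.random(5000) < 0.05).astype(int)  # purely synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Score incoming claims and flag only the highest-propensity ones for manual
# review, rather than treating the model output as a binary fraud verdict.
scores = model.predict_proba(X_test)[:, 1]
flagged = scores > 0.8  # review threshold is a tunable assumption
print(f"Flagged {flagged.sum()} of {len(scores)} claims for investigation")
```

In practice the threshold is tuned against investigator capacity, and resolving duplicate customer records across business lines before scoring is what keeps the false-positive rate manageable.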

The adoption of big data analytics is thus a long-term project with extensive knowledge-gathering, pilot and deployment stages. Before committing to data source, infrastructure and technology investments, organizations need a detailed cost-benefit analysis and internal clarity on what the business hopes to achieve. The people and processes around the big data infrastructure need to change in tandem to drive closer collaboration and coordination between business units. Companies should define clear roles and responsibilities for owning data and processes, and consider appointing a dedicated fraud management team.

Data processing models should be designed to maximize efficiency and minimize cost based on the use case for each model. Not every model needs real-time processing, for example. It is probably most relevant at the First Notice of Loss (FNOL) stage, where it lets the handler check a claim's fraud propensity during the call and direct questioning accordingly.
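As a rough illustration of a synchronous FNOL check, the sketch below scores a first-notice report with a few hand-written rules and suggests follow-up questions when the score is high. The fields, weights and prompts are hypothetical; a production system would call a trained model (such as the one sketched earlier) rather than fixed rules.

```python
# Illustrative sketch of a synchronous fraud-propensity check at FNOL.
# Fields, weights and prompts are assumptions made for illustration.
from dataclasses import dataclass

@dataclass
class FnolReport:
    days_since_policy_start: int
    hours_until_reported: int
    prior_claims_count: int
    police_report_filed: bool

def fnol_propensity(report: FnolReport) -> float:
    """Return a rough 0-1 fraud-propensity score computed at call time."""
    score = 0.0
    if report.days_since_policy_start < 30:  # very new policy
        score += 0.4
    if report.hours_until_reported > 72:     # long reporting delay
        score += 0.2
    if report.prior_claims_count >= 3:
        score += 0.3
    if not report.police_report_filed:
        score += 0.1
    return min(score, 1.0)

def handler_prompts(score: float) -> list[str]:
    """Suggest extra questions for the call handler when propensity is high."""
    if score < 0.5:
        return []
    return [
        "Ask for the exact time and location of the incident.",
        "Confirm whether any witnesses or third parties were present.",
        "Request photos or documentation before settlement is discussed.",
    ]

report = FnolReport(days_since_policy_start=12, hours_until_reported=96,
                    prior_claims_count=3, police_report_filed=False)
score = fnol_propensity(report)
print(score, handler_prompts(score))
```

Because the check runs while the claimant is still on the line, it has to return in well under a second, which is why only this stage clearly justifies real-time infrastructure; batch scoring is usually sufficient downstream.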

Another decisive factor driving broader big data adoption is regulatory and industry support. Greater data sharing and standardized data formats across the industry would be a significant step in boosting adoption. Today, however, regulations forbid insurers from sharing policyholder data with one another, preventing any single insurer from building a holistic view of a customer. With privacy and security concerns on the rise, will insurers face greater challenges in adopting big data?
