
Getting Big Data Right in Insurance – the Gain is worth the Pain

Big Data is everywhere in insurance at the moment, but Big Data projects are still new and tough to deliver organisationally. For the insurance industry, though, the gain will be well worth the pain: better knowledge of customers and the market than competitors have, better pricing and (in the run-up to Solvency II) better capital deployment.

To take three consumer insurance examples. First, in the motor insurance market, location-based data from drivers’ mobiles shows where they were, and telematics data from on-board IT shows how safely they were driving, when an accident happens; second, data from smart domestic sensors helps improve responsiveness to risks of fire, flooding or theft at home; and third, health apps and ‘wearables’ provide data relevant to health and life insurance.

Technology has impacted insurance for as long as insurance has been around, and the impact gets greater as time goes on – a case in point is driverless cars and their likely effect on driver premiums (down) and on manufacturers’ product and IT/software liability premiums (up).  But Big Data is different, and the motor, domestic and health insurance sectors generate and use vast amounts of it.

In legal terms, data is funny stuff. Although data itself is legally inert – you can’t steal it – a wide and increasingly valuable range of legal rights and obligations is developing around it, based on traditional intellectual property rights, contract law and regulatory law. We’ll soon be talking about data law as a legal subject in its own right. If you’re using data without the right licences or permissions, you can be landed with large damages claims; and regulatory liability around data protection, for example, just keeps on expanding.  There’s a real and increasing tension between Big Data and the privacy of the insured’s personal data in the context of its availability to insurers – a tension that becomes sharper still with data about genetic pre-disposition to illness and its effect on the availability and price of health and life insurance.

So insurers need to make sure they have all the rights they need to all the data they use and in all the ways they use it. In legal terms, this means licensing it in, processing it and using it correctly, and, especially with personal data, obtaining the explicit informed consent of the individual concerned in order to comply with data protection law. In short, insurers need a structured approach to Big Data governance.

In organisational terms, Big Data projects mean, first, understanding the business’s Big Data ‘engine’ in terms of data inputs, processing and data outputs: where does all the data – the structured datasets (industry, marketing, personal) and the unstructured data (social media, mobile, internet) – come from? How does the business intelligence software that analyses the input data work? And where (internally and externally) do the data outputs go, who uses them and how?
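By way of illustration, that mapping can be captured in something as simple as a data inventory. The sketch below (in Python, using hypothetical names rather than any particular insurer’s systems) is one way of recording, for each dataset, where it comes from, what analysis is applied to it and who receives the outputs:

```python
# A minimal sketch (hypothetical names) of an inventory for the Big Data 'engine':
# data inputs, the processing applied, and where the outputs go.
from dataclasses import dataclass, field
from enum import Enum

class SourceType(Enum):
    STRUCTURED = "structured"      # e.g. industry, marketing, personal datasets
    UNSTRUCTURED = "unstructured"  # e.g. social media, mobile, internet data

@dataclass
class DataSource:
    name: str
    source_type: SourceType
    origin: str                    # where the data comes from (internal system or third party)

@dataclass
class DataFlow:
    source: DataSource
    processing: str                # the business-intelligence / analytics step applied
    outputs: list[str] = field(default_factory=list)  # who receives the results

# Example: telematics data feeding a motor pricing model
telematics = DataSource("driver_telematics", SourceType.UNSTRUCTURED, "on-board device vendor")
flow = DataFlow(telematics,
                processing="driving-style risk scoring",
                outputs=["motor pricing team", "claims handling system"])
print(f"{flow.source.name} -> {flow.processing} -> {flow.outputs}")
```

Even a simple inventory like this makes it much easier to answer the three questions above when the lawyers come asking.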

This, in turn, means close cooperation between the insurance company’s legal team and its technology group.  Where the two teams meet in a Big Data project is in the company’s Information Architecture – the structure that maps, in IT/systems terms, the ‘real world’ flow of information – a car insurance policy bought by a driver, for example. Here, the data model must enable tagging of all the ‘attributes’ of each item of relevant data and all the relevant licences, permissions and consents that attach to them. In practice, this means that the lawyers need to understand the technical vocabulary of information architecture and the technology team needs to become familiar with the building blocks of copyright licensing, confidentiality, contracts and explicit informed consent under data protection law.
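As an illustration of what that tagging might look like in practice, the sketch below (again in Python, with hypothetical field names – not any particular insurer’s data model) attaches licence, confidentiality and consent attributes to a data item and checks them before a given use:

```python
# A minimal sketch (hypothetical names) of tagging a data item with its legal
# 'attributes': the licence it was obtained under, confidentiality status, and
# whether explicit informed consent from the individual is recorded.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class LegalTags:
    licence: str                        # the licence or terms the data was obtained under
    confidential: bool                  # subject to confidentiality obligations?
    consent_recorded: bool              # explicit informed consent on file?
    consent_date: Optional[date] = None
    permitted_uses: tuple = ()          # uses covered by the licence/consent

@dataclass
class DataItem:
    item_id: str
    description: str
    tags: LegalTags

def can_use(item: DataItem, purpose: str) -> bool:
    """Allow a use only if consent is recorded and the purpose is permitted."""
    return item.tags.consent_recorded and purpose in item.tags.permitted_uses

policy_record = DataItem(
    item_id="policy-42",
    description="driver location history for a motor policy",
    tags=LegalTags(licence="policyholder terms v3",
                   confidential=True,
                   consent_recorded=True,
                   consent_date=date(2015, 6, 1),
                   permitted_uses=("pricing", "claims handling")),
)
print(can_use(policy_record, "pricing"))    # True
print(can_use(policy_record, "marketing"))  # False
```

The detail will differ from insurer to insurer, but the principle is the same: every item of data carries its legal permissions with it, so both the lawyers and the technologists can see at a glance what it may be used for.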

The third feature of this structured approach is Big Data governance in the organisation, built in turn around four elements. First, a ‘deep dive’ risk assessment is carried out into current data use – reviewing, assessing, reporting back to senior management on, and remediating any issues around current Big Data use.  Second, the senior management working group looking after Big Data – an inclusive group of all stakeholders across the organisation – should articulate the organisation’s Big Data Strategy as a written statement of high-level objectives, goals and relevant considerations.   Third, that Big Data Working Group should set out a written Big Data Policy – essentially a large project plan, again involving all stakeholders, showing who’s doing what, when and how.   Finally, at a more granular level, the particular processes and procedures to be followed in operating Big Data governance should then be spelt out.

Big Data projects are tough, but the prize is worth it, especially at the moment with first-mover advantage: a recent survey by Gartner, Inc found that only 8% of companies are currently using Big Data analytics, and that only 15% of the Fortune 500 will be able to exploit Big Data by the end of 2015.
