What if insurance companies transformed their processes by rethinking the cost of their data storage and management systems, which require huge investment in development, integration and maintenance? Michel Ramos, co-founder of the insurtech Apidata, argues for a better urbanisation of data: specialised modules connected to a hub that manages the security, quality and circulation of data. He explains how.

Data is the new engine of insurance. It is at the heart of all the challenges of personalising and segmenting products and services, risk prevention and control, security and solvency… Doesn’t it make sense to put it at the centre?

Michel Ramos: Putting data at the centre does not necessarily mean centralising data! By building gigantic data lakes to irrigate all their activities, insurance companies mobilise considerable resources in ways that seem to me, in many respects, counter-productive. After all, not every function in the company needs access to all of the stored data at all times: each needs to be able to access the data it requires, when it requires it, with the certainty that this data is regularly updated, consistent and GDPR-compliant.

You argue in favour of a genuine data urbanisation. What does this mean in practical terms?

Urbanisation means using data in service mode, with access and circulation rules defined in advance between specialised modules for each business function. To define this urbanisation plan, we went back to the source: the needs of the various business teams within an insurance company. An actuary's job, for example, is to calculate risks and provisions, so organising the data along insurance lines makes their work more efficient. A management controller, by contrast, takes a more accounting-oriented approach and is best served by a tool that lets them monitor their clients' financial position in real time.

The business lines have different needs, but they all have an interest in working with data that is managed, updated, secure and synchronised. Experience shows that when data is poured into a huge reservoir, it becomes outdated if no one keeps it alive. In delegated management, for example, it is better to channel external data into a hub whose sole mission is to check all incoming data line by line and then dispatch it to the various business modules. The flow of data between the interconnected modules is then organised according to a simple urbanisation logic: authorisation rules between suppliers and consumers, as few copies as possible, and respect for security, traceability and the non-repudiation of transactions. This is what our ecosystem offers.
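To make the principle concrete, here is a minimal sketch of such a hub in Python. The module names, record kinds and validation rule are hypothetical, chosen only to illustrate a line-by-line check followed by rule-based dispatch; they are not Apidata's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical urbanisation plan: which business module may consume which kind of record.
URBANISATION_PLAN = {
    "actuarial":   {"claims", "policies"},    # risk and provision calculations
    "controlling": {"premiums", "payments"},  # real-time financial position
}

ALL_KINDS = {kind for kinds in URBANISATION_PLAN.values() for kind in kinds}

@dataclass
class Record:
    kind: str        # e.g. "claims", "premiums"
    client_id: str
    payload: dict

def validate(record: Record) -> bool:
    """Line-by-line check on incoming delegated-management data."""
    return bool(record.client_id) and record.kind in ALL_KINDS

def dispatch(records: list[Record]) -> dict[str, list[Record]]:
    """Route each valid record to the modules entitled to consume it."""
    routed: dict[str, list[Record]] = {module: [] for module in URBANISATION_PLAN}
    for record in records:
        if not validate(record):
            continue  # rejected lines would be logged and sent back to the supplier
        for module, kinds in URBANISATION_PLAN.items():
            if record.kind in kinds:
                routed[module].append(record)  # the same object is shared, not copied
    return routed
```

The point of the sketch is that the routing rules live in a single plan, while each business module only ever sees the data it is entitled to, without duplication.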

Is such an organisation of data flows compatible with the systems that insurance companies have put in place over the years?

Many data management tools are running out of steam, stifled by the complexity of new functions piled on top of old ones, which makes the system increasingly unstable. As a result, insurers look for solutions from innovative start-ups or major tech players, and end up becoming their prisoners. What's more, they transfer their data and their skills to them! It's a strange trade-off, when what is really needed is simplicity. It is often said that the best way to solve a complex problem is to break it down into several simple ones. That is what we do for data management: we create 'business' modules that are interconnected in a dedicated ecosystem which manages all the exchange processes. Thanks to our system of connectors, we can also integrate tools from outside this ecosystem as consumers or producers.
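By way of illustration, a connector of this kind can be reduced to a small producer/consumer contract. The interface below is a hypothetical Python sketch of the idea, not Apidata's connector API.

```python
from abc import ABC, abstractmethod
from typing import Iterable

class Producer(ABC):
    """An outside tool that feeds data into the ecosystem."""
    @abstractmethod
    def produce(self) -> Iterable[dict]: ...

class Consumer(ABC):
    """An outside tool (or business module) that receives data from the ecosystem."""
    @abstractmethod
    def consume(self, record: dict) -> None: ...

class Connector:
    """Bridges an external tool into the ecosystem's exchange processes."""
    def __init__(self, producer: Producer, consumers: list[Consumer]):
        self.producer = producer
        self.consumers = consumers

    def run(self) -> None:
        for record in self.producer.produce():
            for consumer in self.consumers:
                consumer.consume(record)
```

Any outside tool that can honour one of the two roles can then be plugged in, without the ecosystem having to know anything about its internals.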

Who controls the rules of urbanisation?

We have created special connectors, the Security Managers, which centralise all the authorisation rules between modules. They constantly check rights and the validity of transactions, and they guarantee non-repudiation by hashing messages in replicable intermediate databases. In effect, we have built a true data orchestrator, capable of initiating Middle Data and Big Data transactions when required.
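As a rough sketch of the principle only (the rule table and field names below are invented), a Security Manager can be pictured as a gatekeeper that consults an authorisation table and records a hash of every message it lets through:

```python
import hashlib
import json

# Hypothetical authorisation rules: which source module may send data to which target.
AUTHORISATION_RULES = {
    ("hub", "actuarial"): True,
    ("hub", "controlling"): True,
    ("controlling", "actuarial"): False,
}

class SecurityManager:
    """Checks rights on every exchange and keeps a hashed trail for non-repudiation."""

    def __init__(self) -> None:
        self.ledger: list[dict] = []  # stands in for a replicable intermediate database

    def authorise(self, source: str, target: str) -> bool:
        return AUTHORISATION_RULES.get((source, target), False)

    def transmit(self, source: str, target: str, message: dict) -> bool:
        if not self.authorise(source, target):
            return False  # the exchange is simply refused
        digest = hashlib.sha256(json.dumps(message, sort_keys=True).encode()).hexdigest()
        # Keeping the digest with the routing metadata lets both parties prove
        # later what was exchanged, between whom, and that it was not altered.
        self.ledger.append({"from": source, "to": target, "sha256": digest})
        return True
```

Because both sides of an exchange can recompute the digest from the message they hold, neither can later deny what was sent or claim it was altered in transit.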

Although the model may seem complex, we believe that we have succeeded in creating an exchange mechanism based on extremely simple, high-performance principles!