In a "precedented" year, risk managers are running at full capacity working on renewals, updates, ESG, management, and an array of other varied duties. In an unprecedented year, such as 2020, risk managers (and their brokers!) are now being asked by management to explain the material adverse swings in year-over-year rates and in turn, if the company is doing all that it can to minimize the impact.
While insurance brokers like me can help our clients benchmark the prices they are paying and the appropriate limits to purchase relative to their peers, for a lot of good (and some not so good) reasons it is neither simple nor straightforward to provide transparency on what drives the markets' pricing, or on what can and should be done to improve renewal outcomes.
Take, for example, the data in the schedule of values (SOV). We know from hard-earned experience that this data matters: the specifics drive both insurers' catastrophe and rating models, and underwriters' confidence in the quality of the data influences their assessment of the risk on each submission. Yet quantifying the impact of that data, or of improving it, is difficult.
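To make the idea concrete, here is a minimal sketch of how one might start to quantify SOV data quality: a simple completeness score over the fields catastrophe and rating models typically rely on. The field names and weights below are illustrative assumptions, not any insurer's actual criteria, and a real scoring approach would also validate values, not just their presence.

```python
# Illustrative sketch: scoring SOV completeness for fields cat models commonly use.
# Field names and weights are assumptions for illustration only.
from typing import Any

FIELD_WEIGHTS: dict[str, float] = {
    "street_address": 0.25,        # precise geocoding drives cat model output
    "construction_class": 0.20,
    "occupancy": 0.15,
    "year_built": 0.15,
    "total_insured_value": 0.15,
    "number_of_stories": 0.10,
}

def location_score(location: dict[str, Any]) -> float:
    """Return a 0-1 completeness score for a single SOV location row."""
    score = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        value = location.get(field)
        if value not in (None, "", "UNKNOWN"):
            score += weight
    return round(score, 2)

def sov_score(locations: list[dict[str, Any]]) -> float:
    """Average completeness across all locations in the schedule."""
    if not locations:
        return 0.0
    return round(sum(location_score(loc) for loc in locations) / len(locations), 2)

if __name__ == "__main__":
    sample_sov = [
        {"street_address": "100 Main St", "construction_class": "Masonry",
         "occupancy": "Office", "year_built": 1998,
         "total_insured_value": 5_000_000, "number_of_stories": 3},
        {"street_address": "", "construction_class": "UNKNOWN",
         "occupancy": "Warehouse", "year_built": None,
         "total_insured_value": 2_000_000},
    ]
    print(sov_score(sample_sov))  # 0.65 for this sample
```

Even a rough score like this lets a risk manager track year-over-year improvement in the submission itself; the harder, and still opaque, question is how that improvement translates into pricing.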
Despite the importance of the SOV as the primary source of data for insurance models and underwriting assessment, the insurance market isn't very transparent about how it treats data quality, and it is even more elliptical about how individual submissions stack-rank against those criteria. What if we improved the data? Better yet, what if we improved the underlying risks and then shared verifiable data demonstrating that differentiation?
Given current market conditions, it's time for transparency. Not anecdotes, not general assurances, not vague encouragement to 'collect better data.' Insurance is a data-driven industry, and it's time to deliver data-driven transparency on the relationships between inputs and outputs. Our customers deserve nothing less, so they can make proactive decisions and take more control of their risk and insurance outcomes.