Generative AI in insurance with Planck
Planck is a NYC-based AI platform for commercial insurance, offering submission-validation and underwriting solutions. Founded in 2016, Planck has raised more than $70M in venture funding, including a $23M venture round in 2022 with participation from Vintage Investment Partners, Team8, Arbor Ventures, and others.
According to Elad Tsur, Co-Founder & CEO of Planck, the startup sees serious potential in deploying generative AI in insurance and insurtech, and clear successes from it already. Where earlier AI models were black boxes that hid the reasoning behind core underwriting decisions and other actuarial processes, generative AI can expose the risk factors that lead to a “yes” or “no” answer.
“Now that you have explainability and full transparency about the predictions of the AI—the reasoning why the AI answered a or b—then you came to the point where you can actually start to integrate new, additional risk factors,” Tsur explained.
With larger and more dynamic data sets, Planck’s tools can also react more quickly to shifting risk conditions and integrate data in more than one language. News covered exclusively in one language (in Italian but not English, for example) can still inform underwriting decisions in a way that a human limited to a single language cannot match.
“We give a very powerful tool that allows the underwriters to manually drill down into risk and ask questions in English,” Tsur said. “And the answers we get are in English, regardless of the language of the source, of the country of the source, and of the media type that the answer is based on.”
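A cross-lingual pipeline of this kind typically normalizes every source into one working language before a model answers questions over it. The sketch below illustrates only that general shape; `SourceDocument`, `translate_to_english`, and `answer_in_english` are hypothetical stand-ins, not Planck’s actual components.

```python
from dataclasses import dataclass

@dataclass
class SourceDocument:
    text: str        # original-language content, e.g. an Italian news article
    language: str    # ISO 639-1 code of the source language
    media_type: str  # e.g. "news", "review", "public-record"

def translate_to_english(text: str, language: str) -> str:
    """Hypothetical stand-in for a machine-translation call."""
    return text if language == "en" else f"[translated from {language}] {text}"

def answer_in_english(question: str, sources: list[SourceDocument]) -> str:
    """Normalize all sources to English, then answer over the combined context."""
    context = "\n".join(translate_to_english(s.text, s.language) for s in sources)
    # A real system would hand `question` and `context` to a generative model;
    # this stub just returns the assembled English-language context.
    return f"Q: {question}\nContext ({len(sources)} sources):\n{context}"
```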
These product developments can further automate underwriting workflows and reduce the operational overhead involved in insurance.
However, generative AI can “hallucinate,” inventing data and information to produce an output that satisfies the prompt. Planck mitigates this risk by evaluating and constraining the data sources the AI draws on, leveraging its “ability to validate the relevancy of the data,” Tsur said. Curating a more siloed dataset keeps out erroneous information from the open internet, as well as irrelevant data that would otherwise distort the model’s output.
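One common way to implement this kind of gating is to admit a document only if it comes from a vetted source and passes a relevance check against the insured business. The sketch below is illustrative; the allow-list, threshold, and toy keyword-overlap scorer are hypothetical placeholders, not Planck’s actual method.

```python
ALLOWED_DOMAINS = {"registry.example.gov", "news.example.com"}  # hypothetical allow-list
RELEVANCE_THRESHOLD = 0.3                                       # hypothetical cutoff

def relevance_score(document_text: str, business_profile: str) -> float:
    """Toy relevance measure: fraction of profile terms found in the document.
    A production system would use embeddings or a trained classifier instead."""
    profile_terms = set(business_profile.lower().split())
    doc_terms = set(document_text.lower().split())
    return len(profile_terms & doc_terms) / max(len(profile_terms), 1)

def admit_document(domain: str, text: str, business_profile: str) -> bool:
    # Gate 1: only documents from vetted sources enter the dataset at all.
    if domain not in ALLOWED_DOMAINS:
        return False
    # Gate 2: discard documents that fail the relevance check, so stray
    # web content never reaches the generative model's context.
    return relevance_score(text, business_profile) >= RELEVANCE_THRESHOLD
```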
Planck also requires these AI-powered tools to explain the decision-making behind their conclusions, which simplifies evaluation for human reviewers. Notably, Planck wraps the explanation component of an LLM’s work with a second LLM, which follows the same process to validate the first model’s output.
“By adding another layer of elegance at the end of it to actually validate what was answered, we get a very good handling of that hallucination problem,” Tsur said.
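A generic version of this verify-the-answer pattern, often called an LLM-as-judge or verifier loop, might look like the sketch below. The `ask_model` helper is a hypothetical placeholder for a call to any generative model; nothing here reflects Planck’s actual implementation.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a generative model."""
    return f"[model output for a {len(prompt)}-character prompt]"

def answer_with_verification(question: str, context: str) -> dict[str, str]:
    # Step 1: the answering model must justify its conclusion from the context.
    answer = ask_model(
        f"Context:\n{context}\n\nQuestion: {question}\n"
        "Answer, and cite which parts of the context support your answer."
    )
    # Step 2: a second model checks the answer against the same context,
    # flagging any claim the context does not actually support.
    verdict = ask_model(
        f"Context:\n{context}\n\nProposed answer:\n{answer}\n"
        "Is every claim supported by the context? Reply 'supported' "
        "or list the unsupported claims."
    )
    return {"answer": answer, "verdict": verdict}
```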
While the sector at present lacks meaningful regulatory guidance, Tsur said he sees the private sector working internally to establish rules of the road.
“I do think that the government's running too slow; we need to regulate ourselves to say, ‘This is right, this is wrong, this is adequate enough,’” Tsur said. Private actors should share opinions and knowledge to build toward a common vision, he said.