Observations from 2016 SEER Data

By Jorge Sirgo

The latest annual federal data on mesothelioma diagnoses in the United States became available on April 15, 2016. The data reports on mesothelioma diagnoses during 2013, as determined from “cancer registries” covering a sample of hospitals in the United States.

The data arises from the Surveillance, Epidemiology, and End Results (“SEER”) Program of the National Cancer Institute. The report is more technically described as the 1973-2013 SEER Research Incidence data (November 2015 submission). SEER collects data on cancer cases from various locations and sources throughout the United States. Data collection began in 1973 with a limited number of registries and continues to expand to include more areas and demographics today.

The data in 2013 reflect modest changes from 2012, but appear to support the trends previously identified in the data.

The SEER 9 database (the database with registry information collected since 1973) shows that the overall rate of mesothelioma diagnoses has been falling since the early 1990s. The rate trend differs, however, by sex. While the rate of male mesothelioma diagnoses has been falling since the early 1990s, the rate of female mesothelioma diagnoses appears to remain constant. See the figure below to review the trends.

[Figure 2017 SEER-1: Rates of mesothelioma diagnoses by sex, SEER 9, 1973-2013]

The rates above are used to extrapolate from the sample to an estimate of the total population of mesothelioma diagnoses. The resulting SEER incidence estimates are shown in the chart below:

[Figure 2017 SEER-2: Extrapolated estimates of mesothelioma diagnoses (overall, male, and female)]

Overall, the data indicate that the estimated number of mesothelioma diagnoses fell from 3,174 in 2012 to 2,828 in 2013. Estimated male mesothelioma diagnoses fell from 2,405 in 2012 to 2,169 in 2013, and estimated female mesothelioma diagnoses fell from 769 in 2012 to 659 in 2013. The data do not distinguish between pleural and peritoneal mesothelioma diagnoses. The declining trend in rates noted above for overall and male diagnoses translates into potentially declining incidence. However, the relatively constant rate of female mesothelioma diagnoses, combined with increasing population, longevity, and similar factors, appears to show increasing female incidence of mesothelioma diagnoses.
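
To make the extrapolation step concrete, here is a minimal sketch in Python of how a crude rate per 100,000 translates into an estimated case count. The rate and population figures are placeholders rather than SEER inputs, and the calculation ignores the age and delay adjustments applied to the published estimates, so it will not reproduce the numbers above exactly.

    # Crude extrapolation from a rate per 100,000 person-years to a case count.
    # The 0.81 rate echoes the 2013 SEER 9 figure discussed below; the population
    # value is a hypothetical placeholder, and no age adjustment is applied.
    def estimated_cases(rate_per_100k, population):
        return rate_per_100k * population / 100_000

    print(round(estimated_cases(0.81, 316_000_000)))   # roughly 2,560 diagnoses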

The decline in the rates and point estimates can be examined further by drilling down to the registry level.

[Figure 2017 SEER-3: Mesothelioma diagnosis rates by SEER 9 registry, 2012 and 2013]

There are three noteworthy declines in the registry-level rate:

• Connecticut drops from 1.31 to 0.86 (-34%)
• Hawaii drops from 0.63 to 0.36 (-43%)
• New Mexico drops from 1.06 to 0.71 (-33%)

The overall SEER 9 rate declines from 0.94 to 0.81 between 2012 and 2013 (-14%). Excluding the three registries noted above, the overall rate declines from 0.89 to 0.85 over the same period (-4%). On the surface, it would appear these three registries are driving the drop observed in the overall rate.

However, while the decreases in the three registries noted above are relatively large, the 95% confidence intervals suggest the changes are not statistically significant (remember, these are extrapolated estimates). For example, the Connecticut estimate of 1.31 in 2012 has a 95% CI of [0.97, 1.72], and the 2013 estimate of 0.86 has a 95% CI of [0.60, 1.21]. These intervals overlap, suggesting that the difference between the two years may simply reflect sampling variability rather than a true change in the underlying rate.
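
For readers who want to replicate the overlap check, here is a rough Python sketch. The Connecticut intervals are the ones quoted above; the rate_ci helper is illustrative only, using a simple normal approximation to a Poisson count rather than the age-adjusted method SEER actually uses, and its case-count and person-year inputs would be hypothetical.

    import math

    def rate_ci(cases, person_years, z=1.96):
        # Approximate 95% CI for a crude rate per 100,000, treating the
        # observed case count as a Poisson variable (illustrative only).
        rate = cases / person_years * 100_000
        se = math.sqrt(cases) / person_years * 100_000
        return rate - z * se, rate + z * se

    def intervals_overlap(a, b):
        return a[0] <= b[1] and b[0] <= a[1]

    ct_2012 = (0.97, 1.72)   # 95% CI around the 1.31 Connecticut estimate
    ct_2013 = (0.60, 1.21)   # 95% CI around the 0.86 Connecticut estimate
    print(intervals_overlap(ct_2012, ct_2013))   # True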

In addition, the low rates seen in 2013 are not unprecedented. Looking back through each registry’s history to 1973, similarly low values have appeared before.

Jorge Sirgo works with clients to estimate the liability from future bodily injury claims. Mr. Sirgo prepares valuations of the population of claims pending against the company to project the cost to settle future claims, evaluate insurance recovery, and estimate reserves. His bio can be found here.

Post-Viking Developments

By Dan Maloney

Now that we’ve had a chance to digest the ruling, and because there have been some further developments, this seems an appropriate time to revisit the potential impact of Viking.

Much has been written and said about how Viking is a game changer, and the implication has been that policyholders should rejoice. The initial headlines often read along the lines of the decision being a “major victory” for policyholders. But in those initial reports, blog posts, and opinion pieces, the more objective point, and the one with which I agree, was often that the decision is a significant development worthy of further evaluation.

At the very least, Viking confirms that New York should not automatically be viewed as a strict pro-rata jurisdiction. What is also clear, however, is that New York should not now be automatically viewed as an all-sums jurisdiction. Further, the decision:
1. Potentially protects and improves policyholders’ recovery on insurance policies;
2. Potentially makes it easier to trigger excess coverage;
3. Clarifies the position of New York that the language of the policies at issue in a case will determine the appropriate allocation methodology; and
4. Confirms that the facts and circumstances of an individual situation matter.

One key outcome could be that there is now more uncertainty given that the decision was about the specific language in the insurance contracts at issue. In addition, Viking leaves a number of issues and questions untested and unresolved, such as how other provisions in the contracts would be interpreted, what you do when there is mixed non-cumulation language within an insurance program, what you do about contributions from other insurers, and so forth.

Much has also been written and said about the practical impact of the decision, and in fact I spoke about just that at a recent conference. But to summarize why we care — a policyholder in a coverage action where New York law applies should not be assumed to be forced into a strict pro-rata allocation methodology with the application of non-cumulation clauses. Let’s use a very simple illustration of what would happen if it were. I will assume that a policyholder is paying a $10M claim, that the claim triggers ten years of annual $1M policies, and that the policies contain applicable non-cumulation language. In this scenario, the first policy would pay $1M, the remaining policies’ limits would be reduced to $0, and the policyholder would be responsible for the remaining $9M in costs. If able to apply an all-sums allocation methodology, a policyholder can avoid periods with non-existent or uncollectible coverage, like gap periods, policies issued by insolvent insurers, SIRs, and the like, potentially finding a path to full recovery of its loss.
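
The arithmetic is simple enough to lay out explicitly. A minimal sketch follows, with the $10M claim and ten $1M policies taken from the illustration above; the mechanical way the non-cumulation clause is applied here is an assumption for illustration only, since the actual result turns on the policy language and the facts of the case.

    CLAIM = 10_000_000
    LIMITS = [1_000_000] * 10        # ten years of annual $1M policies

    # Pro-rata view with non-cumulation applied: the first triggered policy pays
    # its limit and the clause reduces the remaining limits to zero.
    insurer_pays = LIMITS[0]
    policyholder_retains = CLAIM - insurer_pays
    print(insurer_pays, policyholder_retains)    # -> 1000000 9000000

    # Under an all-sums approach, the policyholder could instead target a year
    # with collectible coverage (and the excess layers above it), potentially
    # avoiding the $9M uninsured share entirely.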

So what to do? We have clients in settlement discussions who are looking to re-run allocations with Viking in mind. First, it’s important to note that if you want the same allocation outcome as Viking, the policies need to include policy language similar to what the Court reviewed. Then, the particular situation can help guide how the policyholder could proceed. It could mean:
1. A pick-and-spike scenario in which multiple years are selected and the policyholder chooses at what point in each policy period it stops;
2. Collapsing costs previously allocated to “white space” into the coverage block; or
3. Something in between, like allocating to one year of coverage, a “net of contributions” analysis, etc.

As someone who has been responsible for more allocations than I’d perhaps like to admit, I’ve learned that what might seem like the obvious best path for the policyholder often isn’t, once you consider the impact of multiple variables. Something else I probably shouldn’t admit is that I find it rather fun to identify coverage-maximizing paths forward that seem counter to conventional wisdom, like treating policies as defense within limits, or even not covered, versus defense covered in excess of limits. Sometimes accepting the other side’s positions on choice of law or underlying exhaustion or any number of other variables results in a better recovery for the policyholder. In other words, understanding the facts and circumstances of the situation at hand and taking a fresh look can often be beneficial.

With the passage of some time, we have the benefit of seeing how insurers are responding to Viking. Liberty Mutual Insurance Co. v. Fairbanks Co. is an interesting current case study because a recent development in that matter is a direct result of Viking.

Liberty Mutual issued, among other coverage, umbrella policies to Fairbanks for annual periods from 1974 to 1981. Leading up to a March decision by the District Court in New York, Fairbanks argued that New York law, which applied to Liberty Mutual’s policies, supported an all-sums approach. However, the Court determined that Liberty Mutual’s policies were subject to pro-rata allocation of indemnity.

Then along came the Viking decision in May. Liberty Mutual argued on June 17, in a motion for summary judgment, that the umbrella policies issued to Fairbanks contain non-cumulation clauses identical to those in Viking, that Viking held that policies containing non-cumulation clauses must be subject to all-sums allocation, and that Viking held that non-cumulation clauses are unambiguous and must be enforced according to their plain language.

Liberty Mutual argues that the policies are therefore subject to an all-sums allocation methodology AND that the non-cumulation provisions apply, with the result that the 1974 policy would pay its $10M limit and the limits under the 1975 to 1981 umbrella policies would be reduced to zero.

Viking continues to be a significant development that is having an impact on policyholders and insurers on its own and due to its impact on other cases. The situation will continue to unfold in ways that may be unexpected and ultimately time will tell how the decision plays out for policyholders. In the meantime, policyholders and their counsel can revisit allocation scenarios and related discussions, watch as Viking returns to the Delaware Supreme Court, and monitor how Fairbanks and other cases play out.

Dan Maloney has more than 15 years of experience in the areas of economic and financial modeling, data analytics, damages calculations, and financial analysis. He has supported clients and their legal counsel on dozens of high profile, complex cases requiring close collaboration, creative problem solving, technical skill, and subject matter expertise. You can read his full bio here.

Slashing Document Review Costs with Sampling and The Cloud (Part 1)

By Eric Kirschner & Jorge Sirgo

Frequently in litigation, a large number of claims (or other relevant data) need to be reviewed to establish critical evidence (e.g., damages, years of product exposure). Under many circumstances, however, it would be prohibitively expensive to review each of these individual claims.

In these instances, constructing a sample of relevant claims and reviewing the sampled claims via a Cloud based application may dramatically cut document review and data management costs.

Sampling: Sampling is a process whereby a small subset of data is reviewed and the results of that review are extrapolated to the larger population. For example, in many instances we are asked to evaluate the accuracy of a client’s underlying claims database. The database may contain tens (or hundreds) of thousands of records, each summarizing a separate claim – reviewing every one of these claim files would be cost- and time-prohibitive and, as described below, unnecessary.

Instead of engaging in such an inefficient review, a more prudent and widely accepted approach is to select and review a representative sample of claim files. Relevant information is obtained from these files, interpreted, and statistically evaluated. If the sampled information produces an estimate within the expected level of precision, a review of the entire population is unnecessary. Typically, precise estimates can be obtained by reviewing less than 10% of a total population, thereby eliminating the need to review the vast majority of the individual files.

Additionally, the sample can be structured to address the specific needs of the litigation. For example, if the litigation requires an evaluation of the accuracy of a plaintiff’s damages – and those damages consist of numerous individual elements – the sample can be designed so that only a small subset of those elements is actually reviewed.

Let’s look at a more specific example of a situation in which sampling can be a useful tool. The task is to evaluate a database summarizing 5,000 separate claims that allegedly settled for an average of $1,000. Instead of reviewing the underlying documentation for all 5,000 claims to verify their settlement values, it would be more efficient to review a small sample of files – say 300 or 400 – for accuracy. This sample review might very well yield a result that confirms the $1,000 average settlement amount with a very high degree of accuracy at a fraction of the cost of a full review (see the graph below for an illustration of precision as sample size increases).

[Figure: Confidence interval width as a function of sample size]

A modest sample size (the x axis) can yield a tight
confidence interval at a modest cost.
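
A back-of-the-envelope version of this exercise can be run with nothing more than Python’s standard library. The sketch below simulates a population of 5,000 settlement values (in practice the sample would be drawn from the client’s actual claim files), reviews a random sample of 300, and reports a normal-approximation interval around the estimated average; the distribution and the estimator are simplifications for illustration.

    import random, statistics

    random.seed(1)
    # Simulated stand-in for the 5,000-claim database described above.
    population = [random.gauss(1_000, 250) for _ in range(5_000)]

    sample = random.sample(population, 300)      # review only 300 claim files
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    print(f"estimated average settlement: {mean:,.0f} +/- {1.96 * se:,.0f}")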

Furthermore, the U.S. government and courts are becoming increasingly comfortable employing statistical sampling to cut the time and costs of complex, multi-claim litigation. Some examples of recent applications of sampling by the U.S. government and the courts include:

  • Determining over- or underpayments related to Medicare — United States v. Fadul, Civil Action No. DKC 11-0385, 2013 WL 781614 (D. Md. Feb. 28, 2013); United States v. Rogan, 517 F.3d 449, 453 (7th Cir. 2008);
  • Determining the Basis in Property Acquired in Transferred Basis Transaction – Rev. Proc. 81-70, 1981-2 C.B. 729; and
  • False Claims Act cases – U.S. ex rel. Martin v. Life Care Centers, No. 08-cv-251, Dkt. 184 (E.D. Tenn. Sept. 29, 2014).

Slashing Document Review Costs with Sampling and The Cloud (Part 2)

By Eric Kirschner & Jorge Sirgo

The Cloud: Of course, all this information is useless if not stored and analyzed properly. And this is an area where costs can vary wildly depending upon the effectiveness of the software employed. Fortunately, just as sampling can drastically reduce document review costs, a well designed, secure Cloud-based data solution can also drastically reduce review and analysis costs.

Cloud based systems provide a number of advantages over more traditional legal data systems. First, they are easy to set up. The system is installed on a single server and all users access the data from that server. There is no need for multiple, complex site and machine installations.

Second, data and documents are available to all team members from any location (subject to the privileges granted). Different people in different offices with different roles are able to seamlessly view and analyze data and documents. Similarly, counsel, client, witnesses and the Court can be granted rights to review specific data and documents as needed. Witness testimony is better focused and flows more smoothly while judges are similarly able to more easily follow the proofs being proffered.

Third, all this information is immediately available. If, for example, two staff members, Betty and Dan, are working on a matter, and Betty in New York City finds a useful document, she can code the relevant information into the database and call Dan in Washington. Dan can instantly pull the document up on his screen and review both the document and Betty’s comments.

Fourth, a well designed Cloud based system is exceptionally cost effective. A basic review and coding system can be quickly set up and customized so that reviewers are focused only on critical and/or relevant information (see the sample screen capture below).

And fifth, they are immensely flexible. Documents, fields, queries, etc. can all be added to an existing Cloud based system with only minimal programming. Similarly, additional reviewers or parties (attorneys, witnesses, etc.) can also be looped into the system with minimal cost and zero downtime.

[Figure DETCap01: Sample screen capture of a Cloud-based document review and coding system]

A Cloud based document review and analysis system can be easy to use
while providing instantaneous access to data and documents.
(the above data are fictitious; the document is public)
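
To give a flavor of what “coding” means in this context, here is a hypothetical record a reviewer might complete for each sampled claim file. The field names are invented for illustration; a real system would be customized to the claims and issues in the matter.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class ClaimReview:
        claim_id: str
        reviewer: str
        review_date: date
        settlement_amount: Optional[float]   # value verified from the underlying file
        matches_database: Optional[bool]     # does the file support the database entry?
        notes: str = ""

    record = ClaimReview("CL-00042", "Betty", date(2016, 5, 2), 1_050.00, True,
                         "Release signed; amount matches the payment ledger.")
    print(record)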

Summary: The above discussion covers two techniques that can be used to drastically reduce the costs of large claim file reviews. Sampling can be used to limit the scope of the review and Cloud based data can be used to reduce the cost of coding and data management. Taken together, these two techniques can dramatically reduce the expense involved in litigating cases that involve thousands – or hundreds of thousands – of individual claims.

Observations from the 2015 SEER Data (part 2)

A month ago, I reported on the latest data from the Surveillance, Epidemiology, and End Results (“SEER”) Program of the National Cancer Institute on mesothelioma diagnoses in the United States. Click here. The blog post reports on mesothelioma diagnoses through 2012, as determined from the SEER 9 registries (Atlanta, Connecticut, Detroit, Hawaii, Iowa, New Mexico, San Francisco-Oakland, Seattle-Puget Sound, and Utah). Data from these registries are available for cases diagnosed in 1973 and later, with the exception of Seattle-Puget Sound and Atlanta, which joined the SEER program in 1974 and 1975, respectively.

Over time, the scope of the SEER registries has expanded. Registries added to SEER 9 formed SEER 13 (SEER 9 plus Los Angeles, San Jose-Monterey, Rural Georgia, and the Alaska Native Tumor Registry), with data starting in 1992. A further expansion, starting in 2000, produced SEER 18 (SEER 13 plus Greater California, Kentucky, Louisiana, New Jersey, and Greater Georgia). Greater California includes Central California, Sacramento, Tri-County, Desert Sierra, Northern California, San Diego/Imperial, and Orange County. Greater Georgia covers the entire state excluding Clayton, Cobb, DeKalb, Fulton, Glascock, Greene, Gwinnett, Hancock, Jasper, Jefferson, Morgan, Putnam, Taliaferro, Warren, and Washington Counties.

The SEER 9, SEER 13, and SEER 18 data show that the extrapolated estimates of the population of mesothelioma diagnoses do not differ wildly. The SEER incidence estimates are shown in the chart below:

[Figure: Extrapolated estimates of mesothelioma diagnoses from SEER 9, SEER 13, and SEER 18]

The figure shows that the extrapolated estimates of the population of mesothelioma diagnoses from SEER 18 are generally greater than those from SEER 9. Conversely, the estimates from SEER 13 are generally less than those from SEER 9. Differences in the estimates can be attributed to the coastal representation of the registries (Stallard, Manton, & Cohen, 2004). Nevertheless, the patterns in the series (for years in common) appear similar. Given the shorter duration of the SEER 13 and SEER 18 data, the smoothed trend fit to the SEER 9 extrapolated estimates may not necessarily translate. Looking only at the data from 1992 onward, mesothelioma diagnoses might be thought of as more or less “flat.”

Observations from the 2015 SEER Data

The latest annual federal data on mesothelioma diagnoses in the United States became available on April 15, 2015. The data reports on mesothelioma diagnoses during 2012, as determined from “cancer registries” covering a sample of hospitals in the United States.

The data arises from the Surveillance, Epidemiology, and End Results (“SEER”) Program of the National Cancer Institute. The report is more technically described as the 1973-2012 SEER Research Incidence data (November 2014 submission). SEER collects data on cancer cases from various locations and sources throughout the United States. Data collection began in 1973 with a limited number of registries and continues to expand to include more areas and demographics today.

The data in 2012 reflect modest changes from 2011, but appear to support the trends previously identified in the data.

The SEER 9 database (the database with registry information collected since 1973) shows that the overall rate of mesothelioma diagnoses has been falling since the early 1990s. The rate trend differs, however, by sex. While the rate of male mesothelioma diagnoses has been falling since the early 1990s, the rate of female mesothelioma diagnoses appears to remain constant. See the figure below to review the trends.

[Figure: Rates of mesothelioma diagnoses by sex, SEER 9, 1973-2012]

The rates above are used to extrapolate from the sample to an estimate of the total population of mesothelioma diagnoses. The resulting SEER incidence estimates are shown in the chart below:

[Figure: Extrapolated estimates of mesothelioma diagnoses (overall, male, and female)]

Overall, the data indicate that the estimated number of mesothelioma diagnoses fell from 3,229 in 2011 to 3,174 in 2012. Estimated male mesothelioma diagnoses fell from 2,488 in 2011 to 2,404 in 2012, while estimated female mesothelioma diagnoses increased from 741 in 2011 to 769 in 2012. The data do not distinguish between pleural and peritoneal mesothelioma diagnoses. The declining trend in rates noted above for overall and male diagnoses translates into potentially declining incidence. However, the relatively constant rate of female mesothelioma diagnoses, combined with increasing population, longevity, and similar factors, appears to show increasing female incidence of mesothelioma diagnoses.

The estimates from the other SEER databases with expanded registries (SEER 13 starts in 1992 and SEER 18 starts in 2000) will be examined in a future blog post.

Cyber Security Risk, Threats Edition

“The only solution that provides complete security is to put that data on a hard drive, incinerate the drive until it is completely turned to vapor, and then randomly mix the hard drive vapor with outside air until completely dissipated.”

— Mike Danseglio, Securing Windows Server 2003

Cyber security is a critical aspect of data management and one that I should have touched on in earlier posts (but we’ll go with the “better late than never” theory . . . plus, I’ll be making up for the lost time with volume).

I started off with the Danseglio security quote above for a number of reasons. First, it’s completely accurate. All data security is a trade-off between security and access. If you have too much security, you inhibit access, and vice versa. The trick is finding the proper balance between the two.

Second, the situation he describes is all too common. Clients are always – quite rightfully – asking how we can ensure that data is absolutely, positively secure. The answer – as Danseglio elegantly explains – is you can’t.

Finally, Danseglio’s quote is from a cyber security book published back in 2005. That’s before widespread broadband and wireless, before tablets, before BYOD, thumbdrives, cloud computing, java vulnerabilities, etc., etc. The information that Danseglio was conveying back in 2005 is exponentially more relevant today. Smart fella.

Anyway, let’s start with an oldie but goodie, the Verizon Data Breach Investigations Report (2013) (the “Verizon Report”; get it here). The annual Verizon Report covers 47,000+ security incidents and 621 confirmed data breaches from 2012 and probably represents the largest, enterprise-level, publicly available survey of annual cyber security risks. While there is a fair amount of sample bias in the survey (which the authors readily note), the report is an invaluable overview of enterprise-level data security risks.

Here are some highlights:

1. 92% of the breaches involved outsiders, but only 14% involved insiders (the figures add up to over 100% because a breach might involve multiple parties; for example, if a criminal enlists a waiter to steal a credit card number, Verizon will record that as involving both an outsider and an insider).

That’s a pretty striking number. Less than one in seven breaches involved an insider in any sense.

2. In contrast to the above, of the 47,000+ security incidents (as opposed to the 621 breaches), 69% came from insiders. Whoa, what does this mean? That outsiders are better at breaching data security than insiders? Nope. Most of the insider incidents were simply errors, e.g., misplacing a thumbdrive, losing a laptop, sending a document to the wrong recipient. In general, insiders don’t represent a conscious threat to data (the occasional Edward Snowden notwithstanding), but they do represent a major inadvertent threat to data. So there’s an important lesson – educate your employees with regard to data security and enact safeguards to limit any damage done by an inadvertent breach.

The primary motivation behind data breaches was financial gain. No surprise there. Next on the list, however, was espionage (remember, we’re dealing mostly with economic/industrial espionage).

[Figure: Motives behind data breaches, 2013 Verizon Report]

And here’s an eye-opening figure: “96% of espionage cases were attributed to threat actors in China and the remaining 4% were unknown.” [p. 21]

3. 35% of all data breaches involved physical attack. Verizon explains that most physical attacks are either point of sale (“POS”) attacks or ATM skimming. In the former, the hacker typically substitutes a credit card reader that transmits user information in place of a valid reader. In the latter, the hacker typically attaches a mechanical device to an ATM (maybe a reader, maybe even a small camera that captures the user account and PIN).

4. Ransomware. This is not very widespread, but it seems like a high-growth “industry” and is worth noting. In a ransomware attack, hackers typically will break into a victim’s server and alter the server’s backups so that they run regularly, but do not actually store any data. After a few weeks of phantom backups, the hackers will then encrypt the victim’s data and demand payment from the victim in return for providing the encryption key. The victim is then faced with the choice of either paying the ransom or losing all of their recent work and data.
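
The phantom-backup step is also a reminder of a cheap safeguard: verify that backups actually contain data rather than just confirming that the job ran. A minimal sketch follows, assuming backups land in a directory as .bak files; the path, extension, and size threshold are hypothetical.

    import glob, os

    def suspicious_backups(directory, min_bytes=1_000_000):
        # Flag backup files too small to plausibly contain real data.
        return [path for path in glob.glob(os.path.join(directory, "*.bak"))
                if os.path.getsize(path) < min_bytes]

    # e.g., suspicious_backups("/var/backups") -> list of near-empty backup files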

So that’s a quick overview of some of the more interesting nuggets in the Verizon Report. If you’re interested in data security, then the report is an annual must read.

In the next post, we’ll look at some recent cyber security related insurance issues.

Eric Kirschner is a former insurance coverage attorney who now specializes in assisting companies and law firms efficiently manage data in large scale litigation. You can read his full bio here.

Another cool Excel add-in

Okay, I know I promised a follow-up to my software licensing post, and I know this is a bit of a hard-core post for us stat and data geeks, but if you ever have to work with Stata .dta files, you should find this useful.

Stata is a program that crunches data and generates various statistical analyses from that data. I have no idea how to use it. Jorge Sirgo is our statistical expert and he uses SAS (I have no idea how to use that either, but I digress . . .).

Anyway, it’s fairly common for us to get Stata data files in litigation and then Jorge does his magic and converts them to a more common format like Excel. But if you need a basic (and free) tool for viewing and converting Stata files, then an organization called Colectica publishes a great little add-in for viewing and saving these. Download it here.

Note that the tool is not perfect — date formats may trip it up. But I’ve found it to be a great little add-in for pulling and analyzing Stata .dta files. Plus it apparently handles a number of other formats.
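
For what it’s worth, if you (or your Python-inclined colleagues) would rather script the conversion than use an add-in, the pandas library reads .dta files directly. A minimal sketch, assuming a hypothetical file named claims.dta; column types and date handling are exactly the sort of thing to spot-check afterward.

    import pandas as pd

    df = pd.read_stata("claims.dta")   # hypothetical input file
    print(df.dtypes)                   # check how dates and categories came through
    df.to_excel("claims.xlsx", index=False)   # requires an Excel writer such as openpyxl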

Eric Kirschner is a former insurance coverage attorney who now specializes in assisting companies and law firms efficiently manage data in large scale litigation. You can read his full bio here.

Observations from the Latest SEER Data on Mesothelioma

Over the past several decades, product liability lawsuits alleging asbestos-related illnesses have reached into the hundreds of thousands for many companies. Due to the long latency period for asbestos-related diseases, these claims (particularly those alleging mesothelioma) continue today. Numerous entities have relied on the projections of occupational-related mesothelioma deaths by Nicholson (1982), and others based on his work, as a way to forecast the magnitude of product liability lawsuits that allege disease related to asbestos exposure. In order to assess the accuracy of these 30-year-old forecasts, comparisons to publicly available data can be a useful exercise.

The Surveillance, Epidemiology, and End Results (“SEER”) Program of the National Cancer Institute collects data on cancer cases from various locations and sources throughout the United States. Data collection began in 1973 with a limited number of registries and continues to expand to include more areas and demographics today. In April 2013, SEER announced the availability of its 1973-2010 SEER Research Incidence data (November 2012 submission). Among the cancers tracked in the SEER data is mesothelioma.

The SEER data (albeit extrapolated from a sample) provides some insight regarding the actual incidence of mesothelioma diagnoses relative to the Nicholson projections of occupational-related mesothelioma deaths.  The SEER data are represented below in the following chart:

[Figure: SEER mesothelioma diagnoses compared to the Nicholson projection of mesothelioma deaths]

The chart shows that a smoothed fit (2nd order polynomial) of mesothelioma diagnoses identified in the SEER data peaks later (approximately in 2006) than the Nicholson projection of asbestos-related deaths from mesothelioma (approximately in 2002). A later peak may or may not mean that the incidence of mesothelioma will take longer to fade than Nicholson projected. It is possible that a later peak could be followed by a faster decline, but more data are needed to evaluate this possibility.
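
For the curious, the smoothing step is straightforward to reproduce. The sketch below fits a second-order polynomial to annual counts and locates the peak of the fitted curve; the counts are made-up placeholders rather than the SEER estimates, chosen only so that the example peaks in roughly the mid-2000s.

    import numpy as np

    years = np.arange(1973, 2011)
    counts = 1_000 + 60 * (years - 1973) - 0.9 * (years - 1973) ** 2   # placeholder data

    b2, b1, b0 = np.polyfit(years, counts, 2)   # quadratic (2nd order) fit
    peak_year = -b1 / (2 * b2)                  # vertex of the fitted parabola
    print(round(peak_year))                     # ~2006 for this placeholder series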

When the SEER data are parsed by sex, however, it appears that the number of male only diagnoses of mesothelioma may have peaked in 2004 while the number of female only diagnoses of mesothelioma may not have peaked.  As a consequence, the combined male and female smoothed fit of the SEER data peaks later (approximately 2006) than the male only curve.

The overall SEER diagnoses of mesothelioma do appear to be declining (when viewed in their historical context since 1973), but the decline may be slowed as a result of female diagnoses.  It has been suggested by some in the literature that female diagnoses are largely “background” cases (i.e. not due to occupational exposure to asbestos).  One potentially valuable way to explore this suggestion is to examine the rate (as opposed to the incidence) of mesothelioma diagnoses.  The rate of diagnoses shows that the male rate increased through the mid-1990s and then has been decreasing ever since, while the female rate has been relatively flat.  The rise, peak, and subsequent fall of the male rate of mesothelioma diagnoses supports the introduction, continuation, and termination of occupational exposure in earlier years, while the apparently flat historical female rate of mesothelioma diagnoses suggests a predominance of background cases.

[Figure: Rates of mesothelioma diagnoses by sex, SEER data]

Forthcoming blog entries will examine the impact foreign-born diagnoses may have on the observed trend in the SEER data, as well as a comparison of the SEER data (diagnoses) to the mortality data collected by the Centers for Disease Control (“CDC”).

Jorge Sirgo works with clients to estimate the liability from future bodily injury claims. Mr. Sirgo prepares valuations of the population of claims pending against the company to project the cost to settle future claims, evaluate insurance recovery, and estimate reserves. His bio can be found here.

Plaintiff Firms and Settlement Averages

I recently attended an asbestos conference (ACI in Philadelphia) where a prominent plaintiff attorney was discussing the cases he chooses to take on. He meets with a plaintiff (or the plaintiff’s family) and decides to take the case or pass it on to other plaintiff attorneys. It caught my attention—of course, the prominent attorney has his choice—he can select the meso case of the 40-year-old man exposed as a child to his father’s take-home fibers. The plaintiff is a non-smoker, has young children, and will leave them fatherless in a short time. However, the prominent plaintiff attorney need not file a case for the 80-year-old man who was exposed as a naval officer in World War II, who has lived a long life with children and grandchildren, and who has smoked since he was 11 years old. That case can be passed on to a less prominent plaintiff attorney….

Prominent plaintiff attorneys not only can but do select the strongest cases that will move a jury or result in a quick and large settlement. However, what happens to those cases that are passed over by the “big guys”? It is an interesting data question, and one that can be answered using the vast data available in the asbestos world. Do those passed-over cases take longer to settle? Can we identify the second- and third-tier plaintiff firms based on the length of time from when a claim is first filed to when it is settled? Are those cases universally settled for lower amounts? Do they rarely go to trial? Is it more useful to examine the length of time from diagnosis to filing date—the time in which the plaintiff is being passed down the ranks from a top-tier firm to a second- or third-tier firm before an attorney will bother to file a case?

In the next few months, I hope to work on some defendant-specific data analysis toward a published article that will attempt to answer some of the questions raised above. As the world of asbestos litigation continues to change through mass screenings, tort reform, a focus on mesothelioma, and an emergence of lung cancer claims, there are always new trends to review and describe as the plaintiff bar continues to adjust to the changing environment. In my work in financial reporting, understanding such trends and adjustments becomes part of the anecdotal evidence we incorporate into forecasting. Following the raw data alone misses part of the picture. We are better forecasters for understanding the market forces as well as the data.

Dr. Jessica Horewitz offers 15 years of consulting experience in the litigation environment, using analytical and statistical tools to assist clients with a variety of economic and econometric analyses in different arenas.  She has substantial expertise in managing large volumes of data and conceptualizing analyses to best use the data for clients’ analytical needs.  You can read her full bio here.