Scientific Evidence in Oregon

A Limitless Limit:

The Test for the Admissibility of Scientific Evidence in Oregon

By Bill Masters

Wallace, Klor & Mann, P.C.

In state court in Oregon, the rule of Daubert v Merrell Dow Pharmaceuticals, 509 US 579 (1993), has been judicially interpreted in a way that provides, ostensibly through the inability of the Oregon Supreme Court to understand the meaning of the concept of “scientific method,” a limitless limit on the admission of scientific evidence. Jennings v Baxter Healthcare, 331 Or 285 (2000).

The Basic Relevancy Test—Stage I

 

The basic “relevancy test” for admission into evidence of scientific opinions is defined by Professor McCormick as providing that “a relevant conclusion supported by a qualified expert witness should be received unless * * * its probative value is overborne by the familiar dangers of prejudicing or misleading the jury, unfair surprise and undue consumption of time.” C. McCormick, Evidence 491 (2d ed 1972) (Emphasis supplied).

 

The relevancy test was adopted in Oregon. State v. Brown, 297 Or 404 (1984). In Brown, the Oregon Supreme Court held that for expert testimony to be admitted, the trial court must determine that the proffered evidence is “relevant” under OEC 401, “helpful” under OEC 702 (that is, it is within the expert’s field, the expert is qualified, and the foundation of the opinion intelligently relates the testimony to the facts), and that its “probative value” is not substantially outweighed by the threefold dangers of unfair prejudice, confusion of the issues, or misleading the jury under OEC 403. (In Brown, the Oregon Supreme Court did not have the phrase “scientific knowledge” in OEC 702 carry any analytical water.)

 

“Relevant” =df having the minimal degree of probative value needed to make the existence of any fact of consequence to the determination of the action more probable or less probable. OEC 401; State v Hampton, 317 Or 251, 255 (1993).

 

“Probative Value” =df the degree to which evidence makes the existence of any fact of consequence to the determination of the action more probable or less probable. State v O’Key, 321 Or 285, 299 n. 14 (1995).

 

The Specified Relevancy Test—Stage II

 

But in Brown, the Oregon Supreme Court refined this basic relevancy test. It required that the trial court, in applying the criteria of OEC 401, 702 and 403, consider a number of specific factors in assessing both (1) the “probative value” of the proffered evidence (the power of the proffered evidence to help the jury) and (2) its power to mislead the jury.[1]

 

This “specified relevancy test” is often interpreted rather cavalierly by trial judges to admit proffered evidence without much, if any, regard for its validity. Whether or not the evidence is valid, it is urged, is best determined by the jury. This interpretation is suspect because it forces certain unwanted corollary interpretations of OEC 401 and 403. For example, under OEC 401, proffered evidence that is invalid would still be “relevant” if on its face it tended to make the existence of any fact of consequence more probable or less probable.[2] It would follow, given that relevant evidence has, by definition, some probative value, that invalid evidence may still have “probative value” under OEC 403 if it is probative on its face. That is, the evidence need only have what is characterized as “face validity.” Face validity is not technically validity at all. It refers to whether or not the proffered evidence looks or appears to untrained observers (i.e., jurors) to be what it is claimed to be.

But this preference for the superficial can only be taken so far. For OEC 403 purports to require the court to assess not only the probative value of proffered evidence but also its power to mislead the jury. Evidence offered as valid, but which is in fact invalid, will mislead the jury. So if the probative value of proffered evidence is to be weighed against its power to mislead the jury, the trial court must assess its validity beyond its “face validity.”

Under Brown, then, if the proffered evidence is not probative under OEC 401, the trial court should keep it from the jury. If the proffered evidence is, in fact, weakly probative but apparently significantly probative, then under OEC 403, the trial court must take steps to keep it from the jury to protect the integrity of the judicial system. The overall pragmatic effect of the rule in Brown is simple: keep from the jury proffered evidence high on the scale of rhetoric but low on or off the scale of validity.

 

The Strengthened Relevancy Test—Stage III

 

This specified relevancy test was strengthened in State v O’Key, 321 Or 285 (1995). There, following Daubert, the Oregon Supreme Court added to the Brown analysis the requirement under OEC 702 that the proffered evidence first fall within the set of beliefs characterized as known facts or truths accepted as such on good grounds, and then fall within the subset of those beliefs characterized as “scientific” knowledge, defined as knowledge derived by the “scientific method,” a method based on generating hypotheses and testing them to determine whether they can be falsified. (What the court directly evaluates is not so much the coherence of the belief with other beliefs, but whether it rests on “good grounds” and whether it was generated by a reliable method.)

 

Both Brown and O’Key require the court to screen proffered evidence, however minimally, for validity beyond mere “face validity.” As the Oregon Supreme Court remarked in O’Key:

 

“Both decisions view the validity of a particular scientific theory or technique to be the key to admissibility. Both require trial courts to provide a screening function to determine whether the proffered scientific evidence is sufficiently valid to assist the trier of fact. Under both decisions, a trial court should exclude ‘bad science’ in order to control the flow of confusing, misleading, erroneous, prejudicial, or useless information to the trier of fact.”

The Strengthened Relevancy Test—Ultimately Misunderstood

 

But the Oregon Supreme Court later, in an apparent fit of cognitive dissonance, gave only lip service to these requirements. Jennings v Baxter Healthcare, 331 Or 285 (2000).[3] That is, it applied the strengthened relevancy test as though the requirements of OEC 403 stressed in Brown and the requirement of OEC 702, stressed in O’Key, that the proffered evidence be scientific knowledge did not set appreciable limits on the admissibility of proffered evidence.

 

At issue was the proffered testimony of a local neurologist on the issue of “general causation”: whether or not silicone in silicone breast implants (SBIs) stimulates the immune system to cause injury. Plaintiff’s neurologist testified that it did, although he acknowledged he was neither an immunologist nor a rheumatologist, the medical specialists in diseases mediated by the immune system.

 

Plaintiff’s clinical and forensic expert performed a basic clinical neurological examination on 50 women with SBIs referred to him by plaintiffs’ attorneys.[4] On examination, he allegedly found, in a majority of these women, patchy sensory loss in their extremities and symptoms of inner ear dysfunction.

 

On the basis of these examinations, he formed an hypothesis that silicone stimulated the immune system to cause a focal (not systemic) neurological injury. He then purportedly tested this hypothesis by referring back to the same neurological examinations of the same 50 women and by ruling out those alternative causes he could conceive of that might produce these two symptoms.[5]

 

The findings in this group of women were not compared to appropriately selected control groups of women with and without SBIs. And so, these 50 women merely constituted a “case series.”[6] This case series had not been published. Nor had the neurologist’s opinions been peer reviewed. Nor did he have an hypothesis about how silicone could spur the immune system to cause that kind of neurological injury.
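To make the missing comparison concrete, the sketch below shows the form a controlled analysis would take: compare the symptom rate in an SBI group with the rate in a matched control group and ask whether the difference exceeds what chance alone would produce. The counts are hypothetical, invented solely for illustration; the Jennings case series reported no control data at all.

```python
# Minimal sketch of a controlled two-group comparison (hypothetical counts only).
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_value

# Hypothetical: 30 of 50 women with SBIs show patchy sensory loss,
# versus 24 of 50 comparable women without implants.
z, p = two_proportion_z(30, 50, 24, 50)
print(f"z = {z:.2f}, two-sided p = {p:.2f}")  # here, a gap this size is consistent with chance
```

Without the second group there is nothing to test against, which is the statistical sense in which a bare case series cannot show an association beyond chance.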

The testimony of this purported expert is a classic example of an unreliable opinion on scientific issues. It is superficially persuasive to the untrained mind, but without probative value to the scientifically trained mind on the issue of general causation. And this is what is so disturbing about Jennings. If this kind of testimony is sufficient to pass the screen of Brown and O’Key, then no proffered evidence could be so unreliable on its merits as to warrant exclusion.[7] That is why Jennings provides a limitless limit.

 

In Jennings, then, what the Oregon Supreme Court has decided, probably unwittingly, is that a bare correlation is sufficient evidence from which the jury may infer general causation. In doing so, it ignored two very basic rules: (1) a correlation or association may be entirely due to chance, and (2) even if it is not due to chance, “correlation does not imply causation.”
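The first rule is easy to demonstrate by simulation. The sketch below (purely illustrative; both traits are random noise) draws repeated samples of 50 subjects with two unrelated yes-or-no traits and counts how often the two traits nonetheless appear associated.

```python
# Illustrative simulation: how often do two *unrelated* yes/no traits look associated
# in a sample of 50, purely by chance?
import random

random.seed(1)

def apparent_association(n=50, trials=10_000, gap=0.20):
    """Fraction of trials in which two independent coin-flip traits differ in rate by >= gap."""
    hits = 0
    for _ in range(trials):
        pairs = [(random.random() < 0.5, random.random() < 0.5) for _ in range(n)]
        with_a = [b for a, b in pairs if a]
        without_a = [b for a, b in pairs if not a]
        if not with_a or not without_a:
            continue
        diff = abs(sum(with_a) / len(with_a) - sum(without_a) / len(without_a))
        if diff >= gap:
            hits += 1
    return hits / trials

print(f"apparent 'associations' by chance alone: {apparent_association():.1%}")
```

In runs like this, a twenty-point gap between the two groups turns up by chance well over one sample in ten, even though the traits have nothing to do with each other; that is rule (1). Rule (2) is a separate point: even a real, non-chance association between SBIs and a symptom could reflect a confounder or the way the sample was selected rather than causation.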

 

Furthermore, that bare correlation is not transformed into causation by virtue of a differential diagnosis.[8] In a differential diagnosis, the expert ideally identifies all causes of the symptoms and rules out all but one. This approach rests on two faulty assumptions. First, it assumes the expert knows all the causes of these symptoms, save the potential cause under consideration. (This is an unlikely prospect when, as here, the expert is not a specialist in the field about which he is testifying.) Second, it assumes that no other causes can exist save possibly the potential cause under consideration. That possible cause is treated as a late but final addition to what was formerly considered a closed set or menu of causes.

 

But this line of reasoning triggers an infinite regress. That is, if this new potential cause can be considered for inclusion in the set, why cannot others pregnant in the manifold of nature also be considered? And if those other possible causes can be considered, the set of causes is no longer closed but open, thereby nullifying the effectiveness of a differential diagnosis to prove general causation.

 

The fact is, reliance on a differential diagnosis to prove general causation begs the question: what is not ruled out is essentially assumed to be a cause. Moreover, this neurologist ignored one important correlation that could be most telling: the correlation between the symptoms identified in the examination and the neurologist performing the examination. That is, perhaps “observer bias”[9] alone is responsible for the correlation. If other neurologists failed to find the same symptoms on their examinations, then the correlation between SBIs and the symptoms would be less important than the correlation between the symptoms and this particular neurologist.[10] That is, what is responsible for the correlation is the bias of this examiner, hired to testify for plaintiff, and not the putative toxin or antigen.[11]
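The inter-examiner point (see endnote 10) has a standard quantitative form: if a second, blinded neurologist examined the same women, agreement beyond chance could be measured with a statistic such as Cohen’s kappa. The sketch below is hypothetical; no blinded second examination was performed in Jennings.

```python
# Illustrative sketch: Cohen's kappa for agreement between two examiners (hypothetical data).

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters giving yes/no findings on the same subjects."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_yes_a, p_yes_b = sum(ratings_a) / n, sum(ratings_b) / n
    expected = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    return (observed - expected) / (1 - expected)

# Hypothetical: examiner A finds "patchy sensory loss" in 30 of 50 women;
# a blinded examiner B finds it in 12, with only modest overlap.
examiner_a = [1] * 30 + [0] * 20
examiner_b = [1] * 8 + [0] * 22 + [1] * 4 + [0] * 16
print(f"kappa = {cohens_kappa(examiner_a, examiner_b):.2f}")  # near 0: agreement no better than chance
```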

 

* * *

 

We would cringe in disbelief should a trial court admit evidence on the issue of general causation couched by an expert as follows:

 

“I have an untested hypothesis that event A usually causes event B.”

 

But should we not cringe even more if the expert were to say about the same untested hypothesis, without revealing to the jury that the hypothesis is untested:

 

“I believe event A usually causes event B.”

 

* * *

 

In Jennings, the Oregon Supreme Court criticized the former, while blithely admitting the latter. The irony of this decision is reflected in the court’s remark that the neurologist’s examination of 50 women enabled him to formulate a hypothesis that exposure of human tissue to silicone produces neurological injuries, a hypothesis that the neurologist then purportedly tested by his evaluation of those same 50 women. Jennings, 331 Or at 308. This process of verification is reminiscent of Wittgenstein’s remark, “as if someone were to buy several copies of the morning paper to assure himself that what it said was true.”[12] It is also a process that violates a cardinal rule of scientific method: a scientific investigator cannot test an hypothesis with the same data used to generate that hypothesis (unless the sample is split before the members of the sample are examined).
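The cardinal rule the court overlooked has a simple operational form: split the sample before anyone is examined, generate the hypothesis from one half, and test it only on the untouched other half. The sketch below illustrates that procedure with hypothetical records; nothing of the kind was done in Jennings.

```python
# Illustrative sketch of the split-sample rule: the data that suggest a hypothesis
# must not also be the data that "confirm" it. All records here are hypothetical.
import random

random.seed(0)

# Hypothetical examination records for 50 subjects.
patients = [{"id": i, "sensory_loss": random.random() < 0.5} for i in range(50)]

# 1. Split the sample BEFORE anyone looks at the findings.
random.shuffle(patients)
exploration, confirmation = patients[:25], patients[25:]

# 2. Generate the hypothesis from the exploration half only
#    (e.g., "a majority of implant recipients show patchy sensory loss").
explore_rate = sum(p["sensory_loss"] for p in exploration) / len(exploration)

# 3. Test that hypothesis on the held-out confirmation half.
confirm_rate = sum(p["sensory_loss"] for p in confirmation) / len(confirmation)
print(f"exploration rate = {explore_rate:.0%}, confirmation rate = {confirm_rate:.0%}")
print("hypothesis survives the held-out test:", explore_rate > 0.5 and confirm_rate > 0.5)
```

Even a clean split-sample check only guards against circularity; it does nothing to supply the missing control group or to eliminate observer bias.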

 

In the end, the proffered clinical evidence in Jennings failed to satisfy basic requirements of scientific method. It was neither knowledge generally nor scientific knowledge specifically. The alleged expert’s bare case series was not proof of an association beyond that arising from chance. It was, after all, just speculation ex cathedra. See R.J. Simpson, Jr. & T.R. Griggs, Case Reports and Medical Progress, Perspectives in Biology & Medicine 28:402-406 (1985).

 

In Brown and O’Key, the Oregon Supreme Court had provided a clear message to lower courts about the admissibility of expert testimony on scientific issues. If the proffered testimony has an aura of reliability but is too complex for the jury to analyze effectively, the court must keep that proffered evidence from the jury if it is in fact invalid or has been generated in a way that does not differentiate it from the set of speculative beliefs.

 

In Jennings, the Oregon Supreme Court was presented with proffered evidence fitting that profile perfectly. The proffered evidence was only a hypothesis, yet it was represented to be a validated theory. So although the Oregon Supreme Court acknowledged the requirements of O’Key, it applied them in a way that ignored their content. In that circumstance, we are compelled to believe either that (1) the Oregon Supreme Court did not understand the meaning of those requirements (their use in context) or that (2) it intentionally disregarded them in order to undermine O’Key sub silentio, presumably because it could not rationalize an explicit reversal of O’Key under the rationale of stare decisis. See G.L. v Kaiser Foundation Hospital, Inc., 306 Or 54, 59, 757 P2d 1347 (1988).

 

It is difficult to attribute the motive expressed in (2) to a body that surely prides itself on its intellectual honesty. By default, then, the Oregon Supreme Court must not have understood the meaning of the requirements of O’Key. That is, it did not understand how those requirements of scientific method are instantiated in practice. In that event, one is reminded of Wittgenstein’s remark: “Now we get the pupil to continue a series (say +2) beyond 1000—and he writes 1000, 1004, 1008, 1012. We say to him: ‘Look what you’ve done!’—He doesn’t understand.” (PI 185).

 

ENDNOTES

[1] These factors are:

(1) the general acceptance of the principle in the field; (2) the expert’s qualifications and stature; (3) the use of the technique; (4) the potential rate of error; (5) the existence of specialized literature; (6) the novelty of the technique; (7) the extent to which the technique relies on the subjective interpretation of the expert; (8) the existence and maintenance of standards governing its use; (9) the presence of safeguards in the characteristics of the technique; (10) analogy to other admissible scientific techniques; (11) the nature and breadth of the inference adduced; (12) the clarity and simplicity with which the technique can be described and its results explained; (13) the extent to which the basic data are verifiable by the court and jury; (14) the availability of other experts to test and evaluate the technique; (15) the probative significance of the evidence in the circumstances of the case; and (16) the care with which the technique was employed in the case.

[2] This would limit the meaning of validity to what scientists call “face validity”: whether the proffered evidence appears to untrained observers to be what it is claimed to be.

[3] It acknowledged the basic rules of scientific methodology, but then ignored them in its analysis of whether the expert satisfied those basic rules.

[4] This is a classic example of “selection bias.” To illustrate the notion of “selection bias”: say you want to devise a test to identify good hitters in baseball. So you interview all the baseball players and ask them what their individual batting averages are. Once you have identified those with batting averages over .300, you send them to be tested. The tester then observes their ability to hit the baseball. Obviously, the tester will invariably find that they are good hitters. This is because they have been selected from all the baseball players for this very ability.
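A short simulation makes the same point (the league and its batting averages below are invented for illustration): once the group has been pre-selected on the very trait being tested, the subsequent “test” can only confirm what the selection already guaranteed.

```python
# Illustrative simulation of selection bias: pre-selecting hitters with averages over .300
# guarantees that the later "test" finds a group of good hitters.
import random

random.seed(42)

# Hypothetical league: true batting skill varies around a .250 league average.
league = [max(0.150, random.gauss(0.250, 0.040)) for _ in range(500)]
selected = [avg for avg in league if avg > 0.300]  # the pre-screened group

# The "tester" then watches each selected player take 50 at-bats.
hits = sum(sum(random.random() < avg for _ in range(50)) for avg in selected)

print(f"league average:          {sum(league) / len(league):.3f}")
print(f"selected group's tryout: {hits / (50 * len(selected)):.3f}")  # inevitably well above the league
```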

[5] He also reviewed animal studies conducted by Dow Corning. But it is unclear what relevance these studies had to his opinion, given that none of these studies provided information about sensory loss or inner ear dysfunction.

[6] Case series have some value in proving causation in some very limited circumstances. But those circumstances do not exist here. That is, neither “slam-bang” nor “signature” effects exist to justify reliance on a case series. To illustrate, suppose you sponsor a company picnic. Ninety-nine employees and you attend, for a total of 100. After the picnic begins, three employees become sick, each with symptoms of severe stomach pain. Of the picnickers, only those three ate the deviled eggs. On that basis, we would likely conclude, with justification, that what caused the stomach aches were salmonella bacteria in the deviled eggs.
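The arithmetic behind that intuition can be made explicit. If the three illnesses had struck picnickers at random, the chance that they would land exactly on the three deviled-egg eaters is vanishingly small, and that is what makes the “slam-bang” inference reasonable. The sketch below works out that number under the facts as stated (100 picnickers, 3 egg eaters, 3 sick); nothing comparable can be computed for the Jennings case series, which had no unexposed comparison group.

```python
# Probability, under pure chance, that the 3 sick picnickers are exactly the 3 deviled-egg eaters.
from math import comb

picnickers, egg_eaters, sick = 100, 3, 3

# Hypergeometric reasoning: pick which 3 of the 100 fall ill at random and ask
# how often all 3 land among the 3 egg eaters (and none among the other 97).
p_chance = comb(egg_eaters, sick) * comb(picnickers - egg_eaters, 0) / comb(picnickers, sick)
print(f"P(cluster by chance) = {p_chance:.7f}  (about 1 in {round(1 / p_chance):,})")
```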

[7] Indeed, the proffered testimony of plaintiff’s expert in Jennings is a cut below proffered polygraph evidence, which the Oregon Supreme Court, the Oregon Court of Appeals, and the United States Supreme Court have all cast as the basic example of inadmissible junk science.

[8] It is unfortunate the Oregon Supreme Court did not inquire about the error rate of this process of using a differential diagnosis to establish general causation.

[9] Observer or confirmatory bias is a cardinal sin in science. To illustrate the notion of “confirmatory bias”: not long ago, phrenology was considered by some to be a branch of science. Phrenology, for those who don’t know, is the study of the bumps and knobs and shape of human skulls. Phrenologists believed they could tell how intelligent someone was merely by examining the shape of the cranium. Not surprisingly, many who were not phrenologists thought phrenology was mischievous nonsense. One skeptic decided to do what all good skeptics are wont to do: apply a little scientific method to these grand phrenologic claims. The plan was to have the most distinguished phrenologist of the time inspect a skull represented to be that of an imbecile, and explain, based on the principles of phrenology, how he could know it belonged to an imbecile. However, unbeknownst to the phrenologist, the skeptic gave him the skull of Laplace, the great French mathematician. Laplace was a genius in any age; no imbecile he. Well, of course, the phrenologist failed the test, describing Laplace’s skull, after judiciously caressing its knobs and bumps, as truly that of an imbecile. Imagine his chagrin when he was informed of the hoax. And old Laplace, imagine his disembodied glee, outwitting others even in death.

[10] One cannot establish inter-examiner reliability without other experts performing examinations on the same or similar sample of people.

[11] Indeed, in Jennings, observer bias was the most likely explanation for the alleged “correlation,” because none of the other studies in the extant published literature on SBIs identified a correlation between SBIs and inner ear dysfunction.

[12] Philosophical Investigations, ¶ 265.