PATHFINDER: Not Quite Hitting the Mark?
Medscape

Published Apr 18, 2024

Dr Kathy Miller discusses the potential mismatch between results and conclusions from the PATHFINDER study.
https://www.medscape.com/viewarticle/...

TRANSCRIPT
Good morning, everyone. It's Dr Kathy Miller from Indiana University. I was catching up on some reading over the recent holidays, and I want to make sure you all take a critical look at the PATHFINDER study by Professor Deb Schrag and colleagues. You'll find this in the October 7 issue of The Lancet.

Many of you know the PATHFINDER study was one of the first large studies looking at the GRAIL multicancer early detection blood test. The test looks for methylation signatures in a blood sample that might suggest the presence of a malignancy, and based on the methylation signature, it also gives some direction as to where the site of origin of that cancer may be. The test is not diagnostic: patients with a positive result then need additional evaluation to find, or not to find, a cancer.

I have several problems with this study after looking at the details. First, here's what's reported. There were 6662 participants enrolled, and about two thirds of them were women. Most of the patients were up to date on screening for colon cancer, at about 91%, and 80% of patients were up to date on screening for breast cancer. That already tells you this is a pretty high-resource population, because those screening adherence rates are much better than what we typically see in the US.

The test was positive in 92 participants, or 1.4%. Of those, 35 participants were found to have a confirmed cancer after an evaluation that took an average of 57 days, and 57 participants were found not to have cancer after testing that took an average of 162 days. Over the 12-month follow-up period, 86 of the patients who tested negative were found to have cancer.

Let's think about this for a minute. You get this blood test; you're told it's positive and we think you probably have a cancer. Then it took 162 days to be told, "We've looked as hard as we can look, and we don't find a cancer." That's not a very comfortable position for patients and their providers to be in.

The test told roughly two patients they probably had a cancer for every one patient in whom a cancer was actually found. The test also missed two to three cancers for every one it found, over just a year of follow-up. Now compare that, for a screening test, with the false-positive and false-negative rates of organ-specific screening such as colorectal and breast cancer screening. Those are not good numbers.
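For readers who want to check the arithmetic, a rough back-of-the-envelope calculation using only the figures quoted above (92 positives, 35 confirmed cancers, 57 positives in whom no cancer was found, and 86 cancers among test-negative participants) might look like the following short Python sketch; the variable names are illustrative and not taken from the study itself.

# Figures as quoted above from the PATHFINDER results
positives = 92          # participants with a positive test
true_positives = 35     # positives with a confirmed cancer
false_positives = 57    # positives in whom no cancer was found
missed_cancers = 86     # cancers diagnosed in test-negative participants within 12 months

ppv = true_positives / positives                    # ~0.38, i.e. about a 38% positive predictive value
fp_per_tp = false_positives / true_positives        # ~1.6 false alarms per cancer found
missed_per_found = missed_cancers / true_positives  # ~2.5 cancers missed per cancer found

print(f"PPV: {ppv:.0%}; false positives per true positive: {fp_per_tp:.1f}; "
      f"cancers missed per cancer found: {missed_per_found:.1f}")

Those ratios are consistent with the figures Dr Miller cites: roughly one and a half to two false positives for every true positive, and about two to three cancers missed for every one found.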

We also need to take a careful look at what cancers were found. There were 12 patients found to have lymphoma, most of them low-grade follicular lymphomas; two patients with chronic lymphocytic leukemia, a disease that in many patients would simply be observed; two patients with Waldenström macroglobulinemia; and one patient with a plasma cell malignancy.

If we look at the solid tumors, there were five patients found to have breast cancer; a couple of patients with gastrointestinal cancers, most of them stage IV; one patient with stage III high-grade serous ovarian cancer; and two patients with prostate cancer, one with a biochemical relapse (so a patient who had already been diagnosed previously) and one with stage IV disease.

It's a little hard to look at the cancers that were found and come to the conclusion that this test would really have been helpful. Did we actually help these patients, or did we just find disease for which treatment and outcome would not have been different? Answering that would really take a randomized trial, and it will take much more investigation to know.

As a researcher, my biggest problem with this study is the mismatch between the data reported for the primary endpoints and the study's goals and conclusions. The goal of this study was to prove that this test was feasible, and the conclusion in the abstract is that it was feasible. However, the primary endpoint was simply to describe how many days it took to come to a final evaluation in patients who had a positive test and say yes, there is a cancer, or no, there is not.

The primary objective, feasibility, is never defined. You have no idea what the investigators would have considered feasible. What time was too long? What time was acceptable? What results could possibly have led these investigators to conclude that this was not feasible?

Before you think I'm entirely negative in this area, I do think these tests will find a place. I think there are situations where they may be valuable, but this study and the conclusions simply don't get us there. Take a look and see what you think.
