Phar Lap (1926–1932) was a Thoroughbred racehorse bred in New Zealand. After winning the Melbourne Cup and 37 other races, he set the track record at the Agua Caliente racetrack in Tijuana, Mexico, in 1932.
With each victory, his detractors became more strident. He was even the target of an assassination attempt.
To prevent him from winning (and thereby disrupting the betting odds), officials would add lead weights to his saddle.
In the 1930 Melbourne Cup he carried enough lead to bring the total weight on his back to 138 pounds, yet he won the race.
A quote from the Sydney Morning Herald dated Wednesday, November 5, 1930, read, “The question was not which horse could win, but could Phar Lap carry the weight. Could he do what no other horse before him had done?”
It appeared that the one thing race officialdom feared above all else was a horse that could consistently beat the field and win the race.
The tale of Phar Lap was brought to mind after a colleague forwarded a paper published in the journal Leukemia on August 10, 2012: “The use of individualized tumor response testing in treatment selection: second randomization results from the LRF CLL4 trial and the predictive value of the test at trial entry” (E. Matutes, A.G. Bosanquet et al., Leukemia, letter to the editor).
Published as a letter to the editor, the paper describes correlations between the TRAC (tumor response to antineoplastic compounds) assay, a short-term suspension-culture cell-death laboratory assay (very similar to our work), and clinical response, time to progression, and overall survival in patients with chronic lymphocytic leukemia (CLL) who received chemotherapy as part of the LRF CLL4 trial conducted in England between 1999 and 2004.
The initial trial was a blinded correlation between laboratory assay results and patient response to one of three treatment regimens. An examination of the data reveals a clear and statistically significant correlation between drug sensitivity and overall survival (p = 0.0001). The 10-year survival of drug-sensitive patients was 28 percent, while the 10-year survival of drug-resistant patients was 12 percent.
Significant correlations with survival were observed for known prognostic factors like 17p and 11q deletion, as well as IGHV mutational status. Correlations were also observed between the TRAC assay results and these prognostic factors.
The report goes on to describe a second randomization that took place at the time of disease progression, defined as either failure of first-line therapy or recurrence within 12 months. In this part of the study, 84 relapsed patients were allocated to standard therapy and their outcomes were compared with those of 84 patients allocated to treatment guided by the TRAC assay.
The drugs tested in the assay-directed arm included chlorambucil, cyclophosphamide (Cytoxan), methylprednisolone, prednisolone, vincristine, doxorubicin, mitoxantrone, 2-CdA (cladribine), fludarabine and pentostatin. In vitro resistance to a combination was defined as resistance to all of its constituent drugs, while sensitivity was defined as TRAC-assay sensitivity to any of the drugs used in the combination. No discussion of synergy analysis was included.
In examining this study, I cannot help but be reminded of Phar Lap. First, marshaling a study of 777 CLL patients, and conducting 544 TRAC analyses, is a phenomenal undertaking for which these authors should be commended.
Second, the observation of a significant correlation between laboratory assay results and overall survival, together with this platform’s demonstrated capacity to correlate with molecular markers, is a noteworthy, if unheralded, success.
Where the analogy with poor Phar Lap’s struggles, weighted down with lead, becomes most poignant is in the final portion of the study, wherein 84 patients received assay-directed therapy.
To wit, we must remember that in 2012 drug-refractory CLL remains an incurable malignancy (with the exception of a small subset of successfully transplanted patients) and that no chemotherapy-alone trial has provided a survival advantage in this group. But this only begins to explain the trial’s results.
Among the virtually insurmountable hurdles these investigators were forced to confront was the fact that fully 52 percent of the standard-treatment arm were destined to receive fludarabine.
This drug, the current gold standard for previously treated patients who fail chlorambucil (who constituted 73 percent of the patients in this part of the trial), has an objective response rate of 48 to 52 percent in this population. As fludarabine would likely be identified as active in vitro as well, this had the effect of pitting the assay arm and the standard arm against one another, frequently using exactly the same treatment.
While this does not mean that the assay arm could not succeed, it does have an enormous impact upon the sample size calculations used to determine the number of patients required to achieve significance.
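The dilution effect described above can be made concrete with a back-of-the-envelope power calculation. The response rates in the sketch below are illustrative assumptions, not figures from the LRF CLL4 trial; the point is only to show how overlap between arms shrinks the detectable difference and inflates the sample size required for significance.

```python
from math import sqrt
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm to detect p1 vs p2 with a two-sided
    two-proportion z-test (standard normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical response rates (assumptions for illustration only):
p_assay   = 0.50   # response rate with assay-directed therapy
p_control = 0.35   # control response rate if the arms never overlapped

n_clean = n_per_arm(p_assay, p_control)

# If 52% of the control arm happens to receive the very drug the assay
# would have selected, the observed control rate drifts toward the
# assay arm's rate, shrinking the difference between the arms:
overlap = 0.52
p_control_diluted = overlap * p_assay + (1 - overlap) * p_control

n_diluted = n_per_arm(p_assay, p_control_diluted)

print(round(n_clean), round(n_diluted))
```

With these assumed rates the required sample size per arm grows several-fold once the overlap is accounted for, which is why an 84-versus-84 comparison was so heavily handicapped from the outset.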
No pharmaceutical company would ever allow a registration trial to be conducted against an “unknown” control arm, particularly one using the same therapy as the study arm – not ever!
Despite these burdens, the assay-directed arm had superior one-year survival, and virtually every other trend favored the group who received assay-selected therapy.
The results of this study are worthy of recognition and further support the clinical relevance, predictive validity and importance of functional analyses.
Yet, this interesting study in CLL is unceremoniously relegated to the status of a Letter to the Editor in Leukemia.
Perhaps, like Phar Lap, no one really wants to upset the odds.