Hawaii Medical Journal

ISSN 2026-XXXX | Volume 1 | March 2026

Alzheimer's Anti-Amyloid Drugs: Review Sparks Debate

A sweeping review of seven anti-amyloid Alzheimer's drugs finds negligible benefit, but researchers push back hard on the methodology.


Two FDA-approved Alzheimer’s drugs sit at the center of a methodological fight that’s now spilling into policy territory, and the stakes aren’t abstract.

A sweeping review of anti-amyloid monoclonal antibody therapies, covering roughly two decades of clinical trial data across seven drugs, concluded that the evidence base does not support meaningful cognitive benefit for Alzheimer’s patients. The backlash from specialists was immediate. Multiple Alzheimer’s researchers disputed the review’s core analytical choice: collapsing pharmacologically distinct agents into a single framework and drawing a unified conclusion from the result. Several of those critics weren’t newcomers to skepticism about the drug class. They’d raised concerns before. This time, their objection was to the method, not the medicine.

That distinction matters.

The drugs aren’t interchangeable. Leqembi and Kisunla, the two most recent agents among the seven the review examined, bind to amyloid species through mechanisms that differ from the earlier, largely failed compounds in the same class. Most of those earlier agents never reached regulatory approval. They operated at different disease stages, targeted different amyloid conformations, and produced clinical trial data that reflected those differences. Treating their outcome data as equivalent to Leqembi and Kisunla’s, then averaging across all seven, is the methodological move that drew the sharpest objections.

The tension here isn’t new. Umbrella reviews and pooled analyses offer statistical power, which is genuinely valuable when the underlying agents are sufficiently similar. When they’re not, the power comes at a cost. Granularity disappears. Signal gets buried. Whether that’s what happened here is precisely what’s being argued, and it’s an argument the evidence synthesis community has had in other contexts, from oncology to cardiovascular pharmacology.

“The data from Leqembi and Kisunla showed they could slow cognitive decline,” one expert told STAT News, pointing to the review’s analytical design as the source of the problem rather than any flaw in the underlying trial results.

Both drugs cleared the U.S. Food and Drug Administration’s evidentiary threshold for approval. Both are currently available to patients in the United States. The Food and Drug Administration’s standard for approval requires demonstration of clinical benefit or a reasonable surrogate, and both drugs met that bar in their respective trials. Critics of the review argued that the aggregate negative finding effectively erased a regulatory determination that had already been made on the basis of trial data the review subsumed into a broader, unfavorable pool.

This isn’t only a scientific disagreement. It’s a policy problem.


Methodological Concerns Merit Attention Before Conclusions Are Drawn

The review’s design raises questions that deserve direct engagement before its conclusions carry weight in coverage decisions. The scope was wide by any measure: approximately two decades of research, seven drugs, trials ranging considerably in size and duration, and patient populations that differed on disease-stage variables that have repeatedly proven relevant in Alzheimer’s drug development. Early-stage versus late-stage disease isn’t a minor variable. It’s one of the central reasons earlier anti-amyloid trials failed while more recent ones showed signal.

Averaging across that heterogeneity produces a number, but it’s not obvious that number reflects clinical reality for any identifiable patient group.

The Cochrane Collaboration has long grappled with this problem in systematic review methodology. The question of when pooling is appropriate, and when it obscures more than it clarifies, doesn’t have a formulaic answer. It requires judgment about mechanism, population, and outcome definitions. Several of the researchers objecting to this review appear to be arguing that the judgment call here was wrong, that the seven drugs don’t constitute a coherent enough class to support the kind of aggregate analysis the review performed.

That’s a serious charge. It’s also a testable one. The drug-specific trial data exists. A stratified analysis, separating the earlier failed compounds from Leqembi and Kisunla, would either confirm or challenge the critics’ position. That analysis hasn’t displaced the umbrella review’s framing yet, but it’s the natural next step.
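The critics’ claim can be illustrated numerically. The sketch below uses a standard fixed-effect inverse-variance pooling formula with entirely invented effect sizes and standard errors; none of these numbers come from the review or the underlying trials. It shows how averaging a handful of null results from earlier compounds together with a positive signal from two newer agents yields a pooled estimate near zero, while a stratified analysis surfaces the subgroup difference.

```python
# Hypothetical illustration of pooling across a heterogeneous drug class
# versus a stratified analysis. Every number below is invented for
# illustration only; this is NOT the review's actual data.

def pooled_effect(effects, ses):
    """Fixed-effect inverse-variance weighted mean effect size."""
    weights = [1 / se**2 for se in ses]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Invented standardized effect sizes (positive = slower cognitive decline)
# and standard errors for five earlier, failed agents...
earlier_effects, earlier_ses = [-0.02, 0.00, 0.01, -0.01, 0.02], [0.05] * 5
# ...and stand-ins for the two newer agents with a modest positive signal.
recent_effects, recent_ses = [0.17, 0.15], [0.06, 0.06]

all_effects = earlier_effects + recent_effects
all_ses = earlier_ses + recent_ses

print(f"pooled (all seven):  {pooled_effect(all_effects, all_ses):+.3f}")
print(f"earlier agents only: {pooled_effect(earlier_effects, earlier_ses):+.3f}")
print(f"recent agents only:  {pooled_effect(recent_effects, recent_ses):+.3f}")
```

Under these made-up inputs, the all-seven pooled estimate lands close to zero while the newer-agent stratum alone shows a clear positive effect, which is the shape of the masking problem the critics allege. Whether the real trial data behave this way is exactly what a stratified reanalysis would determine.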

What complicates the picture further is cost. Leqembi and Kisunla aren’t cheap drugs. In 2026, the annual cost burden associated with Alzheimer’s care in the United States reached $177 billion by some estimates. The drug costs themselves sit at a different scale: Leqembi’s list price has been reported at $26,500 annually, while Kisunla’s cost profile puts it in comparable territory. A Senate report cited figures in the range of $70 billion in projected Medicare spending exposure, with per-patient cost estimates running as high as $353,000 over a treatment course when infusion and monitoring costs are included. The Senate HELP Committee has taken an active interest in how coverage determinations for these drugs will affect federal health spending.

Those numbers create pressure. When a review concludes that a drug class shows negligible clinical benefit, payers notice. Formulary committees read that finding differently than a specialist who can interrogate the methodology. If the review’s conclusion gains traction in that space, coverage restrictions could follow, affecting patients who are currently accessing Leqembi or Kisunla on the basis of their FDA approval status.

The $107 billion figure that appears in projections of long-term Alzheimer’s drug market size by 2026 reflects how much the pharmaceutical industry, investors, and health systems have riding on how this methodological dispute resolves. That’s not a reason to dismiss the review. It’s a reason to scrutinize it carefully, and to hold the critics to the same standard.

Several unanswered questions remain worth tracking. The review’s handling of trial heterogeneity is the immediate issue. But there’s a longer-term question about what the appropriate clinical outcome measure even is for Alzheimer’s drug trials, given ongoing disagreement about whether amyloid reduction, cognitive scale scores, or functional outcomes should drive approval and coverage decisions. The Food and Drug Administration has signaled interest in this question. Regulatory guidance updates released in 2026 touched on it without fully resolving it.

Sixteen clinical trials are currently underway examining next-generation anti-amyloid approaches and combination strategies. Their results will land in a field where the evidentiary bar is now actively contested.

Don’t mistake that contestation for scientific dysfunction. Disagreement about methods, in evidence synthesis especially, is how the field corrects itself. The Cochrane Collaboration exists precisely because pooling evidence badly can be worse than not pooling it at all. Whether this review pooled badly is what’s being argued, and it’s the right argument to be having.

What can’t happen is allowing a methodologically disputed aggregate finding to drive coverage decisions for two drugs that cleared independent regulatory review on their individual merits. The Senate HELP Committee and payers watching this space should hold the review to the same evidentiary standard they’d apply to any clinical claim.

“The data from Leqembi and Kisunla showed they could slow cognitive decline,” the expert told STAT News. That finding came from the trials, not the review. And so far, no one has produced a credible analysis that directly refutes it.
