Expert Consensus Rankings and Start/Sit: How to Use ECR Effectively

Expert Consensus Rankings aggregate the projections of dozens of analysts into a single ranked list, giving fantasy managers a calibrated signal that's harder to game than any single pundit's opinion. This page explains what ECR actually measures, how the aggregation mechanism works, where consensus is genuinely useful for start/sit decisions, and — just as importantly — where it breaks down. Understanding those limits is what separates managers who use ECR as a crutch from those who use it as a baseline.


Definition and scope

Expert Consensus Rankings, almost universally abbreviated as ECR, represent a statistically aggregated view of where analysts expect players to rank at their position over a given scoring period. The aggregation is typically a trimmed mean or median of individual expert rankings, which means one outlier projection doesn't skew the whole list the way it would in a simple average.
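The outlier resistance of a median versus a simple average is easy to see with a toy example. The rankings below are hypothetical, not drawn from any real player:

```python
# Illustrative sketch (hypothetical rankings): one extreme vote drags a
# simple average but leaves the median untouched.
from statistics import mean, median

# Nine analysts cluster a player around WR10; one outlier says WR45.
ranks = [8, 9, 9, 10, 10, 10, 11, 11, 12, 45]

print(mean(ranks))    # 13.5 — pulled toward the outlier
print(median(ranks))  # 10.0 — unaffected by the single extreme vote
```

A trimmed mean behaves like the median here: dropping the top and bottom votes before averaging discards the WR45 submission entirely.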

FantasyPros, which has published ECR since 2008, is the most widely cited source. Their methodology pools rankings from verified analysts — a pool that has exceeded 100 contributors in recent seasons — and weights consistency and historical accuracy through a proprietary scoring system called Expert Accuracy Scores (EAS). The result is a ranked list that reflects the center of informed opinion, not any single analyst's conviction.

ECR exists at multiple levels of specificity. Season-long ECR tracks expected finish over a full season. Weekly ECR — sometimes called "start/sit rankings" — projects expected points for a specific game week. The distinction matters enormously. A running back ranked RB12 for the season might be a borderline RB20 in a given week against a top-five run defense. Treating these two rankings as interchangeable is one of the more common start/sit mistakes managers make.


How it works

The aggregation process has four practical layers worth understanding:

  1. Collection — Individual analysts submit weekly rankings for every rostered-eligible player, typically by a Thursday deadline.
  2. Trimming — Extreme outliers (top and bottom percentiles) are excluded before averaging, which reduces noise from analysts with strong single-player biases.
  3. Weighting — Platforms like FantasyPros apply historical accuracy weights so that analysts with stronger track records contribute proportionally more to the final number.
  4. Output — The result is an ordinal rank (WR14, RB7, etc.) plus, in most implementations, an average draft position (ADP) or projected point total as a secondary signal.
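The four layers above can be sketched in a few lines. The analyst names, accuracy weights, and trim rule below are hypothetical stand-ins, not FantasyPros' actual methodology:

```python
# Minimal sketch of the four aggregation layers: collect, trim,
# weight, output. All inputs are invented for illustration.

def aggregate_rank(rankings, weights, trim=1):
    """rankings: analyst -> submitted rank; weights: analyst -> accuracy weight."""
    # Layer 2 (trimming): drop the `trim` lowest and highest ranks.
    ordered = sorted(rankings, key=rankings.get)
    kept = ordered[trim:len(ordered) - trim] if trim else ordered
    # Layer 3 (weighting): accuracy-weighted average of surviving ranks.
    total_w = sum(weights[a] for a in kept)
    return sum(rankings[a] * weights[a] for a in kept) / total_w

# Layer 1 (collection): five hypothetical analysts' weekly WR ranks.
submitted = {"a1": 12, "a2": 14, "a3": 13, "a4": 4, "a5": 25}
accuracy = {"a1": 1.2, "a2": 1.0, "a3": 1.1, "a4": 0.9, "a5": 0.8}

# Layer 4 (output): a single consensus number (the WR4 and WR25
# outliers are trimmed before the weighted average).
ecr = aggregate_rank(submitted, accuracy, trim=1)
print(round(ecr, 1))  # 12.9
```

Note how the trim step removes both the optimistic WR4 vote and the pessimistic WR25 vote before the accuracy weights are applied, so neither extreme moves the consensus.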

The ECR number itself doesn't tell the whole story. The standard deviation across contributing analysts is equally important. When 80 out of 90 analysts rank a receiver between WR8 and WR12, that's high consensus — the signal is strong. When the same receiver spans WR4 to WR20 across the same pool, the consensus is effectively telling managers "nobody knows." High variance in the underlying rankings is a flag, not a green light. The full start/sit decision framework incorporates this kind of uncertainty quantification explicitly.
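The variance check described above can be made mechanical. This is a hedged sketch: the standard-deviation threshold is an assumption chosen for illustration, not a published cutoff:

```python
# Sketch: quantify analyst disagreement with the standard deviation of
# the underlying ranks. The max_sd threshold is an assumed cutoff.
from statistics import pstdev

def consensus_signal(ranks, max_sd=3.0):
    """Return (mean_rank, sd, 'strong' or 'weak') for a list of analyst ranks."""
    sd = pstdev(ranks)
    avg = sum(ranks) / len(ranks)
    return avg, sd, "strong" if sd <= max_sd else "weak"

tight = [8, 9, 10, 10, 11, 12]   # WR8–WR12 cluster: high consensus
wide = [4, 7, 10, 13, 17, 20]    # WR4–WR20 span: "nobody knows"

print(consensus_signal(tight)[2])  # strong
print(consensus_signal(wide)[2])   # weak
```

Both lists average to roughly the same rank, which is exactly the point: the ordinal ECR alone cannot distinguish them, while the spread can.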


Common scenarios

Scenario A: Clear consensus with low variance. A receiver with an ECR of WR6 and a tight analyst spread of WR4–WR8 is a near-automatic start in most formats. The signal is clean. Vegas lines and game totals will usually corroborate this — favorable spreads and high over/unders tend to cluster around the same names showing up in the top-10 ECR.

Scenario B: Strong ECR but high variance. A tight end ranked TE5 overall but ranging from TE1 to TE14 across analysts is a different animal. That spread usually signals that some analysts are discounting an injury concern, a target-share question, or a matchup the consensus hasn't fully absorbed. Digging into target share and snap counts can resolve what the variance is actually about.

Scenario C: Inflated ECR driven by name recognition. Established veterans sometimes carry residual ECR credibility based on prior-season performance rather than current workload, so their consensus rank is better than their role justifies. This is a textbook case of recency bias in start/sit decisions operating at the analysis level, not just the manager level. ECR reflects human analysts, and human analysts are not immune to their own biases.

Scenario D: Format mismatch. ECR defaults are almost always generated for half-PPR or standard scoring, and in a PPR vs. standard scoring context that matters. Apply a standard-format list to a full-PPR league without adjustment and wide receivers who catch a high volume of short passes are systematically undervalued; apply a full-PPR list to a standard league and the same receivers are overvalued.
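The size of the format gap is easy to quantify. The stat lines below are hypothetical, and the scoring values are common league defaults (0.1 point per receiving yard, 6 per touchdown), not a universal rule:

```python
# Sketch of why format matters: the same stat line scores very
# differently in standard vs. full-PPR. Stat lines are invented.

def fantasy_points(receptions, rec_yards, rec_tds, ppr=0.0):
    """Common default scoring: 0.1 pt/yard, 6 pts/TD, plus `ppr` pts per catch."""
    return receptions * ppr + rec_yards * 0.1 + rec_tds * 6

# A high-volume slot receiver: 9 catches, 70 yards, no TD.
slot = dict(receptions=9, rec_yards=70, rec_tds=0)
# A boom-or-bust deep threat: 3 catches, 95 yards, one TD.
deep = dict(receptions=3, rec_yards=95, rec_tds=1)

print(round(fantasy_points(**slot, ppr=0.0), 1))  # 7.0  — standard scoring
print(round(fantasy_points(**deep, ppr=0.0), 1))  # 15.5
print(round(fantasy_points(**slot, ppr=1.0), 1))  # 16.0 — full PPR
print(round(fantasy_points(**deep, ppr=1.0), 1))  # 18.5
```

In standard scoring the deep threat more than doubles the slot receiver's output; in full PPR the gap nearly closes. A single ECR list cannot rank both players correctly for both formats.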


Decision boundaries

ECR is most reliable within positions, least reliable across positions, and nearly useless in isolation for flex decisions, which force exactly the cross-positional comparison that consensus rankings handle worst.

The starting point for any structured decision process is understanding what inputs ECR draws from and what it was never designed to do. For managers building a repeatable process, the main reference hub connects all the component tools that work alongside ECR rather than in place of it.
