Expert Consensus Rankings for Start/Sit: How to Use ECR Effectively

Expert Consensus Rankings (ECR) aggregate the start/sit projections of dozens of fantasy analysts into a single ordered list, smoothing out individual blind spots and noise. Understanding what ECR actually measures — and where it breaks down — separates managers who use it as a crutch from those who use it as a calibration tool. This page covers how ECR is constructed, how to read it in context, and the specific situations where it earns its trust and where it does not.


Definition and scope

ECR, as formalized and published by FantasyPros, is a weighted average of individual expert rankings across a defined panel. The panel size varies by platform — FantasyPros typically draws from 80 to 120+ analysts for weekly start/sit rankings — and each expert's historical accuracy, measured by FantasyPros' own Expert Accuracy Score, influences how much weight their opinion carries in the aggregate.

The output is a positional rank: WR14, RB22, TE7, and so on. That rank reflects where a panel of experts collectively expects a player to finish relative to position peers in a given week. It does not output a point projection directly, though most platforms display an associated average projected score alongside the rank.

Scope matters here. ECR exists in several flavors — season-long overall rankings, weekly start/sit rankings, and format-specific versions for PPR, half-PPR, and standard scoring. Plugging a season-long ECR rank into a Week 14 lineup decision is a common misuse. The weekly version, updated through the week as injury and weather news lands, is the relevant instrument for start/sit choices. The relationship between format and ECR is detailed further at PPR vs. Standard Scoring Impact.


How it works

The mechanics behind ECR are worth understanding precisely because the output looks deceptively simple.

  1. Individual expert rankings are collected. Each participating analyst submits their personal positional rankings for the week, independent of one another.
  2. Ranks are averaged, then weighted. A raw average of all experts' ranks forms the base; FantasyPros then applies its Expert Accuracy Score weighting, so analysts with stronger historical track records contribute more to the consensus position.
  3. Standard deviation is calculated. This is the underused part of ECR. A WR18 ranking with a standard deviation of 2.1 reflects near-universal agreement. A WR18 ranking with a standard deviation of 11.4 is a coin flip wrapped in a consensus label.
  4. The result is published as a rank with associated average projected points and, on most interfaces, a best/worst range.
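The aggregation steps above can be sketched in a few lines. This is a simplified illustration, not the actual FantasyPros formula (which is not public); the expert names, ranks, and accuracy weights are hypothetical stand-ins.

```python
from statistics import stdev

def consensus(expert_ranks, weights):
    """Weighted-average consensus rank plus the spread across experts.

    expert_ranks: {expert: positional_rank} -- each analyst's weekly rank
    weights: {expert: accuracy_weight} -- hypothetical stand-in for the
    (non-public) Expert Accuracy Score weighting; higher = better record.
    """
    total_w = sum(weights[e] for e in expert_ranks)
    avg = sum(expert_ranks[e] * weights[e] for e in expert_ranks) / total_w
    spread = stdev(expert_ranks.values())  # unweighted disagreement measure
    return round(avg, 1), round(spread, 1)

# Three hypothetical experts ranking the same WR for the week
ranks = {"expert_a": 14, "expert_b": 18, "expert_c": 22}
wts = {"expert_a": 1.5, "expert_b": 1.0, "expert_c": 0.8}
print(consensus(ranks, wts))  # -> (17.2, 4.0)
```

Note how the accuracy weighting pulls the consensus toward 17.2 rather than the raw average of 18, while the standard deviation of 4.0 records how far apart the panel actually sits.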

The standard deviation column — sometimes displayed as a "high/low" rank range — is where ECR becomes genuinely useful for decision-making rather than just validation. Managers who sort by standard deviation alone can quickly identify the week's true boom/bust plays versus the stable consensus anchors.
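That sorting workflow is easy to reproduce from any rankings export. The rows and the 10.0 volatility threshold below are hypothetical; the cutoff mirrors this page's rule of thumb, not a platform default.

```python
# Hypothetical weekly WR rows: (player, consensus_rank, std_dev)
rows = [
    ("WR A", 18, 2.1),
    ("WR B", 18, 11.4),
    ("WR C", 4, 1.8),
    ("WR D", 28, 14.0),
]

VOLATILE = 10.0  # assumed std-dev cutoff separating coin flips from consensus

# Highest-disagreement players first: these need independent research
volatile = sorted((r for r in rows if r[2] >= VOLATILE), key=lambda r: -r[2])
# Stable consensus plays, ordered by rank: safe to take ECR at face value
stable = sorted((r for r in rows if r[2] < VOLATILE), key=lambda r: r[1])

print("volatile:", [p for p, _, _ in volatile])
print("stable:  ", [p for p, _, _ in stable])
```

Two players at the same consensus rank (WR A and WR B here, both 18th) land in opposite buckets, which is exactly the distinction the rank column alone hides.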


Common scenarios

The locked-in starter: A player ranked WR4 with a standard deviation under 3.0 is essentially unanimous. Barring a last-minute injury report, ECR is telling the manager there is nothing to second-guess. This is ECR at its most efficient — it compresses roughly a hundred analyst opinions into a single clean signal.

The controversial sit: A veteran receiver on a bad matchup might land at WR28 with a standard deviation of 14. Half the panel sees him as a low-end starter; the other half sees him as a firm sit. ECR is not delivering an answer here — it is surfacing disagreement. The manager's job shifts to understanding why the disagreement exists, which usually comes down to target share data (see Target Share and Snap Counts) or game environment (see Vegas Lines and Game Totals).

The matchup-driven spike: A running back might rank RB24 on the season but RB11 in a given week because of a historically porous run defense. ECR captures this well when analysts are doing their jobs, but it captures it more slowly than raw matchup data. The Matchup Analysis for Start/Sit layer should confirm or challenge these spike projections before a lineup locks.

Injury-affected weeks: ECR degrades in accuracy during high-injury periods, particularly early in the week before practice participation reports are filed. Checking ECR on Tuesday for a Sunday game is meaningfully less reliable than checking it Friday afternoon, once practice reports have landed. The Injury Report and Start/Sit page addresses this timing problem directly.


Decision boundaries

The situations where ECR is most reliable and least reliable are predictable — knowing where that boundary falls is the actual skill.

Where ECR earns full trust:
- Players ranked in the top 12 at their position with standard deviation under 4. The consensus is tight enough that individual analyst noise has been neutralized.
- Deep flex decisions between two players separated by 10+ ranks. A WR31 vs. WR19 decision does not need nuance — ECR settles it.

Where ECR should be treated as one input among many:
- Players with standard deviation above 10. The consensus number is an arithmetic average of genuine disagreement, not a signal.
- Emerging players whose targets and snap counts are growing faster than the panel has adjusted. ECR is a lagging indicator for role changes; Advanced Stats for Start/Sit tends to lead it.
- Format-specific edges. ECR defaults to PPR weighting on most platforms. In standard scoring leagues, tight ends and pass-catching backs are systematically overranked relative to bruising rushers. Always confirm the format version being used.
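These boundaries can be folded into a rough triage function. The thresholds below mirror this section's rules of thumb (top-12 rank with standard deviation under 4, rank gaps of 10+, standard deviation above 10); they are judgment calls, not official platform guidance.

```python
def ecr_trust(rank, std_dev, rank_gap_to_alternative=None):
    """Rough triage of how much weight a weekly ECR rank deserves.

    Thresholds are this page's rules of thumb, not FantasyPros policy.
    rank_gap_to_alternative: rank distance to the next-best flex option,
    if the decision is a head-to-head comparison.
    """
    if std_dev > 10:
        # Consensus number averages genuine disagreement, not a signal
        return "one input among many: high disagreement"
    if rank <= 12 and std_dev < 4:
        # Tight top-12 consensus: individual analyst noise is neutralized
        return "full trust: tight consensus"
    if rank_gap_to_alternative is not None and rank_gap_to_alternative >= 10:
        # Gap too large for nuance to matter (e.g. WR31 vs. WR19)
        return "full trust: large rank gap"
    return "anchor only: confirm with matchup and usage data"

print(ecr_trust(4, 2.5))    # tight top-12 consensus
print(ecr_trust(28, 14.0))  # high-disagreement coin flip
print(ecr_trust(31, 6.0, rank_gap_to_alternative=12))
```

The ordering of the checks matters: high disagreement overrides everything else, because a noisy consensus should never be treated as settled, even for a nominal top-12 player.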

The broader Start/Sit Decision Framework positions ECR as an anchoring layer — useful for establishing baseline expectations, but not a substitute for the situational analysis that separates a good week from a frustrating one. The home page at FantasyStartSit.com organizes the full toolkit of which ECR is one part.

