Pedigree scoring and weighted mid-projections

Ensuring every projection has a clear chain of custody

When it comes to projecting the future, the hardest part isn’t running the model—it’s deciding what inputs to trust. The world is full of forecasts: government outlooks, intergovernmental assessments, academic studies, consultancy scenarios, industry white papers, NGO reports, and even marketing claims. Each has value, but not all should be weighted equally.

At Viable Pathway, we don’t treat external trends as interchangeable. Instead, we use a pedigree scoring framework that evaluates each source against four simple but rigorous dimensions: Authority, Transparency, Recency, and Confidence. This allows us to turn a noisy landscape of data into a robust, weighted mid-projection that privileges the best evidence, while still capturing the full spread of uncertainty.

Here’s how it works.
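Throughout this post, it helps to picture each scored source as a small, structured record. The Python sketch below is illustrative rather than our production schema; the field names and types are assumptions, but they mirror the four dimensions described next.

from dataclasses import dataclass

@dataclass
class SourcePedigree:
    """One source's pedigree scores; lower is better on every dimension."""
    name: str          # e.g. "IEA World Energy Outlook 2024"
    authority: int     # 1 (statutory) .. 5 (unverified / marketing)
    transparency: int  # 1 (fully replicable) .. 5 (no methods disclosed)
    recency: int       # 1 (published this year) .. 10 (a decade or more old)
    confidence: int    # 1 (explicit uncertainty) .. 5 (misleading certainty)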

Authority: Who is speaking?

Not all institutions have the same standing. Authority measures the formal or institutional weight of the source—not whether the data is recent or the methods transparent (those are separate scores).

We use a simple 1–5 scale:

  • 1 — Government / Official Statutory: National statistical offices, regulatory agencies, system operators, ministries. These bodies have a legal mandate to publish authoritative figures.

  • 2 — Intergovernmental / Treaty Bodies: IEA, IPCC, UNFCCC, World Bank, OECD. These organizations synthesize across countries but don’t hold statutory authority.

  • 3 — Peer-Reviewed Academia, National Labs, Reputable NGOs: Independent research with replicable methods and scholarly review.

  • 4 — Consultancies, Industry Associations, Proprietary Analysts: Often rigorous, but methods may be opaque and incentives may introduce bias. (McKinsey, BCG, Bain, industry associations all default to 4.)

  • 5 — Unverified / Marketing Sources: Blogs, brochures, press releases, or infographics without evidence.
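In code, a rubric like this maps naturally onto an enumeration. The sketch below is illustrative (the class name is ours for this post, not a published API); the Transparency and Confidence rubrics that follow fit the same five-level pattern.

from enum import IntEnum

class Authority(IntEnum):
    """Authority rubric: lower values carry more institutional weight."""
    GOVERNMENT_STATUTORY = 1   # statistical offices, regulators, ministries
    INTERGOVERNMENTAL = 2      # IEA, IPCC, UNFCCC, World Bank, OECD
    PEER_REVIEWED = 3          # academia, national labs, reputable NGOs
    CONSULTANCY_INDUSTRY = 4   # consultancies, industry associations, analysts
    UNVERIFIED_MARKETING = 5   # blogs, brochures, press releases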

Transparency: How open are the methods?

Transparency measures whether a projection is auditable. A government forecast might be authoritative, but if the underlying dataset is locked behind a paywall and the methodology isn't explained, it still scores poorly on transparency.

Our rubric:

  • 1 — Fully Transparent & Replicable: Full data, full documentation, external review.

  • 2 — Mostly Transparent: Detailed methods with minor gaps.

  • 3 — Partially Transparent: General overview but no replicable data.

  • 4 — Opaque: Proprietary “black box” models, vague methods.

  • 5 — No Transparency: Unsupported numbers, no methods disclosed.

Recency: Is the data fresh enough?

Recency simply captures whether the analysis is still timely. A 2010 forecast might have been excellent, but by 2025 it no longer reflects today’s technological, policy, or market realities.

We map publication year to a 1–10 scale, with “1” meaning published this year, and “10” meaning more than a decade old. By scoring recency explicitly, we ensure outdated sources don’t dominate.
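One illustrative mapping (the exact cut-offs shown are a simplification of our rubric): score the source's age in years, starting at 1 for the current year and capping at 10.

from datetime import date

def recency_score(publication_year: int, current_year: int | None = None) -> int:
    """Map a publication year onto the 1-10 recency scale.

    1 = published this year; each additional year of age adds one
    point, capping at 10 for sources roughly a decade old or older.
    """
    if current_year is None:
        current_year = date.today().year
    age = max(0, current_year - publication_year)
    return min(1 + age, 10)

For example, the 2010 forecast mentioned above, evaluated in 2025, has an age of fifteen years and pins to the maximum score of 10.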

Confidence: How well is uncertainty expressed?

Every projection is uncertain—the real question is whether the source admits it. Confidence measures how clearly and rigorously the source quantifies (or acknowledges) uncertainty.

  • 1 — Explicit Uncertainty Quantification: Probability intervals, scenario ranges with clear methodology.

  • 2 — Partial Quantification: High/low scenarios, qualitative caveats.

  • 3 — Informal Acknowledgement: Mentions uncertainty but no numbers.

  • 4 — Unsupported Point Estimate: Single trajectory, no caveats.

  • 5 — Misleading Certainty: Claims inevitability without evidence.

From Pedigree Scores to Weights

Each dimension produces a score, and the four scores are combined into a single composite pedigree score, where lower numbers mean higher quality. To make sure high-quality sources influence projections more strongly, we invert the composite to produce a weight:

Weight = 1 / Pedigree Score

Thus, a transparent, authoritative, recent government dataset will carry far more influence than an outdated consultancy slide deck. But both can still be present in the system—the difference is how much they shape the mid-projection.
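Putting the two steps together, here is a minimal sketch. The combination rule shown (rescaling recency onto the 1-5 range and taking a simple average) is an illustrative simplification; the inversion at the end is the key idea.

def pedigree_weight(authority: int, transparency: int,
                    recency: int, confidence: int) -> float:
    """Collapse four pedigree scores into one weight (higher = more trusted).

    Recency (1-10) is rescaled onto 1-5 so the dimensions are comparable,
    the four values are averaged into a composite, and the composite is
    inverted so that lower (better) scores earn larger weights.
    """
    recency_rescaled = 1 + (recency - 1) * 4 / 9   # 1-10 -> 1-5
    composite = (authority + transparency + recency_rescaled + confidence) / 4
    return 1.0 / composite

Under this sketch, a source scoring 1 on every dimension receives a weight of 1.0, while one scoring at the bottom of every rubric receives 0.2: five times less influence on the mid-projection.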

Mid-Projections and Envelopes

Once each trend is scored and normalized, we produce two key outputs:

  • The Mid (vpmid): A year-by-year weighted average of all sources, using pedigree weights. This ensures your central projection is tilted toward the most authoritative and transparent evidence.

  • The Envelope (vpmin, vpmax): A bounding band that captures the full spread of credible projections. If a new trend consistently falls outside the existing range, the envelope is updated to include it.

This way, you see both the best-evidence midline and the credible extremes.
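In code, both outputs reduce to a weighted average and a running min/max per year. A minimal, self-contained sketch (assuming each source arrives as a weight plus a {year: value} series; the function name is ours for illustration):

from collections import defaultdict

def blend_projections(
    sources: list[tuple[float, dict[int, float]]],
) -> dict[int, tuple[float, float, float]]:
    """Blend weighted yearly series into {year: (vpmin, vpmid, vpmax)}.

    `sources` is a list of (weight, {year: value}) pairs. vpmid is the
    pedigree-weighted average; vpmin/vpmax form the bounding envelope.
    """
    per_year = defaultdict(list)
    for weight, series in sources:
        for year, value in series.items():
            per_year[year].append((weight, value))

    blended = {}
    for year, pairs in sorted(per_year.items()):
        total_weight = sum(w for w, _ in pairs)
        vpmid = sum(w * v for w, v in pairs) / total_weight
        values = [v for _, v in pairs]
        blended[year] = (min(values), vpmid, max(values))
    return blended

As a worked example: blending a government series (1.0, {2030: 120.0}) with a low-pedigree deck (0.2, {2030: 180.0}) gives a 2030 mid of (1.0 × 120 + 0.2 × 180) / 1.2 = 130, pulled firmly toward the stronger source, while the envelope still records the full 120-180 spread.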

Why This Matters

For users, the value of pedigree scoring is confidence. You don’t need to know the internals of every consultancy model or NGO scenario. You can trust that our mid-projection systematically prioritizes:

  • Governments and intergovernmental bodies over trade associations.

  • Transparent, auditable methods over black-box claims.

  • Recent data over outdated projections.

  • Sources that acknowledge uncertainty over those that pretend to have none.

In short: your projections are weighted toward the best available evidence, without discarding the wider range of published futures.

Closing the Loop

Finally, every decision is logged. When a new source updates the mid, min, or max, the change is time-stamped and appended to a changelog. This ensures full version control, traceability, and auditability.
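Conceptually, each changelog entry is just a time-stamped record of what moved and why. A minimal sketch using an append-only JSON-lines file (the field names are illustrative, not our exact schema):

import json
from datetime import datetime, timezone

def log_change(changelog_path: str, year: int, field: str,
               old_value: float, new_value: float, source: str) -> None:
    """Append one time-stamped change record to an append-only changelog."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "year": year,
        "field": field,            # "vpmid", "vpmin", or "vpmax"
        "old_value": old_value,
        "new_value": new_value,
        "triggered_by": source,    # the new trend that moved the number
    }
    with open(changelog_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")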

In Conclusion

Projecting the future is always uncertain. But uncertainty does not mean arbitrariness. By scoring every trend against Authority, Transparency, Recency, and Confidence—and by blending them with a clear, mathematical weighting system—Viable Pathway gives you projections you can trust: rigorous, balanced, and audit-ready.

Our promise is simple: evidence-based midlines, credible ranges, and full transparency on how we got there.