
SportingCP.ai: How It Works

SportingCP.ai uses real match data, team stats, and historical performance to estimate win probabilities and expected goals for every Sporting CP game. A custom machine-learning model processes recent form, xG trends, and opponent strength to simulate outcomes before kickoff. Results are continuously updated as new data comes in, keeping predictions accurate and transparent. Everything you see on the site, from match odds to model confidence, comes straight from this automated system built specifically for Sporting CP.

1. Data foundation

Before every fixture we pull together a scouting sheet so the model understands context. Each match becomes a row packed with engineered signals about team quality, momentum, and calendar pressure.

  • Match history: Historical league and cup fixtures with final scores, expected goals, kickoff timestamps, and market odds anchor the supervised labels.
  • Team strength: Elo-style ratings adjust after every result to encode opponent quality and a calibrated home-advantage term.
  • Form & fatigue: Rolling windows over the previous 5–10 matches track points, goal difference, xG balance, and rest days.
  • Feature hygiene: Missing inputs fall back to neutral defaults so the model never encounters NaNs or infinities during training or inference.
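
To make the bullets above concrete, here is a minimal sketch of the two workhorse transforms, assuming per-team match logs with columns such as `points`, `goal_diff`, `xg_for`, and `xg_against`; the constants and column names are illustrative, not the production schema.

```python
import pandas as pd

K = 24               # illustrative Elo update speed (assumption, not the production value)
HOME_ADVANTAGE = 60  # illustrative home-advantage term in Elo points

def update_elo(home_elo: float, away_elo: float, home_goals: int, away_goals: int):
    """Shift both ratings toward the observed result (win=1, draw=0.5, loss=0)."""
    expected_home = 1.0 / (1.0 + 10 ** ((away_elo - (home_elo + HOME_ADVANTAGE)) / 400))
    actual_home = 1.0 if home_goals > away_goals else 0.5 if home_goals == away_goals else 0.0
    delta = K * (actual_home - expected_home)
    return home_elo + delta, away_elo - delta

def rolling_form_features(team_matches: pd.DataFrame, window: int = 5) -> dict:
    """Form signals over the last `window` completed matches before the upcoming fixture.
    Expects only pre-fixture rows, so the fixture never sees its own result."""
    recent = team_matches.sort_values("kickoff").tail(window)
    if recent.empty:
        # Feature hygiene: missing history falls back to neutral defaults.
        return {"points_avg": 0.0, "goal_diff_avg": 0.0, "xg_balance_avg": 0.0}
    return {
        "points_avg": float(recent["points"].mean()),
        "goal_diff_avg": float(recent["goal_diff"].mean()),
        "xg_balance_avg": float((recent["xg_for"] - recent["xg_against"]).mean()),
    }
```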

2. Outcome probabilities

To answer "Who is most likely to win?" we lean on an XGBoost classifier. It ingests the pre-match feature vector and produces a three-way split for home win, draw, and away win that balances explainability with performance.

  • Training signal: Historical fixtures are labelled as away win (0), draw (1), or home win (2). Stratified splits hold back data for validation and out-of-time testing.
  • Boosted trees: Each tree refines the logits for one outcome class, and ensembling hundreds of shallow trees captures interactions such as "high Elo gap with short rest".
  • Metrics: Log-loss and Brier score measure calibration, while accuracy and macro F1 track directional performance.
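
A minimal training sketch under those assumptions, with placeholder hyperparameters rather than the production configuration; `X` is the engineered feature matrix and `y` holds the 0/1/2 labels described above.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, log_loss

def train_outcome_model(X: np.ndarray, y: np.ndarray) -> xgb.XGBClassifier:
    """Fit a three-way (away=0, draw=1, home=2) gradient-boosted classifier."""
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )
    model = xgb.XGBClassifier(
        objective="multi:softprob",
        n_estimators=400,    # hundreds of shallow trees
        max_depth=4,
        learning_rate=0.05,
        eval_metric="mlogloss",
    )
    model.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)

    proba = model.predict_proba(X_val)  # columns: away, draw, home
    print("validation log-loss:", log_loss(y_val, proba))
    print("validation accuracy:", accuracy_score(y_val, proba.argmax(axis=1)))
    return model
```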

3. Expected-goals forecast

We also forecast the total expected goals so fans know the likely tempo. A sibling regression head uses the same features to estimate combined xG that downstream tools can price or simulate.

  • Separate model: A dedicated XGBoost regressor learns to predict the total xG target using the same engineered features.
  • Why xG: xG smooths the volatility of scorelines and captures shot quality, which helps the frontend explain confidence in the prediction even when a match ends unexpectedly.
  • Evaluation: Mean absolute error and R² monitor drift, and the output feeds the API alongside the outcome probabilities.
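
The regression head follows the same pattern; again the hyperparameters are placeholders and `total_xg` is the combined home-plus-away xG target.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

def train_total_xg_model(X: np.ndarray, total_xg: np.ndarray) -> xgb.XGBRegressor:
    """Fit the sibling regression head that predicts combined expected goals."""
    X_train, X_val, y_train, y_val = train_test_split(X, total_xg, test_size=0.2, random_state=42)
    model = xgb.XGBRegressor(
        objective="reg:squarederror",
        n_estimators=400,
        max_depth=4,
        learning_rate=0.05,
    )
    model.fit(X_train, y_train)

    preds = model.predict(X_val)
    print("MAE:", mean_absolute_error(y_val, preds))   # drift monitor
    print("R^2:", r2_score(y_val, preds))
    return model
```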

4. Calibration diagnostics

Raw model scores are rarely perfectly calibrated, so we adjust them before publishing. Five-fold out-of-fold predictions provide held-out targets for fitting the calibrators, temperature scaling fine-tunes overall confidence, and the reliability chart shows how closely the published percentages track reality.

Calibration curves comparing predicted home, draw, and away probabilities with observed outcomes.
Bins along the x-axis represent predicted probabilities for each outcome; the vertical bars show how many historical fixtures fall into each bucket. The diagonal reference line indicates perfect calibration, and each curve shows how closely the model tracks reality after isotonic regression and temperature scaling.

How to read the chart

  1. Start with the diagonal reference line: when a curve sits close to it, the published probability matched what actually happened in that band.
  2. Points above the line mean the model was cautious; points below mean it was too confident.
  3. Taller histogram bars show more matches in that probability range, so those portions of the curve carry more weight.
  4. The shaded bar marks the observed frequency and the dark marker shows the average prediction, making any gap easy to spot.
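
For readers who want to reproduce a panel like the ones below, this is a rough sketch of the binning step: group fixtures by predicted probability, then compare the mean prediction and the observed frequency in each bucket (bin edges and names are illustrative).

```python
import numpy as np
import pandas as pd

def reliability_bins(pred_prob: np.ndarray, outcome: np.ndarray, n_bins: int = 10) -> pd.DataFrame:
    """Per-bin mean predicted probability, observed frequency, and sample count
    for one outcome (e.g. home win), mirroring a single calibration panel."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_idx = np.clip(np.digitize(pred_prob, edges) - 1, 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = bin_idx == b
        if not mask.any():
            continue  # empty buckets are simply skipped, as in the panels below
        rows.append({
            "bin": f"{edges[b]:.0%}–{edges[b + 1]:.0%}",
            "mean_predicted": float(pred_prob[mask].mean()),
            "observed_frequency": float(outcome[mask].mean()),
            "n": int(mask.sum()),
        })
    return pd.DataFrame(rows)
```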
Away Calibration (predicted bin, observed frequency, sample count)
  • 0–10%: 0% observed, n=57
  • 10–20%: 0% observed, n=5
  • 30–40%: 38% observed, n=90
  • 50–60%: 50% observed, n=2
  • 70–80%: 100% observed, n=2
  • 80–90%: 100% observed, n=6
  • 90–100%: 100% observed, n=27

Draw Calibration (predicted bin, observed frequency, sample count)
  • 0–10%: 3% observed, n=80
  • 10–20%: 40% observed, n=5
  • 20–30%: 17% observed, n=90
  • 70–80%: 100% observed, n=5
  • 80–90%: 100% observed, n=8
  • 90–100%: 100% observed, n=1

Home Calibration (predicted bin, observed frequency, sample count)
  • 0–10%: 0% observed, n=43
  • 10–20%: 0% observed, n=5
  • 20–30%: 0% observed, n=2
  • 30–40%: 0% observed, n=1
  • 40–50%: 44% observed, n=88
  • 50–60%: 0% observed, n=1
  • 60–70%: 0% observed, n=1
  • 70–80%: 100% observed, n=1
  • 80–90%: 100% observed, n=15
  • 90–100%: 97% observed, n=32

In the interactive chart, the shaded bar marks the observed frequency and the dark marker shows the mean predicted probability in each bin.

  1. Isotonic regression: Non-parametric smoothing enforces monotonicity, ensuring equally likely fixtures share similar probabilities.
  2. Temperature scaling: A final scalar adjustment tightens or spreads the distribution so out-of-sample log-loss and Brier score align with the validation set.
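
A compact sketch of the two-step recipe, assuming per-class isotonic maps fitted on the out-of-fold probabilities and a single temperature chosen by minimising validation log-loss; the production pipeline may renormalise and tune differently.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import log_loss

def fit_isotonic_per_class(oof_proba: np.ndarray, y: np.ndarray) -> list[IsotonicRegression]:
    """One monotonic map per outcome class, fitted on out-of-fold probabilities."""
    return [
        IsotonicRegression(out_of_bounds="clip").fit(oof_proba[:, k], (y == k).astype(float))
        for k in range(oof_proba.shape[1])
    ]

def apply_calibration(proba: np.ndarray, isos: list[IsotonicRegression], temperature: float) -> np.ndarray:
    """Isotonic smoothing, then temperature scaling on the renormalised probabilities."""
    smoothed = np.column_stack([iso.predict(proba[:, k]) for k, iso in enumerate(isos)])
    smoothed = np.clip(smoothed, 1e-6, 1.0)
    smoothed /= smoothed.sum(axis=1, keepdims=True)
    scaled = np.exp(np.log(smoothed) / temperature)
    return scaled / scaled.sum(axis=1, keepdims=True)

def fit_temperature(proba: np.ndarray, y: np.ndarray, isos: list[IsotonicRegression]) -> float:
    """Pick the scalar temperature that minimises validation log-loss."""
    result = minimize_scalar(
        lambda t: log_loss(y, apply_calibration(proba, isos, t)),
        bounds=(0.5, 3.0), method="bounded",
    )
    return float(result.x)
```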

5. Backtesting coverage

We replay recent fixtures through the pipeline to make sure live predictions stay sharp. Automated backtests compare the model's picks with actual outcomes and surface accuracy, Brier score, and per-outcome precision.

  • Latest snapshot: In the latest 189 labelled matches the classifier hit 72.0% accuracy. Its Brier score of 0.334 beat the baseline's 0.625, a 46.6% lift.
  • Precision by outcome: Away 97.3%, draw 100.0%, and home 62.3% across the same window.
  • Operational use: Automations refresh these snapshots so the dashboard and docs always reflect the latest evidence.
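
The snapshot numbers above come from straightforward aggregations over a labelled window; here is a sketch of how they can be computed (the exact multi-class Brier formulation used in production is an assumption).

```python
import numpy as np
from sklearn.metrics import precision_score

def backtest_metrics(proba: np.ndarray, y_true: np.ndarray) -> dict:
    """Accuracy, multi-class Brier score, and per-outcome precision for a
    window of labelled fixtures (away=0, draw=1, home=2)."""
    picks = proba.argmax(axis=1)
    one_hot = np.eye(proba.shape[1])[y_true]
    brier = float(np.mean(np.sum((proba - one_hot) ** 2, axis=1)))  # summed over classes
    precision = precision_score(y_true, picks, labels=[0, 1, 2], average=None, zero_division=0)
    return {
        "accuracy": float((picks == y_true).mean()),
        "brier": brier,
        "precision_away": float(precision[0]),
        "precision_draw": float(precision[1]),
        "precision_home": float(precision[2]),
    }
```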

Backtest downloads & quick-look charts

Latest snapshot generated Oct 31, 2025, 9:19 PM for the trailing 50 fixtures (page updated Oct 31, 2025, 9:13 PM).

  • Top pick accuracy (highest-confidence prediction each matchday): 92%, 46 / 50 correct.
  • Win pick accuracy (matches where the model sided with a winner): 91%, 39 / 43 correct.
  • Draw pick accuracy (fixtures flagged as likely stalemates): 100%, 7 / 7 correct.

Actual \ Predicted    Away        Draw        Home
Away                  36 (51%)     0 (0%)     34 (49%)
Draw                   1 (3%)     14 (42%)    18 (55%)
Home                   0 (0%)      0 (0%)     86 (100%)

Confusion matrix for the trailing 189 labelled matches. Rows show actual outcomes and columns show model predictions, both in the order Away, Draw, Home.

6. Serving architecture

Predictions live behind a FastAPI service that keeps a stable contract for the web app and automation jobs.

  • Inference API: The service loads the latest model checkpoints, fetches feature rows, applies calibration steps, and returns probabilities plus xG in a single payload.
  • Freshness: Scheduled tasks ingest results, update ratings, and rescore upcoming fixtures so fans always see up-to-date context.
  • Frontend integration: The Next.js app renders outcome distributions, xG expectations, and historical validation charts directly from the API responses.
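
A stripped-down illustration of what such an endpoint can look like; the route, payload fields, and helpers (`load_features`, `score`) are hypothetical placeholders, not the live API contract.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Prediction(BaseModel):
    fixture_id: int
    away_win: float
    draw: float
    home_win: float
    total_xg: float

def load_features(fixture_id: int):
    """Hypothetical helper: fetch the engineered pre-match feature row."""
    ...

def score(features):
    """Hypothetical helper: run both model heads plus calibration and
    return ((p_away, p_draw, p_home), total_xg)."""
    ...

@app.get("/predictions/{fixture_id}", response_model=Prediction)
def predict(fixture_id: int) -> Prediction:
    features = load_features(fixture_id)
    if features is None:
        raise HTTPException(status_code=404, detail="fixture not found")
    (p_away, p_draw, p_home), total_xg = score(features)
    return Prediction(
        fixture_id=fixture_id,
        away_win=p_away,
        draw=p_draw,
        home_win=p_home,
        total_xg=total_xg,
    )
```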

7. Stewardship and monitoring

We keep the predictor honest by monitoring uncertainty, retraining when drift appears, and allowing controlled overrides when needed.

  • Retraining cadence: The pipeline retrains after significant data additions, such as mid-season and end-of-season checkpoints, or when calibration drift exceeds thresholds.
  • Quality metrics: Accuracy, macro F1, log-loss, Brier score, MAE, and R² are tracked to surface regressions before deployment.
  • Controlled overrides: Operators can apply bounded temperature adjustments for short-term interventions without redeploying the model.
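
One way a bounded override can be expressed, as a sketch: the operator's multiplier is clamped to guard rails before being applied on top of the calibrated temperature (the bounds shown are illustrative, not the production limits).

```python
import numpy as np

MIN_TEMPERATURE, MAX_TEMPERATURE = 0.8, 1.5  # illustrative guard rails

def apply_override(proba: np.ndarray, base_temperature: float, override: float) -> np.ndarray:
    """Re-temper calibrated probabilities with a clamped operator override."""
    temperature = float(np.clip(base_temperature * override, MIN_TEMPERATURE, MAX_TEMPERATURE))
    scaled = np.exp(np.log(np.clip(proba, 1e-6, 1.0)) / temperature)
    return scaled / scaled.sum(axis=1, keepdims=True)
```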

8. Market analysis and model validation

We benchmark the model against trusted betting markets to measure predictive edge and catch blind spots.

Probability Delta Analysis


Model vs market probability differences for Sporting matches across different outcome types, with bootstrap confidence bands and opponent strength indicators.

Log-loss Edge Tracking


Match-by-match log-loss advantage of the model vs betting markets, with 180-day rolling average. Values below zero indicate model outperformance.

  • Probability delta analysis: We line up the model's home, draw, and away probabilities with Bet365 closing odds across Sporting fixtures. Positive deltas mean we are more bullish than the market; negative deltas mean the market is more bullish than we are. Rolling 180-day bootstrap bands highlight where the gap is statistically meaningful.
  • Log-loss edge tracking: Game-by-game log-loss comparisons quantify prediction quality: scores below zero mean the model beat market expectations, and a 180-day rolling average smooths noise to reveal persistent edges or weaknesses (see the sketch after this list).
  • Opponent strength analysis: We group opponents by Elo quartile (top, middle, bottom) to see how the model behaves against different strength tiers and to catch systematic bias.
  • Multi-competition validation: Primeira Liga, Europa League, and Champions League fixtures all feed the analysis so we know the model travels well across competitions.
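
As a rough sketch of the log-loss edge calculation referenced above, assuming de-margined market probabilities are available alongside the model's for each fixture:

```python
import numpy as np
import pandas as pd

def log_loss_edge(model_proba: np.ndarray, market_proba: np.ndarray,
                  y_true: np.ndarray, kickoff: pd.Series) -> pd.DataFrame:
    """Per-match log-loss difference (model minus market); negative = model better.
    `market_proba` is assumed to already have the bookmaker margin removed."""
    idx = np.arange(len(y_true))
    model_ll = -np.log(np.clip(model_proba[idx, y_true], 1e-12, 1.0))
    market_ll = -np.log(np.clip(market_proba[idx, y_true], 1e-12, 1.0))
    edge = pd.DataFrame({"kickoff": pd.to_datetime(kickoff), "edge": model_ll - market_ll})
    edge = edge.sort_values("kickoff").set_index("kickoff")
    edge["rolling_180d"] = edge["edge"].rolling("180D").mean()  # smoothed edge over time
    return edge.reset_index()
```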

9. Frequently asked questions

What model architecture powers the predictor?
Under the hood we rely on gradient-boosted tree models. Production pairs two XGBoost heads trained on the same engineered feature store: a multi-class classifier for the 1X2 outcome distribution and a regression head for expected goals. Gradient-boosted trees stay performant on structured football data, capture non-linear interactions, and keep inference latency low enough for real-time product surfaces.
How do you calibrate the published probabilities?
We keep published probabilities truthful by fitting an isotonic regressor on out-of-fold predictions and then applying temperature scaling. The combination smooths logits and corrects for over- or under-confidence before fans see them. Reliability curves are monitored continuously; if they drift we adjust the scaler or retrain.
Where do the inputs and labels come from?
We blend trustworthy feeds for fixtures, betting odds, expected goals, and squad information to build inputs and labels. Historical results power Elo-style team ratings, while rolling windows measure form, momentum, and rest. Clean-room pipelines enforce quality checks so the downstream model never trains on missing or corrupted records.