The five-stage pipeline
Every signal AgoraIQ knows about passes through the same five stages. The pipeline is deterministic and fully automated — we don't curate, we don't editorialize, and we don't remove losing signals.
- Ingest — capture the raw post from a monitored public channel
- Hash — serialize the signal and commit it to an append-only chain
- Track — match against live exchange candles, minute by minute
- Resolve — compute the R-multiple outcome when TP/SL triggers
- Score — update the provider's rolling composite IQ
The rest of this page details each stage, including the edge cases we've had to decide on. Where we've made a judgment call, we've marked it so you can disagree.
Capture the raw post, exactly as published.
We monitor public channels on Telegram, Discord, and X for messages that match our signal-detection grammar. The moment a matching message is posted, we record the raw post exactly as published, along with its ingest timestamp.
What gets rejected at ingest
A message is dropped before it reaches the hash stage if:
- The parser can't extract a pair, a direction, and at least one of SL or TP
- The entry is more than 5% from the current spot price (likely a limit order already past)
- The SL is on the wrong side of entry for the stated direction (malformed)
- TP ≤ entry for a long, or TP ≥ entry for a short (inverted)
- The same (channel, pair, entry, direction) was posted within the last 30 minutes (deduplication)
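The rejection rules above can be sketched as a single validation pass. This is an illustration, not our production parser; the field names and return strings are assumptions:

```python
def reject_reason(sig, spot_price, recent_keys):
    """Return a rejection reason, or None if the signal passes ingest.

    Illustrative sketch of the ingest rules; `sig` is a parsed dict,
    `recent_keys` holds the (channel, pair, entry, direction) tuples
    seen in the last 30 minutes.
    """
    entry, sl = sig.get("entry"), sig.get("stop_loss")
    tps = sig.get("take_profit") or []
    if not sig.get("pair") or not sig.get("direction") or not (sl or tps):
        return "missing pair, direction, or SL/TP"
    if abs(entry - spot_price) / spot_price > 0.05:
        return "entry more than 5% from spot"
    is_long = sig["direction"] == "long"
    if sl is not None and (sl >= entry if is_long else sl <= entry):
        return "SL on wrong side of entry"
    if any((tp <= entry if is_long else tp >= entry) for tp in tps):
        return "inverted TP"
    if (sig["channel"], sig["pair"], entry, sig["direction"]) in recent_keys:
        return "duplicate within 30 minutes"
    return None
```

A signal that survives all five checks moves on to the hash stage.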
We do not retroactively accept "updates" to an open signal. If a provider posts "move SL to breakeven", that's logged as a note but does not change the original hashed record. A signal is judged as it was originally called.
Commit the signal to an append-only chain.
Once parsed, the signal is serialized into a canonical JSON form (keys sorted, no whitespace) and hashed with SHA-256. The hash, together with the previous record's hash, forms a linked chain — tampering with any historical record invalidates every record posted after it.
```python
import json
from hashlib import sha256

# Input: parsed signal
signal = {
    "provider_id": "signalcity_pro",
    "platform": "telegram",
    "pair": "BTCUSDT",
    "direction": "long",
    "entry": 67240.00,
    "stop_loss": 65900.00,
    "take_profit": [68800.00, 70500.00],
    "leverage": None,
    "posted_at": "2026-04-19T11:32:07.412Z",
}

# Canonical JSON (sorted keys, no whitespace)
canonical = json.dumps(signal, sort_keys=True, separators=(',', ':'))

# Chain link (prev_seq and prev_hash come from the last committed record)
record = {
    "seq": prev_seq + 1,
    "signal": canonical,
    "prev_hash": prev_hash,
    "hash": sha256((canonical + prev_hash).encode()).hexdigest(),
}

# Committed to Postgres (signals_chain) + mirrored to DO Spaces hourly
```
Why a linked chain and not just a hash per signal?
A standalone hash only proves that a specific record hasn't been altered. A linked chain proves that the entire ordering hasn't been altered. If we wanted to quietly delete a losing signal, we'd have to rehash every record after it — and the mirror copy on DigitalOcean Spaces would immediately disagree with the live DB.
Every signal on the Live Proof page shows its seq, hash, and prev_hash. Concatenate the canonical form with the prev_hash, SHA-256 it, and confirm the hash we published matches. A small CLI verifier is on GitHub: agoraiq/chain-verify.
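That check takes only a few lines. A minimal sketch using a toy record, not real chain data (the published agoraiq/chain-verify CLI is the reference implementation):

```python
import hashlib
import json

def verify_record(record):
    """Re-derive a record's hash from its canonical signal + prev_hash."""
    digest = hashlib.sha256(
        (record["signal"] + record["prev_hash"]).encode()
    ).hexdigest()
    return digest == record["hash"]

# Toy record for illustration only
canonical = json.dumps({"pair": "BTCUSDT"}, sort_keys=True, separators=(",", ":"))
prev_hash = "0" * 64
record = {
    "signal": canonical,
    "prev_hash": prev_hash,
    "hash": hashlib.sha256((canonical + prev_hash).encode()).hexdigest(),
}
verify_record(record)   # True for an untampered record
```

Any edit to the signal body or the prev_hash changes the derived digest, so the published hash no longer matches.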
Match against live exchange candles.
Once hashed, the signal becomes active and enters the tracker. Every minute, the tracker pulls the 1m candle for the signal's pair from the exchange specified (or Binance as default), and checks whether any of the signal's levels were touched during that candle's high/low range.
If a signal's SL and a TP level are both touched within the same 1-minute candle, the candle's high/low range cannot tell us which was hit first, so we resolve it as an SL hit. A signal that touches neither level before its tracking window ends is marked expired and excluded from scoring. The SL-first tiebreak is deliberately conservative — it assumes the worst outcome for the provider. Providers who believe a specific signal should have been judged differently can submit the exchange trade ticks for review.
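The per-candle check under these rules is small. A sketch for a long signal, with the SL-first tiebreak baked in (a short mirrors the comparisons; names are illustrative):

```python
def resolve_candle_long(candle, stop_loss, take_profits):
    """Check one 1m candle's high/low range against a long signal's levels.

    Returns 'sl', ('tp', i) for the highest TP level reached, or None.
    If SL and a TP both fall inside the same candle, we can't know the
    intra-candle order, so the SL-first tiebreak applies.
    """
    if candle["low"] <= stop_loss:
        return "sl"  # conservative: SL wins same-candle ties
    hit = [i for i, tp in enumerate(take_profits) if candle["high"] >= tp]
    return ("tp", max(hit)) if hit else None
```

Run once per 1m candle until the function returns a non-None result or the tracking window expires.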
Compute the R-multiple outcome.
R is the distance from entry to SL, expressed as the unit of risk. A +2R trade returned twice the risked amount; a −1R trade lost the full risk. All outcomes are reported in R so providers with tight stops and providers with wide stops are comparable.
```python
# For LONG (mirror logic for SHORT)
R_unit = entry - stop_loss

# Full TP hit
if tp2_hit_first:
    outcome = "win_tp2"
    R = (tp2 - entry) / R_unit                      # e.g. +2.4R

# Partial: TP1 closed 50%, remainder to BE
if tp1_hit_then_sl_be:
    outcome = "partial_be"
    R = 0.5 * ((tp1 - entry) / R_unit) + 0.5 * 0    # e.g. +0.5R

if sl_hit_direct:
    outcome = "loss"
    R = -1.0

if timeout:
    outcome = "expired"
    R = None    # excluded from scoring
```
How we handle partial closes
Most signal providers explicitly or implicitly close a portion at TP1 and move SL to breakeven. We model this with a fixed 50/50 assumption — 50% of position at TP1, 50% runs toward TP2 with SL at BE. It's not perfect, but it's consistent across providers and matches the most common stated rule in signal groups.
If a provider explicitly states a different close schedule (e.g., "30% at TP1, 30% at TP2, 40% at TP3"), we'll honor it if declared before the signal resolves. Retroactive schedule changes are ignored.
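Blending a declared schedule into one R-multiple works the same way as the 50/50 default. A sketch, assuming the schedule arrives as a list of close fractions:

```python
def blended_r(schedule, r_legs):
    """Weight each leg's R-multiple by its declared close fraction.

    schedule: e.g. [0.5, 0.5] (the default) or [0.3, 0.3, 0.4]
    r_legs:   R achieved by each leg, e.g. [tp1_R, 0.0] when the
              remainder exits at breakeven.
    """
    assert abs(sum(schedule) - 1.0) < 1e-9, "close fractions must sum to 1"
    return sum(f * r for f, r in zip(schedule, r_legs))

# Default 50/50 with TP1 at +1R and the remainder stopped at breakeven:
blended_r([0.5, 0.5], [1.0, 0.0])   # +0.5R
```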
Fees and slippage
R is computed on mid-candle pricing without fee or slippage adjustments. This makes the number a pure strategy metric, not an execution-cost estimate. If you act on these signals with a 10bp taker fee and 5bp slippage, your realized return will be lower than the quoted R by roughly 0.05–0.15R per trade depending on your SL distance.
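That drag estimate follows from dividing round-trip execution costs by the SL distance. A sketch using the example figures above (10bp fee and 5bp slippage per side are illustrative, not published constants):

```python
def cost_in_r(entry, stop_loss, fee_bp=10, slippage_bp=5):
    """Estimate round-trip execution cost as a fraction of one R.

    Assumes fee and slippage are paid on both entry and exit.
    """
    risk_frac = abs(entry - stop_loss) / entry       # SL distance, % of entry
    cost_frac = 2 * (fee_bp + slippage_bp) / 10_000  # round trip, % of notional
    return cost_frac / risk_frac

# With the example signal above (entry 67240, SL 65900, ~2% stop):
cost_in_r(67240.00, 65900.00)   # ≈ 0.15R of drag per trade
```

Tighter stops mean larger drag in R terms, which is why the quoted range varies with SL distance.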
The composite IQ.
The IQ score blends four components. Each is normalized to a 0–1 range, weighted, and summed to a 0–100 composite. Scores are recomputed nightly on a rolling 30-day window.
| Component | Measures | Weight |
|---|---|---|
| Win rate | (wins + 0.5 × partials) / total resolved | 35 |
| Avg R | Mean R-multiple, normalized to [−2, +3] band | 30 |
| Consistency | 1 − stddev(rolling 7-day WR over the last 30 days) | 20 |
| Sample weight | min(1.0, signals_resolved / 50) | 15 |
```python
signals_total = 112
wins          = 52    # TP2 hit: 31 · TP1+BE: 21
losses        = 47    # SL direct: 47
partials      = 13    # TP1 then SL (BE or partial)

win_rate      = (wins + 0.5 * partials) / signals_total   # 0.522
avg_R         = sum(R_i) / N_resolved                     # +1.41R
consistency   = 1.0 - std(rolling_7d_WR)                  # 0.72
sample_weight = min(1.0, signals_total / 50)              # 1.00

# Normalization for avg_R: clip to [-2, +3], then scale to [0, 1]
avg_R_norm = (clip(avg_R, -2, 3) + 2) / 5                 # 0.682

iq = 35 * win_rate + 30 * avg_R_norm + 20 * consistency + 15 * sample_weight
iq = 68.1   # → rank #2 of 47 providers tracked
```
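The consistency term is the only component that needs a time series. A sketch of how a rolling 7-day win rate could feed it (the daily counts are illustrative, and the exact rolling scheme is an assumption):

```python
from statistics import pstdev

def consistency(daily_wins, daily_totals, window=7):
    """1 − stddev of the rolling `window`-day win rate over the period."""
    wrs = []
    for i in range(window, len(daily_totals) + 1):
        wins = sum(daily_wins[i - window:i])
        total = sum(daily_totals[i - window:i])
        if total:
            wrs.append(wins / total)
    return 1.0 - pstdev(wrs)

# 30 days of illustrative counts: a perfectly steady provider scores 1.0
consistency([2] * 30, [4] * 30)   # 1.0 (the rolling win rate never moves)
```

A provider whose win rate swings day to day gets a larger stddev and therefore a lower consistency score.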
Why these weights?
The weighting is opinionated. The short answer: win rate and expectancy matter most; consistency prevents a single home-run trade from dominating a score; and sample weight prevents a provider with three lucky calls from topping the leaderboard.
- 35 for win rate — it's what most subscribers actually experience emotionally
- 30 for avg R — but a 40% WR at +2R is still better than 70% at +0.3R
- 20 for consistency — steady is harder than lucky
- 15 for sample weight — providers below 50 resolved signals in 30d are capped
A provider who only posts signals in obviously favorable conditions can game consistency. We partially offset this with the sample-weight cap (you can't stay at full weight while only posting twice a week), but it's a real bias we're actively working on. The next revision will add a "market regime balance" penalty.
Eight states a signal can be in.
Every signal in the system is always in exactly one of these states. State transitions are logged and part of the audit trail.
Only RESOLVED signals contribute to scoring. EXPIRED, REJECTED, and DISPUTED are visible on the provider's page but excluded from the IQ calculation.
The limits of this methodology.
We've tried to be clear-eyed about what this scoring system does and doesn't tell you. Things we explicitly do not claim:
- We don't claim signals are profitable for you. R-multiples ignore fees, funding, slippage, and your own execution latency. A +1.4R average can become −0.1R after real costs.
- We don't adjust for market regime. A provider who posted mostly longs during an uptrend will score higher than one who posted balanced signals through chop — by design, because that's what happened.
- We don't evaluate private signals. If a provider sends calls only to paid subscribers, we can't track them. The leaderboard is a ranking of public-call performance.
- We don't predict future performance. A provider at IQ 78 today may be at IQ 52 in six weeks. Past performance is not indicative of future results. This is statistically true of almost every signal provider we've tracked.
- We don't execute trades. AgoraIQ is an intelligence layer, not a trading bot. No custody, no API keys, no execution risk on our side.
How to verify anything on this site.
Three layers of verification are available to any user — subscribed or not:
1. Chain verification
The full hash chain is exportable as JSON. Point the open-source verifier at it and re-derive every hash. If it passes, no record has been altered or reordered.
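The loop the verifier runs is roughly this (a sketch, not the agoraiq/chain-verify source):

```python
import hashlib

def verify_chain(records):
    """Re-derive every hash and check prev_hash linkage, in order."""
    prev_hash = records[0]["prev_hash"]  # genesis prev_hash as exported
    for rec in records:
        if rec["prev_hash"] != prev_hash:
            return False  # reordered or missing record
        derived = hashlib.sha256(
            (rec["signal"] + rec["prev_hash"]).encode()
        ).hexdigest()
        if derived != rec["hash"]:
            return False  # altered record
        prev_hash = rec["hash"]
    return True
```

Because each hash feeds the next record's prev_hash, altering or deleting any record fails verification for everything after it.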
2. Per-signal source
Each signal on the Live Proof page links to its source URL (the original Telegram/Discord/X post) where still public. Hover any signal to see the ingest timestamp, server latency, and candle source.
3. Candle replay
For any resolved signal, we publish the minute-bar that triggered resolution, plus the prev and next bars. Anyone can pull the same bars from the exchange API and confirm the outcome we reported.
If you believe a signal was mis-resolved, email audit@agoraiq.net with the signal hash. Every dispute goes into the DISPUTED state, is manually re-reviewed within 48 hours, and the outcome is published — whether we got it right or wrong.
What changed, and when.
Each methodology version is tagged. Historical scores are recomputed when a new version ships so that the leaderboard always reflects the current methodology. Per-version archives are available on request.