How this differs from a chart scanner
A scanner typically checks price patterns and indicators on a single chart. Our engine adds derivatives positioning, macro regime context, geopolitical data, event catalysts, and outcome-based learning to decide whether a technical setup meets the quality threshold.
Why we enforce a minimum 1.5x Risk-to-Reward
With a 1.5x risk-to-reward ratio, you need to win only 40% of trades to break even. The engine rejects every setup below its timeframe's minimum, with higher minimums enforced on shorter timeframes. This isn't a suggestion. It's a hard gate.
Min R/R: 1.2–1.8x
Break-even WR: ~40%
Targets from: real S/R levels
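The break-even arithmetic follows directly from expected value. A minimal sketch (the function name is illustrative, not taken from the engine):

```python
def breakeven_win_rate(rr: float) -> float:
    """Win rate at which expected PnL is zero for a given risk-to-reward ratio.

    Expected PnL per unit risked: w * rr - (1 - w).
    Setting this to zero gives w = 1 / (1 + rr).
    """
    return 1.0 / (1.0 + rr)
```

At 1.5x R/R this gives exactly 40%; at 1.2x it is roughly 45% and at 1.8x roughly 36%, consistent with the per-timeframe range above.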
Each layer adjusts or rejects the signal
Technical analysis sets the base confidence. The remaining layers adjust it up or down, or reject the setup entirely if a critical condition is not met.
Structure, Trend & Momentum
Proprietary S/R detection (not Fibonacci), 3-layer trend scoring (ST/MT/LT), RSI momentum gates, volume confirmation, pattern matching with weak-breakout detection. 4 pattern types, each with timeframe-specific RSI bounds and volume thresholds. This is the foundation — but alone it's not enough.
Base confidence (35–65 range) + 5 rejection gates
Funding Rate, Open Interest & Long/Short Ratio
Real-time perpetual futures data from Binance. When funding rates show the crowd is overleveraged in the same direction as the signal, the engine applies contrarian penalties or hard-gates the signal entirely. Open interest divergence, extreme long/short ratios, and liquidation cascades (OI drops >5% in 4 hours) trigger additional adjustments. This layer catches the trades that look good on a chart but are crowded.
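The contrarian funding logic can be sketched as follows. The threshold values here are purely illustrative assumptions (the real gate boundaries are proprietary, per the IP notice below):

```python
def funding_gate(direction: str, funding_rate: float,
                 hard_gate: float = 0.0008, penalty_zone: float = 0.0003):
    """Contrarian funding-rate gate with ILLUSTRATIVE thresholds.

    Positive funding means longs pay shorts, i.e. the crowd is net long.
    Returns (signal_allowed, confidence_adjustment).
    """
    crowded_same_side = (direction == "LONG" and funding_rate > 0) or \
                        (direction == "SHORT" and funding_rate < 0)
    if not crowded_same_side:
        return True, 0
    magnitude = abs(funding_rate)
    if magnitude >= hard_gate:
        return False, 0   # crowd overleveraged with the signal: hard-gate it
    if magnitude >= penalty_zone:
        return True, -6   # contrarian penalty; the size here is an assumption
    return True, 0
```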
Hard gate (kills signal) OR ±12 combined confidence adjustment
Market Regime Detection + DXY Correlation + Coin Beta
Classifies the macro environment as BULL / NEUTRAL / BEAR using BTC price structure relative to its 200-day moving average, volatility indices, and sentiment gauges. Checks lagged DXY (dollar index) movements with confirmed inverse correlation to crypto. Computes each coin's rolling beta against BTC — high-beta coins get amplified regime adjustments, low-beta coins get dampened ones.
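The price-structure component of regime detection reduces to a comparison against the 200-day moving average. A minimal sketch; the production classifier also weighs volatility indices and sentiment, and the 3% neutral band here is an assumption:

```python
def classify_regime(btc_price: float, btc_sma_200d: float,
                    band: float = 0.03) -> str:
    """Price-structure slice of regime detection (volatility and sentiment
    inputs omitted). `band` is an illustrative neutral zone, not the
    production threshold."""
    ratio = btc_price / btc_sma_200d
    if ratio > 1.0 + band:
        return "BULL"
    if ratio < 1.0 - band:
        return "BEAR"
    return "NEUTRAL"
```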
±12 confidence adjustment (largest single layer)
14+ Indicators + GDELT News Tone + Sanctions Monitoring
Ingests Treasury yields (2Y, 10Y), Fed funds rate, DXY, VIX, S&P 500, gold, oil, Fear & Greed Index, CPI, and FX rates — refreshed continuously from FRED, Alpha Vantage, and exchange APIs. Geopolitical tone computed from thousands of global news sources via GDELT. Sanctions data cross-referenced from OFAC and EU consolidated lists. All compressed into a directional risk score.
±8 confidence adjustment via combined risk score
Token Unlocks, Halvings, Listings, Hard Forks, Airdrops
Per-coin upcoming events aggregated from exchange announcements, halving schedules, and market calendars. A massive token unlock in 3 days penalizes LONG signals on that coin. A confirmed exchange listing boosts them. Impact is pre-computed per coin and applied directionally — LONG and SHORT get different adjustments from the same event.
±8 directional adjustment per coin
Session Liquidity + Day-of-Week + Macro Calendar Proximity
Hourly weights derived from our own outcome data (not generic "best hours" lists). US session gets a bonus, low-liquidity windows get penalized. Weekend signals penalized for gap risk. Signals generated within ±2 hours of high-impact macro releases (CPI, FOMC decisions) are penalized because the volatility makes entry zones unreliable.
±5 confidence adjustment
Online Bayesian Logistic Regression
An 8-feature logistic regression model trained on proxy outcomes — the actual price movement 24 hours after every signal snapshot. Uses online SGD with L2 regularization, meaning it learns continuously from new market data. Prior weights are initialized from the existing confidence formula, then adjusted by empirical evidence. When the ML model disagrees strongly with the base score, confidence is shifted. Safety cap prevents overcorrection at extremes.
±8 confidence adjustment (capped at ±4 near extremes)
Fundamental intelligence layer
The engine doesn't just read price data. It continuously ingests, processes, and acts on fundamental intelligence from 7 primary data sources.
Macro Dashboard
Sources: FRED (Federal Reserve), Alpha Vantage, Open Exchange Rates, Alternative.me
Geopolitical Risk
Sources: GDELT Project (Georgetown Univ.), OpenSanctions, exchange announcements
Crypto Event Catalysts
Sources: Binance announcements, halving schedules, CoinMarketCal API
Analyst Consensus
Sources: RSS feeds, YouTube transcripts, Substack, academic publications
Three intelligence layers working together
A signal tells you what to trade. The Intelligence layer tells you why markets are moving. The Forecast tells you what's coming next. The analyst model tells you whose opinion to weight more.
Intelligence Layer
Real-time macro, geopolitical and crypto-specific event scanning — straight from primary sources. A proprietary correlation engine compresses 14+ indicators into a single market-mood score. When stress rises, the signal engine automatically dampens exposure.
- 14+ macro indicators refreshed continuously
- Geopolitical tone analysis across thousands of news sources
- Crypto-specific events: upgrades, unlocks, airdrops, halvings
- Market mood score factored into every signal's confidence
Logic Forecast
A scenario engine for the 30–90 day window. For each major event, it generates 3–4 probability-weighted scenarios grounded in 50+ curated historical precedents. It also powers the 24-month country-risk forecast across 250 countries.
- Probability-weighted scenarios with plain-English explanations
- Matched against similar past events — not invented from thin air
- Predictions refresh only when underlying context changes
- Every scenario traceable back to its inputs
Analyst Consensus & Reliability
Public views from 60+ respected analysts across 6 domains, each continuously scored by outcome. Consistently right voices carry more weight than confidently wrong ones. The opposite of Twitter: loudness doesn't matter, accuracy does.
- 60+ tracked analysts across 6 domains (RSS, Substack, YouTube)
- Each call scored by outcome — reliability adjusts accordingly
- Ranking by accuracy × conviction, not follower count
- Bayesian Beta posterior — not simple win/loss percentages
250-country geopolitical risk model
A separate engine running a modified PITF (Political Instability Task Force) model: logistic regression on regime type, infant mortality, and neighborhood instability. Walk-forward validated on 2019–2022 holdout. Ground truth from UCDP conflict data, V-Dem democracy indices, and World Bank macro series.
Quantitative methods used in production
These are the statistical and machine learning techniques running live in the engine — not marketing terms, not plans. Each one has code in production, parameters in version control, and outcomes being tracked.
Online Bayesian Logistic Regression
ML Calibration Layer
8-feature logistic model with stochastic gradient descent (SGD) and L2 regularization (ridge penalty). Numerically stable sigmoid with branch x≥0 vs x<0 to avoid overflow. Prior initialized from existing confidence formula coefficients, then updated with each labeled outcome. Proxy outcomes (0.3 weight) and real outcomes (1.0 weight) trained jointly.
The model learns continuously from tracked outcomes, adjusting its weights as new data arrives.
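The stable sigmoid and the online update step can be sketched as below. The learning rate and L2 strength are illustrative placeholders, not the production values:

```python
import math

def stable_sigmoid(x: float) -> float:
    # Branch on the sign of x so exp() never receives a large positive argument
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)

def sgd_step(w, x, y, lr=0.05, l2=0.01, sample_weight=1.0):
    """One online log-loss update with an L2 (ridge) penalty.

    sample_weight: 0.3 for proxy outcomes, 1.0 for real outcomes, per the text.
    lr and l2 are illustrative assumptions.
    """
    p = stable_sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = sample_weight * (p - y)  # d(log-loss)/d(logit)
    return [wi - lr * (grad * xi + l2 * wi) for wi, xi in zip(w, x)]
```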
Newton-Raphson Maximum Likelihood Estimation
Structural Risk Model (PITF refit)
Custom 4×4 Gauss-Jordan matrix inversion for Hessian. Ridge regularization λ=0.5 to prevent coefficient explosion on sparse conflict data. Backtracking line search for step size. Converged in 7 iterations on 2,134 country-year observations with 4.55% base rate.
The same class of optimization used in epidemiological models and political science research, adapted for our conflict-onset predictions.
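For reference, a generic Gauss-Jordan inversion with partial pivoting looks like this; it is a textbook sketch of the technique named above, not the engine's own routine:

```python
def gauss_jordan_inverse(m):
    """Invert a small square matrix via Gauss-Jordan elimination with
    partial pivoting, as used to invert the Hessian in Newton-Raphson."""
    n = len(m)
    # Augment with the identity matrix
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(m)]
    for col in range(n):
        # Partial pivoting: pick the row with the largest pivot magnitude
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        pv = aug[col][col]
        aug[col] = [v / pv for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0.0:
                factor = aug[r][col]
                aug[r] = [v - factor * cv for v, cv in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]
```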
Walk-Forward Cross-Validation
Model validation
Train on 2010–2018 (1,746 observations, 3.32% base rate). Holdout 2019–2022 (776 observations, 6.19% base rate — period shift due to COVID + Myanmar + Sudan). Refit Brier = 0.0587, baseline Brier = 0.0591 (ratio 0.9929). Confirmed that v0 baseline transfers robustly despite base rate doubling.
Walk-forward validation prevents overfitting by testing on data the model has never seen during training.
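The Brier score used throughout these comparisons is just the mean squared error between predicted probabilities and binary outcomes:

```python
def brier_score(probs, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; always predicting 0.5 scores 0.25."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)
```

The 0.9929 ratio quoted above is the refit's Brier divided by the baseline's on the same holdout.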
Page-Hinkley Change Detection (CUSUM)
Drift monitoring
Continuous monitoring of model Brier scores for distributional shift. Parameters: δ=0.005, λ_watch=0.05, λ_alert=0.10, λ_critical=0.20, minObs=20. Drift score = cumulative sum minus running minimum. Three severity levels trigger different responses.
Originally an industrial process control method, applied here to detect when market regime changes invalidate the model's assumptions.
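A Page-Hinkley detector with the quoted parameters can be sketched as follows; the severity labels returned here are illustrative names for the three levels:

```python
class PageHinkley:
    """Page-Hinkley drift detector over a loss stream (e.g. Brier scores).
    Parameter defaults mirror those quoted in the text."""

    def __init__(self, delta=0.005, lam_watch=0.05, lam_alert=0.10,
                 lam_critical=0.20, min_obs=20):
        self.delta, self.min_obs = delta, min_obs
        self.thresholds = [(lam_critical, "CRITICAL"),
                           (lam_alert, "ALERT"),
                           (lam_watch, "WATCH")]
        self.n = 0
        self.mean = 0.0
        self.cusum = 0.0
        self.min_cusum = 0.0

    def update(self, loss: float) -> str:
        self.n += 1
        self.mean += (loss - self.mean) / self.n       # running mean of losses
        self.cusum += loss - self.mean - self.delta    # cumulative deviation
        self.min_cusum = min(self.min_cusum, self.cusum)
        if self.n < self.min_obs:
            return "OK"
        drift = self.cusum - self.min_cusum            # cusum minus running minimum
        for lam, level in self.thresholds:
            if drift > lam:
                return level
        return "OK"
```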
Bayesian Beta-Decay Reliability Scoring
Analyst & method reliability
Beta(α,β) posterior per (method, source, analyst) × event class. Update: Brier loss → α += (1-brier), β += brier, with exponential decay γ = 2^(-Δt/halfLife), halfLife=30 days. Naturally weights recent performance over stale predictions.
A probability distribution over reliability that accounts for recency, sample size, and domain — updated after every resolved prediction.
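The update rule above translates directly into code:

```python
def update_reliability(alpha: float, beta: float, brier_loss: float,
                       dt_days: float, half_life: float = 30.0):
    """Decay old evidence, then credit the new outcome: a low Brier loss
    adds mostly to alpha (reliable), a high loss mostly to beta."""
    gamma = 2.0 ** (-dt_days / half_life)
    return alpha * gamma + (1.0 - brier_loss), beta * gamma + brier_loss

def reliability_mean(alpha: float, beta: float) -> float:
    """Posterior mean of Beta(alpha, beta): the expected reliability."""
    return alpha / (alpha + beta)
```

After one half-life (30 days) without updates, prior evidence counts for half as much, which is what makes recent accuracy dominate stale track records.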
Rolling Cross-Asset Correlation
Regime detection + DXY impact
30-day rolling Pearson correlation between BTC daily returns and DXY daily returns. Confirmed inverse correlation used to adjust signal confidence when DXY moves >0.5% in a session. Per-coin beta computed as Cov(coin,BTC)/Var(BTC) over rolling window.
Measures the live statistical relationship between crypto and the dollar, and adjusts signal confidence accordingly.
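Both statistics named above are standard; a window-sized sketch:

```python
from statistics import mean

def rolling_beta(coin_returns, btc_returns):
    """Beta = Cov(coin, BTC) / Var(BTC) over the supplied window."""
    mc, mb = mean(coin_returns), mean(btc_returns)
    cov = sum((c - mc) * (b - mb) for c, b in zip(coin_returns, btc_returns))
    var = sum((b - mb) ** 2 for b in btc_returns)
    return cov / var

def pearson(xs, ys):
    """Pearson correlation, e.g. 30-day BTC vs DXY daily returns."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den
```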
Shadow Model Promotion Protocol
Automated model governance
New parameter sets run in shadow mode alongside the active model. Promotion criteria: shadow Brier < active Brier × 0.95 on ≥30 common resolved predictions. Demotion if shadow Brier > active × 1.10. Full audit trail with ParameterSet versioning, timestamps, and metrics snapshots.
A/B testing for prediction models — ensures new parameter sets prove themselves on live data before replacing the active version.
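The decision rule reduces to a few comparisons (thresholds taken from the text):

```python
def promotion_decision(shadow_brier: float, active_brier: float,
                       n_common: int, min_common: int = 30,
                       promote_ratio: float = 0.95,
                       demote_ratio: float = 1.10) -> str:
    """Promote the shadow model only after it beats the active one by a
    margin on enough common resolved predictions."""
    if n_common < min_common:
        return "KEEP"                 # not enough shared evidence yet
    ratio = shadow_brier / active_brier
    if ratio < promote_ratio:
        return "PROMOTE"
    if ratio > demote_ratio:
        return "DEMOTE"
    return "KEEP"
```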
Spatial Neighborhood Feature (Graph-Based)
Structural risk model
648 directed border edges between 250 countries (from REST Countries API). Bad neighborhood = 1 if ≥4 neighbors have UCDP fatalities ≥25 in adjacent years, or fallback WGI Political Stability ≤ -1.0. Batch pre-computation for O(1) lookup during prediction.
Graph-based spatial features drawn from conflict prediction research (ViEWS, PITF), implemented for country-level risk assessment.
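A possible reading of the feature, sketched in code. Note an assumption: the text does not specify the scope of the WGI fallback, so this sketch applies it per neighbor whenever fatality data is missing:

```python
def bad_neighborhood(neighbors, fatality_floor=25, min_bad=4):
    """neighbors: list of (ucdp_fatalities_or_None, wgi_political_stability).

    A neighbor counts as unstable when UCDP fatalities >= 25; this sketch
    ASSUMES the WGI fallback (stability <= -1.0) applies per neighbor when
    fatality data is missing.
    """
    unstable = 0
    for fatalities, stability in neighbors:
        if fatalities is not None:
            unstable += fatalities >= fatality_floor
        else:
            unstable += stability <= -1.0
    return 1 if unstable >= min_bad else 0
```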
Kelly Criterion Position Sizing
Risk management
f* = (p × b - q) / b where p = empirical win rate, b = avg_win / avg_loss, q = 1-p. Quarter Kelly (0.25×f*) applied for variance reduction. Confidence-scaled: maps confidence [35,95] to multiplier [0.7, 1.3]. Risk level discount: MEDIUM ×0.75, HIGH ×0.5. Market cap safety caps: small-cap max 1x, mid-cap max 2x, large-cap max 5x.
Mathematically optimal sizing for long-term capital growth, with quarter-Kelly dampening to reduce variance.
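The core formula in code; the confidence scaling, risk-level discounts, and market-cap caps listed above are applied downstream and omitted here:

```python
def position_fraction(win_rate: float, avg_win: float, avg_loss: float,
                      kelly_scale: float = 0.25) -> float:
    """Quarter-Kelly fraction of capital to risk:
    f* = (p*b - q) / b with b = avg_win / avg_loss, q = 1 - p."""
    b = avg_win / avg_loss
    p, q = win_rate, 1.0 - win_rate
    f_star = (p * b - q) / b
    return max(0.0, f_star * kelly_scale)  # no position when the edge is negative
```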
Adaptive Drawdown Circuit Breaker
Portfolio risk control
Rolling 7-day performance check before every engine run. Two severity levels: CAUTIOUS (cumulative < -10%, WR < 30%, 4+ losses → raises minConf +5, minRR +0.3, caps leverage at 2x) and DEFENSIVE (cumulative < -25%, WR < 20%, 6+ losses → raises minConf +10, minRR +0.5, forces spot only). Confidence penalized by 3–5 points. Auto-reverts when metrics recover.
During drawdowns, the engine automatically tightens its own filters — fewer signals, higher quality thresholds, lower risk exposure.
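The two severity levels can be sketched as a pure function. One assumption to flag: the text does not say whether the trigger conditions combine with AND or OR, so this sketch treats any single condition as sufficient:

```python
def circuit_breaker_mode(cum_return_7d: float, win_rate_7d: float,
                         consecutive_losses: int) -> dict:
    """Severity levels per the text; ASSUMES any one trigger condition
    is sufficient (AND/OR combination unspecified)."""
    if cum_return_7d < -0.25 or win_rate_7d < 0.20 or consecutive_losses >= 6:
        return {"mode": "DEFENSIVE", "min_conf": +10,
                "min_rr": +0.5, "spot_only": True}
    if cum_return_7d < -0.10 or win_rate_7d < 0.30 or consecutive_losses >= 4:
        return {"mode": "CAUTIOUS", "min_conf": +5,
                "min_rr": +0.3, "max_leverage": 2}
    return {"mode": "NORMAL"}
```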
Liquidation Cascade Detection
Derivatives intelligence
Hourly OI history from Binance futures (8 data points). Cascade detected when OI drops >5% in 4 hours. Direction classified by concurrent price move: OI down + price down = long liquidation (contrarian LONG signal); OI down + price up = short squeeze. Adjustment ±5 to derivatives score.
Identifies moments when leveraged positions are being forcefully closed, which often marks short-term exhaustion points.
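The classification rule in code, using the thresholds stated above:

```python
def classify_cascade(oi_change_4h_pct: float, price_change_4h_pct: float):
    """Cascade when open interest drops more than 5% in 4 hours;
    direction read from the concurrent price move."""
    if oi_change_4h_pct > -5.0:
        return None                    # no cascade detected
    if price_change_4h_pct < 0:
        return "LONG_LIQUIDATION"      # contrarian LONG context
    return "SHORT_SQUEEZE"
```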
Isotonic Probability Calibration (PAV)
Output calibration
Pool Adjacent Violators algorithm maps confidence scores to empirically calibrated win probabilities. Built from all closed signals with known outcomes. Monotonically increasing by construction. Linear interpolation between bucket midpoints. Brier score validation: before vs after calibration. Updated weekly from growing outcome data.
Answers the question: "when the engine outputs confidence 70, what is the empirical win rate?" — and corrects for any miscalibration.
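The Pool Adjacent Violators core, sketched without the bucketing and interpolation steps described above:

```python
def pav_fit(ys):
    """Pool Adjacent Violators: least-squares monotone non-decreasing fit
    to ys, assumed already ordered by confidence score."""
    blocks = []                        # each block: [sum, count]
    for y in ys:
        blocks.append([y, 1])
        # merge backwards while adjacent block means violate monotonicity
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)     # each block's mean repeated over its span
    return fitted
```

Any local dip in empirical win rate gets pooled into a flat step, which is what guarantees the calibrated curve never decreases as confidence rises.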
Every method listed above has code running in production, not in a whitepaper. Parameters are stored in versioned ParameterSets. We publish the methods — the coefficients, thresholds, and feature engineering are proprietary.
The engine improves from its own mistakes
Every signal — accepted or rejected — is stored as a full snapshot with all indicators. Outcomes are tracked automatically. This data feeds back into parameter calibration.
Full Snapshot Recording
Every analysis (2,047+ so far) is stored with its complete indicator state — RSI, trend scores, ATR, S/R levels, volume, market cap, sentiment, pattern type, rejection reason. Accepted and rejected alike.
MFE / MAE Outcome Tracking
Maximum Favorable Excursion and Maximum Adverse Excursion tracked at 24h, 48h, and 7d after signal creation. Shows not just if a signal won, but how far it went in our favor and how far against.
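MFE and MAE over a tracking window reduce to extremes of the candle highs and lows relative to entry. A minimal sketch:

```python
def mfe_mae(entry: float, highs, lows, direction: str = "LONG"):
    """Maximum favorable / adverse excursion as % moves from entry.
    highs/lows: candle extremes inside the tracking window (24h/48h/7d)."""
    if direction == "LONG":
        mfe = (max(highs) - entry) / entry * 100.0
        mae = (min(lows) - entry) / entry * 100.0
    else:
        mfe = (entry - min(lows)) / entry * 100.0
        mae = (entry - max(highs)) / entry * 100.0
    return mfe, mae
```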
Proxy Outcome Labeling
For snapshots without a linked signal, the ML layer fetches the price 24h later and labels the setup as proxy-win or proxy-loss. This gives the ML model training data from the entire analysis universe — not just published signals.
Page-Hinkley Drift Detection
Continuous monitoring for distributional shift in model performance. If Brier scores start drifting (market regime change, new correlation patterns), alerts fire and shadow model promotion is triggered.
Shadow Model Promotion
New parameter sets run in shadow mode alongside the active model. When a shadow variant beats the active one by a statistically significant margin (Brier score ratio < 0.95 on 30+ common predictions), it's automatically promoted.
Versioned Parameter Audit Trail
Every parameter change — ML weights, timing weights, PITF coefficients, confidence thresholds — is stored in versioned ParameterSets with creation timestamp, creator, activation date, retirement date, and performance metrics snapshot.
Signal rejection pipeline
A setup must survive every gate. One failure = rejected. The pipeline is ordered by computational cost — cheap filters first, expensive lookups last.
Live production numbers
Queried from the production database at page load.
Evolution, not decoration
Each version solved real failure modes found in tracked outcomes.
Engine Audit
+ Sortino Ratio fix (normalized by total sample, not just losses), infinity/NaN guards on leverage and ML features, coin beta bounds validation, derivatives comment alignment. 5 code fixes deployed.
Full Edge Engine
+ Adaptive circuit breaker (tightens filters during drawdowns instead of stopping), Kelly position sizing (mathematically optimal risk per trade), liquidation cascade detector (OI history analysis), isotonic probability calibration (PAV algorithm), portfolio metrics (Sharpe, Sortino, Profit Factor, Max Drawdown). Output: calibrated win probability + recommended risk %.
Advanced Engine
+ Derivatives positioning gates, cross-asset regime, timing optimization, ML calibration. Confidence extended to 35–95. Funding rate hard-gate added.
Intelligence Integration
+ 14 macro indicators, geopolitical risk from GDELT, per-coin crypto event catalysts (unlocks, halvings, listings). Fundamental analysis feeds into every signal.
Calibration Engine
+ Per-timeframe parameter tuning from outcome data. MFE/MAE tracking at 24h/48h/7d. Research mode for parameter exploration.
Snapshot + Validation
+ Full snapshot recording for ALL analyses (accepted + rejected). 14 rejection reasons. Volume gating. Weak-breakout detection.
Core Engine
S/R detection, 3-layer trend scoring, pattern matching, confidence formula, Binance klines integration.
Common questions
Reasonable questions deserve straightforward answers.
"Isn't this just RSI + support/resistance + trend?"
Technical analysis is layer 1 of 7 in the scoring pipeline. It generates the initial setup, but the remaining 6 layers — derivatives positioning, regime classification, macro risk, event catalysts, timing, and ML calibration — determine whether the setup meets the quality threshold. Most setups are rejected by these additional layers.
"How do fundamentals affect entry prices?"
They don't set entry prices — technicals do that. Fundamentals serve as gates and confidence adjusters. For example, extreme funding rate alignment can reject an otherwise valid setup, while a confirmed regime with no macro headwinds can boost its confidence score. Fundamentals filter which technical setups get published.
"Where can I see backtest results?"
We track every outcome with MFE/MAE at 24h, 48h, and 7d windows. The structural model has walk-forward validation (Brier 0.059 on 2019–2022 holdout). Portfolio metrics — Sharpe, Sortino, Profit Factor, Max Drawdown — are computed from live signal outcomes and available on the Track Record page.
"What about on-chain analysis?"
On-chain metrics (whale flows, exchange inflows, MVRV, SOPR) are on the roadmap. Currently, derivatives positioning (funding rate, OI, long/short ratio) covers crowd sentiment and leverage exposure. We add new data sources when we can validate their impact against tracked outcomes.
Signals with context
Each signal comes with the conditions it was generated in — the dollar's trend, the funding rate, the news tone, scheduled catalysts, and analyst consensus. This way you can evaluate the setup yourself, not just follow a number.
Every outcome is tracked, every parameter is version-controlled, and the ML model learns from each result. The engine is designed to improve incrementally as more data accumulates.
Protected IP
- All numerical thresholds and weights
- Confidence formula coefficients per layer
- S/R clustering and scoring parameters
- ML feature engineering and model weights
- Funding rate gate and scoring boundaries
- Regime classification conditions
- Trend scoring gradient formulas
Disclosed on this page
- Full architecture (7 layers, 15 gates)
- Data sources and what each provides
- How the feedback loop works
- Scale and live production numbers
- Version history and what each solved
- Validation methodology (walk-forward, Brier)
- Enough to evaluate seriousness — not to copy
Risk Disclaimer & IP Notice
This page describes the architecture of the LogiSignals engine for transparency and audit purposes. Nothing here constitutes financial advice. Past signal performance does not guarantee future results. Trading cryptocurrencies carries substantial risk. The engine's parameters, thresholds, scoring formulas, and trained model weights are proprietary intellectual property. Unauthorized reproduction, reverse-engineering, or use of this information to build competing systems is prohibited. Full parameter audit available under NDA.