13 NaN Features. Same Code. Different Data.

When V5 went live, every single inference run logged the same warning: WARNING: 13 features contain NaN values. Filling with 0. I saw it on day one. I told myself it was a data warmup issue: the live feed just needed more bars to stabilize. That was wrong. Same code, different data: V5 solved the training/live code divergence problem from V4 with one shared feature_core.py. Every pipeline (training, backtest, all five live scripts) imports from the same file. ...
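The warning described above can be sketched as a NaN check on the feature frame before inference. This is an illustrative reconstruction, not the author's actual feature_core.py; the column names and fill policy are assumptions:

```python
import numpy as np
import pandas as pd

def fill_nan_features(features: pd.DataFrame) -> pd.DataFrame:
    """Log which feature columns contain NaN, then fill them with 0.

    Sketch of the warning pattern from the post; the real
    feature_core.py is not shown in the excerpt.
    """
    nan_cols = features.columns[features.isna().any()].tolist()
    if nan_cols:
        print(f"WARNING: {len(nan_cols)} features contain NaN values. "
              "Filling with 0.")
    return features.fillna(0)

# Hypothetical frame: two of three feature columns contain NaN.
df = pd.DataFrame({
    "rsi":   [50.0, np.nan],
    "cvd":   [np.nan, 1.2],
    "close": [100.0, 101.0],
})
filled = fill_nan_features(df)
```

Note that filling with 0 silently masks the upstream problem the post is about: the warning fires every run, but the model still gets fed zeros instead of real values.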

March 14, 2026 · 3 min

Backtest Design Is 60% of the Result. Model Training Is 40%.

Most people obsess over model accuracy. Win rate. Precision. Feature importance. That’s 40% of the problem. The other 60% is how you build the backtest that validates it. What does the backtest actually control? A model tells you: this bar looks like a long. The backtest decides everything else: How big is the position? Where does the stop go? How does the stop trail? When do you decide the signal is gone? Do you re-enter after an exit? How do you handle overnight funding? Every one of those decisions compounds over hundreds of trades. ...
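The list of decisions above can be made explicit as a backtest configuration, so every assumption is visible and versioned rather than buried in loop logic. The field names and defaults here are hypothetical, purely to illustrate the shape; they are not the author's parameters:

```python
from dataclasses import dataclass

@dataclass
class BacktestConfig:
    """One field per decision the backtest controls (illustrative only)."""
    position_pct: float = 0.02       # How big is the position? (fraction of equity)
    stop_atr_mult: float = 2.0       # Where does the stop go? (ATR multiple)
    trail_atr_mult: float = 3.0      # How does the stop trail?
    signal_timeout_bars: int = 12    # When is the signal considered gone?
    reentry_cooldown_bars: int = 4   # Do you re-enter after an exit?
    funding_rate_8h: float = 0.0001  # How is periodic funding charged?

cfg = BacktestConfig()
```

Changing any one of these fields changes the equity curve, which is the post's point: these choices compound over hundreds of trades independently of the model.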

March 13, 2026 · 3 min

My Quant Model Had 5 Silent Data Bugs. The Backtest Looked Great.

My V4 trading system used 11 data sources: price data, funding rates, open interest, institutional long/short ratio, liquidation data, fear & greed index, CVD, and more. The backtest results looked solid: win rates above 80%, drawdown under 15%. Then I audited the code line by line. What I found made me rebuild the entire system from scratch. Bug 1: the Fear & Greed index was always 50. The Alternative.me API returns data in this format: ...
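The excerpt cuts off before showing the response format, but a constant-50 reading is the classic signature of a default value masking a key-path mistake. As a hedged illustration only (this is an assumed response shape and an assumed bug, not the author's code), the public alternative.me Fear & Greed endpoint nests its values, as strings, under a "data" list:

```python
# Assumed response shape for illustration, modeled on the public
# alternative.me /fng/ endpoint; not taken from the post.
response = {
    "name": "Fear and Greed Index",
    "data": [{"value": "34", "value_classification": "Fear"}],
}

# Hypothetical buggy read: "value" is not a top-level key,
# so the fallback default of 50 is returned on every call.
buggy = int(response.get("value", 50))

# Correct read: index into the "data" list, then cast the string.
correct = int(response["data"][0]["value"])
```

A default like `50` is especially dangerous here because 50 is a plausible neutral reading, so nothing looks wrong until you plot the series and see a flat line.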

March 13, 2026 · 4 min