Sampling Stock Prices Directly from Option Prices

For a single maturity, European call prices encode the risk-neutral distribution of the underlying. You can turn them into Monte Carlo samples without fitting a model or estimating a density.

For strikes K_0 < … < K_n with call prices C_0, …, C_n, define

    \[F_i = 1 + e^{rT} \frac{C_{i+1}-C_i}{K_{i+1}-K_i} \quad (i = 1, \dots, n-1), \qquad F_0 = 0, \quad F_n = 1\]

This is a discrete approximation of the cumulative distribution function of S_T: since C(K) = e^{-rT} E[(S_T - K)^+] implies \partial C/\partial K = -e^{-rT}(1 - F(K)), a finite difference of call prices in strike recovers the CDF directly.

To sample:

  1. Draw U \sim Uniform(0,1)
  2. Find i such that F_i \le U < F_{i+1}
  3. Set

    \[S_T = K_i + (K_{i+1}-K_i)\frac{U-F_i}{F_{i+1}-F_i}\]

Repeat for as many samples as needed.
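
As a concrete illustration, here is a minimal NumPy sketch of the scheme above (the function name is mine, and it assumes the quotes are arbitrage-free so that the F_i are non-decreasing):

    import numpy as np

    def sample_s_t(strikes, calls, r, T, n_samples, seed=None):
        """Inverse-transform samples of S_T from call prices on a strike grid."""
        K = np.asarray(strikes, dtype=float)
        C = np.asarray(calls, dtype=float)

        # F_i = 1 + e^{rT} (C_{i+1} - C_i) / (K_{i+1} - K_i), endpoints pinned
        F = np.empty_like(K)
        F[1:-1] = 1.0 + np.exp(r * T) * (np.diff(C) / np.diff(K))[1:]
        F[0], F[-1] = 0.0, 1.0

        # Steps 1-3: draw U, find the bracketing interval, interpolate linearly
        U = np.random.default_rng(seed).uniform(size=n_samples)
        i = np.clip(np.searchsorted(F, U, side="right") - 1, 0, len(K) - 2)
        w = (U - F[i]) / (F[i + 1] - F[i])
        return K[i] + w * (K[i + 1] - K[i])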

This produces risk-neutral samples directly from observed call prices using only simple finite differences. It’s fully model-free, requires no volatility surface fitting, and preserves arbitrage constraints!

(Figure: example results from S&P 500 option prices.)

New open-source library: Conditional Gaussian Mixture Models (CGMM)

pip install cgmm

I’ve released a lightweight, open-source Python library that learns conditional distributions and turns them into, e.g., scenarios, fan charts, and risk bands with just a few lines of code. It’s built on top of scikit-learn, so it fits naturally into sklearn-style workflows and tooling.

Example usage:
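
(The exact API is shown in the notebook linked below; as a stand-in, here is a hand-rolled sketch in plain scikit-learn of the technique the library automates: fit a joint Gaussian mixture on (condition, target) and condition it analytically, on toy data rather than real VIX values.)

    import numpy as np
    from scipy.stats import norm
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    x = rng.uniform(10, 40, 5000)                   # condition, e.g. VIX level
    y = -0.05 * (x - 20) + rng.standard_t(4, 5000)  # target, e.g. next-day ΔVIX

    gmm = GaussianMixture(n_components=4).fit(np.column_stack([x, y]))

    def conditional_mixture(x_star):
        """Weights, means, stds of p(y | x = x_star) for the fitted 2D GMM."""
        w, mu, S = gmm.weights_, gmm.means_, gmm.covariances_
        wk = w * norm.pdf(x_star, mu[:, 0], np.sqrt(S[:, 0, 0]))
        wk /= wk.sum()
        mk = mu[:, 1] + S[:, 0, 1] / S[:, 0, 0] * (x_star - mu[:, 0])
        vk = S[:, 1, 1] - S[:, 0, 1] ** 2 / S[:, 0, 0]
        return wk, mk, np.sqrt(vk)

    wk, mk, sk = conditional_mixture(35.0)  # ΔVIX distribution when VIX = 35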

In the VIX example (see the notebook linked below), a non-parametric model is fit on ΔVIX conditioned on the VIX level, so it naturally handles:

  • Non-Gaussian changes (fat tails / asymmetry), and
  • Non-linear, level-dependent drift (behavior differs when VIX is low vs. high).

Features:

  • Conditional densities and scenario generation for time series and tabular problems
  • Quantiles, prediction intervals, and median/mean paths via MC simulation
  • Multiple conditioning features (macro, technicals, regimes, etc.)
  • Lightweight & sklearn-friendly; open-source and free to use (BSD-3)

VIX example notebook: https://cgmm.readthedocs.io/en/latest/examples/vix_predictor.html

Call for examples & contributions:

  • Do you have a use-case we should showcase (rates, spreads, realized vol, token flows, energy, demand, order-book features…)?
  • Send a brief description or PR—examples will be attributed.
  • Contributions, issues, and feature requests are very welcome. And if you find this useful, please share or star the repo to help others discover it.

Not investment advice. The library is still a work in progress!

Forecasting Current Market Turbulence with the GJR-GARCH Model

The Current Market Shake-Up

Last week, global stock markets faced a sharp and sudden correction. The S&P 500 dropped 10% in just two trading days, its worst weekly loss since the Covid crash five years ago.

Big drops like this remind us that market volatility isn’t random; it tends to stick around once it starts. When markets fall sharply, that volatility often continues for days or even weeks. And importantly, negative returns usually lead to bigger increases in volatility than positive returns do. This behavior is called asymmetry, and it’s something that simple models don’t handle very well.

In this post, we’ll explore the Glosten-Jagannathan-Runkle GARCH model (GJR-GARCH), a widely-used asymmetric volatility model. We’ll apply it to real S&P 500 data, simulate future price and volatility scenarios, and interpret what it tells us about market expectations.
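
For readers who want to follow along before the full post, a minimal sketch with the arch package (the placeholder return series stands in for real S&P 500 data):

    import numpy as np
    import pandas as pd
    from arch import arch_model

    # Placeholder data; substitute daily S&P 500 returns in percent
    returns = pd.Series(0.8 * np.random.default_rng(0).standard_t(5, 2500))

    # p=1, q=1 is plain GARCH; o=1 adds the GJR term that lets negative
    # shocks raise next-period variance more than positive shocks do
    model = arch_model(returns, vol="GARCH", p=1, o=1, q=1, dist="t")
    res = model.fit(disp="off")
    print(res.params)

    # Simulated 20-day-ahead volatility scenarios from the fitted model
    fcast = res.forecast(horizon=20, method="simulation", simulations=1000)
    vol_forecast = np.sqrt(fcast.variance.iloc[-1])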

Continue reading

Using Fractional Brownian Motion in Finance: Simulation, Calibration, Prediction and Real World Examples

Long Memory in Financial Time Series

In finance, it is common to model asset prices and volatility using stochastic processes that assume independent increments, such as geometric Brownian motion. However, empirical observations suggest that many financial time series exhibit long memory or persistence. For example, volatility shocks can persist over extended periods, and high-frequency order flow often displays non-negligible autocorrelation. To capture such behavior, fractional Brownian motion (fBm) introduces a flexible framework where the memory of the process is governed by a single parameter: the Hurst exponent.
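
As a quick taste of what the post covers, a minimal simulation sketch: exact sampling of an fBm path from its covariance function (O(n^3), so for large n a method such as Davies-Harte is preferable):

    import numpy as np

    def simulate_fbm(n, hurst, T=1.0, seed=None):
        """Exact fBm path on n grid points via Cholesky of its covariance:
        Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
        t = np.linspace(T / n, T, n)  # exclude t = 0
        s, u = np.meshgrid(t, t)
        H2 = 2.0 * hurst
        cov = 0.5 * (s**H2 + u**H2 - np.abs(s - u)**H2)
        z = np.random.default_rng(seed).standard_normal(n)
        return t, np.linalg.cholesky(cov) @ z

    # hurst > 0.5 gives persistent (long-memory) increments, < 0.5 anti-persistent
    t, path = simulate_fbm(n=500, hurst=0.7)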

Continue reading

Yield Curve Interpolation with Gaussian Processes: A Probabilistic Perspective

Here we present a yield curve interpolation method based on conditioning a stochastic model on a set of market yields. The concept is closely related to a Brownian bridge, where you generate scenarios according to an SDE under the extra condition that the start and end of each scenario take prescribed values. In this paper we use Gaussian process regression to generalize the Brownian bridge and allow for more complicated conditions. As an example, we condition the Vasicek spot interest rate model on a set of yield constraints and provide an analytical solution; a generic sketch of the GP-interpolation idea follows the list below.

The resulting model can be applied in several areas:

  • Monte Carlo scenario generation
  • Yield curve interpolation
  • Estimating optimal hedges, and the associated risk, for non-tradable products
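
A generic sketch of the interpolation idea with scikit-learn's GP regression (illustrative quotes, and a plain RBF kernel rather than the Vasicek-consistent conditioning used in the paper):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Illustrative market yields (not real quotes)
    maturities = np.array([[0.5], [1.0], [2.0], [5.0], [10.0], [30.0]])
    yields = np.array([0.042, 0.041, 0.039, 0.038, 0.040, 0.043])

    # Conditioning the GP on the quotes plays the role of the bridge conditions;
    # a tiny alpha makes the posterior pass (almost) exactly through them
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0),
                                  alpha=1e-8, normalize_y=True)
    gp.fit(maturities, yields)

    grid = np.linspace(0.5, 30.0, 200).reshape(-1, 1)
    mean, std = gp.predict(grid, return_std=True)  # curve plus uncertainty band
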
Continue reading

Faster Monte Carlo Exotic Option Pricing with Low Discrepancy Sequences

In this post, we discuss the usefulness of low-discrepancy sequences (LDS) in finance, particularly for option pricing. Unlike purely random sampling, LDS methods generate points that are more evenly distributed over the sample space. This uniformity reduces the gaps and clustering seen in standard Monte Carlo (MC) sampling and improves convergence in numerical integration problems.

A key measure of sampling quality is discrepancy, which quantifies how evenly a set of points covers the space. Low-discrepancy sequences minimize this discrepancy, leading to faster convergence in high-dimensional simulations.
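
For example, with SciPy's quasi-Monte Carlo module (a generic sketch, not tied to a particular payoff):

    import numpy as np
    from scipy.stats import norm, qmc

    d = 4  # e.g. four monitoring dates of a path-dependent option
    sobol = qmc.Sobol(d=d, scramble=True, seed=0).random_base2(m=12)  # 4096 points
    pseudo = np.random.default_rng(0).random((4096, d))

    # LDS points cover [0,1)^d more evenly than pseudo-random ones
    print(qmc.discrepancy(sobol), qmc.discrepancy(pseudo))

    # Map the uniforms to Gaussian draws for a Monte Carlo pricer
    z = norm.ppf(sobol)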

Continue reading

Finding the Nearest Valid Correlation Matrix with Higham’s Algorithm

Introduction

In quantitative finance, correlation matrices are essential for portfolio optimization, risk management, and asset allocation. However, real-world data often results in correlation matrices that are invalid due to various issues:

  • Merging Non-Overlapping Datasets: If correlations are estimated separately for different periods or asset subsets and then stitched together, the resulting matrix may lose its positive semidefiniteness.
  • Manual Adjustments: Risk/asset managers sometimes override statistical estimates based on qualitative insights, inadvertently making the matrix inconsistent.
  • Numerical Precision Issues: Finite sample sizes or noise in financial data can lead to small negative eigenvalues, making the matrix slightly non-positive semidefinite.
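
A compact sketch of the method (alternating projections with Dykstra's correction, in the plain unweighted norm):

    import numpy as np

    def nearest_correlation(A, max_iter=200, tol=1e-8):
        """Project back and forth between the PSD cone and the
        unit-diagonal set until the iterates stop moving."""
        Y = np.asarray(A, dtype=float).copy()
        dS = np.zeros_like(Y)
        for _ in range(max_iter):
            R = Y - dS                          # undo previous correction
            w, V = np.linalg.eigh((R + R.T) / 2)
            X = (V * np.maximum(w, 0.0)) @ V.T  # project onto PSD cone
            dS = X - R                          # Dykstra's correction
            Y_prev, Y = Y, X.copy()
            np.fill_diagonal(Y, 1.0)            # restore unit diagonal
            if np.linalg.norm(Y - Y_prev, "fro") <= tol * np.linalg.norm(Y, "fro"):
                break
        return Y

    # An invalid "correlation" matrix with one negative eigenvalue
    A = np.array([[1.0, 1.0, 0.0],
                  [1.0, 1.0, 1.0],
                  [0.0, 1.0, 1.0]])
    print(np.linalg.eigvalsh(nearest_correlation(A)))  # all >= 0 now
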
Continue reading

Optimal Labeling in Trading: Bridging the Gap Between Supervised and Reinforcement Learning

When building trading strategies, a crucial decision is how to translate market information into trading actions.

Traditional supervised learning approaches tackle this by predicting price movements directly, essentially guessing if the price will move up or down.

Typically, we decide on labels in supervised learning by asking something like: “Will the price rise next week?” or “Will it increase more than 2% over the next few days?” While these are intuitive choices, they often seem arbitrarily tweaked and overlook the real implications on trading strategies. Choices like these silently influence trading frequency, transaction costs, risk exposure, and strategy performance, without clearly tying these outcomes to specific label modeling decisions. There’s a gap here between the supervised learning stage (forecasting) and the actual trading decisions, which resemble reinforcement learning actions.

In this post, I present a straightforward yet rigorous solution that bridges this gap by formulating label selection itself as an optimization problem. Instead of guessing or relying on intuition, labels are derived by explicitly optimizing a defined trading performance objective, like returns or Sharpe ratio, while respecting realistic constraints such as transaction costs or position limits. The result is labeling that is no longer arbitrary, but transparently optimal and directly tied to trading performance.
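
As a toy illustration (my own sketch, not the post's formulation): score a candidate labeling rule by the after-cost performance of the strategy it implies, then pick the rule that optimizes that objective:

    import numpy as np

    def labeling_objective(fwd_returns, threshold, cost=2e-4):
        """After-cost Sharpe of trading the labels 'forward return > threshold'.

        Labels are judged by the trading outcome they imply (oracle labels
        here for brevity; in practice a model predicts them)."""
        pos = (fwd_returns > threshold).astype(float)     # label -> position
        costs = cost * np.abs(np.diff(pos, prepend=0.0))  # pay when position changes
        pnl = pos * fwd_returns - costs
        return pnl.mean() / (pnl.std() + 1e-12)

    fwd = np.random.default_rng(0).normal(0.0002, 0.01, 5000)  # placeholder returns

    # The label definition is chosen by optimizing the trading objective
    grid = np.linspace(0.0, 0.02, 41)
    best_threshold = max(grid, key=lambda th: labeling_objective(fwd, th))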

Continue reading

Fast Rolling Regression: An O(1) Sliding Window Implementation

In finance and signal processing, detecting trends or smoothing noisy data streams efficiently is crucial. A popular tool for this task is a linear regression applied to a sliding (rolling) window of data points. This approach can serve as a low-pass filter or a trend detector, removing short-term fluctuations while preserving longer-term trends. However, naive methods for sliding-window regression can be computationally expensive, especially as the window grows larger, since their complexity typically scales with window size.
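
The trick is to maintain the regression's sufficient statistics incrementally, so each new point costs O(1) regardless of window size; a minimal sketch:

    from collections import deque

    class RollingRegression:
        """Sliding-window least squares of y on x with O(1) updates:
        running sums are adjusted instead of refitting the window."""

        def __init__(self, window):
            self.window = window
            self.buf = deque()
            self.sx = self.sy = self.sxx = self.sxy = 0.0

        def update(self, x, y):
            self.buf.append((x, y))
            self.sx += x; self.sy += y
            self.sxx += x * x; self.sxy += x * y
            if len(self.buf) > self.window:  # evict the oldest point
                ox, oy = self.buf.popleft()
                self.sx -= ox; self.sy -= oy
                self.sxx -= ox * ox; self.sxy -= ox * oy
            n = len(self.buf)
            denom = n * self.sxx - self.sx * self.sx
            slope = (n * self.sxy - self.sx * self.sy) / denom if denom else 0.0
            return slope, (self.sy - slope * self.sx) / n  # slope, intercept

(For very long streams, recompute the sums from the buffer now and then to curb floating-point drift.)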

Continue reading