# Calibrating the Ornstein-Uhlenbeck (Vasicek) model

In this article I’ll describe two methods for calibrating the model parameters of the Ornstein-Uhlenbeck process to a given dataset:

• The least squares regression method
• The maximum likelihood method

## Introduction

The stochastic differential equation (SDE) for the Ornstein-Uhlenbeck process is given by

dS_t = λ(μ − S_t) dt + σ dW_t

with λ the mean reversion rate, μ the long term mean, and σ the volatility.

## An example simulation

The table and figure below show a simulated scenario for the Ornstein-Uhlenbeck process with time step δ = 0.25, mean reversion rate λ = 3.0, long term mean μ = 1.0 and volatility σ = 0.50. We will use this data to explain the model calibration steps.

A scenario of the Ornstein-Uhlenbeck process, starting at S(0) = 3 and reverting to a long term mean of 1.

| i | t_i | S_i | N(0,1) |
|----|------|--------|---------|
| 0 | 0.00 | 3.0000 | |
| 1 | 0.25 | 1.7600 | -1.0268 |
| 2 | 0.50 | 1.2693 | -0.4985 |
| 3 | 0.75 | 1.1960 | 0.3825 |
| 4 | 1.00 | 0.9468 | -0.8102 |
| 5 | 1.25 | 0.9532 | -0.1206 |
| 6 | 1.50 | 0.6252 | -1.9604 |
| 7 | 1.75 | 0.8604 | 0.2079 |
| 8 | 2.00 | 1.0984 | 0.9134 |
| 9 | 2.25 | 1.4310 | 2.1375 |
| 10 | 2.50 | 1.3019 | 0.5461 |
| 11 | 2.75 | 1.4005 | 1.4335 |
| 12 | 3.00 | 1.2686 | 0.4414 |
| 13 | 3.25 | 0.7147 | -2.2912 |
| 14 | 3.50 | 0.9237 | 0.3249 |
| 15 | 3.75 | 0.7297 | -1.3019 |
| 16 | 4.00 | 0.7105 | -0.8995 |
| 17 | 4.25 | 0.8683 | 0.0281 |
| 18 | 4.50 | 0.7406 | -1.0959 |
| 19 | 4.75 | 0.7314 | -0.8118 |
| 20 | 5.00 | 0.6232 | -1.3890 |

The following simulation equation is used for generating paths, sampled with a fixed time step δ = 0.25. The equation is an exact solution of the SDE:

S_{i+1} = S_i e^{-λδ} + μ(1 − e^{-λδ}) + σ √((1 − e^{-2λδ}) / (2λ)) · N(0,1)

The random numbers N(0,1) used in this example are shown in the last column of table 1.
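As a sanity check, the exact simulation step can be coded directly. This is an illustrative Python sketch (the article's own code, given later, is Matlab); it reproduces the first step of table 1:

```python
import math

def ou_step(s_prev, delta, lam, mu, sigma, n01):
    """One exact simulation step of the OU process:
    S_{i+1} = S_i e^{-lam*delta} + mu (1 - e^{-lam*delta})
              + sigma * sqrt((1 - e^{-2*lam*delta}) / (2*lam)) * N(0,1)
    """
    a = math.exp(-lam * delta)
    noise_scale = sigma * math.sqrt((1.0 - math.exp(-2.0 * lam * delta)) / (2.0 * lam))
    return a * s_prev + mu * (1.0 - a) + noise_scale * n01

# First row of table 1: S_0 = 3.0 and N(0,1) = -1.0268 should give S_1 = 1.7600
s1 = ou_step(3.0, 0.25, 3.0, 1.0, 0.50, -1.0268)
```

Plugging in the table parameters indeed returns 1.7600 (to four decimals).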

## Calibration using least squares regression

The relationship between consecutive observations S_{i−1}, S_i is linear with an iid normal random term ε:

S_i = a S_{i−1} + b + ε

Least squares fit of a line to the data.

The relationship between the linear fit and the model parameters is given by

a = e^{-λδ}
b = μ(1 − e^{-λδ})
sd(ε) = σ √((1 − e^{-2λδ}) / (2λ))

Rewriting these equations gives

λ = −ln(a) / δ
μ = b / (1 − a)
σ = sd(ε) · √(−2 ln(a) / (δ(1 − a²)))

## Calculating the least squares regression

Most software tools (Excel, Matlab, R, Octave, Maple, …) have built-in functionality for least squares regression. If it’s not available, a least squares regression can easily be done by calculating the quantities below (all sums run over i = 1 … n):

S_x = Σ S_{i−1}
S_y = Σ S_i
S_xx = Σ S_{i−1}²
S_xy = Σ S_{i−1} S_i
S_yy = Σ S_i²

from which we get the following parameters of the least squares fit:

a = (n S_xy − S_x S_y) / (n S_xx − S_x²)
b = (S_y − a S_x) / n
sd(ε) = √[ (n S_yy − S_y² − a (n S_xy − S_x S_y)) / (n (n − 2)) ]
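For reference, here is a hedged Python transcription of this recipe (the article's own implementation, in Matlab, appears at the end), applied to the scenario of table 1:

```python
import math

# Scenario of table 1
S = [3.0000, 1.7600, 1.2693, 1.1960, 0.9468, 0.9532, 0.6252,
     0.8604, 1.0984, 1.4310, 1.3019, 1.4005, 1.2686, 0.7147,
     0.9237, 0.7297, 0.7105, 0.8683, 0.7406, 0.7314, 0.6232]
delta = 0.25
n = len(S) - 1

# The five regression sums
Sx  = sum(S[:-1])
Sy  = sum(S[1:])
Sxx = sum(x * x for x in S[:-1])
Sxy = sum(x * y for x, y in zip(S[:-1], S[1:]))
Syy = sum(y * y for y in S[1:])

# Least squares fit ...
a  = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)
b  = (Sy - a * Sx) / n
sd = math.sqrt((n * Syy - Sy ** 2 - a * (n * Sxy - Sx * Sy)) / (n * (n - 2)))

# ... and the recovered model parameters
lam   = -math.log(a) / delta
mu    = b / (1.0 - a)
sigma = sd * math.sqrt(-2.0 * math.log(a) / (delta * (1.0 - a ** 2)))
```

Running this reproduces the regression quantities and parameters tabulated in the example below.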

## Example

Applying the regression to the data in table 1 we get

| Param | Value |
|--------|---------|
| S_x | 22.5301 |
| S_y | 20.1534 |
| S_xx | 30.8338 |
| S_xy | 25.1973 |
| S_yy | 22.2222 |
| a | 0.4574 |
| b | 0.4924 |
| sd(ε) | 0.2073 |

These results allow us to recover the model parameters:

| Param | Value |
|-------|--------|
| μ | 0.9075 |
| λ | 3.1288 |
| σ | 0.5831 |

## Calibration using Maximum Likelihood estimates

### Conditional probability density function

The conditional probability density function is easily derived by combining the simulation equation above with the probability density function of the normal distribution:

Conditional probability density function (red) of S at t = 1.

The conditional probability density of an observation S_i, given a previous observation S_{i−1} (with a time step δ between them), is given by

f(S_i | S_{i−1}; μ, λ, σ̂) = (1 / √(2πσ̂²)) · exp( −(S_i − S_{i−1} e^{-λδ} − μ(1 − e^{-λδ}))² / (2σ̂²) )

with

σ̂² = σ² (1 − e^{-2λδ}) / (2λ)

### Log-likelihood function

The log-likelihood function of a set of observations (S_0, S_1, …, S_n) can be derived from the conditional density function:

L(μ, λ, σ̂) = Σᵢ ln f(S_i | S_{i−1}) = −(n/2) ln(2π) − n ln(σ̂) − (1/(2σ̂²)) Σᵢ (S_i − S_{i−1} e^{-λδ} − μ(1 − e^{-λδ}))²
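To make the log-likelihood concrete, here is an illustrative Python sketch (not from the original article). It evaluates L(μ, λ, σ̂) on the data of table 1; moving μ away from the calibrated value of the example below can only lower the likelihood:

```python
import math

def ou_loglik(S, delta, mu, lam, sigma_hat):
    """Log-likelihood of an OU path built from the conditional Gaussian density:
    -n/2 ln(2 pi) - n ln(sigma_hat)
    - 1/(2 sigma_hat^2) * sum_i (S_i - S_{i-1} e^{-lam d} - mu (1 - e^{-lam d}))^2
    """
    n = len(S) - 1
    a = math.exp(-lam * delta)
    sse = sum((S[i] - S[i - 1] * a - mu * (1.0 - a)) ** 2 for i in range(1, len(S)))
    return (-0.5 * n * math.log(2.0 * math.pi) - n * math.log(sigma_hat)
            - sse / (2.0 * sigma_hat ** 2))

# Data of table 1 and the ML parameter estimates of the example below
S = [3.0000, 1.7600, 1.2693, 1.1960, 0.9468, 0.9532, 0.6252,
     0.8604, 1.0984, 1.4310, 1.3019, 1.4005, 1.2686, 0.7147,
     0.9237, 0.7297, 0.7105, 0.8683, 0.7406, 0.7314, 0.6232]
delta, mu, lam = 0.25, 0.9075, 3.1288
a = math.exp(-lam * delta)
sigma_hat = 0.5532 * math.sqrt((1.0 - a * a) / (2.0 * lam))  # from annualized sigma
ll_at_mle = ou_loglik(S, delta, mu, lam, sigma_hat)
```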

### Maximum likelihood conditions

The maximum of this log-likelihood surface can be found at the location where all the partial derivatives are zero. This leads to the following set of conditions:

∂L/∂μ = 0  ⟹  μ = (Σ S_i − e^{-λδ} Σ S_{i−1}) / (n (1 − e^{-λδ}))
∂L/∂λ = 0  ⟹  e^{-λδ} = Σ (S_i − μ)(S_{i−1} − μ) / Σ (S_{i−1} − μ)²
∂L/∂σ̂ = 0  ⟹  σ̂² = (1/n) Σ (S_i − μ − e^{-λδ} (S_{i−1} − μ))²

Log-likelihood as a function of μ.

Log-likelihood as a function of λ.

### Solution of the conditions

The problem with these conditions is that their solutions depend on each other. However, both the μ and the λ condition are independent of σ̂, and knowing either μ or λ directly gives the value of the other. The solution for σ̂ can then be found once both μ and λ are determined. To solve these equations it is thus sufficient to find either μ or λ.
Finding μ can be done by substituting the λ condition into the μ condition.

First we rewrite the μ and λ conditions using the same notation as before, i.e. the sums S_x, S_y, S_xx, S_xy, S_yy,

which gives us:

μ = (S_y − e^{-λδ} S_x) / (n (1 − e^{-λδ}))
e^{-λδ} = (S_xy − μ S_x − μ S_y + n μ²) / (S_xx − 2 μ S_x + n μ²)

substituting e^{-λδ} into the μ condition gives

μ n (1 − (S_xy − μ S_x − μ S_y + n μ²) / (S_xx − 2 μ S_x + n μ²)) = S_y − S_x (S_xy − μ S_x − μ S_y + n μ²) / (S_xx − 2 μ S_x + n μ²)

removing denominators (multiplying both sides by S_xx − 2 μ S_x + n μ²)

μ n (S_xx − 2 μ S_x + n μ² − S_xy + μ S_x + μ S_y − n μ²) = S_y (S_xx − 2 μ S_x + n μ²) − S_x (S_xy − μ S_x − μ S_y + n μ²)

collecting terms (the n² μ³ terms on the left cancel)

μ n (S_xx − S_xy) + n μ² (S_y − S_x) = S_y S_xx − S_x S_xy + μ (S_x² − S_x S_y) + n μ² (S_y − S_x)

moving all μ terms to the left (the n μ² terms drop out)

μ (n (S_xx − S_xy) − (S_x² − S_x S_y)) = S_y S_xx − S_x S_xy
### Final results: The maximum likelihood equations

mean:

mean reversion rate:

variance:

with
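These closed-form equations translate directly into code. Here is a hedged Python transcription (the article's own Matlab version follows below), applied to the data of table 1:

```python
import math

def ou_calibrate_ml(S, delta):
    """Closed-form maximum likelihood calibration of an OU process
    (direct transcription of the final ML equations)."""
    n = len(S) - 1
    Sx  = sum(S[:-1])
    Sy  = sum(S[1:])
    Sxx = sum(x * x for x in S[:-1])
    Sxy = sum(x * y for x, y in zip(S[:-1], S[1:]))
    Syy = sum(y * y for y in S[1:])

    mu  = (Sy * Sxx - Sx * Sxy) / (n * (Sxx - Sxy) - (Sx ** 2 - Sx * Sy))
    lam = -math.log((Sxy - mu * Sx - mu * Sy + n * mu ** 2)
                    / (Sxx - 2.0 * mu * Sx + n * mu ** 2)) / delta
    a = math.exp(-lam * delta)
    sigma_hat2 = (Syy - 2.0 * a * Sxy + a ** 2 * Sxx
                  - 2.0 * mu * (1.0 - a) * (Sy - a * Sx)
                  + n * mu ** 2 * (1.0 - a) ** 2) / n
    sigma = math.sqrt(sigma_hat2 * 2.0 * lam / (1.0 - a ** 2))  # annualized sigma
    return mu, sigma, lam

S = [3.0000, 1.7600, 1.2693, 1.1960, 0.9468, 0.9532, 0.6252,
     0.8604, 1.0984, 1.4310, 1.3019, 1.4005, 1.2686, 0.7147,
     0.9237, 0.7297, 0.7105, 0.8683, 0.7406, 0.7314, 0.6232]
mu, sigma, lam = ou_calibrate_ml(S, 0.25)
```

This reproduces the ML parameter estimates of the example below.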

### Example

Calculating the sums based on table 1 we get

| Param | Value |
|--------|---------|
| S_x | 22.5301 |
| S_y | 20.1534 |
| S_xx | 30.8338 |
| S_xy | 25.1973 |
| S_yy | 22.2222 |

These results allow us to recover the model parameters:

| Param | Value |
|-------|--------|
| μ | 0.9075 |
| λ | 3.1288 |
| σ | 0.5532 |

## Matlab Code

### Least Squares Calibration

```matlab
function [mu,sigma,lambda] = OU_Calibrate_LS(S,delta)
  n = length(S)-1;

  Sx  = sum( S(1:end-1) );
  Sy  = sum( S(2:end) );
  Sxx = sum( S(1:end-1).^2 );
  Sxy = sum( S(1:end-1).*S(2:end) );
  Syy = sum( S(2:end).^2 );

  a  = ( n*Sxy - Sx*Sy ) / ( n*Sxx - Sx^2 );
  b  = ( Sy - a*Sx ) / n;
  sd = sqrt( (n*Syy - Sy^2 - a*(n*Sxy - Sx*Sy) )/n/(n-2) );

  lambda = -log(a)/delta;
  mu     = b/(1-a);
  sigma  = sd * sqrt( -2*log(a)/delta/(1-a^2) );
end
```

### Maximum Likelihood Calibration

```matlab
function [mu,sigma,lambda] = OU_Calibrate_ML(S,delta)
  n = length(S)-1;

  Sx  = sum( S(1:end-1) );
  Sy  = sum( S(2:end) );
  Sxx = sum( S(1:end-1).^2 );
  Sxy = sum( S(1:end-1).*S(2:end) );
  Syy = sum( S(2:end).^2 );

  mu     = (Sy*Sxx - Sx*Sxy) / ( n*(Sxx - Sxy) - (Sx^2 - Sx*Sy) );
  lambda = -log( (Sxy - mu*Sx - mu*Sy + n*mu^2) / (Sxx - 2*mu*Sx + n*mu^2) ) / delta;
  a      = exp(-lambda*delta);
  sigmah2 = (Syy - 2*a*Sxy + a^2*Sxx - 2*mu*(1-a)*(Sy - a*Sx) + n*mu^2*(1-a)^2)/n;
  sigma  = sqrt(sigmah2*2*lambda/(1-a^2));
end
```

### Example Usage

```matlab
S = [ 3.0000 1.7600 1.2693 1.1960 0.9468 0.9532 0.6252 ...
      0.8604 1.0984 1.4310 1.3019 1.4005 1.2686 0.7147 ...
      0.9237 0.7297 0.7105 0.8683 0.7406 0.7314 0.6232 ];

delta = 0.25;

[mu1, sigma1, lambda1] = OU_Calibrate_LS(S,delta)
[mu2, sigma2, lambda2] = OU_Calibrate_ML(S,delta)
```

gives

```
mu1     = 0.90748788828331
sigma1  = 0.58307607458526
lambda1 = 3.12873217812387

mu2     = 0.90748788828331
sigma2  = 0.55315453345189
lambda2 = 3.12873217812386
```

1. Jhonny
Posted July 9, 2011 at 2:30 pm | Permalink

Good work! Do you have any reference for this and other process like the CIR or CLKS? Thanks

2. Posted August 11, 2011 at 1:14 pm | Permalink

Hello Thijs, thank you for your excellent work! The values in the N(0,1) column are all positive, which is very strange for normally distributed numbers. Can you please check/explain?

• Thijs van den Berg
Posted August 17, 2011 at 7:51 am | Permalink

Fixed the dropped signs in the N(0,1) column of the top table!

3. Thijs van den Berg
Posted August 11, 2011 at 1:17 pm | Permalink

Good find.
It’s probably something that went wrong when I migrated the page to a new platform. I’ll do the manual boolean search tonight!

Thanks!

4. Thijs van den Berg
Posted August 11, 2011 at 1:22 pm | Permalink
5. Yohan
Posted August 17, 2011 at 7:37 am | Permalink

Nice job Thijs! Btw, I have one novice question:

is the data used historical data? For example, the market interest rate from 2005 to 2010?

6. Thijs van den Berg
Posted August 17, 2011 at 7:42 am | Permalink

Yohan,

Yes, you calibrate it to historical spot rate data.

I’m also working on a very cool calibration method where you make it fit market yields! It’s still in progress though. I can send you yesterday’s draft.

• Yohan
Posted August 17, 2011 at 8:09 am | Permalink

Thanks a lot, I would be happy to see your draft. I have Bloomberg access; in case you don’t use it, if you need any data just let me know.

• Thijs van den Berg
Posted August 17, 2011 at 8:15 am | Permalink

Good to know that you have Bloomberg access! I have access to one too now, but that’s just temporary.

I’ve sent you the draft.

• Yohan
Posted August 17, 2011 at 9:57 am | Permalink

Thanks a lot, yes I received it! My gmail is blocked by company IT policy so I can only reply to you by mail tonight. I tested your matlab function against the one of Damiano Brigo; did you get the same results?
I still have one novice question:
The time step Δt = t_i − t_{i−1}: what is it in the real world? If the market uses YOY inflation data, is dt always one?

• Thijs van den Berg
Posted August 17, 2011 at 10:01 am | Permalink

Yes that’s fine, it’s a convention choice. If you set it to 1 (day) then your mean reversion rate lambda and volatility sigma will be expressed in days. If you want annualized mean reversion rates and volatilities (which is what I would do), then you’ll need to set dt = 1/number-of-observations-per-year; for business days, that’s probably close to 1/252.

• Yohan
Posted August 17, 2011 at 12:20 pm | Permalink

thanks you! this is very helpful

7. Silvio
Posted November 24, 2011 at 6:44 am | Permalink

How to adjust the calibration for the process dSt/St = lambda(mu – ln(St))*dt + sigma*dWt?

8. Fred
Posted November 29, 2011 at 3:32 am | Permalink

Hi Thijs,

I just have a query regarding the data in table 1. When I input delta=0.25, lambda=3.0, mu=1.0, sigma=0.50, S0 = 3, N(0,1) = -1.0268 into the exact solution of the SDE, I get 1.39 as S1. Why am I getting different values? Are the parameters annualized?

Thanks!

• Thijs van den Berg
Posted November 29, 2011 at 7:39 am | Permalink

strange. I just checked, but I still get 1.7600. You just plug in the parameters: Here is an Excel formula that gives 1.7600

=S_*EXP(-lambda_*dt_)+mean_*(1-EXP(-lambda_*dt_))+sigma_*SQRT((1-EXP(-2*lambda_*dt_))/(2*lambda_))*N_

• Fred
Posted November 29, 2011 at 10:19 pm | Permalink

Yes, you’re right, I just forgot to place brackets.

9. Fred
Posted November 29, 2011 at 7:03 am | Permalink

Hi,

Just have another simple question:

I have a series of historical data recorded monthly for 5 years, with units of % per annum. So I have 60 data points, each in % per annum. If I use the above calibration, will the resulting calibrated parameters be monthly or annual values?

Thanks again

• Thijs van den Berg
Posted November 29, 2011 at 7:18 am | Permalink

Hee Fred,

The dimension of the mean reversion rate and volatility would then be monthly too.

• Fred
Posted November 29, 2011 at 10:22 pm | Permalink

If that is the case then, with your example above, shouldn’t you have changed the mean reversion rate and the volatility to an annual value before using them in the exact solution of the SDE with dt as 0.25?

• Thijs van den Berg
Posted November 29, 2011 at 10:33 pm | Permalink

Ah, you’re right, I’ve explained it wrong. Both mean reversion rate (lambda) and volatility (sigma) are annualized!

What I wanted to say is that if you calibrate the model to some data set, and have time labels in either units of years, or number of months.. and if you *then* use the same time unit in simulation, then the results will be exactly the same.

This means that you can just set dt to 1 in both calibration and simulation (if the time-step in both cases is the same).

• Fred
Posted November 29, 2011 at 10:42 pm | Permalink

Thanks, thats what I thought!

I have been doing a similar test with the Cox-Ingersoll-Ross model and I was getting weird results.

Thanks again!

• Thijs van den Berg
Posted November 29, 2011 at 10:45 pm | Permalink

I know, I have the same thing; it’s sometimes not clear (like here too) how to interpret the annualization of parameters. Thanks for bringing it up and posting it here!

10. Gustavo Oliveira
Posted February 5, 2012 at 11:30 am | Permalink

Thijs,

How do you treat messy data? My data doesn’t have periodic intervals or the same quantity for every hour, week or month. Basically I have data where I only know which month it was taken in. Do you have any good reference on how to manage bad data sets?

Regards, and congrats on the excellent work. I’m anxious to implement your OU model and compare it to other models.

Gustavo

• Thijs van den Berg
Posted February 5, 2012 at 12:28 pm | Permalink

Hey Gustavo,

that’s a tough problem! You’ll have to assume some distribution of the timestamp within the interval -e.g. uniform?- and then go through some horrible equations. Maybe it’s best to start out with assuming that the data comes from the mid of each period?

11. Rangga Handika
Posted March 21, 2012 at 2:56 pm | Permalink

Dear Thijs,

Thanks for posting this useful material as well as the code. Currently I am learning one-factor, two-factor and three-factor models. Your post here helps me understand the one-factor (Vasicek or OU) model in a simple way. Thank you very much.

Btw, regarding the questions and responses from Fred (about “annual vs month”): why don’t we always express time in units of years? Thus, if we have monthly data, the delta will be 1/12; for weekly data, the delta will be 1/52. I think this is the safe side, as the common finance literature (e.g. Hull, 2009) always uses T as a fraction of a year (such as in Geometric Brownian Motion, for example). Therefore, the data will produce yearly parameters (mu, lambda, etc.). What do you think? Just my two cents.

Sincerely

• Thijs van den Berg
Posted March 21, 2012 at 3:04 pm | Permalink

Hi Rangga,
Yes I agree, I also think it’s good practice to annualize parameters.

Sometimes it’s a bit weird though: e.g. when you have an intraday pairs trading strategy and you’re working with 1 minute time-steps. In those cases annualized volatility and mean reversion rates will be very big numbers, and those values will be difficult to check intuitively. A volatility of 3 cents per minute might feel right, but an annual volatility of 20 dollars (roughly the same thing) might not be easy to relate to the 1 minute bar chart you’re looking at.

• Rangga Handika
Posted March 21, 2012 at 3:24 pm | Permalink

Hi Thijs,

Yes, when dealing with any microstructure market (i.e. any market with a periodicity of less than daily), the result should be carefully interpreted. For instance, in the power markets, where prices are quoted hourly, it is “strange” to express 1 hour as a fraction of one year (it would be very small, 1/(24*365)). So, perhaps for microstructure markets, the mean reversion and “long term” variance should refer to a certain period of less than 1 year. For example, we express hourly data as a fraction of one day (1/24) so that the parameters refer to one day.

It might be interesting that we realize “long term” is not long term anymore because 1 year for hourly data is “long term” but actually 1 year is not long term. Interesting . . .

Thanks again for your warm response.

Kind regards

• Thijs van den Berg
Posted March 21, 2012 at 3:32 pm | Permalink

Indeed!
Power markets are even more complicated!

E.g. the ‘hourly’ products are hourly w.r.t. time of delivery, not time of trade. If you look at the price of the single hour 14:00-15:00 delivery for next Monday, then that will have a low volatility: there will be little new info in the upcoming days that moves the price of that single hourly delivery. However, when you look at that hour on Sunday, the day before delivery, in the day-ahead market, the hour 14:00-15:00 and the next hourly delivery 15:00-16:00 might have completely different prices (and high volatility if you move from delivery hour to delivery hour). The problem is that you can’t compare those two electricity deliveries; they are not like stocks or other storable goods that you can buy and keep as time progresses. Electricity prices have two time dimensions: trade time and delivery time.

12. Anthony
Posted April 2, 2012 at 10:01 am | Permalink

If you model
d(ln S_t) = \lambda ( \mu – ln S_t ) dt + \sigma dW_t

Are all the steps the same and then you need to just use exponential on the end result:

S_{t+1}=\exp\left(e^{-\lambda\delta}\ln S_{t}+\mu(1-e^{-\lambda\delta})+\sigma\sqrt{\frac{1-e^{-2\lambda\delta}}{2\lambda}}N_{0,1}\right)

• Thijs van den Berg
Posted April 3, 2012 at 7:10 am | Permalink

Indeed. Thanks for pointing out the lognormal extension!

13. Paul
Posted April 6, 2012 at 2:19 am | Permalink

I would like to know if you could explain the difference between the mean reversion rate, lambda, and half-life. I have seen different technical papers related to pairs trading and general mean reverting processes that call out half-life, calculated as log(2)/theta (or alpha from the regression), versus the lambda calculation defined above. Is there a derivation or concept I am missing?

• Thijs van den Berg
Posted April 6, 2012 at 7:08 am | Permalink

In your notation: lambda = mean reversion rate = theta.

The mean reversion rate is the factor that determines how strongly the process gets pulled back to its mean. The half-life is related to that: it’s the amount of time it takes (on average) for the process to get pulled half-way back to the mean. If one is big, the other is small. Your equation ln(2)/lambda is correct and gives the half-life.
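A tiny numeric illustration of that relation (Python sketch, using the λ ≈ 3.1288 estimated in the article above):

```python
import math

# Half-life of mean reversion: the expected time for the process to close
# half of the gap to its long-term mean, since E[S_t] - mu = (S_0 - mu) e^{-lambda t}.
def half_life(lam):
    return math.log(2.0) / lam

# With the annualized mean reversion rate lambda = 3.1288 estimated earlier,
# the half-life is about 0.22 years, i.e. roughly 81 days.
hl = half_life(3.1288)
```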

14. Luke_G
Posted April 15, 2012 at 1:55 pm | Permalink

Hi,

I just came across your webpage! You are doing a great job at explaining things! Regarding the calibration of the OU process: is there, or do you have, a similarly elegant explanation of OLS for the CIR process? I was searching the web, but to no avail.

kind regards

• Thijs van den Berg
Posted April 15, 2012 at 4:13 pm | Permalink

Hi Luke,
Ai.. no sorry, I don’t have that. But thanks for enjoying this one. Maybe CIR is more complicated?

15. George Mylnikov
Posted May 3, 2012 at 2:40 pm | Permalink

Thijs,
Thanks a lot for a very helpful site and for the code. Question: do you know of a good reference for
the estimation of multivariate OU processes? I did find a number of papers on this subject but they
all go into much more detail than I need. I am looking for a simple set of formulas – just like you have for
the univariate case.
Cheers

• Thijs van den Berg
Posted May 3, 2012 at 2:56 pm | Permalink

Hee George,

I don’t have that ready, but I think you can easily do it this way:
* first calibrate each time series individually
* then use your data and the calibrated model parameters to back out the noise sequences per time series
* then model those as a multivariate normal distribution (i.e. calculate the covariance matrix)

After that you can do simulations by generating correlated normal variates using the covariance matrix from the last step, and then feed those values in to generate the OU sequences.
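For illustration, the steps above can be sketched in Python (hypothetical helper names, not from the original comment): back out the standardized noise per series with the calibrated parameters, then estimate the joint correlation of the noise sequences.

```python
import math

def ou_path(eps, s0, delta, lam, mu, sigma):
    """Generate an OU path from a given noise sequence (exact discretization)."""
    a = math.exp(-lam * delta)
    b = mu * (1.0 - a)
    c = sigma * math.sqrt((1.0 - a * a) / (2.0 * lam))
    path = [s0]
    for e in eps:
        path.append(a * path[-1] + b + c * e)
    return path

def residuals(S, delta, lam, mu, sigma):
    """Back out the standardized noise sequence from an OU path,
    given already-calibrated model parameters."""
    a = math.exp(-lam * delta)
    b = mu * (1.0 - a)
    c = sigma * math.sqrt((1.0 - a * a) / (2.0 * lam))
    return [(S[i] - a * S[i - 1] - b) / c for i in range(1, len(S))]

def correlation(x, y):
    """Sample correlation of two equally long sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((u - mx) * (v - my) for u, v in zip(x, y)) / n
    vx = sum((u - mx) ** 2 for u in x) / n
    vy = sum((v - my) ** 2 for v in y) / n
    return cov / math.sqrt(vx * vy)
```

With the per-series residuals in hand, the covariance (or correlation) matrix of the multivariate normal can be estimated, and correlated variates fed back through `ou_path` for simulation.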

16. George Mylnikov
Posted May 3, 2012 at 9:17 pm | Permalink

Thanks again, Thijs!

17. michael colacino
Posted May 6, 2012 at 6:04 pm | Permalink

Thijs: Thanks for the great blog. Question: I have seen analytic solutions to the OU SDE but they all have a stochastic integral as the rightmost term. Your solution doesn’t have that, and I haven’t seen this version explained anywhere. Any citation to help me understand how you go from the stochastic integral to the deterministic function times a standard normal variate? TIA.

• Thijs van den Berg
Posted May 6, 2012 at 10:58 pm | Permalink

Hee Michael,

Some stochastic integrals have exact analytical solutions, and the OU one is among them. Solving stochastic integrals is a bit technical, but a method I started out with is to discretize the integral with Euler into small time steps with little normally distributed variates, and then see if the sum is solvable when you let the step size go to zero.

Hope that helped a bit..

18. John
Posted May 10, 2012 at 9:20 pm | Permalink

Hi Thijs, this is great work!

I was wondering if you had matlab code to simulate the process using the solution to the SDE? I’m guessing you used Excel for your example above, but the matlab code would be useful for Monte Carlo analysis. I’m trying to build it myself at the moment, but without much success.

Cheers

• Thijs van den Berg
Posted May 10, 2012 at 10:01 pm | Permalink

Hee John,

I have this function. It works nicely, but if you want performance then that should/could be improved (but I don’t have the time now). Good luck!
```matlab
function [ou] = OrnsteinUhlenbeck(S0, dt, N, sigma, lambda, mu)
  a = exp(-lambda*dt);
  b = mu*(1-exp(-lambda*dt));
  c = sigma*sqrt( (1-exp(-2*lambda*dt))/(2*lambda) );

  ou = zeros(N,1);
  ou(1) = S0;
  for i=2:N
    ou(i) = a*ou(i-1) + b + c*randn(1);
  end
end
```

19. John
Posted May 11, 2012 at 9:56 am | Permalink

Thanks Thijs, ideally I want a function to do X amount of simulations and output the data, such as

function [ou] = OrnsteinUhlenbeckSim(NumSim, S0, dt, N, sigma, lambda, mu)

to save me time running lots of simulations manually but as a Matlab beginner I can’t get this working.

Also, I was wondering if you knew how to calibrate Geometric Brownian Motion to find values for mu & sigma for a given data set? Using either of the methods in this article or some other method.

Cheers

• Thijs van den Berg
Posted May 11, 2012 at 10:05 am | Permalink

Hee John,
I currently lack the time to work on those, I have tons of work to do, sorry!

Probably one of the best investments you can make is to try and acquire coding skills; it makes you independent and quicker at testing your future ideas, right?

Anyway, good luck!

20. Stat
Posted May 16, 2012 at 9:47 pm | Permalink

Hello,
Good job

I wanted to ask: what should delta be equal to if I have daily data?

delta=1 or delta=1/252?

thank u
Best regards

• Thijs van den Berg
Posted May 17, 2012 at 8:34 am | Permalink

I would use 1/252, but it really doesn’t matter as long as you use the same delta consistently for simulation etc.

21. Steven
Posted May 31, 2012 at 3:24 pm | Permalink

Hi,
Thanks for your very good job,

Maybe I made a mistake but when I try to calculate the standard deviation ( sd(epsilon) ), I find the denominator in n² and not in n*(n-2).

Made it several times and still not understanding the denominator.

Do you have any idea ?

Steven

• Thijs van den Berg
Posted May 31, 2012 at 5:35 pm | Permalink

Hi Steven, that has to do with correcting for bias in sample-based estimation of variance. In particular, check out the Wikipedia section on degrees of freedom in linear regression, and, for the concept in a different application, read a bit about “Bessel’s correction”.

22. Kate
Posted June 7, 2012 at 7:42 am | Permalink

Hi Thijs! Thank you for the work above. I was working with Libor rates, trying to calibrate them to the Vasicek model using least squares regression. For some periods in my data, parameter a is slightly more than one. This leads to a negative lambda and sometimes a negative mu (depending on parameter b). This apparently doesn’t make much sense (since lambda is sort of 1/time, and the historical average value over the period is positive). Do you have any thoughts on why this is happening and whether there is any way this can be amended?

• Thijs van den Berg
Posted June 7, 2012 at 7:59 am | Permalink

Hi Kate!

A value of a>1 implies the opposite of mean reversion; it’s called “persistence”: an increase in the Libor rate will typically be followed by another increase. There is not much you can do about that. What you’re seeing is that the data and your model assumption sometimes don’t match (so your model choice is wrong for those periods).

Maybe you can solve it like this:
* you pre-assume that the Libor rate will show mean reverting behaviour in the future, even though it did not do that all the time in the past. The question then becomes: what historical periods are representative of your expected future behavior, and thus good for use in calibration? Maybe that assumption will allow you to exclude the periods causing problems from your calibration.

Good luck!

• Kate
Posted June 7, 2012 at 10:36 am | Permalink

Thank you Thijs for such a quick and useful response! That thing drove me crazy…

• Kate
Posted November 14, 2012 at 12:51 pm | Permalink

Hi Thijs!

I was thinking about the above problem with persistence again, and was wondering if one can detect it in the MLE calibration as easily as we see it in the least squares regression approach (a>1)? Can you think of a criterion?

Also, do you have any general thoughts about the advantages/disadvantages of these two approaches? I mean, which one would you rather use and why?

Best Regards,
Kate

• Thijs van den Berg
Posted November 14, 2012 at 1:01 pm | Permalink

Hi Kate,

So what happens when you do a MLE on a persistent time series? It doesn’t work out of the box?

This has been a long time and I’m currently breaking my head on something else in the limited time I have, so my brain is a bit distracted

I use MLE a lot lately, but I think MLE can be biased compared to least squares; on the other hand least squares is a bit of an arbitrary measure, whereas MLE gives the most likely value, right?

Anyway: I don’t have strong opinions about which one is better.

• Kate
Posted November 14, 2012 at 1:47 pm | Permalink

Thanks Thijs,

No, it does work, though it generates not-so-nice results. And while for least squares I could just say “the model is not working here” when a>1, for MLE I am just left with huge and ugly numbers. I will post it here if I can come up with a criterion.

23. Kate
Posted June 8, 2012 at 9:07 am | Permalink

Hi Thijs,

Just noticed a tiny typo and felt like contributing my 0.05 cents into this excellent work.

In the second part for MLE method you are doing all the manipulations in terms of S_i and S_(i-1) except for the second formula where you have f(S_(i+1), S_i) on the left hand side as well as in the definition above the formula.

Thanks,
Kate

• Thijs van den Berg
Posted June 8, 2012 at 1:39 pm | Permalink

Thanks Kate! Thanks for helping, I’ll fix it.

Cheers,
Thijs

24. zhongrong zheng
Posted August 16, 2012 at 9:14 pm | Permalink

It is not easy to show that the formula for both mu and lambda from the two calibration methods are the same. So, we see the same results from your example.

25. zhongrong zheng
Posted August 17, 2012 at 12:55 am | Permalink

Sorry. I actually want to comment that it is not difficult to show that the formulas for mu (and lambda) from the two methods are equivalent, hence they must lead to the same results as we see from the given example.

26. Sami
Posted October 21, 2012 at 5:31 am | Permalink

Hello,
Can you give me code to calibrate the Cox-Ingersoll-Ross model to market data?

Thank you very much.

27. Hana
Posted November 2, 2012 at 8:35 pm | Permalink

Hello,

I have calibrated the Vasicek model to daily data, using dt=1/252 to annualize. Now I would like to simulate the model, but I do not know which dt to use for the simulations. I would prefer monthly simulations; is that possible, or do I have to use the same dt in simulations as in calibration?

Thanks a lot, I am hopeless as I cannot find a clue anywhere.

• Thijs van den Berg
Posted November 3, 2012 at 10:31 am | Permalink

That’s easy, you can set dt to 1/12 for simulations.

28. florelle
Posted December 5, 2012 at 10:21 pm | Permalink

I’m trying to estimate the parameters of the Vasicek process with your matlab code (least squares and MLE), but I don’t find plausible values. What should delta be equal to if I have daily data?

* if I use delta=250, I get a value for lambda and a very small sigma.

* if I use delta=1/250, I get a very large value for lambda (theta in my case).