Fit Gaussian Process
This example shows how to fit a Gaussian process to model stellar activity in RV data. It continues from Basic RV Fit.
Radial velocity modelling is supported in Octofitter via the extension package OctofitterRadialVelocity. To install it, run:
pkg> add OctofitterRadialVelocity
For this example, we will fit the orbit of the planet K2-131 b, reproducing the fit from the RadVel Gaussian Process Fitting tutorial.
We will use the following packages:
using Octofitter
using OctofitterRadialVelocity
using PlanetOrbits
using CairoMakie
using PairPlots
using CSV
using DataFrames
using Distributions
We will start by downloading and preparing a table of radial velocity measurements, and create a StarAbsoluteRVLikelihood object to hold them.
The following functions allow you to directly load data from various public RV databases:
HARPS_DR1_rvs("star-name")
HARPS_RVBank_observations("star-name")
Lick_rvs("star-name")
HIRES_rvs("star-name")
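For example, archival measurements could be loaded directly by star name. This is a minimal sketch; the star name below is a placeholder, and you would substitute any target present in the given database:

```julia
using Octofitter, OctofitterRadialVelocity

# Placeholder star name -- substitute any target present in the HARPS RV Bank.
rvs = HARPS_RVBank_observations("GJ 436")
```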
If you would like to manually specify RV data, use the following format:
rv_like = StarAbsoluteRVLikelihood(
    # epoch is in units of MJD. `jd2mjd` is a helper function to convert
    # from Julian days. You can also use e.g. `years2mjd(2016.1231)`.
    # rv and σ_rv are in units of meters/second.
    (epoch=jd2mjd(2455110.97985), rv=-6.54, σ_rv=1.30),
    (epoch=jd2mjd(2455171.90825), rv=-3.33, σ_rv=1.09),
    # ... etc.
    instrument_name="insert name here",
    offset=:rv0_harps,
    jitter=:jitter_harps,
)
StarAbsoluteRVLikelihood Table with 3 columns and 2 rows:
epoch rv σ_rv
┌─────────────────────
1 │ 55110.5 -6.54 1.3
2 │ 55171.4 -3.33 1.09
For this example, to replicate the results of RadVel, we will download their example data for K2-131 and format it for Octofitter:
rv_file = download("https://raw.githubusercontent.com/California-Planet-Search/radvel/master/example_data/k2-131.txt")
rv_dat_raw = CSV.read(rv_file, DataFrame, delim=' ')
rv_dat = DataFrame();
rv_dat.epoch = jd2mjd.(rv_dat_raw.time)
rv_dat.rv = rv_dat_raw.mnvel
rv_dat.σ_rv = rv_dat_raw.errvel
tels = sort(unique(rv_dat_raw.tel))
# This table includes data from two instruments. We create a separate
# likelihood object for each:
rvlike_harps = StarAbsoluteRVLikelihood(
    rv_dat[rv_dat_raw.tel .== "harps-n", :],
    instrument_name="harps-n",
    offset=:rv0_harps,
    jitter=:jitter_harps,
)
rvlike_pfs = StarAbsoluteRVLikelihood(
    rv_dat[rv_dat_raw.tel .== "pfs", :],
    instrument_name="pfs",
    offset=:rv0_pfs,
    jitter=:jitter_pfs,
)
StarAbsoluteRVLikelihood Table with 3 columns and 31 rows:
epoch rv σ_rv
┌──────────────────────
1 │ 57828.4 -45.17 3.88
2 │ 57828.4 -36.47 4.03
3 │ 57829.2 -20.52 3.55
4 │ 57829.2 -4.58 6.07
5 │ 57830.1 17.71 3.56
6 │ 57830.1 23.34 3.48
7 │ 57830.2 21.34 3.51
8 │ 57830.2 19.92 3.26
9 │ 57830.2 -0.74 3.44
10 │ 57830.3 16.9 3.5
11 │ 57830.3 8.25 4.64
12 │ 57830.4 -4.06 4.91
13 │ 57832.1 -29.51 4.96
14 │ 57832.1 -21.56 4.11
15 │ 57832.2 -25.76 5.59
16 │ 57833.1 2.1 3.91
17 │ 57833.1 4.38 3.73
⋮ │ ⋮ ⋮ ⋮
Gaussian Process Fit
Let us now add a Gaussian process to model stellar activity. This should improve the fit.
We start by writing a function that creates a Gaussian process kernel from a set of system parameters. We will create a quasi-periodic kernel.
using AbstractGPs
function gauss_proc_quasi_periodic(θ_system)
    # θ_system is a named tuple of variable values for this posterior draw.
    # Note: for multiple instruments, you can supply different Gaussian process
    # functions that e.g. use different hyper-parameter names.
    η₁ = θ_system.gp_η₁ # amplitude (h)
    η₂ = θ_system.gp_η₂ # exponential decay length scale (λ)
    η₃ = θ_system.gp_η₃ # period (θ)
    η₄ = θ_system.gp_η₄ # periodic length scale (ω)
    kernel = η₁^2 *
        (SqExponentialKernel() ∘ ScaleTransform(1/η₂)) *
        (PeriodicKernel(r=[η₄]) ∘ ScaleTransform(1/η₃))
    gp = GP(kernel)
    return gp
end
gauss_proc_quasi_periodic (generic function with 1 method)
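To sanity-check the kernel outside of a fit, you can call the function with concrete hyperparameter values and evaluate the resulting GP on some toy data using the standard AbstractGPs interface. The values below are arbitrary, chosen for illustration only:

```julia
using AbstractGPs

# Arbitrary illustrative hyper-parameter values (not fit results):
θ_demo = (gp_η₁=25.0, gp_η₂=13.4, gp_η₃=9.6, gp_η₄=0.38)
gp = gauss_proc_quasi_periodic(θ_demo)

t = [0.0, 1.0, 2.5, 4.0]   # toy epochs [days]
fx = gp(t, 3.0^2)          # finite GP with per-point observation noise variance
logpdf(fx, [1.0, -2.0, 0.5, 3.0])  # log-density of some toy RVs under the GP
```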
Now we provide this function when constructing each likelihood object:
rvlike_harps = StarAbsoluteRVLikelihood(
    rv_dat[rv_dat_raw.tel .== "harps-n", :],
    instrument_name="harps-n",
    offset=:rv0_harps,
    jitter=:jitter_harps,
    # NEW:
    gaussian_process = gauss_proc_quasi_periodic
)
rvlike_pfs = StarAbsoluteRVLikelihood(
    rv_dat[rv_dat_raw.tel .== "pfs", :],
    instrument_name="pfs",
    offset=:rv0_pfs,
    jitter=:jitter_pfs,
    # NEW:
    gaussian_process = gauss_proc_quasi_periodic
)
StarAbsoluteRVLikelihood Table with 3 columns and 31 rows:
epoch rv σ_rv
┌──────────────────────
1 │ 57828.4 -45.17 3.88
2 │ 57828.4 -36.47 4.03
3 │ 57829.2 -20.52 3.55
4 │ 57829.2 -4.58 6.07
5 │ 57830.1 17.71 3.56
6 │ 57830.1 23.34 3.48
7 │ 57830.2 21.34 3.51
8 │ 57830.2 19.92 3.26
9 │ 57830.2 -0.74 3.44
10 │ 57830.3 16.9 3.5
11 │ 57830.3 8.25 4.64
12 │ 57830.4 -4.06 4.91
13 │ 57832.1 -29.51 4.96
14 │ 57832.1 -21.56 4.11
15 │ 57832.2 -25.76 5.59
16 │ 57833.1 2.1 3.91
17 │ 57833.1 4.38 3.73
⋮ │ ⋮ ⋮ ⋮
Note that the two instruments do not need to use the same Gaussian process kernels, nor the same hyperparameter names.
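For instance, one instrument could use its own kernel function with separately named hyperparameters. This is a hypothetical sketch: the `pfs_gp_h` and `pfs_gp_λ` names are invented here, and would need matching priors added to the system model:

```julia
using AbstractGPs

# Hypothetical second kernel: a plain squared-exponential with its own
# hyper-parameter names, to be used only for one instrument.
function gauss_proc_sq_exp_pfs(θ_system)
    h = θ_system.pfs_gp_h   # amplitude (hypothetical parameter name)
    λ = θ_system.pfs_gp_λ   # length scale (hypothetical parameter name)
    kernel = h^2 * (SqExponentialKernel() ∘ ScaleTransform(1/λ))
    return GP(kernel)
end
```

It would then be supplied via `gaussian_process = gauss_proc_sq_exp_pfs` when constructing the likelihood object for that instrument.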
We create a new system model with the added parameters gp_η₁, gp_η₂, etc. for the Gaussian process fit.
gp_explength_mean = 9.5*sqrt(2.) # sqrt(2)*tau in Dai+ 2017 [days]
gp_explength_unc = 1.0*sqrt(2.)
gp_perlength_mean = sqrt(1. /(2. *3.32)) # sqrt(1/(2*gamma)) in Dai+ 2017
gp_perlength_unc = 0.019
gp_per_mean = 9.64 # T_bar in Dai+ 2017 [days]
gp_per_unc = 0.12
# No change to planet model
@planet b RadialVelocityOrbit begin
    e = 0
    ω = 0.0
    P ~ truncated(Normal(0.3693038/365.25, 0.0000091/365.25), lower=0.00001)
    a = cbrt(system.M * b.P^2)
    mass ~ LogUniform(0.001, 10)
    τ ~ UniformCircular(1.0)
    tp = b.τ*b.P*365.25 + 57782 # reference epoch for τ. Choose an MJD date near your data.
end
@system k2_132_gp begin
    M ~ truncated(Normal(0.82, 0.02), lower=0.1) # (Baines & Armstrong 2011).
    rv0_harps ~ Normal(-6693, 100) # m/s
    rv0_pfs ~ Normal(0, 100) # m/s
    jitter_harps ~ LogUniform(0.1, 100) # m/s
    jitter_pfs ~ LogUniform(0.1, 100) # m/s
    # Add priors on GP kernel hyper-parameters.
    gp_η₁ ~ truncated(Normal(25, 10), lower=0.1, upper=100)
    # Important: ensure the period and exponential length scales
    # have physically plausible lower and upper limits to avoid poor numerical conditioning.
    gp_η₂ ~ truncated(Normal(gp_explength_mean, gp_explength_unc), lower=5, upper=100)
    gp_η₃ ~ truncated(Normal(gp_per_mean, 1), lower=2, upper=100)
    gp_η₄ ~ truncated(Normal(gp_perlength_mean, gp_perlength_unc), lower=0.2, upper=10)
end rvlike_harps rvlike_pfs b
model = Octofitter.LogDensityModel(k2_132_gp)
LogDensityModel for System k2_132_gp of dimension 13 and 71 epochs with fields .ℓπcallback and .∇ℓπcallback
Sample from the model as before:
# Seed the random number generator
using Random
rng = Random.Xoshiro(0)
chain = octofit(
    rng, model,
    adaptation = 100,
    iterations = 100,
)
Chains MCMC chain (100×31×1 Array{Float64, 3}):
Iterations = 1:1:100
Number of chains = 1
Samples per chain = 100
Wall duration = 23.24 seconds
Compute duration = 23.24 seconds
parameters = M, rv0_harps, rv0_pfs, jitter_harps, jitter_pfs, gp_η₁, gp_η₂, gp_η₃, gp_η₄, b_P, b_mass, b_τy, b_τx, b_e, b_ω, b_a, b_τ, b_tp
internals = n_steps, is_accept, acceptance_rate, hamiltonian_energy, hamiltonian_energy_error, max_hamiltonian_energy_error, tree_depth, numerical_error, step_size, nom_step_size, is_adapt, loglike, logpost, tree_depth, numerical_error
Summary Statistics
parameters mean std mcse ess_bulk ess_tail rha ⋯
Symbol Float64 Float64 Float64 Float64 Float64 Float6 ⋯
M 0.8237 0.0221 0.0022 103.6751 50.7015 1.028 ⋯
rv0_harps -6693.7252 14.4947 1.5702 109.2935 34.3628 1.001 ⋯
rv0_pfs -4.2816 18.2767 1.6652 117.5778 77.8272 1.000 ⋯
jitter_harps 0.9268 0.7699 0.0812 74.4450 70.0289 1.035 ⋯
jitter_pfs 4.9464 1.4359 0.1299 115.4331 70.0289 1.029 ⋯
gp_η₁ 27.5481 4.8747 0.4063 164.2684 66.9638 0.993 ⋯
gp_η₂ 13.5149 1.3254 0.1413 87.5723 78.3393 0.999 ⋯
gp_η₃ 9.3317 0.9602 0.1160 66.7448 52.4503 1.013 ⋯
gp_η₄ 0.3839 0.0202 0.0031 40.1885 56.8419 1.023 ⋯
b_P 0.0010 0.0000 0.0000 54.6935 54.8565 1.107 ⋯
b_mass 0.0202 0.0049 0.0005 82.6868 36.6043 1.018 ⋯
b_τy 0.6556 0.1739 0.0154 125.0497 65.4098 0.997 ⋯
b_τx -0.7491 0.1405 0.0129 117.3981 76.1462 0.991 ⋯
b_e 0.0000 0.0000 NaN NaN NaN Na ⋯
b_ω 0.0000 0.0000 NaN NaN NaN Na ⋯
b_a 0.0094 0.0001 0.0000 103.7543 50.7015 1.027 ⋯
b_τ -0.1360 0.0317 0.0027 140.8424 96.8032 0.998 ⋯
b_tp 57781.9498 0.0117 0.0010 141.1035 96.8032 0.998 ⋯
2 columns omitted
Quantiles
parameters 2.5% 25.0% 50.0% 75.0% 97.5 ⋯
Symbol Float64 Float64 Float64 Float64 Float6 ⋯
M 0.7825 0.8091 0.8269 0.8354 0.863 ⋯
rv0_harps -6717.5457 -6703.3097 -6694.3497 -6686.0560 -6661.254 ⋯
rv0_pfs -39.5650 -16.5157 -5.5686 9.4367 27.552 ⋯
jitter_harps 0.1221 0.2900 0.7096 1.3855 2.816 ⋯
jitter_pfs 2.3851 3.9614 4.8908 5.8224 8.093 ⋯
gp_η₁ 20.2603 24.0444 26.8302 30.9530 37.858 ⋯
gp_η₂ 10.7921 12.6386 13.6720 14.2941 16.284 ⋯
gp_η₃ 7.5112 8.7273 9.2850 9.9790 11.191 ⋯
gp_η₄ 0.3435 0.3688 0.3849 0.3981 0.420 ⋯
b_P 0.0010 0.0010 0.0010 0.0010 0.001 ⋯
b_mass 0.0104 0.0171 0.0202 0.0232 0.030 ⋯
b_τy 0.3166 0.5339 0.6831 0.7762 0.970 ⋯
b_τx -1.0250 -0.8412 -0.7654 -0.6596 -0.465 ⋯
b_e 0.0000 0.0000 0.0000 0.0000 0.000 ⋯
b_ω 0.0000 0.0000 0.0000 0.0000 0.000 ⋯
b_a 0.0093 0.0094 0.0095 0.0095 0.009 ⋯
b_τ -0.1968 -0.1582 -0.1335 -0.1138 -0.080 ⋯
b_tp 57781.9273 57781.9416 57781.9507 57781.9580 57781.970 ⋯
1 column omitted
For real data, we would want to increase the adaptation and iterations to about 1000 each.
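For reference, a production-quality run might look like the following (identical to the call above, with more warm-up and sampling iterations; wall time will of course be correspondingly longer):

```julia
using Random
rng = Random.Xoshiro(0)

chain = octofit(
    rng, model,
    adaptation = 1000,
    iterations = 1000,
)
```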
Plot one sample from the results:
fig = Octofitter.rvpostplot(model, chain) # saved to "k2_132-rvpostplot.png"
Plot many samples from the results:
fig = octoplot(
    model,
    chain,
    # Some optional tweaks to the appearance:
    N=50,         # only plot 50 samples
    figscale=1.5, # make it larger
    alpha=0.05,   # make each sample more transparent
    colormap="#0072b2",
) # saved to "k2_132-plot-grid.png"
Corner plot:
octocorner(model, chain, small=true) # saved to "k2_132-pairplot-small.png"