Other vignettes available

If this vignette is being viewed on CRAN, note that many other vignettes describing how to use sdmTMB are available on the documentation site under Articles.

Notation conventions

This appendix uses the following notation conventions, which generally follow the guidance in Edwards & Auger-Méthé (2019):

  • Greek symbols for parameters,

  • the Latin/Roman alphabet for data (except $\boldsymbol{Q}$ and $\boldsymbol{H}$, which are used by convention),

  • bold symbols for vectors or matrices (e.g., $\boldsymbol{\omega}$ is a vector and $\omega_{\boldsymbol{s}}$ is the value of $\boldsymbol{\omega}$ at point in space $\boldsymbol{s}$),

  • $\phi$ for all distribution dispersion parameters for consistency with the code,

  • $\mathbb{E}[y]$ to define the expected value (mean) of variable $y$,

  • $\mathrm{Var}[y]$ to define the expected variance of the variable $y$,

  • a $*$ superscript represents interpolated or projected values as opposed to values at knot locations (e.g., $\boldsymbol{\omega}$ vs. $\boldsymbol{\omega}^*$), and

  • where possible, notation has been chosen to match VAST (Thorson 2019) to maintain consistency (e.g., $\boldsymbol{\omega}$ for spatial fields and $\boldsymbol{\epsilon}_t$ for spatiotemporal fields).

We include tables of all major indices (Table 1) and symbols (Table 2).

Table 1: Subscript notation

| Symbol | Description |
|---|---|
| $\boldsymbol{s}$ | Index for space; a vector of x and y coordinates |
| $t$ | Index for time |
| $g$ | Group |
Table 2: Symbol notation, code representation (in model output or in model template code), and descriptions.

| Symbol | Code | Description |
|---|---|---|
| $y$ | y_i | Observed response data |
| $\mu$ | mu_i | Mean |
| $\phi$ | phi | A dispersion parameter for a distribution |
| $f$ | fit$family$link | Link function |
| $f^{-1}$ | fit$family$linkinv | Inverse link function |
| $\boldsymbol{\beta}$ | b_j | Parameter vector |
| $\boldsymbol{X}$ | X_ij | A predictor model matrix |
| $O_{\boldsymbol{s},t}$ | offset | An offset variable at point $\boldsymbol{s}$ and time $t$ |
| $\omega_{\boldsymbol{s}}$ | omega_s | Spatial random field at point $\boldsymbol{s}$ (knot) |
| $\omega_{\boldsymbol{s}}^*$ | omega_s_A | Spatial random field at point $\boldsymbol{s}$ (interpolated) |
| $\zeta_{\boldsymbol{s}}$ | zeta_s | Spatially varying coefficient random field at point $\boldsymbol{s}$ (knot) |
| $\zeta_{\boldsymbol{s}}^*$ | zeta_s_A | Spatially varying coefficient random field at point $\boldsymbol{s}$ (interpolated) |
| $\epsilon_{\boldsymbol{s},t}$ | epsilon_st | Spatiotemporal random field at point $\boldsymbol{s}$ and time $t$ (knot) |
| $\epsilon_{\boldsymbol{s},t}^*$ | epsilon_st_A | Spatiotemporal random field at point $\boldsymbol{s}$ and time $t$ (interpolated) |
| $\delta_{\boldsymbol{s},t}$ | b_t | AR(1) or random walk spatiotemporal deviations (knot) |
| $\alpha_g$ | RE | IID random intercept deviation for group $g$ |
| $\boldsymbol{\Sigma}_\omega$ | - | Spatial random field covariance matrix |
| $\boldsymbol{\Sigma}_\zeta$ | - | Spatially varying coefficient random field covariance matrix |
| $\boldsymbol{\Sigma}_\epsilon$ | - | Spatiotemporal random field covariance matrix |
| $\boldsymbol{Q}_\omega$ | Q_s | Spatial random field precision matrix |
| $\boldsymbol{Q}_\zeta$ | Q_s | Spatially varying coefficient random field precision matrix |
| $\boldsymbol{Q}_\epsilon$ | Q_st | Spatiotemporal random field precision matrix |
| $\sigma_\alpha^2$ | sigma_G | IID random intercept variance |
| $\sigma_\epsilon^2$ | sigma_E | Spatiotemporal random field marginal variance |
| $\sigma_\omega^2$ | sigma_O | Spatial random field marginal variance |
| $\sigma_\zeta^2$ | sigma_Z | Spatially varying coefficient random field marginal variance |
| $\kappa_\omega$ | kappa(0) | Spatial decorrelation rate |
| $\kappa_\epsilon$ | kappa(1) | Spatiotemporal decorrelation rate |
| $\rho$ | ar1_rho | Correlation between random fields in subsequent time steps |
| $\rho_\gamma$ | rho_time | Correlation between time-varying coefficients in subsequent time steps |
| $\boldsymbol{A}$ | A | Sparse projection matrix to interpolate between knot and data locations |
| $\boldsymbol{H}$ | H | 2-parameter rotation matrix used to define anisotropy |

sdmTMB model structure

The complete sdmTMB model can be written as

$$
\begin{aligned}
\mathbb{E}[y_{\boldsymbol{s},t}] &= \mu_{\boldsymbol{s},t},\\
\mu_{\boldsymbol{s},t} &= f^{-1} \left( \boldsymbol{X}^{\mathrm{main}}_{\boldsymbol{s},t} \boldsymbol{\beta} + O_{\boldsymbol{s},t} + \alpha_g + \boldsymbol{X}^{\mathrm{tvc}}_{\boldsymbol{s},t} \boldsymbol{\gamma_t} + \boldsymbol{X}^{\mathrm{svc}}_{\boldsymbol{s},t} \zeta_{\boldsymbol{s}} + \omega_{\boldsymbol{s}} + \epsilon_{\boldsymbol{s},t} \right),
\end{aligned}
$$

where

  • $y_{\boldsymbol{s},t}$ represents the response data at point $\boldsymbol{s}$ and time $t$;
  • $\mu$ represents the mean;
  • $f$ represents a link function (e.g., log or logit) and $f^{-1}$ represents its inverse;
  • $\boldsymbol{X}^{\mathrm{main}}$, $\boldsymbol{X}^{\mathrm{tvc}}$, and $\boldsymbol{X}^{\mathrm{svc}}$ represent design matrices (the superscript identifiers 'main' = main effects, 'tvc' = time-varying coefficients, and 'svc' = spatially varying coefficients);
  • $\boldsymbol{\beta}$ represents a vector of fixed-effect coefficients;
  • $O_{\boldsymbol{s},t}$ represents an offset: a covariate (usually log transformed) with a coefficient fixed at one;
  • $\alpha_g$ represents random intercepts by group $g$, $\alpha_g \sim \mathrm{N}(0, \sigma^2_\alpha)$;
  • $\gamma_t$ represents time-varying coefficients (a random walk), $\gamma_t \sim \mathrm{N}(\gamma_{t-1}, \sigma^2_\gamma)$;
  • $\zeta_{\boldsymbol{s}}$ represents spatially varying coefficients (a random field), $\zeta_{\boldsymbol{s}} \sim \mathrm{MVN}(\boldsymbol{0}, \boldsymbol{\Sigma}_\zeta)$;
  • $\omega_{\boldsymbol{s}}$ represents a spatial component (a random field), $\omega_{\boldsymbol{s}} \sim \mathrm{MVN}(\boldsymbol{0}, \boldsymbol{\Sigma}_\omega)$; and
  • $\epsilon_{\boldsymbol{s},t}$ represents a spatiotemporal component (a random field), $\epsilon_{\boldsymbol{s},t} \sim \mathrm{MVN}(\boldsymbol{0}, \boldsymbol{\Sigma}_\epsilon)$.

A single sdmTMB model will rarely, if ever, contain all of the above components. Next, we will split the model to describe the various parts in more detail using '$\ldots$' to represent the other optional components.

Main effects

$$
\mu_{\boldsymbol{s},t} = f^{-1} \left( \boldsymbol{X}^{\mathrm{main}}_{\boldsymbol{s},t} \boldsymbol{\beta} \ldots \right)
$$

Within sdmTMB(), $\boldsymbol{X}^{\mathrm{main}}_{\boldsymbol{s},t} \boldsymbol{\beta}$ is defined by the formula argument and represents the main-effect model matrix and a corresponding vector of coefficients. This main-effect formula can contain optional penalized smoothers or non-linear functions as defined below.

Smoothers

Smoothers in sdmTMB are implemented with the same formula syntax familiar to mgcv (Wood 2017) users fitting GAMs (generalized additive models). Smooths are implemented in the formula using + s(x), which implements a smooth from mgcv::s(). Within these smooths, the same syntax commonly used in mgcv::s() can be applied, e.g., 2-dimensional smooths may be constructed with + s(x, y); smooths can be specific to various factor levels, + s(x, by = group); smooths can vary according to a continuous variable, + s(x, by = x2); the basis function dimensions may be specified, e.g., + s(x, k = 4) (see ?mgcv::choose.k); and various types of splines may be constructed such as cyclic splines to model seasonality, e.g., + s(month, bs = "cc", k = 12).
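
For example, a minimal sketch of a penalized smoother in an sdmTMB() formula (this assumes the built-in pcod dataset; the mesh cutoff and basis dimension are arbitrary choices for illustration):

library(sdmTMB)
# build an SPDE mesh from the built-in Pacific cod data (X/Y are UTM km):
mesh <- make_mesh(pcod, xy_cols = c("X", "Y"), cutoff = 20)
# penalized smooth of depth, analogous to mgcv::s():
fit <- sdmTMB(
  density ~ s(depth, k = 5),
  data = pcod,
  mesh = mesh,
  family = tweedie(link = "log"),
  spatial = "on"
)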

While mgcv can fit unpenalized splines (e.g., B-splines) or penalized splines (P-splines), sdmTMB only implements penalized splines. The penalized splines are constructed in sdmTMB using the function mgcv::smooth2random(), which transforms splines into random effects (and associated design matrices) that are estimable in a mixed-effects modelling framework. This is the same approach as is implemented in the R packages gamm4 (Wood & Scheipl 2020) and brms (Bürkner 2017).

Linear break-point threshold models

The linear break-point or "hockey stick" model can be used to describe threshold or asymptotic responses. This function consists of two pieces, so that for $x < b_1$, $s(x) = x \cdot b_0$, and for $x > b_1$, $s(x) = b_1 \cdot b_0$. In both cases, $b_0$ represents the slope of the function up to a threshold, and the product $b_1 \cdot b_0$ represents the value at the asymptote. No constraints are placed on parameters $b_0$ or $b_1$.

These models can be fit by including + breakpt(x) in the model formula, where x is a covariate. The formula can contain a single break-point covariate.
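
For example, a brief sketch (reusing the pcod data and mesh from the smoother example above; depth_scaled is a scaled-depth column included with that dataset):

fit <- sdmTMB(
  density ~ breakpt(depth_scaled), # hockey-stick effect of scaled depth
  data = pcod,
  mesh = mesh,
  family = tweedie(link = "log")
)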

Logistic threshold models

Models with logistic threshold relationships between a predictor and the response can be fit with the form

$$
s(x) = \tau + \psi \left[ 1 + e^{-\ln(19) \cdot (x - s50)/(s95 - s50)} \right]^{-1},
$$

where $s$ represents the logistic function, $\psi$ is a scaling parameter (controlling the height of the y-axis for the response; unconstrained), $\tau$ is an intercept, $s50$ is a parameter controlling the point at which the function reaches 50% of the maximum ($\psi$), and $s95$ is a parameter controlling the point at which the function reaches 95% of the maximum. The parameter $s50$ is unconstrained but $s95$ is constrained to be larger than $s50$.

These models can be fit by including + logistic(x) in the model formula, where x is a covariate. The formula can contain a single logistic covariate.

Spatial random fields

Spatial random fields, $\omega_{\boldsymbol{s}}$, are included if spatial = 'on' (or TRUE) and omitted if spatial = 'off' (or FALSE).

$$
\begin{aligned}
\mu_{\boldsymbol{s},t} &= f^{-1} \left( \ldots + \omega_{\boldsymbol{s}} + \ldots \right),\\
\boldsymbol{\omega} &\sim \operatorname{MVNormal} \left( \boldsymbol{0}, \boldsymbol{\Sigma}_\omega \right).
\end{aligned}
$$

The marginal standard deviation of $\boldsymbol{\omega}$ is indicated by Spatial SD in the printed model output or as sigma_O in the output of sdmTMB::tidy(fit, "ran_pars"). The 'O' is for 'omega' ($\omega$).

Internally, the random fields follow a Gaussian Markov random field (GMRF)

π›šβˆΌMVNormal(𝟎,σω2πΟ‰βˆ’1), \boldsymbol{\omega} \sim \mathrm{MVNormal}\left(\boldsymbol{0}, \sigma_\omega^2 \boldsymbol{Q}^{-1}_\omega\right), where 𝐐ω\boldsymbol{Q}_\omega is a sparse precision matrix and σω2\sigma_\omega^2 is the marginal variance.

Spatiotemporal random fields

Spatiotemporal random fields are included by default if there are multiple time elements (time argument is not NULL) and can be set to IID (independent and identically distributed, 'iid'; default), AR(1) ('ar1'), random walk ('rw'), or off ('off') via the spatiotemporal argument. These text values are case insensitive.

Spatiotemporal random fields are represented by $\boldsymbol{\epsilon}_t$ within sdmTMB. This has been chosen to match the representation in VAST (Thorson 2019). The marginal standard deviation of $\boldsymbol{\epsilon}_t$ is indicated by Spatiotemporal SD in the printed model output or as sigma_E in the output of sdmTMB::tidy(fit, "ran_pars"). The 'E' is for 'epsilon' ($\epsilon$).
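
A sketch of specifying the spatiotemporal structure (pcod includes a year column; 'iid' here is an illustrative choice):

fit <- sdmTMB(
  density ~ 0 + as.factor(year),
  data = pcod,
  mesh = mesh,
  family = tweedie(link = "log"),
  time = "year",
  spatiotemporal = "iid" # or "ar1", "rw", "off"
)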

IID spatiotemporal random fields

IID spatiotemporal random fields (spatiotemporal = 'iid') can be represented as

$$
\begin{aligned}
\mu_{\boldsymbol{s},t} &= f^{-1} \left( \ldots + \epsilon_{\boldsymbol{s},t} + \ldots \right),\\
\boldsymbol{\epsilon}_t &\sim \operatorname{MVNormal} \left( \boldsymbol{0}, \boldsymbol{\Sigma}_\epsilon \right),
\end{aligned}
$$

where $\epsilon_{\boldsymbol{s},t}$ represents random field deviations at point $\boldsymbol{s}$ and time $t$. The random fields are assumed independent across time steps.

Similarly to the spatial random fields, these spatiotemporal random fields (including all versions described below) are parameterized internally with a sparse precision matrix ($\boldsymbol{Q}_\epsilon$)

$$
\boldsymbol{\epsilon}_t \sim \operatorname{MVNormal}\left(\boldsymbol{0}, \sigma_\epsilon^2 \boldsymbol{Q}^{-1}_\epsilon\right).
$$

AR(1) spatiotemporal random fields

First-order autoregressive, AR(1), spatiotemporal random fields (spatiotemporal = 'ar1') add a parameter defining the correlation between random field deviations from one time step to the next. They are defined as

$$
\begin{aligned}
\mu_{\boldsymbol{s},t} &= f^{-1} \left( \ldots + \delta_{\boldsymbol{s},t} + \ldots \right),\\
\boldsymbol{\delta}_{t=1} &\sim \operatorname{MVNormal} (\boldsymbol{0}, \boldsymbol{\Sigma}_\epsilon),\\
\boldsymbol{\delta}_{t>1} &= \rho \boldsymbol{\delta}_{t-1} + \sqrt{1 - \rho^2} \boldsymbol{\epsilon}_t, \quad \boldsymbol{\epsilon}_t \sim \operatorname{MVNormal} \left(\boldsymbol{0}, \boldsymbol{\Sigma}_\epsilon \right),
\end{aligned}
$$

where $\rho$ is the correlation between subsequent spatiotemporal random fields. The $\rho \boldsymbol{\delta}_{t-1} + \sqrt{1 - \rho^2}$ term scales the spatiotemporal variance by the correlation such that it represents the steady-state marginal variance. The correlation $\rho$ allows for mean-reverting spatiotemporal fields and is constrained to be $-1 < \rho < 1$. Internally, the parameter is estimated as ar1_phi, which is unconstrained. The parameter ar1_phi is transformed to $\rho$ with $\rho = 2 \left( \mathrm{logit}^{-1}(\texttt{ar1\_phi}) \right) - 1$.

Random walk spatiotemporal random fields (RW)

Random walk spatiotemporal random fields (spatiotemporal = 'rw') represent a model where the differences in spatiotemporal deviations from one time step to the next are IID. They are defined as

$$
\begin{aligned}
\mu_{\boldsymbol{s},t} &= f^{-1} \left( \ldots + \delta_{\boldsymbol{s},t} + \ldots \right),\\
\boldsymbol{\delta}_{t=1} &\sim \operatorname{MVNormal} (\boldsymbol{0}, \boldsymbol{\Sigma}_\epsilon),\\
\boldsymbol{\delta}_{t>1} &= \boldsymbol{\delta}_{t-1} + \boldsymbol{\epsilon}_t, \quad \boldsymbol{\epsilon}_t \sim \operatorname{MVNormal} \left(\boldsymbol{0}, \boldsymbol{\Sigma}_\epsilon \right),
\end{aligned}
$$

where the distribution of the spatiotemporal field in the initial time step is the same as for the AR(1) model, but the absence of the $\rho$ parameter allows the spatiotemporal field to be non-stationary in time. Note that, in contrast to the AR(1) parametrization, the variance is no longer the steady-state marginal variance.

Time-varying regression parameters

Parameters can be modelled as time-varying according to a random walk or first-order autoregressive, AR(1), process. The time-series model is defined by time_varying_type. For all types:

$$
\mu_{\boldsymbol{s},t} = f^{-1} \left( \ldots + \boldsymbol{X}^{\mathrm{tvc}}_{\boldsymbol{s},t} \boldsymbol{\gamma_t} + \ldots \right),
$$

where $\boldsymbol{\gamma_t}$ is an optional vector of time-varying regression parameters and $\boldsymbol{X}^{\mathrm{tvc}}_{\boldsymbol{s},t}$ is the corresponding model matrix with covariate values. This is defined via the time_varying argument, assuming that the time argument is also supplied a column name. time_varying takes a one-sided formula; ~ 1 implies a time-varying intercept.

For time_varying_type = 'rw', the first time step is estimated independently:

$$
\begin{aligned}
\gamma_{t=1} &\sim \operatorname{Uniform} \left(-\infty, \infty \right),\\
\gamma_{t>1} &\sim \operatorname{Normal} \left(\gamma_{t-1}, \sigma^2_{\gamma} \right).
\end{aligned}
$$

In this case, the first time-step value is given an implicit uniform prior, i.e., the same variable should not appear in the fixed-effect formula since its initial value is estimated as part of the time-varying formula. The formula time_varying = ~ 1 implicitly represents a time-varying intercept (assuming the time argument has been supplied) and, in this case, the intercept should be omitted from the main effects (formula ~ 0 + ... or formula ~ -1 + ...).

For time_varying_type = 'rw0', the first time step is estimated from a mean-zero prior:

$$
\begin{aligned}
\gamma_{t=1} &\sim \operatorname{Normal} \left(0, \sigma^2_{\gamma} \right),\\
\gamma_{t>1} &\sim \operatorname{Normal} \left(\gamma_{t-1}, \sigma^2_{\gamma} \right).
\end{aligned}
$$

In this case, the time-varying variable (including the intercept) should be included in the main effects. We suggest using this formulation, but leave the 'rw' option so that legacy code works.

For time_varying_type = 'ar1':

$$
\begin{aligned}
\gamma_{t=1} &\sim \operatorname{Normal} \left(0, \sigma^2_{\gamma} \right),\\
\gamma_{t>1} &\sim \operatorname{Normal} \left(\rho_\gamma \gamma_{t-1}, \sqrt{1 - \rho_\gamma^2} \, \sigma^2_{\gamma} \right),
\end{aligned}
$$

where $\rho_\gamma$ is the correlation between subsequent time steps. The first time step is given a mean-zero prior.
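
As an illustrative sketch (assuming the pcod data and mesh from above), a time-varying intercept with the suggested 'rw0' type keeps the intercept in the main formula:

fit <- sdmTMB(
  density ~ 1, # intercept stays in the main effects with 'rw0'
  time_varying = ~ 1, # time-varying intercept
  time_varying_type = "rw0",
  data = pcod,
  mesh = mesh,
  family = tweedie(link = "log"),
  time = "year"
)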

Spatially varying coefficients (SVC)

Spatially varying coefficient models are defined as

$$
\begin{aligned}
\mu_{\boldsymbol{s},t} &= f^{-1} \left( \ldots + \boldsymbol{X}^{\mathrm{svc}}_{\boldsymbol{s},t} \zeta_{\boldsymbol{s}} + \ldots \right),\\
\boldsymbol{\zeta} &\sim \operatorname{MVNormal} \left( \boldsymbol{0}, \boldsymbol{\Sigma}_\zeta \right),
\end{aligned}
$$

where $\boldsymbol{\zeta}$ is a random field representing a spatially varying coefficient. Usually, $\boldsymbol{X}^{\mathrm{svc}}_{\boldsymbol{s},t}$ would represent a prediction matrix that is constant spatially for a given time $t$, as defined by a one-sided formula supplied to spatial_varying. For example, spatial_varying = ~ 0 + x, where the 0 omits the intercept.

The random fields are parameterized internally with a sparse precision matrix ($\boldsymbol{Q}_\zeta$)

$$
\boldsymbol{\zeta} \sim \operatorname{MVNormal}\left(\boldsymbol{0}, \sigma_\zeta^2 \boldsymbol{Q}^{-1}_\zeta\right).
$$

IID random or multi-level intercepts

Multilevel/hierarchical intercepts are defined as

$$
\begin{aligned}
\mu_{\boldsymbol{s},t} &= f^{-1} \left( \ldots + \alpha_{g} + \ldots \right),\\
\alpha_g &\sim \operatorname{Normal} \left(0, \sigma_\alpha^2 \right),
\end{aligned}
$$

where $\alpha_g$ is an example optional "random" intercept: an intercept with mean zero that varies by level $g$ and is constrained by $\sigma_\alpha$. This is defined by the formula argument via the (1 | g) syntax as in lme4 or glmmTMB. There can be multiple random intercepts, despite only one being shown above, e.g., (1 | g1) + (1 | g2), in which case they are assumed independent of and uncorrelated with each other.
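
For example, a sketch with IID random intercepts by year (the factor column is created here for illustration):

pcod$year_f <- as.factor(pcod$year)
fit <- sdmTMB(
  density ~ 1 + (1 | year_f), # IID random intercepts by year
  data = pcod,
  mesh = mesh,
  family = tweedie(link = "log")
)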

Offset terms

Offset terms can be included through the offset argument in sdmTMB(). These are included in the linear predictor as

$$
\mu_{\boldsymbol{s},t} = f^{-1} \left( \ldots + O_{\boldsymbol{s},t} + \ldots \right),
$$

where $O_{\boldsymbol{s},t}$ is an offset term: a log-transformed variable without a coefficient (assuming a log link). The offset is not included in the prediction. Therefore, if the offset represents a measure of effort, for example, the prediction is for one unit of effort (log(1) = 0).
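
A sketch, assuming a hypothetical data frame dat with a catch response and an effort column (neither is part of the built-in data):

fit <- sdmTMB(
  catch ~ 1,
  offset = log(dat$effort), # hypothetical effort column; log link assumed
  data = dat,
  mesh = mesh,
  family = tweedie(link = "log")
)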

Observation model families

Here we describe the main observation families that are available in sdmTMB and comment on their parametrization, statistical properties, utility, and code representation in sdmTMB.

Binomial

$$
\operatorname{Binomial} \left(N, \mu \right),
$$

where $N$ is the size or number of trials, and $\mu$ is the probability of success for each trial. If $N = 1$, the distribution becomes the Bernoulli distribution. Internally, the distribution is parameterized as the robust version in TMB, which is numerically stable when probabilities approach 0 or 1. Following the structure of stats::glm(), lme4, and glmmTMB, a binomial family can be specified in one of 4 ways:

  1. the response may be a factor (and the model classifies the first level versus all others)
  2. the response may be binomial (0/1)
  3. the response can be a matrix of form cbind(success, failure), or
  4. the response may be the observed proportions, and the weights argument is used to specify the Binomial size ($N$) parameter (probability ~ ..., weights = N).

Code defined within TMB.

Example: family = binomial(link = "logit")
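
A sketch of two of these forms: the 0/1 present column included with pcod, and a hypothetical data frame dat of observed proportions (prop) with trial sizes (n_trials):

# 0/1 (Bernoulli) response:
fit <- sdmTMB(
  present ~ depth_scaled,
  data = pcod,
  mesh = mesh,
  family = binomial(link = "logit")
)
# observed proportions with the number of trials supplied via 'weights':
fit2 <- sdmTMB(
  prop ~ 1,
  data = dat,
  mesh = mesh,
  weights = dat$n_trials,
  family = binomial(link = "logit")
)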

Beta

$$
\operatorname{Beta} \left(\mu \phi, (1 - \mu) \phi \right),
$$

where $\mu$ is the mean and $\phi$ is a precision parameter. This parametrization follows Ferrari & Cribari-Neto (2004) and the betareg R package (Cribari-Neto & Zeileis 2010). The variance is $\mu (1 - \mu) / (\phi + 1)$.

Code defined within TMB.

Example: family = Beta(link = "logit")

Gamma

$$
\operatorname{Gamma} \left( \phi, \frac{\mu}{\phi} \right),
$$

where $\phi$ represents the Gamma shape and $\mu / \phi$ represents the scale. The mean is $\mu$ and the variance is $\mu^2 / \phi$.

Code defined within TMB.

Example: family = Gamma(link = "log")

Gaussian

$$
\operatorname{Normal} \left( \mu, \phi^2 \right),
$$

where $\mu$ is the mean and $\phi$ is the standard deviation. The variance is $\phi^2$.

Example: family = Gaussian(link = "identity")

Code defined within TMB.

Lognormal

sdmTMB uses the bias-corrected lognormal distribution, where $\phi$ represents the standard deviation in log-space:

$$
\operatorname{Lognormal} \left( \log \mu - \frac{\phi^2}{2}, \phi^2 \right).
$$

Because of the bias correction, $\mathbb{E}[y] = \mu$ and $\mathrm{Var}[\log y] = \phi^2$.

Code defined within sdmTMB based on the TMB dnorm() normal density.

Example: family = lognormal(link = "log")

Negative Binomial 1 (NB1)

$$
\operatorname{NB1} \left( \mu, \phi \right),
$$

where $\mu$ is the mean and $\phi$ is the dispersion parameter. The variance scales linearly with the mean: $\mathrm{Var}[y] = \mu + \mu / \phi$ (Hilbe 2011). Internally, the distribution is parameterized as the robust version in TMB.

Code defined within sdmTMB based on NB2 and borrowed from glmmTMB.

Example: family = nbinom1(link = "log")

Negative Binomial 2 (NB2)

$$
\operatorname{NB2} \left( \mu, \phi \right),
$$

where $\mu$ is the mean and $\phi$ is the dispersion parameter. The variance scales quadratically with the mean: $\mathrm{Var}[y] = \mu + \mu^2 / \phi$ (Hilbe 2011). The NB2 parametrization is more commonly seen in ecology than the NB1. Internally, the distribution is parameterized as the robust version in TMB.

Code defined within TMB.

Example: family = nbinom2(link = "log")

Poisson

$$
\operatorname{Poisson} \left( \mu \right),
$$

where $\mu$ represents the mean and $\mathrm{Var}[y] = \mu$.

Code defined within TMB.

Example: family = poisson(link = "log")

Student-t

$$
\operatorname{Student-t} \left( \mu, \phi, \nu \right),
$$

where $\nu$, the degrees of freedom (df), is a user-supplied fixed parameter. Lower values of $\nu$ result in heavier tails compared to the Gaussian distribution. Above approximately df = 20, the distribution becomes very similar to the Gaussian. The Student-t distribution with low degrees of freedom (e.g., $\nu \le 7$) can be helpful for modelling data that would otherwise be suitable for a Gaussian but require an approach that is robust to outliers (e.g., Anderson et al. 2017).

Code defined within sdmTMB based on the dt() distribution in TMB.

Example: family = student(link = "log", df = 7)

Tweedie

$$
\operatorname{Tweedie} \left(\mu, p, \phi \right), \quad 1 < p < 2,
$$

where $\mu$ is the mean, $p$ is the power parameter constrained between 1 and 2, and $\phi$ is the dispersion parameter. The Tweedie distribution can be helpful for modelling data that are positive and continuous but also contain zeros.

Internally, $p$ is estimated as an unconstrained parameter and transformed as $p = \mathrm{logit}^{-1} (\texttt{thetaf}) + 1$ to constrain it between 1 and 2.

The source code is implemented as in the cplm package (Zhang 2013) and is based on Dunn & Smyth (2005); a version of the density is also defined within TMB.

Example: family = tweedie(link = "log")

Gamma mixture

This is a two-component mixture that extends the Gamma distribution:

$$
(1 - p) \cdot \operatorname{Gamma} \left( \phi, \frac{\mu_{1}}{\phi} \right) + p \cdot \operatorname{Gamma} \left( \phi, \frac{\mu_{2}}{\phi} \right),
$$

where $\phi$ represents the Gamma shape, $\mu_{1} / \phi$ represents the scale for the first (smaller) component of the distribution, $\mu_{2} / \phi$ represents the scale for the second (larger) component, and $p$ controls the contribution of each component to the mixture (also interpreted as the probability of larger events).

The mean is $(1-p) \cdot \mu_{1} + p \cdot \mu_{2}$ and the variance is $(1-p)^2 \cdot \mu_{1} \cdot \phi^2 + p^2 \cdot \mu_{2} \cdot \phi^2$.

Here, and for the other mixture distributions, the probability of the larger mean can be obtained from plogis(fit$model$par[["logit_p_mix"]]) and the ratio of the larger mean to the smaller mean can be obtained from 1 + exp(fit$model$par[["log_ratio_mix"]]). The standard errors are available in the TMB sdreport: fit$sd_report.

If you wish to fix the probability of a large (i.e., extreme) mean, which can be hard to estimate, you can use the map list. E.g.:

sdmTMB(...,
  control = sdmTMBcontrol(
    start = list(logit_p_mix = qlogis(0.05)), # 5% probability of 'mu2'
    map = list(logit_p_mix = factor(NA)) # don't estimate
  )
)

Example: family = gamma_mix(link = "log"). See also family = delta_gamma_mix() for an extension incorporating this distribution with delta models.

Lognormal mixture

This is a two-component mixture that extends the lognormal distribution:

$$
(1 - p) \cdot \operatorname{Lognormal} \left( \log \mu_{1} - \frac{\phi^2}{2}, \phi^2 \right) + p \cdot \operatorname{Lognormal} \left( \log \mu_{2} - \frac{\phi^2}{2}, \phi^2 \right).
$$

Because of the bias correction, $\mathbb{E}[y] = (1-p) \cdot \mu_{1} + p \cdot \mu_{2}$ and $\mathrm{Var}[\log y] = (1-p)^2 \cdot \phi^2 + p^2 \cdot \phi^2$.

As with the Gamma mixture, $p$ controls the contribution of each component to the mixture (also interpreted as the probability of larger events).

Example: family = lognormal_mix(link = "log"). See also family = delta_lognormal_mix() for an extension incorporating this distribution with delta models.

Negative binomial 2 mixture

This is a two-component mixture that extends the NB2 distribution:

$$
(1 - p) \cdot \operatorname{NB2} \left( \mu_1, \phi \right) + p \cdot \operatorname{NB2} \left( \mu_2, \phi \right),
$$

where $\mu_1$ is the mean of the first (smaller) component of the distribution, $\mu_2$ is the mean of the larger component, and $p$ controls the contribution of each component to the mixture.

Example: family = nbinom2_mix(link = "log")

Gaussian random fields

Matérn parameterization

The Matérn defines the covariance $\Phi \left( s_j, s_k \right)$ between spatial locations $s_j$ and $s_k$ as

$$
\Phi \left( s_j, s_k \right) = \tau^2 / \Gamma(\nu) 2^{\nu - 1} (\kappa d_{jk})^\nu K_\nu \left( \kappa d_{jk} \right),
$$

where $\tau^2$ controls the spatial variance, $\nu$ controls the smoothness, $\Gamma$ represents the Gamma function, $d_{jk}$ represents the distance between locations $s_j$ and $s_k$, $K_\nu$ represents the modified Bessel function of the second kind, and $\kappa$ represents the decorrelation rate. The parameter $\nu$ is set to 1 to take advantage of the Stochastic Partial Differential Equation (SPDE) approximation to the GRF to greatly increase computational efficiency (Lindgren et al. 2011). Internally, the parameters $\kappa$ and $\tau$ are converted to the range and marginal standard deviation $\sigma$ as $\textrm{range} = \sqrt{8} / \kappa$ and $\sigma = 1 / \sqrt{4 \pi \exp \left(2 \log(\tau) + 2 \log(\kappa) \right)}$.
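
As a small sketch of that conversion (the kappa and tau values here are placeholders standing in for the internally estimated parameters):

kappa <- 0.1 # decorrelation rate (placeholder value)
tau <- 2.0   # precision-scaling parameter (placeholder value)
range <- sqrt(8) / kappa
sigma <- 1 / sqrt(4 * pi * exp(2 * log(tau) + 2 * log(kappa)))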

In the case of a spatiotemporal model with both spatial and spatiotemporal fields, if share_range = TRUE in sdmTMB() (the default), then a single $\kappa$ and range are estimated with separate $\sigma_\omega$ and $\sigma_\epsilon$. This often makes sense since data are often only weakly informative about $\kappa$. If share_range = FALSE, then separate $\kappa_\omega$ and $\kappa_\epsilon$ are estimated. The spatially varying coefficient field always shares $\kappa$ with the spatial random field.

Projection $\boldsymbol{A}$ matrix

The values of the spatial variables at the knots are multiplied by a projection matrix $\boldsymbol{A}$ that bilinearly interpolates from the knot locations to the values at the locations of the observed or predicted data (Lindgren & Rue 2015)

$$
\boldsymbol{\omega}^* = \boldsymbol{A} \boldsymbol{\omega},
$$

where $\boldsymbol{\omega}^*$ represents the values of the spatial random fields at the observed locations or predicted data locations. The matrix $\boldsymbol{A}$ has a row for each data point or prediction point and a column for each knot. Three non-zero elements in each row define the weights of the three neighbouring knot locations for location $\boldsymbol{s}$. The same bilinear interpolation happens for any spatiotemporal random fields:

$$
\boldsymbol{\epsilon}_t^* = \boldsymbol{A} \boldsymbol{\epsilon}_t.
$$

Anisotropy

TMB allows for anisotropy, where spatial covariance may be asymmetric with respect to latitude and longitude. Anisotropy can be turned on or off with the logical anisotropy argument to sdmTMB(). There are a number of ways to implement anisotropic covariance (Fuglstad et al. 2015), and we adopt a 2-parameter rotation matrix $\boldsymbol{H}$. The elements of $\boldsymbol{H}$ are defined by the parameter vector $\boldsymbol{x}$ so that $H_{1,1} = x_{1}$, $H_{1,2} = H_{2,1} = x_{2}$, and $H_{2,2} = (1 + x_{2}^2) / x_{1}$.

Once a model is fitted with sdmTMB(), the anisotropy relationships may be plotted using the plot_anisotropy() function, which takes the fitted object as an argument. If a barrier mesh is used, anisotropy is disabled.
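
For example, a brief sketch (reusing the pcod data and mesh from earlier):

fit_aniso <- sdmTMB(
  density ~ 1,
  data = pcod,
  mesh = mesh,
  family = tweedie(link = "log"),
  anisotropy = TRUE
)
plot_anisotropy(fit_aniso)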

Incorporating physical barriers into the SPDE

In some cases, the spatial domain of interest may be complex and bounded by some barrier, such as land or water (e.g., coastlines, islands, lakes). SPDE models allow for physical barriers to be incorporated into the modelling (Bakka et al. 2019). With sdmTMB() models, the mesh construction occurs in two steps: the user (1) constructs a mesh with a call to sdmTMB::make_mesh(), and (2) passes the mesh to sdmTMB::add_barrier_mesh(). The barriers must be constructed as sf objects (Pebesma 2018) with polygons defining the barriers. See ?sdmTMB::add_barrier_mesh for an example.

The barrier implementation requires the user to select a fraction value (range_fraction argument) that defines the fraction of the usual spatial range when crossing the barrier (Bakka et al. 2019). For example, if the range was estimated at 10 km, range_fraction = 0.2 would assume that the range was 2 km across the barrier. This would let the spatial correlation decay 5 times faster with distance. From experimentation, values around 0.1 or 0.2 seem to work well but values much lower than 0.1 can result in convergence issues.
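
A hedged sketch of the two-step construction (coast_polygons is a hypothetical sf object of land polygons in the same coordinate system as the mesh; the range_fraction value is illustrative):

mesh <- make_mesh(pcod, xy_cols = c("X", "Y"), cutoff = 20)
barrier_mesh <- add_barrier_mesh(
  mesh,
  coast_polygons,      # hypothetical sf polygons defining the barrier
  range_fraction = 0.1 # range across the barrier is 10% of the usual range
)
fit <- sdmTMB(
  density ~ 1,
  data = pcod,
  mesh = barrier_mesh,
  family = tweedie(link = "log")
)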

A website by Francesco Serafini and Haakon Bakka provides an illustration of this approach with INLA. The implementation within TMB was borrowed from code written by Olav Nikolai Breivik and Hans Skaug at the TMB Case Studies GitHub site.

Optimization

Optimization details

The sdmTMB model is fit by maximum marginal likelihood. Internally, a TMB (Kristensen et al. 2016) model template calculates the marginal log likelihood and its gradient, and the negative log likelihood is minimized via the non-linear optimization routine stats::nlminb() in R (Gay 1990; R Core Team 2021). Random effects are estimated at values that maximize the log likelihood conditional on the estimated fixed effects and are integrated over via the Laplace approximation (Kristensen et al. 2016).

Like AD Model Builder (Fournier et al. 2012), TMB allows for parameters to be fit in phases and we include the multiphase argument in sdmTMB::sdmTMBcontrol() to allow this. For high-dimensional models (many fixed and random effects), phased estimation may be faster and result in more stable convergence. In sdmTMB, phased estimation proceeds by first estimating all fixed-effect parameters contributing to the likelihood (holding random effects constant at initial values). In the second phase, the random-effect parameters (and their variances) are also estimated. Fixed-effect parameters are also estimated in the second phase and are initialized at their estimates from the first phase.

In some cases, a single call to stats::nlminb() may not result in convergence (e.g., the maximum gradient of the marginal likelihood with respect to fixed-effect parameters is not yet small enough), and the algorithm may need to be run multiple times. The sdmTMB::sdmTMBcontrol() function includes an argument nlminb_loops that will restart the optimization at the previous best values. The number of nlminb_loops should generally be small (e.g., 2 or 3 initially), and defaults to 1. For some sdmTMB models, the Hessian may also be unstable and need to be re-evaluated. We do this optionally with the stats::optimHess() routine after the call to stats::nlminb(). The stats::optimHess() function implements a Newton optimization routine to find the Hessian, and we include the argument newton_loops in sdmTMB::sdmTMBcontrol() to allow for multiple function evaluations (each starting at the previous best value). By default, this is not included and newton_loops is set to 0. If a model is already fit, the function sdmTMB::run_extra_optimization() can run additional optimization loops with either routine to further reduce the maximum gradient.
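
For example, a sketch of requesting extra optimization passes (the values are illustrative):

fit <- sdmTMB(
  density ~ s(depth),
  data = pcod,
  mesh = mesh,
  family = tweedie(link = "log"),
  control = sdmTMBcontrol(nlminb_loops = 2, newton_loops = 1)
)
# or, after the fact, on an existing fit:
fit2 <- run_extra_optimization(fit, nlminb_loops = 1, newton_loops = 1)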

Assessing convergence

Much of the guidance around diagnostics for glmmTMB also applies to sdmTMB, e.g., the glmmTMB vignette on troubleshooting. Optimization with stats::nlminb() involves specifying the number of iterations and evaluations (eval.max and iter.max) and the tolerances (abs.tol, rel.tol, x.tol, xf.tol); a greater number of iterations and smaller tolerance thresholds increase the chance that the optimal solution is found, but more evaluations translate into longer computation time. Warnings of non-positive-definite Hessian matrices (accompanied by parameters with NAs for standard errors) often mean models are improperly specified given the data. Standard errors can be observed in the output of print.sdmTMB() or by checking fit$sd_report. The maximum gradient of the marginal likelihood with respect to fixed-effect parameters can be checked by inspecting fit$gradients. Guidance varies, but the maximum gradient should likely be less than 0.001 before assuming the fitting routine is consistent with convergence. If maximum gradients are already relatively small, they can often be reduced further with additional optimization calls beginning at the previous best parameter vector, as described above with sdmTMB::run_extra_optimization().
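
A brief sketch of the checks described above (sanity() is a convenience function available in recent sdmTMB versions):

sanity(fit)             # summary of common convergence checks
max(abs(fit$gradients)) # maximum absolute marginal log-likelihood gradient
fit$sd_report           # standard errors; look for NaN/NA values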

References

Anderson, S.C., Branch, T.A., Cooper, A.B. & Dulvy, N.K. (2017). Black-swan events in animal populations. Proceedings of the National Academy of Sciences, 114, 3252–3257.
Bakka, H., Vanhatalo, J., Illian, J., Simpson, D. & Rue, H. (2019). Non-stationary Gaussian models with physical barriers. arXiv:1608.03787 [stat]. Retrieved from https://arxiv.org/abs/1608.03787
Bürkner, P.-C. (2017). brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software, 80, 1–28.
Cribari-Neto, F. & Zeileis, A. (2010). Beta regression in R. Journal of Statistical Software, 34, 1–24.
Dunn, P.K. & Smyth, G.K. (2005). Series evaluation of Tweedie exponential dispersion model densities. Statistics and Computing, 15, 267–280.
Edwards, A.M. & Auger-Méthé, M. (2019). Some guidance on using mathematical notation in ecology. Methods in Ecology and Evolution, 10, 92–99.
Ferrari, S. & Cribari-Neto, F. (2004). Beta regression for modelling rates and proportions. Journal of Applied Statistics, 31, 799–815.
Fournier, D.A., Skaug, H.J., Ancheta, J., Ianelli, J., Magnusson, A., Maunder, M.N., Nielsen, A. & Sibert, J. (2012). AD model builder: Using automatic differentiation for statistical inference of highly parameterized complex nonlinear models. Optimization Methods and Software, 27, 233–249.
Fuglstad, G.-A., Lindgren, F., Simpson, D. & Rue, H. (2015). Exploring a new class of non-stationary spatial Gaussian random fields with varying local anisotropy. Statistica Sinica, 25, 115–133.
Gay, D.M. (1990). Usage summary for selected optimization routines. Computing Science Technical Report, 153, 1–21.
Hilbe, J.M. (2011). Negative Binomial Regression. Cambridge University Press.
Kristensen, K., Nielsen, A., Berg, C.W., Skaug, H. & Bell, B.M. (2016). TMB: Automatic differentiation and Laplace approximation. Journal of Statistical Software, 70, 1–21.
Lindgren, F. & Rue, H. (2015). Bayesian spatial modelling with R-INLA. Journal of Statistical Software, 63, 1–25.
Lindgren, F., Rue, H. & Lindström, J. (2011). An explicit link between Gaussian fields and Gaussian Markov random fields: The stochastic partial differential equation approach. J. R. Stat. Soc. Ser. B Stat. Methodol., 73, 423–498.
Pebesma, E. (2018). Simple Features for R: Standardized Support for Spatial Vector Data. The R Journal, 10, 439–446. Retrieved from https://doi.org/10.32614/RJ-2018-009
R Core Team. (2021). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Retrieved from https://www.R-project.org/
Thorson, J.T. (2019). Guidance for decisions using the Vector Autoregressive Spatio-Temporal (VAST) package in stock, ecosystem, habitat and climate assessments. Fisheries Research, 210, 143–161.
Wood, S.N. (2017). Generalized additive models: An introduction with R, 2nd edn. Chapman and Hall/CRC.
Wood, S. & Scheipl, F. (2020). gamm4: Generalized Additive Mixed Models using 'mgcv' and 'lme4'. Retrieved from https://CRAN.R-project.org/package=gamm4
Zhang, Y. (2013). Likelihood-based and Bayesian methods for Tweedie compound Poisson linear mixed models. Statistics and Computing, 23, 743–757.