---
title: "Distributions with Delay"
author: matthias.kuhn@tu-dresden.de
date: 2024-07-25
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{Distributions with Delay}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
library("incubate")
library("dplyr")
library("ggplot2")
library("patchwork")
```

The delayed exponential distribution and the delayed Weibull distribution are the foundational data models in the `incubate`-package.

## Three-parameter Weibull

The classical 2-parameter Weibull distribution has shape parameter $γ>0$ and scale parameter $β>0$. As additional third parameter we use the delay $α>0$, and then density (PDF) and cumulative distribution function (CDF) read, for any $x > α$ (both are 0 otherwise):

$$
\begin{align*}
f(x) &= \frac{γ· (x-α)^{γ-1}}{β^γ} · \exp\Big(-\big(\frac{x-α}{β}\big)^γ\Big)\\
F(x) &= 1 - \exp\Big(-\big(\frac{x-α}{β}\big)^γ\Big)
\end{align*}
$$

If we have a data sample $\mathbf{x}$ of size $n$ and want to apply **maximum likelihood (ML)** estimation for the parameters, we get as log-likelihood function

$$
l_\mathbf{x}(α, γ, β) = n \log(γ) - nγ \log(β) + (γ-1) · ∑_{i=1}^n \log(x_i-α) - β^{-γ} · ∑_{i=1}^n (x_i - α)^γ
$$

In the presence of **right-censored observations**, the log-likelihood function changes as well.
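Before turning to censoring, the closed-form log-likelihood above can be cross-checked numerically: it must agree with the sum of shifted `stats::dweibull` log-densities. A minimal sketch in base R (the helper `ll_weib3` and all parameter values are our own illustration, not part of `incubate`):

```{r ll_check_weib3}
# closed-form log-likelihood of the 3-parameter (delayed) Weibull
ll_weib3 <- function(alpha, shape, scale, x) {
  n <- length(x)
  n * log(shape) - n * shape * log(scale) +
    (shape - 1) * sum(log(x - alpha)) - sum(((x - alpha) / scale)^shape)
}

set.seed(11)
alpha <- 2; shape <- 1.5; scale <- 3
x <- alpha + rweibull(25, shape = shape, scale = scale)

# must coincide with the shifted standard Weibull log-density
all.equal(ll_weib3(alpha, shape, scale, x),
          sum(dweibull(x - alpha, shape = shape, scale = scale, log = TRUE)))
```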
Assume we have in total $n = n_e + n_r$ observations, where

* $n_e$ event observations with index set $O$ _and_
* $n_r$ right-censored observations with index set $R$, _hence_
* $|O| + |R| = n$ and $O ⨃ R = \{1, \ldots, n\}$

\begin{multline*}
l_\mathbf{x}(α, γ, β) = n_e \log(γ) - n_e γ \log(β) + (γ-1) · ∑_{i ∈ O} \log(x_i-α) - β^{-γ} · ∑_{i ∈ O} (x_i - α)^γ\\
- β^{-γ} · ∑_{r ∈ R} (x_r - α)^γ
\end{multline*}

This simplifies to

$$
l_\mathbf{x}(α, γ, β) = n_e \log(γ) - n_e γ \log(β) + (γ-1) · ∑_{i ∈ O} \log(x_i-α) - β^{-γ} · ∑_{i=1}^n (x_i - α)^γ
$$

One can directly (numerically) maximize this log-likelihood function. Cousineau (2009) calls this the **iterative approach**.

### Profiling scale parameter β

We are able to profile out the scale parameter estimate $\hat β$: the candidate value for a local extremum with respect to the scale β is found by taking the 1^st^ derivative and equating it to 0:

$$
\frac{\partial l}{\partial β}(α, γ, β) = -\frac{nγ}{β} + γ β^{-γ-1} · ∑_{i=1}^n (x_i - α)^γ \overset{!}{=}0
$$

Hence (as $γ>0$), it must hold that $-n + β^{-γ} · ∑_{i=1}^n (x_i - α)^γ \overset{!}{=}0$, and further simplification yields $\hat β = \sqrt[γ]{\frac{1}{n} ∑_{i=1}^n (x_i - α)^γ}$.

With **right-censored observations** present in the data, the result is similar:

$$
\frac{\partial l}{\partial β}(α, γ, β) = -\frac{n_e γ}{β} + γ β^{-γ-1} · ∑_{i=1}^n (x_i - α)^γ \overset{!}{=}0
$$

Hence (as $γ>0$), it must hold that $-n_e + β^{-γ} · ∑_{i=1}^n (x_i - α)^γ \overset{!}{=}0$. The solution is $\hat β = \sqrt[γ]{\frac{1}{n_e} ∑_{i=1}^n (x_i - α)^γ}$: we divide by the number of events $n_e$ only, but we sum over all $x_i$ (observed and right-censored). A problem arises only when right-censorings occur earlier than the (estimated) delay α, because such terms contribute negatively to the radicand. It seems appropriate to set these terms to zero.

### Simplified log-likelihood

If we substitute the β-estimate into the log-likelihood equation we get a simplified function to maximize.
$$
\begin{align}
l^P_{\mathbf{x}}(α, γ) &= n \log(γ) - n \log\big(\frac{1}{n} ∑_{i=1}^n (x_i - α)^γ\big) + (γ-1) · ∑_{i=1}^n \log(x_i-α) - n\\
&= n · \Big[ \log(γ) - \log\big(\frac{1}{n} ∑_{i=1}^n (x_i - α)^γ\big) + (γ-1) · \frac{1}{n} ∑_{i=1}^n \log(x_i-α) - 1\Big]
\end{align}
$$

In the presence of **right-censored observations**, the equation for $l^P_{\mathbf{x}}(α, γ)$ is similar, but we have to use the number of observed events $n_e$ (instead of $n$) in most places.

\begin{align*}
l^P_{\mathbf{x}}(α, γ) &= n_e \log(γ) - n_e \log\big(\frac{1}{n_e} ∑_{i=1}^n (x_i - α)^γ\big) + (γ-1) · ∑_{i∈O} \log(x_i-α) - n_e\\
&= n_e · \Big[ \log(γ) - \log\big(\frac{1}{n_e} ∑_{i=1}^n (x_i - α)^γ\big) + (γ-1) · \frac{1}{n_e} ∑_{i∈O} \log(x_i-α) - 1 \Big]
\end{align*}

The right-censored observations enter both the simplified log-likelihood and the scale parameter estimate $\hat β$.

### Partial derivatives of the simplified log-likelihood

We can use the derivatives of the simplified log-likelihood to find candidates for its **local maxima**. Cousineau (2009) calls this the **two-step approach** (as opposed to the non-profiled iterative approach above).
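The profiling step can be checked numerically: plugging $\hat β(α, γ)$ back into the full log-likelihood must reproduce $l^P_{\mathbf{x}}(α, γ)$. A small sketch in base R for the uncensored case (function names and parameter values are ours, for illustration only):

```{r profile_check}
# full log-likelihood and its profiled version (uncensored case)
ll_full <- function(alpha, shape, scale, x) {
  n <- length(x)
  n * log(shape) - n * shape * log(scale) +
    (shape - 1) * sum(log(x - alpha)) - sum(((x - alpha) / scale)^shape)
}
ll_prof <- function(alpha, shape, x) {
  n <- length(x)
  n * (log(shape) - log(mean((x - alpha)^shape)) +
         (shape - 1) * mean(log(x - alpha)) - 1)
}

set.seed(21)
x <- 1.5 + rweibull(40, shape = 2, scale = 1)
alpha <- 1.2; shape <- 1.8
beta_hat <- mean((x - alpha)^shape)^(1 / shape)  # profiled scale estimate

all.equal(ll_full(alpha, shape, beta_hat, x), ll_prof(alpha, shape, x))
```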
Forming the partial derivatives of $l^P(α, γ)$ we get:

$$
\begin{aligned}
\frac{\partial l^P}{\partial α}(α, γ) &= n · \Big[ \frac{\frac{1}{n} ∑_{i=1}^n γ (x_i - α)^{γ-1}}{\frac{1}{n} ∑_{i=1}^n (x_i - α)^γ} - (γ-1) · \frac{1}{n} ∑_{i=1}^n \frac{1}{x_i-α}\Big]\\
&= γ · \frac{∑_{i=1}^n (x_i - α)^{γ-1}}{\frac{1}{n} ∑_{i=1}^n (x_i - α)^γ} - (γ-1) · ∑_{i=1}^n \frac{1}{x_i-α}\\
\frac{\partial l^P}{\partial γ}(α, γ) &= n · \Big[ \frac{1}{γ} - \frac{\frac{1}{n} ∑_{i=1}^n \log(x_i-α) · (x_i-α)^γ}{\frac{1}{n} ∑_{i=1}^n (x_i - α)^γ} + \frac{1}{n} ∑_{i=1}^n \log(x_i-α) \Big]\\
&= \frac{n}{γ} - \frac{∑_{i=1}^n \log(x_i-α) · (x_i-α)^γ}{\frac{1}{n} ∑_{i=1}^n (x_i - α)^γ} + ∑_{i=1}^n \log(x_i-α)
\end{aligned}
$$

In the presence of **right-censored observations**, the derivatives become:

\begin{align*}
\frac{\partial l^P}{\partial α}(α, γ) &= n_e · \Big[ \frac{\frac{1}{n_e} ∑_{i=1}^n γ (x_i - α)^{γ-1}}{\frac{1}{n_e} ∑_{i=1}^n (x_i - α)^γ} - (γ-1) · \frac{1}{n_e} ∑_{i∈O} \frac{1}{x_i-α}\Big]\\
&= γ · \frac{∑_{i=1}^n (x_i - α)^{γ-1}}{\frac{1}{n_e}∑_{i=1}^n (x_i - α)^γ} - (γ-1) · ∑_{i∈O} \frac{1}{x_i-α}\\
\frac{\partial l^P}{\partial γ}(α, γ) &= n_e · \Big[ \frac{1}{γ} - \frac{\frac{1}{n_e} ∑_{i=1}^n \log(x_i-α) · (x_i-α)^γ}{\frac{1}{n_e} ∑_{i=1}^n (x_i - α)^γ} + \frac{1}{n_e} ∑_{i∈O} \log(x_i-α) \Big]\\
&= \frac{n_e}{γ} - \frac{∑_{i=1}^n \log(x_i-α) · (x_i-α)^γ}{\frac{1}{n_e} ∑_{i=1}^n (x_i - α)^γ} + ∑_{i∈O} \log(x_i-α)
\end{align*}

### Score equations: local extrema candidates

Setting these two partial derivatives to 0 we get the **score equations** that lead to candidate points for local extrema:

* from delay α
\begin{align*}
\frac{γ}{γ-1} · \frac{∑_{i=1}^n (x_i - α)^{γ-1}}{∑_{i=1}^n (x_i - α)^γ} - \frac{1}{n} ∑_{i=1}^n \frac{1}{x_i-α} &= 0\\
\frac{γ}{γ-1} - \frac{1}{n} \big(∑_{i=1}^n \frac{1}{x_i-α}\big) · \frac{∑_{i=1}^n (x_i - α)^γ}{∑_{i=1}^n (x_i - α)^{γ-1}} &= 0
\end{align*}
* from shape $γ>0$
$$
\frac{1}{γ} - \frac{∑_{i=1}^n \log(x_i-α) · (x_i-α)^γ}{∑_{i=1}^n (x_i - α)^γ} + \frac{1}{n}
∑_{i=1}^n \log(x_i-α) = 0
$$

In the presence of **right-censored observations**, the score equations are:

* from delay α
$$
\frac{γ}{γ-1} - \frac{1}{n_e} \big(∑_{i∈O} \frac{1}{x_i-α}\big) · \frac{∑_{i=1}^n (x_i - α)^γ}{∑_{i=1}^n (x_i - α)^{γ-1}} = 0
$$
* from shape $γ>0$
$$
\frac{1}{γ} - \frac{∑_{i=1}^n \log(x_i-α) · (x_i-α)^γ}{∑_{i=1}^n (x_i - α)^γ} + \frac{1}{n_e} ∑_{i∈O} \log(x_i-α) = 0
$$

### Weighted MLE (MLEw)

Cousineau (2009) proposes slightly varied equations to solve for the parameter estimates. The scale parameter β gets estimated using a weighting factor $W_1$:

$$
\hat β = \sqrt[γ]{\frac{1}{n} ∑_{i=1}^n \frac{(x_i - α)^γ}{W_1}}
$$

For the remaining parameters, we modify the partial-derivative equations of the profiled log-likelihood function. They appear to come from the following partial derivatives of a modified log-likelihood function, where $W_1$, $W_2$ and $W_3(γ)$ are seen as weights that depend on the sample size $n$. Strictly speaking, these weights are random variables that ultimately depend on the true parameters α and $γ$, but we treat $W_1$ and $W_2$ as fixed for given $n$ and model $W_3$ as a function of the shape $γ$ for given $n$.

$$
\begin{aligned}
\frac{\partial l_w^{P}}{\partial α}(α, γ) &= n · \Big[ W_3(γ) · (γ-1) · \frac{∑_{i=1}^n (x_i - α)^{γ-1}}{∑_{i=1}^n (x_i - α)^γ} - (γ-1) · \frac{1}{n} ∑_{i=1}^n \frac{1}{x_i-α}\Big]\\
\frac{\partial l_w^{P}}{\partial γ}(α, γ) &= n · \Big[ \frac{W_2}{γ} - \frac{∑_{i=1}^n \log(x_i-α) · (x_i-α)^γ}{∑_{i=1}^n (x_i - α)^γ} + \frac{1}{n} ∑_{i=1}^n \log(x_i-α) \Big]
\end{aligned}
$$

Compared with the MLE equations (the partial-derivative equation for delay), we see that $W_3(γ)$ takes the role of $\frac{γ}{γ-1}$.

## Parameter transformation

The model fitting uses numeric optimization routines. We use parameter transformations to improve this process; in particular, through a suitable parameter transformation we can avoid constraints on the bounds of the parameters.
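As a toy illustration of this idea (a hypothetical example, not `incubate` code): optimizing over $\log λ ∈ ℝ$ instead of $λ > 0$ removes the positivity constraint, so an unconstrained optimizer can be used directly:

```{r transform_toy}
# fit an exponential rate by ML, unconstrained via the log transformation
set.seed(42)
y <- rexp(50, rate = 2)
nll <- function(theta) -sum(dexp(y, rate = exp(theta), log = TRUE))  # theta = log(lambda)
fit <- optim(par = 0, fn = nll, method = "BFGS")
exp(fit$par)  # back-transformed rate estimate, close to 1/mean(y)
```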
For instance, the initial delay parameter $α_1$ is by definition bounded above by the first observation $x_{(1)}$, and conventionally we assume $α_1 ≥ 0$ as well, in particular when we model a survival time. This 2^nd^ assumption is not strictly necessary, however, as a model with negative delay might also describe survival data well. To get rid of the (upper) bounds, the idea is to use the 1^st^ observation as a normalization for $α_1$ and find a transformation to **whole ℝ**:

### Log-based transformation

Assuming only an upper bound ($α_1 ≤ x_{(1)}$, negative values allowed), we get a transformation involving the logarithm:

$$
\tilde{α}_1 := \log\big(\frac{x_{(1)}-α_1}{x_{(1)}}\big) = \log\big(1 - \frac{α_1}{x_{(1)}}\big)
$$

### Logit-based transformation

Alternatively, if we also want to enforce the lower bound for $α_1$, we can apply a **logit-based transformation** to rescale the delay from $[0,x_{(1)}]$ to whole ℝ:

$$
\tilde{α}_1 := \operatorname{logit}\big(\frac{α_1}{x_{(1)}}\big) = -\log\big(1 + \frac{x_{(1)} - 2α_1}{α_1}\big)
$$

In R, the logit function is implemented efficiently in `stats::qlogis`. The back-transformation is $α_1 = \frac{x_{(1)}}{1+\exp(-\tilde{α}_1)}$ (cf. `stats::plogis` in R).

We visualize this logit-based transformation:

```{r plot_logit_transf, echo=FALSE, fig.align='center', fig.width=8, fig.height=3}
x1 <- 7
ggplot() +
  xlim(0, 7) +
  geom_function(fun = \(alpha1) -log1p((x1 - 2 * alpha1) / alpha1), n = 1001) +
  labs(x = expression(alpha), y = expression(tilde(alpha))) |
ggplot() +
  xlim(-9, 9) +
  geom_function(fun = \(alpha1_tr) x1 / (expm1(-alpha1_tr) + 2), n = 1001) +
  labs(x = expression(tilde(alpha)), y = expression(alpha),
       caption = bquote("transformation for data with " * x[(1)] == .(x1)))
```

## Implementing random right censoring

The functions for random generation of data, `rexp_delayed()` and `rweib_delayed()`, have an option to produce right-censored data.
Currently, we implement random right censoring under a **uniform censoring scheme**, starting after the initial delay phase.^[The question is whether we allow $C$ to occur during the incubation period. If yes and there is a long delay, then we will see many censored observations during the incubation period, which might be odd. Therefore, we decided to model the censoring process as $C \sim U(α, Z)$.] The censoring process $C$ is independent of the event process $X$. The observed time is $\min(X,C)$ and it is an event if and only if $X≤C$, i.e., when the event "makes it" before the censoring happens.

You provide the desired expected proportion of right censoring π via the parameter `cens=`. This information determines the upper limit $Z$ for the censoring process $U(α,Z)$. The expected proportion of censorings is $π = P(C < X)$. We make sure never to exceed the specified proportion of right censorings.

### Exponential

Substituting the density $f_X(x) = 𝟙_{x≥α} · λ \exp(-λ·(x-α))$ and using integration by parts:

\begin{equation*}
\begin{split}
π = P(C < X) &= \int_α^∞ P(C < x) \, f_X(x) \,\mathrm{d}x = \int_α^Z \frac{x-α}{Z-α} \, λ e^{-λ(x-α)} \,\mathrm{d}x + \int_Z^∞ λ e^{-λ(x-α)} \,\mathrm{d}x\\
&= \frac{1}{Z-α} \Big[ \frac{1-e^{-λ(Z-α)}}{λ} - (Z-α) \, e^{-λ(Z-α)} \Big] + e^{-λ(Z-α)}\\
&= \frac{1 - \exp(-λ(Z-α))}{λ(Z-α)}
\end{split}
\end{equation*}

Given π, this equation is solved (numerically) for the upper limit $Z$.

```{r plot_cens_km, fig.align='center'}
# illustrate the uniform censoring scheme with simulated delayed exponential data
# (the data-generating code below is a sketch with arbitrary parameter values)
library("survival")
set.seed(123)
a <- 2; la <- 0.7; Z <- a + 3
X <- a + rexp(120, rate = la)        # event process with delay a
C <- runif(120, min = a, max = Z)    # censoring process C ~ U(a, Z)
obsT <- pmin(X, C)                   # observed time
status <- as.numeric(X <= C)         # 1 = event, 0 = right-censored
km <- survfit(Surv(obsT, status) ~ 1)
plot(km, mark.time=T, conf.int=F, lty = 1:2, col = 1:2)
```

## MLEc

Corrected MLE (MLEc) was proposed by Cheng and Iles (1987). It uses the standard likelihood except for the first observation, which is instrumental for the delay estimation. For the first observation, MLEc uses the difference in the cumulative distribution function between the first two observations. The log-likelihood function becomes, using $z_i := x_i - α$,

$$
\begin{align}
l_\mathbf{x}(α, γ, β) &= \log\Big(\exp\big(-(\frac{z_1}{β})^γ\big) - \exp\big(-(\frac{z_2}{β})^γ\big)\Big) + (n-1) \log(γ) - (n-1) γ \log(β) + (γ-1) · ∑_{i=2}^n \log z_i - β^{-γ} · ∑_{i=2}^n z_i^γ\\
&= -(\frac{z_1}{β})^γ + \log\Big(1-\exp\big(-\frac{z_2^γ - z_1^γ}{β^γ}\big)\Big) + (n-1) \log(γ) - (n-1) γ \log(β) + (γ-1) · ∑_{i=2}^n \log z_i - β^{-γ} · ∑_{i=2}^n z_i^γ
\end{align}
$$

We could use a first-order Taylor approximation to simplify $\log(1+z) = z - \frac{z^2}{2} + \frac{z^3}{3} ∓ ...
≈ z$ for $z$ close to zero ($|z|<1$), here applied with $z = -\exp\big(-\frac{z_2^γ - z_1^γ}{β^γ}\big)$:

$$
l_\mathbf{x}(α, γ, β) ≈ -(\frac{z_1}{β})^γ - \exp\big(-\frac{z_2^γ - z_1^γ}{β^γ}\big) + (n-1) \log(γ) - (n-1) γ \log(β) + (γ-1) · ∑_{i=2}^n \log z_i - β^{-γ} · ∑_{i=2}^n z_i^γ
$$

We could use partial derivatives to find local maxima of this approximate log-likelihood.
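The quality of this approximation can be inspected numerically: since $\log(1+z) ≤ z$, the approximate log-likelihood is always an upper bound of the exact MLEc log-likelihood, and the two agree well when $z_2^γ - z_1^γ$ is large relative to $β^γ$. A quick sketch in base R (helper names and parameter values are our own, for illustration):

```{r mlec_approx_check}
# exact vs first-order approximated MLEc log-likelihood
set.seed(5)
alpha <- 1; shape <- 1.4; scale <- 2
x <- sort(alpha + rweibull(30, shape = shape, scale = scale))
z <- x - alpha
# terms shared by both versions (observations 2..n)
tail_part <- (length(x) - 1) * (log(shape) - shape * log(scale)) +
  (shape - 1) * sum(log(z[-1])) - sum((z[-1] / scale)^shape)
d <- (z[2]^shape - z[1]^shape) / scale^shape
ll_exact  <- -(z[1] / scale)^shape + log1p(-exp(-d)) + tail_part
ll_approx <- -(z[1] / scale)^shape - exp(-d) + tail_part
c(exact = ll_exact, approx = ll_approx)
```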