Last updated: 2018-05-05


Poisson Distribution

A random variable \(X\) has a Poisson distribution with parameter \(\mu>0\) if it takes integer values \(x = 0, 1, 2, \dots\) with probability \(P(X=x)=\frac{e^{-\mu}\mu^x}{x!}\). The mean and variance of \(X\) are equal: \(E(X)=Var(X)=\mu\).
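
A quick numerical check of these facts (a minimal sketch; the value of \(\mu\) is arbitrary):

# Check the pmf formula against dpois() and that mean and variance both equal mu
mu <- 3
x  <- 0:10
all.equal(exp(-mu) * mu^x / factorial(x), dpois(x, mu))   # TRUE
set.seed(1)
draws <- rpois(1e5, mu)
c(mean = mean(draws), var = var(draws))                   # both close to mu = 3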

Exponential family

\(f_Y(y;\theta,\phi)=\exp[(y\theta-b(\theta))/a(\phi)+c(y,\phi)]\). If \(\phi\) is known, then \(\theta\) is called the canonical parameter. For example, in the normal distribution \(\theta=\mu\) and \(\phi=\sigma^2\).

\(E(Y)=\mu=b'(\theta)\) and \(Var(Y)=b''(\theta)a(\phi)\). Expressed as a function of \(\mu\), \(b''(\theta)\) is called the variance function and is denoted \(V(\mu)\). \(a(\phi)\) typically has the form \(a(\phi)=\frac{\phi}{w}\), where \(\phi\) is the dispersion parameter (constant over observations) and \(w\) is the weight attached to each observation.

For the Poisson distribution, \(\theta=\log\mu\), \(\phi=1\), \(b(\theta)=\exp(\theta)\), and \(V(\mu)=\mu\).
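
A small check that this exponential-family form reproduces the Poisson pmf, taking \(\theta=\log\mu\), \(b(\theta)=\exp(\theta)\), \(a(\phi)=1\), and \(c(y,\phi)=-\log(y!)\) (the values of \(\mu\) and \(y\) below are arbitrary):

# Poisson written in exponential-family form
mu    <- 2.5
theta <- log(mu)
y     <- 0:8
dens_ef <- exp(y * theta - exp(theta) - lfactorial(y))   # exp[y*theta - b(theta) + c(y, phi)]
all.equal(dens_ef, dpois(y, mu))                          # TRUE
# b'(theta) = exp(theta) = mu gives the mean; b''(theta) = exp(theta) = mu gives V(mu) = mu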

Weighted least squares

\(y_i=x_i\beta+\epsilon_i\), where \(Var(\epsilon_i|x_i)=\sigma_i^2\) is not constant across observations and \(Cov(\epsilon_i,\epsilon_j)=0\) for \(i\neq j\).

Let \(w_i\propto 1/\sigma_i^2\) and \(W=\mathrm{diag}(w_1,\dots,w_n)\). Premultiplying by \(W^{1/2}\) gives a model with constant error variance, \(E(W^{1/2}y|X)=W^{1/2}X\beta\), and the weighted least squares estimate is \(\hat\beta=(X^TWX)^{-1}X^TWy\).
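
A short simulation illustrating the formula; the data and variance pattern below are made up for illustration, and the manual estimate is compared with lm():

# WLS with simulated heteroskedastic data
set.seed(20180501)
n   <- 200
x   <- runif(n)
sig <- 0.5 + 2 * x                 # Var(eps_i | x_i) = sig_i^2, not constant
y   <- 1 + 2 * x + rnorm(n, sd = sig)
w   <- 1 / sig^2                   # w_i proportional to 1 / sigma_i^2
X   <- cbind(1, x)
W   <- diag(w)
beta_hat <- solve(t(X) %*% W %*% X, t(X) %*% W %*% y)   # (X'WX)^{-1} X'Wy
cbind(manual = drop(beta_hat), lm = coef(lm(y ~ x, weights = w)))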

Iterative weighted least squares

Weight: \(W=V^{-1}(\frac{d\mu}{d\eta})^2\)

Dependent variate: \(z=\eta+(y-\mu)\frac{d\eta}{d\mu}\)

\(z\) is a linearized form of the link function applied to the data: \(g(y)\simeq g(\mu)+(y-\mu)g'(\mu)=\eta+(y-\mu)\frac{d\eta}{d\mu}\).

Derivation:

The score function of a GLM is \[s(\beta_j)=\frac{\partial l}{\partial \beta_j}=\sum_{i=1}^na(\phi_i)^{-1}V(\mu_i)^{-1}(y_i-\mu_i)x_{ij}\frac{d\mu_i}{d\eta_i}=\sum_{i=1}^n\frac{w_i}{\phi}V(\mu_i)^{-1}(y_i-\mu_i)x_{ij}\frac{d\mu_i}{d\eta_i}\]

One method of solving the score equations is Fisher's method of scoring, an iterative algorithm.

It turns out that each update can be written as a weighted least squares fit, where the weight matrix is the \(W\) defined above and the response \(z\) is called the adjusted (or working) variable.
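
Concretely, with \(W\) and \(z\) evaluated at the current estimate \(\beta^{(k)}\), each Fisher-scoring step is the WLS update \[\beta^{(k+1)}=(X^TW^{(k)}X)^{-1}X^TW^{(k)}z^{(k)}.\]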

Log-linear model

\(\log(\mu)=X\beta\), \(\eta=\log(\mu)\), and \(z=\log(\mu)+\frac{y-\mu}{\mu}\).

\(z_i=\log(\hat\mu_i)+\frac{y_i-\hat\mu_i}{\hat\mu_i}\) and \(w_{ii}=\hat\mu_i\)

Notice that \(E(z_i)=\log(\mu_i)\) and \(Var(z_i)=\frac{1}{\mu_i^2}Var(y_i)=1/\mu_i\).
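
A hand-rolled version of this IWLS iteration for a Poisson log-linear model, checked against glm(); the simulated data, starting value, and stopping rule below are arbitrary choices for the sketch:

# IWLS for a Poisson log-linear model, compared with glm()
set.seed(20180501)
n <- 500
x <- runif(n)
X <- cbind(1, x)
y <- rpois(n, exp(0.5 + 1.5 * x))

beta <- c(log(mean(y)), 0)                 # crude starting value
for (iter in 1:25) {
  eta <- drop(X %*% beta)
  mu  <- exp(eta)
  z   <- eta + (y - mu) / mu               # working variable z_i
  w   <- mu                                # w_ii = mu_i for the log link
  beta_new <- drop(solve(t(X) %*% (w * X), t(X) %*% (w * z)))
  if (max(abs(beta_new - beta)) < 1e-10) { beta <- beta_new; break }
  beta <- beta_new
}
cbind(iwls = beta, glm = coef(glm(y ~ x, family = poisson)))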

Smoothing via adaptive shrinkage (smash)

Review

Gaussian nonparametric regression (the Gaussian sequence model) is defined as \(y_i=\mu_i+\sigma_iz_i\) where \(z_i\sim N(0,1)\), \(i=1,2,\dots,T\). In multivariate form, it can be written as \(y|\mu\sim N_T(\mu,D)\), where \(D\) is the diagonal matrix with diagonal elements \((\sigma_1^2,\dots,\sigma_T^2)\). Applying a discrete wavelet transform (DWT), represented as an orthogonal matrix \(W\), gives \(Wy|W\mu\sim N(W\mu,WDW^T)\), i.e. \(\tilde y|\tilde \mu\sim N(\tilde\mu,WDW^T)\). If \(\mu\) has spatial structure, then \(\tilde\mu\) will be sparse.

In the heteroskedastic-variance case, we use only the diagonal of \(WDW^T\), so we can apply EB shrinkage to \(\tilde y_j|\tilde\mu_j\sim N(\tilde\mu_j,w_j^2)\), where \(w_j^2=\sum_{t=1}^T\sigma_t^2W_{jt}^2\).

If \(D\) is unknown, we estimate it using shrinkage methods under the assumption that the variances are also spatially structured.

Define \(Z_t^2=(y_t-\mu_t)^2\). Since \(y_t-\mu_t\sim N(0,\sigma_t^2)\), we have \(Z_t^2\sim\sigma_t^2\chi_1^2\) and \(E(Z_t^2)=\sigma_t^2\). Although \(Z_t^2\) is chi-squared distributed, we treat it as Gaussian, and we use \(\frac{2}{3}Z_t^4\) to estimate \(Var(Z_t^2)\) (this is unbiased because \(Var(Z_t^2)=2\sigma_t^4\) and \(E(Z_t^4)=3\sigma_t^4\)).
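
A quick Monte Carlo check of this unbiasedness claim (the value of \(\sigma\) is arbitrary):

# E[(2/3) Z^4] should equal Var(Z^2) = 2 sigma^4 when Z ~ N(0, sigma^2)
set.seed(20180501)
sigma <- 1.7
Z <- rnorm(1e6, mean = 0, sd = sigma)
c(estimate = mean(2/3 * Z^4), truth = 2 * sigma^4)   # should be close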

Generalization of smash - normal version

Suppose \(Y_t = \mu_t + N(0,s^2_t) + N(0,\sigma^2)\) where \(s^2_t\) is known and the mean vector \(\mu\) and (constant) \(\sigma\) are to be estimated.

If \(\sigma\) is also assumed known, this reduces to the heteroskedastic-variance case above.

For Poisson data \(X_t\sim Poi(m_t)\), let \(Y_t=\log(m_t)+\frac{X_t-m_t}{m_t}\) and set \(\mu_t=\log(m_t)\), \(s_t^2=\frac{1}{m_t}\). We can then estimate \(\mu_t\) and \(\sigma^2\) using the generalized smash model above.

Algorithm:

  1. Initialize \(m_t^{(0)}=\frac{1}{T}\sum_{t=1}^Tx_t\) (the overall mean) and \(s_t^2=1/m_t^{(0)}\).
  2. Estimate \(\mu_t\) using the generalized smash model.
  3. Update \(m_t=\exp(\hat\mu_t)\) and \(s_t^2=1/m_t\), then repeat steps 2 and 3 until convergence.

Some rationale: consider the Taylor series expansion of \(\log(X_t)\) around \(\lambda_t\): \(\log(X_t)=\log(\lambda_t)+\frac{1}{\lambda_t}(X_t-\lambda_t)-\frac{1}{2\lambda_t^2}(X_t-\lambda_t)^2+\dots\)
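
A sketch of the iteration above in R. The function smooth_gaus is a hypothetical placeholder for step 2 (the generalized-smash estimate of \(\mu_t\) from \(Y_t\) with known \(s_t^2\)); substitute whatever implementation you use for that step. The initialization and stopping rule shown are simple choices made for this sketch.

# Sketch of the Poisson iteration; `smooth_gaus(Y, s2)` is a hypothetical placeholder
# that returns an estimate of mu_t = log(m_t) given working data Y and variances s2.
smash_pois_sketch <- function(x, smooth_gaus, maxiter = 20, tol = 1e-6) {
  m <- rep(mean(x), length(x))        # initial estimate of m_t (overall mean used here)
  for (iter in 1:maxiter) {
    s2 <- 1 / m                       # s_t^2 = 1 / m_t
    Y  <- log(m) + (x - m) / m        # working data Y_t
    mu <- smooth_gaus(Y, s2)          # estimate mu_t = log(m_t)
    m_new <- exp(mu)
    if (max(abs(m_new - m)) < tol) return(m_new)
    m <- m_new
  }
  m
}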

Session information

sessionInfo()
R version 3.4.0 (2017-04-21)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 16299)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252 
[2] LC_CTYPE=English_United States.1252   
[3] LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C                          
[5] LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

loaded via a namespace (and not attached):
 [1] workflowr_1.0.1   Rcpp_0.12.16      digest_0.6.13    
 [4] rprojroot_1.3-2   R.methodsS3_1.7.1 backports_1.0.5  
 [7] git2r_0.21.0      magrittr_1.5      evaluate_0.10    
[10] stringi_1.1.6     whisker_0.3-2     R.oo_1.21.0      
[13] R.utils_2.6.0     rmarkdown_1.8     tools_3.4.0      
[16] stringr_1.3.0     yaml_2.1.19       compiler_3.4.0   
[19] htmltools_0.3.5   knitr_1.20       

This reproducible R Markdown analysis was created with workflowr 1.0.1