
This vignette demonstrates fitting a heterogeneous treed distributed lag model (HDLM). More details can be found in Mork et al. (2024) <doi:10.1080/01621459.2023.2258595>.
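
The code below uses the dlmtree package together with dplyr for data preparation, so attach both packages before running the examples.

library(dlmtree)
library(dplyr)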

Load data

Simulated data is available on GitHub. It can be loaded with the following code.

sbd_dlmtree <- get_sbd_dlmtree()

Data preparation

# Response and covariates
sbd_cov <- sbd_dlmtree %>% 
            select(bwgaz, ChildSex, MomAge, GestAge, MomPriorBMI, Race,
                    Hispanic, MomEdu, SmkAny, Marital, Income,
                    EstDateConcept, EstMonthConcept, EstYearConcept)

# Exposure data
sbd_exp <- list(PM25 = sbd_dlmtree %>% select(starts_with("pm25_")),
                TEMP = sbd_dlmtree %>% select(starts_with("temp_")),
                SO2 = sbd_dlmtree %>% select(starts_with("so2_")),
                CO = sbd_dlmtree %>% select(starts_with("co_")),
                NO2 = sbd_dlmtree %>% select(starts_with("no2_")))
# Convert each exposure data frame to a numeric matrix
sbd_exp <- sbd_exp %>% lapply(as.matrix)
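
As a quick optional check, each element of sbd_exp should now be a numeric matrix with the same number of rows as sbd_cov and one column per exposure lag.

# Check that the exposure matrices align with the covariate data
nrow(sbd_cov)
sapply(sbd_exp, dim)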

Fitting the model

hdlm.fit <- dlmtree(formula = bwgaz ~ ChildSex + MomAge + MomPriorBMI +
                                        Race + Hispanic + SmkAny + EstMonthConcept,
                    data = sbd_cov,
                    exposure.data = sbd_exp[["PM25"]],
                    family = "gaussian", dlm.type = "linear", het = TRUE,
                    hdlm.modifiers = c("ChildSex", "MomAge", "MomPriorBMI", "SmkAny"),
                    hdlm.modifier.splits = 15,
                    n.burn = 2500, n.iter = 10000, n.thin = 5)
#> Preparing data...
#> 
#> Running shared HDLM:
#> Burn-in % complete 
#> [0--------25--------50--------75--------100]
#>  ''''''''''''''''''''''''''''''''''''''''''
#> MCMC iterations (est time: 3.1 minutes)
#> [0--------25--------50--------75--------100]
#>  ''''''''''''''''''''''''''''''''''''''''''
#> Compiling results...
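
The same call extends to the other pollutants prepared above by swapping the matrix passed to exposure.data; all other settings stay the same. A sketch for NO2 follows (illustrative and not run here; the object name hdlm.fit.no2 is arbitrary).

# Only the exposure matrix changes relative to the PM2.5 fit above
hdlm.fit.no2 <- dlmtree(formula = bwgaz ~ ChildSex + MomAge + MomPriorBMI +
                                            Race + Hispanic + SmkAny + EstMonthConcept,
                        data = sbd_cov,
                        exposure.data = sbd_exp[["NO2"]],
                        family = "gaussian", dlm.type = "linear", het = TRUE,
                        hdlm.modifiers = c("ChildSex", "MomAge", "MomPriorBMI", "SmkAny"),
                        hdlm.modifier.splits = 15,
                        n.burn = 2500, n.iter = 10000, n.thin = 5)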

Model fit summary

hdlm.sum <- summary(hdlm.fit)
hdlm.sum
#> ---
#> HDLM summary
#> 
#> Model run info:
#> - bwgaz ~ ChildSex + MomAge + MomPriorBMI + Race + Hispanic + SmkAny + EstMonthConcept 
#> - sample size: 10,000 
#> - family: gaussian 
#> - 20 trees
#> - 2500 burn-in iterations
#> - 10000 post-burn iterations
#> - 5 thinning factor
#> - 0.5 modifier sparsity prior
#> - 0.95 confidence level
#> 
#> Fixed effects:
#>                        Mean  Lower  Upper
#> *(Intercept)          1.284  0.905  1.649
#>  ChildSexM            0.110 -0.327  0.560
#>  MomAge               0.000 -0.004  0.004
#> *MomPriorBMI         -0.021 -0.023 -0.018
#>  RaceAsianPI          0.047 -0.080  0.170
#>  RaceBlack            0.056 -0.071  0.183
#>  Racewhite            0.036 -0.087  0.154
#> *HispanicNonHispanic  0.255  0.232  0.277
#> *SmkAnyY             -0.403 -0.454 -0.354
#>  EstMonthConcept2    -0.049 -0.106  0.005
#> *EstMonthConcept3    -0.133 -0.197 -0.071
#> *EstMonthConcept4    -0.208 -0.275 -0.142
#> *EstMonthConcept5    -0.194 -0.247 -0.145
#> *EstMonthConcept6    -0.200 -0.253 -0.148
#>  EstMonthConcept7    -0.032 -0.087  0.020
#> *EstMonthConcept8     0.151  0.088  0.210
#> *EstMonthConcept9     0.392  0.325  0.455
#> *EstMonthConcept10    0.382  0.317  0.444
#> *EstMonthConcept11    0.339  0.286  0.392
#> *EstMonthConcept12    0.139  0.091  0.188
#> ---
#> * = CI does not contain zero
#> 
#> Modifiers:
#>                PIP
#> ChildSex    1.0000
#> MomAge      0.8595
#> MomPriorBMI 0.8660
#> SmkAny      0.1920
#> ---
#> PIP = Posterior inclusion probability
#> 
#> residual standard errors: 0.004
#> ---
#> To obtain exposure effect estimates, use the 'shiny(fit)' function.

Launching Shiny app

# Not run: opens an interactive app for exploring exposure effect estimates
# shiny(hdlm.fit)