Flexible use of visual and acoustic cues during roost finding in Spix’s disc-winged bat
Statistical analysis
Source code and data are also available at https://github.com/maRce10/Roost-finding-behavior-in-Thyroptera-tricolor
Data analysis for the paper
Gioiosa, Miriam; Araya-Salas, Marcelo; Castillo Salazar, Cristian; Chaves-Ramírez, Silvia; Gioiosa, Maurizio; Rojas, Nazareth; Sanchez, Mariela; Scaravelli, Dino; Chaverri, Gloriana. 2023. Flexible use of visual and acoustic cues during roost finding in Spix’s disc-winged bat. Behavioral Ecology.
Abstract
The ability of an animal to detect environmental cues is crucial for its survival and fitness. In bats, sound certainly plays a significant role in the search for food, spatial navigation, and social communication. Yet the efficiency of bats' echolocation can be limited by atmospheric attenuation and background clutter. In this context, sound can be complemented by other sensory modalities, such as smell or vision. Spix's disc-winged bat (Thyroptera tricolor) uses acoustic cues from other group members to locate the roost (tubular unfurled leaves of plants in the order Zingiberales). Our research focused on how individuals find a roost that has not yet been occupied, given the need to find a suitable leaf approximately every day, whether at night or in daylight. We observed the process of roost finding in T. tricolor in a flight cage, manipulating the acoustic and visual sensory input available in each trial. Broadband noise was broadcast to mask echolocation, while experiments conducted at night greatly reduced the amount of light available. We measured the time needed to locate the roost under these different conditions. Results show that search time increased significantly when both visual and acoustic cues were limited. In contrast, bats seemed capable of using acoustic and visual cues in a similarly efficient manner, since roost search times were equally short when bats could use only sound, only vision, or both senses at the same time. Our results show that non-acoustic inputs can still be an important source of information for finding critical resources in bats.
Load packages
First install/load packages:
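The package chunk is folded in the rendered page. A minimal sketch of what it likely contains, assuming the packages that appear as attached in the session information at the end of this document (plus ggdist and gghalves, which are called with :: in the plotting code):

# CRAN packages used throughout the analysis (names taken from the session
# information below; this is a sketch, not the original chunk)
pkgs <- c("readxl", "brms", "ggplot2", "viridis", "ggdist", "gghalves",
          "cowplot", "pbapply", "kableExtra", "knitr", "seewave", "tuneR",
          "warbleR")

# install any missing packages, then attach them all
missing_pkgs <- pkgs[!pkgs %in% installed.packages()[, "Package"]]
if (length(missing_pkgs) > 0) install.packages(missing_pkgs)
invisible(lapply(pkgs, library, character.only = TRUE))

# brmsish (helper functions for brms summaries and contrasts) is assumed to be
# installed from GitHub, e.g. remotes::install_github("maRce10/brmsish")
library(brmsish)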
Read and prepare data
The data file can be downloaded from the paper's Dryad repository or from the GitHub repository (link). A sketch of how the data might be read and the treatment labels derived is shown after the list below.
Sensory input treatments:
- Sound & vision: diurnal experiments with no noise playback
- Noise control: diurnal experiments with control (no echolocation-masking) playback
- Sound: nocturnal experiments with no playback
- Vision: diurnal experiments with echolocation-masking playback
- Lessen input: nocturnal experiments with echolocation-masking playback
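A hedged sketch of reading the data and deriving the five treatment labels above; the file name (experiment_data.xlsx) and the columns time_of_day and playback are illustrative assumptions, not the actual names used in the repository:

# sketch only: file and column names are hypothetical
dat <- as.data.frame(readxl::read_excel("./data/raw/experiment_data.xlsx"))

# derive the sensory input treatment from time of day and playback type
dat$sensory_input <- with(dat, ifelse(
  time_of_day == "day" & playback == "none", "Sound & vision",
  ifelse(time_of_day == "day" & playback == "control", "Noise control",
  ifelse(time_of_day == "night" & playback == "none", "Sound",
  ifelse(time_of_day == "day" & playback == "masking", "Vision",
  "Lessen input")))))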
1 Sample sizes
First, exclude individuals with 1 experiment:
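The exclusion chunk is folded; a minimal sketch under the assumption that dat holds one row per trial, with an individual column and the sensory_input treatment column:

# count distinct treatments (experiments) per individual (sketch)
experiments_per_indiv <- tapply(dat$sensory_input, dat$individual,
                                function(x) length(unique(x)))

# keep only individuals tested in more than one treatment
keep <- names(experiments_per_indiv)[experiments_per_indiv > 1]
dat <- dat[dat$individual %in% keep, ]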
Experiments per date:
2019-11-06 2019-11-09 2019-11-10 2019-11-11 2019-11-12 2019-11-13 2019-11-14
4 3 4 3 4 7 4
2019-11-16 2019-11-17 2019-11-19 2019-11-25 2019-11-26 2019-11-27 2019-11-28
3 7 6 3 6 9 4
2019-11-29 2019-12-01 2019-12-07 2020-01-24 2020-01-28 2020-01-29
6 3 2 4 4 10
Tests per individual:
982.126051278486 982.126051278498 982.126051278508 982.126051278509
3 7 2 4
982.126051278515 982.126051278518 982.126051278520 982.126051278530
3 3 4 3
982.126051278531 982.126051278554 982.126051278558 982.126051278559
6 3 3 3
982.126051278588 982.126057845184 982.126057845188 982.126057845230
3 4 3 3
982.126058484283 982.126058484309 982.126058484311 982.126058484344
3 3 3 6
982.126058484345 982.126058484348 982126052945834 982126052945878
3 3 2 2
982126052945887 982126052945892 982126052945893 982126052945904
2 2 2 2
982126052945905 982126058484333 982126058484340
2 2 2
Number of individuals per sensory input treatment:
Sensory input | Individuals |
---|---|
Sound & vision | 22 |
Noise control | 21 |
Sound | 9 |
Vision | 22 |
Lessen input | 9 |
Experiments per individual:
982.126051278486 982.126051278498 982.126051278508 982.126051278509
3 3 2 3
982.126051278515 982.126051278518 982.126051278520 982.126051278530
3 3 3 3
982.126051278531 982.126051278554 982.126051278558 982.126051278559
3 3 3 3
982.126051278588 982.126057845184 982.126057845188 982.126057845230
3 3 3 3
982.126058484283 982.126058484309 982.126058484311 982.126058484344
3 3 3 3
982.126058484345 982.126058484348 982126052945834 982126052945878
3 3 2 2
982126052945887 982126052945892 982126052945893 982126052945904
2 2 2 2
982126052945905 982126058484333 982126058484340
2 2 2
Number of individuals by number of experiments:
Tests per treatment:
Sensory input | Day | Night |
---|---|---|
Sound & vision | 24 | 0 |
Noise control | 30 | 0 |
Sound | 0 | 9 |
Vision | 24 | 0 |
Lessen input | 0 | 9 |
Experiments per treatment:
Sensory input | Day | Night |
---|---|---|
Sound & vision | 22 | 0 |
Noise control | 21 | 0 |
Sound | 0 | 9 |
Vision | 22 | 0 |
Lessen input | 0 | 9 |
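The two cross-tabulations above come from hidden chunks. A sketch of how they can be reproduced, assuming a day_night column coding whether a trial was diurnal or nocturnal (an assumed column name) and treating an "experiment" as a unique individual-by-treatment combination (which matches the totals shown):

# tests (trials) per treatment and time of day
table(dat$sensory_input, dat$day_night)

# experiments per treatment: one row per unique individual-treatment combination
indiv_treat <- unique(dat[, c("individual", "sensory_input", "day_night")])
table(indiv_treat$sensory_input, indiv_treat$day_night)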
31 individuals were tested
The 2 individuals tested in only 1 treatment were excluded, so the final sample size was 31 individuals
The mean number of tests per individual (after excluding those tested in only 1 treatment) was 3.1 (range = 2–7)
The mean number of experimental treatments in which each individual was tested (after excluding those tested in only 1 treatment) was 2.68
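These summary values are computed in a hidden chunk; a sketch of equivalent base R calculations on the filtered data:

# number of individuals retained
length(unique(dat$individual))

# mean and range of tests (trials) per individual
tests_per_indiv <- table(dat$individual)
round(mean(tests_per_indiv), 1)
range(tests_per_indiv)

# mean number of treatments in which each individual was tested
treatments_per_indiv <- tapply(dat$sensory_input, dat$individual,
                               function(x) length(unique(x)))
round(mean(treatments_per_indiv), 2)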
2 Effect of sensory input on the time to find the roost
Bayesian generalized linear mixed models were fitted to the time (in s) to enter the roost, with individual as a random effect and sensory input treatment as the predictor. An intercept-only (null) model was also included in the analysis:
- Sensory input as a categorical predictor: \[Time\ to\ enter\ roost \sim sensory\ input + (1 | individual)\]
- Null model with no predictor: \[Time\ to\ enter\ roost \sim 1 + (1 | individual)\]
A loop is used to run these models, with 4 chains per model and 5000 iterations each. Models were run on the complete data set and on a subset including only trials in which individuals entered the roost.
2.1 Models using all the data
Raw data plot
Code
cols <- viridis(10)
agg_dat <- aggregate(time_to_enter ~ sensory_input, dat, mean)
agg_dat$n <- sapply(1:nrow(agg_dat), function(x) length(unique(dat$individual[dat$sensory_input == agg_dat$sensory_input[x]])))
agg_dat$n.labels <- paste("n =", agg_dat$n)
agg_dat$sensory_input <- factor(agg_dat$sensory_input)
# raincloud plot:
fill_color <- adjustcolor("#e85307", 0.6)
ggplot(dat, aes(y = time_to_enter, x = sensory_input)) +
# add half-violin from {ggdist} package
ggdist::stat_halfeye(
fill = fill_color,
alpha = 0.5,
# custom bandwidth
adjust = .5,
# adjust height
width = .6,
.width = 0,
# move geom to the right
justification = -.2,
point_colour = NA
) +
geom_boxplot(fill = fill_color,
width = .15,
# remove outliers
outlier.shape = NA
) +
# add justified jitter from the {gghalves} package
gghalves::geom_half_point(
color = fill_color,
# draw jitter on the left
side = "l",
# control range of jitter
range_scale = .4,
# add some transparency
alpha = .5,
transformation = ggplot2::position_jitter(height = 0)
) +
ylim(c(-30, 310)) +
geom_text(data = agg_dat, aes(y = rep(-25, 5), x = sensory_input, label = n.labels), nudge_x = -0.13, size = 6) +
scale_x_discrete(labels=c("Control" = "Noise control", "Sound vision" = "Sound & vision", "Vision" = "Vision", "Lessen input" = "Lessen input")) +
labs(x = "Sensory input ", y = "Time to enter roost (s)") + theme(axis.text.x = element_text(angle = 15, hjust = 1))
Code
model_formulas <- c("time_to_enter ~ sensory_input + (1 | individual)",
"time_to_enter ~ 1 + (1 | individual)")
iter <- 5000
chains <- 4
priors <- c(set_prior("student_t(10,0,1)", class = "sigma"), set_prior("student_t(10,0,1)",
class = "sd"))
# Run loops with models
brms_models <- lapply(model_formulas, function(x) {
mod <- brm(formula = x, iter = iter, thin = 1, data = dat, family = lognormal(),
silent = 2, chains = chains, cores = chains, prior = priors)
mod <- add_criterion(mod, c("loo"), save_pars = save_pars(all = TRUE))
return(mod)
})
names(brms_models) <- model_formulas
saveRDS(brms_models, "./data/processed/regresion_models_brms.RDS")
Compare models:
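The comparison chunk is folded. A sketch of how the LOO comparison below can be reproduced with plain brms/loo calls, using the criteria added with add_criterion() above (the saved file name is taken from the fitting chunk):

# read the fitted models back in and compare their expected log predictive density
brms_models <- readRDS("./data/processed/regresion_models_brms.RDS")
loos <- lapply(brms_models, function(x) x$criteria$loo)
comp_mods <- loo::loo_compare(loos)
comp_mods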
Model | elpd_diff | se_diff | elpd_loo | se_elpd_loo | p_loo | se_p_loo | looic | se_looic |
---|---|---|---|---|---|---|---|---|
time_to_enter ~ sensory_input + (1 | individual) | 0.000 | 0.000 | -510.740 | 16.182 | 20.972 | 1.890 | 1021.481 | 32.364 |
time_to_enter ~ 1 + (1 | individual) | -7.225 | 3.742 | -517.965 | 16.101 | 19.136 | 1.764 | 1035.930 | 32.202 |
Best model:
- time_to_enter ~ sensory_input + (1 | individual)
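The summary table below is produced by a hidden chunk (likely a brmsish helper); similar diagnostics and estimates can be obtained directly from the best model with standard brms calls:

# pull out the best model (first row of the LOO comparison) and summarize it
best_mod <- brms_models[[rownames(comp_mods)[1]]]
summary(best_mod)  # Rhat, bulk/tail ESS, and 95% credible intervals
fixef(best_mod)    # posterior means and credible intervals of the treatment effects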
No. | priors | formula | iterations | chains | thinning | warmup | diverg_transitions | rhats > 1.05 | min_bulk_ESS | min_tail_ESS | seed |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | Intercept-student_t(3, 3.5, 2.5) sd-student_t(10,0,1) sigma-student_t(10,0,1) | time_to_enter ~ sensory_input + (1 | individual) | 5000 | 4 | 1 | 2500 | 0 | 0 | 7632.569 | 7283.589 | 652773362 |
Parameter | Estimate | l-95% CI | u-95% CI | Rhat | Bulk_ESS | Tail_ESS |
---|---|---|---|---|---|---|
b_Intercept | 3.228 | 2.643 | 3.827 | 1 | 7632.569 | 7360.967 |
b_Noisecontrol | 0.208 | -0.424 | 0.854 | 1 | 13237.365 | 7884.717 |
b_Sound | 0.476 | -0.647 | 1.636 | 1 | 8482.597 | 7968.398 |
b_Vision | 0.604 | -0.075 | 1.269 | 1 | 13132.351 | 7283.589 |
b_Lesseninput | 2.407 | 1.284 | 3.534 | 1.001 | 8279.037 | 7899.409 |
Compare all contrasts
Code
# contrasts
contrasts(fit = brms_models[[rownames(comp_mods)[1]]], predictor = "sensory_input",
n.posterior = 2000, level.sep = " VS ", fill = "#e85307", gsub.pattern = c("Lesseninput",
"Soundvision", "Noisecontrol"), gsub.replacement = c("Lessen input",
"Sound & vision", "Noise control"), html.table = TRUE, plot = TRUE,
sort.levels = c("Lessen input", "Vision", "Sound", "Sound & vision",
"Noise control"))
No. | Hypothesis | Estimate | Est.Error | l-95% CI | u-95% CI |
---|---|---|---|---|---|
1 | Lessen input VS Vision | 1.803 | 0.571 | 0.669 | 2.918 |
2 | Lessen input VS Sound | 1.931 | 0.570 | 0.811 | 3.062 |
3 | Lessen input VS Sound & vision | 2.407 | 0.572 | 1.284 | 3.534 |
4 | Lessen input VS Noise control | 2.198 | 0.561 | 1.079 | 3.262 |
5 | Vision VS Sound | 0.128 | 0.577 | -1.022 | 1.257 |
6 | Vision VS Sound & vision | 0.604 | 0.344 | -0.075 | 1.269 |
7 | Vision VS Noise control | 0.395 | 0.331 | -0.245 | 1.044 |
8 | Sound VS Sound & vision | 0.476 | 0.574 | -0.647 | 1.636 |
9 | Sound VS Noise control | 0.267 | 0.568 | -0.859 | 1.405 |
10 | Sound & vision VS Noise control | 0.208 | 0.325 | -0.424 | 0.854 |
2.1.1 Takeaways
- “Lessen input” was the only sensory input treatment in which an increase in time was detected: it showed a longer time to enter the roost than all other treatments
2.2 Models on the data excluding bats that did not enter the roost
Raw data plot
Code
agg_dat <- aggregate(time_to_enter ~ sensory_input, dat[dat$time_to_enter < 300, ], mean)
agg_dat$n <- sapply(1:nrow(agg_dat), function(x) length(unique(dat$individual[dat$sensory_input == agg_dat$sensory_input[x] & dat$time_to_enter < 300])))
agg_dat$labels <- c("a", "a", "a", "a", "b")
agg_dat$n.labels <- paste("n =", agg_dat$n)
agg_dat$sensory_input <- factor(agg_dat$sensory_input)
# raincloud plot:
ggplot(dat[dat$time_to_enter < 300, ], aes(y = time_to_enter, x = sensory_input)) +
# add half-violin from {ggdist} package
ggdist::stat_halfeye(
fill = fill_color,
alpha = 0.5,
# custom bandwidth
adjust = .5,
# adjust height
width = .6,
.width = 0,
# move geom to the right
justification = -.2,
point_colour = NA
) +
geom_boxplot(fill = fill_color,
width = .15,
# remove outliers
outlier.shape = NA
) +
# add justified jitter from the {gghalves} package
gghalves::geom_half_point(
color = fill_color,
# draw jitter on the left
side = "l",
# control range of jitter
range_scale = .4,
# add some transparency
alpha = .5,
transformation = ggplot2::position_jitter(height = 0)
) +
ylim(c(-30, 310)) +
# geom_text(data = agg_dat, aes(y = rep(340, 5), x = sensory_input, label = labels), size = 7) +
geom_text(data = agg_dat, aes(y = rep(-25, 5), x = sensory_input, label = n.labels), nudge_x = -0.13, size = 6) +
scale_x_discrete(labels=c("Control" = "Noise control", "Sound vision" = "Sound & vision", "Vision" = "Vision", "Lessen input" = "Lessen input")) +
labs(x = "Sensory input ", y = "Time to enter roost (s)") + theme(axis.text.x = element_text(angle = 15, hjust = 1))
Code
model_formulas <- c("time_to_enter ~ sensory_input + (1 | individual)",
"time_to_enter ~ 1 + (1 | individual)")
iter <- 5000
chains <- 4
priors <- c(set_prior("student_t(10,0,1)", class = "sigma"), set_prior("student_t(10,0,1)",
class = "sd"))
# Run loops with models
brms_models <- lapply(model_formulas, function(x) {
mod <- brm(formula = x, iter = iter, thin = 1, data = dat[dat$time_to_enter <
300, ], family = lognormal(), silent = 2, chains = chains,
cores = chains, prior = priors, control = list(adapt_delta = 0.99,
max_treedepth = 15))
mod <- add_criterion(mod, c("loo"), save_pars = save_pars(all = TRUE))
return(mod)
})
names(brms_models) <- model_formulas
saveRDS(brms_models, "./data/processed/regression_models_brms_subset.RDS")
Compare models:
Model | elpd_diff | se_diff | elpd_loo | se_elpd_loo | p_loo | se_p_loo | looic | se_looic |
---|---|---|---|---|---|---|---|---|
time_to_enter ~ sensory_input + (1 | individual) | 0.000 | 0.000 | -398.054 | 14.044 | 18.956 | 1.898 | 796.107 | 28.087 |
time_to_enter ~ 1 + (1 | individual) | -2.318 | 2.771 | -400.372 | 14.170 | 15.192 | 1.660 | 800.744 | 28.340 |
Best model:
- time_to_enter ~ sensory_input + (1 | individual)
No. | priors | formula | iterations | chains | thinning | warmup | diverg_transitions | rhats > 1.05 | min_bulk_ESS | min_tail_ESS | seed |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | Intercept-student_t(3, 3.1, 2.5) sd-student_t(10,0,1) sigma-student_t(10,0,1) | time_to_enter ~ sensory_input + (1 | individual) | 5000 | 4 | 1 | 2500 | 0 | 0 | 5171.14 | 6488.326 | 1582467647 |
Parameter | Estimate | l-95% CI | u-95% CI | Rhat | Bulk_ESS | Tail_ESS |
---|---|---|---|---|---|---|
b_Intercept | 3.018 | 2.417 | 3.634 | 1 | 5171.140 | 6488.326 |
b_Noisecontrol | 0.124 | -0.554 | 0.803 | 1 | 8844.558 | 7129.572 |
b_Sound | 0.693 | -0.388 | 1.814 | 1 | 6254.619 | 6711.689 |
b_Vision | 0.518 | -0.186 | 1.225 | 1 | 9433.443 | 7726.126 |
b_Lesseninput | 2.948 | 0.985 | 4.924 | 1 | 7428.895 | 7424.103 |
Compare all contrasts
Code
# contrasts
contrasts(fit = brms_models[[rownames(comp_mods)[1]]], predictor = "sensory_input",
n.posterior = 2000, level.sep = " VS ", fill = "#e85307", gsub.pattern = c("Lesseninput",
"Soundvision", "Noisecontrol"), gsub.replacement = c("Lessen input",
"Sound & vision", "Noise control"), html.table = TRUE, plot = TRUE,
sort.levels = c("Lessen input", "Vision", "Sound", "Sound & vision",
"Noise control"))
No. | Hypothesis | Estimate | Est.Error | l-95% CI | u-95% CI |
---|---|---|---|---|---|
1 | Lessen input VS Vision | 2.429 | 1.008 | 0.472 | 4.434 |
2 | Lessen input VS Sound | 2.255 | 1.008 | 0.256 | 4.221 |
3 | Lessen input VS Sound & vision | 2.948 | 1.000 | 0.985 | 4.924 |
4 | Lessen input VS Noise control | 2.824 | 0.992 | 0.906 | 4.778 |
5 | Vision VS Sound | -0.175 | 0.555 | -1.277 | 0.922 |
6 | Vision VS Sound & vision | 0.518 | 0.356 | -0.186 | 1.225 |
7 | Vision VS Noise control | 0.394 | 0.345 | -0.285 | 1.081 |
8 | Sound VS Sound & vision | 0.693 | 0.560 | -0.388 | 1.814 |
9 | Sound VS Noise control | 0.569 | 0.552 | -0.516 | 1.656 |
10 | Sound & vision VS Noise control | 0.124 | 0.343 | -0.554 | 0.803 |
2.2.1 Takeaways
- Results remain qualitatively similar after excluding trials in which bats did not enter the roost
3 Treatment diagram
Code
reverse.viridis <- function(...) viridis(..., direction = -1)
wv <- read_wave("https://github.com/maRce10/Roost-finding-behavior-in-Thyroptera-tricolor/raw/main/data/raw/thyroptera_echolocation_clip.wav")
wv <- ffilter(wv, from = 40000, to = 170000, bandpass = TRUE, output = "Wave")
wv1 <- cutw(wv, from = 0.005, to = 0.009, output = "Wave")
wv2 <- cutw(wv, from = 0.036, to = duration(wv) - 0.004, output = "Wave")
subwv <- pastew(wv1, wv2, output = "Wave")
ns <- seewave::noisew(f = subwv@samp.rate, output = "Wave", d = duration(subwv))
mask <- ffilter(ns, from = 0, to = 45000, bandpass = FALSE, output = "Wave")
no_mask <- ffilter(ns, from = 0, to = 45000, bandpass = TRUE, output = "Wave")
subwv@left <- subwv@left[1:length(mask)]
ovlp <- 99
cex <- 0.8 # try cex 2 for saving plot
res <- 200
bl <- 4
hgh <- wdh <- 480 * 4
lf <- c(0, 2/14, 6/14, 10/14, 0, 1/6, 1/2, 5/6)
rgh <- c(2/14, 6/14, 10/14, 1, 1/6, 1/2, 5/6, 1)
btm <- c(1/2, 1/2, 1/2, 1/2, 0, 0, 0, 0)
tp <- c(1, 1, 1, 1, 1/2, 1/2, 1/2, 1/2)
m <- cbind(lf, rgh, btm, tp)
# graphics.off()
invisible(close.screen(all.screens = TRUE))
# tiff(filename = './output/diagram_experimental_design.tiff',
# res = 300, width = 3500, height = 2000)
sc <- split.screen(figs = m)
# sound and vision
screen(2)
par(mar = c(3, 6, 1, 1))
warbleR:::spectro_wrblr_int2(subwv, grid = FALSE, collev.min = -35,
wl = 120, palette = reverse.viridis, ovlp = ovlp, zp = 1000, tlim = c(0.001,
0.0085), axisX = FALSE, tlab = NULL, axisY = FALSE, flab = NULL,
flim = c(0, 220))
box(lwd = bl)
axis(side = 2, cex.axis = cex, labels = c(0, 100, 200), at = c(0,
100, 200))
mtext(side = 2, text = "Frequency (kHz)", line = 4, cex = cex)
mtext(side = 1, text = "Sound & vision ", line = 1.5, cex = cex)
# vision
screen(3)
par(mar = c(3, 6, 1, 1))
warbleR:::spectro_wrblr_int2(subwv + mask * 1000, grid = FALSE, collev.min = -35,
wl = 120, palette = reverse.viridis, ovlp = ovlp, zp = 1000, tlim = c(0.001,
0.0085), axisX = FALSE, tlab = NULL, axisY = FALSE, flab = NULL,
flim = c(0, 220))
box(lwd = bl)
axis(side = 2, cex.axis = cex, labels = c(0, 100, 200), at = c(0,
100, 200))
mtext(side = 1, text = "Vision", line = 1.5, cex = cex)
# Noise control
screen(4)
par(mar = c(3, 6, 1, 1), new = TRUE)
warbleR:::spectro_wrblr_int2(subwv + no_mask * 1000, grid = FALSE,
collev.min = -35, wl = 120, palette = reverse.viridis, ovlp = ovlp,
zp = 1000, tlim = c(0.001, 0.0085), axisX = FALSE, tlab = NULL,
axisY = FALSE, flab = NULL, flim = c(0, 220))
box(lwd = bl)
axis(side = 2, cex.axis = cex, labels = c(0, 100, 200), at = c(0,
100, 200))
mtext(side = 1, text = "Noise control", line = 1.5, cex = cex)
# sound
par(bg = "black", new = TRUE)
screen(6)
par(mar = c(3, 8.5, 1, 1))
warbleR:::spectro_wrblr_int2(subwv, grid = FALSE, collev.min = -35,
wl = 120, palette = reverse.viridis, ovlp = ovlp, zp = 1000, tlim = c(0.001,
0.0085), axisX = FALSE, tlab = NULL, axisY = FALSE, flab = NULL,
flim = c(0, 220))
box(lwd = bl, col = "white")
axis(side = 2, cex.axis = cex, labels = c(0, 100, 200), at = c(0,
100, 200), col.ticks = "white", col = "white", col.axis = "white")
mtext(side = 1, text = "Sound", line = 1.5, cex = cex, col = "white")
# lessen input
par(bg = "black", new = TRUE)
screen(7)
par(mar = c(3, 8.5, 1, 1))
# par(mar = c(1, 9, 1, 1), bg = 'black', new = TRUE)
warbleR:::spectro_wrblr_int2(subwv + mask * 1000, grid = FALSE, collev.min = -35,
wl = 120, palette = reverse.viridis, ovlp = ovlp, zp = 1000, tlim = c(0.001,
0.0085), axisX = FALSE, tlab = NULL, axisY = FALSE, flab = NULL,
flim = c(0, 220))
box(lwd = bl, col = "white")
axis(side = 2, cex.axis = cex, labels = c(0, 100, 200), at = c(0,
100, 200), col.ticks = "white", col = "white", col.axis = "white")
mtext(side = 1, text = "Lessen input", line = 1.5, cex = cex, col = "white")
# dev.off()
par(bg = "black", new = TRUE)
screen(5)
par(mar = c(0, 0, 0, 0))
plot(1, frame.plot = FALSE, type = "n")
text(x = 1, y = 1.05, "Nighttime", srt = 90, cex = 1.2 * cex, col = "white")
par(bg = "black", new = TRUE)
screen(8)
par(bg = "white", new = TRUE)
screen(1)
par(mar = c(0, 0, 0, 0))
plot(1, frame.plot = FALSE, type = "n")
text(x = 1, y = 1.05, "Daytime", srt = 90, cex = 1.2 * cex)
Session information
R version 4.1.0 (2021-05-18)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 20.04.2 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/atlas/libblas.so.3.10.3
LAPACK: /usr/lib/x86_64-linux-gnu/atlas/liblapack.so.3.10.3
locale:
[1] LC_CTYPE=pt_BR.UTF-8 LC_NUMERIC=C
[3] LC_TIME=es_CR.UTF-8 LC_COLLATE=pt_BR.UTF-8
[5] LC_MONETARY=es_CR.UTF-8 LC_MESSAGES=pt_BR.UTF-8
[7] LC_PAPER=es_CR.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=es_CR.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] brmsish_1.0.0 warbleR_1.1.28 NatureSounds_1.0.4 knitr_1.42
[5] seewave_2.2.0 tuneR_1.4.1 kableExtra_1.3.4 cowplot_1.1.1
[9] pbapply_1.6-0 ggplot2_3.4.0 viridis_0.6.2 viridisLite_0.4.1
[13] brms_2.18.0 Rcpp_1.0.9 readxl_1.3.1
loaded via a namespace (and not attached):
[1] backports_1.4.1 systemfonts_1.0.4 plyr_1.8.7
[4] igraph_1.3.5 splines_4.1.0 crosstalk_1.2.0
[7] TH.data_1.1-0 rstantools_2.2.0 inline_0.3.19
[10] digest_0.6.31 htmltools_0.5.4 fansi_1.0.3
[13] magrittr_2.0.3 checkmate_2.1.0 RcppParallel_5.1.5
[16] matrixStats_0.62.0 xts_0.12.2 sandwich_3.0-1
[19] svglite_2.1.0 prettyunits_1.1.1 colorspace_2.0-3
[22] signal_0.7-7 rvest_1.0.3 ggdist_3.2.0
[25] textshaping_0.3.5 xfun_0.36 dplyr_1.0.10
[28] callr_3.7.3 crayon_1.5.2 RCurl_1.98-1.9
[31] jsonlite_1.8.4 lme4_1.1-27.1 ape_5.6-2
[34] survival_3.2-11 zoo_1.8-11 glue_1.6.2
[37] gtable_0.3.1 emmeans_1.8.1-1 webshot_0.5.4
[40] distributional_0.3.1 pkgbuild_1.4.0 rstan_2.21.7
[43] abind_1.4-5 scales_1.2.1 mvtnorm_1.1-3
[46] DBI_1.1.1 miniUI_0.1.1.1 dtw_1.23-1
[49] xtable_1.8-4 diffobj_0.3.4 proxy_0.4-27
[52] stats4_4.1.0 StanHeaders_2.21.0-7 DT_0.26
[55] htmlwidgets_1.5.4 httr_1.4.4 threejs_0.3.3
[58] posterior_1.3.1 ellipsis_0.3.2 pkgconfig_2.0.3
[61] loo_2.4.1.9000 farver_2.1.1 utf8_1.2.2
[64] labeling_0.4.2 tidyselect_1.2.0 rlang_1.0.6
[67] reshape2_1.4.4 later_1.3.0 munsell_0.5.0
[70] cellranger_1.1.0 tools_4.1.0 cli_3.6.0
[73] generics_0.1.3 ggridges_0.5.4 evaluate_0.20
[76] stringr_1.5.0 fastmap_1.1.0 ragg_1.1.3
[79] yaml_2.3.7 processx_3.8.0 nlme_3.1-152
[82] mime_0.12 projpred_2.0.2 formatR_1.11
[85] xml2_1.3.3 compiler_4.1.0 bayesplot_1.9.0
[88] shinythemes_1.2.0 rstudioapi_0.14 gamm4_0.2-6
[91] tibble_3.1.8 stringi_1.7.12 highr_0.10
[94] ps_1.7.2 Brobdingnag_1.2-9 lattice_0.20-44
[97] Matrix_1.5-1 nloptr_1.2.2.2 markdown_1.3
[100] shinyjs_2.1.0 fftw_1.0-7 tensorA_0.36.2
[103] vctrs_0.5.2 pillar_1.8.1 lifecycle_1.0.3
[106] bridgesampling_1.1-2 estimability_1.4.1 bitops_1.0-7
[109] httpuv_1.6.6 R6_2.5.1 promises_1.2.0.1
[112] gridExtra_2.3 codetools_0.2-18 boot_1.3-28
[115] colourpicker_1.2.0 MASS_7.3-54 gtools_3.9.3
[118] assertthat_0.2.1 rjson_0.2.21 rprojroot_2.0.3
[121] withr_2.5.0 shinystan_2.6.0 multcomp_1.4-17
[124] mgcv_1.8-36 parallel_4.1.0 gghalves_0.1.3
[127] grid_4.1.0 coda_0.19-4 minqa_1.2.4
[130] rmarkdown_2.20 shiny_1.7.3 base64enc_0.1-3
[133] dygraphs_1.1.1.6