Bootstrapped t-test
boot_t_test.Rd
Performs t-tests with bootstrapped p-values and confidence intervals. This function supports standard hypothesis testing alternatives as well as equivalence and minimal effect testing, all with the familiar htest output structure.
Usage
boot_t_test(x, ...)
# Default S3 method
boot_t_test(
  x,
  y = NULL,
  var.equal = FALSE,
  paired = FALSE,
  alternative = c("two.sided", "less", "greater", "equivalence", "minimal.effect"),
  mu = 0,
  alpha = 0.05,
  boot_ci = c("stud", "basic", "perc"),
  R = 1999,
  ...
)
# S3 method for class 'formula'
boot_t_test(formula, data, subset, na.action, ...)
Arguments
- x
a (non-empty) numeric vector of data values.
- ...
further arguments to be passed to or from the underlying test functions.
- y
an optional (non-empty) numeric vector of data values.
- var.equal
a logical variable indicating whether to treat the two variances as being equal. If TRUE, the pooled variance is used to estimate the variance; otherwise the Welch (or Satterthwaite) approximation to the degrees of freedom is used.
- paired
a logical indicating whether you want a paired t-test.
- alternative
the alternative hypothesis:
* "two.sided": different from mu (default)
* "less": less than mu
* "greater": greater than mu
* "equivalence": between specified bounds
* "minimal.effect": outside specified bounds
- mu
a number or vector specifying the null hypothesis value(s):
* For standard alternatives: a single value (default = 0)
* For equivalence/minimal.effect: two values representing the lower and upper bounds
- alpha
alpha level (default = 0.05)
- boot_ci
method for bootstrap confidence interval calculation: "stud" (studentized, default), "basic" (basic bootstrap), or "perc" (percentile bootstrap).
- R
number of bootstrap replications (default = 1999).
- formula
a formula of the form lhs ~ rhs where lhs is a numeric variable giving the data values and rhs either 1 for a one-sample or paired test or a factor with two levels giving the corresponding groups. If lhs is of class "Pair" and rhs is 1, a paired test is done.
- data
an optional matrix or data frame (or similar: see model.frame) containing the variables in the formula formula. By default the variables are taken from environment(formula).
- subset
an optional vector specifying a subset of observations to be used.
- na.action
a function which indicates what should happen when the data contain NAs. Defaults to getOption("na.action").
Value
A list with class "htest" containing the following components:
"p.value": the bootstrapped p-value for the test.
"stderr": the bootstrapped standard error.
"conf.int": a bootstrapped confidence interval for the mean appropriate to the specified alternative hypothesis.
"estimate": the estimated mean or difference in means.
"null.value": the specified hypothesized value(s) of the mean or mean difference.
"alternative": a character string describing the alternative hypothesis.
"method": a character string indicating what type of bootstrapped t-test was performed.
"boot": the bootstrap samples of the mean or mean difference.
"data.name": a character string giving the name(s) of the data.
"call": the matched call.
Details
This function performs bootstrapped t-tests, providing more robust inference than standard parametric t-tests. It supports one-sample, two-sample (independent), and paired designs, as well as five different alternative hypotheses.
The bootstrap procedure follows these steps (a conceptual sketch follows the list):
1. Calculate the test statistic from the original data
2. Generate R bootstrap samples by resampling with replacement
3. Calculate the test statistic for each bootstrap sample
4. Compute the p-value by comparing the original test statistic to the bootstrap distribution
5. Calculate confidence intervals using the specified bootstrap method
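As a rough illustration of these steps, a one-sample, two-sided version of the resampling logic can be sketched in base R. This is a conceptual sketch only; the centering choice and the exact p-value convention are assumptions, not the package's internal code:
set.seed(123)
x  <- rnorm(20, mean = 0.5)  # illustrative data
mu <- 0                      # null value
R  <- 1999

# Step 1: test statistic from the original data
t_obs <- (mean(x) - mu) / (sd(x) / sqrt(length(x)))

# Steps 2-3: resample with replacement and recompute the statistic,
# centering on the sample mean so the resamples reflect the null
t_boot <- replicate(R, {
  xb <- sample(x, replace = TRUE)
  (mean(xb) - mean(x)) / (sd(xb) / sqrt(length(xb)))
})

# Step 4: two-sided p-value from the bootstrap distribution
p_boot <- 2 * min(mean(t_boot <= t_obs), mean(t_boot >= t_obs))
p_boot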
Three bootstrap confidence interval methods are available (see the sketch after this list):
- Studentized bootstrap ("stud"): accounts for the variability in standard error estimates
- Basic bootstrap ("basic"): uses the empirical distribution of bootstrap estimates
- Percentile bootstrap ("perc"): uses percentiles of the bootstrap distribution directly
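Given bootstrap draws of the estimate and of its standard error, the three interval types can be sketched as follows. These are the textbook formulas and only approximate what the function computes internally:
set.seed(123)
x     <- rnorm(20, mean = 0.5)
alpha <- 0.05
est   <- mean(x)
se0   <- sd(x) / sqrt(length(x))

# Bootstrap the estimate and its standard error
boot_fit <- replicate(1999, {
  xb <- sample(x, replace = TRUE)
  c(mean(xb), sd(xb) / sqrt(length(xb)))
})
boot_est <- boot_fit[1, ]
boot_se  <- boot_fit[2, ]

# "perc": percentiles of the bootstrap estimates
quantile(boot_est, c(alpha / 2, 1 - alpha / 2))

# "basic": percentile interval reflected around the original estimate
2 * est - quantile(boot_est, c(1 - alpha / 2, alpha / 2))

# "stud": quantiles of the bootstrapped t statistics, scaled by the
# original standard error
t_star <- (boot_est - est) / boot_se
est - quantile(t_star, c(1 - alpha / 2, alpha / 2)) * se0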
For the different alternatives, the p-values are calculated as follows (the combination rule for the interval hypotheses is sketched below):
- "two.sided": proportion of bootstrap statistics at least as extreme as the observed statistic (in either direction), multiplied by 2
- "less": proportion of bootstrap statistics less than or equal to the observed statistic
- "greater": proportion of bootstrap statistics greater than or equal to the observed statistic
- "equivalence": maximum of the two one-sided p-values (for the lower and upper bounds)
- "minimal.effect": minimum of the two one-sided p-values (for the lower and upper bounds)
For two-sample tests, the test is of \(\bar x - \bar y\) (the mean of x minus the mean of y). For paired samples, the test is of the difference scores \(z = x - y\), that is, of \(\bar z\) (the mean of the difference scores). For one-sample tests, the test is of \(\bar x\) (the mean of x).
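In other words, the paired case reduces to a one-sample problem on the difference scores. Using the before/after vectors from Example 3 below:
before <- c(5.1, 4.8, 6.2, 5.7, 6.0, 5.5, 4.9, 5.8)
after  <- c(5.6, 5.2, 6.7, 6.1, 6.5, 5.8, 5.3, 6.2)
z <- before - after
mean(z)  # the quantity tested by the paired bootstrap t-test
#> [1] -0.425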
Unlike the t_TOST function, this function returns a standard htest object for compatibility with other R functions, while still providing the benefits of bootstrapping.
For detailed information on calculation methods, see vignette("robustTOST").
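In an interactive session the vignette can be opened with vignette(); the package name below is assumed to be TOSTER:
vignette("robustTOST", package = "TOSTER")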
Purpose
Use this function when:
- You need more robust inference than provided by standard t-tests
- Your data don't meet the assumptions of normality or homogeneity of variance
- You want to perform equivalence or minimal effect testing with bootstrap methods
- Sample sizes are small or standard parametric approaches may be unreliable
- You prefer the standard htest output format for compatibility with other R functions
See also
Other Robust tests: boot_log_TOST(), boot_t_TOST(), brunner_munzel(), log_TOST(), wilcox_TOST()
Examples
# Example 1: Basic two-sample test with formula notation
data(sleep)
result <- boot_t_test(extra ~ group, data = sleep)
result # Standard htest output format
#>
#> Bootstrapped Welch Two Sample t-test
#>
#> data: extra by group
#> p-value = 0.04402
#> alternative hypothesis: true difference in means is not equal to 0
#> 95 percent confidence interval:
#> -3.5252843 0.1529321
#> sample estimates:
#> mean of x mean of y
#> 0.75 2.33
#>
# Example 2: One-sample bootstrapped t-test
set.seed(123)
x <- rnorm(20, mean = 0.5, sd = 1)
boot_t_test(x, mu = 0, R = 999) # Using fewer replicates for demonstration
#>
#> Bootstrapped One Sample t-test
#>
#> data: x
#> p-value = 0.002002
#> alternative hypothesis: true mean is not equal to 0
#> 95 percent confidence interval:
#> 0.184273 1.112973
#> sample estimates:
#> mean of x
#> 0.6416238
#>
# Example 3: Paired samples test with percentile bootstrap CI
before <- c(5.1, 4.8, 6.2, 5.7, 6.0, 5.5, 4.9, 5.8)
after <- c(5.6, 5.2, 6.7, 6.1, 6.5, 5.8, 5.3, 6.2)
boot_t_test(x = before, y = after,
            paired = TRUE,
            alternative = "less", # Testing if before < after
            boot_ci = "perc",
            R = 999)
#>
#> Bootstrapped Paired t-test
#>
#> data: x and y
#> p-value < 2.2e-16
#> alternative hypothesis: true mean difference is less than 0
#> 95 percent confidence interval:
#> -0.4625 -0.3875
#> sample estimates:
#> mean difference
#> -0.425
#>
# Example 4: Equivalence testing with bootstrapped t-test
# Testing if the effect is within ±0.5 units
data(mtcars)
boot_t_test(mpg ~ am, data = mtcars,
            alternative = "equivalence",
            mu = c(-0.5, 0.5),
            boot_ci = "stud",
            R = 999)
#>
#> Bootstrapped Welch Two Sample t-test
#>
#> data: mpg by am
#> p-value = 1
#> alternative hypothesis: equivalence
#> null values:
#> difference in means difference in means
#> -0.5 0.5
#> 90 percent confidence interval:
#> -10.568744 -4.177433
#> sample estimates:
#> mean of x mean of y
#> 17.14737 24.39231
#>
# Example 5: Minimal effect testing with bootstrapped t-test
# Testing if the effect is outside ±3 units
boot_t_test(mpg ~ am, data = mtcars,
            alternative = "minimal.effect",
            mu = c(-3, 3),
            R = 999)
#>
#> Bootstrapped Welch Two Sample t-test
#>
#> data: mpg by am
#> p-value = 0.003003
#> alternative hypothesis: minimal.effect
#> null values:
#> difference in means difference in means
#> -3 3
#> 90 percent confidence interval:
#> -10.598981 -4.175593
#> sample estimates:
#> mean of x mean of y
#> 17.14737 24.39231
#>