[Stable]

A function to compare standardized mean differences (SMDs) between independent studies. It is intended for assessing the compatibility of an original study with its replication (lower p-values indicate lower compatibility).

Usage

compare_smd(
  smd1,
  n1,
  se1 = NULL,
  smd2,
  n2,
  se2 = NULL,
  paired = FALSE,
  alternative = c("two.sided", "less", "greater", "equivalence", "minimal.effect"),
  null = 0,
  TOST = FALSE
)

Arguments

smd1, smd2

SMDs from study 1 & 2, respectively.

n1, n2

Sample size(s) from study 1 & 2, respectively. Can be a single number (total sample size) or a vector of 2 numbers (group sizes) for independent samples designs.

se1, se2

User-supplied standard errors (SEs). If provided, these override the internal standard error calculations.

paired

A logical indicating whether the SMDs come from a paired or independent samples design. For a one-sample design, paired must be set to TRUE.

alternative

A character string specifying the alternative hypothesis:

  • "two.sided": difference is not equal to null (default)

  • "greater": difference is greater than null

  • "less": difference is less than null

  • "equivalence": difference is within the equivalence bounds (TOST)

  • "minimal.effect": difference is outside the equivalence bounds (TOST)

You can specify just the initial letter.

null

A number or vector indicating the null hypothesis value(s):

  • For standard tests: a single value representing the null difference (default = 0)

  • For equivalence/minimal effect tests: either a single value (symmetric bounds ±value will be created) or a vector of two values representing the lower and upper bounds

TOST

Defunct: use the alternative argument instead. Logical indicator (default = FALSE) to perform two one-sided tests of equivalence (TOST).

Value

A list with class "htest" containing the following components (a short example of extracting them follows the list):

  • statistic: z-score with name "z"

  • p.value: numeric scalar containing the p-value for the test under the null hypothesis

  • estimate: difference in SMD between studies

  • null.value: the specified hypothesized value(s) for the null hypothesis

  • alternative: character string indicating the alternative hypothesis

  • method: description of the method used for comparison

  • data.name: "Summary Statistics", indicating that summary statistics were used

  • smd: list containing the SMDs used in the comparison

  • sample_sizes: list containing the sample sizes used in the comparison

  • call: the matched call
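
Because the result is a standard "htest" object, individual components can be extracted by name. A minimal illustration (reusing the call from Example 1 below; the object name res is just a placeholder):

# Store the result and pull out individual components
res <- compare_smd(smd1 = 0.5, n1 = c(30, 30),
                   smd2 = 0.3, n2 = c(25, 25),
                   paired = FALSE, alternative = "two.sided")
res$statistic   # z-score, named "z"
res$p.value     # p-value under the null hypothesis
res$estimate    # difference in SMDs between the studies
res$method      # description of the comparison method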

Details

This function tests for differences between SMDs from independent studies (e.g., original vs replication). It is particularly useful for:

  • Comparing effect sizes between an original study and its replication

  • Meta-analytic comparisons between studies

  • Testing if effect sizes from different samples are equivalent

The function handles both paired/one-sample designs and independent samples designs:

  • For paired/one-sample designs (paired = TRUE), standard errors are calculated for Cohen's dz, and n1 and n2 must be single values.

  • For independent samples designs (paired = FALSE), standard errors are calculated for Cohen's ds, and n1 and n2 can be either single values (total sample size) or vectors of length 2 (group sizes).

  • For all other SMD types, you should supply your own standard errors via the se1 and se2 arguments (see the sketch after this list).
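
For orientation, the sketch below shows common large-sample approximations for these standard errors. The helper functions se_dz() and se_ds() are illustrative only; the package's internal formulas may differ in detail (e.g., small-sample corrections), so use se1/se2 whenever you need a specific SE.

# Illustrative large-sample SE approximations (assumed; not necessarily the
# exact formulas used internally)
se_dz <- function(dz, n) {
  # Cohen's dz from a paired/one-sample design with n pairs (or observations)
  sqrt(1 / n + dz^2 / (2 * n))
}
se_ds <- function(ds, n1, n2) {
  # Cohen's ds from an independent samples design with group sizes n1 and n2
  sqrt((n1 + n2) / (n1 * n2) + ds^2 / (2 * (n1 + n2)))
}

# An SE computed this way can be passed through se1/se2 to override the defaults:
compare_smd(smd1 = 0.5, n1 = 40, se1 = se_dz(0.5, 40),
            smd2 = 0.4, n2 = 45, se2 = se_dz(0.4, 45),
            paired = TRUE, alternative = "two.sided")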

The function supports both standard hypothesis testing and equivalence/minimal effect testing:

  • For standard tests (two.sided, less, greater), the function tests whether the difference between SMDs differs from the null value (typically 0).

  • For equivalence testing ("equivalence"), it determines whether the difference falls within the specified bounds, which can be set asymmetrically.

  • For minimal effect testing ("minimal.effect"), it determines whether the difference falls outside the specified bounds.

When performing equivalence or minimal effect testing:

  • If a single value is provided for null, symmetric bounds ±value will be used

  • If two values are provided for null, they will be used as the lower and upper bounds (see the sketch below)
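
Conceptually, these options reduce to two one-sided z-tests of the observed difference against the lower and upper bounds. The sketch below mirrors the usual TOST decision rule with made-up numbers and an assumed standard error; it illustrates the logic rather than the package's exact code:

# Two one-sided z-tests of a difference in SMDs against equivalence bounds
d_diff <- 0.1    # observed difference in SMDs (made up)
se     <- 0.39   # standard error of the difference (assumed known here)
lower  <- -0.2
upper  <-  0.2

z_lower <- (d_diff - lower) / se  # test against the lower bound
z_upper <- (d_diff - upper) / se  # test against the upper bound

# Equivalence: both one-sided tests must reject, so report the larger p-value
p_equivalence <- max(pnorm(z_lower, lower.tail = FALSE),
                     pnorm(z_upper, lower.tail = TRUE))

# Minimal effect: either one-sided test may reject, so report the smaller p-value
p_minimal <- min(pnorm(z_lower, lower.tail = TRUE),
                 pnorm(z_upper, lower.tail = FALSE))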

See also

Other compare studies: boot_compare_cor(), boot_compare_smd(), compare_cor()

Examples

# Example 1: Comparing two independent samples SMDs (standard test)
compare_smd(smd1 = 0.5, n1 = c(30, 30),
            smd2 = 0.3, n2 = c(25, 25),
            paired = FALSE, alternative = "two.sided")
#> 
#> 	Difference in Cohen's ds (two-sample)
#> 
#> data:  Summary Statistics
#> z = 0.517, p-value = 0.6052
#> alternative hypothesis: true difference in SMDs is not equal to 0
#> sample estimates:
#> difference in SMDs 
#>                0.2 
#> 

# Example 2: Comparing two paired samples SMDs
compare_smd(smd1 = 0.6, n1 = 40,
            smd2 = 0.4, n2 = 45,
            paired = TRUE, alternative = "two.sided")
#> 
#> 	Difference in Cohen's dz (paired)
#> 
#> data:  Summary Statistics
#> z = 0.86468, p-value = 0.3872
#> alternative hypothesis: true difference in SMDs is not equal to 0
#> sample estimates:
#> difference in SMDs 
#>                0.2 
#> 

# Example 3: Testing for equivalence between SMDs
# Testing if the difference between SMDs is within ±0.2
compare_smd(smd1 = 0.45, n1 = c(25, 25),
            smd2 = 0.35, n2 = c(30, 30),
            paired = FALSE, alternative = "equivalence", null = 0.2)
#> 
#> 	Difference in Cohen's ds (two-sample)
#> 
#> data:  Summary Statistics
#> z = -0.25845, p-value = 0.398
#> alternative hypothesis: equivalence
#> null values:
#> difference in SMDs difference in SMDs 
#>                0.2               -0.2 
#> sample estimates:
#> difference in SMDs 
#>                0.1 
#> 

# Example 4: Testing for minimal effects
# Testing if the difference between SMDs is outside ±0.3
compare_smd(smd1 = 0.7, n1 = 30,
            smd2 = 0.3, n2 = 35,
            paired = TRUE, alternative = "minimal.effect", null = 0.3)
#> 
#> 	Difference in Cohen's dz (paired)
#> 
#> data:  Summary Statistics
#> z = 0.37435, p-value = 0.3541
#> alternative hypothesis: minimal.effect
#> null values:
#> difference in SMDs difference in SMDs 
#>                0.3               -0.3 
#> sample estimates:
#> difference in SMDs 
#>                0.4 
#> 

# Example 5: Using asymmetric bounds for equivalence testing
compare_smd(smd1 = 0.45, n1 = c(30, 30),
            smd2 = 0.35, n2 = c(25, 25),
            paired = FALSE, alternative = "equivalence", null = c(-0.1, 0.3))
#> 
#> 	Difference in Cohen's ds (two-sample)
#> 
#> data:  Summary Statistics
#> z = -0.51712, p-value = 0.3025
#> alternative hypothesis: equivalence
#> null values:
#> difference in SMDs difference in SMDs 
#>               -0.1                0.3 
#> sample estimates:
#> difference in SMDs 
#>                0.1 
#> 

# Example 6: Using user-supplied standard errors
compare_smd(smd1 = 0.5, n1 = 50, se1 = 0.15,
            smd2 = 0.7, n2 = 45, se2 = 0.16,
            paired = TRUE, alternative = "two.sided")
#> User supplied standard errors. Proceed with caution.
#> 
#> 	Difference in SMDs (user-supplied SE)
#> 
#> data:  Summary Statistics
#> z = -0.91192, p-value = 0.3618
#> alternative hypothesis: true difference in SMDs is not equal to 0
#> sample estimates:
#> difference in SMDs 
#>               -0.2 
#>