| Version: | 2.2.0 | 
| Title: | Canonical Correlations and Tests of Independence | 
| Description: | A simple interface for multivariate correlation analysis that unifies various classical statistical procedures including t-tests, tests in univariate and multivariate linear models, parametric and nonparametric tests for correlation, Kruskal-Wallis tests, common approximate versions of Wilcoxon rank-sum and signed rank tests, chi-squared tests of independence, score tests of particular hypotheses in generalized linear models, canonical correlation analysis and linear discriminant analysis. | 
| Author: | Robert Schlicht [aut, cre] | 
| Maintainer: | Robert Schlicht <robert.schlicht@tu-dresden.de> | 
| License: | EUPL version 1.1 | EUPL version 1.2 [expanded from: EUPL (≥ 1.1)] | 
| Imports: | stats | 
| NeedsCompilation: | no | 
| Packaged: | 2025-09-17 06:53:19 UTC; Schlicht | 
| Repository: | CRAN | 
| Date/Publication: | 2025-09-17 07:20:02 UTC | 
Tests of Independence Based on Canonical Correlations
Description
cctest estimates canonical correlations between two sets of 
variables, possibly after removing effects of a third set of variables, and 
performs a classical multivariate test of (conditional) independence based 
on Pillai’s statistic.
Usage
  cctest(formula, data=NULL, df=formula[-2L], ..., tol=1e-07, stats=FALSE)
Arguments
| formula | A formula of the form X ~ Y ~ A, with the first and second parts specifying the two sets of variables and the third part specifying the variables whose effects are removed (see Details and Examples). | 
| data | An optional list (or data frame) or environment containing 
the variables in the model. If variables are not found in data, they are taken from the environment of formula. | 
| df | An optional formula specifying the matrix A_{0} that determines the degrees of freedom r (see Details). The default is the third part of formula, so that A_{0}=A. | 
| ... | Additional optional arguments such as subset, weights, offset or na.action (see Examples). | 
| tol | The tolerance in the QR decomposition for detecting linear dependencies of the matrix columns. | 
| stats | A logical value that determines the interpretation of 
formula: if TRUE, the legacy interpretation of package stats, based on terms.formula, model.frame and model.matrix, is used instead of the simplified syntax (see Note and Examples). | 
Details
cctest unifies various classical statistical procedures that involve 
the same underlying computations, including t-tests, tests in univariate and 
multivariate linear models, parametric and nonparametric tests for 
correlation, Kruskal–Wallis tests, common approximate versions of Wilcoxon 
rank-sum and signed rank tests, chi-squared tests of independence, score 
tests of particular hypotheses in generalized linear models, canonical 
correlation analysis and linear discriminant analysis (see Examples).
Specifically, let the matrices with ranks k and l be obtained from 
X and Y by subtracting from each column its orthogonal projection 
on the column space of A. The function factorizes these matrices as 
\tilde{X}U and \tilde{Y}V with \tilde{X} and 
\tilde{Y} having k and l columns, respectively, such that 
both \tilde{X}^\top \tilde{X}=rI and \tilde{Y}^\top \tilde{Y}=rI, 
and \tilde{X}^\top \tilde{Y}=rD is a rectangular diagonal matrix with 
decreasing diagonal elements. The scaling factor r, which should be 
nonzero, is the dimension of the orthogonal complement of the column space of 
A_{0}.
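For illustration, these properties can be checked numerically from the 
returned components (a minimal sketch with hypothetical data, assuming the 
x, y and df.residual components described under Value):
set.seed(1)                          ## hypothetical data
d  <- data.frame(x1=rnorm(20), x2=rnorm(20), y1=rnorm(20), y2=rnorm(20))
cc <- cctest(x1|x2~y1|y2~1, d)
r  <- cc$df.residual                 # scaling factor r
round(crossprod(cc$x)/r, 10)         # identity: X~' X~ = r I
round(crossprod(cc$y)/r, 10)         # identity: Y~' Y~ = r I
round(crossprod(cc$x, cc$y)/r, 10)   # diagonal D of canonical correlations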
The function realizes this variant of the singular value decomposition by 
first computing preliminary QR factorizations of the stated form (taking 
r=1) without the requirement on D, and then, in a second step, 
modifying these based on a standard singular value decomposition of the 
preliminary matrix D. The main work is done in a rotated coordinate 
system where the column space of A aligns with the coordinate axes. 
The basic approach and the rank detection algorithm are inspired by the 
implementations in cancor and in lm, respectively.
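The underlying identity can be sketched in a few lines of base R (a 
simplified illustration for the full column rank case with A a 
constant 1, not the function's actual implementation):
set.seed(1)                               ## hypothetical data
X <- matrix(rnorm(60), 20); Y <- matrix(rnorm(40), 20)
Qx <- qr.Q(qr(scale(X, scale=FALSE)))     # orthonormal basis of centered X
Qy <- qr.Q(qr(scale(Y, scale=FALSE)))     # orthonormal basis of centered Y
svd(crossprod(Qx, Qy))$d                  # singular values ...
cancor(X, Y)$cor                          # ... equal the canonical correlations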
The diagonal elements of D, or singular values, are the estimated 
canonical correlations (Hotelling 1936) of the variables 
represented by X and Y if these follow a linear model 
(X\;\;Y)=A(\alpha\;\;\beta)+(\delta\;\;\epsilon) with known A, 
unknown (\alpha\;\;\beta) and error terms (\delta\;\;\epsilon) 
that have uncorrelated rows with expectation zero and an identical unknown 
covariance matrix. In the most common case, where A is given as a 
constant 1, these are the sample canonical correlations (i.e., based 
on simple centering) most often presented in the literature for full column 
ranks k and l. They are always decreasing and between 0 and 1.
In the case of the linear model with independent normally distributed rows 
and A_{0}=A, the ranks k and l equal, with probability 1, 
the ranks of the covariance matrices of the rows of X and Y, 
respectively, or r, whichever is smaller. Under the hypothesis of 
independence of X and Y, given those ranks, the joint 
distribution of the s squared singular values, where s is the 
smaller of the two ranks, is then known and in the case r\geq k+l has a 
probability density (Hsu 1939, Anderson 2003, Anderson 2007) given by
\rho(t_{1},\ldots,t_{s}) \propto \prod_{i=1}^{s} t_{i}^{(\left|k-l\right|-1)/2} (1-t_{i})^{(r-k-l-1)/2} \prod_{i<j} (t_{i}-t_{j}),
for 1 \geq t_{1} \geq \cdots \geq t_{s} \geq 0. For s=1 this reduces to 
the well-known case of a single beta distributed R^{2} or, equivalently, 
an F distributed \frac{R^{2}/(kl)}{(1-R^{2})/(r-kl)}, with the divisors 
in the numerator and denominator representing the degrees of freedom, or 
twice the parameters of the beta distribution.
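The s=1 equivalence is easy to verify numerically (a sketch with 
hypothetical values k=1, l=2, r=100 and R^{2}=0.3):
R2 <- 0.3; k <- 1; l <- 2; r <- 100            ## hypothetical values
Fstat <- (R2/(k*l)) / ((1-R2)/(r-k*l))         # F statistic, kl and r-kl df
pf(Fstat, k*l, r-k*l, lower.tail=FALSE)        # F tail probability ...
pbeta(R2, k*l/2, (r-k*l)/2, lower.tail=FALSE)  # ... equals the beta tail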
Pillai’s statistic is the sum of squares of the canonical correlations, which 
equals, even without the diagonal requirement on D, the squared 
Frobenius norm of that matrix (or trace of D^\top D). Replacing the 
distribution of that statistic divided by s (i.e., of the mean of 
squares) with beta or gamma distributions with first or shape parameter 
kl/2 and expectation kl/(rs) leads to the F and chi-squared 
approximations that the p-values returned by cctest are based on.
The F or beta approximation (Pillai 1954, p. 99, p. 44) is usually used with 
A_{0}=A and then is exact if s=1. The chi-squared approximation 
represents Rao’s (1948) score test (with a test statistic that is r 
times Pillai’s statistic) in the model obtained after removing (or 
conditioning on) the orthogonal projections on the column space of 
A_{0}, provided that space is a subset of the column space of A; see 
Mardia and Kent (1991) for the case with independent identically distributed 
rows.
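Both approximations can be reproduced by hand from the returned components 
(a sketch with hypothetical data, assuming the estimate, x, y and 
df.residual components described under Value):
set.seed(1)                                  ## hypothetical data
d  <- data.frame(x1=rnorm(20), x2=rnorm(20), y1=rnorm(20), y2=rnorm(20))
cc <- cctest(x1|x2~y1|y2~1, d)
k <- ncol(cc$x); l <- ncol(cc$y); s <- min(k, l); r <- cc$df.residual
pill <- sum(cc$estimate^2)                   # Pillai's statistic
pf((pill/(k*l))/((s-pill)/(r*s-k*l)), k*l, r*s-k*l, lower.tail=FALSE)  # F
pchisq(r*pill, k*l, lower.tail=FALSE)        # chi-squared (score test)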
Value
A list with class htest containing the following components:
| x,y | matrices \tilde{X} and \tilde{Y} from the factorization (see Details) | 
| xinv,yinv | matrices U and V from the factorization (see Details) | 
| estimate | vector of canonical correlations, i.e., the 
diagonal elements of D (see Details) | 
| statistic | vector of p-values based on Pillai’s statistic and classical F (beta) and chi-squared (gamma) approximations | 
| df.residual | the number r, i.e., the residual degrees of freedom (see Details) | 
| method | the name of the function | 
| data.name | a character string representation of formula | 
Note
The handling of weights differs from that in lm 
unless the nonzero weights are scaled so as to have a mean of 1. Also, to 
facilitate predictions for rows with zero weights (see Examples), the square 
roots of the weights, which are used internally for scaling the data, are 
always computed as nonzero numbers. For zero weights they are so small that 
their square is still numerically zero and hence has no effect on the 
correlation analysis. An offset is subtracted from all columns in 
X and Y.
The simplified formula syntax is intended to provide a simpler, more 
consistent behavior than the legacy stats procedure based on 
terms.formula, model.frame and 
model.matrix. Inconsistent or hard-to-predict behavior 
can arise in model.matrix, in particular, from the special 
interpretation of common symbols, the identification of variables by deparsed 
expressions, the locale-dependent conversion of character variables to 
factors and the imperfect avoidance of linear dependencies subject to 
options("contrasts").
Author(s)
Robert Schlicht
References
Hotelling, H. (1936). Relations between two sets of variates. Biometrika 28, 321–377. doi:10.1093/biomet/28.3-4.321, doi:10.2307/2333955
Hsu, P.L. (1939). On the distribution of roots of certain determinantal equations. Annals of Eugenics 9, 250–258. doi:10.1111/j.1469-1809.1939.tb02212.x
Rao, C.R. (1948). Large sample tests of statistical hypotheses concerning several parameters with applications to problems of estimation. Mathematical Proceedings of the Cambridge Philosophical Society 44, 50–57. doi:10.1017/S0305004100023987
Pillai, K.C.S. (1954). On some distribution problems in multivariate analysis (Institute of Statistics mimeo series 88). North Carolina State University, Dept. of Statistics.
Mardia, K.V., Kent, J.T. (1991). Rao score tests for goodness of fit and independence. Biometrika 78, 355–363. doi:10.1093/biomet/78.2.355
Anderson, T.W. (2003). An introduction to multivariate statistical analysis, 3rd edition, Ch. 12–13. Wiley.
Anderson, T.W. (2007). Multiple discoveries: distribution of roots of determinantal equations. Journal of Statistical Planning and Inference 137, 3240–3248. doi:10.1016/j.jspi.2007.03.008
See Also
Functions cancor, anova.mlm in 
package stats and implementations of canonical correlation analysis in 
other packages such as CCP (tests only), MVar, candisc 
(both including tests based on Wilks’ statistic), yacca, CCA, 
acca, whitening.
Examples
## Artificial observations in 5-by-5 meter quadrats in a forest for
## comparing cctest analyses with equivalent 'stats' methods:
dat <- within(data.frame(row.names=1:150), {
  u <- function() replicate(150, z<<-(z*69069+2^-32)%%1); z<-0
  plot <- factor(u() < .5, , c("a","b"))        # plot a or b
  x    <- as.integer(30*u() + c(1,82)[plot])    # x position on grid
  y    <- as.integer(30*u() + c(1,62)[plot])    # y position on grid
  ori  <- factor(u()%/%.25,,c("E","N","S","W")) # orientation of slope
  elev <- 40*u() + c(605,610)[plot]             # elevation (in meters)
  h    <- 115 - .15*elev + 2*log(1/u()-1)       # tree height (in meters)
  h5   <- h + log(1/u()-1)                      # tree height 5 years earlier
  h10  <- h5 + log(1/u()-1)                     # tree height 10 years earlier
  c15  <- as.integer(h10 + log(1/u()-1) > 20)   # 0-1 coded, 15 years earlier
  sapl <- as.integer(log(1/u())^.8*elev/40)     # number of saplings
  rm(u, z)
})
dat[1:8,]
## t-tests:
cctest(h~plot~1, dat)
  t.test(h~plot, dat, var.equal=TRUE)
  summary(lm(h~plot, dat))
cctest(h-20~1~0, dat)
  t.test(dat$h, mu=20)
  t.test(h~1, dat, mu=20)
cctest(h-h5~1~0, dat)
  t.test(dat$h, dat$h5, paired=TRUE)
  t.test(Pair(h,h5)~1, dat)
## Test for correlation:
cctest(h~elev~1, dat)
  cor.test(~h+elev, dat)
## One-way analysis of variance:
cctest(h~ori~1, dat)
  anova(lm(h~ori, dat))
## F-tests in linear models:
cctest(h~ori~1|elev, dat)
cctest(h~ori~1+elev, dat, stats=TRUE)
  anova(lm(h~1+elev, dat), lm(h~ori+elev, dat))
cctest(h-h5~(h5-h10):(1|x|x^2)~0, dat, subset=1:50)
  summary(lm(h-h5~0+I(h5-h10)+I(h5-h10):(x+I(x^2)), dat, subset=1:50))
## Test in multivariate linear model based on Pillai's statistic:
cctest(h|h5|h10~x|y~1|elev, dat)
cctest(h+h5+h10~x+y~1+elev, dat, stats=TRUE)
  anova(lm(cbind(h,h5,h10)~elev, dat),
    lm(cbind(h,h5,h10)~elev+x+y, dat))
## Test based on Spearman's rank correlation coefficient:
cctest(rank(h)~rank(elev)~1, dat)
  cor.test(~h+elev, dat, method="spearman", exact=FALSE)
## Kruskal-Wallis and Wilcoxon rank-sum tests:
cctest(rank(h)~ori~1, dat)
  kruskal.test(h~ori, dat)
cctest(rank(h)~plot~1, dat)
  wilcox.test(h~plot, dat, exact=FALSE, correct=FALSE)
## Wilcoxon signed rank test:
cctest(rank(abs(h-h5))~sign(h-h5)~0, subset(dat, h-h5 != 0))
#dat|> within(d<-h-h5)|> subset(d|0)|> with(rank(abs(d))~sign(d)~0)|> cctest()
  wilcox.test(h-h5 ~ 1, dat, exact=FALSE, correct=FALSE)
## Chi-squared test of independence:
cctest(ori~plot~1, dat, ~0)
cctest(ori~plot~1, as.data.frame(xtabs(~ori+plot,dat)), df=~0, weights=Freq)
cctest(ori~plot~1, xtabs(~ori+plot,dat), stats=TRUE, df=~0, weights=Freq)
  summary(xtabs(~ori+plot, dat, drop.unused.levels=TRUE))
  chisq.test(dat$ori, dat$plot, correct=FALSE)
## Score test in logistic regression (logit model, ...~1 only):
cctest(c15~x|y~1, dat, ~0)
  anova(glm(c15~1, binomial, dat, epsilon=1e-12),
    glm(c15~1+x+y, binomial, dat), test="Rao")
## Score test in multinomial logit model (...~1 only):
cctest(ori~x|y~1, dat, ~0)
  with(list(d=dat, e=expand.grid(stringsAsFactors=FALSE,
    i=row.names(dat), j=levels(dat$ori))
  ), anova(
    glm(d[i,"ori"]==j ~ j+d[i,"x"]+d[i,"y"], poisson, e, epsilon=1e-12),
    glm(d[i,"ori"]==j ~ j*(d[i,"x"]+d[i,"y"]), poisson, e), test="Rao"
  ))
## Absolute values of (partial) correlation coefficients:
cctest(h~elev~1, dat)$est
  cor(dat$h, dat$elev)
cctest(h~elev~1|x|y, dat)$est
  cov2cor(estVar(lm(cbind(h,elev)~1+x+y, dat)))
cctest(h~x|y|elev~1, dat)$est^2
  summary(lm(h~1+x+y+elev, dat))$r.squared
## Canonical correlations:
cctest(h|h5|h10~x|y~1, dat)$est
  cancor(dat[c("x","y")],dat[c("h","h5","h10")])$cor
## Linear discriminant analysis:
with(list(
  cc = cctest(h|h5|h10~ori~1, dat, ~ori)
), cc$y / sqrt(1-cc$est^2)[col(cc$y)])[1:7,]
  #predict(MASS::lda(ori~h+h5+h10,dat))$x[1:7,]
## Correspondence analysis:
cctest(ori~plot~1, as.data.frame(xtabs(~ori+plot,dat)), ~0, weights=Freq)[1:2]
  #MASS::corresp(~plot+ori, dat, nf=2)
## Prediction in multivariate linear model:
with(list(
  cc = cctest(h|h5|h10~1|x|y~0, dat, weights=plot=="a")
), cc$x %*% diag(cc$est,ncol(cc$x),ncol(cc$y)) %*% cc$yinv)[1:7,]
  predict(lm(cbind(h,h5,h10)~1+x+y, dat, subset=plot=="a"), dat)[1:7,]
## Not run: 
## Handling of additional arguments and edge cases:
cctest(1:150~ori=="E"|ori=="W"~1, c(dat,`:`=`:`,`|`=`|`))
  anova(lm(1:150~ori=="E"|ori=="W", dat))
cctest(h~h10~0, dat, offset=h5, stats=TRUE)
cctest(h-h5~h10-h5~0, dat)
  anova(lm(h~0+offset(h5), dat), lm(h~0+I(h10-h5)+offset(h5), dat))
cctest(h~x~1, dat, weights=sapl/mean(sapl[sapl!=0]))
  anova(lm(h~1, dat, weights=sapl), lm(h~1+x, dat, weights=sapl))
cctest(sqrt(h-17)~elev~1, dat[1:5,])[1:2]
cctest(sqrt(h-17)~elev~1, dat[1:5,], stats=TRUE, na.action=na.exclude)[1:2]
  scale(resid(lm(cbind(elev,sqrt(h-17))~1, dat[1:5,],
    na.action=na.exclude)), FALSE)
cctest(ori:sum(Freq)/Freq-1~1~0, as.data.frame(xtabs(~ori,dat)),
    weights=Freq^3/Freq/sum(Freq)/c(.2,.3,.4,.1))
  chisq.test(xtabs(~ori,dat), p=c(.2,.3,.4,.1))
cctest(c15~h~1, dat,     tol=0.999*sqrt(1-cctest(h~1~0,dat)$est^2))
  summary(lm(c15~h, dat, tol=0.999*sqrt(1-cctest(h~1~0,dat)$est^2)))
cctest(c15~h~1, dat,     tol=1.001*sqrt(1-cctest(h~1~0,dat)$est^2))
  summary(lm(c15~h, dat, tol=1.001*sqrt(1-cctest(h~1~0,dat)$est^2)))
cctest(NULL~NULL~NULL)
cctest(0~0~0)
  anova(lm(0~0), lm(0~0+0))
cctest(1~0~0)
  anova(lm(1~0), lm(1~0+0))
cctest(1~1~0)
  anova(lm(1~0), lm(1~0+1))
cctest(1~1~0, dat, stats=TRUE)
cctest(h^0~1~0, dat)
  anova(lm(h^0~0, dat), lm(h^0~0+1, dat))
## End(Not run)