RV coefficient

In statistics, the RV coefficient[1] is a multivariate generalization of the squared Pearson correlation coefficient; like the squared correlation, it takes values between 0 and 1.[2] It measures the closeness of two sets of points that may each be represented in a matrix.

The major approaches within statistical multivariate data analysis can all be brought into a common framework in which the RV coefficient is maximised subject to relevant constraints. Specifically, these statistical methodologies include principal component analysis, canonical correlation analysis, multivariate regression, and statistical classification (linear discrimination).[1]

One application of the RV coefficient is in functional neuroimaging, where it can measure the similarity between two subjects' series of brain scans[3] or between different scans of the same subject.[4]

Definitions

The definition of the RV coefficient makes use of ideas[5] concerning the definition of scalar-valued quantities called the "variance" and "covariance" of vector-valued random variables; note that standard usage is instead to collect the variances and covariances of vector random variables into matrices. Given these definitions, the RV coefficient is then just the correlation coefficient defined in the usual way.

Suppose that X and Y are matrices of centered random vectors (column vectors) with covariance matrix given by

$\Sigma_{XY} = \operatorname{E}(X Y^\top),$

then the scalar-valued covariance (denoted by COVV) is defined by[5]

$\operatorname{COVV}(X, Y) = \operatorname{Tr}(\Sigma_{XY} \Sigma_{YX}).$

The scalar-valued variance is defined correspondingly:

$\operatorname{VAV}(X) = \operatorname{Tr}(\Sigma_{XX}^{2}).$

With these definitions, the variance and covariance have certain additive properties in relation to the formation of new vector quantities by extending an existing vector with the elements of another.[5]
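
For instance, if $Z$ denotes the vector obtained by appending the elements of a vector $W$ to those of $X$, then the trace definitions above give

$\operatorname{COVV}(Z, Y) = \operatorname{COVV}(X, Y) + \operatorname{COVV}(W, Y), \qquad \operatorname{VAV}(Z) = \operatorname{VAV}(X) + \operatorname{VAV}(W) + 2\,\operatorname{COVV}(X, W).$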

Then the RV coefficient is defined by[5]

$\operatorname{RV}(X, Y) = \frac{\operatorname{COVV}(X, Y)}{\sqrt{\operatorname{VAV}(X)\,\operatorname{VAV}(Y)}}.$
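
As an illustration of these definitions (not code from the cited sources), the coefficient can be estimated from two data matrices whose rows are matched observations. The sketch below assumes NumPy and hypothetical input arrays X (n × p) and Y (n × q); it forms the empirical covariance blocks and evaluates COVV, VAV and RV as defined above.

```python
import numpy as np

def rv_coefficient(X, Y):
    """Empirical RV coefficient between two data matrices with matched rows."""
    # Center each column so the empirical covariance blocks match the definitions above.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]

    # Empirical covariance blocks Sigma_XY, Sigma_XX, Sigma_YY.
    S_xy = Xc.T @ Yc / (n - 1)
    S_xx = Xc.T @ Xc / (n - 1)
    S_yy = Yc.T @ Yc / (n - 1)

    # COVV(X, Y) = Tr(Sigma_XY Sigma_YX); VAV(X) = Tr(Sigma_XX^2).
    covv_xy = np.trace(S_xy @ S_xy.T)
    vav_x = np.trace(S_xx @ S_xx)
    vav_y = np.trace(S_yy @ S_yy)

    return covv_xy / np.sqrt(vav_x * vav_y)
```

With this estimator, passing the same (non-constant) data set in twice returns 1, while unrelated data give values close to 0 for large samples.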

Shortcoming of the coefficient and adjusted version

Even though the coefficient takes values between 0 and 1 by construction, it seldom attains values close to 1, as the denominator is often too large with respect to the maximal attainable value of the numerator.[6]

Given known diagonal blocks $\Sigma_{XX}$ and $\Sigma_{YY}$ of dimensions $p \times p$ and $q \times q$ respectively, assuming that $p \le q$ without loss of generality, it has been proved[7] that the maximal attainable numerator is $\operatorname{Tr}(\Lambda_X \Pi \Lambda_Y \Pi^\top)$, where $\Lambda_X$ (resp. $\Lambda_Y$) denotes the diagonal matrix of the eigenvalues of $\Sigma_{XX}$ (resp. $\Sigma_{YY}$) sorted decreasingly from the upper leftmost corner to the lower rightmost corner, and $\Pi$ is the $p \times q$ matrix $(I_p \mid 0)$.

In light of this, Mordant and Segers[7] proposed an adjusted version of the RV coefficient in which the denominator is the maximal value attainable by the numerator. It reads

$\operatorname{RV}_{\mathrm{adj}}(X, Y) = \frac{\operatorname{Tr}(\Sigma_{XY} \Sigma_{YX})}{\operatorname{Tr}(\Lambda_X \Pi \Lambda_Y \Pi^\top)}.$

The impact of this adjustment is clearly visible in practice.[7]
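
To make the adjustment concrete, here is a minimal sketch (again assuming NumPy and hypothetical input matrices X and Y, not code from the cited paper) that divides the empirical numerator by its maximal attainable value, computed as $\operatorname{Tr}(\Lambda_X \Pi \Lambda_Y \Pi^\top)$, i.e. the sum of products of the decreasingly sorted eigenvalues of the two diagonal blocks.

```python
import numpy as np

def adjusted_rv(X, Y):
    """Adjusted RV coefficient: numerator divided by its maximal attainable value."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]

    # Empirical covariance blocks.
    S_xy = Xc.T @ Yc / (n - 1)
    S_xx = Xc.T @ Xc / (n - 1)
    S_yy = Yc.T @ Yc / (n - 1)

    numerator = np.trace(S_xy @ S_xy.T)  # Tr(Sigma_XY Sigma_YX)

    # Eigenvalues of the diagonal blocks, sorted in decreasing order.
    lam_x = np.sort(np.linalg.eigvalsh(S_xx))[::-1]
    lam_y = np.sort(np.linalg.eigvalsh(S_yy))[::-1]

    # Tr(Lambda_X Pi Lambda_Y Pi^T) reduces to the sum of products of the
    # first min(p, q) sorted eigenvalues.
    k = min(lam_x.size, lam_y.size)
    max_numerator = np.sum(lam_x[:k] * lam_y[:k])

    return numerator / max_numerator
```

Since $\operatorname{Tr}(\Lambda_X \Pi \Lambda_Y \Pi^\top) \le \sqrt{\operatorname{VAV}(X)\,\operatorname{VAV}(Y)}$ by the Cauchy–Schwarz inequality, the adjusted coefficient is never smaller than the plain RV coefficient computed from the same data.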

References

  1. ^ a b Robert, P.; Escoufier, Y. (1976). "A Unifying Tool for Linear Multivariate Statistical Methods: The RV-Coefficient". Applied Statistics. 25 (3): 257–265. doi:10.2307/2347233. JSTOR 2347233.
  2. ^ Abdi, Hervé (2007). "RV coefficient and congruence coefficient". In Salkind, Neil J. (ed.). Encyclopedia of Measurement and Statistics. Thousand Oaks, CA: Sage. ISBN 978-1-4129-1611-0.
  3. ^ Ferath Kherif; Jean-Baptiste Poline; Sébastien Mériaux; Habib Benali; Guillaume Flandin; Matthew Brett (2003). "Group analysis in functional neuroimaging: selecting subjects using similarity measures" (PDF). NeuroImage. 20 (4): 2197–2208. doi:10.1016/j.neuroimage.2003.08.018. PMID 14683722.
  4. ^ Hervé Abdi; Joseph P. Dunlop; Lynne J. Williams (2009). "How to compute reliability estimates and display confidence and tolerance intervals for pattern classifiers using the Bootstrap and 3-way multidimensional scaling (DISTATIS)". NeuroImage. 45 (1): 89–95. doi:10.1016/j.neuroimage.2008.11.008. PMID 19084072.
  5. ^ a b c d Escoufier, Y. (1973). "Le Traitement des Variables Vectorielles". Biometrics. 29 (4). International Biometric Society: 751–760. doi:10.2307/2529140. JSTOR 2529140.
  6. ^ Puccetti, G. (2019). "Measuring Linear Correlation Between Random Vectors". SSRN.
  7. ^ a b c Mordant Gilles; Segers Johan (2022). "Measuring dependence between random vectors via optimal transport,". Journal of Multivariate Analysis. 189.