Data Fusion using C-ICT

The consecutive independence and correlation transform (C-ICT) is a data fusion framework developed for the joint analysis of two or more datasets [1]. The method exploits the strengths of independent component analysis (ICA) and independent vector analysis (IVA) to identify maximally correlated components across datasets. In the first step, ICA is performed on each dataset separately to obtain maximally independent components and their corresponding subject profile (mixing) matrices. In the second step, IVA with a multivariate Gaussian model (IVA-G) [2] is performed on the subject profile matrices to identify the profiles, and hence the components, that are maximally correlated across datasets. Because C-ICT is a framework, other methods such as canonical correlation analysis (CCA) or multiset CCA (MCCA) can be used in place of IVA-G to identify the correlated components [3]. C-ICT is fully flexible in the number of datasets combined, the order of the signal subspace selected for each dataset, and the discovery of "one-to-many associations" across multiple datasets. It is applicable to both multiset and multimodal data.
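The two-step pipeline can be sketched as follows. This is a minimal NumPy illustration, not the released MATLAB package: `fastica` is a small symmetric FastICA implementation, the two "datasets" are synthetic subjects-by-voxels matrices sharing one subject profile, and a plain column-wise correlation of the estimated profile matrices stands in for the IVA-G/MCCA stage (the framework explicitly allows such substitutes).

```python
import numpy as np

rng = np.random.default_rng(0)

def fastica(X, n_comp, n_iter=200):
    """Symmetric FastICA (tanh nonlinearity) on X (channels x samples).
    Returns the estimated mixing matrix A (channels x n_comp) and sources S."""
    n, T = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via SVD, keeping n_comp dimensions (the signal-subspace order).
    U, d, _ = np.linalg.svd(X, full_matrices=False)
    K = (U[:, :n_comp] / d[:n_comp]).T            # n_comp x n whitening matrix
    Xw = np.sqrt(T) * (K @ X)                     # unit sample covariance
    W = rng.standard_normal((n_comp, n_comp))
    for _ in range(n_iter):
        G = np.tanh(W @ Xw)
        W_new = G @ Xw.T / T - np.diag((1 - G**2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W_new)           # symmetric decorrelation
        W = u @ vt
    S = W @ Xw
    A = np.linalg.pinv(np.sqrt(T) * (W @ K))      # mixing matrix: X ~= A @ S
    return A, S

# Synthetic two-dataset example (subjects x voxels) with one shared profile.
n_subj, T, k = 20, 5000, 3
shared = rng.standard_normal(n_subj)              # subject profile common to both
A1 = np.column_stack([shared, rng.standard_normal((n_subj, 2))])
A2 = np.column_stack([shared + 0.1 * rng.standard_normal(n_subj),
                      rng.standard_normal((n_subj, 2))])
S1 = rng.laplace(size=(k, T))                     # independent non-Gaussian sources
S2 = rng.laplace(size=(k, T))
X1, X2 = A1 @ S1, A2 @ S2

# Step 1: ICA on each dataset separately.
A1_hat, _ = fastica(X1, k)
A2_hat, _ = fastica(X2, k)

# Step 2 (correlation stage): cross-correlate the estimated subject profiles
# to identify the linked components across the two datasets.
C = np.corrcoef(A1_hat.T, A2_hat.T)[:k, k:]       # k x k cross-correlation block
i, j = np.unravel_index(np.abs(C).argmax(), C.shape)
print(f"linked pair: component {i} (dataset 1) <-> component {j} (dataset 2), "
      f"|r| = {abs(C[i, j]):.2f}")
```

The strongest entry of `C` recovers the shared subject profile despite the sign and permutation ambiguity of ICA; in the full method, IVA-G replaces this correlation search and handles more than two datasets and one-to-many associations.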

Consecutive Independence and Correlation Transform (C-ICT) MATLAB package


References:

[1] C. Jia, M. A. B. S. Akhonda, Y. Levin-Schwartz, Q. Long, V. D. Calhoun, and T. Adali, "Consecutive Independence and Correlation Transform for Multimodal Data Fusion: Discovery of One-to-Many Associations in Structural and Functional Imaging Data," Applied Sciences, vol. 11, no. 18, p. 8382, Sep. 2021.
[2] M. Anderson, T. Adali and X.-L. Li, "Joint Blind Source Separation with Multivariate Gaussian Model: Algorithms and Performance Analysis," IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 1672-1683, April 2012.
[3] M. A. B. S. Akhonda, Y. Levin-Schwartz, S. Bhinge, V. D. Calhoun and T. Adali, "Consecutive Independence and Correlation Transform for Multimodal Fusion: Application to EEG and fMRI Data," 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 2311-2315.