Decoupled Optimization

Matrix optimization of cost functions is a common problem. Constructing methods that allow each row or column to be optimized individually, i.e., decoupled, is desirable for several reasons. With proper decoupling, convergence characteristics such as local stability can be improved. Decoupling also enables density matching in applications such as independent component analysis (ICA). Lastly, efficient Newton algorithms become tractable after decoupling. The most common way to decouple the rows is to restrict the optimization space to orthogonal matrices, but such a restriction can place an unnecessary upper bound on the achievable separation performance. We offer a decoupling trick that uses standard vector optimization procedures while still admitting nonorthogonal solutions. We provide four methods for computing the decoupling vector:
  1. Using a standard QR algorithm.
  2. Using a projection-based algorithm.
  3. Using a recursive projection-based algorithm.
  4. Using a recursive QR update algorithm.
Each method has its own benefits and drawbacks. The two recursive approaches are faster: their operation count grows with the square of the number of rows/columns of the demixing matrix, rather than with the cube as for the standard QR and projection-based methods. However, the recursive approaches accumulate numerical error; in practice this accumulation is usually insignificant.
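To make the idea concrete, below is a minimal NumPy sketch of the non-recursive versions of the first two methods; it is an illustration, not the reference implementation accompanying [3]. For row n, the decoupling vector is a unit vector orthogonal to every other row of the demixing matrix, so that |det W| factors into a term that depends on row n only through its inner product with that vector and a term independent of row n, which is what lets each row be optimized in isolation. The function names, the use of row n itself as the projection seed, and the normalization and sign conventions are choices made for this sketch.

    import numpy as np

    def decoupling_vector_qr(W, n):
        # Method 1 sketch: decoupling vector for row n of W via a full QR
        # factorization. h is a unit vector orthogonal to every row of W
        # except row n, i.e. it spans the null space of the other rows.
        W_others = np.delete(W, n, axis=0)              # the other N-1 rows
        Q, _ = np.linalg.qr(W_others.T, mode="complete")
        return Q[:, -1]                                 # last column of Q is orthogonal to all other rows

    def decoupling_vector_proj(W, n):
        # Method 2 sketch: decoupling vector for row n via projection onto
        # the orthogonal complement of the span of the other rows. Using
        # row n as the seed vector is an assumption made for this sketch;
        # any vector with a component outside that span would do.
        W_others = np.delete(W, n, axis=0)
        v = W[n]
        G = W_others @ W_others.T                       # Gram matrix of the other rows
        h = v - W_others.T @ np.linalg.solve(G, W_others @ v)
        return h / np.linalg.norm(h)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        W = rng.standard_normal((5, 5))                 # almost surely a nonsingular demixing matrix
        for n in range(W.shape[0]):
            h = decoupling_vector_qr(W, n)
            # h is orthogonal to every row of W except row n (up to sign)
            assert np.allclose(np.delete(W, n, axis=0) @ h, 0.0, atol=1e-10)
            assert abs(W[n] @ h) > 1e-10

The recursive variants reuse work across rows instead of refactoring from scratch for each n, which is where the square-versus-cube operation count difference comes from.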


References:

[1] X.-L. Li & X.-D. Zhang, "Nonorthogonal joint diagonalization free of degenerate solution," IEEE Trans. Signal Process., 2007, 55, 1803-1814
[2] X.-L. Li & T. Adali, "Independent component analysis by entropy bound minimization," IEEE Trans. Signal Process., 2010, 58, 5151-5164
[3] M. Anderson, X.-L. Li, P. Rodriguez, & T. Adali, "An effective decoupling method for matrix optimization and its application to the ICA problem," Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), 2012