Decomposition Models
The main abstract type for all decomposition models is AbstractDecomposition.
BlockTensorFactorization.Core.AbstractDecomposition — Type
Abstract type for the different decompositions.
The main interface for subtypes consists of the following functions.
Required:
array(D): how to construct the full array from the decomposition representation.
factors(D): a tuple of arrays, the decomposed factors.
Optional:
getindex(D, i::Int) and getindex(D, I::Vararg{Int, N}): how to get the ith or Ith element of the reconstructed array. Defaults to getindex(array(D), x), but there is often a more efficient way to get a specific element from large tensors.
size(D): defaults to size(array(D)).
This can be subtyped to create your own decomposition model.
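As a rough sketch (not part of the package) of what such a subtype might look like, the hypothetical OuterProduct type below implements the required array and factors methods and the optional getindex and size methods, assuming AbstractDecomposition carries the usual {T, N} parameters and that array and factors can be extended from BlockTensorFactorization.Core:

```julia
using BlockTensorFactorization
import BlockTensorFactorization.Core: AbstractDecomposition, array, factors

# Hypothetical rank-one matrix model D[i, j] = a[i] * b[j], for illustration only.
struct OuterProduct{T} <: AbstractDecomposition{T, 2}
    a::Vector{T}
    b::Vector{T}
end

# Required interface: reconstruct the full array and expose the factors.
array(D::OuterProduct) = D.a * transpose(D.b)
factors(D::OuterProduct) = (D.a, D.b)

# Optional interface: cheaper than materializing the whole array.
Base.getindex(D::OuterProduct, i::Int, j::Int) = D.a[i] * D.b[j]
Base.size(D::OuterProduct) = (length(D.a), length(D.b))
```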
The following common tensor models are available as valid arguments to model and built into this package.
Tucker types
BlockTensorFactorization.Core.Tucker — Type
Tucker((G, A, B, ...))
Tucker((G, A, B, ...), frozen)
Tucker decomposition. Takes the form of a core G times a matrix for each dimension.
For example, a rank (r, s, t) Tucker decomposition of an order three tensor D would be, entry-wise,
D[i, j, k] = ∑_r ∑_s ∑_t G[r, s, t] * A[i, r] * B[j, s] * C[k, t].
Optionally use frozen::Tuple{Bool} to specify which factors are frozen.
See tuckerproduct.
Tucker(full_size::NTuple{N, Integer}, ranks::NTuple{N, Integer};
    frozen=false_tuple(length(ranks)+1), init=DEFAULT_INIT, kwargs...) where N
Constructs a random Tucker type using init to initialize the factors.
See Tucker1.
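As a small usage sketch (assuming Tucker, factors, and size behave as documented above and are accessible after using BlockTensorFactorization):

```julia
using BlockTensorFactorization

# Rank-(3, 4, 5) Tucker model of a 10×20×30 tensor from explicit factors.
G = randn(3, 4, 5)                                  # core
A, B, C = randn(10, 3), randn(20, 4), randn(30, 5)  # one matrix per dimension
X = Tucker((G, A, B, C))

# Or construct a random Tucker of the requested size and ranks.
Y = Tucker((10, 20, 30), (3, 4, 5))

size(X)       # (10, 20, 30)
factors(X)    # (G, A, B, C)
```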
BlockTensorFactorization.Core.Tucker1 — Type
Tucker1((G, A))
Tucker1((G, A), frozen)
Tucker-1 decomposition. Takes the form of a core G times a matrix A. Entry-wise,
D[i₁, …, i_N] = ∑_r G[r, i₂, …, i_N] * A[i₁, r].
Optionally use frozen::Tuple{Bool} to specify which factors are frozen.
Tucker1(full_size::NTuple{N, Integer}, rank::Integer; frozen=false_tuple(2), init=DEFAULT_INIT, kwargs...) where N
Constructs a random Tucker1 type using init to initialize the factors.
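A similar sketch for Tucker1 (same assumptions about exports as above):

```julia
using BlockTensorFactorization

# Tucker-1: only the first mode gets a matrix factor; the core keeps the
# remaining dimensions at full size.
G = randn(5, 20, 30)    # core, rank 5 along the first mode
A = randn(10, 5)        # matrix factor for the first mode
X = Tucker1((G, A))     # represents a 10×20×30 tensor

# Random construction from the full size and a single rank.
Y = Tucker1((10, 20, 30), 5)
```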
BlockTensorFactorization.Core.CPDecomposition — Type
CPDecomposition((A, B, C))
CP decomposition. Takes the form of an outer product of multiple matrices.
For example, a CP decomposition of an order three tensor D would be, entry-wise,
D[i, j, k] = ∑_r A[i, r] * B[j, r] * C[k, r].
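A sketch of the corresponding construction (assuming CPDecomposition is accessible after using BlockTensorFactorization):

```julia
using BlockTensorFactorization

# Rank-4 CP model of a 10×20×30 tensor: one matrix per mode, shared rank r = 4.
A, B, C = randn(10, 4), randn(20, 4), randn(30, 4)
X = CPDecomposition((A, B, C))

X[1, 2, 3]    # ∑_r A[1, r] * B[2, r] * C[3, r]
```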
Note these are all subtypes of AbstractTucker.
BlockTensorFactorization.Core.AbstractTucker — Type
Abstract type for all Tucker-like decompositions. AbstractTucker decompositions have a core with the same number of dimensions as the full array, and (a) matrix factor(s).
Other types
BlockTensorFactorization.Core.GenericDecomposition — Type
Most general decomposition. Takes the form of interweaving contractions between the factors.
For example, T = A * B + C could be represented as GenericDecomposition((A, B, C), (*, +)).
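A sketch of that example (assuming GenericDecomposition and array are accessible after using BlockTensorFactorization):

```julia
using BlockTensorFactorization

# Represent T = A * B + C by interleaving the factors with the operations.
A, B = randn(10, 5), randn(5, 20)
C = randn(10, 20)
T = GenericDecomposition((A, B, C), (*, +))

array(T)    # should reconstruct A * B + C
```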
BlockTensorFactorization.Core.SingletonDecomposition — Type
SingletonDecomposition(A::AbstractArray, frozen=false)
Wraps an AbstractArray so it can be treated like an AbstractDecomposition.
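A minimal sketch of wrapping an array this way (same assumptions about exports as above):

```julia
using BlockTensorFactorization

# Wrap a plain array so it can be passed wherever a decomposition is expected.
A = randn(10, 20)
D = SingletonDecomposition(A)

array(D)    # the wrapped array
size(D)     # (10, 20)
```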
How Julia treats an AbstractDecomposition
AbstractDecomposition is an abstract subtype of AbstractArray. An AbstractDecomposition keeps track of the element type and number of dimensions like any other AbstractArray; these are the T and N in the type Array{T,N}. To make AbstractDecomposition behave like other array types, Julia only needs to know how to access or compute entries of the array through getindex. These entries are computed on the fly when a particular index is requested, or the whole tensor is computed from its factors through array(X). This has the advantage of minimizing memory use, and allows for the most flexibility, since any operation supported by AbstractArray will work on AbstractDecomposition types. The drawback is that repeated requests for an entry must recompute it each time. In these cases, it is best to "flatten" the array with array(X) before making these repeated calls.
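To make the trade-off concrete, here is a hypothetical usage pattern (constructor and array as documented above):

```julia
using BlockTensorFactorization

X = Tucker1((50, 60, 70), 8)   # stores only the core and one factor matrix

X[3, 4, 5]    # a single entry, computed on the fly from the factors
sum(X)        # generic AbstractArray code: calls getindex entry by entry

# For repeated entrywise access, flatten once and reuse the dense array.
Xdense = array(X)
sum(Xdense)
```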
Some basic operations like +, -, *, \, and / will either compute the operation in an optimized way, or call array(X) to first flatten the decomposition into a regular Array in an optimized way. Operations that do not have an optimized method (because I can only do so much) will instead call Julia's Array{T,N}(X) to convert the model into a regular Array{T,N}. This is usually slower and less memory efficient, since it calls getindex on every index individually instead of computing the whole array at once.
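A sketch of the difference (assuming + on a Tucker model takes one of the paths described above):

```julia
using BlockTensorFactorization

X = Tucker((10, 20), (3, 4))
Y = randn(10, 20)

Z = X + Y    # optimized method if one exists, otherwise X is flattened first

# Explicit conversions: array(X) builds the full tensor from the factors at
# once, while Array{Float64, 2}(X) falls back to entrywise getindex.
Xfast = array(X)
Xslow = Array{Float64, 2}(X)
```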