Types

BlockTensorFactorization.Core.AbstractDecomposition (Type)

Abstract type for the different decompositions.

The main interface for subtypes consists of the following functions.

Required:

- array(D): how to construct the full array from the decomposition representation
- factors(D): a tuple of arrays, the decomposed factors

Optional:

- getindex(D, i::Int) and getindex(D, I::Vararg{Int, N}): how to get the ith or Ith element of the reconstructed array. Defaults to getindex(array(D), x), but there is often a more efficient way to get a specific element from large tensors.
- size(D): defaults to size(array(D)).
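A minimal standalone sketch of this interface, using a made-up two-factor matrix-product decomposition (MatrixProduct is illustrative, not a package type):

```julia
# Illustrative sketch only: a toy decomposition D = A * B implementing
# the AbstractDecomposition-style interface.
abstract type AbstractDecomposition end

struct MatrixProduct <: AbstractDecomposition
    A::Matrix{Float64}
    B::Matrix{Float64}
end

# Required interface
array(D::MatrixProduct) = D.A * D.B
factors(D::MatrixProduct) = (D.A, D.B)

# Optional: fetch one entry without materializing the full array
Base.getindex(D::MatrixProduct, i::Int, j::Int) =
    sum(D.A[i, r] * D.B[r, j] for r in axes(D.A, 2))
Base.size(D::MatrixProduct) = (size(D.A, 1), size(D.B, 2))
```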

BlockTensorFactorization.Core.AbstractStat (Type)

An AbstractStat is a type which, when created, can be applied to the four arguments (X::AbstractDecomposition, Y::AbstractArray, previous::Vector{<:AbstractDecomposition}, parameters::Dict) to (usually) return a number.
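For example, a hypothetical stat measuring relative reconstruction error might look like the following (RelativeError is illustrative; in the package, X would be an AbstractDecomposition rather than a plain array):

```julia
using LinearAlgebra

# Illustrative only: a callable stat in this four-argument style
struct RelativeError end

(stat::RelativeError)(X::AbstractArray, Y::AbstractArray, previous, parameters) =
    norm(X - Y) / norm(Y)

RelativeError()(ones(3), fill(2.0, 3), [], Dict())  # 0.5
```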

BlockTensorFactorization.Core.AbstractStep (Type)

The interface to make a step scheme is

struct MyStep <: AbstractStep
    ...
end

function (step::MyStep)(x::AbstractDecomposition; kwargs...)
    ...
    return step::Real
end

To use your scheme, construct an instance with any necessary parameters

mystep = MyStep(...)

and then you can call

step = mystep(D; kwargs...)

to compute the step size.
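As a concrete standalone illustration (ConstantStep and Dummy are made-up names following this interface):

```julia
abstract type AbstractStep end
abstract type AbstractDecomposition end

# Illustrative only: a scheme that always returns a fixed step size
struct ConstantStep <: AbstractStep
    size::Float64
end

(step::ConstantStep)(x::AbstractDecomposition; kwargs...) = step.size

struct Dummy <: AbstractDecomposition end

mystep = ConstantStep(0.1)
step = mystep(Dummy())  # 0.1
```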

BlockTensorFactorization.Core.BlockGradientDescent (Type)

Perform a block gradient descent step on the nth factor of an AbstractDecomposition x.

The n is only to keep track of the factor that gets updated, and to check if a frozen factor was requested to be updated.

This type allows for more complicated step sizes such as individual steps for sub-blocks of the nth factor.

BlockTensorFactorization.Core.CPDecomposition (Type)

CP decomposition. Takes the form of an outer product of multiple matrices.

For example, a CP-decomposition of an order three tensor D would be, entry-wise,

D[i, j, k] = ∑_r A[i, r] * B[j, r] * C[k, r].

CPDecomposition((A, B, C))
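The entry-wise formula can be checked directly in plain Julia (the sizes and rank here are arbitrary):

```julia
# Reconstruct an order-3 tensor entry-wise from rank-2 CP factors
A, B, C = rand(4, 2), rand(5, 2), rand(6, 2)
D = [sum(A[i, r] * B[j, r] * C[k, r] for r in 1:2)
     for i in 1:4, j in 1:5, k in 1:6]
size(D)  # (4, 5, 6)
```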

BlockTensorFactorization.Core.ComposedConstraint (Type)
ComposedConstraint{T<:AbstractConstraint, U<:AbstractConstraint}
outer_constraint ∘ inner_constraint

Composing any two AbstractConstraints with ∘ will return this type.

Applies the inner constraint first, then the outer constraint. Checking a ComposedConstraint will check both constraints are satisfied.

BlockTensorFactorization.Core.EuclideanStepSize (Type)

The 2-norm of the step sizes that would be taken for all blocks.

For example, if there are two blocks, and we would take a step size of A to update one block and B to update the other, this would return sqrt(A^2 + B^2).
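That two-block example, computed directly (the values are arbitrary):

```julia
using LinearAlgebra

block_steps = [0.3, 0.4]  # step sizes for the two blocks
norm(block_steps)         # sqrt(0.3^2 + 0.4^2) == 0.5
```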

BlockTensorFactorization.Core.GenericConstraint (Type)
GenericConstraint <: AbstractConstraint

General constraint. Simply applies the function apply and checks it was successful with check.

Calling a GenericConstraint on an AbstractArray will use the function in the field apply. Use check(C::GenericConstraint, A) to use the function in the field check.
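A sketch of the apply/check pattern with a nonnegativity constraint (plain functions standing in for the two fields; not the package's exact constructor):

```julia
# Illustrative stand-ins for the `apply` and `check` fields
nonneg_apply(A) = max.(A, 0)     # project onto the nonnegative orthant
nonneg_check(A) = all(>=(0), A)  # verify the constraint holds

A = nonneg_apply([-1.0 2.0; 3.0 -4.0])
nonneg_check(A)  # true
```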

BlockTensorFactorization.Core.GradientDescent (Type)

Perform a gradient descent step on the nth factor of an AbstractDecomposition x.

The n is only to keep track of the factor that gets updated, and to check if a frozen factor was requested to be updated.

BlockTensorFactorization.Core.LinearConstraint (Type)
LinearConstraint(A::T, B::AbstractArray) where {T <: Union{Function, AbstractArray}}

The constraint AX = B for a linear operator A and array B.

When A is a matrix, this projects onto the subspace, with the solution given by

X .-= A' * ((A * A') \ (A * X .- B)).
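The projection formula can be verified standalone for a small system (one linear equation):

```julia
# Check that the update lands X on the constraint set A * X == B
A = [1.0 2.0 3.0]   # 1×3 matrix: one linear equation
B = [4.0]
X = [1.0, 1.0, 1.0]
X .-= A' * ((A * A') \ (A * X .- B))
A * X ≈ B  # true
```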

BlockTensorFactorization.Core.NoConstraint (Type)
NoConstraint() <: AbstractConstraint

The constraint that does nothing. Useful for giving a list of AbstractConstraint for each factor where you would like one factor to be unconstrained.

BlockTensorFactorization.Core.ProjectedNormalization (Type)
ProjectedNormalization(projection, norm; whats_normalized=identity)

Main constructor for the constraint where norm of whats_normalized equals scale.

Scale can be a single Real, or an AbstractArray{<:Real}, but should be the same size as the output of whats_normalized.

BlockTensorFactorization.Core.Rescale (Type)
Rescale{T<:Union{Nothing,Missing,Function}} <: ConstraintUpdate
Rescale(n, scale::ScaledNormalization, whats_rescaled::T)

Applies the scaled normalization scale to factor n, and tries to multiply the other factors by the scaling taken from factor n.

If whats_rescaled=nothing, then it will not rescale any other factor.

If whats_rescaled=missing, then it will try to distribute the weight evenly to all other factors, using the (N-1)th root of each weight, where N is the number of factors. If the weights are not broadcastable (e.g. you want to scale each row but each factor has a different number of rows), it will use the geometric mean of the weights as the single weight to distribute evenly among the other factors.

If typeof(whats_rescaled) <: Function, it will broadcast the weight to the output of calling this function on the entire decomposition. For example, whats_rescaled = x -> eachcol(factor(x, 2)) will rescale each column of the second factor of the decomposition.
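The even redistribution in the whats_rescaled=missing case can be illustrated numerically: pulling a weight w out of one factor and giving each of the other N - 1 factors the (N-1)th root of w leaves the total product unchanged:

```julia
N, w = 3, 8.0
per_factor = w^(1 / (N - 1))  # weight given to each of the other factors
per_factor^(N - 1) ≈ w        # true: the total weight is preserved
```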

BlockTensorFactorization.Core.ScaledNormalization (Type)
ScaledNormalization(norm; whats_normalized=identity, scale=1)

Main constructor for the constraint where norm of whats_normalized equals scale.

Scale can be a single Real, or an AbstractArray{<:Real}, but should be broadcast-able with the output of whats_normalized. Lastly, scale can be a Function which will act on an AbstractArray{<:Real} and return something that is broadcast-able with the output of whats_normalized.

BlockTensorFactorization.Core.Tucker (Method)
Tucker(full_size::NTuple{N, Integer}, ranks::NTuple{N, Integer};
    frozen=false_tuple(length(ranks)+1), init=DEFAULT_INIT, kwargs...) where N

Constructs a random Tucker type using init to initialize the factors.

See Tucker1.

BlockTensorFactorization.Core.Tucker (Method)
Tucker((G, A, B, ...))
Tucker((G, A, B, ...), frozen)

Tucker decomposition. Takes the form of a core G times a matrix for each dimension.

For example, a rank (r, s, t) Tucker decomposition of an order three tensor D would be, entry-wise,

D[i, j, k] = ∑_r ∑_s ∑_t G[r, s, t] * A[i, r] * B[j, s] * C[k, t].

Optionally use frozen::Tuple{Bool} to specify which factors are frozen.

See tuckerproduct.
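The entry-wise formula, checked in plain Julia for a small rank (2, 3, 2) example (sum indices renamed to avoid clashing with the rank symbols):

```julia
r, s, t = 2, 3, 2
G = rand(r, s, t)
A, B, C = rand(4, r), rand(5, s), rand(6, t)
D = [sum(G[p, q, u] * A[i, p] * B[j, q] * C[k, u]
         for p in 1:r, q in 1:s, u in 1:t)
     for i in 1:4, j in 1:5, k in 1:6]
size(D)  # (4, 5, 6)
```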

BlockTensorFactorization.Core.Tucker1 (Method)
Tucker1(full_size::NTuple{N, Integer}, rank::Integer; frozen=false_tuple(2), init=DEFAULT_INIT, kwargs...) where N

Constructs a random Tucker1 type using init to initialize the factors.

BlockTensorFactorization.Core.Tucker1 (Method)
Tucker1((G, A))
Tucker1((G, A), frozen)

Tucker-1 decomposition. Takes the form of a core G times a matrix A. Entry-wise

D[i₁, …, i_N] = ∑_r G[r, i₂, …, i_N] * A[i₁, r].

Optionally use frozen::Tuple{Bool} to specify which factors are frozen.

See ×₁ and mtt.
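The entry-wise formula for an order-3 example, in plain Julia:

```julia
G = rand(2, 5, 6)  # core; first mode has size r = 2
A = rand(4, 2)
D = [sum(G[r, i2, i3] * A[i1, r] for r in 1:2)
     for i1 in 1:4, i2 in 1:5, i3 in 1:6]
size(D)  # (4, 5, 6)
```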
