Types
BlockTensorFactorization.Core.AbstractConstraint — Type
Abstract parent type for the various constraints
BlockTensorFactorization.Core.AbstractDecomposition — Type
Abstract type for the different decompositions.
The main interface for subtypes consists of the following functions.
Required:
- array(D): how to construct the full array from the decomposition representation.
- factors(D): a tuple of arrays, the decomposed factors.
Optional:
- getindex(D, i::Int) and getindex(D, I::Vararg{Int, N}): how to get the ith or Ith element of the reconstructed array. Defaults to getindex(array(D), x), but there is often a more efficient way to get a specific element from large tensors.
- size(D): defaults to size(array(D)).
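This interface can be sketched with a standalone mimic in plain Julia. The abstract type name mirrors the package's, but the LowRank struct and its two-matrix product are purely illustrative:

```julia
# Local stand-in for BlockTensorFactorization.Core.AbstractDecomposition,
# for illustration only.
abstract type AbstractDecomposition{T,N} end

# A hypothetical two-factor decomposition representing D = A * B.
struct LowRank{T} <: AbstractDecomposition{T,2}
    A::Matrix{T}
    B::Matrix{T}
end

# Required interface:
array(D::LowRank) = D.A * D.B       # full array from the factors
factors(D::LowRank) = (D.A, D.B)    # tuple of the decomposed factors

# Optional interface: a single entry without forming the full array.
Base.getindex(D::LowRank, i::Int, j::Int) =
    sum(D.A[i, r] * D.B[r, j] for r in axes(D.A, 2))
Base.size(D::LowRank) = (size(D.A, 1), size(D.B, 2))

D = LowRank([1.0 0.0; 0.0 1.0], [2.0 3.0; 4.0 5.0])
array(D)    # [2.0 3.0; 4.0 5.0]
D[1, 2]     # 3.0, computed without building the full array
```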
BlockTensorFactorization.Core.AbstractObjective — Type
AbstractObjective <: Function
General interface is
struct L2 <: AbstractObjective end
after constructing
myobjective = L2()
you can call
myobjective(X, Y)
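A standalone sketch of this callable-struct pattern follows; note the ½‖X − Y‖² convention here is an assumption for illustration, not necessarily the package's exact definition of the least squares objective:

```julia
# Local stand-ins for illustration; the real types live in
# BlockTensorFactorization.Core.
abstract type AbstractObjective <: Function end

struct L2 <: AbstractObjective end

# Assumed convention: half the squared 2-norm of the difference.
(::L2)(X, Y) = 0.5 * sum(abs2, X .- Y)

myobjective = L2()
myobjective([1.0, 2.0], [1.0, 0.0])   # 2.0
</imports>
```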
BlockTensorFactorization.Core.AbstractStat — Type
An AbstractStat is a type which, when created, can be applied to the four arguments (X::AbstractDecomposition, Y::AbstractArray, previous::Vector{<:AbstractDecomposition}, parameters::Dict) to (usually) return a number.
BlockTensorFactorization.Core.AbstractStep — Type
Interface to make a step scheme is
struct MyStep <: AbstractStep ... end
function (step::MyStep)(x::AbstractDecomposition; kwargs...) ... return step::Real end
To use your scheme, construct an instance with any necessary parameters
mystep = MyStep(...)
and then you can call
step = mystep(D; kwargs...)
to compute the step size.
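The interface above can be sketched with a hypothetical ConstantStep scheme (the name and field are illustrative, not part of the package):

```julia
# Local stand-in for BlockTensorFactorization.Core.AbstractStep.
abstract type AbstractStep end

# Hypothetical scheme: always return the same fixed step size.
struct ConstantStep <: AbstractStep
    size::Float64
end

(step::ConstantStep)(x; kwargs...) = step.size   # ignores the decomposition

mystep = ConstantStep(0.1)
step = mystep(nothing)   # 0.1, regardless of what is passed in
```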
BlockTensorFactorization.Core.AbstractTucker — Type
Abstract type for all Tucker-like decompositions. AbstractTucker decompositions have a core with the same number of dimensions as the full array, and (a) matrix factor(s).
BlockTensorFactorization.Core.BlockGradientDescent — Type
Performs a block gradient descent step on the nth factor of an AbstractDecomposition x.
The n is only to keep track of the factor that gets updated, and to check if a frozen factor was requested to be updated.
This type allows for more complicated step sizes such as individual steps for sub-blocks of the nth factor.
BlockTensorFactorization.Core.CPDecomposition — Type
CP decomposition. Takes the form of an outer product of multiple matrices.
For example, a CP decomposition of an order three tensor D would be, entry-wise,
D[i, j, k] = ∑_r A[i, r] * B[j, r] * C[k, r].
CPDecomposition((A, B, C))
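The entry-wise formula can be reproduced in plain Julia without the package; A, B, and C below are small illustrative factors:

```julia
# Entry-wise CP reconstruction: D[i,j,k] = ∑_r A[i,r] * B[j,r] * C[k,r].
A = [1.0 2.0; 3.0 4.0]   # 2×2 factors, shared rank r = 2
B = [1.0 0.0; 0.0 1.0]
C = [1.0 1.0; 2.0 0.0]

D = [sum(A[i, r] * B[j, r] * C[k, r] for r in axes(A, 2))
     for i in axes(A, 1), j in axes(B, 1), k in axes(C, 1)]

size(D)      # (2, 2, 2)
D[1, 1, 1]   # 1.0
```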
BlockTensorFactorization.Core.ComposedConstraint — Type
ComposedConstraint{T<:AbstractConstraint, U<:AbstractConstraint}
outer_constraint ∘ inner_constraint

Composing any two AbstractConstraints with ∘ will return this type.
Applies the inner constraint first, then the outer constraint. Checking a ComposedConstraint will check both constraints are satisfied.
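The order of application matches Julia's own ∘ for functions, which this plain-Julia sketch illustrates (the two constraint functions here, nonnegativity and sum-to-one, are hypothetical examples):

```julia
clamp_nonneg  = x -> max.(x, 0.0)   # inner: entry-wise nonnegativity
normalize_sum = x -> x ./ sum(x)    # outer: rescale so the entries sum to 1

c = normalize_sum ∘ clamp_nonneg    # inner constraint applied first

c([-1.0, 1.0, 3.0])   # [0.0, 0.25, 0.75]
```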
BlockTensorFactorization.Core.ConstraintUpdate — Method
ConstraintUpdate(n, constraint)

Converts an AbstractConstraint to a ConstraintUpdate on the factor n.
BlockTensorFactorization.Core.DiagonalTuple — Type
Alias for NTuple{N, Diagonal{T}}
BlockTensorFactorization.Core.DisplayDecomposition — Type
DisplayDecomposition(; kwargs...)

Does not use any of the kwargs. Simply displays the current iteration.
BlockTensorFactorization.Core.Entrywise — Type
Entrywise <: AbstractConstraint
Entrywise(apply::Function, check::Function)

Entry-wise constraint. Both apply and check are performed entry-wise on an array.
BlockTensorFactorization.Core.Entrywise — Method
(C::Entrywise)(A::AbstractArray)

Applies C.apply to A entry-wise. Mutates A.
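The entry-wise apply/check behavior can be sketched with plain functions and map!; the specific apply and check here (nonnegativity) are just an example:

```julia
apply = x -> max(x, 0.0)   # project each entry onto the nonnegative reals
check = x -> x >= 0        # verify the constraint entry-wise

A = [-1.0 2.0; 3.0 -4.0]
map!(apply, A, A)          # mutates A in place, entry-wise

A                          # [0.0 2.0; 3.0 0.0]
all(check, A)              # true
```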
BlockTensorFactorization.Core.EuclideanLipschitz — Type
The 2-norm of the Lipschitz constants that would be taken for all blocks.
Needs the step sizes to be Lipschitz steps, since it is calculated similarly to EuclideanStepSize.
BlockTensorFactorization.Core.EuclideanStepSize — Type
The 2-norm of the stepsizes that would be taken for all blocks.
For example, if there are two blocks, and we would take a stepsize of A to update one block and B to update the other, this would return sqrt(A^2 + B^2).
BlockTensorFactorization.Core.FactorNorms — Type
FactorNorms(; norm, kwargs...)

Makes a tuple containing the norm of each factor in the decomposition.
BlockTensorFactorization.Core.GenericConstraint — Type
GenericConstraint <: AbstractConstraint

General constraint. Simply applies the function apply and checks it was successful with check.
Calling a GenericConstraint on an AbstractArray will use the function in the field apply. Use check(C::GenericConstraint, A) to use the function in the field check.
BlockTensorFactorization.Core.GenericDecomposition — Type
Most general decomposition. Takes the form of interweaving contractions between the factors.
For example, T = A * B + C could be represented as GenericDecomposition((A, B, C), (*, +))
BlockTensorFactorization.Core.GradientDescent — Type
Performs a gradient descent step on the nth factor of an AbstractDecomposition x.
The n is only to keep track of the factor that gets updated, and to check if a frozen factor was requested to be updated.
BlockTensorFactorization.Core.GradientNNCone — Type
GradientNNCone{T} <: AbstractStat

2-norm vector-set distance between the negative gradient and the nonnegative cone at the iterate.
BlockTensorFactorization.Core.GradientNorm — Type
GradientNorm{T} <: AbstractStat

2-norm of the gradient.
BlockTensorFactorization.Core.IterateNormDiff — Type
IterateNormDiff{T<:Function} <: AbstractStat

2-norm of the difference between the previous and current iterate.
BlockTensorFactorization.Core.IterateRelativeDiff — Type
IterateRelativeDiff{T<:Function} <: AbstractStat

Relative difference between the previous and current iterate.
BlockTensorFactorization.Core.Iteration — Type
Iteration <: AbstractStat

Iteration number.
BlockTensorFactorization.Core.L2 — Type
L2 <: AbstractObjective

The least squares objective.
BlockTensorFactorization.Core.L2 — Method
(objective::L2)(X, Y)

Calculates the least squares objective at tensors X and Y.
BlockTensorFactorization.Core.LinearConstraint — Type
LinearConstraint(A::T, B::AbstractArray) where {T <: Union{Function, AbstractArray}}

The constraint AX = B for a linear operator A and array B.
When A is a matrix, this projects onto the subspace with the solution given by
X .-= A' * ( (A*A') \ (A*X .- B) ).
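This update can be checked concretely in plain Julia; the A, B, and X below are illustrative:

```julia
using LinearAlgebra

# Project X onto the affine subspace A*X = B using the update above.
A = [1.0 1.0 0.0; 0.0 1.0 1.0]
B = [1.0, 2.0]
X = [0.5, 0.5, 0.5]

X .-= A' * ((A * A') \ (A * X .- B))

A * X ≈ B   # true: the constraint now holds
```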
BlockTensorFactorization.Core.LipschitzStep — Type
LipschitzStep <: AbstractStep

Has a single property lipschitz which stores a function for calculating the Lipschitz constant of the gradient with respect to a factor.
BlockTensorFactorization.Core.LipschitzStep — Method
(step::LipschitzStep)(x; kwargs...)

Computes the step size 1/L.
BlockTensorFactorization.Core.MomentumUpdate — Method
Makes a MomentumUpdate from an AbstractGradientDescent, assuming the AbstractGradientDescent has a Lipschitz step size.
BlockTensorFactorization.Core.NoConstraint — Type
NoConstraint() <: AbstractConstraint

The constraint that does nothing. Useful when giving a list of AbstractConstraints for each factor and you would like one factor to be unconstrained.
BlockTensorFactorization.Core.ObjectiveRatio — Type
ObjectiveRatio{T<:AbstractObjective} <: AbstractStat

Ratio between the previous and current objective value.
BlockTensorFactorization.Core.ObjectiveValue — Type
ObjectiveValue{T<:AbstractObjective} <: AbstractStat

The current objective value.
BlockTensorFactorization.Core.PrintStats — Type
PrintStats(; kwargs...)

Does not use any of the kwargs. Simply prints the most recent row of the stats.
BlockTensorFactorization.Core.ProjectedNormalization — Type
ProjectedNormalization(projection, norm; whats_normalized=identity)

Main constructor for the constraint where the norm of whats_normalized equals scale.
Scale can be a single Real, or an AbstractArray{<:Real}, but should be the same size as the output of whats_normalized.
BlockTensorFactorization.Core.ProjectedNormalization — Method
ProjectedNormalization(S::ScaledNormalization{T}) where {T <: Real}

Converts from a ScaledNormalization to a ProjectedNormalization.
Only works when the scale is 1.
BlockTensorFactorization.Core.Projection — Type
Performs a projected gradient update on the nth factor of an AbstractDecomposition x.
BlockTensorFactorization.Core.RelativeError — Type
RelativeError{T<:Function} <: AbstractStat

Relative error between the decomposition model and the input array.
BlockTensorFactorization.Core.Rescale — Type
Rescale{T<:Union{Nothing,Missing,Function}} <: ConstraintUpdate
Rescale(n, scale::ScaledNormalization, whats_rescaled::T)

Applies the scaled normalization scale to factor n, and tries to multiply the scaling of factor n into the other factors.
If whats_rescaled=nothing, then it will not rescale any other factor.
If whats_rescaled=missing, then it will try to evenly distribute the weight to all other factors using the (N-1)th root of each weight, where N is the number of factors. If the weights are not broadcastable (e.g. you want to scale each row but each factor has a different number of rows), it will use the geometric mean of the weights as the single weight to distribute evenly among the other factors.
If typeof(whats_rescaled) <: Function, it will broadcast the weight to the output of calling this function on the entire decomposition. For example, whats_rescaled = x -> eachcol(factor(x, 2)) will rescale each column of the second factor of the decomposition.
BlockTensorFactorization.Core.ScaledNormalization — Type
ScaledNormalization(norm; whats_normalized=identity, scale=1)

Main constructor for the constraint where the norm of whats_normalized equals scale.
Scale can be a single Real or an AbstractArray{<:Real}, but should be broadcast-able with the output of whats_normalized. Lastly, scale can be a Function, which will act on an AbstractArray{<:Real} and return something that is broadcast-able with whats_normalized.
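As a plain-Julia sketch of the idea, here each column plays the role of whats_normalized and scale = 1 under the 2-norm:

```julia
using LinearAlgebra

# Rescale each column of A so its 2-norm equals 1.
A = [3.0 0.0; 4.0 2.0]
for col in eachcol(A)
    col ./= norm(col)
end

A[:, 1]                                  # [0.6, 0.8]
all(col -> norm(col) ≈ 1, eachcol(A))    # true
```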
BlockTensorFactorization.Core.ScaledNormalization — Method
ScaledNormalization(P::ProjectedNormalization)

Converts from a ProjectedNormalization to a ScaledNormalization.
BlockTensorFactorization.Core.SingletonDecomposition — Type
SingletonDecomposition(A::AbstractArray, frozen=false)
Wraps an AbstractArray so it can be treated like an AbstractDecomposition
BlockTensorFactorization.Core.SuperDiagonal — Type
SuperDiagonal(v::AbstractVector, ndims::Integer=2)

Constructs a SuperDiagonal array from the vector v.
BlockTensorFactorization.Core.SuperDiagonal — Type
SuperDiagonal{T, N, V<:AbstractVector{T}} <: AbstractArray{T, N}

Array of order N that is zero everywhere except possibly along the super diagonal.
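A dense plain-Julia construction of an order-3 super-diagonal array makes the definition concrete (the real SuperDiagonal type presumably stores only the vector v):

```julia
# Dense illustration: zero everywhere except S[i, i, i] = v[i].
v = [1.0, 2.0, 3.0]
n = length(v)
S = zeros(n, n, n)
for i in 1:n
    S[i, i, i] = v[i]
end

S[2, 2, 2]   # 2.0
S[1, 2, 3]   # 0.0, off the super diagonal
```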
BlockTensorFactorization.Core.Tucker — Method
Tucker(full_size::NTuple{N, Integer}, ranks::NTuple{N, Integer};
    frozen=false_tuple(length(ranks)+1), init=DEFAULT_INIT, kwargs...) where N

Constructs a random Tucker type using init to initialize the factors.
See Tucker1.
BlockTensorFactorization.Core.Tucker — Method
Tucker((G, A, B, ...))
Tucker((G, A, B, ...), frozen)

Tucker decomposition. Takes the form of a core G times a matrix for each dimension.
For example, a rank (r, s, t) Tucker decomposition of an order three tensor D would be, entry-wise,
D[i, j, k] = ∑_r ∑_s ∑_t G[r, s, t] * A[i, r] * B[j, s] * C[k, t].
Optionally use frozen::Tuple{Bool} to specify which factors are frozen.
See tuckerproduct.
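The triple sum can be reproduced in plain Julia; the core and factors below are small illustrative arrays:

```julia
# Entry-wise Tucker reconstruction:
# D[i,j,k] = ∑_r ∑_s ∑_t G[r,s,t] * A[i,r] * B[j,s] * C[k,t]
G = reshape(collect(1.0:8.0), 2, 2, 2)        # rank (2, 2, 2) core
A, B, C = ones(3, 2), ones(4, 2), ones(5, 2)  # matrix factors

D = [sum(G[r, s, t] * A[i, r] * B[j, s] * C[k, t]
         for r in 1:2, s in 1:2, t in 1:2)
     for i in 1:3, j in 1:4, k in 1:5]

size(D)      # (3, 4, 5)
D[1, 1, 1]   # 36.0: the sum of the core entries, since all factors are ones
```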
BlockTensorFactorization.Core.Tucker1 — Method
Tucker1(full_size::NTuple{N, Integer}, rank::Integer; frozen=false_tuple(2), init=DEFAULT_INIT, kwargs...) where N

Constructs a random Tucker1 type using init to initialize the factors.
BlockTensorFactorization.Core.Tucker1 — Method
Tucker1((G, A))
Tucker1((G, A), frozen)

Tucker-1 decomposition. Takes the form of a core G times a matrix A. Entry-wise,
D[i₁, …, i_N] = ∑_r G[r, i₂, …, i_N] * A[i₁, r].
Optionally use frozen::Tuple{Bool} to specify which factors are frozen.
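The Tucker-1 entry-wise formula can likewise be reproduced in plain Julia; G and A below are small illustrative arrays chosen so the result is easy to check by hand:

```julia
# Entry-wise Tucker-1 reconstruction for an order-3 array:
# D[i₁, i₂, i₃] = ∑_r G[r, i₂, i₃] * A[i₁, r]
G = reshape(collect(1.0:12.0), 2, 3, 2)   # core with rank-2 first mode
A = [1.0 0.0; 0.0 1.0; 1.0 1.0]           # 3×2 matrix factor

D = [sum(G[r, j, k] * A[i, r] for r in 1:2)
     for i in 1:3, j in 1:3, k in 1:2]

size(D)      # (3, 3, 2)
D[1, 1, 1]   # 1.0, since row 1 of A picks out G[1, :, :]
D[3, 1, 1]   # 3.0 = G[1,1,1] + G[2,1,1]
```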