Merged
8 changes: 6 additions & 2 deletions Project.toml
@@ -1,11 +1,12 @@
name = "ScaleInvariantAnalysis"
uuid = "727e6139-ff52-4636-a344-ed1d23e73ffc"
version = "1.0.0-DEV"
authors = ["Tim Holy <tim.holy@gmail.com> and contributors"]
version = "1.0.0-DEV"

[deps]
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
LoopVectorization = "bdcacae8-1622-11e9-2a5c-532679323890"
PrecompileTools = "aea7be01-6a6a-4083-8856-8a6e6704d82a"
SparseArrays = "2f01184e-e22b-5df5-ae63-d93ebab69eaf"

[weakdeps]
@@ -22,6 +23,8 @@ JuMP = "1"
LinearAlgebra = "1.10.0"
NautyGraphs = "0.7"
ParametricOptInterface = "0.15"
PrecompileTools = "1"
Random = "1"
SparseArrays = "1.10.0"
julia = "1.10.10"

@@ -31,8 +34,9 @@ HiGHS = "87dc4568-4c63-4d18-b0c0-bb2238e4078b"
JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
NautyGraphs = "7509a0a4-015a-4167-b44b-0799a1a2605e"
ParametricOptInterface = "0ce4ce61-57bf-432b-a095-efac525d185e"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Statistics = "10745b16-79ce-11e8-05f9-7d13ad32a3b2"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"

[targets]
test = ["Graphs", "HiGHS", "JuMP", "NautyGraphs", "ParametricOptInterface", "Statistics", "Test"]
test = ["Graphs", "HiGHS", "JuMP", "NautyGraphs", "ParametricOptInterface", "Random", "Statistics", "Test"]
4 changes: 4 additions & 0 deletions docs/Project.toml
@@ -1,8 +1,12 @@
[deps]
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
HiGHS = "87dc4568-4c63-4d18-b0c0-bb2238e4078b"
JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
ScaleInvariantAnalysis = "727e6139-ff52-4636-a344-ed1d23e73ffc"

[compat]
Documenter = "1"
HiGHS = "1"
JuMP = "1"
LinearAlgebra = "1"
2 changes: 2 additions & 0 deletions docs/make.jl
@@ -1,5 +1,6 @@
using ScaleInvariantAnalysis
using Documenter
using JuMP, HiGHS

DocMeta.setdocmeta!(ScaleInvariantAnalysis, :DocTestSetup, :(using ScaleInvariantAnalysis); recursive=true)

@@ -12,6 +13,7 @@ makedocs(;
edit_link="main",
assets=String[],
),
checkdocs=:exports,
pages=[
"Home" => "index.md",
],
71 changes: 42 additions & 29 deletions docs/src/index.md
@@ -24,7 +24,7 @@ how the matrix entries change. Scalar summaries like `norm(A)` or
arbitrary choice of units.

A concrete example: a 3×3 matrix whose rows and columns correspond to physical
variables at very different scales (position in metres, velocity in m/s, force
variables with different units (position in meters, velocity in m/s, force
in N):

```jldoctest coverones
@@ -39,8 +39,8 @@ julia> a = symcover(A)
0.001
```

The cover `a` captures the natural per-variable scale. The normalised matrix
`A ./ (a .* a')` is all-ones and is scale-invariant.
The cover `a` captures the natural per-variable scale. The normalized matrix
`A ./ (a .* a')` is all-ones and scale-invariant.

## Measuring cover quality

@@ -49,8 +49,8 @@ better capture the scaling of `A`. The *log-excess* of an entry is
`log(a[i] * b[j] / abs(A[i, j])) >= 0`; it is zero when the constraint is
exactly tight. Two summary statistics aggregate these excesses:

- [`lobjective`](@ref) — sum of log-excesses (L1 in log space).
- [`qobjective`](@ref) — sum of squared log-excesses (L2 in log space).
- [`cover_lobjective`](@ref) — sum of log-excesses (L1 in log space).
- [`cover_qobjective`](@ref) — sum of squared log-excesses (L2 in log space).

Both equal zero if and only if every constraint is tight.

@@ -64,27 +64,51 @@ julia> a = symcover(A)
2.0
3.0

julia> lobjective(a, A)
julia> cover_lobjective(a, A)
2.1972245773362196

julia> qobjective(a, A)
julia> cover_qobjective(a, A)
2.413897921625164
```

Here's an example where the quadratically-optimal cover differs slightly from the one returned by `cover`:

```jldoctest quality2; filter = r"(\d+\.\d{6})\d+" => s"\1"
julia> using ScaleInvariantAnalysis, JuMP, HiGHS

julia> A = [1 2 3; 6 5 4];

julia> a, b = cover(A)
([1.2544610775677627, 3.4759059767492304], [1.7261686708831454, 1.6581941714076147, 2.391465190627206])

julia> aq, bq = cover_qmin(A)
([1.1986299970143055, 3.25358233351279], [1.8441211516912772, 1.6685716234216104, 2.5028574351324164])

julia> a * b'
2×3 Matrix{Float64}:
2.16541 2.08014 3.0
6.0 5.76373 8.31251

julia> aq * bq'
2×3 Matrix{Float64}:
2.21042 2.0 3.0
6.0 5.42884 8.14325
```

## Choosing a cover algorithm

| Function | Symmetric | Minimizes | Requires |
|---|---|---|---|
| [`symcover`](@ref) | yes | (fast heuristic) | — |
| [`cover`](@ref) | no | (fast heuristic) | — |
| [`symcover_lmin`](@ref) | yes | `lobjective` | JuMP + HiGHS |
| [`cover_lmin`](@ref) | no | `lobjective` | JuMP + HiGHS |
| [`symcover_qmin`](@ref) | yes | `qobjective` | JuMP + HiGHS |
| [`cover_qmin`](@ref) | no | `qobjective` | JuMP + HiGHS |
| [`symcover_lmin`](@ref) | yes | `cover_lobjective` | JuMP + HiGHS |
| [`cover_lmin`](@ref) | no | `cover_lobjective` | JuMP + HiGHS |
| [`symcover_qmin`](@ref) | yes | `cover_qobjective` | JuMP + HiGHS |
| [`cover_qmin`](@ref) | no | `cover_qobjective` | JuMP + HiGHS |

**`symcover` and `cover` are recommended for production use.** They run in
O(n²) time and often land within a few percent of the `lobjective`-optimal
cover (see the quality tests in `test/testmatrices.jl`).
O(n²) time and often land within a few percent of the `cover_lobjective`-optimal
cover (see the quality tests in `test/testmatrices.jl`).

The `*_lmin` and `*_qmin` variants solve a convex program (via
[JuMP](https://jump.dev/) and [HiGHS](https://highs.dev/)) and return a
@@ -95,31 +119,20 @@ package extension — simply load both libraries before calling them:
using JuMP, HiGHS
using ScaleInvariantAnalysis

a = symcover_lmin(A) # globally l-minimal symmetric cover
a, b = cover_qmin(A) # globally q-minimal general cover
a = symcover_lmin(A) # globally linear-minimal symmetric cover
a, b = cover_qmin(A) # globally quadratic-minimal general cover
```

## Scale-invariant magnitude estimation

[`divmag`](@ref) combines `symcover` with a right-hand side vector to produce a
scale-covariant estimate of the magnitude of `A \ b` without solving the system:

```julia
a, mag = divmag(A, b)
```

`a` is `symcover(A)` and `mag` estimates `dotabs(A \ b, a)`. Both transform
covariantly when `A` and `b` are rescaled together, so `mag` serves as a
reliable unit for assessing roundoff in the solution. Pass `cond=true` to fold
in the scale-invariant condition number for ill-conditioned systems.

## Index of available tools

```@index
Modules = [ScaleInvariantAnalysis]
Private = false
```

## Reference documentation

```@autodocs
Modules = [ScaleInvariantAnalysis]
Private = false
```
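The renamed objectives in these docs are plain log-domain sums, so they are easy to check by hand. Here is a standalone sketch written directly from the docstring definitions; the helper names `lobj` and `qobj` are illustrative, not the package API:

```julia
# Standalone illustration of the two cover objectives, transcribed from
# the docstring definitions (the package itself is not loaded here).
lobj(a, b, A) = sum(log(a[i] * b[j] / abs(A[i, j]))
                    for i in axes(A, 1), j in axes(A, 2) if A[i, j] != 0)
qobj(a, b, A) = sum(log(a[i] * b[j] / abs(A[i, j]))^2
                    for i in axes(A, 1), j in axes(A, 2) if A[i, j] != 0)

A = [1 2; 3 4]
a, b = [2.0, 4.0], [1.0, 1.0]   # a valid but non-minimal cover: a[i]*b[j] ≥ |A[i,j]|
lobj(a, b, A)                   # log(2) + log(4/3), the two slack entries
qobj(a, b, A)                   # log(2)^2 + log(4/3)^2
```

Both sums vanish exactly when every constraint `a[i]*b[j] ≥ |A[i,j]|` is tight, which is what the quality statistics above measure.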
8 changes: 7 additions & 1 deletion src/ScaleInvariantAnalysis.jl
Expand Up @@ -3,8 +3,9 @@ module ScaleInvariantAnalysis
using LinearAlgebra
using SparseArrays
using LoopVectorization
using PrecompileTools

export lobjective, qobjective, symcover, cover, symcover_lmin, cover_lmin, symcover_qmin, cover_qmin
export cover_lobjective, cover_qobjective, symcover, cover, symcover_lmin, cover_lmin, symcover_qmin, cover_qmin
export dotabs

include("covers.jl")
@@ -56,4 +57,9 @@ function __init__()
end
end

@compile_workload begin
symcover([1.0 0.1; 0.1 4.0])
cover([1.0 0.5; 0.1 4.0])
end

end # module
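The `@compile_workload` block added above is the basic PrecompileTools pattern: calls inside it are executed and compiled at precompile time. When setup values should not themselves be cached into the package image, PrecompileTools pairs it with `@setup_workload`. A sketch of that variant as it would appear inside the module (the workload matrices here are arbitrary examples, not mandated values):

```julia
using PrecompileTools

@setup_workload begin
    # Setup runs at precompile time but is not itself forced into the cache.
    A = [1.0 0.1; 0.1 4.0]
    @compile_workload begin
        # Calls in here are compiled and cached with the package.
        symcover(A)
        cover([1.0 0.5; 0.1 4.0])
    end
end
```

Choosing small, representative matrices keeps the precompile cost low while still warming the hot code paths.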
83 changes: 63 additions & 20 deletions src/covers.jl
@@ -1,30 +1,32 @@
"""
lobjective(a, b, A)
lobjective(a, A)
cover_lobjective(a, b, A)
cover_lobjective(a, A)

Compute the sum of log-domain excesses over nonzero entries of `A`:

∑_{i,j : A[i,j] ≠ 0} log(a[i] * b[j] / |A[i,j]|)

The two-argument form is for symmetric matrices where the cover is `a*a'`.`
The two-argument form is for symmetric matrices where the cover is `a*a'`.

See also: [`qobjective`](@ref) for the sum of squared log-domain excesses.
See also: [`cover_qobjective`](@ref) for the sum of squared log-domain excesses.
"""
lobjective(a, b, A) = sum(log(a[i] * b[j] / abs(A[i, j])) for i in axes(a, 1), j in axes(b, 1) if A[i, j] != 0)
lobjective(a, A) = lobjective(a, a, A)
cover_lobjective(a, b, A) = sum(log(a[i] * b[j] / abs(A[i, j])) for i in axes(a, 1), j in axes(b, 1) if A[i, j] != 0)
cover_lobjective(a, A) = cover_lobjective(a, a, A)

"""
qobjective(a, b, A)
qobjective(a, A)
cover_qobjective(a, b, A)
cover_qobjective(a, A)

Compute the sum of squared log-domain excesses over nonzero entries of `A`:

∑_{i,j : A[i,j] ≠ 0} log(a[i] * b[j] / |A[i,j]|)²

The two-argument form is for symmetric matrices where the cover is `a*a'`.

See also: [`lobjective`](@ref) for the sum of log-domain excesses.
See also: [`cover_lobjective`](@ref) for the sum of log-domain excesses.
"""
qobjective(a, b, A) = sum(log(a[i] * b[j] / abs(A[i, j]))^2 for i in axes(a, 1), j in axes(b, 1) if A[i, j] != 0)
qobjective(a, A) = qobjective(a, a, A)
cover_qobjective(a, b, A) = sum(log(a[i] * b[j] / abs(A[i, j]))^2 for i in axes(a, 1), j in axes(b, 1) if A[i, j] != 0)
cover_qobjective(a, A) = cover_qobjective(a, a, A)

"""
a = symcover(A; iter=3)
@@ -33,10 +35,27 @@ Given a square matrix `A` assumed to be symmetric, return a vector `a`
representing the symmetric cover of `A`, so that `a[i] * a[j] >= abs(A[i, j])`
for all `i`, `j`.

`a` is not minimal, but with increasing `iter` it is increasingly tight.
`a` may not be minimal, but it is tightened iteratively; increasing `iter`
yields a tighter cover.
`symcover` is fast and generally recommended for production use.

See also: [`symcover_lmin`](@ref), [`symcover_qmin`](@ref), [`cover`](@ref).

# Examples

```jldoctest
julia> A = [4 -1; -1 0];

julia> a = symcover(A)
2-element Vector{Float64}:
2.0
0.5

julia> a * a'
2×2 Matrix{Float64}:
4.0 1.0
1.0 0.25
```
"""
function symcover(A::AbstractMatrix; kwargs...)
ax = axes(A, 1)
@@ -54,6 +73,8 @@
if iszero(aj)
if !iszero(ai)
a[j] = Aij / ai
else
a[i] = a[j] = sqrt(Aij)
end
elseif iszero(ai)
a[i] = Aij / aj
@@ -75,10 +96,25 @@ end
Given a matrix `A`, return vectors `a` and `b` such that `a[i] * b[j] >= abs(A[i, j])`
for all `i`, `j`.

`a .* b'` is not minimal, but with increasing `iter` it is increasingly tight.
`a .* b'` may not be minimal, but it is tightened iteratively; increasing `iter`
yields a tighter cover.
`cover` is fast and generally recommended for production use.

See also: [`cover_lmin`](@ref), [`cover_qmin`](@ref), [`symcover`](@ref).

# Examples

```jldoctest; filter = r"(\\d+\\.\\d{6})\\d+" => s"\\1"
julia> A = [1 2 3; 6 5 4];

julia> a, b = cover(A)
([1.2544610775677627, 3.4759059767492304], [1.7261686708831454, 1.6581941714076147, 2.391465190627206])

julia> a * b'
2×3 Matrix{Float64}:
2.16541 2.08014 3.0
6.0 5.76373 8.31251
```
"""
function cover(A::AbstractMatrix; kwargs...)
T = float(eltype(A))
@@ -167,14 +203,15 @@ end
a = symcover_qmin(A)

Given a square matrix `A` assumed to be symmetric, return a vector `a` representing the
symmetric q-minimal (quadratic-minimal) cover of `A`. This solves the optimization problem
symmetric quadratic-minimal cover of `A`. This solves the optimization problem

min ∑_{i,j : A[i,j] ≠ 0} log(a[i] * a[j] / |A[i,j]|)²
s.t. a[i] * a[j] ≥ |A[i, j]| for all i, j

This implementation is *slow*. See also:
- [`symcover_lmin`](@ref) for a much more efficient option that is not quadratically-optimal
- [`cover_qmin`](@ref) for a generalization to non-symmetric matrices.
- [`cover_qobjective`](@ref) for the objective minimized by this function
- [`symcover`](@ref) for a much more efficient option that is not quadratically-optimal
- [`cover_qmin`](@ref) for a generalization to non-symmetric matrices

!!! note
This function requires loading the JuMP and HiGHS packages, which are not dependencies of this package.
Expand All @@ -184,26 +221,32 @@ function symcover_qmin end
"""
a = symcover_lmin(A)

Similar to [`symcover_qmin`](@ref), but returns a symmetric l-minimal (linear-minimal) cover of `A`.
Similar to [`symcover_qmin`](@ref), but returns a symmetric linear-minimal cover of `A`.
"""
function symcover_lmin end

"""
a, b = cover_qmin(A)

Given a matrix `A`, return vectors `a` and `b` representing the q-minimal (quadratic-minimal) cover of `A`. This solves the optimization problem
Given a matrix `A`, return vectors `a` and `b` representing the quadratic-minimal cover of `A`.
This solves the optimization problem

min ∑_{i,j : A[i,j] ≠ 0} log(a[i] * b[j] / |A[i,j]|)²
s.t. a[i] * b[j] ≥ |A[i, j]| for all i, j

This implementation is *slow*. See also:
- [`cover_lmin`](@ref) for a much more efficient option that is not quadratically-optimal
- [`cover_qobjective`](@ref) for the objective minimized by this function
- [`cover`](@ref) for a much more efficient option that is not quadratically-optimal
- [`symcover_qmin`](@ref) for a specialization to symmetric matrices

!!! note
This function requires loading the JuMP and HiGHS packages, which are not dependencies of this package.
"""
function cover_qmin end

"""
a, b = cover_lmin(A)

Similar to [`cover_qmin`](@ref), but returns an l-minimal (linear-minimal) cover of `A`.
Similar to [`cover_qmin`](@ref), but returns a linear-minimal cover of `A`.
"""
function cover_lmin end
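The iterative tightening that the heuristic docstrings describe can be sketched with alternating row/column updates. This is a hedged illustration, not the package's actual `cover` implementation (`tighten` is a hypothetical name), and it assumes `A` has no all-zero rows or columns:

```julia
# Alternating tightening sketch: start from a crude feasible cover and
# repeatedly shrink a and b while preserving a[i]*b[j] ≥ |A[i,j]|.
# Each row update sets a[i] to the smallest value feasible for the current b,
# and symmetrically for b, so every pass is monotonically non-increasing.
function tighten(A; iter=3)
    m, n = size(A)
    a = fill(float(maximum(abs, A)), m)   # crude feasible start with b = 1
    b = ones(n)
    for _ in 1:iter
        for i in 1:m
            a[i] = maximum(abs(A[i, j]) / b[j] for j in 1:n)
        end
        for j in 1:n
            b[j] = maximum(abs(A[i, j]) / a[i] for i in 1:m)
        end
    end
    return a, b
end
```

Running `tighten([1 2 3; 6 5 4])` yields a feasible cover after the first pass; the optimization-based `*_lmin`/`*_qmin` variants exist precisely because such fixed points need not minimize either objective.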