Spectral clustering is a commonly used clustering approach in machine learning; see, e.g., the ScikitLearn tutorial.
Given an undirected, weighted graph \(G=(V, E)\), we can define the weighted adjacency matrix \(\mathbf{W}=(w_{ij})_{i,j=1,\ldots,n}\) with nonnegative weights \(w_{ij}\). \(w_{ij}=0\) means that the vertices \(v_i\) and \(v_j\) are not connected. The degree of a vertex \(v_i\) is \(d_i = \sum_{j=1}^n w_{ij}\).
The unnormalized graph Laplacian is \[
\mathbf{L} = \mathbf{D} - \mathbf{W},
\] where \(\mathbf{D} = \text{diag}(d_1,\ldots,d_n)\).
Exercise: For this graph, what is \(\mathbf{L}\)?
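As a warm-up (an assumed toy example, not the graph in the figure), consider a path graph on three vertices with unit weights \(w_{12} = w_{23} = 1\) and \(w_{13} = 0\). Then \[
\mathbf{W} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad
\mathbf{D} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
\mathbf{L} = \mathbf{D} - \mathbf{W} = \begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix}.
\]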
Properties of the unnormalized graph Laplacian:
For every vector \(\mathbf{f} \in \mathbb{R}^n\), we have \[
\mathbf{f}' \mathbf{L} \mathbf{f} = \frac 12 \sum_{i,j=1}^n w_{ij} (f_i - f_j)^2.
\]
Proof: BV exercises 3.21, 7.9.
\(\mathbf{L}\) is symmetric and positive semidefinite.
Proof: Part 1.
The smallest eigenvalue of \(\mathbf{L}\) is 0, the corresponding eigenvector is the constant one vector \(\mathbf{1}_n\).
Proof: \(\mathbf{L} \mathbf{1}_n = \mathbf{D} \mathbf{1}_n - \mathbf{W} \mathbf{1}_n = (d_1,\ldots,d_n)' - (d_1,\ldots,d_n)' = \mathbf{0}\); that 0 is the smallest eigenvalue follows from positive semidefiniteness (Part 2).
(Number of connected components and the spectrum of \(\mathbf{L}\)) The multiplicity \(k\) of the eigenvalue 0 of \(\mathbf{L}\) equals the number of connected components \(A_1,\ldots,A_k\) in the graph. The eigenspace of eigenvalue 0 is spanned by the indicator vectors \(\mathbf{1}_{A_1}, \ldots, \mathbf{1}_{A_k}\) of those components.
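These properties are easy to check numerically. Below is a minimal Julia sketch on an assumed toy graph with two connected components, \(\{1,2,3\}\) and \(\{4,5\}\) (this example is ours, not from the notes):

using LinearAlgebra

# weighted adjacency matrix of an assumed toy graph with components {1,2,3} and {4,5}
W = [0.0 1.0 1.0 0.0 0.0;
     1.0 0.0 2.0 0.0 0.0;
     1.0 2.0 0.0 0.0 0.0;
     0.0 0.0 0.0 0.0 3.0;
     0.0 0.0 0.0 3.0 0.0]
D = Diagonal(vec(sum(W, dims = 2)))  # degree matrix
L = D - W                            # unnormalized graph Laplacian
# quadratic form identity f'Lf = (1/2) Σᵢⱼ wᵢⱼ (fᵢ - fⱼ)²
f = randn(5)
f' * L * f ≈ 0.5 * sum(W[i, j] * (f[i] - f[j])^2 for i in 1:5, j in 1:5)  # true
# eigenvalues are nonnegative, and eigenvalue 0 has multiplicity 2 (= number of components)
eigvals(Symmetric(L))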
Normalized graph Laplacians. There are two versions of normalized graph Laplacians in the literature: \[
\mathbf{L}_{\text{sym}} = \mathbf{D}^{-1/2} \mathbf{L} \mathbf{D}^{-1/2} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{W} \mathbf{D}^{-1/2}
\] and \[
\mathbf{L}_{\text{rw}} = \mathbf{D}^{-1} \mathbf{L} = \mathbf{I} - \mathbf{D}^{-1} \mathbf{W}.
\]
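Continuing the Julia sketch above (so `W`, `D`, and `LinearAlgebra` are in scope), both normalized Laplacians follow directly from \(\mathbf{D}\) and \(\mathbf{W}\), assuming every vertex has positive degree:

# normalized Laplacians (assumes all degrees are positive)
Lsym = I - inv(sqrt(D)) * W * inv(sqrt(D))  # D^{-1/2} L D^{-1/2}
Lrw  = I - inv(D) * W                       # D^{-1} L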
Unnormalized spectral clustering algorithm.
Input: A similarity matrix \(\mathbf{S} \in \mathbb{R}^{n \times n}\) and number \(k\) of clusters to construct.
Construct a similarity graph. Let \(\mathbf{W}\) be its weighted adjacency matrix.
Compute the unnormalized Laplacian \(\mathbf{L}\).
Compute the first \(k\) eigenvectors \(\mathbf{u}_1, \ldots, \mathbf{u}_k\) of \(\mathbf{L}\) (those corresponding to the \(k\) smallest eigenvalues).
Let \(\mathbf{U} = (\mathbf{u}_1 \cdots \mathbf{u}_k) \in \mathbb{R}^{n \times k}\).
Treat rows of \(\mathbf{U}\) as data points in \(\mathbb{R}^k\) and cluster them using the \(k\)-means algorithm into clusters \(C_1,\ldots,C_k\).
Output: Clusters \(A_1,\ldots,A_k\) according to \(C_1,\ldots,C_k\).
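Putting the steps together, here is a compact Julia sketch (the helper `spectral_cluster` is our own, not from the notes); it assumes the weighted adjacency matrix `W` has already been built and uses `kmeans` from the Clustering.jl package:

using LinearAlgebra, Clustering

# unnormalized spectral clustering: W is the n×n weighted adjacency matrix
function spectral_cluster(W::AbstractMatrix, k::Integer)
    D = Diagonal(vec(sum(W, dims = 2)))
    L = Symmetric(Matrix(D - W))
    # eigenvectors of the k smallest eigenvalues (eigen returns them in ascending order)
    U = eigen(L).vectors[:, 1:k]
    # k-means on the rows of U; Clustering.jl expects observations as columns
    assignments(kmeans(permutedims(U), k))
end

spectral_cluster(W, 2)  # on the toy graph above, this recovers the two components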
Normalized spectral clustering according to Shi and Malik (2000)
Input: A similarity matrix \(\mathbf{S} \in \mathbb{R}^{n \times n}\) and number \(k\) of clusters to construct.
Construct a similarity graph. Let \(\mathbf{W}\) be its weighted adjacency matrix.
Compute the unnormalized Laplacian \(\mathbf{L}\).
Compute the first \(k\) generalized eigenvectors \(\mathbf{u}_1, \ldots, \mathbf{u}_k\) of the generalized eigenproblem \(\mathbf{L} \mathbf{u} = \lambda \mathbf{D} \mathbf{u}\).
Let \(\mathbf{U} = (\mathbf{u}_1 \cdots \mathbf{u}_k) \in \mathbb{R}^{n \times k}\).
Treat rows of \(\mathbf{U}\) as data points in \(\mathbb{R}^k\) and cluster them using the \(k\)-means algorithm into clusters \(C_1,\ldots,C_k\).
Output: Clusters \(A_1,\ldots,A_k\) according to \(C_1,\ldots,C_k\).
Remark: The generalized eigenproblem \(\mathbf{L} \mathbf{u} = \lambda \mathbf{D} \mathbf{u}\) is the same as the eigenproblem \(\mathbf{L}_{\text{rw}} \mathbf{u} = \lambda \mathbf{u}\).
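The remark is easy to verify numerically; a sketch continuing the toy example above:

# generalized eigenvalues of (L, D) coincide with the eigenvalues of L_rw = D⁻¹ L
λ_gen = eigen(Symmetric(Matrix(L)), Symmetric(Matrix(D))).values
λ_rw  = eigvals(inv(D) * L)
sort(λ_gen) ≈ sort(real.(λ_rw))  # true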
Normalized spectral clustering according to Ng, Jordan, and Weiss (2002)
Input: A similarity matrix \(\mathbf{S} \in \mathbb{R}^{n \times n}\) and number \(k\) of clusters to construct.
Construct a similarity graph. Let \(\mathbf{W}\) be its weighted adjacency matrix.
Compute the normalized Laplacian \(\mathbf{L}_{\text{sym}}\).
Compute the first \(k\) eigenvectors \(\mathbf{u}_1, \ldots, \mathbf{u}_k\) of \(\mathbf{L}_{\text{sym}}\).
Let \(\mathbf{U} = (\mathbf{u}_1 \cdots \mathbf{u}_k) \in \mathbb{R}^{n \times k}\).
Form the matrix \(\mathbf{T} \in \mathbb{R}^{n \times k}\) from \(\mathbf{U}\) by normalizing the rows to norm 1.
Treat rows of \(\mathbf{T}\) as data points in \(\mathbb{R}^k\) and cluster them using the \(k\)-means algorithm into clusters \(C_1,\ldots,C_k\).
Output: Clusters \(A_1,\ldots,A_k\) according to \(C_1,\ldots,C_k\).
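The only step that differs from the previous two algorithms is the row normalization of \(\mathbf{U}\); a one-line Julia sketch, with `U` and `k` as in the algorithm, `kmeans` from Clustering.jl as before, and assuming no row of `U` is identically zero:

# normalize each row of U to unit Euclidean norm, then run k-means on the rows of T
T = U ./ sqrt.(sum(abs2, U, dims = 2))
assignments(kmeans(permutedims(T), k))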
2 Matrix completion
Snapshot of the kind of data collected by Netflix. Only 100,480,507 ratings (about 1.2% of the entries of the 480K-by-18K matrix) are observed.
Netflix challenge: impute the unobserved ratings for personalized recommendation. http://en.wikipedia.org/wiki/Netflix_Prize
Matrix completion problem. We observe a very sparse matrix \(\mathbf{Y} = (y_{ij})\) and want to impute all the missing entries. This is possible only when the matrix is structured, e.g., of low rank.
Example: Load the 128×128 Lena picture with missing pixels.
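The construction of `Y` is not shown here; below is a hypothetical sketch that assumes a local grayscale file `lena128.png` and marks missing pixels by the value 0.0 (which is what the `findall(Y[:] .≠ 0.0)` call below relies on):

using FileIO, Images, Random

Random.seed!(123)
# hypothetical: load an assumed local 128×128 grayscale image
Y = Float64.(Gray.(load("lena128.png")))
# knock out roughly half of the pixels at random; 0.0 marks a missing entry
miss = randperm(length(Y))[1:length(Y) ÷ 2]
Y[miss] .= 0.0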
We fill in the missing pixels using a matrix completion technique developed by Candes and Tao \[
\text{minimize } \|\mathbf{X}\|_*
\]\[
\text{subject to } x_{ij} = y_{ij} \text{ for all observed entries } (i, j).
\] Here \(\|\mathbf{M}\|_* = \sum_i \sigma_i(\mathbf{M})\) is the nuclear norm. In words we seek the matrix with minimal nuclear norm that agrees with the observed entries. This is a semidefinite programming (SDP) problem readily solved by modern convex optimization software.
We use the convex optimization package COSMO.jl to solve this semidefinite program.
# Use COSMO solver
using Convex, COSMO

solver = COSMO.Optimizer
# Linear indices of obs. entries
obsidx = findall(Y[:] .≠ 0.0)
# Create optimization variables
X = Variable(size(Y))
# Set up optimization problem
problem = minimize(nuclearnorm(X))
problem.constraints += X[obsidx] == Y[obsidx]
# Solve the problem by calling solve!
@time solve!(problem, solver)  # fast
┌ Warning: Concatenating collections of constraints together with `+` or `+=` to produce a new list of constraints is deprecated. Instead, use `vcat` to concatenate collections of constraints.
└ @ Convex ~/.julia/packages/Convex/QKz6m/src/deprecations.jl:129
[ Info: [Convex.jl] Compilation finished: 3.96 seconds, 1.063 GiB of memory allocated
------------------------------------------------------------------
COSMO v0.8.9 - A Quadratic Objective Conic Solver
Michael Garstka
University of Oxford, 2017 - 2022
------------------------------------------------------------------
Problem: x ∈ R^{32897},
constraints: A ∈ R^{41025x32897} (41281 nnz),
matrix size to factor: 73922x73922,
Floating-point precision: Float64
Sets: DensePsdConeTriangle of dim: 32896 (256x256)
ZeroSet of dim: 8128
Nonnegatives of dim: 1
Settings: ϵ_abs = 1.0e-05, ϵ_rel = 1.0e-05,
ϵ_prim_inf = 1.0e-04, ϵ_dual_inf = 1.0e-04,
ρ = 0.1, σ = 1e-06, α = 1.6,
max_iter = 5000,
scaling iter = 10 (on),
check termination every 25 iter,
check infeasibility every 40 iter,
KKT system solver: QDLDL
Acc: Anderson Type2{QRDecomp},
Memory size = 15, RestartedMemory,
Safeguarded: true, tol: 2.0
Setup Time: 230.69ms
Iter: Objective: Primal Res: Dual Res: Rho:
1 -1.4585e+03 1.5985e+01 5.9854e-01 1.0000e-01
25 1.4525e+02 5.1105e-02 1.1356e-03 1.0000e-01
50 1.4758e+02 1.1725e-02 1.4834e-03 6.8658e-01
75 1.4797e+02 5.5160e-04 4.7489e-05 6.8658e-01
100 1.4797e+02 1.7304e-05 1.3870e-06 6.8658e-01
------------------------------------------------------------------
>>> Results
Status: Solved
Iterations: 100
Optimal objective: 148
Runtime: 2.254s (2253.88ms)
10.270626 seconds (46.91 M allocations: 3.466 GiB, 5.00% gc time, 92.00% compilation time: <1% of which was recompilation)
Problem statistics
problem is DCP : true
number of variables : 1 (16_384 scalar elements)
number of constraints : 1 (8_128 scalar elements)
number of coefficients : 8_128
number of atoms : 3
Solution summary
termination status : OPTIMAL
primal status : FEASIBLE_POINT
dual status : FEASIBLE_POINT
objective value : 147.9711
Expression graph
minimize
└─ nuclearnorm (convex; positive)
└─ 128×128 real variable (id: 946…791)
subject to
└─ == constraint (affine)
└─ + (affine; real)
├─ index (affine; real)
│ └─ …
└─ 8128×1 Matrix{Float64}
colorview(Gray, X.value)
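As a quick sanity check on the low-rank structure of the completed image (a sketch; the exact values depend on the run), the singular values decay rapidly:

using LinearAlgebra
# leading singular values of the completed matrix; the spectrum drops off quickly
svdvals(X.value)[1:10]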
3 Compressed sensing
Compressed sensing (Candes and Tao, 2006; Donoho, 2006) tries to address a fundamental question: how can we compress and transmit a complex signal (e.g., musical clips, mega-pixel images) so that it can later be decoded to recover the original signal?
Suppose a signal \(\mathbf{x} \in \mathbb{R}^n\) is sparse with \(s\) non-zeros. We under-sample the signal by multiplying it by a (flat) measurement matrix \(\mathbf{A} \in \mathbb{R}^{m\times n}\) with iid normal entries, observing \(\mathbf{y} = \mathbf{A} \mathbf{x}\). Candes, Romberg and Tao (2006) show that the solution to \[
\begin{eqnarray*}
&\text{minimize}& \|\mathbf{x}\|_1 \\
&\text{subject to}& \mathbf{A} \mathbf{x} = \mathbf{y}
\end{eqnarray*}
\] exactly recovers the true signal under certain conditions on \(\mathbf{A}\) when \(n \gg s\) and \(m \approx s \ln(n/s)\). Why is sparsity a reasonable assumption? Virtually all real-world images have low information content.
Generate a sparse signal and sub-sampling:
using CairoMakie, Makie, Random

# random seed
Random.seed!(123)
# Size of signal
n = 1024
# Sparsity (# nonzeros) in the signal
s = 10
# Number of samples (undersample by a factor of 8)
m = 128
# Generate and display the signal
x0 = zeros(n)
x0[rand(1:n, s)] = randn(s)
# Generate the random sampling matrix
A = randn(m, n) / m
# Subsample by multiplexing
y = A * x0
# plot the true signal
f = Figure()
Makie.Axis(f[1, 1], title = "True Signal x0", xlabel = "x", ylabel = "y")
lines!(1:n, x0)
f
Solve the linear programming problem.
# Use COSMO solver
solver = COSMO.Optimizer
# MOI.set(solver, MOI.RawOptimizerAttribute("max_iter"), 5000)
# Set up optimization problem
x = Variable(n)
problem = minimize(norm(x, 1))
problem.constraints += A * x == y
# Solve the problem
@time solve!(problem, solver)
------------------------------------------------------------------
COSMO v0.8.9 - A Quadratic Objective Conic Solver
Michael Garstka
University of Oxford, 2017 - 2022
------------------------------------------------------------------
Problem: x ∈ R^{2048},
constraints: A ∈ R^{2176x2048} (135168 nnz),
matrix size to factor: 4224x4224,
Floating-point precision: Float64
Sets: Nonnegatives of dim: 2048
ZeroSet of dim: 128
Settings: ϵ_abs = 1.0e-05, ϵ_rel = 1.0e-05,
ϵ_prim_inf = 1.0e-04, ϵ_dual_inf = 1.0e-04,
ρ = 0.1, σ = 1e-06, α = 1.6,
max_iter = 5000,
scaling iter = 10 (on),
check termination every 25 iter,
check infeasibility every 40 iter,
KKT system solver: QDLDL
Acc: Anderson Type2{QRDecomp},
Memory size = 15, RestartedMemory,
Safeguarded: true, tol: 2.0
Setup Time: 24.29ms
Iter: Objective: Primal Res: Dual Res: Rho:
1 -8.1920e+03 8.3337e+00 5.9999e-01 1.0000e-01
25 -4.3108e-09 2.0860e-01 3.1188e-02 1.0000e-01
50 4.7129e+00 1.7996e-01 5.3711e-03 1.0000e-01
75 6.7191e+00 7.9040e-02 1.0756e-02 1.0000e-01
100 7.4400e+00 9.2727e-03 1.5535e-01 9.0268e+00
125 7.5387e+00 3.1036e-06 1.0998e-05 9.4803e-01
------------------------------------------------------------------
>>> Results
Status: Solved
Iterations: 136 (incl. 11 safeguarding iter)
Optimal objective: 7.539
Runtime: 0.244s (244.09ms)
1.188182 seconds (3.00 M allocations: 287.420 MiB, 3.01% gc time, 84.44% compilation time: 43% of which was recompilation)
Problem statistics
problem is DCP : true
number of variables : 1 (1_024 scalar elements)
number of constraints : 1 (128 scalar elements)
number of coefficients : 131_200
number of atoms : 4
Solution summary
termination status : OPTIMAL
primal status : FEASIBLE_POINT
dual status : FEASIBLE_POINT
objective value : 7.5387
Expression graph
minimize
└─ sum (convex; positive)
└─ abs (convex; positive)
└─ 1024-element real variable (id: 590…510)
subject to
└─ == constraint (affine)
└─ + (affine; real)
├─ * (affine; real)
│ ├─ …
│ └─ …
└─ 128×1 Matrix{Float64}
# Display the solution
f = Figure()
Makie.Axis(f[1, 1], title = "Reconstructed signal overlayed with x0", xlabel = "x", ylabel = "y")
scatter!(1:n, x0, label = "truth")
lines!(1:n, vec(x.value), label = "recovery")
axislegend(position = :lt)
f
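A quick check of recovery quality (a sketch; the exact error depends on the random seed and solver tolerances):

using LinearAlgebra
# maximum absolute deviation between the recovered and the true signal
norm(vec(x.value) - x0, Inf)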
4 Automatic differentiation (Auto-Diff)
Last week we scratched the surface of matrix/vector calculus and the chain rule. The recent surge of machine learning has sparked rapid advancement of automatic differentiation, which applies the chain rule to computer code to obtain exact gradients (up to machine precision).
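As a small generic illustration (a sketch, not part of the notes; it assumes the ForwardDiff.jl package and our own function `loss`), auto-diff reproduces an analytic gradient to machine precision:

using ForwardDiff

loss(θ) = 0.5 * sum(abs2, θ) + sin(θ[1])
θ = randn(3)
# forward-mode AD gradient vs. the analytic gradient θ + [cos(θ₁), 0, 0]
ForwardDiff.gradient(loss, θ) ≈ θ + [cos(θ[1]), 0.0, 0.0]  # true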
Surprise! The gradients of the covariance matrix do not match. Close inspection reveals that Julia calculates the gradient with respect to the upper triangular part of the covariance matrix, \(\text{vech}(\Omega)\).