IEEE Signal Processing Letters

Tuning-Free Online Robust PCA

Implicit regularization removes Online RPCA's dependence on parameter tuning: zero dataset-dependent adjustments.

Lakshmi Jayalal, Gokularam Muthukrishnan, Sheetal Kalyani (IIT Madras)

OR-PCA needs careful tuning

Standard OR-PCA decomposes streaming data into low-rank and sparse components, but its performance hinges on two explicit regularization parameters whose best values are dataset-sensitive, making tuning impractical for real-world streaming.

\[\min_{\mathbf{X},\,\mathbf{E}}\ \frac{1}{2}\|\mathbf{Z}-\mathbf{X}-\mathbf{E}\|_F^2 + \lambda_1\|\mathbf{X}\|_* + \lambda_2\|\mathbf{E}\|_1\]

Three subproblems, two parameters to tune

OR-PCA factorizes \(\mathbf{X} = \mathbf{L}\mathbf{R}^\top\) and alternates between estimating the sparse outlier \(\mathbf{e}_t\), the coefficient \(\mathbf{r}_t\), and the subspace basis \(\mathbf{L}\). Each involves an explicit regularizer (\(\lambda_1\) or \(\lambda_2\)) that must be tuned via grid search or cross-validation — methods that are computationally expensive and often fail to generalize to unseen data.
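The factorization also explains where the ridge terms in the subproblems below come from: by the standard variational characterization of the nuclear norm,

\[\|\mathbf{X}\|_* \;=\; \min_{\mathbf{L},\,\mathbf{R}\,:\,\mathbf{X}=\mathbf{L}\mathbf{R}^\top} \tfrac{1}{2}\left(\|\mathbf{L}\|_F^2 + \|\mathbf{R}\|_F^2\right),\]

so the batch penalty \(\lambda_1\|\mathbf{X}\|_*\) is replaced by \(\tfrac{\lambda_1}{2}(\|\mathbf{L}\|_F^2 + \|\mathbf{R}\|_F^2)\), which separates across samples into the \(\tfrac{\lambda_1}{2}\|\mathbf{r}_t\|_2^2\) and \(\tfrac{\lambda_1}{2}\|\mathbf{L}\|_F^2\) terms of \(P_2\) and \(P_3\).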

\[\begin{aligned} P_1:&\quad \min_{\mathbf{e}_t} \tfrac{1}{2}\|\mathbf{z}_t - \mathbf{L}\mathbf{r}_t^\top - \mathbf{e}_t\|_2^2 + \lambda_2\|\mathbf{e}_t\|_1 \;\mid\; \{\mathbf{L}, \mathbf{r}_t\}\\[6pt] P_2:&\quad \min_{\mathbf{r}_t} \tfrac{1}{2}\|\mathbf{z}_t - \mathbf{L}\mathbf{r}_t^\top - \mathbf{e}_t\|_2^2 + \tfrac{\lambda_1}{2}\|\mathbf{r}_t\|_2^2 \;\mid\; \{\mathbf{L}, \mathbf{e}_t\}\\[6pt] P_3:&\quad \min_{\mathbf{L}} \tfrac{1}{2}\|\mathbf{z}_t - \mathbf{L}\mathbf{r}_t^\top - \mathbf{e}_t\|_2^2 + \tfrac{\lambda_1}{2}\|\mathbf{L}\|_F^2 \;\mid\; \{\mathbf{e}_t, \mathbf{r}_t\} \end{aligned}\]
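Both per-sample subproblems have closed-form minimizers, which is precisely where \(\lambda_1\) and \(\lambda_2\) enter: \(P_1\) is solved by soft-thresholding and \(P_2\) by ridge regression. A minimal numpy sketch of these explicit updates (function and variable names are ours, for illustration; the basis update \(P_3\) is omitted):

import numpy as np

def soft_threshold(x, tau):
    # Prox of tau * ||.||_1; closed-form solution of P1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def orpca_sample_update(z, L, lam1, lam2, n_iters=10):
    # Alternate the closed-form P1 / P2 solutions for one sample z.
    # z: (p,) revealed sample; L: (p, k) current basis estimate.
    # Returns coefficient r (k,) and sparse outlier e (p,).
    k = L.shape[1]
    e = np.zeros_like(z)
    G = L.T @ L + lam1 * np.eye(k)              # ridge system matrix of P2
    for _ in range(n_iters):
        r = np.linalg.solve(G, L.T @ (z - e))   # P2: explicit l2 (lam1)
        e = soft_threshold(z - L @ r, lam2)     # P1: explicit l1 (lam2)
    return r, e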

TF-ORPCA — three implicit regularizers

Replace each explicit regularizer with an implicit one that emerges from the optimization algorithm itself.
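The cards' pseudocode is not reproduced on this page, so the sketch below gives one plausible reading of the three routines, assembled from standard implicit-regularization results rather than from the paper's exact updates: gradient descent stopped after \(T\) steps from a zero start behaves like an \(\ell_2\) (ridge) penalty; gradient descent on the Hadamard-product reparameterization \(\mathbf{e} = \mathbf{u}\odot\mathbf{u} - \mathbf{v}\odot\mathbf{v}\) from a small initialization scale \(\alpha\) is biased toward sparse solutions; and a warm-started, early-stopped basis update stays close to its initialization, giving Frobenius-type shrinkage. The names match the loop below; every function body here is an assumption, not the paper's algorithm.

import numpy as np

def mgd(A, y, T, eta):
    # Stand-in for MGD: early-stopped gradient descent on 0.5*||y - A r||^2.
    # Stopping after T steps from r = 0 acts as an implicit l2 penalty
    # (assumption: MGD is of this early-stopped flavor).
    r = np.zeros(A.shape[1])
    for _ in range(T):
        r += eta * A.T @ (y - A @ r)
    return r

def hp_grad(y, T, eta, alpha):
    # Stand-in for HPGrad: gradient descent on 0.5*||y - e||^2 under the
    # Hadamard parameterization e = u*u - v*v; a small init alpha biases
    # the iterates toward sparse e (implicit l1).
    u = alpha * np.ones_like(y)
    v = alpha * np.ones_like(y)
    for _ in range(T):
        res = y - (u * u - v * v)     # residual = negative gradient wrt e
        u += eta * 2.0 * u * res      # chain rule through e = u*u - v*v
        v -= eta * 2.0 * v * res
    return u * u - v * v

def hp_group_grad(y, r, T, eta, L0):
    # Stand-in for HPGroupGrad: early-stopped gradient descent on
    # 0.5*||y - L r||^2 warm-started at the previous basis L0; staying
    # close to L0 plays the role of the Frobenius penalty (assumption:
    # the paper's group reparameterization is replaced by this surrogate).
    L = L0.copy()
    for _ in range(T):
        L += eta * np.outer(y - L @ r, r)
    return L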


The TF-ORPCA loop

for t = 1 to n do
    Reveal z_t
    repeat
        r_t ← MGD(z_t - e_t, T_r)               // implicit l2
        e_t ← HPGrad(z_t - L·r_t, T_e)          // implicit l1
    until convergence
    L ← HPGroupGrad(z_t - e_t, r_t, T_L, L)     // implicit Frobenius
end for
return L, R, E
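Wiring those stand-ins together, the loop reads as follows in numpy. The step counts \(T_r, T_e, T_L\), the step size \(\eta\), and the initialization scale \(\alpha\) are the only knobs (for early stopping, \(T\) roughly plays the role of an inverse regularization weight), and the ablation below indicates the output is stable across their ranges. Default values here are placeholders, not the paper's settings.

import numpy as np   # mgd, hp_grad, hp_group_grad as sketched above

def tf_orpca(Z, k, T_r=50, T_e=50, T_L=5, eta=0.1, alpha=1e-3, inner=5):
    # Sketch of the TF-ORPCA loop over streaming samples (columns of Z).
    p, n = Z.shape
    rng = np.random.default_rng(0)
    L = alpha * rng.standard_normal((p, k))    # small random basis init
    R, E = [], []
    for t in range(n):
        z = Z[:, t]                            # reveal z_t
        e = np.zeros(p)
        for _ in range(inner):                 # 'repeat ... until convergence'
            r = mgd(L, z - e, T_r, eta)                # implicit l2
            e = hp_grad(z - L @ r, T_e, eta, alpha)    # implicit l1
        L = hp_group_grad(z - e, r, T_L, eta, L)       # implicit Frobenius
        R.append(r); E.append(e)
    return L, np.array(R).T, np.array(E).T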

Expressed Variance & ablation

TF-ORPCA matches carefully tuned baselines without any tuning; performance is stable across wide ranges of the learning rate \(\eta\) and the initialization scale \(\alpha\).

Fig. 1 (EV comparison). EV vs. number of samples; TF-ORPCA consistently outperforms default OR-PCA (EV ≤ 0.4).
Fig. 2 (Ablation). EV is stable across the learning-rate and initialization ranges.

Background / foreground separation

The same hyperparameters are used for all three datasets.

Bungalows — Dynamic lighting

Panels: input \(\mathbf{z}_t\); ours, background \(\mathbf{L}\); ours, foreground \(\mathbf{E}\) (40 frames).

PETS2006 — Surveillance

Panels: input \(\mathbf{z}_t\); ours, background \(\mathbf{L}\); ours, foreground \(\mathbf{E}\) (40 frames).

Pedestrians — Outdoor

Panels: input \(\mathbf{z}_t\); ours, background \(\mathbf{L}\); ours, foreground \(\mathbf{E}\) (40 frames).

Three implicit regularizers, zero tuning

TF-ORPCA employs three problem-specific implicit regularization techniques: an early-stopped modified gradient descent (MGD) for implicit \(\ell_2\), a Hadamard-parameterized gradient scheme (HPGrad) for implicit \(\ell_1\), and a novel reparameterization (HPGroupGrad) for implicit Frobenius-norm control. Unlike traditional OR-PCA, the algorithm is insensitive to its own hyperparameters and requires no extensive tuning. It cleanly extracts foreground and background components without the artifacts, such as shadowing and gray patches, often seen with traditional approaches.