# Experimental Design #6
**`Project.toml`**

```diff
@@ -1,7 +1,15 @@
 name = "RidgeRegression"
 uuid = "739161c8-60e1-4c49-8f89-ff30998444b1"
-authors = ["Vivak Patel <vp314@users.noreply.github.com>"]
+authors = ["Eton Tackett <etont@icloud.com>", "Vivak Patel <vp314@users.noreply.github.com>"]
 version = "0.1.0"
+
+[deps]
+CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
+DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
+Downloads = "f43a241f-c20a-4ad4-852c-f6b1247861c6"
+
+[compat]
+CSV = "0.10.15"
+DataFrames = "1.8.1"
+Downloads = "1.7.0"
+julia = "1.12.4"
```
**`docs/make.jl`**

```diff
@@ -14,6 +14,7 @@ makedocs(;
     ),
     pages=[
         "Home" => "index.md",
+        "Design" => "design.md",
     ],
 )
```
**`docs/src/design.md`** (new file, 93 lines added; rendered below)
# Motivation and Background

Many modern scientific problems involve regression with extremely large numbers of predictors. Genome-wide association studies (GWAS), for example, try to identify genetic variants associated with a disease phenotype using hundreds of thousands or millions of genomic features. In such settings, traditional least squares methods fail because of noise and ill-conditioning. Penalized Least Squares (PLS) extends ordinary least squares (OLS) regression by adding a penalty term that shrinks parameter estimates. Ridge regression, an approach within PLS, adds a quadratic regularization term, producing a regularized estimator.
Mathematically, ridge regression estimates the regression coefficients by solving the penalized least squares problem

```math
\hat{\boldsymbol{\beta}} =
\arg\min_{\boldsymbol{\beta}}
\left(
\| \mathbf{y} - X\boldsymbol{\beta} \|_2^2
+
\lambda \| \boldsymbol{\beta} \|_2^2
\right)
```

where $\lambda > 0$ is a regularization parameter that controls the strength of the penalty, and $\|\cdot\|_2$ denotes the Euclidean norm.

> **Owner** (on lines +6 to +12): You should indicate which norms you are using by using a subscript.
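For concreteness, here is a minimal Julia sketch of the direct (closed-form) solution of this problem via the normal equations; `ridge_direct` is an illustrative helper, not part of the package:

```julia
using LinearAlgebra

# Solve β̂ = (XᵀX + λI)⁻¹ Xᵀy. For λ > 0 the matrix XᵀX + λI is
# symmetric positive definite, so a Cholesky factorization applies.
function ridge_direct(X::AbstractMatrix, y::AbstractVector, λ::Real)
    A = Symmetric(X' * X + λ * I)   # I is LinearAlgebra's UniformScaling
    return cholesky(A) \ (X' * y)
end

βhat = ridge_direct(randn(100, 10), randn(100), 0.5)
```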
The purpose of ridge regression is to stabilize regression estimates when the predictors are highly correlated or the design matrix $X$ is nearly singular. Ridge regression shrinks the estimated coefficient vector by penalizing its squared $\ell_2$ norm rather than by imposing an explicit constraint; equivalently, for every $\lambda > 0$ there is a $t > 0$ such that the penalized solution also minimizes the sum of squared residuals subject to $\|\boldsymbol{\beta}\|_2^2 \leq t$. The penalty shrinks the least squares estimates toward the origin, which reduces the variance of the coefficient estimates and mitigates the effects of multicollinearity.

> **Owner:** Ridge Regression does not impose a constraint. It uses a penalty. This needs to be clarified and be made more precise.
There are many numerical algorithms available for computing ridge regression estimates, including direct methods, Krylov subspace methods, gradient-based optimization, coordinate descent, and stochastic gradient descent. These algorithms differ in their computational costs and numerical stability.
The goal of this experiment is to investigate the performance of these algorithms as the structure and scale of the regression problem vary. To do this, we consider the linear model $\mathbf{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon}$, where the matrix $X$ may be constructed with varying dimensions, sparsity patterns, and conditioning properties.

# Questions

The primary goal of this experiment is to compare numerical algorithms for computing ridge regression estimates under various conditions. In particular, we aim to address the following questions:

1. How does the performance of ridge regression algorithms change as the structural and numerical properties of the regression problem vary?

2. Which ridge regression algorithm provides the best balance between numerical stability and computational cost across these problem regimes?
# Experimental Units

The experimental units are the datasets under fixed penalty weights. Every treatment (that is, every algorithm) is applied to every experimental unit, so that differences in performance can be attributed to the algorithms themselves rather than to the data. Each experimental unit consists of a matrix $X$, a response vector $\mathbf{y}$, and a specific regularization parameter $\lambda$.

> **Owner** (on lines +16 to +29): This is unclear to me. What does, "for each experimental unit, all treatments will be applied to the dataset." mean?
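One way to make the experimental unit concrete is a small container type; `RidgeDataset` is a hypothetical sketch, not an existing type in the package:

```julia
# An experimental unit: one dataset (X, y) together with a fixed penalty weight λ.
struct RidgeDataset{TX<:AbstractMatrix,Ty<:AbstractVector}
    X::TX
    y::Ty
    λ::Float64
end
```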
Blocks are defined by combinations of the experimental blocking factors: dimensional regime, matrix sparsity, and ridge penalty magnitude. Each block represents datasets with similar structural properties. Within each block, multiple datasets will be generated, each dataset forms an experimental unit, and every experimental unit receives all treatments.
Datasets will be grouped according to their dimensional regime, characterized as $p \ll n$, $p \approx n$, and $p \gg n$. These regimes correspond to fundamentally different geometric properties of the design matrix, including rank behavior, conditioning, and the stability of the normal equations.
In addition to the dimensional blocks, the strength of the ridge penalty will be incorporated as a secondary blocking factor. The ridge estimator is $\hat{\boldsymbol{\beta}}_R = (X^\top X + \lambda I)^{-1} X^\top \mathbf{y}$. The condition number of a matrix is defined as $\kappa(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}$, the ratio of its largest and smallest singular values. In ridge regression, the regularization parameter $\lambda$ directly affects this condition number. Let $X = U\Sigma V^\top$ be the SVD of $X$, with singular values $\sigma_1 \geq \dots \geq \sigma_p \geq 0$.
Then

```math
X^\top X = V \Sigma^\top \Sigma V^\top
= V \,\mathrm{diag}(\sigma_1^2,\dots,\sigma_p^2)\, V^\top .
```

Adding the ridge term gives

```math
X^\top X + \lambda I
=
V \,\mathrm{diag}(\sigma_1^2+\lambda,\dots,\sigma_p^2+\lambda)\, V^\top ,
```

so that

```math
\kappa_2(X^\top X+\lambda I)
=
\frac{\sigma_{\max}^2+\lambda}{\sigma_{\min}^2+\lambda}.
```
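This identity is easy to check numerically; the following is a throwaway sketch, not package code:

```julia
using LinearAlgebra

X = randn(50, 8)
λ = 1e-2
σ = svdvals(X)                              # singular values, sorted in decreasing order

κ_direct  = cond(X' * X + λ * I)            # 2-norm condition number, computed directly
κ_formula = (σ[1]^2 + λ) / (σ[end]^2 + λ)   # (σ_max² + λ) / (σ_min² + λ)
@assert isapprox(κ_direct, κ_formula; rtol = 1e-8)
```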
Because the performance of numerical algorithms is strongly influenced by the conditioning of the system they solve, the ridge penalty effectively creates regression problems with different numerical difficulty. This provides a way to assess how algorithm performance, convergence behavior, and computational cost depend on the numerical stability of the problem. Here $\sigma_{\min}$ and $\sigma_{\max}$ denote the smallest and largest singular values of $X$; even when $X$ is rank deficient and $\sigma_{\min} = 0$, the regularized condition number $\kappa_2(X^\top X + \lambda I) = (\sigma_{\max}^2 + \lambda)/\lambda$ remains finite for $\lambda > 0$. In this experiment, the magnitude of $\lambda$ is selected relative to these singular values. A weak regularization regime corresponds to $\lambda \approx \sigma_{\min}^2$, where the ridge penalty begins to influence the smallest singular directions but the system remains moderately ill-conditioned. A moderate regularization regime corresponds to $\lambda \approx \sigma_{\min}\sigma_{\max}$, which substantially improves the conditioning of the problem by increasing the smallest eigenvalues of $X^\top X + \lambda I$. Finally, a strong regularization regime corresponds to $\lambda \approx \sigma_{\max}^2$, where the ridge penalty dominates the spectral scale of the problem and produces a well-conditioned system.

> **Owner:** Who are $\sigma_{\min}$ and $\sigma_{\max}$? If my system has zero singular values, is $\sigma_{\min} = 0$? In this case, your condition number is not defined.
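The three regimes could be instantiated from the spectrum of $X$ along the following lines; the helper name `penalty_grid` is hypothetical:

```julia
using LinearAlgebra

# Map the weak/moderate/strong regimes to concrete λ values for a given X.
function penalty_grid(X::AbstractMatrix)
    σ = svdvals(X)
    σmax, σmin = σ[1], σ[end]
    return (weak = σmin^2, moderate = σmin * σmax, strong = σmax^2)
end

λs = penalty_grid(randn(200, 50))   # access as λs.weak, λs.moderate, λs.strong
```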
Another blocking factor is how sparse or dense the matrix $X$ is. Many algorithms behave differently depending on whether the matrix is sparse or dense. Ridge regression involves many operations on $X$, including matrix-matrix and matrix-vector products. A dense matrix leads to high computational cost, whereas a sparse matrix can reduce that cost significantly. As such, different algorithms may perform better depending on the sparsity structure of $X$, making matrix sparsity a relevant blocking factor when comparing algorithm behavior and computational efficiency.
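For illustration, design matrices at the different density levels can be generated with the SparseArrays standard library; the dimensions and densities below are placeholder choices:

```julia
using SparseArrays

n, p = 1_000, 200
X_sparse   = sprandn(n, p, 0.05)   # ≈5% non-zeros  → "Sparse" block
X_moderate = sprandn(n, p, 0.30)   # ≈30% non-zeros → "Moderate" block
X_dense    = randn(n, p)           # fully dense    → "Dense" block
```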
The total number of block combinations, denoted $b$, is the product of the numbers of levels of the blocking factors. For example, if the experiment includes three dimensional regimes, two sparsity levels, and two regularization strengths, then there are $3 \times 2 \times 2 = 12$ block combinations. We denote by $r$ the number of replicated datasets within each block. The total number of experimental units is then $b \cdot r$.
| Blocking Factor | Defined By | Blocks |
|:----------------|:-----------|:-------|
| Dimensional regime | Relationship between the number of predictors $p$ and the number of observations $n$ | $p \ll n$, $p \approx n$, $p \gg n$ |
| Ridge penalty | Magnitude of $\lambda$ relative to the spectral scale of $X^\top X$, where $\sigma_{\min}$ and $\sigma_{\max}$ denote the smallest and largest singular values of $X$ | Weak ($\lambda \approx \sigma_{\min}^2$), Moderate ($\lambda \approx \sigma_{\min}\sigma_{\max}$), Strong ($\lambda \approx \sigma_{\max}^2$) |
| Matrix sparsity | Density of non-zero values in $X$ | Sparse (< 10% non-zero), Moderate (10%-50% non-zero), Dense (> 50% non-zero) |
# Treatments

The treatments are the ridge regression solution methods:

- Gradient-based optimization
- Stochastic gradient descent
- Direct methods
  - Golub-Kahan bidiagonalization

Since each experimental unit receives all $t$ treatments, the total number of algorithm runs in the experiment is $t \cdot b \cdot r$. For this experiment, $t = 3$. To ensure a fair comparison between algorithms, each treatment will be applied under a fixed time constraint: each algorithm will be run for a maximum of two hours per experimental unit.
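As an illustration of the gradient-based treatment, here is a minimal fixed-step gradient descent for the ridge objective; the step-size rule, tolerance, and iteration cap are placeholder choices, not the experiment's actual settings:

```julia
using LinearAlgebra

# Minimize ‖y − Xβ‖₂² + λ‖β‖₂². The gradient is 2(Xᵀ(Xβ − y) + λβ) and is
# Lipschitz with constant L = 2(σ_max(X)² + λ), so η = 1/L is a safe step size.
function ridge_gd(X, y, λ; tol = 1e-8, maxiter = 100_000)
    β = zeros(size(X, 2))
    η = 1 / (2 * (opnorm(X)^2 + λ))   # opnorm(X) is the largest singular value of X
    iters = 0
    for k in 1:maxiter
        g = 2 .* (X' * (X * β - y) .+ λ .* β)
        norm(g) ≤ tol && break        # stop once the gradient is small
        β -= η .* g
        iters = k
    end
    return β, iters                   # iters maps to the `iterations` column below
end
```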
# Observational Units and Measurements

The observational units are the algorithm-dataset pairs. For each combination we will observe the following:
| Column Name | Data Type | Description |
|:---|:---|:---|
| `dataset_id` | Positive integer | Identifier for the generated dataset (experimental unit). |
| `dimensional_regime` | String | Relationship between predictors and observations: `p << n`, `p ≈ n`, or `p >> n`. |
| `sparsity_level` | String | Density of the matrix `X`: `Sparse`, `Moderate`, or `Dense`. |
| `lambda_level` | String | Relative magnitude of the ridge penalty parameter `λ`: `Weak`, `Moderate`, or `Strong`. |
| `algorithm` | String | Ridge regression solution method used: `GradientDescent`, `SGD`, or `DirectMethod`. |
| `runtime_seconds` | Positive floating-point | Time required for the algorithm to compute a solution. |
| `iterations` | Positive integer | Number of iterations performed by the algorithm (`NA` for direct methods). |
The collected measurements will be written to a CSV file. Each row in the file corresponds to a single algorithm-dataset pair, which forms the observational unit of the experiment, and the columns represent the recorded measurements. After the experiment, the resulting CSV file should contain $(\text{number of algorithms}) \times (\text{number of datasets})$ rows, each with exactly seven columns.
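A sketch of how result rows might be recorded with the CSV and DataFrames dependencies declared above; the row values are invented purely for illustration:

```julia
using CSV, DataFrames

results = DataFrame(
    dataset_id         = Int[],
    dimensional_regime = String[],
    sparsity_level     = String[],
    lambda_level       = String[],
    algorithm          = String[],
    runtime_seconds    = Float64[],
    iterations         = Union{Int,Missing}[],   # missing is written as "NA" below
)

push!(results, (1, "p >> n", "Sparse", "Weak", "GradientDescent", 12.7, 4213))
push!(results, (1, "p >> n", "Sparse", "Weak", "DirectMethod", 3.2, missing))

CSV.write("results.csv", results; missingstring = "NA")
```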
**`test/Project.toml`**

```diff
@@ -1,2 +1,5 @@
 [deps]
+CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
+DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
+LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
```
> **Owner:** You need to obey the 92 character line limit for this file.