add ScaledModel #123

Open

frapac wants to merge 2 commits into JuliaSmoothOptimizers:main from frapac:fp/scaler

Conversation

@frapac frapac commented Jul 25, 2024

Following a suggestion by @dpo

codecov bot commented Jul 25, 2024

Codecov Report

Attention: Patch coverage is 99.01961% with 1 line in your changes missing coverage. Please review.

Project coverage is 97.56%. Comparing base (40f0c0f) to head (051c3d2).
Report is 12 commits behind head on main.

Files Patch % Lines
src/scaled-model.jl 99.01% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #123      +/-   ##
==========================================
+ Coverage   97.40%   97.56%   +0.15%     
==========================================
  Files           6        7       +1     
  Lines         888      986      +98     
==========================================
+ Hits          865      962      +97     
- Misses         23       24       +1     

☔ View full report in Codecov by Sentry.

@github-actions

Package name | latest | stable
ADNLPModels.jl
AmplNLReader.jl
CUTEst.jl
CaNNOLeS.jl
DCI.jl
FletcherPenaltySolver.jl
JSOSolvers.jl
LLSModels.jl
NLPModelsIpopt.jl
NLPModelsJuMP.jl
NLPModelsTest.jl
Percival.jl
QuadraticModels.jl
SolverBenchmark.jl
SolverTools.jl

@tmigot tmigot left a comment

Thanks @frapac for the PR! Here is a first pass of comments. Sorry for asking for so many clarifications.
By the way, would you have a more general use case that would serve as a basis for a tutorial?

```julia
return nlp.scaling_obj * NLPModels.obj(nlp.nlp, x)
end

function NLPModels.cons!(nlp::ScaledModel, x::AbstractVector, c::AbstractVector)
```
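For context, here is a package-free sketch of the scaling pattern this excerpt implements. `TinyScaled`, its fields, and the helper names are illustrative stand-ins, not the PR's actual `ScaledModel` API:

```julia
# Package-free sketch of objective/constraint scaling; names are illustrative.
struct TinyScaled{F, G}
    obj::F                        # inner objective: x -> f(x)
    cons!::G                      # inner in-place constraints: (x, c) -> c
    scaling_obj::Float64          # scalar factor for the objective
    scaling_cons::Vector{Float64} # one factor per constraint
end

# Scale the objective value, mirroring `scaling_obj * obj(nlp, x)` above.
scaled_obj(m::TinyScaled, x) = m.scaling_obj * m.obj(x)

# Evaluate the inner constraints in place, then scale each row.
function scaled_cons!(m::TinyScaled, x, c)
    m.cons!(x, c)
    c .*= m.scaling_cons
    return c
end
```

For example, with `m = TinyScaled(x -> sum(abs2, x), (x, c) -> (c .= x), 2.0, [3.0, 4.0])`, `scaled_obj(m, [1.0, 2.0])` returns `10.0` and `scaled_cons!(m, [1.0, 2.0], zeros(2))` returns `[3.0, 8.0]`.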
Member
This is more of a general comment on future work. We recently split the constraint API into nonlinear and linear. Would it make sense, in future work, to have two different scalings for linear and nonlinear constraints?

Member
Actually, either way it would be better to implement cons_lin! and cons_nln! instead of cons!.

Author
I would prefer to keep the interface as is. As far as I understand, cons! calls cons_lin! and cons_nln! internally by default, and here the scaling does not depend on the nature of the constraint.

Member
Okay, but calling a solver on a ScaledModel would raise a `cons_nln!` not-implemented error.
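The concern can be sketched without NLPModels: when a generic entry point delegates to a lower-level method, overriding only the entry point for a wrapper type leaves the lower-level method missing for that type. All names below (`mycons!`, `mycons_nln!`, `Inner`, `Wrapper`) are hypothetical analogues, not the library's API:

```julia
# Package-free analogue of the dispatch concern discussed above.
abstract type AbstractModel end

# Generic entry point delegates to the lower-level method.
mycons!(m::AbstractModel, x, c) = mycons_nln!(m, x, c)

struct Inner <: AbstractModel end
mycons_nln!(::Inner, x, c) = (c .= x; c)

struct Wrapper <: AbstractModel
    inner::Inner
    s::Float64
end

# Overriding only the high-level entry point...
mycons!(m::Wrapper, x, c) = (mycons!(m.inner, x, c); c .*= m.s; c)

# ...means a solver that calls `mycons_nln!(m::Wrapper, ...)` directly hits a
# MethodError, so the wrapper must forward the low-level method as well:
mycons_nln!(m::Wrapper, x, c) = (mycons_nln!(m.inner, x, c); c .*= m.s; c)
```

Without that last method definition, only `mycons!` works on a `Wrapper`; any caller dispatching straight to `mycons_nln!` fails.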

```julia
the gradient and the Jacobian evaluated at the initial point ``x0``.

"""
struct ScaledModel{T, S, M} <: NLPModels.AbstractNLPModel{T, S}
```
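The docstring says the scaling is computed from the gradient and Jacobian at ``x0``. One common heuristic of that kind is Ipopt-style gradient-based scaling; the sketch below illustrates it under the assumption that the factors are capped by a threshold `gmax`, which may not match the PR's exact rule:

```julia
using LinearAlgebra

# Hedged sketch: gradient-based scaling computed at x0 (an Ipopt-style
# heuristic, not necessarily the exact rule used by ScaledModel).
# `g0` is the objective gradient and `J0` the constraint Jacobian at x0.
function scaling_factors(g0::AbstractVector, J0::AbstractMatrix; gmax = 100.0)
    # Damp the objective if its gradient at x0 exceeds gmax.
    s_obj = min(1.0, gmax / max(norm(g0, Inf), eps()))
    # One factor per constraint, from the infinity norm of each Jacobian row.
    s_cons = [min(1.0, gmax / max(norm(view(J0, i, :), Inf), eps()))
              for i in 1:size(J0, 1)]
    return s_obj, s_cons
end
```

For instance, `scaling_factors([200.0, 0.0], [1.0 0.0; 400.0 0.0])` yields `(0.5, [1.0, 0.25])`: well-scaled rows keep a factor of 1, while large gradients are damped.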
Member
And the same comment applies throughout the file.
