Codecov Report
Attention: Patch coverage is

Additional details and impacted files

```
@@           Coverage Diff            @@
##             main     #123      +/- ##
========================================
+ Coverage   97.40%   97.56%   +0.15%
========================================
  Files           6        7       +1
  Lines         888      986      +98
========================================
+ Hits          865      962      +97
- Misses         23       24       +1
```

☔ View full report in Codecov by Sentry.
```julia
    return nlp.scaling_obj * NLPModels.obj(nlp.nlp, x)
end

function NLPModels.cons!(nlp::ScaledModel, x::AbstractVector, c::AbstractVector)
```
This is more of a general comment on future work. We recently split the constraint API into nonlinear and linear parts. Would it make sense, in future work, to have two different scalings for linear and nonlinear constraints?
Actually, it would be better to have cons_lin! and cons_nln! instead of cons! anyway.
I would prefer to keep the interface as is. As far as I understand, cons! calls cons_lin! and cons_nln! internally by default, and here the scaling does not depend on the nature of the constraint.
Okay, but calling a solver on a ScaledModel would then hit an unimplemented cons_nln!.
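To illustrate the concern, one way to close the gap would be to forward the split API as well. This is only a sketch, not the PR's code: it assumes the wrapper stores per-constraint scaling factors in a field named `scaling_cons` and wraps an inner model in `nlp.nlp` (both names hypothetical here).

```julia
# Hedged sketch: forwarding the split constraint API through a ScaledModel.
# Assumptions (not from the PR): the wrapper has fields `nlp` (inner model)
# and `scaling_cons` (scaling factor per constraint, indexed over all
# constraints). `meta.nln` holds the indices of the nonlinear constraints.
function NLPModels.cons_nln!(nlp::ScaledModel, x::AbstractVector, c::AbstractVector)
    NLPModels.cons_nln!(nlp.nlp, x, c)  # evaluate the unscaled nonlinear constraints
    c .*= view(nlp.scaling_cons, nlp.nlp.meta.nln)  # apply their scaling factors
    return c
end
```

An analogous cons_lin! method over `meta.lin` would complete the pair, so a solver that calls the split API directly never falls back on an unimplemented method.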
```julia
    the gradient and the Jacobian evaluated at the initial point ``x0``.
"""
struct ScaledModel{T, S, M} <: NLPModels.AbstractNLPModel{T, S}
```
And the same comment applies throughout the file.
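The docstring above says the scaling factors are built from the gradient and the Jacobian at the initial point `x0`. A common recipe for this is Ipopt-style gradient-based scaling; the sketch below shows that recipe only, under stated assumptions, and is not the PR's implementation (the function name `compute_scaling` and the threshold `g_max` are made up here).

```julia
# Hedged sketch of gradient-based scaling at x0, in the style of Ipopt's
# `nlp_scaling_method = gradient-based`. Each factor clamps the largest
# first-derivative entry to at most `g_max`, and never scales up (factor <= 1).
using NLPModels, LinearAlgebra

function compute_scaling(nlp::NLPModels.AbstractNLPModel, x0; g_max = 100.0)
    # Objective factor from the gradient's largest entry at x0.
    g = NLPModels.grad(nlp, x0)
    scaling_obj = min(1.0, g_max / max(norm(g, Inf), eps()))

    # One factor per constraint, from the corresponding Jacobian row.
    J = NLPModels.jac(nlp, x0)
    scaling_cons = [min(1.0, g_max / max(norm(view(J, i, :), Inf), eps()))
                    for i in 1:size(J, 1)]
    return scaling_obj, scaling_cons
end
```

With that convention, linear and nonlinear constraints get their factors from the same Jacobian rows, which is consistent with the earlier point that the scaling does not depend on the nature of the constraint.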
Following a suggestion by @dpo