
Commit 2fc24a2

still messing with docstring, sorry
1 parent 55d05cd

1 file changed: pynumdiff/kalman_smooth/_kalman_smooth.py

Lines changed: 9 additions & 8 deletions
@@ -260,7 +260,7 @@ def constant_jerk(x, dt, params=None, options=None, r=None, q=None, forwardbackw
 
 def robustdiff(x, dt, order, log_q, log_r, proc_huberM=6, meas_huberM=0):
     """Perform outlier-robust differentiation by solving the Maximum A Posteriori optimization problem:
-    :math:`\\min_{\\{x_n\\}} \\sum_{n=0}^{N-1} V(R^{-1/2}(y_n - C x_n)) + \\sum_{n=1}^{N-1} J(Q^{-1/2}(x_n - A x_{n-1}))`,
+    :math:`\\argmin_{\\{x_n\\}} \\sum_{n=0}^{N-1} V(R^{-1/2}(y_n - C x_n)) + \\sum_{n=1}^{N-1} J(Q^{-1/2}(x_n - A x_{n-1}))`,
     where :math:`A,Q,C,R` come from an assumed constant-derivative model and :math:`V,J` are the :math:`\\ell_1` norm or Huber
     loss rather than the :math:`\\ell_2` norm optimized by RTS smoothing. This problem is convex, so this method calls
     :code:`convex_smooth`.
@@ -270,11 +270,12 @@ def robustdiff(x, dt, order, log_q, log_r, proc_huberM=6, meas_huberM=0):
     deviation. In other words, this choice affects which portion of inputs are treated as outliers. For example, assuming
     Gaussian inliers, the portion beyond :math:`M\\sigma` is :code:`outlier_portion = 2*(1 - scipy.stats.norm.cdf(M))`. The
     inverse of this is :code:`M = scipy.stats.norm.ppf(1 - outlier_portion/2)`. As :math:`M \\to \\infty`, Huber becomes the
-    1/2-sum-of-squares case, :math:`\\frac{1}{2}\\|\\cdot\\|_2^2`, and the normalization constant of the Huber loss (See
-    :math:`c_2` `in section 6 <https://jmlr.org/papers/volume14/aravkin13a/aravkin13a.pdf>`_, missing a :math:`\\sqrt{\\cdot}`
-    term there, see p2700) approaches 1 as :math:`M` increases. Similarly, as :code:`M` approaches 0, Huber reduces to the
-    :math:`\\ell_1` norm case, because the normalization constant approaches :math:`\\frac{\\sqrt{2}}{M}`, cancelling the
-    :math:`M` multiplying :math:`|\\cdot|` and leaving behind :math:`\\sqrt{2}`, the proper :math:`\\ell_1` normalization.
+    1/2-sum-of-squares case, :math:`\\frac{1}{2}\\|\\cdot\\|_2^2`, because the normalization constant of the Huber loss (see
+    :math:`c_2` in `section 6 of this paper <https://jmlr.org/papers/volume14/aravkin13a/aravkin13a.pdf>`_, missing a
+    :math:`\\sqrt{\\cdot}` term there, see p. 2700) approaches 1 as :math:`M` increases. Similarly, as :code:`M` approaches 0,
+    Huber reduces to the :math:`\\ell_1` norm case, because the normalization constant approaches :math:`\\frac{\\sqrt{2}}{M}`,
+    cancelling the :math:`M` multiplying :math:`|\\cdot|` in the Huber function and leaving behind :math:`\\sqrt{2}`, the
+    proper :math:`\\ell_1` normalization.
 
     Note that :code:`log_q` and :code:`proc_huberM` are coupled, as are :code:`log_r` and :code:`meas_huberM`, via the relation
     :math:`\\text{Huber}(q^{-1/2}v, M) = q^{-1}\\text{Huber}(v, Mq^{1/2})`, but these are still independent enough that for
@@ -283,8 +284,8 @@ def robustdiff(x, dt, order, log_q, log_r, proc_huberM=6, meas_huberM=0):
     :param np.array[float] x: data series to differentiate
     :param float dt: step size
     :param int order: which derivative to stabilize in the constant-derivative model (1=velocity, 2=acceleration, 3=jerk)
-    :param float log_q: base 10 logarithm of the process noise variance, so :code:`q = 10**log_q`
-    :param float log_r: base 10 logarithm of the measurement noise variance, so :code:`r = 10**log_r`
+    :param float log_q: base 10 logarithm of process noise variance, so :code:`q = 10**log_q`
+    :param float log_r: base 10 logarithm of measurement noise variance, so :code:`r = 10**log_r`
     :param float proc_huberM: quadratic-to-linear transition point for process loss
     :param float meas_huberM: quadratic-to-linear transition point for measurement loss

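The two limits described in the new docstring text are easy to check numerically. The sketch below assumes the standard Huber definition (quadratic inside [-M, M], linear with slope M outside); the actual convention lives in :code:`convex_smooth`, which this commit does not show.

import numpy as np

def huber(v, M):
    # Assumed standard Huber loss: v**2/2 inside [-M, M], M*|v| - M**2/2 outside.
    a = np.abs(v)
    return np.where(a <= M, 0.5 * a**2, M * a - 0.5 * M**2)

v = np.linspace(-3, 3, 7)

# M -> infinity: every residual lands in the quadratic region, so Huber -> (1/2)||.||_2^2.
print(np.allclose(huber(v, M=100.0), 0.5 * v**2))        # True

# M -> 0: Huber(v, M) = M*|v| - M**2/2, so dividing by M recovers the l1 shape.
print(np.allclose(huber(v, M=1e-8) / 1e-8, np.abs(v)))   # True up to M/2
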
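The M-to-outlier-portion conversion quoted in the docstring uses only scipy.stats.norm, so it can be sanity-checked directly:

from scipy.stats import norm

# Portion of Gaussian inliers falling beyond M standard deviations, per the docstring.
M = 6
print(2 * (1 - norm.cdf(M)))     # ~2e-9: with the default proc_huberM=6, almost nothing is an outlier

# The inverse map: choose the portion to treat as outliers, recover the threshold M.
print(norm.ppf(1 - 0.05 / 2))    # ~1.96, the familiar two-sided 95% z-score
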
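The coupling relation between :code:`log_q` and :code:`proc_huberM` (and likewise :code:`log_r` and :code:`meas_huberM`) can also be verified numerically under the same assumed Huber definition:

import numpy as np

def huber(v, M):
    # Same assumed Huber definition as in the previous sketch.
    a = np.abs(v)
    return np.where(a <= M, 0.5 * a**2, M * a - 0.5 * M**2)

rng = np.random.default_rng(0)
v = rng.normal(size=1000)
q, M = 4.0, 1.5

# Rescaling the residual by q**-0.5 rescales the loss by q**-1 and the
# transition point by q**0.5: Huber(q**-0.5 * v, M) == q**-1 * Huber(v, M * q**0.5).
lhs = huber(v / np.sqrt(q), M)
rhs = huber(v, M * np.sqrt(q)) / q
print(np.allclose(lhs, rhs))  # True
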
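Finally, a minimal usage sketch for the signature documented here. The import path and the (x_hat, dxdt_hat) return convention are assumptions based on the rest of pynumdiff; this commit only touches the docstring.

import numpy as np
from pynumdiff.kalman_smooth import robustdiff  # assumed public re-export of _kalman_smooth.robustdiff

dt = 0.01
t = np.arange(0, 4, dt)
x = np.sin(t) + 0.01 * np.random.randn(len(t))
x[::50] += 2.0  # inject large outliers for the robust loss to reject

# order=2 selects the constant-acceleration model; q = 10**-4, r = 10**-2.
# The default proc_huberM=6 keeps the process loss essentially quadratic, while
# meas_huberM=1 treats roughly 32% of measurement residuals as outliers
# (2 * (1 - norm.cdf(1)) ~= 0.317).
x_hat, dxdt_hat = robustdiff(x, dt, order=2, log_q=-4, log_r=-2, meas_huberM=1)  # return convention assumed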