@@ -160,7 +160,7 @@ def _objective_function(point, func, x, dt, singleton_params, categorical_params

 def optimize(func, x, dt, dxdt_truth=None, tvgamma=1e-2, search_space_updates={}, metric='rmse',
              padding=0, opt_method='Nelder-Mead', maxiter=10, parallel=True):
-    """Find the optimal parameters for a given differentiation method.
+    """Find the optimal hyperparameters for a given differentiation method.

     :param function func: differentiation method to optimize parameters for, e.g. linear_model.savgoldiff
     :param np.array[float] x: data to differentiate
@@ -170,21 +170,21 @@ def optimize(func, x, dt, dxdt_truth=None, tvgamma=1e-2, search_space_updates={}
         that yield a smooth derivative. Larger value results in a smoother derivative.
     :param dict search_space_updates: At the top of :code:`_optimize.py`, each method has a search space of
         parameter settings structured as :code:`{param1:[values], param2:[values], param3:value, ...}`. The Cartesian
-        product of values are used as initial starting points in optimization. If left None, the default search
-        space is used.
+        product of values is used as the set of initial starting points in optimization. If left None, the
+        default search space is used.
     :param str metric: either :code:`'rmse'` or :code:`'error_correlation'`, only applies if :code:`dxdt_truth`
         is not None, see _objective_function
     :param int padding: number of time steps to ignore at the beginning and end of the time series in the
         optimization, or :code:`'auto'` to ignore 2.5% at each end. Larger value causes the
         optimization to emphasize the accuracy of dxdt in the middle of the time series
     :param str opt_method: Optimization technique used by :code:`scipy.minimize`, the workhorse
     :param int maxiter: passed down to :code:`scipy.minimize`, maximum iterations
-    :param bool parallel: whether to use multiple processes to optimize, typically faster for single optimizations, but
-        for experiments it is a better use of resources to pool at that higher level
+    :param bool parallel: whether to use multiple processes to optimize, typically faster for single optimizations.
+        For experiments, it is usually a better use of resources to parallelize at that level, meaning each
+        optimization must run in its own process, since spawned processes are not allowed to spawn further children.

-    :return: tuple[dict, float] of\n
-        - **opt_params** -- best parameter settings for the differentation method
-        - **opt_value** -- lowest value found for objective function
+    :return: - **opt_params** (dict) -- best parameter settings for the differentiation method
+        - **opt_value** (float) -- lowest value found for the objective function
     """
     if metric not in ['rmse', 'error_correlation']:
         raise ValueError('`metric` should either be `rmse` or `error_correlation`.')
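The search this docstring describes can be sketched with `scipy` alone. Below is a minimal, hypothetical stand-in for a differentiation method (a Savitzky-Golay smoother followed by finite differences, not pynumdiff's actual `savgoldiff`), tuned by Nelder-Mead against a known derivative using the `'rmse'` metric. The function names, the single-parameter search, and the single starting point are all illustrative assumptions; the real optimizer starts from the Cartesian product of the search-space values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import savgol_filter

# Hypothetical stand-in for a differentiation method: smooth, then
# take finite differences of the smoothed signal.
def savgoldiff(x, dt, window_length):
    window_length = int(round(window_length))
    window_length = max(5, window_length | 1)  # force odd, at least 5
    x_smooth = savgol_filter(x, window_length, polyorder=3)
    return np.gradient(x_smooth, dt)

# The 'rmse' metric: only usable because dxdt_truth is known here.
def objective(point, x, dt, dxdt_truth):
    dxdt_hat = savgoldiff(x, dt, point[0])
    return np.sqrt(np.mean((dxdt_hat - dxdt_truth) ** 2))

# Noisy sine data with an analytically known derivative.
dt = 0.01
t = np.arange(0, 4, dt)
rng = np.random.default_rng(0)
x = np.sin(t) + 0.05 * rng.standard_normal(t.size)
dxdt_truth = np.cos(t)

# Nelder-Mead from one starting point, capped at a few iterations,
# mirroring the opt_method / maxiter arguments above.
res = minimize(objective, x0=[11], args=(x, dt, dxdt_truth),
               method='Nelder-Mead', options={'maxiter': 10})
opt_params, opt_value = res.x, res.fun
```

Because `minimize` returns the best point it evaluated, `opt_value` is never worse than the objective at the starting point; with `dxdt_truth=None`, the real code swaps this RMSE for the `tvgamma`-regularized smoothness objective instead.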