API Core#

Core summaries#

class satlas2.core.Fitter[source]

Main class for performing fits and organising data

Methods

addSource(source)

Add a datasource to the Fitter structure

createMetadataDataframe()

Generates a dataframe containing the fitting information and statistics.

createResultDataframe()

Generates a dataframe containing all information about the parameters after a fit.

evaluateOverWalk(filename[, burnin, x, evals])

The parameters saved in the h5 file are evaluated in the models a specific number of times.

fit([llh, llh_method, method, mcmc_kwargs, ...])

Perform a fit of the models (added to the sources) to the data in the sources.

readWalk(filename[, burnin])

Read and process the h5 file containing the results of a random walk.

removeExpr(parameter_name)

Remove the expression for the given parameters.

removeParamPrior(source, model, parameter_name)

Removes a prior set on a parameter.

removeShareModelParams(parameter_name)

Remove parameters shared across all models with the same name.

removeShareParams(parameter_name)

Remove a shared parameter.

reportFit([modelpars, show_correl, ...])

Generate a report of the fitting results.

revertFit()

Reverts the parameter values to the original values.

setExpr(parameter_name, parameter_expression)

Set the expression to be used for the given parameters.

setParamPrior(source, model, parameter_name, ...)

Set a Gaussian prior on a parameter, mainly intended to represent literature values.

shareModelParams(parameter_name)

Add parameters to the list of shared parameters across all models with the same name.

shareParams(parameter_name)

Add parameters to the list of shared parameters.

class satlas2.core.Source(x: ArrayLike, y: ArrayLike, yerr: ArrayLike | callable, name: str, xerr: ArrayLike | None = None, **kwargs)[source]

Initializes a source of data

Parameters:
  • x (ArrayLike) – x values of the data

  • y (ArrayLike) – y values of the data

  • yerr (Union[ArrayLike, callable]) – The yerr of the data, either an array for fixed uncertainties or a callable to be applied to the result of the models in the source.

  • name (str) – The name given to the source. This must be a unique value!

  • xerr (ArrayLike, optional) – x uncertainties of the data; supply these if the yerr should be enlarged with the contribution of the xerr, by default None.

Methods

addModel(model)

Add a model to the Source

evaluate(x)

Evaluates all models in the given points and returns the sum.

f()

Returns the sum of the evaluation of all models in the x-coordinates defined in the source.

class satlas2.core.Model(name: str, prefunc: callable | None = None)[source]

Base Model class

Parameters:
  • name (str) – Name given to the model

  • prefunc (callable, optional) – Transformation function to be applied to the evaluation points before evaluating the model, by default None

Methods

f(x)

Evaluates the model in the given points.

setTransform(func)

Set the transformation for the pre-evaluation.
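
Before the detailed reference, a minimal end-to-end sketch of how these classes combine. The Polynomial model used as a stand-in is assumed to be one of the concrete Model subclasses shipped with satlas2 (it is not part of this core module); any other Model subclass can take its place.

    import numpy as np
    import satlas2
    from satlas2.core import Fitter, Source

    # Toy data: a flat background of roughly 5 counts with Poisson noise.
    rng = np.random.default_rng(0)
    x = np.linspace(-10, 10, 101)
    y = rng.poisson(5, size=x.shape)

    # A Source bundles the data with its uncertainties.
    datasource = Source(x, y, yerr=np.sqrt(np.maximum(y, 1)), name='scan1')

    # Assumed concrete model: a constant background (0th-order polynomial).
    datasource.addModel(satlas2.Polynomial([5.0], name='bkg'))

    # The Fitter collects sources and performs the fit.
    fitter = Fitter()
    fitter.addSource(datasource)
    fitter.fit()
    print(fitter.reportFit())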

Extensive Core#

Implementation of the base Fitter, Source, Model and Parameter classes

class satlas2.core.Fitter[source]#

Main class for performing fits and organising data

addSource(source: Source) None[source]#

Add a datasource to the Fitter structure

Parameters:

source (Source) – Source to be added to the fitter

createMetadataDataframe() DataFrame[source]#

Generates a dataframe containing the fitting information and statistics.

Return type:

pd.DataFrame

createResultDataframe() DataFrame[source]#

Generates a dataframe containing all information about the parameters after a fit.

Return type:

pd.DataFrame
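
A short sketch of exporting both dataframes after a fit (fitter is a Fitter on which fit() has already been called; the file names are placeholders):

    results = fitter.createResultDataframe()     # parameter information after the fit
    metadata = fitter.createMetadataDataframe()  # fitting information and statistics

    results.to_csv('fit_parameters.csv', index=False)
    metadata.to_csv('fit_statistics.csv', index=False)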

customLlh()[source]#

Calculate a custom likelihood.

evaluateOverWalk(filename: str, burnin: int = 0, x: ArrayLike | None = None, evals: int = 0) Tuple[list, list][source]#

The parameters saved in the h5 file are evaluated in the models a specified number of times. From these evaluations, the 16th, 50th and 84th percentiles (corresponding to the median and the 1-sigma band) are calculated.

Parameters:
  • filename (str) – Filename of the random walk results.

  • burnin (int, optional) – Number of steps to skip, by default 0

  • x (ArrayLike, optional) – Evaluation points for the model, defaults to the datapoints in Source

  • evals (int, optional) – Number of selected parameter values, defaults to using all values

Returns:

A tuple with, as the first element, a list of arrays of x-values for which the band has been evaluated; each source contributes one array. The second element is a list of 2D arrays, one for each source: the first row is the 1-sigma lower boundary (16th percentile), the second row the median and the third row the 1-sigma upper boundary (84th percentile).

Return type:

Tuple of lists
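
A sketch of turning a saved walk into a 1-sigma prediction band and plotting it (the file name, burn-in and number of evaluations are placeholders):

    import matplotlib.pyplot as plt

    xvals, bands = fitter.evaluateOverWalk('walk.h5', burnin=200, evals=500)
    for x_eval, band in zip(xvals, bands):
        # Rows of band: 16th percentile, median, 84th percentile.
        plt.fill_between(x_eval, band[0], band[2], alpha=0.3)
        plt.plot(x_eval, band[1])
    plt.show()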

f() ArrayLike[source]#

Calculate the response of the models in the different sources, stacked horizontally.

Returns:

Horizontally concatenated response from each source.

Return type:

ArrayLike

fit(llh: bool = False, llh_method: str = 'gaussian', method: str = 'leastsq', mcmc_kwargs: dict = {}, sampler_kwargs: dict = {}, filename: str | None = None, overwrite: bool = True, nwalkers: int = 50, steps: int = 1000, convergence: bool = False, convergence_iter: int = 50, convergence_tau: float = 0.05, scale_covar: bool = True, iter_cb: callable | None = None) None[source]#

Perform a fit of the models (added to the sources) to the data in the sources. Models in the same source are summed together, models in different sources can be linked through their parameters.

Parameters:
  • llh (bool, optional) – Selects whether a chisquare (False) or likelihood (True) fit is performed, by default False.

  • llh_method (str, optional) – Selects which likelihood calculation is used, by default ‘gaussian’.

  • method (str, optional) – Selects the method used by the lmfit.minimizer(), by default ‘leastsq’. Set to ‘emcee’ for random walk.

  • mcmc_kwargs (dict, optional) – Dictionary of keyword arguments to be supplied to the MCMC routine (see emcee.EnsembleSampler.sample()), by default {}

  • sampler_kwargs (dict, optional) – Dictionary of keyword arguments to be supplied to the emcee.EnsembleSampler(), by default {}

  • filename (str, optional) – Filename in which the random walk should be saved, by default None

  • overwrite (bool, optional) – If True, the generated file is overwritten. If False, the number of walkers and the last position are taken from the saved file. By default True.

  • nwalkers (int, optional) – Number of walkers to be used in the random walk, by default 50

  • steps (int, optional) – Number of steps the random walk should take, by default 1000

  • convergence (bool, optional) – Controls automatic stopping of the random walk based on the autocorrelation criterion, by default False.

  • convergence_iter (int, optional) – Factor by which the number of steps taken should be greater than the autocorrelation time, by default 50.

  • convergence_tau (float, optional) – Relative value within which subsequent autocorrelation estimates should lie for convergence, by default 0.05.

  • scale_covar (bool, optional) – Scale the calculated uncertainties by the root of the reduced chisquare, by default True. Set to False when llh is True, since the reduced chisquare calculated in this case is not applicable.
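
Two common usage sketches: a default chi-square fit, and a likelihood fit followed by an MCMC exploration written to an h5 file (file name and step count are placeholders):

    # Default: chi-square fit with the least-squares minimiser.
    fitter.fit()

    # Likelihood fit; scale_covar is switched off as recommended above.
    fitter.fit(llh=True, scale_covar=False)

    # Random walk with emcee, saved to disk for later processing with
    # readWalk() or evaluateOverWalk().
    fitter.fit(llh=True, scale_covar=False, method='emcee',
               filename='walk.h5', nwalkers=50, steps=2000)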

getSourceAttr(attr: str) ArrayLike[source]#

Stack the given attribute in the different sources, horizontally.

Parameters:

attr (str) – Attribute of the sources to be retrieved.

Returns:

Horizontally concatenated attribute from each source.

Return type:

ArrayLike

readWalk(filename: str, burnin: int | None = 0)[source]#

Read and process the h5 file containing the results of a random walk. The parameter values and uncertainties are extracted from the walk.

Parameters:
  • filename (str) – Filename of the random walk results.

  • burnin (Optional[int]) – Number of steps to remove from the start of the walk, by default 0.
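
A short sketch of loading the walk produced above back into the Fitter (file name and burn-in are placeholders):

    fitter.readWalk('walk.h5', burnin=200)
    print(fitter.reportFit())  # parameter values and uncertainties now come from the walk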

removeAllPriors()[source]#

Removes all priors on parameters.

removeExpr(parameter_name: list | str) None[source]#

Remove the expression for the given parameters.

Parameters:

parameter_name (list or str) – Either a single parameter name or a list of them.

removeParamPrior(source: str, model: str, parameter_name: str) None[source]#

Removes a prior set on a parameter.

Parameters:
  • source (str) – Name of the datasource in which the parameter is present.

  • model (str) – Name of the model in which the parameter is present.

  • parameter_name (str) – Name of the parameter.

removeShareModelParams(parameter_name: list | str) None[source]#

Remove parameters shared across all models with the same name.

Parameters:

parameter_name (Union[list, str]) – List of parameters or single parameter name.

removeShareParams(parameter_name: list | str) None[source]#

Remove a shared parameter.

Note

The full parameter name should be given.

Parameters:

parameter_name (Union[list, str]) – List of parameters or single parameter name.

reportFit(modelpars: Parameters | None = None, show_correl: bool = False, min_correl: float = 0.1, sort_pars: bool | callable = False) str[source]#

Generate a report of the fitting results.

The report contains the best-fit values for the parameters and their uncertainties and correlations.

Parameters:
  • modelpars (lmfit.Parameters, optional) – Known Model Parameters

  • show_correl (bool, optional) – Whether to show a list of sorted correlations, by default False

  • min_correl (float, optional) – Smallest correlation in absolute value to show, by default 0.1

  • sort_pars (bool or callable, optional) – Whether to show parameter names sorted in alphanumerical order. If False (default), then the parameters will be listed in the order they were added to the Parameters dictionary. If callable, then this (one argument) function is used to extract a comparison key from each list element.

Returns:

Multi-line text of fit report.

Return type:

str
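
For example, to print a report that also lists the strongest correlations:

    print(fitter.reportFit(show_correl=True, min_correl=0.5))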

revertFit()[source]#

Reverts the parameter values to the original values.

setExpr(parameter_name: list | str, parameter_expression: list | str) None[source]#

Set the expression to be used for the given parameters. The given parameter names should be the full description i.e. containing the source and model name.

Note

The priority order on expressions is
  1. Expressions given by setExpr()

  2. Sharing of parameters through shareParams()

  3. Sharing of parameters through shareModelParams()

Parameters:
  • parameter_name (list or str) – Either a single parameter name or a list of them.

  • parameter_expression (list or str) – The parameter expression to be associated with parameter_name.
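
A sketch of tying one parameter to another through an expression. The triple-underscore full names (source___model___parameter) are an assumption used for illustration; inspect createResultDataframe() or reportFit() to see the actual full names of your parameters.

    # Hypothetical names: force the FWHM of 'peak2' to twice that of 'peak1'.
    fitter.setExpr('scan1___peak2___fwhm', '2*scan1___peak1___fwhm')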

setParamPrior(source: str, model: str, parameter_name: str, value: float, uncertainty: float) None[source]#

Set a Gaussian prior on a parameter, mainly intended to represent literature values.

Parameters:
  • source (str) – Name of the datasource in which the parameter is present.

  • model (str) – Name of the model in which the parameter is present.

  • parameter_name (str) – Name of the parameter.

  • value (float) – Central value of the Gaussian

  • uncertainty (float) – Standard deviation associated with the value.
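
A sketch of constraining a parameter with a literature value (all names and numbers are placeholders):

    # Gaussian prior of 123.4 +/- 0.5 on parameter 'A' of model 'hfs' in source 'scan1'.
    fitter.setParamPrior('scan1', 'hfs', 'A', 123.4, 0.5)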

shareModelParams(parameter_name: list | str) None[source]#

Add parameters to the list of shared parameters across all models with the same name.

Note

The priority order on expressions is
  1. Expressions given by setExpr()

  2. Sharing of parameters through shareParams()

  3. Sharing of parameters through shareModelParams()

Parameters:

parameter_name (list or str) – List of parameters or single parameter name.

shareParams(parameter_name: list | str) None[source]#

Add parameters to the list of shared parameters.

Note

The full parameter name should be given.

Note

The priority order on expressions is
  1. Expressions given by setExpr()

  2. Sharing of parameters through shareParams()

  3. Sharing of parameters through shareModelParams()

Parameters:

parameter_name (list or str) – List of parameters or single parameter name.
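
A sketch contrasting the two sharing mechanisms (all names are placeholders):

    # shareParams: the full parameter name is given, as stated in the note above.
    fitter.shareParams('scan1___peak1___fwhm')

    # shareModelParams: a bare parameter name, shared between models that carry
    # the same model name across sources.
    fitter.shareModelParams('fwhm')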

y() ArrayLike[source]#

Stack the data in the different sources, horizontally.

Returns:

Horizontally concatenated data from each source.

Return type:

ArrayLike

yerr() ArrayLike[source]#

Stack the uncertainty in the different sources, horizontally.

Returns:

Horizontally concatenated uncertainty from each source.

Return type:

ArrayLike

class satlas2.core.Model(name: str, prefunc: callable | None = None)[source]#

Base Model class

Parameters:
  • name (str) – Name given to the model

  • prefunc (callable, optional) – Transformation function to be applied to the evaluation points before evaluating the model, by default None

f(x: ArrayLike) float[source]#

Evaluates the model in the given points.

Parameters:

x (ArrayLike) – Points in which the model has to be evaluated

Return type:

ArrayLike

setTransform(func: callable)[source]#

Set the transformation for the pre-evaluation.

Parameters:

func (callable) – Transformation function to be applied to the evaluation points before evaluating the model.
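
A sketch of a pre-evaluation transform, e.g. removing a fixed offset from the evaluation points (the offset is a placeholder; the same effect can be obtained by passing prefunc at construction):

    from satlas2.core import Model

    model = Model(name='example')
    model.setTransform(lambda x: x - 500.0)  # points are shifted by -500.0 before evaluation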

class satlas2.core.Source(x: ArrayLike, y: ArrayLike, yerr: ArrayLike | callable, name: str, xerr: ArrayLike | None = None, **kwargs)[source]#

Initializes a source of data

Parameters:
  • x (ArrayLike) – x values of the data

  • y (ArrayLike) – y values of the data

  • yerr (Union[ArrayLike, callable]) – The yerr of the data, either an array for fixed uncertainties or a callable to be applied to the result of the models in the source.

  • name (str) – The name given to the source. This must be a unique value!

  • xerr (ArrayLike, optional) – x uncertainties of the data; supply these if the yerr should be enlarged with the contribution of the xerr, by default None.
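
A sketch of a Source for counting data, where the uncertainty is computed from the model prediction through a callable instead of being fixed (the clipped square root used here is one common choice, not the only one):

    import numpy as np
    from satlas2.core import Source

    def modified_sqrt(model_values):
        # Poisson-style uncertainty, clipped so that empty bins do not get zero error.
        return np.sqrt(np.maximum(model_values, 1))

    x = np.linspace(-10, 10, 101)
    y = np.random.default_rng(0).poisson(5, size=x.shape)
    datasource = Source(x, y, yerr=modified_sqrt, name='scan1',
                        xerr=np.full(x.shape, 0.1))  # optional x uncertainties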

addModel(model: Model)[source]#

Add a model to the Source

Parameters:

model (Model) – The Model to be added to the source. When multiple models are added, the source evaluates to the sum of the individual models.

evaluate(x: ArrayLike) ArrayLike[source]#

Evaluates all models in the given points and returns the sum.

Parameters:

x (ArrayLike) – Points in which the models have to be evaluated

Return type:

ArrayLike
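
A short sketch of evaluating the summed models on a finer grid than the data, e.g. to draw a smooth fitted curve (datasource is the Source from the example above):

    x_fine = np.linspace(-10, 10, 1000)   # finer grid than the measured points
    y_fine = datasource.evaluate(x_fine)  # sum of all models added to this source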

f() ArrayLike[source]#

Returns the sum of the evaluation of all models in the x-coordinates defined in the source.

Return type:

ArrayLike