speclite.downsample.downsample(data_in, downsampling, weight=None, axis=-1, start_index=0, auto_trim=True, data_out=None)

Downsample spectral data by a constant factor.

Downsampling consists of dividing the input data into fixed-size groups of consecutive bins, then calculating downsampled values as weighted averages within each group. The basic usage is:

>>> import numpy as np
>>> from speclite.downsample import downsample
>>> data = np.ones((6,), dtype=[('flux', float), ('ivar', float)])
>>> out = downsample(data, downsampling=2, weight='ivar')
>>> np.all(out ==
... np.array([(1.0, 2.0), (1.0, 2.0), (1.0, 2.0)],
... dtype=[('flux', '<f8'), ('ivar', '<f8')]))
True

Any partial group at the end of the input data will be silently ignored unless auto_trim=False:

>>> out = downsample(data, downsampling=4, weight='ivar')
>>> np.all(out ==
... np.array([(1.0, 4.0)], dtype=[('flux', '<f8'), ('ivar', '<f8')]))
True
>>> out = downsample(data, downsampling=4, weight='ivar', auto_trim=False)
Traceback (most recent call last):
ValueError: Input data does not evenly divide with downsampling = 4.

A multi-dimensional array of spectra with the same binning can be downsampled in a single operation, for example:

>>> data = np.ones((2,16,3,), dtype=[('flux', float), ('ivar', float)])
>>> results = downsample(data, 4, axis=1)
>>> results.shape
(2, 4, 3)

If no axis is specified, the last axis of the input array is assumed.

If the input data is masked, only unmasked entries will be used to calculate the weighted averages for each downsampled group and the output will also be masked:

>>> import numpy.ma as ma
>>> data = ma.ones((6,), dtype=[('flux', float), ('ivar', float)])
>>> data.mask[3:] = True
>>> out = downsample(data, 2, weight='ivar')
>>> type(out) == ma.core.MaskedArray
True

If the input fields have different masks, their logical OR will be used for all output fields since, otherwise, each output field would require its own output weight field. As a consequence, masking a single input field is equivalent to masking all input fields.
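A plain-NumPy sketch of this rule (an illustration of the documented behavior, not speclite's own code): the per-bin mask applied to every field is the logical OR of the individual field masks, and masked bins are excluded from each group's average.

```python
import numpy as np
import numpy.ma as ma

# Hypothetical per-field masks for 4 input bins.
flux_mask = np.array([False, True,  False, False])
ivar_mask = np.array([False, False, False, False])

# The documented rule: one combined mask (logical OR) for all output fields.
combined = flux_mask | ivar_mask

# Masked bins are excluded from each group's average (downsampling = 2):
flux = ma.array([1.0, 5.0, 3.0, 4.0], mask=combined)
group_means = flux.reshape(-1, 2).mean(axis=1)
# group 0 averages only its unmasked bin (1.0); group 1 averages 3.0 and 4.0.
```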

Parameters

data_in : numpy.ndarray or numpy.ma.MaskedArray

Structured numpy array containing input spectrum data to downsample.


downsampling : int

Number of consecutive bins to combine into each downsampled bin. Must be at least one and not larger than the input data size.

weight : string or None

The name of a field whose values provide the weights to use for downsampling. When None, a weight value of one will be used. The output array will contain a field with this name, unless it is None, containing values of the downsampled weights. All weights must be non-negative.
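The doctests above use uniform data, so the weighting is invisible there. A plain-NumPy sketch of the per-group weighted average (an illustration consistent with the examples above, not speclite's implementation), where the downsampled weight is the sum of the group's weights:

```python
import numpy as np

flux = np.array([1.0, 2.0, 3.0, 4.0])
ivar = np.array([1.0, 1.0, 1.0, 3.0])   # per-bin weights
downsampling = 2

# Group consecutive bins, then form weighted averages within each group.
f = flux.reshape(-1, downsampling)
w = ivar.reshape(-1, downsampling)
out_flux = (f * w).sum(axis=1) / w.sum(axis=1)   # weighted average per group
out_ivar = w.sum(axis=1)                         # downsampled weight per group
```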


start_index : int

Index of the first bin to use for downsampling. Any bins preceding the start bin will not be included in the downsampled results. Negative indices are not allowed.
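The combined effect of start_index and auto_trim can be sketched in plain NumPy (an illustration of the documented slicing, not speclite's code), here with unweighted averages:

```python
import numpy as np

flux = np.arange(7, dtype=float)      # 7 input bins: 0.0 .. 6.0
start_index, downsampling = 1, 2

trimmed = flux[start_index:]          # bins before start_index are dropped
n_groups = trimmed.size // downsampling
# With auto_trim=True, a trailing partial group would also be dropped;
# here the 6 remaining bins divide evenly into 3 groups.
groups = trimmed[:n_groups * downsampling].reshape(n_groups, downsampling)
out = groups.mean(axis=1)             # averages of (1,2), (3,4), (5,6)
```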


axis : int

Index of the axis to perform downsampling in. The default is to use the last axis of the input data array.


auto_trim : bool

When True, any bins at the end of the input data that do not fill a complete downsampled bin will be automatically (and silently) trimmed. When False, a ValueError is raised if the input data does not divide evenly.

data_out : numpy.ndarray or None

Structured numpy array where output spectrum data should be written. If none is specified, an appropriately sized array will be allocated and returned. Use this argument to take control of the memory allocation and, for example, re-use the same output array for a sequence of downsampling operations.

Returns

numpy.ndarray or numpy.ma.MaskedArray

Structured numpy array of downsampled result, containing the same fields as the input data and the same shape except along the specified downsampling axis. If the input data is masked, the output data will also be masked, with each output field’s mask determined by the combination of the optional weight field mask and the corresponding input field mask.