aurora.time_series package¶
Submodules¶
aurora.time_series.apodization_window module¶
@author: kkappler
Module to manage windowing prior to FFT. Intended to support most apodization windows available via scipy.signal.get_window()
The taper config has two possible forms:
1. Standard form, for accessing scipy.signal.get_window(): ["taper_family", "num_samples_window", "additional_args"]
2. User-defined, for custom tapers: ["array"]
Example 1 (standard form): "taper_family" = "hamming", "num_samples_window" = 128, "additional_args" = {}
Example 2 (standard form): "taper_family" = "kaiser", "num_samples_window" = 64, "additional_args" = {"beta": 8}
Example 3 (user-defined): "array" = [1, 2, 3, 4, 5, 4, 3, 2, 1]. In this case num_samples_window is defined by the length of the array. If "array" is non-empty, the user-defined case is assumed.
It is a little unsatisfying that the additional args need to be ordered for scipy.signal.get_window(). An OrderedDict() is probably appropriate for any window that takes more than one additional arg.
For example: "taper_family" = "general_gaussian", "additional_args" = OrderedDict([("power", 1.5), ("sigma", 7)])
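As a sketch (not the ApodizationWindow API itself), the three configuration forms above map onto scipy.signal.get_window() roughly as follows:

import numpy as np
import scipy.signal as ssig

# Example 1: standard form, no additional args
hamming = ssig.get_window("hamming", 128)

# Example 2: standard form with one additional arg (kaiser beta)
kaiser = ssig.get_window(("kaiser", 8), 64)

# Example 3: user-defined taper; num_samples_window is implied by the array length
custom = np.array([1, 2, 3, 4, 5, 4, 3, 2, 1], dtype=float)
num_samples_window = len(custom)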
- class aurora.time_series.apodization_window.ApodizationWindow(taper_family: str | None = 'boxcar', num_samples_window: int | None = 0, taper: ndarray | None = array([], dtype=float64), taper_additional_args: dict | None = None, **kwargs)[source]¶
Bases:
object
Instantiate an apodization window object.
Example usages:
apod_window = ApodizationWindow()
taper = ApodizationWindow(taper_family="hanning", num_samples_window=55)
Window factors S1, S2, CG, ENBW are modelled after Heinzel et al., pp. 12-14 [1].
[1] Spectrum and spectral density estimation by the Discrete Fourier transform (DFT), including a comprehensive list of window functions and some new flat-top windows. G. Heinzel, A. Rüdiger and R. Schilling, Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Teilinstitut Hannover, February 15, 2002.
See also [2] Harris FJ. On the use of windows for harmonic analysis with the discrete Fourier transform. Proceedings of the IEEE. 1978 Jan;66(1):51-83.
Nomenclature from Heinzel et al.:
ENBW: Effective Noise BandWidth, see Equation (22)
NENBW: Normalized Equivalent Noise BandWidth, see Equation (21)
- Attributes:
S1
Return the sum of the window coefficients
S2
Returns the sum of squares of the window coefficients.
apodization_factor
Returns the apodization factor (computes if it is None)
coherent_gain
Returns the DC gain of the window normalized by window length.
nenbw
Returns the Normalized Equivalent Noise BandWidth
num_samples_window
returns the window length in samples
summary
Returns a string summarizing the window properties
taper
Returns the values of the taper as a numpy array
Methods
enbw
(fs) Return the Equivalent Noise Bandwidth of the window.
make
() A wrapper call to scipy.signal
test_linear_spectral_density_factor
() This is just a test to verify some algebra
- enbw(fs) float [source]¶
Return the Equivalent Noise Bandwidth of the window.
Notes: Effective Noise BandWidth = fs * NENBW / N = fs * S2 / (S1**2). Unlike NENBW, CG, S1, S2, this is not a pure property of the window; it is a property of the window combined with the sample rate.
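For illustration, the window factors and ENBW can be computed directly from a window array following the definitions above (a sketch; the sample rate fs is an assumed value, not part of the class):

import scipy.signal as ssig

fs = 1000.0                      # assumed sample rate in Hz
w = ssig.get_window("hamming", 256)
N = len(w)

S1 = w.sum()                     # sum of window coefficients
S2 = (w**2).sum()                # sum of squared window coefficients
CG = S1 / N                      # coherent gain
NENBW = N * S2 / S1**2           # Heinzel et al. Eq. (21)
ENBW = fs * S2 / S1**2           # Heinzel et al. Eq. (22): fs * NENBW / N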
- make() None [source]¶
A wrapper call to scipy.signal
Note: see scipy.signal.get_window for a description of what is expected in args[1:]: http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.get_window.html
Note: this just repackages the args so that scipy.signal.get_window() accepts all cases.
- property nenbw: float¶
Returns the Normalized Equivalent Noise BandWidth
Notes: a.k.a NENBW, see Equation (21) in Heinzel et al. 2002.
- property summary: str¶
Returns a string summarizing the window properties
TODO: This should be the __str__(), i.e. the readable version https://stackoverflow.com/questions/1436703/what-is-the-difference-between-str-and-repr
- Returns:
out_str – String composed of the taper_family, number_of_samples, and whether self.taper is set
- Return type:
str
- test_linear_spectral_density_factor() None [source]¶
This is just a test to verify some algebra.
TODO: Move this into tests.
Claim: The lsd_calibration factors
A = (1./coherent_gain) * np.sqrt((2*dt)/(nenbw*N))
and
B = np.sqrt(2/(sample_rate*self.S2))
are identical.
Note that sqrt(2*dt) == sqrt(2/sample_rate), so those terms cancel and A == B if and only if
(1./coherent_gain) * np.sqrt(1/(nenbw*N)) == 1/np.sqrt(S2),
which follows from (CG**2) * NENBW * N = S2 (shown in aurora GitHub issue #3).
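A quick numerical check of this claim, using the definitions of S1, S2, coherent gain, and NENBW given above (a standalone sketch, not the method itself):

import numpy as np
import scipy.signal as ssig

sample_rate = 50.0
dt = 1.0 / sample_rate
w = ssig.get_window("hamming", 128)
N = len(w)

S1, S2 = w.sum(), (w**2).sum()
coherent_gain = S1 / N
nenbw = N * S2 / S1**2

A = (1.0 / coherent_gain) * np.sqrt((2 * dt) / (nenbw * N))
B = np.sqrt(2 / (sample_rate * S2))
assert np.isclose(A, B)          # the two calibration factors agree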
aurora.time_series.decorators module¶
This module is a place to put decorators used by methods in aurora. It is a work in progress.
Development notes:
- aurora.time_series.decorators.can_use_xr_dataarray(func)[source]¶
Decorator to allow functions that operate on xr.Dataset to also operate on xr.DataArray, by converting xr.DataArray to xr.Dataset, calling the function, and then converting back to xr.DataArray
Most of the windowed time series methods are intended to work with the xarray.Dataset class, but it is convenient to be able to pass them xarray.DataArray objects as well. This decorator casts a DataArray to a Dataset, runs it through func, and casts the result back to a DataArray.
A similar decorator could be written for numpy arrays.
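A minimal sketch of the DataArray-to-Dataset round trip described above (illustrative; not the actual aurora implementation):

import functools
import xarray as xr

def can_use_xr_dataarray(func):
    """Allow a Dataset-oriented function to also accept a DataArray."""
    @functools.wraps(func)
    def wrapper(data, *args, **kwargs):
        if isinstance(data, xr.DataArray):
            name = data.name or "data"
            result = func(data.to_dataset(name=name), *args, **kwargs)
            return result[name]      # cast back to DataArray
        return func(data, *args, **kwargs)
    return wrapper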
aurora.time_series.frequency_band_helpers module¶
This module contains functions that are associated with time series of Fourier coefficients
- aurora.time_series.frequency_band_helpers.adjust_band_for_coherence_sorting(frequency_band, spectrogram, rule='min3')[source]¶
WIP: Intended to broaden a band to allow more Fourier coefficients (FCs) for spectral features; used in coherence sorting and general feature extraction.
- Parameters:
frequency_band –
spectrogram (Spectrogram) –
rule –
- aurora.time_series.frequency_band_helpers.check_time_axes_synched(X, Y)[source]¶
Utility function for checking that time axes agree. Raises ValueError if axes do not agree.
It is critical that X, Y, RR have the same time axes for aurora processing.
- Parameters:
X (xarray) –
Y (xarray) –
- aurora.time_series.frequency_band_helpers.cross_spectra(X, Y)[source]¶
WIP: Returns the cross power spectra between two arrays
- aurora.time_series.frequency_band_helpers.extract_band(frequency_band, fft_obj, channels=[], epsilon=1e-07)[source]¶
Extracts a frequency band from xr.DataArray representing a spectrogram.
Stand alone version of the method that is used by WIP Spectrogram class.
Development Notes:
#1: 20230902 TODO: Decide if the base dataset object should be an xr.DataArray (not xr.Dataset). drop=True does not play nicely with h5py and Dataset; it results in a type error:
File "stringsource", line 2, in h5py.h5r.Reference.__reduce_cython__
TypeError: no default __reduce__ due to non-trivial __cinit__
However, it works OK with DataArray, so perhaps use DataArray in general.
- Parameters:
frequency_band (mt_metadata.transfer_functions.processing.aurora.band.Band) – Specifies interval corresponding to a frequency band
fft_obj (xarray.core.dataset.Dataset) – To be replaced with an fft_obj() class in future
epsilon (float) – Use this when you are worried about missing a frequency due to round off error. This is in general not needed if we use a df/2 pad around true harmonics.
- Returns:
band – The frequencies within the band passed into this function
- Return type:
xr.DataArray
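The core selection that extract_band performs can be sketched as follows (illustrative only; the band bounds, epsilon, and toy spectrogram are assumptions):

import numpy as np
import xarray as xr

frequencies = np.fft.rfftfreq(30, d=1.0)          # toy harmonic axis
da = xr.DataArray(
    np.random.randn(5, len(frequencies)),
    dims=("time", "frequency"),
    coords={"time": np.arange(5), "frequency": frequencies},
)

lower, upper, epsilon = 0.1, 0.2, 1e-7
cond = (da.frequency >= lower - epsilon) & (da.frequency <= upper + epsilon)
band = da.where(cond, drop=True)                  # harmonics within the band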
- aurora.time_series.frequency_band_helpers.get_band_for_tf_estimate(band, dec_level_config, local_stft_obj, remote_stft_obj)[source]¶
Returns spectrograms X, Y, RR for harmonics within the given band
- Parameters:
band (mt_metadata.transfer_functions.processing.aurora.FrequencyBands) – object with lower_bound and upper_bound to tell stft object which subarray to return
dec_level_config (mt_metadata.transfer_functions.processing.aurora.decimation_level.DecimationLevel) – information about the input and output channels needed for TF estimation problem setup
local_stft_obj (xarray.core.dataset.Dataset or None) – Time series of Fourier coefficients for the station whose TF is to be estimated
remote_stft_obj (xarray.core.dataset.Dataset or None) – Time series of Fourier coefficients for the remote reference station
- Returns:
X, Y, RR – The same data structures as local_stft_obj and remote_stft_obj, but restricted to input_channels, output_channels, and reference_channels; the frequency axes are restricted to the frequency band given as an input argument.
- Return type:
xarray.core.dataset.Dataset or None
aurora.time_series.spectrogram module¶
WORK IN PROGRESS (WIP): This module contains a class that represents a spectrogram, i.e. a 2D time series of Fourier coefficients with axes time and frequency.
- class aurora.time_series.spectrogram.Spectrogram(dataset=None)[source]¶
Bases:
object
Class to contain methods for STFT objects.
TODO: Add support for cross powers
TODO: Add OLS Z-estimates
TODO: Add Sims/Vozoff Z-estimates
- Attributes:
dataset
returns the underlying xarray data
frequency_increment
returns the “delta f” of the frequency axis
time_axis
returns the time axis of the underlying xarray
Methods
extract_band
(frequency_band[, channels])Returns another instance of Spectrogram, with the frequency axis reduced to the input band.
flatten
([chunk_by])Returns the flattened xarray (time-chunked by default).
num_harmonics_in_band
(frequency_band[, epsilon])Returns the number of harmonics within the frequency band in the underlying dataset
- property dataset¶
returns the underlying xarray data
- extract_band(frequency_band, channels=[])[source]¶
Returns another instance of Spectrogram, with the frequency axis reduced to the input band.
TODO: Consider returning a copy of the data…
- Parameters:
frequency_band –
channels –
- Returns:
spectrogram – Returns a Spectrogram object with only the extracted band for a dataset
- Return type:
Spectrogram
- flatten(chunk_by: str | None = 'time') Dataset [source]¶
Returns the flattened xarray (time-chunked by default).
- Parameters:
chunk_by (str) – Controlled vocabulary ["time", "frequency"]. Reshaping the 2D spectrogram can be done in two ways (basically row-major or column-major): we either keep frequency constant and iterate over time, or keep time constant and iterate over frequency (in the inner loop).
- Returns:
xarray.Dataset (The dataset from the band spectrogram, stacked.)
Development Notes
The flattening used in TF calculation by default is the opposite of the one used here:
dataset.stack(observation=("frequency", "time"))
However, for feature extraction it may make sense to swap the order:
xrds = band_spectrogram.dataset.stack(observation=("time", "frequency"))
This is like chunking into time windows and allows individual features to be computed on each time window, if desired.
The time series still needs to be split, though; splitting by time would be a reshape by (last_freq_index - first_freq_index). Using pure xarray this may not matter, but if we drop down into numpy it could be useful.
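The two stacking orders described above, sketched on a toy Dataset:

import numpy as np
import xarray as xr

ds = xr.Dataset(
    {"hx": (("time", "frequency"), np.random.randn(4, 8))},
    coords={"time": np.arange(4), "frequency": np.arange(8)},
)

# default order in TF calculation: frequency is the outer (slowest) index
obs_by_freq = ds.stack(observation=("frequency", "time"))

# order suited to feature extraction: each time window stays contiguous
obs_by_time = ds.stack(observation=("time", "frequency"))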
- property frequency_increment¶
returns the “delta f” of the frequency axis - assumes uniformly sampled in frequency domain
- num_harmonics_in_band(frequency_band, epsilon=1e-07)[source]¶
Returns the number of harmonics within the frequency band in the underlying dataset
- Parameters:
frequency_band –
epsilon –
- property time_axis¶
returns the time axis of the underlying xarray
aurora.time_series.time_axis_helpers module¶
This module contains functions for generating time axes.
20240723: There are two approaches used to generate time axes that should be equivalent if there are integer nanoseconds per sample, but otherwise they will differ.
These functions are not used outside of tests and may be removed in future. For now, keep them around as they may be useful in addressing mth5 issue 225 (https://github.com/kujaku11/mth5/issues/225), which wants to characterize round-off error in timestamps.
- aurora.time_series.time_axis_helpers.decide_time_axis_method(sample_rate: float) str [source]¶
Based on sample rate, decide method of time axis generation.
- aurora.time_series.time_axis_helpers.do_some_tests() None [source]¶
Placeholder for tests
Highlights the difference in time axes when there is an integer number of ns per sample vs. when there is not.
- aurora.time_series.time_axis_helpers.fast_arange(t0: datetime64, n_samples: int, sample_rate: float) ndarray [source]¶
creates an array of (approximately) equally spaced time stamps
- aurora.time_series.time_axis_helpers.make_time_axis(t0: datetime64, n_samples: int, sample_rate: float) ndarray [source]¶
Passthrough method that calls a function from TIME_AXIS_GENERATOR_FUNCTIONS
- Parameters:
- t0: np.datetime64
The time of the first sample
- n_samples: int
The number of samples on the time axis
- sample_rate: float
The number of samples per second
- Returns:
- time_index: np.ndarray
An array of np.datetime64 objects – the time axis.
- aurora.time_series.time_axis_helpers.slow_comprehension(t0: datetime64, n_samples: int, sample_rate: float) ndarray [source]¶
- aurora.time_series.time_axis_helpers.test_generate_time_axis(t0, n_samples, sample_rate)[source]¶
Method to compare different ways to generate a time axis.
Development Notes: There are two obvious ways to generate an axis of timestamps here. One method is slow and more precise; the other is fast but drops some nanoseconds due to integer round-off error.
To see this, consider the example of 3 Hz: there are 333333333 ns between samples, which drops 1 ns per second if we scale nanoseconds = np.arange(N). The issue is that nanosecond granularity forces a round-off error.
Probably will use logic like:
| if there_are_integer_ns_per_sample:
|     time_stamps = do_it_the_fast_way()
| else:
|     time_stamps = do_it_the_slow_way()
| return time_stamps
- Parameters:
t0 (_type_) – _description_
n_samples (_type_) – _description_
sample_rate (_type_) – _description_
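The round-off effect described above can be demonstrated with two toy generators (function names are illustrative analogues of fast_arange and slow_comprehension, not the actual implementations):

import numpy as np

def fast_time_axis(t0, n_samples, sample_rate):
    # fast: round 1/sample_rate to integer nanoseconds once, then scale
    dt_ns = int(1e9 / sample_rate)
    return t0 + np.arange(n_samples) * np.timedelta64(dt_ns, "ns")

def slow_time_axis(t0, n_samples, sample_rate):
    # slow: round each timestamp separately, avoiding accumulated error
    return np.array(
        [t0 + np.timedelta64(int(round(i * 1e9 / sample_rate)), "ns")
         for i in range(n_samples)]
    )

t0 = np.datetime64("2024-01-01T00:00:00")
fast = fast_time_axis(t0, 31, 3.0)     # 10 s of 3 Hz data
slow = slow_time_axis(t0, 31, 3.0)
print(slow[-1] - fast[-1])             # ~10 ns of accumulated round-off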
aurora.time_series.window_helpers module¶
This module contains some helper functions that are used when working with sliding windows.
- Development Notes:
Not all the functions here are needed; some of them are just examples and tests.
For example, there are three sliding window functions that were considered. The idea was to compare their performance, but currently we use sliding_window_crude.
Notes in google doc: https://docs.google.com/document/d/1CsRhSLXsRG8HQxM4lKNqVj-V9KA9iUQAvCOtouVzFs0/edit?usp=sharing
- aurora.time_series.window_helpers.available_number_of_windows_in_array(n_samples_array: int, n_samples_window: int, n_advance: int) int [source]¶
- Returns the number of whole windows that can be extracted from an array of length n_samples_array by a window of length n_samples_window, if the window advances by n_advance samples at each step.
- Parameters:
- Returns:
available_number_of_strides – The number of windows the time series will yield
- Return type:
int
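The count is presumably equivalent to the following sketch (one window "for free", then one more per advance that still fits):

def available_number_of_windows(n_samples_array, n_samples_window, n_advance):
    if n_samples_array < n_samples_window:
        return 0
    return 1 + (n_samples_array - n_samples_window) // n_advance

available_number_of_windows(100, 32, 8)   # 9 windows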
- aurora.time_series.window_helpers.check_all_sliding_window_functions_are_equivalent() None [source]¶
This is a test to see if the sliding window functions all return the same output.
TODO: Move this into tests.
Development Notes: - simple sanity check that runs each sliding window function on a small array and confirms the results are numerically identical. - Note that striding window will return int types where others return float.
- aurora.time_series.window_helpers.do_some_tests()[source]¶
A placeholder for things that should be moved to tests/
TODO: Move these into tests
- aurora.time_series.window_helpers.sliding_window_crude(data, num_samples_window, num_samples_advance, num_windows=None)[source]¶
Reshapes input data with a sliding window.
- Parameters:
data (np.ndarray) – The time series data to be windowed
num_samples_window (int) – The length of the window (in samples)
num_samples_advance (int) – The number of samples the window advances at each step
num_windows (int) – The number of windows to "take". Must be less than or equal to the number of available windows.
- Returns:
output_array – The windowed time series
- Return type:
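A minimal sketch of the reshaping this function performs (copying each window out of the array; not the aurora implementation itself):

import numpy as np

def sliding_window(data, num_samples_window, num_samples_advance, num_windows=None):
    max_windows = 1 + (len(data) - num_samples_window) // num_samples_advance
    if num_windows is None:
        num_windows = max_windows
    output_array = np.zeros((num_windows, num_samples_window), dtype=data.dtype)
    for i in range(num_windows):
        start = i * num_samples_advance
        output_array[i, :] = data[start : start + num_samples_window]
    return output_array

windows = sliding_window(np.arange(15), num_samples_window=3, num_samples_advance=2)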
- aurora.time_series.window_helpers.sliding_window_numba(data, num_samples_window, num_samples_advance, num_windows)[source]¶
Reshapes input data with a sliding window.
- Parameters:
- Returns:
output_array – The windowed time series
- Return type:
- aurora.time_series.window_helpers.striding_window(data: ndarray, num_samples_window: int, num_samples_advance: int, num_windows: int | None = None) ndarray [source]¶
Reshapes input data with a sliding window.
Not currently used.
Development Notes: Applies a striding window to an array. We use 1D arrays here. Note that this method is extendable to N-dimensional arrays as was once shown at http://www.johnvinyard.com/blog/?p=268
Here the code is restricted to 1D. This is because of several warnings encountered, on the notes of stride_tricks.py, as well as for example here: https://stackoverflow.com/questions/4936620/using-strides-for-an-efficient-moving-average-filter
While we can possibly set up Aurora so that no copies of the strided window are made downstream, we cannot guarantee that another user may not add methods that require copies. To avoid this, use 1d implementation only for now.
Another clean example of this method can be found in the razorback codes from brgm.
- Parameters:
data (np.ndarray) – The time series data to be windowed
num_samples_window (int) – The length of the window (in samples)
num_samples_advance (int) – The number of samples the window advances at each step
num_windows (int) – The number of windows to "take". Must be less than or equal to the number of available windows.
- Returns:
strided_window – The windowed time series. The result is 2D: strided_window[i] is the i'th window.
- Return type:
np.ndarray
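For reference, numpy's sliding_window_view offers a safer wrapper around the stride tricks discussed above; subsampling its rows reproduces the advancing window (a sketch, not the aurora implementation):

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

data = np.arange(15)
num_samples_window, num_samples_advance = 3, 2

all_windows = sliding_window_view(data, num_samples_window)  # zero-copy view
strided_window = all_windows[::num_samples_advance]
# strided_window[i] is the i'th window; it is a view, so copy() before mutating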
aurora.time_series.windowed_time_series module¶
This module contains methods associated with operating on windowed time series, i.e. arrays that have been chunked so that an operator can operate chunk-wise.
- Development Notes:
Many of the methods in this module are not currently in use. It looks like I once thought we should have a class to handle windowed time series, but then decided that static methods applied to xr.Dataset would be better.
- class aurora.time_series.windowed_time_series.WindowedTimeSeries[source]¶
Bases:
object
Time series that has been chopped into (possibly) overlapping windows.
This is a place where we can put methods that operate on these sorts of objects.
Assumes xr.Datasets keyed by “channel”
- Specific methods:
Demean, Detrend, Prewhiten, stft, invert_prewhitening
- TODO: Consider making these @staticmethod so that one can import WindowedTimeSeries
and then call the static methods.
Methods
apply_fft
()
apply_taper
()
detrend
(data[, detrend_axis, detrend_type]) De-trends input data.
- apply_fft()¶
- apply_taper()¶
- static detrend(data: Dataset, detrend_axis: int | None = None, detrend_type: str | None = 'linear')[source]¶
De-trends input data.
- Development Notes:
It looks like this function was concerned with NaN in the time series. Since there is no general implemented solution for this in aurora/MTH5, it may be better to just get rid of the NaN-checking. The data can be pre-screened for NaNs if needed.
- Parameters:
data (xarray Dataset) –
detrend_axis (int) –
detrend_type (string) – Controlled vocabulary [“linear”, “constant”] This argument is provided to scipy.signal.detrend
- Returns:
data – The input data, modified in-place with de-trending.
- Return type:
xr.Dataset
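A sketch of the channel-by-channel de-trending this method describes, applied along the within-window time axis with scipy.signal.detrend (the toy Dataset and dimension names are assumptions):

import numpy as np
import scipy.signal as ssig
import xarray as xr

ds = xr.Dataset(
    {"hx": (("time", "within-window time"), np.random.randn(4, 64))},
    coords={"time": np.arange(4), "within-window time": np.arange(64)},
)

for channel in ds.data_vars:
    detrended = ssig.detrend(ds[channel].values, axis=-1, type="linear")
    ds[channel] = (ds[channel].dims, detrended)   # de-trend each window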
- aurora.time_series.windowed_time_series.fft_xr_ds(dataset: Dataset, sample_rate: float, detrend_type: str | None = 'linear') Dataset [source]¶
Apply the Fourier transform to an (already windowed) xarray Dataset time series.
Notes:
- The returned harmonics do not include the Nyquist frequency. To change this, add +1 to n_fft_harmonics. Also, only one-sided FFTs are returned.
- For each channel within the Dataset, the FFT is applied along the within-window time axis of the associated numpy array.
TODO: add support for prewhitening per-window
- Parameters:
- Returns:
output_ds – The FFT of the input. Contains only non-negative frequencies; i.e. the input dataset had coords (time, within-window time) but the output dataset has coords (time, frequency).
- Return type:
xr.Dataset
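The per-channel, one-sided FFT described above can be sketched with np.fft.rfft, dropping the Nyquist harmonic per the note (the toy Dataset, sample rate, and dimension names are assumptions):

import numpy as np
import xarray as xr

sample_rate, n_samples_window = 64.0, 128
ds = xr.Dataset(
    {"hx": (("time", "within-window time"),
            np.random.randn(4, n_samples_window))},
    coords={"time": np.arange(4),
            "within-window time": np.arange(n_samples_window)},
)

n_harmonics = n_samples_window // 2       # excludes Nyquist, per the note above
frequencies = np.fft.rfftfreq(n_samples_window, d=1.0 / sample_rate)[:n_harmonics]
spectra = {
    ch: (("time", "frequency"),
         np.fft.rfft(ds[ch].values, axis=-1)[:, :n_harmonics])
    for ch in ds.data_vars
}
output_ds = xr.Dataset(spectra, coords={"time": ds.time, "frequency": frequencies})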
aurora.time_series.windowing_scheme module¶
This module is concerned with windowing time series.
Development Notes:
The windowing scheme defines the chunking and chopping of the time series for the Short Time Fourier Transform. Often referred to as a “sliding window” or a “striding window”. In its most basic form it is a taper with a rule to say how far to advance at each stride (or step).
To generate an array of data windows from a data series we need only two parameters: window_length (L) and window_overlap (V). The parameter "window_advance" (L-V) can be used in lieu of overlap. Sliding windows are normally described in terms of overlap, but it is cleaner to code in terms of advance.
The choices of L and V are usually made with some knowledge of the time series sample rate, duration, and the frequency band of interest. In aurora, because this is used to prepare for the STFT, L is typically a power of 2.
In general we need one instance of this class per decimation level, but in practice the windowing scheme is often left the same for each decimation level.
This class is a key part of the “gateway” to frequency domain, so it has been given a sampling_rate attribute. While sampling rate is a property of the data, and not the windowing scheme per se, it is good for this class to be aware of the sampling rate.
Future modifications could involve: - binding this class with a time series. - Making a subclass with only L, V, and then having an extension with sample_rate
When 2D arrays are generated how should we index them? | [[ 0 1 2] | [ 2 3 4] | [ 4 5 6] | [ 6 7 8] | [ 8 9 10] | [10 11 12] | [12 13 14]]
In this example the rows are indexing the individual windows … and so they should be associated with the time of each window. Will need a standard for this. Obvious options are center_time of window and time_of_first sample. I prefer time_of_first sample. This can always be transformed to center time or another standard later. We can call this the “window time axis”. The columns are indexing “steps of delta-t”. The actual times are different for every row, so it would be best to use something like [0, dt, 2*dt] for that axis to keep it general. We can call this the “within-window sample time axis”
TODO: Regarding the optional time_vector input to self.apply_sliding_window(): the current implementation takes numpy array data as input. We also need to allow xarray input. In the simplest case we would take an xarray in and extract its "time" axis as the time vector.
20210529 This class is going to be modified to only accept xarray as input data. We can force any incoming numpy arrays to be either xr.DataArray or xr.Dataset. Similarly, output will be only xr.DataArray or xr.Dataset
- class aurora.time_series.windowing_scheme.WindowingScheme(**kwargs)[source]¶
Bases:
ApodizationWindow
Development notes 20210415: Casting window length, overlap, advance, etc. in terms of number of samples (“taps”, “points”) here. May allow defining these in terms of percent, duration in seconds etc. in future.
Note: Technically, sample_rate is a property of the data, and not of the window … but once the window is applied to data, the sample rate is defined. Sample rate is defined here because this window will operate on time series with a defined time axis.
- Attributes:
S1
Return the sum of the window coefficients
S2
Returns the sum of squares of the window coefficients.
apodization_factor
Returns the apodization factor (computes if it is None)
coherent_gain
Returns the DC gain of the window normalized by window length.
dt
Returns the sample interval of the time series.
duration_advance
Return the duration of the window advance
linear_spectral_density_calibration_factor
Gets the calibration factor for Spectral density.
nenbw
Returns the Normalized Equivalent Noise BandWidth
num_samples_advance
Returns the number of samples the window advances at each step.
num_samples_window
returns the window length in samples
summary
Returns a string summarizing the window properties
taper
Returns the values of the taper as a numpy array
window_duration
Return the duration of the window.
Methods
apply_fft
(data[, detrend_type])Applies the Fourier transform to each window in the windowed time series.
apply_sliding_window
(data[, time_vector, ...])Applies the windowing scheme (self) to the input data.
apply_taper
(data)modifies the data in place by applying a taper to each window
available_number_of_windows
(num_samples_data)Returns the number of windows for a dataset with num_samples_data.
cast_windowed_data_to_xarray
(windowed_array, ...)Casts numpy array to xarray for windowed time series.
clone
()return a deepcopy of self
downsample_time_axis
(time_axis)Returns a time-axis for the windowed data.
enbw
(fs)Return the Equivalent Noise Bandwidth of the window.
left_hand_window_edge_indices
(num_samples_data)Makes an array with the indices of the first sample of each window
make
()A wrapper call to scipy.signal
test_linear_spectral_density_factor
()This is just a test to verify some algebra
frequency_axis
- apply_fft(data: DataArray | Dataset, detrend_type: str | None = 'linear') Dataset [source]¶
Applies the Fourier transform to each window in the windowed time series.
Assumes sliding window and taper already applied.
TODO: Make this return a Spectrogram() object.
- Parameters:
data (xarray.core.dataset.Dataset) – The windowed data to FFT
detrend_type (string) – Passed through to scipy.signal during detrend operation.
- Returns:
spectral_ds – Dataset with the same channels as the input, but the data are now complex-valued Fourier coefficients.
- Return type:
xr.Dataset
- apply_sliding_window(data: ndarray | DataArray | Dataset, time_vector: ndarray | None = None, dt: float | None = None, return_xarray: bool | None = False)[source]¶
Applies the windowing scheme (self) to the input data.
- Parameters:
data (1D numpy array, xr.DataArray, xr.Dataset) – The data to break into ensembles.
time_vector (1D numpy array) – The time axis of the data.
dt (float) – The sample interval of the data (reciprocal of sample_rate)
return_xarray (boolean) – If True will return an xarray object, even if the input object was a numpy array
- Returns:
windowed_obj – Normally an object of type xarray.core.dataarray.DataArray; could be a numpy array as well.
- Return type:
arraylike
- available_number_of_windows(num_samples_data: int) int [source]¶
Returns the number of windows for a dataset with num_samples_data.
Development Note: Only take as many windows as available without wrapping. Start with one window for free, move forward by num_samples_advance and don’t walk over the cliff.
- cast_windowed_data_to_xarray(windowed_array: ndarray, time_vector: ndarray, dt: float | None = None) DataArray [source]¶
Casts numpy array to xarray for windowed time series.
- Parameters:
windowed_array –
time_vector –
dt –
- Returns:
Input data with a time and “within-window time” axis.
- Return type:
xr.DataArray
- downsample_time_axis(time_axis: ndarray) ndarray [source]¶
Returns a time-axis for the windowed data.
TODO: Add an option to use window center, instead of forcing LHWE.
Notes: Say we had 1 Hz data starting at t=0 with 100 samples, and we window with window length 10 and advance 10. The window_time_axis is [0, 10, 20, ..., 90]. With the same window length but an advance of 5, the return is [0, 5, 10, 15, ..., 90].
- Parameters:
time_axis (arraylike) – This is the time axis associated with the time-series prior to the windowing operation.
- Returns:
window_time_axis – This is a time axis for the windowed data. One value per window.
- Return type:
array-like
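The example in the notes above, sketched directly using left-hand window edges (illustrative, not the actual method):

import numpy as np

def downsampled_time_axis(time_axis, num_samples_window, num_samples_advance):
    n_windows = 1 + (len(time_axis) - num_samples_window) // num_samples_advance
    lhwe = np.arange(n_windows) * num_samples_advance   # left-hand window edges
    return time_axis[lhwe]

time_axis = np.arange(100.0)                       # 1 Hz data starting at t=0
downsampled_time_axis(time_axis, 10, 10)           # [ 0., 10., ..., 90.]
downsampled_time_axis(time_axis, 10, 5)            # [ 0.,  5., ..., 90.]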
- property duration_advance¶
Return the duration of the window advance
- left_hand_window_edge_indices(num_samples_data: int) ndarray [source]¶
Makes an array with the indices of the first sample of each window
- property linear_spectral_density_calibration_factor: float¶
Gets the calibration factor for Spectral density.
The factor is applied via multiplication:
scale_factor = self.linear_spectral_density_calibration_factor
linear_spectral_data = data * scale_factor
- Following Heinzel et al. 2002, Equations 24 and 25, for Linear Spectral Density correction for a single-sided spectrum.
- Returns:
calibration_factor – Following Heinzel et al. 2002, Equations 24 and 25
- Return type:
float
- aurora.time_series.windowing_scheme.window_scheme_from_decimation(decimation)[source]¶
Helper function to work around mt_metadata not importing from aurora.
- Parameters:
decimation (mt_metadata.transfer_functions.processing.aurora.decimation_level.DecimationLevel) –
- Return type:
windowing_scheme aurora.time_series.windowing_scheme.WindowingScheme
aurora.time_series.xarray_helpers module¶
Placeholder module for methods manipulating xarray time series
- aurora.time_series.xarray_helpers.covariance_xr(X: DataArray, aweights: ndarray | None = None) DataArray [source]¶
Compute the covariance matrix with numpy.cov.
- Parameters:
X (xarray.core.dataarray.DataArray) – Multivariate time series as an xarray
aweights (array_like, optional) – Doc taken from numpy cov follows: 1-D array of observation vector weights. These relative weights are typically large for observations considered "important" and smaller for observations considered less "important". If ddof=0, the array of weights can be used to assign probabilities to observation vectors.
- Returns:
S – The covariance matrix of the data in xarray form.
- Return type:
xarray.DataArray
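A sketch of the underlying computation (np.cov on the transposed data, re-wrapped as a labelled DataArray; the channel names and dimension labels are illustrative):

import numpy as np
import xarray as xr

channels = ["hx", "hy", "ex"]
X = xr.DataArray(
    np.random.randn(500, 3),
    dims=("time", "variable"),
    coords={"time": np.arange(500), "variable": channels},
)

S = np.cov(X.values.T)            # np.cov expects channels as rows
S_xr = xr.DataArray(
    S,
    dims=("channel_1", "channel_2"),
    coords={"channel_1": channels, "channel_2": channels},
)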
- aurora.time_series.xarray_helpers.handle_nan(X, Y, RR, drop_dim='')[source]¶
Drops NaN from multiple channel series.
Initial use case is for Fourier coefficients, but could be more general.
The idea is to merge X, Y, RR together and then call dropna. We have to be careful with merging because there can be namespace clashes in the channel labels. This is currently handled by relabelling the remote reference channels, e.g. "hx" -> "remote_hx", "hy" -> "remote_hy". If needed we could add a "local" prefix to the other channels in X, Y.
It would be nice to maintain an index of what was dropped.
TODO: We can probably eliminate the config argument by replacing config.reference_channels with list(R.data_vars) and setting a variable input_channels to X.data_vars. In general, this method could be robustified by renaming all the data_vars with a prefix, not just the reference channels
- Parameters:
X (xr.Dataset) –
Y (xr.Dataset or None) –
RR (xr.Dataset or None) –
drop_dim (string) – specifies the dimension on which dropna is happening. For 3D STFT arrays this is “time”, for 2D stacked STFT this is “observation”
- Returns:
X (xr.Dataset)
Y (xr.Dataset)
RR (xr.Dataset or None)
- aurora.time_series.xarray_helpers.initialize_xrda_1d(channels: list, dtype=typing.Union[type, NoneType], value: complex | float | bool | None = 0) DataArray [source]¶
Returns a 1D xr.DataArray keyed by "channel", with one entry per channel named in the input list.
- Parameters:
- Returns:
xrda – An xarray container for the channels, initialized to zeros.
- Return type:
xarray.core.dataarray.DataArray
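A sketch of the kind of container this returns (the dimension name is illustrative):

import numpy as np
import xarray as xr

channels = ["hx", "hy", "hz", "ex", "ey"]
xrda = xr.DataArray(
    np.zeros(len(channels), dtype=complex),
    dims=("channel",),
    coords={"channel": channels},
)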
- aurora.time_series.xarray_helpers.initialize_xrda_2d(channels, dtype=<class 'complex'>, value: complex | float | bool | None = 0, dims=None)[source]¶
TODO: consider merging with initialize_xrda_1d
TODO: consider changing nomenclature from dims=["channel_1", "channel_2"] to dims=["variable_1", "variable_2"], to be consistent with initialize_xrda_1d
- Parameters:
- channels: list
The channels in the multivariate array
- dtype: type
The datatype to initialize the array. Common cases are complex, float, and bool
- value: Union[complex, float, bool]
The default value to assign the array
- Returns:
- xrda: xarray.core.dataarray.DataArray
An xarray container for the channel variances etc., initialized to zeros.