aurora.transfer_function.weights package

Submodules

aurora.transfer_function.weights.edf_weights module

This module contains a class for computing so-called “Effective Degrees of Freedom” weights.

Development notes: The code here is based on the function Edfwts.m from egbert_codes-20210121T193218Z-001/egbert_codes/matlabPrototype_10-13-20/TF/functions/Edfwts.m

class aurora.transfer_function.weights.edf_weights.EffectiveDegreesOfFreedom(edf_l1: float | None = 20.0, alpha: float | None = 0.5, c1: float | None = 2.0, c2: float | None = 10.0, p3: float | None = 5.0, n_data: int | None = 0)[source]

Bases: object

Attributes:
p1

Threshold applied to edf.

p2

Threshold applied to edf.

Methods

compute_weights(X, use)

Compute the EDF Weights

compute_weights(X: ndarray, use: ndarray) → ndarray[source]

Compute the EDF Weights

Development Notes: The data covariance matrix s and its inverse h are iteratively recomputed using fewer and fewer observations. However, the edf is also recomputed at every iteration, yet does not appear to use any fewer observations. Thus the edf weights change as use drops, even for indices that were previously computed … TODO: Could that be an error?

Discussing this with Gary: "… because you are down-weighting (omitting) more and more high-power events, the total signal is going down. The signal power goes down with every call to this method." Regarding the "hat" matrix: where the diagonals of this matrix are really big, an individual data point is controlling its own prediction, and hence the estimate. If the problem were balanced, each data point would contribute equally to the estimate; each data point should contribute 1/N to each parameter. When one data point is large and the others are tiny, it may be contributing a lot, say 1/2 rather than 1/N. The edf is like the diagonal of the hat matrix (in the single-station case): how much does the data point contribute to the prediction of itself? If there are N data points contributing equally, each should contribute ~1/N to its own prediction. Note: H = inv(S) in general has equal off-diagonal terms, H[0,1] = H[1,0]; 2x2 matrices with matching off-diagonal terms have inverses with the same property.
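The hat-matrix intuition above can be illustrated with a small, self-contained example. This is a sketch of the leverage concept only, not aurora's implementation; the array A and its shape are arbitrary choices for illustration.

import numpy as np

# Illustration of the leverage concept: the diagonal of the "hat" matrix
# H = A (A^H A)^{-1} A^H measures how much each observation contributes
# to the prediction of itself.
rng = np.random.default_rng(0)
n_obs, n_params = 100, 2
A = rng.standard_normal((n_obs, n_params)) + 1j * rng.standard_normal((n_obs, n_params))

H = A @ np.linalg.inv(A.conj().T @ A) @ A.conj().T
leverage = np.real(np.diag(H))

# In a balanced problem each observation contributes ~ n_params / n_obs,
# and trace(H) always equals the number of parameters.
print(leverage.mean())                        # ~ n_params / n_obs = 0.02
print(np.isclose(leverage.sum(), n_params))   # True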

Parameters:
  • X (np.ndarray) – The data for which to determine weights.

  • use (np.ndarray) – Boolean array indicating which observations are currently in use.

Returns:

edf – The weight values.

Return type:

np.ndarray
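A usage sketch based only on the signature documented above. The shape of X (assumed here to be channels x observations, complex-valued), the meaning of n_data, and the use of a boolean mask over the observation axis are assumptions, not confirmed by the source.

import numpy as np
from aurora.transfer_function.weights.edf_weights import EffectiveDegreesOfFreedom

# Assumed: X is a complex array of shape (n_channels, n_observations) and
# use is a boolean mask over the observation axis.
rng = np.random.default_rng(42)
n_obs = 256
X = rng.standard_normal((2, n_obs)) + 1j * rng.standard_normal((2, n_obs))
use = np.ones(n_obs, dtype=bool)

# n_data is assumed to be the number of observations.
edf_obj = EffectiveDegreesOfFreedom(n_data=n_obs)
weights = edf_obj.compute_weights(X=X, use=use)  # weight values (np.ndarray)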

property p1: float

Threshold applied to edf. All edf below this value are set to weight=0

property p2: float

Threshold applied to edf. All edf above this value are set to weight=0
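A minimal sketch of just the thresholding behaviour described by p1 and p2. The helper below is hypothetical; the actual formula for weights at intermediate edf values is not reproduced here.

import numpy as np

def threshold_edf(edf: np.ndarray, p1: float, p2: float) -> np.ndarray:
    # edf below p1 or above p2 -> weight 0, per the property docstrings above.
    weights = np.ones_like(edf, dtype=float)
    weights[(edf < p1) | (edf > p2)] = 0.0
    return weights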

aurora.transfer_function.weights.edf_weights.effective_degrees_of_freedom_weights(X: Dataset, R: Dataset | None, edf_obj=None) → ndarray[source]

Computes the effective degrees of freedom weights. Emulates edfwts ("effective dof") from tranmt. Based on the Edfwts.m matlab code from iris_mt_scratch/egbert_codes-20210121T193218Z-001/egbert_codes/matlabPrototype_10-13-20/TF/functions/

Flow:

0. Initialize a weights vector (of 1's) the length of the "observation" axis.
1. Remove any nan in X, R.
2. Compute the weights on the reduced (no-nan) arrays of X and R.
3. Overwrite the weights vector for the non-nan entries.
4. Return weights; broadcast-multiply against the data to apply.
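A minimal sketch of this flow, assuming X and R are xarray Datasets whose variables share an observation axis; _compute_weights_no_nan is a hypothetical stand-in for the actual per-observation weight computation.

import numpy as np
import xarray as xr

def _compute_weights_no_nan(x: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the EDF weight computation on nan-free data.
    return np.ones(x.shape[1])

def edf_weights_flow_sketch(X: xr.Dataset, R: xr.Dataset | None = None) -> np.ndarray:
    x = X.to_array().data                      # channels x observations
    if R is not None:
        x = np.vstack([x, R.to_array().data])  # stack remote channels, if any

    weights = np.ones(x.shape[1])                             # 0. weights vector of 1's
    finite = np.all(np.isfinite(x), axis=0)                   # 1. locate nan-free observations
    weights[finite] = _compute_weights_no_nan(x[:, finite])   # 2./3. compute and overwrite
    return weights                                            # 4. caller broadcast-multiplies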

Development Notes: Note about the while loop: the variable "use" never changes length, it only flips its bits. The while loop exits when n_valid_observations == sum(use), i.e. the effective dof are all below threshold. The procedure is: estimate the dof, then "use" only the points whose dof are smaller than the threshold, then recompute the dof. On each pass the covariance matrix diagonals are smaller, because there is less energy in the time series entering the S, H calculation.
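The iteration pattern can be sketched as follows. example_edf_estimate is a toy stand-in for the S/H-based edf calculation (not aurora's formula), and the threshold is left as a free parameter.

import numpy as np

def example_edf_estimate(x: np.ndarray, use: np.ndarray) -> np.ndarray:
    # Toy stand-in: per-observation quadratic form with the inverse covariance
    # of the currently-used observations (a leverage-like quantity).
    xu = x[:, use]
    s = xu @ xu.conj().T / use.sum()   # data covariance S from the used points
    h = np.linalg.inv(s)               # H = inv(S)
    return np.real(np.sum(np.conj(x) * (h @ x), axis=0))

def iterate_use_mask(x: np.ndarray, threshold: float) -> np.ndarray:
    # "use" keeps its length and only flips bits; the loop exits once
    # sum(use) stops changing, i.e. all remaining edf are below threshold.
    n_valid_observations = x.shape[1]
    use = np.ones(x.shape[1], dtype=bool)
    while True:
        edf = example_edf_estimate(x, use)
        use = use & (edf <= threshold)
        if use.sum() == n_valid_observations:
            break
        n_valid_observations = use.sum()
    return use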

Parameters:
  • X (xr.Dataset) – The data for which to compute the weights.

  • R (xr.Dataset or None) – The remote reference data, if any.

  • edf_obj (EffectiveDegreesOfFreedom, optional) – Optionally pass a pre-configured EffectiveDegreesOfFreedom instance.
Returns:

weights – Weights for reducing leverage points.

Return type:

numpy.ndarray
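A hedged end-to-end usage sketch. The channel names, the "observation" dimension, and the Dataset construction are assumptions made only to produce a self-contained example; only the function signature and the broadcast-multiply application come from the documentation above.

import numpy as np
import xarray as xr
from aurora.transfer_function.weights.edf_weights import (
    effective_degrees_of_freedom_weights,
)

# Assumed structure: Datasets of complex channels along an "observation" dimension.
n_obs = 128
rng = np.random.default_rng(0)

def make_ds(channels):
    data = {
        ch: ("observation", rng.standard_normal(n_obs) + 1j * rng.standard_normal(n_obs))
        for ch in channels
    }
    return xr.Dataset(data)

X = make_ds(["ex", "ey"])   # local channels (names are placeholders)
R = make_ds(["hx", "hy"])   # remote reference channels (names are placeholders)

weights = effective_degrees_of_freedom_weights(X, R)  # numpy.ndarray of weights
X_weighted = X * weights    # broadcast-multiply against the data to apply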

Module contents