aurora.test_utils.synthetic package¶
Submodules¶
aurora.test_utils.synthetic.make_mth5_from_asc module¶
aurora.test_utils.synthetic.make_processing_configs module¶
This module contains methods for generating processing config objects that are used in aurora’s tests of processing synthetic data.
- aurora.test_utils.synthetic.make_processing_configs.create_test_run_config(test_case_id, kernel_dataset, matlab_or_fortran='', save='json', channel_nomenclature='default')[source]¶
Use config creator to generate a processing config file for the synthetic data.
- Parameters:
test_case_id (string) – Must be in SUPPORTED_TEST_CASE_IDS
kernel_dataset (aurora.transfer_function.kernel_dataset.KernelDataset) – Description of the dataset to process
matlab_or_fortran (str) – “”, “matlab”, “fortran”
save (str) – if this has the value “json”, a copy of the processing config will be written to a JSON file. The JSON file name is p.json_fn(), where p is the processing config object.
channel_nomenclature (str) – A label for the channel nomenclature. This should be one of the values in mt_metadata/transfer_functions/processing/aurora/standards/channel_nomenclatures.json, currently [“default”, “lemi12”, “lemi34”, “phoenix123”, “musgraves”].
- Returns:
p – The processing config object
- Return type:
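A minimal usage sketch, assuming the synthetic mth5 data have already been built and that “test1” is one of the SUPPORTED_TEST_CASE_IDS; the kernel dataset comes from the get_example_kernel_dataset helper documented further down this page:

    from aurora.test_utils.synthetic.make_processing_configs import create_test_run_config
    from aurora.test_utils.synthetic.processing_helpers import get_example_kernel_dataset

    # Build a single-station kernel dataset from the synthetic data
    kernel_dataset = get_example_kernel_dataset()

    # Generate the processing config; save="json" also writes a copy to p.json_fn()
    p = create_test_run_config("test1", kernel_dataset, save="json")
    print(p.json_fn())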
- aurora.test_utils.synthetic.make_processing_configs.main()[source]¶
Allow the module to be called from the command line
- aurora.test_utils.synthetic.make_processing_configs.make_processing_config_and_kernel_dataset(config_keyword: str, station_id: str, remote_id: str | None = None, mth5s: list | tuple | None = None, channel_nomenclature: str | None = 'default')[source]¶
Gets the processing config and the tfk_dataset (see the usage sketch below).
- TODO: Move this to aurora/test_utils/synthetic/; it can then be used by test_fourier_coefficients to validate that the FCs are there before processing.
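A hedged sketch of a single-station call; the config keyword, station id, and mth5 filename (“test1”, “test1”, “test1.h5”) are illustrative assumptions rather than values documented on this page, and the two return values are inferred from the summary above:

    from aurora.test_utils.synthetic.make_processing_configs import (
        make_processing_config_and_kernel_dataset,
    )

    # "test1" / "test1.h5" are assumed, illustrative values for the synthetic station
    processing_config, tfk_dataset = make_processing_config_and_kernel_dataset(
        config_keyword="test1",
        station_id="test1",
        remote_id=None,          # single-station processing, no remote reference
        mth5s=["test1.h5"],      # path(s) to the synthetic mth5 file(s)
        channel_nomenclature="default",
    )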
- aurora.test_utils.synthetic.make_processing_configs.test_to_from_json()[source]¶
Intended to test that a processing config can be stored as JSON, then reloaded from JSON and remain equal to the original.
WORK IN PROGRESS – see mt_metadata Issue #222
Development Notes: TODO: This test should be completed and moved into tests. The JSON does not load into an mt_metadata object. The problem seems to be that at the run level of the processing config there is an intention to allow multiple time periods. This is reasonable: consider a station running for several months; we may want to process data from only certain chunks of the time series. However, the time-period reader does not seem to work as expected. A partial fix is on the fix_issue_222 branch of mt_metadata.
Related to issue #172
aurora.test_utils.synthetic.paths module¶
This module contains a class that helps manage data paths for testing aurora on synthetic data.
- Development Notes:
This class was built to handle Issue #303 (installation on read-only file system). https://github.com/simpeg/aurora/issues/303
- class aurora.test_utils.synthetic.paths.SyntheticTestPaths(sandbox_path=PosixPath('/home/runner/work/aurora/aurora/data/synthetic'), ascii_data_path=None)[source]¶
Bases: SyntheticTestPaths
The sandbox path must be a place that has write access. This class was created because on some installations we only have read access. Originally there was a data/ folder with the synthetic ASCII data stored there, and we created the mth5 and other data products in the same place.
Here we have an ascii_data_path which points at the ASCII files (which may be read-only), but we also accept a kwarg “sandbox_path”, which is writable; this is where the mth5 and other data products will get built. A short usage sketch follows the Methods table below.
TODO: consider creating a symlink in aurora’s legacy data path that points at the mth5 ascii files.
Methods
- mkdirs() – Makes the directories that the tests will write results to.
- writability_check(...)
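A short usage sketch, assuming default paths; the custom sandbox location shown is an arbitrary, illustrative path, and writability_check is omitted because its signature is not documented here:

    from pathlib import Path

    from aurora.test_utils.synthetic.paths import SyntheticTestPaths

    # Default paths: ascii_data_path from the package, sandbox_path in a writable location
    synthetic_test_paths = SyntheticTestPaths()
    synthetic_test_paths.mkdirs()  # create the folders the tests will write results to

    # A custom, writable sandbox can be supplied via the kwarg shown in the signature above
    custom_paths = SyntheticTestPaths(sandbox_path=Path("/tmp/aurora_synthetic_sandbox"))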
aurora.test_utils.synthetic.processing_helpers module¶
This module contains some helper functions that are called during the execution of aurora’s tests of processing on synthetic data.
- aurora.test_utils.synthetic.processing_helpers.get_example_kernel_dataset()[source]¶
- Creates a kernel dataset object from the synthetic data.
Helper function for synthetic tests.
- Returns:
kernel_dataset – The kernel dataset from a synthetic, single station mth5
- Return type:
aurora.transfer_function.kernel_dataset.KernelDataset
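A minimal call sketch; the helper takes no arguments and returns the KernelDataset described above:

    from aurora.test_utils.synthetic.processing_helpers import get_example_kernel_dataset

    # Wraps a synthetic, single-station mth5 in a KernelDataset for test processing
    kernel_dataset = get_example_kernel_dataset()
    print(type(kernel_dataset))  # aurora.transfer_function.kernel_dataset.KernelDataset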
aurora.test_utils.synthetic.rms_helpers module¶
This module contains methods associated with RMS calculations that are used in testing aurora processing on synthetic data.
- aurora.test_utils.synthetic.rms_helpers.assert_rms_misfit_ok(expected_rms_misfit, xy_or_yx, rho_rms_aurora, phi_rms_aurora, rho_tol=0.0001, phi_tol=0.0001) None [source]¶
Compares actual RMS misfit from processing against expected values. Raises an AssertionError if the test processing results differ from the expected values.
- aurora.test_utils.synthetic.rms_helpers.compute_rms(rho, phi, model_rho_a=100.0, model_phi=45.0, verbose=False)[source]¶
Computes the RMS between processing results (rho, phi) and model (rho, phi).
It is used to make annotations for comparative plots for synthetic data. It could also be used in general to compare different processing results, for example by replacing model_rho_a and model_phi with other processing results, or with other (non-uniform) model results.
- Parameters:
rho (numpy.ndarray) – 1D array of computed apparent resistivities (expected in Ohm-m)
phi (numpy.ndarray) – 1D array of computed phases (expected in degrees)
model_rho_a (float or numpy.ndarray) – if a numpy array, it must be the same shape as rho
model_phi (float or numpy.ndarray) – if a numpy array, it must be the same shape as phi.
- Returns:
rho_rms (float) – RMS misfit between the model apparent resistivity and the computed resistivity
phi_rms (float) – RMS misfit between the model phase (or phases) and the computed phase
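A self-contained numeric sketch: synthetic-looking apparent resistivity and phase arrays are compared against the default uniform model values (model_rho_a=100.0 Ohm-m, model_phi=45.0 degrees); the tuple unpacking follows the Returns section above:

    import numpy as np

    from aurora.test_utils.synthetic.rms_helpers import compute_rms

    # Fake "processing results": apparent resistivity near 100 Ohm-m, phase near 45 degrees
    rho = 100.0 + np.random.normal(0.0, 1.0, size=25)
    phi = 45.0 + np.random.normal(0.0, 0.5, size=25)

    # RMS misfit relative to the default uniform model
    rho_rms, phi_rms = compute_rms(rho, phi, verbose=True)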
- aurora.test_utils.synthetic.rms_helpers.get_expected_rms_misfit(test_case_id: str, emtf_version=None) dict [source]¶
Returns hard-coded expected results from synthetic data processing. These results are a benchmark against which test results are compared on push to GitHub.
- Parameters:
test_case_id (str) – Identifier of the synthetic test case
emtf_version –
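A hedged sketch; “test1” is an assumed test case id, and the key structure of the returned dict is not specified on this page:

    from aurora.test_utils.synthetic.rms_helpers import get_expected_rms_misfit

    # "test1" is an illustrative test case id (assumed to be supported)
    expected_rms_misfit = get_expected_rms_misfit("test1")
    print(expected_rms_misfit)  # dict of benchmark rho/phi RMS misfit values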