The dataloading tests rely on having some raw results data to load, and that data must be varied enough to exercise the various components of the data-loading code. In other words, effective testing requires a reasonable variety of input files. The repository itself does not contain sufficient results data for testing; a test set is available in a separate repository, TestingData. If test_dataloading_by_ej.py does not find results data, it will default to downloading the files from that repository.
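The fallback behavior can be sketched roughly as follows. This is a minimal sketch, not the actual implementation: the function names, the TestingData URL, and the use of git clone for the download are all assumptions.

```python
import subprocess
from pathlib import Path

# Hypothetical URL -- substitute the actual location of the TestingData repository.
TESTING_DATA_URL = "https://github.com/example-org/TestingData.git"

def results_present(results_dir: str = "input_results") -> bool:
    """Return True if the results directory exists and contains at least one entry."""
    p = Path(results_dir)
    return p.is_dir() and any(p.rglob("*"))

def ensure_test_data(results_dir: str = "input_results") -> None:
    """If no raw results are found, download the test set (sketch only)."""
    if not results_present(results_dir):
        subprocess.run(["git", "clone", TESTING_DATA_URL, results_dir], check=True)
```

In practice the tests handle this check for you; the sketch only illustrates the "download if missing" default described above.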
Call the tests from a working directory with the following structure and files:
.
+-- input_results
|   +-- Alabama
|   |   +-- <Alabama results file>
|   |   +-- <maybe another Alabama results file>
|   +-- Alaska
|   |   +-- <Alaska results file>
|   +-- American-Samoa
|   |   +-- <American Samoa results file>
|   +-- <etc>
|
+-- reports_and_plots
+-- run_time.ini
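If you are setting up this working directory from scratch, the skeleton can be created with a few lines of Python. The jurisdiction names below are just the examples from the tree above, and run_time.ini is only touched, not filled in with real parameters:

```python
from pathlib import Path

def make_skeleton(root: str = ".") -> None:
    """Create the working-directory layout expected by the dataloading tests."""
    base = Path(root)
    # Example jurisdictions only; add directories for whichever jurisdictions you test.
    for jurisdiction in ["Alabama", "Alaska", "American-Samoa"]:
        (base / "input_results" / jurisdiction).mkdir(parents=True, exist_ok=True)
    (base / "reports_and_plots").mkdir(exist_ok=True)
    (base / "run_time.ini").touch()  # must still be given real contents
```

The results files themselves (from TestingData or elsewhere) then go into the per-jurisdiction directories.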
The file run_time.ini can be the same as in the Sample Dataloading Session.
The tests in test_dataloading_by_ej.py will attempt to load all raw results files in input_results that are specified by some file in the ini_file_for_results directory. You can check which jurisdictions had files loaded:
- if the test is successful, look at the compare_* directories in the reports_and_plots directory.
- if the test fails, look at the output from the test.
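For the success case, a quick way to see which jurisdictions had files loaded is to list the compare_* directories programmatically. This sketch assumes only the compare_* naming convention described above:

```python
from pathlib import Path

def loaded_comparisons(reports_dir: str = "reports_and_plots") -> list[str]:
    """Return the names of all compare_* directories under reports_dir, sorted."""
    return sorted(p.name for p in Path(reports_dir).glob("compare_*") if p.is_dir())
```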
You will need pytest installed on your system (see the pytest installation instructions if necessary). Commands are run from the shell, referencing the local path to the repository:
- dataloading routines:
  pytest path/to/repo/tests/dataloading_tests
- jurisdiction prep routines:
  pytest path/to/repo/tests/jurisdiction_prepper_tests/
- analysis routines:
  pytest path/to/repo/tests/analyzer_tests/