How to add large SPI data sets
last update of this page: 08.12.2003

Large data sets cannot be processed in one go by the SPI instrument-specific software. It is best to process smaller data sets (e.g. one revolution) at a time and then to add the results after the binning step. This page describes the necessary steps. You need spiaddobs 6.0.20 or higher to perform the analysis as described here.
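For orientation, the examples below assume a directory layout roughly like the following; the per-revolution directory names (rev_0044, rev_0045, ...) are only placeholders for however you have organised your sub-analyses:

obs/
  rev_0044/          # analysis of revolution 44, run up to BIN_I
  rev_0045/          # analysis of revolution 45, run up to BIN_I
  summed_analysis/   # created in the steps below
    spi/             # output of spiaddobs, spiback and spiros/spiskymax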
- Run the same analysis on the data subsets, i.e. use the same binning and the same number of detectors.
- Run spi_science_analysis up to the BIN_I level on each data set.
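A minimal sketch of the per-subset runs, assuming the layout above; how the parameters are supplied is not spelled out here, the script queries them as in a normal analysis and you stop the analysis at the BIN_I level:

# from within obs/: run the standard analysis in every subset directory, stopping at BIN_I
for rev in rev_0044 rev_0045; do
  ( cd $rev && spi_science_analysis )   # answer the parameter prompts as usual
done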
- Create a new directory in your obs/ directory:
mkdir summed_analysis
cd summed_analysis
- Create a spi/ subdirectory:
mkdir spi
- Create a list of the observation groups you want to add. Use this og_list.fits template file and modify it: fill in the relative locations of the observation groups you want to add in the table, and add further rows if necessary.
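If you prefer to inspect or edit the table from the command line, the standard FTOOLS can be used; the extension number [1] matches the spiaddobs call below, but the exact column layout of og_list.fits depends on the template you start from:

ftlist "og_list.fits[1]" T    # print the current table rows
fv og_list.fits &             # edit the relative locations interactively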
- Run spiaddobs:
spiaddobs rogroup="og_list.fits[1]" \
truncate=0 \
process_psd_response=0 \
process_background=0 \
stat_err=1 \
counts_output_file="spi/evts_det.fits(SPI.-OBS.-DSP.tpl)"
- Run spiback as a standalone tool (the spi_science_analysis script does not work here, because the observation group is missing or incomplete). The DFEE option will not work!
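A hedged sketch of the standalone spiback step: since the parameter set depends on the OSA version, the tool is simply started from within summed_analysis/ and prompts for its parameters; point the input DOLs at the files written by spiaddobs into spi/:

spiback    # answer the prompts; do not select the DFEE background method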
- Use a source catalogue from one of the data subsets, or one that covers the region of the observation groups you are adding. You cannot run cat_extract on the observation group you produced, as there is no science window group index!
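For example, the catalogue written in one of the sub-analyses can simply be copied over; the file name source_cat.fits is only a placeholder for whatever your cat_extract run actually produced:

cp ../rev_0044/spi/source_cat.fits spi/    # run from within summed_analysis/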
- Run spiros / spiskymax as a standalone tool.
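As with spiback, the imaging/spectral tools are started directly rather than through the script; again only a sketch, with the tool prompting for its parameters:

spiros    # or spiskymax; run from within summed_analysis/ and supply the copied catalogue when asked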