Satpy’s Documentation
Satpy is a python library for reading, manipulating, and writing data from
remote-sensing earth-observing satellite instruments. Satpy
provides users with readers that convert geophysical parameters from various
file formats to the common Xarray DataArray and Dataset
classes for easier interoperability with other
scientific python libraries. Satpy also provides interfaces for creating
RGB (Red/Green/Blue) images and other composite types by combining data
from multiple instrument bands or products. Various atmospheric corrections
and visual enhancements are provided for improving the usefulness and quality
of output images. Output data can be written to
multiple output file formats such as PNG, GeoTIFF, and CF standard NetCDF
files. Satpy also allows users to resample data to geographic projected grids
(areas). Satpy is maintained by the open source
Pytroll group.
The Satpy library acts as a high-level abstraction layer on top of other libraries maintained by the Pytroll group, such as pyresample.
Go to the Satpy project page for source code and downloads.
Satpy is designed to be easily extendable to support any earth observation satellite by the creation of plugins (readers, compositors, writers, etc). The table at the bottom of this page shows the input formats supported by the base Satpy installation.
Note
Satpy’s interfaces are not guaranteed stable and may change until version 1.0 when backwards compatibility will be a main focus.
Getting Help
Having trouble installing or using Satpy? Feel free to ask questions at any of the contact methods for the PyTroll group here or file an issue on Satpy’s GitHub page.
Documentation
Overview
Satpy is designed to provide easy access to common operations for processing meteorological remote sensing data. Any details needed to perform these operations are configured internally to Satpy meaning users should not have to worry about how something is done, only ask for what they want. Most of the features provided by Satpy can be configured by keyword arguments (see the API Documentation or other specific section for more details). For more complex customizations or added features Satpy uses a set of configuration files that can be modified by the user. The various components and concepts of Satpy are described below. The Quickstart guide also provides simple example code for the available features of Satpy.
Scene
Satpy provides most of its functionality through the Scene
class. This acts as a container for the datasets
being operated on and provides methods for acting on those datasets. It
attempts to reduce the amount of low-level knowledge needed by the user while
still providing a pythonic interface to the functionality underneath.
A Scene object represents a single geographic region of data, typically at a single continuous time range. It is possible to combine Scenes to form a Scene with multiple regions or multiple time observations, but it is not guaranteed that all functionality works in these situations.
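As a minimal sketch (the file path is a placeholder and the reader name is just one example; see the Quickstart for a full walkthrough):

from glob import glob
from satpy import Scene

# collect input files and tell Satpy which reader should handle them
filenames = glob("/path/to/data/*")
scn = Scene(reader="seviri_l1b_hrit", filenames=filenames)
scn.load(["IR_108"])  # load a dataset by name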
DataArrays
Satpy’s lower-level container for data is the xarray.DataArray.
For historical reasons DataArrays are often
referred to as “Datasets” in Satpy. These objects act similar to normal
numpy arrays, but add additional metadata and attributes for describing the
data. Metadata is stored in a .attrs
dictionary and named dimensions can
be accessed in a .dims
attribute, along with other attributes.
In most use cases these objects can be operated on like normal NumPy arrays
with special care taken to make sure the metadata dictionary contains
expected values. See the XArray documentation for more info on handling
xarray.DataArray objects.
Additionally, Satpy uses a special form of DataArrays where data is stored
in dask.array.Array objects, which allows Satpy to perform
multi-threaded lazy operations, vastly improving processing performance.
For help on developing with dask and xarray see
Migrating to xarray and dask or the documentation for the specific
project.
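For illustration, a DataArray of the kind Satpy works with can be built by hand. This is a sketch only; real readers attach much richer metadata:

import dask.array as da
import xarray as xr

# lazy, dask-backed data with named dimensions and a metadata dictionary
data = da.zeros((3712, 3712), chunks=4096, dtype="float32")
arr = xr.DataArray(data, dims=("y", "x"), attrs={"name": "IR_108", "units": "K"})
print(arr.dims)            # ('y', 'x')
print(arr.attrs["units"])  # 'K'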
To uniquely identify DataArray objects Satpy uses the DataID class. A
DataID consists of various pieces of available metadata. This usually
includes name and wavelength as identifying metadata, but can also include
resolution, calibration, polarization, and additional modifiers
to further distinguish one dataset from another. For more information on DataID
objects, have a look at Satpy internal workings: having a look under the hood.
Warning
XArray includes other object types called “Datasets”. These are different from the “Datasets” mentioned in Satpy.
Data chunks
The usage of dask as the foundation for Satpy’s operations means that the underlying data is chunked, that is, cut into smaller pieces that can then be processed in parallel. Information on dask’s chunking can be found in the dask documentation here: https://docs.dask.org/en/stable/array-chunks.html. The size of these chunks can have a significant impact on the performance of Satpy, so to achieve the best performance it can be necessary to adjust it.
The default chunk size used by Satpy can be configured by using the following around your code:

import dask

with dask.config.set({"array.chunk-size": "32MiB"}):
    # your code here
Or by using:
dask.config.set({"array.chunk-size": "32MiB"})
at the top of your code.
There are other ways to set dask configuration items, including configuration files or environment variables, see here: https://docs.dask.org/en/stable/configuration.html
The value of the chunk-size can be given in different ways, see here: https://docs.dask.org/en/stable/api.html#dask.utils.parse_bytes
The default value for this parameter is 128MiB, which can translate to chunk sizes of 4096x4096 for 64-bit float arrays.
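That figure can be verified with a little arithmetic:

# 128 MiB per chunk divided by 8 bytes per 64-bit float
n_elements = 128 * 2**20 // 8   # 16777216 elements per chunk
side = int(n_elements ** 0.5)   # 4096 -> a 4096x4096 square chunk
print(n_elements, side)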
Note, however, that some readers might choose a liberal interpretation of the chunk size, which will not necessarily result in a square chunk, or even in a chunk of exactly the requested size. The motivation behind this is that data stored as stripes may load much faster if the horizontal striping is kept as much as possible instead of cutting the data into square chunks. However, Satpy readers should respect the overall chunk size when it makes sense.
Note
The legacy way of providing the chunk size in Satpy is the
PYTROLL_CHUNK_SIZE environment variable. This is now pending deprecation,
so an equivalent way to achieve the same result is by using the
DASK_ARRAY__CHUNK_SIZE environment variable. The value to assign to the
variable is the square of the legacy value, multiplied by the size of the array
data type at hand, so for example, for 64-bit floats:
export DASK_ARRAY__CHUNK_SIZE=134217728
which is the same as:
export DASK_ARRAY__CHUNK_SIZE="128MiB"
is equivalent to the deprecated:
export PYTROLL_CHUNK_SIZE=4096
Reading
One of the biggest advantages of using Satpy is the large number of input file formats that it can read. It encapsulates this functionality into individual Reader objects. Satpy Readers handle all of the complexity of reading whatever format they represent. Meteorological satellite file formats can be extremely complex and formats are rarely reused across satellites or instruments. No matter the format, Satpy’s Reader interface is meant to provide a consistent data loading interface while still providing flexibility to add new complex file formats.
Compositing
Many users of satellite imagery combine multiple sensor channels to bring out certain features of the data. This includes using one dataset to enhance another, combining 3 or more datasets in to an RGB image, or any other combination of datasets. Satpy comes with a lot of common composite combinations built-in and allows the user to request them like any other dataset. Satpy also makes it possible to create your own custom composites and have Satpy treat them like any other dataset. See Composites for more information.
Resampling
Satellite imagery data comes in two forms when it comes to geolocation, native satellite swath coordinates and uniform gridded projection coordinates. It is also common to see the channels from a single sensor in multiple resolutions, making it complicated to combine or compare the datasets. Many use cases of satellite data require the data to be in a certain projection other than the native projection or to have output imagery cover a specific area of interest. Satpy makes it easy to resample datasets to allow for users to combine them or grid them to these projections or areas of interest. Satpy uses the PyTroll pyresample package to provide nearest neighbor, bilinear, or elliptical weighted averaging resampling methods. See Resampling for more information.
Enhancements
When making images from satellite data the data has to be manipulated to be compatible with the output image format and still look good to the human eye. Satpy calls this functionality “enhancing” the data, also commonly called scaling or stretching the data. This process can become complicated not just because of how subjective the quality of an image can be, but also because of historical expectations of forecasters and other users for how the data should look. Satpy tries to hide the complexity of all the possible enhancement methods from the user and just provide the best looking image by default. Satpy still makes it possible to customize these procedures, but in most cases it shouldn’t be necessary. See the documentation on Writing for more information on what’s possible for output formats and enhancing images.
Writing
Satpy is designed to make data loading, manipulating, and analysis easy. However, the best way to get satellite imagery data out to as many users as possible is to make it easy to save it in multiple formats. Satpy allows users to save data in image formats like PNG or GeoTIFF as well as data file formats like NetCDF. Each format’s complexity is hidden behind the interface of individual Writer objects and includes keyword arguments for accessing specific format features like compression and output data type. See the Writing documentation for the available writers and how to use them.
Installation Instructions
Satpy is available from conda-forge (via conda), PyPI (via pip), or from source (via pip+git). The below instructions show how to install stable versions of Satpy. For a development/unstable version see Development installation.
Conda-based Installation
Satpy can be installed into a conda environment by installing the package from the conda-forge channel. If you do not already have access to a conda installation, we recommend installing miniconda for the smallest and easiest installation.
The commands below will use -c conda-forge
to make sure packages are
downloaded from the conda-forge channel. Alternatively, you can tell conda
to always use conda-forge by running:
$ conda config --add channels conda-forge
In a new conda environment
We recommend creating a separate environment for your work with Satpy. To create a new environment and install Satpy all in one command you can run:
$ conda create -c conda-forge -n my_satpy_env python satpy
You must then activate the environment so any future python or conda commands will use this environment.
$ conda activate my_satpy_env
Creating an environment with Satpy (and optionally other packages) in one command like this is generally faster than creating an environment first and installing Satpy and other packages later (see the section below).
In an existing environment
Note
It is recommended that when first exploring Satpy, you create a new environment specifically for this rather than modifying one used for other work.
If you already have a conda environment, it is activated, and would like to install Satpy into it, run the following:
$ conda install -c conda-forge satpy
Note
Satpy only automatically installs the dependencies needed to process the
most common use cases. Additional dependencies may need to be installed
with conda or pip if import errors are encountered. To check your
installation use the check_satpy function discussed here.
Pip-based Installation
Satpy is available from the Python Packaging Index (PyPI). A sandbox environment for satpy can be created using Virtualenv.
To install the satpy package and the minimum amount of python dependencies:
$ pip install satpy
Additional dependencies can be installed as “extras” and are grouped by reader, writer, or feature added. Extras available can be found in the setup.py file. They can be installed individually:
$ pip install "satpy[viirs_sdr]"
Or all at once, although this isn’t recommended due to the large number of dependencies:
$ pip install "satpy[all]"
Ubuntu System Python Installation
To install Satpy on an Ubuntu system we recommend using virtual environments to separate Satpy and its dependencies from the rest of the system. Note that these instructions require using “sudo” privileges which may not be available to all users and can be very dangerous. The following instructions attempt to install some Satpy dependencies using the Ubuntu apt package manager to ease installation. Replace /path/to/pytroll-env with the environment to be created.
$ sudo apt-get install python-pip python-gdal
$ sudo pip install virtualenv
$ virtualenv /path/to/pytroll-env
$ source /path/to/pytroll-env/bin/activate
$ pip install satpy
Configuration
Satpy has two levels of configuration that allow you to control how Satpy and its various components behave. There are a series of “settings” that change the global Satpy behavior. There are also a series of “component configuration” YAML files for controlling the complex functionality in readers, compositors, writers, and other Satpy components that can’t be controlled with traditional keyword arguments.
Settings
There are configuration parameters in Satpy that are not specific to one component and control more global behavior of Satpy. These parameters can be set in one of three ways:
Environment variable
YAML file
At runtime with satpy.config
This functionality is provided by the donfig library. The currently available settings are described below. Each option is available from all three methods. If specified as an environment variable or specified in the YAML file on disk, it must be set before Satpy is imported.
YAML Configuration
YAML files that include these parameters can be in any of the following locations:
<python environment prefix>/etc/satpy/satpy.yaml
<user_config_dir>/satpy.yaml (see below)
~/.satpy/satpy.yaml
<SATPY_CONFIG_PATH>/satpy.yaml (see Component Configuration Path below)
The above user_config_dir is provided by the appdirs package and
differs by operating system. Typical user config directories are:
Mac OSX:
~/Library/Preferences/satpy
Unix/Linux:
~/.config/satpy
Windows:
C:\\Users\\<username>\\AppData\\Local\\pytroll\\satpy
All YAML files found from the above paths will be merged into one
configuration object (accessed via satpy.config).
The YAML contents should be a simple mapping of configuration key to its
value. For example:
cache_dir: "/tmp"
data_dir: "/tmp"
Lastly, it is possible to specify an additional config path to the above
options by setting the environment variable SATPY_CONFIG. The file
specified with this environment variable will be added last after all of the
above paths have been merged together.
At runtime
After import, the values can be customized at runtime by doing:
import satpy
satpy.config.set(cache_dir="/my/new/cache/path")
# ... normal satpy code ...
Or for specific blocks of code:
import satpy
with satpy.config.set(cache_dir="/my/new/cache/path"):
# ... some satpy code ...
# ... code using the original cache_dir
Similarly, if you need to access one of the values you can
use the satpy.config.get method.
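For example:

import satpy

print(satpy.config.get("cache_dir"))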
Cache Directory
Environment variable:
SATPY_CACHE_DIR
YAML/Config Key:
cache_dir
Default: See below
Directory where any files cached by Satpy will be stored. This directory is not necessarily cleared out by Satpy, but is rarely used without explicitly being enabled by the user. This defaults to a different path depending on your operating system following the appdirs “user cache dir”.
Cache Longitudes and Latitudes
Environment variable:
SATPY_CACHE_LONLATS
YAML/Config Key:
cache_lonlats
Default:
False
Whether or not generated longitude and latitude coordinates should be cached
to on-disk zarr arrays. Currently this only works in very specific cases,
mainly the lon/lats that are generated when computing sensor and solar zenith
and azimuth angles used in various modifiers and compositors. This caching is
only done for AreaDefinition-based geolocation, not SwathDefinition.
Arrays are stored in cache_dir (see above).
When setting this as an environment variable, it should be set to the
string equivalent of the Python boolean values: "True" or "False".
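For example, in a shell:

export SATPY_CACHE_LONLATS="True"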
See also cache_sensor_angles below.
Warning
This caching does not limit the number of entries nor does it expire old entries. It is up to the user to manage the contents of the cache directory.
Cache Sensor Angles
Environment variable:
SATPY_CACHE_SENSOR_ANGLES
YAML/Config Key:
cache_sensor_angles
Default:
False
Whether or not generated sensor azimuth and sensor zenith angles should be
cached to on-disk zarr arrays. These angles are primarily used in certain
modifiers and compositors. This caching is only done for
AreaDefinition-based geolocation, not SwathDefinition.
Arrays are stored in cache_dir (see above).
This caching requires producing an estimate of the angles to avoid needing to generate new angles for every new data case. This happens because the angle generation depends on the observation time of the data and the position of the satellite (longitude, latitude, altitude). The angles are estimated by using a constant observation time for all cases (maximum ~1e-10 error) and by rounding satellite position coordinates to the nearest tenth of a degree for longitude and latitude and the nearest tenth of a meter for altitude (maximum ~0.058 error). Note these estimations are only done if caching is enabled (this parameter is True).
When setting this as an environment variable, it should be set to the
string equivalent of the Python boolean values: "True" or "False".
See also cache_lonlats above.
Warning
This caching does not limit the number of entries nor does it expire old entries. It is up to the user to manage the contents of the cache directory.
Component Configuration Path
Environment variable:
SATPY_CONFIG_PATH
YAML/Config Key:
config_path
Default:
[]
Base directory, or directories, where Satpy component YAML configuration files
are stored. Satpy expects configuration files for specific component types to
be in appropriate subdirectories (ex. readers, writers, etc), but
these subdirectories should not be included in the config_path.
For example, if you have custom composites configured in
/my/config/dir/etc/composites/visir.yaml, then config_path should
include /my/config/dir/etc for Satpy to find this configuration file
when searching for composites. This option replaces the legacy
PPP_CONFIG_DIR environment variable.
Note that this value must be a list. In Python, this could be set by doing:
satpy.config.set(config_path=['/path/custom1', '/path/custom2'])
If setting an environment variable then it must be a
colon-separated (:) string on Linux/OSX or a semicolon-separated (;)
string on Windows, and must be set before calling/importing Satpy.
If the environment variable is a single path it will be converted to a list
when Satpy is imported.
export SATPY_CONFIG_PATH="/path/custom1:/path/custom2"
On Windows, with paths on the C: drive, these paths would be:
set SATPY_CONFIG_PATH="C:/path/custom1;C:/path/custom2"
Satpy will always include the builtin configuration files that it is distributed with regardless of this setting. When a component supports merging of configuration files, they are merged in reverse order. This means “base” configuration paths should be at the end of the list and custom/user paths should be at the beginning of the list.
Data Directory
Environment variable:
SATPY_DATA_DIR
YAML/Config Key:
data_dir
Default: See below
Directory where any data Satpy needs to perform certain operations will be
stored. This replaces the legacy SATPY_ANCPATH environment variable. This
defaults to a different path depending on your operating system, following the
appdirs “user data dir”.
Demo Data Directory
Environment variable:
SATPY_DEMO_DATA_DIR
YAML/Config Key:
demo_data_dir
Default: <current working directory>
Directory where demo data functions will download data files to. Available
demo data functions can be found in the satpy.demo subpackage.
Download Auxiliary Data
Environment variable:
SATPY_DOWNLOAD_AUX
YAML/Config Key:
download_aux
Default: True
Whether to allow downloading of auxiliary files for certain Satpy operations.
See Auxiliary Data Download for more information. If True then Satpy
will download and cache any necessary data files to the Data Directory
when needed. If False then pre-downloaded files will be used, but any
other files will not be downloaded or checked for validity.
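For example, to force Satpy to rely only on pre-downloaded files (a sketch):

import satpy

satpy.config.set(download_aux=False)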
Sensor Angles Position Preference
Environment variable:
SATPY_SENSOR_ANGLES_POSITION_PREFERENCE
YAML/Config Key:
sensor_angles_position_preference
Default: “actual”
Control which satellite position should be preferred when generating sensor
azimuth and sensor zenith angles. This value is passed directly to the
get_satpos() function. See the documentation for that
function for more information on how the value will be used. This is used
as part of the get_angles() and
get_satellite_zenith_angle() functions, which are
used by multiple modifiers and composites including the default rayleigh
correction.
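For example, assuming "nominal" is one of the preferences accepted by get_satpos() (a sketch; check that function's documentation for the valid values):

import satpy

satpy.config.set(sensor_angles_position_preference="nominal")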
Clipping Negative Infrared Radiances
Environment variable:
SATPY_READERS__CLIP_NEGATIVE_RADIANCES
YAML/Config Key:
readers.clip_negative_radiances
Default: False
Whether to clip negative infrared radiances to the minimum allowable value
before computing the brightness temperature.
If clip_negative_radiances=False, pixels with negative radiances will have
np.nan brightness temperatures.
Clipping of negative radiances is currently implemented for the following readers:
abi_l1b
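Like the other settings, this can be changed at runtime; a sketch using the dotted-key form:

import satpy

satpy.config.set({"readers.clip_negative_radiances": True})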
Temporary Directory
Environment variable:
SATPY_TMP_DIR
YAML/Config Key:
tmp_dir
Default: tempfile.gettempdir()
Directory where Satpy creates temporary files, for example decompressed input files. Default depends on the operating system.
Component Configuration
Much of the functionality of Satpy comes from the various components it uses, like readers, writers, compositors, and enhancements. These components are configured for reuse from YAML files stored inside Satpy or in custom user configuration files. Custom directories can be provided by specifying the config_path setting mentioned above.
To create and use your own custom component configuration you should:
1. Create a directory to store your new custom YAML configuration files. The files for each component will go in a subdirectory specific to that component (ex. composites, enhancements, readers, writers).
2. Set the Satpy config_path to point to your new directory. This could be done by setting the environment variable SATPY_CONFIG_PATH to your custom directory (don’t include the component sub-directory) or one of the other methods for setting this path (see the sketch after this list).
3. Create YAML configuration files with your custom YAML configuration. In most cases there is no need to copy configuration from the builtin Satpy files as these will be merged with your custom files.
4. If your custom configuration uses custom Python code, this code must be importable by Python. This means your code must either be installed in your Python environment or you must set your PYTHONPATH to the location of the modules.
5. Run your Satpy code and access your custom components like any of the builtin components.
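A sketch combining these steps (the directory is a placeholder):

# assumed layout:
#   /my/config/dir/etc/composites/visir.yaml      <- custom composite recipes
#   /my/config/dir/etc/enhancements/generic.yaml  <- custom enhancements
import satpy

satpy.config.set(config_path=["/my/config/dir/etc"])  # a list, without the component subdirectory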
Downloading Data
One of the main features of Satpy is its ability to read various satellite
data formats. However, it currently only provides limited methods for
downloading data from remote sources and these methods are limited to demo
data for Pytroll examples.
See the examples and the demo
API documentation for details.
Otherwise, Satpy assumes all data is available
through the local system, either as a local directory or network
mounted file systems. Certain readers that use xarray
to open data files
may be able to load files from remote systems by using OpenDAP or similar
protocols.
As a user there are two options for getting access to data:
Download data to your local machine.
Connect to a remote system that already has access to data.
The most common case of a remote system having access to data is with a cloud computing service like Google Cloud Platform (GCP) or Amazon Web Services (AWS). Another possible case is an organization having direct broadcast antennas where they receive data directly from the satellite or satellite mission organization (NOAA, NASA, EUMETSAT, etc). In these cases data is usually available as a mounted network file system and can be accessed like a normal local path (with the added latency of network communications).
Below are some data sources that provide data that can be read by Satpy. If you know of others please let us know by either creating a GitHub issue or pull request.
NOAA GOES on Amazon Web Services
Associated Readers:
abi_l1b
In addition to the pages above, Brian Blaylock’s GOES-2-Go
python package is useful for downloading GOES data to your local machine.
Brian also prepared some instructions for using the rclone tool for
downloading AWS data to a local machine. The instructions can be found here.
NOAA GOES on Google Cloud Platform
GOES-16
Associated Readers:
abi_l1b
GOES-17
Associated Readers:
abi_l1b
NOAA CLASS
Associated Readers:
viirs_sdr
NASA VIIRS Atmosphere SIPS
Associated Readers:
viirs_l1b
EUMETSAT Data Center
Examples
Satpy examples are available as Jupyter Notebooks on the pytroll-examples git repository. Some examples are described in further detail as separate pages in this documentation. They include python code, PNG images, and descriptions of what the example is doing. Below is a list of some of the examples and a brief summary. Additional examples can be found at the repository mentioned above or as explanations in the various sections of this documentation.
MTG FCI - Natural Color Example
Satpy includes a reader for the Meteosat Third Generation (MTG) FCI Level 1c data. The following Python code snippet shows an example of how to use Satpy to generate a Natural Color RGB composite over the European area.
Warning
This example is currently a work in progress. Some of the below code may not work with the currently released version of Satpy. Additional updates to this example will be coming soon.
Note
For reading compressed data, a decompression library is
needed. Either install the FCIDECOMP library (see the FCI L1 Product User
Guide), or install the hdf5plugin package with:
pip install hdf5plugin
or:
conda install hdf5plugin -c conda-forge
If you use hdf5plugin, make sure to add the line import hdf5plugin
at the top of your script.
from satpy.scene import Scene
from satpy import find_files_and_readers
# define path to FCI test data folder
path_to_data = 'your/path/to/FCI/data/folder/'
# find files and assign the FCI reader
files = find_files_and_readers(base_dir=path_to_data, reader='fci_l1c_nc')
# create an FCI scene from the selected files
scn = Scene(filenames=files)
# print available dataset names for this scene (e.g. 'vis_04', 'vis_05','ir_38',...)
print(scn.available_dataset_names())
# print available composite names for this scene (e.g. 'natural_color', 'airmass', 'convection',...)
print(scn.available_composite_names())
# load the datasets/composites of interest
scn.load(['natural_color','vis_04'], upper_right_corner='NE')
# note: the data inside the FCI files is stored upside down. The upper_right_corner='NE' argument
# flips it automatically in upright position.
# you can access the values of a dataset as a Numpy array with
vis_04_values = scn['vis_04'].values
# resample the scene to a specified area (e.g. "eurol" for Europe at 1 km resolution)
scn_resampled = scn.resample("eurol", resampler='nearest', radius_of_influence=5000)
# save the resampled dataset/composite to disk
scn_resampled.save_dataset("natural_color", filename='./fci_natural_color_resampled.png')
EPS-SG VII netCDF Example
Satpy includes a reader for the EPS-SG Visible and Infrared Imager (VII) Level 1b data. The following Python code snippet shows an example of how to use Satpy to read a channel, and resample and save the image over the European area.
Warning
This example is currently a work in progress. Some of the below code may not work with the currently released version of Satpy. Additional updates to this example will be coming soon.
import glob
from satpy.scene import Scene
# find the file/files to be read
filenames = glob.glob('/path/to/VII/data/W_xx-eumetsat-darmstadt,SAT,SGA1-VII-1B-RAD_C_EUMT_20191007055100*')
# create a VII scene from the selected granule(s)
scn = Scene(filenames=filenames, reader='vii_l1b_nc')
# print available dataset names for this scene
print(scn.available_dataset_names())
# load the datasets of interest
# NOTE: only radiances are supported for test data
scn.load(["vii_668"], calibration="radiance")
# resample the scene to a specified area (e.g. "eurol" for Europe at 1 km resolution)
eur = scn.resample("eurol", resampler='nearest', radius_of_influence=5000)
# save the resampled data to disk
eur.save_dataset("vii_668", filename='./vii_668_eur.png')
Satpy quickstart for loading and processing satellite data, with MSG data in this example
Plot a single VIIRS SDR granule using Cartopy and matplotlib
Generate and resample a rayleigh corrected true color RGB from Himawari-8 AHI data
Reading OLCI data from Sentinel 3 with Pytroll/Satpy
Reading MSI data from Sentinel 2 with Pytroll/Satpy
Generate a rayleigh corrected true color RGB from VIIRS I- and M-bands
Generate and resample a rayleigh corrected true color RGB from MODIS
Generate a false color composite RGB from SAR-C polarized datasets
Reading Level 2 EARS-NWC cloud products
Reading Level 2 MAIA cloud products
Generate Natural Color RGB from Meteosat Third Generation (MTG) FCI Level 1c data
Reading EPS-SG Visible and Infrared Imager (VII) with Pytroll: Read and visualize EPS-SG VII L1B test data and save it to an image
Quickstart
Loading and accessing data
To work with weather satellite data you must create a
Scene
object. Satpy does not currently provide an
interface to download satellite data, it assumes that the data is on a
local hard disk already. In order for Satpy to get access to the data the
Scene must be told what files to read and what
Satpy Reader should read them:
>>> from satpy import Scene
>>> from glob import glob
>>> filenames = glob("/home/a001673/data/satellite/Meteosat-10/seviri/lvl1.5/2015/04/20/HRIT/*201504201000*")
>>> global_scene = Scene(reader="seviri_l1b_hrit", filenames=filenames)
To load data from the files use the Scene.load method.
Printing the Scene object will list each of the
xarray.DataArray objects currently loaded:
>>> global_scene.load(['0.8', '1.6', '10.8'])
>>> print(global_scene)
<xarray.DataArray 'reshape-d66223a8e05819b890c4535bc7e74356' (y: 3712, x: 3712)>
dask.array<shape=(3712, 3712), dtype=float32, chunksize=(464, 3712)>
Coordinates:
* x (x) float64 5.567e+06 5.564e+06 5.561e+06 5.558e+06 5.555e+06 ...
* y (y) float64 -5.567e+06 -5.564e+06 -5.561e+06 -5.558e+06 ...
Attributes:
orbital_parameters: {'projection_longitude': 0.0, 'pr...
sensor: seviri
platform_name: Meteosat-11
standard_name: brightness_temperature
units: K
wavelength: (9.8, 10.8, 11.8)
start_time: 2018-02-28 15:00:10.814000
end_time: 2018-02-28 15:12:43.956000
area: Area ID: some_area_name\nDescription: On-the-fly ar...
name: IR_108
resolution: 3000.40316582
calibration: brightness_temperature
polarization: None
level: None
modifiers: ()
ancillary_variables: []
<xarray.DataArray 'reshape-1982d32298aca15acb42c481fd74a629' (y: 3712, x: 3712)>
dask.array<shape=(3712, 3712), dtype=float32, chunksize=(464, 3712)>
Coordinates:
* x (x) float64 5.567e+06 5.564e+06 5.561e+06 5.558e+06 5.555e+06 ...
* y (y) float64 -5.567e+06 -5.564e+06 -5.561e+06 -5.558e+06 ...
Attributes:
orbital_parameters: {'projection_longitude': 0.0, 'pr...
sensor: seviri
platform_name: Meteosat-11
standard_name: toa_bidirectional_reflectance
units: %
wavelength: (0.74, 0.81, 0.88)
start_time: 2018-02-28 15:00:10.814000
end_time: 2018-02-28 15:12:43.956000
area: Area ID: some_area_name\nDescription: On-the-fly ar...
name: VIS008
resolution: 3000.40316582
calibration: reflectance
polarization: None
level: None
modifiers: ()
ancillary_variables: []
<xarray.DataArray 'reshape-e86d03c30ce754995ff9da484c0dc338' (y: 3712, x: 3712)>
dask.array<shape=(3712, 3712), dtype=float32, chunksize=(464, 3712)>
Coordinates:
* x (x) float64 5.567e+06 5.564e+06 5.561e+06 5.558e+06 5.555e+06 ...
* y (y) float64 -5.567e+06 -5.564e+06 -5.561e+06 -5.558e+06 ...
Attributes:
orbital_parameters: {'projection_longitude': 0.0, 'pr...
sensor: seviri
platform_name: Meteosat-11
standard_name: toa_bidirectional_reflectance
units: %
wavelength: (1.5, 1.64, 1.78)
start_time: 2018-02-28 15:00:10.814000
end_time: 2018-02-28 15:12:43.956000
area: Area ID: some_area_name\nDescription: On-the-fly ar...
name: IR_016
resolution: 3000.40316582
calibration: reflectance
polarization: None
level: None
modifiers: ()
ancillary_variables: []
Satpy allows loading file data by wavelengths in micrometers (shown above) or by channel name:
>>> global_scene.load(["VIS008", "IR_016", "IR_108"])
To have a look at the available channels for loading from your Scene
object, use the available_dataset_names() method:
>>> global_scene.available_dataset_names()
['HRV',
'IR_108',
'IR_120',
'VIS006',
'WV_062',
'IR_039',
'IR_134',
'IR_097',
'IR_087',
'VIS008',
'IR_016',
'WV_073']
To access the loaded data use the wavelength or name:
>>> print(global_scene[0.8])
For more information on loading datasets by resolution, calibration, or other advanced loading methods see the Reading documentation.
Visualizing data
To visualize loaded data in a pop-up window:
>>> global_scene.show(0.8)
Alternatively, if working in a Jupyter notebook, the scene can be converted to
a geoviews object using the to_geoviews() method. The geoviews package is not a
requirement of the base Satpy install, so in order to use this feature the user
needs to install the geoviews package separately.
>>> import holoviews as hv
>>> import geoviews as gv
>>> import geoviews.feature as gf
>>> gv.extension("bokeh", "matplotlib")
>>> %opts QuadMesh Image [width=600 height=400 colorbar=True] Feature [apply_ranges=False]
>>> %opts Image QuadMesh (cmap='RdBu_r')
>>> gview = global_scene.to_geoviews(vdims=[0.6])
>>> gview[::5,::5] * gf.coastline * gf.borders
Creating new datasets
Calculations based on loaded datasets/channels can easily be assigned to a new dataset:
>>> global_scene.load(['VIS006', 'VIS008'])
>>> global_scene["ndvi"] = (global_scene['VIS008'] - global_scene['VIS006']) / (global_scene['VIS008'] + global_scene['VIS006'])
>>> global_scene.show("ndvi")
When doing calculations Xarray, by default, will drop all attributes so attributes need to be
copied over by hand. The combine_metadata()
function can assist with this task.
Assigning additional custom metadata is also possible.
>>> from satpy.dataset import combine_metadata
>>> scene['new_band'] = scene['VIS008'] / scene['VIS006']
>>> scene['new_band'].attrs = combine_metadata(scene['VIS008'], scene['VIS006'])
>>> scene['new_band'].attrs['some_other_key'] = 'whatever_value_you_want'
Generating composites
Satpy comes with many composite recipes built-in and makes them loadable like any other dataset:
>>> global_scene.load(['overview'])
To get a list of all available composites for the current scene:
>>> global_scene.available_composite_names()
['overview_sun',
'airmass',
'natural_color',
'night_fog',
'overview',
'green_snow',
'dust',
'fog',
'natural_color_raw',
'cloudtop',
'convection',
'ash']
Loading composites will load all necessary dependencies to make that composite and unload them after the composite has been generated.
Note
Some composites require datasets to be at the same resolution or shape. When this is the case the Scene object must be resampled before the composite can be generated (see below).
Resampling
In certain cases it may be necessary to resample datasets whether they come from a file or are generated composites. Resampling is useful for mapping data to a uniform grid, limiting input data to an area of interest, changing from one projection to another, or for preparing datasets to be combined in a composite (see above). For more details on resampling, different resampling algorithms, and creating your own area of interest see the Resampling documentation. To resample a Satpy Scene:
>>> local_scene = global_scene.resample("eurol")
This creates a copy of the original global_scene
with all loaded datasets
resampled to the built-in “eurol” area. Any composites that were requested,
but could not be generated are automatically generated after resampling. The
new local_scene
can now be used like the original global_scene
for
working with datasets, saving them to disk or showing them on screen:
>>> local_scene.show('overview')
>>> local_scene.save_dataset('overview', './local_overview.tif')
Saving to disk
To save all loaded datasets to disk as geotiff images:
>>> global_scene.save_datasets()
To save all loaded datasets to disk as PNG images:
>>> global_scene.save_datasets(writer='simple_image')
Or to save an individual dataset:
>>> global_scene.save_dataset('VIS006', 'my_nice_image.png')
Datasets are automatically scaled or “enhanced” to be compatible with the output format and to provide the best looking image. For more information on saving datasets and customizing enhancements see the documentation on Writing.
Slicing and subsetting scenes
Array slicing can be done at the scene level in order to get subsets with consistent navigation throughout. Note that this does not take into account scenes that may include channels at multiple resolutions, i.e. index slicing does not account for dataset spatial resolution.
>>> scene_slice = global_scene[2000:2004, 2000:2004]
>>> vis006_slice = scene_slice['VIS006']
>>> vis006_slice_meas = vis006_slice.values
>>> vis006_slice_lon, vis006_slice_lat = vis006_slice.attrs['area'].get_lonlats()
To subset multi-resolution data consistently, use the crop()
method.
>>> scene_llbox = global_scene.crop(ll_bbox=(-4.0, -3.9, 3.9, 4.0))
>>> vis006_llbox = scene_llbox['VIS006']
>>> vis006_llbox_meas = vis006_llbox.values
>>> vis006_llbox_lon, vis006_llbox_lat = vis006_llbox.attrs['area'].get_lonlats()
Troubleshooting
When something goes wrong, a first step to take is to check that the latest
version of Satpy and its dependencies are installed. Satpy pulls in a few
packages as dependencies by default, but each reader and writer has its own
dependencies, which can unfortunately be easy to miss when just doing a regular
pip install. To check for missing dependencies for the readers and writers,
a utility function called check_satpy() can be used:
>>> from satpy.utils import check_satpy
>>> check_satpy()
Due to the way Satpy works, producing as many datasets as possible, there are times that behavior can be unexpected but with no exceptions raised. To help troubleshoot these situations log messages can be turned on. To do this run the following code before running any other Satpy code:
>>> from satpy.utils import debug_on
>>> debug_on()
Reading
Satpy supports reading and loading data from many input file formats and
schemes through the concept of readers. Each reader supports a specific type of input data.
The Scene
object provides a simple interface around all the complexity of
these various formats through its load
method.
The following sections describe the different way data can be loaded, requested, or added to a Scene object.
Available Readers
For readers currently available in Satpy see Satpy Readers. Additionally, to get a list of available readers you can use the available_readers() function. By default, it returns the names of available readers. To return additional reader information use available_readers(as_dict=True):
>>> from satpy import available_readers
>>> available_readers()
Filter loaded files
Coming soon…
Load data
Datasets in Satpy are identified by certain pieces of metadata set during
data loading. These include name, wavelength, calibration,
resolution, polarization, and modifiers. Normally, once a Scene
is created requesting datasets by name or wavelength is all that is
needed:
>>> from satpy import Scene
>>> scn = Scene(reader="seviri_l1b_hrit", filenames=filenames)
>>> scn.load([0.6, 0.8, 10.8])
>>> scn.load(['IR_120', 'IR_134'])
However, in many cases datasets are available in multiple spatial resolutions,
multiple calibrations (brightness_temperature
, reflectance
,
radiance
, etc),
multiple polarizations, or have corrections or other modifiers already applied
to them. By default Satpy will provide the version of the dataset with the
highest resolution and the highest level of calibration (brightness
temperature or reflectance over radiance). It is also possible to request one
of these exact versions of a dataset by using the
DataQuery
class:
>>> from satpy import DataQuery
>>> my_channel_id = DataQuery(name='IR_016', calibration='radiance')
>>> scn.load([my_channel_id])
>>> print(scn['IR_016'])
Or request multiple datasets at a specific calibration, resolution, or polarization:
>>> scn.load([0.6, 0.8], resolution=1000)
Or multiple calibrations:
>>> scn.load([0.6, 10.8], calibration=['brightness_temperature', 'radiance'])
In the above case Satpy will load whatever dataset is available and matches
the specified parameters. So the above load
call would load the 0.6
(a visible/reflectance band) radiance data and 10.8
(an IR band)
brightness temperature data.
For geostationary satellites that have the individual channel data
separated into several files (segments), the missing segments are padded
by default to the full disk area. This is done to simplify caching of
resampling look-up tables (see Resampling for more information).
To disable this, the user can pass the pad_data keyword argument when
loading datasets:
>>> scn.load([0.6, 10.8], pad_data=False)
For geostationary products, where the imagery is stored in the files in an unconventional orientation
(e.g. MSG SEVIRI L1.5 data are stored with the southwest corner in the upper right), the keyword argument
upper_right_corner can be passed into the load call to automatically flip the datasets to the
desired orientation. Accepted argument values are 'NE', 'NW', 'SE', 'SW',
and 'native'.
By default, no flipping is applied (corresponding to upper_right_corner='native') and
the data are delivered in the original format. To get the data in the common upright orientation,
load the datasets using e.g.:
>>> scn.load(['VIS008'], upper_right_corner='NE')
Note
If a dataset could not be loaded there is no exception raised. You must
check the
scn.missing_datasets
property for any DataID
that could not be loaded.
To find out what datasets are available from a reader from the files that were
provided to the Scene
use
available_dataset_ids()
:
>>> scn.available_dataset_ids()
Or available_dataset_names()
for just the string
names of Datasets:
>>> scn.available_dataset_names()
Load remote data
Starting with Satpy version 0.25.1 with supported readers it is possible to
load data from remote file systems like s3fs
or fsspec
.
For example:
>>> from satpy import Scene
>>> from satpy.readers import FSFile
>>> import fsspec
>>> filename = 'noaa-goes16/ABI-L1b-RadC/2019/001/17/*_G16_s20190011702186*'
>>> the_files = fsspec.open_files("simplecache::s3://" + filename, s3={'anon': True})
>>> fs_files = [FSFile(open_file) for open_file in the_files]
>>> scn = Scene(filenames=fs_files, reader='abi_l1b')
>>> scn.load(['true_color_raw'])
Check the list of Satpy Readers to see which reader supports remote
files. For the usage of fsspec
and advanced features like caching files
locally see the fsspec Documentation .
Search for local/remote files
Satpy provides a utility
find_files_and_readers()
for searching for files in
a base directory matching various search parameters. This function discovers
files based on filename patterns. It returns a dictionary mapping reader name
to a list of filenames supported. This dictionary can be passed directly to
the Scene
initialization.
>>> from satpy import find_files_and_readers, Scene
>>> from datetime import datetime
>>> my_files = find_files_and_readers(base_dir='/data/viirs_sdrs',
... reader='viirs_sdr',
... start_time=datetime(2017, 5, 1, 18, 1, 0),
... end_time=datetime(2017, 5, 1, 18, 30, 0))
>>> scn = Scene(filenames=my_files)
See the find_files_and_readers()
documentation for
more information on the possible parameters as well as for searching on
remote file systems.
Metadata
The datasets held by a scene also provide vital metadata such as dataset name, units, observation time etc. The following attributes are standardized across all readers:
name, and other identifying metadata keys: See Satpy internal workings: having a look under the hood.
start_time: Left boundary of the time interval covered by the dataset. For more information see the Time Metadata section below.
end_time: Right boundary of the time interval covered by the dataset. For more information see the Time Metadata section below.
area: AreaDefinition or SwathDefinition if data is geolocated. Areas are used for gridded projected data and Swaths when data must be described by individual longitude/latitude coordinates. See the Coordinates section below.
reader: The name of the Satpy reader that produced the dataset.
orbital_parameters: Dictionary of orbital parameters describing the satellite’s position. See the Orbital Parameters section below for more information.
time_parameters: Dictionary of additional time parameters describing the time ranges related to the requests or schedules for when observations should happen and when they actually do. See Time Metadata below for details.
raw_metadata: Raw, unprocessed metadata from the reader.
Note that the above attributes are not necessarily available for each dataset.
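These attributes live in the .attrs dictionary of each loaded DataArray. A sketch, assuming a scene with IR_108 loaded:

data_arr = scn["IR_108"]
print(data_arr.attrs["start_time"])                  # left boundary of the covered interval
print(data_arr.attrs["units"])                       # e.g. 'K' for brightness temperatures
print(data_arr.attrs.get("orbital_parameters", {}))  # may be absent for some readers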
Time Metadata
In addition to the generic start_time
and end_time
pieces of metadata
there are other time fields that may be provided if the reader supports them.
These items are stored in a time_parameters
sub-dictionary and they include
values like:
observation_start_time: The point in time when a sensor began recording for the current data.
observation_end_time: Same as observation_start_time, but when data has stopped being recorded.
nominal_start_time: The “human friendly” time describing the start of the data observation interval or repeat cycle. This time is often on a round minute (seconds=0). Along with the nominal end time, these times define the regular interval of the data collection. For example, GOES-16 ABI full disk images are collected every 10 minutes (in the common configuration) so nominal_start_time and nominal_end_time would be 10 minutes apart regardless of when the instrument recorded data inside that interval. This time may also be referred to as the repeat cycle, repeat slot, or time slot.
nominal_end_time: Same as nominal_start_time, but the end of the interval.
In general, start_time
and end_time
will be set to the “nominal”
time by the reader. This ensures that other Satpy components get a
consistent time for calculations (ex. generation of solar zenith angles)
and can be reused between bands.
See the Coordinates section below for more information on time information that may show up as a per-element/row “coordinate” on the DataArray (ex. acquisition time) instead of as metadata.
Orbital Parameters
Orbital parameters describe the position of the satellite. As such they typically come in a few “flavors” for the common types of orbits a satellite may have.
For geostationary satellites it is described using the following scalar attributes:
satellite_actual_longitude/latitude/altitude: Current position of the satellite at the time of observation in geodetic coordinates (i.e. altitude is relative and normal to the surface of the ellipsoid). The longitude and latitude are given in degrees, the altitude in meters.
satellite_nominal_longitude/latitude/altitude: Center of the station keeping box (a confined area in which the satellite is actively maintained using maneuvers). In between major maneuvers, when the satellite is permanently moved, the nominal position is constant. The longitude and latitude are given in degrees, the altitude in meters.
nadir_longitude/latitude: Intersection of the instrument’s nadir with the surface of the earth. May differ from the actual satellite position if the instrument is pointing slightly off the axis (satellite, earth-center). If available, this should be used to compute viewing angles etc. Otherwise, use the actual satellite position. The values are given in degrees.
projection_longitude/latitude/altitude: Projection center of the re-projected data. This should be used to compute lat/lon coordinates. Note that the projection center can differ considerably from the actual satellite position. For example MSG-1 was at times positioned at 3.4 degrees west, while the image data was re-projected to 0 degrees. The longitude and latitude are given in degrees, the altitude in meters.
Note
For use in pyorbital, the altitude has to be converted to kilometers, see for example
pyorbital.orbital.get_observer_look().
For polar orbiting satellites the readers usually provide coordinates and viewing angles of the swath as ancillary datasets. Additional metadata related to the satellite position includes:
tle: Two-Line Element (TLE) set used to compute the satellite’s orbit
Coordinates
Each DataArray
produced by Satpy has several Xarray
coordinate variables added to them.
x and y: Projection coordinates for gridded and projected data. By default y and x are the preferred dimensions for all 2D data, but these coordinates are only added for gridded (non-swath) data. For 1D data only the y dimension may be specified.
crs: A CRS object defining the Coordinate Reference System for the data. Requires pyproj 2.0 or later to be installed. This is stored as a scalar array by Xarray so it must be accessed by doing crs = my_data_arr.coords['crs'].item(). For swath data this defaults to a longlat CRS using the WGS84 datum.
longitude: Array of longitude coordinates for swath data.
latitude: Array of latitude coordinates for swath data.
Readers are free to define any coordinates in addition to the ones above that are automatically added. Other possible coordinates you may see:
acq_time: Instrument data acquisition time per scan or row of data.
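For example, reading coordinates back from a loaded gridded dataset (a sketch):

data_arr = scn["IR_108"]
crs = data_arr.coords["crs"].item()  # scalar coordinate holding the CRS object
x = data_arr.coords["x"].values      # projection x coordinates
y = data_arr.coords["y"].values      # projection y coordinates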
Adding a Reader to Satpy
This is described in the developer guide, see Adding a Custom Reader to Satpy.
Implemented readers
SEVIRI L1.5 data readers
Common functionality for SEVIRI L1.5 data readers.
Introduction
The Spinning Enhanced Visible and InfraRed Imager (SEVIRI) is the primary instrument on Meteosat Second Generation (MSG) and has the capacity to observe the Earth in 12 spectral channels.
Level 1.5 corresponds to image data that has been corrected for all unwanted radiometric and geometric effects, has been geolocated using a standardised projection, and has been calibrated and radiance-linearised. (From the EUMETSAT documentation)
Satpy provides the following readers for SEVIRI L1.5 data in different formats:
HRIT: satpy.readers.seviri_l1b_hrit (described in its own section below)
Native: satpy.readers.seviri_l1b_native
netCDF: satpy.readers.seviri_l1b_nc
Calibration
This section describes how to control the calibration of SEVIRI L1.5 data.
Calibration to radiance
The SEVIRI L1.5 data readers allow for choosing between two file-internal calibration coefficients to convert counts to radiances:
Nominal for all channels (default)
GSICS where available (IR currently) and nominal for the remaining channels (VIS & HRV currently)
In order to change the default behaviour, use the reader_kwargs
keyword
argument upon Scene creation:
import satpy
scene = satpy.Scene(filenames=filenames,
reader='seviri_l1b_...',
reader_kwargs={'calib_mode': 'GSICS'})
scene.load(['VIS006', 'IR_108'])
In addition, two other calibration methods are available:
It is possible to specify external calibration coefficients for the conversion from counts to radiances. External coefficients take precedence over internal coefficients and over the Meirink coefficients, but you can also mix internal and external coefficients: If external calibration coefficients are specified for only a subset of channels, the remaining channels will be calibrated using the chosen file-internal coefficients (nominal or GSICS). Calibration coefficients must be specified in [mW m-2 sr-1 (cm-1)-1].
The calibration mode
meirink-2023
uses coefficients based on an intercalibration with Aqua-MODIS for the visible channels, as found in Inter-calibration of polar imager solar channels using SEVIRI (2013) by J. F. Meirink, R. A. Roebeling, and P. Stammes.
In the following example we use external calibration coefficients for the
VIS006
& IR_108
channels, and nominal coefficients for the
remaining channels:
coefs = {'VIS006': {'gain': 0.0236, 'offset': -1.20},
'IR_108': {'gain': 0.2156, 'offset': -10.4}}
scene = satpy.Scene(filenames,
reader='seviri_l1b_...',
reader_kwargs={'ext_calib_coefs': coefs})
scene.load(['VIS006', 'VIS008', 'IR_108', 'IR_120'])
In the next example we use external calibration coefficients for the
VIS006
& IR_108
channels, GSICS coefficients where available
(other IR channels) and nominal coefficients for the rest:
coefs = {'VIS006': {'gain': 0.0236, 'offset': -1.20},
'IR_108': {'gain': 0.2156, 'offset': -10.4}}
scene = satpy.Scene(filenames,
reader='seviri_l1b_...',
reader_kwargs={'calib_mode': 'GSICS',
'ext_calib_coefs': coefs})
scene.load(['VIS006', 'VIS008', 'IR_108', 'IR_120'])
In the next example we use the mode meirink-2023
calibration
coefficients for all visible channels and nominal coefficients for the
rest:
scene = satpy.Scene(filenames,
reader='seviri_l1b_...',
reader_kwargs={'calib_mode': 'meirink-2023'})
scene.load(['VIS006', 'VIS008', 'IR_016'])
Calibration to reflectance
When loading solar channels, the SEVIRI L1.5 data readers apply a correction for
the Sun-Earth distance variation throughout the year - as recommended by
the EUMETSAT document
Conversion from radiances to reflectances for SEVIRI warm channels.
In the unlikely situation that this correction is not required, it can be
removed on a per-channel basis using
satpy.readers.utils.remove_earthsun_distance_correction().
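A sketch of how that could look, assuming the function accepts the reflectance DataArray and returns the uncorrected values (check the API documentation for the exact signature):

from satpy.readers.utils import remove_earthsun_distance_correction

# undo the Sun-Earth distance correction on a loaded solar channel
scn["VIS006"] = remove_earthsun_distance_correction(scn["VIS006"])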
Masking of bad quality scan lines
By default bad quality scan lines are masked and replaced with np.nan
for radiance, reflectance and
brightness temperature calibrations based on the quality flags provided by the data (for details on quality
flags see MSG Level 1.5 Image Data Format Description page 109). To disable masking
reader_kwargs={'mask_bad_quality_scan_lines': False}
can be passed to the Scene.
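For example:

import satpy

scene = satpy.Scene(filenames=filenames,
                    reader='seviri_l1b_...',
                    reader_kwargs={'mask_bad_quality_scan_lines': False})
scene.load(['IR_108'])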
Metadata
The SEVIRI L1.5 readers provide the following metadata:
The orbital_parameters attribute provides the nominal and actual satellite position, as well as the projection centre. See the Metadata section in the Reading chapter for more information.
The acq_time coordinate provides the mean acquisition time for each scanline. Use a MultiIndex to enable selection by acquisition time:

import numpy as np
import pandas as pd

mi = pd.MultiIndex.from_arrays([scn['IR_108']['y'].data, scn['IR_108']['acq_time'].data],
                               names=('y_coord', 'time'))
scn['IR_108']['y'] = mi
scn['IR_108'].sel(time=np.datetime64('2019-03-01T12:06:13.052000000'))

Raw metadata from the file header can be included by setting the reader argument include_raw_metadata=True (HRIT and Native format only). Note that this comes with a performance penalty of up to 10% if raw metadata from multiple segments or scans need to be combined. By default, arrays with more than 100 elements are excluded to limit the performance penalty. This threshold can be adjusted using the mda_max_array_size reader keyword argument:

scene = satpy.Scene(filenames,
                    reader='seviri_l1b_hrit/native',
                    reader_kwargs={'include_raw_metadata': True,
                                   'mda_max_array_size': 1000})
References
SEVIRI HRIT format reader
SEVIRI Level 1.5 HRIT format reader.
Introduction
The seviri_l1b_hrit
reader reads and calibrates MSG-SEVIRI L1.5 image data in HRIT format. The format is explained
in the MSG Level 1.5 Image Data Format Description. The files are usually named as
follows:
H-000-MSG4__-MSG4________-_________-PRO______-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000001___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000002___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000003___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000004___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000005___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000006___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000007___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000008___-201903011200-__
H-000-MSG4__-MSG4________-_________-EPI______-201903011200-__
Each image is decomposed into 24 segments (files) for the high-resolution-visible (HRV) channel and 8 segments for other visible (VIS) and infrared (IR) channels. Additionally, there is one prologue and one epilogue file for the entire scan which contain global metadata valid for all channels.
Reader Arguments
Some arguments can be provided to the reader to change its behaviour. These are provided through the Scene instantiation, e.g.:
scn = Scene(filenames=filenames, reader="seviri_l1b_hrit", reader_kwargs={'fill_hrv': False})
To see the full list of arguments that can be provided, look into the documentation
of HRITMSGFileHandler
.
Compression
This reader accepts compressed HRIT files, ending in C_, as do other HRIT readers; see satpy.readers.hrit_base.HRITFileHandler. This reader also accepts bzipped files with the extension .bz2 for the prologue, epilogue, and segment files.
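As a sketch, reading compressed segments with on-the-fly decompression might look as follows; the binary path is an example, and the XRIT_DECOMPRESS_PATH mechanism is described under the xRIT-based readers below:

import glob
import os
from satpy import Scene

os.environ['XRIT_DECOMPRESS_PATH'] = '/usr/local/bin/xRITDecompress'  # example path
filenames = glob.glob('data/H-000-MSG4__-MSG4________-*201903011200*')
scn = Scene(filenames=filenames, reader='seviri_l1b_hrit')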
Nominal start/end time
Warning
Attribute access change: nominal_start_time and nominal_end_time should be accessed using the time_parameters attribute.
nominal_start_time and nominal_end_time are also available directly via start_time and end_time respectively.
Here is an example of the content of the start/end time and time_parameters attributes:
Start time: 2019-08-29 12:00:00
End time: 2019-08-29 12:15:00
time_parameters:
{'nominal_start_time': datetime.datetime(2019, 8, 29, 12, 0),
'nominal_end_time': datetime.datetime(2019, 8, 29, 12, 15),
'observation_start_time': datetime.datetime(2019, 8, 29, 12, 0, 9, 338000),
'observation_end_time': datetime.datetime(2019, 8, 29, 12, 15, 9, 203000)
}
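These values can be inspected on a loaded dataset, for example (a minimal sketch assuming the Scene from the example below):

scn.load(['IR_108'])
print(scn['IR_108'].attrs['start_time'])
print(scn['IR_108'].attrs['end_time'])
print(scn['IR_108'].attrs['time_parameters'])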
Example:
Here is an example of how to read the data in Satpy:
from satpy import Scene
import glob
filenames = glob.glob('data/H-000-MSG4__-MSG4________-*201903011200*')
scn = Scene(filenames=filenames, reader='seviri_l1b_hrit')
scn.load(['VIS006', 'IR_108'])
print(scn['IR_108'])
Output:
<xarray.DataArray (y: 3712, x: 3712)>
dask.array<shape=(3712, 3712), dtype=float32, chunksize=(464, 3712)>
Coordinates:
acq_time (y) datetime64[ns] NaT NaT NaT NaT NaT NaT ... NaT NaT NaT NaT NaT
* x (x) float64 5.566e+06 5.563e+06 5.56e+06 ... -5.566e+06 -5.569e+06
* y (y) float64 -5.566e+06 -5.563e+06 ... 5.566e+06 5.569e+06
Attributes:
orbital_parameters: {'projection_longitude': 0.0, 'projection_latit...
platform_name: Meteosat-11
georef_offset_corrected: True
standard_name: brightness_temperature
raw_metadata: {'file_type': 0, 'total_header_length': 6198, '...
wavelength: (9.8, 10.8, 11.8)
units: K
sensor: seviri
platform_name: Meteosat-11
start_time: 2019-03-01 12:00:09.716000
end_time: 2019-03-01 12:12:42.946000
area: Area ID: some_area_name\\nDescription: On-the-fl...
name: IR_108
resolution: 3000.403165817
calibration: brightness_temperature
polarization: None
level: None
modifiers: ()
ancillary_variables: []
The filenames argument can either be a list of strings, see the example above, or a list of
satpy.readers.FSFile
objects. FSFiles can be used in conjunction with fsspec,
e.g. to handle in-memory data:
import glob
from fsspec.implementations.memory import MemoryFile, MemoryFileSystem
from satpy import Scene
from satpy.readers import FSFile
# In this example, we will make use of `MemoryFile`s in a `MemoryFileSystem`.
memory_fs = MemoryFileSystem()
# Usually, the data already resides in memory.
# For explanatory reasons, we will load the files found with glob in memory,
# and load the scene with FSFiles.
filenames = glob.glob('data/H-000-MSG4__-MSG4________-*201903011200*')
fs_files = []
for fn in filenames:
    with open(fn, 'rb') as fh:
        fs_files.append(MemoryFile(
            fs=memory_fs,
            path="{}{}".format(memory_fs.root_marker, fn),
            data=fh.read()
        ))
    fs_files[-1].commit()  # commit the file to the filesystem
fs_files = [FSFile(open_file) for open_file in fs_files]  # wrap MemoryFiles as FSFiles
# similar to the example above, we pass a list of FSFiles to the `Scene`
scn = Scene(filenames=fs_files, reader='seviri_l1b_hrit')
scn.load(['VIS006', 'IR_108'])
print(scn['IR_108'])
Output:
<xarray.DataArray (y: 3712, x: 3712)>
dask.array<shape=(3712, 3712), dtype=float32, chunksize=(464, 3712)>
Coordinates:
acq_time (y) datetime64[ns] NaT NaT NaT NaT NaT NaT ... NaT NaT NaT NaT NaT
* x (x) float64 5.566e+06 5.563e+06 5.56e+06 ... -5.566e+06 -5.569e+06
* y (y) float64 -5.566e+06 -5.563e+06 ... 5.566e+06 5.569e+06
Attributes:
orbital_parameters: {'projection_longitude': 0.0, 'projection_latit...
platform_name: Meteosat-11
georef_offset_corrected: True
standard_name: brightness_temperature
raw_metadata: {'file_type': 0, 'total_header_length': 6198, '...
wavelength: (9.8, 10.8, 11.8)
units: K
sensor: seviri
platform_name: Meteosat-11
start_time: 2019-03-01 12:00:09.716000
end_time: 2019-03-01 12:12:42.946000
area: Area ID: some_area_name\\nDescription: On-the-fl...
name: IR_108
resolution: 3000.403165817
calibration: brightness_temperature
polarization: None
level: None
modifiers: ()
ancillary_variables: []
SEVIRI Native format reader
SEVIRI Level 1.5 native format reader.
Introduction
The seviri_l1b_native
reader reads and calibrates MSG-SEVIRI L1.5 image data in binary format. The format is
explained in the MSG Level 1.5 Native Format File Definition. The files are usually named as
follows:
MSG4-SEVI-MSG15-0100-NA-20210302124244.185000000Z-NA.nat
Reader Arguments
Some arguments can be provided to the reader to change its behaviour. These are provided through the Scene instantiation, e.g.:
scn = Scene(filenames=filenames, reader="seviri_l1b_native", reader_kwargs={'fill_disk': True})
To see the full list of arguments that can be provided, look into the documentation
of NativeMSGFileHandler
.
Example:
Here is an example of how to read the data in Satpy. Note that when loading the data, the orientation of the image can be set with the upper_right_corner keyword. Possible options are 'NW', 'NE', 'SW', 'SE', or 'native'.
from satpy import Scene
filenames = ['MSG4-SEVI-MSG15-0100-NA-20210302124244.185000000Z-NA.nat']
scn = Scene(filenames=filenames, reader='seviri_l1b_native')
scn.load(['VIS006', 'IR_108'], upper_right_corner='NE')
print(scn['IR_108'])
Output:
<xarray.DataArray 'reshape-969ef97d34b7b0c70ca19f53c6abcb68' (y: 3712, x: 3712)>
dask.array<truediv, shape=(3712, 3712), dtype=float32, chunksize=(928, 3712), chunktype=numpy.ndarray>
Coordinates:
acq_time (y) datetime64[ns] NaT NaT NaT NaT NaT NaT ... NaT NaT NaT NaT NaT
crs object PROJCRS["unknown",BASEGEOGCRS["unknown",DATUM["unknown",...
* y (y) float64 -5.566e+06 -5.563e+06 ... 5.566e+06 5.569e+06
* x (x) float64 5.566e+06 5.563e+06 5.56e+06 ... -5.566e+06 -5.569e+06
Attributes:
orbital_parameters: {'projection_longitude': 0.0, 'projection_latit...
time_parameters: {'nominal_start_time': datetime.datetime(2021, ...
units: K
wavelength: 10.8 µm (9.8-11.8 µm)
standard_name: toa_brightness_temperature
platform_name: Meteosat-11
sensor: seviri
georef_offset_corrected: True
start_time: 2021-03-02 12:30:11.584603
end_time: 2021-03-02 12:45:09.949762
reader: seviri_l1b_native
area: Area ID: msg_seviri_fes_3km\\nDescription: MSG S...
name: IR_108
resolution: 3000.403165817
calibration: brightness_temperature
modifiers: ()
_satpy_id: DataID(name='IR_108', wavelength=WavelengthRang...
ancillary_variables: []
SEVIRI netCDF format reader
SEVIRI netcdf format reader.
Other xRIT-based readers
HRIT/LRIT format reader.
This module is the base module for all HRIT-based formats. Here you will find the common building blocks for HRIT reading.
One of the features here is the on-the-fly decompression of HRIT files. It requires the path to the xRITDecompress binary to be provided through the environment variable XRIT_DECOMPRESS_PATH. When compressed HRIT files (files ending with .C_) are encountered, they are decompressed to the system's temporary directory for reading.
JMA HRIT format reader
HRIT format reader for JMA data.
Introduction
The JMA HRIT format is described in the JMA HRIT - Mission Specific Implementation. There are three readers for this format in Satpy:
- jami_hrit: For data from the JAMI instrument on MTSAT-1R
- mtsat2-imager_hrit: For data from the Imager instrument on MTSAT-2
- ahi_hrit: For data from the AHI instrument on Himawari-8/9
Although the data format is identical, the instruments have different characteristics, which is why there is a dedicated reader for each of them. Sample data is available online.
Example:
Here is an example of how to read Himawari-8 HRIT data with Satpy:
from satpy import Scene
import glob
filenames = glob.glob('data/IMG_DK01B14_2018011109*')
scn = Scene(filenames=filenames, reader='ahi_hrit')
scn.load(['B14'])
print(scn['B14'])
Output:
<xarray.DataArray (y: 5500, x: 5500)>
dask.array<concatenate, shape=(5500, 5500), dtype=float64, chunksize=(550, 4096), ...
Coordinates:
acq_time (y) datetime64[ns] 2018-01-11T09:00:20.995200 ... 2018-01-11T09:09:40.348800
crs object +proj=geos +lon_0=140.7 +h=35785831 +x_0=0 +y_0=0 +a=6378169 ...
* y (y) float64 5.5e+06 5.498e+06 5.496e+06 ... -5.496e+06 -5.498e+06
* x (x) float64 -5.498e+06 -5.496e+06 -5.494e+06 ... 5.498e+06 5.5e+06
Attributes:
orbital_parameters: {'projection_longitude': 140.7, 'projection_latitud...
standard_name: toa_brightness_temperature
level: None
wavelength: (11.0, 11.2, 11.4)
units: K
calibration: brightness_temperature
file_type: ['hrit_b14_seg', 'hrit_b14_fd']
modifiers: ()
polarization: None
sensor: ahi
name: B14
platform_name: Himawari-8
resolution: 4000
start_time: 2018-01-11 09:00:20.995200
end_time: 2018-01-11 09:09:40.348800
area: Area ID: FLDK, Description: Full Disk, Projection I...
ancillary_variables: []
JMA HRIT data contain the scanline acquisition time for only a subset of scanlines. Timestamps of
the remaining scanlines are computed using linear interpolation. This is what you’ll find in the
acq_time
coordinate of the dataset.
Compression
Gzip-compressed MTSAT files can be decompressed on the fly using
FSFile
:
import fsspec
from satpy import Scene
from satpy.readers import FSFile
filename = "/data/HRIT_MTSAT1_20090101_0630_DK01IR1.gz"
open_file = fsspec.open(filename, compression="gzip")
fs_file = FSFile(open_file)
scn = Scene([fs_file], reader="jami_hrit")
scn.load(["IR1"])
GOES HRIT format reader
GOES HRIT format reader.
References
- LRIT/HRIT Mission Specific Implementation, February 2012
- GVARRDL98.pdf
- 05057_SPE_MSG_LRIT_HRI
Electro-L HRIT format reader
HRIT format reader.
References
- ELECTRO-L GROUND SEGMENT MSU-GS INSTRUMENT, LRIT/HRIT Mission Specific Implementation, February 2012
hdf-eos based readers
Modis level 1b hdf-eos format reader.
Introduction
The modis_l1b
reader reads and calibrates Modis L1 image data in hdf-eos format. Files often have
a pattern similar to the following one:
M[O/Y]D02[1/H/Q]KM.A[date].[time].[collection].[processing_time].hdf
Other patterns where “collection” and/or “processing_time” are missing might also work
(see the reader's YAML file for details). Geolocation files (MOD03) are also supported.
The IMAPP direct broadcast naming format is also supported with names like:
a1.12226.1846.1000m.hdf
.
Saturation Handling
Band 2 of the MODIS sensor is available in 250m, 500m, and 1km resolutions.
The band data may include a special fill value to indicate when the detector
was saturated in the 250m version of the data. When the data is aggregated to
coarser resolutions this saturation fill value is converted to a
“can’t aggregate” fill value. By default, Satpy will replace these fill values
with NaN to indicate they are invalid. This is typically undesired when
generating images for the data as they appear as “holes” in bright clouds.
To control this the keyword argument mask_saturated
can be passed and set
to False
to set these two fill values to the maximum valid value.
scene = satpy.Scene(filenames=filenames,
                    reader='modis_l1b',
                    reader_kwargs={'mask_saturated': False})
scene.load(['2'])
Note that the saturation fill value can appear in other bands (e.g. bands 7-19) in addition to band 2. Also, the “can't aggregate” fill value is a generic “catch all” for any problems encountered when aggregating high resolution bands to lower resolutions. Filling this with the maximum valid value could replace non-saturated invalid pixels with valid values.
Geolocation files
For the 1km data (mod021km) geolocation files (mod03) are optional. If not given to the reader, 1km geolocation will be interpolated from the 5km geolocation contained within the file.
For the 500m and 250m data, geolocation files are required.
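For example, a sketch of loading a 250m band together with its geolocation file (the file names are hypothetical):

from satpy import Scene

filenames = ['MOD02QKM.A2021004.1635.061.hdf',  # 250m L1B granule (hypothetical)
             'MOD03.A2021004.1635.061.hdf']     # matching geolocation file
scn = Scene(reader='modis_l1b', filenames=filenames)
scn.load(['1'])  # band 1 at 250m resolution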
References
Modis geolocation description: http://www.icare.univ-lille1.fr/wiki/index.php/MODIS_geolocation
Modis level 2 hdf-eos format reader.
Introduction
The modis_l2
reader reads and calibrates Modis L2 image data in hdf-eos format.
Since there is a multitude of different level 2 datasets, not all of these are implemented (yet).
Currently the reader supports:
- m[o/y]d35_l2: cloud_mask dataset
- some datasets in m[o/y]d06 files
To get a list of the available datasets for a given file refer to the “Load data” section in Reading.
Geolocation files
Similar to the modis_l1b reader, the geolocation files (mod03) for the 1km data are optional and, if not given, 1km geolocation will be interpolated from the 5km geolocation contained within the file.
For the 500m and 250m data, geolocation files are required.
References
Documentation about the format: https://modis-atmos.gsfc.nasa.gov/products
satpy cf nc readers
Reader for files produced with the cf netcdf writer in satpy.
Introduction
The satpy_cf_nc
reader reads data written by the satpy cf_writer. Filenames for the cf_writer are optional. Several readers are built on the same satpy_cf_nc.py module:
- Generic reader: satpy_cf_nc
- EUMETSAT GAC FDR reader: avhrr_l1c_eum_gac_fdr_nc
Generic reader
The generic satpy_cf_nc
reader reads files of type:
'{platform_name}-{sensor}-{start_time:%Y%m%d%H%M%S}-{end_time:%Y%m%d%H%M%S}.nc'
Example:
Here is an example of how to read the data in Satpy:
from satpy import Scene
filenames = ['data/npp-viirs-mband-20201007075915-20201007080744.nc']
scn = Scene(reader='satpy_cf_nc', filenames=filenames)
scn.load(['M05'])
scn['M05']
Output:
<xarray.DataArray 'M05' (y: 4592, x: 3200)>
dask.array<open_dataset-d91cfbf1bf4f14710d27446d91cdc6e4M05, shape=(4592, 3200),
dtype=float32, chunksize=(4096, 3200), chunktype=numpy.ndarray>
Coordinates:
longitude (y, x) float32 dask.array<chunksize=(4096, 3200), meta=np.ndarray>
latitude (y, x) float32 dask.array<chunksize=(4096, 3200), meta=np.ndarray>
Dimensions without coordinates: y, x
Attributes:
start_time: 2020-10-07 07:59:15
start_orbit: 46350
end_time: 2020-10-07 08:07:44
end_orbit: 46350
calibration: reflectance
long_name: M05
modifiers: ('sunz_corrected',)
platform_name: Suomi-NPP
resolution: 742
sensor: viirs
standard_name: toa_bidirectional_reflectance
units: %
wavelength: 0.672 µm (0.662-0.682 µm)
date_created: 2020-10-07T08:20:02Z
instrument: VIIRS
Notes
Available datasets and attributes will depend on the data saved with the cf_writer.
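As a sketch, a round trip through the cf writer and this reader could look like the following (the output filename is an example):

# Write the currently loaded datasets with the cf writer ...
scn.save_datasets(writer='cf', filename='mydata.nc')
# ... and read them back with the satpy_cf_nc reader
scn2 = Scene(reader='satpy_cf_nc', filenames=['mydata.nc'])
scn2.load(scn2.available_dataset_names())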
EUMETSAT AVHRR GAC FDR L1C reader
The avhrr_l1c_eum_gac_fdr_nc
reader reads files of type:
'AVHRR-GAC_FDR_1C_{platform}_{start_time:%Y%m%dT%H%M%SZ}_{end_time:%Y%m%dT%H%M%SZ}_{processing_mode}_{disposition_mode}_{creation_time}_{version_int:04d}.nc'
Example:
Here is an example of how to read the data in Satpy:
from satpy import Scene
filenames = ['data/AVHRR-GAC_FDR_1C_N06_19810330T042358Z_19810330T060903Z_R_O_20200101T000000Z_0100.nc']
scn = Scene(reader='avhrr_l1c_eum_gac_fdr_nc', filenames=filenames)
scn.load(['brightness_temperature_channel_4'])
scn['brightness_temperature_channel_4']
Output:
<xarray.DataArray 'brightness_temperature_channel_4' (y: 11, x: 409)>
dask.array<open_dataset-55ffbf3623b32077c67897f4283640a5brightness_temperature_channel_4, shape=(11, 409),
dtype=float32, chunksize=(11, 409), chunktype=numpy.ndarray>
Coordinates:
* x (x) int16 0 1 2 3 4 5 6 7 8 ... 401 402 403 404 405 406 407 408
* y (y) int64 0 1 2 3 4 5 6 7 8 9 10
acq_time (y) datetime64[ns] dask.array<chunksize=(11,), meta=np.ndarray>
longitude (y, x) float64 dask.array<chunksize=(11, 409), meta=np.ndarray>
latitude (y, x) float64 dask.array<chunksize=(11, 409), meta=np.ndarray>
Attributes:
start_time: 1981-03-30 04:23:58
end_time: 1981-03-30 06:09:03
calibration: brightness_temperature
modifiers: ()
resolution: 1050
standard_name: toa_brightness_temperature
units: K
wavelength: 10.8 µm (10.3-11.3 µm)
Conventions: CF-1.8 ACDD-1.3
comment: Developed in cooperation with EUME...
creator_email: ops@eumetsat.int
creator_name: EUMETSAT
creator_url: https://www.eumetsat.int/
date_created: 2020-09-14T10:50:51.073707
disposition_mode: O
gac_filename: NSS.GHRR.NA.D81089.S0423.E0609.B09...
geospatial_lat_max: 89.95386902434623
geospatial_lat_min: -89.97581969005503
geospatial_lat_resolution: 1050 meters
geospatial_lat_units: degrees_north
geospatial_lon_max: 179.99952992568998
geospatial_lon_min: -180.0
geospatial_lon_resolution: 1050 meters
geospatial_lon_units: degrees_east
ground_station: GC
id: DOI:10.5676/EUM/AVHRR_GAC_L1C_FDR/...
institution: EUMETSAT
instrument: Earth Remote Sensing Instruments >...
keywords: ATMOSPHERE > ATMOSPHERIC RADIATION...
keywords_vocabulary: GCMD Science Keywords, Version 9.1
licence: EUMETSAT data policy https://www.e...
naming_authority: int.eumetsat
orbit_number_end: 9123
orbit_number_start: 9122
orbital_parameters_tle: ['1 11416U 79057A 81090.16350942...
platform: Earth Observation Satellites > NOA...
processing_level: 1C
processing_mode: R
product_version: 1.0.0
references: Devasthale, A., M. Raspaud, C. Sch...
source: AVHRR GAC Level 1 Data
standard_name_vocabulary: CF Standard Name Table v73
summary: Fundamental Data Record (FDR) of m...
sun_earth_distance_correction_factor: 0.9975244779999585
time_coverage_end: 19820803T003900Z
time_coverage_start: 19800101T000000Z
title: AVHRR GAC L1C FDR
version_calib_coeffs: PATMOS-x, v2017r1
version_pygac: 1.4.0
version_pygac_fdr: 0.1.dev107+gceb7b26.d20200910
version_satpy: 0.21.1.dev894+g5cf76e6
history: Created by pytroll/satpy on 2020-0...
name: brightness_temperature_channel_4
_satpy_id: DataID(name='brightness_temperatur...
ancillary_variables: []
hdf5 based readers
Advanced Geostationary Radiation Imager reader for the Level_1 HDF format.
The files read by this reader are described in the official Real Time Data Service documentation.
Geostationary High-speed Imager reader for the Level_1 HDF format.
This instrument is aboard the Fengyun-4B satellite. No document describing this format is available, but it is broadly similar to that of the co-flying AGRI instrument.
Arctica-M N1 HDF5 format reader
Reader for the Arctica-M1 MSU-GS/A data.
The files for this reader are HDF5 and contain channel data at 1km resolution for the VIS channels and 4km resolution for the IR channels. Geolocation data is available at both resolutions, as is sun and satellite geometry.
This reader was tested on sample data provided by EUMETSAT.
Reading remote files
Using a single reader
Some of the readers in Satpy can read data directly over various transfer protocols. This is done using fsspec and the various packages it uses underneath.
As an example, reading ABI data from public AWS S3 storage can be done in the following way:
from satpy import Scene
storage_options = {'anon': True}
filenames = ['s3://noaa-goes16/ABI-L1b-RadC/2019/001/17/*_G16_s20190011702186*']
scn = Scene(reader='abi_l1b', filenames=filenames, reader_kwargs={'storage_options': storage_options})
scn.load(['true_color_raw'])
Reading from S3 as above requires the s3fs library to be installed in addition to fsspec.
As an alternative, the storage options can be given using fsspec configuration. For the above example, the configuration could be saved to s3.json in the fsspec configuration directory (by default ~/.config/fsspec/ on Linux):
{
    "s3": {
        "anon": "true"
    }
}
Note
Options given in reader_kwargs override only the matching options given in the configuration file; everything else is left as-is. In case of problems in data access, remove the configuration file to see if that solves the issue.
For reference, reading SEVIRI HRIT data from a local S3 storage works the same way:
filenames = [
's3://satellite-data-eumetcast-seviri-rss/H-000-MSG3*202204260855*',
]
storage_options = {
    "client_kwargs": {"endpoint_url": "https://PLACE-YOUR-SERVER-URL-HERE"},
    "secret": "VERYBIGSECRET",
    "key": "ACCESSKEY"
}
scn = Scene(reader='seviri_l1b_hrit', filenames=filenames, reader_kwargs={'storage_options': storage_options})
scn.load(['WV_073'])
With the fsspec configuration mechanism, the corresponding s3.json would look like this:
{
    "s3": {
        "client_kwargs": {"endpoint_url": "https://PLACE-YOUR-SERVER-URL-HERE"},
        "secret": "VERYBIGSECRET",
        "key": "ACCESSKEY"
    }
}
Using multiple readers
If multiple readers are used and the required credentials differ, the storage options are passed per reader like this:
reader1_filenames = [...]
reader2_filenames = [...]
filenames = {
    'reader1': reader1_filenames,
    'reader2': reader2_filenames,
}
reader1_storage_options = {...}
reader2_storage_options = {...}
reader_kwargs = {
    'reader1': {
        'option1': 'foo',
        'storage_options': reader1_storage_options,
    },
    'reader2': {
        'option1': 'foo',
        'storage_options': reader2_storage_options,
    }
}
scn = Scene(filenames=filenames, reader_kwargs=reader_kwargs)
Caching the remote files
Caching remote files locally can speed up the overall processing time significantly, especially if the data are re-used, for example when testing. Caching can be done by taking advantage of the fsspec caching mechanism:
reader_kwargs = {
    'storage_options': {
        's3': {'anon': True},
        'simple': {
            'cache_storage': '/tmp/s3_cache',
        }
    }
}
filenames = ['simplecache::s3://noaa-goes16/ABI-L1b-RadC/2019/001/17/*_G16_s20190011702186*']
scn = Scene(reader='abi_l1b', filenames=filenames, reader_kwargs=reader_kwargs)
scn.load(['true_color_raw'])
scn2 = scn.resample(scn.coarsest_area(), resampler='native')
scn2.save_datasets(base_dir='/tmp/', tiled=True, blockxsize=512, blockysize=512, driver='COG', overviews=[])
The following table shows the timings for running the above code with different cache statuses:
Caching | Elapsed time | Notes
---|---|---
No caching | 650 s | remove reader_kwargs and simplecache:: from the code
File cache | 66 s | Initial run
File cache | 13 s | Second run
Note
The cache is not cleaned by Satpy nor fsspec so the user should handle cleaning excess files from cache_storage.
Note
Only simplecache is considered thread-safe, so using the other caching mechanisms may or may not work depending on the reader, Dask scheduler or the phase of the moon.
Resources
See FSFile for direct usage of fsspec with Satpy, and the fsspec documentation for more details on connection options.
Composites
Composites are defined as arrays of data that are created by processing and/or combining one or multiple data arrays (prerequisites) together.
Composites are generated in satpy using Compositor classes. The attributes of the resulting composites are usually a combination of the prerequisites’ attributes and the key/values of the DataID used to identify it.
Built-in Compositors
There are many built-in compositors available in Satpy.
The majority use the GenericCompositor
base class
which handles various image modes (L, LA, RGB, and
RGBA at the moment) and updates attributes.
The below sections summarize the composites that come with Satpy and
show basic examples of creating and using them with an existing
Scene
object. It is recommended that any composites
that are used repeatedly be configured in YAML configuration files.
General-use compositor code dealing with visible or infrared satellite
data can be put in a configuration file called visir.yaml
. Composites
that are specific to an instrument can be placed in YAML config files named
accordingly (e.g., seviri.yaml
or viirs.yaml
). See the
satpy repository
for more examples.
GenericCompositor
GenericCompositor
class can be used to create basic single
channel and RGB composites. For example, building an overview composite
can be done manually within Python code with:
>>> from satpy.composites import GenericCompositor
>>> compositor = GenericCompositor("overview")
>>> composite = compositor([local_scene[0.6],
... local_scene[0.8],
... local_scene[10.8]])
One important thing to notice is that there is an internal difference between a composite and an image. A composite is defined as a special dataset which may have several bands (like R, G, and B). However, the data isn't stretched, clipped, or gamma filtered until an image is generated. To get an image out of the above composite:
>>> from satpy.writers import to_image
>>> img = to_image(composite)
>>> img.invert([False, False, True])
>>> img.stretch("linear")
>>> img.gamma(1.7)
>>> img.show()
This part is called enhancement, and is covered in more detail in Enhancements.
Single channel composites can also be generated with the
GenericCompositor
, but in some cases, the
SingleBandCompositor
may be more appropriate. For example,
the GenericCompositor
removes attributes such as units
because they are typically not meaningful for an RGB image. Such attributes
are retained in the SingleBandCompositor
.
DifferenceCompositor
DifferenceCompositor
calculates a difference of two datasets:
>>> from satpy.composites import DifferenceCompositor
>>> compositor = DifferenceCompositor("diffcomp")
>>> composite = compositor([local_scene[10.8], local_scene[12.0]])
FillingCompositor
FillingCompositor fills the missing values in three datasets with the values of another dataset:
>>> from satpy.composites import FillingCompositor
>>> compositor = FillingCompositor("fillcomp")
>>> filler = local_scene[0.6]
>>> data_with_holes_1 = local_scene['ch_a']
>>> data_with_holes_2 = local_scene['ch_b']
>>> data_with_holes_3 = local_scene['ch_c']
>>> composite = compositor([filler, data_with_holes_1, data_with_holes_2,
... data_with_holes_3])
PaletteCompositor
PaletteCompositor
creates a color version of a single channel
categorical dataset using a colormap:
>>> from satpy.composites import PaletteCompositor
>>> compositor = PaletteCompositor("palcomp")
>>> composite = compositor([local_scene['cma'], local_scene['cma_pal']])
The palette should have an entry for each (possible) value in the dataset, mapping the value to an RGB triplet. Typically the palette comes with the categorical (e.g. cloud mask) product that is being visualized.
Deprecated since version 0.40: Composites produced with PaletteCompositor
will result in
an image with mode RGB when enhanced. To produce an image with mode P, use
the SingleBandCompositor
with an associated
palettize()
enhancement and pass keep_palette=True
to save_datasets()
. If the colormap is sourced from
the same dataset as the dataset to be palettized, it must be contained
in the auxiliary datasets.
Since Satpy 0.40, all built-in composites that used
PaletteCompositor
have been migrated to use
SingleBandCompositor
instead. This has no impact on resulting
images unless keep_palette=True
is passed to
save_datasets()
, but the loaded composite now has only
one band (previously three).
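For example, saving such a single-band palettized composite with mode P preserved might look like this (a sketch; the composite name is hypothetical and must have a palettize() enhancement configured as described above):

>>> scn.load(['my_palettized_composite'])
>>> scn.save_datasets(writer='geotiff', keep_palette=True)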
DayNightCompositor
DayNightCompositor
merges two different composites. The
first composite will be placed on the day-side of the scene, and the
second one on the night side. The transition from day to night is
done by calculating a solar zenith angle (SZA) weighted average of the
two composites. The SZA can optionally be given as a third dataset;
if not given, the angles will be calculated. Four arguments are used
to generate the image (default values shown in the example below).
They can be defined when initializing the compositor:
- lim_low (float): lower limit of Sun zenith angle for the blending of the given channels
- lim_high (float): upper limit of Sun zenith angle for the blending of the given channels. Together with lim_low it defines the width of the blending zone.
- day_night (string): "day_night" means both day and night portions will be kept, "day_only" means only the day portion will be kept, and "night_only" means only the night portion will be kept.
- include_alpha (bool): This only affects the "day_only" or "night_only" result. True means an alpha band will be added to the output image for transparency; False means the output is a single-band image with undesired pixels masked out (replaced with NaNs).
Usage (with default values):
>>> from satpy.composites import DayNightCompositor
>>> compositor = DayNightCompositor("dnc", lim_low=85., lim_high=88., day_night="day_night")
>>> composite = compositor([local_scene['true_color'],
... local_scene['night_fog']])
As above, but with the day_night flag it is also possible to use only a day product or only a night product and mask out (make transparent) the opposite portion of the image (night or day). The example below provides only a day product with the night portion masked out:
>>> from satpy.composites import DayNightCompositor
>>> compositor = DayNightCompositor("dnc", lim_low=85., lim_high=88., day_night="day_only")
>>> composite = compositor([local_scene['true_color']])
By default, the image produced with the day_only or night_only flag will include an alpha band for transparency. This can be changed by setting include_alpha to False if the alpha band is not needed. In such cases, it is recommended to also use fill_value=0 when saving to GeoTIFF to get a single-band image with a black background. In the case below, the image shows its day portion and the day/night transition, with the night portion blacked out instead of transparent:
>>> from satpy.composites import DayNightCompositor
>>> compositor = DayNightCompositor("dnc", lim_low=85., lim_high=88., day_night="day_only", include_alpha=False)
>>> composite = compositor([local_scene['true_color']])
RealisticColors
RealisticColors
compositor is used to create a realistic near-true-color composite from MSG/SEVIRI data:
>>> from satpy.composites import RealisticColors
>>> compositor = RealisticColors("realcols", lim_low=85., lim_high=95.)
>>> composite = compositor([local_scene['VIS006'],
... local_scene['VIS008'],
... local_scene['HRV']])
CloudCompositor
CloudCompositor
can be used to threshold the data so that
“only” clouds are visible. These composites can be used as an overlay
on top of e.g. static terrain images to show a rough idea where there
are clouds. The data are thresholded using three variables:
- `transition_min`: values below or equal to this are clouds -> opaque white
- `transition_max`: values above this are cloud free -> transparent
- `transition_gamma`: gamma correction applied to clarify the clouds
Usage (with default values):
>>> from satpy.composites import CloudCompositor
>>> compositor = CloudCompositor("clouds", transition_min=258.15,
... transition_max=298.15,
... transition_gamma=3.0)
>>> composite = compositor([local_scene[10.8]])
Support for using this compositor for VIS data, where the values for high/thick clouds tend to be in reverse order to brightness temperatures, is to be added.
RatioSharpenedRGB
SelfSharpenedRGB
SelfSharpenedRGB
sharpens the RGB with the ratio of a band to a strided version of itself.
LuminanceSharpeningCompositor
LuminanceSharpeningCompositor
replaces the luminance from an
RGB composite with luminance created from reflectance data. If the
resolutions of the reflectance data _and_ of the target area
definition are higher than the base RGB, more details can be
retrieved. This compositor can also be useful with matching resolutions, e.g. to highlight shadowing at cloud tops in a colorized infrared composite.
>>> from satpy.composites import LuminanceSharpeningCompositor
>>> compositor = LuminanceSharpeningCompositor("vis_sharpened_ir")
>>> vis_data = local_scene['HRV']
>>> colorized_ir_clouds = local_scene['colorized_ir_clouds']
>>> composite = compositor([vis_data, colorized_ir_clouds])
SandwichCompositor
Similar to LuminanceSharpeningCompositor
,
SandwichCompositor
uses reflectance data to bring more details out of infrared or low-resolution composites.
SandwichCompositor
multiplies the RGB channels with (scaled)
reflectance.
>>> from satpy.composites import SandwichCompositor
>>> compositor = SandwichCompositor("ir_sandwich")
>>> vis_data = local_scene['HRV']
>>> colorized_ir_clouds = local_scene['colorized_ir_clouds']
>>> composite = compositor([vis_data, colorized_ir_clouds])
StaticImageCompositor
StaticImageCompositor
can be used to read an image from disk and use it just like satellite data, including resampling and using it as part of other composites.
>>> from satpy.composites import StaticImageCompositor
>>> compositor = StaticImageCompositor("static_image", filename="image.tif")
>>> composite = compositor()
BackgroundCompositor
BackgroundCompositor
can be used to stack two composites together. If the composites don't have alpha channels, the background is used where the foreground has no data. If the foreground has an alpha channel, the alpha values are used as weights when blending the two composites.
>>> from satpy import Scene
>>> from satpy.composites import BackgroundCompositor
>>> compositor = BackgroundCompositor()
>>> clouds = local_scene['ir_cloud_day']
>>> background = local_scene['overview']
>>> composite = compositor([clouds, background])
CategoricalDataCompositor
CategoricalDataCompositor
can be used to recategorize categorical data. This is for example useful to
combine comparable categories into a common category. The category remapping from data to composite is done
using a look-up-table (lut):
composite = [[lut[data[0,0]], lut[data[0,1]], ..., lut[data[0,Nj]]],
             [lut[data[1,0]], lut[data[1,1]], ..., lut[data[1,Nj]]],
             ...,
             [lut[data[Ni,0]], lut[data[Ni,1]], ..., lut[data[Ni,Nj]]]]
Hence, lut must have a length that is greater than the maximum value in data in order to avoid an IndexError. Below is an example of how to create a binary clear-sky/cloud mask from a pseudo cloud type product with six categories representing clear sky (cat1/cat5), cloudy features (cat2-cat4) and missing/undefined data (cat0):
>>> cloud_type = local_scene['cloud_type'] # 0 - cat0, 1 - cat1, 2 - cat2, 3 - cat3, 4 - cat4, 5 - cat5,
# categories: 0 1 2 3 4 5
>>> lut = [np.nan, 0, 1, 1, 1, 0]
>>> compositor = CategoricalDataCompositor('binary_cloud_mask', lut=lut)
>>> composite = compositor([cloud_type]) # 0 - cat1/cat5, 1 - cat2/cat3/cat4, nan - cat0
Creating composite configuration files
To save the custom composite, follow the Component Configuration
documentation. Once your component configuration directory is created
you can create your custom composite YAML configuration files.
Compositors that can be used for multiple instruments can be placed in the
generic $SATPY_CONFIG_PATH/composites/visir.yaml
file. Composites that
are specific to one sensor should be placed in
$SATPY_CONFIG_PATH/composites/<sensor>.yaml
. Custom enhancements for your new
composites can be stored in $SATPY_CONFIG_PATH/enhancements/generic.yaml
or
$SATPY_CONFIG_PATH/enhancements/<sensor>.yaml
.
With that, you should be able to load your new composite directly. Example configuration files can be found in the satpy repository as well as a few simple examples below.
Simple RGB composite
This is the overview composite shown in the first code example above, using GenericCompositor:
sensor_name: visir

composites:
  overview:
    compositor: !!python/name:satpy.composites.GenericCompositor
    prerequisites:
      - 0.6
      - 0.8
      - 10.8
    standard_name: overview
For an instrument-specific version (here MSG/SEVIRI), we should use the channel names instead of wavelengths. Note also that the sensor_name is now a combination of visir and seviri, which means that it extends the generic visir composites:
sensor_name: visir/seviri

composites:
  overview:
    compositor: !!python/name:satpy.composites.GenericCompositor
    prerequisites:
      - VIS006
      - VIS008
      - IR_108
    standard_name: overview
In the following examples only the composite recipes are shown; the header information (sensor_name, composites) and indentation need to be added.
Using modifiers
In many cases the basic datasets that go into the composite need to be adjusted, e.g. for Solar zenith angle normalization. These modifiers can be applied in the following way:
overview:
  compositor: !!python/name:satpy.composites.GenericCompositor
  prerequisites:
    - name: VIS006
      modifiers: [sunz_corrected]
    - name: VIS008
      modifiers: [sunz_corrected]
    - IR_108
  standard_name: overview
Here we see two changes:
channels with modifiers need to have either name or wavelength added in front of the channel name or wavelength, respectively
a list of modifiers attached to the dictionary defining the channel
The modifier above is a built-in that normalizes the Solar zenith angle to Sun being directly at the zenith.
More examples can be found in Satpy source code directory satpy/etc/composites.
See the Modifiers documentation for more information on available built-in modifiers.
Using other composites
Often it is handy to use other composites as a part of the composite. In this example we have one composite that relies on solar channels on the day side, and another for the night side:
natural_with_night_fog:
  compositor: !!python/name:satpy.composites.DayNightCompositor
  prerequisites:
    - natural_color
    - night_fog
  standard_name: natural_with_night_fog
This compositor has three additional keyword arguments that can be defined (shown with the default values, thus identical result as above):
natural_with_night_fog:
  compositor: !!python/name:satpy.composites.DayNightCompositor
  prerequisites:
    - natural_color
    - night_fog
  lim_low: 85.0
  lim_high: 88.0
  day_night: "day_night"
  standard_name: natural_with_night_fog
Defining other composites in-line
It is also possible to define sub-composites in-line. This example is the built-in airmass composite:
airmass:
  compositor: !!python/name:satpy.composites.GenericCompositor
  prerequisites:
    - compositor: !!python/name:satpy.composites.DifferenceCompositor
      prerequisites:
        - wavelength: 6.2
        - wavelength: 7.3
    - compositor: !!python/name:satpy.composites.DifferenceCompositor
      prerequisites:
        - wavelength: 9.7
        - wavelength: 10.8
    - wavelength: 6.2
  standard_name: airmass
Using a pre-made image as a background
Below is an example composite config using
StaticImageCompositor
, DayNightCompositor
,
CloudCompositor
and BackgroundCompositor
to show how to create a composite with blended day/night imagery as the background for clouds. As the images are in PNG format, and thus not georeferenced, the name of the area definition for the background images is given. When using GeoTIFF images the area parameter can be left out.
Note
The background blending uses the current time if there are no timestamps in the image filenames.
clouds_with_background:
  compositor: !!python/name:satpy.composites.BackgroundCompositor
  standard_name: clouds_with_background
  prerequisites:
    - ir_cloud_day
    - compositor: !!python/name:satpy.composites.DayNightCompositor
      prerequisites:
        - static_day
        - static_night

static_day:
  compositor: !!python/name:satpy.composites.StaticImageCompositor
  standard_name: static_day
  filename: /path/to/day_image.png
  area: euro4

static_night:
  compositor: !!python/name:satpy.composites.StaticImageCompositor
  standard_name: static_night
  filename: /path/to/night_image.png
  area: euro4
To ensure that the images aren't auto-stretched and possibly altered, the following should be added to the enhancement configuration (assuming an 8-bit image) for both of the static images:
static_day:
  standard_name: static_day
  operations:
    - name: stretch
      method: !!python/name:satpy.enhancements.stretch
      kwargs:
        stretch: crude
        min_stretch: [0, 0, 0]
        max_stretch: [255, 255, 255]
Enhancing the images
After the composite is defined and created, it needs to be converted
to an image. To do this, it is necessary to describe how the data
values are mapped to values stored in the image format. This
procedure is called stretching
, and in Satpy it is implemented by
enhancements
.
The first step is to convert the composite to an
XRImage
object:
>>> from satpy.writers import to_image
>>> img = to_image(composite)
Now it is possible to apply enhancements available in the class:
>>> img.invert([False, False, True])
>>> img.stretch("linear")
>>> img.gamma(1.7)
And finally either show or save the image:
>>> img.show()
>>> img.save('image.tif')
As pointed out in the composite section, it is better to define
frequently used enhancements in configuration files under
$SATPY_CONFIG_PATH/enhancements/
. The enhancements can either be in
generic.yaml
or instrument-specific file (e.g., seviri.yaml
).
The above enhancement can be written (with the headers necessary for the file) as:
enhancements:
  overview:
    standard_name: overview
    operations:
      - name: inverse
        method: !!python/name:satpy.enhancements.invert
        args: [False, False, True]
      - name: stretch
        method: !!python/name:satpy.enhancements.stretch
        kwargs:
          stretch: linear
      - name: gamma
        method: !!python/name:satpy.enhancements.gamma
        kwargs:
          gamma: [1.7, 1.7, 1.7]
Warning
If you define a composite with no matching enhancement, Satpy will by
default apply the stretch_linear()
enhancement with
cutoffs of 0.5% and 99.5%. If you want no enhancement at all (maybe you
are enhancing a composite based on DayNightCompositor
where
the components have their own enhancements defined), you need to define
an enhancement that does nothing:
enhancements:
  day_x:
    standard_name: day_x
    operations: []
It is recommended to define an enhancement even if you intend to use the default, in case the default should change in future versions of Satpy.
More examples can be found in Satpy source code directory
satpy/etc/enhancements/generic.yaml
.
See the Enhancements documentation for more information on available built-in enhancements.
Modifiers
Modifiers are filters applied to datasets prior to computing composites. They take at least one input (a dataset) and have exactly one output (the same dataset, modified). They can take additional input datasets or parameters.
Modifiers are defined in composites files in etc/composites
within
$SATPY_CONFIG_PATH
.
The instruction to use a certain modifier can be contained in a composite definition or in a reader definition. If it is defined in a composite definition, it is applied upon constructing the composite.
When using built-in composites, Satpy users do not need to understand the mechanics of modifiers, as they are applied automatically. The Composites documentation contains information on how to apply modifiers when creating new composites.
Some readers read data where certain modifiers are already applied. Here, the reader definition will refer to the Satpy modifier. This marking adds the modifier to the metadata to prevent it from being applied again upon composite calculation.
Commonly used modifiers are listed in the table below. Further details on those modifiers can be found in the linked API documentation.
Label | Class | Description
---|---|---
sunz_corrected | SunZenithCorrector | Modifies solar channels for the solar zenith angle to provide smoother images.
effective_solar_pathlength_corrected | EffectiveSolarPathLengthCorrector | Modifies solar channels for the atmospheric path length of solar radiation.
nir_reflectance | NIRReflectance | Calculates the reflective part of channels at the edge of solar and terrestrial radiation (3.7 µm or 3.9 µm).
nir_emissive | NIREmissivePart | Calculates the emissive part of channels at the edge of solar and terrestrial radiation (3.7 µm or 3.9 µm).
rayleigh_corrected | PSPRayleighReflectance | Modifies solar channels to filter out the visual impact of Rayleigh scattering.
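For instance, a modifier from the table can be requested explicitly when loading a channel through a query object (a minimal sketch; the modifier must be defined for the sensor in use):

>>> from satpy import DataQuery
>>> scn.load([DataQuery(name='VIS006', modifiers=('sunz_corrected',))])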
A complete list can be found in the etc/composites
source code and in the modifiers
module documentation.
Parallax correction
Warning
The Satpy parallax correction is experimental and subject to change.
Since version 0.37 (mid 2022), Satpy has included a
modifier for parallax correction, implemented in the
ParallaxCorrectionModifier
class.
This modifier is important for some applications, but not applied
by default to any Satpy datasets or composites, because it can be
applied to any input dataset and used with any source of (cloud top)
height. Therefore, users wishing to apply the parallax correction
semi-automagically have to define their own modifier and then apply
that modifier for their datasets. An example is included
with the ParallaxCorrectionModifier
API documentation. Note that Satpy cannot apply modifiers to
composites, so users wishing to apply parallax correction to a composite
will have to use a lower level API or duplicate an existing composite
recipe to use modified inputs.
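As a rough sketch of the direct usage (the argument names, values, and input order shown here are assumptions for illustration; the authoritative example is in the API documentation):

>>> from satpy.modifiers.parallax import ParallaxCorrectionModifier
>>> modif = ParallaxCorrectionModifier(name="parallax_corrected_dataset",
...                                    prerequisites=[])
>>> # assumed input order: the dataset to correct, then the cloud top height
>>> corrected = modif([scn["IR_108"], scn["ctth_alti"]])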
The parallax correction is calculated directly from the cloud top height. Information on the satellite position is obtained from the cloud top height metadata. If no orbital parameters are present in the cloud top height metadata, Satpy will attempt to calculate them from the platform name and start time. This backup calculation requires skyfield and astropy to be installed. If the metadata include neither orbital parameters nor platform name and start time, the parallax calculation will fail. Because the cloud top height metadata are used, it is essential that the cloud top height data are derived from the same platform as the measurements being corrected.
The parallax error moves clouds away from the observer. Therefore, the parallax correction shifts clouds in the direction of the observer. The space left behind by the cloud will be filled with fill values. As the cloud is shifted toward the observer, it may occupy less pixels than before, because pixels closer to the observer have a smaller surface area. It can also be deformed (a “rectangular” cloud may get the shape of a parallelogram).

SEVIRI view of southern Sweden, 2021-11-30 12:15Z, without parallax correction.
This is the natural_color
composite as built into Satpy.

The same satellite view with parallax correction. The most obvious changes are the gaps left behind by the parallax correction, shown as black pixels. Otherwise it shows that clouds have “moved” south-south-west in the direction of the satellite. To view the images side by side or alternating, see the figshare page.
The utility function get_surface_parallax_displacement() can be used to calculate the magnitude of the parallax error. For a cloud with a cloud top height of 10 km:

Magnitude of the parallax error for a fictitious cloud with a cloud top height of 10 km for the GOES-East (GOES-16) full disc.
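A sketch of calling the function directly (the module path, argument order, and units shown are assumptions; consult the API documentation):

>>> from satpy.modifiers.parallax import get_surface_parallax_displacement
>>> # satellite lon/lat [deg], satellite altitude [m], pixel lon/lat [deg], cloud top height [m]
>>> disp = get_surface_parallax_displacement(-75.2, 0.0, 35_786_400, -60.0, 40.0, 10_000)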
The parallax correction is currently experimental and subject to change. Although it is covered by tests, there may be cases that yield unexpected or incorrect results. It does not yet perform any checks that the provided (cloud top) height covers the area of the dataset for which the parallax correction shall be applied.
For more general background information and web routines related to the parallax effect, see also this collection at the CIMSS website: https://cimss.ssec.wisc.edu/goes/webapps/parallax/.
New in version 0.37.
Resampling
Resampling in Satpy.
Satpy provides multiple resampling algorithms for resampling geolocated
data to uniform projected grids. The easiest way to perform resampling in
Satpy is through the Scene
object’s
resample()
method. Additional utility functions are
also available to assist in resampling data. Below is more information on
resampling with Satpy as well as links to the relevant API documentation for
available keyword arguments.
Resampling algorithms
Resampler | Description | Related
---|---|---
nearest | Nearest Neighbor |
ewa | Elliptical Weighted Averaging |
ewa_legacy | Elliptical Weighted Averaging (Legacy) |
native | Native |
bilinear | Bilinear |
bucket_avg | Average Bucket Resampling |
bucket_sum | Sum Bucket Resampling |
bucket_count | Count Bucket Resampling |
bucket_fraction | Fraction Bucket Resampling |
gradient_search | Gradient Search Resampling |
The resampling algorithm used can be specified with the resampler
keyword
argument and defaults to nearest
:
>>> scn = Scene(...)
>>> euro_scn = scn.resample('euro4', resampler='nearest')
Warning
Some resampling algorithms expect certain forms of data. For example, the EWA resampling expects polar-orbiting swath data and prefers if the data can be broken in to “scan lines”. See the API documentation for a specific algorithm for more information.
Resampling for comparison and composites
While all the resamplers can be used to put datasets of different resolutions on to a common area, the ‘native’ resampler is designed to match datasets to one resolution in the dataset’s original projection. This is extremely useful when generating composites between bands of different resolutions.
>>> new_scn = scn.resample(resampler='native')
By default this resamples to the
highest resolution area
(smallest footprint per
pixel) shared between the loaded datasets. You can easily specify the lowest
resolution area:
>>> new_scn = scn.resample(scn.coarsest_area(), resampler='native')
Providing an area that is neither the minimum nor the maximum resolution area may work, but the behavior is currently undefined.
Caching for geostationary data
Satpy will do its best to reuse calculations performed to resample datasets,
but it can only do this for the current processing and will lose this
information when the process/script ends. Some resampling algorithms, like
nearest and bilinear, can benefit from caching intermediate data on disk in
the directory specified by cache_dir and reusing it the next time. This is
most beneficial with
geostationary satellite data where the locations of the source data and the
target pixels don’t change over time.
>>> new_scn = scn.resample('euro4', cache_dir='/path/to/cache_dir')
See the documentation for specific algorithms to see availability and limitations of caching for that algorithm.
Create custom area definition
See pyresample.geometry.AreaDefinition
for information on creating
areas that can be passed to the resample method:
>>> from pyresample.geometry import AreaDefinition
>>> my_area = AreaDefinition(...)
>>> local_scene = scn.resample(my_area)
Create dynamic area definition
See pyresample.geometry.DynamicAreaDefinition
for more information.
Examples coming soon…
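Until then, a minimal sketch of creating and resampling to a dynamic area (the projection parameters and resolution are illustrative):

>>> from pyresample.geometry import DynamicAreaDefinition
>>> my_area = DynamicAreaDefinition('my_area', 'A dynamic area',
...                                 {'proj': 'laea', 'lat_0': 60, 'lon_0': 10},
...                                 resolution=1000)
>>> local_scene = scn.resample(my_area)  # extent and shape are frozen from the data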
Store area definitions
Area definitions can be saved to a custom YAML file (see pyresample’s writing to disk) and loaded using pyresample’s utility methods (pyresample’s loading from disk):
>>> from pyresample import load_area
>>> my_area = load_area('my_areas.yaml', 'my_area')
Or using satpy.resample.get_area_def()
, which will search through all
areas.yaml
files in your SATPY_CONFIG_PATH
:
>>> from satpy.resample import get_area_def
>>> area_eurol = get_area_def("eurol")
For examples of area definitions, see the file etc/areas.yaml
that is
included with Satpy and where all the area definitions shipped with Satpy are
defined.
Enhancements
Built-in enhancement methods
stretch
The most basic operation is to stretch the image so that the data fits the output format. There are many different ways to stretch the data, which are configured by giving them in the kwargs dictionary, as in the example above. The default, if nothing else is defined, is to apply a linear stretch. For more details, see enhancing the images.
linear
As the name suggests, linear stretch converts the input values to output values in a linear fashion. By default, 0.5% of the data is cut on both ends of the scale, but this can be overridden with the cutoffs argument (cutoffs=(0.005, 0.005)):
- name: stretch
  method: !!python/name:satpy.enhancements.stretch
  kwargs:
    stretch: linear
    cutoffs: [0.003, 0.005]
Note
This enhancement is currently not optimized for dask because it requires getting minimum/maximum information for the entire data array.
crude
The crude stretching is used to limit the input values to a certain range by clipping the data. This is followed by a linear stretch with no cutoffs specified (see above). Example:
- name: stretch
  method: !!python/name:satpy.enhancements.stretch
  kwargs:
    stretch: crude
    min_stretch: [0, 0, 0]
    max_stretch: [100, 100, 100]
It is worth noting that this stretch can also be used to _invert_ the data by giving a larger value to min_stretch than to max_stretch.
histogram
gamma
invert
piecewise_linear_stretch
Use numpy.interp()
to linearly interpolate data to a new range. See
satpy.enhancements.piecewise_linear_stretch()
for more information and examples.
cira_stretch
Logarithmic stretch based on a CIRA recipe.
reinhard_to_srgb
Stretch method based on the Reinhard algorithm, using luminance.
The function includes conversion to sRGB colorspace.
Reinhard, Erik, Michael Stark, Peter Shirley, and James Ferwerda (2002). Photographic Tone Reproduction for Digital Images. ACM Transactions on Graphics, 21(3). doi:10.1145/566654.566575
lookup
colorize
The colorize enhancement can be used to map scaled/calibrated physical values to colors. One or several standard Trollimage color maps may be used as in the example here:
- name: colorize
  method: !!python/name:satpy.enhancements.colorize
  kwargs:
    palettes:
      - {colors: spectral, min_value: 193.15, max_value: 253.149999}
      - {colors: greys, min_value: 253.15, max_value: 303.15}
It is also possible to provide your own custom defined color mapping by specifying a list of RGB values and the corresponding min and max values between which to apply the colors. This is for instance a common use case for Sea Surface Temperature (SST) imagery, as in this example with the EUMETSAT Ocean and Sea Ice SAF (OSISAF) GHRSST product:
- name: osisaf_sst
  method: !!python/name:satpy.enhancements.colorize
  kwargs:
    palettes:
      - colors: [
          [255, 0, 255],
          [195, 0, 129],
          [129, 0, 47],
          [195, 0, 0],
          [255, 0, 0],
          [236, 43, 0],
          [217, 86, 0],
          [200, 128, 0],
          [211, 154, 13],
          [222, 180, 26],
          [233, 206, 39],
          [244, 232, 52],
          [255.99609375, 255.99609375, 63.22265625],
          [203.125, 255.99609375, 52.734375],
          [136.71875, 255.99609375, 27.34375],
          [0, 255.99609375, 0],
          [0, 207.47265625, 0],
          [0, 158.94921875, 0],
          [0, 110.42578125, 0],
          [0, 82.8203125, 63.99609375],
          [0, 55.21484375, 127.9921875],
          [0, 27.609375, 191.98828125],
          [0, 0, 255.99609375],
          [100.390625, 100.390625, 255.99609375],
          [150.5859375, 150.5859375, 255.99609375]]
        min_value: 296.55
        max_value: 273.55
The RGB color values will be interpolated to give a smooth result. This is contrary to using the palettize enhancement.
If the source dataset already defines a palette, this can be applied directly.
This requires that the palette is listed as an auxiliary variable and loaded
as such by the reader. To apply such a palette directly, pass the dataset
keyword. For example:
- name: colorize
  method: !!python/name:satpy.enhancements.colorize
  kwargs:
    palettes:
      - dataset: ctth_alti_pal
        color_scale: 255
Warning
If the source data have a valid range defined, one should not define
min_value
and max_value
in the enhancement configuration! If
those are defined and differ from the values in the valid range, the
colors will be wrong.
The above examples are just three different ways to apply colors to images with
Satpy. There is a wealth of other options for how to declare a colormap, please
see create_colormap()
for more inspiration.
palettize
three_d_effect
The three_d_effect enhancement adds a 3D look to an image by convolving it with a 3x3 kernel. The user can adjust the strength of the effect by setting the weight (default: 1.0). Example:
- name: 3d_effect
  method: !!python/name:satpy.enhancements.three_d_effect
  kwargs:
    weight: 1.0
btemp_threshold
Writing
Satpy makes it possible to save datasets in multiple formats, with writers designed to save in a given format.
For details on additional arguments and features available for a specific Writer see the table below.
Most use cases will want to save datasets using the
save_datasets()
method:
>>> scn.save_datasets(writer="simple_image")
The writer
parameter defaults to using the geotiff
writer.
One common parameter across almost all Writers is filename
and
base_dir
to help automate saving files with custom filenames:
>>> scn.save_datasets(
... filename="{name}_{start_time:%Y%m%d_%H%M%S}.tif",
... base_dir="/tmp/my_output_dir")
Changed in version 0.10: The file_pattern keyword argument was renamed to filename to match the save_dataset method's keyword argument.
Description | Writer name | Status | Examples
---|---|---|---
GeoTIFF | geotiff | Nominal |
Simple Image (PNG, JPEG, etc) | simple_image | Nominal |
NinJo TIFF (using pyninjotiff) | ninjotiff | Deprecated from NinJo 7 (use ninjogeotiff) |
NetCDF (Standard CF) | cf | Beta |
AWIPS II Tiled NetCDF4 | awips_tiled | Beta |
GeoTIFF with NinJo tags (from NinJo 7) | ninjogeotiff | Beta |
Available Writers
To get a list of available writers use the available_writers function:
>>> from satpy import available_writers
>>> available_writers()
Colorizing and Palettizing using user-supplied colormaps
Note
In the future this functionality will be added to the Scene object.
It is possible to create single channel “composites” that are then colorized using users’ own colormaps. The colormaps are Numpy arrays with shape (num, 3); see the example below for how to create the mapping file(s).
This example creates a 2-color colormap, and the colors are interpolated between the defined temperature ranges. Beyond those limits the image is clipped to the specified colors.
>>> import numpy as np
>>> from satpy.composites import BWCompositor
>>> from satpy.enhancements import colorize
>>> from satpy.writers import to_image
>>> arr = np.array([[0, 0, 0], [255, 255, 255]])
>>> np.save("/tmp/binary_colormap.npy", arr)
>>> compositor = BWCompositor("test", standard_name="colorized_ir_clouds")
>>> composite = compositor((local_scene[10.8], ))
>>> img = to_image(composite)
>>> kwargs = {"palettes": [{"filename": "/tmp/binary_colormap.npy",
... "min_value": 223.15, "max_value": 303.15}]}
>>> colorize(img, **kwargs)
>>> img.show()
Similarly, it is possible to use discrete values without color interpolation by using palettize() instead of colorize(), as shown below.
You can define several colormaps and ranges in the palettes list and they are merged together. See the trollimage documentation for more information on how colormaps and color ranges are merged.
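For example, a minimal sketch reusing the colormap file and image from the colorize example above:
>>> from satpy.enhancements import palettize
>>> kwargs = {"palettes": [{"filename": "/tmp/binary_colormap.npy",
...                         "min_value": 223.15, "max_value": 303.15}]}
>>> palettize(img, **kwargs)
>>> img.show()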
The above example can be used in enhancements YAML config like this:
hot_or_cold:
standard_name: hot_or_cold
operations:
- name: colorize
method: &colorizefun !!python/name:satpy.enhancements.colorize ''
kwargs:
palettes:
- {filename: /tmp/binary_colormap.npy, min_value: 223.15, max_value: 303.15}
Saving multiple Scenes in one go
As mentioned earlier, it is possible to save Scene datasets directly using the save_datasets() method. However, sometimes it is beneficial to collect multiple Scenes together and process and save them all at once.
>>> from satpy.writers import compute_writer_results
>>> res1 = scn.save_datasets(filename="/tmp/{name}.png",
... writer="simple_image",
... compute=False)
>>> res2 = scn.save_datasets(filename="/tmp/{name}.tif",
... writer="geotiff",
... compute=False)
>>> results = [res1, res2]
>>> compute_writer_results(results)
Adding text to images
Satpy, via pydecorate, can add text to images when they’re being saved. To use this functionality, you must create a dictionary describing the text to be added.
>>> decodict = {"decorate": [{"text": {"txt": "my_text",
... "align": {"top_bottom": "top", "left_right": "left"},
... "font": <path_to_font>,
... "font_size": 48,
... "line": "white",
... "bg_opacity": 255,
... "bg": "black",
... "height": 30,
... }}]}
Where my_text is the text you wish to add and <path_to_font> is the location of the font file you wish to use, often in /usr/share/fonts/.
This dictionary can then be passed to the save_dataset() or save_datasets() command:
>>> scene.save_dataset(my_dataset, writer="simple_image", fill_value=False,
... decorate=decodict)
MultiScene (Experimental)
Scene objects in Satpy are meant to represent a single geographic region at a specific single instant in time or range of time. This means they are not suited for handling multiple orbits of polar-orbiting satellite data, multiple time steps of geostationary satellite data, or other special data cases. To handle these cases Satpy provides the MultiScene class. The below examples will walk through some basic use cases of the MultiScene.
Warning
These features are still early in development and may change over time as more user feedback is received and more features are added.
MultiScene Creation
There are two ways to create a MultiScene. The first is to manually create and provide the Scene objects:
>>> from satpy import Scene, MultiScene
>>> from glob import glob
>>> scenes = [
... Scene(reader='viirs_sdr', filenames=glob('/data/viirs/day_1/*t180*.h5')),
... Scene(reader='viirs_sdr', filenames=glob('/data/viirs/day_2/*t180*.h5'))
... ]
>>> mscn = MultiScene(scenes)
>>> mscn.load(['I04'])
The second is to use the MultiScene.from_files class method to create a MultiScene from a series of files. This uses the group_files() utility function to group files by start time or other filename parameters:
>>> from satpy import MultiScene
>>> from glob import glob
>>> mscn = MultiScene.from_files(glob('/data/abi/day_1/*C0[12]*.nc'), reader='abi_l1b')
>>> mscn.load(['C01', 'C02'])
New in version 0.12: The from_files and group_files functions were added in Satpy 0.12. See below for an alternative solution.
For older versions of Satpy we can manually create the Scene objects used. The glob() function and for loops are used to group files into Scene objects that, if used individually, could load the data we want. The code below is equivalent to the from_files code above:
>>> from satpy import Scene, MultiScene
>>> from glob import glob
>>> scene_files = []
>>> for time_step in ['1800', '1810', '1820', '1830']:
... scene_files.append(glob('/data/abi/day_1/*C0[12]*s???????{}*.nc'.format(time_step)))
>>> scenes = [
... Scene(reader='abi_l1b', filenames=files) for files in sorted(scene_files)
... ]
>>> mscn = MultiScene(scenes)
>>> mscn.load(['C01', 'C02'])
Blending Scenes in MultiScene
Scenes contained in a MultiScene can be combined in different ways.
Stacking scenes
The code below uses the blend() method of the MultiScene object to stack two separate orbits from a VIIRS sensor. By default the blend method will use the stack() function, which uses the first dataset as the base of the image and then iteratively overlays the remaining datasets on top.
>>> from satpy import Scene, MultiScene
>>> from glob import glob
>>> from pyresample.geometry import AreaDefinition
>>> my_area = AreaDefinition(...)
>>> scenes = [
... Scene(reader='viirs_sdr', filenames=glob('/data/viirs/day_1/*t180*.h5')),
... Scene(reader='viirs_sdr', filenames=glob('/data/viirs/day_2/*t180*.h5'))
... ]
>>> mscn = MultiScene(scenes)
>>> mscn.load(['I04'])
>>> new_mscn = mscn.resample(my_area)
>>> blended_scene = new_mscn.blend()
>>> blended_scene.save_datasets()
Stacking scenes using weights
It is also possible to blend scenes together in a more sophisticated manner using pixel-based weighting instead of just stacking the scenes on top of each other as described above. This can for instance be useful to make a cloud parameter (cover, height, etc) composite combining cloud parameters derived from both geostationary and polar-orbiting satellite data close in time and over a given area. This is particularly useful at high latitudes, where geostationary data degrade quickly with latitude and polar data are more frequent.
This weighted blending can be accomplished via the use of the builtin partial() function (see Partial) and the default stack() function. The stack() function can take the optional argument weights (default: None), which should be a sequence (of length equal to the number of scenes being blended) of arrays with pixel weights.
The code below gives an example of how two cloud scenes can be blended using the satellite zenith angles to weight which pixels to take from each of the two scenes. The idea is that the reliability of the cloud parameter is higher when the satellite zenith angle is small.
>>> from satpy import Scene, MultiScene, DataQuery
>>> from functools import partial
>>> from glob import glob
>>> from satpy.multiscene import stack
>>> from satpy.resample import get_area_def
>>> areaid = get_area_def("myarea")
>>> geo_scene = Scene(filenames=glob('/data/to/nwcsaf/geo/files/*nc'), reader='nwcsaf-geo')
>>> geo_scene.load(['ct'])
>>> polar_scene = Scene(filenames=glob('/data/to/nwcsaf/pps/noaa18/files/*nc'), reader='nwcsaf-pps_nc')
>>> polar_scene.load(['cma', 'ct'])
>>> mscn = MultiScene([geo_scene, polar_scene])
>>> groups = {DataQuery(name='CTY_group'): ['ct']}
>>> mscn.group(groups)
>>> resampled = mscn.resample(areaid, reduce_data=False)
>>> weights = [1./geo_satz, 1./n18_satz]  # precomputed satellite zenith angle arrays
>>> stack_with_weights = partial(stack, weights=weights)
>>> blended = resampled.blend(blend_function=stack_with_weights)
>>> blended.save_dataset('CTY_group', filename='./blended_stack_weighted_geo_polar.nc')
Grouping Similar Datasets
By default, MultiScene only operates on datasets shared by all scenes. Use the group() method to specify groups of datasets that shall be treated equally by MultiScene, even if their names or wavelengths are different.
Example: Stacking scenes from multiple geostationary satellites acquired at roughly the same time. First, create scenes and load datasets individually:
>>> from satpy import Scene
>>> from glob import glob
>>> h8_scene = Scene(filenames=glob('/data/HS_H08_20200101_1200*'),
...                  reader='ahi_hsd')
>>> h8_scene.load(['B13'])
>>> g16_scene = Scene(filenames=glob('/data/OR_ABI*s20200011200*.nc'),
...                   reader='abi_l1b')
>>> g16_scene.load(['C13'])
>>> met10_scene = Scene(filenames=glob('/data/H-000-MSG4*-202001011200-__'),
...                     reader='seviri_l1b_hrit')
>>> met10_scene.load(['IR_108'])
Now create a MultiScene and group the three similar IR channels together:
>>> from satpy import MultiScene, DataQuery
>>> mscn = MultiScene([h8_scene, g16_scene, met10_scene])
>>> groups = {DataQuery('IR_group', wavelength=(10, 11, 12)): ['B13', 'C13', 'IR_108']}
>>> mscn.group(groups)
Finally, resample the datasets to a common grid and blend them together:
>>> from pyresample.geometry import AreaDefinition
>>> my_area = AreaDefinition(...)
>>> resampled = mscn.resample(my_area, reduce_data=False)
>>> blended = resampled.blend() # you can also use a custom blend function
You can access the results via blended['IR_group'].
Timeseries
Using the blend() method with the timeseries() function will combine multiple scenes from different time slots by time. A single Scene with each dataset/channel extended by the time dimension will be returned. If used together with the to_geoviews() method, creation of interactive timeseries Bokeh plots is possible.
>>> from satpy import Scene, MultiScene
>>> from satpy.multiscene import timeseries
>>> from glob import glob
>>> from pyresample.geometry import AreaDefinition
>>> my_area = AreaDefinition(...)
>>> scenes = [
... Scene(reader='viirs_sdr', filenames=glob('/data/viirs/day_1/*t180*.h5')),
... Scene(reader='viirs_sdr', filenames=glob('/data/viirs/day_2/*t180*.h5'))
... ]
>>> mscn = MultiScene(scenes)
>>> mscn.load(['I04'])
>>> new_mscn = mscn.resample(my_area)
>>> blended_scene = new_mscn.blend(blend_function=timeseries)
>>> blended_scene['I04']
<xarray.DataArray (time: 2, y: 1536, x: 6400)>
dask.array<shape=(2, 1536, 6400), dtype=float64, chunksize=(1, 1536, 4096)>
Coordinates:
* time (time) datetime64[ns] 2012-02-25T18:01:24.570942 2012-02-25T18:02:49.975797
Dimensions without coordinates: y, x
Saving frames of an animation
The MultiScene can take “frames” of data and join them together in a single animation movie file. Saving animations requires the imageio python library, and for most available formats the ffmpeg command line tool suite should also be installed. The below example saves a series of GOES-EAST ABI channel 1 and channel 2 frames to MP4 movie files.
>>> from satpy import Scene, MultiScene
>>> from glob import glob
>>> mscn = MultiScene.from_files(glob('/data/abi/day_1/*C0[12]*.nc'), reader='abi_l1b')
>>> mscn.load(['C01', 'C02'])
>>> mscn.save_animation('{name}_{start_time:%Y%m%d_%H%M%S}.mp4', fps=2)
This will compute one video frame (image) at a time and write it to the MPEG-4 video file. For users with more powerful systems it is possible to use the client and batch_size keyword arguments to compute multiple frames in parallel using the dask distributed library (if installed). See the dask distributed documentation for information on creating a Client object. If working on a cluster you may want to use dask jobqueue to take advantage of multiple nodes at a time.
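For example, a sketch of parallel frame computation with a local dask distributed cluster (the batch_size value here is an arbitrary choice; tune it for your system):
>>> from dask.distributed import Client
>>> client = Client()  # local cluster; adjust workers/memory as needed
>>> mscn.save_animation('{name}_{start_time:%Y%m%d_%H%M%S}.mp4', fps=2,
...                     client=client, batch_size=4)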
It is possible to add an overlay or decoration to each frame of an animation. For text added as a decoration, string substitution will be applied based on the attributes of the dataset, for example:
>>> mscn.save_animation(
... "{name:s}_{start_time:%Y%m%d_%H%M}.mp4",
... enh_args={
... "decorate": {
... "decorate": [
... {"text": {
... "txt": "time {start_time:%Y-%m-%d %H:%M}",
... "align": {
... "top_bottom": "bottom",
... "left_right": "right"},
... "font": '/usr/share/fonts/truetype/arial.ttf',
... "font_size": 20,
... "height": 30,
... "bg": "black",
... "bg_opacity": 255,
... "line": "white"}}]}})
If your files cover ABI MESO data for an hour for channel 2, lasting from 2020-04-12 01:00-01:59, then the output file will be called C02_20200412_0100.mp4 (because the first dataset/frame corresponds to an image that started to be taken at 01:00), consist of sixty frames (one per minute for MESO data), and each frame will have the start time for that frame, floored to the minute, blended into the frame. Note that this text is “burned” into the video and cannot be switched on or off later.
Warning
GIF images, although supported, are not recommended due to the large file sizes that can be produced from only a few frames.
Saving multiple scenes
The MultiScene object includes a save_datasets() method for saving the data from multiple Scenes to disk. By default this will operate on one Scene at a time, but similar to the save_animation method above this method can accept a dask distributed Client object via the client keyword argument to compute scenes in parallel (see documentation above). Note however that some writers, like the geotiff writer, do not support multi-process operations at this time and will fail when used with dask distributed. To save multiple Scenes use:
>>> from satpy import Scene, MultiScene
>>> from glob import glob
>>> mscn = MultiScene.from_files(glob('/data/abi/day_1/*C0[12]*.nc'), reader='abi_l1b')
>>> mscn.load(['C01', 'C02'])
>>> mscn.save_datasets(base_dir='/path/for/output')
Combining multiple readers
New in version 0.23.
The from_files() constructor can automatically combine multiple readers into a single MultiScene. It is no longer necessary for the user to create the Scene objects themselves. For example, you can combine Advanced Baseline Imager (ABI) and Global Lightning Mapper (GLM) measurements.
Constructing a multi-reader MultiScene requires more parameters than a single-reader MultiScene, because Satpy cannot reliably guess how to group files belonging to different instruments. Consider, for example, creating a video with lightning superimposed on ABI channel 14 (11.2 µm) using the built-in composite C14_flash_extent_density. This composite superimposes flash extent density from GLM (read with the NCGriddedGLML2 or glm_l2 reader) on ABI channel 14 data (read with the NC_ABI_L1B or abi_l1b reader), and therefore needs Scene objects that combine both readers:
>>> import glob
>>> import satpy
>>> glm_dir = "/path/to/GLMC/"
>>> abi_dir = "/path/to/ABI/"
>>> ms = satpy.MultiScene.from_files(
... glob.glob(glm_dir + "OR_GLM-L2-GLMC-M3_G16_s202010418*.nc") +
... glob.glob(abi_dir + "C*/OR_ABI-L1b-RadC-M6C*_G16_s202010418*_e*_c*.nc"),
... reader=["glm_l2", "abi_l1b"],
... ensure_all_readers=True,
... group_keys=["start_time"],
... time_threshold=30)
>>> ms.load(["C14_flash_extent_density"])
>>> ms = ms.resample(ms.first_scene["C14"].attrs["area"])
>>> ms.save_animation("/path/for/output/{name:s}_{start_time:%Y%m%d_%H%M}.mp4")
In this example, we pass to from_files() the additional parameters ensure_all_readers=True, group_keys=["start_time"], time_threshold=30 so we only get scenes at times where both ABI and GLM have a file starting within 30 seconds of each other, and ignore all other differences for the purposes of grouping the two. For this example, the ABI files occur every 5 minutes but the GLM files (processed with glmtools) every minute. Scenes where there is a GLM file without an ABI file starting within at most ±30 seconds are skipped. The group_keys and time_threshold keyword arguments are processed by the group_files() function, illustrated below. The heavy work of blending the two instruments together is performed by the BackgroundCompositor class through the “C14_flash_extent_density” composite.
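To inspect how this grouping behaves before building a full MultiScene, you can call group_files() directly. A minimal sketch (the file list here is assumed to exist; the glob patterns are illustrative):
>>> from satpy.readers import group_files
>>> all_files = (glob.glob(glm_dir + "*.nc") +
...              glob.glob(abi_dir + "C*/*.nc"))
>>> groups = group_files(all_files, reader=["glm_l2", "abi_l1b"],
...                      group_keys=["start_time"], time_threshold=30)
>>> len(groups)  # number of time groups found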
Developer’s Guide
The below sections will walk through how to set up a development environment, make changes to the code, and test that they work. See the How to contribute section for more information on getting started and contributor expectations. Additional information for developers can be found at the pages listed below.
How to contribute
Thank you for considering contributing to Satpy! Satpy’s development team is made up of volunteers so any help we can get is very appreciated.
Contributions from users are what keep this community going. We welcome any contributions including bug reports, documentation fixes or updates, bug fixes, and feature requests. By contributing to Satpy you are providing code that everyone can use and benefit from.
The following guidelines will describe how the Satpy project structures its code contributions from discussion to code to package release.
For more information on contributing to open source projects see GitHub’s Guide.
What can I do?
Make sure you have a GitHub account.
Submit a ticket for your issue, assuming one does not already exist.
If you’re uncomfortable using Git/GitHub, see Learn Git Branching or other online tutorials.
If you are uncomfortable contributing to an open source project see:
How to Contribute to an Open Source Project on GitHub video series
Aaron Meurer’s Git Workflow
See what issues already exist. Issues marked good first issue or help wanted can be good issues to start with.
Read the Developer’s Guide for more details on contributing code.
Fork the repository on GitHub and install the package in development mode.
Update the Satpy documentation to make it clearer and more detailed.
Contribute code to either fix a bug or add functionality and submit a Pull Request.
Make an example Jupyter Notebook and add it to the available examples.
What if I break something?
Not possible. If something breaks because of your contribution it was our fault. When you submit your changes to be merged as a GitHub Pull Request they will be automatically tested and checked against coding style rules. Before they are merged they are reviewed by at least one maintainer of the Satpy project. If anything needs updating, we’ll let you know.
What is expected?
You can expect the Satpy maintainers to help you. We are all volunteers, have jobs, and occasionally go on vacations. We will try our best to answer your questions as soon as possible. We will try our best to understand your use case and add the features you need. Although we strive to make Satpy useful for everyone there may be some feature requests that we can’t allow if they would require breaking existing features. Other features may be best for a different package, PyTroll or otherwise. Regardless, we will help you find the best place for your feature and to make it possible to do what you want.
We, the Satpy maintainers, expect you to be patient, understanding, and respectful of both developers and users. Satpy can only be successful if everyone in the community feels welcome. We also expect you to put in as much work as you expect out of us. There is no dedicated PyTroll or Satpy support team, so there may be times when you need to do most of the work to solve your problem (trying different test cases, environments, etc).
Being respectful includes following the style of the existing code for any code submissions. Please follow PEP8 style guidelines and limit lines of code to 80 characters whenever possible and when it doesn’t hurt readability. Satpy follows Google Style Docstrings for all code API documentation. When in doubt use the existing code as a guide for how coding should be done.
How do I get help?
The Satpy developers (and all other PyTroll package developers) monitor the:
Slack chat (get an invitation)
How do I submit my changes?
Any contributions should start with some form of communication (see above) to let the Satpy maintainers know how you plan to help. The larger the contribution the more important direct communication is so everyone can avoid duplicate code and wasted time. After talking to the Satpy developers any additional work like code or documentation changes can be provided as a GitHub Pull Request.
To make sure that your code complies with the pytroll python standard, you can run the flake8 linter on your changes before you submit them, or even better install a pre-commit hook that runs the style check for you. To this aim, we provide a configuration file for the pre-commit tool, which you can install with e.g.:
pip install pre-commit
pre-commit install
running from your base satpy directory. This will automatically check code style for every commit.
Code of Conduct
Satpy follows the same code of conduct as the PyTroll project. For reference it is copied to this repository in CODE_OF_CONDUCT.md.
As stated in the PyTroll home page, this code of conduct applies to the project space (GitHub) as well as the public space online and offline when an individual is representing the project or the community. Online examples of this include the PyTroll Slack team, mailing list, and the PyTroll twitter account. This code of conduct also applies to in-person situations like PyTroll Contributor Weeks (PCW), conference meet-ups, or any other time when the project is being represented.
Any violations of this code of conduct will be handled by the core maintainers of the project including David Hoese, Martin Raspaud, and Adam Dybbroe. If you wish to report one of the maintainers for a violation and are not comfortable with them seeing it, please contact one or more of the other maintainers to report the violation. Responses to violations will be determined by the maintainers and may include one or more of the following:
Verbal warning
Ask for public apology
Temporary or permanent ban from in-person events
Temporary or permanent ban from online communication (Slack, mailing list, etc)
For details see the official code of conduct document.
Migrating to xarray and dask
Many python developers dealing with meteorological satellite data begin by using NumPy arrays directly. This work usually involves masked arrays, boolean masks, index arrays, and reshaping. Due to the libraries used by Satpy these operations can’t always be done in the same way. This guide acts as a starting point for new Satpy developers transitioning from NumPy’s array operations to Satpy’s operations, although they are very similar.
To provide the most functionality for users, Satpy uses the xarray library’s DataArray object as the main representation for its data. DataArray objects can also benefit from the dask library. The combination of these libraries allows Satpy to easily distribute operations over multiple workers, lazy evaluate operations, and keep track of additional metadata and coordinate information.
XArray
import xarray as xr
XArray's DataArray is now the standard data structure for arrays in satpy. It allows the array to define dimensions, coordinates, and attributes (which we use for metadata).
To create such an array, you can for example do:
my_dataarray = xr.DataArray(my_data, dims=['y', 'x'],
                            coords={'x': np.arange(...)},
                            attrs={'sensor': 'olci'})
where my_data can be a regular numpy array, a numpy memmap, or, if you want to keep things lazy, a dask array (more on dask later). Satpy uses dask arrays with all of its DataArrays.
Dimensions
In satpy, the dimensions of the arrays should include:
x for the x or column or pixel dimension
y for the y or row or line dimension
bands for composites
time can also be provided, but we have limited support for it at the moment. Use metadata for common cases (start_time, end_time)
Dimensions are accessible through my_dataarray.dims. To get the size of a given dimension, use sizes:
my_dataarray.sizes['x']
Coordinates
Coordinates can be defined for those dimensions when it makes sense:
- x and y: Usually defined when the data’s area is an AreaDefinition, and they contain the projection coordinates in x and y.
- bands: Contain the letter of the color they represent, e.g. ['R', 'G', 'B'] for an RGB composite.
This then allows you to select, for example, a single band like this:
red = my_composite.sel(bands='R')
or even multiple bands:
red_and_blue = my_composite.sel(bands=['R', 'B'])
To access the coordinates of the data array, use the following syntax:
x_coords = my_dataarray['x']
my_dataarray['y'] = np.arange(...)
Most of the time, satpy will fill the coordinates for you, so you just need to provide the dimension names.
Attributes
To save metadata, we use the attrs dictionary.
my_dataarray.attrs['platform_name'] = 'Sentinel-3A'
Some metadata that should always be present in our dataarrays:
- area: the area of the dataset. This should be handled in the reader.
- start_time, end_time
- sensor
Operations on DataArrays
DataArrays work with regular arithmetic operations as one would expect of e.g. numpy arrays, with the exception that using an operator on two DataArrays requires both arrays to share the same dimensions, and coordinates if those are defined.
For mathematical functions like cos or log, you can use numpy functions directly and they will return a DataArray object:
import numpy as np
cos_zen = np.cos(zen_xarray)
Masking data
In DataArrays, masked data is represented with NaN values. Hence the default type is float64, but float32 also works in this case. XArray can’t handle masked data for integer data, but in satpy we try to use the special _FillValue attribute (in .attrs) to handle this case. If you come across a case where this isn’t handled properly, contact us.
Masking data from a condition can be done with:
result = my_dataarray.where(my_dataarray > 5)
The result is then analogous to my_dataarray, with values lower than or equal to 5 replaced by NaNs.
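As a small aside (this is standard xarray behavior, not Satpy-specific), where() also accepts a second argument to use a fill value other than NaN:
result = my_dataarray.where(my_dataarray > 5, 0)  # masked values become 0 instead of NaN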
Further reading
http://xarray.pydata.org/en/stable/generated/xarray.DataArray.html#xarray.DataArray
Dask
import dask.array as da
The data arrays underlying the DataArrays we use in satpy are mostly dask Arrays. These allow lazy and chunked operations for efficient processing.
Creation
From a numpy array
To create a dask array from a numpy array, one can call the from_array() function:
darr = da.from_array(my_numpy_array, chunks=4096)
The chunks keyword tells dask the size of a chunk of data. If the numpy array is 3-dimensional, the chunk size provided above means that one chunk will be 4096x4096x4096 elements. To prevent this, one can provide a tuple:
darr = da.from_array(my_numpy_array, chunks=(4096, 1024, 2))
meaning a chunk will be 4096x1024x2 elements in size.
Even more detailed sizes for the chunks can be provided if needed, see the dask documentation.
From memmaps or other lazy objects
To avoid loading the data into memory when creating a dask array, other kinds of arrays can be passed to from_array(). For example, a numpy memmap allows dask to know where the data is, and it will only be loaded when the actual values need to be computed. Another example is an HDF5 variable read with h5py, as sketched below.
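A minimal sketch of both cases (the file names, dtype, and shape here are made up for illustration):
import numpy as np
import h5py
import dask.array as da

# Nothing is read from disk yet; dask only records where the data lives.
mm = np.memmap("my_data.dat", dtype=np.float32, mode="r", shape=(3712, 3712))
lazy_from_memmap = da.from_array(mm, chunks=1024)

h5f = h5py.File("my_file.h5", mode="r")
lazy_from_h5 = da.from_array(h5f["some_variable"], chunks=1024)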
Procedural generation of data
Some procedural generation functions are available in dask, e.g. meshgrid(), arange(), or random.random.
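For instance, a small sketch of lazy array creation (nothing is computed until .compute() is called):
import dask.array as da

x = da.arange(25, chunks=5)                         # lazy integer range
noise = da.random.random((1000, 1000), chunks=100)  # lazy random field
xx, yy = da.meshgrid(x, x)                          # lazy coordinate grids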
From XArray to Dask and back
Certain operations are easiest to perform on dask arrays by themselves, especially when certain functions are only available from the dask library. In these cases you can operate on the dask array beneath the DataArray and create a new DataArray when done. Note that dask arrays do not support in-place operations. In-place operations on xarray DataArrays will reassign the dask array automatically.
dask_arr = my_dataarray.data
dask_arr = dask_arr + 1
# ... other non-xarray operations ...
new_dataarr = xr.DataArray(dask_arr, dims=my_dataarray.dims, attrs=my_dataarray.attrs.copy())
Or if the operation should be assigned back to the original DataArray (if and only if the data is the same size):
my_dataarray.data = dask_arr
Operations and how to get actual results
Regular arithmetic operations are provided, and generate another dask array.
>>> arr1 = da.random.uniform(0, 1000, size=(1000, 1000), chunks=100)
>>> arr2 = da.random.uniform(0, 1000, size=(1000, 1000), chunks=100)
>>> arr1 + arr2
dask.array<add, shape=(1000, 1000), dtype=float64, chunksize=(100, 100)>
In order to compute the actual data during testing, use the
compute()
method.
In normal Satpy operations you will want the data to be evaluated as late as
possible to improve performance so compute should only be used when needed.
>>> (arr1 + arr2).compute()
array([[ 898.08811639, 1236.96107629, 1154.40255292, ...,
1537.50752674, 1563.89278664, 433.92598566],
[ 1657.43843608, 1063.82390257, 1265.08687916, ...,
1103.90421234, 1721.73564104, 1276.5424228 ],
[ 1620.11393216, 212.45816261, 771.99348555, ...,
1675.6561068 , 585.89123159, 935.04366354],
...,
[ 1533.93265862, 1103.33725432, 191.30794159, ...,
520.00434673, 426.49238283, 1090.61323471],
[ 816.6108554 , 1526.36292498, 412.91953023, ...,
982.71285721, 699.087645 , 1511.67447362],
[ 1354.6127365 , 1671.24591983, 1144.64848757, ...,
1247.37586051, 1656.50487092, 978.28184726]])
Dask also provides cos, log, and other mathematical functions, which you can use with da.cos and da.log. However, since satpy uses xarrays as its standard data structure, prefer the xarray functions when possible (they call the dask counterparts in turn when possible).
Wrapping non-dask friendly functions
Some operations are not supported by dask yet or are difficult to convert to take full advantage of dask’s multithreaded operations. In these cases you can wrap a function to run on an entire dask array when it is being computed and pass on the result. Note that this requires fully computing all of the dask inputs to the function; they are passed as numpy arrays, or, in the case of an XArray DataArray, as a DataArray with a numpy array underneath. You should NOT use dask functions inside the delayed function.
import dask
import dask.array as da

def _complex_operation(my_arr1, my_arr2):
    return my_arr1 + my_arr2

delayed_result = dask.delayed(_complex_operation)(my_dask_arr1, my_dask_arr2)
# to create a dask array to use in the future
my_new_arr = da.from_delayed(delayed_result, dtype=my_dask_arr1.dtype, shape=my_dask_arr1.shape)
Dask Delayed objects can also be computed with delayed_result.compute() if the array is not needed or if the function doesn’t return an array.
http://dask.pydata.org/en/latest/array-api.html#dask.array.from_delayed
Map dask blocks to non-dask friendly functions
If the complicated operation you need to perform can be vectorized and does not need the entire data array to do its operations you can use da.map_blocks to get better performance than creating a delayed function. Similar to delayed functions the inputs to the function are fully computed DataArrays or numpy arrays, but only the individual chunks of the dask array at a time. Note that map_blocks must be provided dask arrays and won’t function properly on XArray DataArrays. It is recommended that the function object passed to map_blocks not be an internal function (a function defined inside another function) or it may be unserializable and can cause issues in some environments.
my_new_arr = da.map_blocks(_complex_operation, my_dask_arr1, my_dask_arr2, dtype=my_dask_arr1.dtype)
Helpful functions
map_blocks()
atop()
tokenize()
Adding a Custom Reader to Satpy
In order to add a reader to satpy, you will need to create two files:
a YAML file for describing the files to read and the datasets that are available
a python file implementing the actual reading of the datasets and metadata
Satpy implements readers by defining a single “reader” object that pulls information from one or more file handler objects. The base reader class provided by Satpy is enough for most cases and does not need to be modified. The individual file handler classes do need to be created due to the small differences between file formats.
The below documentation will walk through each part of making a reader in detail. To do this we will implement a reader for the EUMETSAT NetCDF format for SEVIRI data.
Naming your reader
Satpy tries to follow a standard scheme for naming its readers. These names are used in filenames, but are also used by users so it is important that the name be recognizable and clear. Although some special cases exist, most fit in to the following naming scheme:
<sensor>[_<processing level>[_<level detail>]][_<file format>]
All components of the name should be lowercase and use underscores as the main separator between fields. Hyphens should be used as an intra-field separator if needed (ex. goes-imager).
- sensor: The first component of the name represents the sensor or instrument that observed the data stored in the files being read. If the files are the output of a specific processing software or a certain algorithm implementation that supports multiple sensors then a lowercase version of that software’s name should be used (e.g. clavrx for CLAVR-x, nucaps for NUCAPS). The sensor field is the only required field of the naming scheme. If it is actually an instrument name then the reader name should include one of the other optional fields. If sensor is a software package then that may be enough without any additional information to uniquely identify the reader.
- processing level: This field marks the specific level of processing or calibration that has been performed to produce the data in the files being read. Common values of this field include: sdr for Sensor Data Record (SDR), edr for Environmental Data Record (EDR), l1b for Level 1B, and l2 for Level 2.
- level detail: In cases where the processing level is not enough to completely define the reader this field can be used to provide a little more context. For example, some VIIRS EDR products are specific to a particular field of study or type of scientific event, like a flood or cloud product. In these cases the detail field can be added to produce a name like viirs_edr_flood. This field shouldn’t be used unless processing level is also specified.
- file format: If the file format of the files is informative to the user or can distinguish one reader from another then this field should be specified. Common format names should be abbreviated following existing abbreviations like nc for NetCDF3 or NetCDF4, hdf for HDF4, and h5 for HDF5.
The existing reader’s table can be used for reference. When in doubt, reader names can be discussed in the GitHub pull request when this reader is added to Satpy, or in a GitHub issue.
The YAML file
If your reader is going to be part of Satpy, the YAML file should be located in the satpy/etc/readers directory, along with the YAML files for all other readers. If you are developing a reader for internal purposes (such as for unpublished data), the YAML file should be located in any directory in $SATPY_CONFIG_PATH within the subdirectory readers/ (see Configuration).
The YAML file is composed of three sections:
the reader section, that provides basic parameters for the reader
the file_types section, that gives the patterns of the files this reader can handle
the datasets section, that describes the datasets available from this reader
The reader section
The reader section provides basic parameters for the overall reader.
The parameters to provide in this section are:
- name: This is the name of the reader; it should be the same as the filename (without the .yaml extension). The naming convention for this is described in the Naming your reader section above.
- short_name (optional): Human-readable version of the reader ‘name’. If not provided, applications using this can default to taking the ‘name’, replacing _ with spaces and uppercasing every letter.
- long_name: Human-readable title for the reader. This may be used as a section title on a website or in GUI applications using Satpy. Default naming scheme is <space program> <sensor> Level <level> [<format>]. For example, for the abi_l1b reader this is "GOES-R ABI Level 1b" where “GOES-R” is the name of the program and not the name of the platform/satellite. This scheme may not work for all readers, but in general should be followed. See existing readers for more examples.
- description: General description of the reader. This may include any restructuredtext formatted text like links to PDFs or sites with more information on the file format. This can be multiline if formatted properly in YAML (see example below).
- status: The status of the reader (one of: Nominal, Beta, Alpha, Defunct; see Status Description for more details).
- supports_fsspec: If the reader supports reading data via fsspec (either true or false).
- sensors: The list of sensors this reader will support. This must be all lowercase letters for full support throughout Satpy.
- reader: The main python reader class to use; in most cases the FileYAMLReader is a good choice.
reader:
  name: seviri_l1b_nc
  short_name: SEVIRI L1b NetCDF4
  long_name: MSG SEVIRI Level 1b (NetCDF4)
  description: >
    NetCDF4 reader for EUMETSAT MSG SEVIRI Level 1b files.
  sensors: [seviri]
  reader: !!python/name:satpy.readers.yaml_reader.FileYAMLReader
Optionally, if you need to customize the DataID for this reader, you can provide the relevant keys with a data_identification_keys item here. See the Satpy internal workings: having a look under the hood section for more information.
The file_types section
Each file type needs to provide:
- file_reader: the class that will handle the files for this reader, which you will implement in the corresponding python file. See the The python file section below.
- file_patterns: the patterns to match to find files this reader can handle. The syntax to use is basically the same as format with the addition of time. See the trollsift package documentation for more details.
- Optionally, a file type can have a requires field: it is a list of file types that the current file type needs to function. For example, the HRIT MSG format segment files each need a prologue and epilogue file to be read properly, hence in this case we have added requires: [HRIT_PRO, HRIT_EPI] to the file type definition.
file_types:
  nc_seviri_l1b:
    file_reader: !!python/name:satpy.readers.nc_seviri_l1b.NCSEVIRIFileHandler
    file_patterns: ['W_XX-EUMETSAT-Darmstadt,VIS+IR+IMAGERY,{satid:4s}+SEVIRI_C_EUMG_{processing_time:%Y%m%d%H%M%S}.nc']
  nc_seviri_l1b_hrv:
    file_reader: !!python/name:satpy.readers.nc_seviri_l1b.NCSEVIRIHRVFileHandler
    file_patterns: ['W_XX-EUMETSAT-Darmstadt,HRV+IMAGERY,{satid:4s}+SEVIRI_C_EUMG_{processing_time:%Y%m%d%H%M%S}.nc']
The datasets section
The datasets section describes each dataset available in the files. The parameters provided are made available to the methods of the implemented python class.
If your input files contain all the necessary metadata or you have a lot of datasets to configure look at the Dynamic Dataset Configuration section below. Implementing this will save you from having to write a lot of configuration in the YAML files.
Parameters you can define for example are:
- name
- sensor
- resolution
- wavelength
- polarization
- standard_name: The CF standard name for the dataset that will be used to determine the type of data. See existing readers for common standard names in Satpy or the CF standard name documentation for other available names or how to define your own. Satpy does not currently have a hard requirement on these names being completely CF compliant, but consistency across readers is important.
- units: The units of the data when returned by the file handler. Although not technically a requirement, it is common for Satpy datasets to use “%” for reflectance fields and “K” for brightness temperature fields.
- modifiers: The modification(s) that have already been applied to the data when it is returned by the file handler. Only a few of these have been standardized across Satpy, but they are based on the names of the modifiers configured in the “composites” YAML files. Examples include sunz_corrected or rayleigh_corrected. See the metadata wiki for more information.
- file_type: Name of file type (see above).
- coordinates: An optional two-element list with the names of the longitude and latitude datasets describing the location of this dataset. This is optional if the data being read is already gridded. Swath data, for example data from some polar-orbiting satellites, should have these defined or no geolocation information will be available when the data are loaded. For gridded datasets a get_area_def function will be implemented in python (see below) to define geolocation information.
- Any other field that is relevant for the reader or could be useful metadata provided to the user.
This section can simply be copied and adapted from existing seviri readers, for example the msg_native reader.
datasets:
  HRV:
    name: HRV
    resolution: 1000.134348869
    wavelength: [0.5, 0.7, 0.9]
    calibration:
      reflectance:
        standard_name: toa_bidirectional_reflectance
        units: "%"
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b_hrv
  IR_016:
    name: IR_016
    resolution: 3000.403165817
    wavelength: [1.5, 1.64, 1.78]
    calibration:
      reflectance:
        standard_name: toa_bidirectional_reflectance
        units: "%"
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
    nc_key: 'ch3'
  IR_039:
    name: IR_039
    resolution: 3000.403165817
    wavelength: [3.48, 3.92, 4.36]
    calibration:
      brightness_temperature:
        standard_name: toa_brightness_temperature
        units: K
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
    nc_key: 'ch4'
  IR_087:
    name: IR_087
    resolution: 3000.403165817
    wavelength: [8.3, 8.7, 9.1]
    calibration:
      brightness_temperature:
        standard_name: toa_brightness_temperature
        units: K
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
  IR_097:
    name: IR_097
    resolution: 3000.403165817
    wavelength: [9.38, 9.66, 9.94]
    calibration:
      brightness_temperature:
        standard_name: toa_brightness_temperature
        units: K
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
  IR_108:
    name: IR_108
    resolution: 3000.403165817
    wavelength: [9.8, 10.8, 11.8]
    calibration:
      brightness_temperature:
        standard_name: toa_brightness_temperature
        units: K
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
  IR_120:
    name: IR_120
    resolution: 3000.403165817
    wavelength: [11.0, 12.0, 13.0]
    calibration:
      brightness_temperature:
        standard_name: toa_brightness_temperature
        units: K
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
  IR_134:
    name: IR_134
    resolution: 3000.403165817
    wavelength: [12.4, 13.4, 14.4]
    calibration:
      brightness_temperature:
        standard_name: toa_brightness_temperature
        units: K
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
  VIS006:
    name: VIS006
    resolution: 3000.403165817
    wavelength: [0.56, 0.635, 0.71]
    calibration:
      reflectance:
        standard_name: toa_bidirectional_reflectance
        units: "%"
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
  VIS008:
    name: VIS008
    resolution: 3000.403165817
    wavelength: [0.74, 0.81, 0.88]
    calibration:
      reflectance:
        standard_name: toa_bidirectional_reflectance
        units: "%"
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
  WV_062:
    name: WV_062
    resolution: 3000.403165817
    wavelength: [5.35, 6.25, 7.15]
    calibration:
      brightness_temperature:
        standard_name: toa_brightness_temperature
        units: "K"
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
  WV_073:
    name: WV_073
    resolution: 3000.403165817
    wavelength: [6.85, 7.35, 7.85]
    calibration:
      brightness_temperature:
        standard_name: toa_brightness_temperature
        units: "K"
      radiance:
        standard_name: toa_outgoing_radiance_per_unit_wavelength
        units: W m-2 um-1 sr-1
      counts:
        standard_name: counts
        units: count
    file_type: nc_seviri_l1b
The YAML file is now ready and you can move on to writing your python code.
Dynamic Dataset Configuration
The above “datasets” section for reader configuration is the most explicit method for specifying metadata about possible data that can be loaded from input files. It is also the easiest way for people with little python experience to customize or add new datasets to a reader. However, some file formats may have tens or even hundreds of datasets or variations of datasets. Writing the metadata and access information for every one of these datasets can easily become a problem. To help in these cases the available_datasets() file handler interface can be used. This method, if needed, should be implemented in your reader’s file handler classes. The best information for what this method does and how to use it is available in the API documentation.
This method is good when you want to:
- Define datasets dynamically without needing to define them in the YAML.
- Supplement metadata from the YAML file with information from the file content (ex. resolution).
- Determine if a dataset is available by the file contents. This differs from the default behavior of a dataset being considered loadable if its “file_type” is loaded.
Note that this is considered an advanced interface and involves more advanced Python concepts like generators. If you need help with anything feel free to ask questions in your pull request or on the Pytroll Slack.
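As an illustration, here is a minimal sketch of the interface as it might appear in a file handler class. The self.nc attribute and the metadata fields are assumptions for this example, not requirements of the interface:
def available_datasets(self, configured_datasets=None):
    """Dynamically advertise datasets based on file contents."""
    # Pass through everything already configured in the YAML
    for is_avail, ds_info in (configured_datasets or []):
        yield is_avail, ds_info
    # Advertise variables found in the file but not listed in the YAML
    for var_name in self.nc.data_vars:
        new_info = {
            "name": var_name,
            "file_type": self.filetype_info["file_type"],
        }
        yield True, new_info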
The python file
The python file needs to implement a file handler class for each file type that we want to read. Such a class needs to implement a few methods:
- the __init__ method, that takes as arguments:
  - the filename (string)
  - the filename info (dict) that we get by parsing the filename using the pattern defined in the yaml file
  - the filetype info that we get from the filetype definition in the yaml file
  This method can also receive other file handler instances as parameters if the filetype at hand has requirements. (See the explanation in the YAML file filetype section above.)
- the get_dataset method, which takes as arguments:
  - the dataset ID of the dataset to load
  - the dataset info that is the description of the channel in the YAML file
  This method has to return an xarray.DataArray instance if the loading is successful, containing the data and metadata of the loaded dataset, or return None if the loading was unsuccessful.
  The DataArray should at least have a y dimension. For data covering a 2D region on the Earth, there should be at least a y and an x dimension. This applies to non-gridded data like that of a polar-orbiting satellite instrument. The latitude dimension is typically named y and the longitude dimension x. This may require renaming dimensions from the file; see the xarray.DataArray.rename() method for more information and its use in the example below.
  If the reader should be compatible with opening remote files see Adding remote file support to a reader.
- the get_area_def method, that takes as its single argument the DataID for which we want the area. It should return an AreaDefinition object. For data that cannot be geolocated with an area definition, the pixel coordinates will be loaded using the get_dataset method for the resulting scene to be navigated. The names of the datasets to be loaded should be specified as a special coordinates attribute in the YAML file. For example, by specifying coordinates: [longitude_dataset, latitude_dataset] in the YAML, Satpy will call get_dataset twice, once to load the dataset named longitude_dataset and once to load latitude_dataset. Satpy will then create a SwathDefinition with this coordinate information and assign it to the dataset’s .attrs['area'] attribute.
- Optionally, the get_bounding_box method can be implemented if filtering files by area is desirable for this data type.
On top of that, two attributes need to be defined: start_time and end_time, which define the start and end times of the sensing. See the Time Metadata section for a description of the different times that Satpy readers typically use and which times should be used for the start_time and end_time. Note that these properties will be assigned to the start_time and end_time metadata of any DataArrays returned by get_dataset; any existing values will be overwritten. A sketch of such properties is shown below.
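For example, a minimal sketch of these properties, assuming the filename pattern in the YAML contains start_time and end_time fields (adapt to your format):
@property
def start_time(self):
    # Parsed from the filename via the file pattern in the YAML
    return self.filename_info["start_time"]

@property
def end_time(self):
    # Fall back to start_time if the filename has no end time
    return self.filename_info.get("end_time", self.start_time)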
If you are writing a file handler for more common formats like HDF4, HDF5, or NetCDF4 you may want to consider using the utility base classes for each: satpy.readers.hdf4_utils.HDF4FileHandler, satpy.readers.hdf5_utils.HDF5FileHandler, and satpy.readers.netcdf_utils.NetCDF4FileHandler. These were added as a convenience and are not required to read these formats. In many cases using the xarray.open_dataset() function in a custom file handler is a much better idea.
Note
Be careful about the data types of the DataArray attributes (.attrs) your reader is returning. Satpy or other tools may attempt to serialize these attributes (ex. hashing for cache keys). For example, Numpy types don’t serialize into JSON and should therefore be cast to basic Python types (float, int, etc) before being assigned to the attributes.
Note
Be careful about the types of the data your reader is returning. It is easy to let the data be coerced into double precision floats (np.float64). At the moment, satellite instruments are rarely measuring at a resolution greater than what can be encoded in 16 bits. As such, to preserve processing power, please consider carefully what data type you should scale or calibrate your data to.
Single precision floats (np.float32) are a good compromise, as they have 23 significant bits (mantissa) and can thus represent 16 bit integers exactly, while keeping the memory footprint at half that of a double precision float.
One commonly used method in readers is xarray.DataArray.where() (to mask invalid data), which can coerce the data to np.float64. To ensure, for example, that integer data is coerced to np.float32 when xarray.DataArray.where() is used, you can do:
my_float_dataarray = my_int_dataarray.where(some_condition, np.float32(np.nan))
One way of implementing a file handler is shown below:
# this is seviri_l1b_nc.py
import xarray as xr
from satpy.readers.file_handlers import BaseFileHandler
from pyresample.geometry import AreaDefinition

class NCSEVIRIFileHandler(BaseFileHandler):
    def __init__(self, filename, filename_info, filetype_info):
        super(NCSEVIRIFileHandler, self).__init__(filename, filename_info, filetype_info)
        self.nc = None

    def get_dataset(self, dataset_id, dataset_info):
        if dataset_id['calibration'] != 'radiance':
            # TODO: implement calibration to reflectance or brightness temperature
            return
        if self.nc is None:
            self.nc = xr.open_dataset(self.filename,
                                      decode_cf=True,
                                      mask_and_scale=True,
                                      chunks={'num_columns_vis_ir': "auto",
                                              'num_rows_vis_ir': "auto"})
            self.nc = self.nc.rename({'num_columns_vis_ir': 'x', 'num_rows_vis_ir': 'y'})
        dataset = self.nc[dataset_info['nc_key']]
        dataset.attrs.update(dataset_info)
        return dataset

    def get_area_def(self, dataset_id):
        return AreaDefinition(
            "some_area_name",
            "on-the-fly area",
            "geos",
            "+a=6378169.0 +h=35785831.0 +b=6356583.8 +lon_0=0 +proj=geos",
            3636,
            3636,
            [-5456233.41938636, -5453233.01608472, 5453233.01608472, 5456233.41938636])

class NCSEVIRIHRVFileHandler(NCSEVIRIFileHandler):
    # left as an exercise to the reader :)
    pass
If you have any questions, please contact the Satpy developers.
Auxiliary File Download
If your reader needs additional data files to do calibrations, corrections, or anything else see the Auxiliary Data Download document for more information on how to download and cache these files without including them in the Satpy python package.
Adding remote file support to a reader
Warning
This feature is currently very new and might improve and change in the future.
As of Satpy version 0.25.1, the possibility to search for files on remote file systems (see Search for local/remote files), as well as the possibility for supported readers to read from remote filesystems, has been added.
To add this feature to a reader, the call to xarray.open_dataset() has to be replaced by the open_dataset() function included in Satpy, which handles passing on the filename to be opened regardless of whether it is a local file path or an FSFile object which can wrap fsspec.open() objects.
To be able to cache the open_dataset call, which is favourable for remote files, it should be separated from the get_dataset method which needs to be implemented in every reader. This could look like:
from satpy._compat import cached_property
from satpy.readers.file_handlers import BaseFileHandler, open_dataset

class Reader(BaseFileHandler):
    def __init__(self, filename, filename_info, filetype_info):
        super().__init__(filename, filename_info, filetype_info)

    @cached_property
    def nc(self):
        return open_dataset(self.filename, chunks="auto")

    def get_dataset(self, dataset_id, dataset_info):
        # Access the lazily opened dataset
        data = self.nc["key"]
Any parameters allowed for xarray.open_dataset() can be passed as keywords to open_dataset() if needed.
Note
It is important to know that for remote files xarray might use a different backend to open the file than for local files (e.g. h5netcdf instead of netcdf4), which might result in some attributes being returned as arrays instead of scalars. This has to be accounted for when accessing attributes in the reader.
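As an illustration of that caveat, here is a small hedged sketch (the helper name is made up for this example) of normalizing an attribute that may arrive as a 0-d array from one backend and as a scalar from another:
import numpy as np

def attr_as_scalar(attrs, name, default=None):
    """Return an attribute as a Python scalar, even if the backend
    returned it as a one-element array."""
    value = attrs.get(name, default)
    if isinstance(value, np.ndarray) and value.size == 1:
        return value.item()
    return value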
Extending Satpy via plugins
Warning
This feature is experimental and being modified without warnings. For now, it should not be used for anything else than toy examples and should not be relied on.
Satpy is able to load additional functionality outside of the builtin features in the library. It does this by searching a series of configured paths for additional configuration files for:
readers
composites and modifiers
enhancements
writers
For basic testing and temporary configuration changes, you can follow the instructions in Component Configuration. This will tell Satpy where to look for your custom YAML configuration files and import any Python code you’d like it to use for these components. However, this requires telling Satpy of these paths on every execution (either as an environment variable or by using satpy.config).
Satpy also supports being told this information via setuptools “entry points”. Once your custom Python package with entry points is installed Satpy will automatically discover it when searching for composites without the user needing to explicitly import your package. This has the added benefit of organizing your YAML configuration files and any custom python code into a single python package. How to structure a package in this way is described below.
An example project showing the usage of these entry points is available at this github repository where a custom compositor is created. This repository also includes common configuration files and tools for writing clean code and automatically testing your python code.
Plugin package structure
The below sections will use the example package name satpy-myplugin. This is only an example and naming a plugin package with a satpy- prefix is not required.
A plugin package should consist of three main parts:
pyproject.toml or setup.py: These files define the metadata and entry points for your package. Only one of them is needed. With only a few exceptions it is recommended to use a pyproject.toml as this is the new and future way Python package configuration will be supported by the pip package manager. See below for examples of the contents of this file.
mypkg/etc/: A directory of Satpy-compatible component YAML files. These YAML files should be in readers/, composites/, enhancements/, and writers/ directories. These YAML files must follow the Satpy naming conventions for each component. For example, composites and enhancements allow for sensor-specific configuration files. Other directories can be added in this etc directory and will be ignored by Satpy. Satpy will collect all available YAML files from all installed plugins and merge them with those builtin to Satpy. The Satpy builtins will be used as a “base” configuration with all external YAML files applied after.
mypkg/: The python package with any custom python code. This code should be based on or at least compatible with Satpy’s base classes for each component or use utilities available from Satpy whenever possible:
readers: FileYAMLReader for any reader subclasses and BaseFileHandler for any custom file handlers. See Adding a Custom Reader to Satpy for more information.
composites and modifiers: CompositeBase for any generic compositor and GenericCompositor for any composite that represents an image (RGB, L, etc). For modifiers, use ModifierBase.
enhancements: Although not required, consider using satpy.enhancements.apply_enhancement().
writers: Writer
Lastly, this directory should be structured like a standard python package. This primarily means a mypkg/__init__.py file should exist.
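Putting these parts together, a satpy-myplugin repository might be laid out as follows (a sketch; the sensor_name.yaml file name is a placeholder for a sensor-specific configuration file):

satpy-myplugin/
    pyproject.toml
    satpy_myplugin/
        __init__.py
        etc/
            composites/
                sensor_name.yaml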
pyproject.toml
We recommend using a pyproject.toml file to define the metadata and configuration for a python package. With this file it is possible to use package building tools to make an installable package. By using a special feature called “entry points” we can configure our package so its Satpy features are automatically discovered by Satpy.
A pyproject.toml file is typically placed in the root of a project repository and at the same level as the package (ex. satpy_myplugin/ directory). An example for a package called satpy-myplugin with custom composites is shown below.
[project]
name = "satpy-myplugin"
description = "Example Satpy plugin package definition."
version = "1.0.0"
readme = "README.md"
license = {text = "GPL-3.0-or-later"}
requires-python = ">=3.8"
dependencies = [
"satpy",
]
[tool.setuptools]
packages = ["satpy_myplugin"]
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
[project.entry-points."satpy.composites"]
example_composites = "satpy_myplugin"
This definition uses setuptools to build the resulting package (under build-system). There are other alternative tools (like poetry) that can be used.
Other custom components like readers and writers can be defined in the same package by using additional entry points named satpy.readers for readers, satpy.writers for writers, and satpy.enhancements for enhancements.
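For example, the following sections could be added to the pyproject.toml above (a sketch; the left-hand names are illustrative):

[project.entry-points."satpy.readers"]
example_readers = "satpy_myplugin"

[project.entry-points."satpy.writers"]
example_writers = "satpy_myplugin"

[project.entry-points."satpy.enhancements"]
example_enhancements = "satpy_myplugin"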
Note the difference between the usage of the package name (satpy-myplugin), which includes a hyphen, and the package directory (satpy_myplugin), which uses an underscore. Your package name does not need to have a separator (hyphen) in it, but one is used here due to the common practice of naming plugins this way. Package directories can’t use hyphens as this would be a syntax error when trying to import the package. Underscores can’t be used in package names as this is not allowed by PyPI.
The first project section in this TOML file specifies metadata about the package. This is most important if you plan on distributing your package on PyPI or a similar package repository. We specify that our package depends on satpy so if someone installs it Satpy will automatically be installed.
The second tool.setuptools section tells the package building tools (via setuptools) what directory the Python code is in. The third section, build-system, says what tool(s) should be used for building the package and what extra requirements are needed during this build process.
The last section, project.entry-points."satpy.composites", is the only section specific to this package being a Satpy plugin. At the time of writing the example_composites = "satpy_myplugin" portion is not actually used by Satpy but is required to properly define the entry point in the plugin package. Instead Satpy will assume that a package that defines the satpy.composites (or any of the other component types) entry point will have an etc/ directory in the root of the package structure. Even so, for future compatibility, it is best to use the name of the package directory on the right-hand side of the =.
Warning
Due to some limitations in setuptools you must also define a setup.py file in addition to pyproject.toml if you’d like to use “editable” installations (pip install -e .). Once this setuptools issue is resolved this won’t be needed. For now this minimal setup.py will work:

from setuptools import setup

setup()
Alternative: setup.py
If you are more comfortable creating a setup.py-based python package you can use setup.py instead of pyproject.toml. When used for custom composites, in a package called satpy-myplugin it would look something like this:
from setuptools import setup
import os

setup(
    name='satpy-myplugin',
    entry_points={
        'satpy.composites': [
            'example_composites = satpy_myplugin',
        ],
    },
    package_data={'satpy_myplugin': [os.path.join('etc', 'composites/*.yaml')]},
    install_requires=["satpy"],
)
As with the pyproject.toml example, note the difference between the package name (satpy-myplugin), which includes a hyphen, and the package directory (satpy_myplugin), which uses an underscore. See the pyproject.toml information above for more information on what each of these values means.
Licenses
Disclaimer: We are not lawyers.
Satpy source code is under the GPLv3 license. This license requires any derivative works to also be GPLv3 or GPLv3 compatible. It is our understanding that importing a Python module could be considered “linking” that source code to your own (thus being a derivative work) and would therefore require your code to be licensed with a GPLv3-compatible license. It is currently only possible to make a Satpy-compatible plugin without importing Satpy if it contains only enhancements. Writers and compositors are possible without subclassing, but are likely difficult to implement. Readers are even more difficult to implement without using Satpy’s base classes and utilities. It is also our understanding that if your custom Satpy plugin code is not publicly released then it does not need to be GPLv3.
Satpy internal workings: having a look under the hood
Querying and identifying data arrays
DataQuery
The loading of data in Satpy is usually done by giving the name or the wavelength of the data arrays we are interested in. This way, the highest resolution, most calibrated data array is often returned.
However, in some cases, we need more control over the loading of the data arrays. The way to accomplish this is to load data arrays using queries, e.g.:
scn.load([DataQuery(name='channel1', resolution=400)])
Here a data array with name channel1 and of resolution 400 will be loaded if available.
Note that None is not a valid value, and keys having a value set to None will simply be ignored.
If one wants to use wildcards to query data, just provide '*', e.g.:
scn.load([DataQuery(name='channel1', resolution=400, calibration='*')])
Alternatively, one can provide a list as a parameter to query data, like this:
scn.load([DataQuery(name='channel1', resolution=[400, 800])])
DataID
Satpy stores loaded data arrays in a special dictionary (DatasetDict) inside scene objects. In order to identify each data array uniquely, Satpy assigns an ID to each data array, which is then used as the key in the scene object. These IDs are of type DataID and are immutable. They are not supposed to be used by regular users and should only be created in special circumstances. Satpy should take care of creating and assigning these automatically. They are also stored in the attrs of each data array as _satpy_id.
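For example, after loading, a scene’s keys are DataID objects and each loaded array carries its own ID (the dataset name 'channel1' is illustrative):

scn.load(['channel1'])
data_id = scn['channel1'].attrs['_satpy_id']  # the DataID assigned by Satpy
print(list(scn.keys()))  # DataID objects, not plain strings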
Default and custom metadata keys
One thing however that the user has control over is which metadata keys are relevant to which datasets. Satpy provides two default sets of metadata keys (or ID keys), one for regular imager bands, and the other for composites. The first one contains: name, wavelength, resolution, calibration, modifiers. The second one contains: name, resolution.
As an example here is the definition of the first one in yaml:
data_identification_keys:
  name:
    required: true
  wavelength:
    type: !!python/name:satpy.dataset.WavelengthRange
  resolution:
  calibration:
    enum:
      - reflectance
      - brightness_temperature
      - radiance
      - counts
    transitive: true
  modifiers:
    required: true
    default: []
    type: !!python/name:satpy.dataset.ModifierTuple
To create a new set, the user can provide indications in the relevant yaml file. It has to be provided in the header of the reader configuration file, under the reader section, as data_identification_keys. Each key under this is the name of a relevant metadata key that will be used to find relevant information in the attributes of the data arrays. Under each of these, a few options are available:
required: if the item is required, False by default
type: the type to use. More on this further down.
enum: if the item has to be limited to a finite number of options, an enum can be used. Be sure to place the options in the order of preference, with the most desirable option on top.
default: the default value to assign to the item if nothing (or None) is provided. If this option isn’t provided, the key will simply be omitted if it is not present in the attrs or if it is None. It will be passed to the type’s convert method if available.
transitive: whether the key is to be passed on when looking for dependencies of composites/modifiers. For example, a composite that is requested with a given calibration type will pass this calibration requirement on to its dependencies.
If the definition of the metadata keys needs to be done in python rather than in a yaml file, it will be a dictionary very similar to the yaml code. Here is the same example as above in python:
from satpy.dataset import WavelengthRange, ModifierTuple

id_keys_config = {
    'name': {
        'required': True,
    },
    'wavelength': {
        'type': WavelengthRange,
    },
    'resolution': None,
    'calibration': {
        'enum': [
            'reflectance',
            'brightness_temperature',
            'radiance',
            'counts',
        ],
        'transitive': True,
    },
    'modifiers': {
        'required': True,
        'default': ModifierTuple(),
        'type': ModifierTuple,
    },
}
Types
Types are classes that implement a type to be used as a value for metadata in the DataID. They have to implement a few methods:
a convert class method that returns its argument as an instance of the class
__hash__, __eq__ and __ne__ methods
a distance method that tells how “far” an instance of this class is from its argument.
An example of such a class is the WavelengthRange class. Through its implementation, it allows us, for example, to use the wavelength in a query to find out which DataID in a list has its central wavelength closest to that query.
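For illustration, here is a minimal sketch of such a type for a hypothetical integer-valued key (this class is not part of Satpy):

class PressureLevel:
    """Hypothetical DataID value type for a pressure level in hPa."""

    def __init__(self, value):
        self.value = value

    @classmethod
    def convert(cls, value):
        # Return the argument as an instance of this class.
        return value if isinstance(value, cls) else cls(value)

    def __hash__(self):
        return hash(self.value)

    def __eq__(self, other):
        return self.value == getattr(other, 'value', other)

    def __ne__(self, other):
        return not self == other

    def distance(self, value):
        # Tell how "far" this instance is from its argument.
        return abs(self.value - getattr(value, 'value', value))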
DataID and DataQuery interactions
Different DataIDs and DataQuerys can have different metadata items defined. As such we define equality between different instances of these classes, and across the classes, as equality between the sorted key/value pairs shared between the instances. If a DataQuery has one or more values set to ‘*’, the corresponding key/value pair will be omitted from the comparison. Instances sharing no keys will not be equal.
Breaking changes from DatasetIDs
The way to access values from the DataID and DataQuery is through getitem: my_dataid['resolution'].
For checking if a dataset is loaded, use 'mydataset' in scene, as 'mydataset' in scene.keys() will always return False: the DatasetDict instance only supports DataID as key type.
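In code (the dataset name 'overview' is illustrative):

'overview' in scn          # True if a dataset named "overview" is loaded
'overview' in scn.keys()   # always False: the keys are DataID objects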
Creating DataID for tests
Sometimes, it is useful to create DataID instances for testing purposes. For these cases, the satpy.tests.utils module has a make_dataid function that can be used just for this:
from satpy.tests.utils import make_dataid
did = make_dataid(name='camembert', modifiers=('runny',))
Auxiliary Data Download
Sometimes Satpy components need some extra data files to get their work done properly. These include files like Look Up Tables (LUTs), coefficients, or Earth model data (ex. elevations). This includes any file that would be too large to be included in the Satpy python package; anything bigger than a small text file. To help with this, Satpy includes utilities for downloading and caching these files only when your component is used. This saves the user from wasting time and disk space downloading files they may never use. This functionality is made possible thanks to the Pooch library.
Downloaded files are stored in the directory configured by Data Directory.
Adding download functionality
The utility functions for data downloading include a two step process:
Registering: Tell Satpy what files might need to be downloaded and used later.
Retrieving: Ask Satpy to download and store the files locally.
Registering
Registering a file for downloading tells Satpy the remote URL for the file, and an optional hash. The hash is used to verify a successful download. Registering can also include a filename to tell Satpy what to name the file when it is downloaded. If not provided it will be determined from the URL. Once registered, Satpy can be told to retrieve the file (see below) by using a “cache key”. Cache keys follow the general scheme of <component_type>/<filename> (ex. readers/README.rst).
Satpy includes a low-level function and a high-level Mixin class for registering files. The higher level class is recommended for any Satpy component like readers, writers, and compositors. The lower-level register_file() function can be used for any other use case.
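A sketch of the lower-level call (URL, filename, and hash values are illustrative):

from satpy.aux_download import register_file

cache_key = register_file(
    "https://example.com/my_lut.dat",  # remote URL (illustrative)
    "my_lut.dat",                      # local filename
    component_type="readers",
    known_hash=None,  # or a "sha256:..." string to verify the download
)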
The DataDownloadMixin class is automatically included in the FileYAMLReader and Writer base classes. For any other component (like a compositor) you should include it as another parent class:
from satpy.aux_download import DataDownloadMixin
from satpy.composites import GenericCompositor

class MyCompositor(GenericCompositor, DataDownloadMixin):
    """Compositor that uses downloaded files."""

    def __init__(self, name, url=None, known_hash=None, **kwargs):
        super().__init__(name, **kwargs)
        data_files = [{'url': url, 'known_hash': known_hash}]
        self.register_data_files(data_files)
However your code registers files, to be consistent it must do it during initialization so that find_registerable_files() can discover them. If your component isn’t a reader, writer, or compositor then this function will need to be updated to find and load your registered files. See Offline Downloads below for more information.
As mentioned, the mixin class is included in the base reader and writer class. To register files in these cases, include a data_files section in your YAML configuration file. For readers this would go under the reader section and for writers the writer section. This parameter is a list of dictionaries including a url, known_hash, and optional filename. For example:
reader:
  name: abi_l1b
  short_name: ABI L1b
  long_name: GOES-R ABI Level 1b
  ... other metadata ...
  data_files:
    - url: "https://example.com/my_data_file.dat"
    - url: "https://raw.githubusercontent.com/pytroll/satpy/main/README.rst"
      known_hash: "sha256:5891286b63e7745de08c4b0ac204ad44cfdb9ab770309debaba90308305fa759"
    - url: "https://raw.githubusercontent.com/pytroll/satpy/main/RELEASING.md"
      filename: "satpy_releasing.md"
      known_hash: null
See the DataDownloadMixin for more information.
Retrieving
Files that have been registered (see above) can be retrieved by calling the retrieve() function. This function expects a single argument: the cache key. Cache keys are returned by registering functions, but can also be pre-determined by following the scheme <component_type>/<filename> (ex. readers/README.rst).
Retrieving a file will download it to local disk if needed and then return the local pathname. Data is stored locally in the Data Directory. It is up to the caller to then open the file.
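A sketch of retrieving the example file registered above:

from satpy.aux_download import retrieve

local_path = retrieve("readers/README.rst")  # downloads only if needed
with open(local_path) as data_file:  # opening the file is up to the caller
    contents = data_file.read()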
Offline Downloads
To assist with operational environments, Satpy includes a retrieve_all() function that will try to find all files that Satpy components may need to download in the future and download them to the current directory specified by Data Directory. This function allows you to specify a list of readers, writers, or composite_sensors to limit what components are checked for files to download.
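For example (a sketch; the component names are illustrative):

from satpy.aux_download import retrieve_all

# Only check ABI-related readers and composites for downloadable files.
retrieve_all(readers=["abi_l1b"], composite_sensors=["abi"])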
The retrieve_all function is also available through a command line script called satpy_retrieve_all_aux_data. Run the following for usage information:
satpy_retrieve_all_aux_data --help
To make sure that no additional files are downloaded when running Satpy see Demo Data Directory.
Writing unit tests
Satpy tests are written using the third-party pytest package.
Fixtures
The usage of Pytest fixtures is encouraged for code re-usability.
As the builtin fixtures (and those defined in a conftest.py file) are injected by Pytest without being imported explicitly, their usage can be very confusing for new developers. To lessen the confusion, it is encouraged to add a note at the top of the test modules listing all the automatically injected external fixtures that are used in the module:
# NOTE:
# The following fixtures are not defined in this file, but are used and injected by Pytest:
# - tmp_path
# - fixture_defined_in_conftest.py
Coding guidelines
Satpy is part of Pytroll, and all code should follow the Pytroll coding guidelines and best practices.
Satpy is now Python 3 only and it is no longer necessary to support Python 2. Check setup.py for the current Python versions any new code needs to support.
Development installation
See the Installation Instructions section for basic installation instructions. When it comes time to install Satpy it should be installed from a clone of the git repository and in development mode so that local file changes are automatically reflected in the python environment. We highly recommend making a separate conda environment or virtualenv for development. For example, you can do this using conda:
conda create -n satpy-dev python=3.11
conda activate satpy-dev
This will create a new environment called “satpy-dev” with Python 3.11 installed. The second command will activate the environment so any future conda, python, or pip commands will use this new environment.
If you plan on contributing back to the project you should first fork the repository and clone your fork. The package can then be installed in development mode by doing:
conda install --only-deps satpy
pip install -e .
The first command will install all dependencies needed by the Satpy conda-forge package, but won’t actually install Satpy. The second command should be run from the root of the cloned Satpy repository (where the setup.py is) and will install the actual package.
You can now edit the python files in your cloned repository and have them immediately reflected in your conda environment.
All the required dependencies for a full development environment, i.e. running the tests and building the documentation, can be installed with:
conda install eccodes
pip install -e ".[all]"
Running tests
Satpy tests are written using the third-party pytest package. There is usually no need to run all Satpy tests, but instead only run the tests related to the component you are working on. All tests are automatically run from the GitHub Pull Request using multiple versions of Python, multiple operating systems, and multiple versions of dependency libraries. If you want to run all Satpy tests you will need to install additional dependencies that aren’t needed for regular Satpy usage. To install them run:
conda install eccodes
pip install -e ".[tests]"
Satpy tests can be executed by running:
pytest satpy/tests
You can also run specific tests by specifying a sub-directory or module:
pytest satpy/tests/reader_tests/test_abi_l1b.py
Running benchmarks
Satpy benchmarks are written using the Airspeed Velocity package (asv).
The benchmarks can be run using:
asv run
These are pretty computationally intensive, and shouldn’t be run unless you want to diagnose a particular performance issue, for example.
Once the benchmarks have run, you can use:
asv publish
asv preview
to have a look at the results. Again, have a look at the asv documentation for more information.
Documentation
Satpy’s documentation is built using Sphinx. All documentation lives in the doc/ directory of the project repository. For building the documentation, additional packages are needed. These can be installed with:
pip install -e ".[all]"
After editing the source files there the documentation can be generated locally:
cd doc
make html
The output of the make command should be checked for warnings and errors. If code has been changed (new functions or classes) then the API documentation files should be regenerated before running the above command:
sphinx-apidoc -f -T -o source/api ../satpy ../satpy/tests
satpy
satpy package
Subpackages
satpy.cf package
Submodules
satpy.cf.area module
CF processing of pyresample area information.
- satpy.cf.area._add_grid_mapping(data_arr: DataArray) tuple[DataArray, DataArray] [source]
Convert an area to a CF grid mapping.
satpy.cf.attrs module
CF processing of attributes.
- class satpy.cf.attrs.AttributeEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]
Bases:
JSONEncoder
JSON encoder for dataset attributes.
Constructor for JSONEncoder, with sensible defaults.
If skipkeys is false, then it is a TypeError to attempt encoding of keys that are not str, int, float or None. If skipkeys is True, such items are simply skipped.
If ensure_ascii is true, the output is guaranteed to be str objects with all incoming non-ASCII characters escaped. If ensure_ascii is false, the output can contain non-ASCII characters.
If check_circular is true, then lists, dicts, and custom encoded objects will be checked for circular references during encoding to prevent an infinite recursion (which would cause an RecursionError). Otherwise, no such check takes place.
If allow_nan is true, then NaN, Infinity, and -Infinity will be encoded as such. This behavior is not JSON specification compliant, but is consistent with most JavaScript based encoders and decoders. Otherwise, it will be a ValueError to encode such floats.
If sort_keys is true, then the output of dictionaries will be sorted by key; this is useful for regression tests to ensure that JSON serializations can be compared on a day-to-day basis.
If indent is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0 will only insert newlines. None is the most compact representation.
If specified, separators should be an (item_separator, key_separator) tuple. The default is (', ', ': ') if indent is None and (',', ': ') otherwise. To get the most compact JSON representation, you should specify (',', ':') to eliminate whitespace.
If specified, default is a function that gets called for objects that can’t otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError.
- satpy.cf.attrs._add_ancillary_variables_attrs(data_arr: DataArray) None [source]
Replace ancillary_variables DataArray with a list of their names.
- satpy.cf.attrs._drop_attrs(data_arr: DataArray, user_excluded_attrs: list[str] | None) None [source]
Remove undesirable attributes.
- satpy.cf.attrs._encode_numpy_array(obj)[source]
Encode numpy array as a netCDF4 serializable datatype.
- satpy.cf.attrs._encode_object(obj)[source]
Try to encode obj as a netCDF/Zarr compatible datatype which most closely resembles the object’s nature.
- Raises:
ValueError – if no such datatype could be found
- satpy.cf.attrs._encode_python_objects(obj)[source]
Try to find the datatype which most closely resembles the object’s nature.
On failure, encode as a string. Plain lists are encoded recursively.
- satpy.cf.attrs._format_prerequisites_attrs(data_arr: DataArray) None [source]
Reformat prerequisites attribute value to string.
- satpy.cf.attrs._get_none_attrs(data_arr: DataArray) list[str] [source]
Remove attribute keys with None value.
- satpy.cf.attrs.encode_attrs_to_cf(attrs)[source]
Encode dataset attributes as a netcdf compatible datatype.
satpy.cf.coords module
Set CF-compliant spatial and temporal coordinates.
- satpy.cf.coords._add_declared_coordinates(data_arrays: dict[str, DataArray], dataarray_name: str) dict[str, DataArray] [source]
Add declared coordinates to the dataarray if they exist.
- satpy.cf.coords._add_xy_geographic_coords_attrs(data_arr: DataArray, x: str = 'x', y: str = 'y') DataArray [source]
Add relevant attributes to x, y coordinates of a geographic CRS.
- satpy.cf.coords._add_xy_projected_coords_attrs(data_arr: DataArray, x: str = 'x', y: str = 'y') DataArray [source]
Add relevant attributes to x, y coordinates of a projected CRS.
- satpy.cf.coords._get_coordinates_list(data_arr: DataArray) list[str] [source]
Return a list with the coordinates names specified in the ‘coordinates’ attribute.
- satpy.cf.coords._get_is_nondimensional_coords_dict(data_arrays: dict[str, DataArray]) dict[str, bool] [source]
- satpy.cf.coords._is_lon_or_lat_dataarray(data_arr: DataArray) bool [source]
Check if the DataArray represents the latitude or longitude coordinate.
- satpy.cf.coords._is_projected(data_arr: DataArray) bool [source]
Guess whether data are projected or not.
- satpy.cf.coords._rename_coords(data_arrays: dict[str, DataArray], coord_name: str) dict[str, DataArray] [source]
Rename coordinates in the datasets.
- satpy.cf.coords._try_add_coordinate(data_arrays: dict[str, DataArray], dataarray_name: str, coord: str) dict[str, DataArray] [source]
Try to add a coordinate to the dataarray, warn if not possible.
- satpy.cf.coords._try_get_units_from_coords(data_arr: DataArray) str | None [source]
Try to retrieve coordinate x/y units.
- satpy.cf.coords._try_to_get_crs(data_arr: DataArray) CRS [source]
Try to get a CRS from attributes.
- satpy.cf.coords._warn_if_pretty_but_not_unique(pretty, coord_name)[source]
Warn if coordinates cannot be pretty-formatted due to non-uniqueness.
- satpy.cf.coords.add_coordinates_attrs_coords(data_arrays: dict[str, DataArray]) dict[str, DataArray] [source]
Add to DataArrays the coordinates specified in the ‘coordinates’ attribute.
It deals with the ‘coordinates’ attribute indicating lat/lon coords. The ‘coordinates’ attribute is dropped from each DataArray.
If the coordinates attribute of a data array links to other dataarrays in the scene, for example coordinates=’lon lat’, add them as coordinates to the data array and drop that attribute.
In the final call to xr.Dataset.to_netcdf() all coordinate relations will be resolved and the coordinates attributes be set automatically.
- satpy.cf.coords.add_time_bounds_dimension(ds: Dataset, time: str = 'time') Dataset [source]
Add time bound dimension to xr.Dataset.
- satpy.cf.coords.add_xy_coords_attrs(data_arr: DataArray) DataArray [source]
Add relevant attributes to x, y coordinates.
- satpy.cf.coords.check_unique_projection_coords(data_arrays: dict[str, DataArray]) None [source]
Check that all datasets share the same projection coordinates x/y.
- satpy.cf.coords.ensure_unique_nondimensional_coords(data_arrays: dict[str, DataArray], pretty: bool = False) dict[str, DataArray] [source]
Make non-dimensional coordinates unique among all datasets.
Non-dimensional coordinates, such as scanline timestamps, may occur in multiple datasets with the same name and dimension but different values.
In order to avoid conflicts, prepend the dataset name to the coordinate name. If a non-dimensional coordinate is unique among all datasets and pretty=True, its name will not be modified. Since all datasets must have the same projection coordinates, this is not applied to latitude and longitude.
- Parameters:
data_arrays – Dictionary of (dataset name, dataset)
pretty – Don’t modify coordinate names, if possible. Makes the file prettier, but possibly less consistent.
- Returns:
Dictionary holding the updated datasets
satpy.cf.data_array module
Utility to generate a CF-compliant DataArray.
- satpy.cf.data_array._preprocess_data_array_name(dataarray, numeric_name_prefix, include_orig_name)[source]
Change the DataArray name by prepending numeric_name_prefix if the name is a digit.
- satpy.cf.data_array.make_cf_data_array(dataarray, epoch=None, flatten_attrs=False, exclude_attrs=None, include_orig_name=True, numeric_name_prefix='CHANNEL_')[source]
Make the xr.DataArray CF-compliant.
- Parameters:
dataarray (xr.DataArray) – The data array to be made CF-compliant.
epoch (str, optional) – Reference time for encoding of time coordinates. If None, the default reference time is defined using from satpy.cf.coords import EPOCH.
flatten_attrs (bool, optional) – If True, flatten dict-type attributes. Defaults to False.
exclude_attrs (list, optional) – List of dataset attributes to be excluded. Defaults to None.
include_orig_name (bool, optional) – Include the original dataset name in the netcdf variable attributes. Defaults to True.
numeric_name_prefix (str, optional) – Prepend dataset name with this if starting with a digit. Defaults to "CHANNEL_".
- Returns:
A CF-compliant xr.DataArray.
- Return type:
xr.DataArray
satpy.cf.datasets module
Utility to generate CF-compliant Datasets.
- satpy.cf.datasets._collect_cf_dataset(list_dataarrays, epoch=None, flatten_attrs=False, exclude_attrs=None, include_lonlats=True, pretty=False, include_orig_name=True, numeric_name_prefix='CHANNEL_')[source]
Process a list of xr.DataArray and return a dictionary with CF-compliant xr.Dataset.
- Parameters:
list_dataarrays (list) – List of DataArrays to make CF compliant and merge into an xr.Dataset.
epoch (str, optional) – Reference time for encoding the time coordinates. Example format: “seconds since 1970-01-01 00:00:00”. If None, the default reference time is defined using from satpy.cf.coords import EPOCH.
flatten_attrs (bool, optional) – If True, flatten dict-type attributes.
exclude_attrs (list, optional) – List of xr.DataArray attribute names to be excluded.
include_lonlats (bool, optional) – If True, includes ‘latitude’ and ‘longitude’ coordinates also for a satpy.Scene defined on an AreaDefinition. If the ‘area’ attribute is a SwathDefinition, it always includes latitude and longitude coordinates.
pretty (bool, optional) – Don’t modify coordinate names, if possible. Makes the file prettier, but possibly less consistent.
include_orig_name (bool, optional) – Include the original dataset name as a variable attribute in the xr.Dataset.
numeric_name_prefix (str, optional) – Prefix to add to each variable with a name starting with a digit. Use ‘’ or None to leave this out.
- Returns:
A partially CF-compliant xr.Dataset.
- Return type:
xr.Dataset
- satpy.cf.datasets._get_extra_ds(dataarray, keys=None)[source]
Get the ancillary_variables DataArrays associated to a dataset.
- satpy.cf.datasets._get_group_dataarrays(group_members, list_dataarrays)[source]
Yield DataArrays that are part of a specific group.
- satpy.cf.datasets._get_groups(groups, list_datarrays)[source]
Return a dictionary with the list of xr.DataArray associated to each group.
If no groups (groups=None), return all DataArray attached to a single None key. Else, collect the DataArrays associated to each group.
- satpy.cf.datasets.collect_cf_datasets(list_dataarrays, header_attrs=None, exclude_attrs=None, flatten_attrs=False, pretty=True, include_lonlats=True, epoch=None, include_orig_name=True, numeric_name_prefix='CHANNEL_', groups=None)[source]
Process a list of xr.DataArray and return a dictionary with CF-compliant xr.Datasets.
If the xr.DataArrays do not share the same dimensions, it creates a collection of xr.Datasets sharing the same dimensions.
- Parameters:
list_dataarrays (list) – List of DataArrays to make CF compliant and merge into groups of xr.Datasets.
header_attrs (dict) – Global attributes of the output xr.Dataset.
epoch (str, optional) – Reference time for encoding the time coordinates. Example format: “seconds since 1970-01-01 00:00:00”. If None, the default reference time is retrieved using from satpy.cf.coords import EPOCH.
flatten_attrs (bool, optional) – If True, flatten dict-type attributes.
exclude_attrs (list, optional) – List of xr.DataArray attribute names to be excluded.
include_lonlats (bool, optional) – If True, includes ‘latitude’ and ‘longitude’ coordinates also for a satpy.Scene defined on an AreaDefinition. If the ‘area’ attribute is a SwathDefinition, it always includes latitude and longitude coordinates.
pretty (bool, optional) – Don’t modify coordinate names, if possible. Makes the file prettier, but possibly less consistent.
include_orig_name (bool, optional) – Include the original dataset name as a variable attribute in the xr.Dataset.
numeric_name_prefix (str, optional) – Prefix to add to each variable with a name starting with a digit. Use ‘’ or None to leave this out.
groups (dict, optional) – Group datasets according to the given assignment: {‘<group_name>’: [‘dataset_name1’, ‘dataset_name2’, …]}. Used to create grouped netCDFs using the CF_Writer. If None, no groups will be created.
- Returns:
- A tuple containing:
grouped_datasets (dict): A dictionary of CF-compliant xr.Dataset: {group_name: xr.Dataset}.
header_attrs (dict): Global attributes to be attached to the xr.Dataset / netCDF4.
- Return type:
satpy.cf.decoding module
CF decoding.
satpy.cf.encoding module
CF encoding.
- satpy.cf.encoding._set_default_chunks(encoding, dataset)[source]
Update encoding to preserve current dask chunks.
Existing user-defined chunks take precedence.
- satpy.cf.encoding._set_default_fill_value(encoding, dataset)[source]
Set default fill values.
Avoid _FillValue attribute being added to coordinate variables (https://github.com/pydata/xarray/issues/1865).
- satpy.cf.encoding._set_default_time_encoding(encoding, dataset)[source]
Set default time encoding.
Make sure time coordinates and bounds have the same units. Default is xarray’s CF datetime encoding, which can be overridden by user-defined encoding.
- satpy.cf.encoding._update_encoding_dataset_names(encoding, dataset, numeric_name_prefix)[source]
Ensure variable names of the encoding dictionary account for numeric_name_prefix.
A lot of channel names in satpy start with a digit. When preparing CF-compliant datasets, these channels are prefixed with numeric_name_prefix.
If variable names in the encoding dictionary are numeric digits, their names are prefixed with numeric_name_prefix.
Module contents
Code for generation of CF-compliant datasets.
satpy.composites package
Submodules
satpy.composites.abi module
Composite classes for the ABI instrument.
- class satpy.composites.abi.SimulatedGreen(name, fractions=(0.465, 0.465, 0.07), **kwargs)[source]
Bases:
GenericCompositor
A single-band dataset resembling a Green (0.55 µm) band.
This compositor creates a single band product by combining three other bands in various amounts. The general formula with dependencies (d) and fractions (f) is:
result = d1 * f1 + d2 * f2 + d3 * f3
See the fractions keyword argument for more information. Commonly used fractions for ABI data with C01, C02, and C03 inputs include:
SatPy default (historical): (0.465, 0.465, 0.07)
CIMSS (Kaba): (0.45, 0.45, 0.10)
EDC: (0.45706946, 0.48358168, 0.06038137)
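As a quick numeric illustration (the reflectance values are made up), the default fractions combine the three inputs as:

c01, c02, c03 = 0.20, 0.30, 0.25        # illustrative reflectances
f1, f2, f3 = 0.465, 0.465, 0.07         # SatPy default (historical) fractions
green = c01 * f1 + c02 * f2 + c03 * f3  # 0.093 + 0.1395 + 0.0175 = 0.25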
Initialize fractions for input channels.
- Parameters:
name (str) – Name of this composite
fractions (iterable) – Fractions of each input band to include in the result.
satpy.composites.agri module
Composite classes for the AGRI instrument.
- class satpy.composites.agri.SimulatedRed(name, fractions=(1.0, 0.13, 0.87), **kwargs)[source]
Bases:
GenericCompositor
A single-band dataset resembling a Red (0.64 µm) band.
This compositor creates a single band product by combining two other bands by preset amounts. The general formula with dependencies (d) and fractions (f) is:
result = (f1 * d1 - f2 * d2) / f3
See the fractions keyword argument for more information. The default setup is to use:
f1 = 1.0
f2 = 0.13
f3 = 0.87
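As a quick numeric illustration (the reflectance values are made up):

d1, d2 = 0.30, 0.20             # illustrative reflectances
f1, f2, f3 = 1.0, 0.13, 0.87    # default fractions
red = (f1 * d1 - f2 * d2) / f3  # (0.30 - 0.026) / 0.87 ≈ 0.315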
Initialize fractions for input channels.
- Parameters:
name (str) – Name of this composite
fractions (iterable) – Fractions of each input band to include in the result.
satpy.composites.ahi module
Composite classes for AHI.
satpy.composites.cloud_products module
Compositors for cloud products.
- class satpy.composites.cloud_products.CloudCompositorCommonMask(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
SingleBandCompositor
Put cloud-free pixels as fill_value_color in palette.
Initialise the compositor.
- class satpy.composites.cloud_products.CloudCompositorWithoutCloudfree(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
SingleBandCompositor
Put cloud-free pixels as fill_value_color in palette.
Initialise the compositor.
- class satpy.composites.cloud_products.PrecipCloudsRGB(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
Precipitation clouds compositor.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
satpy.composites.config_loader module
Classes for loading compositor and modifier configuration files.
- class satpy.composites.config_loader._CompositeConfigHelper(loaded_compositors, sensor_id_keys)[source]
Bases:
object
Helper class for parsing composite configurations.
The provided loaded_compositors dictionary is updated inplace.
- class satpy.composites.config_loader._ModifierConfigHelper(loaded_modifiers, sensor_id_keys)[source]
Bases:
object
Helper class for parsing modifier configurations.
The provided loaded_modifiers dictionary is updated inplace.
- satpy.composites.config_loader._lru_cache_with_config_path(func: Callable)[source]
Use lru_cache but include satpy’s current config_path.
- satpy.composites.config_loader.all_composite_sensors()[source]
Get all sensor names from available composite configs.
- satpy.composites.config_loader.load_compositor_configs_for_sensor(sensor_name: str) tuple[dict[str, dict], dict[str, dict], dict] [source]
Load compositor, modifier, and DataID key information from configuration files for the specified sensor.
- Parameters:
sensor_name – Sensor name that has matching sensor_name.yaml config files.
- Returns:
Where comps is a dictionary:
composite ID -> compositor object
And mods is a dictionary:
modifier name -> (modifier class, modifier options)
And data_id_keys is a dictionary:
DataID key -> key properties
- Return type:
(comps, mods, data_id_keys)
- satpy.composites.config_loader.load_compositor_configs_for_sensors(sensor_names: Iterable[str]) tuple[dict[str, dict], dict[str, dict]] [source]
Load compositor and modifier configuration files for the specified sensors.
- Parameters:
sensor_names (list of strings) – Sensor names that have matching sensor_name.yaml config files.
- Returns:
Where comps is a dictionary:
sensor_name -> composite ID -> compositor object
And mods is a dictionary:
sensor_name -> modifier name -> (modifier class, modifiers options)
- Return type:
(comps, mods)
satpy.composites.glm module
Composite classes for the GLM instrument.
- class satpy.composites.glm.HighlightCompositor(name, min_highlight=0.0, max_highlight=10.0, max_factor=(0.8, 0.8, -0.8, 0), **kwargs)[source]
Bases:
GenericCompositor
Highlight pixels of a layer by an amount determined by a secondary layer.
The highlighting is applied per channel to either add or subtract an intensity from the primary image. In the addition case, the code is essentially doing:
highlight_factor = (highlight_data - min_highlight) / (max_highlight - min_highlight)
channel_result = primary_data + highlight_factor * max_factor
The max_factor is defined per channel and can be positive for an additive effect, negative for a subtractive effect, or zero for no effect.
Initialize composite with highlight factor options.
- Parameters:
min_highlight (float) – Minimum raw value of the “highlight” data that will be used for linearly scaling the data along with max_highlight.
max_highlight (float) – Maximum raw value of the “highlight” data that will be used for linearly scaling the data along with min_highlight.
max_factor (tuple) – Maximum effect that the highlight data can have on each channel of the primary image data. This will be multiplied by the linearly scaled highlight data and then added or subtracted from the highlight channels. See class docstring for more information. By default this is set to (0.8, 0.8, -0.8, 0) meaning the Red and Green channels will be added to by at most 0.8, the Blue channel will be subtracted from by at most 0.8, and the Alpha channel will not be affected.
satpy.composites.sar module
Composite classes for SAR instruments.
- class satpy.composites.sar.SARIce(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
The SAR Ice composite.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.sar.SARIceLegacy(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
The SAR Ice composite, legacy version with dynamic stretching.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.sar.SARIceLog(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
The SAR Ice composite, using log-scale data.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.sar.SARQuickLook(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
The SAR QuickLook composite.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.sar.SARRGB(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
The SAR RGB composite.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- satpy.composites.sar._square_root_channels(*projectables)[source]
Return the square root of the channels, preserving the attributes.
- satpy.composites.sar.overlay(top, bottom, maxval=None)[source]
Blending two layers.
from: https://docs.gimp.org/en/gimp-concepts-layer-modes.html
- satpy.composites.sar.soft_light(top, bottom, maxval)[source]
Apply soft light.
http://www.pegtop.net/delphi/articles/blendmodes/softlight.htm
satpy.composites.spectral module
Composite classes for spectral adjustments.
- class satpy.composites.spectral.HybridGreen(*args, fraction=0.15, **kwargs)[source]
Bases:
SpectralBlender
Corrector of the FCI or AHI green band.
The green band in FCI and AHI (and other bands centered at 0.51 microns) deliberately misses the chlorophyll spectral reflectance local maximum at 0.55 microns in order to focus on aerosol and ash rather than on vegetation. This affects true colour RGBs, because vegetation looks brown rather than green and barren surface types typically get a reddish hue.
To correct for this the hybrid green approach proposed by Miller et al. (2016, DOI:10.1175/BAMS-D-15-00154.2) is used. The basic idea is to include some contribution also from the 0.86 micron channel, which is known for its sensitivity to vegetation. The formula used for this is:
hybrid_green = (1 - F) * R(0.51) + F * R(0.86)
where F is a constant value, that is set to 0.15 by default in Satpy.
For example, the HybridGreen compositor can be used as follows to construct a hybrid green channel for AHI, with 15% contribution from the near-infrared 0.85 µm band (B04) and the remaining 85% from the native green 0.51 µm band (B02):
hybrid_green:
  compositor: !!python/name:satpy.composites.spectral.HybridGreen
  fraction: 0.15
  prerequisites:
    - name: B02
      modifiers: [sunz_corrected, rayleigh_corrected]
    - name: B04
      modifiers: [sunz_corrected, rayleigh_corrected]
  standard_name: toa_bidirectional_reflectance
Other examples can be found in the ahi.yaml and ami.yaml composite files in the satpy distribution.
Set default keyword argument values.
- class satpy.composites.spectral.NDVIHybridGreen(*args, ndvi_min=0.0, ndvi_max=1.0, limits=(0.15, 0.05), strength=1.0, **kwargs)[source]
Bases:
SpectralBlender
Construct a NDVI-weighted hybrid green channel.
This green band correction follows the same approach as the HybridGreen compositor, but with a dynamic blend factor f that depends on the pixel-level Normalized Difference Vegetation Index (NDVI). The higher the NDVI, the smaller the contribution from the nir channel will be, following a linear (default) or non-linear relationship between the two ranges [ndvi_min, ndvi_max] and limits.
As an example, a new green channel using e.g. FCI data and the NDVIHybridGreen compositor can be defined like:
ndvi_hybrid_green:
  compositor: !!python/name:satpy.composites.spectral.NDVIHybridGreen
  ndvi_min: 0.0
  ndvi_max: 1.0
  limits: [0.15, 0.05]
  strength: 1.0
  prerequisites:
    - name: vis_05
      modifiers: [sunz_corrected, rayleigh_corrected]
    - name: vis_06
      modifiers: [sunz_corrected, rayleigh_corrected]
    - name: vis_08
      modifiers: [sunz_corrected]
  standard_name: toa_bidirectional_reflectance
In this example, pixels with NDVI=0.0 will be a weighted average with 15% contribution from the near-infrared vis_08 channel and the remaining 85% from the native green vis_05 channel, whereas pixels with NDVI=1.0 will be a weighted average with 5% contribution from the near-infrared vis_08 channel and the remaining 95% from the native green vis_05 channel. For other values of NDVI a linear interpolation between these values will be performed.
A strength larger or smaller than 1.0 will introduce a non-linear relationship between the two ranges [ndvi_min, ndvi_max] and limits. Hence, a higher strength (> 1.0) will result in a slower transition to higher/lower fractions at the NDVI extremes. Similarly, a lower strength (< 1.0) will result in a faster transition to higher/lower fractions at the NDVI extremes.
Initialize class and set the NDVI limits, blending fraction limits and strength.
- _apply_strength(ndvi)[source]
Introduce non-linearity by applying strength factor.
The method introduces non-linearity to the ndvi for a non-linear scaling from ndvi to blend fraction in _compute_blend_fraction. This can be used for a slower or faster transition to higher/lower fractions at the ndvi extremes. If strength equals 1.0, this operation has no effect on the ndvi.
- _compute_blend_fraction(ndvi)[source]
Compute pixel-level fraction of NIR signal to blend with native green signal.
This method linearly scales the input ndvi values to pixel-level blend fractions within the range [limits[0], limits[1]], following this implementation: https://stats.stackexchange.com/a/281164.
- class satpy.composites.spectral.SpectralBlender(*args, fractions=(), **kwargs)[source]
Bases:
GenericCompositor
Construct new channel by blending contributions from a set of channels.
This class can be used to compute weighted average of different channels. Primarily it’s used to correct the green band of AHI and FCI in order to allow for proper true color imagery.
Below is an example used to generate a corrected green channel for AHI using a weighted average from three channels, with 63% contribution from the native green channel (B02), 29% from the red channel (B03) and 8% from the near-infrared channel (B04):
corrected_green:
  compositor: !!python/name:satpy.composites.spectral.SpectralBlender
  fractions: [0.63, 0.29, 0.08]
  prerequisites:
    - name: B02
      modifiers: [sunz_corrected, rayleigh_corrected]
    - name: B03
      modifiers: [sunz_corrected, rayleigh_corrected]
    - name: B04
      modifiers: [sunz_corrected, rayleigh_corrected]
  standard_name: toa_bidirectional_reflectance
Other examples can be found in the ahi.yaml composite file in the satpy distribution.
Set default keyword argument values.
satpy.composites.viirs module
Composite classes for the VIIRS instrument.
- class satpy.composites.viirs.AdaptiveDNB(*args, **kwargs)[source]
Bases:
HistogramDNB
Adaptive histogram equalized DNB composite.
The logic for this code was taken from Polar2Grid and was originally developed by Eva Schiffer (SSEC).
This composite separates the DNB data into 3 main regions: Day, Night, and Mixed. Each region is equalized separately to bring out the most information from the region due to the high dynamic range of the DNB data. Optionally, the mixed region can be separated into multiple smaller regions by using the mixed_degree_step keyword.
Initialize the compositor with values from the user or from the configuration file.
Adaptive histogram equalization and regular histogram equalization can be configured independently for each region: day, night, or mixed. A region can be set to use adaptive equalization “always”, or “never”, or only when there are multiple regions in a single scene “multiple” via the adaptive_X keyword arguments (see below).
- Parameters:
adaptive_day – one of (“always”, “multiple”, “never”) meaning when adaptive equalization is used.
adaptive_mixed – one of (“always”, “multiple”, “never”) meaning when adaptive equalization is used.
adaptive_night – one of (“always”, “multiple”, “never”) meaning when adaptive equalization is used.
- class satpy.composites.viirs.ERFDNB(*args, **kwargs)[source]
Bases:
CompositeBase
Equalized DNB composite using the error function (erf).
The logic for this code was taken from Polar2Grid and was originally developed by Curtis Seaman and Steve Miller. The original code was written in IDL and is included as comments in the code below.
Initialize ERFDNB specific keyword arguments.
- class satpy.composites.viirs.HistogramDNB(*args, **kwargs)[source]
Bases:
CompositeBase
Histogram equalized DNB composite.
The logic for this code was taken from Polar2Grid and was originally developed by Eva Schiffer (SSEC).
This composite separates the DNB data into 3 main regions: Day, Night, and Mixed. Each region is equalized separately to bring out the most information from the region due to the high dynamic range of the DNB data. Optionally, the mixed region can be separated into multiple smaller regions by using the mixed_degree_step keyword.
Initialize the compositor with values from the user or from the configuration file.
- Parameters:
high_angle_cutoff – solar zenith angle threshold in degrees, values above this are considered “night”
low_angle_cutoff – solar zenith angle threshold in degrees, values below this are considered “day”
mixed_degree_step – Step interval used to separate the “mixed” region into multiple parts; by default the whole mixed region is treated as one part.
- class satpy.composites.viirs.NCCZinke(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
CompositeBase
Equalized DNB composite using the Zinke algorithm [1].
References
Initialise the compositor.
- class satpy.composites.viirs.SnowAge(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
Create RGB snow product.
Product is based on method presented at the second CSPP/IMAPP users’ meeting at Eumetsat in Darmstadt on 14-16 April 2015
Bernard Bellec snow Look-Up Tables V 1.0 (c) Meteo-France. These Look-Up Tables allow you to create the RGB snow product for the SUOMI-NPP VIIRS Imager according to the algorithm presented at the second CSPP/IMAPP users’ meeting at Eumetsat in Darmstadt on 14-16 April 2015. The algorithm and the product are described in this presentation: http://www.ssec.wisc.edu/meetings/cspp/2015/Agenda%20PDF/Wednesday/Roquet_snow_product_cspp2015.pdf as well as in the paper: http://dx.doi.org/10.1016/j.rse.2017.04.028. For further information you may contact Bernard Bellec at Bernard.Bellec@meteo.fr or Pascale Roquet at Pascale.Roquet@meteo.fr.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- satpy.composites.viirs._calculate_weights(tile_size)[source]
Calculate a weight array for bilinear interpolation of histogram tiles.
The weight array will be used to quickly bilinearly-interpolate the histogram equalizations. Tile size should be the width and height of a tile in pixels.
- Returns: 4D weight array where the first 2 dimensions correspond to the grid of where the tiles are relative to the tile being interpolated.
- satpy.composites.viirs._check_moon_phase(moon_datasets: list[DataArray], start_time: datetime) float [source]
Check if we have Moon phase as an input dataset and, if not, calculate it.
- satpy.composites.viirs._compute_tile_dist_and_bin_info(data: ndarray, valid_data_mask: ndarray, std_mult_cutoff: float, do_log_scale: bool, log_offset: float, clip_limit: float, slope_limit: float, number_of_bins: int, row_tiles: int, col_tiles: int, tile_size: int)[source]
- satpy.composites.viirs._get_cumul_bin_info_for_tile(num_row_tile, weight_row, num_col_tile, weight_col, all_cumulative_dist_functions, all_bin_information)[source]
- satpy.composites.viirs._histogram_equalization_helper(valid_data, number_of_bins, clip_limit=None, slope_limit=None)[source]
Calculate the simplest possible histogram equalization, using only valid data.
- Returns:
cumulative distribution function and bin information
- satpy.composites.viirs._histogram_equalize_one_tile(data, valid_data_mask, std_mult_cutoff, do_log_scale, log_offset, clip_limit, slope_limit, number_of_bins, num_row_tile, num_col_tile, tile_size)[source]
- satpy.composites.viirs._interpolate_local_equalized_tiles(data, out, mask_to_equalize, valid_data_mask, do_log_scale, log_offset, tile_weights, all_bin_information, all_cumulative_dist_functions, row_idx, col_idx, tile_size)[source]
- satpy.composites.viirs._linear_normalization_from_0to1(data, mask, theoretical_max, theoretical_min=0, message='normalizing equalized data to fit in 0 to 1 range')[source]
Do a linear normalization so all data is in the 0 to 1 range.
This is a sloppy but fast calculation that relies on parameters giving it the correct theoretical current max and min so it can scale the data accordingly.
- satpy.composites.viirs.histogram_equalization(data, mask_to_equalize, number_of_bins=1000, std_mult_cutoff=4.0, do_zerotoone_normalization=True, out=None)[source]
Perform a histogram equalization on the data.
Data is selected by the mask_to_equalize mask. The data will be separated into number_of_bins levels for equalization and outliers beyond +/- std_mult_cutoff*std will be ignored.
If do_zerotoone_normalization is True the data selected by mask_to_equalize will be returned in the 0 to 1 range. Otherwise the data selected by mask_to_equalize will be returned in the 0 to number_of_bins range.
Note: the data will be changed in place.
- satpy.composites.viirs.local_histogram_equalization(data, mask_to_equalize, valid_data_mask=None, number_of_bins=1000, std_mult_cutoff=3.0, do_zerotoone_normalization=True, local_radius_px: int = 300, clip_limit=60.0, slope_limit=3.0, do_log_scale=True, log_offset=1e-05, out=None)[source]
Equalize the provided data (in the mask_to_equalize) using adaptive histogram equalization.
Tiles of width/height (2 * local_radius_px + 1) will be calculated and results for each pixel will be bilinearly interpolated from the nearest 4 tiles. When pixels fall near the edge of the image (where there is no adjacent tile), the resultant interpolated sum from the available tiles will be multiplied to account for the weight of any missing tiles:
pixel total interpolated value = pixel available interpolated value / (1 - missing interpolation weight)
If do_zerotoone_normalization is True the data will be scaled so that all data in the mask_to_equalize falls between 0 and 1; otherwise the data in mask_to_equalize will all fall between 0 and number_of_bins.
Returns: The equalized data
- satpy.composites.viirs.make_day_night_masks(solarZenithAngle, good_mask, highAngleCutoff, lowAngleCutoff, stepsDegrees=None)[source]
Generate masks for day, night, and twilight regions.
Masks are created from the provided solar zenith angle data.
Optionally provide the highAngleCutoff and lowAngleCutoff that define the limits of the terminator region (if no cutoffs are given the DEFAULT_HIGH_ANGLE and DEFAULT_LOW_ANGLE will be used).
Optionally provide the stepsDegrees that define how many degrees each “mixed” mask in the terminator region should be (if no stepsDegrees is given, the whole terminator region will be one mask).
Module contents
Base classes for composite objects.
- class satpy.composites.BackgroundCompositor(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
A compositor that overlays one composite on top of another.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.CategoricalDataCompositor(name, lut=None, **kwargs)[source]
Bases:
CompositeBase
Compositor used to recategorize categorical data using a look-up-table.
Each value in the data array will be recategorized to a new category defined in the look-up-table using the original value as an index for that look-up-table.
Example
data = [[1, 3, 2], [4, 2, 0]]
lut = [10, 20, 30, 40, 50]
res = [[20, 40, 30], [50, 30, 10]]
Get look-up-table used to recategorize data.
- Parameters:
lut (list) – a list of new categories. The length must be greater than the maximum value in the data array that should be recategorized.
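The recategorization in the example above amounts to numpy fancy indexing; a minimal standalone sketch:
import numpy as np

data = np.array([[1, 3, 2], [4, 2, 0]])
lut = np.array([10, 20, 30, 40, 50])
res = lut[data]   # each value in data is used as an index into lut
# res -> [[20, 40, 30], [50, 30, 10]]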
- class satpy.composites.CloudCompositor(name, transition_min=258.15, transition_max=298.15, transition_gamma=3.0, invert_alpha=False, **kwargs)[source]
Bases:
GenericCompositor
Detect clouds based on thresholding and use it as a mask for compositing.
Collect custom configuration values.
- Parameters:
transition_min (float) – Values below or equal to this are clouds -> opaque white
transition_max (float) – Values above this are cloud free -> transparent
transition_gamma (float) – Gamma correction to apply at the end
invert_alpha (bool) – Invert the alpha channel to make low data values transparent and high data values opaque.
- class satpy.composites.ColorizeCompositor(name, common_channel_mask=True, **kwargs)[source]
Bases:
ColormapCompositor
A compositor colorizing the data, interpolating the palette colors when needed.
Warning
Deprecated since Satpy 0.39. See the ColormapCompositor docstring for documentation on the alternative.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.ColormapCompositor(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
A compositor that uses colormaps.
Warning
Deprecated since Satpy 0.39.
This compositor is deprecated. To apply a colormap, use a SingleBandCompositor composite with a colorize() or palettize() enhancement instead. For example, to make a cloud_top_height composite based on a dataset ctth_alti palettized by ctth_alti_pal, the composite would be:
cloud_top_height:
  compositor: !!python/name:satpy.composites.SingleBandCompositor
  prerequisites:
    - ctth_alti
  standard_name: cloud_top_height
and the enhancement:
cloud_top_height:
  standard_name: cloud_top_height
  operations:
    - name: palettize
      method: !!python/name:satpy.enhancements.palettize
      kwargs:
        palettes:
          - dataset: ctth_alti_pal
            color_scale: 255
            min_value: 0
            max_value: 255
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- static build_colormap(palette, dtype, info)[source]
Create the colormap from the raw_palette and the valid_range.
Colormaps come in different forms, but they are all supposed to have color values between 0 and 255. The following cases are considered:
Palettes comprised of only a list of colors. If dtype is uint8, the values of the colormap are the enumeration of the colors. Otherwise, the colormap values will be spread evenly from the min to the max of the valid_range provided in info.
Palettes that have a palette_meanings attribute. The palette meanings will be used as values of the colormap.
- class satpy.composites.CompositeBase(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
object
Base class for all compositors and modifiers.
A compositor in Satpy is a class that takes in zero or more input DataArrays and produces a new DataArray with its own identifier (name). The result of a compositor is typically a brand new “product” that represents something different than the inputs that went into the operation.
See the ModifierBase class for information on the similar concept of "modifiers".
Initialise the compositor.
- static align_geo_coordinates(data_arrays: Sequence[DataArray]) list[DataArray] [source]
Align DataArrays along geolocation coordinates.
See align() for more information. This function uses the "override" join method to essentially ignore differences between coordinates. The check_geolocation() method should be called before this to ensure that geolocation coordinates and "area" are compatible. The drop_coordinates() method should be called before this to ensure that coordinates that are considered "negligible" when computing composites do not affect alignment.
- apply_modifier_info(origin, destination)[source]
Apply the modifier info from origin to destination.
- check_geolocation(data_arrays: Sequence[DataArray]) None [source]
Check that the geolocations of the data_arrays are compatible.
For the purpose of this method, “compatible” means:
All arrays should have the same dimensions.
Either all arrays should have an area, or none should.
If all have an area, the areas should be all the same.
- Parameters:
data_arrays – Arrays to be checked
- Raises:
IncompatibleAreas – If dimension or areas do not match.
ValueError – If some, but not all data arrays lack an area attribute.
- static drop_coordinates(data_arrays: Sequence[DataArray]) list[DataArray] [source]
Drop negligible non-dimensional coordinates.
Drops negligible coordinates if they do not correspond to any dimension. Negligible coordinates are defined in the NEGLIGIBLE_COORDS module attribute.
- Parameters:
data_arrays (List[arrays]) – Arrays to be checked
- property id
Return the DataID of the object.
- match_data_arrays(data_arrays: Sequence[DataArray]) list[DataArray] [source]
Match data arrays so that they can be used together in a composite.
For the purpose of this method, “can be used together” means:
All arrays should have the same dimensions.
Either all arrays should have an area, or none should.
If all have an area, the areas should be all the same.
In addition, negligible non-dimensional coordinates are dropped (see drop_coordinates()) and dask chunks are unified (see satpy.utils.unify_chunks()).
- Parameters:
data_arrays (List[arrays]) – Arrays to be checked
- Returns:
Arrays with negligible non-dimensional coordinates removed.
- Return type:
data_arrays (List[arrays])
- Raises:
IncompatibleAreas – If dimension or areas do not match.
ValueError – If some, but not all data arrays lack an area attribute.
- class satpy.composites.DayNightCompositor(name, lim_low=85.0, lim_high=88.0, day_night='day_night', include_alpha=True, **kwargs)[source]
Bases:
GenericCompositor
A compositor that blends day data with night data.
Using the day_night flag it is also possible to provide only a day product or only a night product and mask out (make transparent) the opposite portion of the image (night or day). See the documentation below for more details.
Collect custom configuration values.
- Parameters:
lim_low (float) – lower limit of Sun zenith angle for the blending of the given channels
lim_high (float) – upper limit of Sun zenith angle for the blending of the given channels
day_night (string) – "day_night" means both day and night portions will be kept; "day_only" means only the day portion will be kept; "night_only" means only the night portion will be kept
include_alpha (bool) – This only affects the “day only” or “night only” result. True means an alpha band will be added to the output image for transparency. False means the output is a single-band image with undesired pixels being masked out (replaced with NaNs).
- _get_data_for_single_side_product(foreground_data: DataArray, weights: DataArray) tuple[DataArray, DataArray, DataArray] [source]
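The blending performed by this compositor reduces to a weighted sum controlled by the sun zenith angle; a minimal numpy sketch (hypothetical arrays, not the compositor's actual implementation):
import numpy as np

sza = np.array([80.0, 85.0, 86.5, 88.0, 95.0])   # sun zenith angles (degrees)
lim_low, lim_high = 85.0, 88.0
weights = np.clip((lim_high - sza) / (lim_high - lim_low), 0.0, 1.0)

day = np.full(5, 0.8)     # e.g. a visible composite value per pixel
night = np.full(5, 0.2)   # e.g. an IR-based night composite value
blended = day * weights + night * (1.0 - weights)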
- class satpy.composites.DifferenceCompositor(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
CompositeBase
Make the difference of two data arrays.
Initialise the compositor.
- class satpy.composites.Filler(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
Fix holes in projectable 1 with data from projectable 2.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.FillingCompositor(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
Make a regular RGB, filling the RGB bands with the first provided dataset’s values.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.GenericCompositor(name, common_channel_mask=True, **kwargs)[source]
Bases:
CompositeBase
Basic colored composite builder.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- modes = {1: 'L', 2: 'LA', 3: 'RGB', 4: 'RGBA'}
- class satpy.composites.HighCloudCompositor(name, transition_min_limits=(210.0, 230.0), latitude_min_limits=(30.0, 60.0), transition_max=300, transition_gamma=1.0, **kwargs)[source]
Bases:
CloudCompositor
Detect high clouds based on latitude-dependent thresholding and use it as a mask for compositing.
This compositor aims at identifying high clouds and assigning them a transparency based on the brightness temperature (cloud opacity). In contrast to the CloudCompositor, the brightness temperature threshold at the lower end, used to identify high opaque clouds, is made a function of the latitude in order to have tropopause level clouds appear opaque at both high and low latitudes. This follows the Geocolor implementation of high clouds in Miller et al. (2020, DOI:10.1175/JTECH-D-19-0134.1), but with some adjustments to the thresholds based on recent developments and feedback from CIRA.
The two brightness temperature thresholds in transition_min_limits are used together with the corresponding latitude limits in latitude_min_limits to compute a modified version of transition_min that is later used when calling CloudCompositor. The modified version of transition_min will be an array with the same shape as the input projectable dataset, where the actual values of the threshold are a function of the dataset latitude:
transition_min = transition_min_limits[0] where abs(latitude) < latitude_min_limits[0]
transition_min = transition_min_limits[1] where abs(latitude) > latitude_min_limits[1]
transition_min = linear interpolation between transition_min_limits[0] and transition_min_limits[1] as a function of abs(latitude) in between.
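The three cases above are equivalent to a clipped linear interpolation over the absolute latitude; a minimal numpy sketch with hypothetical latitudes:
import numpy as np

lat = np.array([0.0, 25.0, 45.0, 70.0])
transition_min_limits = (210.0, 230.0)
latitude_min_limits = (30.0, 60.0)

# np.interp holds the end values outside the latitude interval and
# ramps linearly inside it
transition_min = np.interp(np.abs(lat), latitude_min_limits, transition_min_limits)
# -> [210., 210., 220., 230.]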
Collect custom configuration values.
- Parameters:
transition_min_limits (tuple) – Brightness temperature values used to identify opaque white clouds at different latitudes
transition_max (float) – Brightness temperatures above this value are not considered to be high clouds -> transparent
latitude_min_limits (tuple) – Latitude values defining the intervals for computing latitude-dependent transition_min values from transition_min_limits.
transition_gamma (float) – Gamma correction to apply to the alpha channel within the brightness temperature range (transition_min to transition_max).
- exception satpy.composites.IncompatibleAreas[source]
Bases:
Exception
Error raised upon compositing things of different shapes.
- exception satpy.composites.IncompatibleTimes[source]
Bases:
Exception
Error raised upon compositing things from different times.
- class satpy.composites.LongitudeMaskingCompositor(name, lon_min=None, lon_max=None, **kwargs)[source]
Bases:
SingleBandCompositor
Masks areas outside defined longitudes.
Collect custom configuration values.
- class satpy.composites.LowCloudCompositor(name, values_land=(1,), values_water=(0,), range_land=(0.0, 4.0), range_water=(0.0, 4.0), transition_gamma=1.0, invert_alpha=True, **kwargs)[source]
Bases:
CloudCompositor
Detect low-level clouds based on thresholding and use it as a mask for compositing during night-time.
This compositor computes the brightness temperature difference between a window channel (e.g. 10.5 micron) and the near-infrared channel e.g. (3.8 micron) and uses this brightness temperature difference, BTD, to create a partially transparent mask for compositing.
Pixels with BTD values below a given threshold will be transparent, whereas pixels with BTD values above another threshold will be opaque. The transparency of all other BTD values will be a linear function of the BTD value itself. Two sets of thresholds are used, one set for land surface types (range_land) and another one for water surface types (range_water), respectively. Hence, this compositor requires a land-water-mask as a prerequisite input. This follows the GeoColor implementation of night-time low-level clouds in Miller et al. (2020, DOI:10.1175/JTECH-D-19-0134.1), but with some adjustments to the thresholds based on recent developments and feedback from CIRA.
Please note that the spectral test and thus the output of the compositor (using the expected input data) is only applicable during night-time.
Init info.
Collect custom configuration values.
- Parameters:
values_land (list) – List of values used to identify land surface pixels in the land-water-mask.
values_water (list) – List of values used to identify water surface pixels in the land-water-mask.
range_land (tuple) – Threshold values used for masking low-level clouds from the brightness temperature difference over land surface types.
range_water (tuple) – Threshold values used for masking low-level clouds from the brightness temperature difference over water.
transition_gamma (float) – Gamma correction to apply to the alpha channel within the brightness temperature difference range.
invert_alpha (bool) – Invert the alpha channel to make low data values transparent and high data values opaque.
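A minimal numpy sketch of the brightness temperature difference (BTD) ramp described above (values are hypothetical, transition_gamma of 1.0 assumed; the real compositor also selects land or water thresholds from the land-water-mask):
import numpy as np

bt_window = np.array([280.0, 282.0, 285.0])  # e.g. 10.5 micron brightness temps
bt_nir = np.array([280.0, 280.0, 280.0])     # e.g. 3.8 micron brightness temps
btd = bt_window - bt_nir                     # [0., 2., 5.]

range_land = (0.0, 4.0)
alpha = np.clip((btd - range_land[0]) / (range_land[1] - range_land[0]), 0.0, 1.0)
# -> [0., 0.5, 1.]: transparent below the lower threshold, opaque above
# the upper one, linear in between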
- class satpy.composites.LuminanceSharpeningCompositor(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
Create a high resolution composite by sharpening a low resolution composite with high resolution luminance.
This is done by converting to YCbCr colorspace, replacing Y, and converting back to RGB.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.MaskingCompositor(name, transparency=None, conditions=None, mode='LA', **kwargs)[source]
Bases:
GenericCompositor
A compositor that masks e.g. IR 10.8 channel data using cloud products from NWC SAF.
Collect custom configuration values.
- Kwargs:
- transparency (dict): transparency for each cloud type as
key-value pairs in a dictionary. Will be converted to conditions. DEPRECATED.
- conditions (list): list of three items determining the masking
settings.
- mode (str, optional): Image mode to return. For single-band input,
this shall be “LA” (default) or “RGBA”. For multi-band input, this argument is ignored as the result is always RGBA.
Each condition in conditions consists of three items:
- method: Numpy method name. The following are supported
operations: less, less_equal, equal, greater_equal, greater, not_equal, isnan, isfinite, isinf, isneginf, or isposinf.
- value: threshold value of the mask applied with the
operator. Can be a string, in which case the corresponding value will be determined from flag_meanings and flag_values attributes of the mask. NOTE: no value should be given for the is* methods.
- transparency: transparency from interval [0 … 100] used
for the method/threshold. Value of 100 is fully transparent.
Example:
>>> conditions = [{'method': 'greater_equal', 'value': 0, 'transparency': 100},
...               {'method': 'greater_equal', 'value': 1, 'transparency': 80},
...               {'method': 'greater_equal', 'value': 2, 'transparency': 0},
...               {'method': 'isnan', 'transparency': 100}]
>>> compositor = MaskingCompositor("masking compositor", conditions=conditions)
>>> result = compositor([data, mask])
This will set transparency of data based on the values in the mask dataset. Locations where mask has values of 0 will be fully transparent, locations with 1 will be semi-transparent and locations with 2 will be fully visible in the resulting image. In the end all NaN areas in the mask are set to full transparency. All the unlisted locations will be visible.
The transparency is implemented by adding an alpha layer to the composite. The locations with transparency of 100 will be set to NaN in the data. If the input data contains an alpha channel, it will be discarded.
- _get_alpha_bands(data, mask_in, alpha_attrs)[source]
Get alpha bands.
From input data, masks, and attributes, get alpha band.
- _get_mask(method, value, mask_data)[source]
Get mask array from mask_data using method and threshold value.
The method is the name of a numpy function.
- _select_data_bands(data_in)[source]
Select data to be composited from input data.
From input data, select the bands that need to have masking applied.
- _set_data_nans(data, mask, attrs)[source]
Set data to nans where mask is True.
The attributes attrs* will be written to each band in data.
- _supported_modes = {'LA', 'RGBA'}
- class satpy.composites.MultiFiller(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
SingleBandCompositor
Fix holes in projectable 1 with data from the next projectables.
Initialise the compositor.
- satpy.composites.NEGLIGIBLE_COORDS = ['time']
Keywords identifying non-dimensional coordinates to be ignored during composite generation.
- class satpy.composites.NaturalEnh(name, ch16_w=1.3, ch08_w=2.5, ch06_w=2.2, *args, **kwargs)[source]
Bases:
GenericCompositor
Enhanced version of natural color composite by Simon Proud.
- Parameters:
ch16_w (float) – weight for the 1.6 µm channel (default: 1.3)
ch08_w (float) – weight for the 0.8 µm channel (default: 2.5)
ch06_w (float) – weight for the 0.6 µm channel (default: 2.2)
Initialize the class.
- class satpy.composites.PaletteCompositor(name, common_channel_mask=True, **kwargs)[source]
Bases:
ColormapCompositor
A compositor colorizing the data, not interpolating the palette colors.
Warning
Deprecated since Satpy 0.39. See the ColormapCompositor docstring for documentation on the alternative.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.RGBCompositor(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
Make a composite from three color bands (deprecated).
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.RatioCompositor(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
CompositeBase
Make the ratio of two data arrays.
Initialise the compositor.
- class satpy.composites.RatioSharpenedRGB(*args, **kwargs)[source]
Bases:
GenericCompositor
Sharpen RGB bands with ratio of a high resolution band to a lower resolution version.
Any pixel where the ratio is computed to be negative or infinite is reset to 1. Additionally, the ratio is limited to 1.5 on the high end to avoid large changes due to small discrepancies in instrument detector footprint. Note that the input data to this compositor must already be resampled so that all data arrays are the same shape.
Example:
R_lo - 1000m resolution - shape=(2000, 2000)
G    - 1000m resolution - shape=(2000, 2000)
B    - 1000m resolution - shape=(2000, 2000)
R_hi -  500m resolution - shape=(4000, 4000)

ratio = R_hi / R_lo
new_R = R_hi
new_G = G * ratio
new_B = B * ratio
In some cases, there could be multiple high resolution bands:
R_lo - 1000m resolution - shape=(2000, 2000)
G_hi -  500m resolution - shape=(4000, 4000)
B    - 1000m resolution - shape=(2000, 2000)
R_hi -  500m resolution - shape=(4000, 4000)
To avoid the green band getting involved in calculating the ratio or being sharpened, add "neutral_resolution_band: green" in the YAML config file. This way only the blue band will get sharpened:

ratio = R_hi / R_lo
new_R = R_hi
new_G = G_hi
new_B = B * ratio
Instantiate the ratio sharpener.
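A minimal numpy sketch of the ratio sharpening described above, including the clamping of invalid ratios (array shapes and values are hypothetical):
import numpy as np

R_hi = np.random.rand(4, 4)   # high resolution red band
R_lo = np.random.rand(4, 4)   # low resolution red, already resampled to match
G = np.random.rand(4, 4)
B = np.random.rand(4, 4)

ratio = R_hi / R_lo
ratio = np.where(np.isfinite(ratio) & (ratio >= 0), ratio, 1.0)  # negative/inf -> 1
ratio = np.minimum(ratio, 1.5)                                   # cap at 1.5

new_R, new_G, new_B = R_hi, G * ratio, B * ratio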
- class satpy.composites.RealisticColors(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
Create a realistic colours composite for SEVIRI.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.SandwichCompositor(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
Make a sandwich product.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.composites.SelfSharpenedRGB(*args, **kwargs)[source]
Bases:
RatioSharpenedRGB
Sharpen RGB with ratio of a band with a strided-version of itself.
Example:
R - 500m resolution - shape=(4000, 4000)
G - 1000m resolution - shape=(2000, 2000)
B - 1000m resolution - shape=(2000, 2000)

ratio = R / four_element_average(R)
new_R = R
new_G = G * ratio
new_B = B * ratio
Instantiate the ratio sharpener.
- class satpy.composites.SingleBandCompositor(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
CompositeBase
Basic single-band composite builder.
This preserves all the attributes of the dataset it is derived from.
Initialise the compositor.
- class satpy.composites.StaticImageCompositor(name, filename=None, url=None, known_hash=None, area=None, **kwargs)[source]
Bases:
GenericCompositor, DataDownloadMixin
A compositor that loads a static image from disk.
Environment variables in the filename are automatically expanded.
Collect custom configuration values.
- Parameters:
filename (str) – Name to use when storing and referring to the file in the data_dir cache. If url is provided (preferred), then this is used as the filename in the cache and will be appended to <data_dir>/composites/<class_name>/. If url is provided and filename is not, then the filename will be guessed from the url. If url is not provided, then it is assumed filename refers to a local file. If the filename does not come with an absolute path, data_dir will be used as the directory path. Environment variables are expanded.
url (str) – URL to remote file. When the composite is created the file will be downloaded and cached in Satpy's data_dir. Environment variables are expanded.
known_hash (str or None) – Hash of the remote file used to verify a successful download. If not provided then the download will not be verified. See satpy.aux_download.register_file() for more information.
area (str) – Name of area definition for the image. Optional for images with built-in area definitions (geotiff).
- Use cases:
url + no filename: Satpy determines the filename based on the filename in the URL, then downloads the URL and saves it to <data_dir>/<filename>. If the file already exists and known_hash is also provided, then the pooch library compares the hash of the file to the known_hash. If it does not match, the URL is re-downloaded; if it matches, no download is performed.
url + relative filename: Same as case 1 but filename is already provided so download goes to <data_dir>/<filename>. Same hashing behavior. This does not check for an absolute path.
No url + absolute filename: No download, filename is passed directly to generic_image reader. No hashing is done.
No url + relative filename: Check if <data_dir>/<filename> exists. If it does then make filename an absolute path. If it doesn’t, then keep it as is and let the exception at the bottom of the method get raised.
- class satpy.composites.SumCompositor(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
CompositeBase
Make the sum of two data arrays.
Initialise the compositor.
- satpy.composites._get_flag_value(mask, val)[source]
Get a numerical value of the named flag.
This function assumes the naming used in products generated with the NWC SAF GEO/PPS software.
- satpy.composites.add_alpha_bands(data)[source]
Only used for DayNightCompositor.
Add an alpha band to L or RGB composite as prerequisites for the following band matching to make the masked-out area transparent.
- satpy.composites.enhance2dataset(dset, convert_p=False)[source]
Return the enhancement dataset dset as an array.
If convert_p is True, enhancements generating a P mode will be converted to RGB or RGBA.
satpy.dataset package
Submodules
satpy.dataset.anc_vars module
Utilities for dealing with ancillary variables.
satpy.dataset.data_dict module
Classes and functions related to a dictionary with DataID keys.
- class satpy.dataset.data_dict.DatasetDict[source]
Bases:
dict
Special dictionary object that can handle dict operations based on dataset name, wavelength, or DataID.
Note: Internal dictionary keys are DataID objects.
- get_key(match_key, num_results=1, best=True, **dfilter)[source]
Get multiple fully-specified keys that match the provided query.
- Parameters:
key (DataID) – DataID of query parameters to use for searching. Any parameter that is None is considered a wild card and any match is accepted. Can also be a string representing the dataset name or a number representing the dataset wavelength.
num_results (int) – Number of results to return. If 0 return all, if 1 return only that element, otherwise return a list of matching keys.
**dfilter (dict) – See get_key function for more information.
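In practice a Scene stores its loaded datasets in a DatasetDict, so the same dataset can be looked up by name or wavelength; a doctest-style sketch (assuming a hypothetical ABI scene where channel C02, at roughly 0.64 µm, is loaded):
>>> scn.load(["C02"])
>>> scn["C02"].attrs["name"]
'C02'
>>> scn[0.64].attrs["name"]   # wavelength lookup resolves to the same dataset
'C02'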
- exception satpy.dataset.data_dict.TooManyResults[source]
Bases:
KeyError
Special exception when one key maps to multiple items in the container.
- satpy.dataset.data_dict.get_best_dataset_key(key, choices)[source]
Choose the “best” DataID from choices based on key.
To see how the keys are sorted, refer to DataQuery.sort_dataids().
This function assumes choices has already been filtered to only include datasets that match the provided key.
- Parameters:
key (DataQuery) – Query parameters to sort choices by.
choices (iterable) – DataID objects to sort through to determine the best dataset.
- Returns: List of the best DataIDs from choices. If there is more than one element, this function could not choose between the available datasets.
- satpy.dataset.data_dict.get_key(key, key_container, num_results=1, best=True, query=None, **kwargs)[source]
Get the fully-specified key best matching the provided key.
Only the best match is returned if best is True (default). See get_best_dataset_key for more information on how this is determined.
query is provided as a convenience to filter by multiple parameters at once without having to filter by multiple key inputs.
- Parameters:
key (DataID) – DataID of query parameters to use for searching. Any parameter that is None is considered a wild card and any match is accepted.
key_container (dict or set) – Container of DataID objects that uses hashing to quickly access items.
num_results (int) – Number of results to return. Use 0 for all matching results. If 1 then the single matching key is returned instead of a list of length 1. (default: 1)
best (bool) – Sort results to get “best” result first (default: True). See get_best_dataset_key for details.
query (DataQuery) –
filter for the key which can contain for example:
- resolution (float, int, or list): Resolution of the dataset in
dataset units (typically meters). This can also be a list of these numbers.
- calibration (str or list): Dataset calibration
(ex. 'reflectance'). This can also be a list of these strings.
- polarization (str or list): Dataset polarization
(ex. 'V'). This can also be a list of these strings.
- level (number or list): Dataset level (ex. 100). This can also be a
list of these numbers.
- modifiers (list): Modifiers applied to the dataset. Unlike
resolution and calibration this is the exact desired list of modifiers for one dataset, not a list of possible modifiers.
- Returns:
Matching key(s)
- Return type:
DataID or list of DataID, depending on num_results
- Raises:
KeyError – if there are no matching results, or if more than one result is found when num_results is 1.
satpy.dataset.dataid module
Dataset identifying objects.
- class satpy.dataset.dataid.DataID(id_keys, **keyval_dict)[source]
Bases:
dict
Identifier for all DataArray objects.
DataID is a dict that holds identifying and classifying information about a DataArray.
Init the DataID.
The id_keys dictionary has to be formed as described in Satpy internal workings: having a look under the hood. The other keyword arguments are values to be assigned to the keys. Note that None isn’t a valid value and will simply be ignored.
- convert_dict(keyvals)[source]
Convert a dictionary’s values to the types defined in this object’s id_keys.
- classmethod from_dataarray(array, default_keys={'name': {'required': True}, 'resolution': {'transitive': True}})[source]
Get the DataID using the dataarray attributes.
- property id_keys
Get the id_keys.
- class satpy.dataset.dataid.DataQuery(**kwargs)[source]
Bases:
object
The data query object.
A DataQuery can be used in Satpy to query for a Dataset. This way a fully qualified DataID can be found even if some DataID elements are unknown. In this case a * signifies something that is unknown or not applicable to the requested Dataset.
Initialize the query.
Check if dataid shares required keys with the current query.
- sort_dataids(dataids)[source]
Sort the DataIDs based on this query.
Returns the sorted dataids and the list of distances.
The sorting is performed based on the types of the keys to search on (as they are defined in the DataIDs from dataids). If that type defines a distance method, then it is used to find how ‘far’ the DataID is from the current query. If the type is a number, a simple subtraction is performed. For other types, the distance is 0 if the values are identical, np.inf otherwise.
For example, with the default DataID, we use the following criteria:
Central wavelength is nearest to the key wavelength if specified.
Least modified dataset if modifiers is None in key. Otherwise, the modifiers are ignored.
Highest calibration if calibration is None in key. Calibration priority is the order of the calibration list, defined as reflectance, brightness temperature, radiance, counts, if not overridden in the reader configuration.
Best resolution (smallest number) if resolution is None in key. Otherwise, the resolution is ignored.
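A minimal sketch of constructing such a query (the dataset name and resolution here are hypothetical):
from satpy.dataset.dataid import DataQuery

# any dataset named "C02" at 500 m resolution; calibration, modifiers,
# etc. are left as wild cards
query = DataQuery(name="C02", resolution=500)
# a Scene accepts queries directly, e.g. scn.load([query])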
- class satpy.dataset.dataid.ModifierTuple(iterable=(), /)[source]
Bases:
tuple
A tuple holder for modifiers.
- class satpy.dataset.dataid.ValueList(value)[source]
Bases:
IntEnum
A static value list.
This class is meant to be used for dynamically created Enums. Due to this it should not be used as a normal Enum class or there may be some unexpected behavior. For example, this class contains custom pickling and unpickling handling that may break in subclasses.
- class satpy.dataset.dataid.WavelengthRange(min, central, max, unit='µm')[source]
Bases:
WavelengthRange
A named tuple for wavelength ranges.
The elements of the range are min, central and max values, and optionally a unit (defaults to µm). No clever unit conversion is done here, it’s just used for checking that two ranges are comparable.
Create new instance of WavelengthRange(min, central, max, unit)
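A minimal sketch of the comparability check this enables (values hypothetical):
from satpy.dataset.dataid import WavelengthRange

wr = WavelengthRange(0.45, 0.47, 0.49)   # min, central, max in µm
print(0.46 in wr)   # True: a scalar is "in" the range if min <= value <= max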
- satpy.dataset.dataid._generalize_value_for_comparison(val)[source]
Get a generalized value for comparisons.
- satpy.dataset.dataid.create_filtered_query(dataset_key, filter_query)[source]
Create a DataQuery matching dataset_key and filter_query.
If a property is specified in both dataset_key and filter_query, the former has priority.
- satpy.dataset.dataid.default_co_keys_config = {'name': {'required': True}, 'resolution': {'transitive': True}}
Default ID keys for coordinate DataArrays.
- satpy.dataset.dataid.default_id_keys_config = {'calibration': {'enum': ['reflectance', 'brightness_temperature', 'radiance', 'radiance_wavenumber', 'counts'], 'transitive': True}, 'modifiers': {'default': (), 'type': <class 'satpy.dataset.dataid.ModifierTuple'>}, 'name': {'required': True}, 'resolution': {'transitive': False}, 'wavelength': {'type': <class 'satpy.dataset.dataid.WavelengthRange'>}}
Default ID keys DataArrays.
- satpy.dataset.dataid.get_keys_from_config(common_id_keys, config)[source]
Gather keys for a new DataID from the ones available in configured dataset.
- satpy.dataset.dataid.minimal_default_keys_config = {'name': {'required': True}, 'resolution': {'transitive': True}}
Minimal ID keys for DataArrays, for example composites.
- satpy.dataset.dataid.wlklass
alias of
WavelengthRange
satpy.dataset.metadata module
Utilities for merging metadata from various sources.
- satpy.dataset.metadata._all_arrays_equal(arrays)[source]
Check if the arrays are equal.
If the arrays are lazy, just check if they have the same identity.
- satpy.dataset.metadata._all_identical(values)[source]
Check that the identities of all values are the same.
- satpy.dataset.metadata._all_list_of_arrays_equal(array_lists)[source]
Check that the lists of arrays are equal.
- satpy.dataset.metadata._dict_equal(d1, d2)[source]
Check that two dictionaries are equal.
Nested dictionaries are flattened to facilitate comparison.
- satpy.dataset.metadata._filter_time_values(values)[source]
Remove values that are not datetime objects.
- satpy.dataset.metadata._get_valid_dicts(metadata_objects)[source]
Get the valid dictionaries matching the metadata_objects.
- satpy.dataset.metadata.average_datetimes(datetime_list)[source]
Average a series of datetime objects.
Note
This function assumes all datetime objects are naive and in the same time zone (UTC).
- Parameters:
datetime_list (iterable) – Datetime objects to average
Returns: Average datetime as a datetime object
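A minimal usage sketch:
from datetime import datetime
from satpy.dataset.metadata import average_datetimes

avg = average_datetimes([datetime(2024, 1, 1, 12, 0),
                         datetime(2024, 1, 1, 12, 10)])
print(avg)   # 2024-01-01 12:05:00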
- satpy.dataset.metadata.combine_metadata(*metadata_objects, average_times=None)[source]
Combine the metadata of two or more Datasets.
If the values corresponding to any keys are not equal or do not exist in all provided dictionaries then they are not included in the returned dictionary.
All values of keys containing the substring 'start_time' will be set to the earliest value, and similarly keys containing 'end_time' to the latest time. All other keys containing the word 'time' are averaged. Before these adjustments, None values resulting from data that don't have times associated with them are removed. These rules are also applied to values in the 'time_parameters' dictionary.
Changed in version 0.47: Before Satpy 0.47, all times, including start_time and end_time, were averaged.
In the interest of processing time, lazy arrays are compared by object identity rather than by their contents.
- Parameters:
*metadata_objects – MetadataObject or dict objects to combine
- Kwargs:
average_times (bool): Removed option to average all time attributes.
- Returns:
the combined metadata
- Return type:
dict
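A minimal sketch of the merging rules (the dictionaries are hypothetical):
from datetime import datetime
from satpy.dataset.metadata import combine_metadata

meta1 = {"platform_name": "GOES-16", "start_time": datetime(2024, 1, 1, 12, 0)}
meta2 = {"platform_name": "GOES-16", "start_time": datetime(2024, 1, 1, 12, 10)}
combined = combine_metadata(meta1, meta2)
# shared, equal keys are kept; 'start_time' takes the earliest value:
# {'platform_name': 'GOES-16', 'start_time': datetime(2024, 1, 1, 12, 0)}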
Module contents
Classes and functions related to data identification and querying.
satpy.demo package
Submodules
satpy.demo._google_cloud_platform module
- satpy.demo._google_cloud_platform.get_bucket_files(glob_pattern, base_dir, force=False, pattern_slice=None)[source]
Download files from Google Cloud Storage.
- Parameters:
glob_pattern (str or list) – Glob pattern string or series of patterns used to search for on Google Cloud Storage. The pattern should include the "gs://" protocol prefix. If a list of lists, then the results of each sublist pattern are concatenated and the result is treated as one pattern result. This is important for things like pattern_slice and complicated glob patterns not supported by GCP.
base_dir (str) – Root directory to place downloaded files on the local system.
force (bool) – Force re-download of data regardless of its existence on the local system. Warning: May delete non-demo files stored in download directory.
pattern_slice (slice) – Slice object to limit the number of files returned by each glob pattern.
satpy.demo.abi_l1b module
Demo data download helper functions for ABI L1b data.
- satpy.demo.abi_l1b.get_hurricane_florence_abi(base_dir=None, method=None, force=False, channels=None, num_frames=10)[source]
Get GOES-16 ABI (Meso sector) data from 2018-09-11 13:00Z to 17:00Z.
- Parameters:
base_dir (str) – Base directory for downloaded files.
method (str) – Force download method for the data if not already cached. Allowed options are: 'gcsfs'. Default of None will choose the best method based on environment settings.
force (bool) – Force re-download of data regardless of its existence on the local system. Warning: May delete non-demo files stored in download directory.
channels (list) – Channels to include in download. Defaults to all 16 channels.
num_frames (int or slice) – Number of frames to download. Maximum 240 frames. Default 10 frames.
Size per frame (all channels): ~15MB
Total size (default 10 frames, all channels): ~124MB
Total size (240 frames, all channels): ~3.5GB
- satpy.demo.abi_l1b.get_us_midlatitude_cyclone_abi(base_dir=None, method=None, force=False)[source]
Get GOES-16 ABI (CONUS sector) data from 2019-03-14 00:00Z.
- Parameters:
base_dir (str) – Base directory for downloaded files.
method (str) – Force download method for the data if not already cached. Allowed options are: 'gcsfs'. Default of None will choose the best method based on environment settings.
force (bool) – Force re-download of data regardless of its existence on the local system. Warning: May delete non-demo files stored in download directory.
Total size: ~110MB
satpy.demo.ahi_hsd module
Demo data download helper functions for AHI HSD data.
satpy.demo.fci module
Demo FCI data download.
- satpy.demo.fci._unpack_tarfile_to(filename, subdir)[source]
Unpack content of tarfile in filename to subdir.
satpy.demo.seviri_hrit module
Demo data download for SEVIRI HRIT files.
- satpy.demo.seviri_hrit._generate_filenames(pattern, channel, segments)[source]
Generate the filenames for channel and segments.
satpy.demo.utils module
Utilities for demo data download.
satpy.demo.viirs_sdr module
Demo data download for VIIRS SDR HDF5 files.
- satpy.demo.viirs_sdr.get_viirs_sdr_20170128_1229(base_dir=None, channels=('I01', 'I02', 'I03', 'I04', 'I05', 'M01', 'M02', 'M03', 'M04', 'M05', 'M06', 'M07', 'M08', 'M09', 'M10', 'M11', 'M12', 'M13', 'M14', 'M15', 'M16', 'DNB'), granules=(1, 2, 3, 4, 5, 6, 7, 8, 9, 10))[source]
Get VIIRS SDR files for 2017-01-28 12:29 to 12:43.
These files are downloaded from Zenodo. You can see the full file listing here: https://zenodo.org/record/263296
Specific channels can be specified with the channels keyword argument. By default, all channels (all I bands, M bands, and DNB bands) will be downloaded. Channels are referred to by their band type and channel number (ex. "I01" or "M16" or "DNB"). Terrain-corrected geolocation files are always downloaded when the corresponding band data is specified.
The granules argument will control which granules ("time steps") are downloaded. There are 10 available and the keyword argument can be specified as a tuple of integers from 1 to 10.
This full dataset is ~10.1GB.
Notes
File list was retrieved using the zenodo API.
import json
import requests

viirs_listing = requests.get("https://zenodo.org/api/records/263296")
viirs_dict = json.loads(viirs_listing.content)
print("\n".join(sorted(x['links']['self'] for x in viirs_dict['files'])))
Module contents
Demo data download helper functions.
Each get_* function below downloads files to a local directory and returns a list of paths to those files. Some (not all) functions have multiple options for how the data is downloaded (via the method keyword argument), including:
- gcsfs:
Download data from a public Google Cloud Storage bucket using the gcsfs package.
- unidata_thredds:
Access data using OpenDAP or similar method from Unidata’s public THREDDS server (https://thredds.unidata.ucar.edu/thredds/catalog.html).
- uwaos_thredds:
Access data using OpenDAP or similar method from the University of Wisconsin - Madison’s AOS department’s THREDDS server.
- http:
A last-resort download method of a tarball or zip file from one or more servers available to the Satpy project, used when nothing else is available.
- uw_arcdata:
A network mount available on many servers at the Space Science and Engineering Center (SSEC) at the University of Wisconsin - Madison. This method is mainly meant for when tutorials are taught at the SSEC using a Jupyter Hub server.
To use these functions, do:
>>> from satpy import Scene, demo
>>> filenames = demo.get_us_midlatitude_cyclone_abi()
>>> scn = Scene(reader='abi_l1b', filenames=filenames)
satpy.enhancements package
Submodules
satpy.enhancements.abi module
Enhancement functions specific to the ABI sensor.
- satpy.enhancements.abi._cimss_true_color_contrast(img_data)[source]
Perform per-chunk enhancement.
Code ported from Kaba Bah's AWIPS python plugin for creating the CIMSS Natural (True) Color image in AWIPS. AWIPS provides the image data to that python code on a 0-255 scale. Satpy gives this function the data on a 0-1.0 scale (assuming linear stretching and sqrt enhancements have already been applied).
satpy.enhancements.atmosphere module
Enhancements related to visualising atmospheric phenomena.
- satpy.enhancements.atmosphere._calc_essl_blue(ratio)[source]
Calculate values for blue based on scaled and clipped ratio.
- satpy.enhancements.atmosphere._calc_essl_green(ratio)[source]
Calculate values for green based on scaled and clipped ratio.
- satpy.enhancements.atmosphere._calc_essl_red(ratio)[source]
Calculate values for red based on scaled and clipped ratio.
- satpy.enhancements.atmosphere._is_fci_test_data(data)[source]
Check if we are working with FCI test data.
- satpy.enhancements.atmosphere._scale_and_clip(ratio, low, high)[source]
Scale ratio values to [0, 1] and clip values outside this range.
- satpy.enhancements.atmosphere.essl_moisture(img, low=1.1, high=1.6) None [source]
Low level moisture by European Severe Storms Laboratory (ESSL).
Expects a mode L image with data corresponding to the ratio of the calibrated reflectances for the 0.86 µm and 0.906 µm channels.
This composite and its colorisation were developed by ESSL.
Ratio values are scaled from the range [low, high], which is by default between 1.1 and 1.6 but might be tuned based on region or sensor, to [0, 1]. Values outside this range are clipped. Color values for red, green, and blue are calculated as follows, where x is the scaled ratio between the 0.86 µm and 0.906 µm channels:

R = max(1.375 - 2.67x, -0.75 + x)
G = 1 - 8x/7
B = max(0.75 - 1.5x, 0.25 - (x - 0.75)^2)

The value of img.data is modified in-place.
A color interpretation guide is pending further adjustments to the parameters for current and future sensors.
- Parameters:
img – XRImage containing the relevant composite
low – optional, low end for scaling, defaults to 1.1
high – optional, high end for scaling, defaults to 1.6
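A minimal numpy sketch of the scaling and color formulas above (standalone, not the actual enhancement code; outputs are additionally clipped to [0, 1] here):
import numpy as np

def essl_rgb(ratio, low=1.1, high=1.6):
    # scale the channel ratio to [0, 1] and clip, then apply the R/G/B formulas
    x = np.clip((ratio - low) / (high - low), 0.0, 1.0)
    r = np.maximum(1.375 - 2.67 * x, -0.75 + x)
    g = 1.0 - 8.0 * x / 7.0
    b = np.maximum(0.75 - 1.5 * x, 0.25 - (x - 0.75) ** 2)
    return np.clip(np.stack([r, g, b]), 0.0, 1.0)

rgb = essl_rgb(np.array([1.15, 1.35, 1.55]))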
satpy.enhancements.mimic module
Mimic TPW Color enhancements.
satpy.enhancements.viirs module
Enhancements specific to the VIIRS instrument.
Module contents
Enhancements.
- satpy.enhancements._compute_luminance_from_rgb(r, g, b)[source]
Compute the luminance of the image.
- satpy.enhancements._create_colormap_from_dataset(img, dataset, color_scale)[source]
Create a colormap from an auxiliary variable in a source file.
- satpy.enhancements._jma_true_color_reproduction(img_data, platform=None)[source]
Convert from AHI RGB space to sRGB space.
The conversion matrices for this are supplied per-platform. The matrices are computed using the method described in the paper: ‘True Color Imagery Rendering for Himawari-8 with a Color Reproduction Approach Based on the CIE XYZ Color System’ (DOI:10.2151/jmsj.2018-049).
- satpy.enhancements._three_d_effect_delayed(band_data, kernel, mode)[source]
Kernel for running delayed 3D effect creation.
- satpy.enhancements.btemp_threshold(img, min_in, max_in, threshold, threshold_out=None, **kwargs)[source]
Scale data linearly in two separate regions.
This enhancement scales the input data linearly by splitting the data into two regions; min_in to threshold and threshold to max_in. These regions are mapped to 1 to threshold_out and threshold_out to 0 respectively, resulting in the data being “flipped” around the threshold. A default threshold_out is set to 176.0 / 255.0 to match the behavior of the US National Weather Service’s forecasting tool called AWIPS.
- Parameters:
img (XRImage) – Image object to be scaled
min_in (float) – Minimum input value to scale
max_in (float) – Maximum input value to scale
threshold (float) – Input value where to split data in to two regions
threshold_out (float) – Output value to map the input threshold to. Optional, defaults to 176.0 / 255.0.
- satpy.enhancements.cira_stretch(img, **kwargs)[source]
Logarithmic stretch adapted to human vision.
Applicable only for visible channels.
- satpy.enhancements.colorize(img, **kwargs)[source]
Colorize the given image.
- Parameters:
img – image to be colorized
- Kwargs:
palettes: colormap(s) to use
- The palettes kwarg can be one of the following:
a trollimage.colormap.Colormap object
- a list of dictionaries, each with one of the following forms:
- {'filename': '/path/to/colors.npy',
'min_value': <float, min value to match colors to>,
'max_value': <float, max value to match colors to>,
'reverse': <bool, reverse the colormap if True (default: False)>}
- {'colors': <trollimage.colormap.Colormap instance>,
'min_value': <float, min value to match colors to>,
'max_value': <float, max value to match colors to>,
'reverse': <bool, reverse the colormap if True (default: False)>}
- {'colors': <tuple of RGB(A) tuples>,
'min_value': <float, min value to match colors to>,
'max_value': <float, max value to match colors to>,
'reverse': <bool, reverse the colormap if True (default: False)>}
- {'colors': <tuple of RGB(A) tuples>,
'values': <tuple of values to match colors to>,
'min_value': <float, min value to match colors to>,
'max_value': <float, max value to match colors to>,
'reverse': <bool, reverse the colormap if True (default: False)>}
- {'dataset': <str, referring to dataset containing palette>,
'color_scale': <int, value to be interpreted as white>,
'min_value': <float, see above>,
'max_value': <float, see above>}
If multiple palettes are supplied, they are concatenated before being applied.
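A minimal sketch of calling colorize() directly on a single-band XRImage using a builtin colormap name (the data and palette choice are hypothetical):
import numpy as np
import xarray as xr
from trollimage.xrimage import XRImage
from satpy.enhancements import colorize

data = xr.DataArray(np.random.rand(1, 4, 4),
                    dims=("bands", "y", "x"),
                    coords={"bands": ["L"]})
img = XRImage(data)
colorize(img, palettes=[{"colors": "blues", "min_value": 0.0, "max_value": 1.0}])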
- satpy.enhancements.create_colormap(palette, img=None)[source]
Create colormap of the given numpy file, color vector, or colormap.
- Parameters:
palette (dict) – Information describing how to create a colormap object. See below for more details.
From a file
Colormaps can be loaded from .npy, .npz, or comma-separated text files. Numpy (npy/npz) files should be 2D arrays with rows for each color. Comma-separated files should have a row for each color with each column representing a single value/channel. The filename to load can be provided with the filename key in the provided palette information. A filename ending with .npy or .npz is read as a numpy file with numpy.load(). All other extensions are read as a comma-separated file. For .npz files the data must be stored as a positional list where the first element represents the colormap to use. See numpy.savez() for more information. The path to the colormap can be relative if it is stored in a directory specified by Component Configuration Path. Otherwise it should be an absolute path.
The colormap is interpreted as 1 of 4 different "colormap modes": RGB, RGBA, VRGB, or VRGBA. The colormap mode can be forced with the colormap_mode key in the provided palette information. If it is not provided then a default will be chosen based on the number of columns in the array (3: RGB, 4: VRGB, 5: VRGBA).
The "V" in the possible colormap modes represents the control value of where that color should be applied. If "V" is not provided in the colormap data it defaults to the row index in the colormap array (0, 1, 2, …) divided by the total number of colors to produce a number between 0 and 1. See the "Set Range" section below for more information. The remaining elements in the colormap array represent the Red (R), Green (G), and Blue (B) color to be mapped to.
See the “Color Scale” section below for more information on the value range of provided numbers.
From a list
Colormaps can be loaded from lists of colors provided by the colors key in the provided dictionary. Each element in the list represents a single color to be mapped to and can be 3 (RGB) or 4 (RGBA) elements long. By default the value or control point for a color is determined by the index in the list (0, 1, 2, …) divided by the total number of colors to produce a number between 0 and 1. This can be overridden by providing a values key in the provided dictionary. See the "Set Range" section below for more information.
See the "Color Scale" section below for more information on the value range of provided numbers.
From a builtin colormap
Colormaps can be loaded by name from the builtin colormaps in the trollimage package. Specify the name with the colors key in the provided dictionary (ex. {'colors': 'blues'}). See Colormap for the full list of available colormaps.
From an auxiliary variable
If the colormap is defined in the same dataset as the data to which the colormap shall be applied, this can be indicated with {'dataset': 'palette_variable'}, where 'palette_variable' is the name of the variable containing the palette. This variable must be an auxiliary variable to the dataset to which the colours are applied. When using this, it is important that one should not set min_value and max_value as those will be taken from the valid_range attribute on the dataset, and if those differ from min_value and max_value the resulting colors will not match the ones in the palette.
Color Scale
By default colors are expected to be in a 0-255 range. This can be overridden by specifying color_scale in the provided colormap information. A common alternative to 255 is 1 to specify floating point numbers between 0 and 1. The resulting Colormap uses the normalized color values (0-1).
Set Range
By default the control points or values of the Colormap are between 0 and 1. This means that data values being mapped to a color must also be between 0 and 1. When this is not the case, the expected input range of the data can be used to configure the Colormap and change the control point values. To do this specify the input data range with min_value and max_value. See trollimage.colormap.Colormap.set_range() for more information.
- satpy.enhancements.exclude_alpha(func)[source]
Exclude the alpha channel from the DataArray before further processing.
- satpy.enhancements.jma_true_color_reproduction(img)[source]
Apply CIE XYZ matrix and return True Color Reproduction data.
Himawari-8 True Color Reproduction Approach Based on the CIE XYZ Color System.
Hidehiko MURATA, Kotaro SAITOH, and Yasuhiko SUMIDA.
Meteorological Satellite Center, Japan Meteorological Agency; NOAA National Environmental Satellite, Data, and Information Service; Colorado State University, CIRA.
https://www.jma.go.jp/jma/jma-eng/satellite/introduction/TCR.html
- satpy.enhancements.on_dask_array(func)[source]
Pass the underlying dask array to func instead of the xarray.DataArray.
- satpy.enhancements.on_separate_bands(func)[source]
Apply func to one band of the DataArray at a time.
If this decorator is to be applied along with on_dask_array, this decorator has to be applied first, eg:
@on_separate_bands
@on_dask_array
def my_enhancement_function(data):
    ...
- satpy.enhancements.palettize(img, **kwargs)[source]
Palettize the given image (no color interpolation).
Arguments as for colorize().
NB: to retain the palette when saving the resulting image, pass keep_palette=True to the save method (either via the Scene class or directly in trollimage).
- satpy.enhancements.piecewise_linear_stretch(img: XRImage, xp: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], fp: _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes], reference_scale_factor: Number | None = None, **kwargs) DataArray [source]
Apply 1D linear interpolation.
This uses numpy.interp() mapped over the provided dask array chunks.
- Parameters:
img – Image data to be scaled. It is assumed the data is already normalized between 0 and 1.
xp – Input reference values of the image data points used for interpolation. This is passed directly to numpy.interp().
fp – Target reference values of the output image data points used for interpolation. This is passed directly to numpy.interp().
reference_scale_factor – Divide xp and fp by this value before using them for interpolation. This is a convenience to make matching normalized image data to interp coordinates or to avoid floating point precision errors in YAML configuration files. If not provided, xp and fp will not be modified.
Examples
This example YAML uses a ‘crude’ stretch to pre-scale the RGB data and then uses reference points in a 0-255 range.
true_color_linear_interpolation:
  sensor: abi
  standard_name: true_color
  operations:
    - name: reflectance_range
      method: !!python/name:satpy.enhancements.stretch
      kwargs: {stretch: 'crude', min_stretch: 0., max_stretch: 100.}
    - name: Linear interpolation
      method: !!python/name:satpy.enhancements.piecewise_linear_stretch
      kwargs:
        xp: [0., 25., 55., 100., 255.]
        fp: [0., 90., 140., 175., 255.]
        reference_scale_factor: 255
This example YAML does the same as the above on the C02 channel, but the interpolation reference points are already adjusted for the input reflectance (%) data and the output range (0 to 1).
c02_linear_interpolation:
  sensor: abi
  standard_name: C02
  operations:
    - name: Linear interpolation
      method: !!python/name:satpy.enhancements.piecewise_linear_stretch
      kwargs:
        xp: [0., 9.8039, 21.5686, 39.2157, 100.]
        fp: [0., 0.3529, 0.5490, 0.6863, 1.0]
- satpy.enhancements.reinhard_to_srgb(img, saturation=1.25, white=100, **kwargs)[source]
Stretch method based on the Reinhard algorithm, using luminance.
- Parameters:
saturation – Saturation enhancement factor. Less is grayer. Neutral is 1.
white – the reflectance luminance to set to white (in %).
Reinhard, Erik & Stark, Michael & Shirley, Peter & Ferwerda, James (2002). Photographic Tone Reproduction For Digital Images. ACM Transactions on Graphics, 21. DOI: 10.1145/566654.566575
satpy.modifiers package
Submodules
satpy.modifiers._crefl module
Classes related to the CREFL (corrected reflectance) modifier.
- class satpy.modifiers._crefl.ReflectanceCorrector(*args, dem_filename=None, dem_sds='averaged elevation', url=None, known_hash=None, **kwargs)[source]
Bases:
ModifierBase
,DataDownloadMixin
Corrected Reflectance (crefl) modifier.
Uses a python rewrite of the C CREFL code written for VIIRS and MODIS.
Initialize the compositor with values from the user or from the configuration file.
If dem_filename can't be found or opened, then the correction is done assuming TOA or sea level options.
- Parameters:
dem_filename (str) – DEPRECATED
url (str) – URL or local path to the Digital Elevation Model (DEM) HDF4 file. If unset (None or empty string), then elevation is assumed to be 0 everywhere.
known_hash (str) – Optional SHA256 checksum to verify the download of
url
.dem_sds (str) – Name of the variable in the elevation file to load.
satpy.modifiers._crefl_utils module
Shared utilities for correcting reflectance data using the ‘crefl’ algorithm.
The CREFL algorithm in this module is based on the NASA CREFL SPA software, the NASA CVIIRS SPA, and customizations of these algorithms for ABI/AHI by Ralph Kuehn and Min Oo at the Space Science and Engineering Center (SSEC).
The CREFL SPA documentation page describes the algorithm by saying:
The CREFL_SPA processes MODIS Aqua and Terra Level 1B DB data to create the MODIS Level 2 Corrected Reflectance product. The algorithm performs a simple atmospheric correction with MODIS visible, near-infrared, and short-wave infrared bands (bands 1 through 16).
It corrects for molecular (Rayleigh) scattering and gaseous absorption (water vapor and ozone) using climatological values for gas contents. It requires no real-time input of ancillary data. The algorithm performs no aerosol correction. The Corrected Reflectance products created by CREFL_SPA are very similar to the MODIS Land Surface Reflectance product (MOD09) in clear atmospheric conditions, since the algorithms used to derive both are based on the 6S Radiative Transfer Model. The products show differences in the presence of aerosols, however, because the MODIS Land Surface Reflectance product uses a more complex atmospheric correction algorithm that includes a correction for aerosols.
The additional logic to support ABI (AHI support not included) was originally written by Ralph Kuehn and Min Oo at SSEC. Additional modifications were performed by Martin Raspaud, David Hoese, and Will Roberts to make the code work together and be more dask compatible.
The AHI/ABI implementation is based on the MODIS collection 6 algorithm, where a spherical-shell atmosphere was assumed rather than a plane-parallel. See Appendix A in: “The Collection 6 MODIS aerosol products over land and ocean” Atmos. Meas. Tech., 6, 2989–3034, 2013 www.atmos-meas-tech.net/6/2989/2013/ DOI:10.5194/amt-6-2989-2013.
The original CREFL code is similar to what is described in appendix A1 (page 74) of the ATBD for the MODIS MOD04/MYD04 data product.
- class satpy.modifiers._crefl_utils._ABIAtmosphereVariables(G_O3, G_H2O, G_O2, *args)[source]
Bases:
_AtmosphereVariables
- class satpy.modifiers._crefl_utils._ABICREFLRunner(refl_data_arr)[source]
Bases:
_CREFLRunner
- property coeffs_cls: Type[_Coefficients]
- class satpy.modifiers._crefl_utils._ABICoefficients(wavelength_range, resolution=0)[source]
Bases:
_Coefficients
- COEFF_INDEX_MAP: dict[int, dict[tuple | str, int]] = {2000: {'C01': 0, 'C02': 1, 'C03': 2, 'C05': 3, 'C06': 4, (0.45, 0.47, 0.49, 'µm'): 0, (0.59, 0.64, 0.69, 'µm'): 1, (0.8455, 0.865, 0.8845, 'µm'): 2, (1.58, 1.61, 1.64, 'µm'): 3, (2.225, 2.25, 2.275, 'µm'): 4}}
- LUTS: list[ndarray] = [array([0.0024111 , 0.00431497, 0.0079258 , 0.0093392 , 0.0253 ]), array([0.001236 , 0.0037296 , 0.00017772, 0.0104899 , 0.0163 ]), array([4.2869000e-03, 1.4107995e-02, 8.0243190e-04, 0.0000000e+00, 2.0000000e-05]), array([0.18472 , 0.052349 , 0.015845 , 0.0013074 , 0.00031129])]
- RG_FUDGE = 0.55
- class satpy.modifiers._crefl_utils._AtmosphereVariables(mus, muv, phi, height, ah2o, bh2o, ao3, tau)[source]
Bases:
object
- class satpy.modifiers._crefl_utils._CREFLRunner(refl_data_arr)[source]
Bases:
object
- _height_from_avg_elevation(avg_elevation: ndarray | None) Array | float [source]
Get digital elevation map data for our granule with ocean fill value set to 0.
- property coeffs_cls: Type[_Coefficients]
- class satpy.modifiers._crefl_utils._Coefficients(wavelength_range, resolution=0)[source]
Bases:
object
- _find_coefficient_index(wavelength_range, resolution=0)[source]
Return the index into the coefficient arrays for this band’s wavelength.
This function searches through the COEFF_INDEX_MAP dictionary and finds the first key where the nominal wavelength of wavelength_range falls between the minimum and maximum wavelengths of the key. wavelength_range can also be the standard name of the band, for example “M05” for VIIRS or “1” for MODIS.
- Parameters:
wavelength_range – 3-element tuple of (min wavelength, nominal wavelength, max wavelength) or the string name of the band.
resolution – resolution of the band to be corrected
- Returns:
Index into the coefficient arrays like aH2O, aO3, etc. None is returned if no matching wavelength is found.
- class satpy.modifiers._crefl_utils._MODISAtmosphereVariables(*args)[source]
Bases:
_VIIRSAtmosphereVariables
- class satpy.modifiers._crefl_utils._MODISCREFLRunner(refl_data_arr)[source]
Bases:
_VIIRSMODISCREFLRunner
- property coeffs_cls: Type[_Coefficients]
- class satpy.modifiers._crefl_utils._MODISCoefficients(wavelength_range, resolution=0)[source]
Bases:
_Coefficients
- COEFF_INDEX_MAP: dict[int, dict[tuple | str, int]] = {250: {'1': 0, '2': 1, '3': 2, '4': 3, '5': 4, '6': 5, '7': 6, (0.459, 0.469, 0.479, 'µm'): 2, (0.545, 0.555, 0.565, 'µm'): 3, (0.62, 0.645, 0.67, 'µm'): 0, (0.841, 0.8585, 0.876, 'µm'): 1, (1.23, 1.24, 1.25, 'µm'): 4, (1.628, 1.64, 1.652, 'µm'): 5, (2.105, 2.13, 2.155, 'µm'): 6}, 500: {'1': 0, '2': 1, '3': 2, '4': 3, '5': 4, '6': 5, '7': 6, (0.459, 0.469, 0.479, 'µm'): 2, (0.545, 0.555, 0.565, 'µm'): 3, (0.62, 0.645, 0.67, 'µm'): 0, (0.841, 0.8585, 0.876, 'µm'): 1, (1.23, 1.24, 1.25, 'µm'): 4, (1.628, 1.64, 1.652, 'µm'): 5, (2.105, 2.13, 2.155, 'µm'): 6}, 1000: {'1': 0, '2': 1, '3': 2, '4': 3, '5': 4, '6': 5, '7': 6, (0.459, 0.469, 0.479, 'µm'): 2, (0.545, 0.555, 0.565, 'µm'): 3, (0.62, 0.645, 0.67, 'µm'): 0, (0.841, 0.8585, 0.876, 'µm'): 1, (1.23, 1.24, 1.25, 'µm'): 4, (1.628, 1.64, 1.652, 'µm'): 5, (2.105, 2.13, 2.155, 'µm'): 6}}
- LUTS: list[ndarray] = [array([-5.60723, -5.25251, 0. , 0. , -6.29824, -7.70944, -3.91877, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ]), array([0.820175, 0.725159, 0. , 0. , 0.865732, 0.966947, 0.745342, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ]), array([0.0715289 , 0. , 0.00743232, 0.089691 , 0. , 0. , 0. , 0.001 , 0.00383 , 0.0225 , 0.0663 , 0.0836 , 0.0485 , 0.0395 , 0.0119 , 0.00263 ]), array([0.051 , 0.01631, 0.19325, 0.09536, 0.00366, 0.00123, 0.00043, 0.3139 , 0.2375 , 0.1596 , 0.1131 , 0.0994 , 0.0446 , 0.0416 , 0.0286 , 0.0155 ])]
- class satpy.modifiers._crefl_utils._VIIRSAtmosphereVariables(*args)[source]
Bases:
_AtmosphereVariables
- class satpy.modifiers._crefl_utils._VIIRSCREFLRunner(refl_data_arr)[source]
Bases:
_VIIRSMODISCREFLRunner
- property coeffs_cls: Type[_Coefficients]
- class satpy.modifiers._crefl_utils._VIIRSCoefficients(wavelength_range, resolution=0)[source]
Bases:
_Coefficients
- COEFF_INDEX_MAP: dict[int, dict[tuple | str, int]] = {500: {'I01': 7, 'I02': 8, 'I03': 9, (0.6, 0.64, 0.68, 'µm'): 7, (0.845, 0.865, 0.884, 'µm'): 8, (1.58, 1.61, 1.64, 'µm'): 9}, 1000: {'M03': 2, 'M04': 3, 'M05': 0, 'M07': 1, 'M08': 4, 'M10': 5, 'M11': 6, (0.478, 0.488, 0.498, 'µm'): 2, (0.545, 0.555, 0.565, 'µm'): 3, (0.662, 0.672, 0.682, 'µm'): 0, (0.846, 0.865, 0.885, 'µm'): 1, (1.23, 1.24, 1.25, 'µm'): 4, (1.58, 1.61, 1.64, 'µm'): 5, (2.225, 2.25, 2.275, 'µm'): 6}}
- LUTS: list[ndarray] = [array([4.06601e-04, 1.59330e-03, 0.00000e+00, 1.78644e-05, 2.96457e-03, 6.17252e-04, 9.96563e-04, 2.22253e-03, 9.40050e-04, 5.63288e-04, 0.00000e+00, 0.00000e+00, 0.00000e+00, 0.00000e+00, 0.00000e+00, 0.00000e+00]), array([0.812659, 0.832931, 1. , 0.867785, 0.806816, 0.944958, 0.78812 , 0.791204, 0.900564, 0.942907, 0. , 0. , 0. , 0. , 0. , 0. ]), array([0.0433461, 0. , 0.0178299, 0.0853012, 0. , 0. , 0. , 0.0813531, 0. , 0. , 0.0663 , 0.0836 , 0.0485 , 0.0395 , 0.0119 , 0.00263 ]), array([0.0435 , 0.01582, 0.16176, 0.0974 , 0.00369, 0.00132, 0.00033, 0.05373, 0.01561, 0.00129, 0.1131 , 0.0994 , 0.0446 , 0.0416 , 0.0286 , 0.0155 ])]
- class satpy.modifiers._crefl_utils._VIIRSMODISCREFLRunner(refl_data_arr)[source]
Bases:
_CREFLRunner
- satpy.modifiers._crefl_utils._run_crefl_abi(refl, mus, muv, phi, solar_zenith, sensor_zenith, height, *coeffs)[source]
- satpy.modifiers._crefl_utils._runner_class_for_sensor(sensor_name: str) Type[_CREFLRunner] [source]
- satpy.modifiers._crefl_utils.run_crefl(refl, sensor_azimuth, sensor_zenith, solar_azimuth, solar_zenith, avg_elevation=None)[source]
Run main crefl algorithm.
All input parameters are per-pixel values meaning they are the same size and shape as the input reflectance data, unless otherwise stated.
- Parameters:
refl – tuple of reflectance band arrays
sensor_azimuth – input swath sensor azimuth angle array
sensor_zenith – input swath sensor zenith angle array
solar_azimuth – input swath solar azimuth angle array
solar_zenith – input swath solar zenith angle array
avg_elevation – average elevation (usually pre-calculated and stored in CMGDEM.hdf)
satpy.modifiers.angles module
Utilities for getting various angles for a dataset.
- class satpy.modifiers.angles.ZarrCacheHelper(func: ~typing.Callable, cache_config_key: str, uncacheable_arg_types=(<class 'pyresample.geometry.SwathDefinition'>, <class 'xarray.core.dataarray.DataArray'>, <class 'dask.array.core.Array'>), sanitize_args_func: ~typing.Callable | None = None, cache_version: int = 1)[source]
Bases:
object
Helper for caching function results to on-disk zarr arrays.
It is recommended to use this class through the cache_to_zarr_if() decorator rather than using it directly.
Currently the cache does not perform any limiting or removal of cache content. That is left up to the user to manage. Caching is based on arguments passed to the decorated function but will only be performed if the arguments are of a certain type (see uncacheable_arg_types). The cache value to use is purely based on the hash value of all of the provided arguments along with the “cache version” (see below).
Note that the zarr format requires regular chunking of data. That is, chunks must all be the same size per dimension except for the last chunk. To work around this limitation, this class will determine a good regular chunking based on the existing chunking scheme, rechunk the input arguments, and then rechunk the results before returning them to the user. This rechunking is only done if caching is enabled.
- Parameters:
func – Function that will be called to generate the value to cache.
cache_config_key – Name of the boolean satpy.config parameter to use to determine if caching should be done.
uncacheable_arg_types – Types that, if present in the passed arguments, should trigger caching to not happen. By default this includes SwathDefinition, xr.DataArray, and da.Array objects.
sanitize_args_func – Optional function to call to sanitize provided arguments before they are considered for caching. This can be used to make arguments more “cacheable” by replacing them with similar values that will result in more cache hits. Note that the sanitized arguments are only passed to the underlying function if caching will be performed; otherwise the original arguments are passed.
cache_version – Version number used to distinguish one version of a decorated function from future versions.
Notes
Caching only supports dask array values.
This helper allows for an additional cache_dir parameter to override the use of the satpy.config cache_dir parameter.
Examples
To use through the cache_to_zarr_if() decorator:

@cache_to_zarr_if("cache_my_stuff")
def generate_my_stuff(area_def: AreaDefinition, some_factor: int) -> da.Array:
    # Generate
    return my_dask_arr
To use the decorated function:
with satpy.config.set(cache_my_stuff=True):
    my_stuff = generate_my_stuff(area_def, 5)
Hold on to provided arguments for future use.
- cache_clear(cache_dir: str | None = None)[source]
Remove all on-disk files associated with this function.
Intended to mimic the functools.cache() behavior.
- satpy.modifiers.angles._chunks_are_irregular(chunks_tuple: tuple) bool [source]
Determine if an array is irregularly chunked.
Zarr does not support saving data in irregular chunks. Regular chunking is when all chunks are the same size (except for the last one).
- satpy.modifiers.angles._dim_index_with_default(dims: tuple, dim_name: str, default: int) int [source]
- satpy.modifiers.angles._get_output_chunks_from_func_arguments(args)[source]
Determine what the desired output chunks are.
It is assumed a tuple of tuples of integers is defining chunk sizes. If a tuple like this is not found then arguments are checked for array-like objects with a
.chunks
attribute.
- satpy.modifiers.angles._get_sensor_angles(data_arr: DataArray) tuple[DataArray, DataArray] [source]
- satpy.modifiers.angles._get_sensor_angles_ndarray(lons, lats, start_time, sat_lon, sat_lat, sat_alt) ndarray [source]
- satpy.modifiers.angles._get_sun_azimuth_ndarray(lons: ndarray, lats: ndarray, start_time: datetime) ndarray [source]
- satpy.modifiers.angles._hash_args(*args, unhashable_types=(<class 'pyresample.geometry.SwathDefinition'>, <class 'xarray.core.dataarray.DataArray'>, <class 'dask.array.core.Array'>))[source]
- satpy.modifiers.angles._regular_chunks_from_irregular_chunks(old_chunks: tuple[tuple[int, ...], ...]) tuple[tuple[int, ...], ...] [source]
- satpy.modifiers.angles._sunzen_corr_cos_ndarray(data: ndarray, cos_zen: ndarray, limit: float, max_sza: float | None) ndarray [source]
- satpy.modifiers.angles._sunzen_reduction_ndarray(data: ndarray, sunz: ndarray, limit: float, max_sza: float, strength: float) ndarray [source]
- satpy.modifiers.angles.cache_to_zarr_if(cache_config_key: str, uncacheable_arg_types=(<class 'pyresample.geometry.SwathDefinition'>, <class 'xarray.core.dataarray.DataArray'>, <class 'dask.array.core.Array'>), sanitize_args_func: ~typing.Callable | None = None) Callable [source]
Decorate a function and cache the results as a zarr array on disk.
This only happens if the satpy.config boolean value for the provided key is True as well as some other conditions. See ZarrCacheHelper for more information. Most importantly, this decorator does not limit how many items can be cached and does not clear out old entries. It is up to the user to manage the size of the cache.
- satpy.modifiers.angles.compute_relative_azimuth(sat_azi: DataArray, sun_azi: DataArray) DataArray [source]
Compute the relative azimuth angle.
- Parameters:
sat_azi – DataArray for the satellite azimuth angles, typically in 0-360 degree range.
sun_azi – DataArray for the solar azimuth angles, should be in same range as sat_azi.
- Returns:
A DataArray containing the relative azimuth angle in the 0-180 degree range.
NOTE: Relative azimuth is defined such that it is 0 when the sun and satellite are aligned on one side of a pixel (back scatter), and 180 when the sun and satellite are directly opposite each other (forward scatter).
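The fold into the 0-180 degree range can be pictured with a small sketch (illustrative helper only, not Satpy’s internal code; the function name is hypothetical):

import numpy as np

def relative_azimuth(sat_azi, sun_azi):
    # Fold the absolute azimuth difference into the 0-180 degree range:
    # 0 = back scatter, 180 = forward scatter, as defined above.
    diff = np.absolute(sat_azi - sun_azi) % 360.0
    return np.minimum(diff, 360.0 - diff)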
- satpy.modifiers.angles.get_angles(data_arr: DataArray) tuple[DataArray, DataArray, DataArray, DataArray] [source]
Get sun and satellite azimuth and zenith angles.
Note that this function can benefit from the satpy.config parameters cache_lonlats and cache_sensor_angles being set to True.
- Parameters:
data_arr – DataArray to get angles for. Information extracted from this object is .attrs["area"], .attrs["start_time"], and .attrs["orbital_parameters"]. See satpy.utils.get_satpos() and Metadata for more information. Additionally, the dask array chunk size is used when generating new arrays. The actual data of the object is not used.
- Returns:
Four DataArrays representing sensor azimuth angle, sensor zenith angle, solar azimuth angle, and solar zenith angle. All values are in degrees. Sensor angles are provided in the [0, 360] degree range. Solar angles are provided in the [-180, 180] degree range.
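A minimal usage sketch (the reader name and file list are assumptions for illustration):

import satpy
from satpy.modifiers.angles import get_angles

scn = satpy.Scene(filenames=abi_files, reader="abi_l1b")  # hypothetical file list
scn.load(["C07"])
sat_azi, sat_zen, sun_azi, sun_zen = get_angles(scn["C07"])  # all in degrees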
- satpy.modifiers.angles.get_cos_sza(data_arr: DataArray) DataArray [source]
Generate the cosine of the solar zenith angle for the provided data.
- Returns:
DataArray with the same shape as data_arr.
- satpy.modifiers.angles.get_satellite_zenith_angle(data_arr: DataArray) DataArray [source]
Generate satellite zenith angle for the provided data.
Note that this function can benefit from the satpy.config parameters cache_lonlats and cache_sensor_angles being set to True. Values are in degrees.
- satpy.modifiers.angles.sunzen_corr_cos(data: Array, cos_zen: Array, limit: float = 88.0, max_sza: float | None = 95.0) Array [source]
Perform Sun zenith angle correction.
The correction is based on the provided cosine of the zenith angle (cos_zen). The correction is limited to limit degrees (default: 88.0 degrees). For larger zenith angles, the correction is the same as at the limit if max_sza is None. The default behavior is to gradually reduce the correction past limit degrees up to max_sza where the correction becomes 0. Both data and cos_zen should be 2D arrays of the same shape.
satpy.modifiers.atmosphere module
Modifiers related to atmospheric corrections or adjustments.
- class satpy.modifiers.atmosphere.CO2Corrector(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
ModifierBase
CO2 correction of the brightness temperature of the MSG 3.9um channel.
\[T4_{CO2corr} = \left(BT(IR3.9)^4 + R_{corr}\right)^{0.25}\]
\[R_{corr} = BT(IR10.8)^4 - \left(BT(IR10.8) - dt_{CO2}\right)^4\]
\[dt_{CO2} = \left(BT(IR10.8) - BT(IR13.4)\right)/4.0\]
Derived from D. Rosenfeld, “CO2 Correction of Brightness Temperature of Channel IR3.9”.
Initialise the compositor.
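As an illustration of the formula above, a minimal NumPy sketch operating on plain brightness temperature arrays (the function name is hypothetical; Satpy applies this internally as a modifier):

import numpy as np

def co2_correct_bt039(bt039, bt108, bt134):
    # Brightness temperatures in Kelvin for the 3.9, 10.8 and 13.4 um channels.
    dt_co2 = (bt108 - bt134) / 4.0
    rcorr = bt108 ** 4 - (bt108 - dt_co2) ** 4
    return (bt039 ** 4 + rcorr) ** 0.25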
- class satpy.modifiers.atmosphere.PSPAtmosphericalCorrection(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
ModifierBase
Correct for atmospheric effects.
Initialise the compositor.
- class satpy.modifiers.atmosphere.PSPRayleighReflectance(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
ModifierBase
Pyspectral-based rayleigh corrector for visible channels.
It is possible to use reduce_lim_low, reduce_lim_high and reduce_strength together to reduce the rayleigh correction at high solar zenith angles and make the image transition from rayleigh-corrected to partially/not rayleigh-corrected at the day/night edge, producing a more natural look, which can be especially helpful for geostationary satellites. This reduction starts at a solar zenith angle of reduce_lim_low and ends at reduce_lim_high, scaling linearly between the two angles. reduce_strength controls the amount of the reduction: when the solar zenith angle reaches reduce_lim_high, only (1 - reduce_strength) of the initial correction remains.
To use this function in a YAML configuration file:

rayleigh_corrected_reduced:
  modifier: !!python/name:satpy.modifiers.PSPRayleighReflectance
  atmosphere: us-standard
  aerosol_type: rayleigh_only
  reduce_lim_low: 70
  reduce_lim_high: 95
  reduce_strength: 0.6
  prerequisites:
    - name: B03
      modifiers: [sunz_corrected]
  optional_prerequisites:
    - satellite_azimuth_angle
    - satellite_zenith_angle
    - solar_azimuth_angle
    - solar_zenith_angle
In the case above, the rayleigh correction is reduced gradually starting at a solar zenith angle of 70°. At 95°, only 40% of the initial correction remains.
Initialise the compositor.
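The reduction behaviour described above can be pictured with a small sketch (illustrative only; this is not Satpy’s internal code and the function name is hypothetical):

import numpy as np

def rayleigh_reduction(sunz, lim_low=70.0, lim_high=95.0, strength=0.6):
    # Fraction of the rayleigh correction kept at each solar zenith angle:
    # a linear ramp from 1.0 at lim_low down to (1 - strength) at lim_high.
    frac = np.clip((sunz - lim_low) / (lim_high - lim_low), 0.0, 1.0)
    return 1.0 - strength * frac

print(rayleigh_reduction(np.array([60.0, 80.0, 95.0])))  # [1.0, 0.76, 0.4]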
satpy.modifiers.base module
Base modifier classes and utilities.
- class satpy.modifiers.base.ModifierBase(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
CompositeBase
Base class for all modifiers.
A modifier in Satpy is a class that takes one input DataArray to be changed along with zero or more other input DataArrays used to perform these changes. The result of a modifier typically has a lot of the same metadata (name, units, etc) as the original DataArray, but the data is different. A modified DataArray can be differentiated from the original DataArray by the modifiers property of its DataID.
See the CompositeBase class for information on the similar concept of “compositors”.
Initialise the compositor.
satpy.modifiers.filters module
Filters for image data.
- class satpy.modifiers.filters.Median(median_filter_params, **kwargs)[source]
Bases:
ModifierBase
Apply a median filter to the band.
Create the instance.
- Parameters:
median_filter_params – The arguments to pass to dask-image’s median_filter function. For example, {size: 3} gives the median filter a kernel of size 3.
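A standalone sketch of what the modifier does under the hood, using dask-image directly (the array contents here are arbitrary):

import dask.array as da
from dask_image.ndfilters import median_filter

arr = da.random.random((1000, 1000), chunks=512)
smoothed = median_filter(arr, size=3)  # the {size: 3} example from above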
satpy.modifiers.geometry module
Modifier classes for corrections based on sun and other angles.
- class satpy.modifiers.geometry.EffectiveSolarPathLengthCorrector(correction_limit=88.0, **kwargs)[source]
Bases:
SunZenithCorrectorBase
Special sun zenith correction with the method proposed by Li and Shibata (2006): https://doi.org/10.1175/JAS3682.1
In addition to adjusting the provided reflectances by the cosine of the solar zenith angle, this modifier forces all reflectances beyond a solar zenith angle of max_sza to 0 to reduce noise in the final data. It also gradually reduces the amount of correction done between correction_limit and max_sza. If max_sza is None then a constant correction is applied to zenith angles beyond correction_limit.
To set max_sza to None in a YAML configuration file use:

effective_solar_pathlength_corrected:
  modifier: !!python/name:satpy.modifiers.EffectiveSolarPathLengthCorrector
  max_sza: !!null
  optional_prerequisites:
    - solar_zenith_angle
Collect custom configuration values.
- Parameters:
correction_limit (float) – Solar zenith angle in degrees beyond which the correction is reduced (see above). Default 88.0.
- class satpy.modifiers.geometry.SunZenithCorrector(correction_limit=88.0, **kwargs)[source]
Bases:
SunZenithCorrectorBase
Standard sun zenith correction using 1 / cos(sunz).
In addition to adjusting the provided reflectances by the cosine of the solar zenith angle, this modifier forces all reflectances beyond a solar zenith angle of max_sza to 0. It also gradually reduces the amount of correction done between correction_limit and max_sza. If max_sza is None then a constant correction is applied to zenith angles beyond correction_limit.
To set max_sza to None in a YAML configuration file use:

sunz_corrected:
  modifier: !!python/name:satpy.modifiers.SunZenithCorrector
  max_sza: !!null
  optional_prerequisites:
    - solar_zenith_angle
Collect custom configuration values.
- Parameters:
correction_limit (float) – Solar zenith angle in degrees beyond which the correction is reduced (see above). Default 88.0.
- class satpy.modifiers.geometry.SunZenithCorrectorBase(max_sza=95.0, **kwargs)[source]
Bases:
ModifierBase
Base class for sun zenith correction modifiers.
Collect custom configuration values.
- Parameters:
max_sza (float) – Maximum solar zenith angle in degrees that is considered valid and correctable. Default 95.0.
- class satpy.modifiers.geometry.SunZenithReducer(correction_limit=80.0, max_sza=90, strength=1.3, **kwargs)[source]
Bases:
SunZenithCorrectorBase
Reduce signal strength at large sun zenith angles.
Within a given sunz interval [correction_limit, max_sza] the strength of the signal is reduced following the formula:
res = signal * reduction_factor
where reduction_factor is a pixel-level value ranging from 0 to 1 within the sunz interval.
The strength parameter can be used for a non-linear reduction within the sunz interval. A strength larger than 1.0 will decelerate the signal reduction towards the sunz interval extremes, whereas a strength smaller than 1.0 will accelerate the signal reduction towards the sunz interval extremes.
Collect custom configuration values.
- Parameters:
correction_limit (float) – Solar zenith angle in degrees where to start the signal reduction.
max_sza (float) – Maximum solar zenith angle in degrees where to apply the signal reduction. Beyond this solar zenith angle the signal will become zero.
strength (float) – The strength of the non-linear signal reduction.
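A toy sketch of such a reduction (illustrative only; this is a stand-in curve, not Satpy’s exact formula, and the function name is hypothetical):

import numpy as np

def reduced_signal(signal, sunz, correction_limit=80.0, max_sza=90.0, strength=1.3):
    # Position within the sunz interval, clipped to [0, 1].
    frac = np.clip((sunz - correction_limit) / (max_sza - correction_limit), 0.0, 1.0)
    # Non-linear reduction factor: 1 at correction_limit, 0 at max_sza.
    reduction_factor = (1.0 - frac) ** strength
    return signal * reduction_factor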
satpy.modifiers.parallax module
Parallax correction.
Routines related to parallax correction using datasets involving height, such as cloud top height.
The geolocation of (geostationary) satellite imagery is calculated by agencies or in satpy readers with the assumption of a clear view from the satellite to the geoid. When a cloud blocks the view of the Earth surface or the surface is above sea level, the geolocation is not accurate for the cloud or mountain top. This module contains routines to correct imagery such that pixels are shifted or interpolated to correct for this parallax effect.
Parallax correction is currently only supported for (cloud top) height that arrives on an AreaDefinition, such as is standard for geostationary satellites. Parallax correction with data described by a SwathDefinition, such as is common for polar satellites, is not (yet) supported.
See also the Modifiers page in the documentation for an introduction to parallax correction as a modifier in Satpy.
- exception satpy.modifiers.parallax.IncompleteHeightWarning[source]
Bases:
UserWarning
Raised when heights only partially overlap with area to be corrected.
- exception satpy.modifiers.parallax.MissingHeightError[source]
Bases:
ValueError
Raised when heights do not overlap with area to be corrected.
- class satpy.modifiers.parallax.ParallaxCorrection(base_area, debug_mode=False)[source]
Bases:
object
Parallax correction calculations.
This class contains higher-level functionality to wrap the parallax correction calculations in get_parallax_corrected_lonlats(). The class is initialised using a base area, which is the area for which a corrected geolocation will be calculated. The resulting object is a callable. Calling the object with an array of (cloud top) heights returns a SwathDefinition describing the new, corrected geolocation. The cloud top height should cover at least the area for which the corrected geolocation will be calculated.
Note that the ctth dataset must contain satellite location metadata, such as set in the orbital_parameters dataset attribute that is set by many Satpy readers. It is essential that the datasets to be corrected come from the same platform as the provided cloud top height.
A note on the algorithm and the implementation. Parallax correction is inherently an inverse problem. The reported geolocation in satellite data files is the true location plus the parallax error. Therefore, this class first calculates the true geolocation (using get_parallax_corrected_lonlats()), which gives a shifted longitude and shifted latitude on an irregular grid. The difference between the original and the shifted grid is the parallax error or shift. The magnitude of this error can be estimated with get_surface_parallax_displacement(). With this difference, we need to invert the parallax correction to calculate the corrected geolocation. Due to parallax correction, high clouds shift a lot, low clouds shift a little, and cloud-free pixels do not shift at all. The shift may map zero, one, two, or more source pixels onto a destination pixel. Physically, this corresponds to the situation where a narrow but high cloud is viewed at a large angle. The cloud may occupy two or more pixels when viewed at a large angle, but only one when viewed straight from above. To accurately reproduce this perspective, the parallax correction uses the BucketResampler class, specifically the get_abs_max() method, to retain only the largest absolute shift (corresponding to the highest cloud) within each pixel. Any other resampling method at this step would yield incorrect results. When a cloud moves over clear sky, the clear-sky pixel is unshifted and the shift is located exactly in the centre of the grid box, so nearest-neighbour resampling would lead to such shifts being deselected. Other resampling methods would average large shifts with small shifts, leading to unpredictable results. Now the reprojected shifts can be applied to the original lat/lon, returning a new SwathDefinition. This is the object returned by corrected_area().
This procedure can be configured as a modifier using the ParallaxCorrectionModifier class. However, the modifier can only be applied to one dataset at a time, which may not provide optimal performance, although dask should reuse identical calculations between multiple channels.
Initialise parallax correction class.
- Parameters:
base_area (AreaDefinition) – Area for which the corrected geolocation will be calculated.
debug_mode (bool) – Store diagnostic information in self.diagnostics. This attribute always applies to the most recently applied operation only.
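A minimal usage sketch (assuming a Scene sc with a cloud top height dataset in meters already loaded; the dataset name is illustrative):

from satpy.modifiers.parallax import ParallaxCorrection

cth = sc["ctth_alti"]  # cloud top height on an AreaDefinition
corrector = ParallaxCorrection(cth.attrs["area"])
corrected_swath = corrector(cth)  # SwathDefinition with corrected geolocation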
- _check_overlap(cth_dataset)[source]
Ensure cth_dataset is usable for parallax correction.
Checks the coverage of cth_dataset compared to the base_area. If the entirety of base_area is covered by cth_dataset, do nothing. If only part of base_area is covered by cth_dataset, raise an IncompleteHeightWarning. If none of base_area is covered by cth_dataset, raise a MissingHeightError.
- _get_corrected_lon_lat(base_lon, base_lat, shifted_area)[source]
Calculate the corrected lon/lat based from the shifted area.
After calculating the shifted area based on get_parallax_corrected_lonlats(), we invert the parallax error and estimate where those pixels came from. For details on the algorithm, see the class docstring.
- static _get_swathdef_from_lon_lat(lon, lat)[source]
Return a SwathDefinition from lon/lat.
Turn ndarrays describing lon/lat into xarray objects with dimensions y, x, then use these to create a SwathDefinition.
- _prepare_cth_dataset(cth_dataset, resampler='nearest', radius_of_influence=50000, lonlat_chunks=1024)[source]
Prepare CTH dataset.
Set cloud top height to zero wherever lat/lon are valid but CTH is undefined. Then resample onto the base area.
- corrected_area(cth_dataset, cth_resampler='nearest', cth_radius_of_influence=50000, lonlat_chunks=1024)[source]
Return the parallax corrected SwathDefinition.
Using the cloud top heights provided in cth_dataset, calculate the pyresample.geometry.SwathDefinition that estimates the geolocation for each pixel if it had been viewed from straight above (without parallax error). The cloud top height will first be resampled onto the area passed upon class initialisation in __init__(). Pixels that are invisible after parallax correction are not retained but get geolocation NaN.
- Parameters:
cth_dataset (DataArray) – Cloud top height in meters. The variable attributes must contain an area attribute describing the geolocation in a pyresample-aware way, and they must contain satellite orbital parameters. The dimensions must be (y, x). For best performance, this should be a dask-based DataArray.
.cth_resampler (string, optional) – Resampler to use when resampling the (cloud top) height to the base area. Defaults to “nearest”.
cth_radius_of_influence (number, optional) – Radius of influence to use when resampling the (cloud top) height to the base area. Defaults to 50000.
lonlat_chunks (int, optional) – Chunking to use when calculating lon/lats. Probably the default (1024) should be fine.
- Returns:
SwathDefinition describing the parallax corrected geolocation.
- class satpy.modifiers.parallax.ParallaxCorrectionModifier(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
ModifierBase
Modifier for parallax correction.
Apply parallax correction as a modifier. Uses the ParallaxCorrection class, which in turn uses the get_parallax_corrected_lonlats() function. See the documentation there for details on the behaviour.
To use this, add to composites/visir.yaml within SATPY_CONFIG_PATH something like:

sensor_name: visir

modifiers:
  parallax_corrected:
    modifier: !!python/name:satpy.modifiers.parallax.ParallaxCorrectionModifier
    prerequisites:
      - "ctth_alti"
    dataset_radius_of_influence: 50000

composites:
  parallax_corrected_VIS006:
    compositor: !!python/name:satpy.composites.SingleBandCompositor
    prerequisites:
      - name: VIS006
        modifiers: [parallax_corrected]
Here, ctth_alti is the CTH provided by the nwcsaf-geo reader, so to use it one would have to pass files for both readers on scene creation:

sc = Scene(filenames={"seviri_l1b_hrit": files_l1b, "nwcsaf-geo": files_l2})
sc.load(["parallax_corrected_VIS006"])
The modifier takes several parameters, all of which are optional. They affect various steps in the algorithm; setting them may impact performance:
- cth_resampler
Resampler to use when resampling (cloud top) height to the base area. Defaults to “nearest”.
- cth_radius_of_influence
Radius of influence to use when resampling the (cloud top) height to the base area. Defaults to 50000.
- lonlat_chunks
Chunk size to use when obtaining longitudes and latitudes from the area definition. Defaults to 1024. If you set this to None, then parallax correction will involve premature calculation. Changing this may or may not make parallax correction slower or faster.
- dataset_radius_of_influence
Radius of influence to use when resampling the dataset onto the swathdefinition describing the parallax-corrected area. Defaults to 50000. This always uses nearest neighbour resampling.
Alternatively, you can use the lower-level API directly with the ParallaxCorrection class, which may be more efficient if multiple datasets need to be corrected. RGB composites cannot be modified in this way (i.e. you can’t replace “VIS006” by “natural_color”). To get a parallax corrected RGB composite, create a new composite where each input has the modifier applied. The parallax calculation should only occur once, because calculations happen via dask and dask should reuse the calculation.
Initialise the compositor.
- satpy.modifiers.parallax._calculate_slant_cloud_distance(height, elevation)[source]
Calculate slant cloud to ground distance.
From (cloud top) height and satellite elevation, calculate the slant cloud-to-ground distance along the line of sight of the satellite.
- satpy.modifiers.parallax._get_parallax_shift_xyz(sat_lon, sat_lat, sat_alt, lon, lat, parallax_distance)[source]
Calculate the parallax shift in cartesian coordinates.
From satellite position and cloud position, get the parallax shift in cartesian coordinates.
- Parameters:
sat_lon (number) – Satellite longitude in geodetic coordinates [degrees]
sat_lat (number) – Satellite latitude in geodetic coordinates [degrees]
sat_alt (number) – Satellite altitude above the Earth surface [m]
lon (array or number) – Longitudes of pixel or pixels to be corrected, in geodetic coordinates [degrees]
lat (array or number) – Latitudes of pixel/pixels to be corrected, in geodetic coordinates [degrees]
parallax_distance (array or number) – Cloud to ground distance with parallax effect [m].
- Returns:
Parallax shift in cartesian coordinates, in meters.
- satpy.modifiers.parallax._get_satellite_elevation(sat_lon, sat_lat, sat_alt, lon, lat)[source]
Get satellite elevation.
Get the satellite elevation from satellite lon/lat/alt for positions lon/lat.
- satpy.modifiers.parallax._get_satpos_from_cth(cth_dataset)[source]
Obtain satellite position from CTH dataset, height in meter.
From a CTH dataset, obtain the satellite position lon, lat, altitude/m, either directly from orbital parameters, or, when missing, from the platform name using pyorbital and skyfield.
- satpy.modifiers.parallax.get_parallax_corrected_lonlats(sat_lon, sat_lat, sat_alt, lon, lat, height)[source]
Calculate parallax corrected lon/lats.
Satellite geolocation generally assumes an unobstructed view of a smooth Earth surface. In reality, this view may be obstructed by clouds or mountains.
If the view of a pixel at location (lat, lon) is blocked by a cloud at height h, this function calculates the (lat, lon) coordinates of the cloud above/in front of the invisible surface.
For scenes that are only partly cloudy, the user might set the cloud top height for clear-sky pixels to NaN. This function will return a corrected lat/lon as NaN as well. The user can use the original lat/lon for those pixels or use the higher level ParallaxCorrection class.
This function assumes a spherical Earth.
Note
Be careful with units! This code expects sat_alt and height to be in meters above the Earth’s surface. You may have to convert your input correspondingly. Cloud top height is usually reported in meters above the Earth’s surface, rarely in km. Satellite altitude may be reported in either m or km, but orbital parameters are usually in relation to the Earth’s centre. The Earth radius from pyresample is reported in km.
- Parameters:
sat_lon (number) – Satellite longitude in geodetic coordinates [degrees]
sat_lat (number) – Satellite latitude in geodetic coordinates [degrees]
sat_alt (number) – Satellite altitude above the Earth surface [m]
lon (array or number) – Longitudes of pixel or pixels to be corrected, in geodetic coordinates [degrees]
lat (array or number) – Latitudes of pixel/pixels to be corrected, in geodetic coordinates [degrees]
height (array or number) – Heights of pixels on which the correction will be based. Typically this is the cloud top height. [m]
- Returns:
Corrected geolocation (lon, lat) in geodetic coordinates for the pixel(s) to be corrected [degrees].
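For example (the values are illustrative: a geostationary satellite at 0°E and one pixel with an 8 km cloud top):

import numpy as np
from satpy.modifiers.parallax import get_parallax_corrected_lonlats

corr_lon, corr_lat = get_parallax_corrected_lonlats(
    sat_lon=0.0, sat_lat=0.0, sat_alt=35_786_000.0,  # meters above the surface
    lon=np.array([5.0]), lat=np.array([10.0]),
    height=np.array([8_000.0]))  # cloud top height in meters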
- satpy.modifiers.parallax.get_surface_parallax_displacement(sat_lon, sat_lat, sat_alt, lon, lat, height)[source]
Calculate surface parallax displacement.
Calculate the displacement due to parallax error. Input parameters are identical to get_parallax_corrected_lonlats().
- Returns:
Parallax displacement in meters.
- Return type:
number or array
satpy.modifiers.spectral module
Modifier classes dealing with spectral domain changes or corrections.
- class satpy.modifiers.spectral.NIREmissivePartFromReflectance(sunz_threshold=None, **kwargs)[source]
Bases:
NIRReflectance
Get the emissive part of NIR bands.
Collect custom configuration values.
- Parameters:
sunz_threshold – The threshold sun zenith angle used when deriving the near infrared reflectance. Above this angle the derivation will assume this sun zenith angle everywhere. Default None, in which case the default threshold defined in Pyspectral will be used.
- class satpy.modifiers.spectral.NIRReflectance(sunz_threshold=85.0, masking_limit=88.0, **kwargs)[source]
Bases:
ModifierBase
Get the reflective part of NIR bands.
Collect custom configuration values.
- Parameters:
sunz_threshold – The threshold sun zenith angle used when deriving the near infrared reflectance. Above this angle the derivation will assume this sun zenith angle everywhere. Unless overridden, the default threshold of 85.0 degrees will be used.
masking_limit – Mask the data (set to NaN) above this Sun zenith angle. By default the limit is at 88.0 degrees. If set to None, no masking is done.
- MASKING_LIMIT = 88.0
- TERMINATOR_LIMIT = 85.0
- _get_reflectance_as_dask(da_nir, da_tb11, da_tb13_4, da_sun_zenith, metadata)[source]
Calculate 3.x reflectance in % with pyspectral from dask arrays.
- _get_reflectance_as_dataarray(nir, da_tb11, da_tb13_4, da_sun_zenith)[source]
Get the reflectance as a dataarray.
Module contents
Modifier classes and other related utilities.
satpy.multiscene package
Submodules
satpy.multiscene._blend_funcs module
- satpy.multiscene._blend_funcs._combine_stacked_attrs(collected_attrs: Sequence[Mapping]) dict [source]
- satpy.multiscene._blend_funcs._fill_weights_for_invalid_dataset_pixels(datasets: Sequence[DataArray], weights: Sequence[DataArray]) Iterable[DataArray] [source]
Replace weight values with 0 where data values are invalid/null.
- satpy.multiscene._blend_funcs._stack_blend_by_weights(datasets: Sequence[DataArray], weights: Sequence[DataArray]) DataArray [source]
Stack datasets blending overlap using weights.
- satpy.multiscene._blend_funcs._stack_select_by_weights(datasets: Sequence[DataArray], weights: Sequence[DataArray]) DataArray [source]
Stack datasets selecting pixels using weights.
- satpy.multiscene._blend_funcs._stack_with_weights(datasets: Sequence[DataArray], weights: Sequence[DataArray], blend_type: str) DataArray [source]
- satpy.multiscene._blend_funcs.stack(data_arrays: Sequence[DataArray], weights: Sequence[DataArray] | None = None, blend_type: str = 'select_with_weights') DataArray [source]
Combine a series of datasets in different ways.
By default, DataArrays are stacked on top of each other, so the last one applied is on top. Each DataArray is assumed to represent the same geographic region, meaning they have the same area. If a sequence of weights is provided then they must have the same shape as the area. Weights with greater than 2 dimensions are not currently supported.
When weights are provided, the DataArrays will be combined according to those weights. Data can be integer category products (ex. cloud type), single channels (ex. radiance), or a multi-band composite (ex. an RGB or RGBA true_color). In the latter case, the weight array is applied to each band (R, G, B, A) in the same way. The result will be a composite DataArray where each pixel is constructed in a way depending on
blend_type
.Blend type can be one of the following:
select_with_weights: The input pixel with the maximum weight is chosen.
blend_with_weights: The final pixel is a weighted average of all valid input pixels.
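A short usage sketch (data1, data2 and the weight arrays are assumed to be pre-computed DataArrays on the same area):

from satpy.multiscene import stack

selected = stack([data1, data2], weights=[w1, w2])  # default: select_with_weights
blended = stack([data1, data2], weights=[w1, w2],
                blend_type="blend_with_weights")  # weighted average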
satpy.multiscene._multiscene module
MultiScene object to work with multiple timesteps of satellite data.
- class satpy.multiscene._multiscene.MultiScene(scenes=None)[source]
Bases:
object
Container for multiple Scene objects.
Initialize MultiScene and validate sub-scenes.
- Parameters:
scenes (iterable) – Scene objects to operate on (optional)
Note
If the scenes passed to this object are a generator then certain operations performed will try to preserve that generator state. This may limit what properties or methods are available to the user. To avoid this behavior, compute the passed generator by converting the passed scenes to a list first: MultiScene(list(scenes)).
- static _call_scene_func(gen, func_name, create_new_scene, *args, **kwargs)[source]
.- static _call_scene_func(gen, func_name, create_new_scene, *args, **kwargs)[source]
Abstract method for running a Scene method on each Scene.
- _distribute_frame_compute(writers, frame_keys, frames_to_write, client, batch_size=1)[source]
Use dask.distributed to compute multiple frames at a time.
- _distribute_save_datasets(scenes_iter, client, batch_size=1, **kwargs)[source]
Distribute save_datasets across a cluster.
- static _format_decoration(ds, decorate)[source]
Maybe format decoration.
If the nested dictionary in decorate (argument to save_animation) contains a text to be added, format those based on dataset parameters.
- _generate_scene_func(gen, func_name, create_new_scene, *args, **kwargs)[source]
Abstract method for running a Scene method on each Scene.
Additionally, modifies current MultiScene or creates a new one if needed.
- _get_animation_frames(all_datasets, shape, fill_value=None, ignore_missing=False, enh_args=None)[source]
Create enhanced image frames to save to a file.
- _get_animation_info(all_datasets, filename, fill_value=None)[source]
Determine filename and shape of animation to be created.
- _get_single_frame(ds, enh_args, fill_value)[source]
Get single frame from dataset.
Get a single image frame from a dataset.
- _get_writers_and_frames(filename, datasets, fill_value, ignore_missing, enh_args, imio_args)[source]
Get writers and frames.
Helper function for save_animation.
- static _simple_frame_compute(writers, frame_keys, frames_to_write)[source]
Compute frames the plain dask way.
- property all_same_area
Determine if all contained Scenes have the same ‘area’.
- blend(blend_function: Callable[[...], DataArray] | None = None) Scene [source]
Blend the datasets into one scene.
Reduce the MultiScene to a single Scene. Datasets occurring in each scene will be passed to a blending function, which shall take as input a list of datasets (xarray.DataArray objects) and shall return a single dataset (xarray.DataArray object). The blend method then assigns those datasets to the blended scene.
Blending functions provided in this module are stack() (the default), timeseries(), and temporal_rgb(), but the Python built-in function sum() also works and may be appropriate for some types of data.
Note
Blending is not currently optimized for generator-based MultiScene.
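A sketch of a typical blend workflow (the file list and reader name are assumptions; the scenes are assumed to share the same area):

from satpy.multiscene import MultiScene, stack

mscn = MultiScene.from_files(all_files, reader="seviri_l1b_hrit")
mscn.load(["IR_108"])
blended_scene = mscn.blend(stack)  # stack() is also the default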
- property first_scene
First Scene of this MultiScene object.
- classmethod from_files(files_to_sort: Collection[str], reader: str | Collection[str] | None = None, ensure_all_readers: bool = False, scene_kwargs: Mapping | None = None, **kwargs)[source]
Create multiple Scene objects from multiple files.
- Parameters:
files_to_sort – files to read
reader – reader or readers to use
ensure_all_readers – If True, limit to scenes where all readers have at least one file. If False (default), include all scenes where at least one reader has at least one file.
scene_kwargs – additional arguments to pass on to Scene.__init__() for each created scene.
This uses the satpy.readers.group_files() function to group files. See this function for more details on additional possible keyword arguments. In particular, it is strongly recommended to pass “group_keys” when using multiple instruments.
New in version 0.12.
- group(groups)[source]
Group datasets from the multiple scenes.
By default, MultiScene only operates on dataset IDs shared by all scenes. Using this method you can specify groups of datasets that shall be treated equally by MultiScene, even if their dataset IDs differ (for example because the names or wavelengths are slightly different). Groups can be specified as a dictionary {group_id: dataset_names} where the keys must be of type DataQuery, for example:

groups={
    DataQuery('my_group', wavelength=(10, 11, 12)): ['IR_108', 'B13', 'C13'],
}
- property is_generator
Contained Scenes are stored as a generator.
- property loaded_dataset_ids
Union of all Dataset IDs loaded by all children.
- save_animation(filename, datasets=None, fps=10, fill_value=None, batch_size=1, ignore_missing=False, client=True, enh_args=None, **kwargs)[source]
Save series of Scenes to movie (MP4) or GIF formats.
Supported formats are dependent on the imageio library and are determined by filename extension by default.
Note
Starting with imageio 2.5.0, the use of FFMPEG depends on a separate imageio-ffmpeg package.
By default all datasets available will be saved to individual files using the first Scene’s datasets’ metadata to format the filename provided. If a dataset is not available from a Scene then a black array is used instead (np.zeros(shape)).
This function can use the dask.distributed library for improved performance by computing multiple frames at a time (see the batch_size option below). If the distributed library is not available then frames will be generated one at a time, one product at a time.
- Parameters:
filename (str) – Filename to save to. Can include python string formatting keys from dataset .attrs (ex. “{name}_{start_time:%Y%m%d_%H%M%S}.gif”)
datasets (list) – DataIDs to save (default: all datasets)
fps (int) – Frames per second for produced animation
fill_value (int) – Value to use instead of creating an alpha band.
batch_size (int) – Number of frames to compute at the same time. This only has effect if the dask.distributed package is installed. This will default to 1. Setting this to 0 or less will attempt to process all frames at once. This option should be used with care to avoid memory issues when trying to improve performance. Note that this is the total number of frames for all datasets, so when saving 2 datasets this will compute (batch_size / 2) frames for the first dataset and (batch_size / 2) frames for the second dataset.
ignore_missing (bool) – Don’t include a black frame when a dataset is missing from a child scene.
client (bool or dask.distributed.Client) – Dask distributed client to use for computation. If this is True (default) then any existing clients will be used. If this is False or None then a client will not be created and dask.distributed will not be used. If this is a dask Client object then it will be used for distributed computation.
satpy.writers.get_enhanced_image()
. If this includes a keyword “decorate”, in any text added to the image, string formatting will be applied based on dataset attributes. For example, passingenh_args={"decorate": {"decorate": [{"text": {"txt": "{start_time:%H:%M}"}}]}
will replace the decorated text accordingly.kwargs – Additional keyword arguments to pass to imageio.get_writer.
- save_datasets(client=True, batch_size=1, **kwargs)[source]
Run save_datasets on each Scene.
Note that some writers may not be multi-process friendly and may produce unexpected results or fail by raising an exception. In these cases client should be set to False. This is currently a known issue for basic ‘geotiff’ writer work loads.
- Parameters:
batch_size (int) – Number of scenes to compute at the same time. This only has effect if the dask.distributed package is installed. This will default to 1. Setting this to 0 or less will attempt to process all scenes at once. This option should be used with care to avoid memory issues when trying to improve performance.
client (bool or dask.distributed.Client) – Dask distributed client to use for computation. If this is True (default) then any existing clients will be used. If this is False or None then a client will not be created and dask.distributed will not be used. If this is a dask Client object then it will be used for distributed computation.
kwargs – Additional keyword arguments to pass to save_datasets(). Note compute can not be provided.
- property scenes
Get list of Scene objects contained in this MultiScene.
Note
If the Scenes contained in this object are stored in a generator (not list or tuple) then accessing this property will load/iterate through the generator, possibly consuming it.
- property shared_dataset_ids
Dataset IDs shared by all children.
- class satpy.multiscene._multiscene._GroupAliasGenerator(scene, groups)[source]
Bases:
object
Add group aliases to a scene.
Initialize the alias generator.
- class satpy.multiscene._multiscene._SceneGenerator(scene_gen)[source]
Bases:
object
Fancy way of caching Scenes from a generator.
- property first
First element in the generator.
- satpy.multiscene._multiscene._group_datasets_in_scenes(scenes, groups)[source]
Group different datasets in multiple scenes by adding aliases.
- Parameters:
scenes (iterable) – Scenes to be processed.
groups (dict) –
Groups of datasets that shall be treated equally by MultiScene. Keys specify the groups, values specify the dataset names to be grouped. For example:
from satpy import DataQuery

groups = {DataQuery(name='odd'): ['ds1', 'ds3'],
          DataQuery(name='even'): ['ds2', 'ds4']}
Module contents
Functions and classes related to MultiScene functionality.
satpy.readers package
Subpackages
satpy.readers.gms package
Submodules
satpy.readers.gms.gms5_vissr_format module
GMS-5 VISSR archive data format.
Reference: VISSR Format Description
satpy.readers.gms.gms5_vissr_l1b module
Reader for GMS-5 VISSR Level 1B data.
Introduction
The gms5_vissr_l1b reader can decode, navigate and calibrate Level 1B data from the Visible and Infrared Spin Scan Radiometer (VISSR) in VISSR archive format. Corresponding platforms are GMS-5 (Japanese Geostationary Meteorological Satellite) and GOES-09 (2003-2006 backup after the MTSAT-1 launch failure).
VISSR has four channels, each stored in a separate file:
VISSR_20020101_0031_IR1.A.IMG
VISSR_20020101_0031_IR2.A.IMG
VISSR_20020101_0031_IR3.A.IMG
VISSR_20020101_0031_VIS.A.IMG
This is how to read them with Satpy:
from satpy import Scene
import glob
filenames = glob.glob(""/data/VISSR*")
scene = Scene(filenames, reader="gms5-vissr_l1b")
scene.load(["VIS", "IR1"])
References:
Details about platform, instrument and data format can be found in the VISSR Format Description referenced above.
Compression
Gzip-compressed VISSR files can be decompressed on the fly using FSFile:
import fsspec
from satpy import Scene
from satpy.readers import FSFile
filename = "VISSR_19960217_2331_IR1.A.IMG.gz"
open_file = fsspec.open(filename, compression="gzip")
fs_file = FSFile(open_file)
scene = Scene([fs_file], reader="gms5-vissr_l1b")
scene.load(["IR1"])
Calibration
Sensor counts are calibrated by looking up reflectance/temperature values in the calibration tables included in each file. See section 2.2 in the VISSR user guide.
Space Pixels
VISSR produces data for pixels outside the Earth disk (i.e. atmospheric limb or
deep space pixels). By default, these pixels are masked out as they contain
data of limited or no value, but some applications do require these pixels.
To turn off masking, set mask_space=False upon scene creation:
import satpy
import glob
filenames = glob.glob("VISSR*.IMG")
scene = satpy.Scene(filenames,
reader="gms5-vissr_l1b",
reader_kwargs={"mask_space": False})
scene.load(["VIS", "IR1])
Metadata
Dataset attributes include metadata such as time and orbital parameters, see Metadata.
Partial Scans
Between 2001 and 2003 VISSR also recorded partial scans of the northern hemisphere. On demand a special Typhoon schedule would be activated between 03:00 and 05:00 UTC.
- class satpy.readers.gms.gms5_vissr_l1b.AreaDefEstimator(coord_conv_params, metadata)[source]
Bases:
object
Estimate area definition for VISSR images.
Initialize the area definition estimator.
- Parameters:
coord_conv_params – Coordinate conversion parameters
metadata – VISSR file metadata
- full_disk_size = {'IR': 2366, 'VIS': 9464}
- class satpy.readers.gms.gms5_vissr_l1b.Calibrator(calib_table)[source]
Bases:
object
Calibrate VISSR data to reflectance or brightness temperature.
Reference: Section 2.2 in the VISSR User Guide.
Initialize the calibrator.
- Parameters:
calib_table – Calibration table
- class satpy.readers.gms.gms5_vissr_l1b.GMS5VISSRFileHandler(filename, filename_info, filetype_info, mask_space=True)[source]
Bases:
BaseFileHandler
File handler for GMS-5 VISSR data in VISSR archive format.
Initialize the file handler.
- Parameters:
filename – Name of file to be read
filename_info – Information obtained from filename
filetype_info – Information about file type
mask_space – Mask space pixels.
- static _concat_orbit_prediction(orb_pred_1, orb_pred_2)[source]
Concatenate orbit prediction data.
It is split over two image parameter blocks in the header.
- property _coord_conv
Get predictions of time-dependent navigation parameters.
Get static navigation parameters.
Note that “central_line_number_of_vissr_frame” is different for each channel, even if their spatial resolution is identical. For example:
VIS: 5513.0
IR1: 1378.5
IR2: 1378.7
IR3: 1379.1001
- property _mode_block
- static _read_image_param(file_obj, param, channel_type)[source]
Read a single image parameter block from the header.
- property end_time
Nominal end time of the dataset.
- property start_time
Nominal start time of the dataset.
- class satpy.readers.gms.gms5_vissr_l1b.SpaceMasker(image_data, channel)[source]
Bases:
object
Mask pixels outside the earth disk.
Initialize the space masker.
- Parameters:
image_data – Image data
channel – Channel name
- _correct_vis_edges(edges)[source]
Correct VIS edges.
VIS data contains earth edges of IR channel. Compensate for that by scaling with a factor of 4 (1 IR pixel ~ 4 VIS pixels).
- _fill_value = -1
- satpy.readers.gms.gms5_vissr_l1b.get_earth_mask(shape, earth_edges, fill_value=-1)[source]
Get binary mask where 1/0 indicates earth/space.
- Parameters:
shape – Image shape
earth_edges – First and last earth pixel in each scanline
fill_value – Fill value for scanlines not intersecting the earth.
Module contents
GMS reader module.
Submodules
satpy.readers._geos_area module
Geostationary Projection / Area computations.
This module computes properties and area definitions for geostationary satellites. It is designed to be a common module that can be called by all geostationary satellite readers and uses commonly-included parameters such as the CFAC/LFAC values, satellite position, etc., to compute the correct area definition.
- satpy.readers._geos_area.get_area_definition(pdict, a_ext)[source]
Get the area definition for a geo-sat.
- Parameters:
pdict – A dictionary containing common parameters:
nlines: Number of lines in image
ncols: Number of columns in image
ssp_lon: Subsatellite point longitude (deg)
a: Earth equatorial radius (m)
b: Earth polar radius (m)
h: Platform height (m)
a_name: Area name
a_desc: Area description
p_id: Projection id
a_ext – A four element tuple containing the area extent (scan angle) for the scene in radians
- Returns:
An area definition for the scene
- Return type:
a_def
Note
The AreaDefinition proj_id attribute is being deprecated.
- satpy.readers._geos_area.get_area_extent(pdict)[source]
Get the area extent seen by a geostationary satellite.
- Parameters:
pdict – A dictionary containing common parameters:
nlines: Number of lines in image
ncols: Number of columns in image
cfac: Column scaling factor
lfac: Line scaling factor
coff: Column offset factor
loff: Line offset factor
scandir: ‘N2S’ for standard (N->S), ‘S2N’ for inverse (S->N)
- Returns:
An area extent for the scene
- Return type:
aex
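As an illustration, the two functions above chain together as follows (the parameter values are assumptions resembling a SEVIRI-like full disk, not authoritative):

from satpy.readers._geos_area import get_area_definition, get_area_extent

pdict = {
    "nlines": 3712, "ncols": 3712,          # image size
    "cfac": -13642337, "lfac": -13642337,   # column/line scaling factors
    "coff": 1856.0, "loff": 1856.0,         # column/line offsets
    "scandir": "S2N",                       # scan direction
    "ssp_lon": 0.0,                         # subsatellite longitude (deg)
    "a": 6378169.0, "b": 6356583.8,         # Earth radii (m)
    "h": 35785831.0,                        # platform height (m)
    "a_name": "full_disk", "a_desc": "Full disk area", "p_id": "",
}
aex = get_area_extent(pdict)            # extent seen by the satellite
area = get_area_definition(pdict, aex)  # pyresample AreaDefinition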
- satpy.readers._geos_area.get_geos_area_naming(input_dict)[source]
Get a dictionary containing formatted AreaDefinition naming.
- Parameters:
input_dict (dict) – Dictionary with keys platform_name, instrument_name, service_name, service_desc, resolution. The resolution is expected in meters.
- Returns:
area_naming_dict with area_id, description keys, values are strings.
Note
The AreaDefinition proj_id attribute is being deprecated and is therefore not formatted here. An empty string is to be used until the attribute is fully removed.
- satpy.readers._geos_area.get_resolution_and_unit_strings(resolution)[source]
Get the resolution value and unit as strings.
If the resolution is larger than 1000 m, use kilometer as unit. If lower, use meter.
- Parameters:
resolution (scalar) – Resolution in meters.
- Returns:
Dictionary with value and unit keys, values are strings.
- satpy.readers._geos_area.get_xy_from_linecol(line, col, offsets, factors)[source]
Get the intermediate coordinates from line & col.
Intermediate coordinates are actually the instrument’s scanning angles.
- satpy.readers._geos_area.make_ext(ll_x, ur_x, ll_y, ur_y, h)[source]
Create the area extent from computed ll and ur.
- Parameters:
ll_x – The lower left x coordinate (m)
ur_x – The upper right x coordinate (m)
ll_y – The lower left y coordinate (m)
ur_y – The upper right y coordinate (m)
h – The satellite altitude above the Earth’s surface
- Returns:
An area extent for the scene
- Return type:
aex
- satpy.readers._geos_area.sampling_to_lfac_cfac(sampling)[source]
Convert angular sampling to line/column scaling factor (aka LFAC/CFAC).
Reference: MSG Ground Segment LRIT HRIT Mission Specific Implementation, Appendix E.2.
- Parameters:
sampling (float) – Angular sampling (rad)
- Returns:
Line/column scaling factor (deg⁻¹)
satpy.readers.aapp_l1b module
Reader for AAPP level 1B data.
Options for loading:
pre_launch_coeffs (default False): use pre-launch coefficients if True, operational coefficients otherwise (if available).
https://nwp-saf.eumetsat.int/site/download/documentation/aapp/NWPSAF-MF-UD-003_Formats_v8.0.pdf
- class satpy.readers.aapp_l1b.AAPPL1BaseFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
A base file handler for the AAPP level-1 formats.
Initialize AAPP level-1 file handler object.
- property end_time
Get the time of the final observation.
- property start_time
Get the time of the first observation.
- class satpy.readers.aapp_l1b.AVHRRAAPPL1BFile(filename, filename_info, filetype_info)[source]
Bases:
AAPPL1BaseFileHandler
Reader for AVHRR L1B files created from the AAPP software.
Initialize object information by reading the input file.
Get the longitudes and latitudes of the scene.
- satpy.readers.aapp_l1b._ir_calibrate(header, data, irchn, calib_type, mask=True)[source]
Calibrate for IR bands.
calib_type in brightness_temperature, radiance, count
- satpy.readers.aapp_l1b._vis_calibrate(data, chn, calib_type, pre_launch_coeffs=False, calib_coeffs=None, mask=True)[source]
Calibrate visible channel data.
calib_type in count, reflectance, radiance.
satpy.readers.aapp_mhs_amsub_l1c module
Reader for the AAPP AMSU-B/MHS level-1c data.
https://nwp-saf.eumetsat.int/site/download/documentation/aapp/NWPSAF-MF-UD-003_Formats_v8.0.pdf
- class satpy.readers.aapp_mhs_amsub_l1c.MHS_AMSUB_AAPPL1CFile(filename, filename_info, filetype_info)[source]
Bases:
AAPPL1BaseFileHandler
Reader for AMSU-B/MHS L1C files created from the AAPP software.
Initialize object information by reading the input file.
Get the longitudes and latitudes of the scene.
satpy.readers.abi_base module
Advanced Baseline Imager reader base class for the Level 1b and L2+ readers.
- class satpy.readers.abi_base.NC_ABI_BASE(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Base reader for ABI L1B and L2+ NetCDF4 files.
Open the NetCDF file with xarray and prepare the Dataset for reading.
- _chunk_bytes_for_resolution() int [source]
Get a best-guess optimal chunk size for resolution-based chunking.
First a chunk size is chosen based on the Dask array.chunk-size setting, then aligned with a hardcoded on-disk chunk size of 226. This is then adjusted to match the current resolution.
This should result in 500 meter data having 4 times as many pixels per dask array chunk (2 in each dimension) as 1km data and 8 times as many as 2km data. As data is combined or upsampled geographically the arrays should not need to be rechunked. Care is taken to make sure that array chunks are aligned with on-disk file chunks at all resolutions, but at the cost of flexibility due to a hardcoded on-disk chunk size of 226 elements per dimension.
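A rough sketch of the alignment arithmetic (the 226-element on-disk chunk size is from the text above; the target size is illustrative, not satpy's actual computation):

# Align a desired chunk edge length to a multiple of the on-disk chunk size.
on_disk = 226
desired = 4096                                  # e.g. derived from dask's array.chunk-size
aligned = max(1, desired // on_disk) * on_disk  # -> 4068 elements per dimension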
- _get_areadef_fixedgrid(key)[source]
Get the area definition of the data at hand.
Note this method takes special care to round and cast numbers to new data types so that the area definitions for different resolutions (different bands) should be equal. Without the special rounding in __getitem__ and this method the area extents can be 0 to 1.0 meters off depending on how the calculations are done.
- property end_time
End time of the current file’s observations.
- property nc
Get the xarray dataset for this file.
- property sensor
Get sensor name for current file handler.
- spatial_resolution_to_number()[source]
Convert the ‘spatial_resolution’ global attribute to meters.
- property start_time
Start time of the current file’s observations.
satpy.readers.abi_l1b module
Advanced Baseline Imager reader for the Level 1b format.
The files read by this reader are described in the official PUG document:
- class satpy.readers.abi_l1b.NC_ABI_L1B(filename, filename_info, filetype_info, clip_negative_radiances=None)[source]
Bases:
NC_ABI_BASE
File reader for individual ABI L1B NetCDF4 files.
Open the NetCDF file with xarray and prepare the Dataset for reading.
- _rad_calibrate(data)[source]
Calibrate any channel to radiances.
This no-op method is just to keep the flow consistent: each valid cal type results in a calibration method call.
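A minimal usage sketch showing the clip_negative_radiances keyword from the signature above (file names hypothetical):

import glob
from satpy import Scene

scn = Scene(glob.glob('OR_ABI-L1b-RadF*.nc'), reader='abi_l1b',
            reader_kwargs={'clip_negative_radiances': True})
scn.load(['C14'])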
satpy.readers.abi_l2_nc module
Advanced Baseline Imager NOAA Level 2+ products reader.
The files read by this reader are described in the official PUG document:
- class satpy.readers.abi_l2_nc.NC_ABI_L2(filename, filename_info, filetype_info)[source]
Bases:
NC_ABI_BASE
Reader class for NOAA ABI l2+ products in netCDF format.
Open the NetCDF file with xarray and prepare the Dataset for reading.
satpy.readers.acspo module
ACSPO SST Reader.
See the following page for more information:
https://podaac.jpl.nasa.gov/dataset/VIIRS_NPP-OSPO-L2P-v2.3
- class satpy.readers.acspo.ACSPOFileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False)[source]
Bases:
NetCDF4FileHandler
ACSPO L2P SST File Reader.
Initialize object.
- property end_time
Get final observation time of data.
- property platform_name
Get satellite name for this file’s data.
- property sensor_name
Get instrument name for this file’s data.
- property start_time
Get first observation time of data.
satpy.readers.agri_l1 module
Advanced Geostationary Radiation Imager reader for the Level_1 HDF format.
The files read by this reader are described in the official Real Time Data Service:
satpy.readers.ahi_hsd module
Advanced Himawari Imager (AHI) standard format data reader.
References
Himawari-8/9 Himawari Standard Data User’s Guide
http://www.data.jma.go.jp/mscweb/en/himawari89/space_segment/spsg_ahi.html
Time Information
AHI observations use the idea of a “nominal” time and an “observation” time.
The “nominal” time or repeat cycle is the overall window when the instrument
can record data, usually at a specific and consistent interval. The
“observation” time is when the data was actually observed inside the nominal
window. These two times are stored in a sub-dictionary of the metadata called time_parameters. Nominal time can be accessed from the nominal_start_time and nominal_end_time metadata keys, and observation time from the observation_start_time and observation_end_time keys. Observation time can also be accessed from the parent (.attrs) dictionary as the start_time and end_time keys.
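For example, after loading a channel the two kinds of times can be inspected like this (file names hypothetical):

import glob
from satpy import Scene

scn = Scene(glob.glob('*FLDK*.dat'), reader='ahi_hsd')
scn.load(['B13'])
tp = scn['B13'].attrs['time_parameters']
print(tp['nominal_start_time'], tp['observation_start_time'])
print(scn['B13'].attrs['start_time'])  # same as the observation start time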
Satellite Position
As discussed in the Orbital Parameters documentation, a satellite position can be described by a specific “actual” position, a “nominal” position, a “projection” position, or sometimes a “nadir” position. Not all readers are able to produce all of these positions. In the case of AHI HSD data we have an “actual” and “projection” position. For a lot of sensors/readers though, the “actual” position values do not change between bands or segments of the same time step (repeat cycle). AHI HSD files contain varying values for the actual position.
Other components in Satpy use this actual satellite position to generate other values (e.g. sensor zenith angles). If these values are not consistent between bands then Satpy (dask) will not be able to share these calculations (generating one sensor zenith angle for band 1, another for band 2, etc.) even though there is rarely a noticeable difference. To deal with this, the reader has an option round_actual_position that defaults to True and will round the "actual" position (longitude, latitude, altitude) in a way that produces as consistent a position between bands as possible.
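To keep the varying per-file "actual" positions untouched, the option can be disabled through reader_kwargs (a sketch; file names hypothetical):

import glob
from satpy import Scene

scn = Scene(glob.glob('*FLDK*.dat'), reader='ahi_hsd',
            reader_kwargs={'round_actual_position': False})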
- class satpy.readers.ahi_hsd.AHIHSDFileHandler(filename, filename_info, filetype_info, mask_space=True, calib_mode='update', user_calibration=None, round_actual_position=True)[source]
Bases:
BaseFileHandler
AHI standard format reader.
The AHI sensor produces data for some pixels outside the Earth disk (i.e. atmospheric limb or deep space pixels). By default, these pixels are masked out as they contain data of limited or no value, but some applications do require these pixels. It is therefore possible to override the default behaviour and perform no masking of non-Earth pixels.
In order to change the default behaviour, use the 'mask_space' variable as part of reader_kwargs upon Scene creation:

import satpy
import glob
filenames = glob.glob('*FLDK*.dat')
scene = satpy.Scene(filenames, reader='ahi_hsd', reader_kwargs={'mask_space': False})
scene.load([0.6])
The AHI HSD data files contain multiple VIS channel calibration coefficients. By default, the updated coefficients in header block 6 are used. If the user prefers the original calibration coefficients from block 5 then they can pass calib_mode='nominal' when creating a scene:

import satpy
import glob
filenames = glob.glob('*FLDK*.dat')
scene = satpy.Scene(filenames, reader='ahi_hsd', reader_kwargs={'calib_mode': 'nominal'})
scene.load([0.6])
Alternative AHI calibrations are also available, such as GSICS coefficients. As such, you can supply custom per-channel correction by setting calib_mode=’custom’ and passing correction factors via:
user_calibration={'chan': {'slope': slope, 'offset': offset}}
where slope and offset are per-channel coefficients defined by:
rad_leo = (rad_geo - offset) / slope
If you do not have coefficients for a particular band, then by default the slope will be set to 1.0 and the offset to 0.0:
import satpy
import glob
# Load bands 7, 14 and 15, but we only have coefs for 7+14
calib_dict = {'B07': {'slope': 0.99, 'offset': 0.002},
              'B14': {'slope': 1.02, 'offset': -0.18}}
filenames = glob.glob('*FLDK*.dat')
scene = satpy.Scene(filenames, reader='ahi_hsd', reader_kwargs={'user_calibration': calib_dict})
# B15 will not have custom radiance correction applied.
scene.load(['B07', 'B14', 'B15'])
By default, user-supplied calibrations / corrections are applied to the radiance data in accordance with the GSICS standard defined in the equation above. However, user-supplied gain and offset values for converting digital number into radiance via Rad = DN * gain + offset are also possible. To supply your own factors, supply a user calibration dict using type: ‘DN’ as follows:
calib_dict = {'B07': {'slope': 0.0037, 'offset': 18.5},
              'B14': {'slope': -0.002, 'offset': 22.8},
              'type': 'DN'}
You can also explicitly select radiance correction with ‘type’: ‘RAD’ but this is not necessary as it is the default option if you supply your own correction coefficients.
Initialize the reader.
- property _timeline
- property area
Get AreaDefinition representing this file’s data.
- property end_time
Get the nominal end time.
- property nominal_end_time
Get the nominal end time.
- property nominal_start_time
Time this band was nominally to be recorded.
- property observation_end_time
Get the observation end time.
- property observation_start_time
Get the observation start time.
- property start_time
Get the nominal start time.
- class satpy.readers.ahi_hsd._NominalTimeCalculator(timeline, area)[source]
Bases:
object
Get time when a scan was nominally to be recorded.
Initialize the nominal timestamp calculator.
- Parameters:
- _get_closest_timeline(observation_time)[source]
Find the closest timeline for the given observation time.
Needs to check surrounding days because the observation might start a little bit before the planned time.
Example: observation start time 2022-12-31 23:59, timeline 0000 => nominal start time 2023-01-01 00:00.
- _modify_observation_time_for_nominal(observation_time)[source]
Round observation time to a nominal time based on known observation frequency.
AHI observations are split into different sectors including Full Disk (FLDK), Japan (JP) sectors, and smaller regional (R) sectors. Each sector is observed at different frequencies (ex. every 10 minutes, every 2.5 minutes, and every 30 seconds). This method will take the actual observation time and round it to the nearest interval for this sector. So if the observation time is 13:32:48 for the “JP02” sector which is the second Japan observation where every Japan observation is 2.5 minutes apart, then the result should be 13:32:30.
- property _observation_frequency
satpy.readers.ahi_l1b_gridded_bin module
Advanced Himawari Imager (AHI) gridded format data reader.
This data comes in a flat binary format on a fixed grid, and needs to have calibration coefficients applied to it in order to retrieve reflectance or BT. LUTs can be downloaded at: ftp://hmwr829gr.cr.chiba-u.ac.jp/gridded/FD/support/
This data is gridded from the original Himawari geometry. To our knowledge, only full disk grids are available, not the Meso or Japan rapid scans.
References
- AHI gridded data website:
http://www.cr.chiba-u.jp/databases/GEO/H8_9/FD/index_jp.html
- class satpy.readers.ahi_l1b_gridded_bin.AHIGriddedFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
AHI gridded format reader.
This data is flat binary, big endian unsigned short. It covers the region 85E -> 205E, 60N -> 60S at variable resolution:
- 0.005 degrees for Band 3
- 0.01 degrees for Bands 1, 2 and 4
- 0.02 degrees for all other bands.
These are approximately equivalent to 0.5, 1 and 2 km.
Files can either be zipped with bz2 compression (like the HSD format data), or can be uncompressed flat binary.
Initialize the reader.
satpy.readers.ahi_l2_nc module
Reader for Himawari L2 cloud products from NOAA’s big data programme.
For more information about the data, see: <https://registry.opendata.aws/noaa-himawari/>.
These products are generated by the NOAA enterprise cloud suite and have filenames like: AHI-CMSK_v1r1_h09_s202308240540213_e202308240549407_c202308240557548.nc
- The second letter grouping (CMSK above) indicates the product type:
CMSK - Cloud mask
CHGT - Cloud height
CPHS - Cloud type and phase
These products are generated from the AHI sensor on Himawari-8 and Himawari-9, and are produced at the native instrument resolution for the IR channels (2km at nadir).
NOTE: This reader is currently only compatible with full disk scenes. Unlike level 1 Himawari data, the netCDF files do not contain the required metadata to produce an appropriate area definition for the data contents, so the area definition is hardcoded into the reader.
A warning is displayed to the user highlighting this. The assumed area definition is a full disk image at the nominal subsatellite longitude of 140.7 degrees East.
All the simple data products are supported here, but multidimensional products are not yet supported. These include the CldHgtFlag and the CloudMaskPacked variables.
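A minimal sketch of opening one of these products and discovering its variables (the file name follows the pattern shown above):

from satpy import Scene

fname = 'AHI-CMSK_v1r1_h09_s202308240540213_e202308240549407_c202308240557548.nc'
scn = Scene([fname], reader='ahi_l2_nc')
print(scn.available_dataset_names())  # list the loadable cloud products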
- class satpy.readers.ahi_l2_nc.HIML2NCFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
File handler for Himawari L2 NOAA enterprise data in netCDF format.
Initialize the reader.
- property area
Get AreaDefinition representing this file’s data.
- property end_time
End timestamp of the dataset.
- property start_time
Start timestamp of the dataset.
satpy.readers.ami_l1b module
Advanced Meteorological Imager reader for the Level 1b NetCDF4 format.
- class satpy.readers.ami_l1b.AMIL1bNetCDF(filename, filename_info, filetype_info, calib_mode='PYSPECTRAL', allow_conditional_pixels=False, user_calibration=None)[source]
Bases:
BaseFileHandler
Base reader for AMI L1B NetCDF4 files.
AMI data contains GSICS adjustment factors for the IR bands. By default, these are not applied. If you wish to apply them then you must set the calibration mode appropriately:
import satpy
import glob
filenames = glob.glob('*.nc')
scene = satpy.Scene(filenames, reader='ami_l1b', reader_kwargs={'calib_mode': 'gsics'})
scene.load(['IR087'])
In addition, the GSICS website (and other sources) also supply radiance correction coefficients like so:
radiance_corr = (radiance_orig - corr_offset) / corr_slope
If you wish to supply such coefficients, pass ‘user_calibration’ and a dictionary containing per-channel slopes and offsets as a reader_kwarg:
user_calibration={'chan': {'slope': slope, 'offset': offset}}
If you do not have coefficients for a particular band, then by default the slope will be set to 1.0 and the offset to 0.0:
import satpy
import glob
# Load bands WV063, IR087 and IR133, but we only have coefs for the first two
calib_dict = {'WV063': {'slope': 0.99, 'offset': 0.002},
              'IR087': {'slope': 1.02, 'offset': -0.18}}
filenames = glob.glob('*.nc')
scene = satpy.Scene(filenames, reader='ami_l1b',
                    reader_kwargs={'user_calibration': calib_dict, 'calib_mode': 'file'})
# IR133 will not have radiance correction applied.
scene.load(['WV063', 'IR087', 'IR133'])
By default these updated coefficients are not used. In most cases, setting calib_mode to 'file' is required in order to use external coefficients.
Open the NetCDF file with xarray and prepare the Dataset for reading.
- _apply_gsics_rad_correction(data)[source]
Retrieve GSICS factors from L1 file and apply to radiance.
- _calibrate_ir(dataset_id, data)[source]
Calibrate radiance data to BTs using either pyspectral or in-file coefficients.
- property end_time
Get observation end time.
- property start_time
Get observation start time.
satpy.readers.amsr2_l1b module
Reader for AMSR2 L1B files in HDF5 format.
- class satpy.readers.amsr2_l1b.AMSR2L1BFileHandler(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
File handler for AMSR2 l1b.
Initialize file handler.
satpy.readers.amsr2_l2 module
Reader for AMSR2 L2 files in HDF5 format.
- class satpy.readers.amsr2_l2.AMSR2L2FileHandler(filename, filename_info, filetype_info)[source]
Bases:
AMSR2L1BFileHandler
AMSR2 level 2 file handler.
Initialize file handler.
satpy.readers.amsr2_l2_gaasp module
GCOM-W1 AMSR2 Level 2 files from the GAASP software.
GAASP output files are in the NetCDF4 format. Software is provided by NOAA and is also distributed by the CSPP group. More information on the products supported by this reader can be found at https://www.star.nesdis.noaa.gov/jpss/gcom.php.
GAASP includes both swath/granule products and gridded products. Swath products are provided in files with “MBT”, “OCEAN”, “SNOW”, or “SOIL” in the filename. Gridded products are in files with “SEAICE-SH” or “SEAICE-NH” in the filename where SH stands for South Hemisphere and NH stands for North Hemisphere. These gridded products are on the EASE2 North pole and South pole grids. See https://nsidc.org/ease/ease-grid-projection-gt for more details.
Note that since SEAICE products can be on the northern hemisphere, the southern hemisphere, or both depending on what files are provided to Satpy, this reader appends a _NH or _SH suffix to all variable names that are dynamically discovered from the provided files.
- class satpy.readers.amsr2_l2_gaasp.GAASPFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Generic file handler for GAASP output files.
Initialize file handler.
- available_datasets(configured_datasets=None)[source]
Dynamically discover what variables can be loaded from this file.
See
satpy.readers.file_handlers.BaseFileHandler.available_datasets()
for more information.
- dim_resolutions = {'Number_of_hi_rez_FOVs': 5000, 'Number_of_low_rez_FOVs': 10000}
- property end_time
Get end time of observation.
- is_gridded = False
- property nc
Get the xarray dataset for this file.
- property platform_name
Name of the platform whose data is stored in this file.
- property sensor_names
Sensors who have data in this file.
- property start_time
Get start time of observation.
- time_dims = ('Time_Dimension',)
- class satpy.readers.amsr2_l2_gaasp.GAASPGriddedFileHandler(filename, filename_info, filetype_info)[source]
Bases:
GAASPFileHandler
GAASP file handler for gridded products like SEAICE.
Initialize file handler.
- dim_resolutions = {'Number_of_X_Dimension': 10000}
- is_gridded = True
- class satpy.readers.amsr2_l2_gaasp.GAASPLowResFileHandler(filename, filename_info, filetype_info)[source]
Bases:
GAASPFileHandler
GAASP file handler for files that only have low resolution products.
Initialize file handler.
- dim_resolutions = {'Number_of_low_rez_FOVs': 10000}
satpy.readers.ascat_l2_soilmoisture_bufr module
ASCAT Soil moisture product reader for BUFR messages.
Based on the IASI L2 SO2 BUFR reader.
- class satpy.readers.ascat_l2_soilmoisture_bufr.AscatSoilMoistureBufr(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
BaseFileHandler
File handler for the ASCAT Soil Moisture BUFR product.
Initialise the file handler for the ASCAT Soil Moisture BUFR data.
- property end_time
Return the end time of data acquisition.
- extract_msg_date_extremes(bufr, date_min=None, date_max=None)[source]
Extract the minimum and maximum dates from a single bufr message.
- property platform_name
Return spacecraft name.
- property start_time
Return the start time of data acquisition.
satpy.readers.atms_l1b_nc module
Advanced Technology Microwave Sounder (ATMS) Level 1B product reader.
The format is explained in the ATMS L1B Product User Guide
- class satpy.readers.atms_l1b_nc.AtmsL1bNCFileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
NetCDF4FileHandler
Reader class for ATMS L1B products in netCDF format.
Initialize file handler.
- property antenna_temperature
Get antenna temperature.
- property attrs
Return attributes.
- property end_time
Get observation end time.
- property platform_name
Get platform name.
- property sensor
Get sensor.
- property start_time
Get observation start time.
satpy.readers.atms_sdr_hdf5 module
Reader for the ATMS SDR format.
A reader for Advanced Technology Microwave Sounder (ATMS) SDR data, e.g. as it comes out of the CSPP package for processing Direct Readout data.
The format is described in the JPSS COMMON DATA FORMAT CONTROL BOOK (CDFCB):
Joint Polar Satellite System (JPSS) Common Data Format Control Book - External (CDFCB-X) Volume III - SDR/TDR Formats
(474-00001-03_JPSS-CDFCB-X-Vol-III_0124C.pdf)
https://www.nesdis.noaa.gov/about/documents-reports/jpss-technical-documents/jpss-science-documents
- class satpy.readers.atms_sdr_hdf5.ATMS_SDR_FileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
JPSS_SDR_FileHandler
ATMS SDR HDF5 File Reader.
Initialize file handler.
satpy.readers.avhrr_l1b_gaclac module
Reading and calibrating GAC and LAC AVHRR data.
Uses Pygac under the hood. See the Pygac Documentation for supported data formats as well as calibration and navigation methods.
- class satpy.readers.avhrr_l1b_gaclac.GACLACFile(filename, filename_info, filetype_info, start_line=None, end_line=None, strip_invalid_coords=True, interpolate_coords=True, **reader_kwargs)[source]
Bases:
BaseFileHandler
Reader for GAC and LAC data.
Init the file handler.
- Parameters:
start_line – User defined start scanline
end_line – User defined end scanline
strip_invalid_coords – Strip scanlines with invalid coordinates in the beginning/end of the orbit
interpolate_coords – Interpolate coordinates from every eighth pixel to all pixels.
reader_kwargs – More keyword arguments to be passed to pygac.Reader. See the pygac documentation for available options.
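A sketch of passing these options through reader_kwargs when creating a Scene (file names hypothetical):

import glob
from satpy import Scene

scn = Scene(glob.glob('NSS.GHRR.*'), reader='avhrr_l1b_gaclac',
            reader_kwargs={'start_line': 100, 'end_line': 2000,
                           'strip_invalid_coords': True})
scn.load(['4'])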
- _slice(data)[source]
Select user-defined scanlines and/or strip invalid coordinates.
- Returns:
Sliced data
- _strip_invalid_lat()[source]
Strip scanlines with invalid coordinates in the beginning/end of the orbit.
- Returns:
First and last scanline with valid latitudes.
- property end_time
Get the end time.
- slice(data, times)[source]
Select user-defined scanlines and/or strip invalid coordinates.
Furthermore, update scanline timestamps.
- Parameters:
data – Data to be sliced
times – Scanline timestamps
- Returns:
Sliced data and timestamps
- property start_time
Get the start time.
satpy.readers.clavrx module
Interface to CLAVR-X HDF4 products.
- class satpy.readers.clavrx.CLAVRXHDF4FileHandler(filename, filename_info, filetype_info)[source]
Bases:
HDF4FileHandler, _CLAVRxHelper
A file handler for CLAVRx files.
Init method.
- available_datasets(configured_datasets=None)[source]
Add more information if this reader can provide it.
- property end_time
Get the end time.
- property start_time
Get the start time.
- class satpy.readers.clavrx.CLAVRXNetCDFFileHandler(filename, filename_info, filetype_info)[source]
Bases:
_CLAVRxHelper, BaseFileHandler
File Handler for CLAVRX netcdf files.
Init method.
- class satpy.readers.clavrx._CLAVRxHelper[source]
Bases:
object
A base class for the CLAVRx File Handlers.
- static _read_axi_fixed_grid(filename: str, sensor: str, l1b_attr) AreaDefinition [source]
Read a fixed grid.
CLAVR-x does not transcribe fixed grid parameters to its output, so we have to recover that information from the original input file, which is partially named in the L1B attribute.
Example attributes found in L2 CLAVR-x files:
sensor = "AHI" ;
platform = "HIM8" ;
FILENAME = "clavrx_H08_20180719_1300.level2.hdf" ;
L1B = "clavrx_H08_20180719_1300" ;
- static _read_pug_fixed_grid(projection_coordinates: netCDF4.Variable, distance_multiplier=1.0) dict [source]
Read from recent PUG format, where axes are in meters.
satpy.readers.cmsaf_claas2 module
Module containing CMSAF CLAAS v2 FileHandler.
- class satpy.readers.cmsaf_claas2.CLAAS2(*args, **kwargs)[source]
Bases:
NetCDF4FileHandler
Handle CMSAF CLAAS-2 files.
Initialise class.
- _get_subset_of_full_disk()[source]
Get subset of the full disk.
CLAAS products are provided on a grid that is slightly smaller than the full disk (excludes most of the space pixels).
- available_datasets(configured_datasets=None)[source]
Yield a collection of available datasets.
Return a generator that will yield the datasets available in the loaded files. See docstring in parent class for specification details.
- property end_time
Get end time from file.
- grid_size = 3636
- property start_time
Get start time from file.
satpy.readers.electrol_hrit module
HRIT format reader.
References
- ELECTRO-L GROUND SEGMENT MSU-GS INSTRUMENT,
LRIT/HRIT Mission Specific Implementation, February 2012
- class satpy.readers.electrol_hrit.HRITGOMSEpilogueFileHandler(filename, filename_info, filetype_info)[source]
Bases:
HRITFileHandler
GOMS HRIT format reader.
Initialize the reader.
- class satpy.readers.electrol_hrit.HRITGOMSFileHandler(filename, filename_info, filetype_info, prologue, epilogue)[source]
Bases:
HRITFileHandler
GOMS HRIT format reader.
Initialize the reader.
- class satpy.readers.electrol_hrit.HRITGOMSPrologueFileHandler(filename, filename_info, filetype_info)[source]
Bases:
HRITFileHandler
GOMS HRIT format reader.
Initialize the reader.
satpy.readers.epic_l1b_h5 module
File handler for DSCOVR EPIC L1B data in hdf5 format.
The epic_l1b_h5
reader reads and calibrates EPIC L1B image data in hdf5 format.
This reader supports all image and most ancillary datasets. Once the reader is initialised:
scn = Scene([epic_filename], reader='epic_l1b_h5')
Channels can be loaded with the ‘B’ prefix and their wavelength in nanometers:
scn.load(['B317', 'B688'])
while ancillary data can be loaded by its name:
scn.load(['solar_zenith_angle'])
Note that ancillary dataset names use common standards and not the dataset names in the file. By default, channel data is loaded as calibrated reflectances, but counts data is also available.
- class satpy.readers.epic_l1b_h5.DscovrEpicL1BH5FileHandler(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
File handler for DSCOVR EPIC L1b data.
Init filehandler.
- property end_time
Get the end time.
- property start_time
Get the start time.
satpy.readers.eps_l1b module
Reader for EPS level 1b data. Uses XML files as a format description.
- class satpy.readers.eps_l1b.EPSAVHRRFile(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Eps level 1b reader for AVHRR data.
Initialize FileHandler.
- property end_time
Get end time.
- property platform_name
Get platform name.
- property sensor_name
Get sensor name.
- sensors = {'AVHR': 'avhrr-3'}
- spacecrafts = {'M01': 'Metop-B', 'M02': 'Metop-A', 'M03': 'Metop-C'}
- property start_time
Get start time.
- property three_a_mask
Mask for 3A.
- property three_b_mask
Mask for 3B.
- units = {'brightness_temperature': 'K', 'radiance': 'W m^-2 sr^-1', 'reflectance': '%'}
satpy.readers.eum_base module
Utilities for EUMETSAT satellite data.
satpy.readers.fci_l1c_nc module
Interface to MTG-FCI L1c NetCDF files.
This module defines the FCIL1cNCFileHandler
file handler, to
be used for reading Meteosat Third Generation (MTG) Flexible Combined
Imager (FCI) Level-1c data. FCI flies
on the MTG Imager (MTG-I) series of satellites; the first satellite (MTG-I1)
was launched on 13 December 2022.
For more information about FCI, see EUMETSAT.
For simulated test data to be used with this reader, see test data releases. For the Product User Guide (PUG) of the FCI L1c data, see PUG.
Note
This reader currently supports Full Disk High Spectral Resolution Imagery
(FDHSI) and High Spatial Resolution Fast Imagery (HRFI) data in full-disc (“FD”) scanning mode.
If the user provides a list of both FDHSI and HRFI files from the same repeat cycle to the Satpy Scene, Satpy will automatically read the channels from the source with the finest resolution, i.e. from the HRFI files for the vis_06, nir_22, ir_38, and ir_105 channels. If needed, the desired resolution can be explicitly requested, e.g.:

scn.load(['vis_06'], resolution=1000)
Note that RSS data is not supported yet.
Geolocation is based on information from the data files. It uses:
- From the shape of the data variable data/<channel>/measured/effective_radiance, the start and end line columns of the current swath.
- From the data variable data/<channel>/measured/x, the x-coordinates for the grid, in radians (azimuth angle positive towards West).
- From the data variable data/<channel>/measured/y, the y-coordinates for the grid, in radians (elevation angle positive towards North).
- From the attribute semi_major_axis on the data variable data/mtg_geos_projection, the Earth equatorial radius.
- From the attribute inverse_flattening on the same data variable, the (inverse) flattening of the ellipsoid.
- From the attribute perspective_point_height on the same data variable, the geostationary altitude in the normalised geostationary projection.
- From the attribute longitude_of_projection_origin on the same data variable, the longitude of the projection origin.
- From the attribute sweep_angle_axis on the same, the sweep angle axis; see https://proj.org/operations/projections/geos.html
From the pixel centre angles in radians and the geostationary altitude, the extremities of the lower left and upper right corners are calculated in units of arc length in m. This extent, along with the number of columns and rows, the sweep angle axis, and a dictionary with equatorial radius, polar radius, geostationary altitude, and longitude of projection origin, are passed on to pyresample.geometry.AreaDefinition, which then uses proj4 for the actual geolocation calculations.
The reading routine supports channel data in counts, radiances, and (depending on channel) brightness temperatures or reflectances. The brightness temperature and reflectance calculation is based on the formulas indicated in PUG. Radiance datasets are returned in units of radiance per unit wavenumber (mW m-2 sr-1 (cm-1)-1). Radiances can be converted to units of radiance per unit wavelength (W m-2 um-1 sr-1) by multiplying with the radiance_unit_conversion_coefficient dataset attribute.
For each channel, it also supports a number of auxiliary datasets, such as the pixel quality, the index map and the related geometric and acquisition parameters: time, subsatellite latitude, subsatellite longitude, platform altitude, subsolar latitude, subsolar longitude, earth-sun distance, sun-satellite distance, swath number, and swath direction.
All auxiliary data can be obtained by prepending the channel name, e.g. "vis_04_pixel_quality".
Warning
The API for the direct reading of pixel quality is temporary and likely to change. Currently, for each channel, the pixel quality is available as <chan>_pixel_quality. In the future, they will likely all be called pixel_quality and disambiguated by a to-be-decided property in the DataID.
Note
For reading compressed data, a decompression library is needed. Either install the FCIDECOMP library (see PUG), or the hdf5plugin package with:

pip install hdf5plugin

or:

conda install hdf5plugin -c conda-forge

If you use hdf5plugin, make sure to add the line import hdf5plugin at the top of your script.
- class satpy.readers.fci_l1c_nc.FCIL1cNCFileHandler(filename, filename_info, filetype_info)[source]
Bases:
NetCDF4FsspecFileHandler
Class implementing the MTG FCI L1c Filehandler.
This class implements the Meteosat Third Generation (MTG) Flexible Combined Imager (FCI) Level-1c NetCDF reader. It is designed to be used through the
Scene
class using theload
method with the reader"fci_l1c_nc"
.Initialize file handler.
- _get_dataset_measurand(key, info=None)[source]
Load dataset corresponding to channel measurement.
Load a dataset when the key refers to a measurand, whether uncalibrated (counts) or calibrated in terms of brightness temperature, radiance, or reflectance.
- _platform_name_translate = {'MTI1': 'MTG-I1', 'MTI2': 'MTG-I2', 'MTI3': 'MTG-I3', 'MTI4': 'MTG-I4'}
- calibrate_counts_to_physical_quantity(data, key)[source]
Calibrate counts to radiances, brightness temperatures, or reflectances.
- property end_time
Get end time.
- get_segment_position_info()[source]
Get information about the size and the position of the segment inside the final image array.
As the final array is composed by stacking segments vertically, the position of a segment inside the array is defined by the numbers of the start (lowest) and end (highest) row of the segment. The row numbering is assumed to start with 1. This info is used in the GEOVariableSegmentYAMLReader to compute optimal segment sizes for missing segments.
Note: in the FCI terminology, a segment is actually called “chunk”. To avoid confusion with the dask concept of chunk, and to be consistent with SEVIRI, we opt to use the word segment.
- property nominal_end_time
Get nominal end time.
- property nominal_start_time
Get nominal start time.
- property observation_end_time
Get observation end time.
- property observation_start_time
Get observation start time.
- property orbital_param
Compute the orbital parameters for the current segment.
- property rc_period_min
Get nominal repeat cycle duration.
As RSS is not yet implemented, an error will be raised if RSS data are to be read.
- property start_time
Get start time.
satpy.readers.fci_l2_nc module
Reader for the FCI L2 products in NetCDF4 format.
- class satpy.readers.fci_l2_nc.FciL2CommonFunctions[source]
Bases:
object
Shared operations for file handlers.
- static _add_flag_values_and_meanings(filename, key, variable)[source]
Build flag values and meaning from enum datatype.
- _get_global_attributes()[source]
Create a dictionary of global attributes to be added to all datasets.
- Returns:
- A dictionary of global attributes.
filename: name of the product file
spacecraft_name: name of the spacecraft
ssp_lon: longitude of subsatellite point
sensor: name of sensor
platform_name: name of the platform
- Return type:
dict
- static _mask_data(variable, fill_value)[source]
Set fill_values, as defined in yaml-file, to NaN.
Set data points in variable to NaN if they are equal to fill_value or any of the values in fill_value if fill_value is a list.
- _slice_dataset(variable, dataset_info, dimensions)[source]
Slice data if dimension layers have been provided in yaml-file.
- property sensor_name
Return instrument name.
- property spacecraft_name
Return spacecraft name.
- property ssp_lon
Return longitude at subsatellite point.
- class satpy.readers.fci_l2_nc.FciL2NCAMVFileHandler(filename, filename_info, filetype_info)[source]
Bases:
FciL2CommonFunctions, BaseFileHandler
Reader class for FCI L2 AMV products in NetCDF4 format.
Open the NetCDF file with xarray and prepare for dataset reading.
- _get_global_attributes()[source]
Create a dictionary of global attributes to be added to all datasets.
- Returns:
- A dictionary of global attributes.
filename: name of the product file
spacecraft_name: name of the spacecraft
sensor: name of sensor
platform_name: name of the platform
- Return type:
dict
- property nc
Read the file.
- class satpy.readers.fci_l2_nc.FciL2NCFileHandler(filename, filename_info, filetype_info, with_area_definition=True)[source]
Bases:
FciL2CommonFunctions, BaseFileHandler
Reader class for FCI L2 products in NetCDF4 format.
Open the NetCDF file with xarray and prepare for dataset reading.
- class satpy.readers.fci_l2_nc.FciL2NCSegmentFileHandler(filename, filename_info, filetype_info, with_area_definition=False)[source]
Bases:
FciL2CommonFunctions, BaseFileHandler
Reader class for FCI L2 Segmented products in NetCDF4 format.
Open the NetCDF file with xarray and prepare for dataset reading.
- _construct_area_def(dataset_id)[source]
Construct the area definition.
- Returns:
A pyresample AreaDefinition object containing the area definition.
- Return type:
AreaDefinition
satpy.readers.file_handlers module
Interface for BaseFileHandlers.
- class satpy.readers.file_handlers.BaseFileHandler(filename, filename_info, filetype_info)[source]
Bases:
object
Base file handler.
Initialize file handler.
- available_datasets(configured_datasets=None)[source]
Get information of available datasets in this file.
This is used for dynamically specifying what datasets are available from a file in addition to what’s configured in a YAML configuration file. Note that this method is called for each file handler for each file type; care should be taken when possible to reduce the amount of redundant datasets produced.
This method should not update values of the dataset information dictionary unless this file handler has a matching file type (the data could be loaded from this object in the future) and at least one
satpy.dataset.DataID
key is also modified. Otherwise, this file type may override the information provided by a more preferred file type (as specified in the YAML file). It is recommended that any non-ID metadata be updated during the BaseFileHandler.get_dataset() part of loading. This method is not guaranteed to be called before any other file type's handler. The availability "boolean" not being None does not mean that a file handler called later can't provide an additional dataset, but it must provide more identifying (DataID) information to do so and should yield its new dataset in addition to the previous one.
- Parameters:
configured_datasets (list) – Series of (bool or None, dict) in the same way as is returned by this method (see below). The bool is whether the dataset is available from at least one of the current file handlers. It can also be None if no file handler before us knows how to handle it. The dictionary is existing dataset metadata. The dictionaries are typically provided from a YAML configuration file and may be modified, updated, or used as a "template" for additional available datasets. This argument could be the result of a previous file handler's implementation of this method.
- Returns:
Iterator of (bool or None, dict) pairs where dict is the dataset's metadata. If the dataset is available in the current file type then the boolean value should be True, False if we know about the dataset but it is unavailable, or None if this file object is not responsible for it.
Example 1 - Supplement existing configured information:
def available_datasets(self, configured_datasets=None):
    "Add information to configured datasets."
    # we know the actual resolution
    res = self.resolution
    # update previously configured datasets
    for is_avail, ds_info in (configured_datasets or []):
        # some other file handler knows how to load this
        # don't override what they've done
        if is_avail is not None:
            yield is_avail, ds_info
        matches = self.file_type_matches(ds_info['file_type'])
        if matches and ds_info.get('resolution') != res:
            # we are meant to handle this dataset (file type matches)
            # and the information we can provide isn't available yet
            new_info = ds_info.copy()
            new_info['resolution'] = res
            yield True, new_info
        elif is_avail is None:
            # we don't know what to do with this
            # see if another future file handler does
            yield is_avail, ds_info
Example 2 - Add dynamic datasets from the file:
def available_datasets(self, configured_datasets=None):
    "Add information to configured datasets."
    # pass along existing datasets
    for is_avail, ds_info in (configured_datasets or []):
        yield is_avail, ds_info
    # get dynamic variables known to this file (that we created)
    for var_name, val in self.dynamic_variables.items():
        ds_info = {
            'file_type': self.filetype_info['file_type'],
            'resolution': 1000,
            'name': var_name,
        }
        yield True, ds_info
- combine_info(all_infos)[source]
Combine metadata for multiple datasets.
When loading data from multiple files it can be non-trivial to combine things like start_time, end_time, start_orbit, end_orbit, etc.
By default this method will produce a dictionary containing all values that were equal across all provided info dictionaries.
Additionally it performs the logical comparisons to produce the following if they exist:
start_time
end_time
start_orbit
end_orbit
orbital_parameters
time_parameters
Also, concatenate the areas.
- property end_time
Get end time.
- file_type_matches(ds_ftype)[source]
Match file handler’s type to this dataset’s file type.
- Parameters:
ds_ftype (str or list) – File type or list of file types that a dataset is configured to be loaded from.
- Returns:
True if this file handler object's type matches the dataset's file type(s), None otherwise. None is returned instead of False to follow the convention of the available_datasets() method.
- get_bounding_box()[source]
Get the bounding box of the files, as a (lons, lats) tuple.
The returned tuple should contain lons and lats lists of coordinates traveling clockwise around the points available in the file.
- property sensor_names
List of sensors represented in this file.
- property start_time
Get start time.
- satpy.readers.file_handlers.open_dataset(filename, *args, **kwargs)[source]
Open a file with xarray.
- Parameters:
filename (Union[str, FSFile]) – The path to the file to open. Can be a string or FSFile object which allows using fsspec or s3fs like files.
- Return type:
Notes
This can be used to enable readers to open remote files.
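A minimal usage sketch (local file name hypothetical; remote access would go through an FSFile object):

from satpy.readers.file_handlers import open_dataset

ds = open_dataset('some_granule.nc')  # returns the file opened via xarray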
satpy.readers.fy4_base module
Base reader for the L1 HDF data from the AGRI and GHI instruments aboard the FengYun-4A/B satellites.
The files read by this reader are described in the official Real Time Data Service:
- class satpy.readers.fy4_base.FY4Base(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
The base class for the FengYun4 AGRI and GHI readers.
Init filehandler.
- _apply_lut(data: DataArray, lut: ndarray[Any, dtype[float32]]) DataArray [source]
Calibrate digital number (DN) by applying a LUT.
- Parameters:
data – Raw detector digital number
lut – the look up table
- Returns:
Calibrated quantity
- property end_time
Get the end time.
- property reflectance_coeffs
Retrieve the reflectance calibration coefficients from the HDF file.
- static scale(dn, slope, offset)[source]
Convert digital number (DN) to calibrated quantity through scaling.
- Parameters:
dn – Raw detector digital number
slope – Slope
offset – Offset
- Returns:
Scaled data
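The scaling described above is the usual linear DN conversion; a standalone sketch of the formula (not the satpy implementation, which operates on DataArrays):

def scale(dn, slope, offset):
    # Linear DN calibration: quantity = dn * slope + offset
    return dn * slope + offset

scale(100, 0.01, -0.5)  # -> 0.5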
- property start_time
Get the start time.
satpy.readers.generic_image module
Reader for generic image (e.g. gif, png, jpg, tif, geotiff, …).
Returns a dataset without calibration. Includes coordinates if available in the file (e.g. geotiff).
If nodata values are present (and rasterio is able to read them), they will be preserved as the attribute _FillValue in the returned dataset.
If nodata values should instead be used to mask matching pixels with np.nan, this has to be enabled in the reader YAML file (key nodata_handling per dataset with value "nan_mask").
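A minimal usage sketch (file name hypothetical; "image" is the dataset name this reader exposes, though the reader YAML is the authoritative source for dataset names):

from satpy import Scene

scn = Scene(['overlay.tif'], reader='generic_image')
scn.load(['image'])
print(scn['image'].attrs.get('_FillValue'))  # nodata value, if rasterio found one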
- class satpy.readers.generic_image.GenericImageFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Handle reading of generic image files.
Initialize filehandler.
- property end_time
Return end time.
- property start_time
Return start time.
satpy.readers.geocat module
Interface to GEOCAT HDF4 or NetCDF4 products.
Note: GEOCAT files do not currently have projection information or precise pixel resolution information. Additionally the longitude and latitude arrays are stored as 16-bit integers which causes loss of precision. For this reason the lon/lats can’t be used as a reliable coordinate system to calculate the projection X/Y coordinates.
Until GEOCAT adds projection information and X/Y coordinate arrays, this reader will estimate the geostationary area the best it can. It currently takes a single lon/lat point as reference and uses hardcoded resolution and projection information to calculate the area extents.
- class satpy.readers.geocat.GEOCATFileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
NetCDF4FileHandler
GEOCAT netCDF4 file handler.
Loading data with decode_times=True
By default, this reader will use xarray_kwargs={"engine": "netcdf4", "decode_times": False} to match the behavior of xarray when the geocat reader was first written. To use different options, use reader_kwargs when loading the Scene:

scene = satpy.Scene(filenames, reader='geocat',
                    reader_kwargs={'xarray_kwargs': {'engine': 'netcdf4', 'decode_times': True}})
Open and perform initial investigation of NetCDF file.
- available_datasets(configured_datasets=None)[source]
Update information for or add datasets provided by this file.
If this file handler can load a dataset then it will supplement the dataset info with the resolution and possibly coordinate datasets needed to load it. Otherwise it will continue passing the dataset information down the chain.
See
satpy.readers.file_handlers.BaseFileHandler.available_datasets()
for details.
- property end_time
Get end time.
- property is_geo
Check platform.
- property resolution
Get resolution.
- resolutions = {'abi': {1: 1002.0086577437705, 2: 2004.017315487541}, 'ahi': {1: 999.9999820317674, 2: 1999.999964063535, 4: 3999.99992812707}}
- property sensor_names
Get sensor names.
- sensors = {'goes': 'goes_imager', 'goes16': 'abi', 'goesr': 'abi', 'himawari8': 'ahi'}
- property start_time
Get start time.
satpy.readers.gerb_l2_hr_h5 module
GERB L2 HR HDF5 reader.
A reader for the Top of Atmosphere outgoing fluxes from the Geostationary Earth Radiation Budget instrument aboard the Meteosat Second Generation satellites.
- class satpy.readers.gerb_l2_hr_h5.GERB_HR_FileHandler(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
File handler for GERB L2 High Resolution H5 files.
Initialize file handler.
- property end_time
Get end time.
- property start_time
Get start time.
satpy.readers.ghi_l1 module
Geostationary High-speed Imager reader for the Level_1 HDF format.
This instrument is aboard the Fengyun-4B satellite. No document describing this format is available, but it is broadly similar to that of the co-flying AGRI instrument.
satpy.readers.ghrsst_l2 module
Reader for the GHRSST level-2 formatted data.
- class satpy.readers.ghrsst_l2.GHRSSTL2FileHandler(filename, filename_info, filetype_info, engine=None)[source]
Bases:
BaseFileHandler
File handler for GHRSST L2 netCDF files.
Initialize the file handler for GHRSST L2 netCDF data.
- property end_time
Get end time.
- property nc
Get the xarray Dataset for the filename.
- property sensor
Get the sensor name.
- property start_time
Get start time.
satpy.readers.glm_l2 module
Geostationary Lightning Mapper reader for the Level 2 format from glmtools.
More information about glmtools and the files it produces can be found on the project’s GitHub repository:
- class satpy.readers.glm_l2.NCGriddedGLML2(filename, filename_info, filetype_info)[source]
Bases:
NC_ABI_BASE
File reader for individual GLM L2 NetCDF4 files.
Open the NetCDF file with xarray and prepare the Dataset for reading.
- available_datasets(configured_datasets=None)[source]
Discover new datasets and add information from file.
- property end_time
End time of the current file’s observations.
- property sensor
Get sensor name for current file handler.
- property start_time
Start time of the current file’s observations.
satpy.readers.goes_imager_hrit module
GOES HRIT format reader.
References
- LRIT/HRIT Mission Specific Implementation, February 2012
- GVARRDL98.pdf
- 05057_SPE_MSG_LRIT_HRI
- exception satpy.readers.goes_imager_hrit.CalibrationError[source]
Bases:
Exception
Dummy error-class.
- class satpy.readers.goes_imager_hrit.HRITGOESFileHandler(filename, filename_info, filetype_info, prologue)[source]
Bases:
HRITFileHandler
GOES HRIT format reader.
Initialize the reader.
- class satpy.readers.goes_imager_hrit.HRITGOESPrologueFileHandler(filename, filename_info, filetype_info)[source]
Bases:
HRITFileHandler
GOES HRIT format reader.
Initialize the reader.
- satpy.readers.goes_imager_hrit._epoch_doy_offset_from_sgs_time(sgs_time_array: ArrayLike) timedelta [source]
satpy.readers.goes_imager_nc module
Reader for GOES 8-15 imager data in netCDF format.
Supports netCDF files from both NOAA-CLASS and EUMETSAT.
NOAA-CLASS
GOES-Imager netCDF files from NOAA-CLASS contain detector counts alongside latitude and longitude coordinates.
Note
If ordering files via NOAA CLASS, select 16 bits/pixel.
Note
Some essential information is missing in the netCDF files:
1. Subsatellite point
2. Calibration coefficients
3. Detector-scanline assignment, i.e. information about which scanline was recorded by which detector
Items 1. and 2. are not critical because the images are geo-located and NOAA provides static calibration coefficients ([VIS], [IR]). The detector-scanline assignment however cannot be reconstructed properly. This is where an approximation has to be applied (see below).
Oversampling
GOES-Imager oversamples the viewed scene in E-W direction by a factor of 1.75: IR/VIS pixels are 112/28 urad on a side, but the instrument samples every 64/16 urad in E-W direction (see [BOOK-I] and [BOOK-N]). That means pixels are actually overlapping on the ground. This cannot be represented by a pyresample area definition.
For full disk images it is possible to estimate an area definition with uniform
sampling where pixels don’t overlap. This can be used for resampling and is
available via scene[dataset].attrs["area_def_uni"]
. The pixel size is derived
from altitude and N-S sampling angle. The area extent is based on the maximum
scanning angles at the earth’s limb.
Calibration
Calibration is performed according to [VIS] and [IR], but with an average calibration coefficient applied to all detectors in a certain channel. The reason for and impact of this approximation is described below.
The GOES imager simultaneously records multiple scanlines per sweep using multiple detectors per channel. The VIS channel has 8 detectors, the IR channels have 1-2 detectors (see e.g. Figures 3-5a/b, 3-6a/b and 3-7/a-b in [BOOK-N]). Each detector has its own calibration coefficients, so in order to perform an accurate calibration, the detector-scanline assignment is needed.
In theory it is known which scanline was recorded by which detector (VIS: 5,6,7,8,1,2,3,4; IR: 1,2). However, the plate on which the detectors are mounted flexes due to thermal gradients in the instrument which leads to a N-S shift of +/- 8 visible or +/- 2 IR pixels. This shift is compensated in the GVAR scan formation process, but in a way which is hard to reconstruct properly afterwards. See [GVAR], section 3.2.1. for details.
Since the calibration coefficients of the detectors in a certain channel only differ slightly, a workaround is to calibrate each scanline with the average calibration coefficients. A worst case estimate of the introduced error can be obtained by calibrating all possible counts with both the minimum and the maximum calibration coefficients and computing the difference. The maximum differences are:
GOES-8

| Channel | Diff  | Unit |
|---------|-------|------|
| 00_7    | 0.0   | % (counts are normalized) |
| 03_9    | 0.187 | K |
| 06_8    | 0.0   | K (only one detector) |
| 10_7    | 0.106 | K |
| 12_0    | 0.036 | K |

GOES-9

| Channel | Diff  | Unit |
|---------|-------|------|
| 00_7    | 0.0   | % (counts are normalized) |
| 03_9    | 0.0   | K (coefs identical) |
| 06_8    | 0.0   | K (only one detector) |
| 10_7    | 0.021 | K |
| 12_0    | 0.006 | K |

GOES-10

| Channel | Diff  | Unit |
|---------|-------|------|
| 00_7    | 1.05  | % |
| 03_9    | 0.0   | K (coefs identical) |
| 06_8    | 0.0   | K (only one detector) |
| 10_7    | 0.013 | K |
| 12_0    | 0.004 | K |

GOES-11

| Channel | Diff  | Unit |
|---------|-------|------|
| 00_7    | 1.25  | % |
| 03_9    | 0.0   | K (coefs identical) |
| 06_8    | 0.0   | K (only one detector) |
| 10_7    | 0.0   | K (coefs identical) |
| 12_0    | 0.065 | K |

GOES-12

| Channel | Diff  | Unit |
|---------|-------|------|
| 00_7    | 0.8   | % |
| 03_9    | 0.0   | K (coefs identical) |
| 06_5    | 0.044 | K |
| 10_7    | 0.0   | K (coefs identical) |
| 13_3    | 0.0   | K (only one detector) |

GOES-13

| Channel | Diff  | Unit |
|---------|-------|------|
| 00_7    | 1.31  | % |
| 03_9    | 0.0   | K (coefs identical) |
| 06_5    | 0.085 | K |
| 10_7    | 0.008 | K |
| 13_3    | 0.0   | K (only one detector) |

GOES-14

| Channel | Diff  | Unit |
|---------|-------|------|
| 00_7    | 0.66  | % |
| 03_9    | 0.0   | K (coefs identical) |
| 06_5    | 0.043 | K |
| 10_7    | 0.006 | K |
| 13_3    | 0.003 | K |

GOES-15

| Channel | Diff  | Unit |
|---------|-------|------|
| 00_7    | 0.86  | % |
| 03_9    | 0.0   | K (coefs identical) |
| 06_5    | 0.02  | K |
| 10_7    | 0.009 | K |
| 13_3    | 0.008 | K |
EUMETSAT
During tandem operations of GOES-15 and GOES-17, EUMETSAT distributed a variant of this dataset with the following differences:
The geolocation is in a separate file, used for all bands
VIS data is calibrated to Albedo (or reflectance)
IR data is calibrated to radiance.
VIS data is downsampled to IR resolution (4km)
The file name also differs slightly
Data is received via EumetCast
References:
[GVAR] GVAR transmission format
[BOOK-N] GOES-N databook
[BOOK-I] GOES-I databook (broken)
[IR] Conversion of GVAR Infrared Data to Scene Radiance or Temperature
[VIS] Calibration of the Visible Channels of the GOES Imagers and Sounders
[GLOSSARY] GVAR_IMG Glossary
[SCHED-W] GOES-15 Routine Imager Schedule
[SCHED-E] Optimized GOES-East Routine Imager Schedule
- class satpy.readers.goes_imager_nc.AreaDefEstimator(platform_name, channel)[source]
Bases:
object
Estimate area definition for GOES-Imager.
Create the instance.
- get_area_def_with_uniform_sampling(projection_longitude)[source]
Get area definition with uniform sampling.
The area definition is based on geometry and instrument properties: Pixel size is derived from altitude and N-S sampling angle. Area extent is based on the maximum scanning angles at the limb of the earth.
- class satpy.readers.goes_imager_nc.GOESCoefficientReader(ir_url, vis_url)[source]
Bases:
object
Read GOES Imager calibration coefficients from NOAA reference HTMLs.
Init the coef reader.
- gvar_channels = {'GOES-10': {'00_7': 1, '03_9': 2, '06_8': 3, '10_7': 4, '12_0': 5}, 'GOES-11': {'00_7': 1, '03_9': 2, '06_8': 3, '10_7': 4, '12_0': 5}, 'GOES-12': {'00_7': 1, '03_9': 2, '06_5': 3, '10_7': 4, '13_3': 6}, 'GOES-13': {'00_7': 1, '03_9': 2, '06_5': 3, '10_7': 4, '13_3': 6}, 'GOES-14': {'00_7': 1, '03_9': 2, '06_5': 3, '10_7': 4, '13_3': 6}, 'GOES-15': {'00_7': 1, '03_9': 2, '06_5': 3, '10_7': 4, '13_3': 6}, 'GOES-8': {'00_7': 1, '03_9': 2, '06_8': 3, '10_7': 4, '12_0': 5}, 'GOES-9': {'00_7': 1, '03_9': 2, '06_8': 3, '10_7': 4, '12_0': 5}}
- ir_tables = {'GOES-10': '2-3', 'GOES-11': '2-4', 'GOES-12': '2-5a', 'GOES-13': '2-6', 'GOES-14': '2-7c', 'GOES-15': '2-8b', 'GOES-8': '2-1', 'GOES-9': '2-2'}
- vis_tables = {'GOES-10': 'Table 2.', 'GOES-11': 'Table 3.', 'GOES-12': 'Table 4.', 'GOES-13': 'Table 5.', 'GOES-14': 'Table 6.', 'GOES-15': 'Table 7.', 'GOES-8': 'Table 1.', 'GOES-9': 'Table 1.'}
- class satpy.readers.goes_imager_nc.GOESEUMGEONCFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
File handler for GOES Geolocation data in EUM netCDF format.
Initialize the reader.
- property resolution
Specify the spatial resolution of the dataset.
In the EUMETSAT format VIS data is downsampled to IR resolution (4km).
- class satpy.readers.goes_imager_nc.GOESEUMNCFileHandler(filename, filename_info, filetype_info, geo_data)[source]
Bases:
GOESNCBaseFileHandler
File handler for GOES Imager data in EUM netCDF format.
TODO: Remove datasets which are not available in the file (counts, VIS radiance) via available_datasets() -> See #434
Initialize the reader.
- ir_sectors = {(566, 3464): 'Southern Hemisphere (GOES-East)', (1062, 2760): 'Southern Hemisphere (GOES-West)', (1354, 3312): 'Northern Hemisphere (GOES-West)', (1826, 3464): 'Northern Hemisphere (GOES-East)', (2704, 5208): 'Full Disc'}
- vis_sectors = {(566, 3464): 'Southern Hemisphere (GOES-East)', (1062, 2760): 'Southern Hemisphere (GOES-West)', (1354, 3312): 'Northern Hemisphere (GOES-West)', (1826, 3464): 'Northern Hemisphere (GOES-East)', (2704, 5208): 'Full Disc'}
- class satpy.readers.goes_imager_nc.GOESNCBaseFileHandler(filename, filename_info, filetype_info, geo_data=None)[source]
Bases:
BaseFileHandler
File handler for GOES Imager data in netCDF format.
Initialize the reader.
- _calibrate(radiance, coefs, channel, calibration)[source]
Convert radiance to reflectance or brightness temperature.
- static _calibrate_ir(radiance, coefs)[source]
Convert IR radiance to brightness temperature.
Reference: [IR]
- Parameters:
radiance – Radiance [mW m-2 cm-1 sr-1]
coefs – Dictionary of calibration coefficients. Keys:
n: The channel's central wavenumber [cm-1]
a: Offset [K]
b: Slope [1]
btmin: Minimum brightness temperature threshold [K]
btmax: Maximum brightness temperature threshold [K]
- Returns:
Brightness temperature [K]
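As a rough illustration, here is a sketch of the standard GVAR conversion documented on the NOAA [IR] reference page, using the coefficient names listed above; the radiation constants and the masking of out-of-range values are assumptions, not code taken from this reader:
import numpy as np

C1 = 1.191066e-5  # first radiation constant [mW m-2 sr-1 cm-4]
C2 = 1.438833     # second radiation constant [K cm]

def ir_radiance_to_bt(radiance, n, a, b, btmin, btmax):
    # Effective temperature from the inverse Planck relation,
    # then the per-channel linear BT correction.
    t_eff = C2 * n / np.log(1.0 + C1 * n**3 / radiance)
    bt = a + b * t_eff
    # Mask values outside the plausible range (assumed behaviour).
    return np.where((bt >= btmin) & (bt <= btmax), bt, np.nan)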
- static _calibrate_vis(radiance, k)[source]
Convert VIS radiance to reflectance.
Note: The angle of incident radiation and the annual variation of the earth-sun distance are not taken into account. A value of 100% corresponds to the radiance of a perfectly reflecting diffuse surface illuminated at normal incidence when the sun is at its annual-average distance from the Earth.
TODO: Take angle of incident radiation (cos sza) and annual variation of the earth-sun distance into account.
Reference: [VIS]
- Parameters:
radiance – Radiance [W m-2 um-1 sr-1]
k – pi / H, where H is the solar spectral irradiance at annual-average sun-earth distance, averaged over the spectral response function of the detector. Units of k: [m2 um sr W-1]
- Returns:
Reflectance [%]
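Given the definition of k above, the conversion reduces to a one-liner; a minimal sketch (the factor of 100 converts the fraction to percent):
def vis_radiance_to_reflectance(radiance, k):
    # reflectance [%] = 100 * pi * radiance / H = 100 * k * radiance
    return 100.0 * k * radiance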
- static _get_nadir_pixel(earth_mask, sector)[source]
Find the nadir pixel.
- Parameters:
earth_mask – Mask identifying earth and space pixels
sector – Specifies the scanned sector
- Returns:
nadir row, nadir column
- static _ircounts2radiance(counts, scale, offset)[source]
Convert IR counts to radiance.
Reference: [IR].
- Parameters:
counts – Raw detector counts
scale – Scale [mW-1 m2 cm sr]
offset – Offset [1]
- Returns:
Radiance [mW m-2 cm-1 sr-1]
- static _viscounts2radiance(counts, slope, offset)[source]
Convert VIS counts to radiance.
References: [VIS]
- Parameters:
counts – Raw detector counts
slope – Slope [W m-2 um-1 sr-1]
offset – Offset [W m-2 um-1 sr-1]
- Returns:
Radiance [W m-2 um-1 sr-1]
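Both count-to-radiance conversions are linear relations consistent with the units documented above; a minimal sketch, not the reader's actual code:
def ir_counts_to_radiance(counts, scale, offset):
    # Radiance [mW m-2 cm-1 sr-1] from raw IR counts.
    return (counts - offset) / scale

def vis_counts_to_radiance(counts, slope, offset):
    # Radiance [W m-2 um-1 sr-1] from raw VIS counts.
    return slope * counts + offset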
- available_datasets(configured_datasets=None)[source]
Update information for or add datasets provided by this file.
If this file handler can load a dataset then it will supplement the dataset info with the resolution and possibly coordinate datasets needed to load it. Otherwise it will continue passing the dataset information down the chain.
See
satpy.readers.file_handlers.BaseFileHandler.available_datasets()
for details.
- property end_time
End timestamp of the dataset.
- get_shape(key, info)[source]
Get the shape of the data.
- Returns:
Number of lines, number of columns
- abstract property ir_sectors
Get the ir sectors.
- property meta
Derive metadata from the coordinates.
- property resolution
Specify the spatial resolution of the dataset.
Channel 13_3’s spatial resolution changes from one platform to another while the wavelength and file format remain the same. In order to avoid multiple YAML reader definitions for the same file format, read the channel’s resolution from the file instead of defining it in the YAML dataset. This information will then be used by the YAML reader to complement the YAML definition of the dataset.
- Returns:
Spatial resolution in kilometers
- property start_time
Start timestamp of the dataset.
- abstract property vis_sectors
Get the vis sectors.
- yaw_flip_sampling_distance = 10
- class satpy.readers.goes_imager_nc.GOESNCFileHandler(filename, filename_info, filetype_info)[source]
Bases:
GOESNCBaseFileHandler
File handler for GOES Imager data in netCDF format.
Initialize the reader.
- ir_sectors = {(566, 3464): 'Southern Hemisphere (GOES-East)', (1062, 2760): 'Southern Hemisphere (GOES-West)', (1354, 3312): 'Northern Hemisphere (GOES-West)', (1826, 3464): 'Northern Hemisphere (GOES-East)', (2704, 5208): 'Full Disc'}
- vis_sectors = {(2267, 13852): 'Southern Hemisphere (GOES-East)', (4251, 11044): 'Southern Hemisphere (GOES-West)', (5419, 13244): 'Northern Hemisphere (GOES-West)', (7307, 13852): 'Northern Hemisphere (GOES-East)', (10819, 20800): 'Full Disc'}
- satpy.readers.goes_imager_nc.is_vis_channel(channel)[source]
Determine whether the given channel is a visible channel.
- satpy.readers.goes_imager_nc.test_coefs(ir_url, vis_url)[source]
Test calibration coefficients against NOAA reference pages.
Currently the reference pages are:
ir_url = https://www.ospo.noaa.gov/Operations/GOES/calibration/gvar-conversion.html
vis_url = https://www.ospo.noaa.gov/Operations/GOES/calibration/goes-vis-ch-calibration.html
- Parameters:
ir_url – Path or URL to HTML page with IR coefficients
vis_url – Path or URL to HTML page with VIS coefficients
- Raises:
ValueError if coefficients don't match the reference –
satpy.readers.gpm_imerg module
Reader for GPM imerg data on half-hourly timesteps.
References
The NASA IMERG ATBD: https://pmm.nasa.gov/sites/default/files/document_files/IMERG_ATBD_V06.pdf
- class satpy.readers.gpm_imerg.Hdf5IMERG(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
IMERG hdf5 reader.
Init method.
- property end_time
Find the end time from filename info.
- property start_time
Find the start time from filename info.
satpy.readers.grib module
Generic Reader for GRIB2 files.
Currently this reader depends on the pygrib python package. The eccodes package from ECMWF is preferred, but does not support python 3 at the time of writing.
- class satpy.readers.grib.GRIBFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Generic GRIB file handler.
Open grib file and do initial message parsing.
- available_datasets(configured_datasets=None)[source]
Automatically determine datasets provided by this file.
- property end_time
Get end time of this entire file.
Assumes the last message is the latest message.
- get_area_def(dsid)[source]
Get area definition for message.
If latlong grid then convert to valid eqc grid.
- property start_time
Get start time of this entire file.
Assumes the first message is the earliest message.
satpy.readers.hdf4_utils module
Helpers for reading hdf4-based files.
- class satpy.readers.hdf4_utils.HDF4FileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Base class for common HDF4 operations.
Open file and collect information.
satpy.readers.hdf5_utils module
Helpers for reading hdf5-based files.
- class satpy.readers.hdf5_utils.HDF5FileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Small class for inspecting an HDF5 file and retrieving its metadata/header data.
Initialize file handler.
satpy.readers.hdfeos_base module
Base HDF-EOS reader.
- class satpy.readers.hdfeos_base.HDFEOSBaseFileReader(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
BaseFileHandler
Base file handler for HDF EOS data for both L1b and L2 products.
Initialize the base reader.
- _add_satpy_metadata(data_id: DataID, data_arr: DataArray)[source]
Add metadata that is specific to Satpy.
- _scale_and_mask_data_array(data, is_category=False)[source]
Unscale byte data and mask invalid/fill values.
MODIS requires unscaling the in-file bytes in an unexpected way:
data = (byte_value - add_offset) * scale_factor
See the L1B User's Guide, Appendix C, for more information.
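To make the difference concrete, here is the MODIS order of operations next to the usual CF convention; the numbers are arbitrary placeholders:
byte_value = 500.0
scale_factor = 2e-5
add_offset = 316.9722

cf_value = byte_value * scale_factor + add_offset       # usual CF convention (NOT used here)
modis_value = (byte_value - add_offset) * scale_factor  # MODIS L1B convention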
- property end_time
Get the end time of the dataset.
- property metadata_platform_name
Platform name from the internal file metadata.
- property start_time
Get the start time of the dataset.
- class satpy.readers.hdfeos_base.HDFEOSGeoReader(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
HDFEOSBaseFileReader
Handler for the geographical datasets.
Initialize the geographical reader.
- DATASET_NAMES = {'latitude': 'Latitude', 'longitude': 'Longitude', 'satellite_azimuth_angle': ('SensorAzimuth', 'Sensor_Azimuth'), 'satellite_zenith_angle': ('SensorZenith', 'Sensor_Zenith'), 'solar_azimuth_angle': ('SolarAzimuth', 'SolarAzimuth'), 'solar_zenith_angle': ('SolarZenith', 'Solar_Zenith')}
- property geo_resolution
Resolution of the geographical data retrieved in the metadata.
- get_dataset(dataset_id: DataID, dataset_info: dict) DataArray [source]
Get the geolocation dataset.
- get_interpolated_dataset(name1, name2, resolution, offset=0)[source]
Load and interpolate datasets.
- satpy.readers.hdfeos_base._find_and_run_interpolation(interpolation_functions, src_resolution, dst_resolution, args)[source]
- satpy.readers.hdfeos_base._interpolate_no_angles(clons, clats, src_resolution, dst_resolution)[source]
satpy.readers.hrit_base module
HRIT/LRIT format reader.
This module is the base module for all HRIT-based formats. Here, you will find the common building blocks for hrit reading.
One of the features here is the on-the-fly decompression of hrit files. It needs a path to the xRITDecompress binary to be provided through the environment variable called XRIT_DECOMPRESS_PATH. When compressed hrit files are then encountered (files finishing with .C_), they are decompressed to the system’s temporary directory for reading.
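A hypothetical usage sketch of the on-the-fly decompression; the binary path, the file names, and the choice of the seviri_l1b_hrit reader are all examples, not requirements:
import glob
import os
from satpy import Scene

# Tell Satpy where to find Eumetsat's xRITDecompress tool.
os.environ["XRIT_DECOMPRESS_PATH"] = "/usr/local/bin/xRITDecompress"

# Compressed HRIT segments end in .C_ and are decompressed to the
# system's temporary directory transparently while reading.
filenames = glob.glob("/data/hrit/*-C_")
scn = Scene(filenames=filenames, reader="seviri_l1b_hrit")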
- class satpy.readers.hrit_base.HRITFileHandler(filename, filename_info, filetype_info, hdr_info)[source]
Bases:
BaseFileHandler
HRIT standard format reader.
Initialize the reader.
- _get_hd(hdr_info)[source]
Open the file, read and get the basic file header info and set the mda dictionary.
- property end_time
Get end time.
- get_xy_from_linecol(line, col, offsets, factors)[source]
Get the intermediate coordinates from line & col.
Intermediate coordinates are actually the instruments scanning angles.
- property observation_end_time
Get observation end time.
- property observation_start_time
Get observation start time.
- property start_time
Get start time.
- class satpy.readers.hrit_base.HRITSegment(filename, mda)[source]
Bases:
object
An HRIT segment with data.
Set up the segment.
- satpy.readers.hrit_base.decompress(infile, outdir='.')[source]
Decompress an XRIT data file and return the path to the decompressed file.
It expects to find Eumetsat's xRITDecompress through the environment variable XRIT_DECOMPRESS_PATH.
- satpy.readers.hrit_base.get_header_content(fp, header_dtype, count=1)[source]
Return the content of the HRIT header.
satpy.readers.hrit_jma module
HRIT format reader for JMA data.
Introduction
The JMA HRIT format is described in the JMA HRIT - Mission Specific Implementation. There are three readers for this format in Satpy:
jami_hrit
: For data from the JAMI instrument on MTSAT-1R
mtsat2-imager_hrit
: For data from the Imager instrument on MTSAT-2
ahi_hrit
: For data from the AHI instrument on Himawari-8/9
Although the data format is identical, the instruments have different characteristics, which is why there is a dedicated reader for each of them. Sample data is available here:
Example:
Here is an example of how to read Himawari-8 HRIT data with Satpy:
from satpy import Scene
import glob
filenames = glob.glob('data/IMG_DK01B14_2018011109*')
scn = Scene(filenames=filenames, reader='ahi_hrit')
scn.load(['B14'])
print(scn['B14'])
Output:
<xarray.DataArray (y: 5500, x: 5500)>
dask.array<concatenate, shape=(5500, 5500), dtype=float64, chunksize=(550, 4096), ...
Coordinates:
acq_time (y) datetime64[ns] 2018-01-11T09:00:20.995200 ... 2018-01-11T09:09:40.348800
crs object +proj=geos +lon_0=140.7 +h=35785831 +x_0=0 +y_0=0 +a=6378169 ...
* y (y) float64 5.5e+06 5.498e+06 5.496e+06 ... -5.496e+06 -5.498e+06
* x (x) float64 -5.498e+06 -5.496e+06 -5.494e+06 ... 5.498e+06 5.5e+06
Attributes:
orbital_parameters: {'projection_longitude': 140.7, 'projection_latitud...
standard_name: toa_brightness_temperature
level: None
wavelength: (11.0, 11.2, 11.4)
units: K
calibration: brightness_temperature
file_type: ['hrit_b14_seg', 'hrit_b14_fd']
modifiers: ()
polarization: None
sensor: ahi
name: B14
platform_name: Himawari-8
resolution: 4000
start_time: 2018-01-11 09:00:20.995200
end_time: 2018-01-11 09:09:40.348800
area: Area ID: FLDK, Description: Full Disk, Projection I...
ancillary_variables: []
JMA HRIT data contain the scanline acquisition time for only a subset of scanlines. Timestamps of
the remaining scanlines are computed using linear interpolation. This is what you’ll find in the
acq_time
coordinate of the dataset.
Compression
Gzip-compressed MTSAT files can be decompressed on the fly using
FSFile
:
import fsspec
from satpy import Scene
from satpy.readers import FSFile
filename = "/data/HRIT_MTSAT1_20090101_0630_DK01IR1.gz"
open_file = fsspec.open(filename, compression="gzip")
fs_file = FSFile(open_file)
scn = Scene([fs_file], reader="jami_hrit")
scn.load(["IR1"])
- class satpy.readers.hrit_jma.HRITJMAFileHandler(filename, filename_info, filetype_info, use_acquisition_time_as_start_time=False)[source]
Bases:
HRITFileHandler
JMA HRIT format reader.
By default, the reader uses the start time parsed from the filename. To use exact time, computed from the metadata, the user can define a keyword argument:
scene = Scene(filenames=filenames, reader='ahi_hrit', reader_kwargs={'use_acquisition_time_as_start_time': True})
As this time is different for every channel, time-dependent calculations like SZA correction can be pretty slow when multiple channels are used.
The exact scanline times are always available as coordinates of an individual channels:
scene.load(["B03"])
print(scene["B03"].coords["acq_time"].data)
would print something similar to:
array(['2021-12-08T06:00:20.131200000', '2021-12-08T06:00:20.191948000', '2021-12-08T06:00:20.252695000', ..., '2021-12-08T06:09:39.449390000', '2021-12-08T06:09:39.510295000', '2021-12-08T06:09:39.571200000'], dtype='datetime64[ns]')
The first value represents the exact start time, and the last one the exact end time of the data acquisition.
Initialize the reader.
- _check_sensor_platform_consistency(sensor)[source]
Make sure sensor and platform are consistent.
- Parameters:
sensor (str) – Sensor name from YAML dataset definition
- Raises:
ValueError if they don't match –
- _get_acq_time()[source]
Get the acquisition times from the file.
Acquisition times for a subset of scanlines are stored in the header as follows:
b'LINE:=1\rTIME:=54365.022558\rLINE:=21\rTIME:=54365.022664\r…'
Missing timestamps in between are computed using linear interpolation.
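A minimal sketch of that interpolation using the header values shown above (numpy's np.interp stands in here; it is not necessarily what the reader uses internally):
import numpy as np

# Timestamps (as modified Julian day floats) known for lines 1 and 21.
known_lines = np.array([1, 21])
known_times = np.array([54365.022558, 54365.022664])

# Linearly interpolate the scanlines in between.
all_lines = np.arange(1, 22)
all_times = np.interp(all_lines, known_lines, known_times)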
- _get_line_offset()[source]
Get line offset for the current segment.
Read line offset from the file and adapt it to the current segment or half disk scan so that
y(l) ~ l - loff
because this is what get_geostationary_area_extent() expects.
- _get_platform()[source]
Get the platform name.
The platform is not specified explicitly in JMA HRIT files. For segmented data it is not even specified in the filename. But it can be derived indirectly from the projection name:
GEOS(140.00): MTSAT-1R
GEOS(140.25): MTSAT-1R  # TODO: Check if there is more…
GEOS(140.70): Himawari-8
GEOS(145.00): MTSAT-2
See [MTSAT], section 3.1. Unfortunately Himawari-8 and 9 are not distinguishable using that method at the moment. From [HIMAWARI]:
“HRIT/LRIT files have the same file naming convention in the same format in Himawari-8 and Himawari-9, so there is no particular difference.”
TODO: Find another way to distinguish Himawari-8 and 9.
References:
[MTSAT] http://www.data.jma.go.jp/mscweb/notice/Himawari7_e.html
[HIMAWARI] http://www.data.jma.go.jp/mscweb/en/himawari89/space_segment/sample_hrit.html
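The mapping above boils down to a simple dictionary lookup; a sketch (the 'unknown' fallback is a hypothetical choice, not the reader's behaviour):
PLATFORMS = {
    "GEOS(140.00)": "MTSAT-1R",
    "GEOS(140.25)": "MTSAT-1R",
    "GEOS(140.70)": "Himawari-8",
    "GEOS(145.00)": "MTSAT-2",
}

def platform_from_projection(proj_name):
    # Derive the platform from the projection name in the file header.
    return PLATFORMS.get(proj_name.strip(), "unknown")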
- property end_time
Get end time of the scan.
- property start_time
Get start time of the scan.
satpy.readers.hrpt module
Reading and calibrating hrpt avhrr data.
Todo:
- AMSU
- Compare output with AAPP
Reading: http://www.ncdc.noaa.gov/oa/pod-guide/ncdc/docs/klm/html/c4/sec4-1.htm#t413-1
Calibration: http://www.ncdc.noaa.gov/oa/pod-guide/ncdc/docs/klm/html/c7/sec7-1.htm
- class satpy.readers.hrpt.HRPTFile(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Reader for HRPT Minor Frame, 10 bits data expanded to 16 bits.
Init the file handler.
- property _chunks
Get the best chunks for this data.
- property _data
Get the data.
Get navigation data.
- property _is3b
- property calibrator
Create a calibrator for the data.
- property end_time
Get the end time.
- property lons_lats
Get the lons and lats.
- property platform_name
Get the platform name.
- property start_time
Get the start time.
- property telemetry
Get the telemetry.
- property times
Get the timestamps for each line.
satpy.readers.hsaf_grib module
A reader for files produced by the Hydrology SAF.
Currently this reader depends on the pygrib python package. The eccodes package from ECMWF is preferred, but does not support python 3 at the time of writing.
- class satpy.readers.hsaf_grib.HSAFFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
File handler for HSAF grib files.
Init the file handler.
- property analysis_time
Get validity time of this file.
satpy.readers.hsaf_h5 module
A reader for HDF5 Snow Cover (SC) file produced by the Hydrology SAF.
- class satpy.readers.hsaf_h5.HSAFFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
File handler for HSAF H5 files.
Init the file handler.
- _get_area_def()[source]
Area definition for h10 - hardcoded.
Area definition not available in the HDF5 message, so using hardcoded one (it’s known).
hsaf_h10:
  description: H SAF H10 area definition
  projection:
    proj: geos
    lon_0: 0
    h: 35785831
    x_0: 0
    y_0: 0
    a: 6378169
    rf: 295.488065897001
    no_defs: null
    type: crs
  shape:
    height: 916
    width: 1902
  area_extent:
    lower_left_xy: [-1936760.3163240477, 2635854.280233425]
    upper_right_xy: [3770006.7195370505, 5384223.683413638]
  units: m
- property end_time
Get end time.
- get_area_def(dsid)[source]
Area definition for h10 SC dataset.
Since it is not available in the HDF5 message, using hardcoded one (it’s known).
- property start_time
Get start time.
satpy.readers.hy2_scat_l2b_h5 module
HY-2B L2B Reader.
Distributed by Eumetsat in HDF5 format. Also handles the HDF5 files from NSOAS, based on a file example.
- class satpy.readers.hy2_scat_l2b_h5.HY2SCATL2BH5FileHandler(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
File handler for HY2 scat.
Initialize file handler.
- property end_time
Time for final observation.
- property platform_name
Get the Platform ShortName.
- property start_time
Time for first observation.
satpy.readers.iasi_l2 module
IASI L2 files.
- class satpy.readers.iasi_l2.IASIL2CDRNC(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False)[source]
Bases:
NetCDF4FsspecFileHandler
Reader for IASI L2 CDR in NetCDF format.
Reader for IASI All Sky Temperature and Humidity Profiles - Climate Data Record Release 1.1 - Metop-A and -B. Data and documentation are available from http://doi.org/10.15770/EUM_SEC_CLM_0063. Data are also available from the EUMETSAT Data Store under ID EO:EUM:DAT:0576.
Initialize object.
- class satpy.readers.iasi_l2.IASIL2HDF5(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
File handler for IASI L2 HDF5 files.
Init the file handler.
- property end_time
Get the end time.
- property start_time
Get the start time.
satpy.readers.iasi_l2_so2_bufr module
IASI L2 SO2 BUFR format reader.
Introduction
The iasi_l2_so2_bufr
reader reads IASI level 2 SO2 data in BUFR format. The algorithm is described in the
Theoretical Basis Document, linked below.
Each BUFR file consists of a number of messages, one for each scan, each of which contains SO2 column amounts in Dobson units for retrievals performed with plume heights of 7, 10, 13, 16 and 25 km.
Reader Arguments
A list of retrieval files, fnames, can be opened as follows:
Scene(reader="iasi_l2_so2_bufr", filenames=fnames)
Example:
Here is an example how to read the data in satpy:
from satpy import Scene
import glob
filenames = glob.glob(
'/test_data/W_XX-EUMETSAT-Darmstadt,SOUNDING+SATELLITE,METOPA+IASI_C_EUMC_20200204091455_68984_eps_o_so2_l2.bin')
scn = Scene(filenames=filenames, reader='iasi_l2_so2_bufr')
scn.load(['so2_height_3', 'so2_height_4'])
print(scn['so2_height_3'])
Output:
<xarray.DataArray 'so2_height_3' (y: 23, x: 120)>
dask.array<where, shape=(23, 120), dtype=float64, chunksize=(1, 120), chunktype=numpy.ndarray>
Coordinates:
crs object +proj=latlong +datum=WGS84 +ellps=WGS84 +type=crs
Dimensions without coordinates: y, x
Attributes:
sensor: IASI
units: dobson
file_type: iasi_l2_so2_bufr
wavelength: None
modifiers: ()
platform_name: METOP-2
resolution: 12000
fill_value: -1e+100
level: None
polarization: None
coordinates: ('longitude', 'latitude')
calibration: None
key: #3#sulphurDioxide
name: so2_height_3
start_time: 2020-02-04 09:14:55
end_time: 2020-02-04 09:17:51
area: Shape: (23, 120)\nLons: <xarray.DataArray 'longitud...
ancillary_variables: []
References: Algorithm Theoretical Basis Document: https://acsaf.org/docs/atbd/Algorithm_Theoretical_Basis_Document_IASI_SO2_Jul_2016.pdf
- class satpy.readers.iasi_l2_so2_bufr.IASIL2SO2BUFR(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
BaseFileHandler
File handler for the IASI L2 SO2 BUFR product.
Initialise the file handler for the IASI L2 SO2 BUFR data.
- property end_time
Return the end time of data acquisition.
- property platform_name
Return spacecraft name.
- property start_time
Return the start time of data acquisition.
satpy.readers.ici_l1b_nc module
EUMETSAT EPS-SG Ice Cloud Imager (ICI) Level 1B products reader.
The format is explained in the EPS-SG ICI Level 1B Product Format Specification V3A.
This version is applicable for the ici test data released in Jan 2021.
- class satpy.readers.ici_l1b_nc.IciL1bNCFileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
NetCDF4FileHandler
Reader class for ICI L1B products in netCDF format.
Read the calibration data and prepare the class for dataset reading.
- _calibrate(variable, dataset_info)[source]
Perform the calibration.
- Parameters:
variable – xarray DataArray containing the dataset to calibrate.
dataset_info – dictionary of information about the dataset.
- Returns:
- array containing the calibrated values and all the
original metadata.
- Return type:
DataArray
- static _calibrate_bt(radiance, cw, a, b)[source]
Perform the calibration to brightness temperature.
- Parameters:
radiance – xarray DataArray or numpy ndarray containing the radiance values.
cw – center wavenumber [cm-1].
a – temperature coefficient [-].
b – temperature coefficient [K].
- Returns:
- array containing the calibrated brightness
temperature values.
- Return type:
DataArray
- static _get_third_dimension_name(variable)[source]
Get name of the third dimension of the variable.
- static _interpolate_geo(longitude, latitude, n_samples)[source]
Perform the interpolation of geographic coordinates from tie points to pixel points.
- Parameters:
longitude – xarray DataArray containing the longitude dataset to interpolate.
latitude – xarray DataArray containing the latitude dataset to interpolate.
n_samples – int describing number of samples per scan to interpolate onto.
- Returns:
- tuple of arrays containing the interpolated values, all the original
metadata and the updated dimension names.
- _interpolate_viewing_angle(azimuth, zenith, n_samples)[source]
Perform the interpolation of angular coordinates from tie points to pixel points.
- Parameters:
azimuth – xarray DataArray containing the azimuth angle dataset to interpolate.
zenith – xarray DataArray containing the zenith angle dataset to interpolate.
n_samples – int describing number of samples per scan to interpolate onto.
- Returns:
- tuple of arrays containing the interpolated values, all the original
metadata and the updated dimension names.
- _orthorectify(variable, orthorect_data_name)[source]
Perform the orthorectification.
- Parameters:
variable – xarray DataArray containing the dataset to correct for orthorectification.
orthorect_data_name – name of the orthorectification correction data in the product.
- Returns:
- array containing the corrected values and all the
original metadata.
- Return type:
DataArray
- property end_time
Get observation end time.
- property latitude
Get latitude coordinates.
- property longitude
Get longitude coordinates.
- property longitude_and_latitude
Get longitude and latitude coordinates.
- property observation_azimuth
Get observation azimuth angles.
- property observation_azimuth_and_zenith
Get observation azimuth and zenith angles.
- property observation_zenith
Get observation zenith angles.
- property platform_name
Return platform name.
- property sensor
Return sensor.
- property solar_azimuth
Get solar azimuth angles.
- property solar_azimuth_and_zenith
Get solar azimuth and zenith angles.
- property solar_zenith
Get solar zenith angles.
- property ssp_lon
Return subsatellite point longitude.
- property start_time
Get observation start time.
satpy.readers.insat3d_img_l1b_h5 module
File handler for Insat 3D L1B data in hdf5 format.
- class satpy.readers.insat3d_img_l1b_h5.Insat3DIMGL1BH5FileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
File handler for insat 3d imager data.
Initialize file handler.
- property datatree
Create the datatree.
- property end_time
Get the end time.
- property start_time
Get the start time.
- satpy.readers.insat3d_img_l1b_h5.decode_lut_arr(arr, lut)[source]
Decode an array using a lookup table.
- satpy.readers.insat3d_img_l1b_h5.get_lonlat_suffix(resolution)[source]
Get the lonlat variable suffix from the resolution.
satpy.readers.li_base_nc module
Base class used for the MTG Lighting Imager netCDF4 readers.
The base LI reader class supports generating the available datasets
programmatically: to achieve this, each LI product type should provide a
"file description" which is itself retrieved directly from the YAML
configuration file for the reader of interest, as a custom
file_desc
entry inside the 'file_type' section
corresponding to that product type.
Each file_desc
entry describes which variables available in that product should be used to
register the available satpy datasets.
Each of those description entries may contain the following elements:
product_type [required]:
Indicate the processing_level / product_type name to use internally for that type of product file. This should correspond to the
{processing_level}-{product_type}
part of the full file_pattern.
search_paths [optional]:
A list of the possible paths that should be prefixed to a given variable name when searching for that variable in the NetCDF file to register a dataset on it. The list is given in priority order. If no search path is provided (or an empty array is provided) then the variables will only be searched directly in the root group of the NetCDF structure.
swath_coordinates [required]:
The LI reader will use a SwathDefinition object to define the area/coordinates of each of the provided datasets depending on the content of this entry. The user can either:
- Specify a swath_coordinates entry directly with latitude and longitude entries, in which case the datasets matching one of the 'variable_patterns' provided will use those lat/lon variables as coordinate providers.
- Specify a swath_coordinates entry with projection, azimuth and elevation entries instead, in which case the reader will first use the variables pointed to by those 3 entries to compute the corresponding latitude/longitude data from the scan angles contained in the product file, and then assign those lat/lon datasets as coordinates for the datasets matching one of the variable_patterns provided.
Note: It is acceptable to specify an empty array for the list of variable_patterns; in this case, the swath coordinates will not be assigned to any dataset.
sectors [optional]:
The custom dataset description mechanism makes a distinction between "ordinary" variables, which should be used to create a single dataset, and "sectored" variables, which will be found per sector and will thus be used to generate as many datasets as there are sectors (see below). So this entry is used to specify the list of sector names that should be available in the NetCDF structure.
sector_variables [optional]:
This entry is used to provide a list of the variables that are available per sector in the NetCDF file. Thus, assuming the sectors entry is set to the standard list ['north', 'east', 'south', 'west'], 4 separate datasets will be registered for each variable listed here (using the conventional suffix "{sector_name}_sector").
variables [optional]:
This entry is used to provide a list of "ordinary variables" (ie. variables that are not available per sector). Each of those variables will be used to register one dataset.
Note: A single product may provide both the "variables" and the "sector_variables" at the same time (as this is the case for LI LEF for instance)
variable_transforms [optional]:
This entry may be used to provide specific additional entries per variable name (i.e. applying to both in-sector and out-of-sector variables) that should be added to the dataset infos when registering a dataset with that variable. While any kind of info could be added this way to the final dataset infos, we currently use this entry mainly to provide the LI reader with the following traits, which will then be used to "transform" the data of the dataset as requested on loading:
broadcast_to
: If this extra info is found in a dataset_info on dataset loading, then the initial data array will be broadcast to the shape of the variable found under the variable path specified as value for that entry. Note that, if the pattern {sector_name} is found in this entry value, then the reader will assume that we are writing a dataset from an in-sector variable, and use the current sector name to find the appropriate alternate variable that will be used as reference to broadcast the current variable data.
seconds_to_datetime
: This transformation is used to internally convert variables provided as float values to the np.datetime64 data type. The value specified for this entry should be the reference epoch time used as offset for the elapsed seconds when converting the data.
seconds_to_timedelta
: This transformation is used to internally convert variables (assumed to use a "second" unit) provided as float values to the np.timedelta64 data type. This entry should be set to true to activate this transform. During the conversion, we internally use a nanosecond resolution on the input floating point second values.
milliseconds_to_timedelta
: Same kind of transformation as seconds_to_timedelta, except that the source data is assumed to contain millisecond float values.
accumulate_index_offset
: If this extra info is found in a dataset_info on dataset loading, then we will consider that the dataset currently being generated is an array of indices inside the variable pointed to by the path provided as value for that entry. Note that the same usage of the pattern {sector_name} mentioned for the entry broadcast_to also applies here. This behavior is useful when multiple input files are loaded together in a single satpy scene, in which case the variables from each file will be concatenated to produce a single dataset for each variable, so the reported indices need to be corrected accordingly. An example of usage of this entry is as follows:
variable_transforms:
  integration_frame_index:
    accumulate_index_offset: "{sector_name}/exposure_time"
In the example above the integration_frame_index from each sector (i.e. optical channel) provides a list of indices in the corresponding exposure_time array from that same sector. The final indices will thus correctly take into account that the final exposure_time array contains all the values concatenated from all the input files in the scene.
use_rescaling
: By default, we currently apply variable rescaling as soon as we find one (or more) of the attributes named 'scale_factor', 'scaling_factor' or 'add_offset' in the source netCDF variable. This automatic transformation can be disabled for a given variable by specifying a value of false for this extra info element, for instance:
variable_transforms:
  latitude:
    use_rescaling: false
Note: We are currently not disabling rescaling for any dataset, so that entry is not used in the current version of the YAML config files for the LI readers.
- class satpy.readers.li_base_nc.LINCFileHandler(filename, filename_info, filetype_info, cache_handle=True)[source]
Bases:
NetCDF4FsspecFileHandler
Base class used as parent for the concrete LI reader classes.
Initialize LINCFileHandler.
- apply_accumulate_index_offset(data_array, ds_info)[source]
Apply the accumulate_index_offset transform on a given array.
- apply_milliseconds_to_timedelta(data_array, _ds_info)[source]
Apply the milliseconds_to_timedelta transform on a given array.
- apply_seconds_to_datetime(data_array, ds_info)[source]
Apply the seconds_to_datetime transform on a given array.
- apply_seconds_to_timedelta(data_array, _ds_info)[source]
Apply the seconds_to_timedelta transform on a given array.
- apply_transforms(data_array, ds_info)[source]
Apply all transformations requested in the ds_info on the provided data array.
- apply_use_rescaling(data_array, ds_info=None)[source]
Apply the use_rescaling transform on a given array.
- available_datasets(configured_datasets=None)[source]
Determine automatically the datasets provided by this file.
Uses a per product type dataset registration mechanism using the dataset descriptions declared in the reader construction above.
- combine_info(all_infos)[source]
Re-implement combine_info.
This is to be able to reset our __index_offset attribute in the shared ds_info currently being updated.
- property end_time
Get the end time.
- generate_coords_from_scan_angles()[source]
Generate the latitude/longitude coordinates from the scan azimuth and elevation angles.
- get_coordinate_names(ds_infos)[source]
Get the target coordinate names, applying the sector name as needed.
- get_dataset_infos(dname)[source]
Retrieve the dataset infos corresponding to one of the registered datasets.
- get_first_valid_variable(var_paths)[source]
Select the first valid path for a variable from the given input list and returns the data.
- get_latlon_names()[source]
Retrieve the user specified names for latitude/longitude coordinates.
Use default ‘latitude’ / ‘longitude’ if not specified.
- get_measured_variable(var_paths, fill_value=nan)[source]
Retrieve a measured variable path taking into account the potential old data formatting schema.
Missing values are replaced with the provided fill_value (unless it is explicitly set to None). Also, if a slice index is provided, only that slice of the array (on axis=0) is retrieved (before filling the missing values).
- get_transform_reference(transform_name, ds_info)[source]
Retrieve a variable that should be used as reference during a transform.
- get_transformed_dataset(ds_info)[source]
Retrieve a dataset with all transformations applied on it.
- is_prod_in_accumulation_grid()[source]
Check if the current product is an accumulated product in geos grid.
- register_available_datasets()[source]
Register all the available dataset that should be made available from this file handler.
- property sensor_names
List of sensors represented in this file.
- property start_time
Get the start time.
- update_array_attributes(data_array, ds_info)[source]
Inject the attributes from the ds_info structure into the final data array, ignoring the internal entries.
satpy.readers.li_l2_nc module
MTG Lighting Imager (LI) L2 unified reader.
This reader supports reading all the products from the LI L2 processing level:
L2-LE
L2-LGR
L2-AFA
L2-LEF
L2-LFL
L2-AF
L2-AFR
- class satpy.readers.li_l2_nc.LIL2NCFileHandler(filename, filename_info, filetype_info, with_area_definition=False)[source]
Bases:
LINCFileHandler
Implementation class for the unified LI L2 satpy reader.
Initialize LIL2NCFileHandler.
- get_area_def(dsid)[source]
Compute area definition for a dataset, only supported for accumulated products.
satpy.readers.maia module
Reader for NWPSAF AAPP MAIA Cloud product.
https://nwpsaf.eu/site/software/aapp/
Documentation reference:
[NWPSAF-MF-UD-003] DATA Formats
[NWPSAF-MF-UD-009] MAIA version 4 Scientific User Manual
- class satpy.readers.maia.MAIAFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
File handler for Maia files.
Init the file handler.
- property end_time
Get the end time.
- property start_time
Get the start time.
satpy.readers.meris_nc_sen3 module
ENVISAT MERIS reader.
Sentinel 3 like format: https://earth.esa.int/eogateway/documents/20142/37627/MERIS-Sentinel-3-Like-L1-andL2-PFS.pdf
- Default:
scn = Scene(filenames=my_files, reader='meris_nc_sen3')
- class satpy.readers.meris_nc_sen3.NCMERIS2(filename, filename_info, filetype_info)[source]
Bases:
NCOLCI2
File handler for MERIS l2.
Init the file handler.
- class satpy.readers.meris_nc_sen3.NCMERISAngles(filename, filename_info, filetype_info)[source]
Bases:
NCOLCIAngles
File handler for the MERIS angles.
Init the file handler.
- class satpy.readers.meris_nc_sen3.NCMERISCal(filename, filename_info, filetype_info)[source]
Bases:
NCOLCIBase
Dummy class for calibration.
Init the meris reader base.
- class satpy.readers.meris_nc_sen3.NCMERISGeo(filename, filename_info, filetype_info)[source]
Bases:
NCOLCIBase
Dummy class for navigation.
Init the meris reader base.
- class satpy.readers.meris_nc_sen3.NCMERISMeteo(filename, filename_info, filetype_info)[source]
Bases:
NCOLCIMeteo
File handler for the MERIS meteo data.
Init the file handler.
satpy.readers.mersi_l1b module
Reader for the FY-3D MERSI-2 L1B file format.
The files for this reader are HDF5 and come in four varieties; band data and geolocation data, both at 250m and 1000m resolution.
This reader was tested on FY-3D MERSI-2 data, but should work on future platforms as well assuming no file format changes.
- class satpy.readers.mersi_l1b.MERSIL1B(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
MERSI-2/MERSI-LL/MERSI-RM L1B file reader.
Initialize file handler.
- _get_bt_dataset(data, calibration_index, wave_number)[source]
Get the dataset as brightness temperature.
Apparently we don’t use these calibration factors for Rad -> BT:
coeffs = self._get_coefficients(ds_info['calibration_key'], calibration_index)
# coefficients are per-scan, we need to repeat the values for a
# clean alignment
coeffs = np.repeat(coeffs, data.shape[0] // coeffs.shape[1], axis=1)
coeffs = coeffs.rename({
    coeffs.dims[0]: 'coefficients',
    coeffs.dims[1]: 'y'
})  # match data dims
data = coeffs[0] + coeffs[1] * data + coeffs[2] * data**2 + coeffs[3] * data**3
- _mask_data(data, dataset_id, attrs)[source]
Mask the data using fill_value and valid_range attributes.
- property end_time
Time for final observation.
- property sensor_name
Map sensor name to Satpy ‘standard’ sensor names.
- property start_time
Time for first observation.
satpy.readers.mimic_TPW2_nc module
Reader for Mimic TPW data in netCDF format from SSEC.
This module implements a reader for MIMIC_TPW2 netCDF files. MIMIC-TPW2 is an experimental global product of total precipitable water (TPW), using morphological compositing of the MIRS retrieval from several available operational microwave-frequency sensors. Originally described in a 2010 paper by Wimmers and Velden, this Version 2 is developed from an older method that uses simpler, but more limited, TPW retrievals and advection calculations.
More information, data and credits at http://tropic.ssec.wisc.edu/real-time/mtpw2/credits.html
- class satpy.readers.mimic_TPW2_nc.MimicTPW2FileHandler(filename, filename_info, filetype_info)[source]
Bases:
NetCDF4FileHandler
NetCDF4 reader for MIMIC TPW.
Initialize the reader.
- available_datasets(configured_datasets=None)[source]
Get datasets in file matching gelocation shape (lat/lon).
- property end_time
End timestamp of the dataset; same as start_time.
- property sensor_name
Sensor name.
- property start_time
Start timestamp of the dataset determined from yaml.
satpy.readers.mirs module
Interface to MiRS product.
- class satpy.readers.mirs.MiRSL2ncHandler(filename, filename_info, filetype_info, limb_correction=True)[source]
Bases:
BaseFileHandler
MiRS handler for NetCDF4 files using xarray.
The MiRS retrieval algorithm runs on multiple sensors. For the ATMS sensors, a limb correction is applied by default. In order to change that behavior, use the keyword argument
limb_correction=False
from satpy import Scene, find_files_and_readers
filenames = find_files_and_readers(base_dir, reader="mirs")
scene = Scene(filenames, reader_kwargs={'limb_correction': False})
Init method.
- _apply_valid_range(data_arr, valid_range, scale_factor, add_offset)[source]
Get and apply valid_range.
- property _get_coeff_filenames
Retrieve necessary files for coefficients if needed.
- property _get_platform_name
Get platform name.
- property _get_sensor
Get sensor.
- apply_attributes(data, ds_info)[source]
Combine attributes from file and yaml and apply.
File attributes should take precedence over yaml if both are present
- available_datasets(configured_datasets=None)[source]
Dynamically discover what variables can be loaded from this file.
See
satpy.readers.file_handlers.BaseFileHandler.available_datasets()
for more information.
- property end_time
Get end time.
- property platform_shortname
Get platform shortname.
- property sensor_names
Return standard sensor names for the file’s data.
- property start_time
Get start time.
- satpy.readers.mirs.apply_atms_limb_correction(datasets, channel_idx, dmean, coeffs, amean, nchx, nchanx)[source]
Calculate the correction for each channel.
- satpy.readers.mirs.get_coeff_by_sfc(coeff_fn, bt_data, idx)[source]
Read coefficients for specific filename (land or sea).
satpy.readers.modis_l1b module
Modis level 1b hdf-eos format reader.
Introduction
The modis_l1b
reader reads and calibrates Modis L1 image data in hdf-eos format. Files often have
a pattern similar to the following one:
M[O/Y]D02[1/H/Q]KM.A[date].[time].[collection].[processing_time].hdf
Other patterns where “collection” and/or “processing_time” are missing might also work
(see the readers yaml file for details). Geolocation files (MOD03) are also supported.
The IMAPP direct broadcast naming format is also supported with names like:
a1.12226.1846.1000m.hdf
.
Saturation Handling
Band 2 of the MODIS sensor is available in 250m, 500m, and 1km resolutions.
The band data may include a special fill value to indicate when the detector
was saturated in the 250m version of the data. When the data is aggregated to
coarser resolutions this saturation fill value is converted to a
“can’t aggregate” fill value. By default, Satpy will replace these fill values
with NaN to indicate they are invalid. This is typically undesired when
generating images for the data as they appear as “holes” in bright clouds.
To control this the keyword argument mask_saturated
can be passed and set
to False
to set these two fill values to the maximum valid value.
scene = satpy.Scene(filenames=filenames,
reader='modis_l1b',
reader_kwargs={'mask_saturated': False})
scene.load(['2'])
Note that the saturation fill value can appear in other bands (ex. bands 7-19) in addition to band 2. Also, the “can’t aggregate” fill value is a generic “catch all” for any problems encountered when aggregating high resolution bands to lower resolutions. Filling this with the max valid value could replace non-saturated invalid pixels with valid values.
Geolocation files
For the 1km data (mod021km) geolocation files (mod03) are optional. If not given to the reader 1km geolocations will be interpolated from the 5km geolocation contained within the file.
For the 500m and 250m data geolocation files are needed.
References
Modis geolocation description: http://www.icare.univ-lille1.fr/wiki/index.php/MODIS_geolocation
- class satpy.readers.modis_l1b.HDFEOSBandReader(filename, filename_info, filetype_info, mask_saturated=True, **kwargs)[source]
Bases:
HDFEOSBaseFileReader
Handler for the regular band channels.
Init the file handler.
- _fill_saturated(array, valid_max)[source]
Replace saturation-related values with max reflectance.
If the file handler was created with
mask_saturated
set to True
then all invalid/fill values are set to NaN. If False
then the fill values 65528 and 65533 are set to the maximum valid value. These values correspond to “can’t aggregate” and “saturation” (see the sketch after the fill-value list below).
Fill values:
65535 Fill Value (includes reflective band data at night mode and completely missing L1A scans)
65534 L1A DN is missing within a scan
65533 Detector is saturated
65532 Cannot compute zero point DN, e.g., SV is saturated
65531 Detector is dead (see comments below)
65530 RSB dn** below the minimum of the scaling range
65529 TEB radiance or RSB dn exceeds the maximum of the scaling range
65528 Aggregation algorithm failure
65527 Rotation of Earth view Sector from nominal science collection position
65526 Calibration coefficient b1 could not be computed
65525 Subframe is dead
65524 Both sides of the PCLW electronics on simultaneously
65501 - 65523 (reserved for future use)
65500 NAD closed upper limit
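A minimal sketch of the mask_saturated=False replacement described above (not the handler's actual code):
import numpy as np

def fill_saturated(array, valid_max):
    # Replace "saturation" (65533) and "can't aggregate" (65528)
    # fill values with the maximum valid value.
    return np.where(np.isin(array, (65528, 65533)), valid_max, array)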
- res = {'1': 1000, 'H': 500, 'Q': 250}
- res_to_possible_variable_names = {250: ['EV_250_RefSB'], 500: ['EV_250_Aggr500_RefSB', 'EV_500_RefSB'], 1000: ['EV_250_Aggr1km_RefSB', 'EV_500_Aggr1km_RefSB', 'EV_1KM_RefSB', 'EV_1KM_Emissive']}
- class satpy.readers.modis_l1b.MixedHDFEOSReader(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
HDFEOSGeoReader
,HDFEOSBandReader
A file handler for the files that have both regular bands and geographical information in them.
Init the file handler.
- satpy.readers.modis_l1b.calibrate_bt(array, attributes, index, band_name)[source]
Calibration for the emissive channels.
- satpy.readers.modis_l1b.calibrate_counts(array, attributes, index)[source]
Calibration for counts channels.
satpy.readers.modis_l2 module
Modis level 2 hdf-eos format reader.
Introduction
The modis_l2
reader reads and calibrates Modis L2 image data in hdf-eos format.
Since there are a multitude of different level 2 datasets, not all of these are implemented (yet).
- Currently the reader supports:
m[o/y]d35_l2: cloud_mask dataset
some datasets in m[o/y]d06 files
To get a list of the available datasets for a given file refer to the “Load data” section in Reading.
Geolocation files
Similar to the modis_l1b
reader the geolocation files (mod03) for the 1km data are optional and if not
given 1km geolocations will be interpolated from the 5km geolocation contained within the file.
For the 500m and 250m data geolocation files are needed.
References
Documentation about the format: https://modis-atmos.gsfc.nasa.gov/products
- class satpy.readers.modis_l2.ModisL2HDFFileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
HDFEOSGeoReader
File handler for MODIS HDF-EOS Level 2 files.
Includes error handling for files produced by IMAPP.
Initialize the geographical reader.
- _select_hdf_dataset(hdf_dataset_name, byte_dimension)[source]
Load a dataset from HDF-EOS level 2 file.
- property end_time
Get the end time of the dataset.
- property is_imapp_mask_byte1
Get if this file is the IMAPP ‘mask_byte1’ file type.
- static read_geo_resolution(metadata)[source]
Parse metadata to find the geolocation resolution.
It is implemented as a staticmethod to match the read_mda pattern.
- property start_time
Get the start time of the dataset.
satpy.readers.modis_l3 module
Modis level 3 hdf-eos format reader.
Introduction
The modis_l3
reader reads MODIS L3 products in HDF-EOS format.
There are multiple level 3 products, including some on sinusoidal grids and some on the climate modeling grid (CMG). This reader supports the CMG products at present, and the sinusoidal products will be added if there is demand.
- The reader has been tested with:
MCD43c*: BRDF/Albedo data, such as parameters, albedo and nbar
MOD09CMG: Surface Reflectance on the climate modeling grid.
To get a list of the available datasets for a given file refer to the “Load data” section in Reading.
- class satpy.readers.modis_l3.ModisL3GriddedHDFFileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
HDFEOSGeoReader
File handler for MODIS HDF-EOS Level 3 CMG gridded files.
Initialize the geographical reader.
- available_datasets(configured_datasets=None)[source]
Automatically determine datasets provided by this file.
satpy.readers.msi_safe module
SAFE MSI L1C reader.
The MSI data has a special value for saturated pixels. By default, these
pixels are set to np.inf, but for some applications it might be desirable
to have these pixels left untouched.
For this case, the mask_saturated flag is available in the reader, and can be
toggled with reader_kwargs
upon Scene creation:
scene = satpy.Scene(filenames,
reader='msi_safe',
reader_kwargs={'mask_saturated': False})
scene.load(['B01'])
L1B format description for the files read here:
- class satpy.readers.msi_safe.SAFEMSIL1C(filename, filename_info, filetype_info, mda, tile_mda, mask_saturated=True)[source]
Bases:
BaseFileHandler
File handler for SAFE MSI files (jp2).
Initialize the reader.
- property end_time
Get the end time.
- property start_time
Get the start time.
- class satpy.readers.msi_safe.SAFEMSIMDXML(filename, filename_info, filetype_info, mask_saturated=True)[source]
Bases:
SAFEMSIXMLMetadata
File handle for sentinel 2 safe XML generic metadata.
Init the reader.
- property band_indices
Get the band indices from the metadata.
- property band_offsets
Get the band offsets from the metadata.
- calibrate_to_radiances(data, band_name)[source]
Calibrate data to radiance using the radiometric information from the metadata.
- calibrate_to_reflectances(data, band_name)[source]
Calibrate data to reflectance using the radiometric information from the metadata.
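For orientation, here is a sketch of the usual Sentinel-2 L1C counts-to-reflectance convention; the quantification value and per-band offset below are illustrative placeholders for quantities that are in practice read from this XML metadata:
QUANTIFICATION_VALUE = 10000.0  # illustrative
BAND_OFFSET = -1000.0           # illustrative per-band radiometric offset

def counts_to_reflectance_percent(counts):
    # (DN + offset) / quantification, scaled to percent.
    return (counts + BAND_OFFSET) / QUANTIFICATION_VALUE * 100.0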
- property no_data
Get the nodata value from the metadata.
- property physical_gains
Get the physical gains dictionary.
- property saturated
Get the saturated value from the metadata.
- property special_values
Get the special values from the metadata.
- class satpy.readers.msi_safe.SAFEMSITileMDXML(filename, filename_info, filetype_info, mask_saturated=True)[source]
Bases:
SAFEMSIXMLMetadata
File handle for sentinel 2 safe XML tile metadata.
Init the reader.
- property projection
Get the geographic projection.
- class satpy.readers.msi_safe.SAFEMSIXMLMetadata(filename, filename_info, filetype_info, mask_saturated=True)[source]
Bases:
BaseFileHandler
Base class for SAFE MSI XML metadata filehandlers.
Init the reader.
- property end_time
Get end time.
- property start_time
Get start time.
satpy.readers.msu_gsa_l1b module
Reader for the Arctica-M1 MSU-GS/A data.
The files for this reader are HDF5 and contain channel data at 1km resolution for the VIS channels and 4km resolution for the IR channels. Geolocation data is available at both resolutions, as is sun and satellite geometry.
This reader was tested on sample data provided by EUMETSAT.
- class satpy.readers.msu_gsa_l1b.MSUGSAFileHandler(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
MSU-GS/A L1B file reader.
Initialize file handler.
- property platform_name
Platform name is also hardcoded.
- property satellite_altitude
Satellite altitude at time of scan.
There is no documentation but this appears to be height above surface in meters.
- property satellite_latitude
Satellite latitude at time of scan.
- property satellite_longitude
Satellite longitude at time of scan.
- property sensor_name
Sensor name is hardcoded.
- property start_time
Time for timeslot scan start.
satpy.readers.mviri_l1b_fiduceo_nc module
FIDUCEO MVIRI FCDR Reader.
Introduction
The FIDUCEO MVIRI FCDR is a Fundamental Climate Data Record (FCDR) of
re-calibrated Level 1.5 Infrared, Water Vapour, and Visible radiances from
the Meteosat Visible Infra-Red Imager (MVIRI) instrument onboard the
Meteosat First Generation satellites. There are two variants of the dataset:
The full FCDR and a simplified version called easy FCDR. Some datasets are
only available in one of the two variants, see the corresponding YAML
definition in satpy/etc/readers/
.
Dataset Names
The FIDUCEO MVIRI readers use names VIS
, WV
and IR
for the visible,
water vapor and infrared channels, respectively. These are different from
the original netCDF variable names for the following reasons:
VIS channel is named differently in full FCDR (
counts_vis
) and easy FCDR (toa_bidirectional_reflectance_vis
)netCDF variable names contain the calibration level (e.g.
counts_...
), which might be confusing for satpy users if a different calibration level is chosen.
Remaining datasets (such as quality flags and uncertainties) have the same name in the reader as in the netCDF file.
Example:
This is how to read FIDUCEO MVIRI FCDR data in satpy:
from satpy import Scene
scn = Scene(filenames=['FIDUCEO_FCDR_L15_MVIRI_MET7-57.0...'],
reader='mviri_l1b_fiduceo_nc')
scn.load(['VIS', 'WV', 'IR'])
Global netCDF attributes are available in the raw_metadata
attribute of
each loaded dataset.
Image Orientation
The images are stored in MVIRI scanning direction, that means South is up and East is right. This can be changed as follows:
scn.load(['VIS'], upper_right_corner='NE')
Geolocation
In addition to the image data, FIDUCEO also provides so called static FCDRs containing latitude and longitude coordinates. In order to simplify their usage, the FIDUCEO MVIRI readers do not make use of these static files, but instead provide an area definition that can be used to compute longitude and latitude coordinates on demand.
area = scn['VIS'].attrs['area']
lons, lats = area.get_lonlats()
Those were compared to the static FCDR and they agree very well, however there are small differences. The mean difference is < 1E-3 degrees for all channels and projection longitudes.
Huge VIS Reflectances
You might encounter huge VIS reflectances (10^8 percent and greater) in situations where both radiance and solar zenith angle are small. The reader certainly needs some improvement in this regard. Maybe the corresponding uncertainties can be used to filter these cases before calculating reflectances.
VIS Channel Quality Flags
Quality flags are available for the VIS channel only. A simple approach for
masking bad quality pixels is to set the mask_bad_quality
keyword argument
to True
:
scn = Scene(filenames=['FIDUCEO_FCDR_L15_MVIRI_MET7-57.0...'],
reader='mviri_l1b_fiduceo_nc',
reader_kwargs={'mask_bad_quality': True})
See FiduceoMviriBase
for an argument description. In some situations
however the entire image can be flagged (look out for warnings). In that case
check out the quality_pixel_bitmask
and data_quality_bitmask
datasets
to find out why.
Angles
The FIDUCEO MVIRI FCDR provides satellite and solar angles on a coarse tiepoint grid. By default these datasets will be interpolated to the higher VIS resolution. This can be changed as follows:
scn.load(['solar_zenith_angle'], resolution=4500)
If you need the angles in both resolutions, use data queries:
from satpy import DataQuery
query_vis = DataQuery(
name='solar_zenith_angle',
resolution=2250
)
query_ir = DataQuery(
name='solar_zenith_angle',
resolution=4500
)
scn.load([query_vis, query_ir])
# Use the query objects to access the datasets as follows
sza_vis = scn[query_vis]
References:
[Handbook] MFG User Handbook
[PUG] FIDUCEO MVIRI FCDR Product User Guide
- satpy.readers.mviri_l1b_fiduceo_nc.ALTITUDE = 35785860.0
[Handbook] section 5.2.1.
- class satpy.readers.mviri_l1b_fiduceo_nc.DatasetWrapper(nc)[source]
Bases:
object
Helper class for accessing the dataset.
Wrap the given dataset.
- _reassign_coords(ds)[source]
Re-assign coordinates.
For some reason xarray doesn’t assign coordinates to all high resolution data variables.
- property attrs
Exposes dataset attributes.
- class satpy.readers.mviri_l1b_fiduceo_nc.FiduceoMviriBase(filename, filename_info, filetype_info, mask_bad_quality=False)[source]
Bases:
BaseFileHandler
Baseclass for FIDUCEO MVIRI file handlers.
Initialize the file handler.
- Parameters:
mask_bad_quality – Mask VIS pixels with bad quality, that means any quality flag except “ok”. If you need more control, use the
quality_pixel_bitmask
anddata_quality_bitmask
datasets.
- abstract _calibrate_vis(ds, channel, calibration)[source]
Calibrate VIS channel. To be implemented by subclasses.
- _cleanup_coords(ds)[source]
Cleanup dataset coordinates.
The y/x coordinates have been useful for interpolation so far, but they only contain row/column numbers. Drop these coordinates so that Satpy can assign projection coordinates upstream (based on the area definition).
- _get_acq_time_uncached(resolution)[source]
Get scanline acquisition time for the given resolution.
Note that the acquisition time does not increase monotonically with the scanline number due to the scan pattern and rectification.
- _get_angles_uncached(name, resolution)[source]
Get angle dataset.
Files provide angles (solar/satellite zenith & azimuth) at a coarser resolution. Interpolate them to the desired resolution.
- _get_calib_coefs()[source]
Get calibration coefficients for all channels.
Note: Only coefficients present in both file types.
- _get_ssp_lonlat()[source]
Get longitude and latitude at the subsatellite point.
Easy FCDR files provide satellite position at the beginning and end of the scan. This method computes the mean of those two values. In the full FCDR the information seems to be missing.
- Returns:
Subsatellite longitude and latitude
- nc_keys = {'IR': 'count_ir', 'WV': 'count_wv'}
- class satpy.readers.mviri_l1b_fiduceo_nc.FiduceoMviriEasyFcdrFileHandler(filename, filename_info, filetype_info, mask_bad_quality=False)[source]
Bases:
FiduceoMviriBase
File handler for FIDUCEO MVIRI Easy FCDR.
Initialize the file handler.
- Parameters:
mask_bad_quality – Mask VIS pixels with bad quality, i.e. any quality flag other than “ok”. If you need more control, use the quality_pixel_bitmask and data_quality_bitmask datasets.
- _calibrate_vis(ds, channel, calibration)[source]
Calibrate VIS channel.
Easy FCDR provides reflectance only, no counts or radiance.
- nc_keys = {'IR': 'count_ir', 'VIS': 'toa_bidirectional_reflectance_vis', 'WV': 'count_wv'}
- class satpy.readers.mviri_l1b_fiduceo_nc.FiduceoMviriFullFcdrFileHandler(filename, filename_info, filetype_info, mask_bad_quality=False)[source]
Bases:
FiduceoMviriBase
File handler for FIDUCEO MVIRI Full FCDR.
Initialize the file handler.
- Parameters:
mask_bad_quality – Mask VIS pixels with bad quality, i.e. any quality flag other than “ok”. If you need more control, use the quality_pixel_bitmask and data_quality_bitmask datasets.
- nc_keys = {'IR': 'count_ir', 'VIS': 'count_vis', 'WV': 'count_wv'}
- class satpy.readers.mviri_l1b_fiduceo_nc.IRWVCalibrator(coefs)[source]
Bases:
object
Calibrate IR & WV channels.
Initialize the calibrator.
- Parameters:
coefs – Calibration coefficients.
- _calibrate_rad_bt(counts, calibration)[source]
Calibrate counts to radiance or brightness temperature.
- _counts_to_radiance(counts)[source]
Convert IR/WV counts to radiance.
Reference: [PUG], equations (4.1) and (4.2).
- class satpy.readers.mviri_l1b_fiduceo_nc.Interpolator[source]
Bases:
object
Interpolate datasets to another resolution.
- static interp_acq_time(time2d, target_y)[source]
Interpolate scanline acquisition time to the given coordinates.
The files provide timestamps per pixel for the low resolution channels (IR/WV) only.
Average values in each line to obtain one timestamp per line.
For the VIS channel duplicate values in y-direction (as advised by [PUG]).
Note that the timestamps do not increase monotonically with the line number in some cases.
- Returns:
Mean scanline acquisition timestamps
- satpy.readers.mviri_l1b_fiduceo_nc.MVIRI_FIELD_OF_VIEW = 18.0
[Handbook] section 5.3.2.1.
- class satpy.readers.mviri_l1b_fiduceo_nc.Navigator[source]
Bases:
object
Navigate MVIRI images.
Determine line/column offsets and scaling factors.
Get projection parameters for the given settings.
Create MVIRI area definition.
- class satpy.readers.mviri_l1b_fiduceo_nc.VISCalibrator(coefs, solar_zenith_angle=None)[source]
Bases:
object
Calibrate VIS channel.
Initialize the calibrator.
- Parameters:
coefs – Calibration coefficients.
solar_zenith_angle (optional) – Solar zenith angle. Only required for calibration to reflectance.
- _counts_to_radiance(counts)[source]
Convert VIS counts to radiance.
Reference: [PUG], equations (7) and (8).
- _radiance_to_reflectance(rad)[source]
Convert VIS radiance to reflectance factor.
Note: Produces huge reflectances in situations where both radiance and solar zenith angle are small. Maybe the corresponding uncertainties can be used to filter these cases before calculating reflectances.
Reference: [PUG], equation (6).
satpy.readers.mws_l1b module
Reader for the EPS-SG Microwave Sounder (MWS) level-1b data.
Documentation: https://www.eumetsat.int/media/44139
- class satpy.readers.mws_l1b.MWSL1BFile(filename, filename_info, filetype_info)[source]
Bases:
NetCDF4FileHandler
Class implementing the EPS-SG-A1 MWS L1b Filehandler.
This class implements the European Polar System Second Generation (EPS-SG) Microwave Sounder (MWS) Level-1b NetCDF reader. It is designed to be used through the Scene class, using the load method with the reader "mws_l1b_nc".
Initialize file handler.
- _get_dataset_channel(key, dataset_info)[source]
Load dataset corresponding to channel measurement.
Load a dataset when the key refers to a measurand, whether uncalibrated (counts) or calibrated in terms of brightness temperature or radiance.
- _platform_name_translate = {'SGA1': 'Metop-SG-A1', 'SGA2': 'Metop-SG-A2', 'SGA3': 'Metop-SG-A3'}
- property end_time
Get end time.
- property platform_name
Get the platform name.
- property sensor
Get the sensor name.
- property start_time
Get start time.
- property sub_satellite_latitude_end
Get the latitude of sub-satellite point at end of the product.
- property sub_satellite_latitude_start
Get the latitude of sub-satellite point at start of the product.
- property sub_satellite_longitude_end
Get the longitude of sub-satellite point at end of the product.
- property sub_satellite_longitude_start
Get the longitude of sub-satellite point at start of the product.
satpy.readers.netcdf_utils module
Helpers for reading netcdf-based files.
- class satpy.readers.netcdf_utils.NetCDF4FileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False)[source]
Bases:
BaseFileHandler
Small class for inspecting a NetCDF4 file and retrieving its metadata/header data.
File information can be accessed using bracket notation. Variables are accessed by using:
wrapper[“var_name”]
Or:
wrapper[“group/subgroup/var_name”]
Attributes can be accessed by appending “/attr/attr_name” to the item string:
wrapper[“group/subgroup/var_name/attr/units”]
Or for global attributes:
wrapper[“/attr/platform_short_name”]
Or for all of global attributes:
wrapper[“/attrs”]
Note that loading datasets requires reopening the original file (unless those datasets are cached, see below), but to get just the shape of the dataset append “/shape” to the item string:
wrapper[“group/subgroup/var_name/shape”]
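Putting the above access patterns together, a hedged sketch (the file name is hypothetical; the empty filename_info/filetype_info dicts are normally filled in by Satpy's YAML-based reader machinery):
from satpy.readers.netcdf_utils import NetCDF4FileHandler

fh = NetCDF4FileHandler('my_file.nc', {}, {})
data = fh['group/subgroup/var_name']              # variable
units = fh['group/subgroup/var_name/attr/units']  # variable attribute
glob_attr = fh['/attr/platform_short_name']       # global attribute
shape = fh['group/subgroup/var_name/shape']       # shape without loading the data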
If your file has many small data variables that are frequently accessed, you may choose to cache some of them. You can do this by passing a number; any variable smaller than this number in bytes will be read into RAM. Warning: this part of the API is provisional and subject to change.
You may get an additional speedup by passing cache_handle=True. This will keep the netCDF4 dataset handles open throughout the lifetime of the object, and instead of using xarray.open_dataset to open every data variable, a dask array will be created “manually”. This may be useful if you have a dataset distributed over many files, such as for FCI. Note that the coordinates will be missing in this case. If you use this option, xarray_kwargs will have no effect.
- Parameters:
filename (str) – File to read
filename_info (dict) – Dictionary with filename information
filetype_info (dict) – Dictionary with filetype information
auto_maskandscale (bool) – Apply mask and scale factors
xarray_kwargs (dict) – Additional arguments to xarray.open_dataset
cache_var_size (int) – Cache variables smaller than this size.
cache_handle (bool) – Keep files open for lifetime of filehandler.
Initialize object.
- collect_cache_vars(cache_var_size)[source]
Collect data variables for caching.
This method will collect some data variables and store them in RAM. This may be useful if some small variables are frequently accessed, to prevent needlessly frequently opening and closing the file, which in case of xarray is associated with some overhead.
Should be called later than collect_metadata.
- Parameters:
cache_var_size (int) – Maximum size of the collected variables in bytes
- collect_metadata(name, obj)[source]
Collect all file variables and attributes for the provided file object.
This method also iterates through subgroups of the provided object.
- file_handle = None
- class satpy.readers.netcdf_utils.NetCDF4FsspecFileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False)[source]
Bases:
NetCDF4FileHandler
NetCDF4 file handler using fsspec to read files remotely.
Initialize object.
satpy.readers.nucaps module
Interface to NUCAPS Retrieval NetCDF files.
NUCAPS stands for NOAA Unique Combined Atmospheric Processing System. NUCAPS retrievals include temperature, moisture, trace gas, and cloud-cleared radiance profiles. Product details can be found at:
https://www.ospo.noaa.gov/Products/atmosphere/soundings/nucaps/
This reader supports both standard NOAA NUCAPS EDRs and Science EDRs, which are essentially a subset of the standard EDRs with some additional parameters such as relative humidity and boundary layer temperature.
NUCAPS data is derived from Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) data; both instruments fly onboard Joint Polar Satellite System spacecraft.
- class satpy.readers.nucaps.NUCAPSFileHandler(*args, **kwargs)[source]
Bases:
NetCDF4FileHandler
File handler for NUCAPS netCDF4 format.
Initialize file handler.
- property end_orbit_number
Return orbit number for the end of the swath.
- property end_time
Get end time.
- property platform_name
Return standard platform name for the file’s data.
- property sensor_names
Return standard sensor or instrument name for the file’s data.
- property start_orbit_number
Return orbit number for the beginning of the swath.
- property start_time
Get start time.
- class satpy.readers.nucaps.NUCAPSReader(config_files, mask_surface=True, mask_quality=True, **kwargs)[source]
Bases:
FileYAMLReader
Reader for NUCAPS NetCDF4 files.
Configure reader behavior.
- Parameters:
mask_surface (boolean) – mask anything below the surface pressure
mask_quality (boolean) – mask anything where the Quality_Flag metadata is
!= 1
.
- _abc_impl = <_abc._abc_data object>
- load(dataset_keys, previous_datasets=None, pressure_levels=None)[source]
Load data from one or more set of files.
- Parameters:
pressure_levels – Mask out certain pressure levels: True for all levels, (min, max) for a range of pressure levels, or […] a list of levels to include.
- load_ds_ids_from_config()[source]
Convert config dataset entries to DataIDs.
Special handling is done to provide level-specific datasets for any pressure-based datasets. For example, a dataset is added for each pressure level of ‘Temperature’, with each new dataset named ‘Temperature_Xmb’, where X is the pressure level.
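For example, pressure-based datasets can be loaded for a subset of levels by passing pressure_levels through Scene.load. A sketch based on the parameters above (the filename pattern is an assumption):
from glob import glob
from satpy import Scene

scn = Scene(filenames=glob('NUCAPS-EDR*.nc'), reader='nucaps')
# Load temperature profiles for a range of pressure levels
scn.load(['Temperature'], pressure_levels=(100.0, 150.0))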
satpy.readers.nwcsaf_msg2013_hdf5 module
Reader for the old NWCSAF/Geo (v2013 and earlier) cloud product format.
References
The NWCSAF GEO 2013 products documentation: http://www.nwcsaf.org/web/guest/archive - search for Code “ICD/3” and Type “MSG”; the box to the right should say ‘Status’ (which means any status). Version 7.0 seems to be the one for v2013.
- class satpy.readers.nwcsaf_msg2013_hdf5.Hdf5NWCSAF(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
NWCSAF MSG hdf5 reader.
Init method.
- property start_time
Return the start time of the object.
satpy.readers.nwcsaf_nc module
Nowcasting SAF common PPS&MSG NetCDF/CF format reader.
References
The NWCSAF GEO 2018 products documentation: http://www.nwcsaf.org/web/guest/archive
- class satpy.readers.nwcsaf_nc.NcNWCSAF(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
NWCSAF PPS&MSG NetCDF reader.
Init method.
- static _ensure_crs_extents_in_meters(crs, area_extent)[source]
Fix units in Earth shape, satellite altitude and ‘units’ attribute.
- _upsample_geolocation_uncached()[source]
Upsample the geolocation (lon,lat) from the tiepoint grid.
- property end_time
Return the end time of the object.
- get_area_def(dsid)[source]
Get the area definition of the datasets in the file.
Only applicable for MSG products!
- get_orbital_parameters(variable)[source]
Get the orbital parameters from the file if possible (geo).
- scale_dataset(variable, info)[source]
Scale the data set, applying the attributes from the netCDF file.
The scale and offset attributes will then be removed from the resulting variable.
- property sensor_names
List of sensors represented in this file.
- set_platform_and_sensor(**kwargs)[source]
Set some metadata: platform_name, sensors, and pps (identifying PPS or Geo).
- property start_time
Return the start time of the object.
satpy.readers.oceancolorcci_l3_nc module
Reader for files produced by ESA’s Ocean Color CCI project.
This reader currently supports the lat/lon gridded products and does not yet support the products on a sinusoidal grid. Products for all composite periods (1, 5 and 8 day, plus monthly) are supported, as are both merged product files (OC_PRODUCTS) and single-product files (RRS, CHLOR_A, IOP, K_490).
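A minimal sketch of reading one of these files (the file name is a placeholder, and the chlor_a dataset name is an assumption that should be checked against the reader's YAML configuration):
from satpy import Scene

scn = Scene(filenames=my_occci_files, reader='oceancolorcci_l3_nc')
scn.load(['chlor_a'])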
- class satpy.readers.oceancolorcci_l3_nc.OCCCIFileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False)[source]
Bases:
NetCDF4FileHandler
File handler for Ocean Color CCI netCDF files.
Initialize object.
- property composite_period
Determine composite period from filename information.
- property end_time
Get the end time.
- get_area_def(dsid)[source]
Get the area definition based on information in file.
There is no area definition in the file itself, so we have to compute it from the metadata, which specifies the area extent and pixel resolution.
- property start_time
Get the start time.
satpy.readers.olci_nc module
Sentinel-3 OLCI reader.
This reader supports an optional argument to choose the ‘engine’ for reading OLCI netCDF4 files. By default, this reader uses the default xarray choice of engine, as defined in the xarray.open_dataset() documentation.
As an alternative, the user may wish to use the ‘h5netcdf’ engine, but that is not the default as it typically prints many non-fatal but confusing error messages to the terminal. To choose between engines the user can do as follows for the default:
scn = Scene(filenames=my_files, reader='olci_l1b')
or as follows for the h5netcdf engine:
scn = Scene(filenames=my_files,
reader='olci_l1b', reader_kwargs={'engine': 'h5netcdf'})
References
- class satpy.readers.olci_nc.BitFlags(value, flag_list=None)[source]
Bases:
object
Manipulate flags stored bitwise.
Init the flags.
- class satpy.readers.olci_nc.NCOLCI1B(filename, filename_info, filetype_info, cal, engine=None)[source]
Bases:
NCOLCIChannelBase
File handler for OLCI l1b.
Init the file handler.
- class satpy.readers.olci_nc.NCOLCI2(filename, filename_info, filetype_info, engine=None, unlog=False, mask_items=None)[source]
Bases:
NCOLCIChannelBase
File handler for OLCI l2.
Init the file handler.
- class satpy.readers.olci_nc.NCOLCIAngles(filename, filename_info, filetype_info, engine=None, **kwargs)[source]
Bases:
NCOLCILowResData
File handler for the OLCI angles.
Init the file handler.
- datasets = {'satellite_azimuth_angle': 'OAA', 'satellite_zenith_angle': 'OZA', 'solar_azimuth_angle': 'SAA', 'solar_zenith_angle': 'SZA'}
- property satellite_angles
Return the satellite angles.
- property sun_angles
Return the sun angles.
- class satpy.readers.olci_nc.NCOLCIBase(filename, filename_info, filetype_info, engine=None, **kwargs)[source]
Bases:
BaseFileHandler
The OLCI reader base.
Init the olci reader base.
- cols_name = 'columns'
- property end_time
End time property.
- property nc
Get the nc xr dataset.
- rows_name = 'rows'
- property start_time
Start time property.
- class satpy.readers.olci_nc.NCOLCICal(filename, filename_info, filetype_info, engine=None, **kwargs)[source]
Bases:
NCOLCIBase
Dummy class for calibration.
Init the olci reader base.
- class satpy.readers.olci_nc.NCOLCIChannelBase(filename, filename_info, filetype_info, engine=None)[source]
Bases:
NCOLCIBase
Base class for channel reading.
Init the file handler.
- class satpy.readers.olci_nc.NCOLCIGeo(filename, filename_info, filetype_info, engine=None, **kwargs)[source]
Bases:
NCOLCIBase
Dummy class for navigation.
Init the olci reader base.
- class satpy.readers.olci_nc.NCOLCILowResData(filename, filename_info, filetype_info, engine=None, **kwargs)[source]
Bases:
NCOLCIBase
Handler for low resolution data.
Init the file handler.
- property _need_interpolation
- cols_name = 'tie_columns'
- rows_name = 'tie_rows'
- class satpy.readers.olci_nc.NCOLCIMeteo(filename, filename_info, filetype_info, engine=None)[source]
Bases:
NCOLCILowResData
File handler for the OLCI meteo data.
Init the file handler.
- datasets = ['humidity', 'sea_level_pressure', 'total_columnar_water_vapour', 'total_ozone']
satpy.readers.omps_edr module
Interface to OMPS EDR format.
- class satpy.readers.omps_edr.EDREOSFileHandler(filename, filename_info, filetype_info)[source]
Bases:
EDRFileHandler
EDR EOS file handler.
Initialize file handler.
- _fill_name = 'MissingValue'
- class satpy.readers.omps_edr.EDRFileHandler(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
EDR file handler.
Initialize file handler.
- _fill_name = '_FillValue'
- property end_orbit_number
Get the end orbit number.
- property platform_name
Get the platform name.
- property sensor_name
Get the sensor name.
- property start_orbit_number
Get the start orbit number.
satpy.readers.osisaf_l3_nc module
A reader for OSI-SAF level 3 products in netCDF format.
- class satpy.readers.osisaf_l3_nc.OSISAFL3NCFileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False)[source]
Bases:
NetCDF4FileHandler
Reader for the OSISAF l3 netCDF format.
Initialize object.
- property end_time
Get the end time.
- get_area_def(area_id)[source]
Get the area definition, which varies depending on file type and structure.
- property start_time
Get the start time.
satpy.readers.pmw_channels_definitions module
Passive Microwave instrument and channel specific features.
- class satpy.readers.pmw_channels_definitions.FrequencyBandBaseArithmetics[source]
Bases:
object
Mixin class with basic frequency comparison operations.
- class satpy.readers.pmw_channels_definitions.FrequencyDoubleSideBand(central: float, side: float, bandwidth: float, unit: str = 'GHz')[source]
Bases:
FrequencyBandBaseArithmetics
,FrequencyDoubleSideBandBase
The frequency double side band class.
The elements of the double-side-band type frequency band are the central frequency, the relative side band frequency (relative to the center - left and right) and their bandwidths, and optionally a unit (defaults to GHz). No clever unit conversion is done here, it’s just used for checking that two ranges are comparable.
Frequency Double Side Band is supposed to describe the special type of bands commonly used in humidity sounding from Passive Microwave Sensors. When the absorption band being observed is symmetrical, it is advantageous (giving better NeDT) to sense in a band both right and left of the central absorption frequency.
Create new instance of FrequencyDoubleSideBandBase(central, side, bandwidth, unit)
- static _check_band_contains_other(band, other_band)[source]
Check that a band contains another band.
A band is here defined as a tuple of a central frequency and a bandwidth.
- distance(value)[source]
Get the distance to the double side band.
Determining the distance in frequency space between two double side bands can be quite ambiguous, as such bands are in effect a set of 2 narrow bands, one on each side of the absorption line. To keep it as simple as possible we have, until further notice, decided to set the distance between two such bands to infinity if neither of them is contained in the other.
If the frequency entered is a single value and this frequency falls inside one of the side bands, the distance will be the minimum of the distances to the two outermost sides of the double side band. However, if such a single frequency value falls outside one of the two side bands, the distance will be set to infinity.
If the frequency entered is a tuple, the distance will either be 0 (if one band is contained in the other) or infinity.
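For illustration, a short sketch of the distance logic described above (the band values are hypothetical, in GHz):
from satpy.readers.pmw_channels_definitions import FrequencyDoubleSideBand

# 183.31 +/- 7.0 GHz band with a 2.0 GHz bandwidth on each side band
band = FrequencyDoubleSideBand(183.31, 7.0, 2.0)
band.distance(190.31)  # inside the right side band -> finite distance
band.distance(150.0)   # outside both side bands -> inf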
- class satpy.readers.pmw_channels_definitions.FrequencyDoubleSideBandBase(central: float, side: float, bandwidth: float, unit: str = 'GHz')[source]
Bases:
NamedTuple
Base class for a frequency double side band.
Frequency Double Side Band is supposed to describe the special type of bands commonly used in humidity sounding from Passive Microwave Sensors. When the absorption band being observed is symmetrical, it is advantageous (giving better NeDT) to sense in a band both right and left of the central absorption frequency.
This is needed because of this bug: https://bugs.python.org/issue41629
Create new instance of FrequencyDoubleSideBandBase(central, side, bandwidth, unit)
- _asdict()
Return a new dict which maps field names to their values.
- _field_defaults = {'unit': 'GHz'}
- _fields = ('central', 'side', 'bandwidth', 'unit')
- classmethod _make(iterable)
Make a new FrequencyDoubleSideBandBase object from a sequence or iterable
- _replace(**kwds)
Return a new FrequencyDoubleSideBandBase object replacing specified fields with new values
- class satpy.readers.pmw_channels_definitions.FrequencyQuadrupleSideBand(central: float, side: float, sideside: float, bandwidth: float, unit: str = 'GHz')[source]
Bases:
FrequencyBandBaseArithmetics
,FrequencyQuadrupleSideBandBase
The frequency quadruple side band class.
The elements of the quadruple-side-band type frequency band are the central frequency, the relative (main) side band frequency (relative to the center - left and right), the sub-side band frequency (relative to the offset side-band(s)) and their bandwidths. Optionally a unit (defaults to GHz) may be specified. No clever unit conversion is done here, it’s just used for checking that two ranges are comparable.
Frequency Quadruple Side Band is supposed to describe the special type of bands commonly used in temperature sounding from Passive Microwave Sensors. When the absorption band being observed is symmetrical, it is advantageous (giving better NeDT) to sense in a band both right and left of the central absorption frequency. But to avoid (CO2) absorption lines symmetrically positioned on each side of the main absorption band, it is common to split the side bands into two ‘side-side’ bands.
Create new instance of FrequencyQuadrupleSideBandBase(central, side, sideside, bandwidth, unit)
- distance(value)[source]
Get the distance to the quadruple side band.
Determining the distance in frequency space between two quadruple side bands can be quite ambiguous, as such bands are in effect a set of 4 narrow bands, two on each side of the main absorption band, and on each side, one on each side of the secondary absorption lines. To keep it as simple as possible we have, until further notice, decided to define the distance between two such bands as infinity if they are determined to be unequal.
If the frequency entered is a single value, the distance will be the minimum of the distances to the two outermost sides of the quadruple side band.
If the frequency entered is a tuple or list and the two quadruple frequency bands are contained in each other (equal), the distance will always be zero.
- class satpy.readers.pmw_channels_definitions.FrequencyQuadrupleSideBandBase(central: float, side: float, sideside: float, bandwidth: float, unit: str = 'GHz')[source]
Bases:
NamedTuple
Base class for a frequency quadruple side band.
Frequency Quadruple Side Band is supposed to describe the special type of bands commonly used in temperature sounding from Passive Microwave Sensors. When the absorption band being observed is symmetrical, it is advantageous (giving better NeDT) to sense in a band both right and left of the central absorption frequency. But to avoid (CO2) absorption lines symmetrically positioned on each side of the main absorption band, it is common to split the side bands into two ‘side-side’ bands.
This is needed because of this bug: https://bugs.python.org/issue41629
Create new instance of FrequencyQuadrupleSideBandBase(central, side, sideside, bandwidth, unit)
- _asdict()
Return a new dict which maps field names to their values.
- _field_defaults = {'unit': 'GHz'}
- _fields = ('central', 'side', 'sideside', 'bandwidth', 'unit')
- classmethod _make(iterable)
Make a new FrequencyQuadrupleSideBandBase object from a sequence or iterable
- _replace(**kwds)
Return a new FrequencyQuadrupleSideBandBase object replacing specified fields with new values
- class satpy.readers.pmw_channels_definitions.FrequencyRange(central: float, bandwidth: float, unit: str = 'GHz')[source]
Bases:
FrequencyBandBaseArithmetics
,FrequencyRangeBase
The Frequency range class.
The elements of the range are central and bandwidth values, and optionally a unit (defaults to GHz). No clever unit conversion is done here, it’s just used for checking that two ranges are comparable.
This type is used for passive microwave sensors.
Create new instance of FrequencyRangeBase(central, bandwidth, unit)
- class satpy.readers.pmw_channels_definitions.FrequencyRangeBase(central: float, bandwidth: float, unit: str = 'GHz')[source]
Bases:
NamedTuple
Base class for frequency ranges.
This is needed because of this bug: https://bugs.python.org/issue41629
Create new instance of FrequencyRangeBase(central, bandwidth, unit)
- _asdict()
Return a new dict which maps field names to their values.
- _field_defaults = {'unit': 'GHz'}
- _fields = ('central', 'bandwidth', 'unit')
- classmethod _make(iterable)
Make a new FrequencyRangeBase object from a sequence or iterable
- _replace(**kwds)
Return a new FrequencyRangeBase object replacing specified fields with new values
satpy.readers.safe_sar_l2_ocn module
SAFE SAR L2 OCN format reader.
The OCN data contains various parameters, but mainly the wind speed and direction calculated from SAR data and input model data from ECMWF.
Implemented in this reader is the OWI (Ocean Wind field) component.
See more at ESA webpage https://sentinel.esa.int/web/sentinel/ocean-wind-field-component
- class satpy.readers.safe_sar_l2_ocn.SAFENC(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Measurement file reader.
Init the file reader.
- property end_time
Product end_time, parsed from the measurement file name.
- property fend_time
Product fend_time meaning the end time parsed from the SAFE directory.
- property fstart_time
Product fstart_time meaning the start time parsed from the SAFE directory.
- property start_time
Product start_time, parsed from the measurement file name.
satpy.readers.sar_c_safe module
SAFE SAR-C reader.
This module implements a reader for the Sentinel-1 SAR-C GRD (level 1) SAFE format as provided by ESA. The format consists of a directory containing multiple files, most notably two measurement files in GeoTIFF format and a few XML files for calibration, noise and metadata.
References
Level 1 Product Formatting https://sentinel.esa.int/web/sentinel/technical-guides/sentinel-1-sar/products-algorithms/level-1-product-formatting
J. Park, A. A. Korosov, M. Babiker, S. Sandven and J. Won, “Efficient Thermal Noise Removal for Sentinel-1 TOPSAR Cross-Polarization Channel,” in IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 3, pp. 1555-1565, March 2018. doi: 10.1109/TGRS.2017.2765248
- class satpy.readers.sar_c_safe.AzimuthNoiseReader(root, shape)[source]
Bases:
object
Class to parse and read azimuth-noise data.
The azimuth noise vector is provided as a series of blocks, each comprising a column of data to fill the block, start and finish column numbers, and start and finish lines. For example, we can see here a (fake) azimuth noise array:
[[ 1.  1.  1. nan nan nan nan nan nan nan]
 [ 1.  1.  1. nan nan nan nan nan nan nan]
 [ 2.  2.  3.  3.  3.  4.  4.  4.  4. nan]
 [ 2.  2.  3.  3.  3.  4.  4.  4.  4. nan]
 [ 2.  2.  3.  3.  3.  4.  4.  4.  4. nan]
 [ 2.  2.  5.  5.  5.  5.  6.  6.  6.  6.]
 [ 2.  2.  5.  5.  5.  5.  6.  6.  6.  6.]
 [ 2.  2.  5.  5.  5.  5.  6.  6.  6.  6.]
 [ 2.  2.  7.  7.  7.  7.  7.  8.  8.  8.]
 [ 2.  2.  7.  7.  7.  7.  7.  8.  8.  8.]]
As is shown here, the blocks may not cover the full array, and hence it has to be gap-filled with NaNs.
Set up the azimuth noise reader.
- _assemble_azimuth_noise_blocks(chunks)[source]
Assemble the azimuth noise blocks into one single array.
- _create_dask_slice_from_block_line(current_line, chunks)[source]
Create a dask slice from the blocks at the current line.
- _create_dask_slices_from_blocks(chunks)[source]
Create full-width slices from azimuth noise blocks.
- class satpy.readers.sar_c_safe.SAFEGRD(filename, filename_info, filetype_info, calfh, noisefh, annotationfh)[source]
Bases:
BaseFileHandler
Measurement file reader.
The measurement files are in geotiff format and read using rasterio. For performance reasons, the reading adapts the chunk size to match the file’s block size.
Init the grd filehandler.
- property end_time
Get the end time.
- property start_time
Get the start time.
- class satpy.readers.sar_c_safe.SAFEXML(filename, filename_info, filetype_info, header_file=None)[source]
Bases:
BaseFileHandler
XML file reader for the SAFE format.
Init the xml filehandler.
- property end_time
Get the end time.
- property start_time
Get the start time.
- class satpy.readers.sar_c_safe.SAFEXMLAnnotation(filename, filename_info, filetype_info, header_file=None)[source]
Bases:
SAFEXML
XML file reader for the SAFE format, Annotation file.
Init the XML annotation reader.
- class satpy.readers.sar_c_safe.SAFEXMLCalibration(filename, filename_info, filetype_info, header_file=None)[source]
Bases:
SAFEXML
XML file reader for the SAFE format, Calibration file.
Init the XML calibration reader.
- class satpy.readers.sar_c_safe.SAFEXMLNoise(filename, filename_info, filetype_info, header_file=None)[source]
Bases:
SAFEXML
XML file reader for the SAFE format, Noise file.
Init the xml filehandler.
- class satpy.readers.sar_c_safe.XMLArray(root, list_tag, element_tag)[source]
Bases:
object
A proxy for getting xml data as an array.
Set up the XML array.
- class satpy.readers.sar_c_safe._AzimuthBlock(xml_element)[source]
Bases:
object
Implementation of a single azimuth-noise block.
Set up the block from an XML element.
- property first_line
- property first_pixel
- property last_line
- property last_pixel
- property lines
- property lut
- satpy.readers.sar_c_safe._get_calibration_name(calibration)[source]
Get the proper calibration name.
- satpy.readers.sar_c_safe.interpolate_slice(slice_rows, slice_cols, interpolator)[source]
Interpolate the given slice of the larger array.
- satpy.readers.sar_c_safe.interpolate_xarray(xpoints, ypoints, values, shape, blocksize=4096)[source]
Interpolate, generating a dask array.
satpy.readers.satpy_cf_nc module
Reader for files produced with the cf netcdf writer in satpy.
Introduction
The satpy_cf_nc reader reads data written by the satpy cf_writer. Filenames for the cf_writer are optional.
Several readers are based on the same satpy_cf_nc.py module:
Generic reader: satpy_cf_nc
EUMETSAT GAC FDR reader: avhrr_l1c_eum_gac_fdr_nc
Generic reader
The generic satpy_cf_nc reader reads files of type:
'{platform_name}-{sensor}-{start_time:%Y%m%d%H%M%S}-{end_time:%Y%m%d%H%M%S}.nc'
Example:
Here is an example of how to read the data in satpy:
from satpy import Scene
filenames = ['data/npp-viirs-mband-20201007075915-20201007080744.nc']
scn = Scene(reader='satpy_cf_nc', filenames=filenames)
scn.load(['M05'])
scn['M05']
Output:
<xarray.DataArray 'M05' (y: 4592, x: 3200)>
dask.array<open_dataset-d91cfbf1bf4f14710d27446d91cdc6e4M05, shape=(4592, 3200),
dtype=float32, chunksize=(4096, 3200), chunktype=numpy.ndarray>
Coordinates:
longitude (y, x) float32 dask.array<chunksize=(4096, 3200), meta=np.ndarray>
latitude (y, x) float32 dask.array<chunksize=(4096, 3200), meta=np.ndarray>
Dimensions without coordinates: y, x
Attributes:
start_time: 2020-10-07 07:59:15
start_orbit: 46350
end_time: 2020-10-07 08:07:44
end_orbit: 46350
calibration: reflectance
long_name: M05
modifiers: ('sunz_corrected',)
platform_name: Suomi-NPP
resolution: 742
sensor: viirs
standard_name: toa_bidirectional_reflectance
units: %
wavelength: 0.672 µm (0.662-0.682 µm)
date_created: 2020-10-07T08:20:02Z
instrument: VIIRS
Notes
Available datasets and attributes will depend on the data saved with the cf_writer.
EUMETSAT AVHRR GAC FDR L1C reader
The avhrr_l1c_eum_gac_fdr_nc reader reads files of type:
'AVHRR-GAC_FDR_1C_{platform}_{start_time:%Y%m%dT%H%M%SZ}_{end_time:%Y%m%dT%H%M%SZ}_{processing_mode}_{disposition_mode}_{creation_time}_{version_int:04d}.nc'
Example:
Here is an example of how to read the data in satpy:
from satpy import Scene
filenames = ['data/AVHRR-GAC_FDR_1C_N06_19810330T042358Z_19810330T060903Z_R_O_20200101T000000Z_0100.nc']
scn = Scene(reader='avhrr_l1c_eum_gac_fdr_nc', filenames=filenames)
scn.load(['brightness_temperature_channel_4'])
scn['brightness_temperature_channel_4']
Output:
<xarray.DataArray 'brightness_temperature_channel_4' (y: 11, x: 409)>
dask.array<open_dataset-55ffbf3623b32077c67897f4283640a5brightness_temperature_channel_4, shape=(11, 409),
dtype=float32, chunksize=(11, 409), chunktype=numpy.ndarray>
Coordinates:
* x (x) int16 0 1 2 3 4 5 6 7 8 ... 401 402 403 404 405 406 407 408
* y (y) int64 0 1 2 3 4 5 6 7 8 9 10
acq_time (y) datetime64[ns] dask.array<chunksize=(11,), meta=np.ndarray>
longitude (y, x) float64 dask.array<chunksize=(11, 409), meta=np.ndarray>
latitude (y, x) float64 dask.array<chunksize=(11, 409), meta=np.ndarray>
Attributes:
start_time: 1981-03-30 04:23:58
end_time: 1981-03-30 06:09:03
calibration: brightness_temperature
modifiers: ()
resolution: 1050
standard_name: toa_brightness_temperature
units: K
wavelength: 10.8 µm (10.3-11.3 µm)
Conventions: CF-1.8 ACDD-1.3
comment: Developed in cooperation with EUME...
creator_email: ops@eumetsat.int
creator_name: EUMETSAT
creator_url: https://www.eumetsat.int/
date_created: 2020-09-14T10:50:51.073707
disposition_mode: O
gac_filename: NSS.GHRR.NA.D81089.S0423.E0609.B09...
geospatial_lat_max: 89.95386902434623
geospatial_lat_min: -89.97581969005503
geospatial_lat_resolution: 1050 meters
geospatial_lat_units: degrees_north
geospatial_lon_max: 179.99952992568998
geospatial_lon_min: -180.0
geospatial_lon_resolution: 1050 meters
geospatial_lon_units: degrees_east
ground_station: GC
id: DOI:10.5676/EUM/AVHRR_GAC_L1C_FDR/...
institution: EUMETSAT
instrument: Earth Remote Sensing Instruments >...
keywords: ATMOSPHERE > ATMOSPHERIC RADIATION...
keywords_vocabulary: GCMD Science Keywords, Version 9.1
licence: EUMETSAT data policy https://www.e...
naming_authority: int.eumetsat
orbit_number_end: 9123
orbit_number_start: 9122
orbital_parameters_tle: ['1 11416U 79057A 81090.16350942...
platform: Earth Observation Satellites > NOA...
processing_level: 1C
processing_mode: R
product_version: 1.0.0
references: Devasthale, A., M. Raspaud, C. Sch...
source: AVHRR GAC Level 1 Data
standard_name_vocabulary: CF Standard Name Table v73
summary: Fundamental Data Record (FDR) of m...
sun_earth_distance_correction_factor: 0.9975244779999585
time_coverage_end: 19820803T003900Z
time_coverage_start: 19800101T000000Z
title: AVHRR GAC L1C FDR
version_calib_coeffs: PATMOS-x, v2017r1
version_pygac: 1.4.0
version_pygac_fdr: 0.1.dev107+gceb7b26.d20200910
version_satpy: 0.21.1.dev894+g5cf76e6
history: Created by pytroll/satpy on 2020-0...
name: brightness_temperature_channel_4
_satpy_id: DataID(name='brightness_temperatur...
ancillary_variables: []
- class satpy.readers.satpy_cf_nc.SatpyCFFileHandler(filename, filename_info, filetype_info, numeric_name_prefix='CHANNEL_')[source]
Bases:
BaseFileHandler
File handler for Satpy’s CF netCDF files.
Initialize file handler.
- property end_time
Get end time.
- property sensor_names
Get sensor set.
- property start_time
Get start time.
satpy.readers.scmi module
SCMI NetCDF4 Reader.
SCMI files are typically used for data from the ABI instrument onboard the GOES-16/17 satellites. It is the primary format used for providing ABI data to the AWIPS visualization clients used by the US National Weather Service forecasters. The Python code for this reader may be reused by other readers as NetCDF schemes/metadata change for different products. The initial reader using this code is the “scmi_abi” reader (see abi_l1b_scmi.yaml for more information).
There are two forms of these files that this reader supports:
- Official SCMI format: NetCDF4 files where the main data variable is stored in a variable called “Sectorized_CMI”. This variable name can be configured in the YAML configuration file.
- Satpy/Polar2Grid SCMI format: NetCDF4 files based on the official SCMI format created for the Polar2Grid project. This format was migrated to Satpy as part of Polar2Grid’s adoption of Satpy for the majority of its features. This format is what is produced by Satpy’s scmi writer. It can be identified by a single variable named “data” and a global attribute named "awips_id" that is set to a string starting with "AWIPS_".
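Either form can be read through the Scene class. A hedged sketch (the reader name abi_l1b_scmi follows the YAML file mentioned above; the C01 dataset name and the file list are assumptions):
from satpy import Scene

scn = Scene(filenames=my_scmi_files, reader='abi_l1b_scmi')
scn.load(['C01'])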
- class satpy.readers.scmi.SCMIFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Handle a single SCMI NetCDF4 file.
Set up the SCMI file handler.
- property end_time
Get the end time.
- property sensor_names
Get the sensor names.
- property start_time
Get the start time.
satpy.readers.seadas_l2 module
Reader for SEADAS L2 products.
This reader currently only supports MODIS and VIIRS Chlorophyll A from SEADAS.
The reader includes an additional keyword argument apply_quality_flags which can be used to mask out low-quality pixels based on quality flags contained in the file (l2_flags). This option defaults to False, but when set to True the “CHLWARN” pixels of the l2_flags variable are masked out. These pixels represent data where the chlorophyll algorithm warned about the quality of the result.
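A minimal sketch of enabling the quality flags (my_seadas_files is a placeholder, and the chlor_a dataset name is an assumption):
from satpy import Scene

scn = Scene(filenames=my_seadas_files, reader='seadas_l2',
            reader_kwargs={'apply_quality_flags': True})
scn.load(['chlor_a'])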
- class satpy.readers.seadas_l2.SEADASL2HDFFileHandler(filename, filename_info, filetype_info, apply_quality_flags=False)[source]
Bases:
_SEADASL2Base
,HDF4FileHandler
Simple handler of SEADAS L2 HDF4 files.
Initialize file handler and determine if data quality flags should be applied.
- end_time_attr_name = '/attr/End Time'
- l2_flags_var_name = 'l2_flags'
- platform_attr_name = '/attr/Mission'
- sensor_attr_name = '/attr/Sensor Name'
- start_time_attr_name = '/attr/Start Time'
- time_format = '%Y%j%H%M%S'
- class satpy.readers.seadas_l2.SEADASL2NetCDFFileHandler(filename, filename_info, filetype_info, apply_quality_flags=False)[source]
Bases:
_SEADASL2Base
,NetCDF4FileHandler
Simple handler of SEADAS L2 NetCDF4 files.
Initialize file handler and determine if data quality flags should be applied.
- end_time_attr_name = '/attr/time_coverage_end'
- l2_flags_var_name = 'geophysical_data/l2_flags'
- platform_attr_name = '/attr/platform'
- sensor_attr_name = '/attr/instrument'
- start_time_attr_name = '/attr/time_coverage_start'
- time_format = '%Y-%m-%dT%H:%M:%S.%f'
- class satpy.readers.seadas_l2._SEADASL2Base(filename, filename_info, filetype_info, apply_quality_flags=False)[source]
Bases:
object
Simple handler of SEADAS L2 files.
Initialize file handler and determine if data quality flags should be applied.
- property end_time
Get the ending observation time of this file’s data.
- property sensor_names
Get sensor for the current file’s data.
- property start_time
Get the starting observation time of this file’s data.
satpy.readers.seviri_base module
Common functionality for SEVIRI L1.5 data readers.
Introduction
The Spinning Enhanced Visible and InfraRed Imager (SEVIRI) is the primary instrument on Meteosat Second Generation (MSG) and has the capacity to observe the Earth in 12 spectral channels.
Level 1.5 corresponds to image data that has been corrected for all unwanted radiometric and geometric effects, has been geolocated using a standardised projection, and has been calibrated and radiance-linearised. (From the EUMETSAT documentation)
Satpy provides the following readers for SEVIRI L1.5 data in different formats:
Native:
satpy.readers.seviri_l1b_native
netCDF:
satpy.readers.seviri_l1b_nc
Calibration
This section describes how to control the calibration of SEVIRI L1.5 data.
Calibration to radiance
The SEVIRI L1.5 data readers allow choosing between two sets of file-internal calibration coefficients to convert counts to radiances:
Nominal for all channels (default)
GSICS where available (IR currently) and nominal for the remaining channels (VIS & HRV currently)
In order to change the default behaviour, use the reader_kwargs keyword argument upon Scene creation:
import satpy
scene = satpy.Scene(filenames=filenames,
reader='seviri_l1b_...',
reader_kwargs={'calib_mode': 'GSICS'})
scene.load(['VIS006', 'IR_108'])
In addition, two other calibration methods are available:
It is possible to specify external calibration coefficients for the conversion from counts to radiances. External coefficients take precedence over internal coefficients and over the Meirink coefficients, but internal and external coefficients can also be mixed: if external calibration coefficients are specified for only a subset of channels, the remaining channels will be calibrated using the chosen file-internal coefficients (nominal or GSICS). Calibration coefficients must be specified in [mW m-2 sr-1 (cm-1)-1].
The calibration mode meirink-2023 uses coefficients based on an intercalibration with Aqua-MODIS for the visible channels, as found in Inter-calibration of polar imager solar channels using SEVIRI (2013) by J. F. Meirink, R. A. Roebeling, and P. Stammes.
In the following example we use external calibration coefficients for the VIS006 & IR_108 channels, and nominal coefficients for the remaining channels:
coefs = {'VIS006': {'gain': 0.0236, 'offset': -1.20},
'IR_108': {'gain': 0.2156, 'offset': -10.4}}
scene = satpy.Scene(filenames,
reader='seviri_l1b_...',
reader_kwargs={'ext_calib_coefs': coefs})
scene.load(['VIS006', 'VIS008', 'IR_108', 'IR_120'])
In the next example we use external calibration coefficients for the VIS006 & IR_108 channels, GSICS coefficients where available (other IR channels) and nominal coefficients for the rest:
coefs = {'VIS006': {'gain': 0.0236, 'offset': -1.20},
'IR_108': {'gain': 0.2156, 'offset': -10.4}}
scene = satpy.Scene(filenames,
reader='seviri_l1b_...',
reader_kwargs={'calib_mode': 'GSICS',
'ext_calib_coefs': coefs})
scene.load(['VIS006', 'VIS008', 'IR_108', 'IR_120'])
In the next example we use the meirink-2023 calibration coefficients for all visible channels and nominal coefficients for the rest:
scene = satpy.Scene(filenames,
reader='seviri_l1b_...',
reader_kwargs={'calib_mode': 'meirink-2023'})
scene.load(['VIS006', 'VIS008', 'IR_016'])
Calibration to reflectance
When loading solar channels, the SEVIRI L1.5 data readers apply a correction for the Sun-Earth distance variation throughout the year, as recommended by the EUMETSAT document Conversion from radiances to reflectances for SEVIRI warm channels. In the unlikely situation that this correction is not required, it can be removed on a per-channel basis using satpy.readers.utils.remove_earthsun_distance_correction().
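A hedged sketch of removing the correction from a loaded channel (assuming the function accepts the channel's DataArray, as its name and the reference above suggest):
from satpy.readers.utils import remove_earthsun_distance_correction

scn.load(['VIS006'])
scn['VIS006'] = remove_earthsun_distance_correction(scn['VIS006'])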
Masking of bad quality scan lines
By default bad quality scan lines are masked and replaced with np.nan
for radiance, reflectance and
brightness temperature calibrations based on the quality flags provided by the data (for details on quality
flags see MSG Level 1.5 Image Data Format Description page 109). To disable masking
reader_kwargs={'mask_bad_quality_scan_lines': False}
can be passed to the Scene.
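For example (following the reader_kwargs pattern used above):
import satpy
scene = satpy.Scene(filenames=filenames,
                    reader='seviri_l1b_...',
                    reader_kwargs={'mask_bad_quality_scan_lines': False})
scene.load(['IR_108'])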
Metadata
The SEVIRI L1.5 readers provide the following metadata:
The orbital_parameters attribute provides the nominal and actual satellite position, as well as the projection centre. See the Metadata section in the Reading chapter for more information.
The acq_time coordinate provides the mean acquisition time for each scanline. Use a MultiIndex to enable selection by acquisition time:
import numpy as np
import pandas as pd
mi = pd.MultiIndex.from_arrays([scn['IR_108']['y'].data,
                                scn['IR_108']['acq_time'].data],
                               names=('y_coord', 'time'))
scn['IR_108']['y'] = mi
scn['IR_108'].sel(time=np.datetime64('2019-03-01T12:06:13.052000000'))
Raw metadata from the file header can be included by setting the reader argument include_raw_metadata=True (HRIT and Native format only). Note that this comes with a performance penalty of up to 10% if raw metadata from multiple segments or scans need to be combined. By default, arrays with more than 100 elements are excluded to limit the performance penalty. This threshold can be adjusted using the mda_max_array_size reader keyword argument:
scene = satpy.Scene(filenames,
                    reader='seviri_l1b_hrit/native',
                    reader_kwargs={'include_raw_metadata': True,
                                   'mda_max_array_size': 1000})
References
- class satpy.readers.seviri_base.MeirinkCalibrationHandler(calib_mode)[source]
Bases:
object
Re-calibration of the SEVIRI visible channel slopes (see Meirink 2013).
Initialize the calibration handler.
- class satpy.readers.seviri_base.MpefProductHeader[source]
Bases:
object
MPEF product header class.
- property images_used
Return structure for images_used.
- exception satpy.readers.seviri_base.NoValidOrbitParams[source]
Bases:
Exception
Exception raised when valid orbit parameters are missing.
- class satpy.readers.seviri_base.OrbitPolynomial(coefs, start_time, end_time)[source]
Bases:
object
Polynomial encoding the satellite position.
Satellite position as a function of time is encoded in the coefficients of an 8th-order Chebyshev polynomial.
Initialize the polynomial.
- class satpy.readers.seviri_base.OrbitPolynomialFinder(orbit_polynomials)[source]
Bases:
object
Find orbit polynomial for a given timestamp.
Initialize with the given candidates.
- Parameters:
orbit_polynomials –
Dictionary of orbit polynomials as found in SEVIRI L1B files:
{'X': x_polynomials, 'Y': y_polynomials, 'Z': z_polynomials, 'StartTime': polynomials_valid_from, 'EndTime': polynomials_valid_to}
- _get_closest_interval(time)[source]
Find interval closest to the given timestamp.
- Returns:
Index of closest interval, distance from its center
- _get_closest_interval_within(time, threshold)[source]
Find interval closest to the given timestamp within a given distance.
- Parameters:
time – Timestamp of interest
threshold – Maximum distance between timestamp and interval center
- Returns:
Index of closest interval
- get_orbit_polynomial(time, max_delta=6)[source]
Get orbit polynomial valid for the given time.
Orbit polynomials are only valid for certain time intervals. Find the polynomial whose corresponding interval encloses the given timestamp. If there are multiple enclosing intervals, use the most recent one. If there is no enclosing interval, find the interval whose centre is closest to the given timestamp (but not more than max_delta hours apart).
Why are there gaps between those intervals? Response from EUM:
A manoeuvre is a discontinuity in the orbit parameters. The flight dynamic algorithms are not made to interpolate over the time-span of the manoeuvre; hence we have elements describing the orbit before a manoeuvre and a new set of elements describing the orbit after the manoeuvre. The flight dynamic products are created so that there is an intentional gap at the time of the manoeuvre. Also, the two pre-manoeuvre elements may overlap. But the overlap is not an issue as both sets of elements describe the same pre-manoeuvre orbit (with negligible variations).
- class satpy.readers.seviri_base.SEVIRICalibrationAlgorithm(platform_id, scan_time)[source]
Bases:
object
SEVIRI calibration algorithms.
Initialize the calibration algorithm.
- vis_calibrate(data, solar_irradiance)[source]
Calibrate to reflectance.
This uses the method described in Conversion from radiances to reflectances for SEVIRI warm channels: https://www-cdn.eumetsat.int/files/2020-04/pdf_msg_seviri_rad2refl.pdf
- class satpy.readers.seviri_base.SEVIRICalibrationHandler(platform_id, channel_name, coefs, calib_mode, scan_time)[source]
Bases:
object
Calibration handler for SEVIRI HRIT-, native- and netCDF-formats.
Handles selection of calibration coefficients and calls the appropriate calibration algorithm.
Initialize the calibration handler.
- get_gain_offset()[source]
Get gain & offset for calibration from counts to radiance.
Choices for internal coefficients are nominal or GSICS. If no GSICS coefficients are available for a certain channel, fall back to nominal coefficients. External coefficients take precedence over internal coefficients.
- satpy.readers.seviri_base._create_bad_quality_lines_mask(line_validity, line_geometric_quality, line_radiometric_quality)[source]
Create bad quality scan lines mask.
For details on quality flags see MSG Level 1.5 Image Data Format Description page 109.
- Parameters:
line_validity (numpy.ndarray) – Quality flags with shape (nlines,).
line_geometric_quality (numpy.ndarray) – Quality flags with shape (nlines,).
line_radiometric_quality (numpy.ndarray) – Quality flags with shape (nlines,).
- Returns:
Indicating if the scan line is bad.
- Return type:
- satpy.readers.seviri_base.add_scanline_acq_time(dataset, acq_time)[source]
Add scanline acquisition time to the given dataset.
- satpy.readers.seviri_base.calculate_area_extent(area_dict)[source]
Calculate the area extent seen by a geostationary satellite.
- Parameters:
area_dict – A dictionary containing the required parameters:
center_point: Center point for the projection
north: Northmost row number
east: Eastmost column number
west: Westmost column number
south: Southmost row number
column_step: Pixel resolution in meters in east-west direction
line_step: Pixel resolution in meters in south-north direction
column_offset: Column offset, defaults to 0 if not given
line_offset: Line offset, defaults to 0 if not given
- Returns:
- An area extent for the scene defined by the lower left and upper right corners
- Return type:
Note: For Earth model 2 and full disk VISIR, (center_point - west - 0.5 + we_offset) must be -1856.5. See MSG Level 1.5 Image Data Format Description, Figure 7 - Alignment and numbering of the non-HRV pixels.
- satpy.readers.seviri_base.chebyshev(coefs, time, domain)[source]
Evaluate a Chebyshev Polynomial.
Reference: Appendix A in the MSG Level 1.5 Image Data Format Description.
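A short usage sketch (the coefficient values are arbitrary; domain is assumed to be the (start, end) interval on which the polynomial is defined):
from satpy.readers.seviri_base import chebyshev

value = chebyshev(coefs=[1.0, 0.5, 0.2], time=100, domain=[0, 200])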
- satpy.readers.seviri_base.chebyshev_3d(coefs, time, domain)[source]
Evaluate Chebyshev Polynomials for three dimensions (x, y, z).
Expects the three coefficient sets to be defined in the same domain.
- Parameters:
coefs – (x, y, z) coefficient sets.
time – See
chebyshev()
domain – See
chebyshev()
- Returns:
Polynomials evaluated in (x, y, z) dimension.
- satpy.readers.seviri_base.create_coef_dict(coefs_nominal, coefs_gsics, radiance_type, ext_coefs)[source]
Create coefficient dictionary expected by calibration class.
- satpy.readers.seviri_base.dec10216(inbuf)[source]
Decode 10 bits data into 16 bits words.
/*
 * pack 4 10-bit words in 5 bytes into 4 16-bit words
 *
 * 0       1       2       3       4       5
 * 01234567890123456789012345678901234567890
 * 0         1         2         3         4
 */
ip = &in_buffer[i];
op = &out_buffer[j];
op[0] = ip[0]*4 + ip[1]/64;
op[1] = (ip[1] & 0x3F)*16 + ip[2]/16;
op[2] = (ip[2] & 0x0F)*64 + ip[3]/4;
op[3] = (ip[3] & 0x03)*256 + ip[4];
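A rough numpy equivalent of the unpacking above (a sketch for illustration, not the reader's actual implementation):
import numpy as np

def dec10216_sketch(inbuf):
    # Unpack groups of 5 bytes into 4 16-bit words holding 10-bit values.
    ip = np.asarray(inbuf, dtype=np.uint16).reshape(-1, 5)
    op = np.empty((ip.shape[0], 4), dtype=np.uint16)
    op[:, 0] = ip[:, 0] * 4 + ip[:, 1] // 64
    op[:, 1] = (ip[:, 1] & 0x3F) * 16 + ip[:, 2] // 16
    op[:, 2] = (ip[:, 2] & 0x0F) * 64 + ip[:, 3] // 4
    op[:, 3] = (ip[:, 3] & 0x03) * 256 + ip[:, 4]
    return op.ravel()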
- satpy.readers.seviri_base.get_cds_time(days, msecs)[source]
Compute timestamp given the days since epoch and milliseconds of the day.
1958-01-01 00:00 is interpreted as fill value and will be replaced by NaT (Not a Time).
- Parameters:
days (int, either scalar or numpy.ndarray) – Days since 1958-01-01
msecs (int, either scalar or numpy.ndarray) – Milliseconds of the day
- Returns:
Timestamp(s)
- Return type:
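For example (a sketch; days are counted from 1958-01-01):
from satpy.readers.seviri_base import get_cds_time

# 21246 days after 1958-01-01 at 12:00 UTC -> 2016-03-03T12:00
get_cds_time(days=21246, msecs=12 * 3600 * 1000)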
- satpy.readers.seviri_base.get_meirink_slope(meirink_coefs, acquisition_time)[source]
Compute the slope for the visible channel calibration according to Meirink 2013.
S = A + B * 1.e-3 * Day
where S is the slope in µW m-2 sr-1 (cm-1)-1.
EUMETSAT calibration is given in mW m-2 sr-1 (cm-1)-1, so an extra factor of 1/1000 must be applied.
- satpy.readers.seviri_base.get_padding_area(shape, dtype)[source]
Create a padding area filled with no data.
- satpy.readers.seviri_base.get_satpos(orbit_polynomial, time, semi_major_axis, semi_minor_axis)[source]
Get satellite position in geodetic coordinates.
- Parameters:
orbit_polynomial – OrbitPolynomial instance
time – Timestamp where to evaluate the polynomial
semi_major_axis – Semi-major axis of the ellipsoid
semi_minor_axis – Semi-minor axis of the ellipsoid
- Returns:
Longitude [deg east], Latitude [deg north] and Altitude [m]
- satpy.readers.seviri_base.mask_bad_quality(data, line_validity, line_geometric_quality, line_radiometric_quality)[source]
Mask scan lines with bad quality.
- Parameters:
data (xarray.DataArray) – Channel data
line_validity (numpy.ndarray) – Quality flags with shape (nlines,).
line_geometric_quality (numpy.ndarray) – Quality flags with shape (nlines,).
line_radiometric_quality (numpy.ndarray) – Quality flags with shape (nlines,).
- Returns:
data with lines flagged as bad converted to np.nan.
- Return type:
- satpy.readers.seviri_base.pad_data_horizontally(data, final_size, east_bound, west_bound)[source]
Pad the data given east and west bounds and the desired size.
- satpy.readers.seviri_base.pad_data_vertically(data, final_size, south_bound, north_bound)[source]
Pad the data given south and north bounds and the desired size.
- satpy.readers.seviri_base.round_nom_time(dt, time_delta)[source]
Round a datetime object to a multiple of a timedelta.
dt: datetime.datetime object, default now.
time_delta: timedelta object; we round to a multiple of this, default 1 minute.
Adapted for SEVIRI from: https://stackoverflow.com/questions/3463930/how-to-round-the-minute-of-a-datetime-object-python
satpy.readers.seviri_l1b_hrit module
SEVIRI Level 1.5 HRIT format reader.
Introduction
The seviri_l1b_hrit reader reads and calibrates MSG-SEVIRI L1.5 image data in HRIT format. The format is explained in the MSG Level 1.5 Image Data Format Description. The files are usually named as follows:
H-000-MSG4__-MSG4________-_________-PRO______-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000001___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000002___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000003___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000004___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000005___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000006___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000007___-201903011200-__
H-000-MSG4__-MSG4________-IR_108___-000008___-201903011200-__
H-000-MSG4__-MSG4________-_________-EPI______-201903011200-__
Each image is decomposed into 24 segments (files) for the high-resolution-visible (HRV) channel and 8 segments for other visible (VIS) and infrared (IR) channels. Additionally, there is one prologue and one epilogue file for the entire scan which contain global metadata valid for all channels.
Reader Arguments
Some arguments can be provided to the reader to change its behaviour. These are provided through the Scene instantiation, e.g.:
scn = Scene(filenames=filenames, reader="seviri_l1b_hrit", reader_kwargs={'fill_hrv': False})
To see the full list of arguments that can be provided, look into the documentation of HRITMSGFileHandler.
Compression
This reader accepts compressed HRIT files, ending in C_, as other HRIT readers; see satpy.readers.hrit_base.HRITFileHandler.
This reader also accepts bzipped files with the extension .bz2 for the prologue, epilogue, and segment files.
Nominal start/end time
Warning
Attribute access change: nominal_start_time and nominal_end_time should be accessed using the time_parameters attribute.
nominal_start_time and nominal_end_time are also available directly via start_time and end_time respectively.
Here is an example of the content of the start/end time and time_parameters attributes:
Start time: 2019-08-29 12:00:00
End time: 2019-08-29 12:15:00
time_parameters:
{'nominal_start_time': datetime.datetime(2019, 8, 29, 12, 0),
'nominal_end_time': datetime.datetime(2019, 8, 29, 12, 15),
'observation_start_time': datetime.datetime(2019, 8, 29, 12, 0, 9, 338000),
'observation_end_time': datetime.datetime(2019, 8, 29, 12, 15, 9, 203000)
}
Example:
Here is an example of how to read the data in satpy:
from satpy import Scene
import glob
filenames = glob.glob('data/H-000-MSG4__-MSG4________-*201903011200*')
scn = Scene(filenames=filenames, reader='seviri_l1b_hrit')
scn.load(['VIS006', 'IR_108'])
print(scn['IR_108'])
Output:
<xarray.DataArray (y: 3712, x: 3712)>
dask.array<shape=(3712, 3712), dtype=float32, chunksize=(464, 3712)>
Coordinates:
acq_time (y) datetime64[ns] NaT NaT NaT NaT NaT NaT ... NaT NaT NaT NaT NaT
* x (x) float64 5.566e+06 5.563e+06 5.56e+06 ... -5.566e+06 -5.569e+06
* y (y) float64 -5.566e+06 -5.563e+06 ... 5.566e+06 5.569e+06
Attributes:
orbital_parameters: {'projection_longitude': 0.0, 'projection_latit...
platform_name: Meteosat-11
georef_offset_corrected: True
standard_name: brightness_temperature
raw_metadata: {'file_type': 0, 'total_header_length': 6198, '...
wavelength: (9.8, 10.8, 11.8)
units: K
sensor: seviri
platform_name: Meteosat-11
start_time: 2019-03-01 12:00:09.716000
end_time: 2019-03-01 12:12:42.946000
area: Area ID: some_area_name\\nDescription: On-the-fl...
name: IR_108
resolution: 3000.403165817
calibration: brightness_temperature
polarization: None
level: None
modifiers: ()
ancillary_variables: []
The filenames argument can either be a list of strings, see the example above, or a list of
satpy.readers.FSFile
objects. FSFiles can be used in conjunction with fsspec,
e.g. to handle in-memory data:
import glob
from fsspec.implementations.memory import MemoryFile, MemoryFileSystem
from satpy import Scene
from satpy.readers import FSFile
# In this example, we will make use of `MemoryFile`s in a `MemoryFileSystem`.
memory_fs = MemoryFileSystem()
# Usually, the data already resides in memory.
# For explanatory reasons, we will load the files found with glob in memory,
# and load the scene with FSFiles.
filenames = glob.glob('data/H-000-MSG4__-MSG4________-*201903011200*')
fs_files = []
for fn in filenames:
with open(fn, 'rb') as fh:
fs_files.append(MemoryFile(
fs=memory_fs,
path="{}{}".format(memory_fs.root_marker, fn),
data=fh.read()
))
fs_files[-1].commit() # commit the file to the filesystem
fs_files = [FSFile(open_file) for open_file in fs_files]  # wrap MemoryFiles as FSFiles
# similar to the example above, we pass a list of FSFiles to the `Scene`
scn = Scene(filenames=fs_files, reader='seviri_l1b_hrit')
scn.load(['VIS006', 'IR_108'])
print(scn['IR_108'])
Output:
<xarray.DataArray (y: 3712, x: 3712)>
dask.array<shape=(3712, 3712), dtype=float32, chunksize=(464, 3712)>
Coordinates:
acq_time (y) datetime64[ns] NaT NaT NaT NaT NaT NaT ... NaT NaT NaT NaT NaT
* x (x) float64 5.566e+06 5.563e+06 5.56e+06 ... -5.566e+06 -5.569e+06
* y (y) float64 -5.566e+06 -5.563e+06 ... 5.566e+06 5.569e+06
Attributes:
orbital_parameters: {'projection_longitude': 0.0, 'projection_latit...
platform_name: Meteosat-11
georef_offset_corrected: True
standard_name: brightness_temperature
raw_metadata: {'file_type': 0, 'total_header_length': 6198, '...
wavelength: (9.8, 10.8, 11.8)
units: K
sensor: seviri
platform_name: Meteosat-11
start_time: 2019-03-01 12:00:09.716000
end_time: 2019-03-01 12:12:42.946000
area: Area ID: some_area_name\nDescription: On-the-fl...
name: IR_108
resolution: 3000.403165817
calibration: brightness_temperature
polarization: None
level: None
modifiers: ()
ancillary_variables: []
References
- class satpy.readers.seviri_l1b_hrit.HRITMSGEpilogueFileHandler(filename, filename_info, filetype_info, calib_mode='nominal', ext_calib_coefs=None, include_raw_metadata=False, mda_max_array_size=None, fill_hrv=None, mask_bad_quality_scan_lines=None)[source]
Bases:
HRITMSGPrologueEpilogueBase
SEVIRI HRIT epilogue reader.
Initialize the reader.
- class satpy.readers.seviri_l1b_hrit.HRITMSGFileHandler(filename, filename_info, filetype_info, prologue, epilogue, calib_mode='nominal', ext_calib_coefs=None, include_raw_metadata=False, mda_max_array_size=100, fill_hrv=True, mask_bad_quality_scan_lines=True)[source]
Bases:
HRITFileHandler
SEVIRI HRIT format reader.
Calibration
See satpy.readers.seviri_base.
Padding of the HRV channel
By default, the HRV channel is loaded padded with no-data, returning a full-disk dataset. If you want the original, unpadded data, set fill_hrv to False in the reader_kwargs:
scene = satpy.Scene(filenames, reader='seviri_l1b_hrit', reader_kwargs={'fill_hrv': False})
Metadata
See satpy.readers.seviri_base.
Initialize the reader.
- _get_area_extent(pdict)[source]
Get the area extent of the file.
Until December 2017, the data is shifted by 1.5km SSP North and West against the nominal GEOS projection. Since December 2017 this offset has been corrected. A flag in the data indicates if the correction has been applied. If no correction was applied, adjust the area extent to match the shifted data.
For more information see Section 3.1.4.2 in the MSG Level 1.5 Image Data Format Description. The correction of the area extent is documented in a developer’s memo.
- property _repeat_cycle_duration
Get repeat cycle duration from epilogue.
- property end_time
Get the general end time for this file.
- property nominal_end_time
Get the end time and round it according to scan law.
- property nominal_start_time
Get the start time and round it according to scan law.
- property observation_end_time
Get the observation end time.
- property observation_start_time
Get the observation start time.
- property start_time
Get general start time for this file.
- class satpy.readers.seviri_l1b_hrit.HRITMSGPrologueEpilogueBase(filename, filename_info, filetype_info, hdr_info)[source]
Bases:
HRITFileHandler
Base reader for prologue and epilogue files.
Initialize the file handler for prologue and epilogue files.
- class satpy.readers.seviri_l1b_hrit.HRITMSGPrologueFileHandler(filename, filename_info, filetype_info, calib_mode='nominal', ext_calib_coefs=None, include_raw_metadata=False, mda_max_array_size=None, fill_hrv=None, mask_bad_quality_scan_lines=None)[source]
Bases:
HRITMSGPrologueEpilogueBase
SEVIRI HRIT prologue reader.
Initialize the reader.
- get_earth_radii()[source]
Get earth radii from prologue.
- Returns:
Equatorial radius, polar radius [m]
- property satpos
Get actual satellite position in geodetic coordinates (WGS-84).
Evaluate orbit polynomials at the start time of the scan.
Returns: Longitude [deg east], Latitude [deg north] and Altitude [m]
satpy.readers.seviri_l1b_icare module
Interface to SEVIRI L1B data from ICARE (Lille).
Introduction
The seviri_l1b_icare
reader reads MSG-SEVIRI L1.5 image data in HDF format
that has been produced by the ICARE Data and Services Center.
Data can be accessed via: http://www.icare.univ-lille1.fr
Each SEVIRI timeslot comes as 12 HDF files, one per band. Only the bands of interest need to be passed to the reader; others can be ignored. Filenames follow the format:
GEO_L1B-MSG1_YYYY-MM-DDTHH-MM-SS_G_CHANN_VX-XX.hdf
where YYYY, MM, DD, HH, MM, SS specify the timeslot starting time, CHANN is the channel (e.g. HRV, IR016, WV073) and VX-XX is the processing version number.
Example:
Here is an example of how to read the data in Satpy:
from satpy import Scene
import glob
filenames = glob.glob('data/*2019-03-01T12-00-00*.hdf')
scn = Scene(filenames=filenames, reader='seviri_l1b_icare')
scn.load(['VIS006', 'IR_108'])
print(scn['IR_108'])
Output:
<xarray.DataArray 'array-a1d52b7e19ec5a875e2f038df5b60d7e' (y: 3712, x: 3712)>
dask.array<add, shape=(3712, 3712), dtype=float32, chunksize=(1024, 1024), chunktype=numpy.ndarray>
Coordinates:
crs object +proj=geos +a=6378169.0 +b=6356583.8 +lon_0=0.0 +h=35785831.0 +units=m +type=crs
* y (y) float64 5.566e+06 5.563e+06 5.56e+06 ... -5.566e+06 -5.569e+06
* x (x) float64 -5.566e+06 -5.563e+06 -5.56e+06 ... 5.566e+06 5.569e+06
Attributes:
start_time: 2004-12-29 12:15:00
end_time: 2004-12-29 12:27:44
area: Area ID: geosmsg\nDescription: MSG/SEVIRI low resol...
name: IR_108
resolution: 3000.403165817
calibration: brightness_temperature
polarization: None
level: None
modifiers: ()
ancillary_variables: []
- class satpy.readers.seviri_l1b_icare.SEVIRI_ICARE(filename, filename_info, filetype_info)[source]
Bases:
HDF4FileHandler
SEVIRI L1B handler for HDF4 files.
Init the file handler.
- property alt
Get the altitude.
- property end_time
Get the end time.
- property geoloc
Get the geolocation.
- property projection
Get the projection.
- property projlon
Get the projection longitude.
- property res
Get the resolution.
- property satlon
Get the satellite longitude.
- property sensor_name
Get the sensor name.
- property start_time
Get the start time.
- property zone
Get the zone.
satpy.readers.seviri_l1b_native module
SEVIRI Level 1.5 native format reader.
Introduction
The seviri_l1b_native
reader reads and calibrates MSG-SEVIRI L1.5 image data in binary format. The format is
explained in the MSG Level 1.5 Native Format File Definition. The files are usually named as
follows:
MSG4-SEVI-MSG15-0100-NA-20210302124244.185000000Z-NA.nat
Reader Arguments
Some arguments can be provided to the reader to change its behaviour. These are provided through the Scene instantiation, e.g.:
scn = Scene(filenames=filenames, reader="seviri_l1b_native", reader_kwargs={'fill_disk': True})
To see the full list of arguments that can be provided, look into the documentation of NativeMSGFileHandler.
Example:
Here is an example of how to read the data in Satpy.
NOTE: When loading the data, the orientation of the image can be set with the upper_right_corner keyword. Possible options are NW, NE, SW, SE, or native.
from satpy import Scene
filenames = ['MSG4-SEVI-MSG15-0100-NA-20210302124244.185000000Z-NA.nat']
scn = Scene(filenames=filenames, reader='seviri_l1b_native')
scn.load(['VIS006', 'IR_108'], upper_right_corner='NE')
print(scn['IR_108'])
Output:
<xarray.DataArray 'reshape-969ef97d34b7b0c70ca19f53c6abcb68' (y: 3712, x: 3712)>
dask.array<truediv, shape=(3712, 3712), dtype=float32, chunksize=(928, 3712), chunktype=numpy.ndarray>
Coordinates:
acq_time (y) datetime64[ns] NaT NaT NaT NaT NaT NaT ... NaT NaT NaT NaT NaT
crs object PROJCRS["unknown",BASEGEOGCRS["unknown",DATUM["unknown",...
* y (y) float64 -5.566e+06 -5.563e+06 ... 5.566e+06 5.569e+06
* x (x) float64 5.566e+06 5.563e+06 5.56e+06 ... -5.566e+06 -5.569e+06
Attributes:
orbital_parameters: {'projection_longitude': 0.0, 'projection_latit...
time_parameters: {'nominal_start_time': datetime.datetime(2021, ...
units: K
wavelength: 10.8 µm (9.8-11.8 µm)
standard_name: toa_brightness_temperature
platform_name: Meteosat-11
sensor: seviri
georef_offset_corrected: True
start_time: 2021-03-02 12:30:11.584603
end_time: 2021-03-02 12:45:09.949762
reader: seviri_l1b_native
area: Area ID: msg_seviri_fes_3km\nDescription: MSG S...
name: IR_108
resolution: 3000.403165817
calibration: brightness_temperature
modifiers: ()
_satpy_id: DataID(name='IR_108', wavelength=WavelengthRang...
ancillary_variables: []
References
- class satpy.readers.seviri_l1b_native.ImageBoundaries(header, trailer, mda)[source]
Bases:
object
Collect image boundary information.
Initialize the class.
- _get_hrv_actual_img_bounds()[source]
Get HRV (if not ROI) image boundaries from the ActualL15CoverageHRV information stored in the trailer.
- _get_selected_img_bounds(dataset_id)[source]
Get VISIR and HRV (if ROI) image boundaries from the SelectedRectangle information stored in the header.
- get_img_bounds(dataset_id, is_roi)[source]
Get image line and column boundaries.
- Returns:
Dictionary with the four keys ‘south_bound’, ‘north_bound’, ‘east_bound’ and ‘west_bound’, each containing a list of the respective line/column numbers of the image boundaries.
Lists (rather than scalars) are returned since the HRV data in FES mode contain data from two windows/areas.
- class satpy.readers.seviri_l1b_native.NativeMSGFileHandler(filename, filename_info, filetype_info, calib_mode='nominal', fill_disk=False, ext_calib_coefs=None, include_raw_metadata=False, mda_max_array_size=100)[source]
Bases:
BaseFileHandler
SEVIRI native format reader.
Calibration
See satpy.readers.seviri_base.
Padding channel data to full disk
By providing fill_disk as True in the reader_kwargs, the channel is loaded as full disk, padded with no-data where necessary. This is especially useful for the HRV channel, but can also be used for RSS and ROI data. By default, the original, unpadded data are loaded:
scene = satpy.Scene(filenames, reader='seviri_l1b_native', reader_kwargs={'fill_disk': False})
Metadata
See satpy.readers.seviri_base.
Initialize the reader.
- _add_scanline_acq_time(dataset, dataset_id)[source]
Add scanline acquisition time to the given dataset.
- property _repeat_cycle_duration
Get repeat cycle duration from the trailer.
- property end_time
Get the general end time for this file.
- get_area_def(dataset_id)[source]
Get the area definition of the band.
In general, image data from one window/area is available. For the HRV channel in FES mode, however, data from two windows (‘Lower’ and ‘Upper’) are available. Hence, we collect lists of area-extents and corresponding number of image lines/columns. In case of FES HRV data, two area definitions are computed, stacked and squeezed. For other cases, the lists will only have one entry each, from which a single area definition is computed.
Note that the AreaDefinition area extents returned by this function for Native data will be slightly different compared to the area extents returned by the SEVIRI HRIT reader. This is due to slightly different pixel size values when calculated using the data available in the files. E.g. for the 3 km grid:
Native: data15hd['ImageDescription']['ReferenceGridVIS_IR']['ColumnDirGridStep'] == 3000.4031658172607
HRIT: np.deg2rad(2.**16 / pdict['lfac']) * pdict['h'] == 3000.4032785810186
This results in the Native 3 km full-disk area extents being approx. 20 cm shorter in each direction.
The method for calculating the area extents used by the HRIT reader (CFAC/LFAC mechanism) keeps the highest level of numeric precision and is used as reference by EUM. For this reason, the standard area definitions defined in the areas.yaml file correspond to the HRIT ones.
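As an illustration, the HRIT-style pixel size can be reproduced from the CFAC/LFAC formula quoted above (a sketch; the lfac and h values below are the nominal SEVIRI 3 km grid numbers and should be taken from the file metadata in practice):
import numpy as np
lfac = 13642337   # nominal SEVIRI VIS/IR line scaling factor (assumption)
h = 35785831.0    # height of the satellite above the surface [m]
pixel_size = np.deg2rad(2.**16 / lfac) * h
print(pixel_size)  # approx. 3000.403 m, matching the HRIT value quoted above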
- get_area_extent(dataset_id)[source]
Get the area extent of the file.
Until December 2017, the data is shifted by 1.5km SSP North and West against the nominal GEOS projection. Since December 2017 this offset has been corrected. A flag in the data indicates if the correction has been applied. If no correction was applied, adjust the area extent to match the shifted data.
For more information see Section 3.1.4.2 in the MSG Level 1.5 Image Data Format Description. The correction of the area extent is documented in a developer’s memo.
- is_roi()[source]
Check if data covers a selected region of interest (ROI).
Standard RSS data consists of 3712 columns and 1392 lines, covering the three northernmost segments of the SEVIRI disk. Hence, if the data covers neither the full disk nor the standard RSS region in RSS mode, it is assumed to be ROI data.
- property nominal_end_time
Get the repeat cycle nominal end time from file header and round it to expected nominal time slot.
- property nominal_start_time
Get the repeat cycle nominal start time from file header and round it to expected nominal time slot.
- property observation_end_time
Get observation end time from trailer.
- property observation_start_time
Get observation start time from trailer.
- property satpos
Get actual satellite position in geodetic coordinates (WGS-84).
Evaluate orbit polynomials at the start time of the scan.
Returns: Longitude [deg east], Latitude [deg north] and Altitude [m]
- property start_time
Get general start time for this file.
- class satpy.readers.seviri_l1b_native.Padder(dataset_id, img_bounds, is_full_disk)[source]
Bases:
object
Padding of HRV, RSS and ROI data to full disk.
Initialize the padder.
- satpy.readers.seviri_l1b_native.get_available_channels(header)[source]
Get the available channels from the header information.
satpy.readers.seviri_l1b_native_hdr module
Header and trailer records of SEVIRI native format.
- satpy.readers.seviri_l1b_native_hdr.DEFAULT_15_SECONDARY_PRODUCT_HEADER = {'EastColumnSelectedRectangle': {'Value': 1}, 'NorthLineSelectedRectangle': {'Value': 3712}, 'NumberColumnsHRV': {'Value': 11136}, 'NumberColumnsVISIR': {'Value': 3712}, 'NumberLinesHRV': {'Value': 11136}, 'NumberLinesVISIR': {'Value': 3712}, 'SelectedBandIDs': {'Value': 'XXXXXXXXXXXX'}, 'SouthLineSelectedRectangle': {'Value': 1}, 'WestColumnSelectedRectangle': {'Value': 3712}}
Default secondary product header for files containing all channels.
- class satpy.readers.seviri_l1b_native_hdr.GSDTRecords[source]
Bases:
object
MSG Ground Segment Data Type records.
Reference Document (EUM/MSG/SPE/055): MSG Ground Segment Design Specification (GSDS)
- gp_cpu_address = [('Qualifier_1', <class 'numpy.uint8'>), ('Qualifier_2', <class 'numpy.uint8'>), ('Qualifier_3', <class 'numpy.uint8'>), ('Qualifier_4', <class 'numpy.uint8'>)]
- gp_fac_env
alias of
uint8
- gp_fac_id
alias of
uint8
- gp_pk_header = [('HeaderVersionNo', <class 'numpy.uint8'>), ('PacketType', <class 'numpy.uint8'>), ('SubHeaderType', <class 'numpy.uint8'>), ('SourceFacilityId', <class 'numpy.uint8'>), ('SourceEnvId', <class 'numpy.uint8'>), ('SourceInstanceId', <class 'numpy.uint8'>), ('SourceSUId', <class 'numpy.uint32'>), ('SourceCPUId', [('Qualifier_1', <class 'numpy.uint8'>), ('Qualifier_2', <class 'numpy.uint8'>), ('Qualifier_3', <class 'numpy.uint8'>), ('Qualifier_4', <class 'numpy.uint8'>)]), ('DestFacilityId', <class 'numpy.uint8'>), ('DestEnvId', <class 'numpy.uint8'>), ('SequenceCount', <class 'numpy.uint16'>), ('PacketLength', <class 'numpy.int32'>)]
- gp_pk_sh1 = [('SubHeaderVersionNo', <class 'numpy.uint8'>), ('ChecksumFlag', <class 'bool'>), ('Acknowledgement', (<class 'numpy.uint8'>, 4)), ('ServiceType', <class 'numpy.uint8'>), ('ServiceSubtype', <class 'numpy.uint8'>), ('PacketTime', [('Days', '>u2'), ('Milliseconds', '>u4')]), ('SpacecraftId', <class 'numpy.uint16'>)]
- gp_sc_id
alias of
uint16
- gp_su_id
alias of
uint32
- gp_svce_type
alias of
uint8
- class satpy.readers.seviri_l1b_native_hdr.HritPrologue[source]
Bases:
L15DataHeaderRecord
HRIT Prologue handler.
- class satpy.readers.seviri_l1b_native_hdr.L15DataHeaderRecord[source]
Bases:
object
L15 Data Header handler.
Reference Document (EUM/MSG/ICD/105): MSG Level 1.5 Image Data Format Description
- property celestial_events
Get celestial events data.
- property geometric_processing
Get geometric processing data.
- property image_acquisition
Get image acquisition data.
- property image_description
Get image description data.
- property impf_configuration
Get impf configuration information.
- property radiometric_processing
Get radiometric processing data.
- property satellite_status
Get satellite status data.
- class satpy.readers.seviri_l1b_native_hdr.L15MainProductHeaderRecord[source]
Bases:
object
L15 Main Product header handler.
Reference Document: MSG Level 1.5 Native Format File Definition
- class satpy.readers.seviri_l1b_native_hdr.L15PhData[source]
Bases:
object
L15 Ph handler.
- l15_ph_data = [('Name', 'S30'), ('Value', 'S50')]
- class satpy.readers.seviri_l1b_native_hdr.L15SecondaryProductHeaderRecord[source]
Bases:
object
L15 Secondary Product header handler.
Reference Document: MSG Level 1.5 Native Format File Definition
- class satpy.readers.seviri_l1b_native_hdr.Msg15NativeHeaderRecord[source]
Bases:
object
SEVIRI Level 1.5 header for native-format.
- class satpy.readers.seviri_l1b_native_hdr.Msg15NativeTrailerRecord[source]
Bases:
object
SEVIRI Level 1.5 trailer for native-format.
Reference Document (EUM/MSG/ICD/105): MSG Level 1.5 Image Data Format Description
- property geometric_quality
Get geometric quality record data.
- property image_production_stats
Get image production statistics.
Get navigation extraction data.
- property radiometric_quality
Get radiometric quality record data.
- property seviri_l15_trailer
Get file trailer data.
- property timeliness_and_completeness
Get time and completeness record data.
satpy.readers.seviri_l1b_nc module
SEVIRI netcdf format reader.
- class satpy.readers.seviri_l1b_nc.NCSEVIRIFileHandler(filename, filename_info, filetype_info, ext_calib_coefs=None, mask_bad_quality_scan_lines=True)[source]
Bases:
BaseFileHandler
File handler for NC seviri files.
Calibration
See satpy.readers.seviri_base. Note that there is only one set of calibration coefficients available in the netCDF files and therefore there is no calib_mode argument.
Metadata
See satpy.readers.seviri_base.
Init the file handler.
- _get_calib_coefs(dataset, channel)[source]
Get coefficients for calibration from counts to radiance.
- property _repeat_cycle_duration
Get repeat cycle duration from the metadata.
- property end_time
Get the general end time for this file.
- get_area_def(dataset_id)[source]
Get the area def.
Note that the AreaDefinition area extents returned by this function for NetCDF data will be slightly different compared to the area extents returned by the SEVIRI HRIT reader. This is due to slightly different pixel size values when calculated using the data available in the files. E.g. for the 3 km grid:
NetCDF: self.nc.attrs['vis_ir_column_dir_grid_step'] == 3000.4031658172607
HRIT: np.deg2rad(2.**16 / pdict['lfac']) * pdict['h'] == 3000.4032785810186
This results in the Native 3 km full-disk area extents being approx. 20 cm shorter in each direction.
The method for calculating the area extents used by the HRIT reader (CFAC/LFAC mechanism) keeps the highest level of numeric precision and is used as reference by EUM. For this reason, the standard area definitions defined in the areas.yaml file correspond to the HRIT ones.
- property nc
Read the file.
- property nominal_end_time
Read the repeat cycle nominal end time from metadata and round it to expected nominal time slot.
- property nominal_start_time
Read the repeat cycle nominal start time from metadata and round it to expected nominal time slot.
- property observation_end_time
Get the repeat cycle observation end time from metadata.
- property observation_start_time
Get the repeat cycle observation start time from metadata.
- property satpos
Get actual satellite position in geodetic coordinates (WGS-84).
Evaluate orbit polynomials at the start time of the scan.
Returns: Longitude [deg east], Latitude [deg north] and Altitude [m]
- property start_time
Get general start time for this file.
- class satpy.readers.seviri_l1b_nc.NCSEVIRIHRVFileHandler(filename, filename_info, filetype_info, ext_calib_coefs=None, mask_bad_quality_scan_lines=True)[source]
Bases:
NCSEVIRIFileHandler, SEVIRICalibrationHandler
HRV filehandler.
Init the file handler.
satpy.readers.seviri_l2_bufr module
SEVIRI L2 BUFR format reader.
References
EUMETSAT Product Navigator https://navigator.eumetsat.int/
- class satpy.readers.seviri_l2_bufr.SeviriL2BufrFileHandler(filename, filename_info, filetype_info, with_area_definition=False, rectification_longitude='default', **kwargs)[source]
Bases:
BaseFileHandler
File handler for SEVIRI L2 BUFR products.
Loading data with AreaDefinition
By providing the with_area_definition as True in the reader_kwargs, the dataset is loaded with an AreaDefinition using a standardized AreaDefinition in areas.yaml. By default, the dataset will be loaded with a SwathDefinition, i.e. similar to how the data are stored in the BUFR file:
scene = satpy.Scene(filenames, reader='seviri_l2_bufr',
                    reader_kwargs={'with_area_definition': False})
Defining the dataset rectification longitude
The BUFR data were originally extracted from a rectified two-dimensional grid with a given central longitude (typically the sub-satellite point). This information is not available in the file itself nor the filename (for files from the EUMETSAT archive). Also, it cannot be reliably derived from all datasets themselves. Hence, the rectification longitude can be defined by the user by providing rectification_longitude in the reader_kwargs:
scene = satpy.Scene(filenames, reader='seviri_l2_bufr',
                    reader_kwargs={'rectification_longitude': 0.0})
If not provided, default values applicable to the operational grids of the respective SEVIRI instruments will be used.
Initialise the file handler for SEVIRI L2 BUFR data.
- _construct_area_def(dataset_id)[source]
Construct a standardized AreaDefinition based on satellite, instrument, resolution and sub-satellite point.
- Returns:
A pyresample AreaDefinition object containing the area definition.
- Return type:
AreaDefinition
- property end_time
Return the repeat cycle end time.
- get_dataset(dataset_id, dataset_info)[source]
Create dataset.
Load data from BUFR file using the BUFR key in dataset_info and create the dataset with or without an AreaDefinition.
- property platform_name
Return spacecraft name.
- property ssp_lon
Return subsatellite point longitude.
- property start_time
Return the repeat cycle start time.
satpy.readers.seviri_l2_grib module
Reader for the SEVIRI L2 products in GRIB2 format.
References
FM 92 GRIB Edition 2 https://www.wmo.int/pages/prog/www/WMOCodes/Guides/GRIB/GRIB2_062006.pdf EUMETSAT Product Navigator https://navigator.eumetsat.int/
- class satpy.readers.seviri_l2_grib.SeviriL2GribFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Reader class for SEVIRI L2 products in GRIB format.
Read the global attributes and prepare for dataset reading.
- _get_attributes()[source]
Create a dictionary of attributes to be added to the dataset.
- Returns:
- A dictionary of parameter attributes:
ssp_lon: longitude of subsatellite point
sensor: name of sensor
platform_name: name of the platform
- Return type:
dict
- static _get_from_msg(gid, key)[source]
Get a value from the GRIB message based on the key, return None if missing.
- Parameters:
gid – The ID of the GRIB message.
key – The key of the required attribute.
- Returns:
The retrieved attribute or None if the key is missing.
- _get_proj_area(gid)[source]
Compute the dictionary with the projection and area definition from a GRIB message.
- Parameters:
gid – The ID of the GRIB message.
- Returns:
- A tuple of two dictionaries for the projection and the area definition.
- pdict:
a: Earth major axis [m]
b: Earth minor axis [m]
h: Height over surface [m]
ssp_lon: longitude of subsatellite point [deg]
nlines: number of lines
ncols: number of columns
a_name: name of the area
a_desc: description of the area
p_id: id of the projection
- area_dict:
center_point: coordinate of the center point
north: coordinate of the north limit
east: coordinate of the east limit
west: coordinate of the west limit
south: coordinate of the south limit
- Return type:
tuple
- _get_xarray_from_msg(gid)[source]
Read the values from the GRIB message and return a DataArray object.
- Parameters:
gid – The ID of the GRIB message.
- Returns:
The array containing the retrieved values.
- Return type:
DataArray
- _read_attributes(gid)[source]
Read the parameter attributes from the message and create the projection and area dictionaries.
- static _scale_earth_axis(data)[source]
Scale Earth axis data to make sure the values match the expected unit [m].
The earthMinorAxis value stored in the aerosol over sea product is scaled incorrectly by a factor of 1e8. This method provides a flexible temporary workaround by making sure that all earth axis values are scaled such that they are on the order of millions of meters, as expected by the reader. As soon as the scaling issue has been resolved by EUMETSAT this workaround can be removed.
- property end_time
Return the sensing end time.
- get_dataset(dataset_id, dataset_info)[source]
Get dataset using the parameter_number key in dataset_info.
In a previous version of the reader, the attributes (nrows, ncols, ssp_lon) and projection information (pdict and area_dict) were computed while initializing the file handler, and the code would break out of the while-loop below as soon as the correct parameter_number was found. This has now been revised because the reader would sometimes give corrupt information about the number of messages in the file and the dataset dimensions within a given message if the file was only partly read (i.e. not looping over all messages).
- property start_time
Return the sensing start time.
satpy.readers.sgli_l1b module
GCOM-C SGLI L1b reader.
GCOM-C has an imager instrument: SGLI https://www.wmo-sat.info/oscar/instruments/view/505
Test data is available here: https://suzaku.eorc.jaxa.jp/GCOM_C/data/product_std.html
The live data is available here: https://gportal.jaxa.jp/gpr/search?tab=1
The format description is here: https://gportal.jaxa.jp/gpr/assets/mng_upload/GCOM-C/SGLI_Level1_Product_Format_Description_en.pdf
- class satpy.readers.sgli_l1b.HDF5SGLI(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
File handler for the SGLI l1b data.
Initialize the filehandler.
- property end_time
Get the end time.
- interpolate_spherical(azimuthal_angle, polar_angle, resampling_interval)[source]
Interpolate spherical coordinates.
- property start_time
Get the start time.
satpy.readers.slstr_l1b module
SLSTR L1b reader.
- class satpy.readers.slstr_l1b.NCSLSTR1B(filename, filename_info, filetype_info, user_calibration=None)[source]
Bases:
BaseFileHandler
Filehandler for l1 SLSTR data.
By default, the calibration factors recommended by EUMETSAT are applied. This is required as the SLSTR VIS channels produce slightly incorrect radiances that require adjustment. Satpy uses the radiance corrections in S3.PN-SLSTR-L1.08, checked 11/03/2022. User-supplied coefficients can be passed via the user_calibration kwarg. This should be a dict of channel names (such as S1_nadir, S8_oblique).
For example:
calib_dict = {'S1_nadir': 1.12}
scene = satpy.Scene(filenames, reader='slstr-l1b',
                    reader_kwargs={'user_calibration': calib_dict})
This will multiply S1 nadir radiances by 1.12.
Initialize the SLSTR l1 data filehandler.
- _apply_radiance_adjustment(radiances)[source]
Adjust SLSTR radiances with default or user supplied values.
- property end_time
Get the end time.
- property start_time
Get the start time.
- class satpy.readers.slstr_l1b.NCSLSTRAngles(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Filehandler for angles.
Initialize the angles reader.
- property end_time
Get the end time.
- property start_time
Get the start time.
- class satpy.readers.slstr_l1b.NCSLSTRFlag(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
File handler for flags.
Initialize the flag reader.
- property end_time
Get the end time.
- property start_time
Get the start time.
satpy.readers.smos_l2_wind module
SMOS L2 wind Reader.
Data can be found here after registering: https://www.smosstorm.org/Data2/SMOS-NRT-wind-Products-access
Format documentation is available at the same site after registering: SMOS_WIND_DS_PDD_20191107_signed.pdf
- class satpy.readers.smos_l2_wind.SMOSL2WINDFileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False)[source]
Bases:
NetCDF4FileHandler
File handler for SMOS L2 wind netCDF files.
Initialize object.
- available_datasets(configured_datasets=None)[source]
Automatically determine datasets provided by this file.
- property end_time
Get end time.
- property platform_name
Get platform.
- property platform_shortname
Get platform shortname.
- property start_time
Get start time.
satpy.readers.tropomi_l2 module
Interface to TROPOMI L2 Reader.
The TROPOspheric Monitoring Instrument (TROPOMI) is the satellite instrument on board the Copernicus Sentinel-5 Precursor satellite. It measures key atmospheric trace gases, such as ozone, nitrogen oxides, sulfur dioxide, carbon monoxide, methane, and formaldehyde.
Level 2 data products are available via the Copernicus Open Access Hub. For more information visit the following URL: http://www.tropomi.eu/data-products/level-2-products
- class satpy.readers.tropomi_l2.TROPOMIL2FileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False)[source]
Bases:
NetCDF4FileHandler
File handler for TROPOMI L2 netCDF files.
Initialize object.
- _iterate_over_dataset_contents(handled_variables, shape)[source]
Iterate over dataset contents.
This is where we dynamically add new datasets. We sift through all groups and variables, looking for data matching the geolocation bounds.
- available_datasets(configured_datasets=None)[source]
Automatically determine datasets provided by this file.
- property end_time
Get end time.
- property platform_shortname
Get platform shortname.
- prepare_geo(bounds_data)[source]
Prepare lat/lon bounds for pcolormesh.
lat/lon bounds are ordered in the following way:
3----2
|    |
0----1
Extend longitudes and latitudes with one element to support “pcolormesh”:
(X[i+1, j], Y[i+1, j])           (X[i+1, j+1], Y[i+1, j+1])
                      +--------+
                      | C[i,j] |
                      +--------+
(X[i, j], Y[i, j])               (X[i, j+1], Y[i, j+1])
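A sketch of how such extended bounds are consumed by matplotlib (dummy arrays; pcolormesh expects cell-corner X/Y grids with one more row and column than the data):
import numpy as np
import matplotlib.pyplot as plt
# (n+1, m+1) cell-corner bounds for an (n, m) data array
lon_b, lat_b = np.meshgrid(np.linspace(0, 10, 11), np.linspace(40, 50, 11))
values = np.random.rand(10, 10)
plt.pcolormesh(lon_b, lat_b, values)
plt.colorbar()
plt.show()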
- property sensor
Get sensor.
- property sensor_names
Get sensor set.
- property start_time
Get start time.
- property time_coverage_end
Get time_coverage_end.
- property time_coverage_start
Get time_coverage_start.
satpy.readers.utils module
Helper functions for satpy readers.
- satpy.readers.utils._lonlat_from_geos_angle(x, y, geos_area)[source]
Get lons and lats from x, y in projection coordinates.
- satpy.readers.utils._unzip_FSFile(filename: FSFile, prefix=None)[source]
Open and unzip a remote FSFile ending with ‘bz2’.
- Parameters:
filename – The FSFile to unzip.
prefix (str, optional) – If the file is one of many segments of data, prefix the random filename for correct sorting. This is normally the segment number.
- Returns:
Temporary filename path for decompressed file or None.
- satpy.readers.utils._unzip_local_file(filename: str, prefix=None)[source]
Unzip the file ending with ‘bz2’, using pbzip2 if installed, otherwise the bz2 module.
- Parameters:
filename – The file to unzip.
prefix (str, optional) – If the file is one of many segments of data, prefix the random filename for correct sorting. This is normally the segment number.
- Returns:
Temporary filename path for decompressed file or None.
- satpy.readers.utils.apply_earthsun_distance_correction(reflectance, utc_date=None)[source]
Correct reflectance data to account for changing Earth-Sun distance.
- satpy.readers.utils.apply_rad_correction(data, slope, offset)[source]
Apply GSICS-like correction factors to radiance data.
- satpy.readers.utils.bbox(img)[source]
Find the bounding box around nonzero elements in the given array.
Copied from https://stackoverflow.com/a/31402351/5703449.
- Returns:
rowmin, rowmax, colmin, colmax
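For example (a minimal sketch):
import numpy as np
from satpy.readers.utils import bbox
img = np.zeros((5, 5))
img[1:3, 2:4] = 1  # nonzero block spanning rows 1-2 and columns 2-3
print(bbox(img))   # (1, 2, 2, 3): rowmin, rowmax, colmin, colmax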
- satpy.readers.utils.generic_open(filename, *args, **kwargs)[source]
Context manager for opening either a regular file or a bzip2 file.
Returns a file-like object.
- satpy.readers.utils.get_array_date(scn_data, utc_date=None)[source]
Get start time from a channel data array.
- satpy.readers.utils.get_earth_radius(lon, lat, a, b)[source]
Compute radius of the earth ellipsoid at the given longitude and latitude.
- Parameters:
lon – Geodetic longitude (degrees)
lat – Geodetic latitude (degrees)
a – Semi-major axis of the ellipsoid (meters)
b – Semi-minor axis of the ellipsoid (meters)
- Returns:
Earth Radius (meters)
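For example, using the WGS-84 ellipsoid axes (a minimal sketch):
from satpy.readers.utils import get_earth_radius
# WGS-84 semi-major and semi-minor axes in meters
radius = get_earth_radius(lon=0.0, lat=45.0, a=6378137.0, b=6356752.314245)
print(radius)  # somewhere between the polar and equatorial radii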
- satpy.readers.utils.get_geostationary_angle_extent(geos_area)[source]
Get the max earth (vs space) viewing angles in x and y.
- satpy.readers.utils.get_geostationary_bounding_box(geos_area, nb_points=50)[source]
Get the bbox in lon/lats of the valid pixels inside geos_area.
- Parameters:
geos_area – The geostationary area to analyse.
nb_points – Number of points on the polygon
- satpy.readers.utils.get_geostationary_mask(area, chunks=None)[source]
Compute a mask of the earth’s shape as seen by a geostationary satellite.
- Parameters:
area (pyresample.geometry.AreaDefinition) – Corresponding area definition
chunks (int or tuple) – Chunk size for the 2D array that is generated.
- Returns:
Boolean mask, True inside the earth’s shape, False outside.
- satpy.readers.utils.get_sub_area(area, xslice, yslice)[source]
Apply slices to the area_extent and size of the area.
- satpy.readers.utils.get_user_calibration_factors(band_name, correction_dict)[source]
Retrieve radiance correction factors from user-supplied dict.
- satpy.readers.utils.np2str(value)[source]
Convert an numpy.string_ to str.
- Parameters:
value (ndarray) – scalar or 1-element numpy array to convert
- Raises:
ValueError – if value is an array larger than one element, is not of type numpy.string_, or is not a numpy array
- satpy.readers.utils.reduce_mda(mda, max_size=100)[source]
Recursively remove arrays with more than max_size elements from the given metadata dictionary.
- satpy.readers.utils.remove_earthsun_distance_correction(reflectance, utc_date=None)[source]
Remove the sun-earth distance correction.
- satpy.readers.utils.unzip_context(filename)[source]
Context manager for decompressing a .bz2 file on the fly.
Uses unzip_file. Removes the uncompressed file on exit of the context manager.
Returns: the filename of the uncompressed file or of the original file if it was not compressed.
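A usage sketch (the compressed filename is hypothetical):
from satpy.readers.utils import unzip_context
with unzip_context('segment-000001.bz2') as fname:
    # fname points to the temporary decompressed file,
    # which is removed again on exit of the context manager
    print(fname)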
- satpy.readers.utils.unzip_file(filename: str | FSFile, prefix=None)[source]
Unzip the local/remote file ending with ‘bz2’.
- Parameters:
filename – The local/remote file to unzip.
prefix (str, optional) – If the file is one of many segments of data, prefix the random filename for correct sorting. This is normally the segment number.
- Returns:
Temporary filename path for decompressed file or None.
satpy.readers.vaisala_gld360 module
Vaisala Global Lightning Dataset 360 reader.
Vaisala Global Lightning Dataset GLD360 is a data-as-a-service offering that provides real-time lightning data for accurate and early detection and tracking of severe weather. The data provided are generated by a Vaisala-owned-and-operated worldwide lightning detection sensor network.
References: - [GLD360] https://www.vaisala.com/en/products/data-subscriptions-and-reports/data-sets/gld360
- class satpy.readers.vaisala_gld360.VaisalaGLD360TextFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
ASCII reader for Vaisala GLD360 data.
Initialize VaisalaGLD360TextFileHandler.
- property end_time
Get end time.
- property start_time
Get start time.
satpy.readers.vii_base_nc module
EUMETSAT EPS-SG Visible/Infrared Imager (VII) readers base class.
- class satpy.readers.vii_base_nc.ViiNCBaseFileHandler(filename, filename_info, filetype_info, orthorect=False)[source]
Bases:
NetCDF4FileHandler
Base reader class for VII products in netCDF format.
Prepare the class for dataset reading.
- _get_global_attributes()[source]
Create a dictionary of global attributes to be added to all datasets.
- static _perform_geo_interpolation(longitude, latitude)[source]
Perform the interpolation of geographic coordinates from tie points to pixel points.
- Parameters:
longitude – xarray DataArray containing the longitude dataset to interpolate.
latitude – xarray DataArray containing the latitude dataset to interpolate.
- Returns:
- tuple of arrays containing the interpolated values, all the original metadata
and the updated dimension names.
- static _perform_interpolation(variable)[source]
Perform the interpolation from tie points to pixel points.
- Parameters:
variable – xarray DataArray containing the dataset to interpolate.
- Returns:
- array containing the interpolated values, all the original metadata
and the updated dimension names.
- Return type:
DataArray
- property end_time
Get observation end time.
- property sensor
Return sensor.
- property spacecraft_name
Return spacecraft name.
- property ssp_lon
Return subsatellite point longitude.
- property start_time
Get observation start time.
satpy.readers.vii_l1b_nc module
EUMETSAT EPS-SG Visible/Infrared Imager (VII) Level 1B products reader.
The vii_l1b_nc
reader reads and calibrates EPS-SG VII L1b image data in netCDF format. The format is explained
in the EPS-SG VII Level 1B Product Format Specification V4A.
This version is applicable for the vii test data V2 to be released in Jan 2022.
- class satpy.readers.vii_l1b_nc.ViiL1bNCFileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
ViiNCBaseFileHandler
Reader class for VII L1B products in netCDF format.
Read the calibration data and prepare the class for dataset reading.
- static _calibrate_bt(radiance, cw, a, b)[source]
Perform the calibration to brightness temperature.
- Parameters:
radiance – numpy ndarray containing the radiance values.
cw – center wavelength [μm].
a – temperature coefficient [-].
b – temperature coefficient [K].
- Returns:
array containing the calibrated brightness temperature values.
- Return type:
numpy ndarray
- static _calibrate_refl(radiance, angle_factor, isi)[source]
Perform the calibration to reflectance.
- Parameters:
radiance – numpy ndarray containing the radiance values.
angle_factor – numpy ndarray containing the inverse of cosine of solar zenith angle [-].
isi – integrated solar irradiance [W/(m2 * μm)].
- Returns:
array containing the calibrated reflectance values.
- Return type:
numpy ndarray
- _perform_calibration(variable, dataset_info)[source]
Perform the calibration.
- Parameters:
variable – xarray DataArray containing the dataset to calibrate.
dataset_info – dictionary of information about the dataset.
- Returns:
array containing the calibrated values and all the original metadata.
- Return type:
DataArray
- _perform_orthorectification(variable, orthorect_data_name)[source]
Perform the orthorectification.
- Parameters:
variable – xarray DataArray containing the dataset to correct for orthorectification.
orthorect_data_name – name of the orthorectification correction data in the product.
- Returns:
array containing the corrected values and all the original metadata.
- Return type:
DataArray
satpy.readers.vii_l2_nc module
EUMETSAT EPS-SG Visible/Infrared Imager (VII) Level 2 products reader.
- class satpy.readers.vii_l2_nc.ViiL2NCFileHandler(filename, filename_info, filetype_info, orthorect=False)[source]
Bases:
ViiNCBaseFileHandler
Reader class for VII L2 products in netCDF format.
Prepare the class for dataset reading.
- _perform_orthorectification(variable, orthorect_data_name)[source]
Perform the orthorectification.
- Parameters:
variable – xarray DataArray containing the dataset to correct for orthorectification.
orthorect_data_name – name of the orthorectification correction data in the product.
- Returns:
array containing the corrected values and all the original metadata.
- Return type:
DataArray
satpy.readers.vii_utils module
Utilities for the management of VII products.
satpy.readers.viirs_atms_sdr_base module
Common utilities for reading VIIRS and ATMS SDR data.
- class satpy.readers.viirs_atms_sdr_base.JPSS_SDR_FileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
HDF5FileHandler
Base class for reading JPSS VIIRS & ATMS SDR HDF5 Files.
Initialize file handler.
- available_datasets(configured_datasets=None)[source]
Generate dataset info and their availability.
See
satpy.readers.file_handlers.BaseFileHandler.available_datasets()
for details.
- property end_orbit_number
Get end orbit number.
- property end_time
Get end time.
- static expand_single_values(var, scans)[source]
Expand single valued variable to full scan lengths.
- property platform_name
Get platform name.
- scale_data_to_specified_unit(data, dataset_id, ds_info)[source]
Get scale and offset factors and convert/scale data to the given physical unit.
- scale_swath_data(data, scaling_factors, dataset_group)[source]
Scale swath data using scaling factors and offsets.
Multi-granule (a.k.a. aggregated) files will have more than the usual two values.
- property sensor_name
Get sensor name.
- property start_orbit_number
Get start orbit number.
- property start_time
Get start time.
satpy.readers.viirs_compact module
Compact VIIRS format.
This is a reader for the Compact VIIRS format shipped on EUMETCast for the VIIRS SDR. The format is compressed in multiple ways, notably by shipping only tie points for geographical data. The interpolation of this data is done using dask operations, so it should be relatively performant.
For more information on this format, the reader can refer to the Compact VIIRS SDR Product Format User Guide that can be found on this EARS page.
- class satpy.readers.viirs_compact.VIIRSCompactFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
A file handler class for VIIRS compact format.
Initialize the reader.
- property end_time
Get the end time.
Expand angle and navigation datasets.
- property expansion_coefs
Compute the expansion coefficients.
Generate the navigation datasets.
- property start_time
Get the start time.
- satpy.readers.viirs_compact._interpolate_data(data, corner_coefficients, scans)[source]
Interpolate the data using the provided coefficients.
- satpy.readers.viirs_compact.convert_from_angles(azi, zen)[source]
Convert the angles to cartesian coordinates.
- satpy.readers.viirs_compact.convert_to_angles(x, y, z)[source]
Convert the cartesian coordinates to angles.
- satpy.readers.viirs_compact.expand(data, coefs, scans, scan_size)[source]
Perform the expansion in numpy domain.
satpy.readers.viirs_edr module
VIIRS NOAA enterprise EDR product reader.
This module defines the VIIRSJRRFileHandler
file handler, to
be used for reading VIIRS EDR products generated by the NOAA enterprise
suite, which are downloadable via NOAA CLASS or on NOAA’s AWS buckets.
A wide variety of such products exist and, at present, only a subset are supported.
Cloud mask: JRR-CloudMask_v2r3_j01_s202112250807275_e202112250808520_c202112250837300.nc
Cloud products: JRR-CloudHeight_v2r3_j01_s202112250807275_e202112250808520_c202112250837300.nc
Aerosol detection: JRR-ADP_v2r3_j01_s202112250807275_e202112250808520_c202112250839550.nc
Aerosol optical depth: JRR-AOD_v2r3_j01_s202112250807275_e202112250808520_c202112250839550.nc
Surface reflectance: SurfRefl_v1r1_j01_s202112250807275_e202112250808520_c202112250845080.nc
Land Surface Temperature: LST_v2r0_npp_s202307241724558_e202307241726200_c202307241854058.nc
All products use the same base reader viirs_edr
and can be read through satpy with:
import satpy
import glob
filenames = glob.glob('JRR-ADP*.nc')
scene = satpy.Scene(filenames, reader='viirs_edr')
scene.load(['smoke_concentration'])
Note
Multiple products contain datasets with the same name! For example, both the cloud mask and aerosol detection files contain a cloud mask, but these are not identical. For clarity, the aerosol file's cloud mask is named cloud_mask_adp in this reader.
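For example, to load the ADP-specific cloud mask (a sketch; the dataset name follows the note above):
import glob
import satpy
filenames = glob.glob('JRR-ADP*.nc')
scene = satpy.Scene(filenames, reader='viirs_edr')
scene.load(['cloud_mask_adp'])  # the ADP file's cloud mask, distinct from JRR-CloudMask's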
Vegetation Indexes
The NDVI and EVI products can be loaded from CSPP-produced Surface Reflectance
files. By default, these products are filtered based on the Surface Reflectance
Quality Flags. This is used to remove/mask pixels in certain cloud or water
regions. This behavior can be disabled by providing the reader keyword argument
filter_veg
and setting it to False
. For example:
scene = satpy.Scene(filenames, reader='viirs_edr', reader_kwargs={"filter_veg": False})
AOD Filtering
The AOD (Aerosol Optical Depth) product can optionally be filtered based on Quality Control (QC) values in the file. By default no filtering is performed. Filtering is enabled by providing the aod_qc_filter keyword argument and specifying the maximum value of the QCAll variable to include (values above it are masked). For example:
scene = satpy.Scene(filenames, reader='viirs_edr', reader_kwargs={"aod_qc_filter": 1})
will only preserve AOD550 values where the quality is 0 (“high”) or 1 (“medium”). At the time of writing the QCAll variable uses 0 (“high”), 1 (“medium”), 2 (“low”), and 3 (“no retrieval”).
- class satpy.readers.viirs_edr.VIIRSAODHandler(*args, aod_qc_filter: int | None = None, **kwargs)[source]
Bases:
VIIRSJRRFileHandler
File handler for AOD data files.
Initialize file handler and keep track of QC filtering.
- class satpy.readers.viirs_edr.VIIRSJRRFileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
BaseFileHandler
NetCDF4 reader for VIIRS EDR products.
Initialize the geo filehandler.
- available_datasets(configured_datasets=None)[source]
Get information of available datasets in this file.
- Parameters:
configured_datasets (list) – Series of (bool or None, dict) in the same way as is returned by this method (see below). The bool is whether the dataset is available from at least one of the current file handlers. It can also be None if no file handler before us knows how to handle it. The dictionary is existing dataset metadata. The dictionaries are typically provided from a YAML configuration file and may be modified, updated, or used as a “template” for additional available datasets. This argument could be the result of a previous file handler’s implementation of this method.
- Returns:
Iterator of (bool or None, dict) pairs where dict is the dataset’s metadata. If the dataset is available in the current file type then the boolean value should be True, False if we know about the dataset but it is unavailable, or None if this file object is not responsible for it.
- property end_time
Get last date/time when observations were recorded.
- property platform_name
Get platform name.
- rows_per_scans(data_arr: DataArray) → int[source]
Get number of array rows per instrument scan based on data resolution.
- property start_time
Get first date/time when observations were recorded.
- class satpy.readers.viirs_edr.VIIRSLSTHandler(*args, **kwargs)[source]
Bases:
VIIRSJRRFileHandler
File handler to handle LST file scale factor and offset weirdness.
Initialize the file handler and unscale necessary variables.
- _manual_scalings = {'Satellite_Azimuth_Angle': ('AZI_ScaleFact', 'AZI_Offset'), 'VLST': ('LST_ScaleFact', 'LST_Offset'), 'emis_bbe': ('LSE_ScaleFact', 'LSE_Offset'), 'emis_m15': ('LSE_ScaleFact', 'LSE_Offset'), 'emis_m16': ('LSE_ScaleFact', 'LSE_Offset')}
- class satpy.readers.viirs_edr.VIIRSSurfaceReflectanceWithVIHandler(*args, filter_veg: bool = True, **kwargs)[source]
Bases:
VIIRSJRRFileHandler
File handler for surface reflectance files with optional vegetation indexes.
Initialize file handler and keep track of vegetation index filtering.
satpy.readers.viirs_edr_active_fires module
VIIRS Active Fires reader.
This module implements readers for VIIRS Active Fires NetCDF and ASCII files.
- class satpy.readers.viirs_edr_active_fires.VIIRSActiveFiresFileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None)[source]
Bases:
NetCDF4FileHandler
NetCDF4 reader for VIIRS Active Fires.
Open and perform initial investigation of NetCDF file.
- property end_time
Get last date/time when observations were recorded.
- get_dataset(dsid, dsinfo)[source]
Get requested data as DataArray.
- Parameters:
dsid – Dataset ID
param2 – Dataset Information
- Returns:
Data
- Return type:
Dask DataArray
- property platform_name
Name of platform/satellite for this file.
- property sensor_name
Name of sensor for this file.
- property start_time
Get first date/time when observations were recorded.
- class satpy.readers.viirs_edr_active_fires.VIIRSActiveFiresTextFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
ASCII reader for VIIRS Active Fires.
Make sure filepath is valid and then reads data into a Dask DataFrame.
- Parameters:
filename – Filename
filename_info – Filename information
filetype_info – Filetype information
- property end_time
Get last date/time when observations were recorded.
- property start_time
Get first date/time when observations were recorded.
satpy.readers.viirs_edr_flood module
Interface to VIIRS flood product.
- class satpy.readers.viirs_edr_flood.VIIRSEDRFlood(filename, filename_info, filetype_info)[source]
Bases:
HDF4FileHandler
VIIRS EDR Flood-product handler for HDF4 files.
Open file and collect information.
- property end_time
Get end time.
- property platform_name
Get platform name.
- property sensor_name
Get sensor name.
- property start_time
Get start time.
satpy.readers.viirs_l1b module
Interface to VIIRS L1B format.
- class satpy.readers.viirs_l1b.VIIRSL1BFileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False)[source]
Bases:
NetCDF4FileHandler
VIIRS L1B File Reader.
Initialize object.
- available_datasets(configured_datasets=None)[source]
Generate dataset info and their availability.
See
satpy.readers.file_handlers.BaseFileHandler.available_datasets()
for details.
- property end_orbit_number
Get end orbit number.
- property end_time
Get end time.
- property platform_name
Get platform name.
- property sensor_name
Get sensor name.
- property start_orbit_number
Get start orbit number.
- property start_time
Get start time.
satpy.readers.viirs_l2 module
Interface to VIIRS L2 format.
This reader implements support for L2 files generated from the VIIRS instrument on the SNPP and NOAA satellites. The intent of this reader is to be able to reproduce images from L2 layers in NASA Worldview with identical colormaps.
Currently a subset of four of these layers is supported:
1. Deep Blue Aerosol Angstrom Exponent (Land and Ocean)
2. Clear Sky Confidence
3. Cloud Top Height
4. Deep Blue Aerosol Optical Thickness (Land and Ocean)
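A sketch of loading one of these layers (the filename pattern and the dataset name are assumptions; check what your files actually provide):
import glob
from satpy import Scene
filenames = glob.glob('CLDMSK_L2_VIIRS*.nc')  # hypothetical L2 filenames
scn = Scene(filenames=filenames, reader='viirs_l2')
print(scn.available_dataset_names())  # list the datasets the files provide
scn.load(['Clear_Sky_Confidence'])    # assumed name for layer 2 above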
- class satpy.readers.viirs_l2.VIIRSL2FileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False)[source]
Bases:
NetCDF4FileHandler
NetCDF File Handler for VIIRS L2 Products.
Initialize object.
- available_datasets(configured_datasets=None)[source]
Generate dataset info and their availablity.
See
satpy.readers.file_handlers.BaseFileHandler.available_datasets()
for details.
- property end_orbit_number
Get end orbit number.
- property end_time
Get end time.
- property platform_name
Get platform name.
- property sensor_name
Get sensor name.
- property start_orbit_number
Get start orbit number.
- property start_time
Get start time.
satpy.readers.viirs_sdr module
Interface to VIIRS SDR format.
This reader implements support for VIIRS SDR files as produced by CSPP and CLASS. It comprises two parts:
A subclass of the YAMLFileReader class to allow handling all the files
A filehandler class to implement the actual reading
Format documentation:
- class satpy.readers.viirs_sdr.VIIRSSDRFileHandler(filename, filename_info, filetype_info, use_tc=None, **kwargs)[source]
Bases:
JPSS_SDR_FileHandler
VIIRS SDR HDF5 File Reader.
Initialize file handler.
- get_dataset(dataset_id, ds_info)[source]
Get the dataset corresponding to dataset_id.
The size of the returned DataArray depends on the number of scans actually sensed, not necessarily the regular 768 scanlines that the file contains for each granule. To that end, the number of scans for each granule is read from:
Data_Products/...Gran_x/N_Number_Of_Scans
.
- class satpy.readers.viirs_sdr.VIIRSSDRReader(config_files, use_tc=None, **kwargs)[source]
Bases:
FileYAMLReader
Custom file reader for finding VIIRS SDR geolocation at runtime.
Initialize file reader and adjust geolocation preferences.
- Parameters:
config_files (iterable) – yaml config files passed to base class
use_tc (boolean) – If True use the terrain corrected files. If False, switch to non-TC files. If None (default), use TC if available, non-TC otherwise.
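For example, to force non-terrain-corrected geolocation (a sketch; the SDR filename patterns are hypothetical):
import glob
from satpy import Scene
# hypothetical SDR granules plus their geolocation files
filenames = glob.glob('SVI01*.h5') + glob.glob('GIMGO*.h5')
scn = Scene(filenames=filenames, reader='viirs_sdr',
            reader_kwargs={'use_tc': False})  # prefer non-TC geolocation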
- _abc_impl = <_abc._abc_data object>
- _get_coordinates_for_dataset_key(dsid)[source]
Get the coordinate dataset keys for dsid.
Wraps the base class method in order to load geolocation files from the geo reference attribute in the datasets file.
- _load_filenames_from_geo_ref(dsid)[source]
Load filenames from the N_GEO_Ref attribute of a dataset’s file.
- satpy.readers.viirs_sdr._get_invalid_info(granule_data)[source]
Get a detailed report of the missing data.
N/A: not applicable
MISS: required value missing at time of processing
OBPT: onboard pixel trim (overlapping/bow-tie pixel removed during SDR processing)
OGPT: on-ground pixel trim (overlapping/bow-tie pixel removed during EDR processing)
ERR: error occurred during processing / non-convergence
ELINT: ellipsoid intersect failed / instrument line-of-sight does not intersect the Earth’s surface
VDNE: value does not exist / processing algorithm did not execute
SOUB: scaled out-of-bounds / solution not within allowed range
satpy.readers.viirs_vgac_l1c_nc module
Reading VIIRS VGAC data.
- class satpy.readers.viirs_vgac_l1c_nc.VGACFileHandler(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Reader for VGAC data.
Init the file handler.
- property end_time
Get the end time.
- fix_radiances_not_in_percent(data)[source]
Scale radiances to percent. This was not done in the first version of the data.
- property start_time
Get the start time.
satpy.readers.virr_l1b module
Interface to VIRR (Visible and Infra-Red Radiometer) level 1b format.
The file format is HDF5. Important attributes:
Latitude
Longitude
SolarZenith
EV_Emissive
EV_RefSB
Emissive_Radiance_Offsets
Emissive_Radiance_Scales
RefSB_Cal_Coefficients
RefSB_Effective_Wavelength
Emmisive_Centroid_Wave_Number
Supported satellites:
FY-3B and FY-3C.
For more information:
- class satpy.readers.virr_l1b.VIRR_L1B(filename, filename_info, filetype_info)[source]
Bases:
HDF5FileHandler
VIRR Level 1b reader.
Open file and perform initial setup.
- property end_time
Get ending observation time.
- property start_time
Get starting observation time.
satpy.readers.xmlformat module
Reads a format from an xml file to create dtypes and scaling factor arrays.
- class satpy.readers.xmlformat.XMLFormat(filename)[source]
Bases:
object
XMLFormat object.
Init the format reader.
satpy.readers.yaml_reader module
Base classes and utilities for all readers configured by YAML files.
- class satpy.readers.yaml_reader.AbstractYAMLReader(config_dict)[source]
Bases:
object
Base class for all readers that use YAML configuration files.
This class should only be used in rare cases. Its child class FileYAMLReader should be used in most cases.
Load information from YAML configuration file about how to read data files.
- property all_dataset_ids
Get DataIDs of all datasets known to this reader.
- property all_dataset_names
Get names of all datasets known to this reader.
- property available_dataset_ids
Get DataIDs that are loadable by this reader.
- property available_dataset_names
Get names of datasets that are loadable by this reader.
- abstract property end_time
End time of the reader.
- abstract filter_selected_filenames(filenames)[source]
Filter provided filenames by parameters in reader configuration.
Returns: iterable of usable files
- classmethod from_config_files(*config_files, **reader_kwargs)[source]
Create a reader instance from one or more YAML configuration files.
- get_dataset_key(key, **kwargs)[source]
Get the fully qualified DataID matching key.
See satpy.readers.get_key for more information about kwargs.
- select_files_from_directory(directory=None, fs=None)[source]
Find files for this reader in directory.
If directory is None or ‘’, look in the current directory.
Searches the local file system by default. Can search on a remote filesystem by passing an instance of a suitable implementation of
fsspec.spec.AbstractFileSystem
.
- Parameters:
directory (Optional[str]) – Path to search.
fs (Optional[FileSystem]) – fsspec FileSystem implementation to use. Defaults to None, using local file system.
- Returns:
list of strings describing matching files
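A hedged sketch combining from_config_files() with this method (the YAML path and data directory are hypothetical):
from satpy.readers.yaml_reader import FileYAMLReader

# Hypothetical reader YAML and local data directory.
reader = FileYAMLReader.from_config_files("/path/to/etc/readers/abi_l1b.yaml")
matching_files = reader.select_files_from_directory("/data/abi")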
- select_files_from_pathnames(filenames)[source]
Select the files from filenames this reader can handle.
- property sensor_names
Names of sensors whose data is being loaded by this reader.
- abstract property start_time
Start time of the reader.
- class satpy.readers.yaml_reader.FileYAMLReader(config_dict, filter_parameters=None, filter_filenames=True, **kwargs)[source]
Bases:
AbstractYAMLReader
, DataDownloadMixin
Primary reader base class that is configured by a YAML file.
This class uses the idea of per-file “file handler” objects to read file contents and determine what is available in the file. This differs from the base
AbstractYAMLReader
which does not depend on individual file handler objects. In almost all cases this class should be used over its base class; it can be used as a reader by itself and requires no subclassing.
Set up initial internal storage for loading file data.
- static _assign_coords_from_dataarray(coords, ds)[source]
Assign coords from the ds dataarray if needed.
- _coords_cache: WeakValueDictionary = <WeakValueDictionary>
- _file_handlers_available_datasets()[source]
Generate a series of available dataset information.
This is done by chaining file handler’s
satpy.readers.file_handlers.BaseFileHandler.available_datasets()
together. See that method’s documentation for more information.
- Returns:
Generator of (bool, dict) where the boolean tells whether the current dataset is available from any of the file handlers. The boolean can also be None in the case where no loaded file handler is configured to load the dataset. The dictionary is the metadata provided either by the YAML configuration files or by the file handler itself if it is a new dataset. The file handler may have also supplemented or modified the information.
- _gather_ancillary_variables_ids(datasets)[source]
Gather ancillary variables’ ids.
This adds/modifies the dataset’s ancillary_variables attr.
- static _load_dataset(dsid, ds_info, file_handlers, dim='y', **kwargs)[source]
Load only a piece of the dataset.
- _make_swath_definition_from_lons_lats(lons, lats)[source]
Make a swath definition instance from lons and lats.
- _new_filehandler_instances(filetype_info, filename_items, fh_kwargs=None)[source]
Generate new filehandler instances.
- _new_filehandlers_for_filetype(filetype_info, filenames, fh_kwargs=None)[source]
Create filehandlers for a given filetype.
- _preferred_filetype(filetypes)[source]
Get the preferred filetype out of the filetypes list.
At the moment, it just returns the first filetype that has been loaded.
- property available_dataset_ids
Get DataIDs that are loadable by this reader.
- static check_file_covers_area(file_handler, check_area)[source]
Check if the file covers the current area.
If the file doesn’t provide any bounding box information or ‘area’ was not provided in filter_parameters, the check returns True.
- create_filehandlers(filenames, fh_kwargs=None)[source]
Organize the filenames into file types and create file handlers.
- property end_time
End time of the latest file used by this reader.
- static filename_items_for_filetype(filenames, filetype_info)[source]
Iterate over the filenames matching filetype_info.
- filter_fh_by_metadata(filehandlers)[source]
Filter out filehandlers using provided filter parameters.
- filter_filenames_by_info(filename_items)[source]
Filter out file using metadata from the filenames.
Currently only uses start and end time. If only the start time is available from the filename, keep all the filenames that have a start time before the requested end time.
- filter_selected_filenames(filenames)[source]
Filter provided files based on metadata in the filename.
- find_required_filehandlers(requirements, filename_info)[source]
Find the necessary file handlers for the given requirements.
We assume here that the requirements are available.
- Raises:
KeyError – if no handler for the given requirements is available.
RuntimeError – if there is a handler for the given requirements, but it doesn’t match the filename info.
- get_dataset_key(key, available_only=False, **kwargs)[source]
Get the fully qualified DataID matching key.
This will first search through available DataIDs (datasets that should be possible to load) and fall back to “known” datasets (those that are configured but aren’t loadable from the provided files). Providing
available_only=True
will stop this fallback behavior and raise a
KeyError
exception if no available dataset is found.
- Parameters:
key (str, float, DataID, DataQuery) – Key to search for in this reader.
available_only (bool) – Search only loadable datasets for the provided key. Loadable datasets are always searched first, but if
available_only=False
(default) then all known datasets will be searched.
kwargs – See
satpy.readers.get_key()
for more information about kwargs.
- Returns:
Best matching DataID to the provided
key
.
- Raises:
KeyError – if no key match is found.
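A hedged sketch (config path and file list hypothetical) of resolving a dataset name on a reader instance:
from satpy.readers.yaml_reader import FileYAMLReader

# Hypothetical config path; sdr_files would be a list of local SDR paths.
reader = FileYAMLReader.from_config_files("/path/to/etc/readers/viirs_sdr.yaml")
reader.create_filehandlers(sdr_files)
data_id = reader.get_dataset_key("M05", available_only=True)  # KeyError if not loadable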
- load(dataset_keys, previous_datasets=None, **kwargs)[source]
Load dataset_keys.
If previous_datasets is provided, do not reload those.
- metadata_matches(sample_dict, file_handler=None)[source]
Check that file metadata matches filter_parameters of this reader.
- property sensor_names
Names of sensors whose data is being loaded by this reader.
- property start_time
Start time of the earliest file used by this reader.
- time_matches(fstart, fend)[source]
Check that a file’s start and end time match filter_parameters of this reader.
- update_ds_ids_from_file_handlers()[source]
Add or modify available dataset information.
Each file handler is consulted on whether or not it can load the dataset with the provided information dictionary. See
satpy.readers.file_handlers.BaseFileHandler.available_datasets()
for more information.
- class satpy.readers.yaml_reader.GEOFlippableFileYAMLReader(config_dict, filter_parameters=None, filter_filenames=True, **kwargs)[source]
Bases:
FileYAMLReader
Reader for flippable geostationary data.
Set up initial internal storage for loading file data.
- class satpy.readers.yaml_reader.GEOSegmentYAMLReader(config_dict, filter_parameters=None, filter_filenames=True, **kwargs)[source]
Bases:
GEOFlippableFileYAMLReader
Reader for segmented geostationary data.
This reader pads the data to full geostationary disk if necessary.
This reader uses an optional
pad_data
keyword argument that can be passed to
Scene.load()
to control if padding is done (True by default). Passing pad_data=False will return data unpadded.
When using this class in a reader’s YAML configuration, segmented file types (files that may have multiple segments) should specify an extra
expected_segments
piece of file_type metadata. This tells this reader how many total segments it should expect when padding data. Alternatively, the file patterns for a file type can include a
total_segments
field which will be used if
expected_segments
is not defined. This will default to 1 segment.
Set up initial internal storage for loading file data.
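A hedged usage sketch, assuming AHI HSD full-disk segment files (ahi_hsd is one reader built on this class):
from glob import glob

from satpy import Scene

# Hypothetical AHI HSD segment files; pad_data is forwarded to this reader.
scn = Scene(filenames=glob("/data/ahi/HS_H08_*_FLDK_*.DAT"), reader="ahi_hsd")
scn.load(["B13"], pad_data=False)  # do not pad missing segments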
- _get_segments_areadef_with_later_padded(file_handlers, filetype, dsid, available_segments, expected_segments)[source]
- _load_area_def(dsid, file_handlers, pad_data=True)[source]
Load the area definition of dsid with padding.
- _load_area_def_with_padding(dsid, file_handlers)[source]
Load the area definition of dsid with padding.
- _load_dataset(dsid, ds_info, file_handlers, dim='y', pad_data=True)[source]
Load only a piece of the dataset.
- _pad_earlier_segments_area(file_handlers, dsid, area_defs)[source]
Pad area definitions for missing segments that are earlier in sequence than the first available.
- class satpy.readers.yaml_reader.GEOVariableSegmentYAMLReader(config_dict, filter_parameters=None, filter_filenames=True, **kwargs)[source]
Bases:
GEOSegmentYAMLReader
GEOVariableSegmentYAMLReader for handling segmented GEO products with segments of variable height.
This YAMLReader overrides parts of the GEOSegmentYAMLReader to account for formats where the segments can have variable heights. It computes the sizes of the padded segments using the information available in the file(handlers), so that gaps of any size can be filled as needed.
This implementation was motivated by the FCI L1c format, where the segments (called chunks in the FCI world) can have variable heights. It is however generic, so that any future reader can use it. The requirement for the reader is to have a method called get_segment_position_info, returning a dictionary containing the positioning info for each segment (see example in
satpy.readers.fci_l1c_nc.FCIL1cNCFileHandler.get_segment_position_info()
).
For more information, please see the documentation of
satpy.readers.yaml_reader.GEOSegmentYAMLReader()
.
Initialise the GEOVariableSegmentYAMLReader object.
- satpy.readers.yaml_reader._compute_optimal_missing_segment_heights(seg_infos, grid_type, expected_vertical_size)[source]
- satpy.readers.yaml_reader._compute_positioning_data_for_missing_group(segment_start_rows, segment_end_rows, segment_heights, group)[source]
- satpy.readers.yaml_reader._compute_proposed_sizes_of_missing_segments_in_group(group, segment_end_rows, segment_start_rows)[source]
- satpy.readers.yaml_reader._find_missing_segments(file_handlers, ds_info, dsid)[source]
Find missing segments.
- satpy.readers.yaml_reader._flip_dataset_data_and_area_extents(dataset, area_extents_to_update, flip_direction)[source]
Flip the data and area extents array for a dataset.
- satpy.readers.yaml_reader._get_current_scene_orientation(area_extents_to_update)[source]
Get the current scene orientation from the area_extents.
- satpy.readers.yaml_reader._get_dataset_area_extents_array(dataset_area_attr)[source]
Get dataset area extents in a numpy array for further flipping.
- satpy.readers.yaml_reader._get_empty_segment_with_height(empty_segment, new_height, dim)[source]
Get a new empty segment with the specified height.
- satpy.readers.yaml_reader._get_filebase(path, pattern)[source]
Get the end of path of same length as pattern.
- satpy.readers.yaml_reader._get_new_flipped_area_definition(dataset_area_attr, area_extents_to_update, flip_areadef_stacking)[source]
Get a new area definition with updated area_extents for flipped geostationary datasets.
- satpy.readers.yaml_reader._get_projection_type(dataset_area_attr)[source]
Get the projection type from the crs coordinate operation method name.
- satpy.readers.yaml_reader._get_target_scene_orientation(upper_right_corner)[source]
Get the target scene orientation from the target upper_right_corner.
‘NE’ corresponds to target_eastright and target_northup being True.
- satpy.readers.yaml_reader._init_positioning_arrays_for_variable_padding(chk_infos, grid_type, exp_segment_nr)[source]
- satpy.readers.yaml_reader._load_area_def(dsid, file_handlers)[source]
Load the area definition of dsid.
- satpy.readers.yaml_reader._match_filenames(filenames, pattern)[source]
Get the filenames matching pattern.
- satpy.readers.yaml_reader._populate_group_end_row_using_later_segment(group, segment_end_rows, segment_start_rows)[source]
- satpy.readers.yaml_reader._populate_group_start_end_row_using_neighbour_segments(group, segment_end_rows, segment_start_rows)[source]
- satpy.readers.yaml_reader._populate_group_start_row_using_previous_segment(group, segment_end_rows, segment_start_rows)[source]
- satpy.readers.yaml_reader._populate_positioning_arrays_with_available_segment_info(chk_infos, grid_type, segment_start_rows, segment_end_rows, segment_heights)[source]
- satpy.readers.yaml_reader._populate_start_end_rows_of_missing_segments_with_proposed_sizes(group, proposed_sizes_missing_segments, segment_start_rows, segment_end_rows, segment_heights)[source]
- satpy.readers.yaml_reader._set_orientation(dataset, upper_right_corner)[source]
Set the orientation of geostationary datasets.
Allows flipping geostationary imagery when loading the datasets. Example call: scn.load(['VIS008'], upper_right_corner='NE')
- Parameters:
dataset – Dataset to be flipped.
upper_right_corner (str) – Direction of the upper right corner of the image after flipping. Possible options are ‘NW’, ‘NE’, ‘SW’, ‘SE’, or ‘native’. The common upright image orientation corresponds to ‘NE’. Defaults to ‘native’ (no flipping is applied).
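A hedged sketch expanding the example call above (file pattern hypothetical; seviri_l1b_native is one reader that accepts this keyword):
from glob import glob

from satpy import Scene

# Hypothetical SEVIRI Native file; upper_right_corner is forwarded from Scene.load.
scn = Scene(filenames=glob("/data/seviri/MSG4-SEVI-MSG15-*.nat"), reader="seviri_l1b_native")
scn.load(["VIS008"], upper_right_corner="NE")  # flip to the common upright orientation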
- satpy.readers.yaml_reader._stack_area_defs(area_def_dict)[source]
Stack given dict of area definitions and return a StackedAreaDefinition.
- satpy.readers.yaml_reader.listify_string(something)[source]
Take something and make it a list.
something is either a list of strings or a string, in which case the function returns a list containing the string. If something is None, an empty list is returned.
- satpy.readers.yaml_reader.load_yaml_configs(*config_files, loader=<class 'yaml.cyaml.CLoader'>)[source]
Merge a series of YAML reader configuration files.
- Parameters:
*config_files (str) – One or more pathnames to YAML-based reader configuration files that will be merged to create a single configuration.
loader – Yaml loader object to load the YAML with. Defaults to CLoader if libyaml is available, Loader otherwise.
- Returns: dict
Dictionary representing the entire YAML configuration with the addition of config[‘reader’][‘config_files’] (the list of YAML pathnames that were merged).
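For illustration (the path is hypothetical; installed reader configs normally live under Satpy’s etc/readers directory):
from satpy.readers.yaml_reader import load_yaml_configs

# Hypothetical path; passing several files would merge them in order.
config = load_yaml_configs("/path/to/etc/readers/abi_l1b.yaml")
print(config["reader"]["name"], config["reader"]["config_files"])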
Module contents
Shared objects of the various reader classes.
- class satpy.readers.FSFile(file, fs=None)[source]
Bases:
PathLike
Implementation of a PathLike file object that can be opened.
Giving the filenames to
Scene
with valid transfer protocols will automatically use this class, so manual usage of this class is needed mainly for fine-grained control.
This class is made to be used in conjunction with fsspec or s3fs. For example:
from satpy import Scene
import fsspec

filename = 'noaa-goes16/ABI-L1b-RadC/2019/001/17/*_G16_s20190011702186*'
the_files = fsspec.open_files("simplecache::s3://" + filename, s3={'anon': True})

from satpy.readers import FSFile
fs_files = [FSFile(open_file) for open_file in the_files]

scn = Scene(filenames=fs_files, reader='abi_l1b')
scn.load(['true_color_raw'])
Initialise the FSFile instance.
- Parameters:
file (str, Pathlike, or OpenFile) – String, object implementing the os.PathLike protocol, or an fsspec.OpenFile instance. If passed an instance of fsspec.OpenFile, the following argument
fs
has no effect.
fs (fsspec filesystem, optional) – Object implementing the fsspec filesystem protocol.
- satpy.readers._assign_files_to_readers(files_to_sort, reader_names, reader_kwargs)[source]
Assign files to readers.
Given a list of file names (paths), match those to reader instances.
Internal helper for group_files.
- satpy.readers._filter_groups(groups, missing='pass')[source]
Filter multi-reader group-files behavior.
Helper for group_files. When group_files is called with multiple readers, make sure that the desired behaviour for missing files is enforced: if missing is "raise", raise an exception if at least one group has at least one reader without files; if it is "skip", remove those. If it is "pass", do nothing. Yields groups to be kept.
- satpy.readers._get_file_keys_for_reader_files(reader_files, group_keys=None)[source]
From a mapping from _assign_files_to_readers, get file keys.
Given a mapping where each key is a reader name and each value is a tuple of reader instance (typically FileYAMLReader) and a collection of files, return a mapping with the same keys, but where the values are lists of tuples of (keys, filename), where keys are extracted from the filenames according to group_keys and filenames are the names those keys were extracted from.
Internal helper for group_files.
- Returns:
Mapping[str, List[Tuple[Tuple, str]]], as described.
- satpy.readers._get_fs_open_kwargs(file)[source]
Get keyword arguments for opening a file via file system.
For example, compression.
- satpy.readers._get_keys_with_empty_values(grp)[source]
Find mapping keys where values have length zero.
Helper for _filter_groups, which is in turn a helper for group_files. Given a mapping key -> Collection[Any], return the keys where the length of the collection is zero.
- Parameters:
grp (Mapping[Any, Collection[Any]]) – dictionary to check
- Returns:
set of keys
- satpy.readers._get_loadables_for_reader_config(base_dir, reader, sensor, reader_configs, reader_kwargs, fs)[source]
Get loadables for reader configs.
Helper for find_files_and_readers.
- Parameters:
base_dir – as for find_files_and_readers
reader – as for find_files_and_readers
sensor – as for find_files_and_readers
reader_configs – reader metadata such as returned by configs_for_reader.
reader_kwargs – Keyword arguments to be passed to reader.
fs (FileSystem) – as for find_files_and_readers
- satpy.readers._get_reader_kwargs(reader, reader_kwargs)[source]
Help load_readers to form reader_kwargs.
Helper for load_readers to get reader_kwargs and reader_kwargs_without_filter in the desired form.
- satpy.readers._get_sorted_file_groups(all_file_keys, time_threshold)[source]
Get sorted file groups.
Get a list of dictionaries, where each list item consists of a dictionary mapping a tuple of keys to a mapping of reader names to files. The files listed in each list item are considered to be grouped within the same time.
- Parameters:
all_file_keys – as returned by _get_file_keys_for_reader_files
time_threshold – temporal threshold
- Returns:
List[Mapping[Tuple, Mapping[str, List[str]]]], as described
Internal helper for group_files.
- satpy.readers.available_readers(as_dict=False, yaml_loader=<class 'yaml.loader.UnsafeLoader'>)[source]
Available readers based on current configuration.
- Parameters:
as_dict (bool) – Optionally return reader information as a dictionary. Default: False.
yaml_loader (Optional[Union[yaml.BaseLoader, yaml.FullLoader, yaml.UnsafeLoader]]) – The yaml loader type. Default:
yaml.UnsafeLoader
.
- Returns:
List of available reader names. If as_dict is True then a list of dictionaries with additional reader information is returned.
- Return type: list
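A short usage sketch:
from satpy import available_readers

# Names of readers usable with the current installation and configuration.
print(sorted(available_readers()))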
- satpy.readers.configs_for_reader(reader=None)[source]
Generate reader configuration files for one or more readers.
- Parameters:
reader (Optional[str]) – Yield configs only for this reader
Returns: Generator of lists of configuration files
- satpy.readers.find_files_and_readers(start_time=None, end_time=None, base_dir=None, reader=None, sensor=None, filter_parameters=None, reader_kwargs=None, missing_ok=False, fs=None)[source]
Find files matching the provided parameters.
Use start_time and/or end_time to limit found filenames by the times in the filenames (not the internal file metadata). Files are matched if they fall anywhere within the range specified by these parameters.
Searching is NOT recursive.
Files may be either on-disk or on a remote file system. By default, files are searched for locally. Users can search on remote filesystems by passing an instance of an implementation of fsspec.spec.AbstractFileSystem (strictly speaking, any object of a class implementing a
glob
method works).
If locating files on a local file system, the returned dictionary can be passed directly to the Scene object through the filenames keyword argument. If it points to a remote file system, it is the responsibility of the user to download the files first (directly reading from cloud storage is not currently available in Satpy).
The behaviour of time-based filtering depends on whether or not the filename contains information about the end time of the data:
- if the end time is not present in the filename, the start time of the filename is used and has to fall between (inclusive) the requested start and end times
- otherwise, the timespan of the filename has to overlap the requested timespan
Example usage for querying an s3 filesystem using the s3fs module:
>>> import s3fs, satpy.readers, datetime
>>> satpy.readers.find_files_and_readers(
...     base_dir="s3://noaa-goes16/ABI-L1b-RadF/2019/321/14/",
...     fs=s3fs.S3FileSystem(anon=True),
...     reader="abi_l1b",
...     start_time=datetime.datetime(2019, 11, 17, 14, 40))
{'abi_l1b': [...]}
- Parameters:
start_time (datetime) – Limit used files by starting time.
end_time (datetime) – Limit used files by ending time.
base_dir (str) – The directory to search for files containing the data to load. Defaults to the current directory.
reader (str or list) – The name of the reader to use for loading the data or a list of names.
sensor (str or list) – Limit used files by provided sensors.
filter_parameters (dict) – Filename pattern metadata to filter on. start_time and end_time are automatically added to this dictionary. Shortcut for reader_kwargs[‘filter_parameters’].
reader_kwargs (dict) – Keyword arguments to pass to specific reader instances to further configure file searching.
missing_ok (bool) – If False (default), raise ValueError if no files are found. If True, return empty dictionary if no files are found.
fs (
fsspec.spec.AbstractFileSystem
) – Optional, instance of implementation of
fsspec.spec.AbstractFileSystem
(strictly speaking, any object of a class implementing
.glob
is enough). Defaults to searching the local filesystem.
- Returns:
Dictionary mapping reader name string to list of filenames
- Return type: dict
- satpy.readers.get_valid_reader_names(reader)[source]
Check for old reader names or readers pending deprecation.
- satpy.readers.group_files(files_to_sort, reader=None, time_threshold=10, group_keys=None, reader_kwargs=None, missing='pass')[source]
Group series of files by file pattern information.
By default this will group files by their filename
start_time
assuming it exists in the pattern. By passing the individual dictionaries returned by this function to the Scene classes’
filenames
, a series of Scene objects can be easily created.
- Parameters:
files_to_sort (iterable) – File paths to sort in to group
reader (str or Collection[str]) – Reader or readers whose file patterns should be used to sort files. If not given, try all readers (slow, adding a list of readers is strongly recommended).
time_threshold (int) – Number of seconds used to consider time elements in a group as being equal. For example, if the ‘start_time’ item is used to group files then any time within time_threshold seconds of the first file’s ‘start_time’ will be seen as occurring at the same time.
group_keys (list or tuple) – File pattern information to use to group files. Keys are sorted in order and only the first key is used when comparing datetime elements with time_threshold (see above). This means it is recommended that datetime values should only come from the first key in
group_keys
. Otherwise, there is a good chance that files will not be grouped properly (datetimes being barely unequal). Defaults to a reader’s
group_keys
configuration (set in YAML), otherwise
('start_time',)
. When passing multiple readers, passing group_keys is strongly recommended as the behaviour without doing so is undefined.
reader_kwargs (dict) – Additional keyword arguments to pass to reader creation.
missing (str) – Parameter to control the behavior in the scenario where multiple readers were passed, but at least one group does not have files associated with every reader. Valid values are
"pass"
(the default), "skip", and "raise". If set to "pass", groups are passed as-is. Some groups may have zero files for some readers. If set to "skip", groups for which one or more readers have zero files are skipped (meaning that some files may not be associated to any group). If set to "raise", raise a FileNotFoundError in case there are any groups for which one or more readers have no files associated.
- Returns:
List of dictionaries mapping ‘reader’ to a list of filenames. Each of these dictionaries can be passed as
filenames
to a Scene object.
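A hedged sketch of typical use (the file list is hypothetical):
from glob import glob

from satpy import Scene
from satpy.readers import group_files

# Hypothetical ABI L1b files spanning several scan times.
all_files = glob("/data/abi/OR_ABI-L1b-RadF-*.nc")
groups = group_files(all_files, reader="abi_l1b", time_threshold=30)
scenes = [Scene(filenames=grp) for grp in groups]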
- satpy.readers.load_reader(reader_configs, **reader_kwargs)[source]
Import and set up the reader from reader_info.
- satpy.readers.load_readers(filenames=None, reader=None, reader_kwargs=None)[source]
Create specified readers and assign files to them.
- Parameters:
filenames (iterable or dict) – A sequence of files that will be used to load data from. A
dict
object should map reader names to a list of filenames for that reader.
reader (str or list) – The name of the reader to use for loading the data or a list of names.
reader_kwargs (dict) – Keyword arguments to pass to specific reader instances. This can either be a single dictionary that will be passed to all reader instances, or a mapping of reader names to dictionaries. If the keys of
reader_kwargs
match exactly the list of strings in
reader
or the keys of filenames, each reader instance will get its own keyword arguments accordingly.
Returns: Dictionary mapping reader name to reader instance
- satpy.readers.open_file_or_filename(unknown_file_thing)[source]
Try to open the provided file “thing” if needed, otherwise return the filename or Path.
This wraps the logic of getting something like an fsspec OpenFile object that is not directly supported by most reading libraries and making it usable. If a
pathlib.Path
object or something that is not open-able is provided then that object is passed along. In the case of fsspec OpenFiles their
.open()
method is called and the result returned.
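A hedged sketch (the remote key is hypothetical), mirroring the FSFile example above:
import fsspec

from satpy.readers import open_file_or_filename

# An fsspec OpenFile gets opened; a plain path is passed through unchanged.
open_file = fsspec.open("simplecache::s3://noaa-goes16/path/to/file.nc", s3={"anon": True})
file_obj = open_file_or_filename(open_file)
same_path = open_file_or_filename("/data/local_file.nc")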
satpy.tests package
Subpackages
satpy.tests.cf_tests package
Submodules
satpy.tests.cf_tests._test_data module
Functions and fixtures to test CF code.
satpy.tests.cf_tests.test_area module
Tests for the CF Area.
- class satpy.tests.cf_tests.test_area.TestCFArea[source]
Bases:
object
Test case for CF Area.
- test_add_grid_mapping_cf_repr(input_data_arr)[source]
Test the conversion from pyresample area object to CF grid mapping.
Projection has a corresponding CF representation (e.g. geos).
- test_add_grid_mapping_cf_repr_no_ab(input_data_arr)[source]
Test the conversion from pyresample area object to CF grid mapping.
Projection has a corresponding CF representation but no explicit a/b.
- test_add_grid_mapping_no_cf_repr(input_data_arr)[source]
Test the conversion from pyresample area object to CF grid mapping.
Projection does not have a corresponding CF representation (e.g. COSMO).
- test_add_grid_mapping_oblique_mercator(input_data_arr)[source]
Test the conversion from pyresample area object to CF grid mapping.
Projection is oblique mercator.
- test_add_grid_mapping_transverse_mercator(input_data_arr)[source]
Test the conversion from pyresample area object to CF grid mapping.
Projection is transverse mercator.
satpy.tests.cf_tests.test_attrs module
Tests for CF-compatible attributes encoding.
satpy.tests.cf_tests.test_coords module
CF processing of time information (coordinates and dimensions).
- class satpy.tests.cf_tests.test_coords.TestCFcoords[source]
Bases:
object
Test cases for CF spatial dimension and coordinates.
- test_add_coordinates_attrs_coords()[source]
Check that the coordinates link has been established correctly.
satpy.tests.cf_tests.test_dataaarray module
Tests CF-compliant DataArray creation.
- class satpy.tests.cf_tests.test_dataaarray.TestCfDataArray[source]
Bases:
object
Test creation of CF DataArray.
satpy.tests.cf_tests.test_datasets module
Tests CF-compliant Dataset(s) creation.
satpy.tests.cf_tests.test_decoding module
Tests for CF decoding.
satpy.tests.cf_tests.test_encoding module
Tests for compatible netCDF/Zarr DataArray encodings.
Module contents
The CF dataset tests package.
satpy.tests.compositor_tests package
Submodules
satpy.tests.compositor_tests.test_abi module
Tests for ABI compositors.
- class satpy.tests.compositor_tests.test_abi.TestABIComposites(methodName='runTest')[source]
Bases:
TestCase
Test ABI-specific composites.
satpy.tests.compositor_tests.test_agri module
Tests for AGRI compositors.
- class satpy.tests.compositor_tests.test_agri.TestAGRIComposites(methodName='runTest')[source]
Bases:
TestCase
Test AGRI-specific composites.
satpy.tests.compositor_tests.test_ahi module
Tests for AHI compositors.
- class satpy.tests.compositor_tests.test_ahi.TestAHIComposites(methodName='runTest')[source]
Bases:
TestCase
Test AHI-specific composites.
satpy.tests.compositor_tests.test_glm module
Tests for GLM compositors.
satpy.tests.compositor_tests.test_sar module
Tests for SAR compositors.
- class satpy.tests.compositor_tests.test_sar.TestSARComposites(methodName='runTest')[source]
Bases:
TestCase
Test SAR-specific composites.
satpy.tests.compositor_tests.test_spectral module
Tests for spectral correction compositors.
- class satpy.tests.compositor_tests.test_spectral.TestNdviHybridGreenCompositor[source]
Bases:
object
Test NDVI-weighted hybrid green correction of green band.
satpy.tests.compositor_tests.test_viirs module
Tests for VIIRS compositors.
- class satpy.tests.compositor_tests.test_viirs.TestVIIRSComposites[source]
Bases:
object
Test various VIIRS-specific composites.
- test_erf_dnb(dnb_units, saturation_correction, area, sza, lza)[source]
Test the ‘dynamic_dnb’ or ERF DNB compositor.
Module contents
Tests for compositors.
satpy.tests.enhancement_tests package
Submodules
satpy.tests.enhancement_tests.test_abi module
Unit testing for the ABI enhancement functions.
- class satpy.tests.enhancement_tests.test_abi.TestABIEnhancement(methodName='runTest')[source]
Bases:
TestCase
Test the ABI enhancement functions.
satpy.tests.enhancement_tests.test_atmosphere module
Tests for enhancements in enhancements/atmosphere.py.
satpy.tests.enhancement_tests.test_enhancements module
Unit testing the enhancement functions, e.g. cira_stretch.
- class satpy.tests.enhancement_tests.test_enhancements.TestColormapLoading[source]
Bases:
object
Test utilities used with colormaps.
- test_cmap_bad_mode(real_mode, forced_mode, filename_suffix)[source]
Test that reading colormaps with the wrong mode fails.
- class satpy.tests.enhancement_tests.test_enhancements.TestEnhancementStretch[source]
Bases:
object
Class for testing enhancements in satpy.enhancements.
- class satpy.tests.enhancement_tests.test_enhancements.TestTCREnhancement[source]
Bases:
object
Test the AHI enhancement functions.
- satpy.tests.enhancement_tests.test_enhancements._generate_cmap_test_data(color_scale, colormap_mode)[source]
- satpy.tests.enhancement_tests.test_enhancements._write_cmap_to_file(cmap_filename, cmap_data)[source]
- satpy.tests.enhancement_tests.test_enhancements.closed_named_temp_file(**kwargs)[source]
Named temporary file context manager that closes the file after creation.
This helps on Windows systems, which can raise errors when opening or deleting a file that is already open.
- satpy.tests.enhancement_tests.test_enhancements.identical_decorator(func)[source]
Decorate but do nothing.
- satpy.tests.enhancement_tests.test_enhancements.run_and_check_enhancement(func, data, expected, **kwargs)[source]
Perform basic checks that apply to multiple tests.
- satpy.tests.enhancement_tests.test_enhancements.test_nwcsaf_comps(fake_area, tmp_path, data)[source]
Test loading NWCSAF composites.
- satpy.tests.enhancement_tests.test_enhancements.test_on_dask_array()[source]
Test the on_dask_array decorator.
satpy.tests.enhancement_tests.test_viirs module
Unit testing for the VIIRS enhancement function.
- class satpy.tests.enhancement_tests.test_viirs.TestVIIRSEnhancement(methodName='runTest')[source]
Bases:
TestCase
Class for testing the VIIRS enhancement function in satpy.enhancements.viirs.
Module contents
The enhancements tests package.
satpy.tests.modifier_tests package
Submodules
satpy.tests.modifier_tests.test_angles module
Tests for the angles in modifiers.
- class satpy.tests.modifier_tests.test_angles.TestAngleGeneration[source]
Bases:
object
Test the angle generation utility functions.
- test_cache_get_angles(input_func, num_normalized_chunks, exp_zarr_chunks, input2_func, exp_equal_sun, exp_num_zarr, force_bad_glob, tmp_path)[source]
Test get_angles when caching is enabled.
- test_cached_no_chunks_fails(tmp_path)[source]
Test that trying to pass non-dask arrays and no chunks fails.
- test_cached_result_numpy_fails(tmp_path)[source]
Test that trying to cache with non-dask arrays fails.
- test_caching_with_array_in_args_does_not_warn_when_caching_is_not_enabled(tmp_path, recwarn)[source]
Test that passing non-dask arrays does not warn when caching is not enabled.
- test_caching_with_array_in_args_warns(tmp_path)[source]
Test that trying to cache with non-dask arrays in the arguments warns.
- satpy.tests.modifier_tests.test_angles._get_angle_test_data(area_def: AreaDefinition | StackedAreaDefinition | None = None, chunks: int | tuple | None = 2, shape: tuple = (5, 5), dims: tuple | None = None) → DataArray [source]
satpy.tests.modifier_tests.test_crefl module
Tests for the CREFL ReflectanceCorrector modifier.
- class satpy.tests.modifier_tests.test_crefl.TestReflectanceCorrectorModifier[source]
Bases:
object
Test the CREFL modifier.
- test_reflectance_corrector_abi(name, wavelength, resolution, exp_mean, exp_unique)[source]
Test ReflectanceCorrector modifier with ABI data.
- test_reflectance_corrector_bad_prereqs()[source]
Test ReflectanceCorrector modifier with wrong number of inputs.
- satpy.tests.modifier_tests.test_crefl._make_viirs_xarray(data, area, name, standard_name, wavelength=None, units='degrees', calibration=None)[source]
- satpy.tests.modifier_tests.test_crefl._mock_and_create_dem_file(tmpdir, url, var_name, fill_value=None)[source]
satpy.tests.modifier_tests.test_filters module
Tests for some image filters.
satpy.tests.modifier_tests.test_parallax module
Tests related to parallax correction.
- class satpy.tests.modifier_tests.test_parallax.TestForwardParallax[source]
Bases:
object
Test the forward parallax function with various inputs.
- test_get_parallax_corrected_lonlats_clearsky()[source]
Test parallax correction for clearsky case (returns NaN).
- test_get_parallax_corrected_lonlats_cloudy_slant()[source]
Test parallax correction for fully cloudy scene (not SSP).
- test_get_parallax_corrected_lonlats_cloudy_ssp(lat, lon, resolution)[source]
Test parallax correction for fully cloudy scene at SSP.
- test_get_parallax_corrected_lonlats_horizon()[source]
Test that an exception is raised if the satellite is exactly at the horizon.
Test the rather unlikely case of a satellite elevation of exactly 0.
- test_get_parallax_corrected_lonlats_mixed()[source]
Test parallax correction for mixed cloudy case.
- class satpy.tests.modifier_tests.test_parallax.TestParallaxCorrectionClass[source]
Bases:
object
Test that the ParallaxCorrection class is behaving sensibly.
- test_correct_area_clearsky(sat_pos, ar_pos, resolution, caplog)[source]
Test that ParallaxCorrection doesn’t change clearsky geolocation.
- test_correct_area_clearsky_different_resolutions(res1, res2)[source]
Test clearsky correction when areas have different resolutions.
- test_correct_area_cloudy_partly_shifted()[source]
Test cloudy correction when areas overlap only partly.
- test_correct_area_no_orbital_parameters(caplog, fake_tle)[source]
Test ParallaxCorrection when CTH has no orbital parameters.
Some CTH products, such as NWCSAF-GEO, do not include information on satellite location directly. Rather, they include platform name, sensor, start time, and end time, which we have to use instead.
- test_correct_area_partlycloudy(daskify)[source]
Test ParallaxCorrection for partly cloudy situation.
- class satpy.tests.modifier_tests.test_parallax.TestParallaxCorrectionModifier[source]
Bases:
object
Test that the parallax correction modifier works correctly.
- _get_fake_cloud_datasets(test_area, cth, use_dask)[source]
Return datasets for BT and CTH for fake cloud.
- test_area(request)[source]
Produce test area for parallax correction unit tests.
Produce test area for the modifier-interface parallax correction unit tests.
- test_modifier_interface_cloud_moves_to_observer(cth, use_dask, test_area)[source]
Test that a cloud moves to the observer.
With the modifier interface, use a high resolution area and test that pixels are moved in the direction of the observer and not away from it.
- test_parallax_modifier_interface_with_cloud()[source]
Test the modifier interface with a cloud.
Test corresponds to a real bug encountered when using CTH data from NWCSAF-GEO, which created strange speckles in Africa (see https://github.com/pytroll/satpy/pull/1904#issuecomment-1011161623 for an example). Create fake CTH corresponding to NWCSAF-GEO area and BT corresponding to full disk SEVIRI, and test that no strange speckles occur.
- class satpy.tests.modifier_tests.test_parallax.TestParallaxCorrectionSceneLoad[source]
Bases:
object
Test that scene load interface works as expected.
- test_double_load(fake_scene, conf_file, fake_tle)[source]
Test that loading corrected and uncorrected works correctly.
When the modifier
__call__
method fails to call
self.apply_modifier_info(new, old)
and the original and parallax-corrected dataset are requested at the same time, the DataArrays differ but the underlying dask arrays have object identity, which in turn leads to both being parallax corrected. This unit test confirms that there is no such object identity.
- satpy.tests.modifier_tests.test_parallax._get_attrs(lat, lon, height=35000)[source]
Get attributes for datasets in fake scene.
Module contents
Tests for modifiers.
satpy.tests.multiscene_tests package
Submodules
satpy.tests.multiscene_tests.test_blend module
Unit tests for blending datasets with the Multiscene object.
- class satpy.tests.multiscene_tests.test_blend.TestBlendFuncs[source]
Bases:
object
Test individual functions used for blending.
- datasets_and_weights()[source]
Xarray datasets with area definition plus weights for input to tests.
- test_blend_function_stack_weighted(datasets_and_weights, line, column)[source]
Test the ‘stack_weighted’ function.
- test_blend_two_scenes_bad_blend_type(multi_scene_and_weights, groups)[source]
Test exception is raised when bad ‘blend_type’ is used.
- test_blend_two_scenes_using_stack(multi_scene_and_weights, groups, scene1_with_weights, scene2_with_weights)[source]
Test blending two scenes by stacking them on top of each other using function ‘stack’.
- test_blend_two_scenes_using_stack_weighted(multi_scene_and_weights, groups, scene1_with_weights, scene2_with_weights, blend_func, exp_result_func)[source]
Test stacking two scenes using weights.
Here we test that the start and end times can be combined so that they describe the start and end times of the entire data series. We also test the various types of weighted stacking functions (e.g. select, blend).
- class satpy.tests.multiscene_tests.test_blend.TestTemporalRGB[source]
Bases:
object
Test the temporal RGB blending method.
- satpy.tests.multiscene_tests.test_blend._check_stacked_metadata(data_arr: DataArray, exp_name: str) → None [source]
- satpy.tests.multiscene_tests.test_blend._get_expected_stack_blend(scene1: Scene, scene2: Scene) → DataArray [source]
- satpy.tests.multiscene_tests.test_blend._get_expected_stack_select(scene1: Scene, scene2: Scene) → DataArray [source]
- satpy.tests.multiscene_tests.test_blend.cloud_type_data_array1(test_area, data_type, image_mode)[source]
Get DataArray for cloud type in the first test Scene.
- satpy.tests.multiscene_tests.test_blend.cloud_type_data_array2(test_area, data_type, image_mode)[source]
Get DataArray for cloud type in the second test Scene.
- satpy.tests.multiscene_tests.test_blend.data_type(request)[source]
Get array data type of the DataArray being tested.
- satpy.tests.multiscene_tests.test_blend.image_mode(request)[source]
Get image mode of the main DataArray being tested.
- satpy.tests.multiscene_tests.test_blend.multi_scene_and_weights(scene1_with_weights, scene2_with_weights)[source]
Create small multi-scene for testing.
- satpy.tests.multiscene_tests.test_blend.scene1_with_weights(cloud_type_data_array1, test_area)[source]
Create first test scene with a dataset of weights.
satpy.tests.multiscene_tests.test_misc module
Unit tests for the Multiscene object.
- class satpy.tests.multiscene_tests.test_misc.TestMultiScene(methodName='runTest')[source]
Bases:
TestCase
Test basic functionality of MultiScene.
satpy.tests.multiscene_tests.test_save_animation module
Unit tests for saving animations using Multiscene.
- class satpy.tests.multiscene_tests.test_save_animation.TestMultiSceneSave(methodName='runTest')[source]
Bases:
TestCase
Test saving a MultiScene to various formats.
- test_save_datasets_distributed_delayed()[source]
Test distributed save for writers returning delayed objects, e.g. simple_image.
satpy.tests.multiscene_tests.test_utils module
Utilities to assist testing the Multiscene functionality.
Creating fake test data for use in the other Multiscene test modules.
- satpy.tests.multiscene_tests.test_utils._create_test_area(proj_str=None, shape=(5, 10), extents=None)[source]
Create a test area definition.
- satpy.tests.multiscene_tests.test_utils._create_test_dataset(name, shape=(5, 10), area=None, values=None, dims=('y', 'x'))[source]
Create a test DataArray object.
- satpy.tests.multiscene_tests.test_utils._create_test_int8_dataset(name, shape=(5, 10), area=None, values=None, dims=('y', 'x'))[source]
Create a test DataArray object.
Module contents
Unit tests for Multiscene.
satpy.tests.reader_tests package
Subpackages
satpy.tests.reader_tests.gms package
Submodules
satpy.tests.reader_tests.gms.test_gms5_vissr_data module
Real world test data for GMS-5 VISSR unit tests.
satpy.tests.reader_tests.gms.test_gms5_vissr_l1b module
Unit tests for GMS-5 VISSR reader.
- class satpy.tests.reader_tests.gms.test_gms5_vissr_l1b.TestCorruptFile[source]
Bases:
object
Test reading corrupt files.
- class satpy.tests.reader_tests.gms.test_gms5_vissr_l1b.TestEarthMask[source]
Bases:
object
Test getting the earth mask.
- class satpy.tests.reader_tests.gms.test_gms5_vissr_l1b.TestFileHandler[source]
Bases:
object
Test VISSR file handler.
- _patch_number_of_pixels_per_scanline(monkeypatch)[source]
Patch data types so that each scanline has two pixels.
- cal_params(vis_calibration, ir1_calibration, ir2_calibration, wv_calibration)[source]
Get calibration parameters.
- coord_conv()[source]
Get parameters for coordinate conversions.
Adjust pixel offset so that the first column is at the image center. This has the advantage that we can test with very small 2x2 images. Otherwise, all pixels would be in space.
- coordinate_conversion(coord_conv, simple_coord_conv_table)[source]
Get all coordinate conversion parameters.
- lons_lats_exp(dataset_id)[source]
Get expected lon/lat coordinates.
Computed with JMA’s Msial library for 2 pixels near the central column (6688.5/1672.5 for VIS/IR).
VIS:
pix = [6688, 6688, 6689, 6689]
lin = [2744, 8356, 2744, 8356]
IR1:
pix = [1672, 1672, 1673, 1673]
lin = [686, 2089, 686, 2089]
Get navigation parameters.
- orbit_prediction(orbit_prediction_1, orbit_prediction_2)[source]
Get predictions of orbital parameters.
- class satpy.tests.reader_tests.gms.test_gms5_vissr_l1b.VissrFileWriter(ch_type, open_function)[source]
Bases:
object
Write data in VISSR archive format.
Initialize the writer.
- Parameters:
ch_type – Channel type (VIS or IR)
open_function – Open function to be used (e.g. open or gzip.open)
- _write(fd, data, offset=None)[source]
Write data to file.
If specified, prepend with ‘offset’ placeholder bytes.
- image_params_order = ['mode', 'coordinate_conversion', 'attitude_prediction', 'orbit_prediction_1', 'orbit_prediction_2', 'vis_calibration', 'ir1_calibration', 'ir2_calibration', 'wv_calibration', 'simple_coordinate_conversion_table']
Module contents
Unit tests for GMS reader.
satpy.tests.reader_tests.modis_tests package
Submodules
satpy.tests.reader_tests.modis_tests._modis_fixtures module
MODIS L1b and L2 test fixtures.
- satpy.tests.reader_tests.modis_tests._modis_fixtures._add_variable_to_file(h, var_name, var_info)[source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._create_core_metadata(file_shortname: str) → str [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._create_struct_metadata(geo_resolution: int) → str [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._create_struct_metadata_cmg(ftype: str) → str [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._generate_angle_data(resolution: int) → ndarray [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._generate_lonlat_data(resolution: int) → tuple[ndarray, ndarray] [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._generate_visible_data(resolution: int, num_bands: int, dtype=<class 'numpy.uint16'>) → ndarray [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._generate_visible_uncertainty_data(shape: tuple) → ndarray [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._get_angles_variable_info(resolution: int) → dict [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._get_basic_variable_info(var_name: str, resolution: int) → dict [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._get_cloud_mask_variable_info(var_name: str, resolution: int) → dict [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._get_emissive_variable_info(var_name: str, resolution: int, bands: list[str])[source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._get_l1b_geo_variable_info(filename: str, geo_resolution: int, include_angles: bool = True) → dict [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._get_l3_refl_variable_info(var_name: str) → dict [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._get_lonlat_variable_info(resolution: int) → dict [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._get_visible_variable_info(var_name: str, resolution: int, bands: list[str])[source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures._shape_for_resolution(resolution: int) → tuple[int, int] [source]
- satpy.tests.reader_tests.modis_tests._modis_fixtures.create_hdfeos_test_file(filename: str, variable_infos: dict, struct_meta: str | None = None, core_meta: str | None = None, archive_meta: str | None = None) → None [source]
Create a fake MODIS L1b HDF4 file with headers.
- Parameters:
filename – Full path of filename to be created.
variable_infos – Dictionary mapping HDF4 variable names to dictionary of variable information (see
_add_variable_to_file
).
struct_meta – Contents of the ‘StructMetadata.0’ header.
core_meta – Contents of the ‘CoreMetadata.0’ header.
archive_meta – Contents of the ‘ArchiveMetadata.0’ header.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.generate_imapp_filename(suffix)[source]
Generate a filename that follows IMAPP MODIS L1b convention.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.generate_nasa_l1b_filename(prefix)[source]
Generate a filename that follows NASA MODIS L1b convention.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.generate_nasa_l2_filename(prefix: str) → str [source]
Generate a file name that follows MODIS 35 L2 convention in a temporary directory.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.generate_nasa_l3_filename(prefix: str) → str [source]
Generate a file name that follows MODIS 09 L3 convention in a temporary directory.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l1b_imapp_1000m_file(tmpdir_factory) → list[str] [source]
Create a single MOD021KM file following IMAPP file scheme.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l1b_imapp_geo_file(tmpdir_factory) → list[str] [source]
Create a single geo file following standard IMAPP file scheme.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l1b_nasa_1km_mod03_files(modis_l1b_nasa_mod021km_file, modis_l1b_nasa_mod03_file) → list[str] [source]
Create input files including the 1KM and MOD03 files.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l1b_nasa_mod021km_file(tmpdir_factory) → list[str] [source]
Create a single MOD021KM file following standard NASA file scheme.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l1b_nasa_mod02hkm_file(tmpdir_factory) → list[str] [source]
Create a single MOD02HKM file following standard NASA file scheme.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l1b_nasa_mod02qkm_file(tmpdir_factory) → list[str] [source]
Create a single MOD02QKM file following standard NASA file scheme.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l1b_nasa_mod03_file(tmpdir_factory) → list[str] [source]
Create a single MOD03 file following standard NASA file scheme.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l2_imapp_mask_byte1_file(tmpdir_factory) → list[str] [source]
Create a single IMAPP mask_byte1 L2 HDF4 file with headers.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l2_imapp_mask_byte1_geo_files(modis_l2_imapp_mask_byte1_file, modis_l1b_nasa_mod03_file) → list[str] [source]
Create the IMAPP mask_byte1 and geo HDF4 files.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l2_imapp_snowmask_file(tmpdir_factory) → list[str] [source]
Create a single IMAPP snowmask L2 HDF4 file with headers.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l2_imapp_snowmask_geo_files(modis_l2_imapp_snowmask_file, modis_l1b_nasa_mod03_file) → list[str] [source]
Create the IMAPP snowmask and geo HDF4 files.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l2_nasa_mod06_file(tmpdir_factory) → list[str] [source]
Create a single MOD06 L2 HDF4 file with headers.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l2_nasa_mod35_file(tmpdir_factory) → list[str] [source]
Create a single MOD35 L2 HDF4 file with headers.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l2_nasa_mod35_mod03_files(modis_l2_nasa_mod35_file, modis_l1b_nasa_mod03_file) → list[str] [source]
Create a MOD35 L2 HDF4 file and MOD03 L1b geolocation file.
- satpy.tests.reader_tests.modis_tests._modis_fixtures.modis_l3_file(tmpdir_factory, f_prefix, var_name, f_short)[source]
Create a MODIS L3 file of the desired type.
satpy.tests.reader_tests.modis_tests.conftest module
Setup and configuration for all reader tests.
satpy.tests.reader_tests.modis_tests.test_modis_l1b module
Unit tests for MODIS L1b HDF reader.
- class satpy.tests.reader_tests.modis_tests.test_modis_l1b.TestModisL1b[source]
Bases:
object
Test MODIS L1b reader.
- test_load_longitude_latitude(input_files, has_5km, has_500, has_250, default_res)[source]
Test that longitude and latitude datasets are loaded correctly.
- test_load_sat_zenith_angle(modis_l1b_nasa_mod021km_file)[source]
Test loading satellite zenith angle band.
satpy.tests.reader_tests.modis_tests.test_modis_l2 module
Unit tests for MODIS L2 HDF reader.
- class satpy.tests.reader_tests.modis_tests.test_modis_l2.TestModisL2[source]
Bases:
object
Test MODIS L2 reader.
- test_load_category_dataset(input_files, loadables, request_resolution, exp_resolution, exp_area)[source]
Test loading category products.
- test_load_l2_dataset(input_files, loadables, exp_resolution, exp_area, exp_value)[source]
Load and check an L2 variable.
satpy.tests.reader_tests.modis_tests.test_modis_l3 module
Unit tests for MODIS L3 HDF reader.
Module contents
Unit tests for MODIS readers.
This subdirectory mostly exists to have MODIS-based pytest fixtures only loaded for MODIS tests.
satpy.tests.reader_tests.test_clavrx package
Submodules
satpy.tests.reader_tests.test_clavrx.test_clavrx_geohdf module
Module for testing the satpy.readers.clavrx module.
- class satpy.tests.reader_tests.test_clavrx.test_clavrx_geohdf.FakeHDF4FileHandlerGeo(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF4FileHandler
Swap-in HDF4 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_clavrx.test_clavrx_geohdf.TestCLAVRXReaderGeo(methodName='runTest')[source]
Bases:
TestCase
Test CLAVR-X Reader with Geo files.
Test exception raised when no donor file is available.
- yaml_file = 'clavrx.yaml'
satpy.tests.reader_tests.test_clavrx.test_clavrx_nc module
Module for testing the satpy.readers.clavrx module.
- class satpy.tests.reader_tests.test_clavrx.test_clavrx_nc.TestCLAVRXReaderGeo[source]
Bases:
object
Test CLAVR-X Reader with Geo files.
- test_available_datasets(filenames, expected_datasets)[source]
Test that variables are dynamically discovered.
- test_load_all_new_donor(filenames, loadable_ids)[source]
Test loading all test datasets with new donor.
- test_scale_data(filenames, loadable_ids)[source]
Test that data is scaled when necessary and that unscaled data are flags.
- test_yaml_datasets(filenames, expected_loadables)[source]
Test available_datasets with fake variables from YAML.
- yaml_file = 'clavrx.yaml'
satpy.tests.reader_tests.test_clavrx.test_clavrx_polarhdf module
Module for testing the satpy.readers.clavrx module.
- class satpy.tests.reader_tests.test_clavrx.test_clavrx_polarhdf.FakeHDF4FileHandlerPolar(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF4FileHandler
Swap-in HDF4 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_clavrx.test_clavrx_polarhdf.TestCLAVRXReaderPolar(methodName='runTest')[source]
Bases:
TestCase
Test CLAVR-X Reader with Polar files.
- yaml_file = 'clavrx.yaml'
Module contents
The clavrx reader tests package.
Submodules
satpy.tests.reader_tests._li_test_utils module
Common utility modules used for LI mock-oriented unit tests.
- class satpy.tests.reader_tests._li_test_utils.FakeLIFileHandlerBase(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Class for faking the NetCDF4 Filehandler.
Get fake file content from ‘get_test_content’.
- get_test_content(filename, filename_info, filetype_info)[source]
Get the content of the test data.
Here we generate the default content to provide, depending on the given filename info.
- schema_parameters = None
- satpy.tests.reader_tests._li_test_utils.accumulation_dimensions(nacc, nobs)[source]
Set dimensions for the accumulated products.
- satpy.tests.reader_tests._li_test_utils.add_attributes(attribs, ignored_attrs, desc)[source]
Add all the custom properties directly as attributes.
- satpy.tests.reader_tests._li_test_utils.extract_filetype_info(filetype_infos, filetype)[source]
Extract Satpy-conformant filetype_info from the filetype_infos fixture.
- satpy.tests.reader_tests._li_test_utils.fci_grid_definition(axis, nobs)[source]
FCI grid definition on X or Y axis.
- satpy.tests.reader_tests._li_test_utils.get_product_schema(pname, settings=None)[source]
Retrieve an LI product schema given its name.
- satpy.tests.reader_tests._li_test_utils.l2_af_schema(settings=None)[source]
Define schema for LI L2 AF product.
- satpy.tests.reader_tests._li_test_utils.l2_afa_schema(settings=None)[source]
Define schema for LI L2 AFA product.
- satpy.tests.reader_tests._li_test_utils.l2_afr_schema(settings=None)[source]
Define schema for LI L2 AFR product.
- satpy.tests.reader_tests._li_test_utils.l2_le_schema(settings=None)[source]
Define schema for LI L2 LE product.
- satpy.tests.reader_tests._li_test_utils.l2_lef_schema(settings=None)[source]
Define schema for LI L2 LEF product.
- satpy.tests.reader_tests._li_test_utils.l2_lfl_schema(settings=None)[source]
Define schema for LI L2 LFL product.
- satpy.tests.reader_tests._li_test_utils.l2_lgr_schema(settings=None)[source]
Define schema for LI L2 LGR product.
- satpy.tests.reader_tests._li_test_utils.mtg_geos_projection()[source]
MTG geos projection definition.
satpy.tests.reader_tests.conftest module
Setup and configuration for all reader tests.
satpy.tests.reader_tests.test_aapp_l1b module
Test module for the avhrr aapp l1b reader.
- class satpy.tests.reader_tests.test_aapp_l1b.TestAAPPL1BAllChannelsPresent(methodName='runTest')[source]
Bases:
TestCase
Test the filehandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
Test reading the lon and lats.
- class satpy.tests.reader_tests.test_aapp_l1b.TestAAPPL1BChannel3AMissing(methodName='runTest')[source]
Bases:
TestCase
Test the filehandler when channel 3a is missing.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_aapp_l1b.TestNegativeCalibrationSlope(methodName='runTest')[source]
Bases:
TestCase
Case for testing correct behaviour when the data has negative slope2 coefficients.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_aapp_mhs_amsub_l1c module
Test module for the MHS AAPP level-1c reader.
- class satpy.tests.reader_tests.test_aapp_mhs_amsub_l1c.TestMHS_AMSUB_AAPPL1CReadData(methodName='runTest')[source]
Bases:
TestCase
Test the filehandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
Test reading the longitudes and latitudes.
satpy.tests.reader_tests.test_abi_l1b module
The abi_l1b reader tests package.
- class satpy.tests.reader_tests.test_abi_l1b.Test_NC_ABI_L1B[source]
Bases:
object
Test the NC_ABI_L1B reader.
- pytestmark = [Mark(name='parametrize', args=('c01_data_arr', [<LazyFixture "c01_rad">, <LazyFixture "c01_rad_h5netcdf">]), kwargs={})]
- satpy.tests.reader_tests.test_abi_l1b._create_fake_rad_dataarray(rad: DataArray | None = None, resolution: int = 2000) → DataArray [source]
- satpy.tests.reader_tests.test_abi_l1b._create_fake_rad_dataset(rad: DataArray, resolution: int) → Dataset [source]
- satpy.tests.reader_tests.test_abi_l1b._create_reader_for_data(tmp_path: Path, channel_name: str, rad: DataArray | None, resolution: int, reader_kwargs: dict[str, Any] | None = None) → FileYAMLReader [source]
- satpy.tests.reader_tests.test_abi_l1b._get_and_check_array(data_arr: DataArray, exp_dtype: DTypeLike) → ndarray [source]
- satpy.tests.reader_tests.test_abi_l1b.c01_rad_h5netcdf(tmp_path) → DataArray [source]
Load c01 radiances through h5netcdf.
- satpy.tests.reader_tests.test_abi_l1b.c07_bt_creator(tmp_path) → Callable [source]
Create a loader for c07 brightness temperatures.
- satpy.tests.reader_tests.test_abi_l1b.generate_l1b_filename(chan_name: str) → str [source]
Generate an L1b filename.
- satpy.tests.reader_tests.test_abi_l1b.test_file_patterns_match(channel, suffix)[source]
Test that the configured file patterns work.
satpy.tests.reader_tests.test_abi_l2_nc module
The abi_l2_nc reader tests package.
- class satpy.tests.reader_tests.test_abi_l2_nc.TestMCMIPReading[source]
Bases:
object
Test cases of the MCMIP file format.
- class satpy.tests.reader_tests.test_abi_l2_nc.Test_NC_ABI_L2_area_AOD[source]
Bases:
object
Test the NC_ABI_L2 reader for the AOD product.
- class satpy.tests.reader_tests.test_abi_l2_nc.Test_NC_ABI_L2_area_fixedgrid[source]
Bases:
object
Test the NC_ABI_L2 reader.
- class satpy.tests.reader_tests.test_abi_l2_nc.Test_NC_ABI_L2_area_latlon[source]
Bases:
object
Test the NC_ABI_L2 reader.
satpy.tests.reader_tests.test_acspo module
Module for testing the satpy.readers.acspo module.
- class satpy.tests.reader_tests.test_acspo.FakeNetCDF4FileHandler2(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap-in NetCDF4 File Handler.
Get fake file content from ‘get_test_content’.
satpy.tests.reader_tests.test_agri_l1 module
The agri_l1 reader tests package.
- class satpy.tests.reader_tests.test_agri_l1.FakeHDF5FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_agri_l1.Test_HDF_AGRI_L1_cal[source]
Bases:
object
Test the AGRI L1 reader.
- static _assert_which_channels_are_loaded(available_datasets, band_names, resolution_to_test)[source]
- test_agri_for_one_resolution(resolution_to_test, satname)[source]
Test loading data when only one resolution is available.
- test_fy4a_channels_are_loaded_with_right_resolution()[source]
Test all channels are loaded with the right resolution.
- yaml_file = 'agri_fy4a_l1.yaml'
satpy.tests.reader_tests.test_ahi_hrit module
The hrit ahi reader tests package.
- class satpy.tests.reader_tests.test_ahi_hrit.TestHRITJMAFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the HRITJMAFileHandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- _get_acq_time(nlines)[source]
Get sample header entry for scanline acquisition times.
Lines: 1, 21, 41, 61, …, nlines. Times: 1970-01-01 00:00 + (1, 21, 41, 61, …, nlines) seconds.
The interpolated times are therefore expected to be 1970-01-01 + (1, 2, 3, 4, …, nlines) seconds (see the NumPy sketch after this class entry). Note that there will be some floating point inaccuracies, because timestamps are stored with only 6 decimals of precision.
- _get_mda(loff=5500.0, coff=5500.0, nlines=11000, ncols=11000, segno=0, numseg=1, vis=True, platform='Himawari-8')[source]
Create a metadata dict like HRITFileHandler would.
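The interpolation described under _get_acq_time above can be reproduced with a few lines of NumPy; this sketch (values chosen to match the docstring) shows why the expected times are 1, 2, 3, … seconds:

import numpy as np

nlines = 101
known_lines = np.arange(1, nlines + 1, 20)      # 1, 21, 41, ...
known_times = known_lines.astype(float)         # seconds since 1970-01-01
all_lines = np.arange(1, nlines + 1)
interp_times = np.interp(all_lines, known_lines, known_times)
assert np.allclose(interp_times, all_lines)     # 1, 2, 3, ..., nlines seconds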
satpy.tests.reader_tests.test_ahi_hsd module
The ahi_hsd reader tests package.
- class satpy.tests.reader_tests.test_ahi_hsd.TestAHICalibration(methodName='runTest')[source]
Bases:
TestCase
Test case for various AHI calibration types.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_ahi_hsd.TestAHIHSDFileHandler[source]
Bases:
object
Tests for the AHI HSD file handler.
Bases:
TestCase
Test the AHI HSD reader navigation.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
Test region navigation.
Test segment navigation.
- class satpy.tests.reader_tests.test_ahi_hsd.TestNominalTimeCalculator[source]
Bases:
object
Test case for nominal timestamp computation.
- satpy.tests.reader_tests.test_ahi_hsd._create_fake_file_handler(in_fname, filename_info=None, filetype_info=None, fh_kwargs=None)[source]
satpy.tests.reader_tests.test_ahi_l1b_gridded_bin module
The ahi_l1b_gridded_bin reader tests package.
- class satpy.tests.reader_tests.test_ahi_l1b_gridded_bin.TestAHIGriddedArea(methodName='runTest')[source]
Bases:
TestCase
Test the AHI gridded reader definition.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_ahi_l1b_gridded_bin.TestAHIGriddedFileCalibration(methodName='runTest')[source]
Bases:
TestCase
Test case for the file calibration types.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_ahi_l1b_gridded_bin.TestAHIGriddedFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test case for the file reading.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_ahi_l1b_gridded_bin.TestAHIGriddedLUTs(methodName='runTest')[source]
Bases:
TestCase
Test case for downloading and preparing LUTs.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_ahi_l2_nc module
Tests for the Himawari L2 netCDF reader.
- satpy.tests.reader_tests.test_ahi_l2_nc.ahil2_filehandler(fname, platform='h09')[source]
Instantiate a Filehandler.
- satpy.tests.reader_tests.test_ahi_l2_nc.himl2_filename(tmp_path_factory)[source]
Create a fake himawari l2 file.
- satpy.tests.reader_tests.test_ahi_l2_nc.himl2_filename_bad(tmp_path_factory)[source]
Create a fake himawari l2 file.
- satpy.tests.reader_tests.test_ahi_l2_nc.test_ahi_l2_area_def(himl2_filename, caplog)[source]
Test reader handles area definition correctly.
- satpy.tests.reader_tests.test_ahi_l2_nc.test_bad_area_name(himl2_filename_bad)[source]
Check case where area name is not correct.
satpy.tests.reader_tests.test_ami_l1b module
The ami_l1b reader tests package.
- class satpy.tests.reader_tests.test_ami_l1b.FakeDataset(info, attrs)[source]
Bases:
object
Mimic xarray Dataset object.
Initialize test data.
- class satpy.tests.reader_tests.test_ami_l1b.TestAMIL1bNetCDF(methodName='runTest')[source]
Bases:
TestAMIL1bNetCDFBase
Test the AMI L1b reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_ami_l1b.TestAMIL1bNetCDFBase(methodName='runTest')[source]
Bases:
TestCase
Common setup for AMI L1b NetCDF tests.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_ami_l1b.TestAMIL1bNetCDFIRCal(methodName='runTest')[source]
Bases:
TestAMIL1bNetCDFBase
Test IR specific things about the AMI reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_amsr2_l1b module
Module for testing the satpy.readers.amsr2_l1b module.
- class satpy.tests.reader_tests.test_amsr2_l1b.FakeHDF5FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_amsr2_l1b.TestAMSR2L1BReader(methodName='runTest')[source]
Bases:
TestCase
Test AMSR2 L1B Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'amsr2_l1b.yaml'
satpy.tests.reader_tests.test_amsr2_l2 module
Unit tests for AMSR L2 reader.
- class satpy.tests.reader_tests.test_amsr2_l2.FakeHDF5FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_amsr2_l2.TestAMSR2L2Reader(methodName='runTest')[source]
Bases:
TestCase
Test AMSR2 L2 Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'amsr2_l2.yaml'
satpy.tests.reader_tests.test_amsr2_l2_gaasp module
Tests for the ‘amsr2_l2_gaasp’ reader.
- class satpy.tests.reader_tests.test_amsr2_l2_gaasp.TestGAASPReader[source]
Bases:
object
Tests for the GAASP reader.
- test_available_datasets(filenames, expected_datasets)[source]
Test that variables are dynamically discovered.
- yaml_file = 'amsr2_l2_gaasp.yaml'
- satpy.tests.reader_tests.test_amsr2_l2_gaasp._create_gridded_gaasp_dataset(filename)[source]
Represent files with gridded products.
- satpy.tests.reader_tests.test_amsr2_l2_gaasp._create_one_res_gaasp_dataset(filename)[source]
Represent files with one resolution of variables in them (e.g. SOIL).
- satpy.tests.reader_tests.test_amsr2_l2_gaasp._create_two_res_gaasp_dataset(filename)[source]
Represent files with two resolutions of variables in them (e.g. OCEAN).
satpy.tests.reader_tests.test_ascat_l2_soilmoisture_bufr module
Unit tests for the ASCAT scatterometer soil moisture BUFR reader.
- class satpy.tests.reader_tests.test_ascat_l2_soilmoisture_bufr.TesitAscatL2SoilmoistureBufr(methodName='runTest')[source]
Bases:
TestCase
Test the ASCAT soil moisture loader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_atms_l1b_nc module
The atms_l1b_nc reader tests package.
- class satpy.tests.reader_tests.test_atms_l1b_nc.TestAtsmsL1bNCFileHandler[source]
Bases:
object
Test the AtmsL1bNCFileHandler reader.
satpy.tests.reader_tests.test_atms_sdr_hdf5 module
Module for testing the ATMS SDR HDF5 reader.
- class satpy.tests.reader_tests.test_atms_sdr_hdf5.FakeHDF5_ATMS_SDR_FileHandler(filename, filename_info, filetype_info, include_factors=True)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Create fake file handler.
- static _add_geolocation_info_to_file_content(file_content, filename, data_var_prefix, num_grans)[source]
- _add_granule_specific_info_to_file_content(file_content, dataset_group, num_granules, num_scans_per_granule, gran_group_prefix)[source]
- _num_of_bands = 22
- _num_scans_per_gran = [12]
- _num_test_granules = 1
- class satpy.tests.reader_tests.test_atms_sdr_hdf5.TestATMS_SDR_Reader[source]
Bases:
object
Test ATMS SDR Reader.
- test_init_start_end_time()[source]
Test basic init with start and end times around the start/end times of the provided file.
- test_load_all_bands(files, expected)[source]
Load brightness temperatures for all 22 ATMS channels, with/without geolocation.
- yaml_file = 'atms_sdr_hdf5.yaml'
satpy.tests.reader_tests.test_avhrr_l0_hrpt module
Tests for the hrpt reader.
- class satpy.tests.reader_tests.test_avhrr_l0_hrpt.CalibratorPatcher(methodName='runTest')[source]
Bases:
PygacPatcher
Patch pygac.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_avhrr_l0_hrpt.TestHRPTChannel3(methodName='runTest')[source]
Bases:
TestHRPTWithPatchedCalibratorAndFile
Test case for reading calibrated brightness temperature from hrpt data.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_avhrr_l0_hrpt.TestHRPTGetCalibratedBT(methodName='runTest')[source]
Bases:
TestHRPTWithPatchedCalibratorAndFile
Test case for reading calibrated brightness temperature from hrpt data.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_avhrr_l0_hrpt.TestHRPTGetCalibratedReflectances(methodName='runTest')[source]
Bases:
TestHRPTWithPatchedCalibratorAndFile
Test case for reading calibrated reflectances from hrpt data.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_avhrr_l0_hrpt.TestHRPTGetUncalibratedData(methodName='runTest')[source]
Bases:
TestHRPTWithFile
Test case for reading uncalibrated hrpt data.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
Bases:
TestHRPTWithFile
Test case for computing HRPT navigation.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
Prepare the mocks.
Set up the test case.
Check that latitudes are returned properly.
Check that longitudes are returned properly.
- class satpy.tests.reader_tests.test_avhrr_l0_hrpt.TestHRPTReading(methodName='runTest')[source]
Bases:
TestHRPTWithFile
Test case for reading hrpt data.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_avhrr_l0_hrpt.TestHRPTWithFile(methodName='runTest')[source]
Bases:
TestCase
Test base class with writing a fake file.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_avhrr_l0_hrpt.TestHRPTWithPatchedCalibratorAndFile(methodName='runTest')[source]
Bases:
CalibratorPatcher, TestHRPTWithFile
Test case with patched calibration routines and a synthetic file.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_avhrr_l1b_gaclac module
Tests for the avhrr_l1b_gaclac reader (pygac interface).
- class satpy.tests.reader_tests.test_avhrr_l1b_gaclac.GACLACFilePatcher(methodName='runTest')[source]
Bases:
PygacPatcher
Patch pygac.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_avhrr_l1b_gaclac.PygacPatcher(methodName='runTest')[source]
Bases:
TestCase
Patch pygac.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_avhrr_l1b_gaclac.TestGACLACFile(methodName='runTest')[source]
Bases:
GACLACFilePatcher
Test the GACLAC file handler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_avhrr_l1b_gaclac.TestGetDataset(methodName='runTest')[source]
Bases:
GACLACFilePatcher
Test the get_dataset method.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_cmsaf_claas module
Tests for the ‘cmsaf-claas2_l2_nc’ reader.
- class satpy.tests.reader_tests.test_cmsaf_claas.TestCLAAS2MultiFile[source]
Bases:
object
Test reading multiple CLAAS-2 files.
- class satpy.tests.reader_tests.test_cmsaf_claas.TestCLAAS2SingleFile[source]
Bases:
object
Test reading a single CLAAS2 file.
- satpy.tests.reader_tests.test_cmsaf_claas.fake_dataset(start_time_str)[source]
Create a CLAAS-like test dataset.
- satpy.tests.reader_tests.test_cmsaf_claas.fake_file(fake_dataset, encoding, tmp_path)[source]
Write a fake dataset to file.
- satpy.tests.reader_tests.test_cmsaf_claas.fake_files(fake_dataset, encoding, tmp_path)[source]
Write the same fake dataset into two different files.
- satpy.tests.reader_tests.test_cmsaf_claas.start_time(request)[source]
Get start time of the dataset.
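These fixtures follow a common pattern: build an xarray dataset in memory and write it to a temporary NetCDF file for the reader under test to open. A simplified, hypothetical sketch of that pattern (variable and file names invented; the actual fixtures above also take a dataset and encoding):

import pytest
import xarray as xr

@pytest.fixture
def fake_file(tmp_path):
    """Write a tiny CLAAS-like dataset to a temporary NetCDF file."""
    ds = xr.Dataset({"cph": (("y", "x"), [[0, 1], [1, 0]])})
    path = tmp_path / "claas2_fake.nc"  # hypothetical file name
    ds.to_netcdf(path)
    return path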
satpy.tests.reader_tests.test_electrol_hrit module
The HRIT electrol reader tests package.
- class satpy.tests.reader_tests.test_electrol_hrit.TestHRITGOMSEpiFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the HRIT Epilogue FileHandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_electrol_hrit.TestHRITGOMSFileHandler(methodName='runTest')[source]
Bases:
TestCase
A test of the ELECTRO-L main file handler functions.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_electrol_hrit.TestHRITGOMSProFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the HRIT Prologue FileHandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_calib = array([[50, 50, ..., 50], ..., [50, 50, ..., 50]], dtype=int32) (calibration table with every element equal to 50)
- test_img_acq = {'Cel': array of 10 × 0.0, 'StartDelay': array of 10 × 9119019 (int32), 'Status': array of 10 × 2 (uint32), 'TagLength': array of 10 × 24 (uint32), 'TagType': array of 10 × 3 (uint32)}
- test_pro = {'ImageAcquisition': <same as test_img_acq>, 'ImageCalibration': <same as test_calib>, 'SatelliteStatus': <same as test_sat_status>}
- test_sat_status = {'NominalLongitude': 1.3264, 'SatelliteCondition': 1, 'SatelliteID': 19002, 'SatelliteName': b'ELECTRO', 'TagLength': 292, 'TagType': 2, 'TimeOffset': 0.0}
- class satpy.tests.reader_tests.test_electrol_hrit.Testrecarray2dict(methodName='runTest')[source]
Bases:
TestCase
Test the function that converts numpy record arrays into dicts for use within Satpy.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
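As a rough illustration of what such a record-array-to-dict conversion does (sample values taken from test_sat_status above; the helper shown here is a simplified stand-in, not the actual Satpy implementation):

import numpy as np

rec = np.array(
    [(19002, b"ELECTRO", 1.3264)],
    dtype=[("SatelliteID", ">i4"), ("SatelliteName", "S7"), ("NominalLongitude", ">f8")],
)

def recarray2dict(arr):
    """Turn a one-element record array into a plain {field: scalar} dict."""
    return {name: arr[name].item() for name in arr.dtype.names}

print(recarray2dict(rec))
# {'SatelliteID': 19002, 'SatelliteName': b'ELECTRO', 'NominalLongitude': 1.3264}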
satpy.tests.reader_tests.test_epic_l1b_h5 module
The epic_l1b_h5 reader tests package.
- class satpy.tests.reader_tests.test_epic_l1b_h5.TestEPICL1bReader[source]
Bases:
object
Test the EPIC L1b HDF5 reader.
- test_bad_calibration(setup_hdf5_file)[source]
Test that an error is raised if a bad calibration is used.
satpy.tests.reader_tests.test_eps_l1b module
Test the eps l1b format.
- class satpy.tests.reader_tests.test_eps_l1b.BaseTestCaseEPSL1B(methodName='runTest')[source]
Bases:
TestCase
Base class for EPS l1b test case.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_eps_l1b.TestEPSL1B(methodName='runTest')[source]
Bases:
BaseTestCaseEPSL1B
Test the filehandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
Test the navigation.
- class satpy.tests.reader_tests.test_eps_l1b.TestWrongSamplingEPSL1B(methodName='runTest')[source]
Bases:
BaseTestCaseEPSL1B
Test the filehandler on a corrupt file.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_eps_l1b.TestWrongScanlinesEPSL1B(methodName='runTest')[source]
Bases:
BaseTestCaseEPSL1B
Test the filehandler on a corrupt file.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_eum_base module
EUMETSAT base reader tests package.
- class satpy.tests.reader_tests.test_eum_base.TestGetServiceMode(methodName='runTest')[source]
Bases:
TestCase
Test the get_service_mode function.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_get_seviri_service_mode_fes()[source]
Test fetching of SEVIRI service mode information for FES.
- test_get_seviri_service_mode_iodc_E0415()[source]
Test fetching of SEVIRI service mode information for IODC at 41.5 degrees East.
- test_get_seviri_service_mode_iodc_E0455()[source]
Test fetching of SEVIRI service mode information for IODC at 45.5 degrees East.
- test_get_seviri_service_mode_rss()[source]
Test fetching of SEVIRI service mode information for RSS.
- class satpy.tests.reader_tests.test_eum_base.TestMakeTimeCdsDictionary(methodName='runTest')[source]
Bases:
TestCase
Test TestMakeTimeCdsDictionary.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_eum_base.TestMakeTimeCdsRecarray(methodName='runTest')[source]
Bases:
TestCase
Test TestMakeTimeCdsRecarray.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_eum_base.TestRecarray2Dict(methodName='runTest')[source]
Bases:
TestCase
Test TestRecarray2Dict.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_fci_l1c_nc module
Tests for the ‘fci_l1c_nc’ reader.
- class satpy.tests.reader_tests.test_fci_l1c_nc.FakeFCIFileHandlerBase(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Class for faking the NetCDF4 Filehandler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_fci_l1c_nc.FakeFCIFileHandlerFDHSI(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeFCIFileHandlerBase
Mock FDHSI data.
Get fake file content from ‘get_test_content’.
- chan_patterns: Dict[str, Dict[str, List[int] | str]] = {'ir_{:>02d}': {'channels': [38, 87, 97, 105, 123, 133], 'grid_type': '2km'}, 'nir_{:>02d}': {'channels': [13, 16, 22], 'grid_type': '1km'}, 'vis_{:>02d}': {'channels': [4, 5, 6, 8, 9], 'grid_type': '1km'}, 'wv_{:>02d}': {'channels': [63, 73], 'grid_type': '2km'}}
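The chan_patterns keys above are plain str.format templates; expanding them yields the channel dataset names used elsewhere in these tests, e.g.:

# Two of the patterns from the dict above, expanded with their channel lists.
chan_patterns = {
    "ir_{:>02d}": [38, 87, 97, 105, 123, 133],
    "vis_{:>02d}": [4, 5, 6, 8, 9],
}
names = [pattern.format(ch) for pattern, channels in chan_patterns.items() for ch in channels]
print(names)  # ['ir_38', 'ir_87', ..., 'vis_04', 'vis_05', ...]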
- satpy.tests.reader_tests.test_fci_l1c_nc.FakeFCIFileHandlerFDHSI_fixture()[source]
Get a fixture for the fake FDHSI filehandler, including channel and file names.
- class satpy.tests.reader_tests.test_fci_l1c_nc.FakeFCIFileHandlerHRFI(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeFCIFileHandlerBase
Mock HRFI data.
Get fake file content from ‘get_test_content’.
- satpy.tests.reader_tests.test_fci_l1c_nc.FakeFCIFileHandlerHRFI_fixture()[source]
Get a fixture for the fake HRFI filehandler, including channel and file names.
- class satpy.tests.reader_tests.test_fci_l1c_nc.FakeFCIFileHandlerWithBadData(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeFCIFileHandlerFDHSI
Mock bad data.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_fci_l1c_nc.FakeFCIFileHandlerWithBadIDPFData(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeFCIFileHandlerFDHSI
Mock bad data for IDPF to-dos.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_fci_l1c_nc.FakeH5Variable(data, dims=(), attrs=None)[source]
Bases:
object
Class for faking h5netcdf.Variable class.
Initialize the class.
- property ndim
Get the number of dimensions.
- property shape
Get the shape.
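A minimal sketch of this kind of fake variable (simplified from the class above; only the surface a reader typically touches is mocked):

import numpy as np

class FakeVariable:
    """Mimic an h5netcdf-style variable: data plus dims/attrs metadata."""

    def __init__(self, data, dims=(), attrs=None):
        self.data = np.asarray(data)
        self.dims = dims
        self.attrs = attrs or {}

    @property
    def ndim(self):
        """Get the number of dimensions."""
        return self.data.ndim

    @property
    def shape(self):
        """Get the shape."""
        return self.data.shape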
- class satpy.tests.reader_tests.test_fci_l1c_nc.TestFCIL1cNCReader[source]
Bases:
object
Test FCI L1c NetCDF reader with nominal data.
- expected_pos_info_for_filetype = {'fdhsi': {'1km': {'end_position_row': 200, 'grid_width': 11136, 'segment_height': 200, 'start_position_row': 1}, '2km': {'end_position_row': 100, 'grid_width': 5568, 'segment_height': 100, 'start_position_row': 1}}, 'hrfi': {'1km': {'end_position_row': 200, 'grid_width': 11136, 'segment_height': 200, 'start_position_row': 1}, '500m': {'end_position_row': 400, 'grid_width': 22272, 'segment_height': 400, 'start_position_row': 1}}}
- fh_param_for_filetype = {'fdhsi': {'channels': {'solar': ['vis_04', 'vis_05', 'vis_06', 'vis_08', 'vis_09', 'nir_13', 'nir_16', 'nir_22'], 'solar_grid_type': ['1km', '1km', '1km', '1km', '1km', '1km', '1km', '1km'], 'terran': ['ir_38', 'wv_63', 'wv_73', 'ir_87', 'ir_97', 'ir_105', 'ir_123', 'ir_133'], 'terran_grid_type': ['2km', '2km', '2km', '2km', '2km', '2km', '2km', '2km']}, 'filenames': ['W_XX-EUMETSAT-Darmstadt,IMG+SAT,MTI1+FCI-1C-RRAD-FDHSI-FD--CHK-BODY--L2P-NC4E_C_EUMT_20170410114434_GTT_DEV_20170410113925_20170410113934_N__C_0070_0067.nc']}, 'hrfi': {'channels': {'solar': ['vis_06', 'nir_22'], 'solar_grid_type': ['500m', '500m'], 'terran': ['ir_38', 'ir_105'], 'terran_grid_type': ['1km', '1km']}, 'filenames': ['W_XX-EUMETSAT-Darmstadt,IMG+SAT,MTI1+FCI-1C-RRAD-HRFI-FD--CHK-BODY--L2P-NC4E_C_EUMT_20170410114434_GTT_DEV_20170410113925_20170410113934_N__C_0070_0067.nc']}}
- test_area_definition_computation(reader_configs, fh_param, expected_area)[source]
Test that the geolocation computation is correct.
- test_file_pattern_for_TRAIL_file(reader_configs, filenames)[source]
Test file pattern matching for TRAIL files, which should not be picked up.
- test_get_segment_position_info(reader_configs, fh_param, expected_pos_info)[source]
Test the segment position info method.
- test_load_quality_only(reader_configs, fh_param, expected_res_n)[source]
Test that loading quality only works.
- test_load_reflectance(reader_configs, fh_param, expected_res_n)[source]
Test loading with reflectance.
- class satpy.tests.reader_tests.test_fci_l1c_nc.TestFCIL1cNCReaderBadData[source]
Bases:
object
Test the FCI L1c NetCDF Reader for bad data input.
- class satpy.tests.reader_tests.test_fci_l1c_nc.TestFCIL1cNCReaderBadDataFromIDPF[source]
Bases:
object
Test the FCI L1c NetCDF Reader for bad data input, specifically the IDPF issues.
- satpy.tests.reader_tests.test_fci_l1c_nc._get_reader_with_filehandlers(filenames, reader_configs)[source]
- satpy.tests.reader_tests.test_fci_l1c_nc._get_test_geolocation_for_channel(data, ch_str, grid_type, n_rows_cols)[source]
- satpy.tests.reader_tests.test_fci_l1c_nc._get_test_image_data_for_channel(data, ch_str, n_rows_cols)[source]
- satpy.tests.reader_tests.test_fci_l1c_nc._get_test_index_map_for_channel(data, ch_str, n_rows_cols)[source]
- satpy.tests.reader_tests.test_fci_l1c_nc._get_test_pixel_quality_for_channel(data, ch_str, n_rows_cols)[source]
- satpy.tests.reader_tests.test_fci_l1c_nc._get_test_segment_position_for_channel(data, ch_str, n_rows_cols)[source]
- satpy.tests.reader_tests.test_fci_l1c_nc.clear_cache(reader)[source]
Clear the cache for the file handlers in the reader.
satpy.tests.reader_tests.test_fci_l2_nc module
The fci_cld_l2_nc reader tests package.
- class satpy.tests.reader_tests.test_fci_l2_nc.TestFciL2NCAMVFileHandler[source]
Bases:
object
Test the FciL2NCAMVFileHandler reader.
- class satpy.tests.reader_tests.test_fci_l2_nc.TestFciL2NCFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the FciL2NCFileHandler reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_dataset_with_invalid_filekey()[source]
Test the correct execution of the get_dataset function with an invalid nc_key.
- test_dataset_with_layer()[source]
Check the correct execution of the get_dataset function with a valid nc_key & layer.
- test_dataset_with_scalar()[source]
Test the execution of the get_dataset function for scalar values.
- test_dataset_with_total_cot()[source]
Test the correct execution of the get_dataset function for total COT (add contributions from two layers).
- test_emumerations()[source]
Test the conversion of enumerated type information into flag_values and flag_meanings.
- class satpy.tests.reader_tests.test_fci_l2_nc.TestFciL2NCReadingByteData(methodName='runTest')[source]
Bases:
TestCase
Test the FciL2NCFileHandler when reading and extracting byte data.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_fci_l2_nc.TestFciL2NCSegmentFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the FciL2NCSegmentFileHandler reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_dataset_slicing_catid()[source]
Test the correct execution of the _slice_dataset function with ‘category_id’ set.
- test_dataset_slicing_chid_catid()[source]
Test the correct execution of the _slice_dataset function with ‘channel_id’ and ‘category_id’ set.
- test_dataset_slicing_irid()[source]
Test the correct execution of the _slice_dataset function with ‘ir_channel_id’ set.
- test_dataset_slicing_visid_catid()[source]
Test the correct execution of the _slice_dataset function with ‘vis_channel_id’ and ‘category_id’ set.
- test_dataset_with_adef()[source]
Test the correct execution of the get_dataset function with with_area_definition=True.
- test_dataset_with_adef_and_wrongs_dims()[source]
Test the correct execution of the get_dataset function with dims that don’t match expected AreaDefinition.
satpy.tests.reader_tests.test_fy4_base module
The fy4_base reader tests package.
- class satpy.tests.reader_tests.test_fy4_base.Test_FY4Base[source]
Bases:
object
Tests for the FengYun-4 base class, covering components not exercised by the AGRI/GHI tests.
- test_badcalibration()[source]
Test the case where a bad calibration type is passed; radiance is not supported.
satpy.tests.reader_tests.test_generic_image module
Unit tests for the generic image reader.
- class satpy.tests.reader_tests.test_generic_image.TestGenericImage(methodName='runTest')[source]
Bases:
TestCase
Test generic image reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_geocat module
Module for testing the satpy.readers.geocat module.
- class satpy.tests.reader_tests.test_geocat.FakeNetCDF4FileHandler2(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap-in NetCDF4 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_geocat.TestGEOCATReader(methodName='runTest')[source]
Bases:
TestCase
Test GEOCAT Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'geocat.yaml'
satpy.tests.reader_tests.test_geos_area module
Geostationary projection utility module tests package.
- class satpy.tests.reader_tests.test_geos_area.TestGEOSProjectionUtil(methodName='runTest')[source]
Bases:
TestCase
Tests for the area utilities.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_get_resolution_and_unit_strings_in_km()[source]
Test the resolution and unit strings function for a km resolution.
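A hedged sketch of what such a helper plausibly computes (an assumed implementation for illustration, not the actual satpy.readers code): resolutions of 1 km and above are reported in km, finer ones in m.

def get_resolution_and_unit_strings(resolution):
    """Return e.g. ('2', 'km') for 2000 m and ('500', 'm') for 500 m (assumed)."""
    if resolution >= 1000:
        return str(round(resolution / 1000)), "km"
    return str(round(resolution)), "m"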
satpy.tests.reader_tests.test_gerb_l2_hr_h5 module
Unit tests for GERB L2 HR HDF5 reader.
- satpy.tests.reader_tests.test_gerb_l2_hr_h5.gerb_l2_hr_h5_dummy_file(tmp_path_factory)[source]
Create a dummy HDF5 file for the GERB L2 HR product.
- satpy.tests.reader_tests.test_gerb_l2_hr_h5.make_h5_null_string(length)[source]
Make an HDF5 type for a NULL-terminated string of fixed length.
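One plausible way to build such a type with h5py's low-level API (a sketch under the assumption that the helper wraps h5py.h5t; the actual implementation may differ):

import h5py

def make_h5_null_string(length):
    """Create a fixed-length, NULL-terminated HDF5 string type."""
    tid = h5py.h5t.C_S1.copy()       # start from the C string base type
    tid.set_size(length)             # fixed byte length
    tid.set_strpad(h5py.h5t.STR_NULLTERM)
    return tid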
satpy.tests.reader_tests.test_ghi_l1 module
The ghi_l1 reader tests package.
- class satpy.tests.reader_tests.test_ghi_l1.FakeHDF5FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_ghi_l1.Test_HDF_GHI_L1_cal[source]
Bases:
object
Test the GHI L1 reader.
- static _assert_which_channels_are_loaded(available_datasets, band_names, resolution_to_test)[source]
- test_ghi_channels_are_loaded_with_right_resolution()[source]
Test all channels are loaded with the right resolution.
- test_ghi_for_one_resolution(resolution_to_test)[source]
Test loading data when only one resolution is available.
- yaml_file = 'ghi_l1.yaml'
satpy.tests.reader_tests.test_ghrsst_l2 module
Module for testing the satpy.readers.ghrsst_l2 module.
- class satpy.tests.reader_tests.test_ghrsst_l2.TestGHRSSTL2Reader[source]
Bases:
object
Test Sentinel-3 SST L2 reader.
- test_get_start_and_end_times(tmp_path)[source]
Test retrieval of the start and end times from the netCDF file.
satpy.tests.reader_tests.test_glm_l2 module
The glm_l2 reader tests package.
- class satpy.tests.reader_tests.test_glm_l2.TestGLML2FileHandler(methodName='runTest')[source]
Bases:
TestCase
Tests for the GLM L2 reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_glm_l2.TestGLML2Reader(methodName='runTest')[source]
Bases:
TestCase
Test high-level reading functionality of GLM L2 reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'glm_l2.yaml'
satpy.tests.reader_tests.test_goes_imager_hrit module
The hrit msg reader tests package.
- class satpy.tests.reader_tests.test_goes_imager_hrit.TestGVARFloat(methodName='runTest')[source]
Bases:
TestCase
GVAR float tester.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_goes_imager_hrit.TestHRITGOESFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the HRITFileHandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_goes_imager_hrit.TestHRITGOESPrologueFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the HRITFileHandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_goes_imager_hrit.TestMakeSGSTime(methodName='runTest')[source]
Bases:
TestCase
SGS Time tester.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_goes_imager_nc_eum module
Tests for the goes imager nc reader (EUMETSAT variant).
- class satpy.tests.reader_tests.test_goes_imager_nc_eum.GOESNCEUMFileHandlerRadianceTest(methodName='runTest')[source]
Bases:
TestCase
Tests for the radiances.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- longMessage = True
- class satpy.tests.reader_tests.test_goes_imager_nc_eum.GOESNCEUMFileHandlerReflectanceTest(methodName='runTest')[source]
Bases:
TestCase
Testing the reflectances.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- longMessage = True
satpy.tests.reader_tests.test_goes_imager_nc_noaa module
Tests for the goes imager nc reader (NOAA CLASS variant).
- class satpy.tests.reader_tests.test_goes_imager_nc_noaa.GOESNCBaseFileHandlerTest(methodName='runTest')[source]
Bases:
TestCase
Testing the file handler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- longMessage = True
- class satpy.tests.reader_tests.test_goes_imager_nc_noaa.GOESNCFileHandlerTest(methodName='runTest')[source]
Bases:
TestCase
Test the file handler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- longMessage = True
- class satpy.tests.reader_tests.test_goes_imager_nc_noaa.TestChannelIdentification[source]
Bases:
object
Test identification of channel type.
satpy.tests.reader_tests.test_gpm_imerg module
Unit tests for the GPM IMERG reader.
- class satpy.tests.reader_tests.test_gpm_imerg.FakeHDF5FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_gpm_imerg.TestHdf5IMERG(methodName='runTest')[source]
Bases:
TestCase
Test the GPM IMERG reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'gpm_imerg.yaml'
satpy.tests.reader_tests.test_grib module
Module for testing the satpy.readers.grib module.
- class satpy.tests.reader_tests.test_grib.FakeGRIB(messages=None, proj_params=None, latlons=None)[source]
Bases:
object
Fake GRIB file returned by pygrib.open.
Init the grib file.
- class satpy.tests.reader_tests.test_grib.FakeMessage(values, proj_params=None, latlons=None, **attrs)[source]
Bases:
object
Fake message returned by pygrib.open().message(x).
Init the message.
- class satpy.tests.reader_tests.test_grib.TestGRIBReader[source]
Bases:
object
Test GRIB Reader.
- test_area_def_crs(proj_params, lon_corners, lat_corners)[source]
Check that the projection is accurate.
- test_jscanspositively(proj_params, lon_corners, lat_corners)[source]
Check that data is flipped if the jScansPositively flag is present.
- test_missing_attributes(proj_params, lon_corners, lat_corners)[source]
Check that the grib reader handles missing attributes in the grib file.
- yaml_file = 'grib.yaml'
- satpy.tests.reader_tests.test_grib._round_trip_projection_lonlat_check(area)[source]
Check that X/Y coordinates can be transformed multiple times.
Many GRIB files include non-standard projects that work for the initial transformation of X/Y coordinates to longitude/latitude, but may fail in the reverse transformation. For example, an eqc projection that goes from 0 longitude to 360 longitude. The X/Y coordinates may accurately go from the original X/Y metered space to the correct longitude/latitude, but transforming those coordinates back to X/Y space will produce the wrong result.
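A sketch of such a round-trip check using pyproj (assumed here; the projection parameters are invented for illustration): project longitudes/latitudes to X/Y and back, then compare against the originals modulo 360 degrees.

import numpy as np
from pyproj import Proj

proj = Proj(proj="eqc", lon_0=180)       # hypothetical 0-360 style grid
lons = np.array([10.0, 180.0, 350.0])
lats = np.array([0.0, 30.0, -30.0])
x, y = proj(lons, lats)                  # forward transform to metered space
lons2, lats2 = proj(x, y, inverse=True)  # reverse transform, may wrap longitudes
round_trip_ok = np.allclose(np.asarray(lons) % 360.0, np.asarray(lons2) % 360.0)
print(round_trip_ok)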
satpy.tests.reader_tests.test_hdf4_utils module
Module for testing the satpy.readers.hdf4_utils module.
- class satpy.tests.reader_tests.test_hdf4_utils.FakeHDF4FileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
HDF4FileHandler
Swap-in HDF4 File Handler for reader tests to use.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_hdf4_utils.TestHDF4FileHandler(methodName='runTest')[source]
Bases:
TestCase
Test HDF4 File Handler Utility class.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_hdf5_utils module
Module for testing the satpy.readers.hdf5_utils module.
- class satpy.tests.reader_tests.test_hdf5_utils.FakeHDF5FileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
HDF5FileHandler
Swap-in HDF5 File Handler for reader tests to use.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_hdf5_utils.TestHDF5FileHandler(methodName='runTest')[source]
Bases:
TestCase
Test HDF5 File Handler Utility class.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_hdfeos_base module
Tests for the HDF-EOS base functionality.
- class satpy.tests.reader_tests.test_hdfeos_base.TestReadMDA(methodName='runTest')[source]
Bases:
TestCase
Test reading metadata.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_hrit_base module
The HRIT base reader tests package.
- class satpy.tests.reader_tests.test_hrit_base.TestHRITDecompress(methodName='runTest')[source]
Bases:
TestCase
Test the on-the-fly decompression.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_hrit_base.TestHRITFileHandler[source]
Bases:
object
Test the HRITFileHandler.
- test_read_band_bzipped2_filepath(stub_bzipped_hrit_file)[source]
Test reading a single band from a bzipped file.
- class satpy.tests.reader_tests.test_hrit_base.TestHRITFileHandlerCompressed[source]
Bases:
object
Test the HRITFileHandler with compressed segments.
- satpy.tests.reader_tests.test_hrit_base.create_stub_hrit(filename, open_fun=<built-in function open>, meta=<default MSG4 VIS006 segment metadata>)[source]
The default meta dict describes a single stub MSG4 VIS006 HRIT segment: annotation_header b'H-000-MSG4__-MSG4________-VIS006___-000001___-202208180730-C_', GP_SC_ID 324, file_type 0, total_header_length 6198, data_field_length 17223680 at 10 bits per pixel, 3712 columns × 464 lines, cfac/lfac -13642337, coff/loff 1856, projection_name b'GEOS(+000.0)' with projection parameters a=6378169.0, b=6356583.8, h=35785831.0 and SSP_longitude 0.0, segment_sequence_number 1 of planned segments 1–8, spectral_channel_id 1, timestamp (23605, 27911151), and an image_segment_line_quality record array with one (line_number_in_grid, line_mean_acquisition (days, milliseconds), line_validity, line_radiometric_quality, line_geometric_quality) entry of (1, (0, 0), 1, 1, 0) per line.
Create a stub hrit file.
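A minimal usage sketch (a hypothetical test; the pytest tmp_path fixture, the file name and the asserted behaviour are assumptions, not part of this module):

    from satpy.tests.reader_tests.test_hrit_base import create_stub_hrit

    def test_stub_segment_is_written(tmp_path):
        # Write a stub HRIT segment using the default MSG4 VIS006 metadata.
        stub = tmp_path / "stub_segment.hrit"
        create_stub_hrit(str(stub))
        assert stub.exists()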
- satpy.tests.reader_tests.test_hrit_base.fake_decompress(infile, outdir='.')[source]
Fake decompression.
- satpy.tests.reader_tests.test_hrit_base.new_get_hd(instance, hdr_info)[source]
Generate some metadata.
- satpy.tests.reader_tests.test_hrit_base.new_get_hd_compressed(instance, hdr_info)[source]
Generate some metadata.
- satpy.tests.reader_tests.test_hrit_base.stub_bzipped_hrit_file(tmp_path)[source]
Create a stub bzipped hrit file.
- satpy.tests.reader_tests.test_hrit_base.stub_compressed_hrit_file(tmp_path)[source]
Create a stub compressed hrit file.
satpy.tests.reader_tests.test_hsaf_grib module
Module for testing the satpy.readers.hsaf_grib module.
- class satpy.tests.reader_tests.test_hsaf_grib.FakeGRIB(messages=None, proj_params=None, latlons=None)[source]
Bases:
object
Fake GRIB file returned by pygrib.open.
Init the fake grib file.
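A hedged sketch of how such fakes are typically wired in (the patch target and message contents are assumptions, and pygrib must be importable for the patch to resolve):

    import unittest.mock as mock
    from satpy.tests.reader_tests.test_hsaf_grib import FakeGRIB, FakeMessage

    # Stand in for pygrib.open so the code under test receives FakeMessage
    # objects instead of opening a real GRIB file.
    fake = FakeGRIB(messages=[FakeMessage(values=None)])
    with mock.patch("pygrib.open", return_value=fake):
        pass  # exercise the reader here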
- class satpy.tests.reader_tests.test_hsaf_grib.FakeMessage(values, proj_params=None, latlons=None, **attrs)[source]
Bases:
object
Fake message returned by pygrib.open().message(x).
Init the fake message.
- class satpy.tests.reader_tests.test_hsaf_grib.TestHSAFFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test HSAF Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_hsaf_h5 module
Tests for the H-SAF H5 reader.
- satpy.tests.reader_tests.test_hsaf_h5._get_scene_with_loaded_sc_datasets(filename)[source]
Return a scene with SC and SC_pal loaded.
- satpy.tests.reader_tests.test_hsaf_h5.sc_h5_file(tmp_path_factory)[source]
Create a fake HSAF SC HDF5 file.
- satpy.tests.reader_tests.test_hsaf_h5.test_hsaf_sc_areadef(sc_h5_file)[source]
Test the H-SAF SC area definition.
- satpy.tests.reader_tests.test_hsaf_h5.test_hsaf_sc_colormap_dataset(sc_h5_file)[source]
Test the H-SAF SC_pal dataset.
satpy.tests.reader_tests.test_hy2_scat_l2b_h5 module
Module for testing the satpy.readers.hy2_scat_l2b_h5 module.
- class satpy.tests.reader_tests.test_hy2_scat_l2b_h5.FakeHDF5FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Get fake file content from ‘get_test_content’.
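The swap-in pattern used throughout these reader tests looks roughly like this (a sketch; the import path and the content keys are assumptions):

    from satpy.tests.reader_tests.test_hdf5_utils import FakeHDF5FileHandler

    class MyFakeHandler(FakeHDF5FileHandler):
        def get_test_content(self, filename, filename_info, filetype_info):
            # Keys mimic HDF5 dataset/attribute paths; values are fake content.
            return {"/attr/Platform_Name": "HY-2B"}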
- class satpy.tests.reader_tests.test_hy2_scat_l2b_h5.TestHY2SCATL2BH5Reader(methodName='runTest')[source]
Bases:
TestCase
Test HY2 Scatterometer L2B H5 Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'hy2_scat_l2b_h5.yaml'
satpy.tests.reader_tests.test_iasi_l2 module
Unit tests for IASI L2 reader.
- class satpy.tests.reader_tests.test_iasi_l2.TestIasiL2(methodName='runTest')[source]
Bases:
TestCase
Test IASI L2 reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- satpy.tests.reader_tests.test_iasi_l2.fake_iasi_l2_cdr_nc_dataset()[source]
Create minimally fake IASI L2 CDR NC dataset.
- satpy.tests.reader_tests.test_iasi_l2.fake_iasi_l2_cdr_nc_file(fake_iasi_l2_cdr_nc_dataset, tmp_path)[source]
Write a NetCDF file with minimal fake IASI L2 CDR NC data.
satpy.tests.reader_tests.test_iasi_l2_so2_bufr module
Unittesting the IASI L2 SO2 BUFR reader.
- class satpy.tests.reader_tests.test_iasi_l2_so2_bufr.TestIasiL2So2Bufr(methodName='runTest')[source]
Bases:
TestCase
Test IASI L2 SO2 loader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_ici_l1b_nc module
The ici_l1b_nc reader tests package.
This version tests the reader for ICI test data as per PFS V3A.
- class satpy.tests.reader_tests.test_ici_l1b_nc.IciL1bFakeFileWriter(file_path)[source]
Bases:
object
Writer class for fake ICI level-1b data.
Init.
Write the navigation data group.
- class satpy.tests.reader_tests.test_ici_l1b_nc.TestIciL1bNCFileHandler[source]
Bases:
object
Test the IciL1bNCFileHandler reader.
- test_calibrate_calls_calibrate_bt(mocked_calibrate_bt, reader)[source]
Test calibrate calls calibrate_bt.
- test_calibrate_does_not_call_calibrate_bt_if_not_needed(mocked_calibrate, reader)[source]
Test calibrate does not call calibrate_bt if not needed.
- test_calibrate_raises_for_unknown_calibration_method(reader)[source]
Test perform calibration raises for unknown calibration method.
- test_get_dataset_does_not_calibrate_if_not_desired(mocked_calibrate, reader, dataset_info)[source]
Test get dataset does not calibrate if not desired.
- test_get_dataset_handles_calibration(reader, dataset_info)[source]
Test get dataset handles calibration.
- test_get_dataset_orthorectifies_if_orthorect_data_defined(reader)[source]
Test get dataset orthorectifies if orthorect data is defined.
- test_get_dataset_return_none_if_data_not_exist(reader)[source]
Test that get dataset returns None if the data does not exist.
- test_get_third_dimension_name_return_none_for_2d_data(reader)[source]
Test that get third dimension name returns None for 2D data.
- test_interpolate_calls_interpolate_geo(mock, reader)[source]
Test interpolate calls interpolate_geo.
- test_interpolate_calls_interpolate_viewing_angles(mock, reader)[source]
Test interpolate calls interpolate_viewing_angles.
satpy.tests.reader_tests.test_insat3d_img_l1b_h5 module
Tests for the Insat3D reader.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5._create_channels(channels, h5f, resolution)[source]
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.insat_filehandler(insat_filename)[source]
Instantiate a Filehandler.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.insat_filename(tmp_path_factory)[source]
Create a fake insat 3d l1b file.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.mask_array(array)[source]
Mask an array with nan instead of 0.
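An equivalent sketch of that masking (the actual implementation may differ):

    import numpy as np

    def mask_array(array):
        # Replace the 0 fill value with NaN so downstream math ignores it.
        return np.where(array == 0, np.nan, array)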
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_filehandler_has_start_and_end_time(insat_filehandler)[source]
Test that the filehandler handles start and end time.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_filehandler_returns_area(insat_filehandler)[source]
Test that the filehandler returns an area.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_filehandler_returns_coords(insat_filehandler)[source]
Test that lon and lat can be loaded.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_filehandler_returns_data_array(insat_filehandler, calibration, expected_values)[source]
Test that the filehandler can get dataarrays.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_filehandler_returns_masked_data_in_space(insat_filehandler)[source]
Test that the filehandler masks space pixels.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_insat3d_backend_has_1km_channels(insat_filename)[source]
Test the insat3d backend.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_insat3d_datatree_has_global_attributes(insat_filename)[source]
Test that the backend supports global attributes in the datatree.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_insat3d_has_calibrated_arrays(insat_filename, resolution, name, shape, expected_values, expected_name, expected_units)[source]
Check that calibration happens as expected.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_insat3d_has_dask_arrays(insat_filename)[source]
Test that the backend uses dask.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_insat3d_has_global_attributes(insat_filename, resolution)[source]
Test that the backend supports global attributes.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_insat3d_has_orbital_parameters(insat_filehandler)[source]
Test that the filehandler returns data with orbital parameter attributes.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_insat3d_only_has_3_resolutions(insat_filename)[source]
Test that we only accept 1000, 4000, 8000.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_insat3d_opens_datatree(insat_filename, resolution)[source]
Test that a datatree is produced.
- satpy.tests.reader_tests.test_insat3d_img_l1b_h5.test_insat3d_returns_lonlat(insat_filename, resolution)[source]
Test that lons and lats are loaded.
satpy.tests.reader_tests.test_li_l2_nc module
Unit tests for the LI L2 reader using the conventional mock-constructed context.
- class satpy.tests.reader_tests.test_li_l2_nc.TestLIL2[source]
Bases:
object
Main test class for the LI L2 reader.
- _test_dataset_sector_variables(settings, ds_desc, handler)[source]
Check the loading of the in-sector variables.
- _test_dataset_single_sector_variable(names, desc, settings, handler)[source]
Check the validity of a given sector variable.
- _test_dataset_single_variable(vname, desc, settings, handler)[source]
Check the validity of a given variable.
- _test_dataset_variable(var_params, sname='')[source]
Test the validity of a given (sector) variable.
- _test_dataset_variables(settings, ds_desc, handler)[source]
Check the loading of the non-in-sector variables.
- create_fullname_key(desc, var_path, vname, sname='')[source]
Create full name key for sector/non-sector content retrieval.
- generate_coords(filetype_infos, file_type_name, variable_name)[source]
Generate file handler and mimic coordinate generator call.
- get_variable_dataset(dataset_info, dname, handler)[source]
Get the dataset of a given (sector) variable.
- test_coordinates_projection(filetype_infos)[source]
Should automatically generate lat/lon coords from projection data.
- test_coords_generation(filetype_infos)[source]
Compare daskified coords generation results with non-daskified.
- test_dataset_not_in_provided_dataset(filetype_infos)[source]
Test loading of a dataset that is not provided.
- test_generate_coords_inverse_proj(filetype_infos)[source]
Test inverse_projection execution delayed until .values is called on the dataset.
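The laziness being asserted can be illustrated with plain dask (an illustration only, not the reader's code):

    import dask.array as da
    import numpy as np

    calls = []

    def fake_inverse_projection(block):
        calls.append(1)  # records when the projection actually runs
        return block

    lazy = da.zeros((2, 2), chunks=2).map_blocks(
        fake_inverse_projection, meta=np.array((), dtype=float))
    assert not calls  # building the graph did not run the function
    lazy.compute()    # accessing the values triggers it
    assert calls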
- test_generate_coords_not_called_on_non_accum_dataset(filetype_infos)[source]
Test that the method is not called when getting non-accum dataset.
- test_generate_coords_not_called_on_non_coord_dataset(filetype_infos)[source]
Test that the method is not called when getting non-coord dataset.
- test_generate_coords_on_accumulated_prods(filetype_infos)[source]
Test daskified generation of coords.
- test_generate_coords_on_lon_lat(filetype_infos)[source]
Test getting lon/lat dataset on accumulated product.
- test_get_area_def_acc_products(filetype_infos)[source]
Test retrieval of area def for accumulated products.
- test_get_area_def_non_acc_products(filetype_infos)[source]
Test retrieval of area def for non-accumulated products.
- test_get_first_valid_variable(filetype_infos)[source]
Test get_first_valid_variable from li reader.
- test_get_first_valid_variable_not_found(filetype_infos)[source]
Test get_first_valid_variable from li reader if the variable is not found.
- test_get_on_fci_grid_exc(filetype_infos)[source]
Test the execution of the get_on_fci_grid function for an accumulated gridded variable.
- test_get_on_fci_grid_exc_non_accum(filetype_infos)[source]
Test the non-execution of the get_on_fci_grid function for a non-accumulated variable.
- test_get_on_fci_grid_exc_non_grid(filetype_infos)[source]
Test the non-execution of the get_on_fci_grid function for an accumulated non-gridded variable.
- test_report_datetimes(filetype_infos)[source]
Should report time variables as numpy datetime64 type and time durations as timedelta64.
- test_swath_coordinates(filetype_infos)[source]
Test that swath coordinates are used correctly to assign coordinates to some datasets.
- test_variable_scaling(filetype_infos)[source]
Test automatic rescaling with offset and scale attributes.
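The rescaling under test follows the usual CF-style convention; a minimal sketch (the attribute names here are the CF defaults and are assumptions for LI files):

    import numpy as np
    import xarray as xr

    raw = xr.DataArray(np.array([0, 100, 200], dtype="uint16"),
                       attrs={"scale_factor": 0.5, "add_offset": -10.0})
    # decoded = raw * scale + offset
    decoded = raw * raw.attrs["scale_factor"] + raw.attrs["add_offset"]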
- test_with_area_def(filetype_infos)[source]
Test accumulated products data array with area definition.
- test_with_area_def_pixel_placement(filetype_infos)[source]
Test the placements of pixel value with area definition.
satpy.tests.reader_tests.test_meris_nc module
Module for testing the satpy.readers.meris_nc_sen3 module.
- class satpy.tests.reader_tests.test_meris_nc.TestBitFlags(methodName='runTest')[source]
Bases:
TestCase
Test the bitflag reading.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_meris_nc.TestMERISReader(methodName='runTest')[source]
Bases:
TestCase
Test various meris_nc_sen3 filehandlers.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_mersi_l1b module
Tests for the ‘mersi2_l1b’ reader.
- class satpy.tests.reader_tests.test_mersi_l1b.FakeHDF5FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Get fake file content from ‘get_test_content’.
- property _geo_prefix_for_file_type
- property _num_cols_for_file_type
- property _rows_per_scan
- num_cols = 2048
- num_scans = 2
- class satpy.tests.reader_tests.test_mersi_l1b.MERSIL1BTester[source]
Bases:
object
Test MERSI2 L1B Reader.
- class satpy.tests.reader_tests.test_mersi_l1b.TestMERSI2L1B[source]
Bases:
MERSIL1BTester
Test the FY3D MERSI2 L1B reader.
- filenames_1000m = ['tf2019071182739.FY3D-X_MERSI_1000M_L1B.HDF', 'tf2019071182739.FY3D-X_MERSI_GEO1K_L1B.HDF']
- filenames_250m = ['tf2019071182739.FY3D-X_MERSI_0250M_L1B.HDF', 'tf2019071182739.FY3D-X_MERSI_GEOQK_L1B.HDF']
- filenames_all = ['tf2019071182739.FY3D-X_MERSI_1000M_L1B.HDF', 'tf2019071182739.FY3D-X_MERSI_GEO1K_L1B.HDF', 'tf2019071182739.FY3D-X_MERSI_0250M_L1B.HDF', 'tf2019071182739.FY3D-X_MERSI_GEOQK_L1B.HDF']
- yaml_file = 'mersi2_l1b.yaml'
- class satpy.tests.reader_tests.test_mersi_l1b.TestMERSILLL1B[source]
Bases:
MERSIL1BTester
Test the FY3E MERSI-LL L1B reader.
- filenames_1000m = ['FY3E_MERSI_GRAN_L1_20230410_1910_1000M_V0.HDF', 'FY3E_MERSI_GRAN_L1_20230410_1910_GEO1K_V0.HDF']
- filenames_250m = ['FY3E_MERSI_GRAN_L1_20230410_1910_0250M_V0.HDF', 'FY3E_MERSI_GRAN_L1_20230410_1910_GEOQK_V0.HDF']
- filenames_all = ['FY3E_MERSI_GRAN_L1_20230410_1910_1000M_V0.HDF', 'FY3E_MERSI_GRAN_L1_20230410_1910_GEO1K_V0.HDF', 'FY3E_MERSI_GRAN_L1_20230410_1910_0250M_V0.HDF', 'FY3E_MERSI_GRAN_L1_20230410_1910_GEOQK_V0.HDF']
- yaml_file = 'mersi_ll_l1b.yaml'
- class satpy.tests.reader_tests.test_mersi_l1b.TestMERSIRML1B[source]
Bases:
MERSIL1BTester
Test the FY3E MERSI-RM L1B reader.
- filenames_500m = ['FY3G_MERSI_GRAN_L1_20230410_1910_0500M_V1.HDF', 'FY3G_MERSI_GRAN_L1_20230410_1910_GEOHK_V1.HDF']
- yaml_file = 'mersi_rm_l1b.yaml'
- satpy.tests.reader_tests.test_mersi_l1b._get_250m_ll_data(num_scans, rows_per_scan, num_cols)[source]
satpy.tests.reader_tests.test_mimic_TPW2_lowres module
Module for testing the satpy.readers.mimic_TPW2_nc module.
- class satpy.tests.reader_tests.test_mimic_TPW2_lowres.FakeNetCDF4FileHandlerMimicLow(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap-in NetCDF4 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_mimic_TPW2_lowres.TestMimicTPW2Reader(methodName='runTest')[source]
Bases:
TestCase
Test Mimic Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'mimicTPW2_comp.yaml'
satpy.tests.reader_tests.test_mimic_TPW2_nc module
Module for testing the satpy.readers.mimic_TPW2_nc module.
- class satpy.tests.reader_tests.test_mimic_TPW2_nc.FakeNetCDF4FileHandlerMimic(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap-in NetCDF4 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_mimic_TPW2_nc.TestMimicTPW2Reader(methodName='runTest')[source]
Bases:
TestCase
Test Mimic Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'mimicTPW2_comp.yaml'
satpy.tests.reader_tests.test_mirs module
Module for testing the satpy.readers.mirs module.
- satpy.tests.reader_tests.test_mirs._check_metadata(data_arr: DataArray, test_data: Dataset, platform_name: str) → None[source]
- satpy.tests.reader_tests.test_mirs._create_fake_reader(filenames: list[str], reader_kwargs: dict, exp_loadable_files: int | None = None) → FileYAMLReader[source]
- satpy.tests.reader_tests.test_mirs._get_datasets_with_attributes(**kwargs)[source]
Represent files with two resolutions of variables in them (e.g. OCEAN).
- satpy.tests.reader_tests.test_mirs._get_datasets_with_less_attributes()[source]
Represent files with two resolutions of variables in them (e.g. OCEAN).
- satpy.tests.reader_tests.test_mirs._load_and_check_limb_correction_variables(reader: FileYAMLReader, loadable_ids: list[str], platform_name: str, exp_limb_corr: bool) → dict[DataID, DataArray][source]
- satpy.tests.reader_tests.test_mirs.fake_open_dataset(filename, **kwargs)[source]
Create a Dataset similar to reading an actual file with xarray.open_dataset.
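A hedged sketch of how such a fake is typically used (the patch target is an assumption; the actual tests may patch a different reference):

    import unittest.mock as mock
    from satpy.tests.reader_tests.test_mirs import fake_open_dataset

    # Route xarray.open_dataset calls to the fake builder so the reader
    # tests never need a real MiRS file on disk.
    with mock.patch("xarray.open_dataset", side_effect=fake_open_dataset):
        pass  # build the reader and load datasets here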
satpy.tests.reader_tests.test_msi_safe module
Module for testing the satpy.readers.msi_safe module.
- class satpy.tests.reader_tests.test_msi_safe.TestMTDXML[source]
Bases:
object
Test the SAFE MTD XML file handler.
- test_xml_calibration_unmasked_saturated()[source]
Test the calibration with radiometric offset but unmasked saturated pixels.
Test the navigation.
satpy.tests.reader_tests.test_msu_gsa_l1b module
Tests for the ‘msu_gsa_l1b’ reader.
- class satpy.tests.reader_tests.test_msu_gsa_l1b.FakeHDF5FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Get fake file content from ‘get_test_content’.
satpy.tests.reader_tests.test_mviri_l1b_fiduceo_nc module
Unit tests for the FIDUCEO MVIRI FCDR Reader.
- class satpy.tests.reader_tests.test_mviri_l1b_fiduceo_nc.TestDatasetWrapper[source]
Bases:
object
Unit tests for DatasetWrapper class.
- test_reassign_coords()[source]
Test reassigning of coordinates.
For some reason xarray does not always assign (y, x) coordinates to the high resolution datasets, although they have dimensions (y, x) and the coordinates y and x exist. A dataset with these properties seems impossible to create (neither dropping, resetting nor deleting the coordinates works), so mock is used as a workaround.
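In spirit, the fix under test re-attaches the missing coordinates, roughly like this (hypothetical names):

    import numpy as np
    import xarray as xr

    ds = xr.Dataset({"image": (("y", "x"), np.zeros((2, 2)))})
    # Re-attach the y/x coordinates that xarray failed to propagate.
    ds = ds.assign_coords(y=np.arange(2), x=np.arange(2))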
- class satpy.tests.reader_tests.test_mviri_l1b_fiduceo_nc.TestFiduceoMviriFileHandlers[source]
Bases:
object
Unit tests for FIDUCEO MVIRI file handlers.
- test_get_area_definition(file_handler, name, resolution, area_exp)[source]
Test getting area definitions.
- satpy.tests.reader_tests.test_mviri_l1b_fiduceo_nc.fixture_fake_dataset()[source]
Create fake dataset.
satpy.tests.reader_tests.test_mws_l1b_nc module
The mws_l1b_nc reader tests.
This module tests the reading of the MWS l1b netCDF format data as per version v4B issued 22 November 2021.
- class satpy.tests.reader_tests.test_mws_l1b_nc.MWSL1BFakeFileWriter(file_path)[source]
Bases:
object
Writer class for fake MWS level-1b data.
Init.
Write the navigation data group.
- class satpy.tests.reader_tests.test_mws_l1b_nc.TestMwsL1bNCFileHandler[source]
Bases:
object
Test the MWSL1BFile reader.
- test_get_dataset_aux_data_expected_data_missing(caplog, reader)[source]
Test getting an auxiliary dataset which is not present but is supposed to be in the file.
- test_get_dataset_logs_debug_message(caplog, fake_file, reader)[source]
Test that get dataset logs a debug message when the data does not exist.
- test_get_dataset_return_none_if_data_not_exist(reader)[source]
Test that get dataset returns None if the data does not exist.
Test getting the longitudes.
- test_sub_satellite_latitude_end(reader)[source]
Test getting the latitude of sub-satellite point at end of the product.
- test_sub_satellite_latitude_start(reader)[source]
Test getting the latitude of sub-satellite point at start of the product.
- satpy.tests.reader_tests.test_mws_l1b_nc.fake_file(tmp_path)[source]
Return file path to level-1b file.
- satpy.tests.reader_tests.test_mws_l1b_nc.reader(fake_file)[source]
Return reader of mws level-1b data.
satpy.tests.reader_tests.test_netcdf_utils module
Module for testing the satpy.readers.netcdf_utils module.
- class satpy.tests.reader_tests.test_netcdf_utils.FakeNetCDF4FileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
NetCDF4FileHandler
Swap-in NetCDF4 File Handler for reader tests to use.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_netcdf_utils.TestNetCDF4FileHandler(methodName='runTest')[source]
Bases:
TestCase
Test NetCDF4 File Handler Utility class.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_netcdf_utils.TestNetCDF4FsspecFileHandler[source]
Bases:
object
Test the remote reading class.
- satpy.tests.reader_tests.test_netcdf_utils.test_get_data_as_xarray_h5netcdf(tmp_path)[source]
Test getting xr.DataArray from h5netcdf variable.
- satpy.tests.reader_tests.test_netcdf_utils.test_get_data_as_xarray_netcdf4(tmp_path)[source]
Test getting xr.DataArray from netcdf4 variable.
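Both tests exercise the same kind of file through different xarray engines; a minimal sketch (the file name is hypothetical and both backends must be installed):

    import numpy as np
    import xarray as xr

    path = "example.nc"  # hypothetical file
    xr.Dataset({"var": ("x", np.arange(3.0))}).to_netcdf(path)

    # The two engines exercised by these tests should yield equal arrays.
    da_nc4 = xr.open_dataset(path, engine="netcdf4")["var"]
    da_h5 = xr.open_dataset(path, engine="h5netcdf")["var"]
    assert da_nc4.equals(da_h5)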
satpy.tests.reader_tests.test_nucaps module
Module for testing the satpy.readers.nucaps module.
- class satpy.tests.reader_tests.test_nucaps.FakeNetCDF4FileHandler2(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap-in NetCDF4 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_nucaps.TestNUCAPSReader(methodName='runTest')[source]
Bases:
TestCase
Test NUCAPS Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_load_individual_pressure_levels_min_max()[source]
Test loading individual Temperature with min/max level specified.
- test_load_individual_pressure_levels_single()[source]
Test loading individual Temperature with specific levels.
- test_load_individual_pressure_levels_true()[source]
Test loading Temperature with individual pressure datasets.
- test_load_pressure_levels_single_and_pressure_levels()[source]
Test loading a specific Temperature level and pressure levels.
- yaml_file = 'nucaps.yaml'
- class satpy.tests.reader_tests.test_nucaps.TestNUCAPSScienceEDRReader(methodName='runTest')[source]
Bases:
TestCase
Test NUCAPS Science EDR Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_load_individual_pressure_levels_min_max()[source]
Test loading individual Temperature with min/max level specified.
- test_load_individual_pressure_levels_single()[source]
Test loading individual Temperature with specific levels.
- test_load_individual_pressure_levels_true()[source]
Test loading Temperature with individual pressure datasets.
- test_load_pressure_levels_single_and_pressure_levels()[source]
Test loading a specific Temperature level and pressure levels.
- yaml_file = 'nucaps.yaml'
satpy.tests.reader_tests.test_nwcsaf_msg module
Unittests for NWC SAF MSG (2013) reader.
- class satpy.tests.reader_tests.test_nwcsaf_msg.TestH5NWCSAF(methodName='runTest')[source]
Bases:
TestCase
Test the nwcsaf msg reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_nwcsaf_nc module
Unittests for NWC SAF reader.
- class satpy.tests.reader_tests.test_nwcsaf_nc.TestNcNWCSAFFileKeyPrefix[source]
Bases:
object
Test the NcNWCSAF reader when using a file key prefix.
- class satpy.tests.reader_tests.test_nwcsaf_nc.TestNcNWCSAFGeo[source]
Bases:
object
Test the NcNWCSAF reader for Geo products.
- test_get_area_def_km(nwcsaf_old_geo_ct_filehandler)[source]
Test that get_area_def() returns proper area when the projection is in km.
- test_orbital_parameters_are_correct(nwcsaf_geo_ct_filehandler)[source]
Test that orbital parameters are present in the dataset attributes.
- test_scale_dataset_attr_removal(nwcsaf_geo_ct_filehandler)[source]
Test the scaling of the dataset and removal of obsolete attributes.
- test_scale_dataset_floating(nwcsaf_geo_ct_filehandler, attrs, expected)[source]
Test the scaling of the dataset with floating point values.
- test_scale_dataset_floating_nwcsaf_geo_ctth(nwcsaf_geo_ct_filehandler)[source]
Test the scaling of the dataset with floating point values for CTTH NWCSAF/Geo v2016/v2018.
- test_sensor_name_platform(nwcsaf_geo_ct_filehandler, platform, instrument)[source]
Test that the correct sensor name is being set.
- class satpy.tests.reader_tests.test_nwcsaf_nc.TestNcNWCSAFPPS[source]
Bases:
object
Test the NcNWCSAF reader for PPS products.
- test_get_dataset_can_handle_file_key_list(nwcsaf_pps_cmic_filehandler, nwcsaf_pps_cpp_filehandler)[source]
Test that get_dataset() can handle a list of file_keys.
- test_get_dataset_raises_when_dataset_missing(nwcsaf_pps_cpp_filehandler)[source]
Test that get_dataset() raises an error when the requested dataset is missing.
- test_get_dataset_scales_and_offsets(nwcsaf_pps_cpp_filehandler)[source]
Test that get_dataset() returns scaled and offset data.
- test_get_dataset_scales_and_offsets_palette_meanings_using_other_dataset(nwcsaf_pps_cpp_filehandler)[source]
Test that get_dataset() returns scaled palette_meanings with another dataset as scaling source.
- test_get_dataset_uses_file_key_if_present(nwcsaf_pps_cmic_filehandler, nwcsaf_pps_cpp_filehandler)[source]
Test that get_dataset() uses a file_key if present.
- satpy.tests.reader_tests.test_nwcsaf_nc.create_cmic_file(path, filetype, attrs={'gdal_projection': '+proj=geos +a=6378137.000 +b=6356752.300 +lon_0=0.000000 +h=35785863.000', 'gdal_xgeo_low_right': 5566500.0, 'gdal_xgeo_up_left': -5569500.0, 'gdal_ygeo_low_right': 2653500.0, 'gdal_ygeo_up_left': 5437500.0, 'satellite_identifier': 'MSG4', 'source': 'NWC/GEO version v2021.1', 'sub-satellite_longitude': 0.0, 'time_coverage_end': '2023-01-18T10:42:22Z', 'time_coverage_start': '2023-01-18T10:39:17Z'})[source]
Create a CMIC file.
- satpy.tests.reader_tests.test_nwcsaf_nc.create_cot_pal_variable(nc_file, var_name)[source]
Create a palette variable.
- satpy.tests.reader_tests.test_nwcsaf_nc.create_cot_variable(nc_file, var_name)[source]
Create a COT variable.
- satpy.tests.reader_tests.test_nwcsaf_nc.create_cre_variables(nc_file, var_name)[source]
Create CRE variables.
- satpy.tests.reader_tests.test_nwcsaf_nc.create_ctth_alti_pal_variable_with_fill_value_color(nc_file, var_name)[source]
Create a palette variable.
- satpy.tests.reader_tests.test_nwcsaf_nc.create_ctth_file(path, attrs={'gdal_projection': '+proj=geos +a=6378137.000 +b=6356752.300 +lon_0=0.000000 +h=35785863.000', 'gdal_xgeo_low_right': 5566500.0, 'gdal_xgeo_up_left': -5569500.0, 'gdal_ygeo_low_right': 2653500.0, 'gdal_ygeo_up_left': 5437500.0, 'satellite_identifier': 'MSG4', 'source': 'NWC/GEO version v2021.1', 'sub-satellite_longitude': 0.0, 'time_coverage_end': '2023-01-18T10:42:22Z', 'time_coverage_start': '2023-01-18T10:39:17Z'})[source]
Create a CTTH file.
- satpy.tests.reader_tests.test_nwcsaf_nc.create_ctth_variables(nc_file, var_name)[source]
Create CTTH variables.
- satpy.tests.reader_tests.test_nwcsaf_nc.create_nwcsaf_geo_ct_file(directory, attrs={'gdal_projection': '+proj=geos +a=6378137.000 +b=6356752.300 +lon_0=0.000000 +h=35785863.000', 'gdal_xgeo_low_right': 5566500.0, 'gdal_xgeo_up_left': -5569500.0, 'gdal_ygeo_low_right': 2653500.0, 'gdal_ygeo_up_left': 5437500.0, 'nominal_product_time': '2023-01-18T10:30:00Z', 'satellite_identifier': 'MSG4', 'source': 'NWC/GEO version v2021.1', 'sub-satellite_longitude': 0.0, 'time_coverage_end': '2023-01-18T10:42:22Z', 'time_coverage_start': '2023-01-18T10:39:17Z'})[source]
Create a CT file.
- satpy.tests.reader_tests.test_nwcsaf_nc.nwcsaf_geo_ct_filehandler(nwcsaf_geo_ct_filename)[source]
Create a CT filehandler.
- satpy.tests.reader_tests.test_nwcsaf_nc.nwcsaf_geo_ct_filename(tmp_path_factory)[source]
Create a CT file and return the filename.
- satpy.tests.reader_tests.test_nwcsaf_nc.nwcsaf_old_geo_ct_filehandler(nwcsaf_old_geo_ct_filename)[source]
Create a CT filehandler.
- satpy.tests.reader_tests.test_nwcsaf_nc.nwcsaf_old_geo_ct_filename(tmp_path_factory)[source]
Create a CT file and return the filename.
- satpy.tests.reader_tests.test_nwcsaf_nc.nwcsaf_pps_cmic_filehandler(nwcsaf_pps_cmic_filename)[source]
Create a CMIC filehandler.
- satpy.tests.reader_tests.test_nwcsaf_nc.nwcsaf_pps_cmic_filename(tmp_path_factory)[source]
Create a CMIC file.
- satpy.tests.reader_tests.test_nwcsaf_nc.nwcsaf_pps_cpp_filehandler(nwcsaf_pps_cpp_filename)[source]
Create a CPP filehandler.
- satpy.tests.reader_tests.test_nwcsaf_nc.nwcsaf_pps_cpp_filename(tmp_path_factory)[source]
Create a CPP file.
satpy.tests.reader_tests.test_oceancolorcci_l3_nc module
Module for testing the satpy.readers.oceancolorcci_l3_nc module.
- class satpy.tests.reader_tests.test_oceancolorcci_l3_nc.TestOCCCIReader[source]
Bases:
object
Test the Ocean Color reader.
satpy.tests.reader_tests.test_olci_nc module
Module for testing the satpy.readers.olci_nc module.
- class satpy.tests.reader_tests.test_olci_nc.TestBitFlags(methodName='runTest')[source]
Bases:
TestCase
Test the bitflag reading.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_olci_nc.TestOLCIReader(methodName='runTest')[source]
Bases:
TestCase
Test various olci_nc filehandlers.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_omps_edr module
Module for testing the satpy.readers.omps_edr module.
- class satpy.tests.reader_tests.test_omps_edr.FakeHDF5FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_omps_edr.TestOMPSEDRReader(methodName='runTest')[source]
Bases:
TestCase
Test OMPS EDR Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_load_so2_DIMENSION_LIST(mock_h5py_file, mock_hdf5_utils_get_reference)[source]
Test load of so2 datasets with DIMENSION_LIST.
- yaml_file = 'omps_edr.yaml'
satpy.tests.reader_tests.test_osisaf_l3 module
Module for testing the satpy.readers.osisaf_l3 module.
- class satpy.tests.reader_tests.test_osisaf_l3.OSISAFL3ReaderTests[source]
Bases:
object
Base class of tests for the OSI-SAF level 3 netCDF reader.
- test_get_area_def_bad(tmp_path)[source]
Test getting the area definition for the polar stereographic grid.
- class satpy.tests.reader_tests.test_osisaf_l3.TestOSISAFL3ReaderFluxGeo[source]
Bases:
OSISAFL3ReaderTests
Test OSI-SAF level 3 netCDF reader flux files on lat/lon grid (GEO sensors).
- class satpy.tests.reader_tests.test_osisaf_l3.TestOSISAFL3ReaderFluxStere[source]
Bases:
OSISAFL3ReaderTests
Test OSI-SAF level 3 netCDF reader flux files on stereographic grid.
- class satpy.tests.reader_tests.test_osisaf_l3.TestOSISAFL3ReaderICE[source]
Bases:
OSISAFL3ReaderTests
Test OSI-SAF level 3 netCDF reader ice files.
- class satpy.tests.reader_tests.test_osisaf_l3.TestOSISAFL3ReaderSST[source]
Bases:
OSISAFL3ReaderTests
Test OSI-SAF level 3 netCDF reader surface temperature files.
satpy.tests.reader_tests.test_safe_sar_l2_ocn module
Module for testing the satpy.readers.safe_sar_l2_ocn module.
- class satpy.tests.reader_tests.test_safe_sar_l2_ocn.TestSAFENC(methodName='runTest')[source]
Bases:
TestCase
Test various SAFE SAR L2 OCN file handlers.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_sar_c_safe module
Module for testing the satpy.readers.sar_c_safe module.
- class satpy.tests.reader_tests.test_sar_c_safe.Calibration(value)[source]
Bases:
Enum
Calibration levels.
- beta_nought = 3
- dn = 4
- gamma = 1
- sigma_nought = 2
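The listed members correspond to this definition:

    from enum import Enum

    class Calibration(Enum):
        # Calibration levels for SAR-C SAFE data.
        gamma = 1
        sigma_nought = 2
        beta_nought = 3
        dn = 4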
- class satpy.tests.reader_tests.test_sar_c_safe.TestSAFEGRD(methodName='runTest')[source]
Bases:
TestCase
Test the SAFE GRD file handler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_sar_c_safe.TestSAFEXMLAnnotation(methodName='runTest')[source]
Bases:
TestCase
Test the SAFE XML Annotation file handler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_sar_c_safe.TestSAFEXMLCalibration(methodName='runTest')[source]
Bases:
TestCase
Test the SAFE XML Calibration file handler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_sar_c_safe.TestSAFEXMLNoise(methodName='runTest')[source]
Bases:
TestCase
Test the SAFE XML Noise file handler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_satpy_cf_nc module
Tests for the CF reader.
- class satpy.tests.reader_tests.test_satpy_cf_nc.TestCFReader[source]
Bases:
object
Test case for CF reader.
- test_dataid_attrs_equal_contains_not_matching_key(cf_scene, nc_filename)[source]
Check that get_dataset returns a valid dataset when the dataid has key(s) not existing in the data.
- test_dataid_attrs_equal_matching_dataset(cf_scene, nc_filename)[source]
Check that get_dataset returns a valid dataset when the keys match.
- test_dataid_attrs_equal_not_matching_dataset(cf_scene, nc_filename)[source]
Check that get_dataset returns None when key(s) are not matching.
- test_decoding_of_dict_type_attributes(cf_scene, nc_filename)[source]
Test decoding of dict type attributes.
- test_fix_modifier_attr()[source]
Check that fix modifier can handle empty list as modifier attribute.
- test_read_prefixed_channels(cf_scene, nc_filename)[source]
Check that channels starting with a digit are prefixed and read back correctly.
- test_read_prefixed_channels_by_user(cf_scene, nc_filename)[source]
Check that channels starting with a digit are prefixed by the user and read back correctly.
- test_read_prefixed_channels_by_user2(cf_scene, nc_filename)[source]
Check that channels starting with a digit are prefixed by the user when saving and read back correctly without the prefix.
- test_read_prefixed_channels_by_user_include_prefix(cf_scene, nc_filename)[source]
Check that channels starting with a digit are prefixed by the user and the original name is included when saving.
- test_read_prefixed_channels_by_user_no_prefix(cf_scene, nc_filename)[source]
Check that channels starting with a digit are not prefixed by the user.
- test_read_prefixed_channels_include_orig_name(cf_scene, nc_filename)[source]
Check that channels starting with a digit, with the original name included, are prefixed and read back correctly.
- test_write_and_read_from_two_files(nc_filename, nc_filename_i)[source]
Save two datasets with different resolutions and read the solar_zenith_angle again.
- satpy.tests.reader_tests.test_satpy_cf_nc.cf_scene(datasets, common_attrs)[source]
Create a cf scene.
- satpy.tests.reader_tests.test_satpy_cf_nc.common_attrs(area)[source]
Get common dataset attributes.
- satpy.tests.reader_tests.test_satpy_cf_nc.datasets(vis006, ir_108, qual_flags, lonlats, prefix_data, swath_data)[source]
Get datasets belonging to the scene.
- satpy.tests.reader_tests.test_satpy_cf_nc.nc_filename(tmp_path)[source]
Create an nc filename for viirs m band.
- satpy.tests.reader_tests.test_satpy_cf_nc.nc_filename_i(tmp_path)[source]
Create an nc filename for viirs i band.
- satpy.tests.reader_tests.test_satpy_cf_nc.prefix_data(xy_coords, area)[source]
Get dataset whose name should be prefixed.
satpy.tests.reader_tests.test_scmi module
The scmi_abi_l1b reader tests package.
- class satpy.tests.reader_tests.test_scmi.FakeDataset(info, attrs, dims=None)[source]
Bases:
object
Fake dataset.
Init the dataset.
- class satpy.tests.reader_tests.test_scmi.TestSCMIFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the SCMIFileHandler reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_scmi.TestSCMIFileHandlerArea(methodName='runTest')[source]
Bases:
TestCase
Test the SCMIFileHandler’s area creation.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_seadas_l2 module
Tests for the ‘seadas_l2’ reader.
- class satpy.tests.reader_tests.test_seadas_l2.TestSEADAS[source]
Bases:
object
Test the SEADAS L2 file reader.
- satpy.tests.reader_tests.test_seadas_l2._add_variable_to_netcdf_file(nc, var_name, var_info)[source]
- satpy.tests.reader_tests.test_seadas_l2._create_seadas_chlor_a_hdf4_file(full_path, mission, sensor)[source]
- satpy.tests.reader_tests.test_seadas_l2._create_seadas_chlor_a_netcdf_file(full_path, mission, sensor)[source]
- satpy.tests.reader_tests.test_seadas_l2.seadas_l2_modis_chlor_a(tmp_path_factory)[source]
Create MODIS SEADAS file.
- satpy.tests.reader_tests.test_seadas_l2.seadas_l2_modis_chlor_a_netcdf(tmp_path_factory)[source]
Create MODIS SEADAS NetCDF file.
satpy.tests.reader_tests.test_seviri_base module
Test the common MSG (Native and HRIT format) functionalities.
- class satpy.tests.reader_tests.test_seviri_base.SeviriBaseTest(methodName='runTest')[source]
Bases:
TestCase
Test SEVIRI base.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_get_cds_time_nanoseconds()[source]
Test the get_cds_time function for having nanosecond precision.
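CDS timestamps are (days, milliseconds) pairs counted from the 1958-01-01 epoch; doing the arithmetic in datetime64[ns] is what preserves nanosecond precision (a sketch of the idea, not the function under test):

    import numpy as np

    days, msecs = 23605, 27911151  # example (days, milliseconds) values
    epoch = np.datetime64("1958-01-01", "ns")
    time = (epoch
            + np.timedelta64(days, "D").astype("timedelta64[ns]")
            + np.timedelta64(msecs, "ms").astype("timedelta64[ns]"))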
- class satpy.tests.reader_tests.test_seviri_base.TestMeirinkSlope[source]
Bases:
object
Unit tests for the slope of Meirink calibration.
- class satpy.tests.reader_tests.test_seviri_base.TestOrbitPolynomialFinder[source]
Bases:
object
Unit tests for orbit polynomial finder.
satpy.tests.reader_tests.test_seviri_l1b_calibration module
Unittesting the MSG SEVIRI calibration.
- class satpy.tests.reader_tests.test_seviri_l1b_calibration.TestFileHandlerCalibrationBase[source]
Bases:
object
Base class for file handler calibration tests.
- expected = <nested dict of expected 2×2 calibration results: for each of 'HRV', 'IR_108' and 'VIS006', the counts/radiance/reflectance/brightness_temperature DataArrays under the NOMINAL, GSICS and EXTERNAL coefficient sets>
- external_coefs = {'HRV': {'gain': 5, 'offset': -5}, 'IR_108': {'gain': 20, 'offset': -20}, 'VIS006': {'gain': 10, 'offset': -10}}
- gains_gsics = [0, 0, 0, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 0]
- gains_nominal = array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])
- offsets_gsics = [0, 0, 0, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9, -1.0, -1.1, 0]
- offsets_nominal = array([ -1, -2, -3, -4, -5, -6, -7, -8, -9, -10, -11, -12])
- platform_id = 324
- radiance_types = array([2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2., 2.])
- scan_time = datetime.datetime(2020, 1, 1, 0, 0)
- spectral_channel_ids = {'HRV': 12, 'IR_108': 9, 'VIS006': 1}
- class satpy.tests.reader_tests.test_seviri_l1b_calibration.TestSEVIRICalibrationAlgorithm(methodName='runTest')[source]
Bases:
TestCase
Unit Tests for SEVIRI calibration algorithm.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_seviri_l1b_hrit module
The HRIT msg reader tests package.
- class satpy.tests.reader_tests.test_seviri_l1b_hrit.TestHRITMSGBase(methodName='runTest')[source]
Bases:
TestCase
Baseclass for SEVIRI HRIT reader tests.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_seviri_l1b_hrit.TestHRITMSGCalibration[source]
Bases:
TestFileHandlerCalibrationBase
Unit tests for calibration.
- class satpy.tests.reader_tests.test_seviri_l1b_hrit.TestHRITMSGEpilogueFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the HRIT epilogue file handler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_seviri_l1b_hrit.TestHRITMSGFileHandler(methodName='runTest')[source]
Bases:
TestHRITMSGBase
Test the HRITFileHandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_get_dataset_with_raw_metadata(calibrate, parent_get_dataset)[source]
Test getting the dataset.
- class satpy.tests.reader_tests.test_seviri_l1b_hrit.TestHRITMSGFileHandlerHRV(methodName='runTest')[source]
Bases:
TestHRITMSGBase
Test the HRITFileHandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_seviri_l1b_hrit.TestHRITMSGPrologueFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the HRIT prologue file handler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_seviri_l1b_hrit_setup module
Setup for SEVIRI HRIT reader tests.
- satpy.tests.reader_tests.test_seviri_l1b_hrit_setup.get_acq_time_cds(start_time, nlines)[source]
Get fake scanline acquisition times.
- satpy.tests.reader_tests.test_seviri_l1b_hrit_setup.get_acq_time_exp(start_time, nlines)[source]
Get expected scanline acquisition times.
- satpy.tests.reader_tests.test_seviri_l1b_hrit_setup.get_attrs_exp(projection_longitude=0.0)[source]
Get expected dataset attributes.
- satpy.tests.reader_tests.test_seviri_l1b_hrit_setup.get_fake_dataset_info()[source]
Create fake dataset info.
- satpy.tests.reader_tests.test_seviri_l1b_hrit_setup.get_fake_epilogue()[source]
Create a fake HRIT epilogue.
- satpy.tests.reader_tests.test_seviri_l1b_hrit_setup.get_fake_file_handler(observation_start_time, nlines, ncols, projection_longitude=0, orbit_polynomials=<default test polynomials: three StartTime/EndTime intervals on 2006-01-01 (06:00–12:00, 12:00–18:00, 18:00–24:00) with non-zero X/Y/Z coefficients in the middle interval only>)[source]
Create a mocked SEVIRI HRIT file handler.
- satpy.tests.reader_tests.test_seviri_l1b_hrit_setup.get_fake_filename_info(start_time)[source]
Create fake filename information.
- satpy.tests.reader_tests.test_seviri_l1b_hrit_setup.get_fake_mda(nlines, ncols, start_time)[source]
Create fake metadata.
- satpy.tests.reader_tests.test_seviri_l1b_hrit_setup.get_fake_prologue(projection_longitude, orbit_polynomials)[source]
Create a fake HRIT prologue.
satpy.tests.reader_tests.test_seviri_l1b_icare module
Tests for the SEVIRI L1b HDF4 from ICARE reader.
- class satpy.tests.reader_tests.test_seviri_l1b_icare.FakeHDF4FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF4FileHandler
Swap-in HDF4 file handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_seviri_l1b_icare.TestSEVIRIICAREReader(methodName='runTest')[source]
Bases:
TestCase
Test SEVIRI L1b HDF4 from ICARE Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'seviri_l1b_icare.yaml'
satpy.tests.reader_tests.test_seviri_l1b_native module
Unittesting the Native SEVIRI reader.
- class satpy.tests.reader_tests.test_seviri_l1b_native.TestNativeMSGCalibration[source]
Bases:
TestFileHandlerCalibrationBase
Unit tests for calibration.
- class satpy.tests.reader_tests.test_seviri_l1b_native.TestNativeMSGDataset[source]
Bases:
object
Tests for getting the dataset.
- test_repeat_cycle_duration(file_handler)[source]
Test repeat cycle handling for FD or ReducedScan.
- class satpy.tests.reader_tests.test_seviri_l1b_native.TestNativeMSGFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the NativeMSGFileHandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_seviri_l1b_native.TestNativeMSGFilenames[source]
Bases:
object
Test identification of Native format filenames.
- class satpy.tests.reader_tests.test_seviri_l1b_native.TestNativeMSGPadder(methodName='runTest')[source]
Bases:
TestCase
Test the Padder of the native SEVIRI L1b reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- satpy.tests.reader_tests.test_seviri_l1b_native.create_test_header(earth_model, dataset_id, is_full_disk, is_rapid_scan, good_qual='OK')[source]
Create test header for SEVIRI L1.5 product.
The header includes the mandatory attributes for NativeMSGFileHandler.get_area_extent.
- satpy.tests.reader_tests.test_seviri_l1b_native.create_test_trailer(is_rapid_scan)[source]
Create test trailer for SEVIRI L1.5 product.
The trailer includes the mandatory attributes for NativeMSGFileHandler.get_area_extent.
- satpy.tests.reader_tests.test_seviri_l1b_native.prepare_area_definitions(test_dict)[source]
Prepare calculated and expected area definitions for equality checking.
- satpy.tests.reader_tests.test_seviri_l1b_native.prepare_is_roi(test_dict)[source]
Prepare calculated and expected region-of-interest flags for equality checking.
- satpy.tests.reader_tests.test_seviri_l1b_native.test_area_definitions(actual, expected)[source]
Test area definitions with only one area.
- satpy.tests.reader_tests.test_seviri_l1b_native.test_has_archive_header(starts_with, expected)[source]
Test if the file includes an ASCII archive header.
- satpy.tests.reader_tests.test_seviri_l1b_native.test_header_type(file_content, exp_header_size)[source]
Test identification of the file header type.
- satpy.tests.reader_tests.test_seviri_l1b_native.test_header_warning()[source]
Test warning is raised for NOK quality flag.
- satpy.tests.reader_tests.test_seviri_l1b_native.test_is_roi(actual, expected)[source]
Test if a given area is a region of interest.
satpy.tests.reader_tests.test_seviri_l1b_nc module
The SEVIRI netCDF reader tests package.
- class satpy.tests.reader_tests.test_seviri_l1b_nc.TestNCSEVIRIFileHandler[source]
Bases:
TestFileHandlerCalibrationBase
Unit tests for SEVIRI netCDF reader.
- _get_fake_dataset(counts, h5netcdf)[source]
Create a fake dataset.
- Parameters:
counts (xr.DataArray) – Array with data.
h5netcdf (boolean) – If True an array attribute will be created which is common for the h5netcdf backend in xarray for scalar values.
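The peculiarity being tested is, loosely, that the h5netcdf backend can hand back scalar attributes wrapped in 1-element arrays. A minimal illustration of the idea (an assumed sketch, not Satpy code):

    import numpy as np

    # With the h5netcdf xarray backend, a scalar attribute such as 42 may
    # arrive as a 1-element array instead of a plain Python scalar.
    attr = np.array([42])

    # Readers therefore commonly unwrap 1-element arrays before use.
    if isinstance(attr, np.ndarray) and attr.size == 1:
        attr = attr.item()

    assert attr == 42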
- test_get_dataset(file_handler, channel, calibration, mask_bad_quality_scan_lines)[source]
Test getting the dataset.
- test_h5netcdf_pecularity(file_handler, h5netcdf)[source]
Test conversion of attributes when xarray is used with h5netcdf backend.
- test_repeat_cycle_duration(file_handler)[source]
Test repeat cycle handling for FD or ReducedScan.
satpy.tests.reader_tests.test_seviri_l2_bufr module
Unittesting the SEVIRI L2 BUFR reader.
- class satpy.tests.reader_tests.test_seviri_l2_bufr.SeviriL2AMVBufrData(filename)[source]
Bases:
object
Mock SEVIRI L2 AMV BUFR data.
Initialize by mocking test data for testing the SEVIRI L2 BUFR reader.
- class satpy.tests.reader_tests.test_seviri_l2_bufr.SeviriL2BufrData(filename, with_adef=False, rect_lon='default')[source]
Bases:
object
Mock SEVIRI L2 BUFR data.
Initialize by mocking test data for testing the SEVIRI L2 BUFR reader.
- class satpy.tests.reader_tests.test_seviri_l2_bufr.TestSeviriL2AMVBufrReader[source]
Bases:
object
Test SEVIRI L2 BUFR Reader for AMV data.
- class satpy.tests.reader_tests.test_seviri_l2_bufr.TestSeviriL2BufrReader[source]
Bases:
object
Test SEVIRI L2 BUFR Reader.
- pytestmark = [Mark(name='parametrize', args=('input_file', ['ASRBUFRProd_20191106130000Z_00_OMPEFS02_MET09_FES_E0000', 'MSG2-SEVI-MSGASRE-0101-0101-20191106130000.000000000Z-20191106131702-1362128.bfr', 'MSG2-SEVI-MSGASRE-0101-0101-20191106101500.000000000Z-20191106103218-1362148']), kwargs={})]
- static test_attributes_with_area_definition(input_file)[source]
Test correctness of dataset attributes with data loaded with an AreaDefinition.
- static test_attributes_with_swath_definition(input_file)[source]
Test correctness of dataset attributes with data loaded with a SwathDefinition (default behaviour).
- test_data_with_rect_lon(input_file)[source]
Test data loaded with AreaDefinition and user defined rectification longitude.
satpy.tests.reader_tests.test_seviri_l2_grib module
SEVIRI L2 GRIB-reader test package.
- class satpy.tests.reader_tests.test_seviri_l2_grib.Test_SeviriL2GribFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the SeviriL2GribFileHandler reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_sgli_l1b module
Tests for the SGLI L1B backend.
- satpy.tests.reader_tests.test_sgli_l1b.add_downsampled_geometry_data(h5f)[source]
Add downsampled geometry data to an h5py file instance.
- satpy.tests.reader_tests.test_sgli_l1b.sgli_ir_file(tmp_path_factory)[source]
Create a stub IR file.
- satpy.tests.reader_tests.test_sgli_l1b.sgli_pol_file(tmp_path_factory)[source]
Create a POL stub file.
- satpy.tests.reader_tests.test_sgli_l1b.sgli_vn_file(tmp_path_factory)[source]
Create a stub VN file.
- satpy.tests.reader_tests.test_sgli_l1b.test_channel_is_chunked(sgli_vn_file)[source]
Test that the channel data is chunked.
- satpy.tests.reader_tests.test_sgli_l1b.test_channel_is_masked(sgli_vn_file)[source]
Test that channels are masked for no-data.
- satpy.tests.reader_tests.test_sgli_l1b.test_end_time(sgli_vn_file)[source]
Test that the end time is extracted.
- satpy.tests.reader_tests.test_sgli_l1b.test_get_dataset_counts(sgli_vn_file)[source]
Test that counts can be extracted from a file.
- satpy.tests.reader_tests.test_sgli_l1b.test_get_dataset_for_unknown_channel(sgli_vn_file)[source]
Test getting a dataset for an unknown channel.
- satpy.tests.reader_tests.test_sgli_l1b.test_get_polarized_dataset_reflectance(sgli_pol_file, polarization)[source]
Test getting polarized reflectances.
- satpy.tests.reader_tests.test_sgli_l1b.test_get_polarized_longitudes(sgli_pol_file)[source]
Test getting polarized longitudes.
- satpy.tests.reader_tests.test_sgli_l1b.test_get_sw_dataset_reflectances(sgli_ir_file)[source]
Test getting SW dataset reflectances.
- satpy.tests.reader_tests.test_sgli_l1b.test_get_ti_dataset_bt(sgli_ir_file)[source]
Test getting brightness temperatures for IR channels.
- satpy.tests.reader_tests.test_sgli_l1b.test_get_ti_dataset_radiance(sgli_ir_file)[source]
Test getting thermal IR radiances.
- satpy.tests.reader_tests.test_sgli_l1b.test_get_ti_lon_lats(sgli_ir_file)[source]
Test getting the lons and lats for IR channels.
- satpy.tests.reader_tests.test_sgli_l1b.test_get_vn_dataset_radiance(sgli_vn_file)[source]
Test that datasets can be calibrated to radiance.
- satpy.tests.reader_tests.test_sgli_l1b.test_get_vn_dataset_reflectances(sgli_vn_file)[source]
Test that the vn datasets can be calibrated to reflectances.
- satpy.tests.reader_tests.test_sgli_l1b.test_loading_lon_lat(sgli_vn_file)[source]
Test that loading lons and lats works.
- satpy.tests.reader_tests.test_sgli_l1b.test_loading_sensor_angles(sgli_vn_file)[source]
Test loading the satellite angles.
- satpy.tests.reader_tests.test_sgli_l1b.test_loading_solar_angles(sgli_vn_file)[source]
Test loading sun angles.
satpy.tests.reader_tests.test_slstr_l1b module
Module for testing the satpy.readers.nc_slstr module.
- class satpy.tests.reader_tests.test_slstr_l1b.TestSLSTRCalibration(methodName='runTest')[source]
Bases:
TestSLSTRL1B
Test the implementation of the calibration factors.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_slstr_l1b.TestSLSTRL1B(methodName='runTest')[source]
Bases:
TestCase
Common setup for SLSTR_L1B tests.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.reader_tests.test_slstr_l1b.TestSLSTRReader(methodName='runTest')[source]
Bases:
TestSLSTRL1B
Test various nc_slstr file handlers.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_smos_l2_wind module
Module for testing the satpy.readers.smos_l2_wind module.
- class satpy.tests.reader_tests.test_smos_l2_wind.FakeNetCDF4FileHandlerSMOSL2WIND(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap-in NetCDF4 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_smos_l2_wind.TestSMOSL2WINDReader(methodName='runTest')[source]
Bases:
TestCase
Test SMOS L2 WINDReader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'smos_l2_wind.yaml'
satpy.tests.reader_tests.test_tropomi_l2 module
Module for testing the satpy.readers.tropomi_l2 module.
- class satpy.tests.reader_tests.test_tropomi_l2.FakeNetCDF4FileHandlerTL2(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap-in NetCDF4 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_tropomi_l2.TestTROPOMIL2Reader(methodName='runTest')[source]
Bases:
TestCase
Test TROPOMI L2 Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'tropomi_l2.yaml'
satpy.tests.reader_tests.test_utils module
Testing of helper functions.
- class satpy.tests.reader_tests.test_utils.TestHelpers(methodName='runTest')[source]
Bases:
TestCase
Test the area helpers.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_generic_open_FSFile_MemoryFileSystem()[source]
Test the generic_open method with FSFile in MemoryFileSystem.
- test_get_user_calibration_factors()[source]
Test the retrieval of user-supplied calibration factors.
satpy.tests.reader_tests.test_vaisala_gld360 module
Unittesting the Vaisala GLD360 reader.
- class satpy.tests.reader_tests.test_vaisala_gld360.TestVaisalaGLD360TextFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the VaisalaGLD360TextFileHandler.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_vii_base_nc module
The vii_base_nc reader tests package.
- class satpy.tests.reader_tests.test_vii_base_nc.TestViiNCBaseFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the ViiNCBaseFileHandler reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_vii_l1b_nc module
The vii_l1b_nc reader tests package.
This version tests the readers for VII test data V2 as per PFS V4A.
- class satpy.tests.reader_tests.test_vii_l1b_nc.TestViiL1bNCFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the ViiL1bNCFileHandler reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_vii_l2_nc module
The vii_l2_nc reader tests package.
- class satpy.tests.reader_tests.test_vii_l2_nc.TestViiL2NCFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the ViiL2NCFileHandler reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_vii_utils module
The vii_utils reader tests package.
- class satpy.tests.reader_tests.test_vii_utils.TestViiUtils(methodName='runTest')[source]
Bases:
TestCase
Test the vii_utils module.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_vii_wv_nc module
The vii_l2_nc reader tests package for VII/METimage water vapour products.
- class satpy.tests.reader_tests.test_vii_wv_nc.TestViiL2NCFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the ViiL2NCFileHandler reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.reader_tests.test_viirs_atms_utils module
Test common VIIRS/ATMS SDR reader functions.
- satpy.tests.reader_tests.test_viirs_atms_utils.test_get_file_units(caplog)[source]
Test getting the file units from the dataset info.
- satpy.tests.reader_tests.test_viirs_atms_utils.test_get_scale_factors_for_units_reflectances(caplog)[source]
Test getting scale factors for units when the variable is supposed to be a reflectance.
satpy.tests.reader_tests.test_viirs_compact module
Module for testing the satpy.readers.viirs_compact module.
satpy.tests.reader_tests.test_viirs_edr module
Module for testing the satpy.readers.viirs_l2_jrr module.
Note: This is adapted from the test_slstr_l2.py code.
- class satpy.tests.reader_tests.test_viirs_edr.TestVIIRSJRRReader[source]
Bases:
object
Test the VIIRS JRR L2 reader.
- test_availability_veg_idx(data_file, exp_available)[source]
Test that vegetation indexes aren’t available when they aren’t present.
- test_get_aod_filtered(aod_file, aod_qc_filter, exp_masked_pixel)[source]
Test that the AOD product can be loaded and filtered.
- satpy.tests.reader_tests.test_viirs_edr._array_checks(data_arr: xr.DataArray, dtype: npt.Dtype = <class 'numpy.float32'>, multiple_files: bool = False) None [source]
- satpy.tests.reader_tests.test_viirs_edr._check_continuous_data_arr(data_arr: DataArray) None [source]
- satpy.tests.reader_tests.test_viirs_edr._check_surf_refl_data_arr(data_arr: xr.DataArray, dtype: npt.DType = <class 'numpy.float32'>, multiple_files: bool = False) None [source]
- satpy.tests.reader_tests.test_viirs_edr._check_surf_refl_qf_data_arr(data_arr: DataArray, multiple_files: bool) None [source]
- satpy.tests.reader_tests.test_viirs_edr._check_vi_data_arr(data_arr: DataArray, is_filtered: bool, multiple_files: bool) None [source]
- satpy.tests.reader_tests.test_viirs_edr._create_continuous_variables(var_names: Iterable[str]) dict[str, DataArray] [source]
- satpy.tests.reader_tests.test_viirs_edr._create_fake_dataset(vars_dict: dict[str, DataArray]) Dataset [source]
- satpy.tests.reader_tests.test_viirs_edr._create_fake_file(tmp_path_factory: TempPathFactory, filename: str, data_arrs: dict[str, DataArray]) Path [source]
- satpy.tests.reader_tests.test_viirs_edr._create_surf_refl_variables() dict[str, DataArray] [source]
- satpy.tests.reader_tests.test_viirs_edr._create_surface_reflectance_file(tmp_path_factory: TempPathFactory, start_time: datetime, include_veg_indices: bool = False) Path [source]
- satpy.tests.reader_tests.test_viirs_edr._create_veg_index_variables() dict[str, DataArray] [source]
- satpy.tests.reader_tests.test_viirs_edr.aod_file(tmp_path_factory: TempPathFactory) Path [source]
Generate fake AOD VIIRs EDR file.
- satpy.tests.reader_tests.test_viirs_edr.cloud_height_file(tmp_path_factory: TempPathFactory) Path [source]
Generate fake CloudHeight VIIRS EDR file.
- satpy.tests.reader_tests.test_viirs_edr.lst_file(tmp_path_factory: TempPathFactory) Path [source]
Generate fake VLST EDR file.
- satpy.tests.reader_tests.test_viirs_edr.multiple_surface_reflectance_files(surface_reflectance_file, surface_reflectance_file2) list[Path] [source]
Get two surface reflectance files.
- satpy.tests.reader_tests.test_viirs_edr.multiple_surface_reflectance_files_with_veg_indices(surface_reflectance_with_veg_indices_file, surface_reflectance_with_veg_indices_file2) list[Path] [source]
Get two surface reflectance files with vegetation indexes included.
- satpy.tests.reader_tests.test_viirs_edr.surface_reflectance_file(tmp_path_factory: TempPathFactory) Path [source]
Generate fake surface reflectance EDR file.
- satpy.tests.reader_tests.test_viirs_edr.surface_reflectance_file2(tmp_path_factory: TempPathFactory) Path [source]
Generate fake surface reflectance EDR file.
- satpy.tests.reader_tests.test_viirs_edr.surface_reflectance_with_veg_indices_file(tmp_path_factory: TempPathFactory) Path [source]
Generate fake surface reflectance EDR file with vegetation indexes included.
- satpy.tests.reader_tests.test_viirs_edr.surface_reflectance_with_veg_indices_file2(tmp_path_factory: TempPathFactory) Path [source]
Generate fake surface reflectance EDR file with vegetation indexes included.
- satpy.tests.reader_tests.test_viirs_edr.test_available_datasets(aod_file)[source]
Test that available datasets doesn’t claim non-filetype datasets.
For example, if a YAML-configured dataset’s file type is not loaded, then the available status is None and should remain None. This means no file type knows what to do with this dataset. If it is False, that means a file type knows of the dataset, but the variable is not available in the file. In the test below this isn’t the case, so the YAML-configured dataset should be provided once and have a None availability.
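A hedged sketch of this None/False convention, with all names (file type, file key, variable container) invented for illustration:

    def available_datasets(configured_datasets, my_file_type, variables_in_file):
        # Yield (is_available, ds_info) pairs following the convention above.
        for is_avail, ds_info in configured_datasets:
            if is_avail is not None:
                # Another file handler already made a decision; pass it through.
                yield is_avail, ds_info
            elif ds_info.get("file_type") != my_file_type:
                # Not our file type: availability stays None (nobody knows).
                yield None, ds_info
            else:
                # Our file type: True if present in this file, False if this
                # file type knows the dataset but the variable is missing.
                yield ds_info["file_key"] in variables_in_file, ds_info

    candidates = [(None, {"name": "AOD550", "file_type": "jrr_aod", "file_key": "AOD550"})]
    print(list(available_datasets(candidates, "jrr_aod", {"AOD550"})))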
satpy.tests.reader_tests.test_viirs_edr_active_fires module
VIIRS Active Fires Tests.
This module implements tests for VIIRS Active Fires NetCDF and ASCII file readers.
- class satpy.tests.reader_tests.test_viirs_edr_active_fires.FakeImgFiresNetCDF4FileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap in CDF4 file handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_viirs_edr_active_fires.FakeImgFiresTextFileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
BaseFileHandler
Fake file handler for text files at image resolution.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_viirs_edr_active_fires.FakeModFiresNetCDF4FileHandler(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap in CDF4 file handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_viirs_edr_active_fires.FakeModFiresTextFileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
BaseFileHandler
Fake file handler for text files at moderate resolution.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_viirs_edr_active_fires.TestImgVIIRSActiveFiresNetCDF4(methodName='runTest')[source]
Bases:
TestCase
Test VIIRS Fires Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'viirs_edr_active_fires.yaml'
- class satpy.tests.reader_tests.test_viirs_edr_active_fires.TestImgVIIRSActiveFiresText(methodName='runTest')[source]
Bases:
TestCase
Test VIIRS Fires Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'viirs_edr_active_fires.yaml'
- class satpy.tests.reader_tests.test_viirs_edr_active_fires.TestModVIIRSActiveFiresNetCDF4(methodName='runTest')[source]
Bases:
TestCase
Test VIIRS Fires Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'viirs_edr_active_fires.yaml'
- class satpy.tests.reader_tests.test_viirs_edr_active_fires.TestModVIIRSActiveFiresText(methodName='runTest')[source]
Bases:
TestCase
Test VIIRS Fires Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'viirs_edr_active_fires.yaml'
satpy.tests.reader_tests.test_viirs_edr_flood module
Tests for the VIIRS EDR Flood reader.
- class satpy.tests.reader_tests.test_viirs_edr_flood.FakeHDF4FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF4FileHandler
Swap in HDF4 file handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_viirs_edr_flood.TestVIIRSEDRFloodReader(methodName='runTest')[source]
Bases:
TestCase
Test VIIRS EDR Flood Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'viirs_edr_flood.yaml'
satpy.tests.reader_tests.test_viirs_l1b module
Module for testing the satpy.readers.viirs_l1b module.
- class satpy.tests.reader_tests.test_viirs_l1b.FakeNetCDF4FileHandlerDay(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap-in NetCDF4 File Handler.
Get fake file content from ‘get_test_content’.
- I_BANDS = ['I01', 'I02', 'I03', 'I04', 'I05']
- I_BT_BANDS = ['I04', 'I05']
- I_REFL_BANDS = ['I01', 'I02', 'I03']
- M_BANDS = ['M01', 'M02', 'M03', 'M04', 'M05', 'M06', 'M07', 'M08', 'M09', 'M10', 'M11', 'M12', 'M13', 'M14', 'M15', 'M16']
- M_BT_BANDS = ['M12', 'M13', 'M14', 'M15', 'M16']
- M_REFL_BANDS = ['M01', 'M02', 'M03', 'M04', 'M05', 'M06', 'M07', 'M08', 'M09', 'M10', 'M11']
- class satpy.tests.reader_tests.test_viirs_l1b.FakeNetCDF4FileHandlerNight(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandlerDay
Same as the day file handler, but some day-only bands are missing.
This matches what happens in real world files where reflectance bands are removed in night data to save space.
Get fake file content from ‘get_test_content’.
- I_BANDS = ['I04', 'I05']
- M_BANDS = ['M12', 'M13', 'M14', 'M15', 'M16']
- class satpy.tests.reader_tests.test_viirs_l1b.TestVIIRSL1BReaderDay[source]
Bases:
object
Test VIIRS L1B Reader.
- fake_cls
alias of
FakeNetCDF4FileHandlerDay
- has_reflectance_bands = True
- yaml_file = 'viirs_l1b.yaml'
- class satpy.tests.reader_tests.test_viirs_l1b.TestVIIRSL1BReaderDayNight[source]
Bases:
TestVIIRSL1BReaderDay
Test VIIRS L1b with night data.
Night data files don’t have reflectance bands in them.
- fake_cls
alias of
FakeNetCDF4FileHandlerNight
- has_reflectance_bands = False
satpy.tests.reader_tests.test_viirs_l2 module
Module for testing the satpy.readers.viirs_l2 module.
- class satpy.tests.reader_tests.test_viirs_l2.FakeNetCDF4FileHandlerVIIRSL2(filename, filename_info, filetype_info, auto_maskandscale=False, xarray_kwargs=None, cache_var_size=0, cache_handle=False, extra_file_content=None)[source]
Bases:
FakeNetCDF4FileHandler
Swap-in NetCDF4 File Handler.
Get fake file content from ‘get_test_content’.
satpy.tests.reader_tests.test_viirs_sdr module
Module for testing the satpy.readers.viirs_sdr module.
- class satpy.tests.reader_tests.test_viirs_sdr.FakeHDF5FileHandler2(filename, filename_info, filetype_info, include_factors=True)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Create fake file handler.
- static _add_geolocation_info_to_file_content(file_content, filename, data_var_prefix, num_grans)[source]
- _add_granule_specific_info_to_file_content(file_content, dataset_group, num_granules, num_scans_per_granule, gran_group_prefix)[source]
- _num_scans_per_gran = [48]
- _num_test_granules = 1
- class satpy.tests.reader_tests.test_viirs_sdr.FakeHDF5FileHandlerAggr(filename, filename_info, filetype_info, include_factors=True)[source]
Bases:
FakeHDF5FileHandler2
Swap-in HDF5 File Handler with 4 VIIRS Granules per file.
Create fake file handler.
- _num_scans_per_gran = [48, 48, 48, 48]
- _num_test_granules = 4
- class satpy.tests.reader_tests.test_viirs_sdr.FakeShortHDF5FileHandlerAggr(filename, filename_info, filetype_info, include_factors=True)[source]
Bases:
FakeHDF5FileHandler2
Fake file that has fewer scans than usual in a couple of granules.
Create fake file handler.
- _num_scans_per_gran = [47, 48, 47]
- _num_test_granules = 3
- class satpy.tests.reader_tests.test_viirs_sdr.TestAggrVIIRSSDRReader(methodName='runTest')[source]
Bases:
TestCase
Test VIIRS SDR Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'viirs_sdr.yaml'
- class satpy.tests.reader_tests.test_viirs_sdr.TestShortAggrVIIRSSDRReader(methodName='runTest')[source]
Bases:
TestCase
Test VIIRS SDR Reader with a file that has truncated granules.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- yaml_file = 'viirs_sdr.yaml'
- class satpy.tests.reader_tests.test_viirs_sdr.TestVIIRSSDRReader(methodName='runTest')[source]
Bases:
TestCase
Test VIIRS SDR Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_init_start_time_is_nodate()[source]
Test basic init with start_time being set to the no-date 1/1-1958.
- test_load_all_i_reflectances_provided_geo()[source]
Load all I band reflectances with geo files provided.
- test_load_all_m_reflectances_find_geo()[source]
Load all M band reflectances with geo files not specified but existing.
- test_load_all_m_reflectances_no_geo()[source]
Load all M band reflectances with no geo files provided.
- test_load_all_m_reflectances_provided_geo()[source]
Load all M band reflectances with geo files provided.
- test_load_all_m_reflectances_use_nontc()[source]
Load all M band reflectances but use non-TC geolocation.
- test_load_all_m_reflectances_use_nontc2()[source]
Load all M band reflectances but use non-TC geolocation because TC isn’t available.
- test_load_dnb_sza_no_factors()[source]
Load DNB solar zenith angle with no scaling factors.
The angles in VIIRS SDRs should never have scaling factors so we test it that way.
- yaml_file = 'viirs_sdr.yaml'
satpy.tests.reader_tests.test_viirs_vgac_l1c_nc module
The viirs_vgac_l1b_nc reader tests package.
This version tests the readers for a preliminary version of VIIRS VGAC data.
satpy.tests.reader_tests.test_virr_l1b module
Test for readers/virr_l1b.py.
- class satpy.tests.reader_tests.test_virr_l1b.FakeHDF5FileHandler2(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
FakeHDF5FileHandler
Swap-in HDF5 File Handler.
Get fake file content from ‘get_test_content’.
- class satpy.tests.reader_tests.test_virr_l1b.TestVIRRL1BReader(methodName='runTest')[source]
Bases:
TestCase
Test VIRR L1B Reader.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _band_helper(attributes, units, calibration, standard_name, file_type, band_index_size, resolution)[source]
- _classSetupFailed = False
- _class_cleanups = []
- _fy3_helper(platform_name, reader, Emissive_units)[source]
Load channels and test accurate metadata.
- yaml_file = 'virr_l1b.yaml'
satpy.tests.reader_tests.utils module
Utilities for reader tests.
- satpy.tests.reader_tests.utils.default_attr_processor(root, attr)[source]
Do not change the attribute.
- satpy.tests.reader_tests.utils.fill_h5(root, contents, attr_processor=<function default_attr_processor>)[source]
Fill hdf5 file with the given contents.
- Parameters:
root – hdf5 file root
contents – Contents to be written into the file
attr_processor – A method for modifying attributes before they are written to the file.
- satpy.tests.reader_tests.utils.get_jit_methods(module)[source]
Get all jit-compiled methods in a module.
- satpy.tests.reader_tests.utils.skip_numba_unstable_if_missing()[source]
Determine if numba-based tests should be skipped during unstable CI tests.
If numba fails to import it could be because numba is not compatible with a newer version of numpy. This is very likely to happen in the unstable/experimental CI environment. This function returns True if numba-based tests should be skipped, i.e. if numba could not be imported and we’re in the unstable environment. We determine if we’re in this CI environment by looking for the UNSTABLE="1" environment variable.
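A minimal sketch of that logic (assumed; the actual helper may differ in detail):

    import os

    def skip_numba_unstable_if_missing():
        # Skip numba-based tests only when numba is unimportable *and* we
        # are in the unstable CI environment (UNSTABLE="1").
        try:
            import numba  # noqa: F401
        except ImportError:
            return os.environ.get("UNSTABLE", "0") == "1"
        return False

In a test module this would typically feed a pytest skip marker, e.g. pytest.mark.skipif(skip_numba_unstable_if_missing(), reason="numba unavailable in unstable environment").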
Module contents
The reader tests package.
satpy.tests.scene_tests package
Submodules
satpy.tests.scene_tests.test_conversions module
Unit tests for Scene conversion functionality.
- class satpy.tests.scene_tests.test_conversions.TestSceneConversions[source]
Bases:
object
Test Scene conversion to geoviews, xarray, etc.
- test_geoviews_basic_with_area()[source]
Test converting a Scene to geoviews with an AreaDefinition.
- class satpy.tests.scene_tests.test_conversions.TestSceneSerialization[source]
Bases:
object
Test the Scene serialization.
- pytestmark = [Mark(name='usefixtures', args=('include_test_etc',), kwargs={})]
- class satpy.tests.scene_tests.test_conversions.TestToXarrayConversion[source]
Bases:
object
Test Scene.to_xarray() conversion.
- test_to_xarray_with_multiple_area_scene(multi_area_scn)[source]
Test converting a multiple-area Scene to xarray.
satpy.tests.scene_tests.test_data_access module
Unit tests for data access methods and properties of the Scene class.
- class satpy.tests.scene_tests.test_data_access.TestComputePersist[source]
Bases:
object
Test methods that compute the internal data in some way.
- pytestmark = [Mark(name='usefixtures', args=('include_test_etc',), kwargs={})]
- class satpy.tests.scene_tests.test_data_access.TestDataAccessMethods[source]
Bases:
object
Test the scene class.
- pytestmark = [Mark(name='usefixtures', args=('include_test_etc',), kwargs={})]
- test_sensor_names_added_datasets(include_reader, added_sensor, exp_sensors)[source]
Test that Scene sensor_names handles contained sensors properly.
- class satpy.tests.scene_tests.test_data_access.TestFinestCoarsestArea[source]
Bases:
object
Test the Scene logic for finding the finest and coarsest area.
- test_coarsest_finest_area_different_shape(coarse_area, fine_area)[source]
Test ‘coarsest_area’ and ‘finest_area’ methods for upright areas.
- test_coarsest_finest_area_same_shape(area_def, shifted_area)[source]
Test that two areas with the same shape are consistently returned.
If two geometries (ex. two AreaDefinitions or two SwathDefinitions) have the same resolution (shape) but different coordinates, which one has the finer resolution would ultimately be determined by the semi-random ordering of the Scene’s internal container (a dict) if only pixel resolution were compared. This test makes sure that the same object is always returned.
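A usage sketch of the two methods under test; the reader name, channel names, and input files are assumptions:

    from satpy import Scene

    # `my_seviri_files` is a placeholder list of input files.
    scn = Scene(filenames=my_seviri_files, reader="seviri_l1b_native")
    scn.load(["HRV", "IR_108"])
    fine = scn.finest_area()      # deterministic even for same-shape ties
    coarse = scn.coarsest_area()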
satpy.tests.scene_tests.test_init module
Unit tests for Scene creation.
- class satpy.tests.scene_tests.test_init.TestScene[source]
Bases:
object
Test the scene class.
- pytestmark = [Mark(name='usefixtures', args=('include_test_etc',), kwargs={})]
- test_create_multiple_reader_different_kwargs(include_test_etc)[source]
Test passing different kwargs to different readers.
- test_create_reader_instances_with_reader()[source]
Test creating a reader instance providing the reader name.
- test_create_reader_instances_with_reader_kwargs()[source]
Test creating a reader instance with reader kwargs.
- test_storage_options_from_reader_kwargs_no_options()[source]
Test getting storage options from reader kwargs.
Case where there are no options given.
- test_storage_options_from_reader_kwargs_per_reader()[source]
Test getting storage options from reader kwargs.
Case where each reader has its own storage options.
- test_storage_options_from_reader_kwargs_per_reader_and_global()[source]
Test getting storage options from reader kwargs.
Case where each reader has its own storage options and there are global options to merge.
satpy.tests.scene_tests.test_load module
Unit tests for loading-related functionality in scene.py.
- class satpy.tests.scene_tests.test_load.TestBadLoading[source]
Bases:
object
Test the Scene object’s .load method with bad inputs.
- pytestmark = [Mark(name='usefixtures', args=('include_test_etc',), kwargs={})]
- class satpy.tests.scene_tests.test_load.TestLoadingComposites[source]
Bases:
object
Test the Scene object’s .load method for composites.
- pytestmark = [Mark(name='usefixtures', args=('include_test_etc',), kwargs={})]
- test_load_comp15()[source]
Test loading a composite whose prerequisites can’t be loaded.
Note that the prereq exists in the reader, but fails in loading.
- test_load_comp18()[source]
Test loading a composite that depends on an incompatible area modified dataset.
- test_load_comp18_2()[source]
Test loading a composite that depends on an incompatible area modified dataset.
Specifically a modified dataset where the modifier has optional dependencies.
- test_load_comp19()[source]
Test loading a composite that shares a dep with a dependency.
More importantly, test loading a dependency that depends on the same dependency as this composite (a sibling dependency), where that sibling dependency includes a modifier. This test makes sure that the Node in the dependency tree is the exact same node.
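A hedged sketch of the load pattern being exercised: the composite and a modified version of the shared dependency are requested in one call. The names (“comp19”, “ds5”, “mod1”) come from Satpy’s fake test configuration, and scn is assumed to be a Scene built with that configuration:

    from satpy.dataset.dataid import DataQuery

    # Request the composite together with the modified sibling dependency;
    # both must resolve to the exact same node in the dependency tree.
    scn.load(["comp19", DataQuery(name="ds5", modifiers=("mod1",))])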
- test_load_dataset_after_composite2()[source]
Test load complex composite followed by other datasets.
- test_load_modified_with_load_kwarg()[source]
Test loading a modified dataset using the Scene.load keyword argument.
- test_load_multiple_resolutions()[source]
Test loading a dataset that has multiple resolutions available, with different resolutions requested.
- test_load_same_subcomposite()[source]
Test loading a composite and one of its subcomposites at the same time.
- test_load_when_sensor_none_in_preloaded_dataarrays()[source]
Test Scene loading when existing loaded arrays have sensor set to None.
Some readers or composites (ex. static images) don’t have a sensor and developers choose to set it to None. This test makes sure this doesn’t break loading.
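A minimal sketch of that situation; the dataset name is invented:

    import numpy as np
    import xarray as xr
    from satpy import Scene

    scn = Scene()
    # Manually added array with sensor deliberately set to None, as some
    # static-image composites do.
    scn["static_overlay"] = xr.DataArray(
        np.zeros((4, 4)), dims=("y", "x"),
        attrs={"name": "static_overlay", "sensor": None})
    # Later Scene operations (loading, sensor_names) must cope with the None.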
- class satpy.tests.scene_tests.test_load.TestLoadingReaderDatasets[source]
Bases:
object
Test the Scene object’s .load method for datasets coming from a reader.
- pytestmark = [Mark(name='usefixtures', args=('include_test_etc',), kwargs={})]
- test_load_ds5_multiple_resolution_loads()[source]
Test loading a dataset with multiple resolutions available as separate loads.
- class satpy.tests.scene_tests.test_load.TestSceneAllAvailableDatasets[source]
Bases:
object
Test the Scene’s handling of various dependencies.
- pytestmark = [Mark(name='usefixtures', args=('include_test_etc',), kwargs={})]
- test_available_composite_ids_missing_available()[source]
Test available_composite_ids when a composite’s dep is missing.
- test_available_composites_known_versus_all()[source]
Test available_composite_ids when some datasets aren’t available.
- test_available_comps_no_deps()[source]
Test Scene available composites when composites don’t have a dependency.
- test_available_dataset_names_no_readers()[source]
Test the available dataset names without a reader.
- test_available_when_sensor_none_in_preloaded_dataarrays()[source]
Test Scene available composites when existing loaded arrays have sensor set to None.
Some readers or composites (ex. static images) don’t have a sensor and developers choose to set it to None. This test makes sure this doesn’t break available composite IDs.
satpy.tests.scene_tests.test_resampling module
Unit tests for resampling and crop-related functionality in scene.py.
- class satpy.tests.scene_tests.test_resampling.TestSceneAggregation[source]
Bases:
object
Test the scene’s aggregate method.
- class satpy.tests.scene_tests.test_resampling.TestSceneCrop[source]
Bases:
object
Test creating new Scenes by cropping an existing Scene.
- class satpy.tests.scene_tests.test_resampling.TestSceneResampling[source]
Bases:
object
Test resampling a Scene to another Scene object.
- _fake_resample_dataset(dataset, dest_area, **kwargs)[source]
Return copy of dataset pretending it was resampled.
- _fake_resample_dataset_force_20x20(dataset, dest_area, **kwargs)[source]
Return copy of dataset pretending it was resampled to (20, 20) shape.
- pytestmark = [Mark(name='usefixtures', args=('include_test_etc',), kwargs={})]
- test_comp_loading_after_resampling_existing_sensor()[source]
Test requesting a composite after resampling.
- test_comp_loading_after_resampling_new_sensor()[source]
Test requesting a composite after resampling when the sensor composites weren’t loaded before.
- test_comp_loading_multisensor_composite_created_user()[source]
Test that multisensor composite can be created manually.
Test that if the user has created datasets “manually”, the provided multi-sensor composites can still be read.
- test_comps_need_resampling_optional_mod_deps()[source]
Test a composite with complex dependencies.
This is specifically testing the case where a compositor depends on multiple resolution prerequisites which themselves are composites. These sub-composites depend on data with a modifier that only has optional dependencies. This is a very specific use case and is the simplest way to present the problem (so far).
The general issue is that the Scene loading creates the “ds13” dataset which already has one modifier on it. The “comp27” composite requires resampling so its 4 prerequisites + the requested “ds13” (from the reader, which includes the mod1 modifier) remain. If the DependencyTree is not copied properly in this situation then the new Scene object will have the composite dependencies without resolution in its dep tree, but have the DataIDs with the resolution in the dataset dictionary. This all results in the Scene trying to regenerate composite dependencies that aren’t needed, which then fail.
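A usage sketch of that scenario; the dataset/composite names come from the fake test configuration, the target area name is an assumption, and scn is assumed to be a Scene built with that configuration:

    # Load a dataset that already carries a modifier, resample, then ask
    # the resampled Scene for the composite that depends on it.
    scn.load(["ds13"])
    local_scn = scn.resample("euro4", resampler="nearest")
    local_scn.load(["comp27"])  # must not regenerate unneeded dependencies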
- test_resample_multi_ancillary()[source]
Test that multiple ancillary variables are retained after resampling.
This test corresponds to GH#2329
- test_resample_reduce_data()[source]
Test that the Scene’s data reduction does not affect the final output.
- test_resample_reduce_data_toggle(rs)[source]
Test that the Scene can be reduced or not reduced during resampling.
satpy.tests.scene_tests.test_saving module
Unit tests for saving-related functionality in scene.py.
Module contents
Tests of the Scene class.
satpy.tests.writer_tests package
Submodules
satpy.tests.writer_tests.test_awips_tiled module
Tests for the AWIPS Tiled writer.
- class satpy.tests.writer_tests.test_awips_tiled.TestAWIPSTiledWriter[source]
Bases:
object
Test basic functionality of AWIPS Tiled writer.
- test_basic_lettered_tiles_diff_projection(tmp_path)[source]
Test creating a lettered grid from data with a differing projection.
- test_basic_numbered_1_tile(extra_attrs, expected_filename, use_save_dataset, caplog, tmp_path)[source]
Test creating a single numbered tile.
- test_basic_numbered_tiles(tile_count, tile_size, tmp_path)[source]
Test creating multiple numbered tiles.
- test_lettered_tiles_bad_filename(tmp_path)[source]
Test creating a lettered grid with a bad filename.
- test_lettered_tiles_no_fit(tmp_path)[source]
Test creating a lettered grid with no data overlapping the grid.
- test_lettered_tiles_no_valid_data(tmp_path)[source]
Test creating a lettered grid with no valid data.
- test_lettered_tiles_sector_ref(tmp_path)[source]
Test creating a lettered grid using the sector as reference.
- test_lettered_tiles_update_existing(tmp_path)[source]
Test updating lettered tiles with additional data.
- satpy.tests.writer_tests.test_awips_tiled._check_required_common_attributes(ds)[source]
Check common properties of the created AWIPS tiles for validity.
- satpy.tests.writer_tests.test_awips_tiled._check_scaled_x_coordinate_variable(ds, masked_ds)[source]
- satpy.tests.writer_tests.test_awips_tiled._check_scaled_y_coordinate_variable(ds, masked_ds)[source]
- satpy.tests.writer_tests.test_awips_tiled._get_test_area(shape=(200, 100), crs=None, extents=None)[source]
satpy.tests.writer_tests.test_cf module
Tests for the CF writer.
- class satpy.tests.writer_tests.test_cf.TempFile(suffix='.nc')[source]
Bases:
object
A temporary filename class.
Initialize.
- class satpy.tests.writer_tests.test_cf.TestCFWriter[source]
Bases:
object
Test case for CF writer.
- test_global_attr_default_history_and_Conventions()[source]
Test saving global attributes history and Conventions.
- test_global_attr_history_and_Conventions()[source]
Test saving global attributes history and Conventions.
- test_load_module_with_old_pyproj()[source]
Test that cf_writer can still be loaded with pyproj 1.9.6.
- test_save_dataset_a_digit()[source]
Test saving an array to netcdf/cf where the dataset name starts with a digit.
- test_save_dataset_a_digit_no_prefix_include_attr()[source]
Test saving an array to netcdf/cf where the dataset name starts with a digit, with no prefix but including the original name.
- test_save_dataset_a_digit_prefix()[source]
Test saving an array to netcdf/cf where the dataset name starts with a digit, with a prefix.
- test_save_dataset_a_digit_prefix_include_attr()[source]
Test saving an array to netcdf/cf where the dataset name starts with a digit, with a prefix and including the original name.
- class satpy.tests.writer_tests.test_cf.TestEncodingAttribute[source]
Bases:
TestNetcdfEncodingKwargs
Test CF writer with ‘encoding’ dataset attribute.
satpy.tests.writer_tests.test_geotiff module
Tests for the geotiff writer.
- class satpy.tests.writer_tests.test_geotiff.TestGeoTIFFWriter[source]
Bases:
object
Test the GeoTIFF Writer class.
- test_dtype_for_enhance_false(tmp_path)[source]
Test that the dtype of the dataset is used if enhance=False and dtype=None.
- test_dtype_for_enhance_false_and_given_dtype(tmp_path)[source]
Test the dtype used when enhance=False and dtype=uint8 is given.
- test_float_write(tmp_path)[source]
Test that geotiffs can be written as floats.
NOTE: Does not actually check that the output is floats.
satpy.tests.writer_tests.test_mitiff module
Tests for the mitiff writer.
Based on the test for the geotiff writer.
- class satpy.tests.writer_tests.test_mitiff.TestMITIFFWriter(methodName='runTest')[source]
Bases:
TestCase
Test the MITIFF Writer class.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_get_test_dataset_three_bands_prereq()[source]
Test basic writer operation with 3 bands and DataQuery prerequisites with a missing name.
- test_save_dataset_with_calibration_error_one_dataset()[source]
Test saving a MITIFF dataset with only one channel and an invalid calibration.
- test_save_dataset_with_calibration_one_dataset()[source]
Test saving a MITIFF dataset with only one channel.
satpy.tests.writer_tests.test_ninjogeotiff module
Tests for writing GeoTIFF files with NinJoTIFF tags.
- satpy.tests.writer_tests.test_ninjogeotiff._get_fake_da(lo, hi, shp, dtype='f4')[source]
Generate dask array with synthetic data.
This is more or less a 2d linspace: it’ll return a 2-d dask array of shape shp, lowest value lo, highest value hi.
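A rough re-creation of the idea (assumed, not the fixture’s exact code):

    import dask.array as da
    import numpy as np

    def fake_da(lo, hi, shp, dtype="f4"):
        # 2-d "linspace": values run evenly from lo to hi across the array.
        vals = np.linspace(lo, hi, int(np.prod(shp)), dtype=dtype).reshape(shp)
        return da.from_array(vals)

    arr = fake_da(270.0, 300.0, (10, 20))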
- satpy.tests.writer_tests.test_ninjogeotiff._patch_datetime_now(monkeypatch)[source]
Get a fake datetime.datetime.now().
- satpy.tests.writer_tests.test_ninjogeotiff.ntg1(test_image_small_mid_atlantic_L)[source]
Create instance of NinJoTagGenerator class.
- satpy.tests.writer_tests.test_ninjogeotiff.ntg2(test_image_large_asia_RGB)[source]
Create instance of NinJoTagGenerator class.
- satpy.tests.writer_tests.test_ninjogeotiff.ntg3(test_image_small_arctic_P)[source]
Create instance of NinJoTagGenerator class.
- satpy.tests.writer_tests.test_ninjogeotiff.ntg_cmyk(test_image_cmyk_antarctic)[source]
Create NinJoTagGenerator instance with CMYK image.
- satpy.tests.writer_tests.test_ninjogeotiff.ntg_latlon(test_image_latlon)[source]
Create NinJoTagGenerator with latlon-area image.
- satpy.tests.writer_tests.test_ninjogeotiff.ntg_no_fill_value(test_image_small_mid_atlantic_L)[source]
Create instance of NinJoTagGenerator class.
- satpy.tests.writer_tests.test_ninjogeotiff.ntg_northpole(test_image_northpole)[source]
Create NinJoTagGenerator with north pole image.
- satpy.tests.writer_tests.test_ninjogeotiff.ntg_rgba(test_image_rgba_merc)[source]
Create NinJoTagGenerator instance with RGBA image.
- satpy.tests.writer_tests.test_ninjogeotiff.ntg_weird(test_image_weird)[source]
Create NinJoTagGenerator instance with weird image.
- satpy.tests.writer_tests.test_ninjogeotiff.test_area_epsg4326()[source]
Test with EPSG4326 (latlong) area, which has no CRS coordinate operation.
- satpy.tests.writer_tests.test_ninjogeotiff.test_area_northpole()[source]
Create a 20x10 test area centered exactly on the north pole.
This has no well-defined central meridian so needs separate testing.
- satpy.tests.writer_tests.test_ninjogeotiff.test_area_small_eqc_wgs84()[source]
Create 50x100 test equirectangular area centered on (50, 90), wgs84.
- satpy.tests.writer_tests.test_ninjogeotiff.test_area_tiny_antarctic()[source]
Create a 20x10 test stereographic area centered near the south pole, wgs84.
- satpy.tests.writer_tests.test_ninjogeotiff.test_area_tiny_eqc_sphere()[source]
Create 10x00 test equirectangular area centered on (40, -30), spherical geoid, m.
- satpy.tests.writer_tests.test_ninjogeotiff.test_area_tiny_stereographic_wgs84()[source]
Create a 20x10 test stereographic area centered near the north pole, wgs84.
- satpy.tests.writer_tests.test_ninjogeotiff.test_area_weird()[source]
Create a weird area (interrupted goode homolosine) to test error handling.
- satpy.tests.writer_tests.test_ninjogeotiff.test_calc_single_tag_by_name(ntg1, ntg2, ntg3)[source]
Test calculating single tag from dataset.
- satpy.tests.writer_tests.test_ninjogeotiff.test_create_unknown_tags(test_image_small_arctic_P)[source]
Test that unknown tags raise ValueError.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_all_tags(ntg1, ntg3, ntg_latlon, ntg_northpole, caplog)[source]
Test getting all tags from dataset.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_central_meridian(ntg1, ntg2, ntg3, ntg_latlon, ntg_northpole)[source]
Test calculating the central meridian.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_color_depth(ntg1, ntg2, ntg3, ntg_weird, ntg_rgba, ntg_cmyk)[source]
Test extracting the color depth.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_creation_date_id(ntg1, ntg2, ntg3)[source]
Test getting the creation date ID.
This is the time at which the file was created.
This test believes it is run at 2033-5-18 05:33:20Z.
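A sketch of how such a fixed “now” can be faked under pytest (an assumed approach; the actual fixture is _patch_datetime_now above):

    import datetime

    class _FakeDatetime(datetime.datetime):
        @classmethod
        def now(cls, tz=None):
            # The moment the surrounding tests believe it is.
            return cls(2033, 5, 18, 5, 33, 20, tzinfo=tz)

    def test_uses_fixed_now(monkeypatch):
        monkeypatch.setattr(datetime, "datetime", _FakeDatetime)
        assert datetime.datetime.now().year == 2033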
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_date_id(ntg1, ntg2, ntg3)[source]
Test getting the date ID.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_earth_radius_large(ntg1, ntg2, ntg3)[source]
Test getting the Earth semi-major axis.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_earth_radius_small(ntg1, ntg2, ntg3)[source]
Test getting the Earth semi-minor axis.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_filename(ntg1, ntg2, ntg3)[source]
Test getting the filename.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_max_gray_value_L(ntg1)[source]
Test getting max gray value for mode L.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_max_gray_value_P(ntg3)[source]
Test getting max gray value for mode P.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_max_gray_value_RGB(ntg2)[source]
Test max gray value for RGB.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_meridian_east(ntg1, ntg2, ntg3)[source]
Test getting east meridian.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_meridian_west(ntg1, ntg2, ntg3)[source]
Test getting west meridian.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_min_gray_value_L(ntg1)[source]
Test getting min gray value for mode L.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_min_gray_value_P(ntg3)[source]
Test getting min gray value for mode P.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_min_gray_value_RGB(ntg2)[source]
Test getting min gray value for RGB.
Note that min/max gray value is mandatory in NinJo even for RGBs?
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_projection(ntg1, ntg2, ntg3, ntg_weird, ntg_rgba, ntg_cmyk, ntg_latlon)[source]
Test getting projection string.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_ref_lat_1(ntg1, ntg2, ntg3, ntg_weird, ntg_latlon)[source]
Test getting reference latitude 1.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_ref_lat_2(ntg1, ntg2, ntg3)[source]
Test getting reference latitude 2.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_transparent_pixel(ntg1, ntg2, ntg3, ntg_no_fill_value)[source]
Test getting fill value.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_xmax(ntg1, ntg2, ntg3)[source]
Test getting maximum x.
- satpy.tests.writer_tests.test_ninjogeotiff.test_get_ymax(ntg1, ntg2, ntg3)[source]
Test getting maximum y.
- satpy.tests.writer_tests.test_ninjogeotiff.test_image_cmyk_antarctic(test_area_tiny_antarctic)[source]
Get a small test image in mode CMYK on south pole.
- satpy.tests.writer_tests.test_ninjogeotiff.test_image_large_asia_RGB(test_area_small_eqc_wgs84)[source]
Get a large-ish test image in mode RGB, over Asia.
- satpy.tests.writer_tests.test_ninjogeotiff.test_image_latlon(test_area_epsg4326)[source]
Get image with latlon areadefinition.
- satpy.tests.writer_tests.test_ninjogeotiff.test_image_northpole(test_area_northpole)[source]
Test image with area exactly on northpole.
- satpy.tests.writer_tests.test_ninjogeotiff.test_image_rgba_merc(test_area_merc)[source]
Get a small test image in mode RGBA and mercator.
- satpy.tests.writer_tests.test_ninjogeotiff.test_image_small_arctic_P(test_area_tiny_stereographic_wgs84)[source]
Get a small-ish test image in mode P, over Arctic.
- satpy.tests.writer_tests.test_ninjogeotiff.test_image_small_mid_atlantic_K_L(test_area_tiny_eqc_sphere)[source]
Get a small test image in units K, mode L, over Atlantic.
- satpy.tests.writer_tests.test_ninjogeotiff.test_image_small_mid_atlantic_L(test_area_tiny_eqc_sphere)[source]
Get a small test image in mode L, over Atlantic.
- satpy.tests.writer_tests.test_ninjogeotiff.test_image_small_mid_atlantic_L_no_quantity(test_area_tiny_eqc_sphere)[source]
Get a small test image, mode L, over Atlantic, with non-quantity values.
This could be the case, for example, for vis_with_night_ir.
- satpy.tests.writer_tests.test_ninjogeotiff.test_image_weird(test_area_weird)[source]
Get a small image with some weird properties to test error handling.
- satpy.tests.writer_tests.test_ninjogeotiff.test_str_ids(test_image_small_arctic_P)[source]
Test that channel and satellite IDs can be str.
- satpy.tests.writer_tests.test_ninjogeotiff.test_write_and_read_file(test_image_small_mid_atlantic_L, tmp_path)[source]
Test that it writes a GeoTIFF with the appropriate NinJo-tags.
- satpy.tests.writer_tests.test_ninjogeotiff.test_write_and_read_file_LA(test_image_latlon, tmp_path)[source]
Test writing and reading LA image.
- satpy.tests.writer_tests.test_ninjogeotiff.test_write_and_read_file_P(test_image_small_arctic_P, tmp_path)[source]
Test writing and reading P image.
- satpy.tests.writer_tests.test_ninjogeotiff.test_write_and_read_file_RGB(test_image_large_asia_RGB, tmp_path)[source]
Test writing and reading RGB.
- satpy.tests.writer_tests.test_ninjogeotiff.test_write_and_read_file_units(test_image_small_mid_atlantic_K_L, tmp_path, caplog)[source]
Test that it writes a GeoTIFF with the appropriate NinJo-tags and units.
- satpy.tests.writer_tests.test_ninjogeotiff.test_write_and_read_no_quantity(test_image_small_mid_atlantic_L_no_quantity, tmp_path, unit)[source]
Test that no scale/offset is written if no valid units are present.
- satpy.tests.writer_tests.test_ninjogeotiff.test_write_and_read_via_scene(test_image_small_mid_atlantic_L, tmp_path)[source]
Test that all attributes are written also when writing from scene.
It appears that Satpy.Scene.save_dataset() does not pass the filename to the writer. Test that the filename is still written to the header when saving this way (the regular way).
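A hedged usage sketch of “saving the regular way” via the Scene; the dataset name and tag values are assumptions, while the keyword-style NinJo tags follow the ninjogeotiff writer’s documented interface:

    # `scn` is assumed to be a Scene with a loaded dataset.
    scn.save_dataset(
        "my_dataset",                 # assumed dataset name
        filename="out.tif",
        writer="ninjogeotiff",
        # NinJo header tags passed as keyword arguments (values assumed):
        ChannelID=900015,
        DataType="GORN",
        PhysicUnit="C",
        PhysicValue="Temperature",
        SatelliteNameID=6400014,
    )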
satpy.tests.writer_tests.test_ninjotiff module
Tests for the NinJoTIFF writer.
- class satpy.tests.writer_tests.test_ninjotiff.FakeImage(data, mode)[source]
Bases:
object
Fake image.
Init fake image.
- class satpy.tests.writer_tests.test_ninjotiff.TestNinjoTIFFWriter(methodName='runTest')[source]
Bases:
TestCase
The ninjo tiff writer tests.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.writer_tests.test_simple_image module
Tests for the simple image writer.
- class satpy.tests.writer_tests.test_simple_image.TestPillowWriter(methodName='runTest')[source]
Bases:
TestCase
Test Pillow/PIL writer.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
satpy.tests.writer_tests.test_utils module
Tests for writer utilities.
- class satpy.tests.writer_tests.test_utils.WriterUtilsTest(methodName='runTest')[source]
Bases:
TestCase
Test various writer utilities.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
Module contents
The writer tests package.
Submodules
satpy.tests.conftest module
Shared preparation and utilities for testing.
This module is executed automatically by pytest.
- satpy.tests.conftest._clear_function_caches()[source]
Clear out global function-level caches that may cause conflicts between tests.
satpy.tests.test_cf_roundtrip module
Test roundtripping the cf writer and reader.
satpy.tests.test_composites module
Tests for compositors in composites/__init__.py.
- class satpy.tests.test_composites.TestAddBands(methodName='runTest')[source]
Bases:
TestCase
Test case for the add_bands function.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.test_composites.TestBackgroundCompositor[source]
Bases:
object
Test case for the background compositor.
- class satpy.tests.test_composites.TestCategoricalDataCompositor(methodName='runTest')[source]
Bases:
TestCase
Test compositor for recategorization of categorical data.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.test_composites.TestCloudCompositorCommonMask[source]
Bases:
object
Test the CloudCompositorCommonMask.
- class satpy.tests.test_composites.TestCloudCompositorWithoutCloudfree[source]
Bases:
object
Test the CloudCompositorWithoutCloudfree.
- test_bad_indata()[source]
Test the CloudCompositorWithoutCloudfree composite generation without status.
- test_call_bad_optical_conditions()[source]
Test the CloudCompositorWithoutCloudfree composite generation.
- class satpy.tests.test_composites.TestColorizeCompositor(methodName='runTest')[source]
Bases:
TestCase
Test the ColorizeCompositor.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.test_composites.TestColormapCompositor(methodName='runTest')[source]
Bases:
TestCase
Test the ColormapCompositor.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- class satpy.tests.test_composites.TestDayNightCompositor(methodName='runTest')[source]
Bases:
TestCase
Test DayNightCompositor.
Create an instance of the class that will use the named test method when executed. Raises a ValueError if the instance does not have a method with the specified name.
- _classSetupFailed = False
- _class_cleanups = []
- test_day_only_area_with_alpha()[source]
Test compositor with day portion with alpha_band when SZA data is not provided.
- test_day_only_area_with_alpha_and_missing_data()[source]
Test compositor with day portion with alpha_band when SZA data is not provided and there is missing data.
- test_day_only_area_without_alpha()[source]
Test compositor with day portion without alpha_band when SZA data is not provided.
- test_day_only_sza_with_alpha()[source]
Test compositor with day portion with alpha band when SZA data is included.
- test_day_only_sza_without_alpha()[source]
Test compositor with day portion without alpha band when SZA data is included.
- test_daynight_area()[source]
Test compositor both day and night portions when SZA data is not provided.
- test_daynight_sza()[source]
Test compositor with both day and night portions when SZA data is included.
- test_night_only_area_with_alpha()[source]
Test compositor with night portion with alpha band when SZA data is not provided.
- test_night_only_area_without_alpha()[source]
Test compositor with night portion without alpha band when SZA data is not provided.
- class satpy.tests.test_composites.TestDifferenceCompositor(methodName='runTest')[source]
Bases:
TestCase
Test case for the difference compositor.
- class satpy.tests.test_composites.TestEnhance2Dataset(methodName='runTest')[source]
Bases:
TestCase
Test the enhance2dataset utility.
- class satpy.tests.test_composites.TestFillingCompositor(methodName='runTest')[source]
Bases:
TestCase
Test case for the filling compositor.
- class satpy.tests.test_composites.TestGenericCompositor(methodName='runTest')[source]
Bases:
TestCase
Test generic compositor.
- class satpy.tests.test_composites.TestHighCloudCompositor[source]
Bases:
object
Test HighCloudCompositor.
- test_high_cloud_compositor_dtype()[source]
Test that the datatype is not altered by the compositor.
- class satpy.tests.test_composites.TestInferMode(methodName='runTest')[source]
Bases:
TestCase
Test the infer_mode utility.
- class satpy.tests.test_composites.TestInlineComposites(methodName='runTest')[source]
Bases:
TestCase
Test inline composites.
- class satpy.tests.test_composites.TestLongitudeMaskingCompositor(methodName='runTest')[source]
Bases:
TestCase
Test case for the LongitudeMaskingCompositor compositor.
- class satpy.tests.test_composites.TestLowCloudCompositor[source]
Bases:
object
Test LowCloudCompositor.
- class satpy.tests.test_composites.TestLuminanceSharpeningCompositor(methodName='runTest')[source]
Bases:
TestCase
Test luminance sharpening compositor.
- class satpy.tests.test_composites.TestMaskingCompositor[source]
Bases:
object
Test case for the simple masking compositor.
- reference_data(test_data, test_ct_data)[source]
Get reference data to use in masking compositor tests.
- test_call_named_fields(conditions_v2, test_data, test_ct_data, reference_data, reference_alpha)[source]
Test with named fields.
- test_call_named_fields_string(conditions_v2, test_data, test_ct_data, reference_data, reference_alpha)[source]
Test with named fields given as a string in the mask attributes.
- class satpy.tests.test_composites.TestMatchDataArrays[source]
Bases:
object
Test the utility method ‘match_data_arrays’.
- test_almost_equal_geo_coordinates()[source]
Test that coordinates that are almost-equal still match.
See https://github.com/pytroll/satpy/issues/2668 for discussion.
Various operations like cropping and resampling can cause geo-coordinates (y, x) to be very slightly unequal due to floating point precision. This test makes sure that even in those cases we can still generate composites from DataArrays with these coordinates.
- class satpy.tests.test_composites.TestMultiFiller(methodName='runTest')[source]
Bases:
TestCase
Test case for the MultiFiller compositor.
- class satpy.tests.test_composites.TestNaturalEnhCompositor(methodName='runTest')[source]
Bases:
TestCase
Test NaturalEnh compositor.
- class satpy.tests.test_composites.TestPaletteCompositor(methodName='runTest')[source]
Bases:
TestCase
Test the PaletteCompositor.
- class satpy.tests.test_composites.TestPrecipCloudsCompositor(methodName='runTest')[source]
Bases:
TestCase
Test the PrecipClouds compositor.
- class satpy.tests.test_composites.TestRatioSharpenedCompositors[source]
Bases:
object
Test RatioSharpenedRGB and SelfSharpenedRGB compositors.
- test_ratio_sharpening(high_resolution_band, neutral_resolution_band, exp_r, exp_g, exp_b)[source]
Test RatioSharpenedRGB by different groups of high_resolution_band and neutral_resolution_band.
- class satpy.tests.test_composites.TestRealisticColors[source]
Bases:
object
Test the SEVIRI Realistic Colors compositor.
- class satpy.tests.test_composites.TestSandwichCompositor[source]
Bases:
object
Test sandwich compositor.
- class satpy.tests.test_composites.TestSingleBandCompositor(methodName='runTest')[source]
Bases:
TestCase
Test the single-band compositor.
- class satpy.tests.test_composites.TestStaticImageCompositor(methodName='runTest')[source]
Bases:
TestCase
Test case for the static compositor.
- satpy.tests.test_composites._enhance2dataset(dataset, convert_p=False)[source]
Mock the enhance2dataset to return the original data.
- satpy.tests.test_composites.fake_dataset_pair(fake_area)[source]
Return a fake pair of 2×2 datasets.
- satpy.tests.test_composites.test_bad_sensor_yaml_configs(tmp_path)[source]
Test that a composite YAML file with no sensor isn’t loaded.
But the bad YAML also shouldn’t crash composite configuration loading.
satpy.tests.test_config module
Test objects and functions in the satpy.config module.
- class satpy.tests.test_config.TestBuiltinAreas(methodName='runTest')[source]
Bases:
TestCase
Test that the builtin areas are all valid.
- class satpy.tests.test_config.TestConfigObject[source]
Bases:
object
Test basic functionality of the central config object.
- class satpy.tests.test_config.TestPluginsConfigs[source]
Bases:
object
Test that plugins are working.
- test_get_plugin_configs(fake_composite_plugin_etc_path)[source]
Check that the plugin configs are looked for.
- test_load_entry_point_composite(fake_composite_plugin_etc_path)[source]
Test that composites can be loaded from plugin entry points.
- test_plugin_enhancements_generic_sensor(fake_enh_plugin_etc_path, sensor_name, exp_result)[source]
Test that enhancements from a plugin are available.
- test_plugin_reader_available_readers(fake_reader_plugin_etc_path)[source]
Test that readers can be loaded from plugin entry points.
- test_plugin_reader_configs(fake_reader_plugin_etc_path, specified_reader)[source]
Test that readers can be loaded from plugin entry points.
- satpy.tests.test_config._create_fake_importlib_files(module_paths: dict[str, Path]) → Callable[[str], Path][source]
- satpy.tests.test_config._create_fake_iter_entry_points(entry_points: dict[str, list[EntryPoint]]) → Callable[[], dict[str, EntryPoint]][source]
- satpy.tests.test_config._create_yamlbased_plugin(tmp_path: Path, component_type: str, yaml_name: str, yaml_func: Callable[[str], None]) → Iterator[Path][source]
- satpy.tests.test_config._get_entry_points_and_etc_paths(tmp_path: Path, entry_point_names: dict[str, list[str]]) → tuple[Path, dict[str, list[EntryPoint]], dict[str, Path]][source]
- satpy.tests.test_config.fake_composite_plugin_etc_path(tmp_path: Path) → Iterator[Path][source]
Create a fake plugin entry point with a fake compositor YAML configuration file.
- satpy.tests.test_config.fake_enh_plugin_etc_path(tmp_path: Path) → Iterator[Path][source]
Create a fake plugin entry point with fake enhancement YAML configuration files.
This creates fake_sensor.yaml and generic.yaml enhancement configurations.
- satpy.tests.test_config.fake_plugin_etc_path(tmp_path: Path, entry_point_names: dict[str, list[str]]) → Iterator[Path][source]
Create a fake satpy plugin entry point.
This mocks the necessary methods to trick Satpy into thinking a plugin package is installed and has made a satpy plugin available.
- satpy.tests.test_config.fake_reader_plugin_etc_path(tmp_path: Path) → Iterator[Path][source]
Create a fake plugin entry point with a fake reader YAML configuration file.
satpy.tests.test_crefl_utils module
Test CREFL rayleigh correction functions.
- class satpy.tests.test_crefl_utils.TestCreflUtils(methodName='runTest')[source]
Bases:
TestCase
Test crefl_utils.
satpy.tests.test_data_download module
Test for ancillary data downloading.
- class satpy.tests.test_data_download.TestDataDownload[source]
Bases:
object
Test basic data downloading functionality.
- test_find_registerable(readers, writers, comp_sensors)[source]
Test that find_registerable finds some things.
- class satpy.tests.test_data_download.UnfriendlyModifier(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
ModifierBase
,DataDownloadMixin
Fake modifier that raises an exception in __init__.
Raise an exception if we weren’t provided any prerequisites.
satpy.tests.test_dataset module
Test objects and functions in the dataset module.
- class satpy.tests.test_dataset.TestCombineMetadata(methodName='runTest')[source]
Bases:
TestCase
Test how metadata is combined.
- test_combine_end_times_with_none()[source]
Test the combine_metadata with end times when there’s a None included.
- test_combine_lists_same_size_diff_values()[source]
Test combine metadata with lists with different values.
- class satpy.tests.test_dataset.TestDataID(methodName='runTest')[source]
Bases:
TestCase
Test DataID object creation and other methods.
- class satpy.tests.test_dataset.TestIDQueryInteractions(methodName='runTest')[source]
Bases:
TestCase
Test the interactions between DataIDs and DataQuerys.
- satpy.tests.test_dataset.test_combine_dicts_close()[source]
Test combination of dictionaries whose values are close.
- satpy.tests.test_dataset.test_combine_dicts_different(test_mda)[source]
Test combination of dictionaries differing in various ways.
- satpy.tests.test_dataset.test_dataid_elements_picklable()[source]
Test individual elements of DataID can be pickled.
In some cases, like in the base reader classes, the elements of a DataID are extracted and stored in a separate dictionary. This means that the internal/fancy pickle handling of DataID does not play a part.
- satpy.tests.test_dataset.test_dataid_equal_if_enums_different()[source]
Check that dataids with different enums but same items are equal.
- satpy.tests.test_dataset.test_frequency_double_side_band_channel_containment()[source]
Test the frequency double side band object: check if one band contains another.
- satpy.tests.test_dataset.test_frequency_double_side_band_channel_distances()[source]
Test the frequency double side band object: get the distance between two bands.
- satpy.tests.test_dataset.test_frequency_double_side_band_channel_equality()[source]
Test the frequency double side band object: check if two bands are ‘equal’.
- satpy.tests.test_dataset.test_frequency_double_side_band_channel_str()[source]
Test the frequency double side band object: test the band description.
- satpy.tests.test_dataset.test_frequency_double_side_band_class_method_convert()[source]
Test the frequency double side band object: test the class method convert.
- satpy.tests.test_dataset.test_frequency_quadruple_side_band_channel_containment()[source]
Test the frequency quadruple side band object: check if one band contains another.
- satpy.tests.test_dataset.test_frequency_quadruple_side_band_channel_distances()[source]
Test the frequency quadruple side band object: get the distance between two bands.
- satpy.tests.test_dataset.test_frequency_quadruple_side_band_channel_equality()[source]
Test the frequency quadruple side band object: check if two bands are ‘equal’.
- satpy.tests.test_dataset.test_frequency_quadruple_side_band_channel_str()[source]
Test the frequency quadruple side band object: test the band description.
- satpy.tests.test_dataset.test_frequency_quadruple_side_band_class_method_convert()[source]
Test the frequency quadruple side band object: test the class method convert.
- satpy.tests.test_dataset.test_frequency_range_channel_containment()[source]
Test the frequency range object: channel containment.
- satpy.tests.test_dataset.test_frequency_range_channel_distances()[source]
Test the frequency range object: derive distances between bands.
- satpy.tests.test_dataset.test_frequency_range_channel_equality()[source]
Test the frequency range object: check if two bands are ‘equal’.
- satpy.tests.test_dataset.test_frequency_range_class_method_convert()[source]
Test the frequency range object: test the class method convert.
satpy.tests.test_demo module
Tests for the satpy.demo module.
- class satpy.tests.test_demo.TestAHIDemoDownload[source]
Bases:
object
Test the AHI demo data download.
- class satpy.tests.test_demo.TestDemo(methodName='runTest')[source]
Bases:
TestCase
Test demo data download functions.
- class satpy.tests.test_demo.TestGCPUtils(methodName='runTest')[source]
Bases:
TestCase
Test Google Cloud Platform utilities.
- class satpy.tests.test_demo.TestSEVIRIHRITDemoDownload(methodName='runTest')[source]
Bases:
TestCase
Test case for downloading an HRIT tarball.
- class satpy.tests.test_demo.TestVIIRSSDRDemoDownload[source]
Bases:
object
Test VIIRS SDR downloading.
- ALL_BAND_PREFIXES = ('SVI01', 'SVI02', 'SVI03', 'SVI04', 'SVI05', 'SVM01', 'SVM02', 'SVM03', 'SVM04', 'SVM05', 'SVM06', 'SVM07', 'SVM08', 'SVM09', 'SVM10', 'SVM11', 'SVM12', 'SVM13', 'SVM14', 'SVM15', 'SVM16', 'SVDNB')
- ALL_GEO_PREFIXES = ('GITCO', 'GMTCO', 'GDNBO')
- test_download_channels_num_granules_dnb(requests, tmpdir)[source]
Test downloading and re-downloading VIIRS SDR DNB data with select granules.
- class satpy.tests.test_demo._FakeRequest(url, stream=None, timeout=None)[source]
Bases:
object
Fake object to act like a requests return value when downloading a file.
- class satpy.tests.test_demo._GlobHelper(num_results)[source]
Bases:
object
Create side effect function for mocking gcsfs glob method.
Initialize side_effect function for mocking gcsfs glob method.
- satpy.tests.test_demo._create_and_populate_dummy_tarfile(fn)[source]
Populate a dummy tarfile with dummy files.
- satpy.tests.test_demo.mock_filesystem()[source]
Create a mock filesystem, patching open and os.path.isfile.
satpy.tests.test_dependency_tree module
Unit tests for the dependency tree class and dependencies.
- class satpy.tests.test_dependency_tree.TestDependencyTree(methodName='runTest')[source]
Bases:
TestCase
Test the dependency tree.
This is what we are working with:
None (No Data)
+DataID(name='comp19')
+ +DataID(name='ds5', resolution=250, modifiers=('res_change',))
+ + +DataID(name='ds5', resolution=250, modifiers=())
+ + +__EMPTY_LEAF_SENTINEL__ (No Data)
+ +DataID(name='comp13')
+ + +DataID(name='ds5', resolution=250, modifiers=('res_change',))
+ + + +DataID(name='ds5', resolution=250, modifiers=())
+ + + +__EMPTY_LEAF_SENTINEL__ (No Data)
+ +DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=())
- class satpy.tests.test_dependency_tree.TestMissingDependencies(methodName='runTest')[source]
Bases:
TestCase
Test the MissingDependencies exception.
- class satpy.tests.test_dependency_tree.TestMultipleResolutionSameChannelDependency(methodName='runTest')[source]
Bases:
TestCase
Test that MODIS situations where the same channel is available at multiple resolution works.
- class satpy.tests.test_dependency_tree.TestMultipleSensors(methodName='runTest')[source]
Bases:
TestCase
Test cases where multiple sensors are available.
This is what we are working with:
None (No Data)
+DataID(name='comp19')
+ +DataID(name='ds5', resolution=250, modifiers=('res_change',))
+ + +DataID(name='ds5', resolution=250, modifiers=())
+ + +__EMPTY_LEAF_SENTINEL__ (No Data)
+ +DataID(name='comp13')
+ + +DataID(name='ds5', resolution=250, modifiers=('res_change',))
+ + + +DataID(name='ds5', resolution=250, modifiers=())
+ + + +__EMPTY_LEAF_SENTINEL__ (No Data)
+ +DataID(name='ds2', resolution=250, calibration=<calibration.reflectance>, modifiers=())
satpy.tests.test_file_handlers module
Test the file handler base class.
- class satpy.tests.test_file_handlers.TestBaseFileHandler(methodName='runTest')[source]
Bases:
TestCase
Test the BaseFileHandler.
satpy.tests.test_modifiers module
Tests for modifiers in modifiers/__init__.py.
- class satpy.tests.test_modifiers.TestNIREmissivePartFromReflectance(methodName='runTest')[source]
Bases:
TestCase
Test the NIR Emissive part from reflectance compositor.
- class satpy.tests.test_modifiers.TestNIRReflectance(methodName='runTest')[source]
Bases:
TestCase
Test NIR reflectance compositor.
- test_masking_limit_default_value_is_not_none(calculator, apply_modifier_info, sza)[source]
Check that sun_zenith_threshold is not None.
- test_no_sunz_no_co2(calculator, apply_modifier_info, sza)[source]
Test NIR reflectance compositor with minimal parameters.
- test_no_sunz_with_co2(calculator, apply_modifier_info, sza)[source]
Test NIR reflectance compositor provided extra co2 info.
- test_provide_masking_limit(calculator, apply_modifier_info, sza)[source]
Test NIR reflectance compositor provided sunz and a sunz threshold.
- test_provide_sunz_and_threshold(calculator, apply_modifier_info, sza)[source]
Test NIR reflectance compositor provided sunz and a sunz threshold.
- class satpy.tests.test_modifiers.TestPSPAtmosphericalCorrection(methodName='runTest')[source]
Bases:
TestCase
Test the pyspectral-based atmospheric correction modifier.
- class satpy.tests.test_modifiers.TestPSPRayleighReflectance[source]
Bases:
object
Test the pyspectral-based Rayleigh correction modifier.
- class satpy.tests.test_modifiers.TestSunZenithCorrector[source]
Bases:
object
Test case for the zenith corrector.
- class satpy.tests.test_modifiers.TestSunZenithReducer[source]
Bases:
object
Test case for the sun zenith reducer.
- satpy.tests.test_modifiers._sunz_bigger_area_def()[source]
Get area that is twice the size of ‘sunz_area_def’.
satpy.tests.test_node module
Unit tests for the dependency tree class and dependencies.
- class satpy.tests.test_node.FakeCompositor(id)[source]
Bases:
object
A fake compositor.
Set up the fake compositor.
- class satpy.tests.test_node.TestCompositorNode(methodName='runTest')[source]
Bases:
TestCase
Test case for the compositor node object.
- class satpy.tests.test_node.TestCompositorNodeCopy(methodName='runTest')[source]
Bases:
TestCase
Test case for copying a node.
satpy.tests.test_readers module
Test classes and functions in the readers/__init__.py module.
- class satpy.tests.test_readers.TestDatasetDict(methodName='runTest')[source]
Bases:
TestCase
Test DatasetDict and its methods.
- class satpy.tests.test_readers.TestFSFile[source]
Bases:
object
Test the FSFile class.
- test_equality(local_filename, local_filename2, local_zip_file)[source]
Test that FSFile compares equal when it should.
- test_fsfile_with_fs_open_file_abides_pathlike(local_file, random_string)[source]
Test that FSFile abides PathLike for fsspec OpenFile instances.
- test_fsfile_with_regular_filename_abides_pathlike(random_string)[source]
Test that FSFile abides PathLike for regular filenames.
- test_fsfile_with_regular_filename_and_fs_spec_abides_pathlike(random_string)[source]
Test that FSFile abides PathLike for filename+fs instances.
- test_hash(local_filename, local_filename2, local_zip_file)[source]
Test that FSFile hashing behaves sanely.
- test_open_zip_fs_regular_filename(local_filename2, local_zip_file)[source]
Test opening a zipfs with a regular filename provided.
- class satpy.tests.test_readers.TestFindFilesAndReaders[source]
Bases:
object
Test the find_files_and_readers utility function.
- test_no_parameters_both_atms_and_viirs(viirs_file, atms_file)[source]
Test with no limiting parameters when there are both ATMS and VIIRS files in the same directory.
- test_pending_old_reader_name_mapping()[source]
Test that requesting pending old reader names raises a warning.
- test_reader_name_matched_end_time(viirs_file)[source]
Test with end matching the filename.
End time in the middle of the file time should still match the file.
- test_reader_name_matched_start_end_time(viirs_file)[source]
Test with start and end time matching the filename.
- test_reader_name_matched_start_time(viirs_file)[source]
Test with start matching the filename.
Start time in the middle of the file time should still match the file.
- test_reader_name_unmatched_start_end_time(viirs_file)[source]
Test with start and end time not matching the filename.
- class satpy.tests.test_readers.TestGroupFiles(methodName='runTest')[source]
Bases:
TestCase
Test the ‘group_files’ utility function.
- _filenames_abi_glm = ['OR_ABI-L1b-RadF-M6C14_G16_s19000010000000_e19000010005000_c20403662359590.nc', 'OR_ABI-L1b-RadF-M6C14_G16_s19000010010000_e19000010015000_c20403662359590.nc', 'OR_ABI-L1b-RadF-M6C14_G16_s19000010020000_e19000010025000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010000000_e19000010001000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010001000_e19000010002000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010002000_e19000010003000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010003000_e19000010004000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010004000_e19000010005000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010005000_e19000010006000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010006000_e19000010007000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010007000_e19000010008000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010008000_e19000010009000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010009000_e19000010010000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010010000_e19000010011000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010011000_e19000010012000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010012000_e19000010013000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010013000_e19000010014000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010014000_e19000010015000_c20403662359590.nc', 'OR_GLM-L2-GLMF-M3_G16_s19000010015000_e19000010016000_c20403662359590.nc']
- test_large_time_threshold()[source]
Test what happens when the time threshold holds multiple files.
- test_multi_readers_empty_groups_missing_skip()[source]
Verify empty groups are skipped.
Verify that all groups lacking ABI are skipped, resulting in only three groups that are all non-empty for both instruments.
- test_multi_readers_empty_groups_passed()[source]
Verify that all groups are there, resulting in some that are empty.
- test_multi_readers_empty_groups_raises_filenotfounderror()[source]
Test behaviour on empty groups passing multiple readers.
Make sure it raises an exception, for there will be groups containing GLM but not ABI.
- test_multi_readers_invalid_parameter()[source]
Verify that invalid missing parameter raises ValueError.
- test_non_datetime_group_key()[source]
Test what happens when the start_time isn’t used for grouping.
- test_two_instruments_files()[source]
Test the behavior when files from two instruments are provided.
This is undesired from a user point of view since we don’t want G16 and G17 files in the same Scene. Readers (like abi_l1b) are or can be configured to have specific group keys for handling these situations. For that reason this test forces the fallback group keys of (‘start_time’,), as sketched below.
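To make the grouping behavior concrete, a rough sketch using group_files with an explicit group key (the G16/G17 filenames reuse the fake naming style from the test data above and are purely illustrative) might look like:

    from satpy.readers import group_files

    # Hypothetical G16/G17 filenames in the style used by these tests
    abi_filenames = [
        "OR_ABI-L1b-RadF-M6C14_G16_s19000010000000_e19000010005000_c20403662359590.nc",
        "OR_ABI-L1b-RadF-M6C14_G17_s19000010000000_e19000010005000_c20403662359590.nc",
    ]
    # With only 'start_time' as the group key, files from both satellites
    # can end up in the same group
    groups = group_files(abi_filenames, reader="abi_l1b", group_keys=("start_time",))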
- class satpy.tests.test_readers.TestReaderLoader(methodName='runTest')[source]
Bases:
TestCase
Test the load_readers function.
Assumes that the VIIRS SDR reader exists and works.
- test_empty_filenames_as_dict()[source]
Test passing filenames as a dictionary with an empty list of filenames.
- test_filenames_as_dict_bad_reader()[source]
Test loading with filenames dict but one of the readers is bad.
- test_filenames_as_dict_with_reader()[source]
Test loading from a filenames dict with a single reader specified.
This can happen in the deprecated Scene behavior of passing a reader and a base_dir.
- class satpy.tests.test_readers.TestYAMLFiles[source]
Bases:
object
Test and analyze the reader configuration files.
- satpy.tests.test_readers.local_filename(tmp_path_factory, random_string)[source]
Create simple on-disk file.
- satpy.tests.test_readers.local_hdf5_fsspec(local_hdf5_filename)[source]
Get fsspec OpenFile pointing to local HDF5 file.
- satpy.tests.test_readers.local_hdf5_path(local_hdf5_filename)[source]
Get Path object pointing to local HDF5 file.
- satpy.tests.test_readers.local_netcdf_filename(tmp_path_factory)[source]
Create a simple local NetCDF file.
- satpy.tests.test_readers.local_netcdf_fsfile(local_netcdf_fsspec)[source]
Get FSFile object wrapping an fsspec OpenFile pointing to local netcdf file.
- satpy.tests.test_readers.local_netcdf_fsspec(local_netcdf_filename)[source]
Get fsspec OpenFile object pointing to local netcdf file.
- satpy.tests.test_readers.local_netcdf_path(local_netcdf_filename)[source]
Get Path object pointing to local netcdf file.
- satpy.tests.test_readers.local_zip_file(local_filename2)[source]
Create local zip file containing one local file.
satpy.tests.test_regressions module
Test fixed bugs.
- satpy.tests.test_regressions.generate_fake_abi_xr_dataset(filename, chunks=None, **kwargs)[source]
Create a fake xarray dataset for abi data.
This is an incomplete copy of existing file structures.
- satpy.tests.test_regressions.test_1088(fake_open_dataset)[source]
Check that copied arrays get resampled.
satpy.tests.test_resample module
Unittests for resamplers.
- class satpy.tests.test_resample.TestBilinearResampler(methodName='runTest')[source]
Bases:
TestCase
Test the bilinear resampler.
- class satpy.tests.test_resample.TestBucketAvg(methodName='runTest')[source]
Bases:
TestCase
Test the bucket resampler.
- _compute_mocked_bucket_avg(data, return_data=None, **kwargs)[source]
Compute the mocked bucket average.
- class satpy.tests.test_resample.TestBucketCount(methodName='runTest')[source]
Bases:
TestCase
Test the count bucket resampler.
- class satpy.tests.test_resample.TestBucketFraction(methodName='runTest')[source]
Bases:
TestCase
Test the fraction bucket resampler.
- class satpy.tests.test_resample.TestBucketSum(methodName='runTest')[source]
Bases:
TestCase
Test the sum bucket resampler.
- class satpy.tests.test_resample.TestCoordinateHelpers(methodName='runTest')[source]
Bases:
TestCase
Test various utility functions for working with coordinates.
- class satpy.tests.test_resample.TestHLResample(methodName='runTest')[source]
Bases:
TestCase
Test the higher level resampling functions.
- class satpy.tests.test_resample.TestKDTreeResampler(methodName='runTest')[source]
Bases:
TestCase
Test the kd-tree resampler.
- class satpy.tests.test_resample.TestNativeResampler[source]
Bases:
object
Tests for the ‘native’ resampling method.
- test_expand_reduce_agg_rechunk()[source]
Test that an incompatible factor for the chunk size is rechunked.
This can happen when a user chunks their data that makes sense for the overall shape of the array and for their local machine’s performance, but the resulting resampling factor does not divide evenly into that chunk size.
- test_expand_reduce_aggregate_identity()[source]
Test classmethod ‘expand_reduce’ returns the original dask array when factor is 1.
- test_expand_reduce_aggregate_invalid(dim0_factor)[source]
Test classmethod ‘expand_reduce’ fails when factor does not divide evenly.
- satpy.tests.test_resample.get_test_data(input_shape=(100, 50), output_shape=(200, 100), output_proj=None, input_dims=('y', 'x'))[source]
Get common data objects used in testing.
- Returns:
input_data_on_area: DataArray with dimensions as if it is a gridded dataset.
input_area_def: AreaDefinition of the above DataArray
input_data_on_swath: DataArray with dimensions as if it is a swath.
input_swath: SwathDefinition of the above DataArray
target_area_def: AreaDefinition to be used as a target for resampling
- Return type:
satpy.tests.test_utils module
Testing of utils.
- class satpy.tests.test_utils.TestCheckSatpy(methodName='runTest')[source]
Bases:
TestCase
Test the ‘check_satpy’ function.
- class satpy.tests.test_utils.TestGeoUtils[source]
Bases:
object
Testing geo-related utility functions.
- class satpy.tests.test_utils.TestGetSatPos[source]
Bases:
object
Tests for ‘get_satpos’.
- test_get_satpos(included_prefixes, preference, expected_result)[source]
Test getting the satellite position.
- satpy.tests.test_utils._data_arrays_from_params(shapes: list[tuple[int, ...]], chunks: list[tuple[int, ...]], dims: list[tuple[int, ...]]) → Generator[DataArray, None, None][source]
- satpy.tests.test_utils._verify_unchanged_chunks(data_arrays: list[DataArray], orig_arrays: list[DataArray]) → None[source]
- satpy.tests.test_utils.test_chunk_size_limit_from_dask_config()[source]
Check the chunk size limit computations.
- satpy.tests.test_utils.test_convert_remote_files_to_fsspec_filename_dict()[source]
Test conversion of remote files to fsspec objects.
Case where filenames is a dictionary mapping readers to filenames.
- satpy.tests.test_utils.test_convert_remote_files_to_fsspec_fsfile()[source]
Test conversion of remote files to fsspec objects.
Case where some of the files are already FSFile objects.
- satpy.tests.test_utils.test_convert_remote_files_to_fsspec_local_files()[source]
Test conversion of remote files to fsspec objects.
Case without scheme/protocol, which should default to plain filenames.
- satpy.tests.test_utils.test_convert_remote_files_to_fsspec_local_pathlib_files()[source]
Test conversion of remote files to fsspec objects.
Case using pathlib objects as filenames.
- satpy.tests.test_utils.test_convert_remote_files_to_fsspec_mixed_sources()[source]
Test conversion of remote files to fsspec objects.
Case with mixed local and remote files.
- satpy.tests.test_utils.test_convert_remote_files_to_fsspec_storage_options(open_files)[source]
Test conversion of remote files to fsspec objects.
Case with storage options given.
- satpy.tests.test_utils.test_convert_remote_files_to_fsspec_windows_paths()[source]
Test conversion of remote files to fsspec objects.
Case where windows paths are used.
- satpy.tests.test_utils.test_find_in_ancillary()[source]
Test finding a dataset in ancillary variables.
- satpy.tests.test_utils.test_logging_on_and_off(caplog)[source]
Test that switching logging on and off works.
- satpy.tests.test_utils.test_make_fake_scene()[source]
Test the make_fake_scene utility.
Although the make_fake_scene utility is for internal testing purposes, it has grown sufficiently complex that it needs its own testing.
satpy.tests.test_writers module
Test generic writer functions.
- class satpy.tests.test_writers.TestBaseWriter[source]
Bases:
object
Test the base writer class.
- test_save_dataset_dynamic_filename(fmt_fn, exp_fns)[source]
Test saving a dataset with a format filename specified.
- class satpy.tests.test_writers.TestComplexSensorEnhancerConfigs[source]
Bases:
_BaseCustomEnhancementConfigTests
Test enhancement configs that use or expect multiple sensors.
- ENH_FN = 'test_sensor1.yaml'
- ENH_FN2 = 'test_sensor2.yaml'
- TEST_CONFIGS: dict[str, str] = {'test_sensor1.yaml': '\nenhancements:\n test1_sensor1_specific:\n name: test1\n sensor: test_sensor1\n operations:\n - name: stretch\n method: !!python/name:satpy.enhancements.stretch\n kwargs: {stretch: crude, min_stretch: 0, max_stretch: 200}\n\n ', 'test_sensor2.yaml': '\nenhancements:\n default:\n operations:\n - name: stretch\n method: !!python/name:satpy.enhancements.stretch\n kwargs: {stretch: crude, min_stretch: 0, max_stretch: 100}\n test1_sensor2_specific:\n name: test1\n sensor: test_sensor2\n operations:\n - name: stretch\n method: !!python/name:satpy.enhancements.stretch\n kwargs: {stretch: crude, min_stretch: 0, max_stretch: 50}\n exact_multisensor_comp:\n name: my_comp\n sensor: [test_sensor1, test_sensor2]\n operations:\n - name: stretch\n method: !!python/name:satpy.enhancements.stretch\n kwargs: {stretch: crude, min_stretch: 0, max_stretch: 20}\n '}
- class satpy.tests.test_writers.TestComputeWriterResults(methodName='runTest')[source]
Bases:
TestCase
Test compute_writer_results().
- class satpy.tests.test_writers.TestEnhancer(methodName='runTest')[source]
Bases:
TestCase
Test basic Enhancer functionality with builtin configs.
- class satpy.tests.test_writers.TestEnhancerUserConfigs[source]
Bases:
_BaseCustomEnhancementConfigTests
Test Enhancer functionality when user’s custom configurations are present.
- ENH_ENH_FN = 'enhancements/test_sensor.yaml'
- ENH_ENH_FN2 = 'enhancements/test_sensor2.yaml'
- ENH_FN = 'test_sensor.yaml'
- ENH_FN2 = 'test_sensor2.yaml'
- ENH_FN3 = 'test_empty.yaml'
- TEST_CONFIGS: dict[str, str] = {'enhancements/test_sensor.yaml': '\nenhancements:\n test1_kelvin:\n name: test1\n units: kelvin\n operations:\n - name: stretch\n method: !!python/name:satpy.enhancements.stretch\n kwargs: {stretch: crude, min_stretch: 0, max_stretch: 20}\n\n ', 'enhancements/test_sensor2.yaml': '\n\n ', 'test_empty.yaml': '', 'test_sensor.yaml': '\nenhancements:\n test1_default:\n name: test1\n operations:\n - name: stretch\n method: !!python/name:satpy.enhancements.stretch\n kwargs: {stretch: linear, cutoffs: [0., 0.]}\n\n ', 'test_sensor2.yaml': '\n\n\n '}
- test_enhance_with_sensor_entry2()[source]
Test enhancing an image with a more detailed configuration section.
- class satpy.tests.test_writers.TestOverlays(methodName='runTest')[source]
Bases:
TestCase
Tests for add_overlay and add_decorate functions.
- class satpy.tests.test_writers.TestReaderEnhancerConfigs[source]
Bases:
_BaseCustomEnhancementConfigTests
Test enhancement configs that use reader name.
- ENH_FN = 'test_sensor1.yaml'
- TEST_CONFIGS: dict[str, str] = {'test_sensor1.yaml': '\nenhancements:\n default_reader2:\n reader: reader2\n operations:\n - name: stretch\n method: !!python/name:satpy.enhancements.stretch\n kwargs: {stretch: crude, min_stretch: 0, max_stretch: 75}\n default:\n operations:\n - name: stretch\n method: !!python/name:satpy.enhancements.stretch\n kwargs: {stretch: crude, min_stretch: 0, max_stretch: 100}\n test1_reader2_specific:\n name: test1\n reader: reader2\n operations:\n - name: stretch\n method: !!python/name:satpy.enhancements.stretch\n kwargs: {stretch: crude, min_stretch: 0, max_stretch: 50}\n test1_reader1_specific:\n name: test1\n reader: reader1\n operations:\n - name: stretch\n method: !!python/name:satpy.enhancements.stretch\n kwargs: {stretch: crude, min_stretch: 0, max_stretch: 200}\n '}
- class satpy.tests.test_writers.TestWritersModule(methodName='runTest')[source]
Bases:
TestCase
Test the writers module.
- class satpy.tests.test_writers.TestYAMLFiles(methodName='runTest')[source]
Bases:
TestCase
Test and analyze the writer configuration files.
- satpy.tests.test_writers.test_group_results_by_output_file(tmp_path)[source]
Test grouping results by output file.
Add a test for grouping the results from save_datasets(…, compute=False) by output file. This is useful if for some reason we want to treat each output file as a separate computation (that can still be computed together later).
satpy.tests.test_yaml_reader module
Testing the yaml_reader module.
- class satpy.tests.test_yaml_reader.DummyReader(filename, filename_info, filetype_info)[source]
Bases:
BaseFileHandler
Dummy reader instance.
Initialize the dummy reader.
- property end_time
Return end time.
- property start_time
Return start time.
- class satpy.tests.test_yaml_reader.FakeFH(start_time, end_time)[source]
Bases:
BaseFileHandler
Fake file handler class.
Initialize fake file handler.
- property end_time
Return end time.
- property start_time
Return start time.
- satpy.tests.test_yaml_reader.GVSYReader()[source]
Get a fixture of the GEOVariableSegmentYAMLReader.
- class satpy.tests.test_yaml_reader.TestFileFileYAMLReader(methodName='runTest')[source]
Bases:
TestCase
Test units from FileYAMLReader.
- test_deprecated_passing_config_files()[source]
Test that we get an exception when config files are passed to init.
- class satpy.tests.test_yaml_reader.TestFileFileYAMLReaderMultipleFileTypes(methodName='runTest')[source]
Bases:
TestCase
Test units from FileYAMLReader with multiple file types.
- class satpy.tests.test_yaml_reader.TestFileFileYAMLReaderMultiplePatterns(methodName='runTest')[source]
Bases:
TestCase
Test units from FileYAMLReader with multiple file patterns.
- class satpy.tests.test_yaml_reader.TestFileYAMLReaderLoading(methodName='runTest')[source]
Bases:
TestCase
Tests for FileYAMLReader.load.
- class satpy.tests.test_yaml_reader.TestFileYAMLReaderWithCustomIDKey(methodName='runTest')[source]
Bases:
TestCase
Test units from FileYAMLReader with custom id_keys.
- class satpy.tests.test_yaml_reader.TestGEOFlippableFileYAMLReader(methodName='runTest')[source]
Bases:
TestCase
Test GEOFlippableFileYAMLReader.
- test_load_dataset_with_area_for_data_without_area(ldwa)[source]
Test _load_dataset_with_area() for data without area information.
- test_load_dataset_with_area_for_single_areas(ldwa)[source]
Test _load_dataset_with_area() for single area definitions.
- class satpy.tests.test_yaml_reader.TestGEOSegmentYAMLReader(methodName='runTest')[source]
Bases:
TestCase
Test GEOSegmentYAMLReader.
- class satpy.tests.test_yaml_reader.TestGEOVariableSegmentYAMLReader[source]
Bases:
object
Test GEOVariableSegmentYAMLReader.
- test_get_empty_segment(GVSYReader, fake_mss, fake_xr, fake_geswh)[source]
Test execution of (overridden) get_empty_segment inside _load_dataset.
- test_pad_earlier_segments_area(GVSYReader, fake_adef)[source]
Test _pad_earlier_segments_area() for the variable segment case.
- class satpy.tests.test_yaml_reader.TestUtils(methodName='runTest')[source]
Bases:
TestCase
Test the utility functions.
- satpy.tests.test_yaml_reader._create_mocked_fh_and_areadef(aex, ashape, expected_segments, segment, chk_pos_info)[source]
- satpy.tests.test_yaml_reader.available_datasets(self, configured_datasets=None)[source]
Fake available_datasets for testing multiple file types.
- satpy.tests.test_yaml_reader.fake_geswh()[source]
Get a fixture of the patched _get_empty_segment_with_height.
satpy.tests.utils module
Utilities for various satpy tests.
- class satpy.tests.utils.CustomScheduler(max_computes=1)[source]
Bases:
object
Scheduler raising an exception if data are computed too many times.
Set starting and maximum compute counts.
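A minimal usage sketch, assuming the common dask pattern of registering a scheduler for a block of code (the array computation is illustrative):

    import dask
    import dask.array as da

    from satpy.tests.utils import CustomScheduler

    # Any compute beyond max_computes raises an exception inside this block
    with dask.config.set(scheduler=CustomScheduler(max_computes=1)):
        da.zeros((5, 5), chunks=-1).compute()  # first and only allowed compute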
- class satpy.tests.utils.FakeCompositor(name, common_channel_mask=True, **kwargs)[source]
Bases:
GenericCompositor
Act as a compositor that produces fake RGB data.
Collect custom configuration values.
- Parameters:
common_channel_mask (bool) – If True, mask all the channels with a mask that combines all the invalid areas of the given data.
- class satpy.tests.utils.FakeFileHandler(filename, filename_info, filetype_info, **kwargs)[source]
Bases:
BaseFileHandler
Fake file handler to be used by test readers.
Initialize file handler and accept all keyword arguments.
- available_datasets(configured_datasets=None)[source]
Report YAML datasets available unless ‘not_available’ is specified during creation.
- property end_time
Get static end time datetime object.
- property sensor_names
Get sensor name from filetype configuration.
- property start_time
Get static start time datetime object.
- class satpy.tests.utils.FakeModifier(name, prerequisites=None, optional_prerequisites=None, **kwargs)[source]
Bases:
ModifierBase
Act as a modifier that performs different modifications.
Initialise the modifier.
- satpy.tests.utils._filter_datasets(all_ds, names_or_ids)[source]
Help filtering DataIDs by name or DataQuery.
- satpy.tests.utils._get_did_for_fake_scene(area, arr, extra_attrs, daskify)[source]
Add instance to fake scene. Helper for make_fake_scene.
- satpy.tests.utils._get_fake_scene_area(arr, area)[source]
Get area for fake scene. Helper for make_fake_scene.
- satpy.tests.utils.assert_attrs_equal(attrs, attrs_exp, tolerance=0)[source]
Test that attributes are equal.
Walks dictionary recursively. Numerical attributes are compared with the given relative tolerance.
- satpy.tests.utils.assert_dict_array_equality(d1, d2)[source]
Check that dicts containing arrays are equal.
- satpy.tests.utils.assert_maximum_dask_computes(max_computes=1)[source]
Context manager to make sure dask computations are not executed more than max_computes times.
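A short sketch of the context manager form (the computation inside is illustrative):

    import dask.array as da

    from satpy.tests.utils import assert_maximum_dask_computes

    with assert_maximum_dask_computes(max_computes=1):
        da.arange(10, chunks=-1).sum().compute()  # a second compute here would raise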
- satpy.tests.utils.convert_file_content_to_data_array(file_content, attrs=(), dims=('z', 'y', 'x'))[source]
Help old reader tests that still use numpy arrays.
A lot of old reader tests still use numpy arrays and depend on the “var_name/attr/attr_name” convention established before Satpy used xarray and dask. While these conventions are still used and should be supported, readers need to use xarray DataArrays instead.
If possible, new tests should be based on pure DataArray objects instead of the “var_name/attr/attr_name” style syntax provided by the utility file handlers.
- Parameters:
file_content (dict) – Dictionary of string file keys to fake file data.
attrs (iterable) – Series of attributes to copy to DataArray object from file content dictionary. Defaults to no attributes.
dims (iterable) – Dimension names to use for resulting DataArrays. The second to last dimension is used for 1D arrays, so for dims of ('z', 'y', 'x') this would use 'y'. Otherwise, the dimensions are used starting with the last, so 2D arrays are ('y', 'x'). Dimensions are used in reverse order so the last dimension specified is used as the only dimension for 1D arrays and as the last dimension for other arrays.
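A brief sketch of the “var_name/attr/attr_name” convention this helper handles (assuming, as in the old reader tests, that the dictionary values are converted in place):

    import numpy as np

    from satpy.tests.utils import convert_file_content_to_data_array

    # Fake file content in the "var_name/attr/attr_name" style
    file_content = {
        "band_data": np.zeros((2, 5, 5)),
        "band_data/attr/units": "K",
    }
    # Assumed to replace the numpy values with DataArrays in place
    convert_file_content_to_data_array(file_content, attrs=("units",))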
- satpy.tests.utils.make_cid(**items)[source]
Make a DataID with a minimal set of keys to id composites.
- satpy.tests.utils.make_fake_scene(content_dict, daskify=False, area=True, common_attrs=None)[source]
Create a fake Scene.
Create a fake Scene object from fake data. Data are provided in the content_dict argument. In content_dict, keys should be strings or DataID, and values may be either numpy.ndarray or xarray.DataArray, in either case with exactly two dimensions. The function will convert each of the numpy.ndarray objects into an xarray.DataArray and assign those as datasets to a Scene object. A fake AreaDefinition will be assigned for each array, unless disabled by passing area=False. When areas are automatically generated, arrays with the same shape will get the same area.
This function is exclusively intended for testing purposes.
If regular ndarrays are passed and the keyword argument daskify is True, DataArrays will be created as dask arrays. If False (default), regular DataArrays will be created. When the user passes xarray.DataArray objects then this flag has no effect.
- Parameters:
content_dict (Mapping) – Mapping where keys correspond to objects accepted by Scene.__setitem__, i.e. strings or DataID, and values may be either numpy.ndarray or xarray.DataArray.
daskify (bool) – optional, to use dask when converting numpy.ndarray to xarray.DataArray. No effect when the values in content_dict are already xarray.DataArray.
area (bool or BaseDefinition) – Can be True, False, or an instance of pyresample.geometry.BaseDefinition such as AreaDefinition or SwathDefinition. If True, which is the default, automatically generate areas with the name “test-area”. If False, values will not have assigned areas. If an instance of pyresample.geometry.BaseDefinition, those instances will be used for all generated fake datasets. Warning: passing an area as a string (area="germ") is not supported.
common_attrs (Mapping) – optional, additional attributes that will be added to every dataset in the scene.
- Returns:
Scene object with datasets corresponding to content_dict.
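A brief usage sketch (the dataset name, values, and attributes below are made up for illustration):

    import numpy as np

    from satpy.tests.utils import make_fake_scene

    scn = make_fake_scene(
        {"brightness_temperature": np.full((10, 10), 273.15)},
        daskify=True,
        common_attrs={"platform_name": "fake_sat"},
    )
    print(scn["brightness_temperature"].attrs["platform_name"])  # fake_sat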
- satpy.tests.utils.spy_decorator(method_to_decorate)[source]
Fancy decorator to wrap an object while still calling it.
Module contents
The tests package.
satpy.writers package
Submodules
satpy.writers.awips_tiled module
The AWIPS Tiled writer is used to create AWIPS-compatible tiled NetCDF4 files.
The Advanced Weather Interactive Processing System (AWIPS) is a program used by the United States National Weather Service (NWS) and others to view different forms of weather imagery. The original Sectorized Cloud and Moisture Imagery (SCMI) functionality in AWIPS was a NetCDF4 format supported by AWIPS to store one image broken up into one or more “tiles”. This format has since been expanded to support many other products and so the writer for this format in Satpy is generically called the “AWIPS Tiled” writer. You may still see SCMI referenced in this documentation or in the source code for the writer. Once AWIPS is configured for specific products this writer can be used to provide compatible products to the system.
The AWIPS Tiled writer takes 2D (y, x) geolocated data and creates one or more AWIPS-compatible NetCDF4 files. The writer and the AWIPS client may need to be configured to make things appear the way the user wants in the AWIPS client. The writer can only produce files for datasets mapped to areas with specific projections:
lcc
geos
merc
stere
This is a limitation of the AWIPS client and not of the writer. In the case where AWIPS has been updated to support additional projections, this writer may also need to be updated to support those projections.
AWIPS Configuration
Depending on how this writer is used and the data it is provided, AWIPS may need additional configuration on the server side to properly ingest the files produced. This will require administrator privileges to the ingest server(s) and is not something that can be configured on the client. Note that any changes required must be done on all servers that you wish to ingest your data files. The generic “polar” template this writer defaults to should limit the number of modifications needed for any new data fields that AWIPS previously was unaware of. Once the data is ingested, the client can be used to customize how the data looks on screen.
AWIPS requires files to follow a specific naming scheme so they can be routed to specific “decoders”. For the files produced by this writer, this typically means editing the “goesr” decoder configuration in a directory like:
/awips2/edex/data/utility/common_static/site/<site>/distribution/goesr.xml
The “goesr” decoder is a subclass of the “satellite” decoder. You may see either name show up in the AWIPS ingest logs. With the correct regular expression in the above file, your files should be passed to the right decoder, opened, and parsed for data.
To tell AWIPS exactly what attributes and variables mean in your file, you’ll need to create or configure an XML file in:
/awips2/edex/data/utility/common_static/site/<site>/satellite/goesr/descriptions/
See the existing files in this directory for examples. The “polar” template (see below) that this writer uses by default is already configured in the “Polar” subdirectory assuming that the TOWR-S RPM package has been installed on your AWIPS ingest server.
Templates
This writer allows for a “template” to be specified to control how the output
files are structured and created. Templates can be configured in the writer
YAML file (awips_tiled.yaml
) or passed as a dictionary to the template
keyword argument. Templates have three main sections:
global_attributes
coordinates
variables
Additionally, you can specify whether a template should produce files with
one variable per file by specifying single_variable: true
or multiple
variables per file by specifying single_variable: false
. You can also
specify the output filename for a template using a Python format string.
See awips_tiled.yaml for examples. Lastly, an add_sector_id_global boolean parameter can be specified to add the user-provided sector_id keyword argument as a global attribute to the file.
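For illustration, a minimal template dictionary following the structure described above might be passed to the writer as follows; the section contents and sector name are hypothetical:
my_template = {
    "single_variable": True,  # one variable per output file
    "global_attributes": {
        "product_name": {"value": "{name}"},  # rendered from metadata
    },
    "coordinates": {},  # tiled files usually only need 'x' and 'y'
    "variables": {},    # decision tree sections (see below)
}
scn.save_datasets(writer="awips_tiled", sector_id="MySector",
                  source_name="SSEC", template=my_template)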
The global_attributes
section takes names of global attributes and
then a series of options to “render” that attribute from the metadata
provided when creating files. For example:
product_name:
value: "{name}"
For more information see the
satpy.writers.awips_tiled.NetCDFTemplate.get_attr_value()
method.
The coordinates
and variables
are similar to each other in that they
define how a variable should be created, the attributes it should have, and
the encoding to write to the file. Coordinates typically don’t need to be
modified as tiled files usually have only x
and y
dimension variables.
The Variables on the other hand use a decision tree to determine what section
applies for a particular DataArray being saved. The basic structure is:
variables:
arbitrary_section_name:
<decision tree matching parameters>
var_name: "output_netcdf_variable_name"
attributes:
<attributes similar to global attributes>
encoding:
<xarray encoding parameters>
The “decision tree matching parameters” can be one or more of “name”, “standard_name”, “satellite”, “sensor”, “area_id”, “units”, or “reader”. The writer will choose the best section for the DataArray being saved (the most matches). If none of these parameters are specified in a section then it will be used when no other matches are found (the “default” section).
The “encoding” parameters can be anything accepted by xarray’s to_netcdf method. See xarray.Dataset.to_netcdf() for more information on the encoding keyword argument.
For more examples see the existing builtin templates defined in
awips_tiled.yaml
.
Builtin Templates
There are only a few templates provided in Satpy currently.
polar: A custom format developed for the CSPP Polar2Grid project at the University of Wisconsin - Madison Space Science and Engineering Center (SSEC). This format is made available through the TOWR-S package that can be installed for GOES-R support in AWIPS. This format is meant to be very generic and should theoretically allow any variable to get ingested into AWIPS.
glm_l2_radc: This format is used to produce standard files for the gridded GLM products produced by the CSPP Geo Gridded GLM package. Support for this format is also available in the TOWR-S package on an AWIPS ingest server. This format is specific to gridded GLM on the CONUS sector and is not meant to work for other data.
glm_l2_radf: This format is used to produce standard files for the gridded GLM products produced by the CSPP Geo Gridded GLM package. Support for this format is also available in the TOWR-S package on an AWIPS ingest server. This format is specific to gridded GLM on the Full Disk sector and is not meant to work for other data.
Numbered versus Lettered Grids
By default this writer will save tiles by number starting with ‘1’ representing the upper-left image tile. Tile numbers then increase along the column and then on to the next row.
By specifying lettered_grid as True, tiles can be designated with a letter. Lettered grids or sectors are preconfigured in the awips_tiled.yaml configuration file. The lettered tile locations are static and will not change with the data being written to them. Each lettered tile is split into a certain number of subtiles (num_subtiles), default 2 rows by 2 columns. Lettered tiles are meant to make it easier for receiving AWIPS clients/stations to filter what tiles they receive; saving time, bandwidth, and space.
Any tiles (numbered or lettered) not containing any valid data are not created.
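For example, here is a sketch of requesting lettered tiles, assuming a lettered sector named "LCC" is configured in awips_tiled.yaml:
scn.save_datasets(
    writer="awips_tiled",
    sector_id="LCC",       # must match a lettered sector in the writer YAML
    source_name="SSEC",
    lettered_grid=True,    # use the preconfigured lettered tiles
    num_subtiles=(2, 2),   # split each lettered tile into 2 rows x 2 columns
)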
Updating tiles
There are some input data cases where we want to put new data in a tile file written by a previous execution. An example is a pre-tiled input dataset that is processed one tile at a time. One input tile may map to one or more output AWIPS tiles, but may not be perfectly aligned, leaving empty/unused space in the output tile. The next input tile may be able to fill in that empty space and should be allowed to write the “new” data to the file. This is the default behavior of the AWIPS tiled writer. In cases where data overlaps the existing data in the tile, the newer data has priority.
Shifting Lettered Grids
Due to the static nature of the lettered grids, there is sometimes a need to shift the locations of where these tiles are by up to 0.5 pixels in each dimension to align with the data being processed. This means that the tiles for a 1000m resolution grid may be shifted up to 500m in each direction from the original definition of the lettered “sector”. This can cause differences in the location of the tiles between executions depending on the locations of the input data. In the worst case tile A01 from one execution could be shifted up to 1 grid cell from tile A01 in another execution (one is shifted 0.5 pixels to the left, the other is shifted 0.5 to the right).
This shifting makes the calculations for generating tiles easier and
more accurate. By default, the lettered tile locations are changed to match
the location of the data. This works well when output tiles will not be
updated (see above) in future processing. In cases where output tiles will be
filled in or updated with more data the use_sector_reference
keyword
argument can be set to True
to tell the writer to shift the data’s
geolocation by up to 0.5 pixels in each dimension instead of shifting the
lettered tile locations.
- class satpy.writers.awips_tiled.AWIPSNetCDFTemplate(template_dict, swap_end_time=False)[source]
Bases:
NetCDFTemplate
NetCDF template renderer specifically for tiled AWIPS files.
Handle AWIPS special cases and initialize template helpers.
- static _fill_units_and_standard_name(attrs, units, standard_name)[source]
Fill in units and standard_name if not set in attrs.
- _global_production_location(input_metadata)[source]
Get default global production_location attribute.
- _global_production_site(input_metadata)
Get default global production_location attribute.
- _swap_attributes_end_time(template_dict)[source]
Swap every use of ‘start_time’ to use ‘end_time’ instead.
- apply_misc_metadata(new_ds, sector_id=None, creator=None, creation_time=None)[source]
Add attributes that don’t fit into any other category.
- apply_tile_coord_encoding(new_ds, xy_factors)[source]
Add encoding information specific to the coordinate variables.
- render(dataset_or_data_arrays, area_def, tile_info, sector_id, creator=None, creation_time=None, shared_attrs=None, extra_global_attrs=None)[source]
Create an xarray.Dataset from the template using the information provided.
- class satpy.writers.awips_tiled.AWIPSTiledVariableDecisionTree(decision_dicts, **kwargs)[source]
Bases:
DecisionTree
Load AWIPS-specific metadata from YAML configuration.
Initialize decision tree with specific keys to look for.
- class satpy.writers.awips_tiled.AWIPSTiledWriter(compress=False, fix_awips=False, **kwargs)[source]
Bases:
Writer
Writer for AWIPS NetCDF4 Tile files.
See the satpy.writers.awips_tiled documentation for more information on templates and the produced file format.
Initialize writer and decision trees.
- _delay_netcdf_creation(delayed_gen, precompute=True, use_distributed=False)[source]
Workaround random dask and xarray hanging executions.
In previous implementations this writer called ‘to_dataset’ directly in a delayed function. This seems to cause random deadlocks where execution would hang indefinitely.
- _enhance_and_split_rgbs(datasets)[source]
Handle multi-band images by splitting into separate products.
- _get_lettered_sector_info(sector_id)[source]
Get metadata for the current sector if configured.
This is not necessary for numbered grids. If found, the sector info will provide the overall tile layout for this grid/sector. This allows for consistent tile numbering/naming regardless of where the data being converted actually is.
- _get_tile_generator(area_def, lettered_grid, sector_id, num_subtiles, tile_size, tile_count, use_sector_reference=False)[source]
Get the appropriate tile generator class for lettered or numbered tiles.
- _iter_area_tile_info_and_datasets(area_datasets, template, lettered_grid, sector_id, num_subtiles, tile_size, tile_count, use_sector_reference)[source]
- property enhancer
Get lazy loaded enhancer object only if needed.
- get_filename(template, area_def, tile_info, sector_id, **kwargs)[source]
Generate output NetCDF file from metadata.
- save_datasets(datasets, sector_id=None, source_name=None, tile_count=(1, 1), tile_size=None, lettered_grid=False, num_subtiles=None, use_end_time=False, use_sector_reference=False, template='polar', check_categories=True, extra_global_attrs=None, environment_prefix='DR', compute=True, **kwargs)[source]
Write a series of DataArray objects to multiple NetCDF4 Tile files.
- Parameters:
datasets (iterable) – Series of gridded DataArray objects with the necessary metadata to be converted to a valid tile product file.
sector_id (str) – Name of the region or sector that the provided data is on. This name will be written to the NetCDF file and will be used as the sector in the AWIPS client for the ‘polar’ template. For lettered grids this name should match the name configured in the writer YAML. This is required for some templates (ex. default ‘polar’ template) but is defined as a keyword argument for better error handling in Satpy.
source_name (str) – Name of producer of these files (ex. “SSEC”). This name is used to create the output filename for some templates.
environment_prefix (str) – Prefix of filenames for some templates. For operational real-time data this is usually “OR”, “OT” for test data, “IR” for test system real-time data, and “IT” for test system test data. This defaults to “DR” for “Developer Real-time” to avoid anyone accidentally producing files that could be mistaken for the operational system.
tile_count (tuple) – For numbered tiles only, how many tile rows and tile columns to produce. Defaults to (1, 1), a single giant tile. Either tile_count, tile_size, or lettered_grid should be specified.
tile_size (tuple) – For numbered tiles only, how many pixels each tile should be. This takes precedence over tile_count if specified. Either tile_count, tile_size, or lettered_grid should be specified.
lettered_grid (bool) – Whether to use a preconfigured grid and label tiles with letters and numbers instead of only numbers. For example, tiles will be named “A01”, “A02”, “B01”, and so on in the first row of data and continue on to “A03”, “A04”, and “B03” in the default case where num_subtiles is (2, 2). Letters start in the upper-left corner and will go from A up to Z, if necessary.
num_subtiles (tuple) – For lettered tiles only, how many rows and columns to split each lettered tile into. By default 2 rows and 2 columns will be created. For example, the tile for letter “A” will have “A01” and “A02” in the top row and “A03” and “A04” in the second row.
use_end_time (bool) – Instead of using the start_time for the product filename and time written to the file, use the end_time. This is useful for multi-day composites where the end_time is a better representation of what data is in the file.
use_sector_reference (bool) – For lettered tiles only, whether to shift the data locations to align with the preconfigured grid’s pixels. By default this is False, meaning that the grid’s tiles will be shifted to align with the data locations. If True, the data is shifted. At most the data will be shifted by 0.5 pixels. See satpy.writers.awips_tiled for more information.
template (str or dict) – Name of the template configured in the writer YAML file. This can also be a dictionary with a full template configuration. See the satpy.writers.awips_tiled documentation for more information on templates. Defaults to the ‘polar’ builtin template.
check_categories (bool) – Whether category and flag products should be included in the checks for empty or not empty tiles. In some cases (ex. data quality flags) category products may look like all valid data (a non-empty tile) but shouldn’t be used to determine the emptiness of the overall tile (good quality versus non-existent). Default is True. Set to False to ignore category products (integer dtype or “flag_meanings” defined) when checking for valid data.
extra_global_attrs (dict) – Additional global attributes to be added to every produced file. These attributes are applied at the end of template rendering and will therefore overwrite template-generated values with the same global attribute name.
compute (bool) – Compute and write the output immediately using dask. Defaults to True.
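Putting these parameters together, a hedged sketch of a numbered-tile call might look like the following; the sector and source names are placeholders:
scn.save_datasets(
    writer="awips_tiled",
    sector_id="MySector",     # placeholder sector name written to the files
    source_name="SSEC",
    tile_count=(4, 4),        # 4 tile rows by 4 tile columns
    environment_prefix="DR",  # "Developer Real-time" filename prefix
)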
- class satpy.writers.awips_tiled.LetteredTileGenerator(area_definition, extents, sector_crs, cell_size=(2000000, 2000000), num_subtiles=None, use_sector_reference=False)[source]
Bases:
NumberedTileGenerator
Helper class to generate per-tile metadata for lettered tiles.
Initialize tile information for later generation.
- Parameters:
area_definition (AreaDefinition) – Area of the data being saved.
extents (tuple) – Four element tuple of the configured lettered area.
sector_crs (pyproj.CRS) – CRS of the configured lettered sector area.
cell_size (tuple) – Two element tuple of resolution of each tile in sector projection units (y, x).
- class satpy.writers.awips_tiled.NetCDFTemplate(template_dict)[source]
Bases:
object
Helper class to convert a dictionary-based NetCDF template to an xarray.Dataset.
Parse template dictionary and prepare for rendering.
- get_attr_value(attr_name, input_metadata, value=None, raw_key=None, raw_value=None, prefix='_')[source]
Determine attribute value using the provided configuration information.
If value and raw_key are not provided, this method will search for a method named <prefix><attr_name>, which will be called with one argument (input_metadata) to get the value to return. See the documentation for the prefix keyword argument below for more information.
- Parameters:
attr_name (str) – Name of the attribute whose value we are generating.
input_metadata (dict) – Dictionary of metadata from the input DataArray and other context information. Used to provide information to value or access data from using raw_key if provided.
value (Any) – Value to assign to this attribute. If a string, it may be a python format string which will be provided the data from input_metadata. For example, {name} will be filled with the value for the "name" in input_metadata. It can also include environment variables (ex. "${MY_ENV_VAR}") which will be expanded. String formatting is accomplished by the special trollsift.parser.StringFormatter which allows for special common conversions.
raw_key (str) – Key to access value from input_metadata, but without any string formatting applied to it. This allows for metadata of non-string types to be requested.
raw_value (Any) – Static hardcoded value to set this attribute to. Overrides all other options.
prefix (str) – Prefix to use when value and raw_key are both None. Default is "_". This will be used to find custom attribute handlers in subclasses. For example, if value and raw_key are both None and attr_name is "my_attr", then the method self._my_attr will be called as return self._my_attr(input_metadata). See NetCDFTemplate.render_global_attributes() for additional information (prefix is "_global_").
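To illustrate the prefix-based lookup described above, a subclass could define a handler for a hypothetical attribute my_attr; the class name and attribute are made up for this sketch:
from satpy.writers.awips_tiled import NetCDFTemplate

class MyTemplate(NetCDFTemplate):
    def _my_attr(self, input_metadata):
        # Called by get_attr_value() when neither 'value' nor 'raw_key'
        # is configured for the "my_attr" attribute.
        return input_metadata.get("platform_name", "unknown")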
- render(dataset_or_data_arrays, shared_attrs=None)[source]
Create an xarray.Dataset from provided data.
- class satpy.writers.awips_tiled.NumberedTileGenerator(area_definition, tile_shape=None, tile_count=None)[source]
Bases:
object
Helper class to generate per-tile metadata for numbered tiles.
Initialize and generate tile information for this sector/grid for later use.
- class satpy.writers.awips_tiled.TileInfo(tile_count, image_shape, tile_shape, tile_row_offset, tile_column_offset, tile_id, tile_number, x, y, xy_factors, tile_slices, data_slices)
Bases:
tuple
Create new instance of TileInfo(tile_count, image_shape, tile_shape, tile_row_offset, tile_column_offset, tile_id, tile_number, x, y, xy_factors, tile_slices, data_slices)
- _asdict()
Return a new dict which maps field names to their values.
- _field_defaults = {}
- _fields = ('tile_count', 'image_shape', 'tile_shape', 'tile_row_offset', 'tile_column_offset', 'tile_id', 'tile_number', 'x', 'y', 'xy_factors', 'tile_slices', 'data_slices')
- classmethod _make(iterable)
Make a new TileInfo object from a sequence or iterable
- _replace(**kwds)
Return a new TileInfo object replacing specified fields with new values
- data_slices
Alias for field number 11
- image_shape
Alias for field number 1
- tile_column_offset
Alias for field number 4
- tile_count
Alias for field number 0
- tile_id
Alias for field number 5
- tile_number
Alias for field number 6
- tile_row_offset
Alias for field number 3
- tile_shape
Alias for field number 2
- tile_slices
Alias for field number 10
- x
Alias for field number 7
- xy_factors
Alias for field number 9
- y
Alias for field number 8
- class satpy.writers.awips_tiled.XYFactors(mx, bx, my, by)
Bases:
tuple
Create new instance of XYFactors(mx, bx, my, by)
- _asdict()
Return a new dict which maps field names to their values.
- _field_defaults = {}
- _fields = ('mx', 'bx', 'my', 'by')
- classmethod _make(iterable)
Make a new XYFactors object from a sequence or iterable
- _replace(**kwds)
Return a new XYFactors object replacing specified fields with new values
- bx
Alias for field number 1
- by
Alias for field number 3
- mx
Alias for field number 0
- my
Alias for field number 2
- satpy.writers.awips_tiled._add_valid_ranges(data_arrs)[source]
Add ‘valid_range’ metadata if not present.
If valid_range or valid_min/valid_max are not present in a DataArray’s metadata (.attrs), then lazily compute it with dask so it can be computed later when we write tiles out.
AWIPS requires that scale_factor/add_offset/_FillValue be the same for all tiles. We must do this calculation before splitting the data into tiles otherwise the values will be different.
- satpy.writers.awips_tiled._create_debug_array(sector_info, num_subtiles, font_path='Verdana.ttf')[source]
- satpy.writers.awips_tiled.create_debug_lettered_tiles(**writer_kwargs)[source]
Create tile files with tile identifiers “burned” into the image data for debugging.
- satpy.writers.awips_tiled.draw_rectangle(draw, coordinates, outline=None, fill=None, width=1)[source]
Draw a simple rectangle into a numpy array image.
- satpy.writers.awips_tiled.fix_awips_file(fn)[source]
Hack the NetCDF4 files to workaround NetCDF-Java bugs used by AWIPS.
This should not be needed for new versions of AWIPS.
- satpy.writers.awips_tiled.tile_filler(data_arr_data, tile_shape, tile_slices, fill_value)[source]
Create an empty tile array and fill the proper locations with data.
- satpy.writers.awips_tiled.to_nonempty_netcdf(dataset_to_save: Dataset, factors: dict, output_filename: str, update_existing: bool = True, check_categories: bool = True)[source]
Save xarray.Dataset to a NetCDF file if not all fills.
In addition to checking certain Dataset variables for fill values, this function can also “update” an existing NetCDF file with the new valid data provided.
satpy.writers.cf_writer module
Writer for netCDF4/CF.
Example usage
The CF writer saves datasets in a Scene as a CF-compliant netCDF file. Here is an example with MSG SEVIRI data in HRIT format:
>>> from satpy import Scene
>>> import glob
>>> filenames = glob.glob('data/H*201903011200*')
>>> scn = Scene(filenames=filenames, reader='seviri_l1b_hrit')
>>> scn.load(['VIS006', 'IR_108'])
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108'], filename='seviri_test.nc',
exclude_attrs=['raw_metadata'])
You can select the netCDF backend using the engine keyword argument. If None, it follows to_netcdf() engine choices with a preference for ‘netcdf4’.
For datasets with area definition you can exclude lat/lon coordinates by setting include_lonlats=False. If the area has a projected CRS, units are assumed to be in metre. If the area has a geographic CRS, units are assumed to be in degrees. The writer does not verify that the CRS is supported by the CF conventions. One commonly used projected CRS not supported by the CF conventions is the equirectangular projection, such as EPSG 4087.
By default non-dimensional coordinates (such as scanline timestamps) are prefixed with the corresponding dataset name. This is because they are likely to be different for each dataset. If a non-dimensional coordinate is identical for all datasets, the prefix can be removed by setting pretty=True.
Some dataset names start with a digit, like AVHRR channels 1, 2, 3a, 3b, 4 and 5. This doesn’t comply with the CF conventions (https://cfconventions.org/Data/cf-conventions/cf-conventions-1.7/build/ch02s03.html). These channels are prefixed with "CHANNEL_" by default. This can be controlled with the numeric_name_prefix keyword argument to save_datasets. Setting it to None or ‘’ will skip the prefixing.
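For example, here is a small sketch of controlling this prefix; the filename and prefix string are arbitrary:
>>> scn.save_datasets(writer='cf', filename='avhrr_test.nc',
...                   numeric_name_prefix='CH_')  # '1' becomes 'CH_1'
>>> scn.save_datasets(writer='cf', filename='avhrr_test.nc',
...                   numeric_name_prefix=None)   # disable prefixing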
Grouping
All datasets to be saved must have the same projection coordinates x and y. If a scene holds datasets with different grids, the CF compliant workaround is to save the datasets to separate files. Alternatively, you can save datasets with common grids in separate netCDF groups as follows:
>>> scn.load(['VIS006', 'IR_108', 'HRV'])
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108', 'HRV'],
filename='seviri_test.nc', exclude_attrs=['raw_metadata'],
groups={'visir': ['VIS006', 'IR_108'], 'hrv': ['HRV']})
Note that the resulting file will not be fully CF compliant.
Dataset Encoding
Dataset encoding can be specified in two ways:
Via the encoding keyword argument of save_datasets:
>>> my_encoding = {
...     'my_dataset_1': {
...         'compression': 'zlib',
...         'complevel': 9,
...         'scale_factor': 0.01,
...         'add_offset': 100,
...         'dtype': np.int16
...     },
...     'my_dataset_2': {
...         'compression': None,
...         'dtype': np.float64
...     }
... }
>>> scn.save_datasets(writer='cf', filename='encoding_test.nc', encoding=my_encoding)
Via the encoding attribute of the datasets in a scene. For example:
>>> scn['my_dataset'].encoding = {'compression': 'zlib'}
>>> scn.save_datasets(writer='cf', filename='encoding_test.nc')
See the xarray encoding documentation for all encoding options.
Note
Chunk-based compression can be specified with the compression keyword since netCDF4 1.6.0, libnetcdf 4.9.0, and xarray 2022.12.0. The zlib keyword is deprecated. Make sure that the versions of these modules are all above or all below that reference. Otherwise, compression might fail or be ignored silently.
Attribute Encoding
In the above examples, raw metadata from the HRIT files have been excluded. If you want all attributes to be included,
just remove the exclude_attrs
keyword argument. By default, dict-type dataset attributes, such as the raw metadata,
are encoded as a string using json. Thus, you can use json to decode them afterwards:
>>> import xarray as xr
>>> import json
>>> # Save scene to nc-file
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108'], filename='seviri_test.nc')
>>> # Now read data from the nc-file
>>> ds = xr.open_dataset('seviri_test.nc')
>>> raw_mda = json.loads(ds['IR_108'].attrs['raw_metadata'])
>>> print(raw_mda['RadiometricProcessing']['Level15ImageCalibration']['CalSlope'])
[0.020865 0.0278287 0.0232411 0.00365867 0.00831811 0.03862197
0.12674432 0.10396091 0.20503568 0.22231115 0.1576069 0.0352385]
Alternatively it is possible to flatten dict-type attributes by setting flatten_attrs=True. This is more human readable as it will create a separate nc-attribute for each item in every dictionary. Keys are concatenated with underscore separators. The CalSlope attribute can then be accessed as follows:
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108'], filename='seviri_test.nc',
flatten_attrs=True)
>>> ds = xr.open_dataset('seviri_test.nc')
>>> print(ds['IR_108'].attrs['raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalSlope'])
[0.020865 0.0278287 0.0232411 0.00365867 0.00831811 0.03862197
0.12674432 0.10396091 0.20503568 0.22231115 0.1576069 0.0352385]
This is what the corresponding ncdump
output would look like in this case:
$ ncdump -h test_seviri.nc
...
IR_108:raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalOffset = -1.064, ...;
IR_108:raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalSlope = 0.021, ...;
IR_108:raw_metadata_RadiometricProcessing_MPEFCalFeedback_AbsCalCoeff = 0.021, ...;
...
- class satpy.writers.cf_writer.CFWriter(name=None, filename=None, base_dir=None, **kwargs)[source]
Bases:
Writer
Writer producing NetCDF/CF compatible datasets.
Initialize the writer object.
- Parameters:
name (str) – A name for this writer for log and error messages. If this writer is configured in a YAML file its name should match the name of the YAML file. Writer names may also appear in output file attributes.
filename (str) –
Filename to save data to. This filename can and should specify certain python string formatting fields to differentiate between data written to the files. Any attributes provided by the .attrs of a DataArray object may be included. Format and conversion specifiers provided by the trollsift package may also be used. Any directories in the provided pattern will be created if they do not exist. Example: {platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif
base_dir (str) – Base destination directories for all created files.
kwargs (dict) – Additional keyword arguments to pass to the
Plugin
class.
- static da2cf(dataarray, epoch=None, flatten_attrs=False, exclude_attrs=None, include_orig_name=True, numeric_name_prefix='CHANNEL_')[source]
Convert the dataarray to something cf-compatible.
- Parameters:
dataarray (xr.DataArray) – The data array to be converted.
epoch (str) – Reference time for encoding of time coordinates. If None, the default reference time is taken from EPOCH in satpy.cf.coords (from satpy.cf.coords import EPOCH).
flatten_attrs (bool) – If True, flatten dict-type attributes.
exclude_attrs (list) – List of dataset attributes to be excluded.
include_orig_name (bool) – Include the original dataset name in the netcdf variable attributes.
numeric_name_prefix (str) – Prepend dataset name with this if starting with a digit.
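As a sketch, this static method can also be applied to a single DataArray outside of a Scene; data_arr is assumed to be an existing xarray.DataArray:
from satpy.writers.cf_writer import CFWriter

# Convert one DataArray to a CF-compatible form, flattening dict attributes
cf_arr = CFWriter.da2cf(data_arr, flatten_attrs=True,
                        exclude_attrs=['raw_metadata'])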
- save_dataset(dataset, filename=None, fill_value=None, **kwargs)[source]
Save the dataset to a given filename.
- save_datasets(datasets, filename=None, groups=None, header_attrs=None, engine=None, epoch=None, flatten_attrs=False, exclude_attrs=None, include_lonlats=True, pretty=False, include_orig_name=True, numeric_name_prefix='CHANNEL_', **to_netcdf_kwargs)[source]
Save the given datasets in one netCDF file.
Note that all datasets (if grouping: in one group) must have the same projection coordinates.
- Parameters:
datasets (list) – List of xr.DataArray to be saved.
filename (str) – Output file.
groups (dict) – Group datasets according to the given assignment: {‘group_name’: [‘dataset1’, ‘dataset2’, …]}. The group name None corresponds to the root of the file, i.e., no group will be created. Warning: The results will not be fully CF compliant!
header_attrs – Global attributes to be included.
engine (str, optional) – Module to be used for writing netCDF files. Follows xarray’s to_netcdf() engine choices with a preference for ‘netcdf4’.
epoch (str, optional) – Reference time for encoding of time coordinates. If None, the default reference time is taken from EPOCH in satpy.cf.coords (from satpy.cf.coords import EPOCH).
flatten_attrs (bool, optional) – If True, flatten dict-type attributes.
exclude_attrs (list, optional) – List of dataset attributes to be excluded.
include_lonlats (bool, optional) – Always include latitude and longitude coordinates, even for datasets with area definition.
pretty (bool, optional) – Don’t modify coordinate names, if possible. Makes the file prettier, but possibly less consistent.
include_orig_name (bool, optional) – Include the original dataset name as a variable attribute in the final netCDF.
numeric_name_prefix (str, optional) – Prefix to add to each variable with a name starting with a digit. Use ‘’ or None to leave this out.
- satpy.writers.cf_writer._check_backend_versions()[source]
Issue warning if backend versions do not match.
satpy.writers.geotiff module
GeoTIFF writer objects for creating GeoTIFF files from DataArray objects.
- class satpy.writers.geotiff.GeoTIFFWriter(dtype=None, tags=None, **kwargs)[source]
Bases:
ImageWriter
Writer to save GeoTIFF images.
Basic example from Scene:
>>> scn.save_datasets(writer='geotiff')
By default the writer will use the Enhancer class to linearly stretch the data (see Enhancements). To get un-enhanced images, enhance=False can be specified, which will write a geotiff with the data type of the dataset. The fill value defaults to the dataset’s "_FillValue" attribute if not None and no value is passed to fill_value for integer data. In case of float data, if fill_value is not passed NaN will be used. If a geotiff with a certain datatype is desired, for example 32-bit floating point geotiffs:
>>> scn.save_datasets(writer='geotiff', dtype=np.float32, enhance=False)
To add custom metadata use tags:
>>> scn.save_dataset(dataset_name, writer='geotiff',
...                  tags={'offset': 291.8, 'scale': -0.35})
Images are tiled by default. To create striped TIFF files, tiled=False can be specified:
>>> scn.save_datasets(writer='geotiff', tiled=False)
For performance tips on creating geotiffs quickly and making them smaller see the Frequently Asked Questions.
Init the writer.
- GDAL_OPTIONS = ('tfw', 'rpb', 'rpctxt', 'interleave', 'tiled', 'blockxsize', 'blockysize', 'nbits', 'compress', 'num_threads', 'predictor', 'discard_lsb', 'sparse_ok', 'jpeg_quality', 'jpegtablesmode', 'zlevel', 'photometric', 'alpha', 'profile', 'bigtiff', 'pixeltype', 'copy_src_overviews', 'blocksize', 'resampling', 'quality', 'level', 'overview_resampling', 'warp_resampling', 'overview_compress', 'overview_quality', 'overview_predictor', 'tiling_scheme', 'zoom_level_strategy', 'target_srs', 'res', 'extent', 'aligned_levels', 'add_alpha')
- save_image(img: XRImage, filename: str | None = None, compute: bool = True, dtype: dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any] = None, fill_value: int | float | None = None, keep_palette: bool = False, cmap: Colormap | None = None, tags: dict[str, Any] | None = None, overviews: list[int] | None = None, overviews_minsize: int = 256, overviews_resampling: str | None = None, include_scale_offset: bool = False, scale_offset_tags: tuple[str, str] | None = None, colormap_tag: str | None = None, driver: str | None = None, tiled: bool = True, **kwargs)[source]
Save the image to the given filename in geotiff format.
Note this writer requires the rasterio library to be installed.
- Parameters:
img (xarray.DataArray) – Data to save to geotiff.
filename (str) – Filename to save the image to. Defaults to the filename passed during writer creation. Unlike the creation filename keyword argument, this filename does not get formatted with data attributes.
compute (bool) – Compute dask arrays and save the image immediately. If False then the return value can be passed to compute_writer_results() to do the computation. This is useful when multiple images may share input calculations where dask can benefit from not repeating them multiple times. Defaults to True in the writer by itself, but is typically passed as False by callers where calculations can be combined.
dtype (DTypeLike) – Numpy data type to save the image as. Defaults to 8-bit unsigned integer (np.uint8) or the data type of the data to be saved if enhance=False. If the dtype argument is provided during writer creation then that will be used as the default.
fill_value (float or int) – Value to use where data values are NaN/null. If this is specified in the writer configuration file that value will be used as the default.
keep_palette (bool) – Save palette/color table to geotiff. To be used with images that were palettized with the “palettize” enhancement. Setting this to True will cause the colormap of the image to be written as a “color table” in the output geotiff and the image data values will represent the index values into that color table. By default, this will use the colormap used in the “palettize” operation. See the cmap option for other options. This option defaults to False and palettized images will be converted to RGB/A.
cmap (trollimage.colormap.Colormap or None) – Colormap to save as a color table in the output geotiff. See keep_palette for more information. Defaults to the palette of the provided img object. The colormap’s range should be set to match the index range of the palette (ex. cmap.set_range(0, len(colors))).
tags (dict) – Extra metadata to store in geotiff.
overviews (list) – The reduction factors of the overviews to include in the image, eg: scn.save_datasets(overviews=[2, 4, 8, 16]). If provided as an empty list, then levels will be computed as powers of two until the last level has fewer pixels than overviews_minsize. Default is to not add overviews.
overviews_minsize (int) – Minimum number of pixels for the smallest overview size generated when overviews is auto-generated. Defaults to 256.
overviews_resampling (str) – Resampling method to use when generating overviews. This must be the name of an enum value from rasterio.enums.Resampling and only takes effect if the overviews keyword argument is provided. Common values include nearest (default), bilinear, average, and many others. See the rasterio documentation for more information.
scale_offset_tags (Tuple[str, str]) – If set, scale and offset will be included in the GeoTIFF headers in the GDALMetaData tag. The value of this argument should be a two-element tuple (scale_label, offset_label), for example, ("scale", "offset"), indicating the labels to be used.
colormap_tag (Optional[str]) – If set and the image being saved was colorized or palettized then a comma-separated version of the colormap is saved to a custom geotiff tag with the provided name. See trollimage.colormap.Colormap.to_csv() for more information.
driver (Optional[str]) – Name of GDAL driver to use to save the geotiff. If not specified or None (default) the “GTiff” driver is used. Another common option is “COG” for Cloud Optimized GeoTIFF. See GDAL documentation for more information.
tiled (bool) – For performance this defaults to True. Pass False to create striped TIFF files.
include_scale_offset (deprecated, bool) – Deprecated. Use scale_offset_tags=("scale", "offset") to include scale and offset tags.
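Combining some of the options above, here is a sketch of deferring computation across multiple saves; a second Scene scn2 is assumed to exist:
>>> import numpy as np
>>> from satpy.writers import compute_writer_results
>>> res1 = scn.save_datasets(writer='geotiff', dtype=np.float32,
...                          enhance=False, compute=False)
>>> res2 = scn2.save_datasets(writer='geotiff', overviews=[2, 4, 8],
...                           compute=False)
>>> compute_writer_results([res1, res2])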
satpy.writers.mitiff module
MITIFF writer objects for creating MITIFF files from Dataset objects.
- class satpy.writers.mitiff.MITIFFWriter(name=None, tags=None, **kwargs)[source]
Bases:
ImageWriter
Writer to produce MITIFF image files.
Initialize reader with tag and other configuration information.
- _generate_intermediate_filename(gen_filename)[source]
Replace mitiff ext because pillow doesn’t recognise the file type.
- _make_image_description(datasets, **kwargs)[source]
Generate image description for mitiff.
Satellite: NOAA 18
Date and Time: 06:58 31/05-2016
SatDir: 0
Channels: 6 In this file: 1-VIS0.63 2-VIS0.86 3(3B)-IR3.7 4-IR10.8 5-IR11.5 6(3A)-VIS1.6
Xsize: 4720
Ysize: 5544
Map projection: Stereographic
Proj string: +proj=stere +lon_0=0 +lat_0=90 +lat_ts=60 +ellps=WGS84 +towgs84=0,0,0 +units=km +x_0=2526000.000000 +y_0=5806000.000000
TrueLat: 60 N
GridRot: 0
Xunit: 1000 m Yunit: 1000 m
NPX: 0.000000 NPY: 0.000000
Ax: 1.000000 Ay: 1.000000 Bx: -2526.000000 By: -262.000000
Satellite: <satellite name>
Date and Time: <HH:MM dd/mm-yyyy>
SatDir: 0
Channels: <number of channels> In this file: <channel names in order>
Xsize: <number of pixels x>
Ysize: <number of pixels y>
Map projection: Stereographic
Proj string: <proj4 string with +x_0 and +y_0 which is the positive distance from proj origo to the lower left corner of the image data>
TrueLat: 60 N
GridRot: 0
Xunit: 1000 m Yunit: 1000 m
NPX: 0.000000 NPY: 0.000000
Ax: <pixel size x in km> Ay: <pixel size y in km> Bx: <left corner of upper right pixel in km> By: <upper corner of upper right pixel in km>
If palette image, write special palette. If normal channel, write table calibration:
Table_calibration: <channel name>, <calibration type>, [<unit>], <no of bits of data>, [<calibration values space separated>]\n\n
- _save_as_enhanced(datasets, tmp_gen_filename, **kwargs)[source]
Save datasets as an enhanced RGB image.
- _save_datasets_as_mitiff(datasets, image_description, gen_filename, **kwargs)[source]
Put all together and save as a tiff file.
Include the special tags making it a mitiff file.
- save_dataset(dataset, filename=None, fill_value=None, compute=True, **kwargs)[source]
Save single dataset as mitiff file.
satpy.writers.ninjogeotiff module
Writer for GeoTIFF images with tags for the NinJo visualization tool.
Starting with NinJo 7, NinJo is able to read standard GeoTIFF images,
with required metadata encoded as a set of XML tags in the GDALMetadata
TIFF tag. Each of the XML tags must be prepended with 'NINJO_'
.
For NinJo delivery, these GeoTIFF files supersede the old NinJoTIFF
format. The NinJoGeoTIFFWriter
therefore supersedes the old
Satpy NinJoTIFF writer and the pyninjotiff package.
The reference documentation for valid NinJo tags and their meaning is contained in NinJoPedia. Since this page is not in the public web, there is a (possibly outdated) mirror.
There are some user-facing differences between the old NinJoTIFF writer and the new NinJoGeoTIFF writer. Most notably, keyword arguments that correspond to tags directly passed by the user are now identical, including case, to how they will be written to the GDALMetaData and interpreted by NinJo. That means some keyword arguments have changed, as summarised in this table:
ninjotiff (old) | ninjogeotiff (new) | Notes
---|---|---
chan_id | ChannelID | mandatory
data_cat | DataType | mandatory
physic_unit | PhysicUnit | mandatory
physic_val | PhysicValue | mandatory
sat_id | SatelliteNameID | mandatory
data_source | DataSource | optional
Moreover, two keyword arguments are no longer supported because
their functionality has become redundant. This applies to
ch_min_measurement_unit
and ch_max_measurement_unit
.
Instead, pass those values in source units to the
stretch()
enhancement with the min_stretch
and max_stretch
arguments.
For images where the pixel value corresponds directly to a physical value,
NinJo has a functionality to read the corresponding quantity (example:
brightness temperature or reflectance). To make this possible, the writer
adds the tags Gradient
and AxisIntercept
. Those tags are added if
and only if the image has mode L
or LA
and PhysicUnit
is not set
to "N/A"
. In other words, to suppress those tags for images with mode
L
or LA
(for example, for the composite vis_with_ir
, where the
physical interpretation of individual pixels is lost), one should set
PhysicUnit
to "N/A"
, "n/a"
, "1"
, or ""
(empty string).
- class satpy.writers.ninjogeotiff.NinJoGeoTIFFWriter(dtype=None, tags=None, **kwargs)[source]
Bases:
GeoTIFFWriter
Writer for GeoTIFFs with NinJo tags.
This writer is experimental. API may be subject to change.
For information, see module docstring and documentation for
save_image()
.Init the writer.
- _fix_units(image, quantity, unit)[source]
Adapt units between °C and K.
This will return a new XRImage, to make sure the old data and enhancement history aren’t touched.
- save_image(image, filename=None, fill_value=None, compute=True, keep_palette=False, cmap=None, overviews=None, overviews_minsize=256, overviews_resampling=None, tags=None, config_files=None, *, ChannelID, DataType, PhysicUnit, PhysicValue, SatelliteNameID, **kwargs)[source]
Save image along with NinJo tags.
Save image along with NinJo tags. Interface as for GeoTIFF, except NinJo expects some additional tags. Those tags will be prepended with ninjo_ and added as GDALMetaData.
Writing such images requires trollimage 1.16 or newer.
Importing such images with NinJo requires NinJo 7 or newer.
- Parameters:
image (XRImage) – Image to save.
filename (str) – Where to save the file.
fill_value (int) – Which pixel value is fill value?
compute (bool) – To compute or not to compute, that is the question.
keep_palette (bool) – As for parent GeoTIFF save_image().
cmap (trollimage.colormap.Colormap) – As for parent save_image().
overviews (list) – As for save_image().
overviews_minsize (int) – As for save_image().
overviews_resampling (str) – As for save_image().
tags (dict) – Extra (not NinJo) tags to add to GDAL MetaData.
config_files (Any) – Not directly used by this writer, supported for compatibility with other writers.
Remaining keyword arguments are either passed as GDAL options, if contained in self.GDAL_OPTIONS, or they are passed to NinJoTagGenerator, which will include them as NinJo tags in GDALMetadata. Supported tags are defined in NinJoTagGenerator.optional_tags. The meaning of those (and other) tags are defined in the NinJo documentation (see module documentation for a link to NinJoPedia). The following tags are mandatory and must be provided as keyword arguments:
- ChannelID (int)
NinJo Channel ID
- DataType (int)
NinJo Data Type
- SatelliteNameID (int)
NinJo Satellite ID
- PhysicUnit (str)
NinJo label for unit (example: “C”). If PhysicValue is set to “Temperature”, PhysicUnit is set to “C”, but data attributes indicate the data have unit “K”, then the writer will adapt the header ninjo_AxisIntercept such that data are interpreted in units of “C”. If PhysicUnit is set to “N/A”, no AxisIntercept and Gradient tags will be written.
- PhysicValue (str)
NinJo label for quantity (example: “temperature”)
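For illustration, a call providing the mandatory tags might look like the following; all ID numbers and values here are placeholders that must be taken from your NinJo configuration:
scn.save_datasets(
    writer="ninjogeotiff",
    ChannelID=900015,         # placeholder NinJo channel ID
    DataType=2,               # placeholder NinJo data type
    PhysicUnit="C",
    PhysicValue="Temperature",
    SatelliteNameID=6400014,  # placeholder NinJo satellite ID
)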
- scale_offset_tag_names = ('ninjo_Gradient', 'ninjo_AxisIntercept')
- class satpy.writers.ninjogeotiff.NinJoTagGenerator(image, fill_value, filename, **kwargs)[source]
Bases:
object
Class to collect NinJo tags.
This class is used by NinJoGeoTIFFWriter to collect NinJo tags. Most end-users will not need to create instances of this class directly.
Tags are gathered from three sources:
Fixed tags, contained in the attribute fixed_tags. The value of those tags is hardcoded and never changes.
Tags passed by the user, contained in the attribute passed_tags. Those tags must be passed by the user as arguments to the writer, which will pass them on when instantiating this class.
Tags calculated from data and metadata. Those tags are defined in the attribute dynamic_tags. They are either calculated from image data, from image metadata, or from arguments passed by the user to the writer.
Some tags are mandatory (defined in mandatory_tags). All tags that are not mandatory are optional. By default, optional tags are generated if and only if the required information is available.
Initialise tag generator.
- Parameters:
image (
trollimage.xrimage.XRImage
) – XRImage for which NinJo tags should be calculated.fill_value (int) – Fill value corresponding to image.
filename (str) – Filename to be written.
**kwargs – Any additional tags to be included as-is.
- _epoch = datetime.datetime(1970, 1, 1, 0, 0, tzinfo=datetime.timezone.utc)
- dynamic_tags = {'CentralMeridian': 'central_meridian', 'ColorDepth': 'color_depth', 'CreationDateID': 'creation_date_id', 'DateID': 'date_id', 'EarthRadiusLarge': 'earth_radius_large', 'EarthRadiusSmall': 'earth_radius_small', 'FileName': 'filename', 'MaxGrayValue': 'max_gray_value', 'MinGrayValue': 'min_gray_value', 'Projection': 'projection', 'ReferenceLatitude1': 'ref_lat_1', 'TransparentPixel': 'transparent_pixel', 'XMaximum': 'xmaximum', 'YMaximum': 'ymaximum'}
- fixed_tags = {'HeaderVersion': 2, 'Magic': 'NINJO', 'XMinimum': 1, 'YMinimum': 1}
- get_creation_date_id()[source]
Calculate the creation date ID.
That’s seconds since UNIX Epoch for the time the image is created.
- get_date_id()[source]
Calculate the date ID.
That’s seconds since UNIX Epoch for the time corresponding to the satellite image start of measurement time.
- get_meridian_east()[source]
Get the easternmost longitude of the area.
Currently not implemented. In pyninjotiff it was implemented but the answer was incorrect.
- get_meridian_west()[source]
Get the westernmost longitude of the area.
Currently not implemented. In pyninjotiff it was implemented but the answer was incorrect.
- get_projection()[source]
Get NinJo projection string.
From the documentation, valid values are:
NPOL/SPOL: polar-stereographic North/South
PLAT: “Plate Carrée”, equirectangular projection
MERC: Mercator projection
Derived from AreaDefinition.
- get_ref_lat_2()[source]
Get reference latitude two.
This is not implemented and never was correctly implemented in pyninjotiff either. It doesn’t appear to be used by NinJo.
- get_transparent_pixel()[source]
Get the transparent pixel value, also known as the fill value.
When no fill value is defined (value None), such as for RGBA or LA images, returns -1, in accordance with the file format specification.
- get_xmaximum()[source]
Get the maximum value of x, i.e. the extent of the image in pixels along the x axis.
- mandatory_tags = {'AxisIntercept', 'ChannelID', 'ColorDepth', 'CreationDateID', 'DataType', 'DateID', 'Gradient', 'HeaderVersion', 'MaxGrayValue', 'MinGrayValue', 'PhysicUnit', 'PhysicValue', 'Projection', 'SatelliteNameID', 'SatelliteNumber', 'TransparentPixel', 'XMaximum', 'XMinimum', 'YMaximum', 'YMinimum'}
- optional_tags = {'AOSAzimuth', 'Altitude', 'CentralMeridian', 'ColorTable', 'DataSource', 'Description', 'EarthRadiusLarge', 'EarthRadiusSmall', 'GeoLatitude', 'GeoLongitude', 'GeodeticDate', 'IsAtmosphereCorrected', 'IsBlackLinesCorrection', 'IsCalibrated', 'IsNormalized', 'IsValueTableAvailable', 'LOSAzimuth', 'MaxElevation', 'MeridianEast', 'MeridianWest', 'OriginalHeader', 'OverFlightTime', 'OverflightDirection', 'ReferenceLatitude1', 'ReferenceLatitude2', 'ValueTableFloatField'}
- passed_tags = {'ChannelID', 'DataType', 'PhysicUnit', 'PhysicValue', 'SatelliteNameID'}
- postponed_tags = {'AxisIntercept', 'Gradient'}
satpy.writers.ninjotiff module
Writer for TIFF images compatible with the NinJo visualization tool (NinjoTIFFs).
NinjoTIFFs can be color images or monochromatic. For monochromatic images, the physical units and scale and offsets to retrieve the physical values are provided. Metadata is also recorded in the file.
In order to write ninjotiff files, some metadata needs to be provided to the writer. Here is an example on how to write a color image:
chn = "airmass"
ninjoRegion = load_area("areas.def", "nrEURO3km")
filenames = glob("data/*__")
global_scene = Scene(reader="hrit_msg", filenames=filenames)
global_scene.load([chn])
local_scene = global_scene.resample(ninjoRegion)
local_scene.save_dataset(chn, filename="airmass.tif", writer='ninjotiff',
sat_id=6300014,
chan_id=6500015,
data_cat='GPRN',
data_source='EUMCAST',
nbits=8)
Here is an example on how to write a monochromatic image:
chn = "IR_108"
ninjoRegion = load_area("areas.def", "nrEURO3km")
filenames = glob("data/*__")
global_scene = Scene(reader="hrit_msg", filenames=filenames)
global_scene.load([chn])
local_scene = global_scene.resample(ninjoRegion)
local_scene.save_dataset(chn, filename="msg.tif", writer='ninjotiff',
sat_id=6300014,
chan_id=900015,
data_cat='GORN',
data_source='EUMCAST',
physic_unit='K',
nbits=8)
The metadata to provide to the writer can also be stored in a configuration file (see pyninjotiff), so that the previous example can be rewritten as:
chn = "IR_108"
ninjoRegion = load_area("areas.def", "nrEURO3km")
filenames = glob("data/*__")
global_scene = Scene(reader="hrit_msg", filenames=filenames)
global_scene.load([chn])
local_scene = global_scene.resample(ninjoRegion)
local_scene.save_dataset(chn, filename="msg.tif", writer='ninjotiff',
# ninjo product name to look for in .cfg file
ninjo_product_name="IR_108",
# custom configuration file for ninjo tiff products
# if not specified PPP_CONFIG_DIR is used as config file directory
ninjo_product_file="/config_dir/ninjotiff_products.cfg")
- class satpy.writers.ninjotiff.NinjoTIFFWriter(tags=None, **kwargs)[source]
Bases:
ImageWriter
Writer for NinjoTiff files.
Initialize the writer.
- save_dataset(dataset, filename=None, fill_value=None, compute=True, convert_temperature_units=True, **kwargs)[source]
Save a dataset to ninjotiff format.
This calls save_image in turn, but first performs some unit conversion if necessary and desired. Unit conversion can be suppressed by passing convert_temperature_units=False.
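For example, building on the examples in the module documentation above, unit conversion could be suppressed like this; the metadata values are the same placeholders used there:
local_scene.save_dataset(chn, filename="msg.tif", writer='ninjotiff',
                         sat_id=6300014,
                         chan_id=900015,
                         data_cat='GORN',
                         data_source='EUMCAST',
                         physic_unit='K',
                         convert_temperature_units=False,
                         nbits=8)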
- satpy.writers.ninjotiff.convert_units(dataset, in_unit, out_unit)[source]
Convert units of dataset.
Convert dataset units for the benefit of writing NinJoTIFF. The main background here is that NinJoTIFF would like brightness temperatures in °C, but satellite data files are in K. For simplicity of implementation, this function can only convert from K to °C.
This function will convert input data from K to °C and write the new unit in the
"units"
attribute. When output and input units are equal, it returns the input dataset.
satpy.writers.simple_image module
Generic PIL/Pillow image format writer.
- class satpy.writers.simple_image.PillowWriter(**kwargs)[source]
Bases:
ImageWriter
Generic PIL image format writer.
Initialize image writer plugin.
- save_image(img, filename=None, compute=True, **kwargs)[source]
Save Image object to a given filename.
- Parameters:
img (trollimage.xrimage.XRImage) – Image object to save to disk.
filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
compute (bool) – If True (default), compute and save the dataset. If False return either a dask.delayed.Delayed object or tuple of (source, target). See the return values below for more information.
**kwargs – Keyword arguments to pass to the images save method.
- Returns:
Value returned depends on compute. If compute is True then the return value is the result of computing a dask.delayed.Delayed object or running dask.array.store. If compute is False then the returned value is either a dask.delayed.Delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to dask.array.store. If target is provided then the caller is responsible for calling target.close() if the target has this method.
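A minimal sketch of saving all loaded datasets as PNG files with this writer; the filename pattern is illustrative:
scn.save_datasets(writer='simple_image',
                  filename='{name}_{start_time:%Y%m%d_%H%M%S}.png')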
satpy.writers.utils module
Writer utilities.
Module contents
Shared objects of the various writer classes.
For now, this includes enhancement configuration utilities.
- class satpy.writers.DecisionTree(decision_dicts, match_keys, multival_keys=None)[source]
Bases:
object
Structure to search for nearest match from a set of parameters.
This class is used to find the best configuration section by matching a set of attributes. The provided dictionary contains a mapping of “section name” to “decision” dictionaries. Each decision dictionary contains the attributes that will be used for matching plus any additional keys that could be useful when matched. This class will search these decisions and return the one with the most matching parameters to the attributes passed to the
find_match()
method.
Note that decision sections are provided as a dict instead of a list so that they can be overwritten or updated by doing the equivalent of a current_dicts.update(new_dicts).
Examples
Decision sections are provided as a dictionary of dictionaries. The returned match will be the first result found by searching provided match_keys in order.
decisions = {
    'first_section': {
        'a': 1,
        'b': 2,
        'useful_key': 'useful_value',
    },
    'second_section': {
        'a': 5,
        'useful_key': 'other_useful_value1',
    },
    'third_section': {
        'b': 4,
        'useful_key': 'other_useful_value2',
    },
}
tree = DecisionTree(decisions, ('a', 'b'))
tree.find_match(a=5, b=2)  # second_section dict
tree.find_match(a=1, b=2)  # first_section dict
tree.find_match(a=5, b=4)  # second_section dict
tree.find_match(a=3, b=2)  # no match
Init the decision tree.
- Parameters:
decision_dicts (dict) – Dictionary of dictionaries. Each sub-dictionary contains key/value pairs that can be matched from the find_match method. Sub-dictionaries can include additional keys outside the
match_keys
provided to act as the “result” of a query. The keys of the root dict are arbitrary.match_keys (list) – Keys of the provided dictionary to use for matching.
multival_keys (list) – Keys of match_keys that can be provided as multiple values. A multi-value key can be specified as a single value (typically a string) or a set. If a set, it will be sorted and converted to a tuple and then used for matching. When querying the tree, these keys will be searched for exact multi-value results (the sorted tuple) and if not found then each of the values will be searched individually in alphabetical order.
- _build_tree(conf)[source]
Build the tree.
Create a tree structure of dicts where each level represents the possible matches for a specific match_key. When finding matches we will iterate through the tree matching each key that we know about. The last dict in the “tree” will contain the configuration section whose match values led down that path in the tree.
See DecisionTree.find_match() for more information.
- any_key = None
- class satpy.writers.EnhancementDecisionTree(*decision_dicts, **kwargs)[source]
Bases:
DecisionTree
The enhancement decision tree.
Init the decision tree.
- class satpy.writers.Enhancer(enhancement_config_file=None)[source]
Bases:
object
Helper class to get enhancement information for images.
Initialize an Enhancer instance.
- Parameters:
enhancement_config_file – The enhancement configuration to apply, False to leave as is.
- class satpy.writers.ImageWriter(name=None, filename=None, base_dir=None, enhance=None, **kwargs)[source]
Bases:
Writer
Base writer for image file formats.
Initialize image writer object.
- Parameters:
name (str) – A name for this writer for log and error messages. If this writer is configured in a YAML file its name should match the name of the YAML file. Writer names may also appear in output file attributes.
filename (str) –
Filename to save data to. This filename can and should specify certain python string formatting fields to differentiate between data written to the files. Any attributes provided by the
.attrs
of a DataArray object may be included. Format and conversion specifiers provided by the
trollsift
package may also be used. Any directories in the provided pattern will be created if they do not exist. Example:
{platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif
base_dir (str) – Base destination directories for all created files.
enhance (bool or Enhancer) – Whether to automatically enhance data to be more visually useful and to fit inside the file format being saved to. By default this will use the enhancement configuration files found by the default
Enhancer
class. This can be set to False so that no enhancements are performed. This can also be an instance of the
Enhancer
class if further custom enhancement is needed.
kwargs (dict) – Additional keyword arguments to pass to the
Writer
base class.
Changed in version 0.10: Deprecated enhancement_config_file and ‘enhancer’ in favor of enhance. Pass an instance of the Enhancer class to enhance instead.
- save_dataset(dataset, filename=None, fill_value=None, overlay=None, decorate=None, compute=True, units=None, **kwargs)[source]
Save the
dataset
to a given
filename
.

This method creates an enhanced image using
get_enhanced_image()
. The image is then passed to
save_image()
. See both of these functions for more details on the arguments passed to this method.
- save_image(img: XRImage, filename: str | None = None, compute: bool = True, **kwargs)[source]
Save Image object to a given
filename
.
- Parameters:
img (trollimage.xrimage.XRImage) – Image object to save to disk.
filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
compute (bool) – If True (default), compute and save the dataset. If False return either a Dask Delayed object or tuple of (source, target). See the return values below for more information.
**kwargs – Other keyword arguments to pass to this writer.
- Returns:
Value returned depends on compute. If compute is True then the return value is the result of computing a Dask Delayed object or running
dask.array.store()
. If compute is False then the returned value is either a Dask Delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to
dask.array.store()
. If target is provided then the caller is responsible for calling target.close() if the target has this method.
- class satpy.writers.Writer(name=None, filename=None, base_dir=None, **kwargs)[source]
Bases:
Plugin
, DataDownloadMixin
Base Writer class for all other writers.
A minimal writer subclass should implement the save_dataset method.
Initialize the writer object.
- Parameters:
name (str) – A name for this writer for log and error messages. If this writer is configured in a YAML file its name should match the name of the YAML file. Writer names may also appear in output file attributes.
filename (str) –
Filename to save data to. This filename can and should specify certain python string formatting fields to differentiate between data written to the files. Any attributes provided by the
.attrs
of a DataArray object may be included. Format and conversion specifiers provided by the
trollsift
package may also be used. Any directories in the provided pattern will be created if they do not exist. Example:
{platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif
base_dir (str) – Base destination directories for all created files.
kwargs (dict) – Additional keyword arguments to pass to the
Plugin
class.
- create_filename_parser(base_dir)[source]
Create a
trollsift.parser.Parser
object for later use.
- get_filename(**kwargs)[source]
Create a filename where output data will be saved.
- Parameters:
kwargs (dict) – Attributes and other metadata to use for formatting the previously provided filename.
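For example, a minimal sketch of formatting the filename pattern shown above (the writer instance and all attribute values here are hypothetical):

>>> import datetime as dt
>>> # assuming `writer` was created with the example filename pattern above
>>> writer.get_filename(
...     platform_name="GOES-16", sensor="abi", name="C01",
...     start_time=dt.datetime(2023, 1, 1, 12, 0))
'GOES-16_abi_C01_20230101_120000.tif'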
- save_dataset(dataset, filename=None, fill_value=None, compute=True, units=None, **kwargs)[source]
Save the
dataset
to a given
filename
.

This method must be overloaded by the subclass.
- Parameters:
dataset (xarray.DataArray) – Dataset to save using this writer.
filename (str) – Optionally specify the filename to save this dataset to. If not provided then filename which can be provided to the init method will be used and formatted by dataset attributes.
fill_value (int or float) – Replace invalid values in the dataset with this fill value if applicable to this writer.
compute (bool) – If True (default), compute and save the dataset. If False return either a Dask Delayed object or tuple of (source, target). See the return values below for more information.
units (str or None) – If not None, will convert the dataset to the given unit using pint-xarray before saving. Default is not to do any conversion.
**kwargs – Other keyword arguments for this particular writer.
- Returns:
Value returned depends on compute. If compute is True then the return value is the result of computing a Dask Delayed object or running
dask.array.store()
. If compute is False then the returned value is either a Dask Delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to
dask.array.store()
. If target is provided the caller is responsible for calling target.close() if the target has this method.
- save_datasets(datasets, compute=True, **kwargs)[source]
Save all datasets to one or more files.
Subclasses can use this method to save all datasets to one single file or optimize the writing of individual datasets. By default this simply calls save_dataset for each dataset provided.
- Parameters:
datasets (iterable) – Iterable of xarray.DataArray objects to save using this writer.
compute (bool) – If True (default), compute all the saves to disk. If False then the return value is either a Dask Delayed object or two lists to be passed to a
dask.array.store()
call. See return values below for more details.
**kwargs – Keyword arguments to pass to save_dataset. See that documentation for more details.
- Returns:
Value returned depends on compute keyword argument. If compute is True the value is the result of either a
dask.array.store()
operation or a Dask Delayed compute, typically this is None. If compute is False then the result is either a Dask Delayed object that can be computed with delayed.compute() or a two element tuple of sources and targets to be passed to
dask.array.store()
. If targets is provided then it is the caller’s responsibility to close any objects that have a “close” method.
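As a sketch of handling the compute=False return value (the writer and datasets variables are assumed to already exist; see also compute_writer_results() below):

>>> import dask.array as da
>>> res = writer.save_datasets(datasets, compute=False)
>>> if isinstance(res, tuple):
...     da.store(*res)    # two element tuple of (sources, targets)
... else:
...     res.compute()     # a single Dask Delayed object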
- classmethod separate_init_kwargs(kwargs)[source]
Help separating arguments between init and save methods.
Currently the
Scene
is passed one set of arguments to represent the Writer creation and saving steps. This is not preferred for Writer structure, but provides a simpler interface to users. This method splits the provided keyword arguments between those needed for initialization and those needed for the
save_dataset
and
save_datasets
method calls.

Writer subclasses should prefer keyword arguments only for the save methods and leave the init keyword arguments to the base classes when possible.
- satpy.writers._burn_overlay(img, image_metadata, area, cw_, overlays)[source]
Burn the overlay in the image array.
- satpy.writers._create_overlays_dict(color, width, grid, level_coast, level_borders)[source]
Fill in the overlays dict.
- satpy.writers.add_decorate(orig, fill_value=None, **decorate)[source]
Decorate an image with text and/or logos/images.
This call adds text/logos in order as given in the input to keep the alignment features available in pydecorate.
An example of the decorate config:
decorate = {
    'decorate': [
        {'logo': {'logo_path': <path to a logo>, 'height': 143,
                  'bg': 'white', 'bg_opacity': 255}},
        {'text': {'txt': start_time_txt,
                  'align': {'top_bottom': 'bottom', 'left_right': 'right'},
                  'font': <path to ttf font>,
                  'font_size': 22,
                  'height': 30,
                  'bg': 'black',
                  'bg_opacity': 255,
                  'line': 'white'}}
    ]
}
Any number of text/logo entries in any order can be added to the decorate list, but the order of the list is kept as described above.
Note that a feature given in one element, e.g. bg (which is the background color), will also apply to the next elements unless a new value is given.
align is a special keyword telling where in the image to start adding features, top_bottom is either top or bottom and left_right is either left or right.
- satpy.writers.add_logo(orig, dc, img, logo)[source]
Add logos or other images to an image using the pydecorate package.
All the features of pydecorate’s
add_logo
are available. See the Pydecorate documentation for more info.
- satpy.writers.add_overlay(orig_img, area, coast_dir, color=None, width=None, resolution=None, level_coast=None, level_borders=None, fill_value=None, grid=None, overlays=None)[source]
Add coastline, political borders and grid(graticules) to image.
Uses
color
for feature colors where
color
is a 3-element tuple of integers between 0 and 255 representing (R, G, B).

Warning
This function currently loses the data mask (alpha band).
resolution
is chosen automatically if None (default), otherwise it should be one of:

Key | Resolution | Scale
---|---|---
‘f’ | Full resolution | 0.04 km
‘h’ | High resolution | 0.2 km
‘i’ | Intermediate resolution | 1.0 km
‘l’ | Low resolution | 5.0 km
‘c’ | Crude resolution | 25 km
grid
is a dictionary with key values as documented in detail in pycoast, e.g.:

overlay = {'grid': {'major_lonlat': (10, 10),
                    'write_text': False,
                    'outline': (224, 224, 224),
                    'width': 0.5}}

Here major_lonlat is plotted every 10 deg for both longitude and latitude, no labels for the grid lines are plotted, the color used for the grid lines is light gray, and the width of the graticules is 0.5 pixels.

For the grid, if aggdraw is used, the font option is mandatory unless
write_text
is set to False:

font = aggdraw.Font('black', '/usr/share/fonts/truetype/msttcorefonts/Arial.ttf',
                    opacity=127, size=16)
- satpy.writers.add_scale(orig, dc, img, scale)[source]
Add scale to an image using the pydecorate package.
All the features of pydecorate’s
add_scale
are available. See the Pydecorate documentation for more info.
- satpy.writers.add_text(orig, dc, img, text)[source]
Add text to an image using the pydecorate package.
All the features of pydecorate’s
add_text
are available. See the Pydecorate documentation for more info.
- satpy.writers.available_writers(as_dict=False)[source]
Available writers based on current configuration.
- Parameters:
as_dict (bool) – Optionally return writer information as a dictionary. Default: False
- Returns: List of available writer names. If as_dict is True then
a list of dictionaries including additional writer information is returned.
- satpy.writers.compute_writer_results(results)[source]
Compute all the given dask graphs results so that the files are saved.
- Parameters:
results (iterable) – Iterable of dask graphs resulting from calls to scn.save_datasets(…, compute=False)
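For example, a sketch of delaying the writes of two writers and computing them together (the Scene, writer names, and output directory here are only illustrative):

>>> from satpy.writers import compute_writer_results
>>> res1 = scn.save_datasets(writer='geotiff', base_dir='/tmp', compute=False)
>>> res2 = scn.save_datasets(writer='simple_image', base_dir='/tmp', compute=False)
>>> compute_writer_results([res1, res2])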
- satpy.writers.configs_for_writer(writer=None)[source]
Generate writer configuration files for one or more writers.
- Parameters:
writer (Optional[str]) – Yield configs only for this writer
Returns: Generator of lists of configuration files
- satpy.writers.get_enhanced_image(dataset, enhance=None, overlay=None, decorate=None, fill_value=None)[source]
Get an enhanced version of dataset as an
XRImage
instance.
- Parameters:
dataset (xarray.DataArray) – Data to be enhanced and converted to an image.
enhance (bool or Enhancer) – Whether to automatically enhance data to be more visually useful and to fit inside the file format being saved to. By default this will use the enhancement configuration files found by the default
Enhancer
class. This can be set to False so that no enhancements are performed. This can also be an instance of the
Enhancer
class if further custom enhancement is needed.
overlay (dict) – Options for image overlays. See
add_overlay()
for available options.
decorate (dict) – Options for decorating the image. See
add_decorate()
for available options.
fill_value (int or float) – Value to use when pixels are masked or invalid. Default of None means to create an alpha channel. See
finalize()
for more details. Only used when adding overlays or decorations. Otherwise it is up to the caller to “finalize” the image before using it except if calling
img.show()
or providing the image to a writer as these will finalize the image.
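A minimal sketch, assuming a Scene with a 'true_color' composite already loaded:

>>> from satpy.writers import get_enhanced_image
>>> img = get_enhanced_image(scn['true_color'])
>>> img.show()  # showing the image also finalizes it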
- satpy.writers.group_results_by_output_file(sources, targets)[source]
Group results by output file.
For writers that return sources and targets for
compute=False
, split the results by output file.

When not only the data but also GeoTIFF tags are dask arrays, then
save_datasets(..., compute=False)
returns a tuple of flat lists, where the second list consists of a mixture of
RIOTag
and
RIODataset
objects (from trollimage). In some cases, we may want to get a separate delayed object for each file; for example, if we want to add a wrapper to do something with the file as soon as it’s finished. This function unflattens the flat lists into a list of (src, target) tuples.

For example, to close files as soon as computation is completed:
>>> import dask
>>> import dask.array as da
>>> @dask.delayed
... def closer(obj, targs):
...     for targ in targs:
...         targ.close()
...     return obj
>>> (srcs, targs) = sc.save_datasets(writer="ninjogeotiff", compute=False, **ninjo_tags)
>>> wrapped = []
>>> for (src, targ) in group_results_by_output_file(srcs, targs):
...     delayed_store = da.store(src, targ, compute=False)
...     wrapped_store = closer(delayed_store, targ)
...     wrapped.append(wrapped_store)
>>> compute_writer_results(wrapped)
In the wrapper you can do other useful tasks, such as writing a log message or moving files to a different directory.
Warning
Adding a callback may impact runtime and RAM. The pattern or cause is unclear. Tests with FCI data show that for resampling with high RAM use (from around 15 GB), runtime increases when a callback is added. Tests with ABI or low RAM consumption rather show a decrease in runtime. For more information, see these GitHub comments. Users who find out more are encouraged to contact the Satpy developers with clues.
- Parameters:
sources – List of sources (typically dask.array) as returned by
Scene.save_datasets()
.
targets – List of targets (should be
RIODataset
or
RIOTag
) as returned by
Scene.save_datasets()
.
- Returns:
List of
Tuple(List[sources], List[targets])
with a length equal to the number of output files planned to be written by
Scene.save_datasets()
.
- satpy.writers.load_writer(writer, **writer_kwargs)[source]
Find and load the writer named writer from the available configuration files.
- satpy.writers.load_writer_configs(writer_configs, **writer_kwargs)[source]
Load the writer from the provided writer_configs.
- satpy.writers.read_writer_config(config_files, loader=<class 'yaml.loader.UnsafeLoader'>)[source]
Read the writer config_files and return the info extracted.
- satpy.writers.split_results(results)[source]
Split results.
Get sources, targets and delayed objects to separate lists from a list of results collected from (multiple) writer(s).
- satpy.writers.to_image(dataset)[source]
Convert
dataset
into a
XRImage
instance.

Convert the
dataset
into an instance of the
XRImage
class. This function makes no other changes. To get an enhanced image, possibly with overlays and decoration, see
get_enhanced_image()
.
- Parameters:
dataset (xarray.DataArray) – Data to be converted to an image.
- Returns:
Instance of
XRImage
.
Submodules
satpy._compat module
Backports and compatibility fixes for satpy.
satpy._config module
Satpy Configuration directory and file handling.
- satpy._config.cached_entry_point(group_name: str) Iterable[EntryPoint] [source]
Return entry_point for specified
group
.

This is a dummy proxy to allow caching and provide compatibility between versions of Python and importlib_metadata.
- satpy._config.config_search_paths(filename, search_dirs=None, **kwargs)[source]
Get series of configuration base paths where Satpy configs are located.
- satpy._config.get_config_path(filename)[source]
Get the path to the highest priority version of a config file.
- satpy._config.get_entry_points_config_dirs(group_name: str, include_config_path: bool = True) list[str] [source]
Get the config directories for all entry points of given name.
- satpy._config.glob_config(pattern, search_dirs=None)[source]
Return glob results for all possible configuration locations.
- Note: This method does not check the configuration “base” directory if the pattern includes a subdirectory.
This is done for performance since this is usually used to find all configs for a certain component.
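For example, a sketch of looking up reader configuration files (the reader name is only illustrative; the returned paths depend on your installation and SATPY_CONFIG_PATH):

>>> from satpy._config import get_config_path, glob_config
>>> get_config_path('readers/abi_l1b.yaml')  # highest priority version of this config
>>> list(glob_config('readers/*.yaml'))      # all matching reader config files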
satpy._scene_converters module
Helper functions for converting the Scene object to some other object.
- satpy._scene_converters._get_dataarrays_from_identifiers(scn, identifiers)[source]
Return a list of DataArray based on a single or list of identifiers.
An identifier can be a DataID or a string with name of a valid DataID.
- satpy._scene_converters.to_geoviews(scn, gvtype=None, datasets=None, kdims=None, vdims=None, dynamic=False)[source]
Convert satpy Scene to geoviews.
- Parameters:
scn (satpy.Scene) – Satpy Scene.
gvtype (gv plot type) – One of gv.Image, gv.LineContours, gv.FilledContours, gv.Points. Defaults to
geoviews.Image
. See Geoviews documentation for details.
datasets (list) – Limit included products to these datasets.
kdims (list of str) – Key dimensions. See geoviews documentation for more information.
vdims (list of str, optional) – Value dimensions. See geoviews documentation for more information. If not given, defaults to the first data variable.
dynamic (bool, optional) – Load and compute data on-the-fly during visualization. Default is
False
. See https://holoviews.org/user_guide/Gridded_Datasets.html#working-with-xarray-data-types for more information. Has no effect when data to be visualized only has 2 dimensions (y/x or longitude/latitude) and doesn’t require grouping via the Holoviews
groupby
function.
Returns: geoviews object
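A minimal sketch, assuming a Scene with the named dataset already loaded (typically called as a Scene method):

>>> gv_obj = scn.to_geoviews(datasets=['IR_108'])
>>> gv_obj  # in a Jupyter notebook this renders the plot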
- satpy._scene_converters.to_hvplot(scn, datasets=None, *args, **kwargs)[source]
Convert satpy Scene to Hvplot. This method cannot be used with composites of swath data.
- Parameters:
scn (satpy.Scene) – Satpy Scene.
datasets (list) – Limit included products to these datasets.
args – Arguments coming from hvplot
kwargs – hvplot options dictionary.
- Returns:
hvplot object that contains within it the plots of the datasets list. By default it contains plots of all Scene datasets and a plot title is shown.
Example usage:
scene_list = ['ash', 'IR_108']
scn = Scene()
scn.load(scene_list)
scn = scn.resample('eurol')
plot = scn.to_hvplot(datasets=scene_list)
plot.ash + plot.IR_108
- satpy._scene_converters.to_xarray(scn, datasets=None, header_attrs=None, exclude_attrs=None, flatten_attrs=False, pretty=True, include_lonlats=True, epoch=None, include_orig_name=True, numeric_name_prefix='CHANNEL_')[source]
Merge all xr.DataArray(s) of a satpy.Scene to a CF-compliant xarray object.
If all Scene DataArrays are on the same area, it returns an xr.Dataset. If Scene DataArrays are on different areas, currently it fails, although in future we might return a DataTree object, grouped by area.
- Parameters:
scn (satpy.Scene) – Satpy Scene.
datasets (iterable, optional) – List of Satpy Scene datasets to include in the output xr.Dataset. Elements can be string name, a wavelength as a number, a DataID, or DataQuery object. If None (the default), it includes all loaded Scene datasets.
header_attrs – Global attributes of the output xr.Dataset.
epoch (str, optional) – Reference time for encoding the time coordinates (if available). Format example: “seconds since 1970-01-01 00:00:00”. If None, the default reference time is retrieved using “from satpy.cf_writer import EPOCH”.
flatten_attrs (bool, optional) – If True, flatten dict-type attributes.
exclude_attrs (list, optional) – List of xr.DataArray attribute names to be excluded.
include_lonlats (bool, optional) – If True, includes ‘latitude’ and ‘longitude’ coordinates. If the ‘area’ attribute is a SwathDefinition, it always includes latitude and longitude coordinates.
pretty (bool, optional) – Don’t modify coordinate names, if possible. Makes the file prettier, but possibly less consistent.
include_orig_name (bool, optional) – Include the original dataset name as a variable attribute in the xr.Dataset.
numeric_name_prefix (str, optional) – Prefix to add to each variable with name starting with a digit. Use ‘’ or None to leave this out.
- Returns:
A CF-compliant xr.Dataset
- Return type:
xr.Dataset
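A minimal sketch, assuming all loaded datasets share the same area (dataset names, attributes, and the output filename are only illustrative):

>>> ds = scn.to_xarray(datasets=['C01', 'C02'], header_attrs={'title': 'My Scene'})
>>> ds.to_netcdf('my_scene.nc')  # optionally write the result with plain xarray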
satpy.aux_download module
Functions and utilities for downloading ancillary data.
- class satpy.aux_download.DataDownloadMixin[source]
Bases:
object
Mixin class for Satpy components to download files.
This class simplifies the logic needed to download and cache data files needed for operations in a Satpy component (readers, writers, etc). It does this in a two-step process where files that might be downloaded are “registered” and then “retrieved” when they need to be used.
To use this class include it as one of the subclasses of your Satpy component. Then in the
__init__
method, call the
register_data_files
function during initialization.

Note
This class is already included in the
FileYAMLReader
and
Writer
base classes. There is no need to define a custom class.

The below code is shown as an example:
from satpy.readers.yaml_reader import AbstractYAMLReader
from satpy.aux_download import DataDownloadMixin

class MyReader(AbstractYAMLReader, DataDownloadMixin):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.register_data_files()
This class expects data files to be configured in either a
self.info['data_files']
(standard for readers/writers) or
self.config['data_files']
list. The
data_files
item itself is a list of dictionaries. This information can also be passed directly to
register_data_files
for more complex cases. In YAML, for a reader, this might look like this:

reader:
  name: abi_l1b
  short_name: ABI L1b
  long_name: GOES-R ABI Level 1b
  ... other metadata ...
  data_files:
    - url: "https://example.com/my_data_file.dat"
    - url: "https://raw.githubusercontent.com/pytroll/satpy/main/README.rst"
      known_hash: "sha256:5891286b63e7745de08c4b0ac204ad44cfdb9ab770309debaba90308305fa759"
    - url: "https://raw.githubusercontent.com/pytroll/satpy/main/RELEASING.md"
      filename: "satpy_releasing.md"
In this example we register two files that might be downloaded. If
known_hash
is not provided or None (null in YAML) then the data file will not be checked for validity when downloaded. See
register_file()
for more information. You can optionally specify
filename
to define the in-cache name when this file is downloaded. This can be useful in cases when the filename cannot be easily determined from the URL.

When the file is needed, you can retrieve the local path by calling
~satpy.aux_download.retrieve(cache_key)
with the “cache key” generated during registration. These keys will be in the format:
<component_type>/<filename>
. For a reader this would be
readers/satpy_releasing.md
.

This Mixin is not the only way to register and download files for a Satpy component, but is the most generic and flexible. Feel free to use the
register_file()
and
retrieve()
functions directly. However,
find_registerable_files()
must also be updated to support your component (if files are not registered during initialization).
- DATA_FILE_COMPONENTS = {'composit': 'composites', 'corr': 'modifiers', 'modifi': 'modifiers', 'reader': 'readers', 'writer': 'writers'}
- property _data_file_component_type
- register_data_files(data_files=None)[source]
Register a series of files that may be downloaded later.
See
DataDownloadMixin
for more information on the assumptions and structure of the data file configuration dictionary.
- satpy.aux_download._find_registerable_files_compositors(sensors=None)[source]
Load all compositor configs so that files are registered.
Compositor objects should register files when they are initialized.
- satpy.aux_download._find_registerable_files_readers(readers=None)[source]
Load all readers so that files are registered.
- satpy.aux_download._find_registerable_files_writers(writers=None)[source]
Load all writers so that files are registered.
- satpy.aux_download._should_download(cache_key)[source]
Check if we’re running tests and can download this file.
- satpy.aux_download.find_registerable_files(readers=None, writers=None, composite_sensors=None)[source]
Load all Satpy components so they can be downloaded.
- Parameters:
readers (list or None) – Limit searching to these readers. If not specified or
None
then all readers are searched. If an empty list then no readers are searched.
writers (list or None) – Limit searching to these writers. If not specified or
None
then all writers are searched. If an empty list then no writers are searched.
composite_sensors (list or None) – Limit searching to composite configuration files for these sensors. If
None
then all sensor configs will be searched. If an empty list then no composites will be searched.
- satpy.aux_download.register_file(url, filename, component_type=None, known_hash=None)[source]
Register file for future retrieval.
This function only prepares Satpy to be able to download and cache the provided file. It will not download the file. See
satpy.aux_download.retrieve()
for more information.- Parameters:
url (str) – URL where remote file can be downloaded.
filename (str) – Filename used to identify and store the downloaded file as.
component_type (str or None) – Name of the type of Satpy component that will use this file. Typically “readers”, “composites”, “writers”, or “enhancements” for consistency. This will be prepended to the filename when storing the data in the cache.
known_hash (str) – Hash used to verify the file is downloaded correctly. See https://www.fatiando.org/pooch/v1.3.0/beginner.html#hashes for more information. If not provided then the file is not checked.
- Returns:
Cache key that can be used to retrieve the file later. The cache key consists of the
component_type
and provided
filename
. This should be passed to
satpy.aux_download.retrieve()
when the file will be used.
- satpy.aux_download.retrieve(cache_key, pooch_kwargs=None)[source]
Download and cache the file associated with the provided
cache_key
.

Cache location is controlled by the config
data_dir
key. See Data Directory for more information.
- Parameters:
cache_key (str) – Cache key returned by
register_file()
.
pooch_kwargs (dict or None) – Extra keyword arguments to pass to
pooch.Pooch.fetch()
.
- Returns:
Local path of the cached file.
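For example, a sketch combining register_file() and retrieve() (the URL and filename reuse the illustrative values from the example above):

>>> from satpy.aux_download import register_file, retrieve
>>> cache_key = register_file(
...     'https://example.com/my_data_file.dat', 'my_data_file.dat',
...     component_type='readers')
>>> local_path = retrieve(cache_key)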
- satpy.aux_download.retrieve_all(readers=None, writers=None, composite_sensors=None, pooch_kwargs=None)[source]
Find cache-able data files for Satpy and download them.
The typical use case for this function is to download all ancillary files before going to an environment/system that does not have internet access.
- Parameters:
readers (list or None) – Limit searching to these readers. If not specified or
None
then all readers are searched. If an empty list then no readers are searched.
writers (list or None) – Limit searching to these writers. If not specified or
None
then all writers are searched. If an empty list then no writers are searched.
composite_sensors (list or None) – Limit searching to composite configuration files for these sensors. If
None
then all sensor configs will be searched. If an empty list then no composites will be searched.
pooch_kwargs (dict) – Additional keyword arguments to pass to pooch
fetch
.
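A minimal sketch of pre-downloading files before moving to a system without internet access (the reader and sensor names are only illustrative):

>>> from satpy.aux_download import retrieve_all
>>> retrieve_all(readers=['abi_l1b'], writers=[], composite_sensors=['abi'])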
satpy.conftest module
Pytest configuration and setup functions.
satpy.dependency_tree module
Implementation of a dependency tree.
- class satpy.dependency_tree.DependencyTree(readers, compositors=None, modifiers=None, available_only=False)[source]
Bases:
Tree
Structure to discover and store Dataset dependencies.
Used primarily by the Scene object to organize dependency finding. Dependencies are stored using a series of Node objects, organized in the Tree structure that this class subclasses.
Collect Dataset generating information.
Collect the objects that generate and have information about Datasets including objects that may depend on certain Datasets being generated. This includes readers, compositors, and modifiers.
Composites and modifiers are defined per-sensor. If multiple sensors are available, compositors and modifiers are searched for in sensor alphabetical order.
- Parameters:
readers (dict) – Reader name -> Reader Object
compositors (dict) – Sensor name -> Composite ID -> Composite Object. Empty dictionary by default.
modifiers (dict) – Sensor name -> Modifier name -> (Modifier Class, modifier options). Empty dictionary by default.
available_only (bool) – Whether only reader’s available/loadable datasets should be used when searching for dependencies (True) or use all known/configured datasets regardless of whether the necessary files were provided to the reader (False). Note that when
False
loadable variations of a dataset will have priority over other known variations. Default is
False
.
- _create_optional_subtrees(parent, prereqs, query=None)[source]
Determine optional prerequisite Nodes for a composite.
- Parameters:
parent (Node) – Compositor node to add these prerequisites under
prereqs (sequence) – Strings (names), floats (wavelengths), or DataQuerys to analyze.
- _create_prerequisite_subtrees(parent, prereqs, query=None)[source]
Determine prerequisite Nodes for a composite.
- Parameters:
parent (Node) – Compositor node to add these prerequisites under
prereqs (sequence) – Strings (names), floats (wavelengths), DataQuerys or Nodes to analyze.
- _create_required_subtrees(parent, prereqs, query=None)[source]
Determine required prerequisite Nodes for a composite.
- Parameters:
parent (Node) – Compositor node to add these prerequisites under
prereqs (sequence) – Strings (names), floats (wavelengths), DataQuerys or Nodes to analyze.
- _get_unique_matching_id(matching_ids, dataset_key, query)[source]
Get unique matching id from matching_ids, for a given dataset_key and some optional query.
- _promote_query_to_modified_dataid(query, dep_key)[source]
Promote a query to an id based on the dataset it will modify (dep).
Typical use case is requesting a modified dataset (query). This modified dataset most likely depends on a less-modified dataset (dep_key). The less-modified dataset must come from a reader (at least for now) or will eventually depend on a reader dataset. The original request key may be limited like (wavelength=0.67, modifiers=(‘a’, ‘b’)) while the reader-based key should have all of its properties specified. This method updates the original request key so it is fully specified and should reduce the chance of Nodes not being unique.
- copy()[source]
Copy this node tree.
Note all references to readers are removed. This is meant to avoid tree copies accessing readers that would return incompatible (Area) data. Theoretically it should be possible for tree copies to request compositor or modifier information as long as they don’t depend on any datasets not already existing in the dependency tree.
- class satpy.dependency_tree.Tree[source]
Bases:
object
A tree implementation.
Set up the tree.
- empty_node = <Node ('__EMPTY_LEAF_SENTINEL__')>
- leaves(limit_nodes_to: Iterable[DataID] | None = None, unique: bool = True) list[Node] [source]
Get the leaves of the tree starting at the root.
- Parameters:
limit_nodes_to – Limit leaves to Nodes with the names (DataIDs) specified.
unique – Only include individual leaf nodes once.
- Returns:
list of leaf nodes
- trunk(limit_nodes_to: Iterable[DataID] | None = None, unique: bool = True, limit_children_to: Container[DataID] | None = None) list[Node] [source]
Get the trunk nodes of the tree starting at this root.
- Parameters:
limit_nodes_to – Limit searching to trunk nodes with the names (DataIDs) specified and the children of these nodes.
unique – Only include individual trunk nodes once
limit_children_to – Limit searching to the children with the specified names. These child nodes will be included in the result, but not their children.
- Returns:
list of trunk nodes
- class satpy.dependency_tree._DataIDContainer[source]
Bases:
dict
Special dictionary object that can handle dict operations based on dataset name, wavelength, or DataID.
Note: Internal dictionary keys are DataID objects.
satpy.node module
Nodes to build trees.
- class satpy.node.CompositorNode(compositor)[source]
Bases:
Node
Implementation of a compositor-specific node.
Set up the node.
- property compositor
Get the compositor.
- property optional_nodes
Get the optional nodes.
- property required_nodes
Get the required nodes.
- exception satpy.node.MissingDependencies(missing_dependencies, *args, **kwargs)[source]
Bases:
RuntimeError
Exception when dependencies are missing.
Set up the exception.
- class satpy.node.Node(name, data=None)[source]
Bases:
object
A node object.
Init the node object.
- property is_leaf
Check if the node is a leaf.
satpy.plugin_base module
Classes and utilities for defining generic “plugin” components.
- class satpy.plugin_base.Plugin(default_config_filename=None, config_files=None, **kwargs)[source]
Bases:
object
Base plugin class for all dynamically loaded and configured objects.
Load configuration files related to this plugin.
This initializes a self.config dictionary that can be used to customize the subclass.
- Parameters:
default_config_filename (str) – Configuration filename to use if no other files have been specified with config_files.
config_files (list or str) – Configuration files to load instead of those automatically found in SATPY_CONFIG_PATH and other default configuration locations.
kwargs (dict) – Unused keyword arguments.
satpy.resample module
Resampling in Satpy.
Satpy provides multiple resampling algorithms for resampling geolocated
data to uniform projected grids. The easiest way to perform resampling in
Satpy is through the Scene
object’s
resample()
method. Additional utility functions are
also available to assist in resampling data. Below is more information on
resampling with Satpy as well as links to the relevant API documentation for
available keyword arguments.
Resampling algorithms
Resampler | Description | Related
---|---|---
nearest | Nearest Neighbor |
ewa | Elliptical Weighted Averaging |
ewa_legacy | Elliptical Weighted Averaging (Legacy) |
native | Native |
bilinear | Bilinear |
bucket_avg | Average Bucket Resampling |
bucket_sum | Sum Bucket Resampling |
bucket_count | Count Bucket Resampling |
bucket_fraction | Fraction Bucket Resampling |
gradient_search | Gradient Search Resampling |
The resampling algorithm used can be specified with the resampler
keyword
argument and defaults to nearest
:
>>> scn = Scene(...)
>>> euro_scn = scn.resample('euro4', resampler='nearest')
Warning
Some resampling algorithms expect certain forms of data. For example, the EWA resampling expects polar-orbiting swath data and prefers if the data can be broken in to “scan lines”. See the API documentation for a specific algorithm for more information.
Resampling for comparison and composites
While all the resamplers can be used to put datasets of different resolutions on to a common area, the ‘native’ resampler is designed to match datasets to one resolution in the dataset’s original projection. This is extremely useful when generating composites between bands of different resolutions.
>>> new_scn = scn.resample(resampler='native')
By default this resamples to the
highest resolution area
(smallest footprint per
pixel) shared between the loaded datasets. You can easily specify the lowest
resolution area:
>>> new_scn = scn.resample(scn.coarsest_area(), resampler='native')
Providing an area that is neither the minimum nor the maximum resolution area may work, but behavior is currently undefined.
Caching for geostationary data
Satpy will do its best to reuse calculations performed to resample datasets,
but it can only do this for the current processing and will lose this
information when the process/script ends. Some resampling algorithms, like
nearest
and bilinear
, can benefit from caching intermediate data on disk in the directory
specified by cache_dir and using it next time. This is most beneficial with
geostationary satellite data where the locations of the source data and the
target pixels don’t change over time.
>>> new_scn = scn.resample('euro4', cache_dir='/path/to/cache_dir')
See the documentation for specific algorithms to see availability and limitations of caching for that algorithm.
Create custom area definition
See pyresample.geometry.AreaDefinition
for information on creating
areas that can be passed to the resample method:
>>> from pyresample.geometry import AreaDefinition
>>> my_area = AreaDefinition(...)
>>> local_scene = scn.resample(my_area)
Create dynamic area definition
See pyresample.geometry.DynamicAreaDefinition
for more information.
Examples coming soon…
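In the meantime, a minimal sketch (the area name, projection, and resolution are chosen purely for illustration):

>>> from pyresample.geometry import DynamicAreaDefinition
>>> my_dyn_area = DynamicAreaDefinition('my_dyn_area', 'A dynamic area',
...                                     {'proj': 'eqc'}, resolution=1000)
>>> local_scene = scn.resample(my_dyn_area)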
Store area definitions
Area definitions can be saved to a custom YAML file (see pyresample’s writing to disk) and loaded using pyresample’s utility methods (pyresample’s loading from disk):
>>> from pyresample import load_area
>>> my_area = load_area('my_areas.yaml', 'my_area')
Or using satpy.resample.get_area_def()
, which will search through all
areas.yaml
files in your SATPY_CONFIG_PATH
:
>>> from satpy.resample import get_area_def
>>> area_eurol = get_area_def("eurol")
For examples of area definitions, see the file etc/areas.yaml
that is
included with Satpy and where all the area definitions shipped with Satpy are
defined.
- class satpy.resample.BilinearResampler(source_geo_def, target_geo_def)[source]
Bases:
BaseResampler
Resample using bilinear interpolation.
This resampler implements on-disk caching when the cache_dir argument is provided to the resample method. This should provide significant performance improvements on consecutive resampling of geostationary data.
- Parameters:
cache_dir (str) – Long term storage directory for intermediate results.
radius_of_influence (float) – Search radius cut off distance in meters
epsilon (float) – Allowed uncertainty in meters. Increasing uncertainty reduces execution time.
reduce_data (bool) – Reduce the input data to (roughly) match the target area.
Init BilinearResampler.
- compute(data, fill_value=None, **kwargs)[source]
Resample the given data using bilinear interpolation.
- class satpy.resample.BucketAvg(source_geo_def, target_geo_def)[source]
Bases:
BucketResamplerBase
Class for averaging bucket resampling.
Bucket resampling calculates the average of all the values that are closest to each bin and inside the target area.
- Parameters:
fill_value (float (default: np.nan)) – Fill value to mark missing/invalid values in the input data, as well as in the binned and averaged output data.
skipna (boolean (default: True)) – If True, skips missing values (as marked by NaN or fill_value) for the average calculation (similarly to Numpy’s nanmean). Buckets containing only missing values are set to fill_value. If False, sets the bucket to fill_value if one or more missing values are present in the bucket (similarly to Numpy’s mean). In both cases, empty buckets are set to fill_value.
Initialize bucket resampler.
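A sketch of selecting this resampler through the Scene’s resample method (the area name and keyword values are only illustrative):

>>> import numpy as np
>>> new_scn = scn.resample('euro4', resampler='bucket_avg',
...                        fill_value=np.nan, skipna=True)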
- class satpy.resample.BucketCount(source_geo_def, target_geo_def)[source]
Bases:
BucketResamplerBase
Class for bucket resampling which implements hit-counting.
This resampler calculates the number of occurrences of the input data closest to each bin and inside the target area.
Initialize bucket resampler.
- class satpy.resample.BucketFraction(source_geo_def, target_geo_def)[source]
Bases:
BucketResamplerBase
Class for bucket resampling to compute category fractions.
This resampler calculates the fraction of occurrences of the input data per category.
Initialize bucket resampler.
- class satpy.resample.BucketResamplerBase(source_geo_def, target_geo_def)[source]
Bases:
BaseResampler
Base class for bucket resampling which implements averaging.
Initialize bucket resampler.
- resample(data, **kwargs)[source]
Resample data by calling precompute and compute methods.
- Parameters:
data (xarray.DataArray) – Data to be resampled
Returns (xarray.DataArray): Data resampled to the target area
- class satpy.resample.BucketSum(source_geo_def, target_geo_def)[source]
Bases:
BucketResamplerBase
Class for bucket resampling which implements accumulation (sum).
This resampler calculates the cumulative sum of all the values that are closest to each bin and inside the target area.
- Parameters:
fill_value (float (default: np.nan)) – Fill value for missing data
skipna (boolean (default: True)) – If True, skips NaN values for the sum calculation (similarly to Numpy’s nansum). Buckets containing only NaN are set to zero. If False, sets the bucket to NaN if one or more NaN values are present in the bucket (similarly to Numpy’s sum). In both cases, empty buckets are set to 0.
Initialize bucket resampler.
- class satpy.resample.KDTreeResampler(source_geo_def, target_geo_def)[source]
Bases:
BaseResampler
Resample using a KDTree-based nearest neighbor algorithm.
This resampler implements on-disk caching when the cache_dir argument is provided to the resample method. This should provide significant performance improvements on consecutive resampling of geostationary data. It is not recommended to provide cache_dir when the mask keyword argument is provided to precompute which occurs by default for SwathDefinition source areas.
- Parameters:
cache_dir (str) – Long term storage directory for intermediate results.
mask (bool) – Force resampled data’s invalid pixel mask to be used when searching for nearest neighbor pixels. By default this is True for SwathDefinition source areas and False for all other area definition types.
radius_of_influence (float) – Search radius cut off distance in meters
epsilon (float) – Allowed uncertainty in meters. Increasing uncertainty reduces execution time.
Init KDTreeResampler.
- compute(data, weight_funcs=None, fill_value=nan, with_uncert=False, **kwargs)[source]
Resample data.
- load_neighbour_info(cache_dir, mask=None, **kwargs)[source]
Read index arrays from either the in-memory or disk cache.
- class satpy.resample.NativeResampler(source_geo_def: SwathDefinition | AreaDefinition, target_geo_def: CoordinateDefinition | AreaDefinition)[source]
Bases:
BaseResampler
Expand or reduce input datasets to be the same shape.
If data is higher resolution (more pixels) than the destination area then data is averaged to match the destination resolution.
If data is lower resolution (fewer pixels) than the destination area then data is repeated to match the destination resolution.
This resampler does not perform any caching or masking due to the simplicity of the operations.
Initialize resampler with geolocation information.
- Parameters:
source_geo_def – Geolocation definition for the data to be resampled
target_geo_def – Geolocation definition for the area to resample data to.
- satpy.resample._move_existing_caches(cache_dir, filename)[source]
Move existing cache files out of the way.
- satpy.resample._replicate(d_arr, repeats)[source]
Repeat data pixels by the per-axis factors specified.
- satpy.resample.add_crs_xy_coords(data_arr, area)[source]
Add
pyproj.crs.CRS
and x/y or lons/lats to coordinates.For SwathDefinition or GridDefinition areas this will add a crs coordinate and coordinates for the 2D arrays of lons and lats.
For AreaDefinition areas this will add a crs coordinate and the 1-dimensional x and y coordinate variables.
- Parameters:
data_arr (xarray.DataArray) – DataArray to add the ‘crs’ coordinate.
area (pyresample.geometry.AreaDefinition) – Area to get CRS information from.
- satpy.resample.add_xy_coords(data_arr, area, crs=None)[source]
Assign x/y coordinates to DataArray from provided area.
If ‘x’ and ‘y’ coordinates already exist then they will not be added.
- Parameters:
data_arr (xarray.DataArray) – data object to add x/y coordinates to
area (pyresample.geometry.AreaDefinition) – area providing the coordinate data.
crs (pyproj.crs.CRS or None) – CRS providing additional information about the area’s coordinate reference system if available. Requires pyproj 2.0+.
Returns (xarray.DataArray): Updated DataArray object
- satpy.resample.get_area_def(area_name)[source]
Get the definition of area_name from file.
The file to use is to be placed in the $SATPY_CONFIG_PATH directory, and its name is defined in Satpy’s configuration file.
- satpy.resample.get_area_file()[source]
Find area file(s) to use.
The files are to be named areas.yaml or areas.def.
- satpy.resample.get_fill_value(dataset)[source]
Get the fill value of the dataset, defaulting to np.nan.
- satpy.resample.prepare_resampler(source_area, destination_area, resampler=None, **resample_kwargs)[source]
Instantiate and return a resampler.
- satpy.resample.resample(source_area, data, destination_area, resampler=None, **kwargs)[source]
Do the resampling.
- satpy.resample.resample_dataset(dataset, destination_area, **kwargs)[source]
Resample dataset and return the resampled version.
- Parameters:
dataset (xarray.DataArray) – Data to be resampled.
destination_area – The destination onto which to project the data, either a full blown area definition or a string corresponding to the name of the area as defined in the area file.
**kwargs – The extra parameters to pass to the resampler objects.
- Returns:
A resampled DataArray with updated
.attrs["area"]
field. The dtype of the array is preserved.
- satpy.resample.update_resampled_coords(old_data, new_data, new_area)[source]
Add coordinate information to newly resampled DataArray.
- Parameters:
old_data (xarray.DataArray) – Old data before resampling.
new_data (xarray.DataArray) – New data after resampling.
new_area (pyresample.geometry.BaseDefinition) – Area definition for the newly resampled data.
satpy.scene module
Scene object to hold satellite data.
- exception satpy.scene.DelayedGeneration[source]
Bases:
KeyError
Mark that a dataset can’t be generated without further modification.
- class satpy.scene.Scene(filenames=None, reader=None, filter_parameters=None, reader_kwargs=None)[source]
Bases:
object
The Almighty Scene Class.
Example usage:
from satpy import Scene
from glob import glob

# create readers and open files
scn = Scene(filenames=glob('/path/to/files/*'), reader='viirs_sdr')

# load datasets from input files
scn.load(['I01', 'I02'])

# resample from satellite native geolocation to builtin 'eurol' Area
new_scn = scn.resample('eurol')

# save all resampled datasets to geotiff files in the current directory
new_scn.save_datasets()
Initialize Scene with Reader and Compositor objects.
To load data filenames and preferably reader must be specified:
scn = Scene(filenames=glob('/path/to/viirs/sdr/files/*'), reader='viirs_sdr')
If
filenames
is provided without
reader
then the available readers will be searched for a Reader that can support the provided files. This can take a considerable amount of time so it is recommended that
reader
always be provided. Note without
filenames
the Scene is created with no Readers available requiring Datasets to be added manually:

scn = Scene()
scn['my_dataset'] = Dataset(my_data_array, **my_info)
Further, notice that it is also possible to load a combination of files or sets of files each requiring their specific reader. For that
filenames
needs to be a dict (see parameters list below), e.g.:

scn = Scene(filenames={'nwcsaf-pps_nc': glob('/path/to/nwc/saf/pps/files/*'),
                       'modis_l1b': glob('/path/to/modis/lvl1/files/*')})
- Parameters:
filenames (iterable or dict) – A sequence of files that will be used to load data from. A
dict
object should map reader names to a list of filenames for that reader.
reader (str or list) – The name of the reader to use for loading the data or a list of names.
filter_parameters (dict) – Specify loaded file filtering parameters. Shortcut for reader_kwargs[‘filter_parameters’].
reader_kwargs (dict) –
Keyword arguments to pass to specific reader instances. Either a single dictionary that will be passed onto to all reader instances, or a dictionary mapping reader names to sub-dictionaries to pass different arguments to different reader instances.
Keyword arguments for remote file access are also given in this dictionary. See documentation for usage examples.
- _check_known_composites(available_only=False)[source]
Create new dependency tree and check what composites we know about.
- static _compare_area_defs(compare_func: Callable, area_defs: list[AreaDefinition]) list[AreaDefinition] [source]
- _compare_areas(datasets=None, compare_func=<built-in function max>)[source]
Compare areas for the provided datasets.
- Parameters:
datasets (iterable) – Datasets whose areas will be compared. Can be either xarray.DataArray objects or identifiers to get the DataArrays from the current Scene. Defaults to all datasets. This can also be a series of area objects, typically AreaDefinitions.
compare_func (callable) – min or max or other function used to compare the dataset’s areas.
- static _compare_swath_defs(compare_func: Callable, swath_defs: list[SwathDefinition]) list[SwathDefinition] [source]
- _create_reader_instances(filenames=None, reader=None, reader_kwargs=None)[source]
Find readers and return their instances.
- _gather_all_areas(datasets)[source]
Gather all areas from datasets.
They have to be of the same type, and at least one dataset should have an area.
- _generate_composite(comp_node: CompositorNode, keepables: set)[source]
Collect all composite prereqs and create the specified composite.
- Parameters:
comp_node – Composite Node to generate a Dataset for
keepables – set to update if any datasets are needed when generation is continued later. This can happen if generation is delayed to incompatible areas which would require resampling first.
- _generate_composites_from_loaded_datasets()[source]
Compute all the composites contained in requirements.
- _generate_composites_nodes_from_loaded_datasets(compositor_nodes)[source]
Read (generate) composites.
- _get_prereq_datasets(comp_id, prereq_nodes, keepables, skip=False)[source]
Get a composite’s prerequisites, generating them if needed.
- Parameters:
comp_id (DataID) – DataID for the composite whose prerequisites are being collected.
prereq_nodes (sequence of Nodes) – Prerequisites to collect
keepables (set) – set to update if any prerequisites can’t be loaded at this time (see _generate_composite).
skip (bool) – If True, consider prerequisites as optional and only log when they are missing. If False, prerequisites are considered required and will raise an exception and log a warning if they can’t be collected. Defaults to False.
- Raises:
KeyError – If required (skip=False) prerequisite can’t be collected.
- static _get_writer_by_ext(extension)[source]
Find the writer matching the
extension
.

Defaults to “simple_image”.
Example Mapping:
geotiff: .tif, .tiff
cf: .nc
mitiff: .mitiff
simple_image: .png, .jpeg, .jpg, …
- _read_dataset_nodes_from_storage(reader_nodes, **kwargs)[source]
Read the given dataset nodes from storage.
- _read_datasets_from_storage(**kwargs)[source]
Load datasets from the necessary reader.
- Parameters:
**kwargs – Keyword arguments to pass to the reader’s load method.
- Returns:
DatasetDict of loaded datasets
- _reduce_data(dataset, source_area, destination_area, reduce_data, reductions, resample_kwargs)[source]
- _resampled_scene(new_scn, destination_area, reduce_data=True, **resample_kwargs)[source]
Resample datasets to the destination area.
If data reduction is enabled, some local caching is performed in order to avoid recomputation of area intersections.
- static _slice_area_from_bbox(src_area, dst_area, ll_bbox=None, xy_bbox=None)[source]
Slice the provided area using the bounds provided.
- _slice_datasets(dataset_ids, slice_key, new_area, area_only=True)[source]
Slice scene in-place for the datasets specified.
- aggregate(dataset_ids=None, boundary='trim', side='left', func='mean', **dim_kwargs)[source]
Create an aggregated version of the Scene.
- Parameters:
dataset_ids (iterable) – DataIDs to include in the returned Scene. Defaults to all datasets.
func (string, callable) – Function to apply on each aggregation window. One of ‘mean’, ‘sum’, ‘min’, ‘max’, ‘median’, ‘argmin’, ‘argmax’, ‘prod’, ‘std’, ‘var’ strings or a custom function. ‘mean’ is the default.
boundary – See
xarray.DataArray.coarsen()
, ‘trim’ by default.
side – See
xarray.DataArray.coarsen()
, ‘left’ by default.
dim_kwargs – the size of the windows to aggregate.
- Returns:
A new aggregated scene
See also
xarray.DataArray.coarsen
Example
scn.aggregate(func='min', x=2, y=2) will apply the min function across a window of size 2 pixels.
- all_dataset_ids(reader_name=None, composites=False)[source]
Get IDs of all datasets from loaded readers or reader_name if specified.
Excludes composites unless
composites=True
is passed.
- Parameters:
reader_name (str, optional) – Name of the reader whose datasets to list. Defaults to all loaded readers.
composites (bool) – If True, include composite IDs in the result.
Returns: list of all dataset IDs
- all_dataset_names(reader_name=None, composites=False)[source]
Get all known dataset names configured for the loaded readers.
Note that some readers dynamically determine what datasets are known by reading the contents of the files they are provided. This means that the list of datasets returned by this method may change depending on what files are provided even if a product/dataset is a “standard” product for a particular reader.
Excludes composites unless
composites=True
is passed.
- Parameters:
reader_name (str, optional) – Name of the reader whose datasets to list. Defaults to all loaded readers.
composites (bool) – If True, include composite names in the result.
Returns: list of all dataset names
- property all_same_area
All contained data arrays are on the same area.
- property all_same_proj
All contained data array are in the same projection.
- available_composite_ids()[source]
Get IDs of composites that can be generated from the available datasets.
- available_dataset_ids(reader_name=None, composites=False)[source]
Get DataIDs of loadable datasets.
This can be for all readers loaded by this Scene or just for
reader_name
if specified.

Available dataset names are determined by what each individual reader can load. This is normally determined by what files are needed to load a dataset and what files have been provided to the scene/reader. Some readers dynamically determine what is available based on the contents of the files provided.
By default, only returns non-composite dataset IDs. To include composite dataset IDs, pass
composites=True
.
- Parameters:
reader_name (str, optional) – Name of the reader whose datasets to list. Defaults to all loaded readers.
composites (bool) – If True, include composite IDs in the result.
Returns: list of available dataset IDs
- available_dataset_names(reader_name=None, composites=False)[source]
Get the list of the names of the available datasets.
By default, this only shows names of datasets directly defined in (one of the) readers. Names of composites are not returned unless the argument
composites=True
is passed.
- Parameters:
reader_name (str, optional) – Name of the reader whose datasets to list. Defaults to all loaded readers.
composites (bool) – If True, include composite names in the result.
Returns: list of available dataset names
- chunk(**kwargs)[source]
Call chunk on all Scene data arrays.
See
xarray.DataArray.chunk()
for more details.
- coarsest_area(datasets=None)[source]
Get lowest resolution area for the provided datasets.
- Parameters:
datasets (iterable) – Datasets whose areas will be compared. Can be either xarray.DataArray objects or identifiers to get the DataArrays from the current Scene. Defaults to all datasets.
- compute(**kwargs)[source]
Call compute on all Scene data arrays.
See
xarray.DataArray.compute()
for more details. Note that this will convert the contents of the DataArray to numpy arrays which may not work with all parts of Satpy which may expect dask arrays.
- crop(area=None, ll_bbox=None, xy_bbox=None, dataset_ids=None)[source]
Crop Scene to a specific Area boundary or bounding box.
- Parameters:
area (AreaDefinition) – Area to crop the current Scene to
ll_bbox (tuple, list) – 4-element tuple where values are in lon/lat degrees. Elements are
(xmin, ymin, xmax, ymax)
where X is longitude and Y is latitude.
xy_bbox (tuple, list) – Same as ll_bbox but elements are in projection units.
dataset_ids (iterable) – DataIDs to include in the returned Scene. Defaults to all datasets.
This method will attempt to intelligently slice the data to preserve relationships between datasets. For example, if we are cropping two DataArrays of 500m and 1000m pixel resolution then this method will assume that exactly 4 pixels of the 500m array cover the same geographic area as a single 1000m pixel. It handles these cases based on the shapes of the input arrays, adjusting slicing indexes accordingly. This method will have trouble handling cases where data arrays seem related but don’t cover the same geographic area or if the coarsest resolution data is not related to the other arrays which are related.
It can be useful to follow cropping with a call to the native resampler to resolve all datasets to the same resolution and compute any composites that could not be generated previously:
>>> cropped_scn = scn.crop(ll_bbox=(-105., 40., -95., 50.))
>>> remapped_scn = cropped_scn.resample(resampler='native')
Note
The resample method automatically crops input data before resampling to save time/memory.
- property end_time
Return the end time of the file.
If no data is currently contained in the Scene then loaded readers will be consulted. If no readers are loaded then the Scene.start_time is returned.
- finest_area(datasets=None)[source]
Get highest resolution area for the provided datasets.
- Parameters:
datasets (iterable) – Datasets whose areas will be compared. Can be either xarray.DataArray objects or identifiers to get the DataArrays from the current Scene. Defaults to all datasets.
- generate_possible_composites(unload)[source]
See which composites can be generated and generate them.
- Parameters:
unload (bool) – Whether the dependencies of the composites should be unloaded after successful generation.
- iter_by_area()[source]
Generate datasets grouped by Area.
- Returns:
generator of (area_obj, list of dataset objects)
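A minimal sketch of consuming the generator, assuming a Scene with datasets already loaded:
>>> for area_obj, datasets in scn.iter_by_area():
...     print(area_obj, datasets)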
- load(wishlist, calibration='*', resolution='*', polarization='*', level='*', modifiers='*', generate=True, unload=True, **kwargs)[source]
Read and generate requested datasets.
When the wishlist contains DataQuery objects they can either be fully-specified DataQuery objects with every parameter specified, or they can omit certain parameters and the “best” value will be chosen. For example, if a dataset is available in multiple resolutions and no resolution is specified in the wishlist’s DataQuery then the highest (the smallest number) resolution will be chosen.
Loaded DataArray objects are created and stored in the Scene object.
- Parameters:
wishlist (iterable) – List of names (str), wavelengths (float), DataQuery objects or DataID of the requested datasets to load. See available_dataset_ids() for what datasets are available.
calibration (list | str) – Calibration levels to limit available datasets. This is a shortcut to avoid listing each DataQuery/DataID in wishlist.
resolution (list | float) – Resolution to limit available datasets. This is a shortcut similar to calibration.
polarization (list | str) – Polarization (‘V’, ‘H’) to limit available datasets. This is a shortcut similar to calibration.
modifiers (tuple | str) – Modifiers that should be applied to the loaded datasets. This is a shortcut similar to calibration, but only represents a single set of modifiers as a tuple. For example, specifying modifiers=('sunz_corrected', 'rayleigh_corrected') will attempt to apply both of these modifiers to all loaded datasets in the specified order (‘sunz_corrected’ first).
level (list | str) – Pressure level to limit available datasets. Pressure should be in hPa or mb. If an altitude is used it should be specified in inverse meters (1/m). The units of this parameter ultimately depend on the reader.
generate (bool) – Generate composites from the loaded datasets (default: True)
unload (bool) – Unload datasets that were required to generate the requested datasets (composite dependencies) but are no longer needed.
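An illustrative sketch of these shortcuts (the names and wavelength are ABI-style examples, not a prescription):
>>> scn.load(['true_color'])           # composite by name
>>> scn.load([10.35])                  # band by wavelength (micrometers)
>>> scn.load(['C02'], resolution=500)  # limit to the 500 m version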
- max_area(datasets=None)[source]
Get highest resolution area for the provided datasets. Deprecated.
Deprecated. Use finest_area() instead.
- Parameters:
datasets (iterable) – Datasets whose areas will be compared. Can be either xarray.DataArray objects or identifiers to get the DataArrays from the current Scene. Defaults to all datasets.
- min_area(datasets=None)[source]
Get lowest resolution area for the provided datasets. Deprecated.
Deprecated. Use coarsest_area() instead.
- Parameters:
datasets (iterable) – Datasets whose areas will be compared. Can be either xarray.DataArray objects or identifiers to get the DataArrays from the current Scene. Defaults to all datasets.
- property missing_datasets
Set of DataIDs that have not been successfully loaded.
- persist(**kwargs)[source]
Call persist on all Scene data arrays.
See xarray.DataArray.persist() for more details.
- resample(destination=None, datasets=None, generate=True, unload=True, resampler=None, reduce_data=True, **resample_kwargs)[source]
Resample datasets and return a new scene.
- Parameters:
destination (AreaDefinition, GridDefinition) – area definition to resample to. If not specified then the area returned by Scene.finest_area() will be used.
datasets (list) – Limit datasets to resample to these specified data arrays. By default all currently loaded datasets are resampled.
generate (bool) – Generate any requested composites that could not be generated previously due to incompatible areas (default: True).
unload (bool) – Remove any datasets no longer needed after requested composites have been generated (default: True).
resampler (str) – Name of resampling method to use. By default, this is a nearest neighbor KDTree-based resampling (‘nearest’). Other possible values include ‘native’, ‘ewa’, etc. See the resample documentation for more information.
reduce_data (bool) – Reduce data by matching the input and output areas and slicing the data arrays (default: True)
resample_kwargs – Remaining keyword arguments to pass to individual resampler classes. See the individual resampler class documentation here for available arguments.
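For example, a minimal sketch of resampling everything currently loaded ('eurol' is an area definition name, also used in the to_hvplot example later on this page):
>>> new_scn = scn.resample('eurol', resampler='nearest')
>>> native_scn = scn.resample(resampler='native')  # match datasets to the Scene's finest area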
- save_dataset(dataset_id, filename=None, writer=None, overlay=None, decorate=None, compute=True, **kwargs)[source]
Save the dataset_id to file using writer.
- Parameters:
dataset_id (str or Number or DataID or DataQuery) – Identifier for the dataset to save to disk.
filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
writer (str) – Name of writer to use when writing data to disk. Defaults to "geotiff". If not provided, but filename is provided, then the filename’s extension is used to determine the best writer to use.
overlay (dict) – See satpy.writers.add_overlay(). Only valid for “image” writers like geotiff or simple_image.
decorate (dict) – See satpy.writers.add_decorate(). Only valid for “image” writers like geotiff or simple_image.
compute (bool) – If True (default), compute all of the saves to disk. If False then the return value is either a Dask Delayed object or two lists to be passed to a dask.array.store call. See return values below for more details.
kwargs – Additional writer arguments. See Writing for more information.
- Returns:
Value returned depends on compute. If compute is True then the return value is the result of computing a Dask Delayed object or running dask.array.store(). If compute is False then the returned value is either a Dask Delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to dask.array.store(). If target is provided then the caller is responsible for calling target.close() if the target has this method.
- save_datasets(writer=None, filename=None, datasets=None, compute=True, **kwargs)[source]
Save requested datasets present in a scene to disk using writer.
Note that dependency datasets (those loaded solely to create another and not requested explicitly) that may be contained in this Scene will not be saved by default. The default datasets are those explicitly requested through .load and currently present in the Scene. Specify dependency datasets using the datasets keyword argument.
- Parameters:
writer (str) – Name of writer to use when writing data to disk. Defaults to "geotiff". If not provided, but filename is provided, then the filename’s extension is used to determine the best writer to use.
filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
datasets (iterable) – Limit written products to these datasets. Elements can be string name, a wavelength as a number, a DataID, or DataQuery object.
compute (bool) – If True (default), compute all of the saves to disk. If False then the return value is either a Dask Delayed object or two lists to be passed to a dask.array.store call. See return values below for more details.
kwargs – Additional writer arguments. See Writing for more information.
- Returns:
Value returned depends on the compute keyword argument. If compute is True the value is the result of either a dask.array.store operation or a Dask Delayed compute; typically this is None. If compute is False then the result is either a Dask Delayed object that can be computed with delayed.compute() or a two element tuple of sources and targets to be passed to dask.array.store(). If targets is provided then it is the caller’s responsibility to close any objects that have a “close” method.
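A minimal sketch of both compute modes (paths are illustrative):
>>> scn.save_datasets(writer='geotiff', base_dir='/tmp')
>>> res = scn.save_datasets(base_dir='/tmp', compute=False)
>>> from satpy.writers import compute_writer_results
>>> compute_writer_results([res])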
- property sensor_names: set[str]
Return sensor names for the data currently contained in this Scene.
Sensor information is collected from data contained in the Scene, whether loaded from a reader, generated as a composite with load(), or added manually (e.g. scn["name"] = data_arr). Sensor information is also collected from any loaded readers. In some rare cases this may mean that the reader includes sensor information for data that isn’t actually loaded or even available.
- show(dataset_id, overlay=None)[source]
Show the dataset on screen as an image, possibly with an overlay.
- Parameters:
dataset_id (DataID, DataQuery or str) – Either a DataID, a DataQuery or a string, that refers to a data array that has been previously loaded using Scene.load.
overlay (dict, optional) – Add an overlay before showing the image. The keys/values for this dictionary are as the arguments for add_overlay(). The dictionary should contain at least the key "coast_dir", which should refer to a top-level directory containing shapefiles. See the pycoast package documentation for coastline shapefile installation instructions.
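For example (a sketch; the shapefile directory is a hypothetical local path):
>>> scn.show('true_color')
>>> scn.show('true_color', overlay={'coast_dir': '/path/to/shapefiles'})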
- slice(key)[source]
Slice Scene by dataset index.
Note
DataArrays that do not have an area attribute will not be sliced.
- property start_time
Return the start time of the contained data.
If no data is currently contained in the Scene then loaded readers will be consulted.
- to_geoviews(gvtype=None, datasets=None, kdims=None, vdims=None, dynamic=False)[source]
Convert satpy Scene to geoviews.
- Parameters:
scn (satpy.Scene) – Satpy Scene.
gvtype (gv plot type) – One of gv.Image, gv.LineContours, gv.FilledContours, gv.Points. Defaults to geoviews.Image. See the Geoviews documentation for details.
datasets (list) – Limit included products to these datasets
kdims (list of str) – Key dimensions. See geoviews documentation for more information.
vdims (list of str, optional) – Value dimensions. See geoviews documentation for more information. If not given defaults to first data variable
dynamic (bool, optional) – Load and compute data on-the-fly during visualization. Default is False. See https://holoviews.org/user_guide/Gridded_Datasets.html#working-with-xarray-data-types for more information. Has no effect when data to be visualized only has 2 dimensions (y/x or longitude/latitude) and doesn’t require grouping via the Holoviews groupby function.
Returns: geoviews object
- to_hvplot(datasets=None, *args, **kwargs)[source]
Convert satpy Scene to Hvplot. This method cannot be used with composites of swath data.
- Parameters:
scn (satpy.Scene) – Satpy Scene.
datasets (list) – Limit included products to these datasets.
args – Positional arguments passed to hvplot.
kwargs – Keyword options passed to hvplot.
- Returns:
hvplot object containing the plots of the given datasets. By default it contains plots of all Scene datasets and shows a plot title.
Example usage:
scene_list = ['ash', 'IR_108']
scn = Scene()
scn.load(scene_list)
scn = scn.resample('eurol')
plot = scn.to_hvplot(datasets=scene_list)
plot.ash + plot.IR_108
- to_xarray(datasets=None, header_attrs=None, exclude_attrs=None, flatten_attrs=False, pretty=True, include_lonlats=True, epoch=None, include_orig_name=True, numeric_name_prefix='CHANNEL_')[source]
Merge all xr.DataArray(s) of a satpy.Scene to a CF-compliant xarray object.
If all Scene DataArrays are on the same area, it returns an xr.Dataset. If Scene DataArrays are on different areas, it currently fails, although in the future we might return a DataTree object grouped by area.
- Parameters:
datasets (iterable) – List of Satpy Scene datasets to include in the output xr.Dataset. Elements can be string name, a wavelength as a number, a DataID, or DataQuery object. If None (the default), it includes all loaded Scene datasets.
header_attrs – Global attributes of the output xr.Dataset.
epoch (str) – Reference time for encoding the time coordinates (if available). Example format: “seconds since 1970-01-01 00:00:00”. If None, the default reference time is defined using “from satpy.cf.coords import EPOCH”.
flatten_attrs (bool) – If True, flatten dict-type attributes.
exclude_attrs (list) – List of xr.DataArray attribute names to be excluded.
include_lonlats (bool) – If True, include ‘latitude’ and ‘longitude’ coordinates. If the ‘area’ attribute is a SwathDefinition, latitude and longitude coordinates are always included.
pretty (bool) – Don’t modify coordinate names, if possible. Makes the file prettier, but possibly less consistent.
include_orig_name (bool) – Include the original dataset name as a variable attribute in the xr.Dataset.
numeric_name_prefix (str) – Prefix to add to each variable with a name starting with a digit. Use ‘’ or None to leave this out.
- Returns:
ds (xr.Dataset) – A CF-compliant xr.Dataset
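A minimal sketch, assuming all loaded datasets share one area ('C13' is an illustrative band name):
>>> ds = scn.to_xarray(datasets=['C13'], include_lonlats=False)
>>> ds.to_netcdf('/tmp/scene.nc')  # illustrative output path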
- to_xarray_dataset(datasets=None)[source]
Merge all xr.DataArrays of a scene into an xr.Dataset.
- Parameters:
datasets (list) – List of products to include in the xarray.Dataset
Returns: xarray.Dataset
- unload(keepables=None)[source]
Unload all unneeded datasets.
Datasets are considered unneeded if they weren’t directly requested or added to the Scene by the user or they are no longer needed to generate composites that have yet to be generated.
- Parameters:
keepables (iterable) – DataIDs to keep whether they are needed or not.
- property wishlist
Return a copy of the wishlist.
satpy.utils module
Module defining various utilities.
- exception satpy.utils.PerformanceWarning[source]
Bases: Warning
Warning raised when there is a possible performance impact.
- class satpy.utils._WarningManager[source]
Bases: object
Class to handle switching warnings on and off.
- filt = None
- satpy.utils._check_yaml_configs(configs, key)[source]
Get a diagnostic for the yaml configs.
key is the section to look for to get a name for the config at hand.
- satpy.utils._get_chunk_pixel_size()[source]
Compute the maximum chunk size from PYTROLL_CHUNK_SIZE.
- satpy.utils._get_satpos_from_platform_name(cth_dataset)[source]
Get satellite position if no orbital parameters in metadata.
Some cloud top height datasets lack orbital parameter information in metadata. Here, orbital parameters are calculated based on the platform name and start time, via Two Line Element (TLE) information.
Needs pyorbital, skyfield, and astropy to be installed.
- satpy.utils.atmospheric_path_length_correction(data, cos_zen, limit=88.0, max_sza=95.0)[source]
Perform Sun zenith angle correction.
This function uses the correction method proposed by Li and Shibata (2006): https://doi.org/10.1175/JAS3682.1
The correction is limited to limit degrees (default: 88.0 degrees). For larger zenith angles, the correction is the same as at the limit if max_sza is None. The default behavior is to gradually reduce the correction past limit degrees up to max_sza where the correction becomes 0. Both data and cos_zen should be 2D arrays of the same shape.
- satpy.utils.check_satpy(readers=None, writers=None, extras=None)[source]
Check the satpy readers and writers for correct installation.
- Returns:
bool – True if all specified features were successfully loaded.
- satpy.utils.convert_remote_files_to_fsspec(filenames, storage_options=None)[source]
Check filenames for transfer protocols, convert to FSFile objects if possible.
- satpy.utils.debug(deprecation_warnings=True)[source]
Context manager to temporarily set debugging on.
Example:
>>> with satpy.utils.debug():
...     code_here()
- Parameters:
deprecation_warnings (Optional[bool]) – Switch on deprecation warnings. Defaults to True.
- satpy.utils.debug_off()[source]
Turn debugging logging off.
This disables both debugging logging and the global visibility of deprecation warnings.
- satpy.utils.debug_on(deprecation_warnings=True)[source]
Turn debugging logging on.
Sets up a StreamHandler to sys.stderr at debug level for all loggers, such that all debug messages (and log messages with higher severity) are logged to the standard error stream.
By default, since Satpy 0.26, this also enables the global visibility of deprecation warnings. This can be suppressed by passing a false value.
- Parameters:
deprecation_warnings (Optional[bool]) – Switch on deprecation warnings. Defaults to True.
- Returns:
None
- satpy.utils.find_in_ancillary(data, dataset)[source]
Find a dataset by name in the ancillary vars of another dataset.
- Parameters:
data (xarray.DataArray) – Array for which to search the ancillary variables
dataset (str) – Name of ancillary variable to look for.
- satpy.utils.get_chunk_size_limit(dtype=<class 'float'>)[source]
Compute the chunk size limit in bytes given dtype (float by default).
It is derived first from PYTROLL_CHUNK_SIZE if defined (although deprecated), then from dask config’s array.chunk-size. It defaults to 128 MiB.
- Returns:
The recommended chunk size in bytes.
- satpy.utils.get_legacy_chunk_size()[source]
Get the legacy chunk size.
This function should only be used while waiting for code to be migrated to use satpy.utils.get_chunk_size_limit instead.
- satpy.utils.get_satpos(data_arr: DataArray, preference: str | None = None, use_tle: bool = False) -> tuple[float, float, float][source]
Get satellite position from dataset attributes.
- Parameters:
data_arr – DataArray object to access .attrs metadata from.
preference – Optional preference for one of the available types of position information. If not provided or None then the default preference is:
Longitude & Latitude: nadir, actual, nominal, projection
Altitude: actual, nominal, projection
The provided preference can be any one of these individual strings (nadir, actual, nominal, projection). If the preference is not available then the original preference list is used. A warning is issued when projection values have to be used because nothing else is available and it wasn’t provided as the preference.
use_tle – If true, try to obtain position via satellite name and TLE if it can’t be determined otherwise. This requires pyorbital, skyfield, and astropy to be installed and may need network access to obtain the TLE. Note that even if use_tle is true, the TLE will not be used if the dataset metadata contain the satellite position directly.
- Returns:
Geodetic longitude, latitude, altitude [km]
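For example (a sketch; data_arr is any loaded DataArray carrying the usual orbital metadata):
>>> from satpy.utils import get_satpos
>>> lon, lat, alt = get_satpos(data_arr, preference='nadir')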
- satpy.utils.get_storage_options_from_reader_kwargs(reader_kwargs)[source]
Read and clean storage options from reader_kwargs.
- satpy.utils.ignore_invalid_float_warnings()[source]
Ignore warnings generated for working with NaN/inf values.
Numpy and dask sometimes don’t like NaN or inf values in normal function calls. This context manager hides/ignores them inside its context.
Examples
Use around numpy operations that you expect to produce warnings:
with ignore_invalid_float_warnings():
    np.nanmean(np.nan)
- satpy.utils.ignore_pyproj_proj_warnings()[source]
Wrap operations that we know will produce a PROJ.4 precision warning.
Only to be used internally to Pyresample when we have no other choice but to use PROJ.4 strings/dicts. For example, serialization to YAML or other human-readable formats or testing the methods that produce the PROJ.4 versions of the CRS.
- satpy.utils.lonlat2xyz(lon, lat)[source]
Convert lon lat to cartesian.
For a sphere with unit radius, convert the spherical coordinates longitude and latitude to cartesian coordinates.
- Parameters:
lon (number or array of numbers) – Longitude in °.
lat (number or array of numbers) – Latitude in °.
- Returns:
(x, y, z) Cartesian coordinates [1]
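For instance, with the unit-sphere convention described above, the point at 0° longitude and 0° latitude lies on the x-axis:
>>> from satpy.utils import lonlat2xyz
>>> lonlat2xyz(0.0, 0.0)  # -> (1.0, 0.0, 0.0)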
- satpy.utils.normalize_low_res_chunks(chunks: tuple[int | Literal['auto'], ...], input_shape: tuple[int, ...], previous_chunks: tuple[int, ...], low_res_multipliers: tuple[int, ...], input_dtype: DTypeLike) -> tuple[int, ...][source]
Compute dask chunk sizes based on data resolution.
First, chunks are computed for the highest resolution version of the data. This is done by multiplying the input array shape by the low_res_multiplier and then using Dask’s utility functions and configuration to produce a chunk size to fit into a specific number of bytes. See Chunks for more information. Next, the same multiplier is used to reduce the high resolution chunk sizes to the lower resolution of the input data. The end result of reading multiple resolutions of data is that each dask chunk covers the same geographic region. This also means replicating or aggregating one resolution and then combining arrays should not require any rechunking.
- Parameters:
chunks – Requested chunk size for each dimension. This is passed directly to dask. Use "auto" for dimensions that should have chunks determined for them, -1 for dimensions that should be whole (not chunked), and 1 or any other positive integer for dimensions that have a known chunk size beforehand.
input_shape – Shape of the array to compute dask chunk size for.
previous_chunks – Any previous chunking or structure of the data. This can also be thought of as the smallest number of high (fine) resolution elements that make up a single “unit” or chunk of data. This could be a multiple or factor of the scan size for some instruments and/or could be based on the on-disk chunk size. This value ensures that chunks are aligned to the underlying data structure for best performance. On-disk chunk sizes should be multiplied by the largest low resolution multiplier if it is the same between all files (ex. 500m file has 226 chunk size, 1km file has 226 chunk size, etc.). Otherwise, the resulting low resolution chunks may not be aligned to the on-disk chunks. For example, if dask decides on a chunk size of 226 * 3 for 500m data, that becomes 226 * 3 / 2 for 1km data which is not aligned to the on-disk chunk size of 226.
low_res_multipliers – Number of high (fine) resolution pixels that fit in a single low (coarse) resolution pixel.
input_dtype – Dtype for the final unscaled array. This is usually 32-bit float (np.float32) or 64-bit float (np.float64) for non-category data. If this doesn’t represent the final data type of the data then the final size of chunks in memory will not match the user’s request via dask’s array.chunk-size configuration. Sometimes it is useful to keep this as a single dtype for all reading functionality (ex. np.float32) in order to keep all read variable chunks the same size regardless of dtype.
- Returns:
A tuple where each element is the chunk size for that axis/dimension.
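An illustrative sketch (the 226-element on-disk chunk structure echoes the example above; the shape and the assumed 4x resolution factor are not tied to any specific reader):
>>> import numpy as np
>>> from satpy.utils import normalize_low_res_chunks
>>> normalize_low_res_chunks(
...     ('auto', 'auto'),  # let dask choose both dimensions
...     (5424, 5424),      # shape of the low resolution input array
...     (226, 226),        # previous/on-disk chunk structure
...     (4, 4),            # 4 fine pixels per coarse pixel in each dimension
...     np.float32,
... )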
- satpy.utils.proj_units_to_meters(proj_str)[source]
Convert projection units from kilometers to meters.
- satpy.utils.unify_chunks(*data_arrays: DataArray) tuple[DataArray, ...] [source]
Run xarray.unify_chunks() if input dimensions are all the same size.
This is mostly used in satpy.composites.CompositeBase to safeguard against running dask.array.core.map_blocks() with arrays of different chunk sizes. Doing so can cause unexpected results or errors. However, xarray’s unify_chunks will raise an exception if dimensions of the provided DataArrays are different sizes. This is a common case for Satpy. For example, the “bands” dimension may be 1 (L), 2 (LA), 3 (RGB), or 4 (RGBA) for most compositor operations that combine other composites together.
- satpy.utils.xyz2lonlat(x, y, z, asin=False)[source]
Convert cartesian to lon lat.
For a sphere with unit radius, convert cartesian coordinates to spherical coordinates longitude and latitude.
- Parameters:
x (number or array of numbers) – x-coordinate, unitless
y (number or array of numbers) – y-coordinate, unitless
z (number or array of numbers) – z-coordinate, unitless
asin (optional, bool) – If true, use arcsin for calculations. If false, use arctan2 for calculations.
- Returns:
Longitude and latitude in °.
- Return type:
(lon, lat)
satpy.version module
Module contents
Satpy Package initializer.
FAQ
Below you’ll find frequently asked questions, performance tips, and other topics that don’t really fit into the rest of the Satpy documentation.
If you have any other questions that aren’t answered here feel free to make an issue on GitHub or talk to us on the Slack team or mailing list. See the contributing documentation for more information.
How can I speed up creation of composites that need resampling?
Satpy performs some initial image generation on the fly, but for composites
that need resampling (like the true_color
composite for GOES/ABI) the data
must be resampled to a common grid before the final image can be produced, as
the input channels are at differing spatial resolutions. In such cases, you may
see a substantial performance improvement by passing generate=False
when you
load your composite:
scn = Scene(filenames=filenames, reader='abi_l1b')
scn.load(['true_color'], generate=False)
scn_res = scn.resample(...)
By default, generate=True
which means that Satpy will create as many
composites as it can with the available data. In some cases this could mean
a lot of intermediate products (ex. rayleigh corrected data using dynamically
generated angles for each band resolution) that will then need to be
resampled.
By setting generate=False
, Satpy will only load the necessary dependencies
from the reader, but not attempt generating any composites or applying any
modifiers. In these cases this can save a lot of time and memory as only one
resolution of the input data has to be processed. Note that this option has
no effect when only loading data directly from readers (ex. IR/visible bands
directly from the files) and where no composites or modifiers are used. Also
note that in cases where most of your composite
inputs are already at the same resolution and you are only generating a limited
number of composites, generate=False
may actually hurt performance.
Why is Satpy slow on my powerful machine?
Satpy depends heavily on the dask library for its performance. However, on some systems dask’s default settings can actually hurt performance. By default dask will create a “worker” for each logical core on your system. In most systems you have twice as many logical cores (also known as threaded cores) as physical cores. Managing and communicating with all of these workers can slow down dask, especially when they aren’t all being used by most Satpy calculations. One option is to limit the number of workers by doing the following at the top of your python code:
import dask
dask.config.set(num_workers=8)
# all other Satpy imports and code
This will limit dask to using 8 workers. Typically numbers between 4 and 8 are good starting points. Number of workers can also be set from an environment variable before running the python script, so code modification isn’t necessary:
DASK_NUM_WORKERS=4 python myscript.py
Similarly, if you have many workers processing large chunks of data you may be using much more memory than you expect. If you limit the number of workers and the size of the data chunks being processed by each worker you can reduce the overall memory usage. Default chunk size can be configured in Satpy by using the following around your code:
with dask.config.set({"array.chunk-size": "32MiB"}):
# your code here
For more information about chunk sizes in Satpy, please refer to the Data Chunks section in Overview.
Note
The PYTROLL_CHUNK_SIZE variable is pending deprecation, so the above-mentioned dask configuration parameter should be used instead.
Why are multiple CPUs used even with one worker?
Many of the underlying Python libraries use math libraries like BLAS and LAPACK written in C or FORTRAN, and they are often compiled to be multithreaded. If necessary, it is possible to force the number of threads they use by setting an environment variable:
OMP_NUM_THREADS=2 python myscript.py
What is the difference between number of workers and number of threads?
The questions above concern two different stages of parallelization: Dask workers and math library threading.
The number of Dask workers affects how many separate tasks are started, effectively determining how many chunks of the data are processed at the same time. The more workers are in use, the higher the memory usage will be.
The number of threads determines how many parallel computations are run for the chunk handled by each worker. This has minimal effect on memory usage.
The optimal setup is often a mix of these two settings, for example
DASK_NUM_WORKERS=2 OMP_NUM_THREADS=4 python myscript.py
would create two workers, and each of them would process their chunk of data using 4 threads when calling the underlying math libraries.
How do I avoid memory errors?
If your environment is using many dask workers, it may be using more memory than it needs to be using. See the “Why is Satpy slow on my powerful machine?” question above for more information on changing Satpy’s memory usage.
How do I reduce GDAL output size?
Sometimes GDAL-based products, like geotiffs, can be much larger than expected.
This can be caused by GDAL’s internal memory caching conflicting with dask’s
chunking of the data arrays. Modern versions of GDAL default to using 5% of
available memory for holding on to data before compressing it and writing it
to disk. On more powerful systems (~128GB of memory) this is usually not a
problem. However, on low memory systems this may mean that GDAL is only
compressing a small amount of data before writing it to disk. This results
in poor compression and large overhead from the many small compressed areas.
One solution is to increase the chunk size used by dask but this can result
in poor performance during computation. Another solution is to increase
GDAL_CACHEMAX, an environment variable that GDAL uses. This defaults to
"5%", but can be increased:
export GDAL_CACHEMAX="15%"
For more information see GDAL’s documentation.
How do I use multi-threaded compression when writing GeoTIFFs?
The GDAL library’s GeoTIFF driver has a lot of options for changing how your
GeoTIFF is formatted and written. One of the most important ones when it comes
to writing GeoTIFFs is using multiple threads to compress your data. By
default Satpy will use DEFLATE compression which can be slower to compress
than other options out there, but faster to read. GDAL gives us the option to
control the number of threads used during compression by specifying the
num_threads option. This option defaults to 1, but it is recommended
to set this to at least the same number of dask workers you use. Do this by
adding num_threads to your save_dataset or save_datasets call:
scn.save_datasets(base_dir='/tmp', num_threads=8)
Satpy also stores our data as “tiles” instead of “stripes”, which is another
way to get more efficient compression of our GeoTIFF image. You can disable
this with tiled=False.
See the GDAL GeoTIFF documentation for more information on the creation options available including other compression choices.
| Description | Reader name | Status | fsspec support |
|---|---|---|---|
| GOES-R ABI imager Level 1b data in netcdf format | abi_l1b | Nominal | true |
| SCMI ABI L1B in netCDF4 format | abi_l1b_scmi | Beta | false |
| GOES-R ABI Level 2 products in netCDF4 format | abi_l2_nc | Beta | true |
| NOAA Level 2 ACSPO SST data in netCDF4 format | acspo | Nominal | false |
| FY-4A AGRI Level 1 HDF5 format | agri_fy4a_l1 | Beta | false |
| FY-4B AGRI Level 1 data HDF5 format | agri_fy4b_l1 | Nominal | true |
| Himawari (8 + 9) AHI Level 1 (HRIT) | ahi_hrit | Nominal | false |
| Himawari (8 + 9) AHI Level 1b (HSD) | ahi_hsd | Nominal | false |
| Himawari (8 + 9) AHI Level 1b (gridded) | ahi_l1b_gridded_bin | Nominal | false |
| Himawari-8/9 AHI Level 2 products in netCDF4 format from NOAA enterprise | ahi_l2_nc | Beta | true |
| GEO-KOMPSAT-2 AMI Level 1b | ami_l1b | Beta | true |
| GCOM-W1 AMSR2 data in HDF5 format | amsr2_l1b | Nominal | false |
| GCOM-W1 AMSR2 Level 2 (HDF5) | amsr2_l2 | Beta | false |
| GCOM-W1 AMSR2 Level 2 GAASP (NetCDF4) | amsr2_l2_gaasp | Beta | false |
| AAPP L1C AMSU-B format | amsub_l1c_aapp | Beta | false |
| METOP ASCAT Level 2 SOILMOISTURE BUFR | ascat_l2_soilmoisture_bufr | Defunct | false |
| S-NPP and JPSS-1 ATMS L1B (NetCDF4) | atms_l1b_nc | Beta | false |
| S-NPP and JPSS ATMS SDR (hdf5) | atms_sdr_hdf5 | Beta | false |
| NOAA 15 to 19, Metop A to C AVHRR data in AAPP format | avhrr_l1b_aapp | Nominal | false |
| Metop A to C AVHRR in native level 1 format | avhrr_l1b_eps | Nominal | false |
| Tiros-N, NOAA 7 to 19 AVHRR data in GAC and LAC format | avhrr_l1b_gaclac | Nominal | false |
| NOAA 15 to 19 AVHRR data in raw HRPT format | avhrr_l1b_hrpt | Alpha | false |
| EUMETSAT GAC FDR NetCDF4 | avhrr_l1c_eum_gac_fdr_nc | Defunct | false |
| CALIPSO CALIOP Level 2 Cloud Layer data (v3) in EOS-hdf4 format | caliop_l2_cloud | Alpha | false |
| The Clouds from AVHRR Extended (CLAVR-x) | clavrx | Nominal | false |
| CMSAF CLAAS-2 data for SEVIRI-derived cloud products | cmsaf-claas2_l2_nc | Beta | false |
| Electro-L N2 MSU-GS data in HRIT format | electrol_hrit | Nominal | false |
| DSCOVR EPIC L1b hdf5 | epic_l1b_h5 | Beta | false |
| MTG FCI Level-1c NetCDF | fci_l1c_nc | Beta for full-disc FDHSI and HRFI, RSS not supported yet | true |
| MTG FCI L2 data in netCDF4 format | fci_l2_nc | Alpha | false |
| Generic Images e.g. GeoTIFF | generic_image | Nominal | true |
| GEOstationary Cloud Algorithm Test-bed | geocat | Nominal | false |
| Meteosat Second Generation Geostationary Earth Radiation Budget L2 High-Resolution | gerb_l2_hr_h5 | Beta | false |
| FY-4A GHI Level 1 HDF5 format | ghi_l1 | Nominal | false |
| Sentinel-3 SLSTR SST data in netCDF4 format | ghrsst_l2 | Beta | false |
| GOES-R GLM Level 2 | glm_l2 | Beta | false |
| GMS-5 VISSR Level 1b | gms5-vissr_l1b | Alpha | true |
| GOES Imager Level 1 (HRIT) | goes-imager_hrit | Nominal | false |
| GOES Imager Level 1 (netCDF) | goes-imager_nc | Beta | false |
| GPM IMERG level 3 precipitation data in HDF5 format | gpm_imerg | Nominal | false |
| GRIB2 format | grib | Beta | false |
| Hydrology SAF products in GRIB format | hsaf_grib | Beta, only h03, h03b, h05 and h05b currently supported | false |
| Hydrology SAF products in HDF5 format | hsaf_h5 | Beta, only h10 currently supported | false |
| HY-2B Scatterometer level 2b data in HDF5 format from both EUMETSAT and NSOAS | hy2_scat_l2b_h5 | Beta | false |
| IASI Level 2 data in HDF5 format | iasi_l2 | Alpha | false |
| IASI All Sky Temperature and Humidity Profiles - Climate Data Record Release 1.1 - Metop-A and -B | iasi_l2_cdr_nc | Alpha | true |
| METOP IASI Level 2 SO2 in BUFR format | iasi_l2_so2_bufr | Beta | false |
| EPS-SG ICI L1B Radiance (NetCDF4) | ici_l1b_nc | Beta | false |
| Insat 3d IMG L1B HDF5 | insat3d_img_l1b_h5 | Beta, navigation still off | false |
| MTSAT-1R JAMI Level 1 data in JMA HRIT format | jami_hrit | Beta | false |
| LI Level-2 NetCDF Reader | li_l2_nc | Beta | false |
| AAPP MAIA VIIRS and AVHRR products in HDF5 format | maia | Nominal | false |
| Sentinel 3 MERIS NetCDF format | meris_nc_sen3 | Beta | false |
| MERSI-2 L1B data in HDF5 format | mersi2_l1b | Beta | false |
| FY-3E MERSI Low Light Level 1B | mersi_ll_l1b | Nominal | true |
| MERSI-RM L1B data in HDF5 format | mersi_rm_l1b | Beta | false |
| AAPP L1C in MHS format | mhs_l1c_aapp | Nominal | false |
| MIMIC Total Precipitable Water Product Reader in netCDF format | mimicTPW2_comp | Beta | false |
| MiRS Level 2 Precipitation and Surface Swath Product Reader in netCDF4 format | mirs | Beta | false |
| Terra and Aqua MODIS data in EOS-hdf4 level-1 format as produced by IMAPP and IPOPP or downloaded from LAADS | modis_l1b | Nominal | false |
| MODIS Level 2 (mod35) data in HDF-EOS format | modis_l2 | Beta | false |
| MODIS Level 3 (mcd43) data in HDF-EOS format | modis_l3 | Beta | false |
| Sentinel-2 A and B MSI data in SAFE format | msi_safe | Nominal | false |
| Arctica-M (N1) MSU-GS/A data in HDF5 format | msu_gsa_l1b | Beta | false |
| MTSAT-2 Imager Level 1 data in JMA HRIT format | mtsat2-imager_hrit | Beta | false |
| MFG (Meteosat 2 to 7) MVIRI data in netCDF format (FIDUCEO FCDR) | mviri_l1b_fiduceo_nc | Beta | false |
| EPS-SG MWI L1B Radiance (NetCDF4) | mwi_l1b_nc | Beta | false |
| EPS-SG MWS L1B Radiance (NetCDF4) | mws_l1b_nc | Beta | false |
| NUCAPS EDR Retrieval data in NetCDF4 format | nucaps | Nominal | false |
| NWCSAF GEO 2016 products in netCDF4 format (limited to SEVIRI) | nwcsaf-geo | Alpha | false |
| NWCSAF GEO 2013 products in HDF5 format (limited to SEVIRI) | nwcsaf-msg2013-hdf5 | Defunct | false |
| NWCSAF PPS 2014, 2018 products in netCDF4 format | nwcsaf-pps_nc | Alpha, only standard swath based output supported (remapped netCDF and CPP products not supported yet) | false |
| Ocean color CCI Level 3S data reader | oceancolorcci_l3_nc | Nominal | false |
| Sentinel-3 A and B OLCI Level 1B data in netCDF4 format | olci_l1b | Nominal | true |
| Sentinel-3 A and B OLCI Level 2 data in netCDF4 format | olci_l2 | Nominal | true |
| OMPS EDR data in HDF5 format | omps_edr | Beta | false |
| OSI-SAF data in netCDF4 format | osisaf_nc | Beta | true |
| SAR Level 2 OCN data in SAFE format | safe_sar_l2_ocn | Defunct | false |
| Sentinel-1 A and B SAR-C data in SAFE format | sar-c_safe | Nominal | false |
| Reader for CF-conformant netCDF files written with Satpy | satpy_cf_nc | Nominal | false |
| Scatsat-1 Level 2b Wind field data in HDF5 format | scatsat1_l2b | Defunct | false |
| SEADAS L2 Chlorophyll A product in HDF4 format | seadas_l2 | Beta | false |
| MSG SEVIRI Level 1b (HRIT) | seviri_l1b_hrit | Nominal | true |
| MSG SEVIRI Level 1b in HDF format from ICARE (Lille) | seviri_l1b_icare | Defunct | false |
| MSG (Meteosat 8 to 11) SEVIRI data in native format | seviri_l1b_native | Nominal | false |
| MSG SEVIRI Level 1b NetCDF4 | seviri_l1b_nc | Beta, HRV channel not supported | true |
| MSG (Meteosat 8 to 11) Level 2 products in BUFR format | seviri_l2_bufr | Alpha | false |
| MSG (Meteosat 8 to 11) SEVIRI Level 2 products in GRIB2 format | seviri_l2_grib | Nominal | false |
| GCOM-C SGLI Level 1B HDF5 format | sgli_l1b | Beta | false |
| Sentinel-3 A and B SLSTR data in netCDF4 format | slstr_l1b | Alpha | false |
| SMOS level 2 wind data in NetCDF4 format | smos_l2_wind | Beta | false |
| TROPOMI Level 2 data in NetCDF4 format | tropomi_l2 | Beta | false |
| Vaisala Global Lightning Dataset GLD360 data in ASCII format | vaisala_gld360 | Beta | false |
| EPS-SG Visual Infrared Imager (VII) Level 1B Radiance data in netCDF4 format | vii_l1b_nc | Beta | false |
| EPS-SG Visual Infrared Imager (VII) Level 2 data in netCDF4 format | vii_l2_nc | Beta | false |
| JPSS VIIRS SDR data in HDF5 Compact format | viirs_compact | Nominal | false |
| JPSS VIIRS EDR NetCDF format | viirs_edr | Beta | false |
| VIIRS EDR Active Fires data in netCDF4 & CSV .txt format | viirs_edr_active_fires | Beta | false |
| VIIRS EDR Flood data in HDF4 format | viirs_edr_flood | Beta | false |
| JPSS VIIRS Level 1b data in netCDF4 format | viirs_l1b | Nominal | false |
| SNPP VIIRS Level 2 data in netCDF4 format | viirs_l2 | Alpha | false |
| JPSS VIIRS data in HDF5 SDR format | viirs_sdr | Nominal | false |
| VIIRS Global Area Coverage from VIIRS Reflected Solar Band and Thermal Emission Band data for both Moderate resolution and Imager resolution channels | viirs_vgac_l1c_nc | | false |
| VIRR data in HDF5 format | virr_l1b | Beta | false |
Note
Status description:
- Defunct
Most likely the reader is not functional. If it is, there is a good chance of bugs and/or performance problems (e.g. not ported to dask/xarray yet). Future development is unclear. Users are encouraged to contribute (see section How to contribute and/or get help on Slack or by opening a GitHub issue).
- Alpha
This denotes early development status. Reader is functional and implements some or all of the nominal features. There might be bugs. Exactness of results is not guaranteed. Use at your own risk.
- Beta
This denotes final development status. Reader is functional and implements all nominal features. Results should be dependable but there might be bugs. Users are actively encouraged to test and report bugs.
- Nominal
This denotes a finished status. Reader is functional and most likely no new features will be introduced. It has been tested and there are no known bugs.