Client#
- class datareservoirio.Client(auth, cache=True, cache_opt=None)[source]#
DataReservoir.io client for user-friendly interaction.
- Parameters:
auth (cls) – An authenticated session used in all API calls. Must supply a valid bearer token with each call.
cache (bool) – Enable caching. Default is True.
cache_opt (dict, optional) – Configuration object for controlling the series cache. ‘max_size’: maximum cache size in megabytes (default is 1024 MB). ‘cache_root’: cache storage location; see the documentation for platform-specific defaults.
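Example (a minimal sketch; the cache size and cache_root path below are illustrative values, and the auth object is assumed to come from your own authentication setup):

```python
import datareservoirio as drio

# Assumption: `auth` is an authenticated session that supplies a valid
# bearer token to all API calls (how it is constructed depends on your setup).
auth = ...

client = drio.Client(
    auth,
    cache=True,
    cache_opt={"max_size": 2048, "cache_root": "./drio_cache"},  # 2048 MB, local folder
)
```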
- append(series, series_id, wait_on_verification=True)[source]#
Append data to an already existing series.
- Parameters:
series (pandas.Series) – Series with index (as DatetimeIndex-like or integer array). Needs to be sorted on index.
series_id (string) – The identifier of the existing series.
wait_on_verification (bool, optional) – All series are subjected to a server-side data validation before they are made available for consumption; failing validation will result in the series being ignored. If True, the method will wait for the data validation process to be completed and return the outcome, which may be time consuming. If False, the method will NOT wait for the outcome, and the data will become available when/if the validation is successful. The latter is significantly faster, but is only recommended when the data is “validated” in advance. Default is True.
- Returns:
The response from DataReservoir.io.
- Return type:
dict
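Example (a hedged sketch; assumes `client` is a Client instance and `series_id` refers to an existing series):

```python
import pandas as pd

# New samples to append; the index must be sorted (DatetimeIndex-like or integer).
new_samples = pd.Series(
    [1.0, 2.0, 3.0],
    index=pd.date_range("2024-01-01 00:00:00", periods=3, freq="s", tz="utc"),
)

response = client.append(new_samples, series_id, wait_on_verification=True)
```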
- create(series=None, wait_on_verification=True)[source]#
Create a new series in DataReservoir.io from a pandas.Series. If no data is provided, an empty series is created.
- Parameters:
series (pandas.Series, optional) – Series with index (as DatetimeIndex-like or integer array). Default is None. Needs to be sorted on index.
wait_on_verification (bool, optional) – All series are subjected to a server-side data validation before they are made available for consumption; failing validation will result in the series being ignored. If True, the method will wait for the data validation process to be completed and return the outcome, which may be time consuming. If False, the method will NOT wait for the outcome, and the data will become available when/if the validation is successful. The latter is significantly faster, but is only recommended when the data is “validated” in advance. Default is True.
- Returns:
The response from DataReservoir.io containing the unique id of the newly created series.
- Return type:
dict
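Example (a hedged sketch; the response key used to pick out the new series id is an assumption):

```python
import pandas as pd

series = pd.Series(
    [0.1, 0.2, 0.3],
    index=pd.date_range("2024-01-01", periods=3, freq="s", tz="utc"),
)

response = client.create(series=series)
series_id = response["TimeSeriesId"]  # assumption: name of the id key in the response
```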
- delete(series_id)[source]#
Delete a series from DataReservoir.io.
- Parameters:
series_id (string) – The id of the series to delete.
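Example (assuming `series_id` refers to an existing series):

```python
client.delete(series_id)
```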
- get(series_id, start=None, end=None, convert_date=True, raise_empty=False)[source]#
Retrieve a series from DataReservoir.io.
- Parameters:
series_id (str) – Identifier of the series to download.
start (optional) – Start time (inclusive) of the series, given as anything pandas.to_datetime is able to parse.
end (optional) – Stop time (exclusive) of the series, given as anything pandas.to_datetime is able to parse.
convert_date (bool) – If True (default), the index is converted to DatetimeIndex. If False, index is returned as ascending integers.
raise_empty (bool) – If True, raise ValueError if no data exist in the provided interval. Otherwise, return an empty pandas.Series (default).
- Returns:
Series data
- Return type:
pandas.Series
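Example (a hedged sketch; the dates are illustrative):

```python
# Entire series, returned with a DatetimeIndex
data = client.get(series_id)

# A bounded slice; start/end accept anything pandas.to_datetime can parse
window = client.get(series_id, start="2024-01-01", end="2024-01-02", raise_empty=True)
```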
- get_samples_aggregate(series_id, start=None, end=None, aggregation_period=None, aggregation_function=None, max_page_size=30000)[source]#
Retrieve a series from DataReservoir.io using the samples/aggregate endpoint.
- Parameters:
series_id (str) – Identifier of the series to download.
start (required) – Start time (inclusive) of the aggregated series, given as anything pandas.to_datetime is able to parse. Must be within the past 90 days.
end (required) – Stop time (exclusive) of the aggregated series, given as anything pandas.to_datetime is able to parse. Must be within the past 90 days.
aggregation_function (str) – One of “mean”, “min”, “max”, “std”.
aggregation_period (str) – Used in combination with aggregation_function to specify the period for aggregation. The aggregation period can be at most 24 hours. Values can be given in units of h, m, s, ms, microsecond or tick; for example, use 100ms rather than 0.1s for 10 Hz data.
max_page_size (optional) – Maximum number of samples to return per page. The method automatically follows links to next pages and returns the entire series. For advanced usage.
- Returns:
Series data
- Return type:
pandas.Series
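Example (a hedged sketch; the exact aggregation_period string ("1h") is an assumption based on the units listed above):

```python
import pandas as pd

# Hourly means over the last 24 hours (start/end must lie within the past 90 days)
end = pd.Timestamp.now(tz="utc")
start = end - pd.Timedelta(hours=24)

hourly_mean = client.get_samples_aggregate(
    series_id,
    start=start,
    end=end,
    aggregation_period="1h",        # assumption on the exact period format
    aggregation_function="mean",
)
```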
- info(series_id)[source]#
Retrieve basic information about a series.
- Parameters:
series_id (str) – The identifier of the series.
- Returns:
Available information about the series. None if the series is not found.
- Return type:
dict
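Example:

```python
meta = client.info(series_id)
if meta is None:
    print("Series not found")
```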
- metadata_browse(namespace=None)[source]#
List available metadata namespaces and keys. If namespace is None, a list of all available namespaces is returned. If namespace is specified, a list of all available keys for that namespace is returned.
- Parameters:
namespace (string) – The namespace to search in (exact match)
- Returns:
The namespaces or keys found.
- Return type:
list
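Example (the namespace name is illustrative):

```python
namespaces = client.metadata_browse()                        # all available namespaces
keys = client.metadata_browse(namespace="vessel.reference")  # keys within one namespace
```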
- metadata_delete(metadata_id)[source]#
Delete an existing metadata entry.
- Parameters:
metadata_id (str) – id of metadata
- metadata_get(metadata_id=None, namespace=None, key=None)[source]#
Retrieve a metadata entry. Required input is either metadata_id, or namespace + key, i.e. metadata_get(metadata_id=my_metadata_id) or metadata_get(namespace=my_namespace, key=my_key).
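Example (a hedged sketch; passing namespace and key as keyword arguments avoids the namespace being interpreted as a metadata id, and the names are illustrative):

```python
# By metadata id
entry = client.metadata_get(metadata_id=my_metadata_id)

# By namespace + key
entry = client.metadata_get(namespace="vessel.reference", key="campaign.2024")
```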
- metadata_search(namespace, key)[source]#
Find metadata entries given namespace/key combination.
- Parameters:
namespace (string) – The namespace to search in.
key (string) – The key to narrow the search. Supports “begins with” matching, i.e. the search will look for matches with “key + wildcard”.
- Returns:
Metadata entries that match the search.
- Return type:
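Example (illustrative namespace and key; the key is matched “begins with”):

```python
# Find entries in the namespace whose key begins with "campaign"
matches = client.metadata_search("vessel.reference", "campaign")
```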
- metadata_set(namespace, key, **namevalues)[source]#
Create or update a metadata entry. If the namespace/key combination does not already exist, a new entry will be created. If the combination already exists, the entry will be updated with the specified namevalues.
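Example (a hedged sketch; the namespace, key and name-value pairs are illustrative):

```python
response = client.metadata_set(
    "vessel.reference",      # namespace
    "campaign.2024",         # key
    location="North Sea",    # name-value pairs stored on the entry
    operator="ACME",
)
```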
- remove_metadata(series_id, metadata_id)[source]#
Remove a metadata entry from a series. Note that metadata entries are not deleted, but the link between series and metadata is broken.
- search(namespace, key=None, name=None, value=None)[source]#
Find available series having metadata with the given namespace + key (optional) + name (optional) + value (optional) combination. Note that the arguments are hierarchical, starting from the left; if an argument is None, the succeeding ones are also treated as None. For example, (namespace=“hello”, key=None, name=“Rabbit”, value=“Hole”) has the same effect as (namespace=“hello”, key=None, name=None, value=None).
- Parameters:
namespace (str) – Full namespace to search for
key (str, optional) – Key or partial (begins with) key to narrow search. Default (None) will include all.
name (str, optional) – Full name to narrow search further. Default (None) will include all.
value (str, optional) – Value or partial (begins or ends with or both) to narrow search further. Default (None) will include all.
- Returns:
Available information about the series. If value is passed, a plain list of TimeSeriesId is returned. Otherwise, a dict {TimeSeriesId: metadata} is returned.
- Return type:
dict or list
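Example (a hedged sketch with illustrative namespace, key, name and value):

```python
# All series tagged with the namespace -> {TimeSeriesId: metadata}
all_tagged = client.search("vessel.reference")

# Narrow by key and name -> {TimeSeriesId: metadata}
by_name = client.search("vessel.reference", key="campaign", name="location")

# Narrow down to a value -> plain list of TimeSeriesId
ids = client.search("vessel.reference", key="campaign", name="location", value="North Sea")
```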
- set_metadata(series_id, metadata_id=None, namespace=None, key=None, overwrite=False, **namevalues)[source]#
Set metadata entries on a series. Metadata can be set from existing values or new metadata can be created.
- Parameters:
series_id (str) – The identifier of the existing series
metadata_id (str, optional) – The identifier of the existing metadata entries. If passed, other metadata related arguments are ignored.
namespace (str, optional) – Metadata namespace.
key (str) – Metadata key. Mandatory if namespace is passed.
overwrite (bool, optional) – If True, and namespace + key corresponds to existing metadata, the value of the metadata will be overwritten. If False, a ValueError will be raised if the metadata already exists.
namevalues (keyword arguments) – Metadata name-value pairs
- Returns:
The response from DataReservoir.io (response.json()).
- Return type:
dict
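Example (a hedged sketch; the metadata id, namespace, key and name-value pairs are illustrative):

```python
# Attach an existing metadata entry to a series
client.set_metadata(series_id, metadata_id=my_metadata_id)

# Or create metadata directly on the series, overwriting an existing entry if present
client.set_metadata(
    series_id,
    namespace="vessel.reference",
    key="campaign.2024",
    overwrite=True,
    location="North Sea",
)
```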