Create Space Time Cube By Aggregating Points, Emerging Hot Spot Analysis, and Local Outlier Analysis Tools

Create Space Time Cube By Aggregating Points 

How to use the Create Space Time Cube By Aggregating Points tool in ArcToolbox?

Path to access the tool: Create Space Time Cube By Aggregating Points tool, in the Space Time Pattern Mining Tools toolbox.

Summarizes a set of points into a netCDF data structure by aggregating them into space-time bins. Within each bin, the points are counted and specified attributes are aggregated. For all bin locations, the trends for counts and summary field values are evaluated.

1.    Input Features

The input point feature class to be aggregated into space-time bins.

2.    Output Space Time Cube

The output netCDF data cube that will be created to contain counts and summaries of the input feature point data.

3.    Time Field

The field containing the date and time (timestamp) for each point. This field must be of type Date.

4.    Template Cube (optional)

A reference space-time cube used to define the analysis extent, bin dimensions, and bin alignment of the Output Space Time Cube. The Time Step Interval, Distance Interval, and Reference Time are also obtained from the template cube. This template cube must be a netCDF (.nc) file that was created using this tool.

5.    Time Step Interval (optional)

The number of seconds, minutes, hours, days, weeks, or years that will represent a single time step. All points within the same Time Step Interval and Distance Interval will be aggregated. (When a Template Cube is provided, this parameter is ignored, and the Time Step Interval value is obtained from the template cube). Examples of valid entries for this parameter are 1 Weeks, 13 Days, or 1 Years.

6.    Time Step Alignment (optional)

Defines how aggregation will occur based on a given Time Step Interval. If a Template Cube is provided, the Time Step Alignment associated with the Template Cube overrides this parameter setting and the Time Step Alignment of the Template Cube is used.

  • END_TIME—Time steps align to the last time event and aggregate back in time.
  • START_TIME—Time steps align to the first time event and aggregate forward in time.
  • REFERENCE_TIME—Time steps align to a particular date/time that you specify. If all points in the input features have a timestamp larger than the REFERENCE_TIME you provide (or it falls exactly on the start time of the input features), the time-step interval will begin with that reference time and aggregate forward in time (as occurs with a START_TIME alignment). If all points in the input features have a timestamp smaller than the reference time you provide (or it falls exactly on the end time of the input features), the time-step interval will end with that reference time and aggregate backward in time (as occurs with an END_TIME alignment). If the REFERENCE_TIME you provide is in the middle of the time extent of your data, a time-step interval will be created ending with the reference time provided (as occurs with an END_TIME alignment); additional intervals will be created both before and after the reference time until the full time extent of your data is covered.

7.    Reference Time (optional)

The date/time to use to align the time-step intervals. If you want to bin your data weekly from Monday to Sunday, for example, you could set a reference time of Sunday at midnight to ensure bins break between Sunday and Monday at midnight. (When a Template Cube is provided, this parameter is disabled and the Reference Time is based on the Template Cube.)
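
As a purely illustrative sketch of the weekly Monday-to-Sunday example above (hypothetical dates, not the tool's internal code), every bin boundary lies a whole number of time-step intervals away from the reference time:

    from datetime import datetime, timedelta

    # Illustrative only: a 1-week time-step interval with a reference time on the
    # Sunday-to-Monday midnight boundary, so bins run Monday 00:00 through the end of Sunday.
    reference_time = datetime(2023, 1, 2, 0, 0)   # a Monday at midnight (hypothetical date)
    step = timedelta(weeks=1)

    def weekly_bin(timestamp):
        """Return the [start, end) boundaries of the weekly bin containing timestamp."""
        steps_from_reference = (timestamp - reference_time) // step
        start = reference_time + steps_from_reference * step
        return start, start + step

    print(weekly_bin(datetime(2023, 3, 15, 14, 30)))
    # (datetime.datetime(2023, 3, 13, 0, 0), datetime.datetime(2023, 3, 20, 0, 0)),
    # i.e. a Monday-to-Sunday week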

8.    Distance Interval (optional)

The size of the bins used to aggregate the Input Features. All points that fall within the same Distance Interval and Time Step Interval will be aggregated. When aggregating into a hexagon grid, this distance is used as the height to construct the hexagon polygons. (When a Template Cube is provided, this parameter is disabled and the distance interval value will be based on the Template Cube.)

9.    Summary Fields (optional)

The numeric field containing attribute values used to calculate the specified statistic when aggregating into a space-time cube. Multiple statistic and field combinations may be specified. Null values are excluded from all statistical calculations.

Available statistic types are:

  1. SUM—Adds the total value for the specified field within each bin.
  2. MEAN—Calculates the average for the specified field within each bin.
  3. MIN—Finds the smallest value for all records of the specified field within each bin.
  4. MAX—Finds the largest value for all records of the specified field within each bin.
  5. STD—Finds the standard deviation of the values in the specified field within each bin.
  6. MEDIAN—Finds the sorted middle value of all records of the specified field within each bin.

Available fill types are:

  1. ZEROS—Fills empty bins with zeros.
  2. SPATIAL_NEIGHBORS—Fills empty bins with the average value of spatial neighbors.
  3. SPACE_TIME_NEIGHBORS—Fills empty bins with the average value of space-time neighbors.
  4. TEMPORAL_TREND—Fills empty bins using an interpolated univariate spline algorithm.

Note: Null values present in any of the summary fields will result in those features being excluded from analysis. If having the count of points in each bin is part of your analysis strategy, you may want to consider creating separate cubes, one for the count (without summary fields) and one for summary fields. If the set of null values is different for each summary field, you may also consider creating a separate cube for each summary field.
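
As a purely illustrative sketch of the per-bin arithmetic described above (not the tool's implementation), the statistics for one bin's summary-field values could be computed as follows, with None standing in for null values:

    import statistics

    bin_values = [3.2, None, 4.8, 1.1, None, 2.0]      # hypothetical summary-field values in one bin
    valid = [v for v in bin_values if v is not None]    # records with nulls are excluded, per the note above

    summary = {
        "SUM": sum(valid),
        "MEAN": statistics.mean(valid),
        "MIN": min(valid),
        "MAX": max(valid),
        "STD": statistics.stdev(valid),     # sample standard deviation; the tool's exact convention may differ
        "MEDIAN": statistics.median(valid),
    }
    print(summary)   # empty bins would then be filled according to the chosen fill type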

10. Aggregation Shape Type (optional)

The shape of the polygon mesh the input feature point data will be aggregated into.

  1. FISHNET_GRID—The input features will be aggregated into a grid of square (fishnet) cells.
  2. HEXAGON_GRID—The input features will be aggregated into a grid of hexagonal cells.
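
Putting the ten parameters above together, the tool can be run from Python with arcpy. The sketch below passes the parameters positionally in the order listed; it assumes the Space Time Pattern Mining Tools toolbox is exposed under the arcpy.stpm module alias, and all paths, field names, and values are hypothetical placeholders.

    import arcpy

    arcpy.env.overwriteOutput = True

    # Sketch only: positional arguments follow parameters 1-10 above.
    arcpy.stpm.CreateSpaceTimeCube(
        r"C:\data\incidents.gdb\calls",     # 1. Input Features (point feature class)
        r"C:\data\calls_cube.nc",           # 2. Output Space Time Cube (netCDF)
        "CALL_DATE",                        # 3. Time Field (must be of type Date)
        None,                               # 4. Template Cube (not used here)
        "1 Weeks",                          # 5. Time Step Interval
        "END_TIME",                         # 6. Time Step Alignment
        None,                               # 7. Reference Time (not used with END_TIME)
        "1 Kilometers",                     # 8. Distance Interval
        "RESPONSE_TIME MEAN ZEROS",         # 9. Summary Fields (field, statistic, fill type; format assumed)
        "HEXAGON_GRID",                     # 10. Aggregation Shape Type
    )

The resulting .nc file is the cube required as input by the two analysis tools described below.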

Emerging Hot Spot Analysis

How to use the Emerging Hot Spot Analysis tool in ArcToolbox?

Path to access the tool: Emerging Hot Spot Analysis tool, in the Space Time Pattern Mining Tools toolbox.

Identifies trends in the clustering of point densities (counts) or summary fields in a space-time cube created using the Create Space Time Cube By Aggregating Points tool. Categories include new, consecutive, intensifying, persistent, diminishing, sporadic, oscillating, and historical hot and cold spots.

1.    Input Space Time Cube

The netCDF cube to be analyzed. This file must have an (.nc) extension and must have been created using the Create Space Time Cube By Aggregating Points tool.

2.    Analysis Variable

The numeric variable in the netCDF file you want to analyze.

3.    Output Features

The output feature class containing the analysis results. This feature class will be a two-dimensional map representation of the hot and cold spot trends in your data. It will show, for example, any new or intensifying hot spots.

4.    Neighborhood Distance (optional)

The spatial extent of the analysis neighborhood. This value determines which features are analyzed together in order to assess local space-time clustering.

5.    Neighborhood Time Step (optional)

The number of time-step intervals to include in the analysis neighborhood. This value determines which features are analyzed together in order to assess local space-time clustering.

6.    Polygon Analysis Mask (optional)

A polygon feature layer with one or more polygons defining the analysis study area. You would use a polygon analysis mask to exclude a large lake from the analysis, for example. Bins defined in the Input Space Time Cube that fall outside of the mask will not be included in the analysis. 
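
A similar hedged sketch for this tool, with the six parameters above passed positionally (hypothetical paths and values; COUNT is assumed to be the name of the cube's point-count variable):

    import arcpy

    arcpy.stpm.EmergingHotSpotAnalysis(
        r"C:\data\calls_cube.nc",                    # 1. Input Space Time Cube
        "COUNT",                                     # 2. Analysis Variable (assumed variable name)
        r"C:\data\results.gdb\emerging_hot_spots",   # 3. Output Features
        "1 Kilometers",                              # 4. Neighborhood Distance (optional)
        1,                                           # 5. Neighborhood Time Step (optional)
        r"C:\data\study_area.gdb\analysis_mask",     # 6. Polygon Analysis Mask (optional)
    )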

Local Outlier Analysis

How to use the Local Outlier Analysis tool in ArcToolbox?

Path to access the tool: Local Outlier Analysis tool, in the Space Time Pattern Mining Tools toolbox.

Identifies statistically significant clusters and outliers in the context of both space and time. This tool is a space-time implementation of the Anselin Local Moran's I statistic.
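
As a conceptual reference for what the statistic measures (this is not the tool's implementation, which works on space-time neighborhoods and assesses significance with permutations), the Local Moran's I value at one location i can be written as I_i = ((x_i - mean(x)) / S_i^2) * sum over j != i of w_ij * (x_j - mean(x)). A minimal numpy sketch with hypothetical data:

    import numpy as np

    def local_morans_i(x, w, i):
        """x: 1-D array of values; w: weights from location i to every other location
        (w[j] > 0 only for neighbors of i, and w[i] = 0)."""
        x = np.asarray(x, dtype=float)
        z = x - x.mean()
        s2 = (np.delete(z, i) ** 2).sum() / (len(x) - 1)   # variance of values excluding location i
        return (z[i] / s2) * np.dot(np.delete(w, i), np.delete(z, i))

    # A large positive I_i means the location's value is surrounded by similar values
    # (part of a cluster); a negative I_i means it is surrounded by dissimilar values (an outlier).
    x = np.array([10.0, 12.0, 11.0, 2.0, 3.0])   # hypothetical bin values
    w = np.array([0.0, 0.5, 0.5, 0.0, 0.0])      # location 0's neighbors are locations 1 and 2
    print(local_morans_i(x, w, 0))               # positive: a high value among high-valued neighbors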

1.    Input Space Time Cube

The netCDF cube to be analyzed. This file must have an (.nc) extension and must have been created using the Create Space Time Cube By Aggregating Points tool.

2.    Analysis Variable

The numeric variable in the netCDF file you want to analyze.

3.    Output Features

The output feature class containing locations that were considered statistically significant clusters or outliers.

4.    Neighborhood Distance (optional)

The spatial extent of the analysis neighborhood. This value determines which features are analyzed together in order to assess local space-time clustering.

5.    Neighborhood Time Step (optional)

The number of time-step intervals to include in the analysis neighborhood. This value determines which features are analyzed together in order to assess local space-time clustering.

6.    Number of Permutations (optional)

The number of random permutations for the calculation of pseudo p-values. The default number of permutations is 499. If you choose 0 permutations, the standard p-value is calculated.

  • 0—Permutations are not used and a standard p-value is calculated.
  • 99—With 99 permutations, the smallest possible pseudo p-value is 0.01 and all other pseudo p-values will be even multiples of this value.
  • 199—With 199 permutations, the smallest possible pseudo p-value is 0.005 and all other pseudo p-values will be even multiples of this value.
  • 499—With 499 permutations, the smallest possible pseudo p-value is 0.002 and all other pseudo p-values will be even multiples of this value.
  • 999—With 999 permutations, the smallest possible pseudo p-value is 0.001 and all other pseudo p-values will be even multiples of this value.
  • 9999—With 9999 permutations, the smallest possible pseudo p-value is 0.0001 and all other pseudo p-values will be even multiples of this value.
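
The minimum pseudo p-values listed above follow from how a pseudo p-value is computed: the observed statistic is ranked against the permuted statistics, so with N permutations the smallest attainable value is 1/(N + 1). A minimal illustration (not the tool's code):

    import numpy as np

    def pseudo_p_value(observed_stat, permuted_stats):
        """Share of (permutations + observed) at least as extreme as the observed statistic.
        With N permutations the smallest possible result is 1 / (N + 1)."""
        permuted_stats = np.asarray(permuted_stats)
        as_extreme = np.sum(np.abs(permuted_stats) >= abs(observed_stat))
        return (as_extreme + 1) / (len(permuted_stats) + 1)

    # Hypothetical case: an observed statistic more extreme than all 499 permuted ones
    rng = np.random.default_rng(0)
    print(pseudo_p_value(5.0, rng.normal(size=499)))   # 0.002, i.e. 1/500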

7.    Polygon Analysis Mask (optional)

A polygon feature layer with one or more polygons defining the analysis study area. You would use a polygon analysis mask to exclude a large lake from the analysis, for example. Bins defined in the Input Space Time Cube that fall outside of the mask will not be included in the analysis. 
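
As with Emerging Hot Spot Analysis, a hedged sketch of running this tool from Python, with the seven parameters above passed positionally (all paths and values hypothetical):

    import arcpy

    arcpy.stpm.LocalOutlierAnalysis(
        r"C:\data\calls_cube.nc",                  # 1. Input Space Time Cube
        "COUNT",                                   # 2. Analysis Variable (assumed variable name)
        r"C:\data\results.gdb\local_outliers",     # 3. Output Features
        "1 Kilometers",                            # 4. Neighborhood Distance (optional)
        1,                                         # 5. Neighborhood Time Step (optional)
        499,                                       # 6. Number of Permutations (optional)
        r"C:\data\study_area.gdb\analysis_mask",   # 7. Polygon Analysis Mask (optional)
    )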
