The GIS database usually consists of three principal types of information:
1. Geographic or location information (Where): Geographic space is bi-dimensional or tri-dimensional and can be regarded as continuous. In a map, this can be a point with latitude and longitude (bi-dimensional) or a point with latitude, longitude and elevation or depth (tri-dimensional).
2. Temporal Details (When): Temporal space is usually one-dimensional.
3. Attribute Data (What): The attribute data are typically multivariate with a mix of measurement metrics that are seldom continuous.
Each of these information types represents a data space characterized by a different metric. Because these data spaces do not share a common metric, they can't easily be transformed into a comparable form. Traditionally they were analyzed separately.
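One way to picture the three information types is as a single record with a geographic part, a temporal part and an attribute part. A minimal sketch in Python (the field names and coordinates are illustrative, not from any particular GIS schema):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# A single GIS record combining the three information types:
# where (geographic), when (temporal), what (attributes).
@dataclass
class GISRecord:
    lat: float                    # geographic space (bi-dimensional here)
    lon: float
    elevation: Optional[float]    # optional third geographic dimension
    observed_on: date             # temporal space (one-dimensional)
    attributes: dict = field(default_factory=dict)  # multivariate, mixed metrics

# Example: a cholera death recorded near the Broad Street pump.
record = GISRecord(lat=51.5133, lon=-0.1367, elevation=None,
                   observed_on=date(1854, 9, 1),
                   attributes={"deaths": 1, "water_source": "Broad Street pump"})
```

Note that the three parts have incompatible metrics (degrees, dates, counts and categories), which is exactly why they resist being collapsed into one comparable measure.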
Space-Time-Attribute analysis is a fairly simple idea. By searching across all locations, times and attributes, it is possible to overcome our ignorance of not knowing where to look. In other words, by analyzing these spaces together, we are able to identify a pattern.
What is a pattern? In GIS, a pattern may often be viewed as an unusual, localized excess concentration of data cases. Such patterns are of potential interest either because of the intensity of their localized concentration, because of their predictability over time, or because of the similarity of their features.
1854 Broad Street Cholera Outbreak:
Many consider this to be the earliest documented use of maps in spatial analysis. On 31 August 1854, there was a severe cholera outbreak in the Soho district of London, England. A physician named John Snow was a sceptic of the then-dominant theory that diseases such as cholera were caused by pollution. By talking to local residents, he identified the source of the outbreak as the public water pump on Broad Street.
Snow used some proto-GIS methods to buttress his argument: first he drew Thiessen polygons around the wells, defining straight-line least-distance service areas for each. A large majority of the cholera deaths fell within the Thiessen polygon surrounding the Broad Street pump, and a large portion of the remaining deaths were on the Broad Street side of the polygon surrounding the bad-tasting Carnaby Street well. Next, using a pencil and string, Snow redrew the service area polygons to reflect shortest routes along streets to wells. An even larger proportion of the cholera deaths fell within the shortest-travel-distance area around the Broad Street pump.
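The logic behind Snow's first map is easy to reproduce: a Thiessen (Voronoi) polygon simply collects every point whose nearest well is the one at its center. A minimal sketch, with made-up coordinates and death locations purely for illustration:

```python
import math

# Each death is assigned to the service area of its nearest well, which is
# exactly what Thiessen polygons around the wells encode.
wells = {"Broad Street": (0.0, 0.0), "Carnaby Street": (3.0, 1.0)}
deaths = [(0.4, 0.2), (0.8, -0.5), (2.6, 1.1), (0.1, 0.9)]

def nearest_well(point, wells):
    """Name of the well closest to point by straight-line distance."""
    return min(wells, key=lambda name: math.dist(point, wells[name]))

counts = {name: 0 for name in wells}
for d in deaths:
    counts[nearest_well(d, wells)] += 1
# counts -> {'Broad Street': 3, 'Carnaby Street': 1}
```

Snow's second, string-and-pencil map refined this by measuring shortest routes along streets rather than straight-line distance; a sketch like the one above does not capture that network-distance step.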
Shown below are the original John Snow map (left) and the map drawn using ArcGIS's kernel density tool (right).
Here is the animation of the same map projecting the number of deaths over time on a map.
Looking at the animation, we can see that all three spaces were analyzed to identify a pattern. First, there is the geographic space onto which the scale of the event is projected; second, there is the time space (see the dates at the top of the map); and finally there is an attribute, the number of deaths.
Imagine how difficult it would be to solve this problem without a visualization platform like this map.
Below is the algorithm I found for Space-Time-Attribute analysis in the book Spatial Analysis and GIS.
Step 1: Define an initial range of search in the three data spaces: g1…gn (for geographic space), t1…tn (for time space) and a1…an (for attribute space).
Step 2: Select an observed data case with a profile in the g, t and a spaces.
Step 3: Define a geographic search region centered on the observed case and distance radius (or grid) gr where (g1 < gr < gn).
Step 4: Define a temporal search region centered on the observed case and distance radius tr where (t1 < tr < tn).
Step 5: Define an attribute search region centered on the observed case and distance radius ar where (a1 < ar < an).
Step 6: Scan through the database to find those records that lie within these critical regions. Abort the search if fewer than a minimum number of records are found.
Step 7: Use a Monte Carlo significance test procedure to determine the probability of the test statistic for any search occurring under a null hypothesis.
Step 8: Keep the record identifier and the search parameters if the probability is sufficiently small.
Step 9: Examine all combinations of g, t, and a search parameters.
Step 10: Report the results if a sufficient number of the results pass the stated significance level.
Step 11: Change the central record, and repeat steps 3 to 10.
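The eleven steps above can be condensed into a short sketch. This is my own minimal reading of the algorithm, not the book's implementation: records are assumed to be simple (x, y, t, a) tuples with one numeric attribute, the search regions are radii around a focal record, and the Monte Carlo null shuffles the attribute values over the records.

```python
import math
import random

def within(rec, center, gr, tr, ar):
    """True if rec lies inside the geographic/temporal/attribute search region."""
    geo = math.hypot(rec[0] - center[0], rec[1] - center[1])
    return geo <= gr and abs(rec[2] - center[2]) <= tr and abs(rec[3] - center[3]) <= ar

def monte_carlo_p(observed_count, records, center, gr, tr, ar, n_sim=199):
    """Step 7: how often does a count this large arise when the attribute
    values are randomly shuffled over the records (the null hypothesis)?"""
    exceed = 0
    attrs = [r[3] for r in records]
    for _ in range(n_sim):
        random.shuffle(attrs)
        shuffled = [(r[0], r[1], r[2], a) for r, a in zip(records, attrs)]
        if sum(within(s, center, gr, tr, ar) for s in shuffled) >= observed_count:
            exceed += 1
    return (exceed + 1) / (n_sim + 1)

def sta_scan(records, g_radii, t_radii, a_radii, min_records=3, alpha=0.05):
    """Steps 2-11: scan every record with every combination of radii."""
    hits = []
    for center in records:                        # step 11: change central record
        for gr in g_radii:                        # step 9: all parameter combos
            for tr in t_radii:
                for ar in a_radii:
                    count = sum(within(r, center, gr, tr, ar) for r in records)
                    if count < min_records:       # step 6: abort small searches
                        continue
                    p = monte_carlo_p(count, records, center, gr, tr, ar)
                    if p <= alpha:                # step 8: keep significant ones
                        hits.append((center, gr, tr, ar, p))
    return hits
```

The triple loop over radii makes the cost explode combinatorially, which is why step 6's early abort (and, in practice, much smarter search strategies) matters.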
Google Self-Driving Cars:
Is the Google self-driving car a version of the space-time-attribute analysis algorithm?
Space (the road), time (car speed is a function of distance and time), and attributes (in this case there are multiple attributes such as signals, roads, pedestrians, etc.).
Maybe, but it's much more complex:
a. There are many attributes.
b. While on the road, the computer must identify the type of each attribute without error, as each attribute has different properties and behavior.
c. These attributes change continuously.
One must be aware of the properties of an attribute while trying to identify a pattern from it.
In the book “Fooled by Randomness“, the author gives an example of how data mining might lead us to misleading results.
Consider a square with 16 random darts hitting it, each dart equally likely to land anywhere in the square. If we divide the square into 16 smaller squares, each smaller square is expected to contain one dart on average, but only on average. There is a very small probability of the 16 darts landing in 16 different squares. The typical grid will have more than one dart in a few squares and no dart at all in many others. It would be exceptionally rare for no cluster to show on the grid.
The above example illustrates that we can't use our past analysis of why a particular square had more than one dart to predict which squares will have more than one dart in the future, if the darts were thrown again.
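The darts example is easy to verify by simulation; the grid size and trial count below are arbitrary choices of mine:

```python
import random

# Throw 16 darts at a 4x4 grid of 16 equal cells and see how often at
# least one cell receives two or more darts, i.e. a "cluster" appears.
def has_cluster(n_darts=16, n_cells=16):
    counts = [0] * n_cells
    for _ in range(n_darts):
        counts[random.randrange(n_cells)] += 1   # each cell equally likely
    return max(counts) >= 2

random.seed(42)
trials = 10_000
share = sum(has_cluster() for _ in range(trials)) / trials
# share is essentially 1.0: the probability that all 16 darts land in
# 16 different cells is 16!/16**16, roughly one in a million.
```

So a grid that shows clusters tells us nothing by itself; clusters are what pure randomness looks like.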
Imagine overlaying the grid on a map to estimate the probability of an event. In this case, assume the location of darts on the grid as the location of the event on the map.
So, though analyzing random events might help determine what has worked in the past, the same rules may not apply when predicting the future.
In other words, we humans tend to see patterns when, in fact, the results are completely random.