# Temporal Reasoning

WARNING: this page may contain wrong or obsolete information and is in the process of being rewritten.

The PLN book leaves some definitions open regarding temporal reasoning. Here's an attempt to fill the blanks.

WARNING! The Value subsystem provides a superior mechanism for storing, recording, and working with fleeting and time-changing data. There are multiple problems with using Atoms for transient data, including: (a) it is CPU-intensive to create an Atom; creating Values is much faster. (b) It is CPU-intensive to insert an Atom into the AtomSpace. Values are not stored in the AtomSpace, and there is no functional need for the indexing services that the AtomSpace provides, so it's pointless to store fleeting data there anyway. (c) It can become very difficult, sometimes impossible, to remove Atoms from the AtomSpace. This can happen when an Atom becomes part of the outgoing set of another Atom; in such a case, it would be illegal to delete the Atom. For these three reasons, use Values instead. They just work better. See also SpaceServer for additional space-time management ideas.

Thus, for most practical tasks involving robot perception, such as the tagging of visual, auditory, spatial or event data with timestamps or time intervals, values are simply a superior system. For practical reasoning about robot perception, such as determining distances, angles, and elapsed time, it makes more sense to write custom scheme code (or custom C++ code) to perform those specific calculations (e.g. to compute the distance between two objects).

For most practical natural language tasks involving robot perception, such as answering questions about in-front, behind, near, far, bigger, smaller, inside-of, next-to, touching, it makes more sense to write custom scheme code (or custom C++ code) to perform those specific calculations, and only then place the results into the atomspace, where they can be reasoned on.

Flooding the atomspace with a torrent of temporal data, which is then promptly deleted, as it goes immediately stale, is not a practical use of CPU and RAM resources at this time.

# Basic Temporal Predicates

Here are suggestions for the fundamental definitions of temporal predicates that other predicates (like PredictiveImplication) can be built upon.

The primary temporal predicate is

AtTime


It is important enough to have its own dedicated link

AtTimeLink <TV>
A
T


which would be equivalent to

EvaluationLink <TV>
Predicate "AtTime"
A
T


where T is a TimeNode, which represents a timestamp, an interval (a pair of timestamps), or a distribution. WARNING: I don't think we should have TimeNode represent an interval; it will make many things more complicated. Better to introduce a TimeIntervalLink.

(BTW, a timestamp is an unsigned long in the code, we should definitely have a typedef for it, see Quick_tasks#TimeServer for a quick task about that.)

The definition of AtTimeLink when T is an instant seems clear enough, but what about when it is a time interval?

Two possibilities:

## AtTime as average

It could be the average of AtTime over the interval, so

AtTimeLink <TV>
A
[t1:t2)


${\displaystyle TV.s={\frac {\sum _{t=t_{1}}^{t_{2}-1}TV_{t}.s}{t_{2}-t_{1}}}}$

where TVt is

AtTimeLink <TVt>
A
t


TV.c should be defined so that it best matches the distribution of TVt.s over this interval (if one really wants to be accurate, one can indefinitize the calculation using the confidence of TVt too).

You may notice that this is a non-weighted average, because I assume that all situations captured by the concept of time are uniformly distributed.

This definition is enough to distinguish an event with a quasi-constant TV throughout an interval from an event whose TV has great variance. For instance, the TV measuring the happiness of a stoic man over a certain interval of time would be rather sharp, while the TV of a manic-depressive man would be rather smooth.

However, using an average does not let us distinguish TVs that are almost identical but have low confidence at each instant over the interval from TVs with high confidence that are distributed with high variance. But I don't think that is a problem: if we have low confidence in the first place, it is more likely that the actual probabilities have high variances, so it's not that much of a difference (though I suppose there is some). To capture that we would need some ForAll, or fuzzy ForAll, formula instead of Average. I don't think we need that for now, but who knows.
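The averaging definition can be sketched in plain Python. The dict-of-strengths representation and the function name below are illustrative, not OpenCog API; the variance is computed alongside the mean merely as a stand-in for how TV.c should track the distribution of the instant-level strengths:

```python
def at_time_avg(instant_strengths, t1, t2):
    """Strength of AtTime(A, [t1:t2)) as the unweighted mean of the instant
    strengths, i.e. sum over t in [t1:t2) of TVt.s, divided by (t2 - t1).
    Also returns the variance, as a proxy for shaping TV.c."""
    n = t2 - t1
    mean = sum(instant_strengths[t] for t in range(t1, t2)) / n
    var = sum((instant_strengths[t] - mean) ** 2 for t in range(t1, t2)) / n
    return mean, var

# Quasi-constant strength ("stoic") vs high-variance strength with the
# same mean ("manic-depressive"): the means agree, the variances do not.
stoic = {t: 0.7 for t in range(10)}
swings = {t: (1.0 if t % 2 == 0 else 0.4) for t in range(10)}

print(at_time_avg(stoic, 0, 10))   # mean ~0.7, variance 0
print(at_time_avg(swings, 0, 10))  # mean ~0.7, variance ~0.09
```

This makes the point in the paragraph above concrete: the strength alone cannot separate the two profiles; only the confidence (here, the variance term) can.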

Another nice property of that definition is that

AtTimeLink <TV>
A
t1:t2


is equivalent to

ContextLink <TV>
SatisfyingSet
AtTime
Universe
t1:t2
A


that is, if SatisfyingSet(EvaluationLink(AtTime(t1:t2, Universe))) is the concept of the time interval [t1:t2), then the AtTimeLink of the event A in this interval is A in the context of this interval concept. (Not sure it is useful, but I like it.)

This definition of AtTimeLink, I think, covers the definitions of HoldAt and HoldThroughout predicates defined in the PLN book.

## AtTime as SomeTime

Another possibility is to have

AtTime <TV>
A
[t1:t2)


defined as

Exists <TV>
t in T
AtTime
A
t


So one can interpret this definition of AtTime as SomeTime, that is A occurs sometime between t1 and t2.

Or perhaps one could define:

AtTime <TV>
A
[t1:t2)


as

OrLink <TV>
AtTime
A
t1
...
AtTime
A
t2-1


Of course, if one is dealing with binary TVs, one can use either of the two; the resulting TV would be 0 or 1 either way.
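For the binary case, the SomeTime reading reduces to an existential check over the instants of the interval. A minimal sketch (the predicate and function name are illustrative, not OpenCog API):

```python
def at_time_sometime(holds_at, t1, t2):
    """Binary 'AtTime as SomeTime': 1 iff A holds at some instant t in
    [t1:t2), which for crisp TVs coincides with the big OrLink form."""
    return 1 if any(holds_at(t) for t in range(t1, t2)) else 0

holds = lambda t: t == 5               # A holds only at t = 5
print(at_time_sometime(holds, 0, 10))  # 1: some instant in [0:10) satisfies A
print(at_time_sometime(holds, 6, 10))  # 0: no instant in [6:10) satisfies A
```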

## InitiatedAt

It seems the most natural way to define InitiatedAt is as the positive part of the derivative of AtTimeLink. That is, if one takes the timestamp counter as the time unit, we have

InitiatedAtLink <TV'>
A
t


with TV' = max(0, TV2 - TV1), such that

AtTimeLink <TV1>
A
t-1

AtTimeLink <TV2>
A
t


One can define InitiatedAtLink over an interval the same way AtTimeLink is defined over an interval.

## TerminatedAt

Similarly, TerminatedAt is defined as the negation of the negative part of the derivative of AtTimeLink, so

TerminatedAtLink <TV'>
A
t


with TV' = -min(0, TV2 - TV1), such that

AtTimeLink <TV1>
A
t-1


and

AtTimeLink <TV2>
A
t


I think this would be enough for a while. Let me know what you think. The next step will be to define and implement the inference rules according to these definitions.
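The two derivative-based definitions above can be sketched together. The dict of instant strengths and the function names are illustrative, not OpenCog API:

```python
def initiated_at(s, t):
    """InitiatedAt(A, t): positive part of the discrete derivative of the
    instant strengths, max(0, TV2.s - TV1.s) for t and t-1."""
    return max(0.0, s[t] - s[t - 1])

def terminated_at(s, t):
    """TerminatedAt(A, t): negated negative part of the same derivative,
    -min(0, TV2.s - TV1.s)."""
    return -min(0.0, s[t] - s[t - 1])

# An event that switches on at t=1, holds, then mostly fades at t=3.
s = {0: 0.0, 1: 0.8, 2: 0.8, 3: 0.1}
print(initiated_at(s, 1))   # 0.8: rising edge
print(terminated_at(s, 3))  # ~0.7: falling edge
```

Note that at every t, `initiated_at - terminated_at` recovers the raw derivative, so the two predicates decompose the change in strength into its positive and negative parts.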

## CumulateInitiatedAt

A useful variant of InitiatedAt might be as follows

CumulateInitiatedAt <TV'>
A
[t1:t2)


TV' = max(0, TV2 - TV1), such that

AtTimeLink <TV1>
A
t1-1

AtTimeLink <TV2>
A
t2-1


So when we consider the initiation of an event over a certain period of time, we look at the overall additional strength it gained over that period. It is perhaps closer to human cognition, and it probably makes definitions involving larger temporal scales simpler.

The only perhaps weird side effect is that

CumulateInitiatedAt <0>
A
[t1:t2)


if A got both initiated and terminated within [t1:t2). So for instance

CumulateInitiatedAt <1>
SunRise
[4am:8am)


but

CumulateInitiatedAt <0>
SunRise
[4am:11pm)


I think that's OK; if not, it's certainly possible to tweak the definition so that

CumulateInitiatedAt <1>
SunRise
[4am:11pm)


# AndSeq, PredictiveImplication, etc

Here's a summary of the thread [3].

## AndSeq

EDIT: this definition is currently slightly invalid; see PredictiveImplicationLink#With_time_interval until the page gets corrected.

AndSeq <TV>
T
A
B


is defined as

AverageLink <TV>
t
AtTime
A
t
AtTime
B
t+T


where T is a timestamp or a time interval; in the interval case, t+T = [t1+t:t2+t).

As for a definition of AndSeq with no temporal label, it could be the average over all T, but ultimately I don't know how useful that is; better to wait till we need it.
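For an instant T, the AndSeq definition can be sketched as an average over t of the conjunction of the two instant strengths. Using min as the fuzzy conjunction is an assumption here (the page leaves the exact conjunction formula implicit), and the names are illustrative, not OpenCog API:

```python
def and_seq(sa, sb, T, t1, t2):
    """AndSeq(T, A, B) over the instants [t1:t2): average over t of
    And(AtTime(A, t), AtTime(B, t+T)), with min as the fuzzy And."""
    n = t2 - t1
    return sum(min(sa[t], sb[t + T]) for t in range(t1, t2)) / n

# B fires exactly 2 ticks after each firing of A.
sa = {t: (1.0 if t in (2, 5) else 0.0) for t in range(12)}
sb = {t: (1.0 if t in (4, 7) else 0.0) for t in range(12)}

print(and_seq(sa, sb, 2, 0, 10))  # 0.2: the conjunction holds at t=2 and t=5
```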

## AndSeqInitiated

One can go with another variant using InitiatedAt instead of AtTime. Let's call it AndSeqInitiated, so

AndSeqInitiated <TV>
T
A
B


is defined as

AverageLink <TV>
t
InitiatedAt
A
t
InitiatedAt
B
t+T


Note that a side effect (good or bad?) of this definition is that, whether the event A is initiated instantaneously or progressively, as long as the delay within which B is initiated is still within T, these two cases will have the same TV! That is due to the average, of course.

Of course one can imagine other variants using TerminatedAt, or combining InitiatedAt and TerminatedAt, etc.

## PredictiveImplication

PredictiveImplication <TV>
T
P
Q


is equivalent to

Implication <TV>
P
AndSeq
T
P
Q

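For crisp events, this equivalence reduces PredictiveImplication to a conditional frequency: among the instants where P holds, the fraction where Q holds T ticks later. This frequentist reading is a simplifying assumption, and the names are illustrative, not OpenCog API:

```python
def predictive_implication(p, q, T, t1, t2):
    """PredictiveImplication(T, P, Q) for crisp events over [t1:t2):
    the fraction of P-instants followed by a Q-instant T ticks later."""
    p_times = [t for t in range(t1, t2) if p(t)]
    if not p_times:
        return 0.0
    return sum(1 for t in p_times if q(t + T)) / len(p_times)

p = lambda t: t % 4 == 0  # P fires at t = 0, 4, 8, ...
q = lambda t: t % 4 == 1  # Q fires one tick after each P
print(predictive_implication(p, q, 1, 0, 12))  # 1.0: Q always follows P by 1
print(predictive_implication(p, q, 2, 0, 12))  # 0.0: never by 2
```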

## PredictiveImplicationInitiatedAt

As for variants using InitiatedAt or TerminatedAt instead of AtTime, one can still define them in the style of the previous definition. Let's give an example for InitiatedAt. One must use P_InitiatedAt instead of P, so that

P_InitiatedAt <TVp>


is equivalent to

AverageLink <TVp>
t
InitiatedAt
P
t


Then

PredictiveImplicationInitiatedAt <TV>
T
P
Q


is equivalent to

Implication <TV>
P_InitiatedAt
AndSeqInitiated
T
P
Q


# Allen Interval Algebra

Some prototype work creating truth value formulas based on Allen Interval Algebra was done in the deprecated Python PLN version and still needs to be ported to the new URE-based version; see Spatial and Temporal Inference Rules.
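As a starting point for such a port, a few of Allen's thirteen crisp interval relations can be written directly for half-open `[start, end)` intervals (the tuple representation and function names are illustrative, and the truth-value formulas of the old prototype would generalize these booleans):

```python
# Crisp versions of four Allen relations on half-open intervals (start, end).
def before(a, b):   return a[1] < b[0]           # a ends strictly before b starts
def meets(a, b):    return a[1] == b[0]          # a ends exactly where b starts
def overlaps(a, b): return a[0] < b[0] < a[1] < b[1]
def during(a, b):   return b[0] < a[0] and a[1] < b[1]

print(before((1, 3), (4, 6)))    # True
print(meets((1, 3), (3, 6)))     # True
print(overlaps((1, 5), (3, 8)))  # True
print(during((4, 5), (3, 8)))    # True
```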

# Temporal PLN rules

## Equivalence of AtTime over a predicate (or concept) and its evaluations (or members)

AtTimeLink

LambdaLink