Time series - Core Foundations
Understand the definition and key properties of time series—including stationarity and ergodicity—and how they underpin analysis and forecasting.
Summary
Time Series: Definition and Basic Concepts
What Is a Time Series?
A time series is simply a sequence of data points collected in chronological order. The key distinguishing feature of time series data is that observations follow a natural ordering based on when they were measured. For example, daily stock prices, monthly unemployment rates, or annual rainfall measurements are all time series.
In most practical applications, time series observations are recorded at equally spaced time intervals. This regularity is important because it allows us to work with consistent, predictable gaps between measurements. Time series data can span remarkably different scales—from microseconds in high-frequency financial data to centuries in climate records.
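A minimal sketch of this structure in Python, using hypothetical daily rainfall numbers: each observation pairs a timestamp with a value, and the equal spacing between observations can be checked directly.

```python
from datetime import date, timedelta

# Hypothetical daily rainfall readings (mm), one per day: equally spaced
start = date(2024, 1, 1)
rainfall = [(start + timedelta(days=i), round(2.0 + 0.5 * (i % 7), 1))
            for i in range(14)]

# The ordering carries information: each observation has a timestamp,
# and the gap between consecutive timestamps is constant (1 day).
gaps = {(b[0] - a[0]).days for a, b in zip(rainfall, rainfall[1:])}
print(gaps)  # {1}
```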
The image above shows a classic example: measurements over time, with visible irregular fluctuations around an underlying trend (shown in red).
Why Time Series Are Different
Time series data stand apart from other types of data in one crucial way: there is a natural temporal ordering that matters. This is fundamentally different from cross-sectional data, where each observation (say, the height of different students in a classroom) has no inherent ordering. You could rearrange cross-sectional observations randomly without losing any information, but rearranging a time series destroys everything important about it.
Another key characteristic is autocorrelation: observations that occur close together in time tend to be more strongly related to each other than observations far apart. If today's stock price is $100, tomorrow's price is more likely to be near $100 than to be dramatically different. This autocorrelation structure is what makes time series analysis distinct from analyzing independent observations.
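This decay of dependence with distance can be demonstrated on simulated data. The sketch below uses a hypothetical AR(1) process, where each value is built from the previous value plus noise, and estimates the sample autocorrelation at a short and a long lag.

```python
import random
import statistics

random.seed(42)

# Simulate an AR(1) process x[t] = 0.8*x[t-1] + noise: each value is
# strongly tied to the previous one, as described above.
x = [0.0]
for _ in range(1999):
    x.append(0.8 * x[-1] + random.gauss(0, 1))

def autocorr(series, lag):
    """Sample autocorrelation of a series at a given lag."""
    mean = statistics.fmean(series)
    var = sum((v - mean) ** 2 for v in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(len(series) - lag))
    return cov / var

print(round(autocorr(x, 1), 2))   # near 0.8: neighbours move together
print(round(autocorr(x, 20), 2))  # near 0: distant points are almost unrelated
```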
In statistical terms, we often model a time series as a stochastic process—a random process that evolves over time. This framework helps us capture both the systematic patterns and the randomness inherent in real-world data.
The Goals of Time Series Work
Time series work serves two main purposes: analysis and forecasting.
Time series analysis seeks to understand what has happened in the data by extracting meaningful statistics and uncovering underlying patterns. Three types of patterns commonly appear:
Trends: long-term upward or downward movements
Seasonal effects: regular patterns that repeat at fixed intervals (for example, retail sales that spike every December)
Irregular fluctuations: random or unpredictable variations
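The three components above can be combined into a toy series; this is a hypothetical sketch, not a decomposition of any real dataset.

```python
import math
import random

random.seed(0)

n = 48  # four years of hypothetical monthly data
trend = [10 + 0.5 * t for t in range(n)]                            # long-term rise
seasonal = [5 * math.sin(2 * math.pi * t / 12) for t in range(n)]   # repeats every 12 months
irregular = [random.gauss(0, 1) for _ in range(n)]                  # random noise

# The observed series is the sum of the three components.
series = [trend[t] + seasonal[t] + irregular[t] for t in range(n)]
```

Classical decomposition methods work in the opposite direction: given only `series`, they try to recover estimates of the trend, seasonal, and irregular parts.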
Time series forecasting takes this understanding further by building a model that predicts future values based on past observations. A forecasting model learns the patterns in historical data to estimate what will likely happen next.
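A deliberately simple baseline illustrates the idea: the "seasonal naive" forecast predicts that next January will look like last January. The numbers below are hypothetical monthly sales figures.

```python
# Hypothetical monthly sales: two years of data, with a December spike.
history = [100, 98, 105, 110, 120, 130, 128, 125, 118, 112, 108, 140,   # year 1
           103, 101, 108, 113, 124, 133, 131, 129, 121, 115, 111, 145]  # year 2

season = 12
# Seasonal naive forecast: repeat the value from one full season ago.
forecast_next_month = history[-season]
print(forecast_next_month)  # 103: next January's forecast echoes last January
```

Real forecasting models (exponential smoothing, ARIMA, and others) go further, but they are usually judged against simple baselines like this one.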
Stationarity: A Fundamental Concept
Before we can effectively analyze or forecast a time series, we need to understand whether the data's statistical properties are stable or changing. This is where stationarity becomes critical.
Strict and Wide-Sense Stationarity
Strict stationarity means that the joint probability distribution of the time series does not change over time. More simply: take any block of observations from your series; its joint distribution should be identical to that of any equally long block taken anywhere else in time. This is a very strong requirement.
Wide-sense (or second-order) stationarity is a weaker, more practical condition that's easier to check in real data. A time series is wide-sense stationary if:
The mean is constant over time (doesn't drift up or down)
The autocovariance depends only on the lag—the distance between observations—not on the specific time points
For example, the autocovariance between measurements that are 5 days apart should be the same whether we're looking at measurements from January or July.
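This "lag-only" property can be checked empirically on simulated data. The sketch below uses white noise, which is wide-sense stationary, and compares the lag-5 sample autocovariance in two different time windows.

```python
import random
import statistics

random.seed(1)

# White noise is wide-sense stationary: constant mean, and autocovariance
# that depends only on the lag (it is ~0 for every nonzero lag).
x = [random.gauss(0, 1) for _ in range(4000)]

def autocov(series, lag):
    """Sample autocovariance of a series at a given lag."""
    mean = statistics.fmean(series)
    return sum((series[t] - mean) * (series[t + lag] - mean)
               for t in range(len(series) - lag)) / (len(series) - lag)

# Same lag, different time windows: the estimates should agree
# (the "January vs July" comparison from the text).
early, late = x[:2000], x[2000:]
print(round(autocov(early, 5), 3), round(autocov(late, 5), 3))  # both near 0
```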
Why does this matter? Many standard forecasting methods and statistical tests assume stationarity. If your data is non-stationary, these methods can give misleading results.
Non-Stationary Series and Transformations
Real-world time series frequently violate stationarity assumptions. A series might have a clear trend (the mean increases over time) or changing volatility (the variability increases or decreases). These are non-stationary properties.
The good news is that non-stationary series can often be transformed into stationary ones. The most common approach is differencing: instead of analyzing the raw values, we analyze the changes between consecutive observations.
This figure illustrates differencing: the top panel shows a non-stationary series with an obvious trend. The middle panel shows the yearly change (the difference between each year and the previous year), which removes the trend and creates a more stationary series. The bottom panel shows percentage change, another transformation option.
Other transformations—like taking logarithms, square roots, or other mathematical functions—can also help stabilize a series before analysis. The choice of transformation depends on the specific characteristics of your data.
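Both ideas can be shown on toy data: first differences remove a linear trend, and a log transform turns steady multiplicative growth into a constant log-difference. This is a minimal sketch on fabricated numbers.

```python
import math

# First differences remove a linear trend:
series = [5 + 2 * t for t in range(8)]               # trending mean
diffs = [b - a for a, b in zip(series, series[1:])]  # changes, not levels
print(diffs)  # [2, 2, 2, 2, 2, 2, 2]: constant level, trend gone

# A log transform stabilises multiplicative growth before differencing:
growth = [100 * 1.1 ** t for t in range(8)]          # 10% growth per period
log_diffs = [math.log(b) - math.log(a) for a, b in zip(growth, growth[1:])]
print([round(d, 3) for d in log_diffs])  # all 0.095, i.e. log(1.1)
```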
<extrainfo>
Seasonal and Seasonally Stationary Series
Some time series exhibit regular, predictable patterns that repeat at fixed intervals. A series might be seasonally stationary, meaning it has consistent seasonal patterns but no overall trend. For example, ice cream sales consistently spike in summer and dip in winter, year after year.
While seasonal patterns are important to recognize and model separately, the process of making a series stationary (through differencing or other methods) often handles seasonal non-stationarity along with trend-driven non-stationarity.
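Seasonal differencing makes this concrete: instead of subtracting the previous observation, subtract the observation one full season earlier (lag 12 for monthly data). A sketch on a purely sinusoidal toy series:

```python
import math

n, season = 36, 12
# Hypothetical monthly sales with a pure yearly cycle around a level of 20.
sales = [20 + 10 * math.sin(2 * math.pi * t / season) for t in range(n)]

# Seasonal difference: this month minus the same month last year.
seasonal_diff = [sales[t] - sales[t - season] for t in range(season, n)]
print(max(abs(d) for d in seasonal_diff))  # ~0: the repeating pattern is removed
```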
</extrainfo>
Ergodicity: Why a Single Series Can Be Enough
Ergodicity is a technical property with real practical importance: for an ergodic time series, time averages (calculated from a single long sequence of observations) converge to ensemble averages (calculated from many different possible realizations of the same process).
What does this mean practically? In many statistical situations, we'd like to calculate properties of a distribution by taking many independent samples. With time series data, we usually have only one realization (one path through time) rather than many independent samples. Ergodicity tells us that's okay—we can reliably estimate statistical properties from that single long time series.
This is why a long historical record of a time series can be statistically useful even though it's technically just one sequence of observations.
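A simulation makes the idea tangible. The sketch below uses a hypothetical stationary AR(1) process with true mean 0 and estimates that mean two ways: along one long path (time average) and across many independent paths (ensemble average).

```python
import random
import statistics

random.seed(7)

def ar1_path(length, phi=0.5):
    """One realization of a stationary AR(1) process with mean 0."""
    x = [0.0]
    for _ in range(length - 1):
        x.append(phi * x[-1] + random.gauss(0, 1))
    return x

# Time average: the mean along ONE long realization.
time_avg = statistics.fmean(ar1_path(20000))

# Ensemble average: the mean across MANY short realizations,
# taking the final value of each.
ensemble_avg = statistics.fmean(ar1_path(200)[-1] for _ in range(2000))

# For this ergodic process, both estimates approach the true mean, 0.
print(round(time_avg, 2), round(ensemble_avg, 2))
```

In practice we only ever see one path, so the agreement between the two estimates is exactly what licenses estimating statistics from a single long record.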
Flashcards
What is the definition of a time series?
A sequence of data points recorded in chronological order
At what typical intervals are observations in a time series taken?
Equally spaced time intervals
What is the natural ordering of time series data that distinguishes it from cross-sectional data?
Temporal ordering
How does the relationship between time series observations usually change as the time between them increases?
Observations become less strongly related
How is a time series often modeled to capture randomness and uncertainty?
As a stochastic process
How does the ordering of observations in cross-sectional data differ from time series data?
It has no inherent ordering
To what do spatial data relate observations instead of time?
Geographic locations
What characteristic of a strict-stationary process remains unchanged over time?
The joint distribution
What does ergodicity imply about the relationship between time averages and ensemble averages?
Time averages converge to ensemble averages
What is the practical implication of ergodicity for estimating statistical properties?
Properties can be estimated from a single long realization
Quiz
Time series - Core Foundations Quiz
Question 1: Which description best defines a time series?
- A sequence of data points recorded in chronological order (correct)
- A collection of observations taken at a single point in time
- Data arranged according to geographic locations rather than time
- Randomly ordered values with no inherent temporal structure
Question 2: What characterizes a strict‑stationary stochastic process?
- Its joint probability distribution does not change over time (correct)
- Its mean value is always zero
- Its autocovariance depends only on the lag between observations
- Its variance steadily increases as time progresses
Question 3: What distinguishes time series data from cross‑sectional data?
- They have a natural temporal ordering of observations (correct)
- Observations are assumed to be independent of each other
- Data points are linked to geographic locations
- Values are measured on a ratio scale
Question 4: Which statement accurately describes spatial data?
- They associate observations with geographic locations rather than time (correct)
- They consist of observations ordered chronologically over time
- They are collected simultaneously from multiple subjects at a single point in time
- They disregard any information about location
Key Concepts
Time Series Concepts
Time series
Stationarity
Non‑stationary time series
Seasonal time series
Stochastic process
Autocovariance
Statistical Properties
Ergodicity
Time series forecasting
Definitions
Time series
A sequence of data points recorded in chronological order, typically at equally spaced intervals.
Stationarity
A property of a stochastic process whose statistical characteristics, such as mean and autocovariance, do not change over time.
Ergodicity
The condition whereby time averages of a process converge to ensemble averages, allowing inference from a single long observation.
Seasonal time series
A series that exhibits regular, repeating patterns or cycles tied to specific periods such as months or quarters.
Non‑stationary time series
A series whose statistical properties evolve over time, often requiring differencing or transformation to analyze.
Stochastic process
A collection of random variables indexed by time, used to model the inherent randomness in time‑dependent phenomena.
Autocovariance
A measure of how values of a time series at different lags co‑vary, reflecting the dependence structure over time.
Time series forecasting
The practice of using historical time‑series data and models to predict future observations.