DLinear

DLinear is a simple neural network model that decomposes the input sequence into trend and seasonal components and applies a linear layer to each to forecast future values of the sequence.

DLinear Architecture

Are Transformers Effective for Time Series Forecasting? (github) examined the effectiveness of transformers for time series forecasting. In doing so, the authors proposed two simple models, NLinear and DLinear, and found them competitive with state-of-the-art transformer-based models.

DLinear works as follows:

  1. Compute the moving average along the time dimension. This becomes the trend component.
  2. Subtract the trend component from the input. This becomes the seasonal component.
  3. Pass each of the trend and seasonal components through its own linear layer.
  4. Sum the outputs of the two linear layers to produce the final output.
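The four steps above can be sketched in NumPy. This is a minimal, hypothetical forward pass, not the paper's implementation: `w_trend` and `w_seasonal` stand in for the two learned linear layers, and the edge-padded moving average mirrors the decomposition described above.

```python
import numpy as np

def moving_average(x, kernel_size):
    """Moving average along the time axis, with edge padding so the
    output has the same length as the input."""
    pad = (kernel_size - 1) // 2
    padded = np.concatenate([
        np.repeat(x[:1], pad),                     # repeat first value
        x,
        np.repeat(x[-1:], kernel_size - 1 - pad),  # repeat last value
    ])
    window = np.ones(kernel_size) / kernel_size
    return np.convolve(padded, window, mode="valid")

def dlinear_forward(x, w_trend, w_seasonal, kernel_size=25):
    """One DLinear forward pass for a univariate input window `x`.

    w_trend, w_seasonal: (horizon, len(x)) weight matrices standing in
    for the two learned linear layers (hypothetical initialization).
    """
    trend = moving_average(x, kernel_size)  # step 1: trend component
    seasonal = x - trend                    # step 2: seasonal component
    out_trend = w_trend @ trend             # step 3: linear layer each
    out_seasonal = w_seasonal @ seasonal
    return out_trend + out_seasonal         # step 4: sum the outputs
```

In the real model the two weight matrices are trained jointly by backpropagation; this sketch only illustrates the data flow.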

The paper states that normalization is applied in transformer models. However, since DLinear can handle trend on its own, we did not apply any data scaling, aside from trying a percentage-change target to capture trends.
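The percentage-change target mentioned above can be sketched as a pair of transforms (an assumed implementation; the function names are ours, and the series is assumed to be strictly positive):

```python
import numpy as np

def to_pct_change(series):
    """Convert a positive-valued series to period-over-period
    percentage changes (the model's training target)."""
    series = np.asarray(series, dtype=float)
    return series[1:] / series[:-1] - 1.0

def from_pct_change(last_value, pct_preds):
    """Reconstruct level forecasts from predicted percentage changes,
    compounding forward from the last observed value."""
    return last_value * np.cumprod(1.0 + np.asarray(pct_preds, dtype=float))
```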

DLinear for Air Passengers

To demonstrate model performance, we show the model's prediction results for the air passengers dataset. The cross-validation process identified the best transformation for making the time series stationary, along with the optimal hyperparameters. The Root Mean Squared Error on the next day's closing price was used to select the best model.
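The RMSE used for model selection is the standard definition, which can be computed as:

```python
import numpy as np

def rmse(actual, predicted):
    """Root Mean Squared Error between actual and predicted values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))
```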

In the chart, we display the model's predictions for the last split of cross validation and for the test data.

  1. train: Training data of the last split.
  2. validation: Validation data of the last split.
  3. prediction (train, validation): Predictions for the train and validation data periods. For each row (or sliding window) of data, predictions are made n days into the future (where n is set to 1, 2, 7). The predictions are then combined into a single series of dots. Since prediction accuracy decreases for large n, we see some hiccups in the predictions. Predictions from the tail of the train period spill into the validation period, since that is the future from the train data's viewpoint. These settings are somewhat peculiar, but they work well for testing whether the model's predictions are good enough.
  4. test(input): Test input data.
  5. test(actual): Test actual data.
  6. prediction(test): The model's prediction given the test input. There is only one prediction, made from the last row (or the last sliding window) of the test input; it corresponds to 1, 2, and 7 days after the end of 'test(input)'.
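The sliding-window setup described above can be sketched as follows. This is a simplified illustration; `input_len` is an assumption, and the horizon list matches the n = 1, 2, 7 in the text.

```python
import numpy as np

def sliding_windows(series, input_len, horizons=(1, 2, 7)):
    """Build (window, targets) pairs: for each input window, the targets
    are the values h steps after the window's end, for each horizon h."""
    series = np.asarray(series, dtype=float)
    max_h = max(horizons)
    xs, ys = [], []
    for start in range(len(series) - input_len - max_h + 1):
        end = start + input_len
        xs.append(series[start:end])
        ys.append([series[end + h - 1] for h in horizons])
    return np.array(xs), np.array(ys)
```

For the chart, the model is evaluated on every such window of the train and validation periods, while for prediction(test) only the final window of the test input is used.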