Financial Time Series Models

In the previous sections, we focused on modeling the trend and forecasting the mean of the U.S. Dollar Index (USD), as well as exploring its relationships with various macroeconomic variables. However, as a key financial market indicator, the USD exhibits volatility clustering: periods of high return variability tend to be followed by similarly turbulent intervals, while calm periods likewise cluster together. To capture and analyze this behavior, we apply ARCH/GARCH models in this section. These models help us understand how the volatility of the USD changes over time, especially during significant events such as the 2008 Financial Crisis and the 2020 COVID-19 pandemic. This analysis provides valuable insights for risk management and financial decision-making.

Time Series Plot

The US Dollar Index is a measure of the value of the US dollar relative to a basket of foreign currencies. It can be considered a type of asset whose price fluctuates over time, much as stock or commodity prices do. Therefore, we treat the USD Index as an asset price and compute returns from its price changes to capture its continuously compounded growth rate and time-varying volatility.

Code
library(tidyverse)
library(ggplot2)
library(forecast)
library(astsa)
library(xts)
library(tseries)
library(fpp2)
library(fma)
library(lubridate)
library(TSstudio)
library(quantmod)
library(tidyquant)
library(plotly)
library(knitr)
library(kableExtra)
library(fGarch)

# Load data
invisible(getSymbols("DX-Y.NYB", src = "yahoo", from = "2005-01-01", to = "2024-12-31"))
dxy <- data.frame(Date = index(`DX-Y.NYB`),
                       Open = `DX-Y.NYB`[, "DX-Y.NYB.Open"], 
                       High = `DX-Y.NYB`[, "DX-Y.NYB.High"], 
                       Low = `DX-Y.NYB`[, "DX-Y.NYB.Low"], 
                       Close = `DX-Y.NYB`[, "DX-Y.NYB.Close"],
                       Adjusted = `DX-Y.NYB`[,"DX-Y.NYB.Adjusted"])
colnames(dxy) <- c("Date", "Open", "High", "Low", "Close", "Adjusted")
dxy <- na.omit(dxy)
dxy$Date <- as.Date(dxy$Date)  # index() already returns Date; this just ensures the class
figc <- dxy %>% plot_ly(x = ~Date, type = "candlestick",
                       open = ~Open, close = ~Close,
                       high = ~High, low = ~Low)
figc <- figc %>% layout(title = "U.S. Dollar Index Candlestick Plot")

figc
Code
gg <- ggplot(dxy, aes(x = Date, y = log(Adjusted))) +   
  geom_line(color = 'purple') +  
  labs(x = "Year", title = "Log-transformed USD")

plotly_gg <- ggplotly(gg)
plotly_gg

When the log transformation is applied to the USD prices, the resulting series becomes less variable than the original price series. The log transformation reduces the price-dependent variance and puts the data on a more manageable scale, making the series more suitable for analysis.

We use log differences to calculate returns, which are comparable to percentage changes in stock prices. Specifically, if \(P_t\) is the adjusted price at time \(t\), the log return is calculated as \(\log(P_t) - \log(P_{t-1}) = \log\left(\frac{P_t}{P_{t-1}}\right)\). This transformation is beneficial because it makes returns additive over time. It also helps stabilize variance, which is important for modeling volatility.

Code
returns <- diff(log(dxy$Adjusted))
chartSeries(xts(returns, order.by = dxy$Date[-1]), theme = "white") 

The differenced log prices of the USD appear more mean-reverting, indicating that the series tends to return to a central value over time. Additionally, the variance of the differenced log series changes over time, suggesting a noticeable pattern of volatility clustering. This behavior is common in financial time series and is often modeled using ARCH or GARCH models.

ACF and PACF Plots

Code
ggtsdisplay(returns, main="ACF and PACF of Returns")

Both the ACF and PACF plots of returns show few significant autocorrelations. This indicates that there is little linear structure in the conditional mean of the returns.

Code
ggtsdisplay(abs(returns), main="ACF and PACF of Absolute Returns")

Code
ggtsdisplay(returns^2, main="ACF and PACF of Squared Returns")

While the raw returns appear to be uncorrelated, the ACF and PACF plots of \(|r_t|\) and \(r_t^2\) show significant autocorrelations at multiple lags. This provides strong evidence that the returns are not independently and identically distributed, and supports the need for conditional variance models such as ARCH or GARCH to capture the underlying time-varying volatility structure.
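
As a complementary check on the graphical evidence, one can run Ljung-Box tests on the raw and squared returns; a minimal sketch using the returns series computed above with Box.test() from base R:

Code
# Ljung-Box tests on raw vs. squared returns (uses `returns` from above)
Box.test(returns, lag = 20, type = "Ljung-Box")     # raw returns: expect little autocorrelation
Box.test(returns^2, lag = 20, type = "Ljung-Box")   # squared returns: expect strong autocorrelation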

ARIMA + GARCH Model

There are two common approaches to fitting a GARCH model to time series data. The first approach fits an ARIMA model to the (log) price series to capture the conditional mean dynamics; the residuals from the ARIMA model are then used to fit a GARCH model for the time-varying volatility. The second approach applies the GARCH model directly to the returns series.

Here, we first perform the ARIMA + GARCH approach.

ARIMA on Log Prices

Fit an ARIMA model to the log-transformed prices.

Code
ggtsdisplay(log(dxy$Adjusted), main="ACF and PACF of Log Prices")

The ACF plot of the log-transformed USD prices shows significant autocorrelations at multiple lags. This gradual decay suggests that the log prices exhibit long-term dependence, indicating that differencing is needed to make the series stationary.
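
A formal unit-root check supports this reading. Below is a minimal sketch using adf.test() from the already-loaded tseries package; the expected outcomes in the comments are assumptions about the test results, not reported output.

Code
# Augmented Dickey-Fuller tests on log prices and their first differences
adf.test(log(dxy$Adjusted))        # levels: expect a large p-value (non-stationary)
adf.test(diff(log(dxy$Adjusted)))  # differences: expect a small p-value (stationary)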

Code
ggtsdisplay(diff(log(dxy$Adjusted)), main="ACF and PACF of Differenced Log Prices")

The ACF and PACF plots show a significant spike at lag 6, as well as a few spikes beyond lag 20. These patterns suggest that the appropriate orders for both p and q in the ARIMA model lie somewhere between 0 and 6, so we search over that range.

Code
# Grid-search ARIMA(p, 1, q) models and collect their AIC, BIC, and AICc
ARIMA.c = function(p1, p2, q1, q2, data) {
  d = 1
  i = 1
  ls = matrix(rep(NA, 6 * 1000), nrow = 1000)
  
  for (p in p1:p2) {
    for (q in q1:q2) {
      if (p + d + q <= 9) {
        
        # Skip (p, d, q) combinations for which Arima() throws an error
        model <- tryCatch({
          Arima(data, order = c(p, d, q), include.drift = TRUE)
        }, error = function(e) {
          return(NULL)
        })
        
        if (!is.null(model)) {
          ls[i, ] = c(p, d, q, model$aic, model$bic, model$aicc)
          i = i + 1
        }
      }
    }
  }
  temp = as.data.frame(ls)
  names(temp) = c("p", "d", "q", "AIC", "BIC", "AICc")
  temp = na.omit(temp)
  return(temp)
}

highlight_output = function(output, type="ARIMA") {
    highlight_row <- c(which.min(output$AIC), which.min(output$BIC), which.min(output$AICc))
    knitr::kable(output, align = 'c', caption = paste("Comparison of", type, "Models")) %>%
    kable_styling(full_width = FALSE, position = "center") %>%
    row_spec(highlight_row, bold = TRUE, background = "#FFFF99")  # Highlight row in yellow
}

output=ARIMA.c(p1=0,p2=6,q1=0,q2=6,data=log(dxy$Adjusted))
highlight_output(output)
Comparison of ARIMA Models
p d q AIC BIC AICc
0 1 0 -39549.01 -39535.96 -39549.01
0 1 1 -39548.08 -39528.51 -39548.08
0 1 2 -39546.49 -39520.39 -39546.48
0 1 3 -39545.42 -39512.80 -39545.41
0 1 4 -39543.75 -39504.60 -39543.73
0 1 5 -39541.75 -39496.08 -39541.73
0 1 6 -39541.94 -39489.74 -39541.91
1 1 0 -39548.06 -39528.49 -39548.06
1 1 1 -39546.07 -39519.98 -39546.06
1 1 2 -39544.50 -39511.87 -39544.48
1 1 3 -39543.41 -39504.27 -39543.40
1 1 4 -39541.74 -39496.08 -39541.72
1 1 5 -39539.76 -39487.56 -39539.73
1 1 6 -39539.92 -39481.21 -39539.89
2 1 0 -39546.53 -39520.43 -39546.52
2 1 1 -39544.52 -39511.90 -39544.51
2 1 2 -39542.13 -39502.98 -39542.11
2 1 3 -39541.32 -39495.65 -39541.30
2 1 4 -39539.51 -39487.31 -39539.48
2 1 5 -39537.52 -39478.80 -39537.49
2 1 6 -39559.37 -39494.13 -39559.33
3 1 0 -39545.35 -39512.73 -39545.34
3 1 1 -39543.36 -39504.21 -39543.34
3 1 2 -39541.22 -39495.55 -39541.20
3 1 3 -39556.18 -39503.98 -39556.15
3 1 4 -39557.72 -39499.00 -39557.68
3 1 5 -39557.99 -39492.75 -39557.95
4 1 0 -39543.72 -39504.58 -39543.71
4 1 1 -39541.73 -39496.06 -39541.70
4 1 2 -39560.65 -39508.45 -39560.62
4 1 3 -39546.40 -39487.68 -39546.36
4 1 4 -39554.13 -39488.89 -39554.09
5 1 0 -39541.79 -39496.12 -39541.77
5 1 1 -39539.79 -39487.59 -39539.76
5 1 2 -39537.78 -39479.06 -39537.74
5 1 3 -39537.72 -39472.48 -39537.68
6 1 0 -39542.06 -39489.87 -39542.03
6 1 1 -39540.08 -39481.36 -39540.04
6 1 2 -39559.29 -39494.05 -39559.25

The ARIMA(0,1,0) model has the lowest BIC value, while the ARIMA(4,1,2) model has the lowest AIC and AICc values. Therefore, we will consider both models for further analysis.

Code
auto.arima(log(dxy$Adjusted))
Series: log(dxy$Adjusted) 
ARIMA(1,1,0) 

Coefficients:
         ar1
      0.0146
s.e.  0.0141

sigma^2 = 2.27e-05:  log likelihood = 19776.68
AIC=-39549.37   AICc=-39549.37   BIC=-39536.32

The auto.arima() function suggests an ARIMA(1,1,0) model.

ARIMA(0,1,0)

Code
model_output <- capture.output(sarima(log(dxy$Adjusted), 0,1,0))

Code
start_line <- grep("Coefficients", model_output)  # Locate where coefficient details start
end_line <- length(model_output)  # Last line of output
cat(model_output[start_line:end_line], sep = "\n")
Coefficients: 
         Estimate    SE t.value p.value
constant    1e-04 1e-04  0.8257   0.409

sigma^2 estimated as 2.26922e-05 on 5034 degrees of freedom 
 
AIC = -7.854818  AICc = -7.854818  BIC = -7.852226 
 

ARIMA(4,1,2)

Code
model_output <- capture.output(sarima(log(dxy$Adjusted), 4,1,2))

Code
start_line <- grep("Coefficients", model_output)  # Locate where coefficient details start
end_line <- length(model_output)  # Last line of output
cat(model_output[start_line:end_line], sep = "\n")
Coefficients: 
         Estimate     SE  t.value p.value
ar1       -0.1996 0.0331  -6.0221  0.0000
ar2       -0.9247 0.0487 -18.9926  0.0000
ar3        0.0069 0.0158   0.4363  0.6626
ar4        0.0162 0.0156   1.0383  0.2992
ma1        0.2147 0.0301   7.1378  0.0000
ma2        0.9225 0.0464  19.8788  0.0000
constant   0.0001 0.0001   0.7335  0.4633

sigma^2 estimated as 2.258561e-05 on 5028 degrees of freedom 
 
AIC = -7.857129  AICc = -7.857125  BIC = -7.846763 
 

ARIMA(1,1,0)

Code
model_output <- capture.output(sarima(log(dxy$Adjusted), 1,1,0))

Code
start_line <- grep("Coefficients", model_output)  # Locate where coefficient details start
end_line <- length(model_output)  # Last line of output
cat(model_output[start_line:end_line], sep = "\n")
Coefficients: 
         Estimate     SE t.value p.value
ar1        0.0145 0.0141  1.0260  0.3049
constant   0.0001 0.0001  0.8118  0.4169

sigma^2 estimated as 2.268745e-05 on 5033 degrees of freedom 
 
AIC = -7.85463  AICc = -7.854629  BIC = -7.850743 
 

The ARIMA(4,1,2) model is preferred because it has the lowest AIC value and most of its coefficients are statistically significant at the 0.1% level. Its additional AR and MA terms allow it to capture more of the lagged structure in the series, making it more effective at modeling complex patterns in the data.
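
For a quick side-by-side view, the three candidates can be refit with Arima() and their information criteria tabulated. This is a sketch; the values are on the raw log-likelihood scale used by Arima(), so they differ from the normalized criteria reported by sarima().

Code
# Refit the three candidate models and compare AIC/BIC
cands <- list("ARIMA(0,1,0)" = c(0, 1, 0),
              "ARIMA(1,1,0)" = c(1, 1, 0),
              "ARIMA(4,1,2)" = c(4, 1, 2))
sapply(cands, function(ord) {
  fit <- Arima(log(dxy$Adjusted), order = ord)
  c(AIC = AIC(fit), BIC = BIC(fit))
})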

ARIMA(4,1,2)

Code
# Fit ARIMA(4,1,2) model to the log-transformed USD Index prices
arima_412 <- Arima(log(dxy$Adjusted), order = c(4,1,2))
summary(arima_412)
Series: log(dxy$Adjusted) 
ARIMA(4,1,2) 

Coefficients:
          ar1      ar2     ar3     ar4     ma1     ma2
      -0.1977  -0.9276  0.0075  0.0154  0.2130  0.9248
s.e.   0.0314   0.0555  0.0158  0.0163  0.0281  0.0535

sigma^2 = 2.262e-05:  log likelihood = 19787.98
AIC=-39561.95   AICc=-39561.93   BIC=-39516.28

Training set error measures:
                       ME        RMSE         MAE         MPE       MAPE
Training set 5.661833e-05 0.004752691 0.003530632 0.001192505 0.07874363
                  MASE         ACF1
Training set 0.9999802 0.0004543161

GARCH on Residuals

Fit a GARCH model to the residuals of the ARIMA model.

To determine the appropriate values for GARCH(p, q), we examine the ACF and PACF plots of the squared residuals from the ARIMA model. Significant spikes in the ACF and PACF at certain lags can provide insight into the orders of p (ARCH terms) and q (GARCH terms) for the GARCH model. Specifically, the number of significant lags in the ACF and PACF can guide the selection of the GARCH model’s parameters.

Code
# Get residuals from the fitted ARIMA model
arima.res <- arima_412$residuals
# Compute squared residuals
squared.arima.res <- arima.res^2

ggtsdisplay(arima.res, main="ACF and PACF of ARIMA Residuals")

Code
ggtsdisplay(squared.arima.res, main="ACF and PACF of Squared ARIMA Residuals")

We can see that the ACF of the squared residuals remains significant over the first 40 lags, while the PACF is significant up to lag 9, with a few isolated significant spikes thereafter. We therefore run a grid search over GARCH orders with p = 1:9 and q = 0:9.

Code
# Initialize list to store models
models <- list()
cc <- 1

# Loop over GARCH(p, q) combinations
for (p in 1:9) {
  for (q in 0:9) {
    models[[cc]] <- garch(arima.res, order = c(q, p), trace = FALSE)
    cc <- cc + 1
  }
}

# Extract AIC values for all models
GARCH_AIC <- sapply(models, AIC)

# Find and return the best model (lowest AIC)
best_index <- which.min(GARCH_AIC)
models[[best_index]]

Call:
garch(x = arima.res, order = c(q, p), trace = FALSE)

Coefficient(s):
       a0         a1         b1  
9.511e-08  3.575e-02  9.601e-01  

This suggests that the best model is GARCH(1,1), as it yields the lowest AIC, indicating a better fit to the data compared to other models.
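
To see how close the competing specifications are, each list index can be mapped back to its (p, q) pair and the fits ranked by AIC. This sketch assumes the models list and GARCH_AIC vector from the chunk above, with the loop having completed for all 90 combinations.

Code
# Rank the grid-search fits by AIC (the inner loop varies q fastest)
grid <- expand.grid(q = 0:9, p = 1:9)
aic_table <- data.frame(p = grid$p, q = grid$q, AIC = GARCH_AIC)
head(aic_table[order(aic_table$AIC), ], 5)   # top five candidates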

Code
# Fit a GARCH(1,1) model to ARIMA residuals and display summary
garch_model <- garchFit(~ garch(1,1), data = arima.res, trace = FALSE)
summary(garch_model)

Title:
 GARCH Modelling 

Call:
 garchFit(formula = ~garch(1, 1), data = arima.res, trace = FALSE) 

Mean and Variance Equation:
 data ~ garch(1, 1)
<environment: 0x00000150740a2580>
 [data = arima.res]

Conditional Distribution:
 norm 

Coefficient(s):
        mu       omega      alpha1       beta1  
3.7414e-05  9.5020e-08  3.5770e-02  9.6012e-01  

Std. Errors:
 based on Hessian 

Error Analysis:
        Estimate  Std. Error  t value Pr(>|t|)    
mu     3.741e-05   5.742e-05    0.652 0.514650    
omega  9.502e-08   2.676e-08    3.551 0.000384 ***
alpha1 3.577e-02   3.537e-03   10.114  < 2e-16 ***
beta1  9.601e-01   3.749e-03  256.113  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Log Likelihood:
 20161.39    normalized:  4.003453 

Description:
 Tue May  6 02:49:26 2025 by user: nibh 


Standardised Residuals Tests:
                                 Statistic   p-Value
 Jarque-Bera Test   R    Chi^2  145.374005 0.0000000
 Shapiro-Wilk Test  R    W              NA        NA
 Ljung-Box Test     R    Q(10)    3.426426 0.9695377
 Ljung-Box Test     R    Q(15)   11.056049 0.7486111
 Ljung-Box Test     R    Q(20)   11.456358 0.9335188
 Ljung-Box Test     R^2  Q(10)    5.461793 0.8582760
 Ljung-Box Test     R^2  Q(15)   13.728217 0.5462336
 Ljung-Box Test     R^2  Q(20)   16.693837 0.6727552
 LM Arch Test       R    TR^2     6.777284 0.8719737

Information Criterion Statistics:
      AIC       BIC       SIC      HQIC 
-8.005317 -8.000135 -8.005319 -8.003502 

Both the ARCH coefficient (alpha1) and the GARCH coefficient (beta1) are significant at the 0.1% level. This indicates that the volatility of the residuals depends on both past squared residuals (the ARCH effect) and past conditional variances (the GARCH effect).

The Ljung-Box test results on standardized residuals show no significant serial correlation, confirming that the residuals behave like white noise, and the model has appropriately captured the structure in the returns.

The ARCH LM tests on squared standardized residuals suggest that no significant ARCH effects remain in the residuals, supporting that the model has appropriately captured the volatility clustering.
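
The estimates also imply highly persistent volatility. A short sketch using coef() on the fitted model computes the persistence (alpha1 + beta1, about 0.996), the implied unconditional variance, and the approximate half-life of a volatility shock (roughly 170 trading days given the estimates above).

Code
# Persistence, unconditional variance, and volatility half-life implied by GARCH(1,1)
cf <- coef(garch_model)
persistence <- cf["alpha1"] + cf["beta1"]       # close to 1: shocks decay slowly
uncond_var  <- cf["omega"] / (1 - persistence)  # long-run variance of the residuals
half_life   <- log(0.5) / log(persistence)      # days for a shock's effect to halve
c(persistence = unname(persistence),
  uncond_var  = unname(uncond_var),
  half_life   = unname(half_life))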

Code
# Extract residuals and use checkresiduals from forecast
garch_res <- residuals(garch_model)

# Diagnostic plots on residuals
checkresiduals(garch_res)


    Ljung-Box test

data:  Residuals
Q* = 7.1928, df = 10, p-value = 0.7071

Model df: 0.   Total lags used: 10
Code
# Predict 100 steps ahead using the fitted GARCH(1,1) model
invisible(predict(garch_model, n.ahead = 100, plot = TRUE))

ARIMA(4,1,2)+GARCH(1,1) Model Equation

\[ \begin{gathered} \phi(B)(1-B)^d x_t=\theta(B) w_t \\ \nabla x_t=x_t-x_{t-1} \\ \phi(B)=1-\phi_1 B-\phi_2 B^2-\phi_3 B^3-\phi_4 B^4 \\ \phi(B)=1+0.1977 B+0.9276 B^2 -0.0075 B^3-0.0154 B^4\\ \theta(B)=1+\theta_1 B+\theta_2 B^2 \\ \theta(B)=1+0.2130 B+0.9248 B^2 \\ (1+0.1977 B+0.9276 B^2 -0.0075 B^3-0.0154 B^4)(1-B) x_t= (1+0.2130 B+0.9248 B^2)w_t \end{gathered} \]

So, the equation becomes: \[ \begin{gathered} (1-0.8023 B+0.7299 B^2 -0.9351 B^3 -0.0079 B^4+0.0154 B^5) x_t = (1+0.2130 B+0.9248 B^2)w_t \\ x_t-0.8023 B x_t+0.7299 B^2 x_t -0.9351 B^3 x_t-0.0079 B^4 x_t +0.0154 B^5 x_t = w_t+0.2130 B w_t+0.9248 B^2 w_t \\ x_t = 0.8023 B x_t - 0.7299 B^2 x_t + 0.9351 B^3 x_t + 0.0079 B^4 x_t - 0.0154 B^5 x_t + w_t+0.2130 B w_t+0.9248 B^2 w_t \\ w_t=\sigma_t \epsilon_t \\ var\left(w_t \mid w_{t-1}\right)=\sigma_t^2=\alpha_0+\alpha_1 w_{t-1}^2+\beta_1 \sigma_{t-1}^2 \\ \sigma_t^2=0.000000095+0.036 w_{t-1}^2+0.96 \sigma_{t-1}^2 \end{gathered} \]
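
To illustrate what the fitted variance recursion implies, the sketch below simulates shocks from the GARCH(1,1) equation using the rounded estimates above; the simulated series shows the same kind of volatility clustering observed in the USD returns.

Code
# Simulate shocks from the fitted GARCH(1,1) recursion (rounded estimates)
set.seed(123)
n <- 1000
omega <- 9.5e-8; alpha1 <- 0.036; beta1 <- 0.96
w <- numeric(n); sigma2 <- numeric(n)
sigma2[1] <- omega / (1 - alpha1 - beta1)   # start at the long-run variance
w[1] <- sqrt(sigma2[1]) * rnorm(1)
for (t in 2:n) {
  sigma2[t] <- omega + alpha1 * w[t - 1]^2 + beta1 * sigma2[t - 1]
  w[t] <- sqrt(sigma2[t]) * rnorm(1)
}
plot(w, type = "l", main = "Simulated GARCH(1,1) shocks", xlab = "t", ylab = "w_t")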

GARCH Model

The second approach fits a GARCH model directly to the returns, without first fitting an ARIMA model. In this method, we inspect the ACF and PACF of the squared returns, rather than of the ARIMA residuals, to determine the appropriate GARCH orders.

Code
# Initialize model storage
models <- list()
cc <- 1

# Grid search for GARCH(p, q), where p = ARCH order, q = GARCH order
for (p in 1:9) {
  for (q in 0:9) {
    models[[cc]] <- garch(returns, order = c(q, p), trace = FALSE)
    cc <- cc + 1
  }
}

# Extract AIC values
garch_aic <- sapply(models, AIC)

# Find best model (lowest AIC)
best_index <- which.min(garch_aic)
models[[best_index]]

Call:
garch(x = returns, order = c(q, p), trace = FALSE)

Coefficient(s):
       a0         a1         b1  
9.413e-08  3.602e-02  9.599e-01  

The GARCH(1,1) model is selected as the best model based on the lowest AIC value.
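
For reference, the direct fit can also be estimated with garchFit(), mirroring the residual-based fit above; a minimal sketch using the returns series:

Code
# GARCH(1,1) fitted directly to the log returns
garch_ret <- garchFit(~ garch(1, 1), data = returns, trace = FALSE)
coef(garch_ret)   # mu, omega, alpha1, beta1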

We now compare Method 1, ARIMA(4,1,2) + GARCH(1,1), and Method 2, GARCH(1,1) fitted directly to the returns, along with two additional benchmarks, GARCH(2,1) and ARCH(4), using rolling-window cross validation.

Cross Validation

Code
# Step 1: Prepare data
log.b <- log(dxy$Adjusted)
returns <- diff(log.b)

# Set rolling window parameters
k <- 4000  # 80% of data length as initial training size
n <- length(returns)

# Initialize error storage
err1 <- c()  # ARIMA(4,1,2) + GARCH(1,1)
err2 <- c()  # GARCH(1,1)
err3 <- c()  # GARCH(2,1)
err4 <- c()  # ARCH(4)

# Rolling forecast loop
for (i in 1:(n - k)) {

  # Training and test data
  xtrain <- log.b[1:(k - 1) + i]
  xtest <- log.b[k + i]
  
  ## Model 1: ARIMA(4,1,2) + GARCH(1,1) on residuals
  arima.fit <- Arima(xtrain, order = c(4,1,2), include.drift = TRUE)
  arima.res <- residuals(arima.fit)
  fit1 <- garchFit(~ garch(1, 1), data = arima.res, trace = FALSE)
  fcast1 <- predict(fit1, n.ahead = 1)
  
  ## Model 2: GARCH(1,1) on returns
  returns_train <- diff(xtrain)
  fit2 <- garchFit(~ garch(1, 1), data = returns_train, trace = FALSE)
  fcast2 <- predict(fit2, n.ahead = 1)

  ## Model 3: GARCH(2,1) on returns (new model)
  fit3 <- garchFit(~ garch(2, 1), data = returns_train, trace = FALSE)
  fcast3 <- predict(fit3, n.ahead = 1)

  ## Model 4: ARCH(4) on returns (new model)
  fit4 <- garchFit(~ garch(4, 0), data = returns_train, trace = FALSE)
  fcast4<- predict(fit4, n.ahead = 1)
  
  # Forecast log price: add the one-step mean forecast to the last value of xtrain
  # (Models 2-4 forecast the return mean; Model 1 forecasts the ARIMA-residual mean)
  pred1 <- tail(xtrain, 1) + fcast1$meanForecast
  pred2 <- tail(xtrain, 1) + fcast2$meanForecast
  pred3 <- tail(xtrain, 1) + fcast3$meanForecast
  pred4 <- tail(xtrain, 1) + fcast4$meanForecast

  # Squared errors for each model
  err1 <- c(err1, (pred1 - xtest)^2)
  err2 <- c(err2, (pred2 - xtest)^2)
  err3 <- c(err3, (pred3 - xtest)^2)
  err4 <- c(err4, (pred4 - xtest)^2)
}

# Calculate RMSE for each model
RMSE1 <- sqrt(mean(err1))  # ARIMA(4,1,2) + GARCH(1,1)
RMSE2 <- sqrt(mean(err2))  # GARCH(1,1)
RMSE3 <- sqrt(mean(err3))  # GARCH(2,1)
RMSE4 <- sqrt(mean(err4))  # ARCH(4)

rmse_table <- data.frame(
  Model = c("ARIMA(4,1,2)+GARCH(1,1)", "GARCH(1,1)", "GARCH(2,1)", "ARCH(4)"),
  RMSE = round(c(RMSE1, RMSE2, RMSE3, RMSE4), 4)
)
highlight_row <- which.min(rmse_table$RMSE)
kable(rmse_table, align = 'c', caption = "Model Comparison Based on RMSE") %>%
  kable_styling(full_width = FALSE, position = "center") %>%
  row_spec(highlight_row, bold = TRUE, background = "#FFFF99")
Model Comparison Based on RMSE
Model RMSE
ARIMA(4,1,2)+GARCH(1,1) 0.0044
GARCH(1,1) 0.0044
GARCH(2,1) 0.0044
ARCH(4) 0.0044

Volatility Plot

Code
# Extract the conditional variances from GARCH
ht <- garch_model@h.t

# Create dataframe and plot
vol_df <- data.frame(Date = dxy$Date, Volatility = ht)

p <- ggplot(vol_df, aes(x = Date, y = Volatility)) +
  geom_line(color = "#ff9933") +
  labs(
    title = "Volatility Plot (Conditional Variance from GARCH Model)",
    x = "Date",
    y = "Conditional Variance"
  ) +
  theme_minimal()

ggplotly(p)

The plot shows the volatility (conditional variance) of the U.S. Dollar Index returns as modeled by the GARCH model. The graph reveals pronounced spikes in volatility during specific periods, which correspond to major global or domestic economic events (an illustrative overlay of these dates on the plot follows the list):

  1. 2008 Financial Crisis (Mid 2008 – 2009):
    The global financial crisis brought massive uncertainty to financial markets. The collapse of Lehman Brothers, the bailout of key financial institutions, and the widespread loss of confidence in the global financial system caused extreme market turmoil. Although the U.S. dollar often strengthens as a safe-haven during crises, the unprecedented disruptions led to highly volatile returns in the USD.

  2. 2011 U.S. Debt-Ceiling Crisis:
    In 2011, political deadlock over raising the U.S. debt ceiling pushed the Treasury close to default. Although a last-minute agreement was reached, Standard & Poor’s downgraded the U.S. credit rating from AAA to AA+ on August 5, the first such downgrade in U.S. history. The downgrade triggered heightened uncertainty and sharp market movements, causing significant fluctuations in the dollar index.

  3. 2014–2015 Oil Price Crash:
    A dramatic drop in global oil prices, driven by oversupply and weakened demand, introduced new economic instability, especially for oil-dependent countries. Shifting expectations about U.S. monetary policy, global growth, and commodity markets led to increased volatility in the dollar. During this period, the Federal Reserve also began signaling future tightening, contributing to market swings.

  4. COVID-19 Pandemic (2020):
    The onset of the COVID-19 pandemic in early 2020 caused an abrupt global economic shock. As countries imposed lockdowns and supply chains collapsed, financial markets experienced extreme volatility. Central banks, including the Federal Reserve, responded with aggressive stimulus measures, such as emergency rate cuts and large-scale asset purchases, leading to major fluctuations in the U.S. dollar.

  5. Federal Reserve Policy Shift (2022):
    In late 2022, U.S. inflation data (CPI) came in lower than expected, leading markets to anticipate a slowdown in the Federal Reserve’s interest rate hikes. As a result, investors began unwinding long dollar positions and shifted capital toward riskier assets, which caused a sharp drop and increased volatility in the U.S. Dollar Index.
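
As noted above, the sketch below overlays approximate event markers on the conditional-variance plot built earlier. The dates are illustrative choices for this sketch rather than precise event dates, and p and vol_df come from the volatility-plot chunk.

Code
# Overlay illustrative event markers on the conditional-variance plot
events <- data.frame(
  Date  = as.Date(c("2008-09-15", "2011-08-05", "2014-11-28",
                    "2020-03-15", "2022-11-10")),
  Event = c("Lehman collapse", "S&P downgrade", "Oil price slide",
            "COVID-19 shock", "Soft CPI print")
)
p + geom_vline(data = events, aes(xintercept = Date),
               linetype = "dashed", color = "grey40") +
  geom_text(data = events,
            aes(x = Date, y = max(vol_df$Volatility), label = Event),
            angle = 90, vjust = -0.4, size = 3)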