
Data Assimilation in Chaotic Systems Using Deep Reinforcement Learning.

  • Additional information
    • Abstract:
      Data assimilation (DA) plays a pivotal role in diverse applications, ranging from weather forecasting to trajectory planning for autonomous vehicles. A prime example is the widely used ensemble Kalman filter (EnKF), which relies on the Kalman filter's linear update equation to correct each ensemble forecast member's state with incoming observations. Recent advancements have witnessed the emergence of deep learning approaches in this domain, primarily within a supervised learning framework. However, the adaptability of such models to untrained scenarios remains a challenge. In this study, we introduce a new DA strategy that utilizes reinforcement learning (RL) to apply state corrections using full or partial observations of the state variables. Our investigation focuses on demonstrating this approach on the chaotic Lorenz 63 and 96 systems, where the agent's objective is to maximize a geometric series with terms that are proportional to the negative root‐mean‐squared error (RMSE) between the observations and corresponding forecast states. Consequently, the agent develops a correction strategy, enhancing model forecasts based on available observations. Our strategy employs a stochastic action policy, enabling a Monte Carlo‐based DA framework that relies on randomly sampling the policy to generate an ensemble of assimilated realizations. Numerical results demonstrate that the developed RL algorithm performs favorably when compared to the EnKF. Additionally, we illustrate the agent's capability to assimilate non‐Gaussian observations, addressing one of the limitations of the EnKF.

      Plain Language Summary: Reliable forecasts of the state of chaotic systems, such as environmental flows, require combining observational data and dynamical model outputs through a process called data assimilation.
      The ensemble Kalman filter (EnKF) is the most commonly adopted algorithm for this task; however, it is subject to some limitations when applied to nonlinear/non‐Gaussian systems. Recently, there has been interest in using deep learning (DL), particularly within a supervised learning setup, for DA. However, making DL models work well in new situations that differ from those experienced during training is challenging. In this work, we propose a new DA approach that leverages reinforcement learning (RL). RL helps the system make corrections to its predictions based on incoming observations, even when the model has not been trained for those specific scenarios. Compared to the EnKF framework, RL offers a novel algorithm for nonlinear corrections of the forecasts. Numerical results show that the proposed RL algorithm outperforms the EnKF and demonstrate the RL agent's ability to address some shortcomings of the EnKF.

      Key Points:
      • Deep reinforcement learning (RL) is introduced for data assimilation applied to the Lorenz 63 and 96 systems
      • RL generalizes to new situations unseen during training by actively learning from the data and system dynamics
      • The RL agent allows for nonlinear correction of the forecast using the observations
      • The performance of the proposed RL algorithm generally surpasses that of the standard ensemble Kalman filter (EnKF)

      [ABSTRACT FROM AUTHOR]
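      The abstract describes two mechanisms that can be illustrated concretely: a return built as a geometric (discounted) series with terms proportional to the negative RMSE between observations and forecasts, and a Monte Carlo DA ensemble formed by repeatedly sampling a stochastic action policy. The sketch below is a minimal Python illustration of those two ideas only; all function names are hypothetical, the Gaussian policy is an assumption, and this is not the authors' implementation.

      ```python
      import numpy as np

      def rmse(forecast, observation):
          """Root-mean-squared error between a forecast state and an observation."""
          return np.sqrt(np.mean((forecast - observation) ** 2))

      def discounted_return(forecasts, observations, gamma=0.95):
          """Return of the form described in the abstract: a geometric series
          whose terms are proportional to the negative RMSE at each step,
          G = sum_k gamma**k * (-rmse(forecast_k, observation_k)).
          Maximizing G pushes the agent's corrections toward the observations."""
          return sum(gamma**k * -rmse(f, y)
                     for k, (f, y) in enumerate(zip(forecasts, observations)))

      def ensemble_corrections(policy_mean, policy_std, forecast,
                               n_members=20, rng=None):
          """Monte Carlo-based DA ensemble: sample the stochastic action policy
          (assumed Gaussian here) n_members times and apply each sampled
          correction to the forecast, yielding assimilated realizations."""
          rng = np.random.default_rng(rng)
          actions = rng.normal(policy_mean, policy_std,
                               size=(n_members, forecast.size))
          return forecast + actions  # each row is one assimilated member
      ```

      A perfect forecast trajectory yields a return of exactly zero, and any forecast error makes the return negative, which is what makes maximizing it equivalent to tracking the observations.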
    • Abstract:
      Copyright of Journal of Advances in Modeling Earth Systems is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)