Abstract: Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Industrial Engineering, August 2023. Advisor: 장우진.

The advancement of deep neural networks and their capability to process complex data effectively have led to a significant increase in academic interest in algorithmic trading modeled on neural network architectures. The use of machine learning in algorithmic trading is driven by two primary objectives: first, to identify meaningful features that can explain the fluctuations observed in financial markets; second, to detect underlying causal relationships within multivariate financial time series. Extracting valuable features from financial time series to predict or explain market movements is challenging because of the inherent volatility and high noise levels in such data. Most algorithmic trading methodologies to date have therefore focused on feature engineering, aiming to extract meaningful features or factors directly from financial time series.

The majority of algorithmic trading models currently in use determine the optimal position through supervised learning, such as predicting direction or price, rather than learning from a profit-maximizing objective function. This approach not only fails to incorporate the utility of a given position directly, but also compounds errors through indirect decision making. In light of these limitations, reinforcement learning-based algorithmic trading models have emerged as a viable alternative. These models learn optimal behavior by maximizing the expected reward from observations within a given market environment, overcoming the limitations of supervised learning-based models by directly incorporating the utility of a given position.
The present study proposes a novel hybrid model that integrates recurrent reinforcement learning (RRL) with the self-attention mechanism. The efficacy of the proposed model is rigorously tested ...
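To make the contrast with supervised learning concrete, the core of a recurrent-reinforcement-learning trading agent can be sketched as follows. This is a minimal illustration in the spirit of classic RRL (positions produced recurrently from recent returns and the previous position, trained against a profit-based utility rather than a prediction loss); it is not the dissertation's actual model, and all function names, the window size, and the cost parameter are illustrative assumptions.

```python
import numpy as np

def rrl_positions(returns, weights, window=3):
    """Recurrent position rule: F_t = tanh(w . [r_{t-window..t-1}, F_{t-1}, 1]).

    The previous position F_{t-1} feeds back into the next decision,
    which is what makes the policy 'recurrent'.
    """
    T = len(returns)
    F = np.zeros(T)  # positions in [-1, 1]; 0 until enough history exists
    for t in range(window, T):
        x = np.concatenate([returns[t - window:t], [F[t - 1], 1.0]])
        F[t] = np.tanh(weights @ x)
    return F

def trading_utility(returns, F, cost=0.001):
    """Profit-maximizing objective: P&L of the held position minus
    a transaction-cost penalty on position changes. An RRL agent is
    trained by ascending the gradient of a utility like this one,
    instead of minimizing a price-prediction error."""
    gains = F[:-1] * returns[1:]          # hold F_t over the next return
    turnover = cost * np.abs(np.diff(F))  # pay cost when the position moves
    return float(np.sum(gains) - np.sum(turnover))
```

The key design point the abstract makes is visible here: the objective scores the position itself (profit net of costs), so there is no intermediate prediction step whose errors must be translated into trades.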