DeepSeek-14B Models Show Promise in Predicting Stock Price Changes
Two large language models, DeepSeek-14B and its fine-tuned variant DeepSeek-14B-SFT, have shown promising results in predicting stock price movements. The research, conducted on a large-scale dataset covering more than 5,000 stocks in 2024, highlights the potential of more efficient training strategies for financial prediction.
The study began by comparing the two models. The fine-tuned version, DeepSeek-14B-SFT, provided more concise and potentially more accurate predictions. This initial finding set the stage for further exploration.
The team then validated their approach, RETuning (Reflective Evidence Tuning), on the large-scale Fin-2024 dataset. Covering 5,123 stocks, the dataset integrates six key information sources to support diverse analysis. The results showed improved LLM performance under evolving market conditions: RETuning enables LLMs to dynamically organize and score evidence, leading to more accurate predictions of potential price increases or decreases.
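The summary does not spell out how evidence is organized and scored, but the general idea of weighting signed evidence from multiple sources can be sketched roughly as follows. The data structure, names (`EvidenceItem`, `predict_direction`), and aggregation rule here are illustrative assumptions, not the authors' actual implementation:

```python
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    source: str       # e.g. "news", "fundamentals", "technicals"
    claim: str        # the statement extracted by the model
    direction: int    # +1 bullish, -1 bearish
    score: float      # model-assigned relevance/strength in [0, 1]

def predict_direction(evidence: list[EvidenceItem]) -> str:
    """Aggregate signed, weighted evidence into an up/down call."""
    total = sum(e.direction * e.score for e in evidence)
    return "up" if total > 0 else "down"

# Made-up example inputs for illustration only
evidence = [
    EvidenceItem("news", "earnings beat consensus", +1, 0.9),
    EvidenceItem("technicals", "price below 50-day average", -1, 0.4),
    EvidenceItem("fundamentals", "rising free cash flow", +1, 0.6),
]
print(predict_direction(evidence))  # → up
```

The point of such a scheme is that the model's per-item scores, rather than a fixed rule, decide how much each piece of evidence contributes to the final call.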
Experiments revealed that RETuning not only improves prediction accuracy but also scales with additional inference-time compute. Moreover, it generalizes to other financial tasks and to out-of-distribution stocks. This versatility underscores the potential of RETuning as a powerful tool for stock market prediction.
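The summary does not detail how the inference-time scaling works; one common pattern that behaves this way is self-consistency, where several independent reasoning passes are sampled and the majority direction wins. The sketch below illustrates only that generic pattern, with made-up sampled predictions, and is not the authors' method:

```python
from collections import Counter

# Hypothetical placeholder for directions produced by repeated
# stochastic reasoning passes of the model
samples = ["up", "up", "down", "up", "down", "up", "up"]

def majority_vote(predictions: list[str]) -> str:
    """Return the most common direction among sampled predictions."""
    return Counter(predictions).most_common(1)[0][0]

print(majority_vote(samples))  # → up
```

Under this pattern, spending more compute (drawing more samples) tends to stabilize the final answer, which is one plausible reading of "inference-time scalability."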
The research, which does not attribute Reflective Evidence Tuning to named authors, provides valuable insight into improving LLM performance in financial forecasting. By encouraging logical reasoning and dynamic evidence scoring, RETuning offers a promising path forward for the challenging task of predicting stock movements.