Stock Market Price Prediction Using Recurrent Neural Network And LSTM


The stock market, or equity market, refers to the markets where shares or stocks are traded. It is a volatile market and a significant source of wealth generation, so predicting future trends in stock prices is important for earning profits. Predicting such trends requires advanced machine learning algorithms. The recurrent neural network (RNN) is one of the most powerful models for sequential data, and long short-term memory (LSTM) units are the memory cells used within the recurrent neural network.

The role of an LSTM is to retain information over long durations. The goal of this work is to predict the daily stock prices of selected companies listed on the NASDAQ Stock Exchange using a recurrent neural network (RNN). We trained the model on the past ten years of historical stock price data from firms such as Tesla, Apple, and Facebook, and we present the predicted stock values against the real ground-truth values.

Introduction

The RNN architecture has proved successful in forecasting stock prices.

The prediction of some future event or events by analysing historical data is known as forecasting. It spans many areas including business and industry, economics, environmental science, and finance. Forecasting is classified as short-term, medium-term, and long-term forecasting: prediction over a horizon of less than a year is short-term forecasting, prediction over two to three years is medium-term forecasting, and prediction beyond three years is long-term forecasting. Time series data can be defined as a chronological sequence of observations of a selected variable.

Here the variable is the stock price, and the data can be either univariate or multivariate: univariate data covers only one stock, whereas multivariate data covers the stock prices of more than one company at various instances of time. Analysing time series data helps in identifying patterns, trends, and periods or cycles existing in the data; analysing these patterns also helps in identifying the best-performing companies over a specified period. Forecasting and time series analysis are therefore important for predicting stock prices, and predicting stock market prices is one of the most important problems in finance. Many researchers have proposed ways to forecast market prices and generate gains using different approaches, such as technical analysis and statistical analysis.

Nowadays, artificial neural networks (ANNs) have been applied to exchange index prediction. The ANN is a data mining technique that mimics the learning capability of the human brain. Because the financial data involved are complex, the underlying patterns can be dynamic and unpredictable. Several research efforts have been made to improve the efficiency of predicting share values, and ANNs have been used in stock market prediction over the past decade. Kimoto and colleagues carried out one of the first such projects, forecasting the Tokyo stock market index using ANNs.

Mizuno and colleagues applied ANNs to the Tokyo Stock Exchange to forecast buying and selling signals, achieving an overall forecasting rate of 63%. Sexton and colleagues proposed starting the learning process from random points in training. Phua and colleagues applied ANNs combined with a genetic algorithm to the Singapore stock market and predicted market values with an accuracy of 81%. The aim of this work is to see whether such a solution is suitable for predicting prices in stock markets, which are among the most difficult time series to predict. Based on time series data from NASDAQ, we will try to predict the next bid or ask value. If any long- or short-term dependency on historical data exists, our LSTM model should outperform the basic perceptron used for comparison.

Theoretical Frameworks

Feedforward Neural Networks: The feedforward neural network is the simplest type of neural network, in which information moves in only one direction and there are no loops. It has three layers: the input layer, the hidden layer, and the output layer. Information moves from the input nodes, through the hidden nodes, to the output nodes.

Input layer: It consists of artificial neurons that bring input data from the outside environment and feed it into the network for training. No computation is performed at the input layer; it just passes the information on to the hidden layer.

Hidden layer: It is the layer between the input and output layers. It consists of a collection of neurons that take information from the input layer, perform the computation, and send their output to the output layer.

Output layer: It consists of output neurons that gather information from the hidden layer and present the output to the outside world.
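To make the forward pass just described concrete, here is a minimal NumPy sketch of a three-layer feedforward network; the layer sizes, weights, and activation choice are illustrative placeholders and are not taken from this work.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative three-layer forward pass: input -> hidden -> output.
rng = np.random.default_rng(0)
x = rng.normal(size=(4,))            # input layer: 4 features, no computation here
W_h = rng.normal(size=(8, 4))        # hidden-layer weights (8 hidden neurons)
b_h = np.zeros(8)
W_o = rng.normal(size=(1, 8))        # output-layer weights (1 output neuron)
b_o = np.zeros(1)

hidden = sigmoid(W_h @ x + b_h)      # computation is performed in the hidden layer
output = W_o @ hidden + b_o          # output layer passes the result to the outside world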

Recurrent Neural Networks: Feedforward neural networks require a fixed-size input and produce a fixed-size output, and they do not capture sequences or time series data. This makes them unsuitable for many tasks involving time series. The recurrent neural network is designed to capture time series or sequential data: it has a backward (recurrent) connection between hidden layers, can take variable-length inputs and produce variable-length outputs, and has an internal memory that allows it to remember important information about the inputs it has received.
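A minimal sketch of the recurrence that gives an RNN its memory, assuming a plain (vanilla) RNN cell with tanh activation; the sizes and weights below are placeholders, not values from this work.

import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # One step of a vanilla RNN: the hidden state carries information forward in time.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

rng = np.random.default_rng(1)
input_size, hidden_size = 4, 8
W_x = rng.normal(size=(hidden_size, input_size))
W_h = rng.normal(size=(hidden_size, hidden_size))   # the backward (recurrent) connection
b = np.zeros(hidden_size)

h = np.zeros(hidden_size)                            # internal memory, initially empty
sequence = rng.normal(size=(10, input_size))         # a sequence of 10 time steps
for x_t in sequence:
    h = rnn_step(x_t, h, W_x, W_h, b)                # h summarizes everything seen so far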

Long Short-Term Memory: The vanishing gradient problem arises in RNNs and makes them difficult to train; to overcome this problem, we use LSTM. An LSTM cell has an input gate, a cell state, a forget gate, and an output gate, together with sigmoid layers, tanh layers, and pointwise multiplication operations. The input gate controls which part of the input is used for further computation. The cell state adds or removes information with the help of the gates. The forget gate decides what fraction of the stored information is allowed through. The output gate produces the output generated by the LSTM. A sigmoid layer describes how much of each component should be let through by generating numbers between zero and one, and a tanh layer creates a new vector of candidate values to be added to the state. In the basic RNN, the hidden layer A is just a single sigmoid or tanh layer. Since such an architecture is trained with Backpropagation Through Time (BPTT), it is usually very difficult to train: the vanishing and exploding gradient problem makes it impossible for the network to learn long-term dependencies. This is solved by the Long Short-Term Memory network (LSTM), which handles layer A differently: A becomes a more complex structure containing several tanh and sigmoid layers along with addition and multiplication operators. The LSTM maintains and passes along a cell state; it can add information to, or remove it from, the cell, and in that way it stores historical data.
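The gate mechanics described above can be sketched in NumPy as follows; the weight layout (all four gates stacked in one matrix) and the dimensions are illustrative assumptions, not the implementation used in this work.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b, n):
    # One LSTM step: forget, input, and output gates act on the cell state c.
    z = W @ x_t + U @ h_prev + b          # pre-activations for all four gates, stacked
    f = sigmoid(z[0*n:1*n])               # forget gate: fraction of old state to keep
    i = sigmoid(z[1*n:2*n])               # input gate: fraction of new information to add
    o = sigmoid(z[2*n:3*n])               # output gate
    g = np.tanh(z[3*n:4*n])               # tanh layer: candidate values for the state
    c = f * c_prev + i * g                # cell state: remove old and add new information
    h = o * np.tanh(c)                    # output exposed by the cell
    return h, c

rng = np.random.default_rng(2)
n_in, n_hid = 4, 8
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.normal(size=(10, n_in)):
    h, c = lstm_step(x_t, h, c, W, U, b, n_hid)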

Methodology

Selection of Data: From the 3500 companies listed on NASDAQ, we selected three for this study, based on the stability of their stock prices and their performance in the market (trading volume and frequency): Tesla, Apple, and Facebook. The last ten years of data from these companies were used for model building.

Selection of Variables: For model building, the Close, Open, High, and Low prices of the past two days were selected as input variables for each company, as they were significantly correlated with the output variable, the Close price.
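As a sketch of how such input windows can be assembled (the column names and the helper function below are assumptions for illustration; the paper does not show its own code), each sample can pair the previous two days' Open, High, Low, and Close values with the next day's Close as the target:

import numpy as np
import pandas as pd

def make_windows(df, lookback=2):
    # Inputs: the past `lookback` days of Open/High/Low/Close; target: today's Close.
    features = df[["Open", "High", "Low", "Close"]].values
    targets = df["Close"].values
    X, y = [], []
    for t in range(lookback, len(df)):
        X.append(features[t - lookback:t])    # shape (lookback, 4)
        y.append(targets[t])
    return np.array(X), np.array(y)

# Example with synthetic prices (a real run would load each company's historical data).
dates = pd.date_range("2010-01-01", periods=100, freq="B")
prices = pd.DataFrame(np.random.default_rng(5).uniform(100, 300, size=(100, 4)),
                      index=dates, columns=["Open", "High", "Low", "Close"])
X, y = make_windows(prices, lookback=2)       # X: (98, 2, 4), y: (98,)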

Data Preprocessing: This is the process of transforming raw data into an understandable format. It is beneficial to normalize the training data before feeding it into the model: features with widely different scales cause the network to weight them unequally, leading to an incorrect prioritization of some features over others.
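A common way to do this, used here only as an assumed example since the paper does not name its scaler, is min-max scaling to the range [0, 1] with scikit-learn; in practice the scaler should be fitted on the training portion only, to avoid leaking test information.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.random.default_rng(3).uniform(100, 300, size=(500, 2, 4))  # dummy (samples, lookback, OHLC)
scaler = MinMaxScaler(feature_range=(0, 1))
X_flat = X.reshape(-1, X.shape[-1])                 # scale each of the 4 features independently
X_scaled = scaler.fit_transform(X_flat).reshape(X.shape)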

Splitting the data into X_train, y_train, X_test, and y_test: To train the model we use the training set, where X_train contains the training inputs and y_train contains the corresponding labels. The test set is used to evaluate the model after any initial vetting on a validation set, where X_test contains the test inputs and y_test contains the corresponding labels.
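For time series data a chronological split is the usual choice; the 80/20 ratio below is an assumption for illustration, as the paper does not state its split.

import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(500, 2, 4))    # dummy scaled input windows
y = rng.uniform(0, 1, size=(500,))         # dummy scaled close prices

split = int(0.8 * len(X))                  # keep time order: earlier data for training
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]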

Building the RNN model with 2 LSTM layers: An RNN model with two LSTM layers was built. To stack one LSTM layer on another, the first layer must be created with return_sequences=True. The architecture is as follows: LSTM --> Dropout --> LSTM --> Dropout --> Fully-Connected (Dense).
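A Keras sketch of this stacked architecture is shown below; the number of units (50), the dropout rate (0.2), and the optimizer are assumptions, since the text only specifies the layer ordering and return_sequences=True.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(2, 4)),  # first LSTM returns full sequences
    Dropout(0.2),
    LSTM(50),                                              # second LSTM returns its final state
    Dropout(0.2),
    Dense(1),                                              # fully-connected output: next Close price
])
model.compile(optimizer="adam", loss="mean_squared_error")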

Train the network: After building the RNN model, we train it on the data. Training can take a long time on a large dataset, and the training time depends on your GPU. We set the batch size to 5 and the number of epochs to 15 for all three predictions. Once the model is trained, we can compute the RMSE. Root Mean Square Error (RMSE) is the standard deviation of the prediction errors and is a measure often used to evaluate prediction results.
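Continuing the sketches above (model, X_train, y_train, X_test, and y_test as defined there), training with the stated hyperparameters and computing the RMSE might look as follows:

import numpy as np

model.fit(X_train, y_train, batch_size=5, epochs=15)     # batch size 5, 15 epochs as in the text

predictions = model.predict(X_test).ravel()
rmse = np.sqrt(np.mean((predictions - y_test) ** 2))     # root mean square error of the predictions
print(f"RMSE: {rmse:.4f}")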

Batch size: The total number of training examples present in a single batch. The entire dataset cannot be passed through a neural network at once, so we must divide it into batches.

Epoch: One epoch is one forward and backward pass of the entire dataset through the neural network. Since one epoch is too large to feed to the computer at once, it is divided into several smaller batches. As the number of epochs increases, the weights of the network are updated more times and the fit moves from underfitting toward the optimal curve.

Iterations: Iterations are the number of batches needed to complete one epoch [6]. For example, if we have a dataset of 2500 training examples and use a batch size of 5, it will take 500 iterations to complete one epoch.
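The same relationship, written out directly:

n_samples = 2500
batch_size = 5
iterations_per_epoch = n_samples // batch_size   # 500 batches are needed to complete one epoch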

Visualizing the prediction: Using Matplotlib, we rescaled the results and plotted the predicted stock price against the real stock price.
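A minimal Matplotlib sketch of this comparison plot is given below; real_prices and predicted_prices are assumed to be arrays already inverse-transformed back to the original price scale with the fitted scaler.

import matplotlib.pyplot as plt

plt.plot(real_prices, color="blue", label="Real stock price")
plt.plot(predicted_prices, color="red", label="Predicted stock price")
plt.title("Predicted vs. real close price")
plt.xlabel("Trading day")
plt.ylabel("Price (USD)")
plt.legend()
plt.show()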

Results of the Study

This experiment was carried out with an RNN using LSTM layers. The close prices of Tesla, Facebook, and Apple were successfully predicted, with all three predictions achieving an RMSE value above 0.5.

Conclusion

An RNN model for stock price prediction was proposed. From the above results, we can say that neural network architectures can make such predictions. We trained the model using the historical data of Tesla, Apple, and Facebook and were able to predict stock prices, which shows that the model can identify trends in stock prices. It is evident from the results that the RNN architecture can predict stock prices, and for the proposed methodology the RNN is identified as the best model. Trends in the stock market do not always follow a regular pattern; they vary, and the existence of trends and the periods over which they persist differ from company to company. Analysing these kinds of trends and cycles can give investors more profit, and to analyse such information we must use networks like the RNN.
