Multi-objective Automated Type-2 Parsimonious Learning Machine to Forecast Time-Varying Stock Indices Online

Abstract : Real-time forecasting of financial time-series data is challenging for many machine learning (ML) algorithms. First, many ML models operate offline and need a batch of data, which may not be available at training time. In addition, because most offline ML models have a fixed architecture, they struggle to handle the uncertain nature of financial time-series data. In contrast, ML models with an evolving structure that learn in online mode are promising for financial time-series forecasting. For real-time deployment of such models, low memory demand is a must, and the model's explainability plays a crucial role in forecasting financial time-series. Considering all these requirements, a rule-based autonomous neuro-fuzzy learning algorithm called the parsimonious learning machine (PALM) is proposed here to forecast time-varying stock indices. To automate the proposed algorithm while preserving model explainability through a limited number of linguistic IF-THEN rules, two popular multiobjective evolutionary algorithms (MEAs), namely a real-coded genetic algorithm (GA) and a self-adaptive differential evolution (DE) algorithm, are utilized. In addition, type-2 fuzzy variants of PALM are considered due to their better uncertainty-handling capacity compared with their type-1 counterparts. To evaluate the proposed algorithm's performance, the closing prices of fifteen (15) different stock market indices are predicted. The results show that the MEA-based PALMs outperform state-of-the-art online ML benchmarks while providing the end-user with a rule-based, explainable model.
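The following is a minimal, illustrative Python sketch (not the paper's implementation) of the interval type-2 idea mentioned in the abstract: a Gaussian membership function with an uncertain width returns a membership interval rather than a single degree, which is how a type-2 fuzzy rule expresses uncertainty about its input. All names and parameter values are hypothetical.

```python
# Illustrative only: an interval type-2 Gaussian membership function with an
# uncertain width; it returns a (lower, upper) membership interval instead of
# a single membership degree. Values are hypothetical, not from the paper.
import numpy as np

def it2_gaussian(x, center, sigma_lower, sigma_upper):
    """Return the (lower, upper) membership of x for a Gaussian MF with uncertain width."""
    lower = np.exp(-0.5 * ((x - center) / sigma_lower) ** 2)   # narrow Gaussian -> lower bound
    upper = np.exp(-0.5 * ((x - center) / sigma_upper) ** 2)   # wide Gaussian  -> upper bound
    return lower, upper

# Example: membership interval of a normalized closing price of 0.62
lo, up = it2_gaussian(0.62, center=0.5, sigma_lower=0.1, sigma_upper=0.2)
print(f"firing interval: [{lo:.3f}, {up:.3f}]")
```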
 EXISTING SYSTEM :
• The learning mechanisms of the existing online NFS methods are generally heuristic and lack a formal theoretical foundation.
• The ever-increasing complexity of the recently developed online learning methods leads not only to high variability in the outcomes of different realizations of an NFS model on different datasets, but also to a less practical system that is difficult to implement.
• Simplicity is much wanted. Meanwhile, empirical studies on the existing online NFS algorithms have so far been conducted on relatively small datasets (typically having < 10,000 data points), and their scalability against long data streams has not been comprehensively tested.
 DISADVANTAGE :
• Improving the performance of an FLS can be considered a search problem in a high-dimensional space, where each point may represent the structure of an FLS or elements of that structure, such as 1) the rule set, 2) the MFs, 3) the inference system, and 4) the FLS's corresponding behavior. In that high-dimensional space, the performance of an FLS may form a hyper-surface with respect to a predefined modeling or predictive accuracy.
• Therefore, the selection of proper thresholds depends on expert knowledge or requires several iterations to achieve the desired accuracy in regression problems.
• To overcome such dependency, evolutionary algorithms (EAs) can be utilized in PALM to regulate the thresholds governing the evolution of its structure, since EAs have already been successfully utilized in various conventional fuzzy logic systems (a minimal sketch of this idea follows this list).
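As a rough illustration of the point above, the sketch below treats PALM's rule-growing and rule-pruning thresholds as a chromosome and scores each candidate on two objectives, forecast error and rule-base size, keeping only the non-dominated settings. `evaluate_palm` is a hypothetical placeholder and does not reproduce the actual PALM learning procedure.

```python
# Sketch only: bi-objective evaluation of threshold chromosomes (error, #rules).
import random

def evaluate_palm(thresholds):
    # Placeholder: in practice this would run PALM over the data stream with the
    # given thresholds and return (prediction_error, number_of_rules). Faked here.
    grow, prune = thresholds
    error = abs(grow - 0.4) + abs(prune - 0.1) + random.uniform(0, 0.05)
    n_rules = max(1, int(10 * grow - 20 * prune) + 3)
    return error, n_rules

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization of both objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Random population of threshold chromosomes [grow_threshold, prune_threshold]
population = [(random.uniform(0.1, 0.9), random.uniform(0.01, 0.3)) for _ in range(20)]
scores = [evaluate_palm(ind) for ind in population]

# Keep the non-dominated (Pareto-optimal) threshold settings
pareto = [ind for ind, s in zip(population, scores)
          if not any(dominates(t, s) for t in scores if t is not s)]
print("non-dominated threshold settings:", pareto)
```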
 PROPOSED SYSTEM :
• These results can be largely attributed to the absence of a pruning procedure in the DENFIS method. It is also shown that SPLAFIS outperforms CVR in terms of test NDEI and speed as the data size grows.
• All in all, these results suggest that the proposed SPLAFIS model is scalable and, at the same time, able to balance predictive accuracy against rule-base size.
• All training observations are presented to the system sequentially (i.e., one by one). At any time, only one observation is seen and learned.
• An observation is discarded immediately after learning on it is completed. The system has no prior knowledge of how many observations will be presented in total (a test-then-train sketch of this protocol follows this list).
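The sequential protocol described above corresponds to the standard test-then-train (prequential) loop sketched below. The evolving model here is a deliberately naive stand-in with assumed `predict`/`learn_one` methods; it only illustrates that each observation is predicted, learned from once, and then discarded.

```python
# Sketch of one-pass, test-then-train (prequential) learning on a data stream.
import numpy as np

class NaiveEvolvingModel:
    """Stand-in for an evolving NFS: predicts the running mean of the targets seen so far."""
    def __init__(self):
        self.count, self.mean = 0, 0.0
    def predict(self, x):
        return self.mean                                 # ignores x in this toy stand-in
    def learn_one(self, x, y):
        self.count += 1
        self.mean += (y - self.mean) / self.count        # incremental mean update

stream = np.sin(np.linspace(0, 20, 500)) + 5.0           # synthetic "closing price" stream
model, sq_err = NaiveEvolvingModel(), 0.0
for t in range(1, len(stream)):
    x, y = stream[t - 1], stream[t]                      # previous price -> next price
    sq_err += (model.predict(x) - y) ** 2                # test first ...
    model.learn_one(x, y)                                # ... then train, then discard
print("prequential RMSE:", (sq_err / (len(stream) - 1)) ** 0.5)
```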
 ADVANTAGE :
• In DE, a greedy selection procedure is employed: the better of each new solution and its parent is retained, which gives DE an advantage over the GA in convergence (illustrated in the sketch after this list).
• In contrast with the GA, the search in DE is not time-consuming. It is easy to use and performs well with a simple configuration on a variety of test problems.
• In prior work, DE has been utilized not only to optimize the fuzzy rules and MFs but also to optimize the strength of the fuzzy inference operators. In another study, DE was used to realize the fuzzy clustering of a type-2 fuzzy neural network.
• DE has also been used to optimize both the antecedent and consequent parts of the MFs in a type-2 fuzzy neural network.
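The greedy selection step mentioned above can be seen in the compact DE/rand/1/bin sketch below, where a trial vector replaces its parent only if it achieves an equal or better objective value. The sphere function is a placeholder objective (standing in for, e.g., a forecast-error measure); all control parameters (F, CR, population size) are illustrative.

```python
# Sketch of DE/rand/1/bin with greedy parent-vs-trial selection.
import numpy as np

rng = np.random.default_rng(0)
dim, pop_size, F, CR, generations = 5, 30, 0.5, 0.9, 100

def objective(x):
    return np.sum(x ** 2)                                 # placeholder objective (sphere)

pop = rng.uniform(-5, 5, (pop_size, dim))
fitness = np.array([objective(ind) for ind in pop])

for _ in range(generations):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                          # differential mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                   # guarantee at least one mutant gene
        trial = np.where(cross, mutant, pop[i])           # binomial crossover
        if objective(trial) <= fitness[i]:                # greedy selection vs. parent
            pop[i], fitness[i] = trial, objective(trial)
print("best objective value:", fitness.min())
```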
