BelVis PRO EHP
Sophisticated methods for superior forecasts
The BelVis PRO Enhancement Package (EHP) is based on the latest research and development from the KISTERS Modelling Department and is now fully integrated into BelVis PRO. EHP adds the following to the already extensive functionality of BelVis PRO:
- New forecasting methods and optimised training algorithms, which significantly enhance the quality of forecasts and permit the shortest possible training times.
- Cross-model tools, such as enhanced statistical evaluations and sensitivity analyses.
- Optimised processes for ALN, including stability analysis graphs.
Recurrent Neural Networks (RNN)
RNNs can be used to extend ALN and ANN models and to improve forecast quality. The RNN acts as a data reservoir: it is fed from a variety of existing model inputs (vectors) and can generate over 100 new model inputs from them.
RNNs can be configured very easily in BelVis PRO EHP. A new random assignment is created at the touch of a button, and with it a new - and potentially better - modelling approach.
Using RNNs adds random, creative inputs to the user's own modelling approach. It is remarkable how this random creativity can supplement the user's own model-building efforts and thereby enhance forecast quality.
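The brochure does not disclose the reservoir's internals; fixed random reservoirs of this kind are known in the research literature as echo state networks. The following is only an illustrative sketch of how a randomly initialised reservoir could turn a handful of existing model inputs into many new candidate inputs - all names, sizes, and parameters are hypothetical, not the BelVis PRO implementation:

```python
import numpy as np

def make_reservoir(n_inputs, n_reservoir=100, spectral_radius=0.9, seed=0):
    # Random, fixed weights: the reservoir itself is never trained,
    # only sampled. A new seed gives a new random assignment.
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    # Scale the recurrent weights so the reservoir state stays bounded.
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def reservoir_features(inputs, W_in, W):
    # inputs: (timesteps, n_inputs) array of existing model inputs.
    # Returns a (timesteps, n_reservoir) array of new candidate inputs.
    state = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        state = np.tanh(W_in @ u + W @ state)
        states.append(state.copy())
    return np.array(states)
```

Re-running `make_reservoir` with a different seed corresponds to the "touch of a button" described above: a fresh random assignment, and with it a fresh set of candidate model inputs.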
Hierarchical models
Hierarchical models significantly improve forecast quality without the addition of new information. Several submodels (ALN, ANN, RNN, …) can be organised into a hierarchy by simple drag & drop. Only the uppermost model must be supplied with data, trained, and calculated; the specified training and calculation methods are then automatically applied to the submodels.
The optional optimisation of a hierarchical model automatically finds the best mix of all submodels; imprecise submodels are disabled in the process.
Innovative ANN training methods
BelVis PRO EHP offers a number of innovative ANN training methods, two of which have been submitted for patent consideration.
These methods enable fully automatic training; automatic evaluation and optimisation of the hidden neurons is now also possible.
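The patent-pending methods themselves are not disclosed here. As a generic illustration only, automatic evaluation and optimisation of hidden neurons can be sketched as a validation-error search over candidate hidden-layer sizes - every function and parameter below is hypothetical:

```python
import numpy as np

def train_mlp(X, y, n_hidden, epochs=500, lr=0.05, seed=0):
    # Minimal single-hidden-layer network trained by gradient descent.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], n_hidden))
    w2 = rng.normal(0, 0.5, n_hidden)
    for _ in range(epochs):
        h = np.tanh(X @ W1)   # hidden-neuron activations
        err = h @ w2 - y      # linear output minus target
        # Backpropagate mean-squared-error gradients.
        grad_w2 = h.T @ err / len(y)
        grad_W1 = X.T @ (np.outer(err, w2) * (1 - h ** 2)) / len(y)
        w2 -= lr * grad_w2
        W1 -= lr * grad_W1
    return W1, w2

def select_hidden_neurons(X_tr, y_tr, X_val, y_val, candidates=(2, 4, 8, 16)):
    # Train one network per candidate size; keep the lowest validation error.
    best = None
    for n in candidates:
        W1, w2 = train_mlp(X_tr, y_tr, n)
        val_err = np.mean((np.tanh(X_val @ W1) @ w2 - y_val) ** 2)
        if best is None or val_err < best[1]:
            best = (n, val_err)
    return best  # (n_hidden, validation error)
```

The point of the sketch is the selection loop, not the tiny network: each hidden-layer size is evaluated on held-out data, and the size with the lowest validation error wins.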
Despite all these innovations, which present a significant algorithmic challenge, training times have been kept extremely short. The proverbial trip to the coffee machine is no longer necessary.