Machine Learning in the Context of Time Series

Stefan Herdy

Research output: Thesis › Master's Thesis



One of the major goals of modern applied technology is to optimize processes: to make them more cost- and resource-efficient, to increase their security, and more. To achieve this, it is necessary to understand the underlying process, and understanding can only be gained by collecting and evaluating data and information. Nowadays, many technological systems are equipped with sensors that monitor their condition and allow statements to be made about a future condition. Data that arises from monitoring such systems as a function of time is called time series data. The analysis of time series data is a growing field and, especially with the help of machine learning, which has become popular in recent years, new methods for time series analysis can potentially be developed. If data is to be analysed and knowledge gained from it, the data should represent reality as well as possible. If a few data points differ significantly from the collective, this usually indicates an abnormal process or a faulty measurement. The detection of such outliers serves two main purposes: on the one hand, outliers indicate an error or an unexpected event that often needs to be recognized; on the other hand, cleaning up a data set, i.e. deleting outliers, can improve the results of a data analysis. An outlier is a data sample that contains abnormal trends. These can arise from errors in the process, errors in the measurement, or one-off events that do not negatively influence the process. Given a good visualization, such anomalies can often be quickly identified by humans. Usually, however, very large amounts of data have to be evaluated, which would cost a great deal of time if done manually. There is therefore a desire to detect these anomalies in the data automatically, and machine learning algorithms offer one possibility to do so.
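As a minimal illustration of the outlier-detection idea described above (not the method developed in the thesis), points in a univariate time series can be flagged with a rolling z-score: a value that lies many standard deviations away from the recent history is treated as an outlier. All names here (`rolling_zscore_outliers`, the window and threshold parameters) are illustrative choices, not taken from the thesis.

```python
from statistics import mean, stdev

def rolling_zscore_outliers(series, window=10, threshold=3.0):
    """Flag indices whose value deviates by more than `threshold`
    sample standard deviations from the mean of the preceding window."""
    outliers = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            outliers.append(i)
    return outliers

# A regular repeating signal with one injected spike at index 25
data = [float(i % 5) for i in range(50)]
data[25] = 40.0
print(rolling_zscore_outliers(data))  # → [25]
```

Such a statistical baseline already captures the principle that an outlier is "abnormal relative to its context"; the thesis goes further by learning that context with machine learning models instead of fixed window statistics.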
In the last few years, machine learning has become very popular and is often presented as a solution to a variety of modern problems. The automatic detection of anomalies in time-dependent data, however, is a non-trivial task and requires suitable algorithms that are able to learn and recognize typical time profiles of the data. The development of such machine learning models and the evaluation of the developed models for time series analysis in general are the major parts of this thesis. The first chapters are an introduction to the basic concepts of machine learning and are essential for understanding the subsequent machine learning applications on the time series data. After this introduction, the implemented applications are explained. The main part of the thesis is the introduction of a new machine learning method that can be used in time series analysis. The goal of this thesis is to evaluate the limits of machine learning and to learn how machine learning can be applied to time series data in a reasonable way. The last chapter contains an overview of the results and explains what can be learned from this work for future machine learning applications. The three main chapters of this thesis are thus an introduction to machine learning and neural networks, the development of new machine learning methods for time series analysis, and finally an interpretation of the results.
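The idea of algorithms that "learn and recognize typical time profiles" can be caricatured with a prediction-error scheme: predict each point from its recent past and flag points whose prediction error is large. In the thesis's setting the predictor would be a learned model such as the LSTMs or autoencoders listed in the keywords; the sketch below, which is an assumption-laden stand-in and not the thesis's method, uses a trivial window-mean predictor purely to show the prediction-error principle.

```python
from statistics import mean

def prediction_error_anomalies(series, window=5, tolerance=10.0):
    """Predict each value as the mean of the preceding window and
    flag indices whose absolute prediction error exceeds `tolerance`.
    A trained model (e.g. an LSTM forecaster) would replace the
    window-mean predictor in a real application."""
    anomalies = []
    for i in range(window, len(series)):
        prediction = mean(series[i - window:i])
        if abs(series[i] - prediction) > tolerance:
            anomalies.append(i)
    return anomalies

# Flat signal with an abrupt jump at index 12
signal = [1.0] * 12 + [30.0] + [1.0] * 7
print(prediction_error_anomalies(signal))  # → [12]
```

The same structure carries over to learned models: only the predictor changes, while the anomaly decision remains a threshold on the deviation between observed and expected behaviour.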
Translated title of the contribution: Entwicklung von Machine Learning Modellen in Bezug auf Zeitreihendaten
Original language: English
Awarding Institution:
  • Montanuniversität
  • O'Leary, Paul, Supervisor (internal)
Award date: 18 Dec 2020
Publication status: Published - 2020


  • Machine Learning
  • LSTM
  • Time Series Analysis
  • Autoencoder
  • Variational Autoencoder
  • Time Series Prediction
  • Neural Networks
  • Latent Space
  • Deep Learning
  • Outlier Detection
