Data as of March 13, 2019
ISBN 978-3-8439-3413-8, Reihe Informatik (Computer Science series)
Recurrent Neural Networks for Sequential Pattern Recognition Applications
178 pages, dissertation, Eberhard-Karls-Universität Tübingen (2017), softcover, A5
In this thesis, different variants of Recurrent Neural Networks (RNNs) are studied comprehensively. Due to their cyclic structure, which implements a memory, RNNs are capable of handling temporally encoded patterns. They are thus inherently more powerful than Feed-Forward Networks (FFNs) and can, in theory, map between arbitrarily long sequences. In principle, RNNs can be applied to any problem that presents itself sequentially in some way, with time being just the most obvious sequential dimension. Moreover, many problems that seem static at first glance can be interpreted in a sequential manner. On the other hand, RNNs are usually harder to handle than their acyclic counterparts. Whether they unfold their potential depends strongly on several aspects, e.g., the chosen architecture, the training data, or the training method and its respective control parameters.
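The cyclic structure mentioned above can be made concrete with a minimal sketch of a vanilla RNN forward pass, where the hidden state feeds back into itself at every time step and thereby acts as a memory. This is an illustration only, not code from the thesis; all sizes, weight scalings, and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W_in = 0.1 * rng.standard_normal((n_hid, n_in))    # input weights (assumed small scale)
W_rec = 0.1 * rng.standard_normal((n_hid, n_hid))  # recurrent weights: the "cycle"

def rnn_forward(xs):
    """Process a sequence; the hidden state h carries memory across steps."""
    h = np.zeros(n_hid)
    states = []
    for x in xs:
        # h depends on the current input AND on its own previous value,
        # which is what distinguishes an RNN from a feed-forward network
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h)
    return np.array(states)

seq = rng.standard_normal((7, n_in))  # arbitrary length-7 input sequence
hs = rnn_forward(seq)
print(hs.shape)  # one hidden state per time step
```

Because the same weights are reused at every step, the network handles sequences of arbitrary length; training such a network with gradients means unrolling this loop over time (BPTT, discussed below).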
The investigations contributed by this thesis range from methodical enhancements concerning both the network structure and the learning procedure, over conceptual approaches that explore novel application fields for RNNs, to concrete practical implementations of such systems and their analysis.
The presented research covers RNNs from two areas, or, as often perceived, from two worlds. First, RNNs trained via gradients, i.e., by Back-Propagation Through Time (BPTT), are addressed. These are standard RNNs, variants of Long Short-Term Memory (LSTM) networks, and several bidirectional RNN architectures. Second, networks related to reservoir computing, particularly Echo State Networks (ESNs), are considered, which differ entirely in the way they are trained.
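The contrast in training can be sketched briefly: in an ESN the recurrent "reservoir" weights stay fixed and random (scaled so the spectral radius is below 1), and only a linear readout is fitted, typically by ridge regression, instead of by gradient descent through time. The following is a hedged toy example for one-step-ahead prediction of a sine wave; reservoir size, scalings, and the regularization constant are assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 50
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed random input weights
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

def reservoir_states(u):
    """Drive the fixed random reservoir with the input sequence u."""
    x = np.zeros(n_res)
    X = []
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t] + W @ x)
        X.append(x)
    return np.array(X)

# toy task: predict the next sample of a sine wave
u = np.sin(0.1 * np.arange(300)).reshape(-1, 1)
X = reservoir_states(u[:-1])
y = u[1:, 0]

# only the linear readout is trained, in closed form via ridge regression
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
mse = np.mean((pred[100:] - y[100:]) ** 2)  # error after a washout period
print(f"training MSE after washout: {mse:.2e}")
```

The appeal of this scheme is that training reduces to a single linear solve, avoiding the vanishing-gradient issues of BPTT, at the price of leaving the recurrent dynamics untrained.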
The application scenarios addressed in this thesis are visual as well as vibration-based terrain classification, robot arm control with inverse recurrent forward models, the prediction of molecular binding activity from human-readable molecule description strings (SMILES), and linear as well as non-linear signal modeling. For several of these applications, the presented RNNs achieved novel state-of-the-art results.
Moreover, the particular applications presented throughout the chapters provide a variety of important hints and tricks of the trade for getting RNNs to work effectively in the respective domains. These include partly novel techniques as well as specifically elaborated network designs.