Learning Polar Coordinates on Visual Examples

After writing two articles about machine translation, I thought it might be time to address the flourishing field of reinforcement learning. To that end, I have designed a Cython-optimized billiards environment that simulates physically accurate (up to torque) game episodes. [Read More]

Convolutional Sequence to Sequence Learning

My last article revolved around the so-called Transformer, an innovative architecture that is particularly suited to sequence-to-sequence learning tasks such as machine translation. At its core, it abstains from using recurrent cells (e.g. LSTMs/GRUs) and relies solely on dense layers complemented by a content-based attention mechanism. This,... [Read More]

Attention Is All You Need

The Transformer

Modern deep learning heavily relies on having enough computational resources to try out sufficiently complex models and to keep iteration costs as low as possible. Particularly in the microcosm of sequential models, Recurrent Neural Networks (RNNs) have proven their effectiveness in various tasks such as machine translation, text-to-speech and... [Read More]