Machine Learning

Learning the Enigma with Recurrent Neural Networks


  • arXiv

    Recurrent neural networks (RNNs) represent the state of the art in translation, image captioning, and speech recognition. They are also capable of learning algorithmic tasks such as long addition, copying, and sorting from a set of training examples. We demonstrate that RNNs can learn decryption algorithms, the mappings from ciphertext to plaintext, for three polyalphabetic ciphers (Vigenère, Autokey, and Enigma). Most notably, we demonstrate that an RNN with a 3000-unit Long Short-Term Memory (LSTM) cell can learn the decryption function of the Enigma machine. We argue that our model learns efficient internal representations of these ciphers 1) by exploring activations of individual memory neurons and 2) by comparing memory usage across the three ciphers. To be clear, our work is not aimed at 'cracking' the Enigma cipher. However, we do show that our model can perform elementary cryptanalysis by running known-plaintext attacks on the Vigenère and Autokey ciphers. Our results indicate that RNNs can learn algorithmic representations of black box polyalphabetic ciphers and that these representations are useful for cryptanalysis.
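    To make the learning task concrete, here is a minimal sketch of the Vigenère cipher, the simplest of the three polyalphabetic ciphers above, along with the key-stream recovery that underlies a known-plaintext attack. The function names are illustrative, not from the paper; the RNN in the paper learns the decryption mapping from examples rather than from code like this.

    ```python
    import string

    ALPHABET = string.ascii_uppercase  # 26-letter alphabet, A-Z

    def vigenere_encrypt(plaintext, key):
        """Shift each plaintext letter by the corresponding (repeating) key letter."""
        out = []
        for i, ch in enumerate(plaintext):
            shift = ALPHABET.index(key[i % len(key)])
            out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        return "".join(out)

    def vigenere_decrypt(ciphertext, key):
        """Invert encryption by subtracting the key shift; this is the mapping the RNN learns."""
        out = []
        for i, ch in enumerate(ciphertext):
            shift = ALPHABET.index(key[i % len(key)])
            out.append(ALPHABET[(ALPHABET.index(ch) - shift) % 26])
        return "".join(out)

    def recover_key_stream(plaintext, ciphertext):
        """Known-plaintext attack: subtracting plaintext from ciphertext exposes the key stream."""
        return "".join(
            ALPHABET[(ALPHABET.index(c) - ALPHABET.index(p)) % 26]
            for p, c in zip(plaintext, ciphertext)
        )

    ciphertext = vigenere_encrypt("ATTACKATDAWN", "LEMON")   # → "LXFOPVEFRNHR"
    print(vigenere_decrypt(ciphertext, "LEMON"))             # → "ATTACKATDAWN"
    print(recover_key_stream("ATTACKATDAWN", ciphertext))    # → "LEMONLEMONLE"
    ```

    The Autokey cipher differs only in that the key stream is extended with the plaintext itself rather than by repeating the key, which is why the same attack structure applies to both.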

    Learning the Enigma with Recurrent Neural Networks
    by Sam Greydanus
