Machine Learning

Neural Program Synthesis with Priority Queue Training



  • arXiv

    We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards. We employ an iterative optimization scheme in which we train an RNN on a dataset of the K best programs from a priority queue of the programs generated so far. We then synthesize new programs by sampling from the RNN and add them to the priority queue. We benchmark our algorithm, called priority queue training (PQT), against genetic-algorithm and reinforcement-learning baselines on a simple but expressive Turing-complete programming language called BF. Our experimental results show that our simple PQT algorithm significantly outperforms the baselines. By adding a program-length penalty to the reward function, we are able to synthesize short, human-readable programs.

    Neural Program Synthesis with Priority Queue Training
    by Daniel A. Abolafia, Mohammad Norouzi, Quoc V. Le
    https://arxiv.org/pdf/1801.03526v1.pdf
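    The PQT loop described in the abstract can be sketched in a toy form. The version below is an assumption-laden illustration, not the paper's implementation: a random-mutation step stands in for sampling from a trained RNN, the reward function is a made-up string-matching objective (with a small length penalty, echoing the paper's short-program variant), and all names (`sample_program`, `pqt`, `TARGET`) are hypothetical.

    ```python
    import heapq
    import random

    # Toy objective (assumption): evolve a string toward TARGET.
    TARGET = "hello"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def reward(program: str) -> float:
        # Matching characters minus a small length penalty, echoing the
        # paper's length-penalized reward for short, readable programs.
        matches = sum(a == b for a, b in zip(program, TARGET))
        return matches - 0.01 * len(program)

    def sample_program(queue, rng) -> str:
        # Placeholder for sampling from the trained RNN (assumption):
        # mutate one character of a random top-K program from the queue.
        if not queue:
            return "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
        _, base = rng.choice(queue)
        chars = list(base)
        i = rng.randrange(len(chars))
        chars[i] = rng.choice(ALPHABET)
        return "".join(chars)

    def pqt(iterations=2000, k=10, seed=0):
        rng = random.Random(seed)
        queue = []   # min-heap of (reward, program): the K best seen so far
        seen = set()
        for _ in range(iterations):
            prog = sample_program(queue, rng)
            if prog in seen:
                continue
            seen.add(prog)
            r = reward(prog)
            if len(queue) < k:
                heapq.heappush(queue, (r, prog))
            elif r > queue[0][0]:
                heapq.heapreplace(queue, (r, prog))
            # In the paper, the RNN would now be trained on the queue's
            # top-K programs before the next sampling round.
        return max(queue)

    best_reward, best_prog = pqt()
    print(best_prog)
    ```

    The key design point carried over from the paper is the interaction between the two steps: the priority queue retains only the K highest-reward programs, and new candidates are generated from (a stand-in for) a model fit to that elite set, so the search concentrates around the best programs found so far.
    
    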
