Update: The workshop will appear at ICLR 2022! Stay tuned for more information to come!
An exciting application of deep learning methods is the completion, repair, and synthesis of program code. This field has received a fair amount of attention over the last decade, yet the recent application of large-scale language modelling techniques arguably holds tremendous promise to completely revolutionize the area (Chen et al., 2021; Austin et al., 2021). The new large pretrained models excel at completing code and at synthesizing code from natural language descriptions, and they work across a wide range of domains, tasks, and programming languages. The excitement about these new possibilities is spurring tremendous interest in both industry and academia. Yet we are only beginning to explore the potential of large-scale deep learning for code, and state-of-the-art models still struggle with correctness and generalization. This calls for platforms to exchange ideas and discuss the challenges in this line of work.