Miltos Allamanis's research is at the intersection of machine learning, programming languages, and software engineering. His research aims to combine the rich structural aspects of programming languages with machine learning to create better tools for developers, while using problems in this area to motivate machine learning research. He obtained his PhD from the University of Edinburgh, UK. More information about him and his publications can be found at https://miltos.allamanis.com

Jacob Andreas is the X Consortium Assistant Professor at MIT. His research aims to build intelligent systems that can communicate effectively using language and learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill Scholar), and his B.S. from Columbia. As a researcher at Microsoft Semantic Machines, he founded the language generation team and helped develop core pieces of the technology that powers conversational interaction in Microsoft Outlook. He has been the recipient of a Sony Faculty Innovation Award, a Kolokotrones teaching award at MIT, and paper awards at NAACL and ICML.

Xinyun Chen is a Ph.D. candidate at UC Berkeley, working with Prof. Dawn Song. Her research lies at the intersection of deep learning, programming languages, and security. Her recent research focuses on neural program synthesis and adversarial machine learning. She received a Facebook Fellowship in 2020 and was selected for Rising Stars in Machine Learning in 2021. Her work on SpreadsheetCoder for spreadsheet formula prediction was integrated into Google Sheets, and she was part of the AlphaCode team during her internship at DeepMind.

Creating solutions to unforeseen problems is second nature in human intelligence – a result of critical thinking informed by experience. The machine learning community has made tremendous progress in generating and understanding textual data, but advances in problem solving remain limited to relatively simple maths and programming problems, or to retrieving and copying existing solutions. As part of DeepMind's mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding.

Graham Neubig is an associate professor at the Language Technologies Institute of Carnegie Mellon University. His research focuses on multilingual natural language processing, natural language interfaces to computers, and machine learning methods for NLP, with the final goal of allowing every person in the world to communicate with each other, and with computers, in their own language. He also contributes to making NLP research more accessible through open publishing of research papers, advanced NLP course materials and video lectures, and open-source software, all of which are available on his website.

Jerry Tworek is a research scientist at OpenAI. His current focus is on teaching artificial programmers to work hand in hand with humans and increase their productivity. He graduated with an MSc in Mathematics from the University of Warsaw and spent the first five years of his career in the hedge fund industry, which eventually led him to study deep reinforcement learning. Jerry took part in the OpenAI robotics project “Solving Rubik’s Cube with a Robot Hand”, which he later presented at the NeurIPS 2019 Deep RL workshop. He currently holds a lead research role in OpenAI's program synthesis effort.