10 December

New AI System AlphaCode Shows Promising Results in Writing Code

In today’s world, software plays a critical role in nearly every aspect of our lives. From smartphones and social media platforms to nuclear weapons and car engines, software is the driving force behind much of the technology we rely on every day.

However, despite the growing importance of software in our world, there is a global shortage of skilled programmers. This shortage has led many to ask: what if anyone could simply explain what they wanted a program to do, and a computer could translate that into lines of code?

This idea, known as natural language programming, has been the subject of research and development for many years. The goal is to create a system that can understand and interpret human language in order to generate code automatically.

This would be a game-changing development for the world of programming. It could make coding more accessible to people who don’t have the technical skills or experience to write code themselves, and it could also help to reduce the time and effort required to create software.

Of course, there are challenges to overcome in order to make natural language programming a reality. For example, human language is often ambiguous and can be difficult to interpret, even for other humans. Additionally, the vast range of possible human language expressions means that it would be difficult to create a system that can understand and interpret all of them.

AlphaCode brings us closer to natural language programming

A new artificial intelligence (AI) system called AlphaCode, developed by the research lab DeepMind, a subsidiary of Alphabet (Google’s parent company), has been shown to be capable of writing code in a way that may one day assist experienced programmers. However, the researchers behind the study say that the system is not advanced enough to replace human coders entirely.

“It’s very impressive, the performance they’re able to achieve on some pretty challenging problems,” said Armando Solar-Lezama, head of the computer-assisted programming group at the Massachusetts Institute of Technology.

Designed to assist, not replace, human programmers

AlphaCode is an example of a natural language programming system: it is designed to interpret a problem described in everyday language and generate working code from it. As such, it could make coding more accessible to non-programmers and reduce the time and effort required to create software.

However, the researchers behind AlphaCode say that the system is still in the early stages of development and is not yet advanced enough to replace human programmers entirely. They believe it may eventually assist experienced coders, but not replace them outright.

AlphaCode goes beyond the previous standard-bearer in AI code writing, Codex, a system released in 2021 by the nonprofit research lab OpenAI. Codex was created by fine-tuning the large language model GPT-3 on more than 100 gigabytes of code from GitHub, an online software repository. The Codex software is able to write code when prompted with an everyday description of what it is supposed to do, but performs poorly when faced with more difficult problems.
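To illustrate what “prompted with an everyday description” means in practice, a Codex-style model is typically given a plain-language request, often as a comment or docstring, and asked to complete the code. The snippet below is a hand-written sketch of that pattern, not actual Codex output; the task description and the function name are invented for illustration.

```python
# Everyday description given as a prompt to a Codex-style model:
#   "Write a function that takes a list of item prices and returns
#    the total cost including 8% sales tax, rounded to two decimals."

# A completion of the kind such a model is expected to produce:
def total_with_tax(prices):
    subtotal = sum(prices)
    return round(subtotal * 1.08, 2)


print(total_with_tax([19.99, 5.50]))  # 27.53
```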

How AlphaCode generates and filters code

The creators of AlphaCode focused on solving these difficult problems. They began by feeding a large language model many gigabytes of code from GitHub, in order to familiarize it with coding syntax and conventions. They then trained it to translate problem descriptions into code, using thousands of problems collected from programming competitions. For example, a problem might ask for a program to determine the number of binary strings (sequences of zeroes and ones) of length n that don’t have any consecutive zeroes.
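To make that example problem concrete, one correct solution is a short dynamic-programming routine: count the valid strings ending in ‘1’ and in ‘0’ separately, since a ‘0’ may only follow a ‘1’. The version below is written by hand for illustration; it is not output from AlphaCode.

```python
def count_no_consecutive_zeroes(n: int) -> int:
    """Count binary strings of length n that contain no two consecutive zeroes."""
    if n == 0:
        return 1  # only the empty string
    # Track how many valid strings of the current length end in '1' vs '0'.
    # A '1' can follow anything; a '0' can only follow a '1'.
    ends_in_one, ends_in_zero = 1, 1  # length 1: "1" and "0"
    for _ in range(n - 1):
        ends_in_one, ends_in_zero = ends_in_one + ends_in_zero, ends_in_one
    return ends_in_one + ends_in_zero


# For n = 3 the valid strings are 010, 011, 101, 110, 111 -> 5
assert count_no_consecutive_zeroes(3) == 5
```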

When presented with a new problem, AlphaCode generates a large number of candidate code solutions (in Python or C++) and filters out the ones that do not work. In contrast to previous systems like Codex, which generated tens or hundreds of candidates, AlphaCode generates up to a million or more.

To filter these solutions, AlphaCode first keeps only the roughly 1% of programs that pass the test cases that accompany the problem. It then clusters the remaining programs based on the similarity of their outputs on made-up inputs, and submits programs from each cluster, one by one, starting with the largest cluster. This continues until a successful solution is found or the maximum number of submissions is reached (about 10, which is the maximum that humans typically submit in programming competitions). By submitting programs from different clusters, AlphaCode is able to test a wide range of programming tactics. This is the most innovative step in AlphaCode’s process, according to Kevin Ellis, a computer scientist at Cornell University who works on AI coding.
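A rough sketch of that selection step is shown below, under simplifying assumptions: candidates are modeled here as Python callables rather than generated Python or C++ source files, and the function name `select_submissions` is invented for illustration. DeepMind’s actual pipeline runs at far larger scale.

```python
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

# A candidate "program" is modeled as a callable: input text -> output text.
Candidate = Callable[[str], str]


def select_submissions(
    candidates: List[Candidate],
    example_tests: List[Tuple[str, str]],
    made_up_inputs: List[str],
    max_submissions: int = 10,
) -> List[Candidate]:
    """Pick up to max_submissions candidates via a filter-then-cluster strategy."""
    # 1. Keep only candidates that pass the example tests shipped with the problem
    #    (in AlphaCode this leaves roughly 1% of the generated programs).
    passing = [
        prog for prog in candidates
        if all(prog(inp) == expected for inp, expected in example_tests)
    ]

    # 2. Cluster the survivors by their behaviour on made-up inputs:
    #    candidates that produce identical outputs fall into the same cluster.
    clusters: Dict[Tuple[str, ...], List[Candidate]] = defaultdict(list)
    for prog in passing:
        signature = tuple(prog(inp) for inp in made_up_inputs)
        clusters[signature].append(prog)

    # 3. Submit one representative per cluster, largest cluster first,
    #    up to the competition's submission limit (about 10).
    ordered = sorted(clusters.values(), key=len, reverse=True)
    return [cluster[0] for cluster in ordered[:max_submissions]]
```

In the real system, each candidate would be a generated program compiled and executed in a sandbox; modeling candidates as callables simply keeps this sketch self-contained.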

AlphaCode’s results and potential applications

After training, AlphaCode was able to solve about 34% of assigned problems, according to a study published this week in the journal Science. This is a significant improvement over Codex, which only achieved single-digit-percentage success on similar benchmarks.

To further test its capabilities, DeepMind entered AlphaCode into online coding competitions. In contests with at least 5000 participants, the system outperformed 45.7% of human programmers. The researchers also compared AlphaCode’s programs with those in its training database and found that it did not duplicate large sections of code or logic. Instead, it generated new, original solutions, which surprised Ellis.

“It continues to be impressive how well machine-learning methods do when you scale them up,” he said. “The results are stunning,” added Wojciech Zaremba, a co-founder of OpenAI and co-author of the Codex paper.

According to Yujia Li, a computer scientist at DeepMind and co-author of the AlphaCode study, AI coding could have applications beyond winning competitions. It could be used to do the tedious, repetitive work of software development, freeing up human developers to work at a higher, more abstract level. It could also help non-programmers create simple programs.

David Choi, another study author at DeepMind, envisions running the AlphaCode model in reverse: translating code into explanations of what it is doing, which could be helpful for programmers trying to understand others’ code. “There are a lot more things you can do with models that understand code in general,” he said.

Remaining challenges and concerns about self-improving AI

However, there are still challenges to overcome. For example, AlphaCode requires a massive amount of computing power – tens of billions of trillions of operations per problem – which is currently only available to the largest tech companies. Additionally, the problems it solved in online programming competitions were narrow and self-contained, whereas real-world programming often requires managing large code packages in multiple places, which requires a more holistic understanding of the software, according to Solar-Lezama.

The study also raises concerns about the potential long-term risks of AI systems that are able to improve themselves. Some experts believe that such self-improvement could lead to a superintelligent AI that takes over the world. While this scenario may seem unlikely at the moment, researchers still believe it is important to institute guardrails and built-in checks and balances to prevent it from happening.

“Even if this kind of technology becomes supersuccessful, you would want to treat it the same way you treat a programmer within an organization,” Solar-Lezama said. “You never want an organization where a single programmer could bring the whole organization down.”

