GPT-3 for Regex, Bucket Policies, and Solidity Tasks
GPT-3 can convert tedious, syntax-heavy tasks into actionable tooling by generating regex patterns from plain English, crafting precise bucket policies, and explaining or auditing Solidity contracts.
Often, you’d find that GPT-3 (and successor models) could do tasks that weren’t exactly superhuman, but were tedious—and do them reliably well.
That distinction matters, because a lot of the early value wasn’t “AI replaces expert.” It was more like: “AI takes the stuff you’d normally procrastinate on, or screw up because the syntax is unforgiving, and just gets you unstuck.”
Below are a few examples that stood out to me early on.
1) Regex: plain English in, working patterns out

Regex is one of those things that's straightforward in theory. In practice, it's an art form that is still beyond my comprehension.
What was surprising (and genuinely useful) was that you could tell GPT-3, in plain English:
- “Parse this paragraph and grab these kinds of words,” or
- “Match this pattern but exclude those cases,” or
- “Find X, replace with Y, but only when it’s inside Z,”
…and it would generate the regex.
And it worked really, really well.
I often noticed people trying to use the model to perform the transformation (formatting text directly in the chat, etc.) instead of just telling the model: no, write the pattern for it—then run a script to apply it. That workflow was (and still is) a great example of early “tool use”: the model wasn’t the tool, it was the thing that helped you build the tool.
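That workflow is easy to sketch. Here's a minimal example, assuming the model has been asked to "grab every dollar amount" from a paragraph; the pattern, function name, and sample text are illustrative, not anything GPT-3 actually emitted:

```python
import re

# Hypothetical pattern a model might generate for the request:
# "grab every dollar amount, including optional thousands separators and cents"
PATTERN = re.compile(r"\$\d{1,3}(?:,\d{3})*(?:\.\d{2})?")

def extract_amounts(text: str) -> list[str]:
    """Apply the generated pattern in a small script, instead of asking
    the model to transform the text directly in the chat."""
    return PATTERN.findall(text)

print(extract_amounts("Invoice: $1,250.00 due; late fee $25, total $1,275.00."))
# → ['$1,250.00', '$25', '$1,275.00']
```

The point is the division of labor: the model writes the unforgiving syntax once, and a boring, deterministic script applies it as many times as you like.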
2) Bucket policies: “make it correct, before you make it fast”
Another area where this felt like magic was policies for things like storage buckets in AWS and Google Cloud.
Writing a policy that describes how things are supposed to behave can be very tricky, because you’re trying to express real-world rules precisely:
- A person can upload something if they’re logged in.
- Somebody might not have to be logged in to view something.
- Or maybe they can view only in certain cases.
- Or they have to do this here, but not there.
If you’ve tried to do this without being an expert, you know what happens: you spend hours (sometimes days) dealing with the fallout of a misconfigured policy.
And when you hear stories about some misconfigured app using Firebase or something else, it’s often because the person was trying to build quickly and didn’t have the time to understand the implications of leaving something unprotected.
That’s one of the things I really appreciated early on: even a model like GPT-3 could make things more secure if you bothered to use it—because you could explain what you wanted in natural language and have it output a bucket policy that matched your intent.
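As a concrete illustration of the "view only in certain cases" rule above, here's the kind of thing that workflow produces: a sketch of an AWS S3 bucket policy (the bucket name is a placeholder) that allows anonymous reads only for objects under a `public/` prefix, leaving everything else private by default.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForPublicPrefixOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/public/*"
    }
  ]
}
```

Expressing "somebody might not have to be logged in to view something, but only in certain cases" comes down to exactly this kind of scoping: the `Resource` prefix is the "certain cases," and anything not explicitly allowed stays denied.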
3) Solidity: writing contracts, and explaining “trap doors”
Another area of early exploration for me was Solidity, the programming language used for writing blockchain contracts.
It’s been a recurring problem that people can write a contract that seems to behave one way, but actually has little hidden trap doors that make it behave differently.
One of the first things I used the earliest Codex model (the one we released five years ago) for was writing Solidity contracts. And I found that it was not only good at writing them, but also at explaining them.
I could give it a Solidity contract and say: “Hey, can you find the errors and whatnot there?”
What’s kind of funny is that this was already apparent five years ago, but it was still early. Later on, companies started using AI to help audit blockchain contracts, and when I look at their offerings, they often aren’t much more sophisticated than what anyone who spent a weekend exploring the model’s capabilities could have built.
Example: Solidity contract + GPT-3’s output
Here’s a sample Solidity contract:
//SPDX-License-Identifier: GPL-3.0
pragma solidity ^0.8.0;

contract Funding {
    address payable[] public members;
    address public manager;

    constructor() {
        manager = msg.sender;
    }

    receive() external payable {
        require(msg.value == 0.2 ether);
        members.push(payable(msg.sender));
    }

    function getBalance() public view returns (uint) {
        uint arrayLength = members.length;
        uint balance = address(this).balance;
        for (uint i = 0; i < arrayLength; i++) {
            if (members[i] == msg.sender) {
                return balance;
            }
        }
        return 0;
    }

    function payBalance(address _address) public {
        require(msg.sender == manager);
        uint balance = getBalance();
        payable(_address).transfer(balance);
    }
}
And here’s the output GPT-3 produced:
// This Solidity contract does the following:
// - Defines a struct type to hold the members of the contract
// - Defines a struct type to hold the balance of the contract
// - Defines a constructor function to initialize the contract
// - Defines a function to receive funds
// - Defines a function to pay funds
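This contract is also a decent illustration of the trap-door point. On one plausible reading, `getBalance` returns the entire contract balance to any caller who appears in `members`, rather than that member’s own 0.2 ether deposit, and `payBalance` computes `getBalance()` for the manager rather than for `_address`. A sketch of a per-member accounting rewrite, assuming the intent was to credit each member only with their own deposits (the contract name and intent are my assumptions, not from the original):

```solidity
//SPDX-License-Identifier: GPL-3.0
pragma solidity ^0.8.0;

// Hypothetical rewrite: track each member's own contribution
// instead of exposing the whole contract balance to every member.
contract FundingFixed {
    mapping(address => uint) public contributions;
    address public manager;

    constructor() {
        manager = msg.sender;
    }

    receive() external payable {
        require(msg.value == 0.2 ether);
        contributions[msg.sender] += msg.value; // credit only the sender
    }

    // Each member sees only their own contribution, not the whole pot.
    function getBalance() public view returns (uint) {
        return contributions[msg.sender];
    }

    function payBalance(address payable _address) public {
        require(msg.sender == manager);
        uint amount = contributions[_address];
        contributions[_address] = 0; // zero before transfer (checks-effects-interactions)
        _address.transfer(amount);
    }
}
```

The gap between the quoted explanation above and a real audit like this is exactly where the tooling value lives: the model gives you a first reading, and you use it to interrogate the code rather than trusting either blindly.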
Even with something as niche (and risk-prone) as smart contracts, the ability to get a model to explain what it thinks the code is doing—and to help you inspect it for mistakes—was one of those early “oh wow” moments for me. It wasn’t about magic intelligence. It was about turning a bunch of tedious, syntax-heavy work into something you could drive with natural language, and then verify with actual tooling.