Leveraging AI for Coding
2024-05-06 18:00:00 +0900
Using AI to help you be a more efficient programmer is a bit like Kenny Rogers' classic song "The Gambler": "You've got to know when to hold 'em, know when to fold 'em, know when to walk away, and know when to run." You've got to "know when to hold 'em" in the sense that most of the time the results you get from large language models (LLMs) for code are surprisingly good, and you can use them with little modification. You need to "know when to fold 'em" in that you have to recognize when to change your prompts to get the results you want; we discuss some strategies below. You need to "know when to walk away" in the sense that, at the end of the day, these models are only recombining code they have seen before and do not generalize logic well, so when the results are not getting better despite different prompts, you need to cut your losses and do the work manually. Finally, you need to "know when to run" and not use AI-assisted coding in the first place. If you are trying to use a language the model has not been trained on, or a library released after the model's training cutoff, it is more time-efficient not to use AI assistance at all. Moreover, if safety and security considerations are paramount and you do not yet have the skill to recognize when the results may be hallucinations or simply wrong, you should not use AI-generated code.
WARNING: Although everything written here is up-to-date at the time of writing, models and tools are changing rapidly.
Currently, AI-assisted coding tools can be roughly divided into two types: chatbots and IDE integrations.
Chatbots
Chatbots are used for the big picture, to explore ideas, and to set up the development of products. Although they can be used for the details as well, modifying code is done more efficiently in an IDE with integrated LLM support, which gives a smoother workflow.
- All chatbots have free and paid tiers.
- "Multimodal" means the chatbot can handle images and other media types.
- By "run code" we mean you can ask the chatbot to execute code for you.
Chatbot Prompts
Before we start, keep in mind that the results from large language models (LLMs) are very brittle. The results can vary widely due to a minor change in the prompt, using a different LLM, or even an updated version of the same LLM.
Large language models operate by predicting the most likely next word in a sequence, generating text one word at a time. They calculate the probability of each possible word in their vocabulary and choose one based on those probabilities. Some chatbots, such as ChatGPT, have a "temperature" setting that controls the randomness of this selection process: with a low temperature (like 0), the model typically chooses the word with the highest probability, leading to more consistent but potentially repetitive outputs. A higher temperature introduces more randomness, allowing for more varied and creative responses, but at the risk of nonsensical or off-topic outcomes, known as "hallucinations." For programming, where reproducibility is valuable, a low temperature setting is recommended.
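As a concrete illustration, here is a minimal sketch of setting a low temperature via the OpenAI Python client (the model name and prompts are placeholders, and other providers expose a similar setting):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    temperature=0,   # low temperature for more reproducible code generation
    messages=[
        {"role": "system", "content": "You are a senior Python developer."},
        {"role": "user", "content": "Write a function that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```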
You should have an initial template to set the overall context, called the system message. This can be copied and pasted into the prompt (or some services allow it to be set up as Custom Instructions, so it doesn't have to be added each time you restart). Example:
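For instance, a system message along these lines sets the overall context for coding questions (the wording is illustrative and should be adapted to your stack):

```
You are an expert software developer. Answer with concise, working code and
brief explanations. Use Python 3 unless I specify another language, follow
standard style conventions, and state any assumptions you make.
```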
Prompting Strategies
Keep prompts short and specific, then build on them with multiple prompts to guide the AI to what you are looking for. If necessary, label parts of the prompt, for example: Context, Input, Instruction, Restriction, Output Format.
- Zero-shot prompting - no examples provided, relies entirely on the model's training.
- Few-shot prompting - a few examples provided or Q&A examples.
- Chain-of-thought prompting - provide a series of intermediate reasoning steps.
Finally, don't forget you can ask the chatbot to format the output as JSON, XML, table, list, markdown, HTML, MathJax, LaTeX, etc.
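For example, a labeled prompt that also specifies the output format might look like this (all details are illustrative):

```
Context: I maintain a Flask web application backed by PostgreSQL.
Input: the SQL schema pasted below.
Instruction: write a query that returns the ten most recent orders for each customer.
Restriction: use only standard SQL, no vendor-specific extensions.
Output Format: a markdown table with the columns customer_id, order_id, created_at.
```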
For a more detailed look, try this Prompt Engineering Guide.
Examples
General Examples
Project Example
Use "continue" or "go on" to get around token output limits.
IDE Integration
IDEs use the context that you have provided via the comments and code in your file and offer suggestions. They allow you to work on details while staying in your dev tooling.
IDE Prompts
Most IDE integration is driven by the code comments and code that you write, for which the AI assistant then offers one or more completion suggestions. The most popular AI assistants for IDEs use the [Tab] key to accept a code suggestion and the [Escape] key to skip it. Beyond that, it is worth the time to look up the keyboard shortcuts and functionality of the specific AI coding assistant you have chosen. Most IDE integrations also support selecting code and then choosing to have it explained, fixed, refactored, optimized, or sent to an integrated chatbot.
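For instance, typing a comment like the one below is often enough context for the assistant to suggest a completion similar to the following (the exact suggestion will vary by tool and model):

```python
# return the nth Fibonacci number using an iterative loop
def fibonacci(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```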
Examples
General Coding
Code Quality
There are also specialized tools for code quality, such as Code Climate and Treno.
Minified code can be unminified by using the prompt "fix this code:". The quality of the results will vary. There are also open-source LLMs such as LLM4Decompile specifically for decompiling binary code to C.
You can use AI tools for many things, such as linting, calculating cyclomatic complexity, Halstead complexity, the maintainability index, code coverage, etc. However, reliable deterministic tools exist for these tasks, so use those instead and you do not have to worry about hallucinations or mistakes.
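For example, in the Python ecosystem the deterministic radon package computes cyclomatic complexity and the maintainability index directly (a minimal sketch; the file name is a placeholder):

```python
from radon.complexity import cc_visit
from radon.metrics import mi_visit

with open("example.py") as f:
    source = f.read()

# cyclomatic complexity per function, method, and class
for block in cc_visit(source):
    print(block.name, block.complexity)

# maintainability index for the whole module
print(mi_visit(source, multi=True))
```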
Data
APIs
Chatbots are useful both for creating and exposing your own APIs and for finding and consuming other people's APIs.
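For example, a chatbot can scaffold a minimal endpoint and the matching client code; here is a sketch of the kind of FastAPI endpoint it might produce (names and routes are illustrative):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
def create_item(item: Item) -> dict:
    # a real implementation would persist the item to a database
    return {"status": "created", "name": item.name, "price": item.price}
```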
UI
OpenArt and Canva are sites specifically for creating images and professional designs. Teleporthq (React, Vue, Angular, HTML&CSS, or UIDL) and v0 by Vercel (React and Tailwind CSS) are sites specifically for creating website front-ends. Anima and Locofy are plugins for Figma to generate React, Vue, or HTML code to create website front-ends, with Locofy also supporting Adobe XD as a source and Next.js, Gatsby, or React Native as output targets.
Testing
Note: GitHub Copilot allows you to select the code you want to unit test and then ask Copilot "#selection write a unit test for this code". Similarly, for Amazon CodeWhisperer, you can select the code you want to test, right-click 'Send to Amazon CodeWhisperer' -> 'Send to prompt', and in the Amazon CodeWhisperer chat window type "write a unit test for".
For more sophisticated integration and behavior testing, you may need to write a little or even a lot of the test case code manually before your AI assistant has enough context to be helpful.
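For example, given a small function, an assistant will typically propose pytest-style cases along these lines (illustrative; always review the generated assertions for correctness):

```python
# code under test
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# the kind of tests an assistant might generate
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_collapses_whitespace():
    assert slugify("  Leveraging   AI  ") == "leveraging-ai"
```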
Documentation
The documentation generated by an AI assistant may not be perfect, but it can be much quicker to edit than writing it all from scratch.
- You can ask your LLM to generate documents such as user manuals, FAQs, README files, API documentation, troubleshooting guides, etc.
- It can generate UML, Mermaid, C4 models (Context, Containers, Components, Code), and other text diagrams.
- Use purpose-built software documentation tools such as:
- Build your own Chatbot with ChatGPT Assistants
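For example, asked to document a small function, an assistant might produce a docstring along these lines (illustrative output; check it against the actual behavior before committing):

```python
def moving_average(values: list[float], window: int) -> list[float]:
    """Return the simple moving average of `values`.

    Args:
        values: The input series, ordered oldest to newest.
        window: The number of consecutive values averaged per point.

    Returns:
        A list of averages, one per full window, so its length is
        len(values) - window + 1 (empty if the window exceeds the input).
    """
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```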
Deployment
It may not be readily apparent, but LLMs can also be helpful for pull requests, virtual machines, cloud infrastructure, and various other aspects of deployment.
Ongoing Maintenance, Updates, and Migrations
Other developer-related AI tools
- UI designer Uizard
- Bug reporter JamGPT Chrome extension
- Security Snyk finds and fixes security issues in proprietary code
- Issue tracker Bugasura like JIRA with built-in AI
- Issue tracker Tegon, an open-source, AI-first alternative to Jira and Linear
- Local Git repo code editor Aider is a CLI tool to pair program with LLMs.
- Terminal AI coding agent Plandex for large, complex tasks in existing codebases
Books
- AI-Assisted Programming (all stages of code creation)
- AI-Powered Developer (thorough example of doing a significant project)
- Coding with AI for Dummies (looks at a variety of tools)
Future (Agents, etc.)
- GitHub Copilot Workspace allows high-level specification and planning similar to using the chatbots, but it is also integrated into your IDE and can actually make changes in the project files for you. A nice feature is that it first presents a specification of the changes it proposes to make; then, after you approve the steps in the plan, it makes the actual changes in your code.
- ChatGPT Plus + LangChain
- Devin - more hype than delivery as of the time of writing
This article is licensed under a Creative Commons Attribution 4.0 International license.