Learn and understand more about Cody's features and core AI functionality.
Cody suggests completions as you type using context from your code, such as your open files and file history. It's powered by the latest instant LLMs for accuracy and performance.
Autocomplete supports any programming language because it uses LLMs trained on broad data. We've found that it works exceptionally well with JavaScript, TypeScript, Python, and Go code.
By default, a fully configured Sourcegraph instance picks an LLM to generate code autocomplete. Custom models can be used for Cody autocomplete via the `completionModel` option inside the `completions` site config.
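As a minimal sketch, the relevant site config block might look like the following. The provider and model names shown are illustrative assumptions; consult your instance's site config schema for the exact options supported by your Sourcegraph version:

```json
{
  "completions": {
    // Assumed values for illustration; check your site config schema
    "provider": "anthropic",
    "completionModel": "claude-instant-1",
    "accessToken": "<YOUR_PROVIDER_ACCESS_TOKEN>"
  }
}
```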
We also recommend reading the Enabling Cody on Sourcegraph Enterprise guide before you configure the autocomplete feature.
NOTE: Self-hosted customers need to update to a minimum of version 5.0.4 to use autocomplete.
NOTE: Cody autocomplete currently only works with Anthropic's Claude Instant model. Support for other models will come later.
VS Code logs can be accessed via the Output view. To access autocomplete logs, you need to enable Cody logging in verbose mode. To do so:
- Go to the Cody Extension Settings and enable both `Cody › Debug: Enable` and `Cody › Debug: Verbose`
- Restart or reload your VS Code editor
- You can now see the logs in the Output view. Open it via the menu bar (`View > Output`) and select `Cody by Sourcegraph` from the dropdown list
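If you prefer editing `settings.json` directly, the two checkboxes above should correspond to setting IDs along these lines (a sketch assuming VS Code's usual label-to-ID mapping; verify against your installed extension version):

```json
{
  // Assumed setting IDs derived from the "Cody › Debug" checkbox labels
  "cody.debug.enable": true,
  "cody.debug.verbose": true
}
```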
Chat lets you ask Cody general programming questions or questions about your specific code. You can chat with Cody in the `Chat` panel of the editor extensions or with the `Ask Cody` button in the Sourcegraph UI.
Cody uses several search methods (including keyword and semantic search) to find files in your codebase that are relevant to your chat questions. It then uses context from those files to provide an informed response based on your codebase. Cody also tells you which code files it reads to generate its responses.
Context retrieval isn't perfect, and Cody occasionally uses incorrect context or hallucinates answers. When Cody returns an incorrect response, it is often worth asking the question again slightly differently to see if Cody can find better context the second time.
Cody's chat function can handle use cases like:
- Ask Cody to generate an API call. Cody can gather context on your API schema to inform the code it writes
- Ask Cody where a specific component is defined within your codebase. Cody can retrieve and describe the files where that component is written
- Ask Cody questions that require an understanding of multiple files, such as how data is populated in a React app. Cody can find the React component definitions to understand what data is being passed and where it originates
More specifically, Cody can answer questions like:
- How is our app's secret storage implemented on Linux?
- Where is the CI config for the web integration tests?
- Can you write a new GraphQL resolver for the AuditLog?
- Why is the UserConnectionResolver giving an error `unknown user`, and how do I fix it?
You can also open Cody's chat inline in VS Code using the `+` icon. This opens a chat box that can be used for general chat questions, code edits, and refactors. Select a code snippet to ask Cody for an inline code edit, then type `/edit` plus your desired code change. Cody will generate edits, which you can accept or reject with the `Apply` button.
You can also use the `/touch` command in the inline chat box if you'd like Cody to place its output in a new file.
Examples of `/edit` instructions Cody can handle:
- Factor out any common helper functions (when multiple functions are selected)
- Use the imported CSS module's class names
- Extract the list item to a separate React component
- Handle errors in this code better
- Add helpful debug log statements
- Make this work (and yes, it often does work—give it a try!)
NOTE: Inline chat functionality is currently only available in the VS Code extension. The `/edit` command was called `/fix` prior to version 0.10.0 of the VS Code extension.
Commands allow you to run common actions quickly. Commands are predefined, reusable prompts accessible by hotkey from within the VS Code extension. Like autocomplete and chat, commands will search for context in your codebase to provide more contextually aware and informed answers (or to generate more idiomatic code snippets).
The commands available in VS Code include:
- Document Code
- Explain Code
- Generate Unit Tests
- Code Smell
There are three ways to run a command in VS Code:
- Type `/` in the chat bar. Cody will then suggest a list of available commands
- Right-click and select `Cody`, then choose a command from the list
- Use the predefined command hotkey: `⌥+C` / `Alt+C`
NOTE: This functionality is also available in the JetBrains extension under the name `Recipes`. To access it, navigate to the `Recipes` panel (next to the `Chat` panel), and you can find each available recipe as a button within the UI.
Custom commands let you save your quick actions and prompts for Cody based on your common workflows. They are defined in JSON format and allow you to call CLI tools, write custom prompts, and select context to be sent to Cody. This provides a flexible way to tailor Cody to your needs.
You can invoke custom commands with the same hotkey as predefined commands. Alternatively, you can right-click the selected code, open the Cody context menu, and select the `Custom Commands (Experimental)` option.
You can define custom commands for Cody in the `cody.json` file. To make commands only available for a specific project, create the `cody.json` file in that project's `.vscode` directory. When you work on that project, these workspace-specific custom commands will be available.

To make custom commands globally available across multiple projects, create a new `cody.json` file in your home directory's `.vscode` folder. These global custom commands will be available in Cody in any workspace.
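As a rough sketch of the shape such a file can take, a `cody.json` entry might look like this. Because the feature is experimental, the command name and every field shown (`commands`, `description`, `prompt`, `context`, `currentFile`) are illustrative assumptions; check the custom commands reference for the exact schema:

```json
{
  "commands": {
    "summarize-readme": {
      "description": "Summarize this repository's README for a new contributor",
      "prompt": "Summarize the README file so a new contributor can get started quickly.",
      "context": {
        "currentFile": true
      }
    }
  }
}
```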
Cody uses LLMs trained on broad code, and we've found it to support all common programming languages effectively. However, the quality of autocomplete and other features may vary based on how well the underlying LLM was trained on a given language.