On codebase context and related features
Windsurf’s context engine builds a deep understanding of your codebase, past actions, and next intent.
Historically, code-generation approaches focused on fine-tuning large language models (LLMs) on a codebase, which is difficult to scale to the needs of every individual user. A more recent and popular approach leverages retrieval-augmented generation (RAG), which focuses on techniques to construct highly relevant, context-rich prompts to elicit accurate answers from an LLM.
We’ve implemented an optimized RAG approach to codebase context, which produces higher-quality suggestions and fewer hallucinations.
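The core idea behind RAG can be sketched in a few lines. The snippet below is a toy illustration (not Windsurf's actual retrieval engine): it ranks code chunks against a query using a simple token-overlap score and assembles the best match into a prompt. A production system would use learned embeddings and smarter chunking, but the shape — retrieve relevant context, then prompt the LLM with it — is the same.

```python
import re

def tokenize(text):
    """Lowercase word tokens, used here as a stand-in for a real embedding."""
    return set(re.findall(r"[a-z_]+", text.lower()))

def similarity(a, b):
    """Jaccard overlap between token sets (a toy relevance score)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def build_prompt(query, chunks, k=2):
    """Retrieve the top-k most relevant code chunks and assemble a prompt."""
    q = tokenize(query)
    ranked = sorted(chunks, key=lambda c: similarity(q, tokenize(c)), reverse=True)
    context = "\n---\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {query}"

# A toy "codebase" split into chunks.
chunks = [
    "def parse_config(path): ...  # reads YAML config",
    "def send_email(to, body): ...  # SMTP helper",
    "def load_user(session): ...  # fetches user from DB",
]
prompt = build_prompt("How do I read the config file?", chunks, k=1)
```

With the query above, the config-parsing chunk scores highest and is the one placed in the prompt.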
Windsurf offers full fine-tuning for enterprises, and the best solution combines fine-tuning with RAG.
Out of the box, Windsurf takes multiple relevant sources of context into consideration.
This feature allows teams to pull in Google Docs as shared context or knowledge sources for their entire team.
Currently, only Google Docs are supported. Images are not imported, but charts, tables, and formatted text are fully supported.
Configure knowledge base settings for your team. This page will only be visible with admin privileges.
Admins must manually connect with Google Drive via OAuth, after which they can add up to 50 Google Docs as team knowledge sources.
Cascade will have access to the docs specified in the Windsurf dashboard. These docs do not obey individual user access controls: if an admin makes a doc available to the team, every user can access it, regardless of the access controls on the Google Drive side.
Context Pinning is great when your task in your current file depends on information from other files. Try to pin only what you need. Pinning too much may slow down or negatively impact model performance.
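As a hypothetical illustration of the kind of file worth pinning: a compact file defining a contract that many other files depend on gives the model the interface it needs without pinning every implementation. (The names below are invented for illustration.)

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Contract implemented by all payment providers. A good pinning
    candidate: many files depend on this interface, not on any one
    implementation."""

    @abstractmethod
    def charge(self, amount_cents: int, token: str) -> str:
        """Charge a card token and return a transaction id."""

class StripeGateway(PaymentGateway):
    def charge(self, amount_cents: int, token: str) -> str:
        # Real code would call the payment provider's API here.
        return f"txn_{token}_{amount_cents}"
```

Pinning the interface file alone is usually enough for tasks in files that consume it.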
Here are some ideas for effective context pinning:
- .proto files, abstract class files, and config templates

When conversing with Windsurf Chat, you have various ways of leveraging codebase context, like @-mentions or custom guidelines. See the Chat page for more information.
Does Windsurf index my codebase? Yes. It also uses LLMs to perform retrieval-augmented generation (RAG) on your codebase using our own M-Query techniques.
Indexing performance and features vary based on your workflow and your Windsurf plan. For more information, please visit our context awareness page.