Every few weeks I revisit Zed, the fastest code editor ever, and I'm pleasantly surprised by its newly released features. While it appears to be a fairly minimal IDE on the surface, the team has managed to implement powerful AI-assisted capabilities without compromising your ability to edit at full throttle.
This isn't meant to be a comprehensive Zed guide - they have excellent docs and an incredibly active community where I keep discovering new ways to upgrade my workflow. But the quickest way to start exploring its AI features is by hitting slash.
In place of the /workflow command, which was their most agentic feature, they've adopted a more robust /prompt command, similar to Cursor's rulesets. Prompts are modular, like isolated functions designed to instruct the LLM to do one thing and one thing only. You can group them and even reference them in other prompts.
For example, if you use the same tech stack across your frontend projects, you can create a prompt describing the libraries in package.json and their associated coding patterns. You can write a similar prompt for your backend repos, then combine the two into a single prompt that covers both projects.
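As a rough sketch, a frontend-stack prompt might read something like this - the libraries named here are placeholders, so swap in whatever actually lives in your package.json:

```
Frontend stack: React 18 + TypeScript, Tailwind for styling, TanStack Query for data fetching.
Prefer functional components and hooks, co-locate components with their tests,
and never introduce a new dependency without asking first.
```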
Here, I'm composing multiple prompts into my default prompt, which gets included in every chat.
If you'd rather have the LLM apply its recommended suggestions directly, the suggest-edit prompt can mimic that behavior until Zed ships its agentic capabilities.
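I won't reproduce the prompt here, but a rough approximation of that style - purely my paraphrase, not the actual prompt - looks like:

```
When you recommend a change, respond with the edited code itself, not a description of the edit.
Return the complete updated block so it can be applied directly, and note the file and function it belongs to.
```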
[todo: link ref to github prompts]
Beyond multi-line and multi-file editing, you can run inline assists against all of your selections, even when they don't match. Want to swap Stripe for another third-party library? It can try to find the comparable types. I'm sure you can cook up a better use case!
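As a contrived illustration, imagine selecting both of these call sites - they share no shape at all - and running one inline assist like "swap Stripe for our internal payments client" (that replacement client is entirely hypothetical):

```ts
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY ?? "");

async function examples() {
  // Selection 1: create a checkout session
  const session = await stripe.checkout.sessions.create({
    mode: "payment",
    line_items: [{ price: "price_123", quantity: 1 }],
    success_url: "https://example.com/success",
  });

  // Selection 2: issue a refund - shaped nothing like the first selection
  const refund = await stripe.refunds.create({ payment_intent: "pi_123" });

  return { session, refund };
}
```

The assist rewrites each selection against the replacement library's comparable types, even though the two selections have nothing in common.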
The Model Context Protocol is all the vibes today. Using Zed's universal slash directive, you can create your own extension to interact with a non-LLM service, interface, or API using natural language. More to come on this.
The postgres extension I've tried so far has been surprisingly useful - it checks our DB schema against live data. For instance, running the postgres-context-server and printing out my tables, without any further instructions, let the AI compare my defined schema against how real-life orders are actually stored from user interactions. That feedback helps me improve my schema using real data instead of mocks.
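The prompt that kicks things off doesn't need to be fancy. Something like the following - an illustrative paraphrase with hypothetical table names, not my exact wording - is enough once the context server has dumped the tables:

```
Here are my table definitions and a sample of recent rows. Compare the orders schema
against how orders are actually being stored: flag columns that are consistently null,
mistyped, or duplicating data, and suggest schema changes.
```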
The consensus is that Claude 3.5 Sonnet (v2) is great for implementation, while GPT-4o is better for planning and early drafts.
When tackling a new domain, like building my first set of server APIs or an SDK, I'm less familiar with clean code practices. Asking for best practices or probing alternate ways to perform the same task after the first response is helpful. Often this feels more like a stylistic choice than a matter of correctness. Still, it's a great way to learn.
Claude is very sycophantic. As it stands, it won't replace your mentors since it's very agreeable to your POV. Don't expect it to argue, but do probe it for variations.
Ask LLMs to keep responses succinct with only code snippets or brief explanations if you're familiar with the intended task. When refactoring a function without relying on a library or converting code between languages, you're not trying to learn something new. I'd suggest adding these requests at the end of your prompt.
Keep your explanations brief. Only show me code snippets when I ask for code changes, and print the full function rather than inserting placeholder comment blocks. Do not add user-facing comments.
The following aren't Zed-specific but are powerful for vibe coding or AI-assisted development.
It's good practice to create feature-focused pull requests so that changes stay isolated for end users - or API consumers, if you're not shipping a GUI. This lets you quickly revert commits if needed in production. You can also reference these commits via an LLM and ask it to extract features while resolving conflicts with downstream commits.
Like pull requests, writing per-feature tests helps you confidently add new features without introducing bugs or altering existing functionality.
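Here's a minimal sketch of what a per-feature test can look like in TypeScript with Vitest - the discount helper is a hypothetical stand-in for whatever your feature actually exposes:

```ts
import { describe, it, expect } from "vitest";
// Hypothetical feature module - stands in for whatever your new feature exposes.
import { applyDiscount } from "../src/checkout";

describe("feature: discount codes", () => {
  it("applies a percentage discount to the cart total", () => {
    expect(applyDiscount(100, { type: "percent", value: 10 })).toBe(90);
  });

  it("never produces a negative total", () => {
    expect(applyDiscount(5, { type: "fixed", value: 10 })).toBe(0);
  });
});
```

When the feature lands in its own PR with its own tests, a revert takes the tests with it and nothing else breaks.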
In another world, I might think twice about building custom scripts. At Coinbase, one of our core engineering principles was 1-2-automate: do something manually once, maybe twice, and by the third time the cost-benefit analysis favors automating it. That reusable process - essentially glue code - can be as valuable as your core code building blocks. It could cover operational tasks like reconciling payments, writing test mocks and fixtures, or automating Slack notifications.
Pre-production on Outpaint, I needed to frequently update DB schemas and refine relationships. I wrote an elaborate script across 4-5 prompts to tear down the DB, reconstruct it, and run migrations. Not only is it functional, but it's also clean and beautiful, complete with log files.
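I won't paste the whole thing, but the skeleton is roughly this - database name, migration paths, and log location are all placeholders, and I'm assuming the standard Postgres CLI tools are on your path:

```ts
import { execSync } from "node:child_process";
import { appendFileSync, mkdirSync } from "node:fs";

const DB = "app_dev";               // placeholder database name
const LOG = "./logs/db-reset.log";  // placeholder log path

mkdirSync("./logs", { recursive: true });

// Log each command before running it so there's a trail of every reset.
function run(cmd: string) {
  appendFileSync(LOG, `[${new Date().toISOString()}] ${cmd}\n`);
  execSync(cmd, { stdio: "inherit" });
}

// Tear down, reconstruct, and re-run migrations in one shot.
run(`dropdb --if-exists ${DB}`);
run(`createdb ${DB}`);
run(`psql -d ${DB} -f ./migrations/schema.sql`); // placeholder migration file
run(`psql -d ${DB} -f ./migrations/seed.sql`);   // placeholder seed data
```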
In a way, we've created more work for ourselves, but I'd argue it's more fun than repeating tear-downs four times daily - now I can justify the cost.
If you're exploring Zed, consider trying my Ariake theme. Drawing inspiration from traditional Japanese colors and ancient poetry, it offers a serene vibe through its thoughtfully curated monochromatic palette.
Arti Villa