The Lawn Mower Metaphor: Getting Real Value from Windsurf Without Cutting the Flowers
In this blog
- The problem
- Constraints
- The lawn mower metaphor (and what the "flowers" are)
- A cautionary story: When the flowers got cut (Milvus implementation attempt)
- Technique #1: Use Windsurf to get the big picture before writing code
- Technique #2: Plan-first, stepwise execution while writing code
- Signature story: Implementing a client design system in ~one hour
- What I gained
- Practical playbook: Tips for using coding agents effectively
- Download
I've worked on proof-of-concept (PoC) projects where the schedule was short, the team was small and the starting context was thin.
Day 1 of those projects usually consisted of cloning an existing repo, seeing unfamiliar architecture decisions, grasping at requirements that were still in motion, and dreading an upcoming demo that would happen before I had enough time to digest the codebase.
With constraints like these, the goal isn't to become an expert in every layer of the system. The goal is to develop demonstrable features quickly, keep the codebase from falling apart, and build enough confidence to keep moving. That's where Windsurf, the AI agent-powered IDE (Integrated Development Environment), and its coding agent, Cascade, became useful tools for me.
While a coding agent like Cascade is viewed by many as an additional developer that can move fast on its own, I learned early on that it's not a replacement for my own judgment and expertise. It was better for me to think of this tool more like a self-propelled lawn mower:
- It can cover a lot of ground quickly.
- It can do work while I steer.
- But if I don't stay attentive, it can run into my garden and cut my flowers.
In this blog post, I'll share the workflow I settled into, along with stories that show both the benefit (shipping faster) and the risk (breaking things when I let it run too freely) of using agentic coding tools on short-term projects.
The problem
On fast-turnaround projects, you're constantly fighting a few forces at once:
- Unknown codebase + unclear requirements
- Frequent demos in a short amount of time
- Missing roles (e.g., no UX engineer, no QA engineer)
- Time pressure (not enough time for ideal processes like test-driven development)
If you've ever been in this situation, you know the traps: you can either slow down to build confidence and risk missing deadlines, or speed up and risk breaking the app more often while piling up technical debt. Neither is great.
In my experience, Windsurf can help you speed up while mitigating risk if you guide it.
Constraints
I'm grounding this in two proof-of-concept projects:
- A 9-week RAG chatbot PoC (Vue frontend + Python backend). I was the only full-time engineer, and I had 0 Vue experience going in.
- A 4-week, competitive bid lab management PoC with a "vibe-coded" starting point. Two full-time engineers, but still no designer or QA engineer.
The lawn mower metaphor (and what the "flowers" are)
If Windsurf Cascade is the mower, the "flowers" are the things you can't afford to destroy while you're moving quickly.
For me, the flowers were:
- Existing features
- Existing JavaScript and Python dependencies
- Clean code composition
- New features that satisfy requirements
So, what made me so protective of the flowers in the first place?
A cautionary story: When the flowers got cut (Milvus implementation attempt)
The RAG PoC was already configured to use pgvector when I joined the project. I had a bit of extra time and decided to explore the level of effort to implement Milvus.
I asked Cascade to estimate the level of effort. It generated an answer and then began implementing Milvus immediately! The biggest problem was that Cascade failed to install versions of the Milvus Python packages that were compatible with the app's existing packages, and it broke the application.
Luckily, all of this was done on a branch. Ultimately, the client didn't care which vector database we used, so I abandoned the attempt and moved on.
This is the clearest example I have of why guardrails matter. Cascade cut the "existing dependencies" flowers when it changed dependencies without my approval. And because I let it execute everything at once, Cascade also cut the "existing features" flowers, and the app wouldn't start.
Learning from those failures, I've found a couple of techniques to steer my mower that have given me much better outcomes.
Technique #1: Use Windsurf to get the big picture before writing code
In software engineering, the fastest path to implementing features isn't to start editing code immediately. It's to first develop an understanding of the codebase and its patterns.
When you drop into a repo you didn't create, you're not just learning syntax. You're learning the patterns a team decided to use, how the team decided to separate business logic from presentation, how layers of the application integrate and which tradeoffs are baked in.
On day one for both projects, instead of choosing my favorite LLM* and writing a prompt to generate code for feature X, I prompted Cascade to explain the architecture, identify the main implementation patterns, and point out where I could safely make changes for the first feature I wanted to implement. Once I had the big picture, I could drill down to specific screens, components, or backend routes without feeling like I was randomly clicking through folders.
* Note: In this post, I'm not focusing as much on how I decided to use any specific agentic coding model. To extend the metaphor, some models may be better for trimming grass next to a fence. Others may cover as much ground as a riding lawn mower. I'm focusing more on keeping as much control of the coding agent as possible, regardless of which model you choose.
Guardrail
The most important phrase for me in this phase was:
"Do not make any code changes yet."
Note: You could also use "Ask" or "Plan" mode in Cascade to prevent code changes from being made until you are ready. If you're anything like me, though, you might forget to switch away from Code mode before asking a question. For that reason, I use this statement as an extra guardrail.
Try these prompts
- "Explain the architecture and implementation patterns in the frontend and backend of this codebase. Start with the big picture, then show how data flows through [some single user action]. Do not make any code changes."
- "Walk me through the [general architectural pattern] strategy in this app. Highlight any frameworks it's using. What are the pros and cons of this strategy? Do not make any code changes."
Mini case: Understanding Vue in terms of React
On the RAG PoC, the frontend was developed with Vue. I was new to Vue; however, I'm very familiar with React.
Cascade and Claude Sonnet 4.5 helped by translating Vue concepts into a React-like mental model I already had:
- Single File Components (SFCs) in Vue vs React Components
- Composition API (Vue 3) ≈ React Hooks
- Template syntax in Vue vs JSX in React
- Vue event emitters vs React callbacks
That mapping didn't make me a Vue expert overnight, but it did help me better understand how the Vue code worked.
Results
Using Windsurf to gain a better picture of the entire codebase before writing code helped protect the "existing features" and "clean code composition" flowers by ensuring that I understood the system before Cascade or I touched it. My intact garden looked like a RAG PoC shipped on time, a working chatbot using the client's ingested documents and a cleanly abandoned Milvus implementation.
Technique #2: Plan-first, stepwise execution while writing code
Under time pressure, the most important thing in feature development isn't raw speed; it's controlled speed.
If you let a coding agent do "everything at once," it will often try to be helpful by making lots of changes across lots of files. That's how you end up with dependency mismatch errors, half-wired UI, broken builds, or changes that drift from what you needed.
A plan-first workflow is the fence that keeps the mower in bounds.
The workflow
- Describe the new feature in detail.
- Ask Windsurf to ask clarifying questions.
- When all the questions are answered, request a step-by-step implementation plan.
- Review the plan; ask the agent to explain its reasoning and suggest improvements.
- Execute step 1 of the implementation plan only.
- Validate*.
- Move to the next step.
* What does "Validation" look like?
- The app is not broken due to compilation errors.
- The main user path still works on hot reload, when applicable.
- The app's functionality still works after intended dependency changes and an app restart.
- No new errors appear in the console and/or server logs.
- No unintended dependency changes took place.
- No unintended styling changes occur in a user interface (UI).
- No unintended changes to API contracts occur.
- Database changes are captured in applicable scripts.
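Several of these checks can be automated. As one illustration, here's a minimal sketch of the "no unintended dependency changes" check: it diffs two requirements.txt snapshots, captured before and after an agent step, and reports anything added, removed, or re-pinned. The package names below are invented for the example.

```python
# Sketch: flag unintended dependency changes between two
# requirements.txt snapshots. Package names are illustrative.

def parse_requirements(text: str) -> dict[str, str]:
    """Map package name -> pinned version, ignoring comments and blanks."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        pins[name.lower()] = version
    return pins

def diff_requirements(before: str, after: str) -> dict[str, list[str]]:
    """Report packages added, removed, or re-pinned between snapshots."""
    old, new = parse_requirements(before), parse_requirements(after)
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(p for p in old.keys() & new.keys() if old[p] != new[p]),
    }

before = "fastapi==0.110.0\npgvector==0.2.5\n"
after = "fastapi==0.110.0\npymilvus==2.4.0\npgvector==0.2.5\n"
print(diff_requirements(before, after))
# {'added': ['pymilvus'], 'removed': [], 'changed': []}
```

Running a check like this after each agent step would have surfaced the Milvus dependency changes before the app broke.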
What else helped me in these steps
- Repeating "Do not make any code changes yet" until I was ready for the agent to make code changes
- Keeping the implementation plan in a separate file* so I can reference it manually or in Cascade chat using the CMD/CTRL + L keyboard shortcut
* Note: Cascade's "Plan" mode generates implementation plans and adds them to new markdown files by default.
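For reference, a plan file from this workflow might look something like the sketch below. The feature, steps, and validation notes are invented for illustration; the point is that each step is small and has its own validation gate.

```markdown
# Implementation Plan: Server status filters (hypothetical feature)

## Step 1: Add filter state to the server list view
- Validation: app compiles; main user path still works on hot reload

## Step 2: Wire filter controls to show/hide servers by status
- Validation: no new console errors; no unintended styling changes

## Step 3: Label statuses and colors in the filter controls
- Validation: side-by-side check against the requirement
```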
Mini case: Making UI/UX improvements without a designer
On the lab management PoC, no designer was staffed on the project. I'm not a designer either. I didn't know how much Cascade or any of the LLMs would be able to help, but they surprised me by acting as a structured second opinion.
For example, one view in the app displayed the status of all servers in a data center.
I attached a screenshot of the view to Cascade, along with the following prompt:
The UI of this app was vibe coded and I am looking to identify ways to make
the UI/UX experience more intuitive. Right now, I need ideas on best practices
for displaying many items in a single view that I could apply to this page.
Please generate ideas and don't make ANY code changes for right now.

Claude Sonnet's most valuable feedback was:
- Reduce visual clutter by implementing filters that show/hide servers based on status
- Increase contrast by slightly dimming servers with "good" status, so attention goes to "bad" or "neutral" statuses
- Increase user understanding by using the filters to clearly label statuses and colors
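The first suggestion can be sketched in a few lines. The server names, statuses, and CSS class names below are all made up; the idea is simply to filter by status and visually de-emphasize healthy items.

```python
# Sketch of the status-filter feedback. Servers, statuses, and
# class names are invented for illustration.

SERVERS = [
    {"name": "rack-01", "status": "good"},
    {"name": "rack-02", "status": "bad"},
    {"name": "rack-03", "status": "neutral"},
    {"name": "rack-04", "status": "good"},
]

def visible_servers(servers, active_filters):
    """Show only servers whose status is in the active filter set."""
    return [s for s in servers if s["status"] in active_filters]

def render_class(server):
    """Dim 'good' servers so attention goes to problems."""
    return "server--dimmed" if server["status"] == "good" else "server--highlight"

# Hide healthy servers to reduce visual clutter:
problems = visible_servers(SERVERS, {"bad", "neutral"})
print([s["name"] for s in problems])  # ['rack-02', 'rack-03']
```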
Results
Following a plan-first, stepwise execution workflow helped protect the "existing dependencies" and "new features that satisfy requirements" flowers by keeping changes small, deliberate and reversible.
Signature story: Implementing a client design system in ~one hour
Because the lab management app was a competitive bid project, I thought it would be a good idea to implement a design that was both fresh and familiar to the client. I found their design system docs on their publicly available developer website and decided to implement the design system in our PoC. If I had done this myself without assistance, I estimate it would have taken me at least a week.
With Cascade and Claude Opus's help, I implemented it in roughly one hour. I validated the work with a side-by-side comparison (my app on one side, the design system docs on the other). The strongest validation came during demos, when the client's feedback was that the app was beautiful. Creating customer delight under that much pressure gave me a ton of confidence... and a garden with its flowers fully intact!
What I gained
On Day 1 of both projects, I was dreading the first demo. By demo day, I had well-manicured lawns and beautiful gardens. I also:
- Shipped both PoCs on time with client-praised UX, despite missing design roles
- Avoided major rollbacks by catching breaks early through stepwise execution and validation
- Got excited about trying these techniques in a codebase with strong test coverage (another useful tool for keeping the lawn mower away from the flowers)
- Learned that while this process isn't perfect (I still made mistakes when I got impatient and skipped steps), the mistakes were smaller and easier to fix
Practical playbook: Tips for using coding agents effectively
Context ramp-up
- Try different coding models and settle on a few that you feel generate the most consistent quality code
- Ask for an architecture and folder tour
- Ask for data flow explanation
Execution
- Write a step-by-step plan
- Execute step-by-step
- Validate after every step
Safety guardrails
- Make small changes
- Keep working on branches and commit frequently
- Don't let agents install/change dependencies without checking existing compatibility
- Review diffs of requirements.txt, package.json or any other dependency tracking files before accepting
- Recreate virtual environments after dependency changes
- Smoke test immediately since dependency breaks can be silent until runtime.
And remember
- Use your expertise to guide your best judgment
- Don't treat the tool's output as authoritative
- Keep your lawn mower away from your flowers