From Make.com to Python: One Indie Founder’s Automation Graduation Story
April 8, 2026
For the first year of my micro-SaaS, Make.com was the engine room. It connected Stripe to Slack, copied new users into a Google Sheet, and pinged me when someone tripped a fraud heuristic. I could “see” the whole business in colored modules. Then growth happened—not hockey-stick growth, but enough customers that scenarios started colliding. I spent more time debugging visual pipelines than shipping features. Moving critical paths to Python was not ideology; it was triage.
This is the story of that graduation: what broke first in no-code, what Python fixed, what it did not, and how I would choose today if I were starting over.
The golden age of the canvas
Make.com (formerly Integromat) earns its fans honestly. Branching, iterators, error handlers, and data stores let you express surprisingly serious workflows without provisioning a server. For a solo founder, that is not a small thing—every hour not spent on Terraform is an hour you can spend talking to users.

My canvas looked responsible. Triggers fired on webhooks. Routers separated trials from paid accounts. Aggregators batched rows for a weekly email. On paper, it was clean. In production, it was a mural where every new sticker covered a crack. The cracks showed up as duplicate runs, partial writes, and scenarios that were “green” while the business was wrong.
When the canvas becomes a maze
The first pain was not performance; it was reasoning. A scenario with twenty modules is twenty places a vendor can change an API field. When Stripe’s payload shape shifted slightly, my router still passed data—but passed the wrong string into the CRM. The run succeeded. The customer record was nonsense. I only caught it because someone replied to an onboarding email with “why does it say I work at ‘undefined’?”
Debugging meant clicking through history, comparing bundles, and praying the error was reproducible. I missed stack traces. I missed unit tests. I missed writing `assert customer.company` and having the build fail before a human saw the bug.
The straw that bent toward code
The breaking point was a refund workflow. Trials could convert, churn, dispute, or upgrade mid-month. Make could model that, but the model was a nested tree of filters where each branch touched the same Stripe customer. Retries made duplicates. Partial failures left invoices “open” in accounting while Slack said “refund complete.” I needed a single place to say: given this event, transition state from A to B, or abort with a logged reason.
That is a state machine, not a pretty diagram. I prototyped the transition table in Python in an afternoon—plain functions, enums, and a SQLite database for idempotency keys. It was ugly code, but it was one file I could read top to bottom.
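To make the idea concrete, here is a minimal sketch of that afternoon prototype in the shape the text describes: plain enums, a transition table, and SQLite for idempotency keys. The state names, event strings, and `handle` function are illustrative assumptions, not the actual production code.

```python
import sqlite3
from enum import Enum

class SubState(Enum):
    TRIAL = "trial"
    PAID = "paid"
    REFUNDED = "refunded"
    CHURNED = "churned"

# (current_state, event_type) -> next_state; anything else aborts loudly.
TRANSITIONS = {
    (SubState.TRIAL, "invoice.paid"): SubState.PAID,
    (SubState.PAID, "charge.refunded"): SubState.REFUNDED,
    (SubState.PAID, "customer.subscription.deleted"): SubState.CHURNED,
}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE IF NOT EXISTS processed (event_id TEXT PRIMARY KEY)")

def handle(event_id: str, event_type: str, state: SubState) -> SubState:
    # Idempotency: a retried webhook with the same id becomes a no-op
    # instead of a duplicate refund.
    try:
        db.execute("INSERT INTO processed (event_id) VALUES (?)", (event_id,))
    except sqlite3.IntegrityError:
        return state  # already processed; state unchanged
    nxt = TRANSITIONS.get((state, event_type))
    if nxt is None:
        # The "abort with a logged reason" branch from the text.
        raise ValueError(f"illegal transition: {state.value} + {event_type}")
    return nxt
```

The whole business rule lives in one dict you can read top to bottom, which is exactly what the tangle of Make filters could not give me.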
Why not hire it out or buy a bigger Make tier?
I tried throwing money at the problem first—more operations, higher limits, better error emails. That reduced throttling but did not reduce ambiguity. A paid tier does not give you a compiler for your business rules. It gives you more room to be wrong at higher throughput.
I also considered contracting a small agency to “clean up” the scenarios. Quotes came back fine, but the maintenance model was still me clicking through vendor UIs when something broke at 11 p.m. Ownership mattered. If I could describe the workflow in plain English but not in code, I did not actually understand it—I was just fluent in the diagram.
What Python bought me
Tests. I could feed synthetic Stripe events into the handler and assert outcomes. That single capability caught more regressions than weeks of clicking through Make history.
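A test like that can be tiny. The sketch below feeds a synthetic Stripe-shaped payload into a handler and asserts the outcome; `apply_event`, `Customer`, and the payload fields are invented for illustration, but the "fail before a human sees 'undefined'" idea is the one from earlier.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    company: str = ""
    plan: str = "trial"

def apply_event(customer: Customer, event: dict) -> Customer:
    # Refuse to write nonsense: missing fields fail the build, not the CRM.
    company = event["data"]["object"].get("metadata", {}).get("company")
    assert company, "payload missing company; refusing to write 'undefined'"
    customer.company = company
    customer.plan = "paid"
    return customer

def test_checkout_completed_sets_company():
    fake_event = {
        "type": "checkout.session.completed",
        "data": {"object": {"metadata": {"company": "Acme"}}},
    }
    result = apply_event(Customer(), fake_event)
    assert result.company == "Acme"
    assert result.plan == "paid"

test_checkout_completed_sets_company()
```

When Stripe shifts a field, this fails in CI instead of in a customer's inbox.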
Version control. Scenarios in a GUI do not diff cleanly in Git. Python in a repo does. Code review became possible—even if the reviewer was future me.
Observability. Structured logs with correlation IDs replaced screenshots of scenario runs. When something failed, I knew which customer, which event id, and which line threw.
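One way (of many) to get those structured logs with the standard library alone: a JSON formatter that carries the correlation fields on each record. The field names `event_id` and `customer_id` are assumptions, not my actual schema.

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record):
        # Emit one JSON object per line; extras ride along on the record.
        payload = {
            "level": record.levelname,
            "msg": record.getMessage(),
            "event_id": getattr(record, "event_id", None),
            "customer_id": getattr(record, "customer_id", None),
        }
        return json.dumps(payload)

logger = logging.getLogger("webhooks")
logger.setLevel(logging.INFO)
stream = logging.StreamHandler(sys.stdout)
stream.setFormatter(JsonFormatter())
logger.addHandler(stream)

logger.info("refund applied",
            extra={"event_id": "evt_123", "customer_id": "cus_456"})
```

Grep for one event id and you get the whole story of that webhook, no screenshots required.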
Cost clarity. Spend moved from per-operation credits to a modest VPS bill I could predict. At my scale, money was not the main win; predictability was.

What Python did not fix
Shipping a script is not the same as shipping a system. I still needed process: migrations, backups, secrets rotation, and a deployment story that did not rely on “SSH and hope.” Python removed one class of problems and introduced another—dependency updates, security patches, and the temptation to over-engineer.
I also kept Make for the long tail. Low-risk notifications, marketing zaps, and experiments that touch nothing financial still live on the canvas. The goal was never purity; it was separation of concerns. Money-moving, identity-touching, legally-meaningful flows got code. Everything else could stay visual.
How I would decide today
If you are pre-revenue and validating, use no-code aggressively. Speed of iteration beats architectural pride. The moment a workflow (a) handles money, (b) enforces policy you could explain to a regulator, or (c) fails in ways that embarrass you in front of customers, start extracting that path into code you can test.
If you are solo, keep the codebase boring: one service, one database, one queue if you need it. Resist microservices cosplay. The indie shops I admire run a handful of well-tested paths, not a zoo of Lambdas nobody remembers how to deploy.
Migration without a big bang
I did not rewrite everything on a weekend. I put a thin HTTP endpoint in front of Python handlers and moved webhooks one by one. Make scenarios became dumb forwarders until I could delete them. Parallel running helped: old path and new path both listened, Python wrote results to a “shadow” table until outputs matched. Only then did I cut over.
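The shadow-table comparison is the heart of that parallel run. Here is a sketch of the mechanism, assuming invented table and column names: both paths record their outcome per event, and you cut over only when the mismatch query stays empty long enough to trust.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE legacy_results (event_id TEXT PRIMARY KEY, outcome TEXT)")
db.execute("CREATE TABLE shadow_results (event_id TEXT PRIMARY KEY, outcome TEXT)")

def record_legacy(event_id: str, outcome: str) -> None:
    # Written by the old Make path (via a forwarder) during parallel running.
    db.execute("INSERT OR REPLACE INTO legacy_results VALUES (?, ?)",
               (event_id, outcome))

def record_shadow(event_id: str, outcome: str) -> None:
    # Written by the new Python handler for the same events.
    db.execute("INSERT OR REPLACE INTO shadow_results VALUES (?, ?)",
               (event_id, outcome))

def mismatches():
    # Events the new path missed or handled differently from the old path.
    return db.execute("""
        SELECT l.event_id, l.outcome, s.outcome
        FROM legacy_results l
        LEFT JOIN shadow_results s USING (event_id)
        WHERE s.outcome IS NULL OR s.outcome != l.outcome
    """).fetchall()
```

A cron job that emails the output of `mismatches()` is enough ceremony; an empty report for a few weeks is your permission slip to delete the scenario.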
That sounds slower than a rewrite. It was. It also meant I slept.
Time accounting (the part nobody Instagrams)
Learning enough Python to be dangerous took evenings across several weeks—less than I had already burned re-running scenarios and apologizing to customers. The hidden tax was operational: setting up systemd units, reading logs without drowning in noise, and writing runbooks so I could hand off alerts without context switching back into panic mode.
If you are comparing calendars, compare honestly: “hours saved by no-code” versus “hours lost to opaque failures.” For me the crossover happened when customer count crossed the threshold where every mistake touched multiple people. Your threshold may differ. The exercise is still worth doing on paper.
Mistakes I made so you can skip them
- Over-abstracting too early. My first Python port tried to be a generic “automation framework.” It collapsed under its own cleverness. The version that shipped was boring: functions named after business events, not design patterns.
- Ignoring observability until prod. Print statements are fine for day one. By week two you want structured JSON logs and redacted secrets. Retrofitting logging is slower than starting with it.
- Deleting Make too fast on low-risk flows. I briefly moved a marketing nurture webhook into code because pride wanted one stack. It added maintenance with no reliability win. I moved it back until there was a real reason.
Indie hacking in 2026: where this fits
Tooling changed, incentives did not. Solo founders still win on speed of learning, still lose when complexity hides inside pretty UIs, and still have to decide when “good enough automation” stops being good enough. AI assistants lower the floor for writing small services; they do not remove the need for clear boundaries around money and identity.
If you are evaluating stacks in 2026, ask a blunt question: Which parts of my product need proofs? Those parts deserve code, tests, and version control. The rest can stay where you can see it—canvas, nodes, and all.
Emotional truth
Part of me missed the canvas. Dragging modules felt like progress in a way typing does not. Code felt colder—until the first time a test saved a launch weekend. Now when I open the repo, I feel the same relief I used to feel when a scenario ran green—except green means something I can prove again tomorrow.
There is also a social dimension: indie communities love sharing screenshots of clever stacks. Nobody posts photos of their pytest suite. I had to recalibrate what “shipping” looked like. Public progress became fewer tweets and more merged pull requests. The audience shrank; the reliability grew. That trade still feels correct when the alternative is explaining a billing bug to a stranger who trusted you with their credit card.
Takeaways
No-code automation carried my business until complexity outpaced visibility. Python did not solve every problem, but it restored invariants: tests, diffs, and logs for the workflows that must not lie. If your scenarios are hard to explain out loud, they will be harder to run silently. Graduate the scary parts first; leave the fun experiments on the canvas.
Graduation is not a trophy—it is a boundary. On one side, tools that help you move fast without ceremony. On the other, tools that help you stay honest when speed would otherwise break trust. Pick the boundary with intent, not with whatever stack looked coolest in last week’s thread.