The problem with my Make.com setup
I use Make.com a lot. Client work, my own projects, experiments, tracking weird biohacking data, even baseball stuff. It started simple. Then, like every tool I enjoy, it turned into a zoo.
Too many scenarios. Too many modules. Too many runs where I had to ask: “Why did this even fire?”
A few months ago I did a hard audit of all my Make scenarios. Not a theoretical one. I looked at what ran weekly, what broke, what I ignored, and what quietly ate time and operations without giving anything back.
This is the result. The Make modules that actually earn their place in my stack every week. And the ones I stopped using because they were just clever complexity.
Ground rules for staying in my stack
I started by setting a few brutal rules. A module or pattern had to tick at least one of these boxes to survive:
- It removes a recurring manual task I was doing at least weekly.
- It surfaces information I actually act on within 24 hours.
- It is easy to debug at 23:30 when a client pings me.
If a module only existed because it was “cool” or because some YouTube tutorial said it was “powerful”, it went on the chopping block.
With that, here is what survived.
HTTP modules: boring, essential, everywhere
If Make.com removed the HTTP modules tomorrow, half my setup would die. HTTP is my default way to talk to anything that does not have a decent Make app, or has one but I do not trust it.
The ones I use every week:
- HTTP → Make a request. For hitting custom webhooks, internal tools, and small APIs I host on cheap servers.
- Webhook → Custom webhook. For receiving events from my own apps and client projects.
Example from my actual stack. I have a tiny Go API that aggregates baseball practice data, sleep metrics, and a couple of biomarker logs. Make calls this API via HTTP every morning at 06:00, does a bit of mapping, and posts a summary into a private Slack channel I call #body-log.
I used to run this through three different native integrations. It broke often. I was constantly re-authing things. Replaced everything with one HTTP module, one custom webhook, and a small script on my server. Zero regrets.
Module verdict: HTTP stays. Always.
Routers + filters: my actual “no-code if/else”
I tried to avoid routers for a while. They felt heavy and visually noisy. Then I realised I was trying to fake routers with duplicated scenarios and filters at the start. That was worse.
Now I use routers and filters in almost every scenario that survives longer than a week.
Typical weekly pattern. I have a master “events in my world” scenario. It listens to webhooks from:
- My portfolio contact form
- Client project error hooks
- New newsletter signups
- Some very opinionated monitoring pings I run on my infra
One router. Four paths. Each path has a very clear filter like “type = 'contact_form'” or “severity >= 'error'”. The outputs hit different Slack channels, different Notion databases, and in one case an SMS if it is client-critical.
The important part. I stopped doing clever nested routers. One level only. If I need more logic than that, it goes into my code, not into Make.
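For reference, a one-level router maps almost one-to-one onto a flat switch once it moves into code. This is a sketch with made-up event names and fields, not my production dispatcher:

```go
package main

import "fmt"

// Event mirrors the webhook payloads the master scenario receives.
// The field names are illustrative, not my actual schema.
type Event struct {
	Type     string
	Severity string
}

// route is the code analogue of a one-level Make router:
// one flat decision, one filter per path, one destination each.
func route(e Event) string {
	switch {
	case e.Type == "contact_form":
		return "#inbox"
	case e.Type == "error_hook" && e.Severity == "critical":
		return "#client-alerts"
	case e.Type == "newsletter_signup":
		return "#growth"
	case e.Type == "monitoring":
		return "#infra"
	default:
		return "#catch-all"
	}
}

func main() {
	fmt.Println(route(Event{Type: "contact_form"}))
}
```

The moment this switch wants to nest, that is my signal the logic belongs in a service, not in a scenario.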
Module verdict: Routers stay, but only one level deep.
Slack: my notification bus, not my database
Slack is my broadcast layer. Not my brain. Not my storage. If a scenario ends in Slack, I either need to react or at least see it once. Otherwise it should not go there.
Modules that run every week:
- Slack → Create a message. For system alerts, new leads, and health summaries.
- Slack → Create a scheduled message. For repeating nudges. Example: a weekly prompt to do a quick retrospective on projects and training.
One specific flow. When someone fills in a project inquiry on my site, Make receives the webhook, tags it based on budget and timeline, pushes it into a Notion CRM database, then posts a short summary to #inbox with a direct “Reply in under 12 hours” reminder tag.
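The tagging step is the only part with real logic in it. Something like this, where the budget thresholds and labels are invented for illustration, not my real qualification rules:

```go
package main

import "fmt"

// Inquiry is the parsed contact-form payload. The numbers below are
// placeholder thresholds, not how I actually qualify leads.
type Inquiry struct {
	Budget       int // in EUR
	TimelineDays int
}

// tag decides which label the Notion CRM entry and the Slack summary get.
func tag(i Inquiry) string {
	switch {
	case i.Budget >= 10000 && i.TimelineDays <= 30:
		return "hot"
	case i.Budget >= 10000:
		return "qualified"
	default:
		return "nurture"
	}
}

func main() {
	fmt.Println(tag(Inquiry{Budget: 15000, TimelineDays: 14}))
}
```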
I used to push every possible event into Slack. Deploy hooks. CI results. New followers. Half of that turned into noise. I killed an entire category of Slack notifications and did not miss them once.
Module verdict: Slack stays, but only for things I can act on quickly.
Notion: slow, slightly painful, still very useful
I have a love-hate relationship with Notion as an API target. It is not fast. It is sometimes weird with rate limits. But it sits in the middle of how I plan work and content, so I tolerate it.
Every week I use:
- Notion → Create a database item. For anything that should live longer than a notification.
- Notion → Update a database item. For syncing status from other tools.
One concrete example. For this blog, I draft in Markdown locally. When I push a new post into my Git repo, a small server-side hook hits Make via webhook. Make then creates or updates a Notion page in my “Content Log” database with:
- Title
- Slug
- Status (Draft, Scheduled, Live)
- URL
- Technical notes (stack, integrations, experiments)
I used to manage that inside Notion directly with templates and properties and a whole ceremony. Now Make is the bridge from my real workflow (git, editor, deploy) into my “remember this later” tool.
What I stopped doing. I used to sync every little metric into Notion. Newsletter subscriber counts. Basic analytics. It felt like a quantified self dashboard. I never looked at it. Now I only store artefacts and decisions, not vanity numbers.
Module verdict: Notion stays, but only for durable records, not dashboards.
Google Sheets: my scratchpad and debug console
I think Google Sheets is underrated as a Make companion. Not as a database, but as a scratchpad.
Weekly modules:
- Google Sheets → Add a row. For lightweight logging and temporary reports.
- Google Sheets → Get a cell / Get a row. For quick lookups when a real database would be overkill.
Example. During a recent experiment on sleep and training volume, I patched together data from several APIs that did not agree on timestamps or time zones. Before building a nice pipeline, I sent everything to a Google Sheet first. One flat sheet. One row per event. Timestamps, scores, tags.
I used that as a human-readable debug view while I iterated on the mapping logic in Make. Once I trusted the pipeline, I replaced the sheet with a proper HTTP call into my API. The sheet scenario is still there, just turned off. It is my “turn this on for debugging” escape hatch.
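The normalisation I eventually moved out of Make and into my API boils down to something like this. The two input formats are examples; my real sources had more:

```go
package main

import (
	"fmt"
	"time"
)

// normalize parses the timestamp formats the different APIs emitted
// and returns a single UTC RFC3339 string, which is what each sheet
// row got during the debug phase.
func normalize(raw string) (string, error) {
	layouts := []string{
		time.RFC3339,          // e.g. "2024-01-15T06:00:00+01:00"
		"2006-01-02 15:04:05", // naive local time, assumed UTC here
	}
	for _, layout := range layouts {
		if t, err := time.Parse(layout, raw); err == nil {
			return t.UTC().Format(time.RFC3339), nil
		}
	}
	return "", fmt.Errorf("unrecognised timestamp: %q", raw)
}

func main() {
	s, _ := normalize("2024-01-15T06:00:00+01:00")
	fmt.Println(s) // 2024-01-15T05:00:00Z
}
```

Having every row in one timezone is what made the sheet readable enough to spot the mapping bugs.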
Module verdict: Sheets stays as my visible logbook, not as core storage.
Text & data transformers: the unsung heroes
If a Make scenario feels complicated, it is usually because I forgot that Make has a lot of little transformer modules and functions that remove the need for code.
Every week I use:
- Text parser and regex-related functions. For pulling IDs out of URLs, cleaning labels, normalising tags.
- Formatter → Date and time. For making timestamps actually readable and consistent.
- Array → Aggregate / Iterator. For reshaping messy API responses.
One real scenario. My monitoring setup sends JSON payloads with long nested structures. Make receives them, then I use “Iterator” to walk through failing checks, clean each label with a Text function, and then aggregate them back into a compact Slack message.
I used to firehose the raw JSON into Slack. No one read it. Not even me. Now the message is human-sized and I only link to the raw payload when I actually need to debug.
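In code terms, that Iterator → Text function → Aggregator chain is just a filter-map-join. A sketch with a simplified, made-up payload shape:

```go
package main

import (
	"fmt"
	"strings"
)

// Check is one entry in the monitoring payload; this shape is a
// simplified stand-in for my real nested JSON.
type Check struct {
	Name   string
	Status string
}

// compact does in one function what the Iterator + Text + Aggregator
// chain does in Make: keep failing checks, clean the labels, join them.
func compact(checks []Check) string {
	var failing []string
	for _, c := range checks {
		if c.Status != "ok" {
			// Normalise labels like "api//prod " to "api/prod".
			name := strings.TrimSpace(strings.ReplaceAll(c.Name, "//", "/"))
			failing = append(failing, name)
		}
	}
	if len(failing) == 0 {
		return "all checks passing"
	}
	return fmt.Sprintf("%d failing: %s", len(failing), strings.Join(failing, ", "))
}

func main() {
	fmt.Println(compact([]Check{
		{Name: "api//prod ", Status: "down"},
		{Name: "db/replica", Status: "ok"},
	}))
}
```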
Module verdict: These stay. They make other modules simpler.
The modules I stopped using regularly
Now the fun part. The tools I actively moved away from. Some of them are great on paper, just not for how I work.
Gmail & email modules as the primary trigger
I used to have a bunch of scenarios that started with “New email in Gmail” or “Watch emails”. They tried to be smart. Auto-labeling. Auto-forwarding. Auto-creating tasks.
The problem. Email is already messy. Adding an automation layer that sometimes works and sometimes does nothing makes it worse. I also do not want to give every automation direct access to my entire inbox.
What I do instead. Email is almost always the output now, not the input. If something critical happens in a system that is not wired into my attention stack, I send myself a short email. That is it.
Module verdict: Gmail triggers are out. Email stays as occasional output only.
Make Data Store as a hidden database
Make has a built-in Data Store. On paper it is convenient. Throw key-value pairs in there. Persist things. Done.
In practice, I found it dangerous. It is invisible from the outside, hard to migrate, and a bit too easy to rely on for critical data. I had one scenario where I used Data Store to track daily task limits per client. When I rebuilt that scenario a year later, I forgot the Data Store even existed and almost wiped it.
Now I keep state either in my own API, in a real database, or in something like Notion if the stakes are low. Data should live somewhere I can back up, inspect, and move without Make in the middle.
Module verdict: Data Store is out for me, except for throwaway experiments.
Scheduling heavy logic directly inside Make
I went through a phase where I tried to make Make my central scheduler. Complex recurring jobs. Custom repeat rules. User-specific offsets. You can do it. I did. It was fragile.
Make is great at “When X happens, do Y”. It is less great at being a full-on job scheduler for dozens of different rules and calendars. Especially once you start mixing time zones.
Now I keep scheduling logic in my own services. They decide when to call Make, usually via webhook. Make just does the orchestration work.
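A stripped-down version of that pattern: my service owns the schedule, Make just receives a webhook. The webhook URL, env var, and job name here are placeholders, and the real scheduler evaluates per-client rules and time zones instead of a fixed ticker:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"strings"
	"time"
)

// payload builds the small JSON body Make receives; the shape is illustrative.
func payload(job string) string {
	return fmt.Sprintf(`{"job":%q}`, job)
}

// trigger pings a Make custom webhook; Make handles orchestration from there.
func trigger(webhookURL, job string) error {
	resp, err := http.Post(webhookURL, "application/json", strings.NewReader(payload(job)))
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

func main() {
	url := os.Getenv("MAKE_WEBHOOK_URL")
	if url == "" {
		// No webhook configured: just show the payload so the sketch
		// runs standalone.
		fmt.Println(payload("daily-report"))
		return
	}
	ticker := time.NewTicker(24 * time.Hour)
	defer ticker.Stop()
	for range ticker.C {
		if err := trigger(url, "daily-report"); err != nil {
			fmt.Fprintln(os.Stderr, "trigger failed:", err)
		}
	}
}
```

All the "when" logic lives in code I can test and version. Make only ever sees "do this now".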
Module verdict: Advanced scheduling inside Make is out. I trust my own code for that.
Over-using search modules as if Make was my search engine
Make has lots of “Search” modules. Search rows. Search pages. Search messages. I abused them early on.
The pattern looked like this. Scenario triggers. Then: “Search X by Y” to find a record. Then update it. Worked fine at small scale. Terrible once you grow a bit. Slow. Expensive. Painful to debug when matching goes wrong.
These days I try to design flows so I always pass IDs around explicitly. When something is created, I keep the identifier and store it in the other system right away. No more “guess which row” logic.
Module verdict: Search is a fallback now, not a default.
How I decide if a new module earns a place
Every time I add a new module or app into my Make world, I run it through a simple checklist.
- Can I explain this scenario in one sentence? If I cannot, it is already too complex.
- Can I debug it half-asleep? If an error email at midnight would confuse future me, I simplify it.
- Can I rebuild this without Make? If the answer is no, I am locking too much into one tool.
Make is fantastic glue. It is not my app platform. It is the thing that connects my apps, my scripts, and the buckets where I keep data.
The modules that survived this audit all share one thing. They do boring, reliable work. HTTP moves data. Routers branch logic. Slack shouts when something matters. Notion stores decisions. Sheets gives me a visual log. Everything else is optional.
If your Make account feels loud and untrustworthy, I would start there. Open your scenario list. Sort by last run. Keep what actually runs weekly and makes your life easier. Kill the rest. You will not miss as much as you think.