Make.com Error Handling Patterns That Actually Work

The worst Make.com scenarios are not the ones that crash. They are the ones that silently stop working. Here is how I design every scenario to make failure loud and obvious.

The real problem is not the scenarios that crash. It is the ones that quietly stop working while your Zapier-traumatized brain still trusts them.

I have had Make scenarios fail for ten days straight without throwing a visible error. Data drift. API changes. A new field in a webhook payload. Suddenly a key path does not exist anymore and a module happily sends empty data to the next step.

Nothing on fire. Just invisible damage.

After shipping too many brittle automations for myself and for clients, I ended up with a small set of defensive patterns that I now use everywhere.

This post is not a Make.com features tour. I will show you the three modules I bolt onto almost every scenario to prevent silent failures. You can copy this structure into your own workspace and adapt it.

My rule: every scenario must be noisy when it breaks

I treat every automation as a junior developer on the team.

If it messes up, I want it to over-communicate. Fast. Loud. With context.

So I assume three things will always happen eventually:

  • An external API changes something small and undocumented.
  • A record arrives that does not match my neat mental model.
  • Some service gets throttled or rate-limited while Make shrugs.

That mindset pushed me into a very simple structure. Almost every scenario I build now contains these three modules:

  • A centralised Log to store module.
  • A noisy Notify me module.
  • An opinionated Guardrail module that explicitly validates assumptions.

They are boring. Which is why they work.

Module 1: the logging layer I never ship to production without

The first piece of the puzzle is a single place where all scenarios can write structured logs.

No fancy observability stack. Just something that never lies and does not magically aggregate your data into marketing dashboards.

I have used three options for this logging sink:

  • A dedicated table in Airtable or Baserow.
  • A Google Sheet with a frozen header row.
  • A PostgreSQL table on Supabase for higher volume projects.

I still think Airtable is the easiest place to start. You can see the data. You can filter failures. And you can connect it back into Make for clean-up or replay.

The actual Make.com pattern

In Make I keep a private folder called _core. Inside that folder there is a scenario named Log event.

That scenario does only one thing. It takes a JSON payload via a webhook and writes it verbatim into my logging table. I do not spread logging logic across dozens of scenarios. I centralise it.

The structure looks like this:

  • Webhook module: log_event.
  • JSON module: convert raw body to JSON (I like to validate here).
  • Data store module or Airtable/Supabase module to insert the row.

The record schema is boring on purpose:

  • timestamp (ISO string)
  • scenario_name
  • event_type (info, warning, error)
  • context (short text, usually the source module)
  • payload (raw JSON blob as text)

I do not try to be clever here. I log the raw payload from the scenario that called it. The goal is simple. When something smells off, I can scan a single table and see every warning or error from every scenario, in order.
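To make the schema concrete, here is what one log record might look like, sketched in Python. The field names follow the schema above; the values are made up for illustration:

```python
import json
from datetime import datetime, timezone

# Build one log record following the schema above.
# Field names match the table columns; values are illustrative.
record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "scenario_name": "webflow_lead_sync",
    "event_type": "error",             # info | warning | error
    "context": "crm_create_deal",      # usually the source module
    "payload": json.dumps({"email": "user@example.com", "status": 500}),
}

print(json.dumps(record, indent=2))
```

Keeping `payload` as a raw JSON string in a text column means the table never fights you over types, and you can always parse it later when you need to replay a record.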

How I plug this into every scenario

Inside a normal project folder I almost always add a minimal module called HTTP > Make a request. That module calls the Log event webhook with a JSON payload.

I usually create a Make variable at the top called LOG_ENDPOINT. That keeps things portable across dev and prod environments.

Then, anywhere something non-trivial happens, I branch into a simple HTTP call:

  • If I catch an error.
  • If I hit an unexpected condition.
  • If I am about to transform user data in a destructive way.

Yes, it is extra modules. No, I do not care. Logs are cheaper than guessing.
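Outside Make's UI, that HTTP module is just a POST to the webhook. A minimal sketch of the equivalent call, assuming `LOG_ENDPOINT` holds the webhook URL (the URL here is hypothetical):

```python
import json
import urllib.request

# Hypothetical webhook URL; in Make this lives in the LOG_ENDPOINT variable.
LOG_ENDPOINT = "https://hook.make.com/your-log-event-webhook"

def log_event(scenario_name: str, event_type: str,
              context: str, payload: dict) -> urllib.request.Request:
    """Build the POST that the HTTP > Make a request module would send."""
    body = json.dumps({
        "scenario_name": scenario_name,
        "event_type": event_type,
        "context": context,
        "payload": json.dumps(payload),  # raw payload stored as text
    }).encode("utf-8")
    req = urllib.request.Request(
        LOG_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # To actually send it: urllib.request.urlopen(req)
    return req
```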

Concrete example. In a scenario that syncs new leads from a Webflow form into a CRM, I log on three occasions:

  • When the webhook fires, I log info with the entire form body.
  • When the CRM API rejects a lead, I log error with HTTP status and body.
  • When a duplicate email is detected, I log warning with the match key.

That is it. No heroics. When a client says “a lead went missing”, I go to the log table and filter by email. If it is not there, the problem is upstream from Make. If it is there with an error state, I have the full payload and the API response.

Module 2: notifications that feel like a teammate pinging you

Logs are for forensics. Notifications are for keeping your day intact.

Most people either do not set up alerts at all, or they set up one giant catch-all nightmare channel where every little hiccup screams at them.

I think both are bad. My rule is that every notification should feel like a colleague tapping you on the shoulder with context and a suggestion.

The notification scenario

Just like logging, I centralise notifications into a single scenario that other scenarios call.

The structure:

  • Webhook: notify.
  • JSON parse to read the payload.
  • Router to different channels (Slack, email, Telegram, whatever you use).

Payload schema is simple:

  • severity (info, warning, error, critical)
  • scenario_name
  • message (one line, human readable)
  • details_url (optional link to a record, log entry, or ticket)
  • payload (short JSON fragment, not the full body)

Then I route severity levels into different channels. This is where it starts to feel like an actual system, not an ad hoc set of tiny scripts.

Example setup that works well for me:

  • info: ignored by default, maybe logged only.
  • warning: posted to a low-noise Slack channel like #automation-notes.
  • error: DM to me on Slack or a dedicated Telegram chat.
  • critical: multiple channels, plus maybe an email with the full story.
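The routing table above is easy to express as plain data. A sketch, with channel names that are purely illustrative:

```python
# Map each severity to the channels that should receive it.
# Channel names are illustrative; adapt them to your workspace.
ROUTES = {
    "info":     [],                               # ignored by default
    "warning":  ["slack:#automation-notes"],      # low-noise channel
    "error":    ["slack:dm", "telegram:alerts"],  # direct ping
    "critical": ["slack:dm", "telegram:alerts", "email:full-story"],
}

def channels_for(severity: str) -> list:
    # Unknown severities fall through to critical:
    # better too loud than silent.
    return ROUTES.get(severity, ROUTES["critical"])
```

The fallback is deliberate. If a scenario ever sends a severity you have not mapped, you want maximum noise, not a dropped notification.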

How it plugs into scenarios

In a normal Make scenario I add a single HTTP request module called Notify, pointed at that central webhook.

I hook that into error handlers instead of relying only on Make's built-in "On error" logs.

Concrete pattern:

  • I add an Error handler to the important API modules, set to Break or Resume with explicit routing.
  • Inside that error branch, I call Notify with severity = error or critical and include the bundle[1].error object.
  • Optionally I also call the logging webhook with the same payload at severity error.

So if a CRM API returns 500, I get a DM that looks like this:

[error] Lead sync failed
Scenario: Webflow → Pipedrive
Message: POST /deals returned 500
Details: https://airtable.com/.../recXXXXXXXX
Payload: {"email":"user@example.com"}

That link points to the log table entry. I can see the full payload and response when I actually care. My chat app stays readable and I do not need to open Make's execution inspector on my phone.

Module 3: guardrails that enforce reality instead of hope

This is the bit most people skip.

Make lets you assemble very optimistic flows. You drop in a webhook, you assume the field exists, you assume the email is there, you assume the enum values will never change, and you map everything directly into later modules.

I have stopped trusting incoming data like that. Instead I build explicit guardrail modules that validate assumptions before anything important happens.

What a guardrail actually looks like

In practical terms a guardrail is usually a Router with two explicitly filtered paths (sometimes preceded by an Iterator or a Tools module to reshape the data):

  • Valid: data meets the conditions, continue as normal.
  • Invalid: data is weird, log it, notify me, and stop.

The core idea is that the Invalid path is explicit. It is not the default. It is not "we will see the error in the logs eventually".

Example: scenario that takes Stripe subscription events and syncs them into my own database.

I add a guardrail just after the webhook module:

  • Filter 1: event.type is one of ["checkout.session.completed","customer.subscription.updated"].
  • Filter 2: data.object.customer_email is not empty.
  • Filter 3: data.object.currency is "usd" or "eur" only.

If any of these fail, the bundle goes to the Invalid path. That path does three things:

  • Calls Log event with severity warning, context stripe_guardrail_failed.
  • Calls Notify with severity warning and a short description.
  • Intentionally stops. It does not try to be clever.

This saves me from quiet schema drift. If Stripe changes fields or we start accepting a new currency, I get a warning for the first event and can update the scenario. That is much better than discovering six weeks of broken MRR reports.
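Expressed as code, the three filters boil down to one small predicate. A sketch, following the field paths used in the filters above:

```python
ALLOWED_EVENTS = {"checkout.session.completed",
                  "customer.subscription.updated"}
ALLOWED_CURRENCIES = {"usd", "eur"}

def stripe_guardrail(event: dict) -> tuple:
    """Return (valid, reason). Mirrors Filters 1-3 from the scenario."""
    obj = event.get("data", {}).get("object", {})
    if event.get("type") not in ALLOWED_EVENTS:
        return False, "unexpected event type: %s" % event.get("type")
    if not obj.get("customer_email"):
        return False, "missing customer_email"
    if obj.get("currency") not in ALLOWED_CURRENCIES:
        return False, "unexpected currency: %s" % obj.get("currency")
    return True, "ok"
```

Returning a reason string alongside the boolean matters: that string becomes the `context` field in the log entry and the one-line message in the notification.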

Guardrails for AI heavy scenarios

Since I work a lot with AI flavored workflows, I have started to treat LLM responses as hostile input as well. They can be wrong. They can be weird. They can be empty.

So I now add a guardrail after every LLM call.

Example pattern:

  • LLM call returns JSON.
  • JSON parse module validates the structure.
  • Filter checks required keys: title, summary, tags.
  • Filter on simple sanity rules such as length(summary) > 150.

If the guardrail fails, I log and notify with severity warning, include the raw LLM output, and stop that branch.

I do not try to auto-retry multiple times inside the same run. That tends to create loops with unpredictable cost. If the model starts hallucinating wildly, I would rather see one strong warning and fix the prompt or system design.
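The LLM guardrail follows the same shape: parse, check the required keys, apply the sanity rule. A sketch, using the 150-character threshold from the example above:

```python
import json

REQUIRED_KEYS = {"title", "summary", "tags"}

def validate_llm_output(raw: str) -> tuple:
    """Treat model output as hostile input: parse, then verify structure."""
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return False, "not valid JSON"
    if not isinstance(data, dict):
        return False, "expected a JSON object"
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        return False, "missing keys: %s" % sorted(missing)
    if len(str(data.get("summary", ""))) <= 150:
        return False, "summary too short"
    return True, "ok"
```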

How this looks in a real scenario

To make this less abstract, here is the structure of an actual scenario I use:

Use case: collect newsletter signups from a custom form, enrich them with Clearbit, run them through an LLM summariser, and store everything.

  • Webhook: newsletter_signup.
  • Guardrail 1: filter on email present and consent = true.
  • If invalid: call Log event (warning), call Notify (warning), stop.
  • Call Clearbit API (company and role enrichment).
  • Error handler on Clearbit: on 4xx/5xx call Notify (error), call Log event (error), then continue with partial data.
  • Guardrail 2: ensure email still present, not unsubscribed already.
  • Call LLM to summarise the user and suggest tags.
  • Guardrail 3 on LLM response: JSON parse and sanity checks.
  • If invalid: Notify (warning) that summarisation failed, log, then store raw data without summary.
  • Create record in my database.
  • Final Log event with severity info for successful flow.

Failures surface like this:

  • Bad form data: Slack warning with a link to the raw payload.
  • Clearbit rate limit: error DM with suggested quick fix (pause scenario or change plan).
  • LLM meltdown: warning in Slack, still no broken downstream tools.

The main pattern to notice is that I do not let Make's default error handling decide what is important. I use my three modules to control the noise and make intentional calls about what should stop and what can be skipped.

Building your own "no silent failure" starter kit

If you want to copy this approach into your own workspace, here is the order I would set it up.

1. Build the logging scenario first

  • Create a simple table for logs.
  • Build a Log event scenario with one webhook input.
  • Define the schema you care about and keep it stable.
  • Expose the webhook URL as an environment variable or saved value.

2. Build the central notification scenario

  • Create a Notify scenario with a webhook.
  • Map severities into channels that match your tolerance for noise.
  • Keep the message format consistent across scenarios.

3. Start adding guardrails in your most important flow

  • Pick the scenario that would really hurt if it went wrong.
  • Add one guardrail at the start that validates the incoming data.
  • Add one guardrail after the riskiest external call or AI step.
  • Route invalid data into logs and notifications.

Do not try to retrofit every scenario at once. Start with one that runs daily and touches revenue or customers.

Once you get used to this pattern, you will feel uncomfortable shipping anything without it. Which is exactly the point.

Automations should not feel like magic tricks. They should feel like small, boring systems that fail loudly, with receipts.

That is where Make.com starts to feel like an actual platform and not just a visual scripting toy with pretty connectors.
