People send us screenshots like this all the time:
"98% delivered, 24% opens, 3% clicks... are we good?"
The annoying truth is those numbers are proxies. Delivered is an SMTP acceptance. Opens are image loads. Clicks are URL requests. And a meaningful share of opens and clicks is automated.
Below is how we read these metrics in practice: what they mean, what they don't, and what we look at when something feels off.
TL;DR
- Delivery means "accepted by the receiving mail server," not "landed in the inbox."
- Opens are image loads. Real people can open without loading images, and machines can load images without a person reading anything.
- Clicks are closer to intent, but still noisy. Security scanners and link checkers can create "clicks" that weren't human.
- Mailbox providers track engagement you can't see (replying, moving to folders, marking spam/not spam, time spent reading).
- Use these metrics for trends and debugging, not absolute truth.
- If you want cleaner measurement, anchor it to first-party outcomes (replies, sign-ins, purchases, product events), not opens.
A quick cheat sheet (what the metric really is)
- Delivered: your mail server got a successful SMTP acceptance (usually a 250 response).
- Open: a device or service fetched your tracking pixel image URL.
- Click: a device or service requested your tracked redirect URL.
Everything else is inference, and it gets messy fast.
1) "Delivered" is an SMTP receipt, not an inbox guarantee
Most platforms count a message as "delivered" when the recipient's infrastructure says "OK, we'll take it" during the SMTP conversation. You'll often see something like:
250 2.0.0 OK: queued as ...
That's a real success signal, and you should care about it. But it doesn't tell you where the message went next.
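To make the distinction concrete, here's a minimal sketch of how a sending platform might bucket raw SMTP replies. The reply strings are illustrative examples (the 2xx/4xx/5xx ranges are standard SMTP semantics), and the function name is our own:

```python
def classify_smtp_reply(reply: str) -> str:
    """Map an SMTP reply line to a coarse delivery status."""
    code = int(reply.split()[0])
    if 200 <= code < 300:
        return "accepted"   # counted as "delivered" -- inbox placement unknown
    if 400 <= code < 500:
        return "deferred"   # temporary failure; the sender should retry
    if 500 <= code < 600:
        return "bounced"    # permanent failure
    return "unknown"

print(classify_smtp_reply("250 2.0.0 OK: queued as ABC123"))  # accepted
print(classify_smtp_reply("451 4.7.1 Try again later"))       # deferred
print(classify_smtp_reply("550 5.1.1 User unknown"))          # bounced
```

Note that "accepted" is the end of what the sender can observe directly; everything after that happens inside the provider.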
What delivery is good for
- Your message was accepted (it wasn't blocked at the front door).
- You're probably not failing authentication catastrophically (if you were, you'd often see rejects/deferrals).
- A provider isn't outright refusing you (which is what a true "can't send mail" crisis looks like).
What delivery can't tell you
- Inbox vs spam folder placement: the provider can accept the mail and still route it to spam.
- Post-acceptance filtering: some filtering happens after the SMTP accept, so you can occasionally get an asynchronous bounce later (though "accepted then bounced" isn't the common case for most modern consumer mail).
No provider (Bento included) can tell you, with certainty, that a specific message landed in the inbox for every recipient. "Delivered" just isn't that metric.
What to watch instead (for deliverability health)
If you're trying to answer "are we having a deliverability problem?", start with signals that are much closer to the truth:
- Bounce/deferral rate by provider (Gmail vs Microsoft vs Yahoo)
- Spam complaint rate (keep it low; spikes matter more than day-to-day noise)
- Sudden provider-specific drops (e.g., only Microsoft starts deferring)
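Breaking failures out by provider is mostly a grouping exercise. Here's a rough sketch against a made-up send log (the log format and function name are our own, not any platform's API):

```python
from collections import defaultdict

# Hypothetical send log: (recipient, status) pairs.
SEND_LOG = [
    ("a@gmail.com", "delivered"),
    ("b@gmail.com", "delivered"),
    ("c@outlook.com", "deferred"),
    ("d@outlook.com", "deferred"),
    ("e@yahoo.com", "delivered"),
]

def failure_rate_by_provider(log):
    """Share of sends that bounced or deferred, per recipient domain."""
    totals, failures = defaultdict(int), defaultdict(int)
    for address, status in log:
        provider = address.split("@")[1]
        totals[provider] += 1
        if status in ("bounced", "deferred"):
            failures[provider] += 1
    return {p: failures[p] / totals[p] for p in totals}

print(failure_rate_by_provider(SEND_LOG))
```

In this toy data, only outlook.com shows failures, which is exactly the provider-specific pattern worth investigating.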
When something is actually wrong, the most useful artifact is still the least glamorous: the bounce or deferral message.
Note: if you use Bento, we can pull SMTP responses for a specific send and show you exactly what's happening. Just ask.
2) Opens are image loads (and they're easy to misread)
An "open" is usually counted when a hidden tracking pixel (a tiny image) is requested from your tracking server.
So the metric is downstream of a lot of things you don't control:
- Whether the recipient's client loads images
- Whether the provider proxies/caches the image
- Whether a security system fetches remote content while scanning
- Whether the email gets clipped/truncated before the pixel is even reachable
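The mechanics are simple, which is part of the problem. A minimal sketch of the server side (in-memory storage and names are ours; the GIF bytes are a standard 1x1 transparent GIF):

```python
import base64

# Standard 1x1 transparent GIF, the classic tracking pixel payload.
PIXEL_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)
OPENS = []  # in-memory stand-in for a real datastore

def serve_pixel(recipient_id: str) -> bytes:
    """Record an 'open' and return the image bytes."""
    # Whoever fetched this URL -- a person, a privacy proxy, or a
    # security scanner -- gets counted identically.
    OPENS.append(recipient_id)
    return PIXEL_GIF

body = serve_pixel("user-42")
print(len(body), OPENS)
```

The key takeaway: the server only ever sees "something requested this image," with no way to distinguish a human from a machine at this layer.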
Why opens get overcounted
Opens can be inflated by automation. Common patterns:
- Privacy prefetching / proxying: Apple Mail's Mail Privacy Protection (MPP) is the big one. It can fetch your pixel even if the person never reads the email.
- Security scanning: filters may fetch remote content to classify messages.
- Provider caching: the "open" might be a proxy/cache fetch more than a person reading.
Depending on where scanning happens, you can even see weird sequences like an "open" timestamp that shows up before the message was successfully delivered.
Why opens get undercounted
You can miss real readership for totally normal reasons:
- Images disabled by default (the user reads the text without loading images).
- Plain-text viewing (no pixel to load).
- Offline reading (email opened on a plane/train, images never load).
- Long-message clipping (Gmail is famous for clipping HTML around ~102KB; if your pixel is at the bottom, it can get cut off).
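The clipping case is easy to check before sending. A quick sanity check, using the ~102KB figure mentioned above as an approximate threshold (the exact cutoff is not documented by Gmail):

```python
GMAIL_CLIP_BYTES = 102 * 1024  # approximate threshold, per the ~102KB figure

def may_be_clipped(html: str) -> bool:
    """True if the HTML is large enough that Gmail may clip it,
    cutting off a tracking pixel placed near the bottom."""
    return len(html.encode("utf-8")) > GMAIL_CLIP_BYTES

print(may_be_clipped("<html>" + "x" * 200_000 + "</html>"))  # True
print(may_be_clipped("<html>short newsletter</html>"))       # False
```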
What opens are still good for
Opens are still useful. Just use them for the right job:
- Trend detection: "opens at Gmail dropped week-over-week" can be a clue.
- Relative testing: subject line A vs B, send time tests, same audience, same provider mix.
- Triage: if opens fall off a cliff (not down 2%, but down to single digits), it's worth investigating.
Opens are not a clean engagement metric anymore. If you build automation off opens ("if opened, do X"), just know you're building on a squishy signal.
3) Clicks are closer to intent (but scanners can click too)
Click tracking typically works by turning your links into tracked redirect URLs. When someone (or something) requests that URL, your tracking server records it, then redirects to the real destination.
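A minimal sketch of that redirect flow, with in-memory storage and a made-up token (a real system would persist clicks and validate tokens):

```python
import time

# Each link in the email is rewritten to a token-based tracking URL.
LINKS = {"abc123": "https://example.com/pricing"}
CLICKS = []  # (token, timestamp) -- in-memory stand-in for a datastore

def handle_redirect(token: str) -> str:
    """Record the click, then return the real destination to redirect to."""
    CLICKS.append((token, time.time()))
    return LINKS[token]

destination = handle_redirect("abc123")
print(destination)   # https://example.com/pricing
print(len(CLICKS))   # 1 -- but human or scanner? Can't tell from here.
```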
Clicks are a better indicator of interest than opens. But clicks can also be created by automation, especially in corporate environments.
Bento does block some of this, but not all.
Why you might see scanner clicks
Some systems follow links to protect users:
- Security link scanners (corporate gateways, endpoint protection)
- Safe browsing checks that fetch pages to evaluate risk
That can produce click spikes that look real in a dashboard but don't convert, don't scroll, and don't behave like humans. If you have a Microsoft-heavy audience, this can show up as a weird spike in "clicks" that never match real sessions on the site.
How to make click data more useful
Treat clicks as a starting point. The real question is what happens after the click.
- Look at time-to-click: clicks that happen immediately at delivery time are more likely to be automated than clicks that happen minutes or hours later.
- Track what happens after the click: did the visitor log in, purchase, or trigger a product event?
- Use UTMs + first-party events: clicks are a bridge; outcomes are the destination.
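The time-to-click point can be turned into a rough filter. A sketch with a made-up threshold (scanners tend to follow links within seconds of delivery; tune the window against your own data):

```python
LIKELY_BOT_WINDOW_SECONDS = 5  # assumption, not a universal constant

def likely_automated(delivered_at: float, clicked_at: float) -> bool:
    """Flag clicks that land suspiciously soon after delivery."""
    return (clicked_at - delivered_at) < LIKELY_BOT_WINDOW_SECONDS

print(likely_automated(delivered_at=1000.0, clicked_at=1001.5))  # True
print(likely_automated(delivered_at=1000.0, clicked_at=1240.0))  # False
```

This is a heuristic, not ground truth; pair it with downstream outcomes before discounting any click.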
4) "Engagement" to mailbox providers is not your dashboard "engagement"
When Gmail, Yahoo, or Microsoft decide whether to place future emails in the inbox, they're not basing that decision on your open rate chart.
They see signals you don't, like:
- Replies and forwards
- Reading behavior (time spent, scrolling)
- Moving a message out of spam (strong positive)
- Marking as spam (strong negative)
- Filing into folders, starring/flagging, deleting immediately
This is why "our opens are low" and "we're going to spam" are not the same statement.
You can have an audience that reads everything with images off (low opens) and still have great inbox placement. You can also have high opens from automated fetching and still be underperforming with real humans.
5) How we use these metrics in practice
- Delivery: break out bounces/deferrals by provider and read the actual SMTP messages before you change anything.
- Opens: ignore small movement. Look for big shifts against your baseline, especially by provider.
- Clicks: assume some scanner noise, then validate with first-party outcomes (logins, purchases, activations).
- Always watch: spam complaints, hard bounces, and sustained sending to unengaged segments. Those can hurt inbox placement fast if they keep happening.
Bots, scanners, and privacy: don't overreact
It's tempting to see automated opens/clicks and assume "these contacts aren't real." Usually that's the wrong conclusion.
Security systems and privacy features often interact with emails that are absolutely intended for real humans, and those humans may still read and value your mail. On the flip side, some audiences (especially security-conscious ones) routinely block images, so they'll look "unengaged" in open-based reporting even when they're not.
The better approach is to treat automation as measurement noise, then anchor your decisions in outcomes that matter: replies, conversions, product usage, and long-term list health.
If you want cleaner measurement, track outcomes (not pixels)
If you're currently optimizing for opens, consider shifting the question from "did they open?" to "did they do the thing we wanted?"
That might be:
- Started onboarding
- Activated a feature
- Booked a demo
- Made a purchase
- Replied with a question
Those outcomes are harder to fake, and they map directly to business value.
Further reading
If you want a deeper deliverability-native breakdown of how these metrics behave in the wild, this piece is worth your time:
- Deliveries and Opens and Clicks (Word to the Wise)
If you're tired of guessing what your email metrics mean, Bento helps you zoom in on the signals that actually move inbox placement (bounces, complaints, provider-specific behavior) and build automation off first-party events, not just opens.
Learn more at bentonow.com or start with a quick setup at bentonow.com/signup.
Enjoyed this article?
Get more email marketing tips delivered to your inbox. Join 4,000+ marketers.
No spam, unsubscribe anytime.