At some point, every startup routes its production alerts into Slack. It makes sense at the time. The team is already there. The integration takes ten minutes. The alert fires and lands in a channel everyone can see. It feels like a complete solution until the day something breaks at 9pm and the alert sits unread between a GitHub PR notification and someone sharing a meme in the same channel.
This is the conversation I find myself having more often than almost any other with engineering leaders at growing startups. Not about monitoring. Not about runbooks. About why Slack, the tool the entire team lives in, is structurally the wrong place to trust with the one notification that cannot wait.
The problem is not that your engineers ignore alerts. It is that Slack is doing exactly what it was designed to do.
Slack is an attention equaliser. That is a feature everywhere except incident response.
Slack was built to keep teams connected and informed. It does this by treating every message with roughly equal weight. A deployment notification, a customer complaint forwarded from support, a PR review request, a critical production alert, and a thread about where to book the team lunch all arrive in the same interface, with the same visual language, competing for the same attention. The platform is very good at this. Everything gets seen eventually.
Eventually is the word that matters. By month three of routing alerts into Slack, the alert channel is usually muted. By month six, someone creates a separate real-alerts channel because the original is useless. This is not a configuration failure. It is the predictable result of asking a conversation tool to carry the urgency of an incident management system. Slack levels attention. Incident response requires a channel that spikes it.
The numbers behind what feels like an attitude problem
The average Slack user sends 92 messages daily and checks the platform 13 times a day. That sounds like high engagement until you realise that checking Slack 13 times a day across 30 channels means each check is roughly 90 seconds of attention split across everything that arrived since the last check. Knowledge workers receive an average of 65 notifications a day, and a critical alert arriving at 11pm is one notification among however many others landed in that window.
Research shows engineering teams receive over 2,000 alerts weekly, with only 3% requiring immediate action. After weeks of alerts that turned out to be noise, the engineer's instinct when a notification arrives is not urgency. It is a low-level assumption that this is probably another one that does not need them right now. That assumption is correct 97% of the time. The 3% where it is wrong is where the cost lands.
Why the Slack alert channel degrades in a predictable sequence
Stage one: the channel works because it is new
When alert routing first goes into Slack, the team pays attention. Every notification is novel. The channel has low volume. Engineers check it because they are curious about what it surfaces. This phase lasts weeks, sometimes a couple of months, depending on alert volume and how much noise gets routed in alongside the signal.
Stage two: the noise builds and trust erodes
As more integrations get added, as monitoring thresholds get left at their defaults, as the team adds more services, the channel fills up. The ratio of actionable alerts to noise drops. Teams add more alert rules after every incident where monitoring missed something, and those rules rarely get removed because nobody wants to be responsible for the next miss. The channel becomes a stream that engineers scan rather than read. Critical alerts still land there. They are just harder to find.
Stage three: the channel gets muted or abandoned
The engineer who is technically on-call has Slack open but the alerts channel muted or on low priority. They will check it when they get around to it. The monitoring system considers its job done. The production issue that needs attention in the next ten minutes is sitting unread. According to Splunk's State of Observability 2025, 73% of organisations have experienced outages directly linked to ignored or suppressed alerts, a number that has stayed stubbornly consistent despite years of better tooling. The tooling improved. The channel architecture did not.
The channel you use for everything cannot also be the channel you trust for the one thing that cannot wait. Those two requirements are in direct conflict, and Slack optimises for the first one.
What a channel designed for urgency does differently
The attention architecture has to be different by design
The reason WhatsApp works as an incident escalation channel for Indian engineering teams is not familiarity, though that is part of it. It is that WhatsApp operates on a different attention model. A message arriving on WhatsApp, particularly a direct message or a message in a small, low-volume group, arrives with a different psychological weight than the fiftieth Slack notification of the day. The reflex to check is trained differently. The signal-to-noise ratio, for most engineers in India, is better there than in any Slack channel they are part of professionally.
This is not a cultural observation. It is an attention architecture observation. The channel that carries the critical alert needs to be one where the engineer's default response to a notification is to look immediately, not eventually. That channel needs to be separate from the high-volume conversational noise of the team's day-to-day work. And it needs to escalate automatically when the first person does not respond, which Slack does not do on its own.
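To make the separation concrete, here is a minimal routing sketch in Python. Everything named in it is hypothetical (`post_to_slack`, `page_engineer`, the severity taxonomy); the point is only the shape: Slack stays the visibility channel, and a separate low-noise channel is reserved for the alerts that cannot wait.

```python
# A minimal routing sketch. post_to_slack and page_engineer are
# hypothetical stand-ins for whatever integrations your stack uses.

PAGE_SEVERITIES = {"critical", "high"}  # assumption: your own severity taxonomy


def post_to_slack(channel: str, alert: dict) -> None:
    # Stub: in a real system, this would call the Slack API.
    print(f"[slack {channel}] {alert['summary']}")


def page_engineer(alert: dict) -> None:
    # Stub: in a real system, this would hit the high-attention channel
    # (WhatsApp, a phone call, a dedicated paging app).
    print(f"[page on-call] {alert['summary']}")


def route_alert(alert: dict) -> None:
    # Everything stays visible in Slack for the team's shared context.
    post_to_slack("#alerts", alert)
    # Only the small fraction that genuinely cannot wait touches the
    # separate channel, so that channel's signal stays trustworthy.
    if alert.get("severity") in PAGE_SEVERITIES:
        page_engineer(alert)


route_alert({"summary": "p99 latency breach on checkout", "severity": "critical"})
```

The design choice that matters is the asymmetry: Slack receives everything, the paging channel receives almost nothing, and that scarcity is what keeps its notifications meaningful.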
The three things a Slack-first alerting system cannot provide
The first is guaranteed acknowledgement. Slack can show a message as delivered. It cannot confirm that a human has seen it, taken ownership, and begun working on it. Those are different events, and the gap between them is where incidents become expensive. The second is automatic escalation on silence. If the on-call engineer does not respond in Slack, nothing automatically happens. Someone has to notice the silence and decide to act. At 2am on a Sunday, that dependency is a real failure risk. The third is a channel where the engineer's attention is reliably concentrated outside working hours. Most engineers, after a full day in Slack, are not monitoring it with the same reflexes they bring to a personal message channel at 11pm.
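To make the first two gaps concrete, here is a minimal sketch of an acknowledgement-and-escalation loop, in the same hypothetical style as the routing sketch above. The `Notifier` callable, the `is_acknowledged` hook, and the chain itself are all assumptions standing in for a real paging layer; the point is that delivery and ownership are tracked as separate events, and silence automatically moves the alert to the next person.

```python
import time
from dataclasses import dataclass
from typing import Callable

# Hypothetical hooks: how a page is sent, and how ownership is detected.
Notifier = Callable[[str, str], None]  # (recipient, message) -> None
AckCheck = Callable[[], bool]          # has any human taken ownership?


@dataclass
class EscalationStep:
    recipient: str       # who gets paged at this step
    ack_timeout_s: int   # how long silence is tolerated before escalating


def run_escalation(alert: str, chain: list[EscalationStep],
                   notify: Notifier, is_acknowledged: AckCheck) -> bool:
    """Walk the chain until a human acknowledges or the chain is exhausted.

    Delivery alone never counts: only an explicit acknowledgement
    (a reply, a button tap) stops the escalation.
    """
    for step in chain:
        notify(step.recipient, alert)
        deadline = time.monotonic() + step.ack_timeout_s
        while time.monotonic() < deadline:
            if is_acknowledged():
                return True   # a human owns the incident; stop escalating
            time.sleep(5)     # poll interval; a real system would be event-driven
    return False              # chain exhausted: fail loudly, not silently


# Example chain: the primary gets five minutes of silence, then the
# secondary, then the engineering lead with a longer window.
chain = [EscalationStep("on-call-primary", 300),
         EscalationStep("on-call-secondary", 300),
         EscalationStep("engineering-lead", 600)]
```

Notice what the return value forces: if nobody acknowledges, the system has to do something louder, which is exactly the decision Slack leaves to whoever happens to notice the silence.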
These are not problems that better Slack configuration solves. They are structural limitations of using a conversation platform as an incident response system. The teams that figure this out tend to do it after an incident where all three failures showed up at once, which is an expensive way to learn something that could have been addressed in a calm afternoon.
This is the gap Shankh was built around. Not to replace the monitoring stack or the communication tools the team already uses, but to sit between the alert and the engineer as a layer that is specifically designed to guarantee a human responds, through the channel where that response is most likely to happen, with automatic escalation when it does not. The Slack channel continues to exist. It just stops being the place where incident ownership is decided.
Slack is genuinely good at what it was built for: keeping distributed teams connected, moving conversations out of email, making information visible across an organisation. None of those things are incident response. The mistake most startups make is not choosing Slack; it is never revisiting whether Slack is the right tool for the specific job of ensuring that a production alert gets acknowledged by a specific human within a specific window. That question has a different answer from the question of where the team should collaborate, and treating them as the same question is how critical alerts end up sitting unread between a meme and a pull request review.

