The perimeter tools most MSPs deploy are necessary — but they don't protect the data once someone is inside. Word documents, spreadsheets, payroll exports, old deal rooms, AI prompts — none of it is inventoried, valued, or protected at the file level. That gap is where the real financial exposure lives. The pressure on MSPs to speak to executive-level risk is only growing — and the advisors who learn to frame cybersecurity in financial terms will own the most durable client relationships. Return on Mitigation is the framework for that conversation: what would a breach actually cost, which data creates the greatest exposure, and what is the measurable return of reducing it — in dollars, not checklists.
James Oliverio built and sold an MSP, spent decades in Wall Street CIO roles, earned a Harvard certification in information risk management, and is now Founder and CEO of ideaBOX. What makes his perspective valuable is not the technology — it is the question he leads with: what data do we hold that, if stolen, would change the trajectory of this business? For MSP owners, the ability to lead with that question — and answer it with a quantified number, a financial model, and a remediation plan — is the difference between being a vendor in a compliance conversation and being an advisor in a business-level security strategy. That is the positioning shift this episode is about.
Every organization has a visible security posture and a hidden data liability. The visible posture — firewalls, endpoint protection, compliance audits, backup systems — is what gets measured, budgeted, and reported. The hidden liability is the unstructured data that flows in and out every day: compensation spreadsheets emailed to the wrong distribution list, deal room files from acquisitions that closed five years ago but are still synced to a personal Dropbox, patient records analyzed in Excel by someone flagging billing anomalies, engineering specifications shared with a third-party manufacturer under an NDA that nobody audited. These files do not appear on a balance sheet, but they carry a quantifiable financial exposure. Every sensitive record has a market value — a date of birth is worth roughly $10 on the dark web; a credit card record can exceed $100 — and when those records exist inside an organization's environment without protection or even awareness, the exposure compounds silently. Return on Mitigation is the discipline of calculating that number, connecting it to the controls that do or do not exist, and presenting that analysis to the people who have the authority to act on it: the C-suite and the board.
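The arithmetic behind that exposure figure is simple enough to sketch. The snippet below uses the per-record values cited above (roughly $10 for a date of birth, $100 or more for a card record); the record counts, and the idea of holding values in a lookup table, are hypothetical illustrations, not part of any particular assessment tool.

```python
# Illustrative data-exposure estimate using the per-record figures
# cited in the article. Record counts below are hypothetical.
DARK_WEB_VALUE_USD = {
    "date_of_birth": 10,   # ~$10 per record
    "credit_card": 100,    # can exceed $100 per record
}

def estimate_exposure(record_counts: dict[str, int]) -> int:
    """Sum the market value of every sensitive record held."""
    return sum(
        count * DARK_WEB_VALUE_USD[record_type]
        for record_type, count in record_counts.items()
    )

# A small firm holding 20,000 birth dates and 3,000 card records:
exposure = estimate_exposure({"date_of_birth": 20_000, "credit_card": 3_000})
print(f"${exposure:,}")  # $500,000
```

Even with conservative unit values, the number compounds quickly — which is why the paragraph above describes the exposure as growing silently as unprotected records accumulate.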
Sony. Target. Equifax. The NSA. Each of those organizations had sophisticated security stacks at the time of their breaches — firewalls, intrusion detection, access controls, and in some cases dedicated security operations centers. What they did not have was file-level protection on the data that was ultimately stolen. This is the structural failure that the perimeter model cannot address: once an attacker is inside the network — through phishing, a compromised third-party vendor, a misconfigured access control, or an insider — unprotected files are available and readable. Traditional data loss prevention tools help detect and flag anomalous data movement, but they rarely remediate. They produce a case file. They generate an alert. They confirm, after the fact, that something went wrong. Data Security Posture Management takes a fundamentally different approach: the data itself is protected at the file level, with encryption applied automatically based on sensitivity classification. The stolen file is unreadable. The breach still occurred, but the liability did not materialize because the data could not be accessed. That distinction — between detecting a breach and surviving one — is what the perimeter conversation has never been equipped to provide.
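The classify-then-protect decision at the heart of that model can be shown with a toy classifier. This is a rough sketch of the general approach, not Actifile's implementation: the regex patterns are illustrative assumptions, and a real DSPM product would apply actual file-level encryption rather than return a boolean.

```python
import re

# Toy sensitivity classifier. A real DSPM tool scans file contents and
# encrypts automatically; this sketch only shows the decision logic.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(text: str) -> set[str]:
    """Return the sensitive data types detected in a file's text."""
    return {name for name, pat in PATTERNS.items() if pat.search(text)}

def requires_encryption(text: str) -> bool:
    """Protect at the file level whenever any sensitive type is found."""
    return bool(classify(text))

print(requires_encryption("Employee SSN: 123-45-6789"))   # True
print(requires_encryption("Q3 roadmap discussion notes"))  # False
```

The point of the pattern is the one the paragraph makes: the protection travels with the file, so a stolen copy is unreadable regardless of how the attacker got in.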
AI adoption is accelerating faster than AI governance, and the gap between them is creating a new category of data liability that most organizations have not begun to measure. Employees who are genuinely trying to be more productive are feeding ChatGPT, Grok, and Microsoft Copilot with company data — patient records, internal financial models, proprietary engineering specifications, HR data — without any awareness that the prompt may conflict with their organization's NDAs, data governance policies, or compliance obligations. For a manufacturing company protecting patented product specifications, for a healthcare organization managing ePHI, or for a financial services firm handling client account information, this is not a hypothetical risk. It is a current exposure with no visibility and no control. The organizations that discover this exposure through a proactive data risk assessment — rather than through a compliance investigation or a breach — are in a structurally different position. And the MSPs and security advisors who identify it and provide a mechanism to monitor, classify, and restrict AI data usage are entering a conversation that their competitors are not yet equipped to have.
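A minimal version of "monitor, classify, and restrict AI data usage" is an outbound-prompt guard that redacts obvious sensitive tokens before a prompt leaves for an external service. Everything below — the patterns, the labels, and the `guard_prompt` helper — is a hypothetical sketch of the control category, not any vendor's mechanism.

```python
import re

# Hypothetical outbound-prompt guard: redact sensitive tokens before a
# prompt reaches an external AI service. Patterns are illustrative and
# far from a complete DLP control.
SENSITIVE = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def guard_prompt(prompt: str) -> tuple[str, bool]:
    """Return (sanitized prompt, whether anything was redacted)."""
    redacted = False
    for pattern, label in SENSITIVE:
        prompt, n = pattern.subn(label, prompt)
        redacted = redacted or n > 0
    return prompt, redacted

safe, flagged = guard_prompt("Summarize payroll for 123-45-6789 (jane@corp.com)")
print(flagged)  # True
print(safe)
```

In practice the flag would also feed an audit log, which is what turns an invisible exposure into something an advisor can report on.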
Cyber insurance has become a standard line item on most companies' risk management budgets, and many owners operate under the assumption that the policy they are paying for translates directly into financial protection when a breach occurs. The underwriting data does not support that assumption. In 2024, a substantial share of cyber insurance claims were denied — not because the breach did not happen, but because the organization had materially misrepresented its risk profile when completing the underwriting questionnaire. Insurers deploy forensics investigators after a breach, and those investigators find the same thing that a proactive data risk assessment would reveal: the actual volume of sensitive records inside the organization is orders of magnitude larger than what was declared at policy inception. A company that reported 5,000 sensitive records and was found to hold 4 million is not just underinsured — it is in violation of its policy terms and recovers nothing. The implication for MSPs is direct and actionable: the clients who are relying on their cyber policy as a financial backstop may be carrying a false sense of protection, and the MSPs who surface that gap — with evidence, not just warnings — are the advisors who earn the most consequential seat in the room.
There is a structural irony beneath most MSP security conversations that rarely gets named directly: the MSPs advising clients on cyber risk are often among the least-assessed organizations in their own ecosystem. MSP engineers carry client credentials. They have access to PSA systems, documentation platforms, remote access tools, and network configurations that, if compromised, provide an attacker with a skeleton key to every client environment simultaneously. Beyond access credentials, the MSP itself generates sensitive data daily: payroll files with employee social security numbers, client contracts with financial terms, prospect data accumulated over years of sales activity, and in many cases, acquisition materials from deals that were explored and declined. Most MSPs have never run a data risk assessment on their own business — and the ones who do are consistently surprised by what they find. The strategic argument for starting there is not only about self-protection. It is about credibility. An MSP that has run the ROM calculation on its own business, discovered unexpected exposure, remediated it, and can walk a client through that experience has moved from selling a concept to sharing a story. That distinction is the difference between a vendor conversation and an advisory relationship — and in the business of building trust with executives, the story is the proof.
Return on Mitigation is a financial framework for quantifying the economic exposure created by an organization's data — specifically the unstructured data that traditional security tools do not inventory or protect. It assigns dollar values to sensitive records based on established pricing data from dark web markets, regulatory fine history, and breach precedent, then models the financial impact of a breach across dimensions including operational downtime, compliance penalties, valuation loss, and market share erosion. The ROM calculation answers a question most organizations have never asked: if the data we hold were stolen today, what would that actually cost us — and what is the measurable return of reducing that exposure before an event occurs?
Traditional cybersecurity ROI focuses on the return generated by a security investment — how much a specific control reduces the likelihood or cost of a future incident. ROM is focused on quantifying the existing exposure before any incident occurs. It is less about justifying a spend and more about establishing a baseline: here is what your data risk profile looks like today, in dollars, and here is how specific mitigation controls reduce that number. It produces a metric the C-suite can track, budget against, and present to a board — which is why it resonates in executive conversations where traditional technical security metrics do not.
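That baseline-versus-mitigated framing can be put in code. The dollar figures and the 95% reduction factor below are entirely hypothetical; the point is only the shape of the calculation, not the actual ROM model's inputs.

```python
# Sketch of the ROM framing: quantify today's exposure in dollars,
# then express a control's effect as a reduction of that baseline.
# All figures and the reduction factor are hypothetical.

def breach_cost(records_exposed: int, value_per_record: float,
                downtime_cost: float, regulatory_fines: float) -> float:
    """Model total financial impact if today's data were stolen."""
    return records_exposed * value_per_record + downtime_cost + regulatory_fines

baseline = breach_cost(100_000, 10.0, 250_000, 500_000)   # unmitigated
# Suppose file-level encryption renders 95% of records unreadable:
mitigated = breach_cost(5_000, 10.0, 250_000, 500_000)

print(f"Baseline exposure:  ${baseline:,.0f}")
print(f"After mitigation:   ${mitigated:,.0f}")
print(f"Exposure reduced by ${baseline - mitigated:,.0f}")
```

The output is the kind of single trackable number the paragraph describes: a dollar baseline a board can budget against, and a measured reduction when a control is applied.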
Unstructured data refers to information stored in formats without a predefined data model — Word documents, spreadsheets, PDFs, emails, and similar files. Unlike structured data in databases, unstructured data is difficult to inventory, classify, and protect at scale, and it is where the most sensitive organizational information typically lives: compensation data, patient records, customer PII, engineering specifications, financial models, and acquisition documents. Every major breach studied in Harvard's information risk management program — Sony, Target, Equifax, the NSA — involved the theft of primarily unstructured data that the organization did not know it held in the volume or sensitivity that was ultimately compromised.
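Even a crude inventory pass makes the scale of the unstructured-data problem visible. The sketch below only tallies files by extension — the extension list is an assumption, and a real assessment would also open each file and classify its contents.

```python
from pathlib import Path
from collections import Counter
import tempfile

# Minimal inventory sketch: count unstructured files under a root so
# the scale of the problem becomes visible. A real assessment would
# also classify each file's contents, not just its extension.
UNSTRUCTURED_EXTS = {".docx", ".xlsx", ".pdf", ".eml", ".csv", ".txt"}

def inventory(root: Path) -> Counter:
    """Tally unstructured files found anywhere under `root`."""
    return Counter(
        p.suffix.lower() for p in root.rglob("*")
        if p.is_file() and p.suffix.lower() in UNSTRUCTURED_EXTS
    )

# Demo on a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "payroll.xlsx").touch()
    (root / "old_deal_room").mkdir()
    (root / "old_deal_room" / "terms.pdf").touch()
    print(inventory(root))
```

The counts are usually the first surprise in an assessment: files from long-closed projects keep accumulating in synced folders long after anyone remembers they exist.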
The most effective entry point is not the tool — it is the question. Opening with "what data do you hold that, if stolen, would change the outcome of your business?" surfaces the conversation at the right level before any technology is introduced. From there, a data risk assessment that inventories and classifies the client's unstructured data produces a quantified liability number that the C-suite can engage with immediately. The advisory play is to position that number against the client's current security spend, their cyber policy coverage, and their compliance obligations — identifying where investment is well-placed and where gaps create material financial exposure that the existing stack does not address.
The primary reason cyber insurance claims are denied is misrepresentation of the insured organization's risk profile at the time of policy application. Insurers ask detailed questions about the volume of sensitive records held, the security controls in place, and the organization's data management practices. Most organizations answer based on perception or incomplete knowledge of their actual data environment — particularly regarding unstructured data that has never been inventoried. When a breach occurs and forensic investigators examine the environment, they typically find that the actual volume of sensitive records is significantly higher than what was declared, which constitutes a material misrepresentation under the policy terms. The claim is denied. Proactive data risk assessment closes this gap by producing an accurate record count before the policy is written — and before it needs to be invoked.
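The gap forensics investigators find can be stated as a single ratio. Using the 5,000-declared versus 4-million-found example mentioned earlier in the article (the helper name here is just an illustration):

```python
# The misrepresentation gap in numbers: what an insurer's forensics
# team compares after a breach. Figures mirror the article's example.

def misrepresentation_ratio(declared: int, discovered: int) -> float:
    """How many times larger the real record count is than declared."""
    return discovered / declared

ratio = misrepresentation_ratio(declared=5_000, discovered=4_000_000)
print(f"Actual holdings are {ratio:,.0f}x the declared figure")  # 800x
```

A proactive assessment runs the same comparison before the policy is written, when the answer is a premium adjustment rather than a denied claim.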
Yes — and not only for the self-protective value it provides. Running the assessment internally gives the MSP owner and team a firsthand understanding of what the tool finds, how the output is presented, and how the conversation should be structured at the executive level. More importantly, it produces a genuine story to tell: an MSP that discovered unexpected exposure in its own environment, took specific action to remediate it, and can demonstrate the before-and-after risk profile is infinitely more credible than one selling a concept it has never personally experienced. In the advisory business, the story is the proof. MSPs who start with themselves earn the right to have the conversation with every client in their book.
James Oliverio is the Founder and CEO of ideaBOX and a Senior Advisor and Premier Partner at Actifile. With a career spanning Wall Street CIO roles at major investment banks including UBS, Credit Suisse, and Donaldson, Lufkin & Jenrette, James built and operated Another Nine — a Westchester, New York-based MSP — before selling it in 2015. He subsequently earned a Harvard certificate in Information Risk Management and developed the Return on Mitigation (ROM) framework, which quantifies organizational data exposure in financial terms and connects those numbers to the controls that reduce them. Today, James works with MSPs and enterprise clients across financial services, healthcare, and manufacturing to discover, protect, and govern their unstructured data using the Actifile Data Security Posture Management platform.
Connect with James on LinkedIn →
Josh Peterson is the CEO of Bering McKinley and host of The BMK Vision Podcast. Since 2004, Josh has worked with hundreds of MSP owners to build operationally sound, profitable businesses through consulting, peer teams, and direct coaching.
Connect with Josh Peterson on LinkedIn →
The conversation James Oliverio is driving — where security is positioned as a financial discipline, not a compliance exercise — is exactly the kind of thinking the BMK Vision framework is built to support. If you are ready to build a more strategic advisory practice, the Vision Operating System is where that work begins.