The perimeter tools most MSPs deploy are necessary — but they don't protect the data once someone is inside. Word documents, spreadsheets, payroll exports, old deal rooms, AI prompts — none of it is inventoried, valued, or protected at the file level. That gap is where the real financial exposure lives. The pressure on MSPs to speak to executive-level risk is only growing — and the advisors who learn to frame cybersecurity in financial terms will own the most durable client relationships. Return on Mitigation is the framework for that conversation: what would a breach actually cost, which data creates the greatest exposure, and what is the measurable return of reducing it — in dollars, not checklists.
James Oliverio built and sold an MSP, spent decades in Wall Street CIO roles, earned a Harvard certification in information risk management, and is now Founder and CEO of ideaBOX. What makes his perspective valuable is not the technology — it is the question he leads with: what data do we hold that, if stolen, would change the trajectory of this business? For MSP owners, the ability to put that question in front of a client — with a quantified number, a financial model, and a remediation plan — is the difference between being a vendor in a compliance conversation and being an advisor in a business-level security strategy. That is the positioning shift this episode is about.
The Balance Sheet Risk No One Is Counting
Every organization has a visible security posture and a hidden data liability. The visible posture — firewalls, endpoint protection, compliance audits, backup systems — is what gets measured, budgeted, and reported. The hidden liability is the unstructured data that flows in and out every day: compensation spreadsheets emailed to the wrong distribution list, deal room files from acquisitions that closed five years ago but are still synced to a personal Dropbox, patient records analyzed in Excel by someone flagging billing anomalies, engineering specifications shared with a third-party manufacturer under an NDA that nobody audited. These files do not appear on a balance sheet, but they carry a quantifiable financial exposure. Every sensitive record has a market value — a date of birth is worth roughly $10 on the dark web; a credit card record can exceed $100 — and when those records exist inside an organization's environment without protection or even awareness, the exposure compounds silently. Return on Mitigation is the discipline of calculating that number, connecting it to the controls that do or do not exist, and presenting that analysis to the people who have the authority to act on it: the C-suite and the board.
- Unstructured data — Word documents, spreadsheets, PDFs, emails — accounts for the majority of stolen information in major breach events, yet most organizations have never inventoried what sensitive records they hold or where those records currently reside.
- The FAIR Institute provides financial modeling frameworks that assign dollar values to data types based on dark web pricing, breach precedent, and regulatory fine history — turning an abstract risk conversation into a measurable liability that executives can act on.
- For an MSP owner, the practical starting point is not a technology deployment. It is a single question directed at the client's leadership team: what data, if stolen today, would change the financial outcome of this business — and do we know where it is?
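The bullets above describe ROM as arithmetic over a data inventory, and it can be sketched as exactly that. A minimal sketch, assuming illustrative per-record values drawn from the dark-web price ranges cited in this article; the `return_on_mitigation` helper, the record categories, and the 90% mitigation fraction are hypothetical assumptions for illustration, not part of the FAIR methodology:

```python
# Illustrative per-record dark-web values; the $10 date-of-birth and $100+
# credit-card figures come from the article, the patient-record value is an
# assumption for the sketch.
RECORD_VALUES_USD = {
    "date_of_birth": 10,
    "credit_card": 100,
    "patient_record": 250,  # assumed value, for illustration only
}

def estimate_exposure(inventory: dict) -> float:
    """Raw breach exposure: record count times per-record market value."""
    return float(sum(RECORD_VALUES_USD[kind] * count
                     for kind, count in inventory.items()))

def return_on_mitigation(exposure: float, mitigated_fraction: float,
                         control_cost: float) -> float:
    """Dollar exposure removed by a control, net of the control's cost."""
    return exposure * mitigated_fraction - control_cost

# A hypothetical client inventory surfaced by a data risk assessment.
inventory = {"date_of_birth": 50_000, "credit_card": 2_000,
             "patient_record": 1_000}
exposure = estimate_exposure(inventory)
# A control (e.g. file-level encryption) assumed to neutralize 90% of the
# exposure at a $40,000 annual cost.
rom = return_on_mitigation(exposure, 0.90, 40_000)
print(f"Raw exposure: ${exposure:,.0f}")      # Raw exposure: $950,000
print(f"Net ROM of control: ${rom:,.0f}")     # Net ROM of control: $815,000
```

The output is the point of the exercise: a single dollar figure a C-suite can budget against, rather than a checklist of controls.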
Why the Perimeter Has Never Been Enough
Sony. Target. Equifax. The NSA. Each of those organizations had sophisticated security stacks at the time of their breaches — firewalls, intrusion detection, access controls, and in some cases dedicated security operations centers. What they did not have was file-level protection on the data that was ultimately stolen. This is the structural failure that the perimeter model cannot address: once an attacker is inside the network — through phishing, a compromised third-party vendor, a misconfigured access control, or an insider — unprotected files are available and readable. Traditional data loss prevention tools help detect and flag anomalous data movement, but they rarely remediate. They produce a case file. They generate an alert. They confirm, after the fact, that something went wrong. Data Security Posture Management takes a fundamentally different approach: the data itself is protected at the file level, with encryption applied automatically based on sensitivity classification. The stolen file is unreadable. The breach still occurred, but the liability did not materialize because the data could not be accessed. That distinction — between detecting a breach and surviving one — is what the perimeter conversation has never been equipped to provide.
- Detection without remediation is not a complete security strategy — it is a more expensive way to document the breach after it happens. MSPs positioning themselves as advisors need to understand and articulate that gap before their clients encounter it.
- File-level encryption shifts the legal and financial burden of a breach event: even if a third-party vendor's network is compromised, the files they were entrusted with remain unreadable, and forensic evidence can demonstrate the protection was in place before the incident occurred.
- The downstream financial consequences of a breach — compliance fines, operational downtime, market share erosion, valuation impairment — are effects that nobody models proactively. ROM is the discipline of modeling them first, before they happen.
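Encryption "applied automatically based on sensitivity classification" implies a classification step that runs before any protection is applied. A toy sketch of that step, assuming SSN-style patterns and Luhn-valid digit runs as the only sensitivity signals (real DSPM products such as Actifile use far broader detection than this):

```python
import re

# Toy sensitivity classifier. The patterns and labels here are illustrative
# assumptions, not how any specific DSPM product works.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # US SSN layout
CARD_RE = re.compile(r"\b\d{13,16}\b")           # candidate card numbers

def luhn_valid(number: str) -> bool:
    """Luhn checksum: separates real card numbers from random digit runs."""
    digits = [int(d) for d in number]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def classify(text: str) -> str:
    """Return 'sensitive' if PII-like patterns appear, else 'public'.
    In a DSPM pipeline, 'sensitive' would trigger file-level encryption."""
    if SSN_RE.search(text):
        return "sensitive"
    if any(luhn_valid(m.group()) for m in CARD_RE.finditer(text)):
        return "sensitive"
    return "public"

print(classify("Payroll: employee SSN 123-45-6789"))  # sensitive
print(classify("Quarterly newsletter draft"))         # public
```

The design point this illustrates: classification is cheap and automatable, which is why protection can follow the data itself rather than depending on where the perimeter happens to be.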
The AI Blind Spot: When Efficiency Becomes Exposure
AI adoption is accelerating faster than AI governance, and the gap between them is creating a new category of data liability that most organizations have not begun to measure. Employees who are genuinely trying to be more productive are feeding ChatGPT, Grok, and Microsoft Copilot with company data — patient records, internal financial models, proprietary engineering specifications, HR data — without any awareness that the prompt may conflict with their organization's NDAs, data governance policies, or compliance obligations. For a manufacturing company protecting patented product specifications, for a healthcare organization managing ePHI, or for a financial services firm handling client account information, this is not a hypothetical risk. It is a current exposure with no visibility and no control. The organizations that discover this exposure through a proactive data risk assessment — rather than through a compliance investigation or a breach — are in a structurally different position. And the MSPs and security advisors who identify it and provide a mechanism to monitor, classify, and restrict AI data usage are entering a conversation that their competitors are not yet equipped to have.
- AI usage by employees is happening whether or not an organization has a formal policy, and most organizations that do have policies have no technical mechanism to enforce or audit them in practice.
- The same data discovery and classification tools that identify unstructured data risk on an internal network can surface which file types and record categories are being shared with large language models — creating an audit trail that previously did not exist.
- For MSPs, this represents a distinct positioning opportunity: the conversation about AI governance and data exposure belongs to the advisor who initiates it first, and it belongs at the executive level — not in a helpdesk ticket or a compliance review.
The Cybersecurity Insurance Trap
Cyber insurance has become a standard line item on most companies' risk management budgets, and many owners operate under the assumption that the policy they are paying for translates directly into financial protection when a breach occurs. The underwriting data does not support that assumption. In 2024, a substantial share of cyber insurance claims were denied — not because the breach did not happen, but because the organization had materially misrepresented its risk profile when completing the underwriting questionnaire. Insurers deploy forensics investigators after a breach, and those investigators find the same thing that a proactive data risk assessment would reveal: the actual volume of sensitive records inside the organization is orders of magnitude larger than what was declared at policy inception. A company that reported 5,000 sensitive records and was found to hold 4 million is not just underinsured — it is in violation of its policy terms and recovers nothing. The implication for MSPs is direct and actionable: the clients who are relying on their cyber policy as a financial backstop may be carrying a false sense of protection, and the MSPs who surface that gap — with evidence, not just warnings — are the advisors who earn the most consequential seat in the room.
- Cyber underwriting questionnaires ask about sensitive record counts, backup disciplines, and access controls — questions that most organizations answer based on perception rather than any actual inventory of their data environment.
- Proactive data risk assessment creates the same evidence base that insurers use to deny claims — and MSPs who deploy it for clients are building an advisory position that is very difficult for a competitor to displace.
- The longer-term market opportunity is significant: functioning as a credible data risk rating resource for cyber underwriters, with documented risk profiles across the MSP's entire client base, is a positioning play that has not yet been claimed at scale.
Eat Your Own Dog Food: The MSP as the First Client
There is a structural irony beneath most MSP security conversations that rarely gets named directly: the MSPs advising clients on cyber risk are often among the least-assessed organizations in their own ecosystem. MSP engineers carry client credentials. They have access to PSA systems, documentation platforms, remote access tools, and network configurations that, if compromised, provide an attacker with a skeleton key to every client environment simultaneously. Beyond access credentials, the MSP itself generates sensitive data daily: payroll files with employee social security numbers, client contracts with financial terms, prospect data accumulated over years of sales activity, and in many cases, acquisition materials from deals that were explored and declined. Most MSPs have never run a data risk assessment on their own business — and the ones who do are consistently surprised by what they find. The strategic argument for starting there is not only about self-protection. It is about credibility. An MSP that has run the ROM calculation on its own business, discovered unexpected exposure, remediated it, and can walk a client through that experience has moved from selling a concept to sharing a story. That distinction is the difference between a vendor conversation and an advisory relationship — and in the business of building trust with executives, the story is the proof.
- MSPs are high-value targets precisely because a single compromise can cascade to dozens of client environments simultaneously — making the MSP's own data risk profile one of the highest-stakes assessments in their market, and one of the most neglected.
- Running a data discovery and risk assessment internally gives the MSP owner and leadership team firsthand familiarity with what the tool finds, how the output is presented, and how the conversation should be structured at the C-suite level — preparation that no sales training can replicate.
- A simple starting point: if payroll files containing employee PII are accessible on a shared drive without encryption, the MSP has already answered the question of whether it is operationally ready to advise clients on data security at a file level.
Frequently Asked Questions
What is Return on Mitigation (ROM) in cybersecurity?
Return on Mitigation is a financial framework for quantifying the economic exposure created by an organization's data — specifically the unstructured data that traditional security tools do not inventory or protect. It assigns dollar values to sensitive records based on established pricing data from dark web markets, regulatory fine history, and breach precedent, then models the financial impact of a breach across dimensions including operational downtime, compliance penalties, valuation loss, and market share erosion. The ROM calculation answers a question most organizations have never asked: if the data we hold were stolen today, what would that actually cost us — and what is the measurable return of reducing that exposure before an event occurs?
How is ROM different from a traditional cybersecurity ROI calculation?
Traditional cybersecurity ROI focuses on the return generated by a security investment — how much a specific control reduces the likelihood or cost of a future incident. ROM is focused on quantifying the existing exposure before any incident occurs. It is less about justifying a spend and more about establishing a baseline: here is what your data risk profile looks like today, in dollars, and here is how specific mitigation controls reduce that number. It produces a metric the C-suite can track, budget against, and present to a board — which is why it resonates in executive conversations where traditional technical security metrics do not.
What is unstructured data and why does it represent the greatest cybersecurity risk?
Unstructured data refers to information stored in formats without a predefined data model — Word documents, spreadsheets, PDFs, emails, and similar files. Unlike structured data in databases, unstructured data is difficult to inventory, classify, and protect at scale, and it is where the most sensitive organizational information typically lives: compensation data, patient records, customer PII, engineering specifications, financial models, and acquisition documents. Every major breach studied in Harvard's information risk management program — Sony, Target, Equifax, the NSA — centered on the theft of unstructured data that the organization did not know it held at the volume or sensitivity level that was ultimately compromised.
How should an MSP approach the ROM conversation with clients?
The most effective entry point is not the tool — it is the question. Opening with "what data do you hold that, if stolen, would change the outcome of your business?" surfaces the conversation at the right level before any technology is introduced. From there, a data risk assessment that inventories and classifies the client's unstructured data produces a quantified liability number that the C-suite can engage with immediately. The advisory play is to position that number against the client's current security spend, their cyber policy coverage, and their compliance obligations — identifying where investment is well-placed and where gaps create material financial exposure that the existing stack does not address.
Why are so many cybersecurity insurance claims being denied?
The primary reason cyber insurance claims are denied is misrepresentation of the insured organization's risk profile at the time of policy application. Insurers ask detailed questions about the volume of sensitive records held, the security controls in place, and the organization's data management practices. Most organizations answer based on perception or incomplete knowledge of their actual data environment — particularly regarding unstructured data that has never been inventoried. When a breach occurs and forensic investigators examine the environment, they typically find that the actual volume of sensitive records is significantly higher than what was declared, which constitutes a material misrepresentation under the policy terms. The claim is denied. Proactive data risk assessment closes this gap by producing an accurate record count before the policy is written — and before it needs to be invoked.
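The misrepresentation test described above is, at bottom, a comparison between two numbers. A minimal sketch of that comparison; the 25% tolerance is an illustrative assumption, not an underwriting standard, and real policy language defines materiality in its own terms:

```python
# Hypothetical sketch of the declared-vs-discovered gap that drives claim
# denials. The tolerance threshold is an assumption for illustration.
def material_misrepresentation(declared: int, discovered: int,
                               tolerance: float = 0.25) -> bool:
    """True when forensically discovered sensitive records exceed the
    count declared at policy inception by more than the tolerance."""
    if declared <= 0:
        return discovered > 0
    return (discovered - declared) / declared > tolerance

# The article's example: 5,000 records declared, 4 million found.
print(material_misrepresentation(5_000, 4_000_000))  # True
```

Running the same comparison proactively, before the policy is written, is what turns the assessment from an insurer's denial evidence into the client's protection.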
Should an MSP run a data risk assessment on its own business before offering it to clients?
Yes — and not only for the self-protective value it provides. Running the assessment internally gives the MSP owner and team a firsthand understanding of what the tool finds, how the output is presented, and how the conversation should be structured at the executive level. More importantly, it produces a genuine story to tell: an MSP that discovered unexpected exposure in its own environment, took specific action to remediate it, and can demonstrate the before-and-after risk profile is far more credible than one selling a concept it has never personally experienced. In the advisory business, the story is the proof. MSPs who start with themselves earn the right to have the conversation with every client in their book.
Episode Highlights
- 00:01 — From Wall Street CIO to MSP founder to data risk advisor: the career arc that built the ROM framework
- 07:53 — Return on Mitigation introduced: how to quantify breach exposure in dollars and why it reaches executives where technical metrics do not
- 13:44 — The AI exposure gap: employees feeding ChatGPT with patient records and proprietary data — and why most organizations have zero visibility into it
- 17:00 — The DLP ceiling: why detection without file-level remediation leaves the data fully accessible once the perimeter is breached
- 28:20 — The Dropbox discovery: finding $27 million in personal data risk on a single Mac — and what it took to believe the number was real
- 32:06 — The insurance reality: how forensic evidence is used to deny claims — and why the same evidence MSPs can deploy proactively is what insurers use against clients reactively
- 39:11 — The first client: why running the ROM assessment on your own MSP before selling it to clients changes the nature of every conversation that follows
About the Guest: James Oliverio
James Oliverio is the Founder and CEO of ideaBOX and a Senior Advisor and Premier Partner at Actifile. With a career spanning Wall Street CIO roles at major investment banks including UBS, Credit Suisse, and Donaldson, Lufkin & Jenrette, James built and operated Another Nine — a Westchester, New York-based MSP — before selling it in 2015. He subsequently earned a Harvard certificate in Information Risk Management and developed the Return on Mitigation (ROM) framework, which quantifies organizational data exposure in financial terms and connects those numbers to the controls that reduce them. Today, James works with MSPs and enterprise clients across financial services, healthcare, and manufacturing to discover, protect, and govern their unstructured data using the Actifile Data Security Posture Management platform.
Connect with James on LinkedIn →
About the Host: Josh Peterson
Josh Peterson is the CEO of Bering McKinley and host of The BMK Vision Podcast. Since 2004, Josh has worked with hundreds of MSP owners to build operationally sound, profitable businesses through consulting, peer teams, and direct coaching.
Connect with Josh Peterson on LinkedIn →
Related Resources from Bering McKinley
- Cyber Risk, Claims, and Coverage: A Deep Dive with Bill Haber
- Navigating Compliance Challenges for MSPs
Want to Continue the Conversation?
The conversation James Oliverio is driving — where security is positioned as a financial discipline, not a compliance exercise — is exactly the kind of thinking the BMK Vision framework is built to support. If you are ready to build a more strategic advisory practice, the Vision Operating System is where that work begins.