If you want people to be more secure in cyberspace, there are only a few levers you can pull. You can secure them directly. You can give them better tools so they can secure themselves. You can align incentives so that security is rewarded and insecurity is costly. Or you can require security through law, regulation, and enforcement. For years, U.S. cybersecurity strategy has used all four—sometimes awkwardly, sometimes effectively, but generally with the understanding that no single lever is sufficient.
The March 6, 2026 release of President Trump’s Cyber Strategy for America reflects a noticeable shift in how those levers are weighted. Developed within the Executive Office of the President, with significant involvement from the Office of the National Cyber Director and coordination across the national security community, the document is less a compliance blueprint than a statement of posture.
It assumes something important: That cybersecurity is fundamentally about adversaries.
That assumption drives the strategy’s emphasis on using “all instruments of national power” to disrupt cyber threats—law enforcement, intelligence, sanctions, and other tools designed to impose costs on bad actors. The focus is less on improving the security posture of individual organizations and more on shaping the behavior of those who are attacking them.
At one level, that makes sense. Nation-states and sophisticated threat groups are real. They conduct espionage, preposition in critical infrastructure, and, in some cases, support or tolerate criminal ecosystems that target U.S. entities. A strategy that ignores that reality would be incomplete.
But there is a tension embedded in this approach, and it is a significant one.
The vast majority of cyber harm experienced by businesses and individuals does not come from cyber warriors engaged in geopolitical conflict. It comes from fraudsters. Ransomware operators, business email compromise actors, phishing crews, account takeover specialists—these are not primarily strategic adversaries in the traditional sense. They are economic actors. Some operate independently. Some operate in loosely organized networks. Some are tolerated or even indirectly supported by hostile states. But their motivation is overwhelmingly financial.
That distinction matters.
Because strategies built around deterrence and power projection are designed to influence actors who respond to those signals. Nation-states may be deterred, at least at the margins, by the threat of retaliation, sanctions, or exposure. Criminal enterprises are far more elastic. When pressure is applied, they adapt. They change infrastructure, tactics, jurisdictions, and targets. They fragment and reassemble. They are less sensitive to displays of power and more responsive to shifts in opportunity and risk.
In that sense, the new strategy reflects an implicit theory: That the answer to cyber “crime” is the application of power, and that demonstrating that power will reduce harmful activity.
That theory is not obviously wrong. Targeted disruptions—taking down infrastructure, arresting key actors, freezing financial flows—can and do have impact. They create friction. They impose costs. They may temporarily degrade capability.
But they rarely eliminate it.
More often, they change the shape of the problem.
We have seen this repeatedly. Crackdowns on one form of cybercrime lead to the emergence of another. Takedowns of centralized infrastructure push actors toward more decentralized models. Increased pressure on ransomware groups leads to shifts in tactics—data theft without encryption, double extortion, or entirely new monetization schemes. The system evolves in response to the pressure applied.
The new strategy, by elevating disruption and deterrence, is pulling hard on the first lever—securing the environment directly by going after the actors. At the same time, it signals a relative de-emphasis on prescriptive regulation and detailed compliance frameworks, calling instead for “common sense” approaches and reduced burden.
That rebalancing has consequences.
Earlier strategies, particularly the 2023 framework, placed significant weight on the other levers. They emphasized secure-by-design technology, liability for insecure software, and regulatory baselines intended to raise the floor across the private sector. The theory was that many cyber incidents were preventable—that they resulted from known vulnerabilities, weak controls, and misaligned incentives. Fix those, and you reduce the attack surface.
The 2026 strategy does not reject that view, but it sidelines it.
It suggests, implicitly, that even a well-secured environment will continue to be targeted, and that the decisive factor is not just how strong the defenses are, but how constrained the attackers become. That is a shift from engineering and governance toward operations and consequences.
The private sector’s role in this model is largely observational and cooperative. Companies are expected to detect, analyze, and share information about threats, enabling government action. They are not being asked to engage in offensive operations, but they are positioned as essential participants in a system designed to act against adversaries at scale.
That reflects reality, but it also exposes a gap.
If most cyber harm is driven by economically motivated actors who are highly adaptive, then disruption alone is unlikely to produce sustained reductions in risk. It may suppress specific groups or campaigns, but it does not fundamentally alter the incentives that drive cybercrime. As long as the expected return on an attack exceeds the expected cost, the activity persists.
That brings the other levers back into focus.
Technology matters because it can make certain classes of attacks more difficult or less scalable. Incentives matter because they influence investment decisions—whether organizations prioritize security or defer it. Regulation matters because it establishes baseline expectations and creates accountability when those expectations are not met.
A strategy that leans heavily on power must still engage with those elements if it is to have a lasting effect.
The 2026 document leaves much of that to future development. It is intentionally high-level, offering direction rather than detailed implementation. That gives it flexibility, but it also means that the real strategy will be defined by what follows—by how agencies interpret the mandate, by what regulations are relaxed or introduced, by how aggressively disruption operations are pursued, and by how the private sector is integrated into that process.
There is also a broader implication.
By framing cybersecurity more explicitly as a domain of adversarial competition, the strategy moves policy closer to a national security model. That may be appropriate for certain classes of threats, particularly those involving state actors. But when the same framework is applied to what is, at its core, a vast and evolving ecosystem of fraud, the fit becomes less clear.
Crime is not always reduced by displays of power. Sometimes it is displaced. Sometimes it is transformed. Sometimes it simply becomes more efficient.
The new strategy recognizes, correctly, that cybersecurity cannot be reduced to compliance checklists and best practices. It is an active contest with actors who adapt and persist. But in emphasizing that reality, it risks underweighting another: That most of the harm is driven by economics, not geopolitics.
If that is true, then the most effective response will not come from any single lever.
It will come from pulling all of them—technology, incentives, regulation, and, where appropriate, power—in a way that not only disrupts attackers, but also changes the underlying conditions that make cybercrime profitable in the first place.