On January 16, 2026, the Supreme Court granted certiorari to review a decision of the United States Court of Appeals for the Fourth Circuit in United States v. Chatrie. The case places squarely before the Court one of the most consequential Fourth Amendment questions of the digital era: Whether the government may satisfy the Constitution’s particularity requirement by seizing massive datasets first and deciding what it is actually interested in only later. The grant of review reflects growing judicial unease with investigative practices that are increasingly defined not by targeted suspicion, but by the government’s ability to collect, store, and algorithmically analyze data at scale.
A geofence warrant is fundamentally different from the traditional search warrant familiar to Fourth Amendment doctrine. A conventional warrant identifies a specific person, place, or thing to be searched—such as a particular home, device, or account—based on probable cause that evidence of a crime will be found there. A geofence warrant, by contrast, begins with geography and time rather than suspicion. The government defines a physical area (for example, a five-block radius around a bank) and a temporal window (for example, three hours before and after a robbery), and compels a third-party data custodian—typically Google—to search its entire location database and produce data for every device present within that space and time. Only after this bulk production does the government narrow the field, often in stages, to identify a single device or individual of interest. The constitutional problem is not merely breadth, but inversion: Instead of particularity constraining what is seized, the seizure is general, and “particularity” is relegated to a post-collection filtering exercise conducted by the government itself. As the Fourth Circuit acknowledged in Chatrie, this model risks transforming warrants into digital general warrants, precisely what the Fourth Amendment was designed to prohibit.
This inversion directly implicates the Fourth Amendment’s command that warrants must “particularly describ[e] the place to be searched, and the persons or things to be seized.” At the founding, that language was a reaction against British writs of assistance and general warrants that authorized officers to rummage broadly through private papers in the hope of finding something incriminating. The requirement of particularity was meant to operate the other way, limiting what the government could take in the first instance. Geofence warrants instead authorize the government to take everything within a defined digital perimeter and rely on later discretion—often aided by algorithms—to decide what matters.
The Supreme Court’s modern Fourth Amendment cases foreshadow the tension. In United States v. Jones in 2012, the Court held that the government’s warrantless installation of a GPS tracking device on a suspect’s car was a Fourth Amendment search, with concurring Justices emphasizing the sensitivity of location data and what it reveals about a person. In Riley v. California, in 2014, the Court stressed that modern digital devices like computers and cell phones differ “in both a quantitative and a qualitative sense” from physical containers because of their vast storage capacity and the intimate details they reveal. Four years later, in Carpenter v. United States, the Court held that long-term historical cell-site location information is so revealing that access to it generally requires a warrant, rejecting the idea that individuals meaningfully “volunteer” such data.
Geofence warrants go further still, capturing location data not for one suspect over time, but for everyone in a place, without individualized suspicion at the moment of collection. In the years since, the lower courts have divided over whether such geofence warrants satisfy the Fourth Amendment’s particularity requirement, or whether they amount to “seize everything and sort it out later” warrants.
The government’s use of geofence warrants in the January 6 prosecutions illustrates the power—and danger—of this approach. Investigators used location data to identify devices that moved through restricted areas of the U.S. Capitol on January 6, 2021, and then correlated those movement patterns with phone records and other data to trace individuals back to hotels, homes, and workplaces across the country. These techniques enabled hundreds of prosecutions, but they also normalized a model in which the government reconstructs people’s movements retrospectively by querying massive datasets. The government argued that, because the U.S. Capitol was closed on January 6 for the certification of the electoral college vote, anyone present other than members and staff was there unlawfully, and therefore every phone in the dataset constituted evidence of a crime. The same analytical machinery that identified rioters inside the Capitol is equally capable of tracking anyone else whose movements intersect with an area of investigative interest.
Other Mass Data Seizures
The erosion of particularity is even starker when mass data collection collides with searches of journalists or searches for privileged information. In the search of Washington Post reporter Hannah Natanson, federal agents reportedly seized her entire phone, two computers, and a Garmin watch based on suspicion that she had communicated with a particular source. That seizure swept in years of communications with confidential sources, unpublished reporting, and privileged newsgathering material. There has been no public indication that investigators implemented minimization procedures, sought appointment of a special master, or even used a taint team with binding protocols. Nor is there evidence that the magistrate judge was informed that Natanson was a reporter, despite the Privacy Protection Act of 1980, which Congress enacted specifically to prevent newsroom searches after the Supreme Court’s 1978 ruling in Zurcher v. Stanford Daily held that journalists were not exempt from such searches. As intrusive as any seizure of a journalist’s records may be (in Zurcher, the government seized three pieces of paper), what the government does now is seize every record a journalist has. When local officials in Marion, Kansas, believed that a reporter had improperly accessed a government database to look up a local resident’s DUI conviction, they obtained search warrants and seized the computers and cell phones of the entire newsroom and all of its reporters, as well as those of a member of the City Council.
The police may not even tell the court that they are seeking the records of a journalist. In an earlier investigation involving Associated Press reporter Ted Bridis, prosecutors obtained an order compelling disclosure of Bridis’ decryption key without informing the magistrate that the target was a journalist, thereby sidestepping heightened scrutiny. In the prosecution of journalist Timothy Burke, the government seized and reviewed more than 100 terabytes of data, with no meaningful minimization and no apparent disclosure to the magistrate of Burke’s journalistic role. (Note: I represent Mr. Burke in connection with this case.) In each instance, the warrant authorized the seizure of vast quantities of information unrelated to the suspected offense, leaving it to investigators to decide what to look at after the fact. Seize first, search later. And seize in bulk.
Artificial intelligence intensifies these concerns. Once data is seized in bulk, machine-learning tools can rapidly classify, correlate, and infer sensitive facts at a scale no human review could match. The government can plausibly argue that it need not know in advance what it is looking for, because algorithms will find patterns later. That logic collapses the distinction between targeted and untargeted searches and converts the particularity requirement into a procedural formality rather than a substantive safeguard. The core problem is that a large mass of data is seized first, searched second, and only then reviewed by a human. Because the Fourth Amendment requires particularity before a seizure is permitted, the fact that a human ultimately sees only a narrow slice of the search’s product (e.g., the relevant portion of the geofence) does not cure the initial seizure of the location records of hundreds or thousands of innocent people, a violation of their privacy. And is the result any different if the collector of that data (here, Google) filters it before delivering it to the government?
These issues are no longer confined to past cases. In Minneapolis, amid a large-scale federal immigration enforcement operation, civil liberties groups have alleged that ICE and DHS agents are documenting and tracking individuals who observe or follow agents’ movements—conduct protected by the First Amendment—and then using collected data to identify and approach those individuals elsewhere. There is no public indication that geofence warrants have yet been used in that context, but there is also no reason to believe that the U.S. Attorney’s Office in Minneapolis would refrain from deploying the same investigative tools used in January 6 cases. The government has asserted that tracking or following ICE agents may violate 18 U.S.C. § 372, a Civil War-era conspiracy statute. If geofence or similar bulk-data warrants are used to identify observers first and assess culpability later, the same Fourth Amendment problems arise.
The Supreme Court’s decision to hear Chatrie thus arrives at a critical juncture. The Court must decide whether the Fourth Amendment tolerates warrants that define searches by geography and time rather than by individualized suspicion, and whether mass collection followed by discretionary sorting—often aided by AI—can ever satisfy the constitutional demand for particularity. If the answer is yes, then the Fourth Amendment risks becoming a relic of a pre-digital age: Formally intact, but functionally incapable of constraining a government that can collect everything first and ask constitutional questions later.