MSG Spent $6 Million on Facial Recognition. Staff Used It to Build Files on Critics.
Madison Square Garden's biometric system has been used to preemptively enroll critics who never visited, eject a 9-year-old because of her mother's law firm, and compile an 18-page surveillance dossier on a trans woman. A lawsuit put the specifics on paper.
Nina Richards attended concerts at Madison Square Garden the way most people do — she bought tickets, scanned them at the door, and went inside. What she didn't know was that security staff were producing detailed reports of her visits: when her ticket scanned, which escalator she used, what she ate, how she interacted with staff. The reports ran to 18 pages.
She was in the system under a stalking allegation that, according to a lawsuit filed by a former MSG security employee, was false. The real reason, the lawsuit alleges, was her gender identity.
What MSG Built
Madison Square Garden invested at least $6 million in a biometric security system — metal detectors retrofitted with cameras positioned to capture the face of every person walking through the entrance. The cameras feed into a facial recognition platform that matches live captures against a database of enrolled individuals. When a match fires, the system alerts staff.
The technical implementation is not unusual. Similar systems operate at airports, stadiums, casinos, and retail chains across the United States. The differentiating factor is always the enrollment mechanism: who decides who goes in the database, on what basis, with what oversight.
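The mechanics described above can be sketched in a few lines. This is an illustrative toy, not MSG's actual software: real systems compare learned face embeddings produced by a neural network, and the names, vectors, and threshold here are hypothetical. What the sketch makes visible is the point in the pipeline where policy lives: the `enroll` call accepts anyone, for any reason.

```python
# Hypothetical sketch of an enrollment-driven facial recognition watchlist.
# Real systems use neural-network face embeddings; these toy vectors and the
# 0.85 threshold are illustrative assumptions, not MSG's implementation.
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

class Watchlist:
    def __init__(self, match_threshold=0.85):
        self.threshold = match_threshold
        self.enrolled = {}  # person_id -> stored face embedding

    def enroll(self, person_id, embedding):
        # The critical step: a human decides who goes in the database.
        # Nothing in the code constrains *why* -- that policy exists
        # entirely outside the system.
        self.enrolled[person_id] = embedding

    def check(self, live_embedding):
        # Compare a live camera capture against every enrolled face;
        # return the best match above threshold, else None.
        best_id, best_score = None, 0.0
        for pid, emb in self.enrolled.items():
            score = cosine_similarity(live_embedding, emb)
            if score > best_score:
                best_id, best_score = pid, score
        return best_id if best_score >= self.threshold else None

wl = Watchlist()
wl.enroll("critic-001", [0.9, 0.1, 0.4])       # enrolled from a scraped photo
hit = wl.check([0.88, 0.12, 0.41])             # near-identical capture at the door
miss = wl.check([0.0, 1.0, 0.0])               # unrelated face
```

In this sketch, `hit` fires a match and `miss` does not. The technology is a similarity search; everything contested in this story happens in `enroll`.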
At MSG, that mechanism was almost entirely human discretion with no documented accountability. Former security chief Jeff Eversole is named in the lawsuit filed by former staffer Donnie Ingrasselino. Ingrasselino alleges that Eversole directed his team to build surveillance files on specific individuals and enroll them in the biometric system — not because they had done anything threatening or illegal, but because venue management had decided they were people worth watching.
The Three Cases
The designer who had never visited
A graphic designer was added to MSG's facial recognition watchlist before he had ever entered the building. According to the lawsuit, security staff identified him through photographs from his social media accounts. The trigger was a friend wearing a shirt that read "Ban Dolan" — a reference to MSG's principal owner, James Dolan — at an event the designer himself did not attend.
Let me be precise about what this means operationally: MSG security was monitoring social media for people with associations to criticism of the venue's ownership, downloading photos of their faces, and enrolling them in a biometric system so that cameras would flag them if they ever approached the entrance.
He had not threatened anyone. He had not done anything. He had a friend with an opinion.
The mother and the "priority 8" label
A woman attending a Rockettes performance with her 9-year-old daughter was intercepted at the entrance when the facial recognition system matched her face against the watchlist. Her offense: she worked at a law firm that handled personal injury cases against MSG Entertainment's restaurant operations.
The ban was not personal to any specific action she had taken — it extended to all attorneys at the firm. MSG was using a $6 million biometric system as a tool to enforce blanket exclusions against the employees of opposing counsel. Any lawyer from any firm that had sued MSG restaurants was enrolled.
Her daughter was flagged with a "priority 8" designation in the system. The child had done nothing. She was there because her mother was there, and her mother's presence was a problem for MSG.
The 18-page dossier
The most extensively documented case in the lawsuit concerns Nina Richards. Ingrasselino's complaint describes detailed monitoring reports — 18 pages in total — compiled by security staff across her visits. The reports tracked the specifics: ticket scan times, escalator use, eating, staff interactions. The documentation is the kind of record a surveillance operation produces when someone has been actively watched across multiple visits, not merely flagged at entry.
Ingrasselino's lawsuit alleges that Richards was targeted "because of her gender identity" — that the surveillance wasn't triggered by a threat assessment but by who she was. The stalking allegation under which she was enrolled in the system was, according to the lawsuit, fabricated.
The Threat Model Most People Are Missing
The public conversation about facial recognition in the United States has spent years focused on government surveillance: police using it to identify protest participants, ICE using it to track immigration cases, federal agencies building watchlists. That's a legitimate set of concerns, and I've written about it before. But it has trained people to think about the facial recognition threat as primarily a state threat.
The MSG story is a private-sector threat, and it operates under a completely different set of constraints — which is to say, almost no constraints at all.
When a government agency uses facial recognition, there are at least nominal legal guardrails: warrant requirements in some jurisdictions, civil rights statutes, FOIA disclosures, oversight bodies. The guardrails are often inadequate and inconsistently enforced, but they exist. When a private entertainment venue uses facial recognition, the legal landscape is almost entirely blank.
MSG is not required by federal law to tell you they have a facial recognition system. They are not required to tell you if you're enrolled in their database. There is no right to inspect your entry, no right to challenge it, no right to know the reason. The decision to add someone to the watchlist — and the decision about what to do when the system fires a match — is made by private employees with no external accountability.
This is not a hypothetical state of affairs. The three cases in the lawsuit exist precisely because the people running the system made exactly the decisions you'd expect when there are no consequences for misuse.
The Legal Gap
Biometric privacy protection in the United States is a patchwork, and most of it has holes large enough to swallow the entire MSG situation.
The strongest law is the Illinois Biometric Information Privacy Act, BIPA. BIPA requires any entity collecting biometric identifiers — fingerprints, face geometry, retina scans, voiceprints — to obtain written consent from the individual, maintain the data under a defined retention policy, and not sell or profit from biometric data without consent. Critically, BIPA allows individuals to sue for violations without showing concrete harm. That private right of action is what makes it real: BIPA lawsuits have cost companies hundreds of millions of dollars in settlements, and the threat of private litigation creates genuine compliance pressure.
Texas has the Capture or Use of Biometric Identifier Act, CUBI. Washington State's My Health My Data Act covers some biometric data. A handful of other states have narrower provisions.
New York, where MSG operates, does not have a general biometric privacy law with the enforcement teeth of BIPA. It has narrower protections in specific contexts and a recently passed biometric data law for employment that doesn't cleanly extend to entertainment venues. If you're walking into MSG as a visitor, you are not, in most circumstances, in a jurisdiction that gives you meaningful legal recourse against being enrolled in a facial recognition database.
This is the gap that turns the MSG situation from an abuse story into a structural problem. The behavior documented in Ingrasselino's lawsuit is wrong and probably actionable on discrimination grounds — the targeting of Richards based on gender identity opens MSG to significant liability under New York state human rights law. But the existence of the surveillance system itself, and its use to exclude critics and opposing counsel, has no clean legal hook. A private venue in New York can build this system. It can use it to keep out people it doesn't like. It doesn't have to tell you it's doing it.
Private Surveillance Has Its Own Interests
Government surveillance and private surveillance have the same technical capability and different incentive structures, and the incentive structure matters a lot for thinking about what goes wrong.
Governments surveil for power: political control, law enforcement, border enforcement, counterterrorism. The threat model is that state power will be used against people with disfavored political views, disfavored identities, or disfavored associations.
Private venues surveil for a mix of legitimate security reasons and illegitimate business and personal ones. The legitimate uses are real: a major entertainment venue has genuine security needs, and biometric systems can improve safety in ways other methods can't. The illegitimate uses documented at MSG include: suppressing criticism of ownership, protecting the company's legal position in civil litigation, and targeting a specific individual based on her gender identity.
Those aren't edge cases. Those are the natural outputs of a system where the people running it have interests that don't align with the interests of the people being surveilled, and there are no external mechanisms forcing alignment.
The surveillance pricing ban Maryland advanced on April 22 — the same day Privacy Guides published the MSG story — describes the same structural problem in a different domain. A company collects data about you. The data is used to serve the company's interests rather than yours. Maryland's bill would restrict using behavioral data to personalize prices based on estimated ability to pay. The principle is the same: data collected for one stated purpose gets deployed for purposes that disadvantage the person the data is about.
That's what happened at MSG. The security infrastructure was sold as threat prevention. It became a tool for managing personal feuds, silencing critics, and targeting a trans woman eating a hot dog.
What You Can Do
- Know your state's biometric law. If you're in Illinois, BIPA gives you real rights: consent requirements, retention limits, a private right of action. Texas and Washington have weaker versions. If you're in a state with no biometric privacy law and a venue is scanning your face, there is currently no legal mechanism forcing them to tell you or stop.
- Assume you may be in a database. Any major entertainment venue, casino, or high-traffic retailer with modern security infrastructure is probably running some form of facial recognition. You don't know if you're enrolled on a watchlist. You can't easily find out.
- Support federal biometric privacy legislation. The National Biometric Privacy Act has been introduced in multiple congressional sessions and hasn't passed. State-level pressure has been more effective — state biometric privacy legislation is the current most viable path to legal coverage. EFF tracks what's advancing.
- If you've been wrongly denied entry to a venue, document it. The MSG cases came to light because Donnie Ingrasselino kept records. The people being surveilled — the designer, the mother, Nina Richards — didn't know they were in the system until something happened. If you're denied access to a venue without clear explanation, note the date, time, what you were told, and who you spoke to.
- Check whether your state legislature has an active biometric bill. Contact your state representative. The boring version of fighting this is the one that works.
For any individual concerned about a specific venue, the honest answer is: there's no reliable opt-out mechanism. Not attending is the only guaranteed way to avoid having your face captured, and that's not a realistic or acceptable answer for most people.
The System Worked Exactly as Designed
This is the observation I keep coming back to: nothing in the MSG story is a malfunction. The facial recognition system matched faces to the database. It flagged entries. It alerted staff. The cameras captured every person who walked through the door, exactly as they were built to do.
The 18-page dossier on Nina Richards isn't a bug. It's a human decision, made by people who had access to the system, to use it against a trans woman attending concerts. The ejection of a 9-year-old with a "priority 8" tag isn't a defect. It's the logical output of a watchlist that treats proximity to opposing counsel as a venue security concern.
Every argument for facial recognition in private venues — enhanced safety, faster threat identification, protecting assets — assumes the people operating the system are using it for the stated purpose. MSG shows that assumption failing in three documented cases, at minimum. We know about these three because one person filed a lawsuit. Donnie Ingrasselino was inside the security operation and saw what was happening.
Most of the people in databases like MSG's will never have that documentation.
The question isn't whether the technology works. It's whether you trust the person deciding who goes in the file — and whether anyone is watching them when they decide.
Sources: Privacy Guides — MSG Facial Recognition Surveillance, Privacy Guides — Maryland Set to Ban Surveillance Pricing