A Republic if You Can Keep It: Pulse
April 1, 2026 · Stories of America
AI Reshapes the Institutional Environment: Election Integrity Debates Intensify, Executive Power Asserts Constitutional Flexibility, and Surveillance Policing Narratives Diverge
Executive Summary
- Media narratives affirming election security and those alleging election fraud are both strengthening simultaneously rather than trading off against each other, driven by the rapid proliferation of AI-generated deepfakes in the 2026 midterm cycle. This dual intensification suggests that AI content is amplifying polarization in election discourse from both directions, while regulatory frameworks—federal and state alike—lag far behind the technology's deployment in political campaigns.
- Language framing executive power as constitutionally flexible has climbed well above its long-term average, while language praising the Constitution as a foundational document has weakened—a divergence with direct consequences for AI governance. The White House's proposed framework to preempt state AI laws embodies this executive-centered posture, but legal analysts and civil liberties organizations have flagged that the vision depends on congressional action that has thus far not materialized, leaving AI organizations caught between competing federal and state regulatory regimes.
- Media discourse around policing remains heavily skewed toward institutional critique, with language characterizing police as violent and oppressive far outpacing language portraying them as dedicated public servants, while the presumption-of-innocence narrative continues to weaken in tandem with the expansion of AI-driven surveillance and predictive policing tools. The 77-point gap between these policing narratives, combined with reporting on algorithmic misidentification and unauditable AI-generated police reports, signals growing reputational exposure for organizations whose technologies are deployed in law enforcement contexts.
- Across all three domains—elections, executive governance, and law enforcement—AI is emerging as a force multiplier for institutional tension rather than a stabilizing tool. The technology is simultaneously the subject of contested governance, an accelerant of political polarization through synthetic media, and an expanding presence in surveillance infrastructure, creating compounding narrative risks for organizations operating in or adjacent to AI.
- Courts have emerged as the primary institutional counterweight in media narratives, with language praising the judiciary rising modestly even as confidence in other institutional checks—separation of powers, election integrity, and due process protections—remains depressed. This pattern suggests that media framing increasingly concentrates institutional legitimacy in the judiciary alone, a configuration that carries fragility risk if judicial authority itself comes under sustained political pressure.
---
Competing Election Narratives Surge in Tandem as AI Deepfakes Enter the 2026 Midterm Cycle
Perscient's semantic signature tracking the density of language affirming that America's elections are the most secure in the world posted the largest one-month change in our data set, climbing by 45.9 points to an Index Value of 38, well above its long-term mean. At the same time, our semantic signature tracking language consistent with claims that American elections are broken and being stolen strengthened by 31.6 points to an Index Value of 29, also now above average. That both narratives are accelerating simultaneously, rather than one rising at the other's expense, points to a deeply polarized but intensely active discourse environment as the 2026 midterm cycle takes shape.
The catalyst binding these two competing narratives is the rapid proliferation of AI-generated content in the political arena. Deepfake campaign advertisements are entering the field with few guardrails: there is no federal regulation constraining AI in political messaging, leaving only a patchwork of largely untested state laws. The most visible example this month came when Senate Republicans released an online ad featuring a realistic but entirely fabricated version of Democratic Texas Senate candidate James Talarico, in which the AI-generated figure appeared to speak directly into the camera for over a minute. The ad was the latest in a series of AI-generated creations from the National Republican Senatorial Committee. Public Citizen called it "a disgraceful attempt to intentionally deceive voters", while Our Revolution noted that the practice is becoming a broader GOP campaign playbook.
The scale of the challenge extends well beyond a single ad. The World Economic Forum warned in March that advanced AI and synthetic media are driving a systemic global crisis that risks destabilizing modern democracies, noting that deepfakes have become nearly indistinguishable from reality. Purdue University professor Daniel Schiff cautioned that the growing use of AI political content "very much risks being supercharged" in its potential to erode voter trust in institutions. Cornell researcher Sarah Kreps has noted that synthetic media is likely to become a routine campaign tool in both parties.
Alongside the AI dimension, the federal government is actively pushing into election administration in ways that feed both narratives. The U.S. Department of Justice has now sued 29 states and the District of Columbia over their refusal to provide unredacted voter rolls containing driver's license and partial Social Security numbers. Democracy Docket flagged reporting that DOJ may share this voter data with the Department of Homeland Security. Simultaneously, the RNC remains involved in 120 election integrity lawsuits across 30 states; priority areas include voter ID, mail-in ballot security, noncitizen voting prevention, and voter roll accuracy. Polling data from Scott Rasmussen indicated that 55% of voters believe that requiring photo ID would do the most to restore confidence in elections, while a March NPR/PBS News/Marist poll found that the share of Americans expressing little or no confidence in election fairness has climbed to 34%, up from 24% last year.
The simultaneous strengthening of both election semantic signatures suggests that AI-generated content is not simply fueling one side's narrative but amplifying the overall intensity of election discourse from both directions. Although 26 states have passed laws addressing AI in political ads, most focus only on disclosure rather than outright bans, and the regulatory gap remains wide. AI-produced political content is becoming a defining controversy of the midterm cycle, raising both reputational and regulatory risk for organizations working in or around AI.
Executive Power Asserts Constitutional Flexibility Over AI Governance as Courts Push Back
The intensifying election discourse is unfolding against a broader backdrop of contested executive authority with direct implications for how AI will be governed in the United States. Perscient's semantic signature tracking language consistent with the idea that the Constitution is not a death pact and should be bent rose by 20.7 points to an Index Value of 56, more than 55% above its long-term mean and the second-highest absolute value in the data set. Conversely, our semantic signature tracking language praising America's Constitution as one of the most brilliant ever written declined by 10.4 points to 10.
Our semantic signature tracking language consistent with assertions that the presidency is becoming increasingly imperial remains the highest Index Value in the data set at 75, though it moderated by 25.0 points from its prior-month level of 100. Meanwhile, our semantic signature tracking language affirming that America's separation of powers is working strengthened modestly by 4.6 points but remains below average at -10. The imperial presidency narrative has come off its recent levels but still dominates discourse, while confidence in checks and balances remains depressed.
The administration's direct application of this executive-centered approach to AI governance came into sharp focus on March 20, when the White House released its National Policy Framework for Artificial Intelligence. Building on prior executive actions including a December 2025 Executive Order and Trump's "America's AI Action Plan," the framework proposes that Congress adopt legislation broadly preempting state AI laws deemed to impose "undue burdens." The document encourages relying on existing sector-specific regulators rather than creating a new centralized federal AI regulatory authority. The Electronic Frontier Foundation criticized the framework for "barring states from enacting protections for their residents", while others in the technology sector celebrated it as a significant win for AI builders.
Legal analysts have noted the gap between executive aspiration and legal reality. A Ropes & Gray analysis observed that the Attorney General's judgment is not, in itself, a basis for preemption, and that this "vision is not a legal reality" without congressional action. Although Congress has authority to preempt state AI laws through legislation, it has thus far declined to do so. The Washington Post reported that while conditions for a bipartisan deal on an AI framework are beginning to emerge, the politics of passing actual legislation remain challenging.
Courts have been the principal check on executive overreach. A Just Security litigation tracker as of late March shows 238 total plaintiff wins blocking administration actions versus 117 government wins. Our semantic signature tracking language praising the Supreme Court as a bulwark of freedom rose by 6.9 points to an Index Value of 4, moving above its long-term mean, while our semantic signature tracking language arguing that even the courts have started overstepping their power declined by 7.5 points to -12. Law firms targeted by the administration urged a federal appeals court to uphold rulings blocking executive orders against them, reinforcing the judiciary's role as the active counterweight.
As of March 26, 2026, President Trump had signed 252 executive orders, 59 memoranda, and 135 proclamations in his second term. More than 160 new state AI laws are taking effect across diverse domains, and no comprehensive federal AI legislation is on the immediate horizon. The clash between federal preemption efforts and state regulatory autonomy creates substantial compliance uncertainty for AI organizations. Media discourse increasingly frames executive action as constitutionally flexible rather than constitutionally grounded, even as courts serve as the primary institutional check.
AI Surveillance Intensifies the Divergence Between Policing and Due Process Narratives
These AI governance tensions extend into law enforcement, where expanding surveillance technology is driving the sharpest divergence between policing and civil liberties narratives. Perscient's semantic signature tracking the density of language characterizing American police as becoming violent and oppressive holds an Index Value of 43, well above its long-term mean and essentially flat month-over-month. In contrast, our semantic signature tracking language portraying American police as public servants doing their best declined by 4.6 points to -34, remaining well below average. The gap between these two signatures, roughly 77 points, indicates that media language about policing is heavily weighted toward critique rather than support.
Our semantic signature tracking language affirming that everyone is innocent until proven guilty in America fell by 6.4 points to an Index Value of -35, continuing a trajectory that places the presumption-of-innocence narrative well below its long-term norm. The weakening of this narrative runs parallel to the expansion of AI-driven predictive tools in law enforcement.
The companies driving this expansion are growing rapidly. Flock Safety, now valued at $7.5 billion, has deployed more than 80,000 AI-powered cameras that actively surveil nearly every American as they go about daily life. However, Flock Safety's license plate readers frequently misread plates, and innocent people bear the consequences. The Marshall Project documented that while law enforcement cameras are proliferating everywhere, many agencies have few safeguards to prevent abuse by individual officers. One social media commentator captured the concern succinctly, noting that with basic facial recognition, "no one interviewed her, no one checked bank records. They just trusted the algorithm".
Beyond surveillance cameras, AI is reshaping the administrative infrastructure of policing itself. A Mark43 survey found that 51% of first responders use AI to automate administrative tasks, 49% use it for real-time video surveillance and facial recognition, and 93% of U.S. first responders support their agency's use of AI. Axon's Draft One tool has become the most widely used generative AI tool for writing police reports. Yet experts warn that the real dangers are bias amplification, reduced accountability, and the impossibility of auditing AI-generated reports. A 2026 study in Public Administration Review examining 71 federal and state court dockets found that when algorithmic decision-making systems fail, they become conduits for administrative error, with deviations from legally prescribed outcomes arising from flawed data, problematic design choices, or inherent system limitations.
The broader surveillance architecture is also drawing longer-horizon attention. Commentator Dwarkesh Patel observed that by 2030, it will be less expensive to monitor every nook and cranny in America than it is to remodel the White House, noting that under current law, Americans have no Fourth Amendment protection against data shared with third parties, and that AI eliminates the practical bottleneck that once limited mass surveillance. Another observer warned that a "government that gets comfortable with machine judgment in war will get comfortable with machine judgment in policing", describing a trajectory where capability creates appetite, appetite creates justification, and infrastructure settles in and starts looking normal.
Our semantic signature tracking language about American cancel culture ruining lives rose by 4.6 points but remains deeply below average at an Index Value of -61, the lowest absolute value in the data set. Cultural accountability concerns have not gained meaningful media traction as a counterweight to the systemic institutional critiques that dominate the discourse.
Together, these trends suggest that the media environment is framing AI in policing primarily through a lens of civil liberties risk rather than public safety benefit. The decline in presumption-of-innocence language alongside the persistently elevated critique of policing institutions signals that for AI organizations whose technologies are deployed in law enforcement contexts—from facial recognition to report generation to predictive policing—the narrative environment carries growing scrutiny and reputational exposure.
Archived Pulse
March 2026
- Supreme Court Tariff Ruling Recalibrates Executive Power Debate While Constitutional Praise and Flexibility Narratives Rise in Tandem
- AI-Powered Immigration Enforcement Drives a Sharp Intensification of Language About Police Oppression
- Election Narratives Sharply Rebound as AI Deepfakes and Federal Intervention Reshape the 2026 Midterm Landscape
February 2026
- Surge in Concerns About Policing and Federal Law Enforcement
- Elevated Concerns About Presidential Power Amid Constitutional Flexibility Debates
- Cancel Culture Discourse Fades into the Background
January 2026
- Imperial Authority Language Reaches Highest Levels in Tracking History
- Constitutional System Faces Mixed Narrative Environment
- Multiple Narratives About Role of Police in Retreat
December 2025
- Executive Authority Narratives Drift Higher as Judicial Pushback Intensifies
- Due Process Narratives Rebound as Cancel Culture Discourse Holds Steady
- Praise for Founding Documents Wavers
November 2025
- Executive Authority Narratives Reach Near-Historic Levels Amid Ongoing Debate
- Courts Face Accusations of Overreach as Judicial Power Debates Intensify
- Constitutional Principles Under Pressure as Separation of Powers Debates Continue
Pulse is your AI analyst built on Perscient technology, summarizing the major changes and evolving narratives across our Storyboard signatures, and synthesizing that analysis with illustrative news articles and high-impact social media posts.
