Who’s Deciding Where the Bombs Drop in Iran? Maybe Not Even Humans.
New Republic 1 week ago


Within hours of the first U.S. and Israeli weapons exploding in Iran on Saturday morning, at least 153 people, many of them children, according to the BBC, died in an explosion at a girls’ school in southern Iran. The bombing was originally reported by the Iranian news agency. Israel said it wasn’t aware of any Israel Defense Forces operations in the area, and a U.S. spokesman said, “We take these reports seriously.”

We don’t know if the children died because of Israeli or U.S. weapons. But it might not matter. The two militaries have been working together on the planning for this attack and have been sharing technology that Israel has been practicing with on the civilian population of Gaza for more than two years. Assuming the report is accurate, it means that we are immediately witnessing the fullest expression of the most inhumane weapons of the twenty-first century: autonomous bombs and missiles. Their “autonomy” refers to the fact that humans need not be “in the loop” in any meaningful way when deciding where to target or whether to launch such weapons. A combination of human intelligence collected over time; geolocation of mobile phones; and recent images taken by satellites, drones, or people who post images on social media sites contributes to the data these systems digest to guess if an enemy combatant is present at the suspected target. Military officers merely outsource their own moral and military judgment to proprietary systems. Then things blow up.

When such systems get it wrong, they get it very wrong. Civilians, often children, pay the price for the errors of autonomous systems. We have seen it for five years in Ukraine. We have seen it for three years in Gaza. We see it in real time this week in Iran. And it’s unconscionable.

There is a phrase that has quietly become one of the most consequential in American national security law, and it appears in no statute, no executive order, no treaty.
It is a phrase that the Pentagon began demanding AI companies accept as a condition of doing business with the U.S. military: any lawful use.

Last week, Anthropic, one of the most advanced artificial intelligence laboratories in the world, declined to accept those words without restriction. On February 26, CEO Dario Amodei released a statement listing conditions the Pentagon was imposing on contractors and said: “These threats do not change our position: we cannot in good conscience accede to their request.”

The next day, the Trump administration reached for instruments of punishment normally reserved for the People’s Republic of China. President Trump posted on Truth Social that “the leftwing nut jobs at Anthropic” had made “a DISASTROUS MISTAKE” and directed every federal agency to immediately cease all use of Anthropic’s technology. Defense/War Secretary Pete Hegseth declared Anthropic a “supply chain risk to national security,” a designation typically applied to Huawei and other companies deemed to be extensions of hostile foreign states. Any contractor, supplier, or partner doing business with the U.S. military was immediately barred from any commercial activity with Anthropic.

Anthropic had asked for two things: a contractual guarantee that its generative AI model, Claude, would not be used as part of autonomous weapons systems and a guarantee that it would not be used for mass domestic surveillance of American citizens. These were not new conditions. Anthropic had maintained them explicitly in its usage policy since June 2024, before the Pentagon contract, worth up to $200 million, was signed in July 2025. The administration knew the terms. It signed anyway. And then it decided, months later, that those terms were intolerable.

To understand what “any lawful use” means in practice, it helps to understand what it is designed to eliminate: the possibility that a private company could tell the U.S. military how its technology may or may not be used.
In the Pentagon’s view, once a tool is purchased, the buyer sets the terms of its application. The vendor’s values, safety commitments, and ethical frameworks become, at the moment of transaction, irrelevant. The military has its own lawyers. It has its own review processes. It has its own standards. And given the degradation of legal safeguards and restrictions on the entire executive branch in the last year, almost any act of depravity or mass murder could be ruled “lawful” by a Pentagon that has purged itself of its most moral and ethical lawyers and leaders and a Supreme Court devoted to maximizing Trump’s autocracy.

The same logic—that internal military review is sufficient to govern the deployment of powerful technologies—underwrote the expansion of the NSA surveillance state revealed by Edward Snowden. It underwrote the algorithmic targeting programs in Yemen and Somalia, where AI-assisted kill lists generated strikes that killed the wrong people with a regularity that official reviews consistently declined to examine.

In his February 26 statement, Amodei said: “Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.” This technical conclusion is shared by a significant portion of the AI research community, grounded in the basic observation that large language models hallucinate, misidentify, and fail in ways that are not fully predictable. In a military context, an unpredictable failure is a dead civilian with no accountable author.

The military wants a powerful tool, available at scale, deployable at speed, unconstrained by the values of its designers.
It wants an AI that will do what it is told without the inconvenience of a conscience embedded in its terms of service.

The category of “supply chain risk”—previously occupied chiefly by companies suspected of channeling data to Beijing—now encompasses companies that ask that their AI not be used to kill people without a human being making the final decision.

The most alarming yet unsurprising element of what followed is how quickly the rest of the industry pandered to Trump. Within hours of the administration’s announcement, OpenAI CEO Sam Altman posted that his company had struck a deal with the Department of Defense to deploy its models on classified networks. Altman claimed that OpenAI’s agreement included the same prohibitions on autonomous weapons and domestic surveillance that Anthropic had demanded—the exact conditions for which Anthropic had been declared a supply chain risk. This contradiction was left unaddressed. Elon Musk’s xAI, whose Grok system stands to inherit Anthropic’s classified network access, had already agreed to the morally noxious standard of “any lawful purpose.” Musk himself posted that “Anthropic hates Western Civilization.” The immediate beneficiary of Anthropic’s ejection was its direct commercial competitor, operated by a man who is simultaneously one of the administration’s most powerful insiders. In military and intelligence matters, the stakes are quite high for this sort of crony anti-capitalism.

A supply chain risk designation is not merely a policy decision. It is a weapon of economic coercion. It means that any company doing business with the Pentagon must certify it has no commercial relationship with Anthropic. The effect is not to remove Anthropic from one government contract. It is to make Anthropic radioactive to any enterprise with government ambitions, which in the technology sector is most enterprises. This could include major universities.
Northeastern, Syracuse, Dartmouth, and Howard universities have all announced partnerships with Anthropic to meld its products with teaching and research missions.

Claude was, at the time of its ejection, the only AI model deployed on the Pentagon’s classified networks. It was used, according to reporting by The Wall Street Journal, in the operation to capture Venezuelan President Nicolás Maduro. It could have been used in an operation against Iran (and might be used today, because Claude is embedded in many essential services used by the military, including services provided by Palantir). The military and Palantir chose Claude because it had the level of quality control necessary for the most sensitive and demanding tasks.

The systems that will replace Claude in classified environments—Musk’s Grok and, presumably, OpenAI’s models under whatever deal Altman has negotiated—will arrive with fewer commitments to autonomous weapons restrictions and will be operated by companies that have demonstrated they will accommodate the administration’s demands. They are also notoriously shoddy products produced by megalomaniacal CEOs. This could endanger U.S. service members and contractors, as well as further endanger Iranian civilians.

What those systems will be used for, under what circumstances, with what human oversight, and subject to what review when something goes wrong—these questions have no public answers, because the administration’s entire posture has been that such questions are none of a private company’s business.

When the government asserts the right to use powerful AI without vendor-imposed constraints, and punishes the vendor that declines to comply, it eliminates one of the only remaining points of friction in the kill chain. The vendors who remain are the ones who said yes. And what they said yes to was, deliberately and explicitly, left undefined. This was not a good situation last week.
We should not have to depend on the whims of technology oligarchs to protect lives and our democracy. Sadly, that is the state of American governance in the twenty-first century.

Democracies demand accountability. In many ways, accountability is forgotten in America. It’s been evacuated from our government by the Trump movement, first by removing legal safeguards and the inspectors general who were there to enforce them, and then by the imposition of opaque artificial intelligence systems throughout the bureaucracy, often at the direction of Musk’s boy army, DOGE. Accountability requires, at minimum, that someone be held responsible for consequential decisions. Someone should be punished when things go badly. Also, someone should be honored and rewarded when things go right. Autonomous weapons are, by design, accountability-dissolving machines. When an algorithm makes a targeting recommendation, and a human approves it in seconds without adequate information (or not at all, as many systems might be human-free), and the AI system that generated the recommendation is governed by a contract that says it can be used for “any lawful purpose,” the chain of accountability does not merely become hard to trace. It becomes nonexistent.

While the Anthropic debacle is a fresh assertion of the autocratic power Trump wields over the private sector, it’s just a corrupt twist in a long plot the militaries of the world have been running for at least a decade.

The war in Ukraine has become a laboratory for artificial intelligence and autonomous weapons. Russian and Ukrainian soldiers, civilians, farmers, and grandmothers are the subjects and victims of the experiment. What is being tested is nothing less than the proposition that machines guided by algorithms can make life-and-death decisions faster, cheaper, and more reliably than humans.
We should be deeply unsettled by how enthusiastically this proposition is being embraced, and how little democratic deliberation has accompanied it.

Ukraine and Russia have both deployed what analysts cautiously call “loitering munitions”—drones that can hover over a battlefield, identify targets, and strike, sometimes with minimal human intervention in the final moments of the kill chain. Ukraine’s Brave1 defense tech cluster, established in 2023, has accelerated the integration of artificial intelligence into drone warfare, enabling target-recognition systems that draw on machine learning to distinguish combatants from civilians—or, rather, to attempt such distinctions under battlefield conditions that confound even trained human observers. Russia, for its part, has deployed the Lancet-3, a loitering munition with alleged semiautonomous targeting capabilities, responsible for the documented destruction of Ukrainian armor, artillery, and infrastructure. The Lancet’s lethal power has rattled NATO planners who spent decades preparing for a different kind of war.

The International Committee of the Red Cross has warned, repeatedly and with increasing alarm, that autonomous weapons systems risk violating international law precisely because the contextual moral judgments required in warfare—proportionality, distinction, precaution—are not reducible to pattern recognition. Yet the pressure of battlefield necessity, compounded by the venture-capital logic now deeply embedded in defense procurement, pushes development forward regardless. Ukraine’s innovative and genuinely heroic use of drone technology to resist a brutal invasion should not blind us to the longer arc. Every algorithm trained on Ukrainian targeting data, every autonomous engagement protocol normalized by this conflict, becomes a template.
Silicon Valley’s defense-tech renaissance—Palantir’s celebrated role in Ukrainian battlefield analytics chief among its showcases—is not driven primarily by solidarity with a besieged democracy. It is driven by contracts, markets, and the opportunity a live war provides to experiment.

While Ukraine has been a vast lab, in which civilian casualties have been considered necessary externalities of the conflict, the genocide in Gaza seems like something far different. It is not only a humanitarian catastrophe. It is a demonstration project.

On December 26, 2024, The New York Times published one of the most significant pieces of investigative journalism to emerge from the Gaza war. Reviewing dozens of military records and interviewing more than 100 soldiers and officials, reporters documented how Israel had “severely weakened its system of safeguards meant to protect civilians, adopted flawed methods to find targets and assess the risk of civilian casualties, routinely failed to conduct post-strike reviews of civilian harm or punish officers for wrongdoing, and ignored warnings from within its own ranks and from senior U.S. military officials about these failings.” This was not a rogue operation. It was policy, set at the highest levels soon after the October 7, 2023, attacks on Israeli civilians.

That order—unprecedented in Israeli military history—transformed the rules of engagement within hours of the Hamas attack. Where previous conflicts had permitted strikes only when officers concluded no civilians would be hurt, or occasionally when up to five civilians might be endangered, the new order instantly elevated the acceptable threshold to 20 civilian deaths per strike as a standing baseline. Suddenly, the military could target rank-and-file militants at home, surrounded by families.
The definition of a legitimate military target expanded to include lookouts, money changers suspected of handling Hamas funds, and the entrances to tunnel networks typically located inside residential buildings. A secondary order issued on October 8 went further still, declaring that strikes on military targets could “cumulatively endanger up to 500 civilians each day.” The effect was swift and catastrophic. Israel fired nearly 30,000 munitions into Gaza in the war’s first seven weeks—more than in the next eight months combined.

Since October 2023, the Israeli military has deployed AI systems at a scale that has no precedent in the history of urban warfare. The most extensively documented of these is a system called Lavender, reported in April 2024 by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, based on testimony from multiple Israeli intelligence officers. Lavender used machine learning to generate a list of tens of thousands of Palestinians flagged as suspected militants—at one point reportedly numbering around 37,000 individuals—and operated with what sources described as a 10 percent error rate, which its operators were said to have accepted as tolerable.

The Times investigation substantially confirmed and extended this picture. Intelligence officers, working under intense pressure to propose new targets each day after burning through a prewar database of vetted targets within the first days of the conflict, turned to automated surveillance systems and AI to triangulate data and locate militants at a pace no human team could manage manually. Israel has long maintained a database listing phone numbers and home addresses of suspected militants. Tapping Gaza’s telecom networks, intelligence officers tracked calls associated with people on the list.
But the databases, according to six officers interviewed by the Times, sometimes contained outdated data, increasing the likelihood of misidentifying a civilian as a combatant. And the volume of calls was far too great for manual review. Artificial intelligence was deployed to close the gap.

This is what the military-industrial complex looks like when it has been through the Silicon Valley wash cycle: the same logic of optimization, scale, and throughput that gave us recommendation engines and behavioral ad targeting now applied to the industrial generation of kill lists. The language is sanitized (“machine-assisted decision-making,” “target generation,” “automated computing systems”), but the function is not. Israeli intelligence officers themselves used starker language, describing their operations as a “mass assassination factory,” according to +972 Magazine.

The IDF has disputed key elements of the +972 reporting, insisting that human commanders retain final authority over strikes and that AI systems function as decision-support tools rather than autonomous executors. The Times investigation added necessary nuance to this claim. Yes, human officers formally approved targets, but when hundreds of AI-generated targets are being processed each day, when verification is inconsistent across units, when a statistical model built on neighborhood cell phone data stands in for genuine surveillance of a specific building, the philosophical category of “meaningful human control” becomes a bureaucratic fiction rather than a genuine safeguard. The fog of war is more a flurry of drones and missiles now.

The legal architecture that was supposed to govern these questions, such as the Convention on Certain Conventional Weapons, has proven entirely inadequate to the speed at which autonomous and semiautonomous systems have been deployed.
States with advanced militaries have systematically blocked binding treaty negotiations on autonomous weapons, preferring voluntary guidelines that impose no enforceable constraints. The United States, which provides Israel with the weapons, the intelligence partnerships, and the diplomatic protection at the United Nations that make the Gaza campaign possible, has been among the most consistent opponents of binding international rules.

What was normalized in Gaza will not stay in Gaza. This is perhaps the most important thing to understand, and the thing that the framing of the current conflict as a local, bounded military operation most dangerously obscures. Every targeting algorithm stress-tested over Gaza’s densely populated streets, every AI system whose performance data is being collected in real time, represents intellectual property and operational knowledge that will flow, via export and emulation, through the global defense technology market into other conflicts, other theaters, other cities. Israel is among the world’s leading exporters of military technology, and its battlefield-proven systems have historically commanded premium prices precisely because they have been tested under live conditions. The Times investigation provides something these systems previously lacked: a detailed, independently documented record of how AI-assisted target generation performs at scale in a major urban war.

There is a concept in ethics called “moral distance.” It refers to the psychological and cognitive space that separates a person who causes harm from the harm itself. Autonomous weapons systems are, among other things, moral distance machines.
They allow states to industrialize killing while diffusing responsibility across systems, operators, commanders, procurement officers, software engineers, and shareholders, until no individual person feels, or can be made to feel, that they bear meaningful accountability for a specific death.

The International Committee of the Red Cross, Amnesty International, Human Rights Watch, and a growing coalition of U.N. special rapporteurs have all called for a halt to the deployment of autonomous weapons systems without enforceable accountability mechanisms. These calls have been met with the polite indifference that powerful states reserve for international norms they find inconvenient.

The question democratic societies must answer—urgently, before the template is fully set—is whether they intend to remain passive consumers of the demonstration, or whether they will demand that the architects of algorithmic warfare be held to the same standards of accountability they claim to believe in when the cameras are pointed somewhere more comfortable.

The deaths of the girls in southern Iran should haunt us forever. They did not ask to live in a world in which billionaires and their political partners play with remote-control toys for fun and profit. We have major moral questions to ask about how war should be waged going forward. But suddenly, this week, we have an urgent call to voice disgust as well as dissent.

What About Those Celebrating Iranians? We’ve Seen This Movie Before.
New Republic 1 week ago


The MAGA faithful, well schooled over the years in spotting what they believe to be instances of liberal “hypocrisy,” have taken to the interwebs since Saturday to say things like: Just look at all those overjoyed Iranians, you stupid libtards. Does this not make you happy? Is this not what the United States of America is supposed to be for? Is your hatred of Donald Trump so all-consuming that you’d rather see this operation fail, and the democratic aspirations of those poor Iranians, their breasts pounding with hope for the first time in decades, crushed?

These are the kinds of questions that seem, to MAGA loyalists, to be conversation-enders—absolutely open-and-shut. But they are not open-and-shut at all. In fact, they’re quite jejune. If you know or bother to recall a little history—of the world, of the region, indeed of U.S.-Iran relations—you know enough to know that those celebrations, while absolutely, 100 percent understandable coming from members of the Iranian diaspora who have relatives who are either living grim lives or are in prison or perhaps dead, are alas premature.

I’ll get to that history, but first, in the interest of transparency, let me answer the three questions I posed above. First, yes, the sight of Iranians celebrating is a nice thing to see, although only to a point, as I’ll explain. Second, yes, the spread of liberal democracy is what the United States of America is supposed to stand for; it simply isn’t clear to me (and many millions of others) that things are quite that simple here. We’ve observed Trump for over a decade, after all, and he has shown no such commitment to either democracy or liberation—he would just prefer for more people to be living under his boot, as opposed to someone else’s. And it might surprise you to learn my answer to the third question: No, actually.
I consider Trump a walking malignancy in virtually every imaginable way, a cruel charlatan and sociopath who has done untold damage to the nation and world over the years. But if the Islamic Republic were to fall tomorrow and Iran were to turn into another Sweden, and Trump got all the credit for it, I’d be very happy for the long-suffering people of Iran and would likely even admit that Trump did a good thing! Alas, there isn’t much chance of that happening. The odds are better than even that those hopeful people dancing in the streets Saturday will be disappointed. Perhaps crushed. I’m afraid history tells us so.

Many of the people of Ukraine cheered the Wehrmacht when the Germans marched through in 1941. Why shouldn’t they have? The Germans were there to topple Stalin, who had starved four million of them to death in the prior decade. The Germans will save us from Dzhugashvili’s madness, many Ukrainians thought; indeed, quite a few became fascist fighters, under the leadership of the odious Stepan Bandera. Well … things didn’t quite work out as hoped. The Nazis’ economic exploitation of Ukraine was remorseless, their treatment of the population extremely violent and punitive. Ukrainians were Slavic and considered Untermenschen (under-men) by the Germans. QED. Erich Koch, chosen by Hitler to be the Ukrainian Reichskommissar, once said: “Even if I find a Ukrainian who is worthy to sit with me at table, I must have him shot.”

Well, an interesting story, you say, but pretty remote from 2026 Iran. Not really, but—as you wish. So let’s consider the example of Iraq in 2003.

Then, as now, there was much celebrating by Iraqis across the world when George W. Bush announced the start of Operation Iraqi Freedom. This was fueled in part by the grotesquely irresponsible promises of people like Richard Perle and Paul Wolfowitz that the war would last about as long as one of Cher’s marriages.

Chances are you know what happened. Or is your memory really that short?
The war was a disaster for the United States for four years before the 2007 troop surge reduced the violence. As many as 200,000 Iraqis died. The whole mess, which the likes of Perle and Wolfowitz told us would pay for itself, cost the United States more than $2 trillion. Is Iraq a democracy today? Maybe, if you squint at it the right way. They have elections (which may be more than we can soon say). But the Sweden-based V-Dem Institute, which rates all the countries of the world on a set of democratic measures, calls Iraq an “electoral autocracy” (the third-worst of four categories) and places it in the bottom 30 to 40 percent of countries on its Liberal Democracy Index.

Better than life under Saddam? Yes, but not by nearly as much as those 2003 Iraqis would have hoped. And that’s after many years of civil war and turmoil. Even cursory knowledge of this history ought to be enough to prevent any industrial-scale MAGA finger-wagging at those of us who aren’t popping champagne corks just yet.

Finally, there are lessons to be learned from the last time an Iranian government fell. That was the Shah’s regime, of course, back in 1979, when Iran flipped from being a corrupt and savage American client state under the Shah to being a corrupt and savage bane of America under Ayatollah Khomeini. If you’re so inclined, read this brilliant and detailed BBC report from 2016, when new documents became available, about how the Carter administration tried and failed to manage the transition from the one to the other.

Khomeini, in exile in Paris, made lovely promises. “You will see we are not in any particular animosity with the Americans,” he said. He vowed that his Islamic Republic would be “a humanitarian one, which will benefit the cause of peace and tranquility for all mankind.”

The central tension in Iran then was between the military and the clerics.
The Carter team, once it gave up on the Shah, tried to manage events such that a new regime led by Khomeini would be Islamist but not radically so and would reach certain accommodations with secular parties and the generals. The military made a number of concessions, the BBC wrote, but: “All the concessions made by the military weren’t enough for Khomeini. On 15 February four senior military generals were summarily executed on the rooftop of a high school. It was just the beginning of a slew of executions.”

The point is—these situations are easy to misjudge and extremely hard to manage by presidents who aren’t corrupt criminals. Power reverts toward the extremes in such cases because power gravitates toward people with money and guns, and peace-loving liberals who want secular democracy to flourish tend not to have stockpiled a lot of either of those things.

Ah, but the MAGA acolyte will scoff—that was weak Jimmy Carter, not our latter-day Rambo, Trump. Think that if you wish. But two points. First, Carter is hardly the only president to have misjudged such situations. Johnson in Vietnam, Reagan in Central America, Bush in Iraq, Obama in Libya—these situations were all different, but they have one thing in common: an outcome considerably at odds with the one the president was trying to achieve and sold to the American people.

Second, there is the matter of Trump himself. He knows nothing about Iran. That BBC article is around 4,000 words. I’d be shocked if he’s read half that many words about this country’s long and often glorious history.
And it remains a mystery how he flip-flopped, to use a phrase Republicans once favored, from saying no wars to bleating: “The heavy and pinpoint bombing, however, will continue, uninterrupted throughout the week or, as long as necessary to achieve our objective of PEACE THROUGHOUT THE MIDDLE EAST AND, INDEED, THE WORLD!” That is so thoroughly neoconnish in sentiment that Dick Cheney or Don Rumsfeld couldn’t have said it better.

I understand why Iranians are trying to be hopeful. The regime that has been destroying their country for 47 years is a nightmare. But this situation calls for a public that remembers a little history and demands democratic accountability. Trump has plenty of applauding seals.

Thieves, Liars And Genociders Declare War On Iran
Scheer Post 1 week ago


By Nate Bear, Do Not Panic Substack

Too much to say and almost nothing to say. The US and Israel have declared war on Iran. Of course they have. It was, as I argued on Monday, hard to see any other outcome. It’s the early hours, so it’s unclear exactly what has been hit. We know […]

The Gulf next door
Semafor 1 week ago


The drones and missiles that got through the UAE’s defenses confirmed what had always been a lingering sense of vulnerability.