Marquee event featuring #LucySuchman, presented by Jackman Humanities Institute and Faculty of Information, University of Toronto, at Innis Town Hall
- Wendy Duff, Dean of the Faculty of Information, UToronto
- Alison Keith, Director, Jackman Humanities Institute
Also opening of Reading Faces, Reading Minds art exhibition at the Bissell Building.
This digest was created in real-time during the meeting, based on the speaker’s presentation(s) and comments from the audience. The content should not be viewed as an official transcript of the meeting, but only as an interpretation by a single individual. Lapses, grammatical errors, and typing mistakes may not have been corrected. Questions about content should be directed to the originator. The digest has been made available for purposes of scholarship, posted by David Ing.
Introduction by Brian Cantwell Smith, Professor of Artificial Intelligence and the Human
- Lucy Suchman, Professor, Lancaster University
- Knew since graduate school research, at Xerox PARC
- Xerox hired this student of Garfinkel and Sacks, ethnomethodology
- 1987 Plans and Situated Actions
- 2007 Human-Machine Reconfigurations
- President of Society for Social Studies of Science
- Deep critical sensibilities
- Original conceptions of AI, perceptive and effective in raising challenges in the reigning mythos
- Work now on automated warfare, lethal weapons, problems of immersive simulation (may hear some of that tonight)
(Video replay at https://www.facebook.com/iSchoolToronto/videos/342667976597711/).
Sketch of research in progress
- Technology in militarism
- Will unpack predispositions
- 3 indicative technologies
- U.S. citizen, moved to Canada
- Possibility of threat to the world
- U.S. military budget exceeds the next 8 combined
- 70 bases
- Discourse of U.S. vulnerability, also arms race based on AI
Counter-move to human and non-human
- Climate and environment
- Human relations with multiple species, multi-materiality
- Humanity as a cut in relations among humans
Karen Barad 2007 Meeting the Universe Halfway: One cut between friend and enemy
- Apparatus as specific material-discursive practices, holds matter and meaning together
- They produce differences: integral cut in the phenomenon produced
- Recognition, from social and bodily engagement with the world, more than just the eye
- Recognizability maintains use through practices
- Normative differential responsiveness
- Objects not intrinsically delineated, reiterated cuts
Brian Cantwell Smith, concept of registration, in The Promise of Artificial Intelligence: Reckoning and Judgment (MIT Press, forthcoming)
- Judgement requires holding rationality to registration
Judith Butler: Limits of registration to the site of possibility
- Bodies that Matter 1993: Nexus of power and knowledge that gives rise to intelligible things, but also the breaking point where it fails
- Both the conditions on which the object is constituted
- Domain of intelligible body
Butler, 2016 Frames of War: human and non-human targets
- Technology of war, but operation of technologies depends on how it works in the field
- Technological grasping and circulation
- Interpretative manoeuvre, who is a target?
Recognition of another as human, tied to the face
- Face recognition is machine analytics
- Correlations in face recognition have nothing to do with faces that we recognize
- Mapping of output to the faces depends on human sensemaking
- Last year, the ACLU ran tests in which Amazon’s Rekognition incorrectly matched 28 members of Congress to mugshots of people arrested for crimes; Amazon protested that the confidence threshold was set too low, but the fear of a threat leads to false positives, disproportionate arrest of people of colour
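The dispute over confidence levels turns on a simple threshold trade-off: the lower the similarity score required to declare a match, the more false positives slip through. A minimal sketch of that trade-off, using hypothetical similarity scores (not Rekognition’s actual API or the ACLU’s data):

```python
# Hypothetical (score, is_true_match) pairs from comparing probe photos
# against a mugshot database. A "match" is declared when score >= threshold.
scores = [(0.99, True), (0.95, False), (0.85, False), (0.82, True),
          (0.80, False), (0.76, False), (0.60, False)]

def false_positives(threshold):
    """Count non-matches wrongly declared matches at a given threshold."""
    return sum(1 for score, is_match in scores
               if score >= threshold and not is_match)

# The ACLU test reportedly used the 80% default; Amazon recommends 99%
# for law-enforcement use. Lowering the bar admits more false matches.
print(false_positives(0.80))  # → 3
print(false_positives(0.99))  # → 0
```

The point of contention: the vendor can always blame the threshold setting, while in practice deployments under pressure to find threats will favour lower thresholds, producing exactly the false positives described above.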
Situational awareness, core doctrine of command and control
- Major Brad Dostol, 2001, Center for Army Lessons Learned; first use of an armed drone in Afghanistan killed everyone except the intended target
- Ability to maintain a clear mental picture, friendly and threat
- Must provide situational understanding
- This is an idealized, god-like view of the situational environment
1. The training situation (immersive simulation, 2016)
- Culture and Cognitive Combat Immersive Training Demonstration, Flatworld Archive, a USC collaboration between the military and the film industry (photo)
- New pedagogies of training
- Anthropological critique in immersive virtual reality
- C3ITD at the Institute for Creative Technologies, demonstrated in 2006, amid the intensifying war in Iraq
- Military training assumes the recognition of that which is already a threat
- Division of friend-enemy, military training produces the body that the practices are meant to cover
- Reiteration, but materialization never quite fits
- Soldier: the enemy isn’t out there, but becomes viable depending
- Construction of the enemy is self-productive of the enemy, a mode of reiteration
- Rather than simply preparing the soldier, simulation contributes to producing the enemy
2. Remote control, to separate the soldier from the combat
- NY Times on QinetiQ, Nov. 2010, remote control: some armed robots are operated with video-game-style consoles
- Focus on interfaces that configure warfighters between subject and object
- International Humanitarian Law, developed after WWII, as part of the Geneva Conventions
- Rule #1: Parties must distinguish between combatants and civilians
- Adherence to rule #1 is problematic
- Derek Gregory, June 2015, Geographical Imaginations, noisy network of threats, sweet target, sweet child
- Full motion video from Predators and Reapers, all seeing eyes, to see a fully transparent battlespace, but vision is more than biological
- “Google’s march to the business of war must be stopped”, Lucy Suchman, May 16, 2018 — on the Maven project
- Objects of interest include vehicles, buildings and humans
- Further automation of dystopic regimes can only serve to worsen situations
- Rendering of precision air strikes, 2004-2015 by Pitch Interactive
- 190 identified as children, 534 civilians, 52 high value targets (known, and an imminent threat), then 2,565 people as “other”, not identified
- John O. Brennan, 2012: a weapon that can distinguish between a civilian and a terrorist
- U.S. counter-terror air strikes doubled in Trump’s first year, and secrecy has increased
3. Autonomous weapon systems
- e.g. Samsung’s sentry robot, on Korean DMZ, heat and motion detectors across 2 miles
- Precursor to fully-autonomous, any warm body is a body out of place
- Compared to Berlin Wall, actually a space between 2 walls
- Human figure in surrender, outside of combat, under the protection of the laws of war; the signal is clear
- Collateral damage, video released by Chelsea Manning and Wikileaks, “permission to engage”: killing of a Reuters cameraman, and of a man in a van who wanted to help, with his child in the van
- Camera was read as a weapon, van was read as military
- Fraught with uncertainty
- Misreading, premise
- Convention on Certain Conventional Weapons (CCW): prohibition on weapons deemed to be excessively injurious or to have indiscriminate effects
- Testimony on machine autonomy, 5 minutes: problems of situational awareness, accepted as distinction between legitimate and illegitimate
- Machine autonomy c.f. autonomy of a person
- Can’t fully specify in an autonomous weapon
- NGO Article 36 proposed meaningful human control, for acceptability, has been accepted by CCW
- Presence of a human in a loop, as essential for accountability
- However, with remote control, the human and the loop are themselves distributed and noisy
- Only meaningful to intelligence, in identification of targets
- Situational awareness requires an openness, including performative consequences
- Require time, and intent for communication, that warfare diminishes
Military power that diminishes distinction
- Human rights lawyer, Payam Akhavan, CBC Massey Lecture 2017
- Protection of human rights at a distance
- To serve humanity, we must first be broken open
- Fantasies of recognition
- More dangerous in the moment, into a horizon of endless war
- Challenge inevitable AI
- If we are part of the world becoming, then commitment to collective transformation
Any good news? This is depressing.
- With Akhavan, looking for an alternative
- Human rights lawyer, looking for multinational bodies with some force, creating legal structures to govern warfighting, so that people can be held to account
- Becoming part of interventions, in the hope that there will be a growing articulation of the problem, with the trajectory
- Hopeful of interruptions in the cybernetic loops
Any evidence (China, Russia) taking a different approach?
- Tend to frame in terms of militarism
- Not framing China and Russia as military powers
- Most people don’t appreciate how disproportionate U.S. military spending is: more than the next 8 combined
- Not an expert on China and Russia, need more discussion
- The Google Maven debate points to China and Russia as command economies, with democracies handicapped, e.g. the Defense Innovation Board between the military and Silicon Valley
- AI systems are fielded, and then found to be impossible, sent back into lab
Surveillance to be used for good, and not evil? Collateral damage. Forensic Architecture doing similar things, working within legalistic frameworks.
- Counter-surveillance initiatives are interesting
- Forensic Architecture recreates events, and then uncovers illegal acts that were kept in the dark
- Doesn’t help unwind pervasiveness of surveillance infrastructure, but does redirect
Any legal consensus on humans being held accountable for autonomous?
- Under laws of war, humans must be accountable
- Opens questions of how accountability will be maintained
- U.S. has lots of military lawyers, working on chains of accountability
- Is it the software developer? The operational commander?
- Have been in a meeting with a senior military officer about lethal autonomous weapons, who said that he wouldn’t deploy an unreliable weapon.
- A lot of pushback, but accountability is messy.
- What forums do we have to pursue those questions?
Criteria of recognizability: arguing that discrimination cannot be achieved makes killing by algorithm or autonomous weapon illegal, but human killing also becomes a form of illegal killing, undermining the entire apparatus, not just AI. What about the counter-argument of a threshold of reliability? If recognizability won’t work, is there a backup to the argument, to protect against a claim of a sufficient threshold of reliability?
- Not sure what that looks like.
- Rule #1 of IHL is clear.
- World of irregular warfare, has undermined that.
- Fundamental criterion
- Next principle of proportionality
- Proportionality still predisposes a distinction, about how much collateral damage
- A resource, feels problematic
- Want to hold the military, those who are operating in the governance structure, to the fundamental principle
International law as providing reasonable limits to armaments? Big contracts. Protecting from external threats. Naive to think laws will limit.
- Agree, not naive.
- Questions on the efficacy of the rule of law.
- Comes less from the idea of real threats than from the military-industrial complex, with deep and invested interests in militarism.
- Agree that law is only significant to the degree that there are bodies to enforce them, and the U.S. has undermined those
- Don’t want to see rational actors, reject that description as inevitable.
Has research taken you to what would happen in the U.S. when there aren’t funds available to do this work? Diversion of funds from education to military. Social chaos.
- How long can this continue in the absence of revitalization of domestic education and health?
- A political struggle, from the street to congress.
- Important to interrupt the assertion that the U.S. is engaged in necessary defence.
- Not in proportion to the threat
- Every person killed is a friend or family to another, network effects