"We know that AI-powered cyber-physical systems (CPS) will scale in society. The challenge we face now is how to do that responsibly and sustainably. If we act proactively, we can avoid some of the negative impacts we have seen during other technological leaps. We need to start creating now for that future 30 years hence, when we are completely embedded in both a digital and physical environment, and are experiencing a climate unrecognisable from the climate of today [...] for a future characterised by economic prosperity, social equality and wellbeing, and environmental sustainability."
In Our Image: Artificial Intelligence and the Humanities
Artificial intelligence has infiltrated our daily lives—in the ways we conduct business, govern, provide healthcare and security, and communicate. The large-scale cultural and societal implications of these changes—and the ethical questions they raise—pose a serious challenge as we embrace a future increasingly shaped by the implementation of AI technology. This collection highlights perspectives from leading humanists, scientists, engineers, artists, writers, and software company executives collectively advancing inquiry into key emerging questions.
"David Beckham does not speak Arabic, Hindi or Mandarin. But when the soccer legend starred in a PSA for malaria awareness this spring, he effortlessly switched among these and six other languages, thanks to cutting-edge technology that could soon change how Hollywood localizes its movies and TV shows..."
Fluxus Landscape is an art and research project created by Şerife Wong in partnership with the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University, with support from the Stanford Institute for Human-Centered Artificial Intelligence. The project lends a nuanced perspective to a rapidly growing and complex field, and users are encouraged to edit and build upon the work. Research meets art.
Thank you so much for participating in our presentation today. We would greatly appreciate any suggestions for historical or fictional narratives that we could add to the CEN website to support the AI Responsible Curriculum initiative. You can share book titles, movie titles, links, articles, or anything else we could use to search for these narratives.
D&S researchers Madeleine Clare Elish and Tim Hwang discuss the social challenges of AI in their collection of essays, An AI Pattern Language.
From the authors:
How are practitioners grappling with the social impacts of AI systems?
In An AI Pattern Language, we present a taxonomy of social challenges that emerged from interviews with a range of practitioners working in the intelligent systems and AI industry. In the book, we describe these challenges and articulate an array of patterns that practitioners have developed in response. You can find a preview of the patterns on this page, and you’ll find more context, information, and analysis in the full text.
The inspirational frame (and title) for this project is Christopher Alexander’s unique collection of architectural theory, A Pattern Language (1977). For Alexander, the central problem is the built environment. While our goal here is not as grand as the city planner’s, we took inspiration from the values of equity and mutual responsibility, as well as the accessible form, found in A Pattern Language. Like Alexander’s patterns, our document attempts to develop a common language of problems and potential solutions that appear in different contexts and at different scales of intervention.
While we believe the views we present are significant and widely held, these patterns are neither comprehensive nor prescriptive. Rather, this document is an experiment in cataloguing and catalyzing. AI is not out of our control, and An AI Pattern Language calls attention to the ways in which humans make choices about the development and deployment of technology. This text was created in the spirit not of an answer, but of a question: how can we design the technological future in which we want to live?
Elish, Madeleine Clare, and Tim Hwang. "An AI Pattern Language." Data & Society, September 29, 2016.
"What happens when robots not only learn to write well, but the tech becomes easily accessible and cheap? As Hal Crawford explains, it’ll likely be teachers who feel the effects first."
Artificial intelligence (AI) is arguably the driving technological force of the first half of this century, and will transform virtually every industry, if not human endeavors at large. Businesses and governments worldwide are pouring enormous sums of money into a very wide array of implementations, and dozens of start-ups are being funded to the tune of billions of dollars.
It would be naive to think that AI will not have an impact on education—au contraire, the possibilities there are profound yet, for the time being, overhyped as well. This book attempts to provide the right balance between reality and hype, between true potential and wild extrapolations. Every new technology undergoes a period of intense growth in reputation and expectations, followed by a precipitous fall when it inevitably fails to live up to those expectations, after which there is a slower growth as the technology is developed and integrated into our lives.
Holmes, Wayne, Maya Bialik, and Charles Fadel. Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Center for Curriculum Redesign, 2019.
"It was an exciting discovery when I read Condiciones Extremas by Juan B. Gutiérrez. Beyond the outstanding quality of the content, this digital novel also impressed me with its use of innovative technology. New technology has always amazed me. In this case innovation in literature with AI (artificial intelligence), immediately called my attention." This Humanities Moment was collected as part of the 2021 Graduate Student Summer Residency Program.
By teaching machines to understand our true desires, one scientist hopes to avoid the potentially disastrous consequences of having them do what we command.
Imagine a classroom in the future where teachers are working alongside artificial intelligence partners to ensure no student gets left behind.
The AI partner’s careful monitoring picks up on a student in the back who has been quiet and still for the whole class and the AI partner prompts the teacher to engage the student. When called on, the student asks a question. The teacher clarifies the material that has been presented and every student comes away with a better understanding of the lesson.
This is part of a larger vision of future classrooms where human instruction and AI technology interact to improve educational environments and the learning experience.
Venell, Tessa. "Artificial Intelligence and the Classroom of the Future." BrandeisNOW, November 19, 2020.
Existing and emerging technologies can help save teacher time—time that could be redirected toward student learning. But to capture the potential, stakeholders need to address four imperatives.
Bryant, Jake, Christine Heitz, Saurabh Sanghvi, and Dilip Wagle. "Artificial Intelligence in Education: How Will it Impact K-12 Teachers?" McKinsey & Company, January 14, 2020.
Where does Blackness manifest in the ideology of Western technoculture? Technoculture is the American mythos and ideology; a belief system powering the coercive, political, and carceral relations between culture and technology. Once enslaved, historically disenfranchised, and never deemed literate, Blackness is understood as the object of Western technical and civilizational practices. This presentation is a critical intervention for internet research and science and technology studies (STS), reorienting Western technoculture’s practices of “race-as-technology” to visualize Black people as technological subjects rather than as “things”. Hence, Black technoculture.
It’s getting easy to create convincing—but false—videos through artificial intelligence. These “deepfakes” can have creative applications in art and education, but they can also cause great harm—from ruining the reputation of an ex-partner to provoking international conflicts or swinging elections. When seeing is not believing, who can we trust, and can democracy and truth survive?
In an increasingly data-driven, automated world, the question of how to protect individuals’ civil liberties in the face of artificial intelligence looms larger by the day. Coded Bias follows M.I.T. Media Lab computer scientist Joy Buolamwini, along with data scientists, mathematicians, and watchdog groups from all over the world, as they fight to expose the discrimination within facial recognition algorithms now prevalent across all spheres of daily life.
While conducting research on facial recognition technology at the M.I.T. Media Lab, Buolamwini, a "poet of code," made the startling discovery that the algorithm could not detect dark-skinned faces or women with accuracy. This led to the harrowing realization that the very machine-learning algorithms intended to avoid prejudice are only as unbiased as the humans and historical data programming them.
Coded Bias documents the dramatic journey that follows, from discovery to exposure to activism, as Buolamwini goes public with her findings and undertakes an effort to create a movement toward accountability and transparency, including testifying before Congress to push for the first-ever legislation governing facial recognition in the United States. The film also includes data journalist Meredith Broussard; Silkie Carlo, director of Big Brother Watch, who is monitoring the trial use of facial recognition technology by U.K. police; Virginia Eubanks, author of Automating Inequality; Ravi Naik, human rights lawyer and media commentator; Dr. Safiya Umoja Noble, author and expert on algorithmic discrimination and technology bias; and Zeynep Tufekci, author of Twitter and Tear Gas.
We can have democracy, or we can have a surveillance society, but we cannot have both.
Zuboff, Shoshana. "The Coup We Are Not Talking About." The New York Times, January 29, 2021.
"Applying Social Information Processing Theory to Investigate the Use of Non-Verbal Cues in Xbox Live," co-authored with Dr. Wayne Buente.
"Culturally Responsive Exercise: Employing the Xbox Kinect as a Technology-Based System to Address Health Disparities for Women of Color," co-authored with student researcher Renata McCormack.
"Today, data science is a form of power. It has been used to expose injustice, improve health outcomes, and topple governments. But it has also been used to discriminate, police, and surveil. This potential for good, on the one hand, and harm, on the other, makes it essential to ask: Data science by whom? Data science for whom? Data science with whose interests in mind? The narratives around big data and data science are overwhelmingly white, male, and techno-heroic. In Data Feminism, Catherine D'Ignazio and Lauren Klein present a new way of thinking about data science and data ethics—one that is informed by intersectional feminist thought.
Illustrating data feminism in action, D'Ignazio and Klein show how challenges to the male/female binary can help challenge other hierarchical (and empirically wrong) classification systems. They explain how, for example, an understanding of emotion can expand our ideas about effective data visualization, and how the concept of invisible labor can expose the significant human efforts required by our automated systems. And they show why the data never, ever “speak for themselves.”
Data Feminism offers strategies for data scientists seeking to learn how feminism can help them work toward justice, and for feminists who want to focus their efforts on the growing field of data science. But Data Feminism is about much more than gender. It is about power, about who has it and who doesn't, and about how those differentials of power can be challenged and changed."
As data are increasingly mobilized in the service of governments and corporations, their unequal conditions of production, their asymmetrical methods of application, and their unequal effects on both individuals and groups have become increasingly difficult to ignore for data scientists—and for others who rely on data in their work. But it is precisely this power that makes it worth asking: “Data science by whom? Data science for whom? Data science with whose interests in mind?” These are some of the questions that emerge from what we call data feminism, a way of thinking about data science and its communication that is informed by the past several decades of intersectional feminist activism and critical thought.

Illustrating data feminism in action, this talk will show how challenges to the male/female binary can help to challenge other hierarchical (and empirically wrong) classification systems; it will explain how an understanding of emotion can expand our ideas about effective data visualization, how the concept of invisible labor can expose the significant human efforts required by our automated systems, and why the data never, ever “speak for themselves.” The goal of this talk, as with the project of data feminism, is to model how scholarship can be transformed into action: how feminist thinking can be operationalized in order to imagine more ethical and equitable data practices.