AI & the Movement
Many people in the movement place AI in one of two camps:
- It’s a scam
- It’s a force of evil ushering in a new era of darkness
While there is truth to both, reducing AI to its worst parts means dismissing fundamental shifts in our work and in the lives of the people we care about.
There are understandable reasons for the skepticism. AI is trending at a hard time in our space; claims that this technology will “fix everything” feel like a bad joke; and our work is so deeply human that anything framed as a replacement for people is, on some gut level, offensive to what we hold dear.
But dismissing AI is causing us to turn away at a time when we should be leaning in. The future is not yet written, and every day people across the AI space engage in discussion about what that future should look like.
I believe the community has an opportunity to help write that future. To do so, we need to develop a shared vision for this technology - a community model for AI that combines a communal understanding of these tools with guidelines to identify good use cases and make them work for us while also contesting bad actors and shutting them down.
A community model of AI
As a movement, we need a shared way of thinking about AI and its potential impact, one that helps us learn where these tools can help and shut down bad applications of the technology.
A starting place I’ve found helpful is to think about the scale of the impact of AI as “the next internet.” The internet transformed our lives, families and cities and changed almost all of the systems we use and many of the ones we work to change. Over time, I expect AI will also grow to be a part of our lives in ways it’s hard to imagine right now.
Like the internet, AI as a technology is capable of great evil. But it also can be a force for good, and an asset for the work we do. Internet tools like social media have been used to bring awareness to issues and hold bad actors accountable, build and mobilize bases, and organize leaders within and across movements at huge scale. AI tools are already helping organizations apply for funding, conduct research and analysis for campaigns, and contest police fines & fees.
By building a community model of AI, we can fight bad use cases while building knowledge and power in communities that are already experiencing an influx of AI into their lives, jobs and homes.
Why now?
AI is already here, and our communities will feel its potential harms sooner and more intensely than others. I believe AI presents a higher risk for our communities for the following reasons:
- AI will recreate the bias of the people building it and the world it is built in. There are tons of examples of AI tools reproducing racism, and the extreme efficiency of AI systems will supercharge the effects at a scale that’s hard to imagine right now (even racists need to sleep - AI doesn’t).
- AI isn’t inherently good and will do bad things if bad actors control it.
- New tech - especially surveillance tech - is always tested on communities of color first. AI is already being used to bolster surveillance, and the “efficiency without ethics” of AI tools allows people to aim for nightmare scenarios (the ICE director wants to run deportations like ‘Amazon Prime for human beings’).
- Most people working on AI already acknowledge it will make inequality worse by concentrating wealth among the people who control it. McKinsey found that AI could grow the racial wealth gap by $43 billion annually over the next 20 years.
- No one - not even the people building AI - fully understands how it works. A lot of incredibly important decisions (education, legal, healthcare, housing, finance) are at risk of being handed over to a black box. People will also use AI’s complexity to hide bad intentions and to remove human accountability from decision making. People with fewer resources will get these systems tested on them earliest and have the fewest options for fixing mistakes or addressing mistreatment.
- AI is already eliminating jobs by allowing one person to do the work of several and by automating away entry-level work (my first job wouldn’t exist today). This hurts people trying to start careers or get back into the workforce, including those reentering after incarceration. While AI may eventually create new jobs, folks with privilege, resources, and experience with AI from previous jobs will have an advantage.
Adding to the community model
Framing the potential impact of AI as “the next internet” is a useful starting place for our communal model. Beyond this shared understanding, the model could involve elements that identify community members at a higher risk of harm, coordinate our collective response, and help our own teams use these tools effectively and responsibly.
- Coordinating a collective response: Because AI recreates bias, we should push back against claims that it represents a fairer or more neutral way of making decisions.
- Identifying bad use cases: Bad actors are still bad actors. If the Palantirs of the world build AI or help the government deploy it, red flags should go up. We can use public pressure to limit bad actors’ ability to build, deploy, or profit from AI.
- Identifying existing work where AI can help: Companies are already pitching AI tools to government departments, promising cost savings that are hard to ignore. We should continue identifying bad contracts and contesting them, ideally before they are signed - AI could augment our oversight by analyzing budgets or summarizing contracts.
- Movement best practices for using tools: Be careful about privacy when using AI models like ChatGPT. These models are built on data stolen from across the internet, and you should assume that anything you enter is being stored by the company and could be used to develop future models. Remove sensitive information before using them, and don’t upload files that contain names, addresses, or other PII (see the sketch after this list). Some companies let you request that your data not be used to build future models, but there is no way to verify this is actually happening.
- Community members at higher risk: AI is going to improve tech companies’ ability to know our interests and target us with ads and content. The TikTok algorithm getting 10x better at serving up content will fuel technology and phone addiction, which can severely impact the education of young people and is linked with depression and mental health issues at all ages (many tech CEOs send their own kids to tech-free schools for this reason).
Looking ahead
Just like AI can serve evil, it can also be a force for good.
AI can help with the day to day of our work, freeing us up to spend more time on the hard problems that it can’t yet work on. We can leverage AI for research, counteract false narratives more efficiently, and help counter-narratives reach folks in our base at the right time. AI can help organizations identify and apply for funding, draft grant reports, and write compelling stories about impact. AI can help us cut through the bureaucracy that is too often weaponized against organizers and help organizations without technical teams build websites and even full applications.
If you could go back and speak to yourself at the dawn of the internet, what would you want to say? What would you tell yourself to prepare for - and what advice would you have for how the internet could be put to work for the movement?
We have that opportunity now. A community model for AI prepares us to engage with the potential impact of AI and articulate a vision of the future where our communities are recognized. I know that we can engage and guide this future while maintaining focus on what is most important - our love for people.