By Erin Reilly
Like a lot of people, I’ve always been obsessed with sci-fi movies. For many of us, they’re our first introduction to Artificial Intelligence. Total Recall introduced us to implanting false memories to experience the thrills of people or places we dared not venture toward, and today we have VR that allows us to do just that, minus the memory implants. Data from Star Trek: The Next Generation, a self-aware, fully functional android who struggled to understand human behavior and its idiosyncrasies, taught us that Artificial Intelligence is always seeking to improve its knowledge and better understand how humanity works. However, the recent Facebook shutdown of an experiment in which two AI programs started talking to each other in their own language was an example of our fear of AI learning too much. Then you have the concerns of I, Robot, Blade Runner, and its recent sequel, Blade Runner 2049: films that each in their own way offer glimpses of a future run by machines rather than man, a world where machines are as flawed as humans.
Artificial Intelligence is often the engine behind our news, websites, recommendation engines, and more. We want computers to be smart. Maybe not smarter than us, but at least smart enough to make our lives easier, more rewarding, and catered to our every desire. Or do we? If intelligent machines are built to serve, to support humans’ needs, to do no harm, why do we make them in our image when, in actuality, “to be human is to be beautifully flawed,” as Eric Wilson, author of October Baby, shared?
Case in point: women make up less than 25% of the tech industry, and more than 50% of the tech industry is white men. It shouldn’t be a surprise that those who are building AI models represent only a sliver of those who need to be represented, and thus Artificial Intelligence is susceptible to the same biases and errors we see in our own society, including gaps in race, gender, disability, religious affiliation, sexual orientation, and languages and cultures around the globe. Now more than ever, we need to consider how to diversify artificial intelligence, or it will always be flawed.
I’ve been thinking about the flaws in AI over the past two years as I worked with a team to develop my first AI project on fan motivation and engagement. As a Media Literacy educator, I see AI as an emerging medium that impacts our daily lives and requires us to become literate in its practice. Media Literacy is “the ability to access, analyze, evaluate, create and act using all forms of communication” (National Association for Media Literacy Education). We can take the guiding questions of media literacy and apply them to Artificial Intelligence, asking questions like “Who are the authors and intended audiences?”, “What are the intended messages and meanings?”, and “What is being represented or not in how it was made?”
To explore this topic, I developed a briefing book and organized a half-day pre-conference workshop for educators this summer at NAMLE’s National Biennial Conference. Participants tackled real-world problems to better understand what AI is, explore the values and ethical norms of AI, and discuss how these important topics could be applied in classroom activities.
The groups brainstormed wonderful activities (watch the videos here). I encourage you to try them out on your own, or with your children or students if you’re a teacher. As an example, I’d like to share one of the activities that struck a chord with me and is timely as we move into the holiday season.
Ok, a fun and simple activity: try it out for a week and see what your profile reveals. Purchasing behavior provides a rich set of data about you that can be used in many (and potentially unpredictable) ways, and it’s often not transparent how this data is used.
Following the summer workshop, in November during National Media Literacy Week, I designed and organized a Symposium and Women’s Hackathon on Diversifying Artificial Intelligence in partnership with West Virginia University’s Media Innovation Center and MediaShift. This was WVU’s third year hosting the Hack the Gender Gap series, and Artificial Intelligence was the central topic.
The Diversifying Artificial Intelligence Symposium and Hackathon offered an opportunity for female students, faculty and industry professionals to come together and innovate new solutions to address these gaps in artificial intelligence. The extended weekend began with a Symposium I moderated, where panelists TrollBusters founder Michelle Ferrier; human rights attorney and social entrepreneur Flynn Coleman; and Susan Etlinger, an expert in AI, data and ethics at Altimeter, shared their expertise with concrete examples of how AI impacts diversity in our daily lives.
Michelle Ferrier explained how history continues to repeat itself in the design of new technologies. Referencing Kodak film (which was calibrated only to Caucasian skin tones), she shared, “The past is present in the same kinds of technologies as we design these tools to supposedly be smart for us. Here I was in the airport bathroom struggling to get the water from the automatic faucet to work but it couldn’t recognize my skin color. This is an ordinary thing that you wouldn’t think about but obviously has some significant effects on hygiene.”
Susan Etlinger told us about a group of researchers who discovered bias in Word2vec, a machine learning model used to train recommendation engines and search algorithms. It serves as the black box behind a variety of applications and websites we use in our daily lives, and it turns out to be blatantly sexist.
“Man is to programmer as woman is to homemaker. Data and algorithms aren’t perfect and pristine. They encode all of our biases whether we know we have them or not,” shared Susan Etlinger.
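The bias Etlinger describes shows up when word embeddings are used for analogy arithmetic: solving “man is to programmer as woman is to ?” by taking the vector for programmer, subtracting man, adding woman, and finding the nearest remaining word. Here is a minimal sketch of that arithmetic using tiny made-up vectors (real Word2vec embeddings have hundreds of dimensions learned from large text corpora; these toy numbers are invented only to illustrate how a gendered direction in the vectors drives the answer):

```python
import numpy as np

# Toy 3-dimensional "embeddings" invented for illustration only.
# The first dimension plays the role of a learned gender direction.
vectors = {
    "man":        np.array([ 1.0, 0.2, 0.1]),
    "woman":      np.array([-1.0, 0.2, 0.1]),
    "programmer": np.array([ 0.9, 0.8, 0.3]),
    "homemaker":  np.array([-0.9, 0.8, 0.3]),
    "engineer":   np.array([ 0.8, 0.9, 0.2]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, -1.0 opposite.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via the nearest neighbor of b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "programmer", "woman"))  # -> homemaker
```

Because the toy vectors encode a gender axis, the arithmetic lands on “homemaker” rather than another profession, which is exactly the pattern the researchers found in embeddings trained on real text.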
Flynn Coleman followed this with, “Our future is being decided by a relatively homogenous small group of people. You are our future. You’re going to be building our future, and our biggest question right now is how we’re going to infuse values and ethics into these artificial machines.”
These are just a few highlights of the expertise offered to participants: insights into the many gaps that need solutions, with fresh eyes and different perspectives at the center of development.
The following days of the Hackathon were long and intensive for the teams, with additional voices and expertise delivered in short bursts between stretches of teamwork and brainstorming as the teams fleshed out their ideas.
USC Professor Amara Aguilar kicked off the hackathon by helping the teams better understand what AI is, starting with a basic definition: “It’s basically performing tasks that humans might perform, such as visual perception or decision making.” She followed with some benefits of AI, from collaborating with journalists to sifting through data more efficiently and strengthening our relationship with the audience. Yet it’s also a critical time to address diversity, and Amara outlined issues to consider, from bias and diversity to personalization and transparency.
Ximena Acosta shared the importance of human-centered design with a solid example: “One voice command clearly does not fit all! We have an array of languages, accents, mispronunciations and other human factors that severely affect how we interact with products.” She described Siri failing to even pronounce her name right, and told personal stories of her mom and sister, with thick Colombian accents, never having Alexa understand their requests. Startup advice on potential business models and how to successfully pitch an idea came from fellow entrepreneurs. Megan Tiu from Frenzy.ai emphasized the importance of getting in front of your customers: “Your customers tell you what they want. You must figure out what they need.” And Jennifer Ellis, from Giggle Chips and a venture coach to students, shared, “Finish strong. This is your idea… make sure your story is heard.”
In 5-minute ignite pitches, the six teams’ hard work paid off. The Symposium and Hackathon made everyone who attended more aware of AI’s flaws, and the teams’ solutions addressed a variety of gaps currently seen in the market. Beyond working with Mark Glazer to facilitate the event, I had the opportunity to act as a floating mentor, getting a glimpse of the teams’ progress throughout and offering guidance as needed.
The projects centered on the themes of authenticity, transparency, personalization and context, developing new AI models that opened up datasets and encouraged disadvantaged communities to have a voice and agency in what was being developed. Many offered assistance to humans as a guide on the side, acknowledging biases and offering more neutral language. And many were niche-focused on a specific community, such as Native Americans, the LGBTQ community, or students with a reading disability.
The topic of diversifying AI impacts us all. Initially, in the development of this program, I didn’t want to limit the experience to just women. But in the end, I’m glad we did. It offered a safe space for women to see not only that they have a seat at the table, but also that they have a powerful voice to make the change they want to see. Each team delivered solutions that lit a flame to push new boundaries. At the close of the weekend, Team Mak won with Context. Fittingly, their solution’s name is exactly right, because meaningful context is what we’re trying to achieve with Artificial Intelligence.