Women in AI are transforming the world in unprecedented ways, but who are the people behind this revolution? Throughout the year, we will publish several pieces to showcase their work, challenges, and perspectives. You can read more profiles here.
In this interview, we spoke with Francine Bennett, a founding member of the board and the interim Director of the Ada Lovelace Institute, an independent research and deliberative body that aims to ensure data and AI work for people and society.
Bennett has a diverse background in AI, having worked in biotech, data science consultancy, and charity sectors. She is also a founding trustee of DataKind UK, a non-profit organization that provides data science support to British charities.
How did you get your start in AI? What attracted you to the field?
I have a degree in pure maths, which I loved, but I didn’t think much of applied maths at first; I thought it was just boring calculations. I got interested in AI and machine learning later on, when I realized that the abundance of data in many domains opened up new possibilities to solve problems in novel and creative ways. I was fascinated by the potential and the diversity of applications of AI and machine learning.
What work are you most proud of (in the AI field)?
I’m most proud of the work that is not necessarily the most technically complex, but that makes a real difference for people. For example, I worked on a project that used machine learning to find patterns in patient safety incident reports at a hospital, which helped the medical staff improve their practices and outcomes.
I’m also proud of advocating for the importance of putting people and society at the center of AI, rather than just focusing on the technology. I think I can do that with credibility because I have experience both in building and using AI, and in understanding how it affects people’s lives in practice.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I try to work in places and with people who value the person and their skills over their gender and to use my influence to make that the norm. I also try to work in diverse teams whenever I can, because I think that creates a better atmosphere and allows everyone to reach their potential.
More broadly, I think it’s obvious that AI is a multifaceted field that will have an impact on many aspects of life, especially for marginalized communities, so it’s essential that people from all backgrounds and perspectives are involved in shaping it if we want it to work well for everyone.
What advice would you give to women seeking to enter the AI field?
Enjoy it! This is such an interesting, intellectually challenging, and constantly evolving field. You’ll always find something useful and stimulating to do, and there are many important applications that nobody has even thought of yet. Also, don’t be too anxious about needing to know every single technical thing (nobody knows every single technical thing). Just start by working on something that intrigues you, and learn from there.
What are some of the most pressing issues facing AI as it evolves?
Right now, I think one of the biggest issues is the lack of a shared vision of what we want AI to do for us and what it can and can’t do for us as a society. There’s a lot of technical advancement going on, which is likely having high environmental, financial, and social impacts, and a lot of excitement about deploying those new technologies without a solid understanding of the potential risks or unintended consequences.
Most of the people building and talking about AI are from a pretty narrow demographic. We have a window of opportunity now to decide what we want from AI and to work to make that happen. We can learn from other types of technology and how we handled their evolution, or what we wish we had done better.
For example, what are the equivalents for AI products of crash-testing new cars, holding liable a restaurant that accidentally gives you food poisoning, consulting impacted people during planning permission, or appealing an AI decision as you could a human bureaucracy?
What are some issues AI users should be aware of?
I’d like people who use AI technologies to be confident about what the tools are and what they can do, and to talk about what they want from AI. It’s easy to see AI as something mysterious and uncontrollable, but actually, it’s just a toolset, and I want humans to feel able to take charge of what they do with those tools. But it shouldn’t just be the responsibility of the users; government and industry should also create conditions that enable people who use AI to be confident.
What is the best way to responsibly build AI?
We ask this question a lot at the Ada Lovelace Institute, which aims to make data and AI work for people and society. It’s a tough one, and there are many angles you could take, but there are two really big ones from my perspective.
The first one is to be willing sometimes not to build or to stop. We often see AI systems with great momentum, where the builders try to add on ‘guardrails’ afterward to mitigate problems and harms, but don’t consider the possibility of stopping or changing direction.
The second one is to really engage with and try to understand how all kinds of people will experience what you’re building. If you can empathize with their experiences, you have a better chance of building positive and responsible AI that truly solves a problem for people, based on a shared vision of what good looks like, and of avoiding harms, such as making someone’s life worse because their situation is very different from yours.
For example, the Ada Lovelace Institute partnered with the NHS to develop an algorithmic impact assessment, which developers should do as a condition of access to healthcare data. This requires developers to assess the possible societal impacts of their AI system before implementation and bring in the lived experiences of people and communities who could be affected.
How can investors better push for responsible AI?
By asking questions about their investments and their possible futures. For this AI system, what does it look like to work brilliantly and be responsible? Where could things go wrong? What are the potential knock-on effects for people and society? How would we know if we need to stop or change things significantly, and what would we do then? There’s no one-size-fits-all prescription, but just by asking the questions and signaling that being responsible is important, investors can influence where their companies are putting attention and effort.