Ever feel like AI is developing way too fast? Like it’s making decisions that affect us, but we’re just along for the ride? In AI Needs You: How We Can Change AI’s Future and Save Our Own, Verity Harding flips that narrative on its head and challenges us to think differently. This isn’t just another tech book—it’s a wake-up call to take control of AI’s role in our lives.
Why Read This Book?
If you’ve been feeling uneasy or even curious about the rapid rise of AI, Harding’s book is for you. She’s a former policy expert who’s spent years in the tech industry, including time at Google, so she knows firsthand what it’s like inside the machine. But what makes AI Needs You stand out is that it’s not just aimed at techies. Harding wrote this for everyone, especially those of us who might feel left behind or unsure how to contribute to a world increasingly dominated by artificial intelligence.
Harding doesn’t sugarcoat the issues: yes, AI can do amazing things, but it can also go off the rails if left unchecked. She believes we—yes, that includes you—need to be actively involved in shaping AI’s path. This book breaks down the barriers and explains why we all have a stake in AI’s future, even if we’ve never coded a line in our lives.
The book on Amazon 👉 AI Needs You 📚
How You Can Make a Difference: Practical Steps to Shape AI’s Future
Rather than leaving readers feeling overwhelmed, Harding provides actionable steps anyone can take to make a difference in the future of AI. Here’s a more detailed look at what each step involves:
Be Informed and Stay Curious: If AI feels like a big, daunting topic, don’t worry—you’re not alone. Harding emphasizes that you don’t need a computer science degree to understand the basics of AI. Start by reading accessible articles, watching introductory videos, or even following tech-focused podcasts that explain AI concepts in plain language. When you understand the basics—how machine learning works, the importance of data, the ethical concerns—you’re better equipped to join discussions and make informed decisions about AI products and policies. This knowledge is especially important as AI becomes integrated into more aspects of our lives, from healthcare to hiring practices.
Engage with Policy Discussions: Harding underscores the need for regulations that make AI systems safer and more transparent. One policy movement she discusses is the push for “algorithmic transparency,” where AI companies disclose the data sources and processes that guide their systems’ decisions. By supporting organizations that advocate for these policies or joining local forums on AI policy, you can help ensure that AI developers prioritize accountability. If your city or country holds public hearings on AI legislation, consider attending or writing to your representatives to show that you care about ethical, transparent AI. Harding emphasizes that policymakers notice when constituents voice concerns, especially on complex issues like AI.
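To make “algorithmic transparency” a little more concrete, here is a minimal sketch of what a public disclosure record for an AI system might contain. The system name and every field below are illustrative assumptions for this post, not a real standard and not from the book:

```python
import json

# Hypothetical disclosure record: the field names and values are
# invented for illustration, not any company's actual report format.
disclosure = {
    "system": "loan-screening-model (hypothetical)",
    "data_sources": ["credit-bureau records 2018-2023", "application forms"],
    "decision_factors": ["income", "payment history", "debt-to-income ratio"],
    "known_limitations": ["sparse training data for applicants under 21"],
    "human_review_available": True,
}

# Publishing something like this as JSON is one plausible shape an
# "AI accountability" report could take.
print(json.dumps(disclosure, indent=2))
```

Even a toy record like this shows why advocates push for disclosure: once data sources and decision factors are written down, outsiders can ask whether they are representative and fair.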
Support Ethical Companies and Responsible Tech: Consumer demand shapes markets, and Harding explains that one way to influence the tech industry is by choosing products and services from companies that are transparent and ethical in their AI use. Some companies have started to adopt “fair AI” standards, where they commit to ethical practices and fair use of data. By doing a little research on a company’s AI and data practices before making purchases or using their services, you send a signal to the market. Harding even suggests checking for “AI accountability” reports that some companies publish, as these can offer insight into their practices. Supporting companies that are mindful about AI isn’t just a moral stance—it helps build a future where responsible AI is the norm.
Start Conversations and Share Your Knowledge: Harding believes that conversations around AI shouldn’t be limited to tech hubs or academic conferences. She encourages readers to bring up AI topics with friends, family, coworkers, and on social media. Why? Because when people talk about AI in everyday contexts, it demystifies the technology and raises awareness of its implications. The more we normalize discussing AI’s ethical concerns—whether it’s about facial recognition, algorithmic bias, or data privacy—the more likely we are to influence companies and policymakers to take our concerns seriously. If you’re a teacher, consider incorporating basic AI ethics discussions into your classroom. If you’re a business leader, host a meeting on how AI might impact your industry. Starting these conversations may feel small, but they’re essential for building a well-informed public that demands responsible AI development.
AI in Action (and Sometimes, in Need of Adjustment)
In AI Needs You, Verity Harding does more than outline theories—she brings AI’s impact to life through real, relatable stories. Here are a few of the most compelling examples, each with unique lessons about the powerful (and sometimes risky) capabilities of AI:
Healthcare Heroes and Hiccups: One of AI’s most promising areas is healthcare. Imagine an AI system that could analyze patient data and identify rare diseases faster than any doctor, potentially saving lives with early detection. Harding highlights a case in which an AI diagnostic tool, used in a hospital setting, began missing critical diagnoses after it was deployed. Why? The AI had been trained on a dataset that didn’t fully represent all demographics—missing key data from certain age groups and racial backgrounds. The result was a biased system that excelled for some patients but underperformed dangerously for others. This story serves as a crucial reminder: AI is only as strong as the data we feed it. If we want fair, accurate healthcare AI, we must be diligent in building and testing systems that account for everyone. Harding points to this example as evidence of why it’s essential for the public—not just tech and medical professionals—to advocate for comprehensive, representative data in AI development.
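The data-representation failure in that story can be sketched in a few lines. This is a synthetic toy, not the hospital system Harding describes: we assume a biomarker whose disease cutoff differs between two demographic groups, tune a decision threshold only on the well-represented group, and watch accuracy degrade for the underrepresented one.

```python
import random

random.seed(42)

def simulate_group(n, disease_cutoff):
    """Synthetic patients: (biomarker, has_disease).
    Disease is present when the biomarker exceeds a group-specific cutoff."""
    return [(m, m > disease_cutoff)
            for m in (random.uniform(0, 10) for _ in range(n))]

group_a = simulate_group(1000, 5.0)  # well represented in the training data
group_b = simulate_group(1000, 7.0)  # underrepresented; different cutoff

def accuracy(patients, threshold):
    return sum((m > threshold) == sick for m, sick in patients) / len(patients)

# "Training" = picking the threshold that maximizes accuracy on group A only,
# mimicking a model fit to an unrepresentative dataset.
best_threshold = max((t / 10 for t in range(101)),
                     key=lambda t: accuracy(group_a, t))

acc_a = accuracy(group_a, best_threshold)  # near-perfect for group A
acc_b = accuracy(group_b, best_threshold)  # markedly worse for group B
```

The numbers are invented, but the mechanism matches the book’s lesson: a system can look excellent on the population it was trained on while quietly failing the people missing from its data.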
When Social Media AI Gets…Socially Unethical: Social media platforms use sophisticated AI algorithms to determine what content to show us, often based on our interests and engagement history. Harding dives into how these algorithms, while intended to keep us engaged, can inadvertently (or sometimes even deliberately) amplify misinformation and divisive content. She recounts a well-documented example where an AI-driven algorithm for a major platform prioritized sensational and polarizing content, leading to real-world consequences, including heightened political divisions and widespread misinformation. Harding makes a powerful point: these algorithms aren’t neutral. They’re built by humans, with embedded goals and biases, and that means they can—and should—be guided by ethical considerations. For readers, this example is a wake-up call to understand that these systems influence public opinion and decision-making on a mass scale. Harding emphasizes the need for transparency from companies about their AI and accountability for the effects these systems have on society.
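The engagement dynamic Harding describes can be sketched with a toy feed. All posts and scores below are invented, and real platform rankers are far more complex, but an objective that sorts purely by predicted engagement behaves like this:

```python
# Toy feed ranking: invented posts and scores, not any platform's real data.
posts = [
    {"title": "City council passes budget after long debate",
     "predicted_engagement": 0.21, "sensational": False},
    {"title": "You won't BELIEVE what they're hiding!",
     "predicted_engagement": 0.93, "sensational": True},
    {"title": "Local library extends weekend hours",
     "predicted_engagement": 0.12, "sensational": False},
    {"title": "OUTRAGE: rival group accused in viral post",
     "predicted_engagement": 0.88, "sensational": True},
    {"title": "New bus route opens next month",
     "predicted_engagement": 0.18, "sensational": False},
]

# Rank exactly the way an engagement-maximizing objective would:
# highest predicted engagement first, with no other considerations.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

top_two = feed[:2]  # the sensational items dominate the top of the feed
```

Nothing in this objective is “neutral”: the choice to optimize engagement alone is itself a human design decision, which is exactly Harding’s point about why these systems can and should be guided by ethical considerations.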
Self-Driving Cars: Navigating the Crossroads of Ethics and Tech: The self-driving car industry is another area where AI is poised to reshape our daily lives, but it’s also an arena where ethical decisions come into sharp focus. Harding examines the famous “trolley problem,” where a self-driving car must make a split-second choice between two risky options (e.g., protecting the passenger or protecting a pedestrian). She dives into the ethical and legal dilemmas these scenarios create, asking tough questions like: who is responsible for these decisions? Should it be the manufacturer, the programmer, or even the government? Harding argues that these ethical quandaries can’t be left solely to engineers or executives. Instead, we need public forums and regulations that include citizen voices, since everyone shares the roads. Her example underlines the need for public education and engagement on issues surrounding AI in autonomous vehicles, because at the end of the day, it’s our collective safety and values that are at stake.
Join the Conversation! Your Voice Matters
Harding closes AI Needs You with a powerful message: AI’s future is a shared journey. This book is an invitation for all of us to take an active role, regardless of our background or expertise. In a world where AI is transforming how we work, communicate, learn, and even access healthcare, Harding reminds us that we can choose to be active participants instead of passive observers. By being informed, supporting responsible AI, advocating for transparent policies, and starting conversations, we’re shaping a future where AI serves humanity’s best interests.
What do you think? Are you ready to get involved in shaping the future of AI? Have you read AI Needs You or have questions about the topics it covers? Let’s keep the conversation going—leave a comment and share your thoughts!
Five powerful quotes from AI Needs You by Verity Harding:
- “AI is not an unstoppable force of nature—it is a human creation, shaped by our choices, our values, and our vision for the future.”
- “The greatest risk with AI is not that it will outsmart us, but that we will fail to take responsibility for how it is built and used.”
- “If AI is making decisions that affect millions, then millions should have a say in how it works.”
- “Transparency is not a luxury in AI development—it is a necessity. We cannot trust what we do not understand.”
- “The future of AI is not written in code alone; it is written in the conversations we have, the policies we shape, and the ethical choices we make today.”
These quotes capture Harding’s core message: AI’s future is in our hands, and we must actively engage to ensure it serves humanity’s best interests.