Daniel Howden, AI watchdog: “These systems are already affecting human and civil rights”


By TP


British journalist Daniel Howden (50, Weymouth), an expert on corruption and migration, spent much of his career as a correspondent for major British media outlets. Three years ago, at the height of the rise of artificial intelligence (AI), he founded Lighthouse Reports, a platform dedicated to investigating how central and local governments around the world use algorithms to make decisions, and to holding them accountable. The platform, which partners with newspapers, podcasts, and television stations, has worked with more than 120 media outlets to publish investigations co-reported with journalists from those partner organizations. Howden's strategy is to bring in specialists who can work without the deadline pressure of a newsroom, while taking advantage of the partner outlets' reach and their established relationship with the public. They produce around 20 investigations a year in different corners of the planet. He discussed his findings in this interview, conducted in Santiago while he was taking part in the Ethical Algorithms Project at Adolfo Ibáñez University, supported by IDB Lab, the innovation laboratory of the Inter-American Development Bank.

Question. Why did you want to focus on accountability for the use of algorithms?

Answer. Automated decision-making systems are being deployed around the world, in areas such as criminal justice, health care, and welfare services, with little or no public consultation. We are being watched, ranked, and scored by systems that most of us don't understand. For the average person, that means decisions are being made about their lives that they have no control over, whether it's applying for a mortgage, a job, or a government benefit. If they can't understand how their application was accepted or rejected, they have not had due process: they cannot challenge the decision or find out what data was collected about them, often without their knowledge. Most of these AI systems are being deployed by governments, cities, and public agencies without oversight. So journalists have to step into that uncomfortable space to report and advocate for regulation.

Q. What did you encounter when you started Lighthouse?

A. What frustrated me was that journalism about the technology industry always talked about AI as something dark looming in the near future: "This is going to happen." It ignored the fact that there are already things to report on, things that are present in our lives. If you are in the poorest part of the world, it is very possible that the international support going to aid programs is allocated according to an algorithm developed by the World Bank that calculates poverty using a rather controversial methodology.

Q. In what cases is this automated decision-making being used?

A. In criminal justice sentencing, for example, in the United States and, to a greater or lesser extent, in some parts of Europe. The systems produce risk scores, which judges then use to issue sentences and determine how much jail time a detainee should serve. Prison authorities use them to decide who should go to a maximum-security prison or who gets parole.

Q. What information do they use to issue the sentence?

A. It is an interaction of variables. Simple ones, like age, gender, and crime classification, but also ethnicity, family size, last known address, financial records… There are factors that should not be considered.
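Lighthouse's reporting does not publish the internals of any particular sentencing tool, but the idea of a score built from an "interaction of variables" can be illustrated in a few lines. The Python sketch below is purely hypothetical: the features, weights, and the risk_score function are invented for illustration, and the postcode flag stands in for the kind of proxy variable that can quietly encode ethnicity or poverty.

```python
# Hypothetical sketch only: not the internals of any real sentencing or parole tool.
# It shows how a "risk score" can be a weighted combination of personal variables,
# and how a proxy such as a flagged postcode can quietly encode ethnicity or poverty.

WEIGHTS = {
    "age_under_25": 1.5,      # illustrative weights, chosen arbitrarily
    "prior_offences": 2.0,
    "unemployed": 1.0,
    "postcode_flagged": 2.5,  # proxy variable correlated with poor, minority areas
}

def risk_score(person):
    """Return a toy risk score; higher means the system treats the person as 'riskier'."""
    return sum(weight * person.get(feature, 0) for feature, weight in WEIGHTS.items())

# Two people with identical criminal histories get different scores purely
# because one lives in a neighbourhood the model has flagged.
a = {"age_under_25": 1, "prior_offences": 1, "postcode_flagged": 0}
b = {"age_under_25": 1, "prior_offences": 1, "postcode_flagged": 1}
print(risk_score(a), risk_score(b))  # 3.5 6.0
```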
We have also looked in depth at systems for detecting citizen fraud in welfare states. One question is how it is decided who will be investigated for possible fraud. In the Netherlands there was a system that considered 315 variables, more than 20 of which were related to language. What were they trying to determine? Who was a native-born citizen and who was not, and to weigh the risk accordingly. You cannot say that a person is more likely to commit fraud because they are an immigrant.

Q. Are the biases of AI systems a mirror of the biases of society?

A. The technology companies that sell these systems claim that they make objective decisions, eliminating the factor of human bias, but it depends on how they are trained. If the training data set reflects years of biased actions, that bias will be reproduced. One example we have investigated is predictive policing. The system tells the police to concentrate resources in one area because historical records of crimes are concentrated in that place. In theory, that sounds good. The risk is that for many years the police have been tasked with looking for certain types of crimes in certain neighborhoods. In São Paulo, for example, a location-based predictive system recommends that the police concentrate on poor neighborhoods, where people who work in rich neighborhoods live. Statistically, those in the rich neighborhood are more likely to buy illegal drugs, but they make the transaction in the poor neighborhood, so the system will never tell the police to go to the rich area. This is how bias is built in.
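The feedback loop Howden describes can also be sketched directly. The following Python snippet is an invented illustration, not a model of any real deployment: both neighbourhoods are given the same true offence rate, but patrols are dispatched in proportion to recorded crime, so the district with more historical records keeps generating more records and keeps attracting the patrols.

```python
# Invented illustration of the predictive-policing feedback loop described above;
# the neighbourhood names, rates, and patrol rule are not from any real system.
import random

random.seed(0)

# Assume offending is actually equally likely in both neighbourhoods.
TRUE_OFFENCE_RATE = {"poor_district": 0.10, "rich_district": 0.10}

# Historical records start skewed, because police were told to look here in the past.
recorded_crimes = {"poor_district": 50, "rich_district": 5}

def dispatch_patrols(history, total_patrols=100):
    """Send patrols in proportion to *recorded* crime, not to actual offending."""
    total = sum(history.values())
    return {area: round(total_patrols * count / total) for area, count in history.items()}

for year in range(5):
    patrols = dispatch_patrols(recorded_crimes)
    for area, n_patrols in patrols.items():
        # Police can only record the offences they are present to observe.
        detected = sum(random.random() < TRUE_OFFENCE_RATE[area] for _ in range(n_patrols))
        recorded_crimes[area] += detected
    print(year, patrols)
# The patrol allocation stays concentrated on "poor_district" even though the
# underlying offence rate is identical, so the record keeps confirming itself.
```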
Q. What can be done?

A. On the one hand, proponents of ethical AI argue that we can do a better job of building and training these systems. Others believe these tools are simply not appropriate for some of the more sensitive tasks, especially those affecting the most vulnerable. But we are still at the stage of detecting the really bad systems and creating incentives for public authorities to better understand the technology and work towards better-trained systems. We are skipping the part where we make them work in an ethical way, hold them accountable for mistakes, and make their results contestable.

Q. Is artificial intelligence affecting human rights?

A. Public authorities are using AI in governments, cities, and national systems in ways that are already affecting human rights. Systems that flag possible fraud in welfare programs or decide whether someone stays in jail are some examples, but artificial intelligence is also increasingly being used to decide who is interviewed for a job or how suitable someone is for a loan. Our attitude cannot be to say that it is impossible to move fast enough to regulate AI, so we sit back. We cannot say that it is smarter than us and that we are not going to try to assess the bias in systems that are making very basic decisions that affect our civil rights.

Q. How do we avoid falling into that attitude?

A. Let's move away from the hype that AI is both very exciting and very scary. Let's not talk about it as if it were an inevitable thing that will simply remove our ability to make decisions about what our societies will look like, leaving all that authority in the hands of a few tech companies. That's great for them, but not for anyone who hopes to be a citizen and not just a consumer of a product. You have to think of it like drug regulation: the average politician is not in a position to run tests on the latest drugs; that is up to public institutions that inspect and regulate. In that area, we decided it was in the public interest to have those safeguards.

Q. Is what's happening similar to what happened with social media?

A. There's a lot of hype around AI. We're told it will either fix everything or kill us all, both incredibly heady ideas, but they speak to the future. There's much less discussion about what we can do now about the systems that are already in our lives. There are rules governing all the other systems that affect us, but there's an idea that AI is exceptional. We heard those arguments before from big tech companies like Amazon or Airbnb. We were told they couldn't be regulated like any other retailer or like the hospitality industry. Yet Airbnb, for example, has had a profound impact on the cost of rent. These companies should not be the dominant voices in the conversation. It is right for our governments to think about how to create flexible, future-proof legislation that incorporates the civil and human rights we already have. We shouldn't sacrifice civil and human rights in pursuit of an amazing future for AI.

Q. What is the position of the AI industry?

A. It wants light regulation, over which it has a lot of influence. Governments can play a role in two ways: they can set a regulatory playing field, which makes sense for the industry because it means all actors must develop and deploy the technology in the same way, or they can set a standard for the systems that will make decisions affecting the public sector, be transparent about them, and require technology providers to give access to third parties, such as the media, inspectors, and auditors.