Interview with Carissa Véliz, Author of "Privacy is Power" and "Prophecy"

We sat down with Carissa Véliz, author of Privacy is Power and University of Oxford associate professor, to talk about how predictive AI will make a 'meritocracy' impossible, how lifelike chatbots are designed to deceive you, and the importance of privacy in the digital age.

Carissa Véliz is an associate professor at the Institute for Ethics in AI at the University of Oxford, a renowned author and speaker, a board member of the Proton Foundation, and a member of UNESCO's Women 4 Ethical AI.

Her new book, 'Prophecy,' comes out on April 21st. Prophecy is about how the extensive use of predictive analytics is undermining our ability to defy the odds, making systems unaccountable, and increasing risk in business and society while creating a false sense of security.


Some references in this video can be found on her website.

Transcript

This transcript has been lightly edited to improve readability.

Nate Bartram (Privacy Guides): Thank you so much for your time. We really appreciate you being here. I read your 2021 Wired article, “If AI is Predicting Your Future, Are You Still Free?” You noted that in an age of individualized algorithms like personalized insurance rates, “you are increasingly paying your own way.”

My first thought when I read that is that a lot of people, at face value, would say “this sounds awesome!” Like, why should my health insurance be higher because someone else is a smoker, when I go running every day? I was wondering if you could explain to the audience why that’s a little too good to be true.

Carissa Véliz: Thank you, and thank you for having me.

Yeah, I think it’s a very intuitive thought, this thinking that “well, if I’m healthier than others, why should I be paying for someone else’s vices?” But the whole point of insurance is precisely to benefit individually from the law of large numbers.

So the law of large numbers is a statistical phenomenon whereby a certain proportion of people will be unlucky, whether it’s your house getting flooded, or a fire, or getting an illness. You don’t know who is going to be lucky and who’s going to be unlucky, and the idea of insurance is that we pool the risk together such that the lucky pay for the unlucky. The point is that nobody knows which group they’re going to be in, and that’s why insurance makes sense: because if you’re paying your own way and you’re one of the unlucky ones, then you’re going to pay a very large amount of money for something that you may or may not be to blame for.
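Editor's note: to make the law-of-large-numbers point concrete, here is a toy simulation (the numbers are illustrative, not from the book). Each policyholder faces a 1% chance of a $100,000 loss, so the fair pooled cost is about $1,000 per person, while an uninsured unlucky person would bear the full $100,000 alone:

```python
import random

def per_person_cost(pool_size: int, p_loss: float = 0.01,
                    loss: float = 100_000, seed: int = 0) -> float:
    """Average cost per member when a pool shares all losses equally."""
    rng = random.Random(seed)
    unlucky = sum(1 for _ in range(pool_size) if rng.random() < p_loss)
    return unlucky * loss / pool_size

for n in (100, 10_000, 1_000_000):
    # Converges on p_loss * loss = $1,000 as the pool grows.
    print(f"pool of {n:>9,}: ${per_person_cost(n):,.2f} per person")
```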

We tend to think about cases in which people are unhealthy, and we tend to be harsh in judging them, but the reality is that a big proportion of our health is ruled by random things like genetics, where you live, the quality of the air you’re inhaling, and things that are beyond your control. And more importantly, there are two points:

One is solidarity: the point of insurance is partly solidarity, but also partly to keep the community safe. If we push risk onto the shoulders of individuals, that risk might break them, and when individuals break, society breaks.

One example of this was the 2008 financial crisis, in which banks were giving loans that were very high risk because they knew that — or they should have known that — certain people were not going to be able to pay those loans. And the result is not only that those decisions broke those people, but that it created a financial crisis that we all had to suffer.

And the other point is a moral one: What is fair? Why are we making people pay for things that they’re not to blame for? And what kind of society do we want to build?

So, in Prophecy, I discuss this example of Harvard: At some point Harvard had a medical insurance program for its employees, and it had two tiers: a cheap tier and a more expensive tier. Some people were complaining that the more expensive tier was giving a lot of money to the people who chose it, because Harvard paid for a proportion of it, so you would essentially get more of a benefit if you went for the expensive tier. And there was also the problem that Harvard was spending more money than it was bringing in.

So Harvard changed to another kind of program, in which it paid the same fee for everyone, and then people could choose whether they wanted the cheap or the more expensive program. But that led to the result that only the sicker patients chose the more expensive program, and that made it too expensive the following year.

So after a few years, you didn’t have the expensive program anymore, and everybody lost out: The sicker patients, because they couldn’t access the better program anymore, but also the healthy ones who wanted that peace of mind.
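Editor's note: the dynamic described above is what economists call an adverse-selection "death spiral." Here is a toy simulation of the mechanism (the figures are illustrative, not Harvard's, and it assumes members buy the plan only when their expected claims exceed the premium, ignoring risk aversion):

```python
# 50 members with expected yearly claims from $1,000 (healthiest) to $10,800 (sickest).
expected_claims = [1_000 + 200 * i for i in range(50)]
premium = sum(expected_claims) / len(expected_claims)  # year 1: one community rate

for year in range(1, 10):
    # Members for whom the plan costs more than their expected claims drop out.
    members = [c for c in expected_claims if c > premium]
    if not members:
        print(f"Year {year}: premium ${premium:,.0f}, no buyers left, plan collapses")
        break
    print(f"Year {year}: premium ${premium:,.0f}, {len(members)} members stay")
    # Next year's premium must cover the remaining (sicker) members' claims.
    premium = sum(members) / len(members)
    expected_claims = members
```

Each round, the healthiest remaining members defect, the average claim rises, and the premium ratchets up until the plan has no buyers, which is what happened to Harvard's expensive tier.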

That’s interesting how it’s almost counterintuitive that, like you said, it’s about making it fair for everyone. One person might pay a little extra and it’s not really “fair” to them, but it’s also not fair to the other people who are, like you said, born in an area with low air quality or something. So, thank you.

You said later on in the same article that we strive to structure our societies on the basis of merit, and you made the argument that AI algorithms can make us nihilistic, and can make society deterministic.

I will be honest, I don’t think I have a specific question here, but I was really interested in that idea, and I was wondering if you could expand on it a little, because I’ve never heard this point raised before. I think it’s pretty universal that we would all agree society would be better if it were merit-based, and I’ve just never heard anyone point that out about algorithms before.

Yes. One of the things I’ve noticed in researching AI is how there is this double narrative going on.

On the one hand, we are being told — especially when we’re on the losing end of a decision — that society is merit-based, so if we don’t get an opportunity, or we get some kind of punishment, then it’s our fault.

At the same time, we are using AI more and more as a predictive tool. So instead of judging people on the basis of what they have already done and what they deserve, we judge them on the basis of what an algorithm thinks they will do.

And so we get the short end on both narratives [laughter]. On the one hand we get told that we should have done better, but on the other we are being treated as things, and not as human agents who have a say in our future. The more we use predictive algorithms on people, the more we close off opportunities before people even stand a chance to defy the odds.

When you look at the history of prediction, there has always been a kind of dance between our philosophical views of free will and what people deserve, and how much prediction we use. And there is a high correlation between an overuse of prediction when it comes to human beings, and authoritarianism. Because when we predict that somebody will do something in the future and treat them accordingly before they even do it, essentially we’re treating them as things. And you know this is the topic of the famous movie Minority Report.

We are at a historical moment in which we are using predictive algorithms everywhere: in the doctor’s office, in justice systems, for job opportunities, loan opportunities, even dating apps. This proliferation of prediction is essentially narrowing our field of freedom, and that’s only AI. On top of it you can add other kinds of predictive practices: prediction markets, for example, are gaining a lot of attention now, and not only from the public and the people who participate in them. More and more, I see newspapers reporting on prediction markets as if they were a trustworthy source of information.

When we look at and analyze the history of prediction, we realize that although predictions might seem like quests for knowledge, or hypotheses about the future, more often than not they are power plays in disguise.

In Prophecy, I argue that predictions are like the arena where fights about the future take place. When somebody’s making a prediction, it might seem like they’re describing the world, but what they’re in fact doing is issuing a kind of command, ordering people to bend reality to their vision of the future, which is usually a future that is in their interest. Often it’s a financial interest.

Another one of your articles that I read was “Chatbots shouldn’t use emojis,” from March 2023.

And real quick, all these articles are on her website and I highly recommend reading them. They’re very short and very insightful.

You mentioned that some people think they might be a little too clever to be emotionally manipulated by a chatbot, which is part of why you argue that chatbots shouldn’t use emojis. But then you cited a 2021 study that found that people consistently underestimate how susceptible they are to misinformation.

So, personally, this is a big argument of mine: that privacy matters. It’s not just about advertisers nudging us to buy some new shoes or something, right? Sometimes they try to influence our opinions and beliefs. And I feel like I see so many people who think that they’re way too clever to fall for this, so I was curious if you could talk a little bit more about that study, because again, I’ve never heard of that either.

Yes, we tend to have a very idealized version of ourselves. We tend to think of ourselves as very rational, as following our beliefs, and as basing our beliefs on evidence and experience, and for the most part that’s not entirely wrong.

However, we are influenced in ways that are not obvious to us, and the literature in psychology on this is so extensive that it’s hard to know even where to start. For example, I remember one study about how smells affect us: when we’re smelling something unpleasant, even when we’re not conscious of it, we make harsher judgments.

We are mostly influenced in minor ways. So you wouldn’t make a huge life decision on the basis of that kind of influence. However, for example, another study suggests that when judges are hungry, so before lunchtime, they tend to give out harsher sentences.

And this is partly because for them it’s just another day at the office. But for the person receiving the judgment, it might be life-altering, or it will be life-altering.

We are especially vulnerable to being influenced in moments in which we might not be our full selves. It might be that you’re super stressed out because of a particular situation. Or it might be that you are scared about a particular piece of news that is very alarming. Or it might be that you’re in some kind of other emotional state. 

In those cases, it might be the difference between you doing something and not doing something.

It’s also important to think about how many billions of people we are. If you have a way to expose millions and millions of people to a very alarming message, then just statistically, even if we were all pretty rational beings, pretty well informed, pretty smart, you will find a proportion of those people who you just catch at the wrong time.

They might be in the hospital. They might be worried about their pension, or their loan, or a job, or some other vulnerable circumstance. And I’ve talked to many friends who have been the subject of fraud. Every person I’ve met who has fallen for this kind of scam was called at a moment when their guard was down, for whatever reason.

So I think we need to be much more realistic about how we are influenced by what we see on screens, and about how, more and more, we are not accessing reality through vouched-for editors — like when you read a newspaper — but through screens that are designed to engage us depending on our personalities.

That makes us even more vulnerable than we would have been before social media came about, or before these algorithms designed for engagement became so common.

Yeah, for sure. I like what you said about the moment of weakness. Troy Hunt from Have I Been Pwned fell for a phishing attack, I think last year, and he put up a whole blog post about it, about how he had just gotten home from international travel and he was tired. All the tricks you mentioned worked on him. And I think Cory Doctorow fell for one too, but I could be remembering that wrong...

Actually, on the last thing you just said: in that same article, you said it would be more ethical to design chatbots to be noticeably different from humans: “To minimize the possibility of manipulation and harm, we need to be reminded that we are talking to a bot.”

Which drives home what you just said. You also talk about this in your upcoming book, and you mentioned it in one of your talks: every technology is designed with a specific use case and purpose in mind. It kind of makes you wonder why chatbots are so lifelike.

Yes, they’re designed to be impersonators, and that should concern us! That technology is designed to be misleading, to hijack our normal emotional responses. 

One way in which we could make them different is not only disallowing emojis, but not allowing them to even use the pronoun “I.” Because using the pronoun “I” suggests there’s someone on the other side of the screen, and there isn’t.

But just as you can still see and get fooled by a visual illusion — even when you know it’s an illusion! — when you talk to a chatbot it’s still so compelling. And if they catch you at a bad time, a time when you’re grieving or when you are in need of consolation, you might be particularly vulnerable to these systems. We are already seeing this effect that some people are calling “chatbot psychosis” or “chatbot delusions”: because chatbots are designed to essentially please human beings, you get a lot of validation from them. A healthy measure of validation can be positive, but when a system consistently validates your every thought, there is this phenomenon whereby people spiral into delusions. And we haven’t seen it once or twice or three times: This is something that people are reporting at quite alarming rates.

One of the things that keeps us tied to reality is talking to other human beings, because they disagree with us. And it’s annoying, and it’s frustrating, and it can lead to conflict. But somebody challenging your views is an essential part of staying mentally sane and having perspective on the world. 

Yeah, I heard somebody say that about the whole “AI girlfriend” phenomenon: “Yeah, but in a real relationship there’s struggle and there’s disagreement, and an AI girlfriend will never be able to do that.”

Going back to your TEDx talk about how privacy can save your life, you mentioned, “Every time you share your data, you’re sharing the data of other people as well.”

That’s something that I don’t see discussed a lot in the privacy community — just every once in a while — and I was wondering if you could expand on that, and talk about how we share the data of others along with our own.

Using the term “personal data” is quite misleading, because it seems to suggest two things, both of which are false:

One is that personal data is a very individual matter, when in fact most of your personal data is shared with someone else. For example, your location data: you usually work in the same place as other people and you usually live with someone else or close to other people, so if you reveal that data, you are revealing the data of other people as well.

Even something as seemingly individual as genetic data is incredibly collective, because when you share your genetic data, you’re not only sharing your own data, you’re sharing the data of your whole family. Not only your close family, like your siblings and your parents and your children, but also distant kin, who can face repercussions from it without ever giving consent.

The second thing the term “personal data” suggests that is false is that it’s a personal choice, a personal preference. So if you’re less shy than other people, and you have nothing to hide, and you’re not a criminal, then it’s fine for you to share the data.

But again, that doesn’t take into account how not only does your data contain data about other people, but your decisions about privacy have collective repercussions.

For example, in the Cambridge Analytica scandal, people gave away their privacy. I think it was for $2 [laughs].

And of course, when they gave away their privacy, they weren’t entirely aware of what they were doing. The terms and conditions didn’t say, “We will use this data to try to sway elections around the world and to profile people.” But that is in essence what happened.

So even when you are sharing data that arguably is only about you, if that data gets used to infer data about other people or to issue targeted ads with a political agenda, then other people are also suffering from that loss of your privacy. 

That is one reason why we need to have better guard rails, in general, about what is safe. We have seen this in other spheres of life. For example, when it comes to cars: when you have a crash, you might seriously injure yourself, but you might also injure others. So there are reasons why we limit your speed, or require you to wear your seat belt, and so on.

And we see this also in public health. So even if you wanted to experience what it’s like to get a very contagious disease, society has an interest that you don’t get it, because you might give it to someone else.

Currently in the digital sphere, we haven’t built the right guard rails to protect society as a whole, to protect democracy. It’s not only about you.

You said in your TEDx talk that currently AI uses incredible amounts of personal data, but it doesn’t have to be that way. 

A little bit more of a technical question here: If we were to get AI right from a privacy perspective, what do you think that would look like? Would it be some kind of opt-in signal, like a robots.txt file? Would it be companies paying people for the data they scrape up? Would it be both? Would it be something else? What do you think would be a good start towards that future?

We’re already seeing more hybrid versions of AI: large language models that don’t only use machine learning, but also use decision trees, or even things like calculators.

So, for example, famously, large language models can’t do math, because what they are good at is picking up patterns, which doesn’t mean they’re making any kind of calculation. You’re already seeing large language models where, when the system realizes the person is asking for some kind of arithmetic, it swaps into a calculator mode: you can actually use it as a calculator, and it’s not a large language model anymore.
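Editor's note: a minimal sketch of the routing idea described here; this is hypothetical, not any particular vendor's implementation. Input that parses as plain arithmetic is answered by a deterministic evaluator, and everything else falls through to the language model (stubbed out below):

```python
import ast
import operator

# Arithmetic operators the deterministic evaluator accepts.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
       ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg}

def eval_arithmetic(expr: str) -> float:
    """Evaluate a plain arithmetic expression like '12 * (3 + 4)' exactly."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def call_language_model(prompt: str) -> str:
    """Stand-in for a real model call (hypothetical)."""
    return f"[model response to: {prompt!r}]"

def answer(user_input: str) -> str:
    """Route arithmetic to the calculator; everything else to the model."""
    try:
        return str(eval_arithmetic(user_input))  # exact result, no pattern-matching
    except (ValueError, SyntaxError):
        return call_language_model(user_input)

print(answer("12 * (3 + 4)"))          # handled by the calculator: 84
print(answer("Why is the sky blue?"))  # handled by the (stubbed) model
```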

Similarly, companies — like, for example, Empathy Holdings — are using large language models to ask users questions and narrow down what they are looking for. If a user has a question about a particular fridge, once the system has recognized the model of the fridge, you get the actual PDF of the manual, and then when you ask questions, you only get a highlight function.

And that way you don’t get any kind of confabulation, which people call “hallucination,” but I don’t like that term because it suggests that the system has an experience, which it doesn’t.
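Editor's note: a sketch of the purely extractive setup described above (hypothetical code, not Empathy Holdings' actual system). Because every answer is a verbatim passage from the manual, there is nothing for a generative model to make up:

```python
import re

def highlight(manual_text: str, question: str, top_k: int = 3) -> list[str]:
    """Return the manual passages that best match the question, verbatim."""
    terms = set(re.findall(r"\w+", question.lower()))
    passages = [p.strip() for p in manual_text.split("\n\n") if p.strip()]

    def overlap(passage: str) -> int:
        # Rank passages by how many of the question's words they contain.
        return len(terms & set(re.findall(r"\w+", passage.lower())))

    return sorted(passages, key=overlap, reverse=True)[:top_k]

manual = """To change the water filter, turn it counterclockwise and pull.

Set the freezer to -18 °C for normal operation.

Clean the condenser coils twice a year."""

print(highlight(manual, "How do I change the water filter?", top_k=1))
```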

And we are seeing other kinds of AI being used in legal settings that ask the system twice to make sure that there isn’t any kind of fabrication of cases. So we’re already seeing some of it.

But for me, the ideal kind of AI for the purposes of privacy would have two sides: On the side of its training and creation, it would be very respectful of data, and ideally it wouldn’t be trained on personal data at all, much less on personal data taken without the consent or knowledge of people.

Once trained, it would be able to delete any kind of personal data, and it would also be trained not to give out any personal data that could be inferred from non-personal data.

And then on the other side of the equation, it wouldn’t take any personal data from the people who use the AI. At the moment that’s not the case: I read an article not too long ago fleshing out how much information a large language model is getting out of you and how it’s profiling you. It uses information that is obvious, like the kinds of things you say about where you live if you give it your address, or your job title, or other kinds of personal information.

But it’s also inferring things from how you use language: It’s inferring things like where you are from, and what kind of social class you belong to, and what kind of educational background you have. 

It’s using all of that for marketing purposes, or companies can sell that information on. And so you would [need] respect for privacy on both ends.

Interestingly, that coincides with a better approach with respect to ecology and energy, because at the moment these systems are incredibly inefficient. The best kind of AI would be a technology that doesn’t use that much data, and that is based much more on reasoning, and that would be more privacy preserving, more ecologically respectful, and it would also lead to much less confabulation if it’s not a statistical process and it’s more of a reasoning device.

That’s a tall order, but that is also a fact. Part of what we’re saying when we criticize AI is: well, this is a pretty bad product, it’s a pretty unsafe product, and we should do better, and we deserve better technology.

You kind of touched on this one a little bit, but in that same vein: if we go back to the Wired article where you talked about how AI is kind of taking away our agency and our potential, how would we solve that one? Is it just regulating how predictive algorithms are used, or do you think there are other solutions?

It’s a combination of things. It’s partly about culture, about identifying the things that we value and nurturing them. For example, at the moment, I think we are valuing convenience too highly.

Of course, convenience is a very important feature of life. If you always chose the inconvenient way of doing things, you would essentially never get anything done. However, using “convenience” as a kind of trump card for everything just doesn’t pan out. It leads to a pretty bad life.

Everything that is most meaningful in your life is pretty inconvenient.

Is it convenient to have a family? No. Friends are a pain, and reading is effortful, and doing a PhD is a headache, and exercising is a drag! [laughs]

You know, part of what we get from things is what we invest in them. And when something is effortful, that makes it more meaningful, when you can reap the fruits of it.

So, it’s partly about not losing skills that we shouldn’t be losing so that we don’t lose our autonomy, and not delegating important things to AI.

And what are those important things? Well, it depends. The devil is in the details. But in general, one way to look at it is that democracy is a kind of conversation between citizens. And when we delegate that conversation to AI, when we delegate that language to AI, it’s like we stand up from the table of democracy and leave our place.

Another thing that is very important is to design AI better. To make it safer in all ways, including privacy.

Another element that would be very important is the protection of privacy, because you don’t have democracy without privacy. And at the moment we are still pushing for more and more surveillance.

What we’re doing in practice is asking the question, how much surveillance can democracy take? And I really don’t want to find out, but we are pushing in that direction.

Another element is about regulating predictions, yes. We hear a lot about AI bias, but I think we’re misdiagnosing some of the problems, because we’re missing the most primal underlying problem, which has to do with prediction.

And it’s a kind of arrogance to think that it’s okay for anyone to make any kind of prediction about anyone else in the world, and to act accordingly without any kind of supervision, any kind of permission, any kind of input from that person.

Even Ancient Rome regulated predictions! It was illegal to predict the death of the emperor — for obvious reasons! — because such a prediction has a tendency to become a self-fulfilling prophecy. Once somebody said the emperor would die by a certain day, there would be people vying for power, and very often the emperor ended up dead!

If Ancient Rome regulated predictions, it’s kind of incredible that we haven’t caught up to that problem.

We have to have a public debate about what kinds of predictions are okay to make, when and how we should use them, and what the limits are.

How do you think we can get to a better AI scenario like the one we’ve been talking about? I’m thinking specifically about how these companies are investing unfathomable amounts of money right now. It can feel discouraging when you’re going up against that kind of economic incentive; it feels like, “well, how can I make change?” What do you think we can do to help nudge AI in a direction that’s better for society and benefits everyone, instead of just the people at the top?

Part of it we’re already doing by having these conversations. This is partly the fabric of democracy: you and I talking about this, and other people listening to this podcast and commenting, and then talking about it with their friends and their family. This is how democracy gets built.

Part of it is putting pressure on our political representatives to represent us adequately, to defend our privacy, and to demand better products from companies. And that’s not going very well. [laughter]

But it’s a big job, and we have to do it.

Part of it is demanding better from companies and calling them out when they don’t act right. And part of it is using the right products! Very often we do have a good alternative, and not enough people use it.

So Signal is a great example, because it works just as well as WhatsApp, and it doesn’t collect your data, and it was also a game changer for the whole industry. Before Signal came along, it wasn’t common to encrypt messages. And once Signal started encrypting them and making it easy — you didn’t have to be a technical person, it was just the default — other companies caught up. So we need companies to be more innovative.

Another great example is Proton.

And full disclosure, I’m part of the board of the Proton Foundation, but they don’t pay me. And I’m part of their board because I believe in what they’re doing. 

And they have encrypted email, they have a VPN, they have a password manager. Anyone who’s using Gmail or another provider is using a worse service when there are better alternatives.

Instead of using Google search, use DuckDuckGo, because every time you use those services you are voting as well. You are sending the message that this is important, that consumers care about this, and if companies want to have a competitive advantage, they better protect us.