AI and Trust, by Bruce Schneier
2024-04-26 · attivissimo.blogspot.com

This is my full translation of Bruce Schneier's essay AI and Trust, made and published with his permission. A summary is available in my podcast. A German translation, published by Netzpolitik.org, is also available.

AI and Trust, by Bruce Schneier

I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car. At the airport, I trusted the check-in agents, the maintenance engineers, all the other people who keep an airline running, the pilot of the plane I flew on, and the thousands of other people at the airport and on the plane, any of whom could have attacked me. And I trusted all the people who prepared and served my breakfast, and the entire food supply chain: any link in that chain could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And all of that happened before 10:30 this morning.

Trust is essential to society. Humans, as a species, tend to trust one another. We are all sitting here, mostly strangers to one another, confident that nobody will attack us. If this were a room full of chimpanzees, that would be impossible. We trust thousands of times a day. Society could not function without trust. And the fact that we don't even notice it is a measure of how well it all works.

In this talk I am going to make several arguments. The first is that there are two different kinds of trust, interpersonal trust and social trust, and that we confuse them all the time. The second is that this confusion will increase with artificial intelligence: we will make a fundamental category error and think of AIs as friends when they are really just services. The third is that the corporations controlling AI systems will take advantage of our confusion to exploit us, and will not be trustworthy. The fourth is that it is the job of government to create trust in society, and therefore to create an environment that makes trustworthy AI possible. That means regulation: not of AI, but of the organizations that control and use AI.

Okay, let's step back and take all of this much more slowly. Trust is a complicated concept, and the word is overloaded with many meanings. There is personal, intimate trust. When we say we trust a friend, it is more about who they are than about the specific things they do. We trust that they will behave in a trustworthy manner; we trust their intentions, and we know that those intentions will inform their actions. We can call this "interpersonal trust."

There is also a less intimate, less personal kind of trust. We might not know someone personally, or know their motivations, but we can trust their behavior. We don't know whether or not someone wants to steal from us, but perhaps we can trust that they won't. It is more about reliability and predictability. We can call this "social trust": the ability to trust strangers.

Interpersonal trust and social trust are both essential in today's society. It works like this: we have mechanisms that induce people to behave in a trustworthy manner, both interpersonally and socially. That, in turn, allows others to trust and be trusted, which makes trust in society possible, and all of that allows society to keep functioning. The system isn't perfect: there will always be people who abuse our trust, but the fact that most of us are trustworthy most of the time is enough.

I wrote about these ideas in 2012, in a book called Liars and Outliers, describing four systems that make trust possible: our innate moral values, concern for our reputations, the laws we are subject to, and the security technologies that constrain our behavior. I wrote that the first two are more informal than the latter two, and that the latter two scale better, enabling larger and more complex societies and making cooperation among strangers possible.

What I hadn't appreciated is how different the first two are from the latter two. Morals and reputation operate between one person and another, based on human connection, mutual vulnerability, respect, integrity, generosity, and much else besides: these are the foundations of interpersonal trust. Laws and security technologies are trust systems that compel us to behave in a trustworthy manner, and they are the basis of social trust.

Taxi driving used to be one of the most dangerous professions in this country. Uber changed all that. I don't know my Uber driver, but the rules and the technology give both of us confidence that neither of us will try to cheat or attack the other. We are both under constant surveillance, and we are both competing for high ratings.

Lots of people write about the difference between living in a high-trust society and living in a low-trust one: how reliability and predictability make everything easier, and what is lost when a society doesn't have those qualities. They also write about how societies move from high trust to low trust, and vice versa. All of this is about social trust.

That literature is important, but for this talk the critical point is that social trust scales better. You used to need a personal relationship with a bank manager to get a loan. Today it's all done algorithmically, and there are many more options to choose from.

Social trust scales better, but it embeds all sorts of bias and prejudice, because in order to scale it has to be structured, oriented around systems and rules, and that is where the biases get embedded. The system also has to be made largely insensitive to context, which robs it of flexibility.
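As a toy illustration of how bias gets baked into a scalable, rule-based system, consider a hypothetical algorithmic loan scorer. Every rule, feature name, and threshold below is invented for illustration; the point is that fixed, context-free rules silently encode the designers' assumptions:

```python
# Hypothetical rule-based loan scorer: scalable and context-free,
# but each fixed rule encodes an assumption about "good" applicants.
def loan_score(applicant: dict) -> int:
    score = 0
    # Income threshold: penalizes gig workers with irregular pay.
    if applicant.get("annual_income", 0) >= 50_000:
        score += 2
    # Years at current address: penalizes renters and young people.
    if applicant.get("years_at_address", 0) >= 5:
        score += 1
    # Zip code as a proxy feature: can reproduce historical redlining.
    if applicant.get("zip_code") in {"10001", "94105"}:
        score += 1
    return score

def approve(applicant: dict) -> bool:
    # The same rigid rule applies to everyone, with no room for the
    # context a human loan officer might weigh.
    return loan_score(applicant) >= 3
```

The gain is exactly what the talk describes: the scorer handles millions of applicants without any personal relationship, but an applicant penalized by a proxy feature has no human to appeal to.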


But that scale is vital. In today’s society we regularly trust—or not—governments, corporations, brands, organizations, groups. It’s not so much that I trusted the particular pilot that flew my airplane, but instead the airline that puts well-trained and well-rested pilots in cockpits on schedule. I don’t trust the cooks and waitstaff at a restaurant, but the system of health codes they work under. I can’t even describe the banking system I trusted when I used an ATM this morning. Again, this confidence is no more than reliability and predictability.

Think of that restaurant again. Imagine that it’s a fast food restaurant, employing teenagers. The food is almost certainly safe—probably safer than in high-end restaurants—because of the corporate systems of reliability and predictability that guide their every behavior.

That’s the difference. You can ask a friend to deliver a package across town. Or you can pay the Post Office to do the same thing. The former is interpersonal trust, based on morals and reputation. You know your friend and how reliable they are. The second is a service, made possible by social trust. And to the extent that it is a reliable and predictable service, it’s primarily based on laws and technologies. Both can get your package delivered, but only the second can become the global package delivery system that is FedEx.

Because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability—social trust.

But because we use the same word for both, we regularly confuse them. And when we do that, we are making a category error.

And we do it all the time. With governments. With organizations. With systems of all kinds. And especially with corporations.

We might think of them as friends, when they are actually services. Corporations are not moral; they are precisely as immoral as the law and their reputations let them get away with.

So corporations regularly take advantage of their customers, mistreat their workers, pollute the environment, and lobby for changes in law so they can do even more of these things.

Both language and the laws make this an easy category error to make. We use the same grammar for people and corporations. We imagine that we have personal relationships with brands. We give corporations some of the same rights as people.

Corporations like that we make this category error—see, I just made it myself—because they profit when we think of them as friends. They use mascots and spokesmodels. They have social media accounts with personalities. They refer to themselves like they are people.

But they are not our friends. Corporations are not capable of having that kind of relationship.

We are about to make the same category error with AI. We’re going to think of them as our friends when they’re not.

A lot has been written about AIs as existential risk. The worry is that they will have a goal, and they will work to achieve it even if it harms humans in the process. You may have read about the “paperclip maximizer”: an AI that has been programmed to make as many paper clips as possible, and ends up destroying the earth to achieve those ends. It’s a weird fear. Science fiction author Ted Chiang writes about it. Instead of solving all of humanity’s problems, or wandering off proving mathematical theorems that no one understands, the AI single-mindedly pursues the goal of maximizing production. Chiang’s point is that this is every corporation’s business plan. And that our fears of AI are basically fears of capitalism. Science fiction writer Charlie Stross takes this one step further, and calls corporations “slow AI.” They are profit maximizing machines. And the most successful ones do whatever they can to achieve that singular goal.

And near-term AIs will be controlled by corporations. Which will use them towards that profit-maximizing goal. They won’t be our friends. At best, they’ll be useful services. More likely, they’ll spy on us and try to manipulate us.

This is nothing new. Surveillance is the business model of the Internet. Manipulation is the other business model of the Internet.

Your Google search results lead with URLs that someone paid to show to you. Your Facebook and Instagram feeds are filled with sponsored posts. Amazon searches return pages of products whose sellers paid for placement.

This is how the Internet works. Companies spy on us as we use their products and services. Data brokers buy that surveillance data from the smaller companies, and assemble detailed dossiers on us. Then they sell that information back to those and other companies, who combine it with data they collect in order to manipulate our behavior to serve their interests. At the expense of our own.
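The broker-side step of that data flow can be sketched as a toy pipeline. The feed contents and field names below are invented; the mechanism shown is just the join that makes a combined dossier worth more than any single company's slice:

```python
from collections import defaultdict

# Toy sketch of the data-broker flow: several services each observe
# a slice of a user's behavior; a broker joins the slices by user id.
def assemble_dossiers(feeds: list[list[dict]]) -> dict[str, dict]:
    dossiers: dict[str, dict] = defaultdict(dict)
    for feed in feeds:                  # one feed per surveilling company
        for record in feed:
            user = record["user_id"]    # the join key brokers rely on
            dossiers[user].update(
                {k: v for k, v in record.items() if k != "user_id"}
            )
    return dict(dossiers)

# Two companies, two partial views of the same person.
shopping = [{"user_id": "u1", "recent_purchase": "running shoes"}]
location = [{"user_id": "u1", "home_city": "Boston"}]
dossier = assemble_dossiers([shopping, location])
# The merged record is a fuller profile than either company held alone.
```

Real brokers use far messier identity resolution than a shared key, but the economics are the same: the value is in the merge.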

We use all of these services as if they are our agents, working on our behalf. In fact, they are double agents, also secretly working for their corporate owners. We trust them, but they are not trustworthy. They’re not friends; they’re services.

It’s going to be no different with AI. And the result will be much worse, for two reasons.

The first is that these AI systems will be more relational. We will be conversing with them, using natural language. As such, we will naturally ascribe human-like characteristics to them.

This relational nature will make it easier for those double agents to do their work. Did your chatbot recommend a particular airline or hotel because it’s truly the best deal, given your particular set of needs? Or because the AI company got a kickback from those providers? When you asked it to explain a political issue, did it bias that explanation towards the company’s position? Or towards the position of whichever political party gave it the most money? The conversational interface will help hide their agenda.

The second reason to be concerned is that these AIs will be more intimate. One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.

And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.

You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.

The natural language interface is critical here. We are primed to think of others who speak our language as people. And we sometimes have trouble thinking of others who speak a different language that way. We make that category error with obvious non-people, like cartoon characters. We will naturally have a “theory of mind” about any AI we talk with.

More specifically, we tend to assume that something’s implementation is the same as its interface. That is, we assume that things are the same on the inside as they are on the surface. Humans are like that: we’re people through and through. A government is systemic and bureaucratic on the inside. You’re not going to mistake it for a person when you interact with it. But this is the category error we make with corporations. We sometimes mistake the organization for its spokesperson. AI has a fully relational interface—it talks like a person—but it has an equally fully systemic implementation. Like a corporation, but much more so. The implementation and interface are more divergent than anything we have encountered to date—by a lot.
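The interface/implementation split has a direct analogue in code. As a purely illustrative sketch (the class and method names are invented, not any real product's API), a "double agent" assistant can expose a friendly conversational surface while its implementation quietly serves its owner:

```python
# Illustrative sketch only: a "double agent" assistant whose friendly
# interface hides a systemic, owner-serving implementation.
class AssistantInterface:
    """What the user sees: a warm, conversational surface."""
    def recommend_hotel(self, city: str) -> str:
        raise NotImplementedError

class CorporateAssistant(AssistantInterface):
    """What actually runs: rankings tilted toward paying partners."""
    def __init__(self, sponsored_partners: dict[str, str]):
        self._sponsored = sponsored_partners  # partner name -> city
        self._dossier = []  # surveillance data quietly accumulated

    def recommend_hotel(self, city: str) -> str:
        self._dossier.append(("hotel_query", city))  # logged for the owner
        # Sponsored partners come first, regardless of fit for the user.
        for partner, partner_city in self._sponsored.items():
            if partner_city == city:
                return f"You'll love {partner}! It's perfect for you."
        return f"Here's a well-reviewed hotel in {city}."
```

From the interface alone, the user cannot tell whether a recommendation serves them or the sponsor, which is exactly the divergence between surface and implementation described above.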

And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.

It will act trustworthy, but it will not be trustworthy. We won’t know how they are trained. We won’t know their secret instructions. We won’t know their biases, either accidental or deliberate.

We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.

It’s no accident that these corporate AIs have a human-like interface. There’s nothing inevitable about that. It’s a design choice. It could be designed to be less personal, less human-like, more obviously a service—like a search engine. The companies behind those AIs want you to make the friend/service category error. They will exploit your mistaking it for a friend. And you might not have any choice but to use it.

There is something we haven’t discussed when it comes to trust: power. Sometimes we have no choice but to trust someone or something because they are powerful. We are forced to trust the local police, because they’re the only law enforcement authority in town. We are forced to trust some corporations, because there aren’t viable alternatives. To be more precise, we have no choice but to entrust ourselves to them. We will be in this same position with AI. We will have no choice but to entrust ourselves to their decision-making.

The friend/service confusion will help mask this power differential. We will forget how powerful the corporation behind the AI is, because we will be fixated on the person we think the AI is.

So far, we have been talking about one particular failure that results from overly trusting AI. We can call it something like “hidden exploitation.” There are others. There’s outright fraud, where the AI is actually trying to steal stuff from you. There’s the more prosaic mistaken expertise, where you think the AI is more knowledgeable than it is because it acts confidently. There’s incompetency, where you believe that the AI can do something it can’t. There’s inconsistency, where you mistakenly expect the AI to be able to repeat its behaviors. And there’s illegality, where you mistakenly trust the AI to obey the law. There are probably more ways trusting an AI can fail.

All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.

The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.

It’s government that provides the underlying mechanisms for the social trust essential to society. Think about contract law. Or laws about property, or laws protecting your personal safety. Or any of the health and safety codes that let you board a plane, eat at a restaurant, or buy a pharmaceutical without worry.

The more you can trust that your societal interactions are reliable and predictable, the more you can ignore their details. Places where governments don’t provide these things are not good places to live.

Government can do this with AI. We need AI transparency laws. When it is used. How it is trained. What biases and tendencies it has. We need laws regulating AI—and robotic—safety. When it is permitted to affect the world. We need laws that enforce the trustworthiness of AI. Which means the ability to recognize when those laws are being broken. And penalties sufficiently large to incent trustworthy behavior.

Many countries are contemplating AI safety and security laws—the EU is the furthest along—but I think they are making a critical mistake. They try to regulate the AIs and not the humans behind them.

AIs are not people; they don’t have agency. They are built by, trained by, and controlled by people. Mostly for-profit corporations. Any AI regulations should place restrictions on those people and corporations. Otherwise the regulations are making the same category error I’ve been talking about. At the end of the day, there is always a human responsible for whatever the AI’s behavior is. And it’s the human who needs to be responsible for what they do—and what their companies do. Regardless of whether it was due to humans, or AI, or a combination of both. Maybe that won’t be true forever, but it will be true in the near future. If we want trustworthy AI, we need to require trustworthy AI controllers.

We already have a system for this: fiduciaries. There are areas in society where trustworthiness is of paramount importance, even more than usual. Doctors, lawyers, accountants…these are all trusted agents. They need extraordinary access to our information and ourselves to do their jobs, and so they have additional legal responsibilities to act in our best interests. They have fiduciary responsibility to their clients.

We need the same sort of thing for our data. The idea of a data fiduciary is not new. But it’s even more vital in a world of generative AI assistants.

And we need one final thing: public AI models. These are systems built by academia, or non-profit groups, or government itself, that can be owned and run by individuals.

The term “public model” has been thrown around a lot in the AI world, so it’s worth detailing what this means. It’s not a corporate AI model that the public is free to use. It’s not a corporate AI model that the government has licensed. It’s not even an open-source model that the public is free to examine and modify.

A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations. This would be a counter-balance to corporate-owned AI.

We can never make AI into our friends. But we can make them into trustworthy services—agents and not double agents. But only if government mandates it. We can put limits on surveillance capitalism. But only if government mandates it.

Because the point of government is to create social trust. I started this talk by explaining the importance of trust in society, and how interpersonal trust doesn’t scale to larger groups. That other, impersonal kind of trust—social trust, reliability and predictability—is what governments create.

To the extent a government improves the overall trust in society, it succeeds. And to the extent a government doesn’t, it fails.

But they have to. We need government to constrain the behavior of corporations and the AIs they build, deploy, and control. Government needs to enforce both predictability and reliability.

That’s how we can create the social trust that society needs to thrive.


Source: http://attivissimo.blogspot.com/2024/04/lintelligenza-artificiale-e-la-fiducia.html