How the Promise of AI Will Be a Nightmare for Data Privacy
2024-09-27 | securityboulevard.com

I hate billing. I love getting paid. I hate billing. I hate doing invoices, timesheets, expense reports, travel planning and everything else necessary to bill. So this is one of the many things that AI – particularly large language models (LLMs) and large action models (LAMs) – can do for me: it can figure out, by looking at my keystrokes, the documents I am creating or modifying, my Google searches, my phone calls and text messages (across platforms), my travel schedule, etc., what matters I am working on and for how long, and generate a bill or invoice for me to review. Who am I kidding – for me to send without review.
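To make the fantasy concrete, here is a toy sketch of what such a pipeline might look like. Everything in it is invented for illustration – the ActivityEvent record, the infer_matter keyword lookup standing in for an actual LLM classifier, the matter names – and no real product is being described:

```python
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class ActivityEvent:
    """One captured signal: a document edit, a call, a trip, a search."""
    timestamp: datetime
    source: str        # "document", "call", "search", "travel", ...
    description: str
    minutes: float

def infer_matter(event: ActivityEvent) -> str:
    """Stand-in for the LLM step: map an activity to a client matter.
    A real system would classify on content, contacts and calendar
    context; here we fake it with a keyword lookup."""
    keywords = {"acme": "Acme Corp v. Roadrunner", "boston": "Beacon Hill deal"}
    for key, matter in keywords.items():
        if key in event.description.lower():
            return matter
    return "Unbilled / review manually"

def draft_invoice(events: list[ActivityEvent]) -> dict[str, float]:
    """Aggregate minutes per inferred matter into a draft bill."""
    totals: dict[str, float] = defaultdict(float)
    for event in events:
        totals[infer_matter(event)] += event.minutes
    return dict(totals)

events = [
    ActivityEvent(datetime(2024, 9, 20, 9, 0), "document", "Edited Acme brief", 90),
    ActivityEvent(datetime(2024, 9, 20, 14, 0), "travel", "Flight to Boston", 150),
    ActivityEvent(datetime(2024, 9, 20, 17, 0), "call", "Call with Acme GC", 30),
]
print(draft_invoice(events))
```

Notice where the exposure lives: the classification step only works if the system can read the content of every document, call and search it is sorting.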

Very cool and very scary.

When we think of data privacy in the context of AI, we are generally concerned (at this point, at least) with the data used to train the AI programs – the fact that an AI program will suck up, aggregate and analyze petabytes of information from various sources, including copyrighted works online, public files, license information, pictures and video – you know, anything it can Hoover up. We also think of privacy issues when AI programs make decisions that impact our lives – directed search results, personalized content, etc. Scary stuff.

But as we start delegating to LLMs and LAMs the authority to act on our behalf (as our personal avatars), we create a true privacy nightmare. LAMs require data from multiple sources, access to multiple platforms and integration with disparate databases and systems to work effectively. AI built into wearable devices can, for example, capture images of every place you go, everything you do and everything you interact with. That's great if you are not Marilu Henner (kids, Google it) and don't have hyperthymesia (adults, Google it). If you want to be able to walk into a room, know the names of all of the people in it and your previous interactions with them ("Dave! Haven't seen you since Idaho. How are Kenny and Jessica?") – or, worse, know everything posted online about them – then LLMs are for you. But the point of LAMs and LLMs is not just to find information, but to assimilate it. Resistance is… you get it.

Unlike data collected by, for example, search engines like Google, LAMs and LLMs can collect passive data – connecting data from wearables (your watch and glasses), your phone (speed, direction, location, etc.), medical sensors (pulse, pulse ox, blood pressure, sleep and other cycles) and smart devices – doorbells, lights, cameras, motion sensors, smart cars, etc. Add financial records, documents, communications and the rest, and it's not that you have a detailed image of the person – what you have is just about everything about that person.
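A hypothetical sketch of that fusion step, with made-up feeds standing in for each connected device (no real device APIs are used here) – the point is how little it takes, once access is granted, to turn mundane signals into a minute-by-minute record:

```python
from datetime import datetime

# Invented sample feeds; each stands in for a device or service
# the user has already connected to the agent.
wearable = [(datetime(2024, 9, 20, 7, 5), "heart_rate", 88)]
phone    = [(datetime(2024, 9, 20, 7, 6), "location", "42.36,-71.06")]
doorbell = [(datetime(2024, 9, 20, 7, 7), "motion", "front door")]

def fuse(*feeds):
    """Merge per-device readings into one chronological profile.
    The privacy problem is exactly this step: individually mundane
    signals become a continuous record once combined."""
    return sorted((t, kind, value) for feed in feeds for t, kind, value in feed)

for timestamp, kind, value in fuse(wearable, phone, doorbell):
    print(f"{timestamp:%H:%M}  {kind:<10} {value}")
```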


Problem is, there is no specific "repository" of that information – no single data "collector." The disparate databases – Uber, Facebook, Instagram, whatever – are connected by YOU to feed the LAM. It's that integration that allows a hypothetical invoicing program to know that my trip to Boston is for one client (including mileage, meals, airfare, hotel, etc.) but my phone call at Logan is for another, and to bill accordingly. It also means integrating my Verizon phone, my Uber account, and my Kayak and Bill.com accounts with Venmo or PayPal or whatever. It's not that each has access to the others' data – it's that the AI program is one ring to rule them all.
And, of course, in the darkness, bind them.
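Here is a toy illustration of that cross-account correlation – the trips and calls records, the matter labels and the allocate_expenses helper are all invented – showing why no single service could produce this allocation, but the agent sitting above all of them can:

```python
from datetime import datetime

# Hypothetical data pulled from two unrelated accounts. Neither the
# travel service nor the phone carrier can see the other's records;
# only the agent the user linked to both can join them.
trips = [  # from a travel account
    {"start": datetime(2024, 9, 20, 6, 0), "end": datetime(2024, 9, 20, 22, 0),
     "destination": "Boston", "matter": "Beacon Hill deal"},
]
calls = [  # from a phone carrier account
    {"time": datetime(2024, 9, 20, 10, 30), "contact": "Acme GC",
     "matter": "Acme Corp v. Roadrunner", "minutes": 20},
]

def allocate_expenses(trips, calls):
    """Bill travel to the trip's matter, but carve out calls made
    during the trip for whichever matter each call belongs to."""
    lines = []
    for trip in trips:
        lines.append((trip["matter"], f"Travel to {trip['destination']}"))
        for call in calls:
            if trip["start"] <= call["time"] <= trip["end"]:
                lines.append((call["matter"],
                              f"{call['minutes']}-min call with {call['contact']}"))
    return lines

for matter, item in allocate_expenses(trips, calls):
    print(f"{matter}: {item}")
```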

Because the AI operates "in the darkness": data in, action out. And the cooler the action out, the more data we are willing to give it. Until it is too late. Our current legal model for data privacy is focused on data collectors and data processors – those to whom we give our data for a specific purpose, and those who get that data from or for someone else. Under this scenario, the AI "program" is a data processor – but is the entity that develops the program, OpenAI for example, a processor here, or just a service offering? They get your personal data (other than the data they mine from the web) because you give it to THEM. Well, you provide them access. For what purpose? Who knows. Something cool.

It's easy to imagine how our robot overlords could turn on us. In the 1980s, companies wishing to market goods or services to us had only the vaguest notion of who we were and what we wanted. Age, race, income, education, geography, political affiliation, etc., were rough approximations for "this guy wants Rheingold" or "she's a Pepsodent gal." Sports ads had beer and trucks. Daytime soaps? Well, soap. As "the Interwebs" learned more about us through searches and data aggregation, advertisers were able to target specific promotions to specific individuals (folks who like vaping may also like lung surgery…). Ultimately, these ads got creepily targeted (I see you have visited Dr. Berman, the cardiologist – are you interested in cremation services?).

AI takes this all to a new level because it does not rely on a few channels of information but on virtually (pun intended) all of them. And the same AI that helps me send out bills may help the IRS challenge or track those bills, or help opposing counsel learn about the surprise motion I intend to file in a case. Remember, all of this is data I chose to share with the LLM or the LAM. But how that data is stored, processed, secured and used? That's a mystery.

Artificial intelligence (AI), particularly large language models (LLMs) and large action models (LAMs), represents a significant leap forward in technology. These advanced AI systems are poised to revolutionize numerous facets of our lives, from personalized financial planning and health management to enhanced customer service and smart home automation. However, the efficacy of these systems is inherently tied to their ability to collect and process extensive personal data, raising critical concerns about data privacy and security.

Source: https://securityboulevard.com/2024/09/how-the-promise-of-ai-will-be-a-nightmare-for-data-privacy/