Consumer and privacy predictions for 2025
2024-11-27 | securelist.com

Part of the Kaspersky Security Bulletin, our predictions for 2024 identified key consumer cyberthreats and trends shaped by global events, technological advances and evolving user behavior.

Last year, we suggested that charity-related scams would increase globally. While cybercriminals did exploit humanitarian crises and charitable causes, taking advantage of both major conflicts and new donation methods, the anticipated surge did not fully materialize. We did observe abuse of such efforts, particularly around the Israel-Hamas conflict, with Kaspersky researchers uncovering more than 540 scam emails and numerous fraudulent websites imitating legitimate humanitarian aid campaigns. However, charity platforms’ ever-evolving protective measures, along with the growing integration of charitable giving into day-to-day online shopping, have given users more secure and convenient ways to contribute without exposing themselves to scammers, which also bears out our prediction about such collaboration.

In line with our expectations regarding VPN usage and internet segmentation, 2024 saw a notable global surge in the popularity of VPN and proxy services across various countries. This trend is largely driven by users seeking to bypass regional content restrictions and enhance online privacy. However, the increased demand has attracted malicious actors: cybercriminals are exploiting the popularity of VPN services by spreading potentially harmful applications disguised as legitimate VPN tools. Kaspersky has reported a surge in these malicious apps, which are capable of compromising user data and security.

In 2024, the prediction that national security concerns would lead to restrictions imposed on apps and services, thus creating new security issues, proved accurate. Governments worldwide used security as a justification for limiting access to popular platforms, often leaving users with fewer and potentially less secure alternatives in the process. Notable developments included the temporary suspension of X (formerly Twitter) in Brazil by a court ruling, ongoing discussions about TikTok’s ownership structure in the United States, and the removal of various messaging apps from Apple’s App Store in China.

Our prediction that play-to-earn (P2E) gaming platforms would attract cybercriminals was also fulfilled, with multiple cases highlighting the sector’s vulnerabilities. Kaspersky researchers uncovered phishing schemes targeting players of Hamster Kombat, a popular Telegram-based clicker game, in which attackers used fraudulent links to steal credentials and gain unauthorized access to user accounts. Similarly, Kaspersky GReAT discovered a scheme devised by the Lazarus group: a malicious decoy game posing as a legitimate P2E platform and laced with sophisticated malware designed to steal cryptocurrency and sensitive user data.

Despite the growing need for such a tool driven by advances in generative AI, our prediction that a universal deepfake verification tool would be developed remained unfulfilled in 2024. While user-generated content (UGC) platforms like TikTok and Instagram have introduced policies requiring creators to label AI-generated content, the effectiveness of these measures is limited by their reliance on users’ honesty and awareness.

Just as predicted, the rise of voice deepfakes continued in 2024, fueling scams like vishing (voice phishing). A notable example is the proliferation of “fake kidnapping” scams, where attackers use voice imitation technology to impersonate real individuals and extort money from their families. The availability of open-source voice generation models expanded significantly, making these tools more accessible and lowering the technological threshold for malicious actors. The challenge of combating deepfakes extends beyond advances in tech and includes the need to raise public awareness and ensure the seamless integration of detection tools into everyday life.

Lastly, cybercriminals capitalized on the anticipation surrounding major film and game releases in 2024, in line with our earlier predictions. The release of “Joker 2” was accompanied by scams such as phishing websites and fake streaming links aimed at deceiving eager fans. Although “Grand Theft Auto VI” (GTA VI) is not scheduled for release until 2025, scammers have already started exploiting its popularity by creating fake beta versions and unauthorized mobile releases to trick users into downloading malware or submitting personal information.

Overview of 2024 privacy predictions

The privacy landscape in 2024 was shaped by significant technological advances and evolving societal concerns, aligning with many of our predictions but leaving some areas underdeveloped.

Biometric data gained recognition as a critical aspect of privacy protection, with the European Union adopting the Artificial Intelligence (AI) Act to address privacy concerns associated with facial recognition and other biometric technologies. This marked a notable step towards expanding the concept of private data beyond traditional means of identification. However, while the EU led these efforts, global consensus and comprehensive implementation of similar standards remain ongoing challenges, so our prediction of stricter regulation of biometric data usage was only partially fulfilled.

Predicted privacy debates surrounding AI-enabled wearables, such as Humane’s AI Pin, did not gain much traction, as these devices struggled to make significant advancements in 2024. As a result, discussions surrounding these technologies often merged with those about AR and VR devices, which saw more tangible development. Devices like Apple Vision Pro and Meta’s Ray-Ban smart glasses highlighted similar concerns around data collection, biometric privacy, environmental mapping and bystanders’ consent. While these advancements brought privacy challenges to the forefront, the lack of robust regulatory frameworks left these predictions only partially realized. The immersive and pervasive nature of AR/VR technology underscores the urgency of establishing concrete regulatory measures to address these evolving concerns.

In 2024, the prediction that leaked passwords would become less of a concern saw partial realization. The increased adoption of passwordless authentication, supported by passkeys and biometric logins from major tech companies like Google, Microsoft and Apple, reduced the reliance on traditional passwords and mitigated the impact of credential leaks. However, the transition remains incomplete, with gaps in adoption leaving room for continued exploitation.

Assistant bots showed promise in enhancing privacy, particularly in mitigating phishing risks through call transcription and incoming caller screening, as predicted. However, the rise of sophisticated scams targeting bot vulnerabilities underscored the dual role of these technologies, showcasing both their potential and the need for stronger safeguards.

Overall, 2024 demonstrated progress in addressing critical privacy concerns while highlighting the need for continued advancement, global collaboration and comprehensive regulatory efforts to fully realize the potential of emerging technology in safeguarding privacy.

Consumer and privacy predictions for 2025

AI becomes an everyday reality

In 2025, artificial intelligence (AI) will solidify its role as a core element of daily life, transitioning from an innovative tool to a mundane utility. The rapid adoption of AI-driven technology across various domains — from search engines to creative tasks — has already reshaped how people work, learn and communicate. Major platforms like Google and Bing have integrated AI into search results, while users increasingly rely on chatbots for everything from answering questions and editing media to learning languages and simplifying workflows.

This trend is set to expand further with the anticipated release of advanced AI features in key operating systems like iOS and Android, marking a new phase in AI accessibility. As these capabilities roll out, AI will influence not only personal convenience but also broader industries. In academia, for instance, AI has accelerated research processes, and its contributions may reach new heights, as highlighted by the Nobel Prize awarded to DeepMind’s co-founder and one of its scientists for AI-driven research.

However, alongside this normalization, challenges remain. AI’s ability to produce personalized deepfakes continues to evolve, raising ethical and privacy concerns in the absence of robust detection tools. As AI systems increasingly interact with and shape the physical world, the need for safeguards and accountability will grow. By 2025, AI’s ubiquity will transform it from a novelty into an indispensable part of modern life, with both opportunities and risks becoming more pronounced.

Fraudsters to exploit high-profile entertainment releases in 2025

In 2025, cybercriminals are expected to capitalize on the excitement surrounding major game, console and film releases. The launch of highly anticipated games like Mafia: The Old Country, Civilization VII and Death Stranding 2 will likely be accompanied by scams involving fake pre-orders and counterfeit digital keys. Similarly, rumors about the release of Nintendo’s next-generation console may fuel scams tied to pre-orders, early sales and fake hacking tools, some of which could deliver malware such as rootkits.

On the cinematic front, anticipated sequels and remakes like Superman, Jurassic World Rebirth, Captain America: Brave New World, Return to Silent Hill, and Tron: Ares will provide scammers with ample opportunities. Fraudulent campaigns may target fan forums and social media platforms, promoting fake early screenings and counterfeit merchandise and distributing phishing emails. As the hype around these premieres intensifies, so will the sophistication and scale of cybercriminal activity seeking to exploit eager fans and consumers.

Proliferating subscription services to fuel fraud risks

As the global economy increasingly shifts towards subscription-based models, a significant uptick in fraud involving fake subscription offerings is anticipated. Cybercriminals are expected to exploit users’ growing reliance on subscriptions by creating counterfeit services that mimic legitimate ones. These fraudulent platforms aim to deceive users into providing personal and financial information, leading to identity theft and financial loss.

Moreover, with the proliferation of subscription services, some users may turn to unofficial resources to access content at reduced prices or for free. These non-official channels often lack proper security and may serve as hotspots for malware distribution, phishing attacks and other cyberthreats. Engaging with these platforms not only undermines legitimate businesses’ interests but also exposes users to heightened risks of fraud and data breaches.

Child social media bans to set a precedent for broader restrictions

Australia is considering legislation to ban children under 16 from using social media platforms like Facebook, Instagram, TikTok and X (formerly Twitter). The success of this measure hinges on its technical implementation, particularly on establishing reliable and effective age verification systems. If these challenges are resolved, the legislation could serve as a model for similar restrictions globally. Moreover, successful implementation of such a strict approach might set a precedent for extending restrictive measures to other user groups, potentially reshaping international norms for regulating access to and use of online platforms.

While unrelated to Australia’s initiative, content-sharing platforms are independently exploring advanced solutions to age-related access issues. Instagram, for example, plans to deploy artificial intelligence to detect users misrepresenting their age, demonstrating how technology can enhance compliance with age-based policies. These innovations highlight the potential for scalable enforcement, even as they face significant hurdles in ensuring accuracy and fairness.

Political polarization to fuel cyberbullying

In 2025, the deepening political divides in countries worldwide are expected to fuel a rise in cyberbullying, exacerbated by the global reach of social media platforms. Economic disparities, social movements and geopolitical conflicts have heightened tensions across Europe, Asia, Africa and the Americas. Social media platforms amplify these divides through algorithms that promote echo chambers and inflammatory content, creating an environment ripe for targeted harassment. Emerging AI tools, such as those used to create deepfakes or doctored posts, further enable malicious actors to escalate harassment. Although various social media platforms have taken measures to protect their communities from abuse, offensive content is difficult to detect comprehensively, leaving users exposed to cyberbullying.

This trend will lead to an increase in the frequency of targeted attacks, doxing and coordinated cyberbullying campaigns, often crossing national boundaries. Individuals may face harassment not only from domestic adversaries but also from users abroad, making cyberbullying a transnational issue.

New regulations to expand user ownership of their data

In 2025, privacy regulations are set to hand users more control over their personal data than ever before. New laws may enable individuals to monetize their data, turning it from a corporate commodity into a personal asset. Expanded portability rights could make it easier to move data across platforms, encouraging competition and giving users the freedom to switch services without losing their digital history. Simplified consent models and enhanced rights to correct or delete data will further empower users to manage their online presence.

Globally, privacy frameworks like California’s CPRA and the EU’s GDPR are inspiring similar reforms in regions such as Asia and across U.S. states. Innovations like decentralized data storage may also emerge, giving users direct control over their data. By the end of 2025, the balance of power in the digital ecosystem may shift decisively toward individuals.


Source: https://securelist.com/ksb-consumer-and-privacy-predictions-2025/114620/