Can AI Agents Fix The Internet’s Trust Problem?

In an internet drowning in claims, counter-claims, and synthetic media, Swarm Network proposes something bold: a decentralized “Truth Protocol” where thousands of AI agents collaborate, cross-check evidence, and post on-chain attestations. With 10,000+ agent licenses sold, millions of rollups processed, and a live news verification product on X, the team says it’s building an economy where truth is more profitable than deception. Today, we dig into the architecture, incentives and risks of that vision.

Ishan Pandey: Hi Yannick, it is great to have you on our “Behind the Startup” series. Let us start with your journey. What first drew you to multi-agent AI and blockchain, and what did you build at Delysium, including the AI Agent Network and Lucy OS, that led you to found Swarm Network?

Yannick Myson: My journey started with a simple but powerful question: how can we coordinate intelligence, both human and artificial, at scale without relying on centralized gatekeepers? That question became my obsession. At Delysium, I helped design and scale the AI Agent Network and I was the main author of the latest white paper.

Delysium became one of the most prominent Web3 AI projects, reaching a peak valuation of over 2.5 billion USD and expanding globally across markets. That experience gave me a front-row seat to the potential of AI agents, but it also exposed the limitations of centralized infrastructures. We needed a way to make these systems open, transparent, and verifiable.

That insight became the foundation of Swarm. Swarm is a decentralized AI protocol where users can generate clusters of intelligent agents, deploy them in swarms, and collectively verify real-world information on-chain. It’s not theoretical. Our first application, Rollup News, is live on X and has processed over 3 million posts, with more than 128,000 users participating. Anyone can tag @Rollup_News to verify a claim and receive an auditable, on-chain result in seconds.

Since launch, we’ve raised 3 million USD from strategic investors including the SUI Foundation, Ghaf Capital, Brinc, Zerostage, and Y2Z. Over 10 million USD more was generated through public sales of Agent Licenses, with more than 10,000 licenses sold. These licenses give users access to operate agents, earn rewards, and participate in the network’s governance.

Swarm is building the missing layer of truth on the internet, a verifiable system where AI agents and humans collaborate to anchor fact-based claims on-chain.

Ishan Pandey: Could you share a couple of current B2B and B2C Web3 use cases you’re supporting, say DeFi, RWA platforms, marketplaces, or decentralized social? We would love to hear the outcomes you’ve seen beyond top-line figures.

Yannick Myson: At Swarm, our mission is to build decentralized infrastructure for truth, enabling anyone to verify information openly and collaboratively. We believe the next evolution of Web3 is not just financial but epistemic: helping the internet decide what is real and what is not.

Our core B2C use case is Rollup News, a fact-checking protocol embedded directly into X (formerly Twitter). Users tag @Rollup_News on a post and receive a real-time verdict backed by on-chain claims and an auditable evidence trail. It's designed to restore trust in social media by making truth verifiable in the moment.

To date, Rollup News has processed over 3 million posts and reached more than 128,000 users. During our most recent public test, in July, we saw over 7,000 daily active users at peak. The product has also been selected as the first official AI-powered fact-checking provider integrated with Google, highlighting its utility beyond crypto-native audiences.

On the B2B side, Swarm supports DeFi and real-world asset protocols that rely on trustworthy off-chain data. Our system turns external inputs like audits, shipping confirmations, or financial statements into cryptographic claims that can be independently verified and referenced on-chain. This helps ensure that core business logic runs on data that can be proven, not just assumed.
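
Conceptually, anchoring an off-chain input looks something like the sketch below: the document is reduced to a canonical encoding, hashed, and only the digest is referenced on-chain. This is a minimal Python illustration; the field names and the make_claim_commitment helper are assumptions made for the example, not Swarm's actual schema or API.

```python
import hashlib
import json

def make_claim_commitment(claim: dict) -> str:
    """Hash a canonical JSON encoding of an off-chain input.

    Only the digest needs to live on-chain; anyone holding the original
    document can recompute it and confirm the two match.
    """
    canonical = json.dumps(claim, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Illustrative example: a shipping confirmation reduced to a verifiable claim.
shipment_claim = {
    "type": "shipping_confirmation",
    "shipment_id": "SHP-2041",
    "delivered_on": "2025-07-14",
    "carrier": "ExampleFreight",
}

print(make_claim_commitment(shipment_claim))  # 64-hex-character digest to anchor on-chain
```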

Whether it’s a user fact-checking social content or a protocol verifying real-world events, Swarm provides a new layer of trust for the internet: transparent, decentralized, and built for scale.

Ishan Pandey: Could you walk us through the core architecture? What are “Atomic claims”? And how do agent clusters coordinate, resolve conflicts, and prevent herding effects in adversarial or ambiguous information environments?

Yannick Myson: The core of Swarm’s architecture is designed around clarity, scale, and accountability. At the heart of the system are atomic claims: small, verifiable statements of fact. Instead of treating a post or data blob as a single unit, we break it down into precise components such as "X person made Y statement on Z date." This modular approach allows every piece of information to be verified independently and transparently.
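
To make the idea concrete, here is a minimal sketch of what an atomic claim could look like as a data structure, assuming a simple subject/predicate/context shape; Swarm's internal representation is not published here, so treat this as illustrative only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomicClaim:
    """One small, independently verifiable statement of fact."""
    subject: str    # who or what the claim is about
    predicate: str  # what is asserted
    context: str    # when or where the assertion applies

# A single post is decomposed into several atomic claims, each of which
# can be verified (and audited) on its own.
post_claims = [
    AtomicClaim("Person X", "made statement Y", "on date Z"),
    AtomicClaim("Statement Y", "cites report Q", "published in 2024"),
]

for claim in post_claims:
    print(claim)
```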

These atomic claims are processed by specialized clusters of AI agents. Each cluster includes different agent roles: retrieval agents source data, reasoning agents evaluate consistency, and consensus agents determine the final verdict. Think of it like a decentralized debate, with each role accountable to others. To avoid groupthink or single-source reliance, the system enforces diversity in both data inputs and methodologies. This guards against herding and makes consensus more resilient in ambiguous or adversarial scenarios.
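
The division of labour can be pictured as a small pipeline. The sketch below is a toy, deterministic version of the retrieval, reasoning, and consensus roles described above; the function names and the majority-vote logic are assumptions for illustration, not the protocol's actual agent interfaces.

```python
from collections import Counter

def retrieval_agent(claim: str) -> list[dict]:
    # Stubbed evidence; a real retrieval agent would query independent sources.
    return [
        {"source": "source_a", "supports": True},
        {"source": "source_b", "supports": True},
        {"source": "source_c", "supports": False},
    ]

def reasoning_agent(evidence: list[dict]) -> str:
    # Each reasoning agent turns the evidence into a provisional verdict.
    supporting = sum(1 for item in evidence if item["supports"])
    return "supported" if supporting > len(evidence) / 2 else "disputed"

def consensus_agent(verdicts: list[str]) -> str:
    # The consensus role aggregates independent verdicts into a final result.
    verdict, count = Counter(verdicts).most_common(1)[0]
    return verdict if count > len(verdicts) / 2 else "no consensus"

claim = "X person made Y statement on Z date"
evidence = retrieval_agent(claim)
verdicts = [reasoning_agent(evidence) for _ in range(3)]  # three reasoning passes (identical here; diverse in practice)
print(consensus_agent(verdicts))
```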

When needed, human reviewers step in, especially for edge cases involving language, nuance, or cultural context. Every decision made, from data used to logic applied, is logged and stored on-chain. That means anyone can audit how a claim was processed, who contributed, and whether it met the standards set by the network.

This architecture is built not just for speed or automation, but for explainability. It's a new kind of infrastructure where intelligence is distributed, reasoning is transparent, and outcomes are auditable.

Ishan Pandey: How do you harden the system against Sybil attacks, collusion between agent operators, and coordinated misinformation campaigns that try to game economic rewards?

Yannick Myson: Swarm is designed from the ground up with adversarial environments in mind. To protect against Sybil attacks and manipulation, we combine strong economic incentives with layered accountability.

First, participation is gated by Agent Licenses. These licenses are not free or easily replicable; they represent economic skin in the game. To operate an agent, you must hold a license, which makes mass spamming or low-effort collusion expensive from the start.

Second, we use cross-verification. Agents are not allowed to validate each other's work in isolation. Their outputs are compared against multiple peers and often challenged by independent clusters. When discrepancies emerge, the system favors the path with the strongest evidence trail. This discourages echo chambers and coordinated misinformation.
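
As a rough illustration of "the path with the strongest evidence trail wins," consider the toy resolution rule below. Counting distinct sources is a simplification assumed for the example; the protocol's real scoring would weigh evidence quality, not just quantity.

```python
def resolve_discrepancy(cluster_outputs: list[dict]) -> dict:
    """Prefer the verdict backed by the strongest evidence trail.

    Strength is approximated here as the number of distinct independent
    sources cited by each cluster.
    """
    return max(cluster_outputs, key=lambda output: len(set(output["sources"])))

cluster_outputs = [
    {"cluster": "A", "verdict": "false", "sources": {"blog_x"}},
    {"cluster": "B", "verdict": "true", "sources": {"gov_filing", "news_wire", "court_record"}},
]

print(resolve_discrepancy(cluster_outputs)["verdict"])  # "true", backed by three sources
```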

Third, everything is logged. Every interaction, every verdict, every piece of evidence is recorded immutably on-chain. If an operator tries to game the system, those patterns don’t just leave a trace; they leave a public footprint. Reputation systems kick in here: agents with a history of accurate performance earn higher standing, while manipulation erodes that status in full view of the network.

This transparency, combined with meaningful costs of entry and automated dispute resolution, creates a system where gaming the rewards becomes both economically and reputationally unviable.

Ishan Pandey: The team is planning an Agent Service Market and a License Leasing Market. What prevents a race to the bottom in terms of quality, and how will reputation be computed and made tamper-proof?

Yannick Myson: A market only collapses when quality is invisible. In Swarm, quality is on-chain, permanent, and tied to performance.

Every agent's work is tied to public verification outcomes. If a retrieval agent delivers low-quality sources, or a reasoning agent consistently misjudges claims, their history shows it. That history is recorded immutably, not just on a centralized dashboard, but within the protocol itself.

Reputation in Swarm isn't abstract. It's computed from verifiable data: successful rollups, alignment with consensus, error rates, challenge disputes, and more. That reputation score directly affects rewards, leasing rates, and eligibility to join certain high-value swarms.
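
One way to picture "computed from verifiable data" is the toy scoring function below. The specific weights and field names are assumptions chosen only to show that every input can be an on-chain counter rather than a self-reported number; they are not Swarm's actual formula.

```python
def reputation_score(stats: dict) -> float:
    """Toy reputation score derived from verifiable performance counters."""
    completed = stats["rollups_completed"]
    if completed == 0:
        return 0.0
    alignment_rate = stats["aligned_with_consensus"] / completed
    error_rate = stats["errors"] / completed
    dispute_penalty = 0.05 * stats["challenges_lost"]  # placeholder weight
    return max(0.0, alignment_rate - error_rate - dispute_penalty)

agent_stats = {
    "rollups_completed": 200,
    "aligned_with_consensus": 188,
    "errors": 6,
    "challenges_lost": 1,
}
print(round(reputation_score(agent_stats), 3))  # 0.86
```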

Low-reputation agents are filtered out over time, not by subjective moderation, but by hard performance metrics and swarm-level selection. High-performing agents, on the other hand, rise in demand.

Because license holders stake their token rewards and reputations in every transaction, quality becomes the game. The design rewards long-term reliability over short-term volume, preventing the kind of degradation most marketplaces suffer from.

Ishan Pandey: If Swarm succeeds, what changes for an average internet user reading breaking news or for a DeFi protocol consuming off-chain signals, compared to today?

Yannick Myson: For everyday users, the experience of consuming news completely transforms. Instead of relying on blind trust in headlines, social media posts, or anonymous sources, users get real-time, on-chain verified claims directly within their feed. If you see a viral post, you can instantly request a "rollup" from our product Rollup News. Within seconds, you receive a structured verdict with supporting evidence, a transparent reasoning trail, and on-chain provenance. The user doesn’t just receive a yes or no; they get a proof-backed explanation.
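
That "structured verdict" can be imagined as a small, machine-readable record like the sketch below. The real payload returned by @Rollup_News is not documented here, so every field name is an assumption used purely to illustrate the verdict-plus-evidence-plus-provenance idea.

```python
# Hypothetical verdict record; field names are illustrative only.
rollup_verdict = {
    "claim": "X person made Y statement on Z date",
    "verdict": "supported",
    "evidence": [
        {"source": "https://example.com/transcript", "note": "primary transcript"},
        {"source": "https://example.com/coverage", "note": "independent report"},
    ],
    "reasoning_trail": [
        "retrieved two independent sources",
        "both attribute statement Y to person X on date Z",
    ],
    "onchain_reference": "tx-hash-of-the-attestation",
}
```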

For protocols and applications, the change is even more foundational. DeFi platforms can finally base critical decisions on verified off-chain truths. Whether it's proving that a physical asset exists for an RWA platform or confirming the outcome of an external event for a prediction market, Swarm enables them to rely on structured, cryptographically verifiable claims instead of opaque data feeds.
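
On the consuming side, a protocol only needs to check that the claim it was handed matches what was anchored on-chain before acting on it. The sketch below shows that check under the same canonical-JSON hashing assumption as the earlier example; the function and field names are hypothetical.

```python
import hashlib
import json

def claim_matches_onchain(claim: dict, anchored_digest: str) -> bool:
    """Recompute a claim's digest locally and compare with the on-chain record.

    A protocol would only act on the claim (e.g. accept an RWA valuation
    or settle an external outcome) if the two digests match.
    """
    canonical = json.dumps(claim, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest() == anchored_digest

asset_claim = {"asset_id": "RWA-17", "custodian": "ExampleCustody", "verified_exists": True}

# In this sketch the digest is "anchored" locally; in reality it would be
# read from a chain the protocol trusts.
anchored = hashlib.sha256(
    json.dumps(asset_claim, sort_keys=True, separators=(",", ":")).encode("utf-8")
).hexdigest()

assert claim_matches_onchain(asset_claim, anchored)
print("claim verified against anchored digest")
```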

We’re turning trust into infrastructure. Swarm shifts the internet away from blind reliance on platforms and influencers toward a model where facts are auditable, context is preserved, and verification is a public good accessible to all.

Ishan Pandey: Finally, what is the single strongest critique of Swarm you’ve heard and how has it concretely changed the roadmap or design?

Yannick Myson: The strongest critique we’ve received is that Swarm could be mistaken for just another oracle project, a rebranded data feed with a layer of AI. That forced us to get sharper about our differentiation and go deeper into both decentralization and explainability.

Instead of just feeding data to smart contracts, Swarm verifies claims about the world. This required us to rethink verification from the ground up. We introduced atomic claims (small, testable statements) as the base unit. Then we built agent clusters around them, with specialized roles for retrieval, reasoning, and consensus. Each step is logged, and the full evidence trail is made transparent and available on-chain.

We also made a deliberate move toward user-owned agent licenses to prevent centralization of power. Our Walrus Protocol integration ensures long-term storage of reasoning paths, claim histories, and agent communication, enabling full auditability. In short, that critique made us double down on what makes Swarm different, a system built not just for data, but for trust at internet scale.

Don’t forget to like and share the story!

This author is an independent contributor publishing via our business blogging program. HackerNoon has reviewed the report for quality, but the claims herein belong to the author. #DYOR
