Category: Blockchain

  • Markets and Tokenization: Why Financial Infrastructure Is Starting to Change

    Tokenization has long been described as one of the most promising transformations in modern finance. For years, however, the concept remained suspended between technological enthusiasm and limited market reality. That balance is now beginning to shift. What makes tokenization relevant today is no longer only the possibility of representing assets on distributed ledgers, but the growing recognition among central banks, international standard setters, and market participants that tokenization may alter how financial markets issue, trade, settle, and service assets. In other words, tokenization is increasingly being discussed not as a niche digital-asset experiment, but as a question of market infrastructure.

    At its core, tokenization means creating or representing an asset on a shared, trusted, and programmable digital ledger. The IMF has framed the issue in precisely these terms, stressing that the economic significance of tokenization lies less in the label itself than in the combination of three attributes: sharedness, trust, and programmability. This matters because financial markets are still shaped by frictions that arise across the lifecycle of assets—from issuance and trading to servicing and redemption. Tokenization is attractive not because it magically removes intermediation, but because it may reduce some of these frictions by allowing multiple parties to operate on a more integrated technological environment.

    The appeal is easy to understand. Today’s market infrastructure often relies on layered and sequential processes: one system records ownership, another handles settlement, another supports custody, and others manage collateral, reconciliation, and reporting. Tokenized arrangements promise to compress some of these steps into a more unified architecture. The BIS has argued that tokenization could both improve the existing system and enable new forms of financial contracting, especially in cross-border payments and securities markets. The ECB has similarly emphasized, as recently as March 2026, that tokenized assets and DLT-based infrastructures could make wholesale financial markets faster, cheaper, more integrated, and operational on a near-continuous basis.

    This is why the tokenization debate has moved beyond crypto-native circles. Recent Eurosystem work shows that the discussion is now firmly located in mainstream financial-market policy. Between May and November 2024, the Eurosystem conducted more than fifty trials and experiments with sixty-four market participants on DLT use in wholesale financial markets, and in 2025–2026 the ECB moved toward a strategic roadmap for Europe’s tokenized finance architecture. That evolution is important because it signals an institutional change in tone: tokenization is no longer viewed only as a speculative frontier, but as a possible layer of future market plumbing.

    And yet, despite the momentum, the market remains at an early stage. The OECD has noted that although interest in tokenization has grown and the distinction between crypto-assets and regulated tokenized assets has become clearer, actual adoption remains scarce. IOSCO reaches a similar conclusion: tokenization arrangements remain a small part of the financial sector, even if issuance and experimentation are increasing in selected jurisdictions. That gap between attention and adoption is perhaps the most important fact in the entire debate. Tokenization matters, but it is not yet dominant. The technology is advancing faster than market-wide implementation.

    There are several reasons for this. First, tokenization only creates broad efficiency gains if a critical mass of market participants uses interoperable systems. Finance is a network industry: isolated efficiency inside one firm does not necessarily translate into market-wide improvement. Second, legal and operational certainty still matter more than technological elegance. Ownership, finality, custody, insolvency treatment, asset servicing, and recordkeeping all require robust answers before tokenized markets can scale. Third, many legacy infrastructures already perform their core functions reasonably well. Replacing them demands not only innovation, but a compelling cost-benefit case. The IMF has gone so far as to suggest that, in some cases, the real novelty may lie less in “sharedness” alone and more in the programmability that tokenized systems can introduce.

    That last point is crucial. The strongest case for tokenization is not simply digitization, but programmable finance. Once assets exist as programmable tokens, the possibility emerges for conditional transfers, atomic settlement, automated compliance checks, composability across financial products, and more dynamic collateral use. IOSCO identifies precisely these features—fractionalization, programmability, composability, and atomicity—as among the main reasons why tokenization may reduce market friction and expand the design space of financial products and services. This is the aspect of tokenization that may prove most transformative over time: not merely faster settlement, but new market logic embedded into the asset itself.
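    The idea of atomic settlement can be made concrete with a small sketch. The snippet below is a toy, in-memory illustration (all names are hypothetical; real tokenized platforms enforce this logic inside smart contracts on a shared ledger, not in application code): the security leg and the cash leg of a trade either both settle or neither does, eliminating the risk of one leg completing without the other.

    ```python
    class Ledger:
        """Toy shared ledger mapping (account, asset) -> balance."""

        def __init__(self, balances):
            self.balances = dict(balances)

        def atomic_swap(self, buyer, seller, asset, qty, price):
            """Settle the delivery and payment legs together, or not at all."""
            snapshot = dict(self.balances)  # roll back on any failure
            try:
                self._transfer(seller, buyer, asset, qty)     # delivery leg
                self._transfer(buyer, seller, "cash", price)  # payment leg
                return True
            except ValueError:
                self.balances = snapshot  # neither leg takes effect
                return False

        def _transfer(self, frm, to, asset, amount):
            if self.balances.get((frm, asset), 0) < amount:
                raise ValueError("insufficient balance")
            self.balances[(frm, asset)] = self.balances.get((frm, asset), 0) - amount
            self.balances[(to, asset)] = self.balances.get((to, asset), 0) + amount

    ledger = Ledger({("alice", "cash"): 100, ("bob", "bond"): 5})
    ok = ledger.atomic_swap("alice", "bob", "bond", 2, 80)    # both legs settle
    fail = ledger.atomic_swap("alice", "bob", "bond", 10, 10) # bond leg fails, cash is untouched
    ```

    The point of the sketch is the rollback: in today's layered infrastructure the two legs live in separate systems and must be reconciled afterwards, whereas a programmable ledger can make their joint success a single, indivisible condition.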

    The effects on markets could therefore be significant. In primary markets, tokenization may simplify issuance and broaden access to certain asset classes, including through fractionalization. In secondary markets, it may reduce reconciliation burdens and shorten settlement chains. In post-trade environments, it may improve collateral mobility and create more integrated asset servicing. More broadly, tokenization may reduce some of the informational and transactional frictions that still produce inefficiencies in capital allocation. The IMF’s analysis is particularly useful here because it treats tokenization not as ideology, but as a mechanism that may affect search costs, transaction costs, counterparty risk, and other frictions across the asset lifecycle.

    Still, efficiency is only one side of the equation. Tokenization also raises questions about market structure, competition, and concentration. If tokenized infrastructures are built by partial coalitions of brokers, exchanges, custodians, or technology providers, they may create new forms of exclusion or lock-in. An IMF working paper published in 2025 makes precisely this point: tokenized markets with faster and cheaper settlement can generate socially suboptimal outcomes if coalition formation is incomplete or interoperability is weak. This means that tokenization is not just a technical choice; it is also a policy question about access, coordination, and the architecture of financial competition.

    From a regulatory perspective, the message emerging from international bodies is relatively clear. Tokenization does not suspend existing financial-law principles. IOSCO expressly stresses that the use of a new technological medium should not, in itself, materially alter the applicability of established regulatory principles. In practice, that means investor protection, market integrity, disclosure, governance, custody, operational resilience, and conflict-management rules remain central even when assets move to tokenized environments. The real challenge is not whether regulation continues to apply, but how it should be adapted to infrastructures that are more programmable, more interconnected, and potentially more continuous than traditional systems.

    For Europe, this makes tokenization a strategic issue rather than a fashionable one. If tokenized markets develop around settlement assets, ledger interoperability, and programmable market functions, then the debate is ultimately about who will shape the rails of future finance. Recent ECB messaging suggests a strong institutional interest in ensuring that tokenized financial markets do not evolve in a fragmented manner detached from central-bank settlement foundations. The BIS has framed the issue in similarly systemic terms, arguing that tokenized platforms anchored by central bank reserves, commercial bank money, and government bonds could help lay the groundwork for a next-generation monetary and financial system.

    The most realistic conclusion is therefore neither utopian nor dismissive. Tokenization will not instantly remake financial markets, and the current level of adoption remains limited. But it is increasingly difficult to regard it as marginal. What is changing is not only the technology, but the institutional seriousness with which it is being examined. Markets may ultimately adopt tokenization not because it is novel, but because, in specific segments, it may prove better at integrating issuance, trading, settlement, servicing, and compliance into a more coherent operational framework. If that happens, tokenization will matter less as a “digital asset” story and more as a story about the redesign of market infrastructure itself.

  • Blockchain as an Enabler of Agentic AI

    Agentic AI is usually presented as the next evolutionary step beyond generative AI: systems capable not only of producing outputs, but of pursuing objectives through planning, tool use, adaptation, and multi-step execution. Yet the more AI becomes “agentic,” the more it encounters an old problem in digital systems: trust. If an agent can act, transact, delegate, negotiate, and coordinate with other agents, the key question is no longer only what it can do, but how its identity, permissions, actions, and incentives can be verified. This is precisely where blockchain may become less a speculative accessory and more an enabling infrastructure. Recent work on agentic AI identity and governance increasingly converges on four needs—authentication, provenance, auditability, and interoperable trust—which are also classic strengths of distributed ledger architectures.

    The first contribution blockchain can make to agentic AI is machine identity. Human-centric login and authorization models are poorly suited to an environment in which software agents act persistently, sometimes across platforms, on behalf of users, firms, or even other agents. The OpenID Foundation’s 2025 report on identity management for agentic AI argues that agents require new authentication and authorization frameworks precisely because they operate across contexts, hold delegated authority, and may act for multiple principals. In parallel, the W3C’s DID framework defines decentralized identifiers as a way to establish verifiable, decentralized digital identity without reliance on a single registry or identity provider. In practice, this means blockchain-based or blockchain-anchored identity systems can help assign persistent, tamper-evident identities to agents, enabling them to prove who they are, what credentials they hold, and what authority has been delegated to them.

    The second area is trustworthy provenance. Agentic systems depend on long chains of perception, retrieval, reasoning, and action. As these chains become more autonomous, the ability to reconstruct who did what, on which data, under which permissions, and with which outputs becomes critical. This is especially true in high-stakes sectors such as finance, healthcare, and public administration. Blockchain does not solve truth at the input layer, but it can create immutable logs of decisions, tool invocations, model states, approvals, and transaction histories. This can materially strengthen accountability and reduce disputes over whether an agent followed instructions, exceeded its mandate, or operated on manipulated inputs. Emerging research on blockchain-monitored agentic architectures and legal infrastructure for the “agentic web” explicitly treats distributed ledgers as a foundation for verifiable transactions, registries, and adjudicable action trails in machine-mediated environments.

    A third contribution lies in multi-agent coordination. Agentic AI becomes truly transformative when agents do not merely assist humans one by one, but interact with other agents in open or semi-open ecosystems. At that point, coordination problems emerge: discovery, role allocation, commitment enforcement, payment, dispute resolution, and reputation. Blockchain is useful here because it offers a shared state layer in which commitments can be recorded, conditions can be enforced through smart contracts, and economic interactions can occur without every participant depending on the same intermediary. Recent academic and policy discussions increasingly describe a future in which blockchain supports interoperable, economically active agent networks by providing verifiable transaction rails and shared coordination rules. In that sense, blockchain can serve as a governance substrate for machine collaboration, not just as a payment rail.

    The fourth and perhaps most practical contribution is agent-native payments and programmable incentives. Agentic systems will struggle to scale commercially if every action requiring payment, licensing, access, or settlement must be routed through human billing flows, bank forms, or closed platform wallets. Several recent discussions in the payments and digital-assets space point to programmable money, micropayments, and machine-to-machine settlement as increasingly relevant use cases for distributed systems. The BIS Innovation Hub’s 2025 workshop report identifies digital identity, programmable money, and micropayment flows among the key use cases for scalable distributed architectures. In this context, blockchain-based payment rails—especially where settlement logic is programmable—can give AI agents a native economic layer: they can pay for APIs, compute, data, storage, content, or execution outcomes in granular increments. That does not imply that every such payment must occur on a public chain, but it does suggest that tokenized and programmable infrastructures are unusually well suited to machine actors operating at internet speed.

    A fifth area is governance by design. One of the central concerns with agentic AI is that it becomes difficult to separate the intentions of the user, the constraints of the developer, the incentives of the platform, and the behavior of the model. Blockchain can help by externalizing some governance rules into transparent and reviewable mechanisms. Smart contracts, registries, credential checks, and cryptographically signed policies can create enforceable boundaries around what an agent may do, with whom, and under what conditions. This is particularly relevant for enterprise and regulated use cases, where agent autonomy must remain bounded by ex ante rules and ex post auditability. The attraction, in other words, is not decentralization for its own sake, but the possibility of embedding governance into operational architecture.

    At the same time, it would be naïve to frame blockchain as a universal cure for agentic AI’s problems. Distributed ledgers introduce their own frictions: scalability limits, privacy trade-offs, governance complexity, legal uncertainty, and integration costs. The BIS has repeatedly stressed that for digital identity, cross-border payments, and programmable money, the main barriers are not purely technical but institutional and legal. Moreover, privacy remains a major issue: immutable logging can strengthen accountability, but it can also create tensions with confidentiality, commercial secrecy, and data protection. For this reason, the most credible architectures are likely to be hybrid ones—combining off-chain computation, privacy-enhancing technologies, selective disclosure, and on-chain verification only where immutability and shared trust are truly needed.

    The broader point is that agentic AI needs an infrastructure of trust if it is to evolve beyond siloed assistants into durable economic actors. Blockchain may supply part of that infrastructure by giving agents verifiable identity, persistent memory of commitments, auditable action trails, programmable value transfer, and rule-based coordination in environments where no single platform can be assumed to be neutral or universally trusted. The likely future, therefore, is not one in which blockchain and AI converge as a matter of hype, but one in which blockchain quietly provides the institutional plumbing for agentic systems that must identify themselves, prove authorization, transact, and be held accountable. In that sense, blockchain does not make agentic AI more intelligent. It makes it more governable.