Visa's Head of Crypto: Eight Evolutionary Directions for Crypto and AI in 2026
Jan 07, 2026 22:30:06
Author: Cuy Sheffield, Vice President and Head of Crypto at Visa
Compiled by: Saoirse, Foresight News
As cryptocurrencies and AI gradually mature, the most significant shifts in these two fields are no longer about what is "theoretically feasible," but rather what can be "reliably implemented in practice." Currently, both technologies have crossed critical thresholds, achieving significant performance improvements, but the actual application adoption rates remain uneven. The core development dynamics of 2026 stem from this gap between "performance and adoption."
Here are several core themes I have been closely following, along with my preliminary thoughts on the development directions of these technologies, areas of value accumulation, and why the ultimate winners may be starkly different from industry pioneers.
Theme 1: Cryptocurrencies are transforming from speculative asset classes to quality technologies
The first decade of cryptocurrency development was characterized by "speculative advantages"—the market was global, always-on, and highly open, and extreme volatility made cryptocurrency trading more vibrant and attractive than traditional financial markets.
At the same time, however, the underlying technology was not ready for mainstream applications: early blockchains were slow, costly, and unstable. Outside of speculative scenarios, cryptocurrencies almost never surpassed existing traditional systems in cost, speed, or convenience.
Now, this imbalance is beginning to reverse. Blockchain technology has become faster, more economical, and more reliable, and the most attractive application scenarios for cryptocurrencies are no longer speculation but rather infrastructure—especially in settlement and payment processes. As cryptocurrencies gradually become more mature technologies, the core position of speculation will weaken: it will not disappear entirely, but it will no longer be the primary source of value.
Theme 2: Stablecoins are a clear result of cryptocurrencies' "pure practicality"
Unlike previous cryptocurrency narratives, the success of stablecoins is based on specific, objective standards: in certain scenarios, stablecoins are faster, cheaper, and more widely available than traditional payment channels, while also seamlessly integrating into modern software systems.
Stablecoins do not require users to view cryptocurrencies as an "ideology" to believe in; their applications often occur "implicitly" within existing products and workflows—this has allowed institutions and enterprises that previously viewed the cryptocurrency ecosystem as "too volatile and not transparent enough" to finally understand its value clearly.
It can be said that stablecoins help cryptocurrencies re-anchor on "practicality" rather than "speculation," establishing a clear benchmark for "how cryptocurrencies can successfully land."
Theme 3: When cryptocurrencies become infrastructure, "distribution capability" is more important than "technological novelty"
In the past, when cryptocurrencies primarily played the role of "speculative tools," their "distribution" was intrinsic—new tokens only needed to "exist" to naturally accumulate liquidity and attention.
However, as cryptocurrencies become infrastructure, their application scenarios are shifting from the "market level" to the "product level": they are embedded in payment processes, platforms, and enterprise systems, often without end users being aware of their existence.
This shift is highly beneficial for two types of entities: first, enterprises with existing distribution channels and reliable customer relationships; second, institutions with regulatory licenses, compliance systems, and risk control infrastructures. Relying solely on "novelty of protocols" is no longer sufficient to drive large-scale adoption of cryptocurrencies.
Theme 4: AI agents have practical value, impacting beyond the coding domain
The practicality of AI agents is becoming increasingly evident, but their role is often misunderstood: the most successful agents are not "autonomous decision-makers," but rather "tools that reduce coordination costs in workflows."
Historically, this has been most apparent in the software development field—agent tools have accelerated the efficiency of coding, debugging, code refactoring, and environment setup. However, in recent years, this "tool value" has significantly spread to more areas.
Take tools like Claude Code as an example; although positioned as "developer tools," their rapid adoption reflects a deeper trend: agent systems are becoming the "interface for knowledge work," rather than being limited to programming. Users are beginning to apply "agent-driven workflows" to research, analysis, writing, planning, data processing, and operational tasks—these tasks lean more towards "general professional work" rather than traditional programming.
The key is not "vibe coding" itself, but the core patterns behind it:
- Users delegate "goal intentions," not "specific steps";
- Agents manage "contextual information" across files, tools, and task management;
- Work modes shift from "linear progression" to "iterative, conversational."
In various knowledge work contexts, agents excel at gathering context, executing defined tasks, reducing handoffs, and accelerating iterative efficiency, but they still have shortcomings in "open-ended judgment," "accountability," and "error correction."
Therefore, most agents used in production scenarios still need to be "limited in scope, subject to oversight, and embedded in systems," rather than operating completely independently. The actual value of agents stems from "restructuring knowledge workflows," rather than "replacing labor" or "achieving complete autonomy."
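The pattern described above—delegating intent rather than steps, agent-managed context, and hard scope limits with oversight—can be sketched as a minimal loop. All names here (`AgentContext`, `ALLOWED_TOOLS`, `agent_step`) are illustrative assumptions, not any specific framework's API.

```python
# Minimal sketch of a scope-limited agent loop. The agent accumulates its
# own context across iterations but may only call whitelisted tools.
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    goal: str                                    # delegated intent, not step-by-step instructions
    history: list = field(default_factory=list)  # context carried across iterations

ALLOWED_TOOLS = {"search", "summarize"}          # hard scope limit, set by the operator

def run_tool(name: str, arg: str) -> str:
    # Stand-in for real tool execution.
    return f"{name}({arg}) -> result"

def agent_step(ctx: AgentContext, tool: str, arg: str) -> str:
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool!r} is outside the agent's scope")
    result = run_tool(tool, arg)
    ctx.history.append((tool, arg, result))      # the agent manages its own context
    return result

ctx = AgentContext(goal="compare Q3 settlement costs")
agent_step(ctx, "search", "Q3 settlement costs")
agent_step(ctx, "summarize", "findings so far")
print(len(ctx.history))  # -> 2
```

The whitelist and the inspectable `history` are the "limited in scope, subject to oversight" part: an operator can audit every step the agent took, and any attempt to act outside scope fails loudly instead of silently.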
Theme 5: The bottleneck of AI has shifted from "intelligence level" to "trustworthiness"
The intelligence level of AI models has rapidly improved, and the limiting factors are no longer "single language fluency or reasoning ability," but rather "reliability in actual systems."
Production environments have zero tolerance for three kinds of failure: AI "hallucinations" (fabricated information), inconsistent outputs, and opaque failure modes. Once AI is involved in customer service, financial transactions, or compliance processes, "roughly correct" results are no longer acceptable.
Building trust requires four foundations: traceability of results, memory, verifiability, and the ability to proactively expose uncertainty. Until these capabilities are sufficiently mature, the autonomy of AI must be limited.
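One way to make these foundations concrete is to require every model answer to carry provenance, a confidence signal, and a verification flag, and to reject anything that lacks them. The schema below is an illustrative assumption, not a standard; `VerifiedAnswer` and `accept` are hypothetical names.

```python
# Sketch: a model answer that exposes traceability, uncertainty, and
# verification status, plus a gate that enforces them.
from dataclasses import dataclass

@dataclass
class VerifiedAnswer:
    text: str
    sources: list       # traceability: where the claim came from
    confidence: float   # proactively exposed uncertainty, 0.0 to 1.0
    verified: bool      # did an independent check pass?

def accept(answer: VerifiedAnswer, threshold: float = 0.9) -> bool:
    # "Roughly correct" is rejected: require sources, a passed check,
    # and confidence above the threshold before the answer is used.
    return bool(answer.sources) and answer.verified and answer.confidence >= threshold

a = VerifiedAnswer("Fee is 0.3%", sources=["fee_schedule.pdf"], confidence=0.95, verified=True)
print(accept(a))  # -> True
```

The point is that trust is a property of the surrounding contract, not of the model: an answer without sources or verification is refused regardless of how fluent it sounds.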
Theme 6: Systems engineering determines whether AI can land in production scenarios
Successful AI products view "models" as "components" rather than "finished products"—their reliability stems from "architectural design," not "prompt optimization."
Here, "architectural design" includes state management, control flow, evaluation and monitoring systems, as well as fault handling and recovery mechanisms. Therefore, the current development of AI is increasingly approaching "traditional software engineering," rather than "cutting-edge theoretical research."
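Treating the model as a component means wrapping it in the same controls traditional software engineering applies to any unreliable dependency: output validation, retry with backoff, and an explicit fallback. A minimal sketch, assuming a hypothetical `call_model` stand-in for whatever model API is in use:

```python
# Sketch: the model is one component behind engineering controls, not
# the finished product. call_model is a stand-in for a real model call.
import time

def call_model(prompt: str) -> str:
    return "42"  # a real implementation would hit a model endpoint

def is_valid(output: str) -> bool:
    # Domain-specific output check; here, the caller expects a number.
    return output.strip().isdigit()

def answer(prompt: str, retries: int = 3) -> str:
    for attempt in range(retries):
        try:
            out = call_model(prompt)
            if is_valid(out):
                return out            # only well-formed output leaves the component
        except Exception:
            time.sleep(2 ** attempt)  # backoff on transient failure
    return "UNAVAILABLE"              # explicit, monitorable failure mode, not a silent error
```

Reliability here comes from the wrapper, not the prompt: invalid or failed calls are retried, and exhaustion produces a defined state the rest of the system can handle.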
Long-term value will tilt towards two types of entities: first, system builders, and second, platform owners who control workflows and distribution channels.
As agent tools expand from coding to research, writing, analysis, and operational processes, the importance of "systems engineering" will become even more pronounced: knowledge work is often complex, reliant on state information, and contextually dense, making agents that can reliably manage memory, tools, and iterative processes (rather than just generating outputs) more valuable.
Theme 7: The contradiction between open models and centralized control raises unresolved governance issues
As AI systems' capabilities enhance and their integration into the economic sphere deepens, the question of "who owns and controls the most powerful AI models" is generating core contradictions.
On one hand, R&D in the cutting-edge AI field remains "capital-intensive," increasingly influenced by "computational power acquisition, regulatory policies, and geopolitical factors," leading to rising concentration; on the other hand, open-source models and tools continue to iterate and optimize under the impetus of "broad experimentation and convenient deployment."
This "coexistence of concentration and openness" has led to a series of unresolved issues: dependency risks, auditability, transparency, long-term bargaining power, and control over critical infrastructure. The most likely outcome is a "hybrid model"—cutting-edge models drive technological breakthroughs, while open or semi-open systems integrate these capabilities into "widely distributed software."
Theme 8: Programmable money gives rise to new types of agent payment flows
As AI systems play roles in workflows, their demand for "economic interactions" is increasing—such as paying for services, calling APIs, compensating other agents, or settling "usage-based interaction fees."
This demand has brought "stablecoins" back into focus: they are seen as "machine-native currency," possessing programmability and auditability, and can complete transfers without human intervention.
Take x402, a "developer-oriented protocol," as an example; although it is still in the early experimental stage, its direction is very clear: payment flows will operate in "API form," rather than traditional "checkout pages"—this allows for "continuous, refined transactions" between software agents.
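The "payment flows as APIs" idea can be sketched as a request/retry cycle in the spirit of x402, which builds on the HTTP 402 "Payment Required" status code: the server quotes a price, the client settles it programmatically, then retries with proof of payment. The header name and the `settle_stablecoin` helper below are illustrative assumptions, not the actual x402 specification.

```python
# Hedged sketch of an API-form payment flow: no checkout page, just a
# machine-readable 402 response and an automated retry with payment proof.
def settle_stablecoin(amount: str) -> str:
    # Stand-in for an on-chain stablecoin transfer; returns a settlement proof.
    return "proof-abc123"

def server(headers: dict) -> tuple:
    if "X-Payment-Proof" not in headers:
        return 402, "Payment Required: 0.001 USDC"   # machine-readable price quote
    return 200, "resource body"

def client_fetch() -> str:
    status, body = server({})
    if status == 402:
        amount = body.split(": ")[1]                 # parse the quoted price
        proof = settle_stablecoin(amount)            # pay without human intervention
        status, body = server({"X-Payment-Proof": proof})
    return body

print(client_fetch())  # -> resource body
```

Because both sides of the exchange are software, the transaction can be arbitrarily small and arbitrarily frequent—exactly the "continuous, refined transactions" between agents that a human-facing checkout page cannot support.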
Currently, this field is still immature: transaction volumes are small, user experiences are rough, and security and permission systems are still being improved. However, innovations in infrastructure often begin with such "early explorations."
It is worth noting that its significance is not "autonomy for autonomy's sake," but rather "when software can complete transactions through programming, new economic behaviors become possible."
Conclusion
Whether in cryptocurrencies or artificial intelligence, the early stages of development favored "eye-catching concepts" and "technological novelty"; in the next phase, "reliability," "governance capability," and "distribution capability" will become more important competitive dimensions.
Today, the technology itself is no longer the main limiting factor; "embedding technology into actual systems" is key.
In my view, the hallmark of 2026 will not be "a breakthrough technology," but rather "the steady accumulation of infrastructure"—these facilities, while operating quietly, are also subtly reshaping "the way value flows" and "the modes of work."