Are you a subscriber to Anthropic’s Claude Pro ($20 monthly) or Max ($100–$200 monthly) plans who uses its Claude AI models and products to power third-party AI agents like OpenClaw? If so, you’re in for an unpleasant surprise.

Anthropic announced a few hours ago that starting tomorrow, Saturday, April 4, 2026, at 12 pm PT/3 pm ET, it will no longer be possible for those Claude subscribers to use their subscriptions to hook Anthropic’s Claude models up to third-party agentic tools, citing the strain such usage was placing on Anthropic’s compute and engineering resources and its desire to serve a wide base of users reliably.

“We’ve been working hard to meet the increase in demand for Claude, and our subscriptions weren’t built for the usage patterns of these third-party tools,” wrote Boris Cherny, Head of Claude Code at Anthropic, in a post on X. “Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API.”

The company also reportedly sent out an email to this effect to some subscribers. However, it’s not certain whether Claude Team and Enterprise subscribers will be similarly impacted. We’ve reached out to Anthropic for further clarification and will update this story when we hear back.

To be clear, it will still be possible to use Claude models like Opus, Sonnet, and Haiku to power OpenClaw and similar external agents, but users will now need to opt into a pay-as-you-go “extra usage” billing system or use Anthropic’s application programming interface (API), which charges for every token of usage rather than allowing open-ended usage up to certain limits, as the Pro and Max plans have so far.
The reason for the change: ‘third party services are not optimized’

The technical reality, according to Anthropic, is that its first-party tools, such as Claude Code, its AI vibe coding harness, and Claude Cowork, its business app interfacing and control tool, are built to maximize “prompt cache hit rates”—reusing previously processed text to save on compute. Third-party harnesses like OpenClaw often bypass these efficiencies.

“Third party services are not optimized in this way, so it’s really hard for us to do sustainably,” Cherny explained further on X. He even revealed his own hands-on attempts to bridge the gap: “I did put up a few PRs to improve prompt cache hit rate for OpenClaw in particular, which should help for folks using it with Claude via API/overages.”

Prior to the news, Anthropic had also begun imposing stricter limits on Claude sessions, measured in rolling five-hour windows, during business hours (5 am–11 am PT/8 am–2 pm ET), meaning the number of tokens subscribers could send during those sessions dropped. This frustrated some power users, who suddenly began reaching their limits far faster than they had previously — a change Anthropic said was meant to help “manage growing demand for Claude” and would only affect up to 7% of users at any given time.

Discounts and credits to soften the blow

Anthropic is not banning third-party tools entirely, but it is moving them to a different ledger.
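Stepping back to the technical cause: Cherny’s “prompt cache hit rate” argument can be made concrete. The sketch below is based on Anthropic’s documented prompt caching feature (marking stable prompt blocks with a `cache_control` field in the Messages API) so that the expensive, unchanging prefix is reused across requests; the model id and prompt text are placeholders, and the payload is shown as a plain dict rather than a live API call.

```python
# Illustrative sketch of why prompt-cache hit rates matter. Anthropic's
# Messages API lets callers mark large, stable prompt blocks as cacheable
# via "cache_control", so repeated requests reuse the already-processed
# prefix. Harnesses that reorder or mutate this prefix on every call defeat
# the cache and pay full price each time.

def build_request(stable_system_prompt: str, user_turn: str) -> dict:
    """Build a request whose expensive, unchanging prefix is cacheable."""
    return {
        "model": "claude-sonnet-example",  # placeholder model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": stable_system_prompt,
                # Stable prefix: eligible for prompt caching across requests.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # Volatile content goes last so it never invalidates the cached prefix.
        "messages": [{"role": "user", "content": user_turn}],
    }

req = build_request("You are an agent harness... (long tool definitions)", "Run task 42")
```

In a real integration this dict would be sent to the Messages API; the point is only that keeping the long system prompt byte-identical across calls is what makes caching pay off.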
The new “Extra Usage” bundles represent a middle ground between a flat-rate subscription and a full enterprise API account.

The Credit: To “soften the blow,” Anthropic is offering existing subscribers a one-time credit equal to their monthly plan price, redeemable until April 17.

The Discount: Users who pre-purchase “extra usage” bundles can receive up to a 30% discount, an attempt to retain power users who might otherwise churn.

Capacity Management: Anthropic’s official statement noted that these tools put an “outsized strain” on systems, forcing a prioritization of “customers using our core products and API.”

‘The all-you-can-eat buffet just closed’

The response from the developer community has been a mixture of analytical acceptance and sharp frustration.

Growth marketer Aakash Gupta observed on X that the “all-you-can-eat buffet just closed,” noting that a single OpenClaw agent running for one day could burn $1,000 to $5,000 in API costs. “Anthropic was eating that difference on every user who routed through a third-party harness,” Gupta wrote. “That’s the pace of a company watching its margin evaporate in real time.”

However, Peter Steinberger, the creator of OpenClaw who was recently hired by OpenAI, took a more skeptical view of the “capacity” argument. “Funny how timings match up,” Steinberger posted on X. “First they copy some popular features into their closed harness, then they lock out open source.” Indeed, Anthropic recently added some of the same capabilities that helped OpenClaw catch on — such as the ability to message agents through external services like Discord and Telegram — to Claude Code.
Steinberger claimed that he and fellow investor Dave Morin attempted to “talk sense” into Anthropic, but were only able to delay the enforcement by a single week.

User @ashen_one, founder of Telaga Charity, voiced a concern likely shared by other small-scale builders: “If I switch both [OpenClaw instances] to an API key or the extra usage you’re recommending here, it’s going to be far too expensive to make it worth using. I’ll probably have to switch over to a different model at this point.”

“I know it sucks,” Cherny replied. “Fundamentally engineering is about tradeoffs, and one of the things we do to serve a lot of customers is optimize the way subscriptions work to serve as many people as possible with the best model.”

Licensing and the OpenAI shadow

The timing of the crackdown is particularly notable given the talent migration. When Steinberger joined OpenAI in February 2026, he brought the “OpenClaw” ethos with him. OpenAI appears to be positioning itself as a more “harness-friendly” alternative, potentially using this moment as a customer acquisition channel for disgruntled Claude power users.

By restricting subscription limits to its own “closed harness,” Anthropic is asserting control over the UI/UX layer. This allows it to collect telemetry and manage rate limits more granularly, but it risks alienating the power-user community that built the “agentic” ecosystem in the first place.

The Bottom Line

Anthropic’s decision is a cold calculation of margins versus growth. As Cherny noted, “Capacity is a resource we manage thoughtfully.” In the 2026 AI landscape, the era of subsidized, unlimited compute for third-party automation is over. For the average user on Claude.ai, the experience remains unchanged; for the power users running autonomous offices, the bell has tolled.
OpenAI just made a decision that shocked the whole tech ecosystem
OpenAI builds artificial intelligence. It does not buy talk shows. Until now.

The company announced April 2 that it has acquired TBPN, the Technology Business Programming Network, a daily live tech and business show that has become something of a cult phenomenon in Silicon Valley, per OpenAI. It is the company’s first acquisition of a media company. Deal terms were not disclosed.

The New York Times once described TBPN as “Silicon Valley’s newest obsession,” and as “SportsCenter for the terminally online M.B.A. grad,” per The Wrap. The show streams three hours a day, weekdays from 11 a.m. to 2 p.m. PT, on YouTube, X, Spotify, Apple Podcasts, LinkedIn, Substack, and Instagram.

What TBPN actually is

TBPN was launched by former tech founders John Coogan and Jordi Hays in October 2024 and began its daily livestream format in March 2025. The show covers tech, business, AI, and defense, with guests ranging from Meta CEO Mark Zuckerberg to Microsoft CEO Satya Nadella to Salesforce CEO Marc Benioff. Sam Altman has appeared multiple times.

The show has 11 employees and averages around 70,000 viewers per episode across platforms, per Variety. Despite its relatively small footprint, TBPN generated approximately $5 million in advertising revenue in 2025 and was profitable with no outside investors, per Axios citing the Wall Street Journal. It was on track to exceed $30 million in revenue in 2026, per CNBC. TBPN attracted sponsorships from fintech companies Ramp and Plaid and from Google’s Gemini, and holds a partnership with the New York Stock Exchange.

Why OpenAI bought a talk show

The acquisition is not about content. It is about communications, per OpenAI’s own announcement. Fidji Simo, OpenAI’s CEO of AGI Deployment, explained the rationale directly.
“As I’ve been thinking about the future of how we communicate at OpenAI, one thing that’s become clear is that the standard communications playbook just doesn’t apply to us,” she wrote. “We’re not a typical company. We’re driving a really big technological shift. And with the mission of bringing AGI to the world comes a responsibility to help create a space for a real, constructive conversation about the changes AI creates,” per OpenAI.

TBPN will sit within OpenAI’s Strategy organization and report to Chris Lehane, the company’s chief global affairs officer. Coogan and Hays will also contribute to OpenAI’s broader communications and marketing efforts outside the show.

Altman was direct about his enthusiasm. “TBPN is my favorite tech show,” he posted on X. “We want them to keep that going and for them to do what they do so well. I don’t expect them to go any easier on us.”
Sam Altman named TBPN as his favorite show. Sullivan/Getty Images
The editorial independence question

The acquisition raises an obvious concern. TBPN regularly covers OpenAI, its competitors, and the broader AI industry. Once owned by the company it covers, how independent can it really be?

OpenAI has explicitly committed to preserving TBPN’s editorial independence. Simo wrote that TBPN will “continue to run their programming, choose their guests, and make their own editorial decisions,” calling it “foundational to their credibility,” per OpenAI.

Coogan echoed the point on the show itself. “TBPN is not going away,” he said. “We’re going to be live every day, three hours, long as we want. We have a lot of flexibility.” He added: “We can say whatever we want because we’re live, and we don’t need to run anything through anyone.”

Co-host Hays framed the deal in broader terms. “While we’ve been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right,” he said. “Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us,” per OpenAI.

Key facts about the TBPN acquisition:

TBPN launched in October 2024; its daily livestream format began in March 2025.
Hosts: John Coogan and Jordi Hays, both former tech founders.
Average viewership: ~70,000 per episode across platforms.
2025 ad revenue: ~$5 million; 2026 revenue projection: $30 million+, per CNBC.
TBPN will report to Chris Lehane within OpenAI’s Strategy org.
The show will wind down its advertising business following the acquisition.

What it means for OpenAI

The deal comes just days after OpenAI announced a $122 billion funding round at an $852 billion post-money valuation. It also follows the company’s shutdown of Sora, its video generation product, just one week prior. TechCrunch noted the significance of who TBPN reports to inside OpenAI.
Chris Lehane, described as a master of political strategy, joined OpenAI in 2024 and has been a key figure in shaping the company’s policy positions, including its push to prevent states from regulating AI.

For a company that is remaking how the world processes information, the acquisition of a show that shapes how Silicon Valley talks about itself is a telling move. OpenAI is not just building the technology. It now owns part of the conversation about it.
Karpathy shares ‘LLM Knowledge Base’ architecture that bypasses RAG with an evolving markdown library maintained by AI
AI vibe coders have yet another reason to thank Andrej Karpathy, the coiner of the term. The former Director of AI at Tesla and co-founder of OpenAI, now running his own independent AI project, recently posted on X describing an “LLM Knowledge Bases” approach he’s using to manage various topics of research interest. By building a persistent, LLM-maintained record of his projects, Karpathy is solving the core frustration of “stateless” AI development: the dreaded context-limit reset.

As anyone who has vibe coded can attest, hitting a usage limit or ending a session often feels like a lobotomy for your project. You’re forced to spend valuable tokens (and time) reconstructing context for the AI, hoping it “remembers” the architectural nuances you just established. Karpathy proposes something simpler and messier, yet more elegant, than the typical enterprise solution of a vector database and RAG pipeline. Instead, he outlines a system where the LLM itself acts as a full-time “research librarian”—actively compiling, linting, and interlinking Markdown (.md) files, the most LLM-friendly and compact data format.

By diverting a significant portion of his “token throughput” into the manipulation of structured knowledge rather than boilerplate code, Karpathy has surfaced a blueprint for the next phase of the “Second Brain”—one that is self-healing, auditable, and entirely human-readable.

Beyond RAG

For the past three years, the dominant paradigm for giving LLMs access to proprietary data has been retrieval-augmented generation (RAG). In a standard RAG setup, documents are chopped into arbitrary “chunks,” converted into mathematical vectors (embeddings), and stored in a specialized database. When a user asks a question, the system performs a “similarity search” to find the most relevant chunks and feeds them into the LLM.

Karpathy’s approach, which he calls LLM Knowledge Bases, rejects the complexity of vector databases for mid-sized datasets.
Instead, it relies on the LLM’s increasing ability to reason over structured text. The system architecture, as visualized by X user @himanshu as part of the wider reaction to Karpathy’s post, functions in three distinct stages:

Data Ingest: Raw materials—research papers, GitHub repositories, datasets, and web articles—are dumped into a raw/ directory. Karpathy uses the Obsidian Web Clipper to convert web content into Markdown (.md) files, ensuring even images are stored locally so the LLM can reference them via vision capabilities.

The Compilation Step: This is the core innovation. Instead of just indexing the files, the LLM “compiles” them. It reads the raw data and writes a structured wiki. This includes generating summaries, identifying key concepts, authoring encyclopedia-style articles, and—crucially—creating backlinks between related ideas.

Active Maintenance (Linting): The system isn’t static. Karpathy describes running “health checks” or “linting” passes where the LLM scans the wiki for inconsistencies, missing data, or new connections. As community member Charly Wargnier observed, “It acts as a living AI knowledge base that actually heals itself.”

By treating Markdown files as the “source of truth,” Karpathy avoids the “black box” problem of vector embeddings. Every claim made by the AI can be traced back to a specific .md file that a human can read, edit, or delete.

Implications for the enterprise

While Karpathy’s setup is currently described as a “hacky collection of scripts,” the implications for the enterprise are immediate. As entrepreneur Vamshi Reddy (@tammireddy) noted in response to the announcement: “Every business has a raw/ directory. Nobody’s ever compiled it. That’s the product.”

Karpathy agreed, suggesting that this methodology represents an “incredible new product” category. Most companies currently “drown” in unstructured data—Slack logs, internal wikis, and PDF reports that no one has the time to synthesize.
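The “linting” stage of this pipeline is easy to approximate mechanically. Below is a minimal, deterministic sketch that checks one wiki invariant — that no Obsidian-style [[backlink]] points at a missing article. The directory layout and link syntax are assumptions for illustration; in Karpathy’s actual setup an LLM performs much richer checks (contradictions, missing connections), not just this structural one.

```python
import re
from pathlib import Path

# Deterministic stand-in for a wiki "health check": scan every Markdown
# article for [[Target]] / [[Target|alias]] links whose target article
# does not exist in the wiki directory.
LINK = re.compile(r"\[\[([^\]|#]+)")  # capture the link target

def lint_wiki(wiki_dir: str) -> dict[str, list[str]]:
    """Return {article_name: [broken link targets]} for the whole wiki."""
    pages = {p.stem: p for p in Path(wiki_dir).glob("*.md")}
    broken: dict[str, list[str]] = {}
    for name, path in pages.items():
        targets = LINK.findall(path.read_text(encoding="utf-8"))
        missing = [t.strip() for t in targets if t.strip() not in pages]
        if missing:
            broken[name] = missing
    return broken
```

A nightly pass like this can feed its findings back to the LLM as a repair task, which is the “self-healing” loop the community reaction describes.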
A “Karpathy-style” enterprise layer wouldn’t just search these documents; it would actively author a “Company Bible” that updates in real time.

As AI educator and newsletter author Ole Lehmann put it on X: “i think whoever packages this for normal people is sitting on something massive. one app that syncs with the tools you already use, your bookmarks, your read-later app, your podcast app, your saved threads.”

Eugen Alpeza, co-founder and CEO of Edra, an AI enterprise agent builder and orchestration startup, noted in an X post: “The jump from personal research wiki to enterprise operations is where it gets brutal. Thousands of employees, millions of records, tribal knowledge that contradicts itself across teams. Indeed, there is room for a new product and we’re building it in the enterprise.”

As the community explores the “Karpathy Pattern,” the focus is already shifting from personal research to multi-agent orchestration. A recent architectural breakdown by @jumperz, founder of AI agent creation platform Secondmate, illustrates this evolution through a “Swarm Knowledge Base” that scales the wiki workflow to a 10-agent system managed via OpenClaw. The core challenge of a multi-agent swarm — where one hallucination can compound and “infect” the collective memory — is addressed here by a dedicated “Quality Gate.” Using the Hermes model (trained by Nous Research for structured evaluation) as an independent supervisor, every draft article is scored and validated before being promoted to the “live” wiki. This system creates a “Compound Loop”: agents dump raw outputs, the compiler organizes them, Hermes validates the truth, and verified briefings are fed back to agents at the start of each session. This ensures that the swarm never “wakes up blank,” but instead begins every task with a filtered, high-integrity briefing of everything the collective has learned.

Scaling and performance

A common critique of non-vector approaches is scalability.
However, Karpathy notes that at a scale of ~100 articles and ~400,000 words, the LLM’s ability to navigate via summaries and index files is more than sufficient. For a departmental wiki or a personal research project, the “fancy RAG” infrastructure often introduces more latency and “retrieval noise” than it solves.

Tech podcaster Lex Fridman (@lexfridman) confirmed he uses a similar setup, adding a layer of dynamic visualization: “I often have it generate dynamic html (with js) that allows me to sort/filter data and to tinker with visualizations interactively. Another useful thing is I have the system generate a temporary focused mini-knowledge-base… that I then load into an LLM for voice-mode interaction on a long 7-10 mile run.”

This “ephemeral wiki” concept suggests a future where users don’t just “chat” with an AI; they spawn a team of agents to build a custom research environment for a specific task, which then dissolves once the report is written.

Licensing and the ‘file-over-app’ philosophy

Technically, Karpathy’s methodology is built on an open standard (Markdown) but viewed through a proprietary-but-extensible lens (the note-taking and file organization app Obsidian).

Markdown (.md): By choosing Markdown, Karpathy ensures his knowledge base is not locked into a specific vendor. It is future-proof; if Obsidian disappears, the files remain readable by any text editor.

Obsidian: While Obsidian is a proprietary application, its “local-first” philosophy and EULA (which allows free personal use and requires a license for commercial use) align with the developer’s desire for data sovereignty.

The “Vibe-Coded” Tools: The search engines and CLI tools Karpathy mentions are custom scripts — likely Python-based — that bridge the gap between the LLM and the local file system.

This “file-over-app” philosophy is a direct challenge to SaaS-heavy models like Notion or Google Docs.
In the Karpathy model, the user owns the data, and the AI is merely a highly sophisticated editor that “visits” the files to perform work.

Librarian vs. search engine

The AI community has reacted with a mix of technical validation and “vibe-coding” enthusiasm. The debate centers on whether the industry has over-indexed on vector DBs for problems that are fundamentally about structure, not just similarity.

Jason Paul Michaels (@SpaceWelder314), a welder using Claude, echoed the sentiment that simpler tools are often more robust: “No vector database. No embeddings… Just markdown, FTS5, and grep… Every bug fix… gets indexed. The knowledge compounds.”

However, the most significant praise came from Steph Ango (@Kepano), co-creator of Obsidian, who highlighted a concept called “Contamination Mitigation.” He suggested that users should keep their personal “vault” clean and let the agents play in a “messy vault,” only bringing over the useful artifacts once the agent-facing workflow has distilled them.

Which solution is right for your enterprise vibe coding projects?

Feature | Vector DB / RAG | Karpathy’s Markdown Wiki
Data Format | Opaque vectors (math) | Human-readable Markdown
Logic | Semantic similarity (nearest neighbor) | Explicit connections (backlinks/indices)
Auditability | Low (black box) | High (direct traceability)
Compounding | Static (requires re-indexing) | Active (self-healing through linting)
Ideal Scale | Millions of documents | 100–10,000 high-signal documents

The “vector DB” approach is like a massive, unorganized warehouse with a very fast forklift driver. You can find anything, but you don’t know why it’s there or how it relates to the pallet next to it. Karpathy’s “Markdown wiki” is like a curated library with a head librarian who is constantly writing new books to explain the old ones.

The next phase

Karpathy’s final exploration points toward the ultimate destination of this data: synthetic data generation and fine-tuning.
As the wiki grows and the data becomes more “pure” through continuous LLM linting, it becomes the perfect training set. Instead of the LLM just reading the wiki in its “context window,” the user can eventually fine-tune a smaller, more efficient model on the wiki itself. This would allow the LLM to “know” the researcher’s personal knowledge base in its own weights, essentially turning a personal research project into a custom, private intelligence.

Bottom line: Karpathy hasn’t just shared a script; he’s shared a philosophy. By treating the LLM as an active agent that maintains its own memory, he has bypassed the limitations of “one-shot” AI interactions. For the individual researcher, it means the end of the “forgotten bookmark.” For the enterprise, it means the transition from a “raw/ data lake” to a “compiled knowledge asset.” As Karpathy himself summarized: “You rarely ever write or edit the wiki manually; it’s the domain of the LLM.” We are entering the era of the autonomous archive.
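The fine-tuning step Karpathy gestures at could start with something as simple as flattening the wiki into training pairs. The sketch below turns each article into a chat-style JSONL record, using the first heading as the prompt and the body as the completion; the field names, prompt template, and wiki layout are illustrative assumptions, not his actual pipeline.

```python
import json
from pathlib import Path

# Flatten a Markdown wiki into a chat-style JSONL fine-tuning dataset:
# one {"messages": [...]} record per article, with the first heading as
# the user prompt and the article body as the assistant completion.

def wiki_to_jsonl(wiki_dir: str, out_path: str) -> int:
    """Write one record per article; return the number of records written."""
    records = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for page in sorted(Path(wiki_dir).glob("*.md")):
            lines = page.read_text(encoding="utf-8").splitlines()
            if not lines:
                continue  # skip empty articles
            title = lines[0].lstrip("# ").strip() or page.stem
            body = "\n".join(lines[1:]).strip()
            record = {"messages": [
                {"role": "user", "content": f"Explain: {title}"},
                {"role": "assistant", "content": body},
            ]}
            out.write(json.dumps(record) + "\n")
            records += 1
    return records
```

Richer pipelines would have the LLM generate question/answer pairs per article rather than this one-record-per-page mapping, but the file format is the same.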
Nvidia launches enterprise AI agent platform with Adobe, Salesforce, SAP among 17 adopters at GTC 2026
Jensen Huang walked onto the GTC stage Monday wearing his trademark leather jacket and carrying, as it turned out, the blueprints for a new kind of industry dominance.

The Nvidia CEO unveiled the Agent Toolkit, an open-source platform for building autonomous AI agents, and then rattled off the names of the companies that will use it: Adobe, Salesforce, SAP, ServiceNow, Siemens, CrowdStrike, Atlassian, Cadence, Synopsys, IQVIA, Palantir, Box, Cohesity, Dassault Systèmes, Red Hat, Cisco and Amdocs. Seventeen enterprise software companies, touching virtually every industry and every Fortune 500 corporation, all agreeing to build their next generation of AI products on a shared foundation that Nvidia designed, Nvidia optimizes and Nvidia maintains.

The toolkit provides the models, the runtime, the security framework and the optimization libraries that AI agents need to operate autonomously inside organizations — resolving customer service tickets, designing semiconductors, managing clinical trials, orchestrating marketing campaigns. Each component is open source. Each is optimized for Nvidia hardware. The combination means that as AI agents proliferate across the corporate world, they will generate demand for Nvidia GPUs not because companies choose to buy them but because the software they depend on was engineered to require them.

“The enterprise software industry will evolve into specialized agentic platforms,” Huang told the crowd, “and the IT industry is on the brink of its next great expansion.” What he left unsaid is that Nvidia has just positioned itself as the tollbooth at the entrance to that expansion — open to all, owned by one.

Inside Nvidia’s Agent Toolkit: the software stack designed to power every corporate AI worker

To grasp the significance of Monday’s announcements, it helps to understand the problem Nvidia is solving. Building an enterprise AI agent today is an exercise in frustration.
A company that wants to deploy an autonomous system — one that can, say, monitor a telecommunications network and proactively resolve customer issues before anyone calls to complain — must assemble a language model, a retrieval system, a security layer, an orchestration framework and a runtime environment, typically from different vendors whose products were never designed to work together.

Nvidia’s Agent Toolkit collapses that complexity into a unified platform. It includes Nemotron, a family of open models optimized for agentic reasoning; AI-Q, an open blueprint that lets agents perceive, reason and act on enterprise knowledge; OpenShell, an open-source runtime enforcing policy-based security, network and privacy guardrails; and cuOpt, an optimization skill library. Developers can use the toolkit to create specialized AI agents that act autonomously while using and building other software to complete tasks.

The AI-Q component addresses a pain point that has dogged enterprise AI adoption: cost. Its hybrid architecture routes complex orchestration tasks to frontier models while delegating research tasks to Nemotron’s open models, which Nvidia says can cut query costs by more than 50 percent while maintaining top-tier accuracy. Nvidia used the AI-Q Blueprint to build what it claims is the top-ranking AI agent on both the DeepResearch Bench and DeepResearch Bench II leaderboards — benchmarks that, if they hold under independent validation, position the toolkit as not merely convenient but competitively necessary.

OpenShell tackles what has been the single biggest obstacle in every boardroom conversation about letting AI agents loose inside corporate systems: trust. The runtime creates isolated sandboxes that enforce strict policies around data access, network reach and privacy boundaries.
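The sandbox-and-guardrail idea can be pictured with a toy policy check: every action an agent proposes is validated against an explicit policy before it runs. The policy schema and action shape below are invented for illustration, since OpenShell’s actual configuration format is not described here.

```python
# Toy illustration of policy-based agent guardrails: an action is allowed
# only if it stays inside every boundary (network reach, data access,
# privacy). All names and the schema are illustrative, not OpenShell's.

POLICY = {
    "allowed_hosts": {"internal.example.com"},   # network reach
    "readable_scopes": {"tickets", "kb"},        # data access
    "blocked_fields": {"ssn", "card_number"},    # privacy boundary
}

def is_allowed(action: dict, policy: dict = POLICY) -> bool:
    """Return True only if the proposed action stays inside every guardrail."""
    if action.get("host") and action["host"] not in policy["allowed_hosts"]:
        return False
    if action.get("scope") and action["scope"] not in policy["readable_scopes"]:
        return False
    if set(action.get("fields", [])) & policy["blocked_fields"]:
        return False
    return True
```

A runtime enforcing rules like these at the sandbox boundary, rather than trusting the model to police itself, is the design pattern that makes boardrooms comfortable letting agents act autonomously.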
Nvidia is collaborating with Cisco, CrowdStrike, Google, Microsoft Security and TrendAI to integrate OpenShell with their existing security tools — a calculated move that enlists the cybersecurity industry as a validation layer for Nvidia’s approach rather than a competitor to it.

The partner list that reads like the Fortune 500: who signed on and what they’re building

The breadth of Monday’s enterprise adoption announcements reveals Nvidia’s ambitions more clearly than any specification sheet could.

Adobe, in a simultaneously announced strategic partnership, will adopt Agent Toolkit software as the foundation for running hybrid, long-running creativity, productivity and marketing agents. Shantanu Narayen, Adobe’s chair and CEO, said the companies will bring together “our Firefly models, CUDA libraries into our applications, 3D digital twins for marketing, and Agent Toolkit and Nemotron to our agentic frameworks to deliver high-quality, controllable and enterprise-grade AI workflows of the future.” The partnership extends deep: Adobe will explore OpenShell and Nemotron as foundations for personalized, secure agentic loops, and will evaluate the toolkit for large-scale workflows powered by Adobe Experience Platform. Nvidia will provide engineering expertise, early access to software and targeted go-to-market support.

Salesforce’s integration may be the one enterprise IT leaders parse most carefully. The company is working with Nvidia Agent Toolkit software, including Nemotron models, to enable customers to build, customize and deploy AI agents using Agentforce for service, sales and marketing. The collaboration introduces a reference architecture in which employees can use Slack as the primary conversational interface and orchestration layer for Agentforce agents — powered by Nvidia infrastructure — that participate directly in business workflows and pull from data stores in both on-premises and cloud environments.
For the millions of knowledge workers who already conduct their professional lives inside Slack, this turns a messaging app into the command center for corporate AI.

SAP, whose software underpins the financial and operational plumbing of most Global 2000 companies, is using open Agent Toolkit software, including NeMo, to enable AI agents through Joule Studio on SAP Business Technology Platform, letting customers and partners design agents tailored to their own business needs. ServiceNow’s Autonomous Workforce of AI Specialists leverages Agent Toolkit software, the AI-Q Blueprint and a combination of closed and open models, including Nemotron and ServiceNow’s own Apriel models — a hybrid approach that suggests the toolkit is designed not to replace existing AI investments but to become the connective tissue between them.

From chip design to clinical trials: how agentic AI is reshaping specialized industries

The partner list extends well beyond horizontal software platforms into deeply specialized verticals where autonomous agents could compress timelines measured in years.

In semiconductor design — where a single advanced chip can cost billions of dollars and take half a decade to develop — three of the four major electronic design automation companies are building agents on Nvidia’s stack. Cadence will leverage Agent Toolkit and Nemotron with its ChipStack AI SuperAgent for semiconductor design and verification. Siemens is launching its Fuse EDA AI Agent, which uses Nemotron to autonomously orchestrate workflows across its entire electronic design automation portfolio, from design conception through manufacturing sign-off. Synopsys is building a multi-agent framework powered by its AgentEngineer technology using Nemotron and the Nemo Agent Toolkit.

Healthcare and life sciences present perhaps the most consequential use case.
IQVIA is integrating Nemotron and other Agent Toolkit software with IQVIA.ai, a unified agentic AI platform designed to help life sciences organizations work more efficiently across clinical, commercial and real-world operations. The scale is already significant: IQVIA has deployed more than 150 agents across internal teams and client environments, including 19 of the top 20 pharmaceutical companies.

The security sector is embedding itself into the architecture from the ground floor. CrowdStrike unveiled a Secure-by-Design AI Blueprint that embeds its Falcon platform protection directly into Nvidia AI agent architectures — including agents built on AI-Q and OpenShell — and is advancing agentic managed detection and response using Nemotron reasoning models. Cisco AI Defense will provide AI security protection for OpenShell, adding controls and guardrails to govern agent actions. These are not aftermarket bolt-ons; they are foundational integrations that signal the security industry views Nvidia’s agent platform as the substrate it needs to protect.

Dassault Systèmes is exploring Agent Toolkit software and Nemotron for its role-based AI agents, called Virtual Companions, on its 3DEXPERIENCE agentic platform. Atlassian is working with the toolkit as it evolves its Rovo AI agentic strategy for Jira and Confluence. Box is using it to enable enterprise agents to securely execute long-running business processes. Palantir is developing AI agents on Nemotron that run on its sovereign AI Operating System Reference Architecture.

The open-source gambit: why giving software away is Nvidia’s most aggressive business move

There is something almost paradoxical about a company with a multi-trillion-dollar market capitalization giving away its most strategically important software. But Nvidia’s open-source approach to Agent Toolkit is less an act of generosity than a carefully constructed competitive moat. OpenShell is open source. Nemotron models are open.
AI-Q blueprints are publicly available. LangChain, the agent engineering company whose open-source frameworks have been downloaded over 1 billion times, is working with Nvidia to integrate Agent Toolkit components into the LangChain deep agent library for developing advanced, accurate enterprise AI agents at scale. When the most popular independent framework for building AI agents absorbs your toolkit, you have transcended the category of vendor and entered the category of infrastructure.

But openness in AI has a way of being strategically selective. The models are open, but they are optimized for Nvidia’s CUDA libraries — the proprietary software layer that has locked developers into Nvidia GPUs for two decades. The runtime is open, but it integrates most deeply with Nvidia’s security partners. The blueprints are open, but they perform best on Nvidia hardware. Developers can explore Agent Toolkit and OpenShell on build.nvidia.com today, running on inference providers and Nvidia Cloud Partners including Baseten, CoreWeave, DeepInfra, DigitalOcean and others — all of which run Nvidia GPUs.

The strategy has a historical analog in Google’s approach to Android: give away the operating system to ensure that the entire mobile ecosystem generates demand for your core services. Nvidia is giving away the agent operating system to ensure that the entire enterprise AI ecosystem generates demand for its core product — the GPU. Every Salesforce agent running Nemotron, every SAP workflow orchestrated through OpenShell, every Adobe creative pipeline accelerated by CUDA creates another strand of dependency on Nvidia silicon.

This also explains the Nemotron Coalition announced Monday — a global collaboration of model builders including Mistral AI, Cursor, LangChain, Perplexity, Reflection AI, Sarvam and Thinking Machines Lab, all working to advance open frontier models.
The coalition’s first project will be a base model codeveloped by Mistral AI and Nvidia, trained on Nvidia DGX Cloud, that will underpin the upcoming Nemotron 4 family. By seeding the open model ecosystem with Nvidia-optimized foundations, the company ensures that even models it does not build will run best on its hardware.

What could go wrong: the risks enterprise buyers should weigh before going all-in

For all the ambition on display Monday, several realities temper the narrative. Adoption announcements are not deployment announcements. Many of the partner disclosures use carefully hedged language — “exploring,” “evaluating,” “working with” — that is standard in embargoed press releases but should not be confused with production systems serving millions of users. Adobe’s own forward-looking statements note that “due to the non-binding nature of the agreement, there are no assurances that Adobe will successfully negotiate and execute definitive documentation with Nvidia on favorable terms or at all.” The gap between a GTC keynote demonstration and an enterprise-grade rollout remains substantial.

Nvidia is not the only company chasing this market. Microsoft, with its Copilot ecosystem and Azure AI infrastructure, pursues a parallel strategy with the advantage of owning the operating systems and productivity software that most enterprises already use. Google, through Gemini and its cloud platform, has its own agent vision. Amazon, via Bedrock and AWS, is building comparable primitives. The question is not whether enterprise AI agents will be built on some platform but whether the market will consolidate around one stack or fragment across several.

The security claims, while architecturally sound, remain unproven at scale. OpenShell’s policy-based guardrails are a promising design pattern, but autonomous agents operating in complex enterprise environments will inevitably encounter edge cases that no policy framework has anticipated.
CrowdStrike’s Secure-by-Design AI Blueprint and Cisco AI Defense’s OpenShell integration are exactly the kind of layered defense enterprise buyers will demand — but both are newly unveiled, not battle-hardened through years of adversarial testing. Deploying agents that can autonomously access data, execute code and interact with production systems introduces a threat surface that the industry has barely begun to map.

And there is the question of whether enterprises are ready for agents at all. The technology may be available, but organizational readiness — the governance structures, the change management, the regulatory frameworks, the human trust — often lags years behind what the platforms can deliver.

Beyond agents: the full scope of what Nvidia announced at GTC 2026

Monday’s Agent Toolkit announcement did not arrive in isolation. It landed amid an avalanche of product launches that, taken together, describe a company remaking itself at every layer of the computing stack.

Nvidia unveiled the Vera Rubin platform — seven new chips in full production, including the Vera CPU purpose-built for agentic AI, the Rubin GPU, and the newly integrated Groq 3 LPU inference accelerator — designed to power every phase of AI from pretraining to real-time agentic inference. The Vera Rubin NVL72 rack integrates 72 Rubin GPUs and 36 Vera CPUs, delivering what Nvidia claims is up to 10x higher inference throughput per watt at one-tenth the cost per token compared with the Blackwell platform. Dynamo 1.0, an open-source inference operating system that Nvidia describes as the “operating system for AI factories,” entered production with adoption from AWS, Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, alongside companies like Cursor, Perplexity, PayPal and Pinterest.

The BlueField-4 STX storage architecture promises up to 5x token throughput for the long-context reasoning that agents demand, with early adopters including CoreWeave, Crusoe, Lambda, Mistral AI and Nebius.
BYD, Geely, Isuzu and Nissan announced Level 4 autonomous vehicle programs on Nvidia’s DRIVE Hyperion platform, and Uber disclosed plans to launch Nvidia-powered robotaxis across 28 cities and four continents by 2028, beginning with Los Angeles and San Francisco in the first half of 2027.

Roche, the pharmaceutical giant, announced it is deploying more than 3,500 Nvidia Blackwell GPUs across hybrid cloud and on-premises environments in the U.S. and Europe — what it calls the largest announced GPU footprint of any pharmaceutical company. Nvidia also launched physical AI tools for healthcare robotics, with CMR Surgical, Johnson & Johnson MedTech and others adopting the platform, and released Open-H, the world’s largest healthcare robotics dataset, with over 700 hours of surgical video. And Nvidia even announced a Space Module based on the Vera Rubin architecture, promising to bring data-center-class AI to orbital environments.

The real meaning of GTC 2026: Nvidia is no longer selling picks and shovels

Strip away the product specifications and benchmark claims, and what emerges from GTC 2026 is a single, clarifying thesis: Nvidia believes the era of AI agents will be larger than the era of AI models, and it intends to own the platform layer of that transition the way it already owns the hardware layer of the current one.

The 17 enterprise software companies that signed on Monday are making a bet of their own. They are wagering that building on Nvidia’s agent infrastructure will let them move faster than building alone — and that the benefits of a shared platform outweigh the risks of shared dependency. For Salesforce, it means Agentforce agents that can draw from both cloud and on-premises data through a single Slack interface. For Adobe, it means creative AI pipelines that span image, video, 3D and document intelligence. For SAP, it means agents woven into the transactional fabric of global commerce. Each partnership is rational on its own terms.
Together, they form something larger: an industry-wide endorsement of Nvidia as the default substrate for enterprise intelligence.

Huang, who began his career designing graphics chips for video games, closed his keynote by gesturing toward a future in which AI agents do not just assist human workers but operate as autonomous colleagues — reasoning through problems, building their own tools, learning from their mistakes. He compared the moment to the birth of the personal computer, the dawn of the internet, the rise of mobile computing.

Technology executives have a professional obligation to describe every product cycle as a revolution. But here is what made Monday different: this time, 17 of the world’s most important software companies showed up to agree with him. Whether they did so out of conviction or out of a calculated fear of being left behind may be the most important question in enterprise technology — and it is one that only the next few years can answer.
BNP Paribas warns stakes ‘couldn’t be higher’ for Tesla stock investors
Tesla has already had a rough run in 2026, but on Thursday, April 2, the stock had its worst session of the year after the company reported first-quarter deliveries that fell short of industry expectations. Analysts at BNP Paribas are sounding the alarm, saying 2026 will be a make-or-break year for the electric vehicle maker.

Tesla reported first-quarter production of 408,386 vehicles and deliveries of 358,023, well short of analyst expectations of 370,000 and the company’s own consensus estimate of 365,000. It wasn’t all bad news: deliveries actually improved 6% year over year, but the increase is somewhat skewed, since 2025’s Q1 total was 13% lower than 2024’s, leaving the company with favorable comps. Tesla’s stock fell 5.4% Thursday, bringing its 2026 decline to more than 20% so far.

Tesla’s Model 3 and Model Y accounted for 341,893 of the deliveries, while the “other models,” meaning the Cybertruck plus the Model S and Model X (which will end production later this year), accounted for the remaining 16,000-plus. But while Musk criticizes the state of California on Twitter and makes juvenile jokes about rockets, analysts at BNP Paribas have serious concerns about the company, saying its shift away from the Model S and Model X toward Optimus robots and Cybercabs had better work, because Tesla’s future may be at stake.
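The comps math is worth spelling out. A quick back-of-the-envelope sketch, using only the percentages stated above; the 2025 and 2024 quarterly figures here are implied by those percentages, not official Tesla numbers:

```python
# Comps check derived from the article's figures: 2026 Q1 deliveries of
# 358,023, a 6% year-over-year gain, and a 2025 Q1 base that was itself
# 13% below 2024's. All base-year values are implied, not official.
q1_2026 = 358_023
q1_2025 = q1_2026 / 1.06          # implied 2025 Q1: ~337,758
q1_2024 = q1_2025 / (1 - 0.13)    # implied 2024 Q1: ~388,228

vs_2024 = q1_2026 / q1_2024 - 1   # equals 1.06 * 0.87 - 1 = -7.8%
print(f"implied 2025 Q1: {q1_2025:,.0f}")
print(f"implied 2024 Q1: {q1_2024:,.0f}")
print(f"2026 vs 2024: {vs_2024:+.1%}")
```

In other words, even with 6% growth off the depressed 2025 base, the Q1 2026 figure sits roughly 8% below the implied 2024 level — which is the sense in which the comparison is “favorable.”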
2026 has been a rollercoaster for Tesla so far. Photo by Newsday LLC on Getty Images
Stakes for Tesla ‘could not be higher,’ say BNP Paribas analysts

Earlier this year, Tesla announced it was pulling the plug on the Model S and Model X and would replace that production capacity with Optimus humanoid robots as part of the company’s plan to build 1 million of them per year. That plan may worry investors, since there is currently no discernible market for humanoid robots, and selling even 10,000 of them in a year would be impressive. But the vehicle models the company is retiring weren’t selling either, so it may be a wash in the end.

However, analysts at BNP Paribas aren’t taking this Tesla experiment lightly, because the company is also spending a lot of money to make it happen. “Given Tesla’s sizable cash burn this year ($7 billion estimate by BNPP) and indications for massive multi-year investments on the horizon tied to a TeraFab and 100 GW solar capacity, the ‘stakes’ of TSLA’s demonstrated robotaxi and Optimus progress could not be higher,” the analysts said in a note Thursday.

According to BNP, the other models that combined to deliver 16,000 vehicles in the quarter benefited from artificially inflated demand, so once again, moving off of them makes sense. However, Musk has made some pretty big promises about what Optimus and Robotaxi can do, and the firm says it’s time for Tesla to put up or shut up in 2026. “We view 1Q26’s deliveries – modestly below consensus – as yet another input to the TSLA stock’s challenged setup for this year, with EGS storage deployments also meaningfully light,” BNP analysts said. “A critical factor to this year is the Co.’s progress rate in its active Robotaxi fleet, which is climbing yet still limited to just two cities.
The core catalysts for TSLA center on its ability to show meaningful progress toward its AI-defined future, inclusive of Robotaxi fleet expansion (targeting 7 new cities in 1H26) and commercialized production of Optimus by year-end.”

If that analysis seems a bit dim, the firm is one of the few on Wall Street with a negative view of the stock. BNP reiterated its underperform rating and $280 price target on Tesla shares, representing a potential 22% downside from the stock’s current level.

Tesla investors get good news out of the latest delivery data

It wasn’t all doom and gloom for Tesla in the first quarter. The electric vehicle maker topped Chinese rival BYD for the global EV sales crown, and the company even increased deliveries in BYD’s home country: sales of Model 3 and Model Y cars made in Tesla’s Shanghai factory, a figure that includes exports to Europe and other markets, rose by nearly 9% year over year to 85,670, Reuters reported. It was the fifth straight month of rising sales and the second straight quarter of year-over-year gains.

After losing close to half of its European market share last year amid numerous issues, including CEO Elon Musk’s increased involvement in politics, the company is showing signs of recovery in the region so far in 2026. Year over year, Tesla registrations in February rose 55% in France, 74% in Spain and 32% in Norway, and the company more than doubled its sales in Portugal, with registrations jumping 112%.

France, Germany, Belgium and the Netherlands account for 60% of European EV sales on their own, and the results there are mixed: France and Germany saw registration gains of 52% and 24%, respectively, while Belgium and the Netherlands experienced declines of 11.5% and 35.4%.

Tesla’s sales in Europe declined by nearly 40% from January to April 2025, compared to the same period the previous year. In June, sales dropped another 39%.
Tesla’s first-half sales were down 44% in Europe, per the European Automobile Manufacturers Association (ACEA). That trend continued into the second half of 2025 across the continent, including the United Kingdom, where registrations dropped by more than 29% in December.

The year 2025 was the second consecutive year of declining Tesla sales in Europe: sales fell 27%, despite the company introducing newer, cheaper versions of its top-selling Model Y and Model 3 vehicles. Tesla’s market share in the EU, Britain and the European Free Trade Association fell to 0.8% in January, well below its 1.8% market share in 2025, 2.5% in 2024 and 2.9% the year before that.