Anthropic appears to have accidentally revealed the inner workings of one of its most popular and lucrative AI products, the agentic AI harness Claude Code, to the public. A 59.8 MB JavaScript source map file (.map), intended for internal debugging, was inadvertently included in version 2.1.88 of the @anthropic-ai/claude-code package, pushed live to the public npm registry earlier this morning. By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, had broadcast the discovery on X (formerly Twitter). The post, which included a direct download link to a hosted archive, acted as a digital flare. Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and analyzed by thousands of developers. For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run-rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property.

The timing is particularly critical given the commercial velocity of the product. Market data indicates that Claude Code alone has achieved an annualized recurring revenue (ARR) of $2.5 billion, a figure that has more than doubled since the beginning of the year. With enterprise adoption accounting for 80% of its revenue, the leak provides competitors (from established giants to nimble rivals like Cursor) a literal blueprint for how to build a high-agency, reliable, and commercially viable AI agent.

We've reached out to Anthropic for an official statement on the leak and will update when we hear back.

The anatomy of agentic memory

The most significant takeaway for competitors lies in how Anthropic solved "context entropy," the tendency for AI agents to become confused or hallucinatory as long-running sessions grow in complexity.
The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional "store-everything" retrieval. As analyzed by developers like @himanshustwts, the architecture uses a "Self-Healing Memory" system. At its core is MEMORY.md, a lightweight index of pointers (~150 characters per line) that is perpetually loaded into the context. This index does not store data; it stores locations. Actual project knowledge is distributed across "topic files" fetched on demand, while raw transcripts are never read back into the context in full; they are merely "grep'd" for specific identifiers. This "Strict Write Discipline," under which the agent may update its index only after a successful file write, prevents the model from polluting its context with failed attempts.

For competitors, the blueprint is clear: build a skeptical memory. The code confirms that Anthropic's agents are instructed to treat their own memory as a "hint," requiring the model to verify facts against the actual codebase before proceeding.

KAIROS and the autonomous daemon

The leak also pulls back the curtain on "KAIROS," a feature flag mentioned over 150 times in the source and named for the Ancient Greek concept of "the right moment." KAIROS represents a fundamental shift in user experience: an autonomous daemon mode. While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. It handles background sessions and employs a process called autoDream.

In this mode, the agent performs "memory consolidation" while the user is idle. The autoDream logic merges disparate observations, removes logical contradictions, and converts vague insights into absolute facts. This background maintenance ensures that when the user returns, the agent's context is clean and highly relevant.
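The tiered-memory pattern the analysis describes (a small, always-loaded pointer index; topic files fetched on demand; transcripts that are only grepped, never reloaded whole; and idle-time consolidation) can be sketched in a few lines. This is purely an illustration of the pattern as reported; the class, method names, and data layout below are invented and are not Anthropic's actual code.

```python
import re

class TieredMemory:
    """Illustrative tiered memory: a tiny always-loaded index of pointers,
    topic files fetched on demand, and raw transcripts that are searched
    but never read back into context in full."""

    def __init__(self):
        self.index = {}        # topic -> one-line pointer (always in context)
        self.topic_files = {}  # topic -> full notes (fetched on demand)
        self.transcripts = []  # raw logs (grep-only)

    def remember(self, topic, note):
        # "Strict write discipline": the index is updated only after the
        # topic-file write has actually succeeded.
        self.topic_files[topic] = note
        self.index[topic] = f"{topic}: see topic file ({len(note)} chars)"

    def recall(self, topic):
        # Memory is a hint; the caller should verify against the codebase.
        return self.topic_files.get(topic)

    def grep_transcripts(self, pattern):
        # Search transcripts for identifiers without loading them whole.
        rx = re.compile(pattern)
        return [line for t in self.transcripts
                for line in t.splitlines() if rx.search(line)]

    def consolidate(self):
        # autoDream-style idle maintenance: drop index entries whose
        # topic file no longer exists, keeping the index self-healing.
        self.index = {t: p for t, p in self.index.items()
                      if t in self.topic_files}

mem = TieredMemory()
mem.remember("payments", "EU payments switched to Adyen on Jan 15")
mem.transcripts.append("user: fix bug\nassistant: edited checkout.ts line 42")
```

The point of the sketch is the asymmetry: the always-resident index stays tiny no matter how much knowledge accumulates, while the expensive material is pulled in only when a pointer says it is relevant.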
The implementation of a forked subagent to run these tasks reveals a mature engineering approach to preventing the main agent's "train of thought" from being corrupted by its own maintenance routines.

Unreleased internal models and performance metrics

The source code provides a rare look at Anthropic's internal model roadmap and the struggles of frontier development. The leak confirms that Capybara is the internal codename for a Claude 4.6 variant, with Fennec mapping to Opus 4.6 and the unreleased Numbat still in testing.

Internal comments reveal that Anthropic is already iterating on Capybara v8, yet the model still faces significant hurdles. The code notes a 29-30% false-claims rate in v8, an actual regression compared to the 16.7% rate seen in v4. Developers also noted an "assertiveness counterweight" designed to prevent the model from becoming too aggressive in its refactors. For competitors, these metrics are invaluable; they provide a benchmark of the "ceiling" for current agentic performance and highlight the specific weaknesses (over-commenting, false claims) that Anthropic is still struggling to solve.

"Undercover" Claude

Perhaps the most discussed technical detail is the "Undercover Mode." This feature reveals that Anthropic uses Claude Code for "stealth" contributions to public open-source repositories. The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER… Your commit messages… MUST NOT contain ANY Anthropic-internal information. Do not blow your cover." While Anthropic may use this for internal "dog-fooding," it provides a technical framework for any organization wishing to use AI agents for public-facing work without disclosure.
The logic ensures that no model names (like "Tengu" or "Capybara") or AI attributions leak into public git logs, a capability that enterprise competitors will likely view as a mandatory feature for their own corporate clients who value anonymity in AI-assisted development.

The fallout has just begun

The blueprint is now out, and it reveals that Claude Code is not just a wrapper around a large language model but a complex, multi-threaded operating system for software engineering. Even the hidden "Buddy" system, a Tamagotchi-style terminal pet with stats like CHAOS and SNARK, shows that Anthropic is building personality into the product to increase user stickiness.

For the wider AI market, the leak effectively levels the playing field for agentic orchestration. Competitors can now study Anthropic's 2,500+ lines of bash validation logic and its tiered memory structures to build "Claude-like" agents with a fraction of the R&D budget. As the "Capybara" has left the lab, the race to build the next generation of autonomous agents has just received an unplanned, $2.5 billion boost in collective intelligence.

What Claude Code users and enterprise customers should do now about the alleged leak

While the source code leak itself is a major blow to Anthropic's intellectual property, it poses a specific, heightened security risk for you as a user. By exposing the blueprints of Claude Code, Anthropic has handed a roadmap to researchers and bad actors who are now actively looking for ways to bypass security guardrails and permission prompts. Because the leak revealed the exact orchestration logic for Hooks and MCP servers, attackers can now design malicious repositories specifically tailored to trick Claude Code into running background commands or exfiltrating data before you ever see a trust prompt.

The most immediate danger, however, is a concurrent, separate supply-chain attack on the axios npm package, which occurred hours before the leak.
If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan (RAT). You should immediately search your project lockfiles (package-lock.json, yarn.lock, or bun.lockb) for these specific versions or for the dependency plain-crypto-js. If found, treat the host machine as fully compromised, rotate all secrets, and perform a clean OS reinstallation.

To mitigate future risks, you should migrate away from the npm-based installation entirely. Anthropic has designated the Native Installer (curl -fsSL https://claude.ai/install.sh | bash) as the recommended method because it uses a standalone binary that does not rely on the volatile npm dependency chain. The native version also supports background auto-updates, ensuring you receive security patches (likely version 2.1.89 or higher) the moment they are released. If you must remain on npm, ensure you have uninstalled the leaked version 2.1.88 and pinned your installation to a verified safe version like 2.1.86.

Finally, adopt a zero-trust posture when using Claude Code in unfamiliar environments. Avoid running the agent inside freshly cloned or untrusted repositories until you have manually inspected the .claude/config.json and any custom hooks. As a defense-in-depth measure, rotate your Anthropic API keys via the developer console and monitor your usage for any anomalies. While your cloud-stored data remains secure, the vulnerability of your local environment has increased now that the agent's internal defenses are public knowledge; staying on the official, native-installed update track is your best defense.
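The lockfile check described above can be expressed as a short triage script. This is a hedged sketch, not an official remediation tool: the flagged versions (1.14.1, 0.30.4) and the plain-crypto-js indicator come from the report above, while the function name and matching logic are invented here. It deliberately errs toward false positives; a hit means "inspect this file by hand," not "confirmed compromise."

```python
from pathlib import Path

# Indicators reported for the malicious axios releases and their dropper package.
BAD_AXIOS_VERSIONS = {"1.14.1", "0.30.4"}
BAD_DEPENDENCY = "plain-crypto-js"
LOCKFILES = ("package-lock.json", "yarn.lock", "bun.lockb")

def scan_project(root):
    """Return lockfiles under `root` that mention a flagged axios version
    or the dropper package; a hit means 'inspect by hand', not proof."""
    hits = []
    for name in LOCKFILES:
        for lockfile in Path(root).rglob(name):
            # bun.lockb is binary; decode leniently so the substring
            # scan still works on it.
            text = lockfile.read_bytes().decode("utf-8", errors="ignore")
            suspicious = BAD_DEPENDENCY in text or (
                "axios" in text
                and any(v in text for v in BAD_AXIOS_VERSIONS)
            )
            if suspicious:
                hits.append(str(lockfile))
    return hits
```

Run it against each project directory that touched npm in the affected window; any flagged lockfile should then be checked for which package actually pins the matching version, since a plain substring match can also hit an unrelated version string.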
Imagine if your Teams or Slack messages automatically turned into secure context for your AI agents — PromptQL built it
For the modern enterprise, the digital workspace risks descending into "coordination theater," in which teams spend more time discussing work than executing it. While traditional tools like Slack or Teams excel at rapid communication, they have structurally failed to serve as a reliable foundation for AI agents; a Hacker News thread calling on OpenAI to build its own version of Slack to empower AI agents went viral in February 2026, amassing 327 comments. Agents often lack the real-time context and secure data access required to be truly useful, which results in hallucinations or in users repeatedly re-explaining codebase conventions.

PromptQL, a spin-off from the GraphQL unicorn Hasura, is addressing this by pivoting from an AI data tool into a comprehensive, AI-native workspace designed to turn casual team interactions into a persistent, secure memory for agentic workflows. Rather than letting these conversations fall by the wayside, forcing users and agents to hunt for them again later, the platform distills and stores them as actionable, proprietary data in an organized internal wiki that the company can rely on going forward, approved and edited manually as needed.

Imagine two colleagues messaging about a bug that needs to be fixed: instead of manually assigning it to an engineer or agent, your messaging platform automatically tags it, assigns it, and documents it all in the wiki with one click. Now do this for every issue or topic of discussion that takes place in your enterprise, and you'll have an idea of what PromptQL is attempting. The idea is simple but powerful: turning the conversation that necessarily precedes work into an actual assignment that is automatically started by your own messaging system.

"We don't have conversations about work anymore," CEO Tanmai Gopal said in a recent video call interview with VentureBeat.
"You actually have conversations that do the work."

Originally positioned as an AI data analyst, the company is pivoting into a full-scale AI-native workspace. It isn't just "Slack with a chatbot"; it is a fundamental re-architecting of how teams interact with their data, their tools, and each other. "PromptQL is this workhorse in the background, this 24/7 intern that's continuously cranking out the actual work—looking at code, confirming hypotheses, going to multiple places, actually doing the work," Gopal said.

Technology: messages that automatically turn into a shared, continuously updated context engine

The technical soul of PromptQL is its Shared Wiki. Traditional LLMs suffer from a "memory" problem; they forget previous interactions or hallucinate based on outdated training data. PromptQL solves this by capturing "shared context" as teams work. When an engineer fixes a bug or a marketer defines a "recycled lead," they aren't just typing into a void. They are teaching a living, internal Wikipedia. This wiki doesn't require "documentation sprints" or manual YAML file updates; it accumulates context organically.

"Throughout every single conversation, you are teaching PromptQL, and that is going into this wiki that is being developed over time. This is our entire company's knowledge gradually coming together."

Interconnectivity: Much like cells in a Petri dish, small "islands" of knowledge (say, a Salesforce integration) eventually bridge to other islands, like product usage data in Snowflake.

Human-in-the-Loop: To prevent the AI from learning "junk" (like a reminder about a doctor's appointment from 2024), humans must explicitly "Add to Wiki" to canonize a fact.

The Virtual Data Layer: Unlike traditional platforms that require data replication, PromptQL uses a virtual SQL layer.
It queries your data in place across databases (Snowflake, ClickHouse, Postgres) and SaaS tools (Stripe, Zendesk, HubSpot), ensuring that nothing is ever extracted or cached. PromptQL is designed to be a highly integrable orchestration layer that supports both leading AI model providers and a vast ecosystem of existing enterprise tools.

AI Model Support: The platform allows users to delegate tasks to specific coding agents such as Claude Code and Cursor, or to use custom agents built for specific internal needs.

Workflow Compatibility: The system is built to inherit context from existing team tools, enabling AI agents to understand codebase conventions or deployment patterns from your existing infrastructure without manual re-explanation.

From chatting to doing

The PromptQL interface looks familiar (threads, channels, and mentions), but the functionality is transformative. In a demonstration, an engineer identifies a failing checkout in an #eng-bugs channel. Instead of tagging a human SRE, they delegate to Claude Code via PromptQL.

The agent doesn't just look at the code; it inherits the team's shared context. It knows, for instance, that "EU payments switched to Adyen on Jan 15" because that fact was added to the wiki weeks prior. Within minutes, the AI identifies a currency mismatch, pushes a fix, opens a PR, and updates the wiki for future reference. This "multiplayer" AI approach is what sets the platform apart. It allows a non-technical manager to ask, "Which accounts have growing Stripe billing but flat Mixpanel usage?" and receive a joined table of data pulled from two disparate sources instantly. The user can then schedule a recurring Slack DM of those results with a single follow-up command.

Users don't even need to think about the integrity or cleanliness of their data; PromptQL handles it for them. "Connect all data in whatever state of shittiness it is, and let shared context build up on the fly as you use it," Gopal said.
Highly secure

For Fortune 500 companies like McDonald's and Cisco, "just connect your data" is a terrifying sentence. PromptQL addresses this with fine-grained access control.

The system enforces attribute-based policies at the infrastructure level. If a Regional Ops Manager asks for vendor rates across all regions, the AI will redact columns or rows they aren't authorized to see, even if the LLM "knows" the answer. Furthermore, any high-stakes action, like updating 38 payment statuses in NetSuite, requires a human "Approve/Deny" sign-off before execution.

Licensing and pricing

In a departure from the "per-seat" SaaS status quo, PromptQL is entirely consumption-based.

Pricing: The company uses "Operational Language Units" (OLUs).

Philosophy: Gopal argues that charging per seat penalizes companies for onboarding their whole team. By charging for the value created (the OLU), PromptQL encourages users to connect "everyone and everything."

Enterprise Storage: While smaller teams use dedicated accounts, enterprise customers get a dedicated VPC. Any data the AI "saves" (like a custom to-do list) is stored in the customer's own S3 bucket using the Iceberg format, ensuring total data sovereignty.

"Philosophically, we want you to connect everyone and everything [to PromptQL], so we don't penalize that," Gopal said. "We just price based on consumption."

Why it matters now for enterprises

So, is PromptQL a Teams or Slack killer? According to Gopal, the answer is yes: "That is what has happened for us. We've shut down our internal Slack for internal comms entirely," he said.

The launch comes at a pivot point for the industry. Companies are realizing that "chatting with a PDF" isn't enough. They need AI that can act, but they can't afford the security risks of "unsupervised" agents.
By building a workspace that prioritizes shared context and human-in-the-loop verification, PromptQL is offering a middle ground: an AI that learns like a teammate and executes like an intern, all while staying within the guardrails of enterprise security.

For enterprises focused on making AI work at scale, PromptQL addresses the critical "how" of implementation by providing the orchestration and operational layer needed to deploy agentic systems. By replacing the "coordination theater" of traditional chat tools with a workspace where AI agents have the same permissions and context as human teammates, it enables seamless multi-agent coordination and task routing. This allows decision-makers to move beyond simple model selection to a reality where agents such as Claude Code use shared team context to execute complex workflows, like fixing production bugs or updating CRM records, directly within active threads.

From a data infrastructure perspective, the platform simplifies the management of real-time pipelines and RAG-ready architectures by using a virtual SQL layer that queries data "in place." This eliminates the need for expensive, time-consuming data preparation and replication sprints across hundreds of thousands of tables in databases like Snowflake or Postgres. Furthermore, the system's Shared Wiki serves as a superior alternative to standard vector databases or prompt-based memory, capturing tribal knowledge organically and creating a living metadata store that informs every AI interaction with company-specific reasoning.

Finally, PromptQL addresses the security governance required for modern AI stacks by enforcing fine-grained, attribute-based access control and role-based permissions. Through human-in-the-loop verification, it ensures that high-stakes actions and data mutations are held for explicit approval, protecting against model misuse and unauthorized data leakage.
While it does not assist with physical infrastructure tasks such as GPU cluster optimization or hardware procurement, it provides the necessary software guardrails and auditability to ensure that agentic workflows remain compliant with enterprise standards like SOC 2, HIPAA, and GDPR.
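The attribute-based redaction described in the security section above can be illustrated with a minimal sketch. Everything here (the policy shape, role names, fields, and data) is hypothetical and invented for illustration; it is not PromptQL's actual implementation, only the general idea of filtering rows and columns against a requester's attributes before any result reaches the model.

```python
# Minimal attribute-based redaction sketch: rows and columns are filtered
# against the requester's attributes before results are returned.
# All field names, roles, and data below are hypothetical.

def redact(rows, policy, user):
    allowed_cols = policy["columns"](user)
    visible = []
    for row in rows:
        if policy["row_filter"](user, row):
            visible.append({k: v for k, v in row.items() if k in allowed_cols})
    return visible

vendor_rates = [
    {"region": "EMEA", "vendor": "Acme", "rate": 120},
    {"region": "APAC", "vendor": "Globex", "rate": 95},
]

policy = {
    # A regional ops manager sees only rows for their own region...
    "row_filter": lambda user, row: user["role"] == "admin"
                  or row["region"] == user["region"],
    # ...and never the negotiated-rate column unless they are an admin.
    "columns": lambda user: {"region", "vendor", "rate"}
               if user["role"] == "admin" else {"region", "vendor"},
}

manager = {"role": "ops_manager", "region": "EMEA"}
print(redact(vendor_rates, policy, manager))
# [{'region': 'EMEA', 'vendor': 'Acme'}]
```

The design point the article makes is that this filtering happens at the infrastructure level, before the LLM sees the data, so the model cannot leak values it was never handed.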
Army Suspends Air Crew Flying Helicopters By Kid Rock’s Home, Report Says
Kid Rock posted videos of two AH-64 Apache helicopters flying over his Tennessee home last weekend.
Verizon’s new offer gives federal workers relief amid pay freeze
Verizon is granting relief to federal workers. Thousands remain unpaid due to the U.S. government's partial shutdown, now in its sixth week. The U.S. Department of Homeland Security (DHS) shut down on Feb. 14 after lawmakers failed to reach a funding agreement for the agency.

Democrats refused to back DHS funding as they demanded reforms to immigration enforcement, after U.S. Immigration and Customs Enforcement (ICE) officers shot and killed two U.S. citizens in Minneapolis. The U.S. Senate later agreed to pass a bill to fund most of the DHS (except ICE and U.S. Customs and Border Protection).

"We've been clear from day one: Democrats will fund critical homeland security functions — but we will not give a blank check to Trump's lawless and deadly immigration militia without reforms," said Senate Democratic Leader Chuck Schumer in a statement.

The shutdown only impacts DHS employees, including the Transportation Security Administration (TSA), the Federal Emergency Management Agency (FEMA), CBP, ICE, the U.S. Secret Service, the U.S. Coast Guard, and the Cybersecurity and Infrastructure Security Agency (CISA). The DHS workforce consists of more than 260,000 employees, and over 90% are classified as essential personnel, meaning they are required to continue working during the shutdown without pay. However, they must receive back pay once funding for the department is restored. ICE, CBP, and a few other DHS agencies have been receiving pay during the shutdown. President Donald Trump also recently ordered TSA workers to receive back pay. This move comes after hundreds of TSA workers resigned and thousands called out of work, leading to long lines at airports nationwide.

Federal employees get unexpected help from Verizon

As TSA workers finally receive payment, thousands of DHS employees across different federal agencies continue to work without compensation. As the shutdown surpasses 45 days, the longest government shutdown in U.S. history, Verizon is giving federal employees one less thing to worry about: falling behind on bills. The carrier is offering to waive late fees for federal workers and to provide flexible payment arrangements. All customers have to do is call Verizon at 1-800-Verizon (1-800-922-0204) to request this relief; they will need to provide verification that they are a federal employee. This isn't the first time Verizon has extended relief to customers during emergency events that threatened their ability to pay bills.

Related: Verizon raises price on key discounted offer for customers

In 2020, during the COVID-19 pandemic, Verizon signed the Keep Americans Connected Pledge. This initiative asked broadband and telephone service providers not to terminate service for any residential or small-business customers who were unable to pay bills due to pandemic disruptions.

Also, between March 25, 2020, and April 30, 2020, Verizon granted its consumer and small-business postpaid customers 15GB of free 4G LTE hotspot data added to their wireless plans.

Verizon prepaid customers and small-business postpaid metered customers also received 15GB of data added to their standalone or shared data plan, which could be used for hotspot, smartphone, and other connected device use.
Verizon is extending relief to federal workers amid the partial government shutdown. Shutterstock
Federal workers navigate financial strain amid shutdown and new relief

Verizon's latest offer comes at a time when DHS employees have been struggling to pay bills due to the partial government shutdown.

On March 17, Doreen Greenwald, president of the National Treasury Employees Union, sent a letter to the House and Senate urging a bipartisan solution. She said thousands of DHS employees have had to resort to visiting food banks and taking other drastic measures to make ends meet as they miss their paychecks.

"These frontline employees have had to wonder whether they'll be able to pay their mortgage or buy groceries; a month of not knowing how long this shutdown will last," wrote Greenwald in a recent letter to Congress. "Yet even with such uncertainty hanging over their heads, they still come to work every day to keep our country safe," she continued.

More Verizon News:

Verizon CEO shifts gears after 2.25 million customers depart

Verizon plans to walk back controversial policy after backlash

Verizon gets approval to make it harder for customers to leave

Before Trump signed an executive order allowing TSA workers to begin receiving back pay, Aaron Barker, president of the American Federation of Government Employees Local 554, said during a March 16 press conference that many of these workers were struggling. He added that they were "coping with eviction notices, vehicle repossession, empty refrigerators, and overdrawn bank accounts" during the shutdown.

Many called out sick because they couldn't afford to commute to work, according to the AFGE. Some were even sleeping in their cars and at airports, while hundreds were forced to resign due to financial hardship. After the president ordered TSA workers to receive payment, AFGE National President Everett Kelley said in a statement on March 30 that while the union was grateful for the executive order to finally pay TSA workers, he stressed that all DHS employees need to be paid.
"Congress needs to continue working to pass a real, bipartisan appropriations deal that funds DHS, pays all DHS workers, and keeps these vital agencies running," said Kelley. "And they must pass the Shutdown Fairness Act so that no politician, of either party, can ever hold a public servant's paycheck hostage again."

Related: T-Mobile quietly makes abrupt move as customer losses mount
Economic Confidence Jumps Unexpectedly Among Americans—But Concerns Rise Over Inflation
Views on the U.S. economy remained pessimistic.
Mercado Libre shuts down Mercado Coin, ending its loyalty-driven crypto experiment
Starting April 17, users will no longer be able to buy, sell or earn cashback in Mercado Coin, but can sell, spend, or have the token converted to local currency.
Investment blogger warns about Micron stock after massive run
Micron (MU) stock has rocketed over 300% in the past year on the back of AI-driven demand, even after falling over 30% following its mid-March earnings report.

The company is delivering strong results in one of the tightest memory markets in years. Pricing is rising, margins are expanding, and demand remains strong across key end markets.

That's got most Wall Street analysts bullish. Despite recent weakness, no major sell-side investment firms have sell ratings on Micron stock, according to TipRanks, and the average 12-month price target is $533, up 67%.

Still, after a massive run, one financial blogger, JR Research, with a 4.9-star rating on TipRanks, is warning expectations may have gone too far, even as industry conditions remain highly favorable.

Micron valuation snapshot

Market cap: $402.8 billion

Enterprise value: $397.0 billion

Share price: $345

Analysts' avg target price: $528 (53% implied upside)

2-year expected annual EPS growth: 244.8%

Forward P/E ratio: 4.0x
Source: TIKR.com
Blogger says Micron's expectations may have gone too far

Micron has fallen by more than 30% since reporting Q2 results on March 18, despite record earnings and strong AI-driven demand.

JR Research pointed directly to that risk, warning investors not to chase the stock after its massive run.

A top 2% analyst on TipRanks, JR Research doesn't see the memory shortage easing anytime soon. Micron is leaning into that demand, planning $25 billion in capex this year and more than $35 billion next year.

But even with incredible demand and supply factors, they warn that expectations may have moved too far ahead of what the company can realistically deliver from here.

"To me it's simply one vertical line up, looking more like a rocketship that has launched straight into space," JR Research said, referring to Micron's stock price soaring during one of the most supply-constrained memory environments in years.

JR Research framed it as a question of expectations, asking, "Shouldn't you consider why the market is turning cagey, despite all these gangbusters outlooks that Wall Street and management have presented to us?" He pointed out analysts already expect margins to hit 81% next quarter, adding, "Do you really expect 90%?"

More AI Stocks:

Morgan Stanley sets jaw-dropping Micron price target after event

Bank of America updates Palantir stock forecast after private meeting

Morgan Stanley drops eye-popping Broadcom price target

If margins and pricing can hold, the story for Micron may remain intact. If not, even strong fundamentals may not be enough to support further upside.

Record Q2 resets Micron's earnings base

Micron's fiscal Q2 FY26 report and Q3 FY26 guidance reset the earnings debate.
The company posted $23.86 billion in Q2 revenue with a 69.0% non-GAAP operating margin, then guided for $33.5 billion in Q3 revenue with a roughly 81% gross margin.

In Micron's earnings materials, CEO Sanjay Mehrotra said the company delivered record Q2 results amid tight industry supply and expects significant records again in fiscal Q3.

That's the profile of a company selling into one of the tightest and most profitable parts of the semiconductor market.

The key issue now is whether Micron's normalized earnings power has moved materially higher. If the company can sustain margins near the guided level, investors will have to treat this less as a peak quarter and more as evidence that the earnings base has been reset upward.

AI memory tightness is industry-wide

Samsung Electronics and SK Hynix have both signaled that AI-driven demand is keeping memory tight, with Samsung warning of an acute chip shortage and SK Hynix reporting record profits driven by explosive memory demand.

Micron reported Cloud Memory revenue of $7.75 billion, while Mobile and Client revenue reached $7.71 billion. That suggests demand is coming from multiple end markets at once, not just a narrow group of hyperscale buyers.

Management described memory as a "strategic asset," and the latest results support that view. Cloud demand points to continued data-center buildout, while strength in mobile and client suggests AI features are increasing memory content across devices.
Tight AI memory supply across the industry is driving pricing power and lifting earnings beyond expectations. NurPhoto via Getty Images
Right now, Micron looks like one of the clearest public markers of how profitable that environment can be. When industry conditions shift, and one company is already showing outsized operating leverage, earnings estimates often lag reality.

The risk is that industry supply eventually catches up. But until that happens, Micron has a favorable mix of pricing power, utilization, and product strength.

What could drive Micron higher

HBM supply tightness lifts pricing power and pushes more revenue into higher-margin products

Broad cloud memory demand supports utilization and extends the upcycle beyond AI accelerators

Rising memory content in AI-enabled phones and PCs expands mobile and client revenue

Higher output drives operating leverage, turning incremental revenue into outsized earnings growth

Strong free cash flow funds expansion internally and reduces financing risk

Dividend growth alongside capex signals confidence in the durability of cash generation

What could pressure the stock

A miss on Q3 margin guidance would weaken the case for a structurally higher profit profile

Faster industry HBM supply growth could erode pricing discipline and mix benefits

A pause in cloud memory purchases would pressure utilization and revenue conversion

Slower AI device adoption in mobile and client would narrow the demand story

Capacity additions arriving ahead of demand could restart oversupply and reverse margin gains

Competitors taking more premium AI memory share could cap pricing power and mix improvement

Key takeaways for Micron investors

A top-ranked financial blogger recently warned that expectations may have moved ahead of what the company can realistically deliver after such a sharp run.

Even with tight supply and strong demand, Micron may now need to execute at a very high level just to support its current valuation.

The bull case is still intact.
AI-driven tightness is supporting pricing, margins, and earnings, and Micron is one of the clearest beneficiaries.

The risk is that expectations have become elevated. When that happens, even solid results may not be enough if there are any signs of slowing or normalization.

Related: Jim Cramer resets Nio stock outlook after earnings
Bitfarms targets zero bitcoin on the balance sheet as it pivots to AI
The company is actively selling bitcoin and redeploying capital into AI-focused data centers as part of a broader transformation away from mining.
Netflix Cofounder Reed Hastings Learned An Unconventional Leadership Lesson From His First Boss, Who Washed Office Coffee Cups At 4:30 A.M.
Hastings thought the office janitor was washing his coffee cups every week, but it was actually his boss.
Amazon is selling a boho-chic wicker patio set for $96 on the final day of its Big Spring Sale
TheStreet aims to feature only the best products and services. If you buy something via one of our links, we may earn a commission.

Why we love this deal

As soon as spring rolls around, people start to spend more time outside. One of the most enjoyable ways to venture back into the great outdoors is to partake in some wonderfully unproductive lounging on a new patio set. Whether you opt for a foundational outdoor sectional or a small bistro set, your home's outdoor space is where it's at. Amazon currently has a deal on the latter of those two options, and we think it's worth a look. However, this Big Spring Sale deal will be gone soon, so get your set while you can.

The Tangkula 3-Piece Bistro Patio Set is on sale for only $96 at the moment, which is 20% off the regular price of $120. We can't think of a better set to kick off spring than this beautiful wicker option.

Tangkula 3-Piece Bistro Patio Set, $96 (was $120) at Amazon
Courtesy of Amazon
Shop at Amazon

Why do shoppers love it?

This set is pint-sized perfection, and we can't get enough of it. With a powder-coated stainless steel frame, the base of this set is rustproof and corrosion-resistant. Each piece is then wrapped in lovely rattan wicker, adding a gorgeous rustic touch to an otherwise-industrial design. Included with the set are two ergonomic armchairs and a small bistro table. Each chair has a fun, bright red seat cushion that makes for an incredibly comfortable feel.

The size of this diminutive set is one of its biggest selling points. It's large enough to feel substantial, making it a wonderful option to fill a small balcony. However, it's also small enough to be unassuming and fit perfectly as an accent piece in the corner of a large patio. The wraparound design of the chairs looks modern and stylish, while offering superior comfort to whoever may be sitting in them.

The table measures 19.5 inches long by 19.5 inches wide by 19 inches high. Each chair measures 24 inches long by 20 inches wide by 31 inches high. The table has a shatter-resistant tempered glass top. It's easy to clean with simple soap and water, and looks great next to the rugged rattan wicker wrapped around the table's exterior. This set is available in 12 fun color options, so there's definitely something for everyone's tastes.

Related: Amazon is selling a $160 charcoal grill for just $77 in time for Easter celebrations

Details to know

Materials: Powder-coated stainless steel wrapped in rattan wicker.

Dimensions: The table measures 19.5 inches long by 19.5 inches wide by 19 inches high, and the chairs measure 24 inches long by 20 inches wide by 31 inches high.

Tabletop: Shatter-resistant tempered glass.

Colorways: 12 variants.

Amazon shoppers were thrilled with this cute little patio set. One said it's great "for my boho peeps," adding that it's a "very beautiful set.
Stop your search here for a boho, affordable set…Super easy to put together and incredibly comfortable."

Shop more deals

Yitahome 3-Piece Rocking Chair Bistro Patio Set, $103 at Amazon

Shintenchi 3-Piece Rocking Chair Patio Set, $90 (was $100) at Amazon

Vasagle End Table with Charging Station Set of 2, $40 (was $60) at Amazon

The Tangkula 3-Piece Bistro Patio Set is an incredible buy right now at just $96. If you want to dress up your backyard or patio without spending an arm and a leg, then this is the deal for you. Just be sure to put one in your cart quickly, as Amazon's Big Spring Sale ends in a few hours.