Here’s what inspired the practices.
Amazon is selling a pair of farmhouse table lamps for $25, and they come with LED bulbs that last for years
TheStreet aims to feature only the best products and services. If you buy something via one of our links, we may earn a commission.Why we love this dealWhile many modern homes have overhead lighting, it can often feel utilitarian. Adding lamps to your space can enhance both style and mood, which is why we think Amazon’s deal on a pair of Qimh Farmhouse Table Lamps is worth checking out. Not only are these stylish lamps 50% off, but they’re perfect for a wide range of home decor styles from rustic farmhouse to whimsical cottagecore.Perfect for savvy shoppers on a budget, this set comes with LED bulbs included that are made to last. However, the lamps do require assembly, which some shoppers have mixed reviews on.Qimh Farmhouse Table Lamp Set, $25 (was $50) at Amazon
Courtesy of Amazon
Why do shoppers love it?Each lamp measures 10 inches deep, 10 inches wide, and 22 inches high, and the shade is 11 inches high and 10 inches wide. This makes them ideal for side tables in the living room, master bedroom, or a guest bedroom. The sculpted lamp bases are made of resin and have beautiful detailing and a matching finial that sits on top. These lamps are corded and have rotary switches, so you’ll have to reach underneath the shade to turn them on and off.The linen shades that come with these lamps are a sophisticated shade of cream that goes with just about everything. The one downfall is that the shades do not come assembled, much like the reading lamp I reviewed recently. They’re easy to put together, but since they come flat-packed and are designed to be wrapped around a metal hoop that holds their shape, they have visible seams. I don’t like the way this looks on the lamp I own, but I turned the seam towards the wall, so I don’t have to see it. Some reviewers mention not liking this, with one saying, “The only negative for me was the shades. They look good, but would prefer shades already assembled.”Another plus is that these lamps come with LED bulbs, which can last up to ten years. To get the most time out of them, dust them frequently (dusty bulbs can inhibit proper heat dissipation) use dimmer plugs, which allow for lower energy consumption.While these lamps come in four colors, including blue, Antique White, and Wood Grain, the best price is on the black set. However, if you’re willing to pay a little bit more for a perfect match to your existing home decor, the extra spend may be worth it.Details to knowColors: Black, blue, Antique White, and Wood Grain.Material: Resin base and linen shades.Measurements: 10 inches deep, 10 inches wide, and 22 inches high. Related: Amazon’s smart touch lamp is discounted to just $20, and it’s ‘perfect for any room’The table lamp set has more than 60 five-star ratings, with most shoppers saying they are happy with it. One shopper wrote, “These are so pretty. The color is a rich blue without any chips or flaws. The lamps are made from a sturdy wood, and the pieces went together securely and easily. A warm light bulb was included for each lamp. They are just the perfect addition to my room!”Shop more deals Sealle Farmhouse Table Lamp Set, $37 (was $47) at AmazonTobusa Farmhouse Table Lamp Set, $40 (was $60) at AmazonPokat Rustic Table Lamp Set, $40 at AmazonIf you want a stylish lighting option that looks like it costs more than it does, the Qimh Farmhouse Table Lamp Set is an ideal choice at just $25 for two lamps.
Former Fed insiders issue stark warning on U.S. economy
The Iran war could push U.S. inflation and unemployment higher than the Federal Reserve expected this year, according to a new survey of former central bank officials conducted by the Duke University Department of Economics. The findings arrive just days before the Fed’s March policy meeting, when policymakers will release their “dot plot” forecasts for interest rates, inflation, and employment.The CME Group FedWatch tool estimates a more than 99% chance the Federal Open Market Committee will hold rates steady at its March 17-18 meeting. The next likely quarter-point cut is predicted for later in the year, perhaps as late as December. Among the key findings from the Federal Reserve Insights survey released March 16: Former Fed insiders projected an inflation rate higher than what the central bank is anticipating and a jobless rate higher than the Fed’s December projections. Most respondents said the Fed would likely need to hold policy steady this year. The former officials projected slower growth in economic output than previously estimated.What the Fed’s dual mandate requires for jobs, pricesThe Fed’s dual congressional mandate requires it to balance full employment and price stability.Lower interest rates support hiring but can fuel inflation.Higher rates cool prices but can weaken the job market.The two goals often conflict, operate on different timelines and are influenced by unpredictable global events such as pandemics and wars.
Federal Reserve Bank of New York via FRED®
Fed paused rate cuts in JanuaryThe FOMC voted 10-2 to hold interest rates steady at 3.50% to 3.75% in January after three consecutive quarter-point cuts in its last three meetings of 2025.Those cuts were based on data showing increasing weakening in the labor market and cooling inflation, although still sticky and tariff-laced.More Federal Reserve:Fed Chair Powell sends frustrating message on future interest-rate cutsIt was the FOMC’s first pause since July 2025.The Fed uses government and private data sources to drive monetary policy decisions, a rear-view mirror approach often criticized as being too restrictive. Those critics, including Treasury Secretary Scott Bessent and former Fed Governor Kevin Warsh, Trump’s nominee to be the next Fed chair, advocate use of more advanced models, including AI, to set interest rates.Fed to release latest “dot plot” this weekThe Fed’s “Summary of Economic Projections” provides its estimates of inflation, unemployment, and economic output, in addition to estimates of interest rates that officials see as the most appropriate monetary policy over a three-year horizon. The interest rate estimates, also known as the Fed’s “dot plot,” are closely watched on Wall Street for insight into the central bank’s thinking and plans.Background on the Federal Reserve Insights surveyIn all, 28 former officials and staff members participated in the survey between March 6 and March 13. The survey panel included former Fed board governors, former regional bank presidents, and former staff at the Board and Reserve Banks. Related: Looming Fed meeting shifts bets for 2026 interest-rate cutsThe projections outlined are based on the medians of their estimates.Some individuals didn’t answer every question.Survey projects new inflation, jobless ratesFormer central bank officials projected 3% inflation this year, higher than the Fed’s official 2% target, and higher than the 2.4% inflation rate that the central bank projected for 2026 back in December.Former officials also projected a jobless rate of 4.6%, higher than the 4.4% rate that the central bank projected in December and higher than the 4.2% rate that the Fed sees as normal in the long run. The former officials projected slower growth in economic output than previously estimated. For now, they agreed the United States is not in recession or heading toward recession, but they said that could change if conflict in the Middle East and disruptions to global oil supplies persist.Could Fed interest-rate hikes be in the future? Given the economic backdrop, most respondents said the Fed would likely need to hold policy steady this year. Thirteen respondents said appropriate policy in 2026 would likely be no change in rates.Six said it would be appropriate to raise rates.Seven said it would be appropriate to reduce rates.Former Fed officials focus on Iran war, oil shockMany former officials in the Duke survey described the conflict in the Persian Gulf as a global supply shock — a constraint on the production and movement of energy and other products from the Middle East to other parts of the world. Reduced supply pushes up prices and reduces output. One former official estimated that every $10 per barrel increase in the price of oil adds 0.2 percentage points to the U.S. inflation rate. The longer this disruption lasts, this person said, the greater the risks to inflation and output. A short-term disruption might wash through the economy without major effects on inflation or output. A sustained shock would be more damaging. A sustained oil price above $100 per barrel would raise recession risk, while sustained oil prices over $120 per barrel would make recession the most likely outcome, this person said. What the market outlook expects from the FedWith the Fed meeting this week amid heightened geopolitical tensions and rising oil prices, markets are closely watching what the “dot plot” will reveal.Ben Sullivan, CIO of AE Wealth Management, said higher energy prices could complicate the inflation outlook, even as markets continue to anticipate rate cuts later in the cycle. Sullivan expects the possibility of one to two cuts over the remainder of 2026 if inflation moderates and growth slows, but said a major wild card remains the potential leadership transition at the Fed and the uncertainty surrounding Warsh’s confirmation.Gene Goldman, CIO of Cetera Investment Management, said he is focused on how the Iran War and resulting spike in oil prices could keep market volatility elevated. While markets historically recover from geopolitical shocks, Goldman said higher energy costs can pressure consumer spending and potentially influence Fed policy. He said he is also watching the stronger U.S. dollar and elevated market valuations, which may make equities more sensitive to uncertainty.Related: Oil shock threatens Fed rate-cut bets
Tech retailer announces new stores for the first time in a decade
In the early 2000s, Best Buy was the place to go for all of your technology needs. Best Buy’s stores had all of the newest phones, PCs, and gaming consoles on display so you could physically interact with them before making a purchase. Its blue-shirted staff didn’t work on commission, which made their advice feel less like a sales pitch and more genuinely helpful. And the vibes were always high, with pop ballads blasting over the surround sound speaker system and Geek Squad staff on hand to help you solve even the most complex tech issues.But that all began to change in the late 2010s and early 2020s. A shift in consumer behavior, largely driven by Covid, saw a growing number of shoppers electing to shop online rather than in-person.Electronics stores like Best Buy were among the hardest hit by these changes. Between 2017 and 2022, they saw a $9 billion, or 12%, drop in revenues and a 40.8% decrease in their workforce, according to data from the US Census Bureau. As a result, Best Buy began closing stores across the country. According to Business Insider, the retailer has had fewer brick-and-mortar locations in the US every year since 2012. Best Buy announces new store openings Some 14 years later, it seems that streak may finally be coming to an end.During the company’s Q4 FY 2026 earnings call, CEO Corie Barry announced that Best Buy (BBY) would be expanding its physical footprint over the next year.“This year, we expect to have new domestic Best Buy store growth for the first time in more than a decade,” Barry told investors. “We plan to open six new stores to better meet demand in markets that have grown, including areas where we have not previously had a physical presence.”More retail:Dick’s Sporting Goods says this fan favorite is here to stayCostco shares surprising plan to add more storesKohl’s CEO tells customers major revamp is on the wayThe new stores will likely look a little different than the superstores of two decades ago.“We have created and tested a smaller store model that drives incremental revenue in these types of markets, like the Bozeman store we opened last year,” Barry said on the call. Best Buy started experimenting with these smaller format stores a year ago.Related: Bath & Body Works makes big change customers will notice right away“We’re working on a smaller footprint store that maybe can augment a market like Miami or Atlanta, where we’ve seen a lot of growth, or could go into a new market,” Berry told CNBC in September 2025. “Because it’s a format that can actually work in a smaller setting, you can garner customers you wouldn’t otherwise be able to reach,” she continued. “And so we’re playing with really experiential, all the bells and whistles stores in partnership with our vendors, and at the same time some that might be a little bit leaner and serve a customer we couldn’t otherwise reach.”
Best Buy announces plans to expand its brick-and-mortar locations for the first time in a decade with the opening of six new stores.Getty Images
The return of brick and mortar storesBest Buy isn’t the only company looking to expand its physical presence this year.John Mercer, head of global research at retail data company Coresight Research, told CoStar he expects to see roughly 5,500 store openings in 2026, a 4% increase year-over-year. “These openings and closings, they trend,” he said. “You have peak years and then they have a dip, up or down. And this year looks like it’s going to be a down in terms of closings, maybe an up in terms of openings.”At least a portion of this growth can be attributed to Gen Z shoppers. Despite being the first generation to be digitally native, consulting firm L.E.K. says that 64% of Gen Zers prefer shopping in person to online.Data from Placer.ai seems to back up those findings. According to its February 2026 Mall Index report, indoor malls, outdoor shopping centers, and outlet malls all saw an increase in foot traffic year-over-year.In February: Indoor mall traffic grew by 5%, year-over-year Open air shopping center traffic grew by 7.3% year-over-yearOutlet mall traffic grew by 7.2%, year-over-year
Source: Placer.ai
So while the nostalgic “technology toy store” version of Best Buy may be a thing of the past, these smaller, mall-sized stores could drive the company’s future. Related: Target makes bold change to win back customers
z.ai debuts faster, cheaper GLM-5 Turbo model for agents and ‘claws’ — but it’s not open-source
Chinese AI startup Z.ai, known for its powerful, open source GLM family of large language models (LLMs), has introduced GLM-5-Turbo, a new, proprietary variant of its open source GLM-5 model aimed at agent-driven workflows, with the company positioning it as a faster model tuned for OpenClaw-style tasks such as tool use, long-chain execution and persistent automation. It’s available now through Z.ai’s application programming interface (API) on third-party provider OpenRouter with roughly a 202.8K-token context window, 131.1K max output, and listed pricing of $0.96 per million input tokens and $3.20 per million output tokens. That makes it about $0.04 cheaper per total input and output cost (at 1 million tokens) than its predecessor, according to our calculations. ModelInputOutputTotal CostSourceGrok 4.1 Fast$0.20$0.50$0.70xAIGemini 3 Flash$0.50$3.00$3.50GoogleKimi-K2.5$0.60$3.00$3.60MoonshotGLM-5-Turbo$0.96$3.20$4.16OpenRouterGLM-5$1.00$3.20$4.20Z.aiClaude Haiku 4.5$1.00$5.00$6.00AnthropicQwen3-Max$1.20$6.00$7.20Alibaba CloudGemini 3 Pro$2.00$12.00$14.00GoogleGPT-5.2$1.75$14.00$15.75OpenAIGPT-5.4$2.50$15.00$17.50OpenAIClaude Sonnet 4.5$3.00$15.00$18.00AnthropicClaude Opus 4.6$5.00$25.00$30.00AnthropicGPT-5.4 Pro$30.00$180.00$210.00OpenAISecond, Z.ai is also adding the model to its GLM Coding subscription product, which is its packaged coding assistant service. That service has three tiers: Lite at $27 per quarter, Pro at $81 per quarter, and Max at $216 per quarter. Z.ai’s March 15 rollout note says Pro subscribers get GLM-5-Turbo in March, while Lite subscribers get the base GLM-5 in March and must wait until April for GLM-5-Turbo. The company is also taking early-access applications for enterprises via a Google Form, which suggests some users may get access ahead of that schedule depending on capacity.z.ai describes GLM-5-Turbo as designed for “fast inference” and “deeply optimized for real-world agent workflows involving long execution chains,” with improvements in complex instruction decomposition, tool use, scheduled and persistent execution, and stability across extended tasks.The release offers developers a new option for building OpenClaw-style autonomous AI agents, and serves as a signal about where model vendors think enterprise demand is heading: away from chat interfaces and toward systems that can reliably execute multi-step work. That is now where much of the competition is moving, as well, especially among vendors trying to win developers and enterprise teams building internal assistants, workflow orchestrators and coding agents. Built for execution, not just conversationZ.ai’s materials frame GLM-5-Turbo as a model for production-like agent behavior rather than static prompt-response use. The pitch centers on reliability in practical task flows: better command following, stronger tool invocation, improved handling of scheduled and persistent tasks, and faster execution across longer logical chains. That positioning puts the model squarely in the market for agents that do more than answer questions. It is aimed at systems that can gather information, call tools, break down instructions and keep working through complex task sequences with less supervision.Rather than a straightforward successor to GLM-5, GLM-5-Turbo appears to be a more execution-focused variant: tuned for speed, tool use and long-chain agent stability, while the base GLM-5 remains Z.ai’s broader open-source flagship. GLM-5-Turbo appears especially competitive in OpenClaw scenarios such as information search and gathering, office and daily tasks, data analysis, development and operations, and automation. Those are company-supplied materials, not independent validation, but they make the intended product positioning clear.Background: z.ai and GLM-5 set the stage for TurboFounded in 2019 as a Tsinghua University spinoff in Beijing, Z.ai — formerly Zhipu AI — is now one of China’s best-known foundation model companies. The company remains headquartered in Beijing and is led by CEO Zhang PengZ.ai listed on the Hong Kong Stock Exchange on January 8, 2026, with shares priced at HK$116.20 and opening at HK$120, for a stated market capitalization of HK$52.83 billion, making it China’s largest independent large language model developer.As of September 30, 2025 its models had reportedly been used by more than 12,000 enterprise customers, more than 80 million end-user devices and more than 45 million developers worldwide.Z.ai’s last major release, GLM-5, which debuted in February 2026, gives useful context for what the company is now trying to do with GLM-5-Turbo.GLM-5 is an open-source flagship model carrying an MIT license, posting a record-low hallucination score on the AA-Omniscience Index, and debuted a native “Agent Mode” that could turn prompts or source materials into ready-to-use .docx, .pdf and .xlsx files. That earlier release was also framed as a major technical step up for the company. GLM-5 scaled to 744 billion parameters with 40 billion active per token in a mixture-of-experts architecture, used 28.5 trillion pretraining tokens, and relied on a new asynchronous reinforcement-learning infrastructure called “slime” to reduce training bottlenecks and support more complex agentic behavior. In that light, GLM-5-Turbo looks less like a replacement for GLM-5 than a narrower commercial offshoot: a variant that keeps the long-context, agentic orientation of the flagship line but emphasizes speed, stability and execution in real-world agent chains.Developer features and model packagingOn the technical side, Z.ai has been packaging the GLM-5 family with the kinds of capabilities developers now expect from serious agent-facing models, including long context handling, tools, reasoning support and structured integrations. OpenRouter’s GLM-5-Turbo page lists support for tools, tool choice and response formatting, while also surfacing live performance data including average throughput and latency. OpenRouter’s provider telemetry adds a useful deployment-level comparison between GLM-5 and GLM-5-Turbo, though the data is not perfectly apples-to-apples because GLM-5 appears across several providers while GLM-5-Turbo is shown only through Z.ai. On throughput, GLM-5-Turbo averages 48 tokens per second on OpenRouter, which puts it below the fastest GLM-5 endpoints shown in the screenshots, including Fireworks at 70 tok/s and Friendli at 58 tok/s, but above Together’s 40 tok/s. On raw first-token latency, GLM-5-Turbo is slower in the available data, posting 2.92 seconds versus 0.41 seconds for Friendli’s GLM-5 endpoint, 1.00 second for Parasail and 1.08 seconds for DeepInfra. But the picture improves on end-to-end completion time: GLM-5-Turbo is shown at 8.16 seconds, faster than the GLM-5 endpoints, which range from 9.34 seconds on Fireworks to 11.23 seconds on DeepInfra. The most notable operational advantage is in tool reliability. GLM-5-Turbo shows a 0.67% tool call error rate, materially lower than the GLM-5 providers shown, where error rates range from 2.33% to 6.41%. For enterprise teams, that suggests a model that may not win on initial responsiveness in its current OpenRouter routing, but could still be better suited to longer agent runs where completion stability and lower tool failure matter more than the fastest first token.Benchmarking and pricingA ZClawBench radar chart released by z.ai shows GLM-5-Turbo as especially competitive in OpenClaw scenarios such as information search and gathering, office and daily tasks, data analysis, development and operations, and automation. Those are company-supplied benchmark visuals, not independent validation, but they do help explain how Z.ai wants the two models understood: GLM-5 as the broader coding and open flagship, and Turbo as the more targeted agent-execution variant.A more nuanced licensing signalOne notable caveat is licensing. Z.ai says GLM-5-Turbo is currently closed-source, but it also says the model’s capabilities and findings will be folded into its next open-source model release. That is an important distinction. The company is not clearly promising to open-source GLM-5-Turbo itself. Instead, it is saying that lessons, techniques and improvements from this release will inform a future open model. That makes the launch more nuanced than a clean break from openness.Z.ai’s earlier GLM strategy leaned heavily on open releases and open-weight distribution, which helped it build visibility among developers. China’s AI market may be rebalancing away from open sourceGLM-5-Turbo’s licensing posture also lands in a wider Chinese market context that makes the launch more notable than a simple product update. In recent weeks, reporting around Alibaba’s Qwen unit has raised fresh questions about how China’s leading AI labs will balance open releases with commercial pressure.Earlier this month, Qwen division head Lin Junyang stepped down, becoming the third senior Qwen executive to leave in 2026, even though Alibaba’s Qwen family remains one of the most prolific open-model efforts anywhere, with more than 400 open-source models released since 2023 and more than 1 billion downloads. Reuters then reported on March 16 that Alibaba CEO Eddie Wu would take direct control of a newly formed AI-focused business group consolidating Qwen and other units, amid scrutiny over strategy, profitability and the brutal price competition surrounding open-model offerings in China. Even without overstating those developments, they help frame the broader question hanging over the sector: whether the economics of frontier AI are starting to push even historically open-leaning Chinese labs toward a more segmented strategy. That does not mean Chinese labs are abandoning open source. But the pattern is becoming harder to ignore: open models help drive adoption, developer goodwill and ecosystem reach, while certain high-value variants aimed at enterprise agents, coding workflows and other commercially attractive use cases may increasingly arrive first as proprietary products. In that sense, GLM-5-Turbo fits a larger possible shift in China’s AI market, one that looks increasingly similar to the playbook used by OpenAI, Anthropic and Google in the U.S.: openness as distribution, proprietary systems as business.Seen in that light, GLM-5-Turbo looks like more than a speed-focused product update. It may be another sign that parts of China’s AI sector are moving toward the same hybrid model already common in the U.S.: openness as distribution, proprietary systems as business. That would not mark the end of open-source AI from Chinese labs, but it could mean their most strategically important agent-focused offerings appear first behind closed access, even if some of their underlying advances later make their way into open releases.For developers evaluating agent platforms, that makes GLM-5-Turbo both a product launch and a useful signal. Z.ai is still speaking the language of open models. But with this release, it is also showing that some of its most commercially relevant work may arrive first as proprietary infrastructure for enterprise-grade agent systems.
OpenClaw can bypass your EDR, DLP and IAM without triggering a single alert
An attacker embeds a single instruction inside a forwarded email. An OpenClaw agent summarizes that email as part of a normal task. The hidden instruction tells the agent to forward credentials to an external endpoint. The agent complies — through a sanctioned API call, using its own OAuth tokens. The firewall logs HTTP 200. EDR records a normal process. No signature fires. Nothing went wrong by any definition your security stack understands.
That is the problem. Six independent security teams shipped six OpenClaw defense tools in 14 days. Three attack surfaces survived every one of them. The exposure picture is already worse than most security teams know. Token Security found that 22% of its enterprise customers have employees running OpenClaw without IT approval, and Bitsight counted more than 30,000 publicly exposed instances in two weeks, up from roughly 1,000. Snyk’s ToxicSkills audit adds another dimension: 36% of all ClawHub skills contain security flaws. Jamieson O’Reilly, founder of Dvuln and now security adviser to the OpenClaw project, has been one of the researchers pushing fixes hardest from inside. His credential leakage research on exposed instances was among the earliest warnings the community received. Since then, he has worked directly with founder Peter Steinberger to ship dual-layer malicious skill detection and is now driving a capabilities specification proposal through the agentskills standards body. The team is clear-eyed about the security gaps, he told VentureBeat. “It wasn’t designed from the ground up to be as secure as possible,” O’Reilly said. “That’s understandable given the origins, and we’re owning it without excuses.”None of it closes the three gaps that matter most.Three attack surfaces your stack cannot seeThe first is runtime semantic exfiltration. The attack encodes malicious behavior in meaning, not in binary patterns, which is exactly what the current defense stack cannot see.Palo Alto Networks mapped OpenClaw to every category in the OWASP Top 10 for Agentic Applications and identified what security researcher Simon Willison calls a “lethal trifecta”: private data access, untrusted content exposure, and external communication capabilities in a single process. EDR monitors process behavior. The agent’s behavior looks normal because it is normal. The credentials are real, and the API calls are sanctioned, so EDR reads it as a credentialed user doing expected work. Nothing in the current defense ecosystem tracks what the agent decided to do with that access, or why.The second is cross-agent context leakage. When multiple agents or skills share session context, a prompt injection in one channel poisons decisions across the entire chain. Giskard researchers demonstrated this in January 2026, showing that agents silently appended attacker-controlled instructions to their own workspace files and waited for commands from external servers. The injected prompt becomes a sleeper payload. Palo Alto Networks researchers Sailesh Mishra and Sean P. Morgan warned that persistent memory turns these attacks into stateful, delayed-execution chains. A malicious instruction hidden inside a forwarded message sits in the agent’s context weeks later, activating during an unrelated task.O’Reilly identified cross-agent context leakage as the hardest of these gaps to close. “This one is especially difficult because it is so tightly bound to prompt injection, a systemic vulnerability that is far bigger than OpenClaw and affects every LLM-powered agent system in the industry,” he told VentureBeat. “When context flows unchecked between agents and skills, a single injected prompt can poison or hijack behavior across the entire chain.” No tool in the current ecosystem provides cross-agent context isolation. IronClaw sandboxes individual skill execution. ClawSec monitors file integrity. Neither tracks how context propagates between agents in the same workflow.The third is agent-to-agent trust chains with zero mutual authentication. When OpenClaw agents delegate tasks to other agents or external MCP servers, no identity verification exists between them. A compromised agent in a multi-agent workflow inherits the trust of every agent it communicates with. Compromise one through prompt injection, and it can issue instructions to every agent in the chain using trust relationships that the legitimate agent already built. Microsoft’s security team published guidance in February calling OpenClaw untrusted code execution with persistent credentials, noting the runtime ingests untrusted text, downloads and executes skills from external sources, and performs actions using whatever credentials it holds. Kaspersky’s enterprise risk assessment added that even agents on personal devices threaten organizational security because those devices store VPN configs, browser tokens, and credentials for corporate services. The Moltbook social network for OpenClaw agents already demonstrated the spillover risk: Wiz researchers found a misconfigured database that exposed 1.5 million API authentication tokens and 35,000 email addresses.What 14 days of emergency patching actually closedThe defense ecosystem split into three approaches. Two tools harden OpenClaw in place. ClawSec, from Prompt Security (a SentinelOne company), wraps agents in continuous verification, monitoring critical files for drift and enforcing zero-trust egress by default. OpenClaw’s VirusTotal integration, shipped jointly by Steinberger, O’Reilly, and VirusTotal’s Bernardo Quintero, scans every published ClawHub skill and blocks known malicious packages.Two tools are full architectural rewrites. IronClaw, NEAR AI’s Rust reimplementation, runs all untrusted tools inside WebAssembly sandboxes where tool code starts with zero permissions and must explicitly request network, filesystem, or API access. Credentials get injected at the host boundary and never touch agent code, with built-in leak detection scanning requests and responses. Carapace, an independent open-source project, inverts every dangerous OpenClaw default with fail-closed authentication and OS-level subprocess sandboxing.Two tools focus on scanning and auditability: Cisco’s open-source scanner combines static, behavioral, and LLM semantic analysis, while NanoClaw reduces the entire codebase to roughly 500 lines of TypeScript, running each session in an isolated Docker container.O’Reilly put the supply chain failure in direct terms. “Right now, the industry basically created a brand-new executable format written in plain human language and forgot every control that should come with it,” he said. His response has been hands-on. He shipped the VirusTotal integration before skills.sh, a much larger repository, adopted a similar pattern. Koi Security’s audit validates the urgency: 341 malicious skills found in early February grew to 824 out of 10,700 on ClawHub by mid-month, with the ClawHavoc campaign planting the Atomic Stealer macOS infostealer inside skills disguised as cryptocurrency trading tools, harvesting crypto wallets, SSH credentials, and browser passwords.OpenClaw Security Defense Evaluation MatrixDimensionClawSecVirusTotal IntegrationIronClawCarapaceNanoClawCisco ScannerDiscoveryAgents onlyClawHub onlyNomDNS scanNoNoRuntime ProtectionConfig driftNoWASM sandboxOS sandbox + prompt guardContainer isolationNoSupply ChainChecksum verifySignature scanCapability grantsEd25519 signedManual audit (~500 LOC)Static + LLM + behavioralCredential IsolationNoNoWASM boundary injectionOS keychain + AES-256-GCMMount-restricted dirsNoAuditabilityDrift logsScan verdictsPermission grant logsPrometheus + audit log500 lines totalScan reportsSemantic MonitoringNoNoNoNoNoNoSource: VentureBeat analysis based on published documentation and security audits, March 2026.The capabilities spec that treats skills like executablesO’Reilly submitted a skills specification standards update to the agentskills maintainers, led primarily by Anthropic and Vercel, that is in active discussion. The proposal requires every skill to declare explicit, user-visible capabilities before execution. Think mobile app permission manifests. He noted the proposal is getting strong early feedback from the security community because it finally treats skills like the executables they are.“The other two gaps can be meaningfully hardened with better isolation primitives and runtime guardrails, but truly closing context leakage requires deep architectural changes to how untrusted multi-agent memory and prompting are handled,” O’Reilly said. “The new capabilities spec is the first real step toward solving these challenges proactively instead of bolting on band-aids later.”What to do on Monday morningAssume OpenClaw is already in your environment. The 22% shadow deployment rate is a floor. These six steps close what can be closed and document what cannot.Inventory what is running. Scan for WebSocket traffic on port 18789 and mDNS broadcasts on port 5353. Watch corporate authentication logs for new App ID registrations, OAuth consent events, and Node.js User-Agent strings. Any instance running a version before v2026.2.25 is vulnerable to the ClawJacked remote takeover flaw.Mandate isolated execution. No agent runs on a device connected to production infrastructure. Require container-based deployment with scoped credentials and explicit tool whitelists.Deploy ClawSec on every agent instance and run every ClawHub skill through VirusTotal and Cisco’s open-source scanner before installation. Both are free. Treat skills as third-party executables, because that is what they are.Require human-in-the-loop approval for sensitive agent actions. OpenClaw’s exec approval settings support three modes: security, ask, and allowlist. Set sensitive tools to ask so the agent pauses and requests confirmation before executing shell commands, writing to external APIs, or modifying files outside its workspace. Any action that touches credentials, changes configurations, or sends data to an external endpoint should stop and wait for a human to approve it.Map the three surviving gaps against your risk register. Document whether your organization accepts, mitigates, or blocks each one: runtime semantic exfiltration, cross-agent context leakage, and agent-to-agent trust chains.Bring the evaluation table to your next board meeting. Frame it not as an AI experiment but as a critical bypass of your existing DLP and IAM investments. Every agentic AI platform that follows will face this same defense cycle. The framework transfers to every agent tool your team will assess for the next two years.The security stack you built for applications and endpoints catches malicious code. It does not catch an agent following a malicious instruction through a legitimate API call. That is where these three gaps live.
T. Rowe Price is ready to put dogecoin, shiba inu among tokens in its new crypto ETF
The amended SEC filing details the assets, custody arrangements and potential staking plans for the actively managed crypto fund.
Ether surges 10%, leading crypto rebound as ETF demand, Bitmine buying pick up
Fresh ETF inflows, digital asset treasury buying and a shift away from bitcoin to altcoins are helping lift the second-largest cryptocurrency.
Oil prices drop more than 5% as U.S. calls for international effort to secure Strait of Hormuz
Oil futures finished sharply lower Monday, with U.S. prices down by more than 5%, as traders continued to weigh developments in the Iran conflict.
Defiant Trump Says ‘We Don’t Need Anybody’—But Claims Some Countries Will Help Open Strait Of Hormuz
The president also shot a warning at NATO allies, saying the alliance faced a ‘very bad future’ if its members failed to help the U.S. fight Iran.