Uber on Monday said it had agreed to buy the parking-reservations app SpotHero, a deal that will let people book parking directly in the Uber app.
Safest carmaker issues recall over dangerous EV issue
Last year was a challenging one for vehicle quality control. More than 30 million vehicles in the U.S. were recalled in 2025 due to nearly 1,000 separate vehicle and equipment issues that posed safety risks, according to AutoInsurance.com, almost double the roughly 16 million new vehicles sold last year.

That lack of quality control has carried into the early part of 2026, as more than 23,000 vehicles were recalled in just the first two weeks of the year. Ford was the biggest offender last year, issuing nearly 140 recalls and easily breaking GM's 2014 record of 78; it accounted for 35% of U.S. auto recalls, according to the National Highway Traffic Safety Administration. Stellantis, in second place for number of recalls, accounted for only 12%.

Meanwhile, Swedish-based, Chinese-owned Volvo is issuing its second major recall of the year. Its first covered more than 400,000 vehicles whose rearview cameras might not turn on when the car was in reverse. While that recall was for a relatively benign issue, which Volvo said affected 100% of the vehicles it recalled, this week's recall involves a much more serious defect that could be deadly for drivers.
Volvo has issued its second major recall of 2026. Photo by John Keeble on Getty Images
Volvo issues recall for EV SUV battery fire risk

The EX30, the mothership of Volvo's EV lineup, has a potentially dangerous defect that could cause its battery packs to overheat and catch fire. Volvo consistently ranks among the global leaders in vehicle safety and takes that reputation seriously, which makes it notable that the recall went unreported until Reuters dug it up.

Related: Honda forced into another recall over potentially dangerous issue

Volvo said it is now "contacting the owners of all affected cars to advise them of the next steps" and that it will replace affected battery modules free of charge. In the meantime, Volvo urges owners to limit charging to 70% to reduce the fire risk.

Volvo has been giving this advice to owners in the U.S., Australia, Brazil and a dozen other countries, according to the company's regulatory filings, and it is also advising EX30 owners to park away from buildings.

Beyond the reputational harm bound to result from this recall, Volvo could pay up to $195 million, excluding logistics and repair costs, to fix the issue, according to Reuters. Two affected EX30 owners who spoke to the news service said they wanted to return their vehicles. A British man said he bought the Volvo because of its safety reputation, but that the company is "producing a car that is dangerous." Another owner, from New Zealand, said he is facing much higher costs because the charging cap cut into the car's range, forcing him to charge more often.

Volvo cut 3,000 jobs last year

Tariffs imposed in 2025 have taken a significant toll on automakers, especially foreign ones. Last year Volvo, which imports most of its U.S. vehicles from Europe and China, said customers would have to pay a large share of tariff-related costs. It added that the threat of a 50% tariff would make it impossible to sell the Belgium-made EX30 EV in the U.S., according to Reuters.

Volvo also scrapped its guidance amid tariff costs. Still, its most significant move last year was announcing plans to lay off 3,000 white-collar workers, about 15% of its office-based global workforce, to cut SEK 18 billion (about $1.88 billion) in costs. The layoffs included about 1,000 consultant positions and around 1,200 office-based positions, mainly in Sweden, with the remainder in other countries.

"The actions announced today have been difficult decisions, but they are important steps as we build a stronger and even more resilient Volvo Cars," said CEO Håkan Samuelsson. "The automotive industry is in the middle of a challenging period. To address this, we must improve our cash flow generation and structurally lower our costs. At the same time, we will continue to ensure the development of the talent we need for our ambitious future."
Fed’s Waller calls March interest-rate cut ‘a coin flip’
Federal Reserve Governor Christopher Waller said whether he'll vote to cut interest rates at the central bank's meeting next month will depend on upcoming labor-market data.

It may be appropriate to keep rates steady when the Federal Open Market Committee meets March 17-18 if the February labor-market data show, as the January data did, that downside risks to the labor market have diminished, Waller said. Such a decision would most likely prompt a slew of vocal criticism from President Donald Trump, who has been demanding that the independent central bank slash benchmark interest rates to 1% or lower since taking office in January 2025.

But Waller also said the labor-market data could lead him to support another cut in the benchmark federal funds rate, currently paused at 3.50% to 3.75%. "If the good labor market news of January is revised away or evaporates in February, it would support my position at the FOMC's last meeting, that a 25-basis-point reduction in the policy rate was appropriate, and that such a cut should be made at the March meeting," he said Feb. 23 in prepared remarks for an event with the National Association for Business Economics.

"As things stand today, I rate these two possible outcomes as close to a coin flip," he said.
Federal Reserve Bank of New York via FRED®
FOMC January meeting holds rates steady

The FOMC voted 10-2 to hold interest rates steady at 3.50% to 3.75% in January after three consecutive quarter-point cuts in its last three meetings of 2025. The federal funds rate guides interest rates for investors and consumers on auto and student loans, home-equity loans, and credit cards. For consumers, a delayed rate cut could mean higher borrowing costs that stay in place longer than expected.

Waller and Fed Governor Stephen Miran dissented, saying they would have preferred a quarter-point cut due to softening in the labor market. It was the FOMC's first pause since July 2025.

How the Fed manages interest rates

The Fed's dual congressional mandate requires it to balance full employment and price stability. Lower interest rates support hiring but can fuel inflation; higher rates cool prices but can weaken the job market. The two goals often conflict, operate on different timelines and are influenced by unpredictable global events.

More Federal Reserve: Fed Chair Powell sends frustrating message on future interest-rate cuts

After the December rate cut, Fed Chair Jerome Powell said that the lowering of rates brought monetary policy "within a broad range of neutral." A neutral rate neither stimulates nor restrains economic growth.

When the Federal Reserve last paused interest rates

The Fed last paused interest rates in September 2023, holding the funds rate at 5.25% to 5.50% after a rapid tightening cycle aimed at curbing post-pandemic inflation. The pause lasted nearly a year as policymakers waited to see whether the higher borrowing costs would tame inflation without tipping the economy into a recession. During that pause, inflation gradually cooled and the labor market remained resilient. The central bank resumed cutting rates in September 2025 once Fed officials became confident that inflation was moving sustainably toward the Fed's 2% target.

Waller continues to focus on labor-market risk

Waller dissented from the Fed's decision in January to leave its benchmark policy rate unchanged, saying he preferred a quarter-point reduction because of signs of continued softness in the labor market.

As TheStreet reported, the government's employment report for January subsequently came in much hotter than economists and traders expected. Payrolls rose by 130,000, the most in more than a year, beating estimates of 55,000, and the unemployment rate unexpectedly fell to 4.3% from 4.4%.

Waller said he welcomed the positive January figures, but he has concerns they "may contain more noise than signal," particularly because data revisions in the report also showed job creation in 2025 was close to zero. That, he said, suggests the job market over 2025 was "weak" and "fragile."

The Fed governor, a Trump appointee, also addressed a conundrum many economists have identified about the current K-shaped economy: Growth is relatively solid, yet employers added few, if any, jobs last year. According to Waller, even the meager gains reported earlier this month for last year will eventually be revised to below zero.

Related: Fed officials signal shocking twist on interest-rate cuts

"This would be the first time in my career, my life, that I saw an economy growing like this, and zero job growth," Waller said in a moderated discussion following his remarks, according to The New York Times.
"I don't even know quite how to think about this." He also said that hiring could pick up this year and largely resolve the contradiction.

The Bureau of Labor Statistics is due to release its February employment report on March 6 and the Consumer Price Index on March 11.

White House demands drastic interest-rate cuts

President Trump attacked the Fed on Feb. 20 in a Truth Social post after the government reported that the economy grew more slowly in the final three months of last year than in the summer and fall. GDP growth slowed to an annual rate of 1.4%, down from 4.4% in the fall.

"LOWER INTEREST RATES," Trump posted. "'Two Late' Powell is the WORST!!" he added, misspelling his usual nickname for Powell, whom he has frequently referred to as "Too Late," among other insults.

Trump has said that lower interest rates will jump-start the stagnant housing market and reduce the level of interest on the $38.56 trillion federal debt.

Newest PCE data show inflation ticking up

The Fed's preferred inflation gauge is the Personal Consumption Expenditures (PCE) Price Index, and the most recent headline number comes from the December 2025 report, which showed PCE at 2.9%, up from 2.8% in November.

Tony Welch, chief investment officer at SignatureFD, told TheStreet that the most recent PCE data show inflation remaining sticky around the 3% level, which will keep "the Fed in a holding pattern."

"Services inflation remains the key friction point, and while it is improving directionally, it has not slowed enough to justify a near-term policy shift," Welch said. "The implication is a longer window of 'higher for longer' policy than markets had priced late last year, even as the overall trajectory remains disinflationary."

CME Group's FedWatch tool shows a 96.1% likelihood that the FOMC will hold rates steady in March. Markets are expecting two rate cuts in 2026, in June and possibly December.

Waller's look at AI's impact on the U.S. economy

Waller also said he doesn't yet see artificial intelligence significantly boosting productivity across the economy. He added that recent strong productivity trends could be due to a number of factors, including shifting work arrangements following the Covid pandemic.

"The growth and productivity we've seen over the last year or two isn't from AI," he said in a question-and-answer session following his remarks, according to Bloomberg. "I don't think any of us believe that that's a big driver for productivity growth in the aggregate numbers," Waller said.

Waller addresses the Fed's balance-sheet controversy

Waller, also during the Q&A session, weighed in on the Fed's $6.6 trillion balance sheet. It has grown, he said, both because of the central bank's asset purchases to support the economy during crises and because of the Fed's embrace of an "ample" reserves system under which banks hold more reserves, boosting liquidity in the financial system.

Kevin Warsh, President Trump's nominee to be the next Fed chair, and Treasury Secretary Scott Bessent are among critics of the balance sheet's size. They want the central bank to have a much smaller footprint in the markets. But Waller said returning to a "scarce" reserves regime isn't desirable.

"You don't want banks every night of the day digging around in the couch cushions, looking for money.
This is massively inefficient and stupid," he said.

What to expect as the March FOMC meeting approaches

The Fed's next move hinges less on political pressure and more on whether the labor market confirms resilience or reveals fresh cracks. If hiring weakens and inflation remains sticky near 3%, the data-driven FOMC policymakers may find themselves balancing competing risks with little margin for error. As for markets, the real question is no longer when interest rates will fall, but whether the economic data will force the Fed's hand.

Related: Fed official signals surprise rate-cut shift
How does the Dow Jones perform during stock market crashes?
Former Starbucks (SBUX) CEO Howard Schultz summed it up succinctly: "Managing and navigating through a financial crisis is no fun at all." It's hard to argue with that one.

For investors, the very mention of Black Monday, the dot-com bust, or the 2008 financial crisis is often enough to revive hellish memories of steep losses and extreme market volatility.

In the U.S., the three most widely followed stock benchmarks are the Dow Jones Industrial Average, the S&P 500, and the Nasdaq Composite. While they often move in the same direction during major market shocks, differences in how each index is structured can lead to notably different results during market crashes and financial crises.

Of the three, the Dow is arguably the most unusual in construction, and thus theoretically the most prone to being an outlier during times of market uncertainty. But is that what history shows?

Here, we examine the Dow's performance during some of the most notorious market crashes and how it compared with the S&P 500 and the Nasdaq. But first, it's important to understand how each index differs.
The Dow Jones Industrial Average is the oldest U.S. stock index, dating back to 1896. Photo by NurPhoto on Getty Images
How the Dow differs from the S&P 500 and the Nasdaq

The Dow is the oldest of the three indexes by far, and it remains one of the most closely watched gauges of the U.S. stock market and economy today. The index tracks 30 large, established companies and was created in 1896, when Grover Cleveland was in the White House and the Tabulating Machine Company, later known as IBM (IBM), was just getting its start.

With only 30 component stocks, the DJIA is very narrow in scope compared to its peers. It's also price-weighted, meaning higher-priced stocks affect its value more than those that trade at lower share prices.

More on the Dow:
- What is the Dow divisor & how does it work?
- Dow Jones vs. S&P 500: Which index actually represents the market?
- How to track Dow stocks in Google Sheets via Google Finance

In contrast, most other bellwether indexes, including the S&P 500 and the Nasdaq Composite, are capitalization-weighted, meaning companies with higher market values (regardless of share price) have more influence than smaller companies. The S&P 500, which follows 500 of the largest publicly traded U.S. companies, launched in its modern form in 1957. The Nasdaq Composite began operating in 1971 as the world's first electronic stock market and today includes thousands of stocks (anything that trades on the Nasdaq exchange with a share price of over $1), with a heavy weighting toward technology and growth companies.

"Indexes like the S&P 500 and the Nasdaq Composite track a basket of stocks and consequently are more diversified than owning individual stocks, which can help mitigate the risks of any individual security," wrote Seth Carlson of J.P. Morgan Wealth Management on the firm's website. "While such indexes mitigate volatility given their diversification, it's important to note that some stocks within the index may experience outsized returns or losses."

That dynamic was visible in 2023 and 2024, when a small group of large technology stocks, often referred to as the "Magnificent Seven," accounted for a sizable share of the S&P 500's gains.

DJIA vs. S&P 500 vs. Nasdaq Composite at a glance

- Launched: DJIA, 1896; S&P 500, 1957; Nasdaq Composite, 1971
- Component stocks: DJIA, 30; S&P 500, 500; Nasdaq Composite, thousands
- Selection method: DJIA, "blue-chip" stocks hand-selected by committee; S&P 500, the 500 largest American stocks by market cap; Nasdaq Composite, stocks and other vehicles with share prices over $1 that trade exclusively on the Nasdaq stock market
- Weighting method: DJIA, price-weighted; S&P 500, capitalization-weighted; Nasdaq Composite, capitalization-weighted

How does the Dow perform when the market crashes?

Investors frequently wonder how the Dow performs during major downturns compared with the broader S&P 500 and the tech-heavy Nasdaq. One important difference is how the indexes are constructed. The Dow is price-weighted, meaning stocks with higher share prices have greater influence on the index's movement. By contrast, both the S&P 500 and the Nasdaq Composite are market-capitalization-weighted, giving the largest companies the biggest impact.

Data from Nasdaq Indexes, analyzing the Nasdaq-100's performance across three major crises, shows that tech-heavy stocks often experience larger swings than broader markets.

Related: What happens when a stock splits in the Dow Jones Industrial Average?

While the Nasdaq-100's volatility can exaggerate short-term losses, the broader Nasdaq Composite, which includes thousands of stocks, generally follows the same trends but with less extreme moves.
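To make the weighting distinction concrete, here is a minimal, hypothetical Python sketch. The tickers, prices and share counts are invented for illustration; they are not real market data.

```python
# Illustrative only: toy data showing how a price-weighted index and a
# capitalization-weighted index respond differently to the same move.

stocks = {
    # ticker: (share_price, shares_outstanding) -- all values invented
    "AAA": (400.0, 1_000_000),    # high share price, small company
    "BBB": (50.0, 100_000_000),   # low share price, huge company
    "CCC": (120.0, 5_000_000),
}

def price_weighted(quotes, divisor=1.0):
    """Dow-style level: sum of share prices divided by a divisor."""
    return sum(price for price, _ in quotes.values()) / divisor

def cap_weighted(quotes, base_cap):
    """S&P-style level: total market cap indexed so the base period = 100."""
    total = sum(price * shares for price, shares in quotes.values())
    return 100.0 * total / base_cap

base_cap = sum(p * s for p, s in stocks.values())

# Drop the HIGH-PRICED but SMALL stock "AAA" by 10%.
moved = dict(stocks)
moved["AAA"] = (360.0, 1_000_000)

print(price_weighted(stocks), "->", price_weighted(moved))
# 570.0 -> 530.0: the price-weighted index falls about 7%.

print(cap_weighted(stocks, base_cap), "->", round(cap_weighted(moved, base_cap), 2))
# 100.0 -> 99.33: the cap-weighted index barely moves, because AAA is a
# tiny fraction of total market value despite its high share price.
```

The same mechanism works in reverse: a selloff concentrated in a few expensive Dow components hits the Dow harder than a cap-weighted peer, which is part of why the three indexes can diverge during crashes.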
Since the Dow is made up of 30 mature, blue-chip companies, many of them established dividend payers, it often proves more resilient during sharp selloffs than indexes with heavier exposure to high-growth stocks. The Nasdaq Composite, by comparison, is typically the most volatile of the three because of its concentration in technology and growth-oriented companies. The S&P 500 generally tracks a path closer to the Dow, though its broader and more growth-oriented mix can lead to steeper declines during periods when investors aggressively rotate out of riskier sectors.

How the Dow performed during 3 major financial crises compared to other indexes

Here's a more in-depth look at how the Dow and its peers, the S&P 500 and the Nasdaq Composite, held up during three of the biggest market crashes in recent memory.

Black Monday (Oct. 19, 1987)

On the worst single trading day in modern U.S. market history, the Dow plunged 22.6%. The S&P 500 fell 20.47%, while the Nasdaq dropped about 11.4%. The episode highlighted how closely linked global financial markets had become, according to historical accounts from the Federal Reserve.

The dot-com bust (2000-2002)

When the technology bubble collapsed in the early 2000s, all three major indexes suffered prolonged declines. The Nasdaq, which had been driven sharply higher by unprofitable and highly speculative technology companies, fell far more dramatically than the broader market. The Dow also entered a multiyear bear market, but its losses were generally less severe than those seen in the Nasdaq. While the Dow recovered its prior peak several years later, the Nasdaq Composite did not fully regain its 2000 high until around 2015.

Related: What Was the Dot-Com Bubble & Why Did It Burst?

The global financial crisis (2007-2009)

During the 2008 collapse, the three benchmarks again moved sharply lower together, but the magnitude of the declines differed. For the full year of 2008, the S&P 500 fell roughly 38.5%, the Nasdaq declined about 40.5%, and the Dow lost close to 34%. The Dow's recovery was supported by extraordinary government and central-bank intervention, including large-scale financial-system rescues. All three indexes bottomed in March 2009.

The Dow and the S&P 500 regained their pre-crisis highs by early 2013. The Nasdaq Composite, which had already been weighed down by much steeper losses during the earlier dot-com collapse, ultimately went on to outperform in the years following the financial crisis as technology stocks became market leaders again. Data published by Nasdaq Indexes show that the post-crisis recovery was marked by strong sector rotation, with technology-heavy benchmarks significantly outpacing indexes with larger weightings in financial and industrial companies.

The bottom line

During financial crises, all three major U.S. stock indexes tend to suffer large and synchronized declines. Historically, however, the Dow has often held up better on a relative basis than the Nasdaq, and sometimes better than the S&P 500, because of its smaller, more conservative, blue-chip composition. The trade-off is that in long recoveries driven by technology and growth stocks, the Nasdaq has frequently outperformed, while the Dow's steadier mix of companies can lead to slower but often less volatile returns.

For investors, the differences highlight why broad diversification across multiple types of index funds can matter most when markets are under severe stress.

Related: DJIA Master List: What companies make up the Dow Jones Industrial Index in 2026?
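A footnote on the arithmetic behind those crash figures: percentage losses and the gains needed to erase them are asymmetric, which is why deep drawdowns take so long to repair. A short illustrative Python sketch, using only the declines cited above:

```python
# Losses and the rallies needed to erase them are asymmetric.
def gain_to_recover(loss_pct: float) -> float:
    """Percentage gain required to get back to even after a percentage loss."""
    loss = loss_pct / 100.0
    return (1.0 / (1.0 - loss) - 1.0) * 100.0

# Declines cited above: Black Monday Dow; 2008 Dow, S&P 500 and Nasdaq.
for loss in (22.6, 34.0, 38.5, 40.5):
    print(f"A {loss:.1f}% decline needs a {gain_to_recover(loss):.1f}% rally to break even")
# 22.6% -> 29.2%, 34.0% -> 51.5%, 38.5% -> 62.6%, 40.5% -> 68.1%
```

That asymmetry helps explain why the indexes that fall furthest, as the Nasdaq did after the dot-com bust, can take many more years to regain their prior highs.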
Google clamps down on Antigravity ‘malicious usage’, cutting off OpenClaw users in sweeping ToS enforcement move
Google caused controversy among some developers this weekend and into Monday, February 23, after restricting their use of its new Antigravity "vibe coding" platform, alleging "malicious usage."

Some users who had been running the open-source autonomous AI agent OpenClaw alongside agents built on Antigravity, as well as some who had connected OpenClaw agents to their Gmail accounts, claimed on social media that they lost access to their Google accounts. According to Google, those users had been using Antigravity to access a larger number of Gemini tokens via third-party platforms like OpenClaw, which overwhelmed the system for other Antigravity customers. The move has cut off several users, underscoring the architectural and trust issues that can arise with OpenClaw.

The timing of Google's crackdown is particularly pointed. Just one week ago, on February 15, OpenAI CEO Sam Altman announced that OpenClaw creator Peter Steinberger had joined OpenAI to lead its "next generation of personal agents." While OpenClaw remains an open-source project under an independent foundation, it is now financially backed and strategically guided by Google's primary rival. By cutting off OpenClaw's access to Antigravity, Google isn't just protecting its server load; it is effectively severing a pipeline that lets an OpenAI-adjacent tool leverage Google's most advanced Gemini models.

Varun Mohan, a Google DeepMind engineer and the founder and former CEO of Windsurf, said in an X post that the company noticed "malicious usage" that led to service degradation. "We've been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended. We understand that a subset of these users were not aware that this was against our ToS [Terms of Service] and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users," the post said.

A Google DeepMind spokesperson told VentureBeat that the move is not meant to permanently ban the use of Antigravity with third-party platforms, but to align its use with the platform's terms of service. Unsurprisingly, Google's move has caused a furor among OpenClaw users, including OpenClaw creator Peter Steinberger, who announced that OpenClaw will remove Google support as a result.

Infrastructure and connection uncertainty

OpenClaw emerged as a way for individual users to run shell commands and access local files, fulfilling a major promise of AI agents: efficiently running workflows for users. But, as VentureBeat has frequently pointed out, it can often run into security and guardrail issues. There are companies building ways for enterprise customers to access OpenClaw securely and with a governance layer, though OpenClaw is so new that we should expect more announcements soon.

However, Google's move was framed not as a security issue but as one of access and runtime, further showing that there is still significant uncertainty when users want to bring something like OpenClaw into their workflows. This is not the first time developers and power users of agentic AI have found their access curtailed: last year, Anthropic throttled access to Claude Code after the company claimed some users were abusing the system by running it 24/7. What this highlights is the disconnect between companies like Google and OpenClaw users.
OpenClaw offers many interesting possibilities for creating workflows with agents. However, because it is continually evolving, users may inadvertently run afoul of ToS or rate limits. Mohan said Google is working to bring the banned users back, but whether this means the company will amend its ToS or figure out a secure connection between OpenClaw agents and Antigravity models remains to be seen.

Affected users

Several users said on both the Y Combinator message boards and X that they no longer had access to their Google accounts after running OpenClaw instances against certain Google products.

Google's move mirrors a broader industry shift toward "walled garden" agent ecosystems. Earlier this year, Anthropic introduced "client fingerprinting" to ensure that its Claude Code environment remains the exclusive interface for its models, effectively locking out third-party wrappers like OpenClaw. For developers, the message is clear: the era of "bring your own agent" to a frontier model is ending. Providers are now prioritizing vertically integrated experiences where they can capture 100% of the telemetry and subscription revenue, often at the expense of the open-source interoperability that defined the early days of the LLM boom.

Some users have said they will no longer use Google or Gemini for their projects. For now, people who want to keep using Antigravity will need to wait until Google figures out a way for them to use OpenClaw and access Gemini tokens in a manner Google deems "fair." Google DeepMind reiterated that it had cut access only to Antigravity, not to other Google applications.

Conclusion: the enterprise takeaway

For enterprise technical decision-makers, the "Antigravity Ban" serves as a definitive case study in the risks of agentic dependency. As the industry moves from chatbots to autonomous agents, the following realities must now dictate strategy:

- Platform fragility is the new normal: The sudden lockout of $250/month "Ultra" users proves that even high-paying enterprise customers have little leverage when a provider decides to change its "fair use" definitions. Relying on OAuth-based third-party wrappers for core business logic is now a high-risk gamble.
- The rise of local-first governance: With OpenClaw moving toward an OpenAI-backed foundation and Google and Anthropic tightening their clouds, enterprises should prioritize agent frameworks that can run local-first or within VPCs. The "token loophole" that OpenClaw exploited is being closed; future agentic scale will require direct, high-cost API contracts rather than subsidized consumer seats.
- Account portability as a requirement: The fact that users lost access to their Google accounts underscores the danger of bundling development environments with primary identity providers. Decision-makers should decouple AI development from core corporate identity (SSO) where possible, so a single ToS violation can't paralyze an entire team's communications.

Ultimately, the Antigravity incident marks the end of the "Wild West" for AI agents.
As Google and OpenAI stake their claims, enterprises must choose between the stability of the walled garden and the complexity (and cost) of truly independent, self-hosted infrastructure.
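Neither Google nor Anthropic has published the mechanics of these enforcement systems, but the pattern this article describes (identify the calling client, then throttle or cut off traffic that doesn't look like first-party usage) is straightforward to sketch. Everything in the Python below, including the client IDs, thresholds and header-based fingerprint, is a hypothetical illustration, not any provider's actual implementation:

```python
# Hypothetical sketch of a provider-side "walled garden" gate: fingerprint
# the calling client, then throttle accounts whose token volume looks like
# automated third-party relaying rather than first-party use.

import time
from collections import defaultdict

APPROVED_CLIENTS = {"antigravity-cli"}   # invented first-party client ID
TOKENS_PER_MINUTE_CAP = 200_000          # invented fair-use threshold

_usage = defaultdict(list)               # account -> [(timestamp, tokens), ...]

def fingerprint(headers: dict) -> str:
    """Naive fingerprint from request headers. A real system would rely on
    stronger signals (e.g. signed client attestations), since anything
    header-based can be spoofed."""
    return headers.get("user-agent", "unknown").split("/")[0]

def allow_request(account: str, headers: dict, tokens: int) -> bool:
    if fingerprint(headers) not in APPROVED_CLIENTS:
        return False                     # unrecognized wrapper: blocked outright

    now = time.monotonic()
    window = [(t, n) for t, n in _usage[account] if now - t < 60.0]
    window.append((now, tokens))
    _usage[account] = window

    # Sustained volume far above the cap suggests 24/7 automated relaying.
    return sum(n for _, n in window) <= TOKENS_PER_MINUTE_CAP

# A request from an OpenClaw-style wrapper never reaches the model:
print(allow_request("acct-1", {"user-agent": "openclaw/2.0"}, 500))          # False
print(allow_request("acct-1", {"user-agent": "antigravity-cli/1.4"}, 500))   # True
```

The obvious weakness of this toy version, that a User-Agent string is trivially spoofable, is presumably why real "client fingerprinting" of the kind Anthropic describes depends on stronger signals than request headers.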
One engineer built a production SaaS product in an hour: here's the governance system that made it possible
Every engineering leader watching the agentic coding wave is eventually going to face the same question: if AI can generate production-quality code faster than any team, what does governance look like when the human isn't writing the code anymore?

Most teams don't have a good answer yet. Treasure Data, a SoftBank-backed customer data platform serving more than 450 global brands, now has one, though it learned parts of it the hard way.

The company today officially announced Treasure Code, a new AI-native command-line interface that lets data engineers and platform teams operate its full CDP through natural language, with Claude Code handling creation and iteration underneath. It was built by a single engineer, and the company says the coding itself took roughly 60 minutes. But that number is almost beside the point. The more important story is what had to be true before those 60 minutes were possible, and what broke after.

"From a planning standpoint, we still have to plan to derisk the business, and that did take a couple of weeks," Rafa Flores, chief product officer at Treasure Data, told VentureBeat. "From an ideation and execution standpoint, that's where you kind of just blend the two and you just go, go, go. And it's not just prototyping, it's rolling things out in production in a safe way."

Build the governance layer first

Before a single line of code was written, Treasure Data had to answer a harder question: what does the system need to be prohibited from doing, and how do you enforce that at the platform level rather than hoping the code respects it?

The guardrails Treasure Data built live upstream of the code itself. When any user connects to the CDP through Treasure Code, access control and permission management are inherited directly from the platform. Users can only reach resources they already have permission for. PII cannot be exposed. API keys cannot be surfaced. The system cannot speak disparagingly about a brand or competitor.

"We had to get CISOs involved. I was involved. Our CTO, heads of engineering, just to make sure that this thing didn't just go rogue," Flores said.

This foundation made the next step possible: letting AI generate 100% of the codebase, with a three-tier quality pipeline enforcing production standards throughout.
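Treasure Data hasn't published Treasure Code's internals, but the guardrails described above (inherited permissions, PII redaction, no surfaced API keys) imply an enforcement layer that sits between the agent and every platform call. As a rough illustration of that shape, here is a minimal Python sketch; the class names, checks and toy PII pattern are all invented for this example.

```python
# Hypothetical sketch: every agent-initiated call passes through a gateway
# that enforces platform-inherited permissions and redacts PII, so generated
# code never has more authority than the user who invoked it.

import re

PII_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")  # toy email detector

class PermissionDenied(Exception):
    pass

class GovernedGateway:
    def __init__(self, platform_acl):
        # user -> set of permitted resources, inherited from the platform,
        # never declared by the agent itself.
        self._acl = platform_acl

    def execute(self, user: str, resource: str, query: str) -> str:
        if resource not in self._acl.get(user, set()):
            raise PermissionDenied(f"{user} may not access {resource}")
        raw = self._run(resource, query)            # the actual platform call
        return PII_PATTERN.sub("[REDACTED]", raw)   # redact before the agent sees it

    def _run(self, resource: str, query: str) -> str:
        # Stand-in for the real CDP API; the agent never holds a raw key.
        return f"rows for {query!r} on {resource} (contact: jane@example.com)"

gw = GovernedGateway({"analyst": {"segments"}})
print(gw.execute("analyst", "segments", "top customers"))  # PII comes back redacted
```

The design point is that the checks live in the gateway, not in the generated code, so nothing the AI writes can widen its own authority.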
The three-tier pipeline for AI code generation

The first tier is an AI-based code reviewer, itself built with Claude Code. The reviewer sits at the pull-request stage and runs a structured review checklist against every proposed merge, checking for architectural alignment, security compliance, proper error handling, test coverage and documentation quality. When all criteria are satisfied, it can merge automatically. When they aren't, it flags the change for human intervention.

The fact that Treasure Data built the code reviewer in Claude Code is not incidental. It means the tool validating AI-generated code was itself AI-generated, a proof point that the workflow is self-reinforcing rather than dependent on a separate human-written quality layer.

The second tier is a standard CI/CD pipeline running automated unit, integration and end-to-end tests, static analysis, linting and security checks against every change. The third is human review, required wherever automated systems flag risk or enterprise policy demands sign-off.

The internal principle Treasure Data operates under: AI writes code, but AI does not ship code.
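The pipeline Flores describes boils down to a merge decision with three gates. Here is a compressed, hypothetical Python sketch of that decision logic; the checklist items and function names are invented, and the real tier-one reviewer runs inside Claude Code rather than a simple script.

```python
# Hypothetical sketch of the three-tier gate: AI review, then CI, then a
# human wherever either automated tier flags risk.

from dataclasses import dataclass, field

CHECKLIST = {"architecture", "security", "error_handling", "tests", "docs"}

@dataclass
class ChangeStatus:
    reviewer_passed: set = field(default_factory=set)  # tier 1: AI checklist
    ci_green: bool = False                             # tier 2: tests/lint/scans
    policy_flag: bool = False                          # enterprise sign-off rule

def merge_decision(s: ChangeStatus) -> str:
    if not s.ci_green:
        return "blocked: CI failures"                  # tier 2 is non-negotiable
    if s.policy_flag or CHECKLIST - s.reviewer_passed:
        return "escalate: human review required"       # tier 3
    return "auto-merge"                                # all gates satisfied

print(merge_decision(ChangeStatus(reviewer_passed=set(CHECKLIST), ci_green=True)))
# -> auto-merge: the AI wrote and reviewed the code, but only the gates shipped it.
```

The point of the sketch is the ordering: auto-merge is the exception all three tiers must jointly earn, which is how "AI writes code, but AI does not ship code" becomes enforceable rather than aspirational.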
Why this isn't just Cursor pointed at a database

The obvious question for any engineering team is why not just point an existing tool like Cursor at your data platform, or expose it as an MCP server and let Claude Code query it directly.

Flores argued the difference is governance depth. A generic connection gives you natural-language access to data but inherits none of the platform's existing permission structures, meaning every query runs with whatever access the API key allows. Treasure Code inherits Treasure Data's full access-control and permissioning layer, so what a user can do through natural language is bounded by what they're already authorized to do in the platform.

The second distinction is orchestration. Because Treasure Code connects directly to Treasure Data's AI Agent Foundry, it can coordinate sub-agents and skills across the platform rather than executing single tasks in isolation: the difference between telling an AI to run an analysis and having it orchestrate that analysis across omni-channel activation, segmentation and reporting simultaneously.

What broke anyway

Even with the governance architecture in place, the launch didn't go cleanly, and Flores was candid about it.

Treasure Data initially made Treasure Code available to customers without a go-to-market plan. The assumption was that it would stay quiet while the team figured out next steps. Customers found it anyway: more than 100 customers and close to 1,000 users adopted it within two weeks, entirely through organic discovery.

"We didn't put any go-to-market motions behind it. We didn't think people were going to find it. Well, they did," Flores said. "We were left scrambling with, how do we actually do the go-to-market motions? Do we even do a beta, since technically it's live?"

The unplanned adoption also created a compliance gap. Treasure Data is still in the process of formally certifying Treasure Code under its Trust AI compliance program, a certification it had not completed before the product reached customers.

A second problem emerged when Treasure Data opened skill development to non-engineering teams. CSMs and account directors began building and submitting skills without understanding what would get approved and merged, creating significant wasted effort and a backlog of submissions that couldn't clear the repository's access policies.

Enterprise validation and what's still missing

Thomson Reuters is among the early adopters. Flores said the company had been attempting to build an in-house AI agent platform and struggling to move fast enough. It connected with Treasure Data's AI Agent Foundry to accelerate audience-segmentation work, then extended into Treasure Code to customize and iterate more rapidly. The feedback, Flores said, has centered on extensibility and flexibility, and on the fact that procurement was already done, removing a significant enterprise barrier to adoption.

The gap Thomson Reuters has flagged, and that Flores acknowledges the product doesn't yet address, is guidance on AI maturity. Treasure Code doesn't tell users who should use it, what to tackle first, or how to structure access across different skill levels within an organization. "AI that allows you to be leveraged, but also tells you how to leverage it, I think that's very differentiated," Flores said. He sees it as the next meaningful layer to build.

What engineering leaders should take from this

Flores has had time to reflect on what the experience taught him, and he was direct about what he'd change. Next time, he said, the release would stay internal first. "We will release it internally only. I will not release it to anyone outside of the organization," he said. "It will be more of a controlled release so we can actually learn what we're actually being exposed to at lower risk."

On skill development, the lesson was to establish clear criteria for what gets approved and merged before opening the process to teams outside engineering, not after.

The common thread in both lessons is the same one that shaped the governance architecture and the three-tier pipeline: speed is only an advantage if the structure around it holds. For engineering leaders evaluating whether agentic coding is ready for production, the Treasure Data experience translates into three practical conclusions.

Governance infrastructure has to precede the code, not follow it. The platform-level access controls and permission inheritance were what made it safe to let AI generate freely. Without that foundation, the speed advantage disappears because every output requires exhaustive manual review.
A quality gate that doesn't depend entirely on humans is not optional at scale. AI can review every pull request consistently, without fatigue, and check policy compliance systematically across the entire codebase. Human review remains essential, but as a final check rather than the primary quality mechanism.

Plan for organic adoption. If the product works, people will find it before you're ready. The compliance and go-to-market gaps Treasure Data is still closing are a direct result of underestimating that.

"Yes, vibe coding can work if done in a safe way and proper guardrails are in place," Flores said. "Embrace it in a way to find means of not replacing the good work you do, but the tedious work that you can probably automate."
Jobs and CPI reports are not being politically manipulated, government’s statistics chief says
The temporary chief of the U.S. agency that produces critical economic reports on jobs, unemployment and inflation says the data is not being manipulated or influenced by politicians.
Did a blog post just cause software stocks to lose more than $200 billion in market cap?
For investors to wade back into the software sector, they “want and need to see the stocks stop trading down on new AI headlines,” one analyst says.
Manchester City’s Celebrations Show Weakness
Critics might argue the celebration was excessive, given that the victory only pulled City within two points of leaders Arsenal.
IBM’s stock heads for worst month in 34 years — and Anthropic is partly to blame
IBM’s stock ended Monday down 13% as Anthropic’s Claude Code threatens to dismantle a critical part of its business.