Jim Cramer drops unexpected take on Microsoft stock
Something unusual is happening in the software sector, and investors are starting to notice.

Jim Cramer is flagging an unexpected laggard: Microsoft (MSFT). Despite its dominance as the world’s largest software company by revenue and a leader in cloud and artificial intelligence, the stock has recently struggled to keep pace, even as the broader software space attempts a rebound.

“Even when the software stocks are running, you can’t keep Microsoft’s stock from falling,” Cramer said in a tweet, capturing Wall Street’s growing frustration.

According to the Mad Money host, the weakness isn’t about Microsoft’s long-term position. Instead, it reflects massive AI-driven capital spending, which has raised concerns about near-term returns, along with brief fears around intensifying AI competition. While he remains broadly bullish on the company’s future, he has questioned whether Microsoft’s AI execution is keeping up with peers during the latest earnings season.

So what’s behind the weakness? And should you be concerned?
Photo by Matthias Balk/picture alliance via Getty Images
Cramer says Microsoft is facing pressure

As per CNBC, Cramer believes the broader sell-off in software stocks is overdone. But Microsoft’s situation is more nuanced.

“The software companies are survivors,” he said on Mad Money. “They can merge. They can adapt… but they’re priced for perfection though, and they do seem to have, let’s say, kind of a rugby-scrum feel about them, and we don’t pay up for scrum.”

In other words, the issue isn’t survival. It’s valuation.

In fact, a recent wave of selling was triggered in part by a widely discussed research note imagining a future where artificial intelligence disrupts white-collar jobs and weakens traditional software business models.

More Tech Stocks:

Morgan Stanley sets jaw-dropping Micron price target after event
Nvidia’s China chip problem isn’t what most investors think
Quantum Computing makes $110 million move nobody saw coming

Cramer pushed back on the extreme narrative. “Yes, Wall Street can overreact better than anyone,” he said, arguing the market has turned a real concern into an “extinction event.”

Still, he acknowledged that AI could compress margins and slow growth, meaning software stocks may not command the same premium valuations they once did. And Microsoft, despite its dominance, is right in the middle of that shift.

AI spending and growth concerns are weighing on Microsoft

Microsoft’s recent stock weakness comes despite strong fundamentals, which is exactly what makes the situation confusing for investors.

The company reported impressive FY26 Q2 results:

Revenue rose 17% to $81.3 billion
Net income surged 60%
EPS climbed sharply, reflecting strong profitability

During the earnings release, CEO Satya Nadella emphasized that AI is already becoming a major business driver. “We are only at the beginning phases of AI diffusion, and already Microsoft has built an AI business that is larger than some of our biggest franchises,” Nadella said.

But the market is focusing on something else. What exactly?
Costs.

Microsoft’s massive push into AI is coming with a hefty price tag. As per Yahoo Finance reports:

Capital expenditures hit $37.5 billion in one quarter
Spending jumped 66% year over year

That kind of investment is raising concerns about margins, especially if returns take time to materialize.

At the same time, growth in Azure cloud services showed slight signs of slowing, slipping from 40% to 39% year over year. That may seem small, but for a company like Microsoft, even minor deceleration can trigger outsized reactions.

Currently, MSFT has fallen more than 30% from its highs and is trading close to its 52-week low of $344.79. Over the medium term, too, it has failed to impress, with a year-to-date return of -24.15% and a one-year return of -5.44%.

What Cramer expects next for Microsoft

Cramer remains broadly optimistic about software and about Microsoft’s long-term future. But he’s also realistic about what comes next. He believes the market is entering a new phase where:

AI reshapes pricing power
Growth becomes less predictable
Valuations reset lower

That doesn’t mean collapse. It means adjustment.

Cramer pointed out that companies can use AI to cut costs, improve efficiency, and adapt rather than be disrupted by it. At the same time, he highlighted that other sectors may benefit even more from AI-driven productivity gains, including:

Financials
Travel companies
Retailers

Meanwhile, companies like Nvidia (NVDA) are already seeing explosive demand, reinforcing the idea that AI is creating opportunity, not just risk.

“For all the handwringing about how AI will be an engine of wealth destruction, it’s hard to deny that it’s also an incredible vehicle of wealth creation,” Cramer said, as per CNBC.

So where does that leave Microsoft? The company is still one of the most powerful players in tech. But for now, the market seems to be asking a tougher question: Can Microsoft turn its massive AI investment into growth fast enough to justify the cost?
Until that answer becomes clearer, the stock may continue to face pressure, even in a sector that’s otherwise trying to move higher.

Related: Jim Cramer says ‘sit on your hands’ as war rattles stocks
Trump promises TSA paychecks, even as House objections to Senate’s bill threaten to prolong government shutdown
A partial U.S. government shutdown that has caused chaos at airports appeared likely Friday to persist, as the Republican-run House of Representatives voiced objections to a Senate deal that would fund the Transportation Security Administration but not all parts of the Department of Homeland Security.
Fidelity reveals a type of trust heirs won’t know exists
You have spent years building wealth that you fully intend to pass on to your children or grandchildren at some point. You have also watched those same family members grow up, stumble through early adulthood, and slowly figure out how money works in the real world.

The question most parents and grandparents avoid asking is deceptively simple, but the answer could reshape your entire estate planning strategy for decades ahead. What happens to your heirs’ ambition, spending habits, and sense of personal drive once they discover exactly how much money is waiting for them?

Fidelity Investments recently outlined a strategy that could let you sidestep that entire dilemma by keeping the trust itself completely hidden from your heirs.

What a silent trust does and how it works

A silent trust is an irrevocable trust whose trustee is specifically instructed to withhold all information about the trust from the named beneficiaries. Your heir would not know the trust exists, what assets it holds, or that they have been named as a beneficiary at all, according to Fidelity Investments.

“There’s usually a triggering event that determines when the existence of the trust is revealed to the beneficiary… age is often used, but it could also be a particular life event, such as graduating college, getting married, or having a child,” explained Jason Port, director of advanced planning at Fidelity.

The trust still operates normally behind the scenes, with a trustee managing and distributing assets according to the terms the grantor originally established. The secrecy ends only when a specific trigger the grantor selected in advance actually occurs.

Only a handful of states currently allow silent trust provisions

Whether you can set up a silent trust depends entirely on the laws of the state where the trust is established and governed.
Most states require trustees to keep adult beneficiaries reasonably informed about any trust that includes their names, as provided in Section 813 of the Uniform Trust Code. A small group of states has carved out exceptions that allow grantors to override those standard disclosure requirements within the trust documents themselves. Silent trusts are currently permitted in Alaska, Delaware, New Hampshire, South Dakota, Nevada, Tennessee, and Wyoming, estate planning attorneys have confirmed.

Related: Fidelity’s 4 Roth strategies could save your family a fortune in taxes

Michigan also recently enacted its own silent trust statute, Greenleaf Trust reported. If you do not live in one of those states, you can still create a silent trust in a jurisdiction that permits them. You would need to appoint a trustee, such as a licensed trust company operating in one of those qualifying states, to manage the arrangement.

The reasons families choose to keep an inheritance hidden

You might worry that your child will lose motivation to build a career once they learn that a financial safety net already exists for them. You might also be concerned about their ability to handle a large sum of money responsibly before they have developed real financial literacy skills.

More Personal Finance:

Retirees following 4% rule are leaving thousands on the table
Fidelity says a $500 policy could protect your entire net worth
Fidelity’s 4 Roth strategies could save your family a fortune in taxes

Some grantors simply prefer to keep their personal financial affairs completely private, even from their closest family members, and view the silent trust as a necessary boundary.
These are not unusual concerns, and Fidelity notes that they are central to the existence of the silent trust concept in estate planning.

Fidelity outlines three core advantages of the silent trust structure

As interest in silent trusts grows, Fidelity highlights three key advantages that make them especially appealing for long-term wealth planning. At their core, these benefits focus on protection, privacy, and control.

1. Protection from legal and financial exposure

If your beneficiary does not know about the trust, they are not required to disclose it during a lawsuit, a divorce proceeding, or when filing for college financial aid. Port specifically highlighted this shielding benefit as one of the most practical advantages of the silent trust approach, Fidelity noted.

2. A protected window for personal growth

The silence buys your beneficiary time to develop financial habits and a sense of personal responsibility without the prospect of a future windfall distorting their career or life decisions.

3. Privacy for the grantor’s overall estate plan

Silent trusts keep the full scope of your wealth and your distribution strategy out of view, which can reduce family tension and protect business interests held inside the trust.

The serious risks you need to weigh before choosing this path

Despite their appeal, silent trusts aren’t without risk. Fidelity highlights several concerns that can complicate outcomes if not properly planned for.

No guarantee of readiness when the silence ends

Keeping the trust hidden does not automatically produce a financially responsible heir on the other end. There is no guarantee your beneficiary will be any more prepared at the end of the silent period than they were at the start.

“Not knowing is not always a sound plan,” Port said.
The revelation itself could trigger hurt feelings and genuine family conflict if your heirs feel you deliberately withheld information about their financial future.

Trustee oversight becomes significantly harder

If your beneficiary has no idea the trust exists, they obviously cannot monitor whether the trustee is managing the assets responsibly or faithfully following the trust terms. “If nobody is watching what the trustee is doing, how can we be sure that the trustee is faithfully following the terms of the trust?” Port asked.

Recruiting a trustee may be more difficult than you expect

A trustee of a silent trust bears fiduciary liability without the usual mechanism for relief. Normally, a beneficiary’s knowledge of the trustee’s decisions is what protects the trustee from future claims or legal disputes. Without that transparency, the trustee faces elevated risk, which may discourage some qualified institutions or experienced individuals from agreeing to take on the role.

How a trust protector solves the oversight gap

Port suggested appointing a third-party trust protector or designated representative who can receive trust information on the beneficiary’s behalf while keeping the beneficiary in the dark. That individual monitors the trustee’s conduct and ensures that the trust terms are being properly followed. States like Ohio actually require this type of beneficiary surrogate when a trust is structured as silent, the Vorys law firm explained.

This structure preserves the secrecy you want while making sure someone independent is watching how your money is being managed on behalf of your heir.
A trust protector adds independent oversight to silent trusts, ensuring trustees are accountable while beneficiaries remain intentionally uninformed. fizkes/Shutterstock
What you should be doing during the silent years

Making your trust silent does not mean you should stop talking to your family about money entirely. Fidelity argues the opposite is more likely to deliver the outcome you want.

Related: 5 retirement risks most Americans overlook

You can introduce your heir to your financial professionals, open a small investment account in their name, and involve them in charitable giving decisions during this silent period. “I believe that it’s important to be educating the next generation on the basics of proper financial management,” Port said. He added that families who involve younger members early in financial conversations tend to see much better outcomes when those heirs eventually inherit assets.

If you are proactive about building financial literacy during the silent window, you may discover that a silent trust was not actually necessary in the first place.

Alternatives that protect your wealth without full secrecy

If keeping the entire trust secret feels too extreme for your family situation, you have other options that establish guardrails without completely cutting your beneficiary out.

Staggered disclosure provisions

You can phase in information over time rather than keeping everything hidden until a single triggering event occurs. A staggered plan might reveal only the trust balance at age 25, then share complete terms and full access at age 35, as estate planning attorneys have outlined.

Distribution restrictions tied to specific needs

You can limit what the trust funds may be used for, such as healthcare expenses, education costs, or a first home purchase, while still letting your heir know about the trust.

“There’s risk in being too restrictive,” Port cautioned.
He pointed out that life circumstances change in unpredictable ways, and overly rigid terms could prevent your beneficiary from using the funds precisely when they are needed most.

The 2026 estate tax landscape adds new urgency for wealthy families

The federal estate tax exemption for 2026 stands at $15 million per individual, or $30 million for married couples who plan their estates properly, the IRS confirmed. Estates valued above that threshold face a 40% federal tax rate on every dollar that exceeds the exemption amount. For example, a $20 million estate left by a single person would owe 40% on the $5 million above the exemption, or $2 million in federal estate tax.

The One Big Beautiful Bill Act permanently extended the higher exemption, which had previously been set to drop to roughly $7 million per person after 2025, according to Congressional Research Service data. Families with substantial assets now have a wider window for transferring wealth free of federal estate and gift taxes.

For households approaching or exceeding those thresholds, a silent trust could serve as both a wealth-transfer vehicle and a strategic tool for managing how your heirs learn about their inheritance.

Consult a qualified estate planning attorney before taking action

A silent trust is not a do-it-yourself project. The rules vary by state, the drafting requirements are precise, and the consequences of getting it wrong can be extremely costly. You should work with a qualified estate planning attorney who understands the trust laws in your state or in the specific jurisdiction where you plan to establish the trust.
Your financial advisor should help you evaluate whether this strategy fits your broader wealth-transfer goals and family dynamics.

Key takeaways to remember about silent trusts

A silent trust allows you to transfer wealth to heirs who will not learn about it until a triggering event you have personally selected.
Only a handful of states currently permit this structure, including Delaware, South Dakota, Nevada, Alaska, New Hampshire, Tennessee, and Wyoming.
Appointing a trust protector or designated representative can help address the significant oversight gap that silence creates between the trustee and your heirs.
The 2026 federal estate tax exemption of $15 million per person gives high-net-worth families more room to transfer assets without incurring federal estate tax.
Staggered disclosure provisions and distribution restrictions offer alternative strategies that balance transparency with protection for your family’s wealth and values.

Related: Fidelity reveals 4 ways to protect your retirement income
Amazon is selling a pair of nightstands for $45, and hundreds of shoppers are buying them
TheStreet aims to feature only the best products and services. If you buy something via one of our links, we may earn a commission.

Why we love this deal

Whether you’re moving into your first apartment or your first home, fully furnishing a new space can be overwhelming (not to mention pricey). If you’re in that situation and looking for budget-friendly ways to fill your space, check out Amazon’s sale on the Furologee Nightstand Set, which has enclosed storage as well as convenient hooks for your headphones, necklaces, and more.

Originally $60, the set is marked down to $45 during the Big Spring Sale, saving shoppers $15. It’s available in three colors, is easy to assemble, and has five-star ratings from more than 1,200 shoppers. It’s rare to find a nightstand set for less than $50, and these might be perfect for a spare bedroom, a child’s room, or even a dorm room.

Furologee Nightstand Set, $45 (was $60) at Amazon
Courtesy of Amazon
Shop at Amazon

Why do shoppers love it?

These nightstands measure 11.02 inches deep, 14.96 inches wide, and 23.62 inches high. Some reviews mention that they are smaller than expected, so be sure to take measurements before buying to ensure they fit your needs. The frame is made of sturdy metal with an engineered wood top, while the fabric drawers have a PVC front and a cute metal pull. Each drawer measures 13.8 inches wide, 10.6 inches deep, and 5.9 inches high, so you can fit everything from your laptop to extra phone chargers inside.

There’s an open shelf above the two drawers you can use for decor or simply extra storage. But the coolest feature by far is the bar on each side with S-hooks. This is a great place to hang your keys, headphones, a sleep mask, or even the tube of a CPAP mask. The unit can withstand up to 50 pounds of weight, so don’t worry about putting plenty of stuff inside; it can handle it.

You can choose from three colors of this nightstand set: white, Black Oak, and Rustic Brown. All three are on sale, but the Rustic Brown finish costs $3 more.

Details to know

Colors: Black Oak, Rustic Brown, and white
Material: Metal frame with fabric drawers
Requires assembly? Yes

Related: Amazon’s ‘sturdy’ 4-door farmhouse storage cabinet is just $80 during the Big Spring Sale

“These are perfect nightstand tables for my bedroom,” one shopper wrote. “Very easy to put together and sturdy! Highly recommend!”

A second shopper wrote, “I’ve been assembling furniture all week, and this has been the easiest to put together. It’s cute and not complicated, and the directions actually make sense.
They are smaller than I thought, but still get the job done!”

Shop more deals

Somdot White Nightstand With LED Light, $34 (was $36) at Amazon
Vingli White Nightstand Set, $51 (was $70) at Amazon
Sagenest Two-Drawer Nightstand, $20 (was $30) at Amazon

Spending less than $50 on a pair of nightstands is rare, but thanks to the Amazon Big Spring Sale, you can do just that with the Furologee Nightstand Set.
Wall Street Bonuses Hit a Record High. Here’s How Much Bankers Made.
Wall Street’s bonus pool hit a record $49.2 billion in 2025, a 9% increase from the previous year.
‘He didn’t seem very alert’: Our new CPA said we owe a $443 tax refund, but we’re actually due $637. Do we fire him?
“We’re both concerned about the accuracy of our returns.”
Meta’s Rare Selloff Deepens After Court Losses, AI Delays And Metaverse’s Decline
Back-to-back landmark court losses drove further declines in the Facebook parent’s stock.
It’s Official: House Rejects DHS Funding Plan—Shutdown Continues
House conservatives opposed the Senate deal passed early Friday, because it didn’t include funding for immigration enforcement.
IndexCache, a new sparse attention optimizer, delivers 1.82x faster inference on long-context AI models
Processing 200,000 tokens through a large language model is expensive and slow: the longer the context, the faster the costs spiral. Researchers at Tsinghua University and Z.ai have built a technique called IndexCache that cuts up to 75% of the redundant computation in sparse attention models, delivering up to 1.82x faster time-to-first-token and 1.48x faster generation throughput at that context length.

The technique applies to models using the DeepSeek Sparse Attention architecture, including the latest DeepSeek and GLM families. It can help enterprises provide faster user experiences for production-scale, long-context models, a capability already proven in preliminary tests on the 744-billion-parameter GLM-5 model.

The DSA bottleneck

Large language models rely on the self-attention mechanism, a process where the model computes the relationship between every token in its context and all the preceding ones to predict the next token. However, self-attention has a severe limitation: its computational complexity scales quadratically with sequence length. For applications requiring extended context windows (e.g., large document processing, multi-step agentic workflows, or long chain-of-thought reasoning), this quadratic scaling leads to sluggish inference speeds and significant compute and memory costs.

Sparse attention offers a principled solution to this scaling problem. Instead of calculating the relationship between every token and all preceding ones, sparse attention has each query select and attend to only the most relevant subset of tokens.

DeepSeek Sparse Attention (DSA) is a highly efficient implementation of this concept, first introduced in DeepSeek-V3.2. To determine which tokens matter most, DSA introduces a lightweight “lightning indexer” module at every layer of the model. This indexer scores all preceding tokens and selects a small subset for the main core attention mechanism to process.
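The indexer-then-attend flow described above can be sketched in a few lines. This is a toy illustration, not DeepSeek's actual implementation: the real lightning indexer is a small learned module, whereas here a plain dot product stands in for its scoring function, and all function names are invented for the example.

```python
import numpy as np

def lightning_index(query, keys, k):
    """Score every preceding token against the current query and return
    the indices of the k highest-scoring ones (sorted).

    Toy stand-in for DSA's lightning indexer: the real indexer is a
    small learned network, not a raw dot product."""
    scores = keys @ query                      # one score per preceding token: O(n)
    top_k = np.argpartition(scores, -k)[-k:]   # k best indices, unordered
    return np.sort(top_k)

def sparse_core_attention(query, keys, values, k):
    """Run softmax attention only over the indexer-selected tokens."""
    idx = lightning_index(query, keys, k)
    sel_k, sel_v = keys[idx], values[idx]
    logits = sel_k @ query
    weights = np.exp(logits - logits.max())    # numerically stable softmax over k tokens
    weights /= weights.sum()
    return weights @ sel_v                     # core attention now touches k tokens, not n

rng = np.random.default_rng(0)
n, d, k = 1024, 64, 128                        # 1,024 context tokens, keep top 128
q = rng.standard_normal(d)
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
out = sparse_core_attention(q, K, V, k)        # shape (64,)
```

The shape of the computation is the point: scoring all n preceding tokens stays linear per query (quadratic over a full sequence), but the heavy attention step afterward only sees the k selected tokens, which is what makes the core attention cost linear in DSA.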
By doing this, DSA slashes the heavy core attention computation from quadratic to linear, dramatically speeding up the model while preserving output quality. But the researchers identified a lingering flaw: the DSA indexer itself still operates at quadratic complexity at every single layer. Even though the indexer is computationally cheaper than the main attention process, as context lengths grow, the time the model spends running these indexers skyrockets. This severely slows down the model, especially during the initial “prefill” stage, where the prompt is first processed.

Caching attention with IndexCache

To solve the indexer bottleneck, the research team discovered a crucial characteristic of how DSA models process data: the subset of important tokens an indexer selects remains remarkably stable as data moves through consecutive transformer layers. Empirical tests on DSA models revealed that adjacent layers share between 70% and 100% of their selected tokens.

To capitalize on this cross-layer redundancy, the researchers developed IndexCache. The technique partitions the model’s layers into two categories. A small number of full (F) layers retain their indexers, actively scoring the tokens and caching the indices of the most important ones. The rest become shared (S) layers, performing no indexing and reusing the cached indices from the nearest preceding F layer. During inference, the model simply checks the layer type: at an F layer, it calculates and caches fresh indices; at an S layer, it skips the math and copies the cached data.

A wide range of optimization techniques try to address the attention bottleneck by compressing the KV cache, where the computed attention values are stored. Instead of shrinking the memory footprint like standard KV cache compression, IndexCache attacks the compute bottleneck. “IndexCache is not a traditional KV cache compression or sharing technique,” Yushi Bai, co-author of the paper, told VentureBeat.
“It eliminates this redundancy by reusing indices across layers, thereby reducing computation rather than just memory footprint. It is complementary to existing approaches and can be combined with them.”

The researchers developed two deployment approaches for IndexCache. (It is worth noting that IndexCache applies only to models that use the DSA architecture, such as the latest DeepSeek models and the latest family of GLM models.)

For developers working with off-the-shelf DSA models, where retraining is unfeasible or too expensive, they created a training-free method relying on a “greedy layer selection” algorithm. By running a small calibration dataset through the model, this algorithm automatically determines the optimal placement of F and S layers without any weight updates. Empirical evidence shows that the greedy algorithm can safely remove 75% of the indexers while matching the downstream performance of the original model.

For teams pre-training or heavily fine-tuning their own foundation models, the researchers propose a training-aware version that optimizes the network parameters to natively support cross-layer sharing. This approach introduces a “multi-layer distillation loss” during training, which forces each retained indexer to learn to select a consensus subset of tokens that will be highly relevant for all the subsequent layers it serves.

Real-world speedups on production models

To test the impact of IndexCache, the researchers applied it to the 30-billion-parameter GLM-4.7 Flash model and compared it against the standard baseline. At a 200K context length, removing 75% of the indexers slashed prefill latency from 19.5 seconds to just 10.7 seconds, delivering a 1.82x speedup.
The researchers note these speedups are expected to be even greater at longer contexts. During the decoding phase, where the model generates its response, IndexCache boosted per-request throughput from 58 tokens per second to 86 tokens per second at the 200K context mark, yielding a 1.48x speedup. When the server’s memory is fully saturated with requests, total decode throughput jumped by up to 51%.

For enterprise teams, these efficiency gains translate directly into cost savings. “In terms of ROI, IndexCache provides consistent benefits across scenarios, but the gains are most noticeable in long-context workloads such as RAG, document analysis, and agentic pipelines,” Bai said. “In these cases, we observe at least an approximate 20% reduction in deployment cost and similar improvements in user-perceived latency.” He added that for very short-context tasks, the benefits hover around 5%.

Remarkably, these efficiency gains did not compromise reasoning capabilities. Using the training-free approach to eliminate 75% of indexers, the 30B model matched the original baseline’s average score on long-context benchmarks, scoring 49.9 against the original 50.2. On the highly complex AIME 2025 math reasoning benchmark, the optimized model actually outperformed the original baseline, scoring 92.6 compared to 91.0.

The team also ran preliminary experiments on the production-scale 744-billion-parameter GLM-5 model. They found that eliminating 75% of its indexers with the training-free method yielded at least a 1.3x speedup on contexts over 100K tokens. At the same time, the model maintained a nearly identical quality average on long-context tasks.

Getting IndexCache into production

For development teams wanting to implement the training-free approach today, the process is straightforward but requires careful setup.
While the greedy search algorithm automatically finds the optimal layer configuration, the quality of that configuration depends on the data it processes. “We recommend using domain-specific data as a calibration set so that the discovered layer-sharing pattern aligns with real workloads,” Bai said.

Once calibrated, the optimization is highly accessible for production environments. Open-source patches are already available on GitHub for major serving engines. “Integration is relatively straightforward — developers can apply the patch to existing inference stacks, such as vLLM or SGLang, and enable IndexCache with minimal configuration changes,” Bai said.

While IndexCache provides an immediate fix for today’s compute bottlenecks, its underlying philosophy points to a broader shift in how the AI industry will approach model design. “Future foundation models will likely be architected with downstream inference constraints in mind from the beginning,” Bai concluded. “This means designs that are not only scalable in terms of model size, but also optimized for real-world throughput and latency, rather than treating these as post-hoc concerns.”
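The F/S layer scheme described in this article reduces to a simple piece of control flow, sketched below. Everything here is illustrative: the role list, the callback names, and the one-F-per-four-layers pattern are assumptions for the example, not the actual GLM or DeepSeek serving code.

```python
def run_layers(layer_roles, compute_indices, attend):
    """Run one forward pass over a DSA-style model's layers.

    layer_roles:     list like ["F", "S", "S", "S", ...]
    compute_indices: callable running a layer's lightning indexer (the
                     expensive step IndexCache tries to skip)
    attend:          callable running core sparse attention over the
                     currently selected token indices
    """
    cached_indices = None
    indexer_calls = 0
    for layer, role in enumerate(layer_roles):
        if role == "F":
            # Full layer: run the indexer and refresh the shared cache.
            cached_indices = compute_indices(layer)
            indexer_calls += 1
        # Shared layers skip indexing entirely and reuse the indices
        # cached by the nearest preceding F layer.
        attend(layer, cached_indices)
    return indexer_calls

# Keeping 3 of 12 indexers (one F layer followed by three S layers,
# repeated) matches the paper's reported 75% indexer removal.
roles = ["F", "S", "S", "S"] * 3
calls = run_layers(roles, lambda layer: [0, 1, 2], lambda layer, idx: None)
# calls == 3: only a quarter of the layers paid the indexing cost
```

In the real system, the greedy layer-selection algorithm (or the training-aware distillation) decides where the F layers go rather than a fixed repeating pattern; this sketch only shows why eliminating indexers cuts compute without touching the attention math itself.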