Trump announced the extended ceasefire in a Truth Social post and alongside diplomats from both countries at the White House.
NYT Strands Answers Today: Hints & Clues For Friday, April 24 (Hullabaloo)
Looking for help with today’s NYT Strands puzzle? Here’s an extra hint to help you uncover the right words, as well as all of today’s answers and Spangram.
French weather forecast office files police complaint following suspicious Polymarket bets
Temperature readings at Charles de Gaulle International Airport rose by a few degrees in a matter of minutes.
Wedbush has blunt message for SoundHound AI stock investors
SoundHound AI (SOUN) just made its biggest strategic move of the year. And one of the stock’s most closely watched analysts responded the same day with a message that is more nuanced than a simple thumbs up or thumbs down. The deal itself is significant. But what Wedbush is saying about the company’s stock alongside it is what investors need to pay attention to.

What SoundHound just did

On April 21, SoundHound AI announced a definitive agreement to acquire LivePerson in an all-stock deal valuing the target’s equity at approximately $43 million, representing a 22% premium to LivePerson’s 30-day volume-weighted average price, according to GlobeNewswire.

The total enterprise value of the deal is approximately $250 million, which accounts for LivePerson’s discounted debt that SoundHound plans to retire using a mix of cash and equity at its discretion. At closing, SoundHound also expects to receive roughly $74 million of LivePerson’s cash. The combined company is expected to emerge debt-free.

The strategic logic is straightforward. SoundHound brings proprietary voice and agentic AI. LivePerson brings digital messaging capabilities that power one billion customer messages per month. Together, the companies are positioning themselves as a single end-to-end omnichannel conversational AI platform.

What Wedbush said about it

Wedbush maintained its Outperform rating on SoundHound shares and kept its 12-month price target at $12 following the announcement, according to Proactive Investors. “We believe this was a strategic move by SOUN that will better position the company to meet this transformational market shift coming while broadening its customer portfolio,” Wedbush analysts wrote.

The firm specifically highlighted the data angle. The combination would give SoundHound’s voice AI systems access to LivePerson’s messaging data, creating what Wedbush described as “a data foundation of tens of billions of customer interactions annually.” That scale of training data could improve model performance and automation outcomes across the combined platform, according to Proactive Investors.

Why the deal makes strategic sense

LivePerson marks SoundHound’s fifth strategic acquisition. The company has now assembled a platform spanning voice AI, digital messaging, agentic AI, and enterprise customer service, building on earlier integrations including Amelia and Interactions.

The combined customer base is substantial. The enlarged company will serve enterprise clients across more than 30 countries, including 12 of the top 15 global banks, four of the top five global airlines, and four of the top five global automakers.

The cross-sell opportunity is the bigger number.
SoundHound management projects a path to $500 million in revenue from the existing combined customer base over time, with a near-term target of $350 million to $400 million in 2027 revenue, including at least $100 million expected from LivePerson’s existing customers, according to GlobalData.

Key figures from the SoundHound and LivePerson deal:
- Equity value of the acquisition: $43 million, a 22% premium to LivePerson’s 30-day VWAP, according to SoundHound AI
- Total enterprise value including debt: approximately $250 million, SoundHound AI confirmed
- LivePerson cash expected at closing: approximately $74 million, according to GlobeNewswire
- Combined platform message volume: one billion customer messages per month, SoundHound AI noted
- 2027 revenue guidance for the combined company: $350 million to $400 million, according to GlobalData
- Cross-sell revenue opportunity from the existing customer base: $500 million, Yahoo Finance noted
- Wedbush price target: $12, Outperform rating maintained, according to Proactive Investors
- Share reaction: SOUN rose nearly 5% to around $8 on the day of the announcement, Proactive Investors noted
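For context on the premium math, here is a minimal sketch of how a 30-day VWAP and a deal premium relate. Only the 22% premium and the roughly $43 million equity value come from the reported deal terms above; the function inputs are placeholders that would come from market data.

```python
# Hypothetical illustration of the premium math. Only the 22% premium and
# the ~$43M equity value come from the reported deal terms; everything
# else is a placeholder.

def vwap(prices, volumes):
    """Volume-weighted average price: sum(price * volume) / sum(volume)."""
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

# Working backward from the reported figures: the VWAP-based equity value
# the 22% premium was applied to.
deal_equity_value = 43_000_000
premium = 0.22
implied_vwap_value = deal_equity_value / (1 + premium)
print(f"Implied 30-day VWAP equity value: ${implied_vwap_value:,.0f}")
# -> about $35.2 million
```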
Wedbush kept its rating, but the number it attached to the stock tells a more complicated story. (Harrer/Getty Images)
The stock reaction and what it means

SOUN shares rose nearly 5% to around $8 on the day of the announcement. That reaction reflects cautious optimism rather than outright enthusiasm. The stock remains well below Wedbush’s $12 target, which itself sits below the Wall Street consensus average of roughly $14.67, according to Zacks.

The initial selloff that followed the announcement, before shares recovered, is also worth noting. Some investors may have reacted negatively to the all-stock structure of the deal, which could dilute existing shareholders, before reassessing the strategic rationale and the balance-sheet benefit of acquiring $74 million in LivePerson cash, according to TipRanks.

The gap between where the stock trades and where analysts have it pegged tells part of the story. SoundHound is a company the market is still trying to price. It has real growth, a maturing acquisition strategy, and a credible enterprise customer list. What it does not yet have is a clear path to consistent profitability, which remains the key variable for investors deciding how much upside to price in.

What investors should watch from here

The LivePerson deal is expected to close in the second half of 2026, subject to regulatory approvals, according to The Globe and Mail. That means the integration work and the revenue contribution from LivePerson’s customers will largely be a 2027 story. Investors who buy the thesis now are effectively betting on what the combined company looks like twelve to eighteen months from today.

For the remainder of 2026, SoundHound will be judged on its standalone execution. The company reported $169 million in revenue for 2025 with a net loss of $14 million, according to Constellation Research. Management’s 2026 standalone guidance of $225 million to $260 million implies continued strong growth, but the market will want to see that momentum hold before giving the stock meaningfully more credit.

The Q1 2026 earnings report, expected around May 6, will be the first real test. If revenue growth stays on track and the company shows improving margins alongside the LivePerson announcement, sentiment could shift more decisively in the stock’s favor.

Wedbush’s message is ultimately constructive but measured. The deal is strategically sound. The growth story remains intact. But the stock’s path higher from here depends on execution, not just ambition. For SoundHound investors, that is both the opportunity and the risk heading into the second half of the year.
What’s Changing (and Not Changing) With the Morningstar Medalist Rating?
We’re refreshing the Morningstar Medalist Rating’s methodology to provide a simpler, more transparent forward-looking assessment of mutual funds, exchange-traded funds, and other managed investments, including semiliquid funds and 529 college savings plans. Gold, Silver, and Bronze ratings will continue to identify funds that Morningstar expects to outperform the category average over a full market cycle.

First announced in December 2025, the updated methodology better illustrates how we evaluate a fund’s expenses, and it introduces a simplified rating structure, new quantitative calculations, and fixed thresholds for assigning ratings across more than 360,000 share classes globally. These changes are designed to improve transparency, consistency, and comparability across funds, whether ratings are driven by analysts or produced algorithmically. What is not changing is the continued focus on the three fundamental pillars used to assess the likelihood of outperformance: People, Process, and Parent.

The new Medalist Rating framework began rolling out on Thursday, April 23. Global implementation will be complete by Sunday, May 3, at which time every Medalist-rated share class will reflect the updated methodology, giving investors a clearer view of Morningstar’s highest-conviction opportunities under the simplified approach.

What Is the Morningstar Medalist Rating?

To arrive at a Medalist Rating, Morningstar analysts (or an algorithm, if an analyst isn’t assigned to cover the fund) assess three fundamental pillars to determine whether a fund can outperform its category peers: People, Process, and Parent. The People Pillar encompasses the managers and analysts making a fund’s key investment decisions. The Process Pillar considers the effectiveness and repeatability of its investment strategy, while the Parent Pillar analyzes the offering firm and whether it puts fund investors’ interests before its own. (As of March 31, 2026, funds domiciled in Australia and New Zealand are not eligible for Medalist Ratings with algorithmic components.)

Morningstar combines a fund’s fundamental pillar assessments with an assessment of the fund’s price and, based on the resulting likelihood of outperforming its category peers, awards it a Gold, Silver, Bronze, Neutral, or Negative rating.

Distribution of New Medalist Ratings

The methodology update will not change most funds’ overall Medalist Rating, and those that do change will typically move one notch on the unchanged scale of Gold, Silver, Bronze, Neutral, or Negative. Overall, there will be more Gold and Silver ratings and fewer Bronze, Neutral, and Negative ratings.

The Simplified Structure

Before we get into the details of the changes, here’s what’s staying the same: Morningstar’s 130 global researchers will continue to qualitatively rate more than 3,200 fund strategies’ fundamental pillars in the same transparent manner they have used for decades. The analysts’ structured, well-documented, and peer-reviewed process consistently assesses funds’ investment quality and distinguishes those with stronger characteristics and better odds of future outperformance.

The simplified Medalist Rating will make clearer what drives a fund’s expected performance relative to its category peers and will highlight which funds Morningstar expects to outperform their peer-group averages over a full market cycle. We’ll continue to display People, Process, and Parent Pillar ratings as High, Above Average, Average, Below Average, and Low.
We’re further enhancing the rating by displaying a Medalist Rating Price Score, ranging from negative 2.5 to positive 2.5, to illustrate whether a fund’s fee is a significant headwind or a competitive advantage, as fees are often the best predictor of future performance. The Medalist Rating Price Score will boost low-cost funds’ ratings and reduce those of expensive funds.

The new approach applies straightforward weights to each rating component and sums them to determine the fund’s Medalist Rating. Component weights differ for actively managed funds and passively managed funds to reflect how those strategies are run. Here’s an example of how the fundamental pillars and the Medalist Rating will be displayed for a few share classes of a US large-growth equity fund.

Taking a Category-Relative Approach

The updated methodology no longer factors into the ratings calculation the degree to which a category’s funds have historically generated alpha relative to their category index. The outgoing methodology assumed every fund in a category aimed to beat the Morningstar-assigned benchmark, and comparisons to that benchmark created challenges for some funds. Many multi-asset allocation, noncore equity, and fixed-income funds have asset-allocation and/or sector weights that differ significantly from the category benchmark, making the benchmark test less relevant than peer-to-peer comparisons. The new approach instead measures Medalists against their category averages rather than their category benchmarks.

More-Transparent Inputs

Of the 175,000 traditional fund and ETF share classes that received a Medalist Rating on March 31, 2026, about 130,000 relied on an algorithm to provide at least one pillar score where an analyst hadn’t rated it directly. Our previous approach used a machine-learning model to generate these ratings by mimicking how analysts set scores. While the model effectively identified funds likely to perform well, its adaptive nature made it difficult to pinpoint which data influenced a fund’s Medalist Rating.

The updated algorithmic pillar methodology uses clearer, more transparent input data. We’ll now show simple calculated data points that reflect how a Morningstar analyst would assess each pillar. For example, analysts look at managers’ experience when evaluating the People Pillar. The model that helps determine the People Pillar rating now includes a new data point, Fund Manager Successful Experience, to measure which funds are run by managers with a proven ability to outperform across their careers.

For active funds’ Process Pillar ratings, the algorithm now considers their information ratio, or their cumulative excess return over their index divided by their tracking error. The assessment also includes data that measures style drift and volatility, as well as the firm’s past success at delivering outperformance within the fund’s asset class. The algorithmic Parent Pillar considers the fees, manager tenure and retention, performance, and survivorship and obsolescence rates of an asset manager’s full fund lineup.

For passively managed funds, the Process Pillar carries the most weight in determining the Medalist Rating. The algorithmically generated pillar emphasizes the historical ability of passive funds to outperform active ones in a category, the volatility of their excess returns versus the category index (tracking error), and concentration risk among portfolios’ top holdings, among other measures.
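To make the information-ratio input above concrete, here is a minimal sketch following the definition given in this article (cumulative excess return over the index divided by tracking error); the monthly return series are invented for illustration, and Morningstar’s actual lookback window is not specified here.

```python
import numpy as np

# Minimal sketch of the information ratio as described above: cumulative
# excess return over the index divided by tracking error (the volatility
# of excess returns). The monthly return series are invented.

fund_returns = np.array([0.021, -0.013, 0.034, 0.008, -0.005, 0.017])
index_returns = np.array([0.018, -0.010, 0.028, 0.010, -0.008, 0.012])

excess = fund_returns - index_returns
cumulative_excess = np.prod(1 + fund_returns) - np.prod(1 + index_returns)
tracking_error = excess.std(ddof=1)  # sample stdev of excess returns

information_ratio = cumulative_excess / tracking_error
print(f"Information ratio: {information_ratio:.2f}")
```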
This simplified quantitative methodology will make it easier to identify which data contributed to a fund’s Medalist Rating and how. We will continue to mark algorithmic pillars with a superscript Q in our data displays and reports.

For funds assessed through algorithmically generated pillars, we require sufficient data coverage to fuel at least half of the pillar’s calculation. If a fund doesn’t meet the 50% data-coverage rate, it will not receive a Medalist Rating. This new requirement will limit the number of fully quantitative Medalist Ratings among newer funds, as well as funds in markets where public disclosure is less robust. Japan, for example, does not require funds to disclose who manages their portfolios, so many are ineligible for the People calculations driving the quantitative rating, such as Fund Manager Successful Experience.

Assigning Medals With Fixed Rating Thresholds

The updated methodology does away with the Medalist Rating’s forced distribution curve, which previously limited the number of Gold-, Silver-, and Bronze-rated funds in each category. The curve has been replaced by fixed thresholds. We expect the simplified approach, including the long-term nature of the data behind the algorithmic pillars, to create a more stable set of ratings. We understand investors monitor their funds’ Medalist Ratings and have questions when ratings change. The simplified approach will increase stability by eliminating a forced distribution that caused a fund’s rating to change based on updates to other funds.

Implementing the Simplified Methodology

By rolling out the changes starting on April 23, we’re applying the new fixed-threshold scores to all funds at once, making comparisons straightforward. When we applied the updated calculations to existing Medalist Ratings, we saw fewer Bronze, Neutral, and Negative ratings and more Gold and Silver ratings relative to the previous methodology. That doesn’t mean it’s easy to earn a Gold, Silver, or Bronze rating: only about a third of ratings translate to a better-than-Neutral medal under the new methodology.
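To show the shape of the calculation described above (a weighted pillar sum, adjusted by the price score, mapped to a medal through fixed thresholds rather than a forced curve), here is a minimal sketch. Morningstar has not published its actual component weights or cutoffs in this piece, so every number below is a placeholder.

```python
# Sketch of the fixed-threshold scoring described above. Morningstar's
# actual component weights, price-score scaling, and medal cutoffs are
# not given in this article; every number below is a placeholder.

PILLAR_SCORES = {"Low": 1, "Below Average": 2, "Average": 3,
                 "Above Average": 4, "High": 5}

# Hypothetical weights for an actively managed fund (assumption).
ACTIVE_WEIGHTS = {"People": 0.35, "Process": 0.45, "Parent": 0.20}

# Hypothetical fixed thresholds on the final score (assumption).
THRESHOLDS = [(4.2, "Gold"), (3.6, "Silver"), (3.0, "Bronze"), (2.2, "Neutral")]

def medalist_rating(pillars: dict, price_score: float) -> str:
    """Weighted pillar sum, adjusted by the Price Score (-2.5 to 2.5),
    mapped to a medal through fixed thresholds instead of a forced curve."""
    base = sum(ACTIVE_WEIGHTS[p] * PILLAR_SCORES[r] for p, r in pillars.items())
    total = base + price_score
    for cutoff, medal in THRESHOLDS:
        if total >= cutoff:
            return medal
    return "Negative"

# A cheap share class gets a boost; an expensive one a penalty.
pillars = {"People": "Above Average", "Process": "High", "Parent": "Average"}
print(medalist_rating(pillars, price_score=0.8))   # -> Gold   (4.25 + 0.8)
print(medalist_rating(pillars, price_score=-1.0))  # -> Bronze (4.25 - 1.0)
```

Because the thresholds are fixed, a fund’s medal in this sketch depends only on its own inputs, which is exactly the stability argument the methodology change makes: one fund’s rating no longer shifts when other funds in the category are updated.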
Mystery solved: Anthropic reveals changes to Claude’s harnesses and operating instructions likely caused degradation
For several weeks, a growing chorus of developers and AI power users claimed that Anthropic’s flagship models were losing their edge. Users across GitHub, X, and Reddit reported a phenomenon they described as “AI shrinkflation”: a perceived degradation in which Claude seemed less capable of sustained reasoning, more prone to hallucinations, and increasingly wasteful with tokens. Critics pointed to a measurable shift in behavior, alleging that the model had moved from a “research-first” approach to a lazier, “edit-first” style that could no longer be trusted for complex engineering.

While the company initially pushed back against claims of “nerfing” the model to manage demand, the mounting evidence from high-profile users and third-party benchmarks created a significant trust gap. Today, Anthropic addressed these concerns directly, publishing a technical post-mortem that identified three separate product-layer changes responsible for the reported quality issues.

“We take reports about degradation very seriously,” reads Anthropic’s blog post on the matter. “We never intentionally degrade our models, and we were able to immediately confirm that our API and inference layer were unaffected.”

Anthropic claims it has resolved the issues by reverting the reasoning-effort change and the verbosity prompt, while fixing the caching bug in v2.1.116.

The mounting evidence of degradation

The controversy gained momentum in early April 2026, fueled by detailed technical analyses from the developer community. Stella Laurenzo, a Senior Director in AMD’s AI group, published on GitHub an exhaustive audit of 6,852 Claude Code session files and more than 234,000 tool calls, showing a decline in performance relative to her earlier usage. Her findings suggested that Claude’s reasoning depth had fallen sharply, leading to reasoning loops and a tendency to choose the “simplest fix” rather than the correct one.

This anecdotal frustration was seemingly validated by third-party benchmarks. BridgeMind reported that Claude Opus 4.6’s accuracy had dropped from 83.3% to 68.3% in their tests, causing its ranking to plummet from No. 2 to No. 10. Although some researchers argued these specific benchmark comparisons were flawed due to inconsistent testing scopes, the narrative that Claude had become “dumber” became a viral talking point. Users also reported that usage limits were draining faster than expected, leading to suspicions that Anthropic was intentionally throttling performance to manage surging demand.

The causes

In its post-mortem blog post, Anthropic clarified that while the underlying model weights had not regressed, three specific changes to the “harness” surrounding the models had inadvertently hampered their performance:

Default Reasoning Effort: On March 4, Anthropic changed the default reasoning effort from high to medium for Claude Code to address UI latency issues. This change was intended to prevent the interface from appearing “frozen” while the model thought, but it resulted in a noticeable drop in intelligence for complex tasks.

A Caching Logic Bug: Shipped on March 26, a caching optimization meant to prune old “thinking” from idle sessions contained a critical bug. Instead of clearing the thinking history once after an hour of inactivity, it cleared it on every subsequent turn, causing the model to lose its “short-term memory” and become repetitive or forgetful. (A sketch of this failure mode appears at the end of this article.)

System Prompt Verbosity Limits: On April 16, Anthropic added instructions to the system prompt to keep text between tool calls under 25 words and final responses under 100 words. This attempt to reduce verbosity in Opus 4.7 backfired, causing a 3% drop in coding-quality evaluations.

Impact and future safeguards

The quality issues extended beyond the Claude Code CLI, affecting the Claude Agent SDK and Claude Cowork, though the Claude API was not impacted. Anthropic admitted that these changes made the model appear to have “less intelligence,” which it acknowledged was not the experience users should expect.

To regain user trust and prevent future regressions, Anthropic is implementing several operational changes:

Internal Dogfooding: A larger share of internal staff will be required to use the exact public builds of Claude Code to ensure they experience the product as users do.

Enhanced Evaluation Suites: The company will now run a broader suite of per-model evaluations and “ablations” for every system prompt change to isolate the impact of specific instructions.

Tighter Controls: New tooling has been built to make prompt changes easier to audit, and model-specific changes will be strictly gated to their intended targets.

Subscriber Compensation: To account for the token waste and performance friction caused by these bugs, Anthropic has reset usage limits for all subscribers as of April 23.

The company intends to use its new @ClaudeDevs account on X and GitHub threads to provide deeper reasoning behind future product decisions and maintain a more transparent dialogue with its developer base.
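Anthropic’s post, as summarized here, does not include the offending code, so the sketch below is a hypothetical Python reconstruction of the caching failure mode described: a prune intended to fire once after an hour of idleness instead fires on every subsequent turn. The assumed mechanism, a never-refreshed activity timestamp, and all names are invented for illustration.

```python
import time

# Hypothetical reconstruction of the caching bug described above. The
# prune was meant to run once, after an hour of inactivity, but ran on
# every subsequent turn. Names, structure, and the specific mechanism
# (a stale timestamp) are invented; Anthropic has not published the code.

IDLE_SECONDS = 3600

class Session:
    def __init__(self):
        self.history = []             # mixed "thinking" and message turns
        self.last_active = time.time()

    def _prune_thinking(self):
        # Drop old "thinking" blocks to save cache/context space.
        self.history = [t for t in self.history if t["type"] != "thinking"]

    def on_turn_buggy(self, turn):
        # BUG: last_active is never refreshed, so once the session has
        # ever been idle for an hour, this branch fires on EVERY turn,
        # wiping the model's short-term context again and again.
        if time.time() - self.last_active > IDLE_SECONDS:
            self._prune_thinking()
        self.history.append(turn)

    def on_turn_fixed(self, turn):
        # FIX: refresh the activity timestamp after handling the turn,
        # so the prune happens at most once per idle period.
        if time.time() - self.last_active > IDLE_SECONDS:
            self._prune_thinking()
        self.history.append(turn)
        self.last_active = time.time()
```

Under this reading, the symptom matches the user reports: each cleared turn throws away the reasoning the model had just produced, which looks from the outside like repetitiveness and forgetfulness rather than a weaker model.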
U.S. arrests soldier for Polymarket bets on Nicolas Maduro raid he participated in
The U.S. Department of Justice said Thursday it had arrested a U.S. Army soldier for placing Polymarket bets on the Nicolas Maduro raid.
‘Michael’—The Michael Jackson Biopic Controversy, Explained
‘Michael’ has ignited controversy and backlash from critics who accuse the biopic of avoiding the dark side of Michael Jackson’s legacy.
Trump is swaying the market like no president has in decades, analysis shows
Data suggests President Trump has been the driver behind the best and worst days for stocks in his second term.
‘Big Business Opportunity’: Cannabis CEO Celebrates DOJ Easing Regulations On Medical Marijuana
The Trump administration reclassified medical marijuana as a less dangerous drug on Thursday.