#1 Ranked Credit Card Issuer for App and Website Quality
Think fast! You need to quickly check your credit card account balance. Are you confident in your card’s app or website to come through for you?
Team Clark talks a lot about proper credit card usage and ways to maximize credit card rewards, but we don’t often touch on another important piece of card membership: a user-friendly phone app and website.
Don’t get me wrong: This is a secondary concern to things like your APR on balances, rewards on spending, credit limit, annual fee, etc.
But when those things are all mostly equal, I concede that picking a card that has an intuitive website and app does make for a more pleasant experience.
After all, you’ll be using it regularly for checking your balance, paying your bill, monitoring charges, checking credit scores and even taking advantage of discount offers.
J.D. Power, which is a popular consumer satisfaction surveyor, has captured the opinions of actual cardholders regarding these tools with its annual U.S. Credit Card Mobile App Satisfaction Study and U.S. Online Credit Card Satisfaction Study.
Let’s take a look at the latest results.
American Express Has Top Rated Credit Card App and Website
American Express has both the top rated credit card app and the best credit card website, according to the latest study by J.D. Power.
These results, which were compiled in 2025, show that the premium card issuer has the tech to go along with the card characteristics that consistently make it the top credit card issuer according to the U.S. Credit Card Satisfaction Study.
Using a 1,000-point scoring system, American Express scored a survey-best 687 on consumer satisfaction for its mobile app. It also led the pack with a 704 score on overall online satisfaction, which factors in the website quality.
Wells Fargo finished as the runner-up in both surveys, with Discover taking third for mobile apps and U.S. Bank ranking third for overall online experience.
J.D. Power Credit Card Mobile App Satisfaction Survey Scores
Credit Card Issuer | Index Score
American Express | 687
Wells Fargo | 676
Discover | 674
U.S. Bank | 666
Barclays | 661
Chase | 660
Capital One | 658
Bank of America | 656
Citi | 643
Synchrony | 643
Credit One Bank | 594
J.D. Power Credit Card Overall Online Satisfaction Survey Scores
Credit Card Issuer | Index Score
American Express | 704
Wells Fargo | 693
U.S. Bank | 690
Discover | 686
Chase | 680
Barclays | 678
Bank of America | 672
Capital One | 655
Synchrony | 642
Citi | 633
Credit One Bank | 619
About These J.D. Power Studies
The U.S. Credit Card Mobile App Satisfaction Study and U.S. Online Credit Card Satisfaction Study each measure overall satisfaction with banking and credit card digital channels based on four factors:
Navigation
Speed
Visual appeal
Information/content
The 2025 studies are based on responses from 16,781 retail bank and credit card customers nationwide and were fielded from January through March 2025.
It’s also worth noting that regional banks and credit unions, which can be strong credit card issuers, were not included in this survey.
How does your credit card app and website stack up? Do you agree with the results you’re seeing here? We’d love to hear your thoughts in the Clark.com community.
‘We Want to Travel.’ My Wife and I Just Turned 40 With $1 Million Saved. Are We Crazy to Pause Our Retirement Contributions?
“My wife and I are 40 and 41 and make roughly $272,000 per year combined. Between our various retirement accounts we have around a million dollars saved,” a user shared in a recent post to the subreddit r/PersonalFinance. “We’re considering pausing our Roth contributions until we’re empty nesters. This would let us travel quite a bit more with our kids while they still live with us.”
This is not a bad problem to have — in fact, it’s great to be so ahead on saving for retirement that you can wonder whether it’s appropriate to spend more maximizing prime years with your family.
The post goes on to explain that the couple is contributing $16,200 annually to a Roth 401(k), which could be paused to spend more freely on travel. Specifically, the user asks about forgoing this contribution until their kids, who are now around 10 years old, move out and go to college. The couple says they would like to retire in their mid-60s.
Does this move make sense?
Expert advice: There’s a risk of saving too much for retirement
We asked Rachel Lawrence, head of advice and planning at Monarch, a popular budgeting app, how she’d advise this couple.
“The crux is actually understanding how much you need, so how much you’re spending now and how much you might be spending in retirement,” says Lawrence, who’s also a certified financial planner. “They can have $1 million, but we all know $1 million isn’t worth what it used to be worth.”
A commonly cited rule of thumb recommends that by age 40, you ideally have three times your salary saved. In the couple’s case, they have closer to four times their income saved. However, Lawrence says she wouldn’t pay much attention to these “generic” rules, as individual circumstances vary so much.
The Reddit post states that they would like “75-80% of pre-retirement income in retirement,” which implies relatively high retirement spending given their income level. Still, the figures shared in the post suggest the family can likely afford to vacation. The question will just be how much.
The family needs to decide how important travel is to them, which will require weighing some core values, Lawrence says. Sticking to a travel budget is also key, considering that regular trips as a family of four can add up fast, potentially derailing a savings plan.
While some experts and communities believe in saving as aggressively as possible for retirement to pursue financial independence above all else, that’s not necessarily the correct path for every saver.
“It doesn’t sound like that’s them, right?” Lawrence says, explaining it’s normal to have other priorities. “It sounds like they would rather delay financial independence because they value more highly this sense of adventure or quality time with kids.”
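The trade-off the couple is weighing can be put in rough numbers. The sketch below is illustrative only: it uses the article's figures ($1 million saved, $16,200 per year in contributions, roughly 25 years to a mid-60s retirement) plus an assumed 6% real annual return, which is a hypothetical input, not advice.

```python
# Rough sketch: effect of pausing $16,200/yr Roth contributions for ~10 years.
# Assumptions for illustration only: 6% real annual return, $1,000,000
# starting balance, retirement in 25 years (mid-60s).

def future_value(balance, annual_contribution, years, rate=0.06, skip_years=0):
    """Grow a balance with yearly contributions, optionally skipping the first N years."""
    for year in range(years):
        balance *= 1 + rate
        if year >= skip_years:
            balance += annual_contribution
    return balance

keep_contributing = future_value(1_000_000, 16_200, 25)
pause_ten_years = future_value(1_000_000, 16_200, 25, skip_years=10)

print(f"Keep contributing:  ${keep_contributing:,.0f}")
print(f"Pause for 10 years: ${pause_ten_years:,.0f}")
print(f"Difference:         ${keep_contributing - pause_ten_years:,.0f}")
```

Under these assumptions the existing $1 million does most of the work, and the pause costs on the order of half a million dollars at retirement, meaningful, but small next to a portfolio that compounds past $4.5 million either way. That is consistent with Lawrence's point that the answer depends on their actual spending needs, not a generic rule.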
DeepSeek-V4 arrives with near state-of-the-art intelligence at 1/6th the cost of Opus 4.7, GPT-5.5
The whale has resurfaced. DeepSeek, the Chinese AI startup spun out of the quantitative trading firm High-Flyer Capital Management, became a near-overnight global sensation in January 2025 with the release of its open source R1 model, which matched the proprietary U.S. giants.

It has been an epoch in AI since then, and while DeepSeek has released several updates to that model and its V3 series, the international AI and business community has largely been waiting with bated breath for the follow-up to the R1 moment.

Now it has arrived with last night's release of DeepSeek-V4, a 1.6-trillion-parameter Mixture-of-Experts (MoE) model available free under the commercially friendly open source MIT License, which nears, and on some benchmarks surpasses, the performance of the world's most advanced closed-source systems at approximately one-sixth the cost over the application programming interface (API).

This release, which DeepSeek AI researcher Deli Chen described on X as a "labor of love" 484 days after the launch of V3, is being hailed as the "second DeepSeek moment." As Chen noted in his post, "AGI belongs to everyone." It's available now on the AI code sharing community Hugging Face and through DeepSeek's API.

Frontier-class AI gets pushed into a lower price band

The most immediate impact of the DeepSeek-V4 launch is economic. DeepSeek is not pricing its new Pro model at near-zero levels, but it is still pushing high-end model access into a far lower cost tier than the leading U.S. frontier models.

DeepSeek-V4-Pro is priced through its API at $1.74 USD per 1 million input tokens on a cache miss and $3.48 per million output tokens. That puts a simple one-million-input, one-million-output comparison at $5.22. With cached input, the input price drops to $0.145 per million tokens, bringing that same blended comparison down to $3.625.

That is dramatically cheaper than the current premium pricing from OpenAI and Anthropic.
GPT-5.5 is priced at $5.00 per million input tokens and $30.00 per million output tokens, for a combined $35.00 in the same simple comparison. Claude Opus 4.7 is priced at $5.00 input and $25.00 output, for a combined $30.00.

Model | Input | Output | Total Cost | Source
Grok 4.1 Fast | $0.20 | $0.50 | $0.70 | xAI
MiniMax M2.7 | $0.30 | $1.20 | $1.50 | MiniMax
Gemini 3 Flash | $0.50 | $3.00 | $3.50 | Google
Kimi-K2.5 | $0.60 | $3.00 | $3.60 | Moonshot
MiMo-V2-Pro (≤256K) | $1.00 | $3.00 | $4.00 | Xiaomi MiMo
GLM-5 | $1.00 | $3.20 | $4.20 | Z.ai
GLM-5-Turbo | $1.20 | $4.00 | $5.20 | Z.ai
DeepSeek-V4-Pro | $1.74 | $3.48 | $5.22 | DeepSeek
GLM-5.1 | $1.40 | $4.40 | $5.80 | Z.ai
Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 | Anthropic
Qwen3-Max | $1.20 | $6.00 | $7.20 | Alibaba Cloud
Gemini 3 Pro | $2.00 | $12.00 | $14.00 | Google
GPT-5.2 | $1.75 | $14.00 | $15.75 | OpenAI
GPT-5.4 | $2.50 | $15.00 | $17.50 | OpenAI
Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 | Anthropic
Claude Opus 4.7 | $5.00 | $25.00 | $30.00 | Anthropic
GPT-5.5 | $5.00 | $30.00 | $35.00 | OpenAI
GPT-5.4 Pro | $30.00 | $180.00 | $210.00 | OpenAI

On standard, cache-miss pricing, DeepSeek-V4-Pro comes in at roughly one-seventh the cost of GPT-5.5 and about one-sixth the cost of Claude Opus 4.7. With cached input, the gap widens: DeepSeek-V4-Pro costs about one-tenth as much as GPT-5.5 and about one-eighth as much as Claude Opus 4.7.

The more extreme near-zero story belongs to DeepSeek-V4-Flash, not the Pro model. Flash is priced at $0.14 per million input tokens on a cache miss and $0.28 per million output tokens, for a combined $0.42. With cached input, that drops to $0.308. At those prices, DeepSeek's cheaper model is more than 98% below GPT-5.5 and Claude Opus 4.7 in a simple input-plus-output comparison, or nearly 1/100th the cost, though the performance dips significantly.

DeepSeek is compressing advanced model economics into a much lower band, forcing developers and enterprises to revisit the cost-benefit calculation around premium closed models. For companies running large inference workloads, that price gap can change what is worth automating.
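The blended-cost comparisons above are simple arithmetic over the published per-million-token prices, and can be reproduced with a few lines. This is a sketch over the figures quoted in the article, not an official pricing calculator.

```python
# Sanity check of the cost comparisons: blended cost of a simple
# 1M-input + 1M-output request, and DeepSeek's ratio to the premium models.
# Prices are USD per million tokens, as quoted in the article.

prices = {  # (input, output)
    "DeepSeek-V4-Pro": (1.74, 3.48),
    "DeepSeek-V4-Flash": (0.14, 0.28),
    "GPT-5.5": (5.00, 30.00),
    "Claude Opus 4.7": (5.00, 25.00),
}

def blended(model, cached_input=None):
    """Cost of 1M input + 1M output tokens; optionally use a cached-input price."""
    inp, out = prices[model]
    return (cached_input if cached_input is not None else inp) + out

print(blended("DeepSeek-V4-Pro"))                       # cache miss: 5.22
print(blended("DeepSeek-V4-Pro", cached_input=0.145))   # cached input: 3.625
print(blended("GPT-5.5") / blended("DeepSeek-V4-Pro"))  # roughly one-seventh
print(blended("DeepSeek-V4-Flash") / blended("GPT-5.5"))  # Flash vs GPT-5.5
```

The last ratio works out to about 0.012, i.e. Flash lands near 1/100th of GPT-5.5's blended price, matching the article's "more than 98% below" framing.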
Tasks that look too expensive on GPT-5.5 or Claude Opus 4.7 may become economically viable on DeepSeek-V4-Pro, and even more so on DeepSeek-V4-Flash. The launch does not make intelligence free, but it does make the market harder for premium providers to defend on performance alone.

Benchmarking the frontier: DeepSeek-V4-Pro gets close, but GPT-5.5 and Opus 4.7 still lead on most shared tests

DeepSeek-V4-Pro-Max is best understood as a major open-weight leap, not a clean across-the-board defeat of the newest closed frontier systems. The model's strongest benchmark claims come from DeepSeek's own comparison tables, where it is shown against GPT-5.4 xHigh, Claude Opus 4.6 Max and Gemini 3.1 Pro High and bests them on several tests, including Codeforces and Apex Shortlist. But that is not the same as a head-to-head against OpenAI's newer GPT-5.5 or Anthropic's newer Claude Opus 4.7.

Looking only at DeepSeek-V4 versus the latest proprietary models, the picture is more restrained. On this shared set, GPT-5.5 and Claude Opus 4.7 still lead most categories. DeepSeek-V4-Pro-Max's best showing is on BrowseComp, the benchmark measuring agentic AI web browsing prowess (especially highly containerized information), where it scores 83.4%, narrowly behind GPT-5.5 at 84.4% and ahead of Claude Opus 4.7 at 79.3%. On Terminal-Bench 2.0, DeepSeek scores 67.9%, close to Claude Opus 4.7's 69.4%, but far behind GPT-5.5's 82.7%.

Benchmark | DeepSeek-V4-Pro-Max | GPT-5.5 | GPT-5.5 Pro (where shown) | Claude Opus 4.7 | Best result among these
GPQA Diamond | 90.1% | 93.6% | — | 94.2% | Claude Opus 4.7
Humanity's Last Exam (no tools) | 37.7% | 41.4% | 43.1% | 46.9% | Claude Opus 4.7
Humanity's Last Exam (with tools) | 48.2% | 52.2% | 57.2% | 54.7% | GPT-5.5 Pro
Terminal-Bench 2.0 | 67.9% | 82.7% | — | 69.4% | GPT-5.5
SWE-Bench Pro | 55.4% | 58.6% | — | 64.3% | Claude Opus 4.7
BrowseComp | 83.4% | 84.4% | 90.1% | 79.3% | GPT-5.5 Pro
MCP Atlas | 73.6% | 75.3% | — | 79.1% | Claude Opus 4.7

The shared academic-reasoning results favor the closed models. On GPQA Diamond, DeepSeek-V4-Pro-Max scores 90.1%, while GPT-5.5 reaches 93.6% and Claude Opus 4.7 reaches 94.2%. On Humanity's Last Exam without tools, DeepSeek scores 37.7%, behind GPT-5.5 at 41.4%, GPT-5.5 Pro at 43.1% and Claude Opus 4.7 at 46.9%. With tools enabled, DeepSeek rises to 48.2%, but still trails GPT-5.5 at 52.2%, GPT-5.5 Pro at 57.2% and Claude Opus 4.7 at 54.7%.

The agentic and software-engineering results are more mixed, but they still show DeepSeek-V4-Pro-Max trailing GPT-5.5 and Opus 4.7. On Terminal-Bench 2.0, DeepSeek's 67.9% is competitive with Claude Opus 4.7's 69.4%, but GPT-5.5 is much higher at 82.7%. On SWE-Bench Pro, DeepSeek's 55.4% trails GPT-5.5 at 58.6% and Claude Opus 4.7 at 64.3%. On MCP Atlas, DeepSeek's 73.6% is slightly behind GPT-5.5 at 75.3% and Claude Opus 4.7 at 79.1%. BrowseComp is the standout: DeepSeek's 83.4% beats Claude Opus 4.7's 79.3% and nearly matches GPT-5.5's 84.4%, though GPT-5.5 Pro's 90.1% remains well ahead.

So ultimately, DeepSeek-V4-Pro-Max does not appear to dethrone GPT-5.5 or Claude Opus 4.7 on the benchmarks that can be directly compared across the companies' published tables.
But it gets close enough on several of them, especially BrowseComp, Terminal-Bench 2.0 and MCP Atlas, that its much lower API pricing becomes the headline.

In practical terms, DeepSeek does not need to win every leaderboard row to matter. If it can deliver near-frontier performance on many enterprise-relevant agent and reasoning tasks at roughly one-sixth to one-seventh the standard API cost of GPT-5.5 or Claude Opus 4.7, it still forces a major rethink of the economics of advanced AI deployment.

DeepSeek-V4-Pro-Max is clearly the strongest open-weight model in the field right now, and it is unusually close to frontier closed systems on several practical benchmarks. While GPT-5.5 and Claude Opus 4.7 retain the lead in most direct head-to-head comparisons across the companies' benchmark charts, DeepSeek-V4-Pro gets close while being dramatically cheaper and openly available.

A big jump from DeepSeek-V3.2

To understand the magnitude of this release, look at the performance gains of the base models. DeepSeek-V4-Pro-Base represents a significant advance over the previous generation, DeepSeek-V3.2-Base. In world knowledge, V4-Pro-Base achieved 90.1 on MMLU (5-shot) compared to V3.2's 87.8, and made a massive jump on MMLU-Pro from 65.5 to 73.5. The improvement in high-level reasoning and verified facts is even more pronounced: on SuperGPQA, V4-Pro-Base reached 53.9 compared to V3.2's 45.0, and on the FACTS Parametric benchmark it more than doubled its predecessor's performance, jumping from 27.1 to 62.6. SimpleQA verified scores also rose dramatically, from 28.3 to 55.2.

Long-context capabilities have also been refined. On LongBench-V2, V4-Pro-Base scored 51.5, significantly outpacing the 40.2 achieved by V3.2-Base. In code and math, V4-Pro-Base reached 76.8 on HumanEval (Pass@1), up from 62.8 on V3.2-Base.
These numbers underscore that DeepSeek has not just optimized for inference cost; it has fundamentally improved the intelligence density of its base architecture. The efficiency story is equally compelling for the Flash variant: DeepSeek-V4-Flash-Base, despite using substantially fewer parameters, outperforms the larger V3.2-Base across a wide range of benchmarks, particularly in long-context scenarios.

A new information "traffic controller": Manifold-Constrained Hyper-Connections (mHC)

DeepSeek's ability to offer these prices and performance figures is rooted in the architectural innovations detailed in its technical report, also released today, "Towards Highly Efficient Million-Token Context Intelligence."

The standout technical achievement of V4 is its native one-million-token context window. Historically, maintaining such a large context required massive memory for the key-value (KV) cache. DeepSeek addressed this by introducing a hybrid attention architecture that combines Compressed Sparse Attention (CSA), which reduces initial token dimensionality, with Heavily Compressed Attention (HCA), which aggressively compresses the memory footprint for long-range dependencies. In practice, the V4-Pro model requires only 10% of the KV cache and 27% of the single-token inference FLOPs of its predecessor, DeepSeek-V3.2, even when operating at a 1M-token context.

To stabilize a network of 1.6 trillion parameters, DeepSeek moved beyond traditional residual connections. The company's researchers incorporated Manifold-Constrained Hyper-Connections (mHC) to strengthen signal propagation across layers while preserving the model's expressivity. mHC lets the model carry a much wider flow of information between layers, so it can learn more complex things, without the risk of becoming unstable or "breaking" during training. It's like giving a city a 10-lane highway while adding a perfect traffic controller to ensure no one ever hits the brakes.

This is paired with the Muon optimizer, which allowed the team to achieve faster convergence and greater training stability during pre-training on more than 32T diverse, high-quality tokens. The pre-training data was filtered to remove low-quality auto-generated content, mitigating the risk of model collapse, and to prioritize unique, high-value academic material. The model's 1.6T parameters use a Mixture-of-Experts (MoE) design in which only 49B parameters are activated per token, further driving down compute requirements.

Training the mixture-of-experts (MoE) to work as a whole

DeepSeek-V4 was not simply trained; it was "cultivated" through a unique two-stage paradigm. First, in Independent Expert Cultivation, domain-specific experts were trained via Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) using the GRPO (Group Relative Policy Optimization) algorithm. This let each expert master specialized skills such as mathematical reasoning or codebase analysis.

Second, Unified Model Consolidation integrated these distinct proficiencies into a single model via on-policy distillation, with the unified model acting as a student that minimizes reverse KL loss against the expert teacher models. This distillation process preserves the specialized capabilities of each expert while the model operates as a cohesive whole.

The model's reasoning capabilities are further segmented into three increasing "effort" modes. "Non-think" mode provides fast, intuitive responses for routine tasks. "Think High" applies deliberate logical analysis for complex problem-solving. Finally, "Think Max" pushes the boundaries of the model's reasoning, bridging the gap with frontier models on complex reasoning and agentic tasks.
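The sparse-activation idea behind that MoE design, where a gate selects a few experts per token so only a small fraction of total parameters does work, can be illustrated with a toy top-k routing sketch. The sizes and routing below are generic MoE mechanics for illustration, not DeepSeek's actual implementation.

```python
import numpy as np

# Toy illustration of Mixture-of-Experts routing: a learned gate scores all
# experts, the top-k are selected per token, and their outputs are combined
# with softmax weights. Sizes are toy values, not DeepSeek-V4's configuration.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 8, 2

gate_w = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """Route a single token vector to its top-k experts, weighted by gate scores."""
    logits = x @ gate_w
    top = np.argsort(logits)[-top_k:]        # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_layer(token)
active_fraction = top_k / n_experts
print(out.shape, f"{active_fraction:.0%} of experts active per token")
```

Only `top_k / n_experts` of the expert parameters run per token, which is the same lever that lets a 1.6T-parameter model activate just 49B parameters per token.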
This flexibility allows users to match compute effort to the difficulty of the task, further enhancing cost-efficiency.

Breaking the Nvidia GPU stranglehold with Chinese-made Huawei Ascend NPUs

While the model weights are the headline, the software stack released alongside them is arguably more important for the future of "sovereign AI." Analyst Rui Ma highlighted a single sentence from the release as the most critical: DeepSeek validated its fine-grained Expert Parallelism (EP) scheme on Huawei Ascend NPUs (neural processing units). By achieving a 1.50x to 1.73x speedup on non-Nvidia platforms, DeepSeek has provided a blueprint for high-performance AI deployment that is resilient to Western GPU supply chains and export controls. It's worth noting, however, that DeepSeek says it used officially licensed, legal Nvidia GPUs for DeepSeek-V4's training, in addition to the Huawei NPUs.

DeepSeek has also open-sourced the MegaMoE mega-kernel as a component of its DeepGEMM library. This CUDA-based implementation delivers up to a 1.96x speedup for latency-sensitive tasks like RL rollouts and high-speed agent serving. It lets developers run these massive models with extreme efficiency on existing hardware, further cementing DeepSeek's role as the primary driver of open-source AI infrastructure. The technical report emphasizes that these optimizations are crucial for supporting a standard 1M context across all official services.

Licensing and local deployment

DeepSeek-V4 is released under the MIT License, the most permissive framework in the industry. Developers can use, copy, modify, and distribute the weights for commercial purposes without royalties, a stark contrast to the "restricted" open-weight licenses favored by other companies. For local deployment, DeepSeek recommends setting sampling parameters to temperature = 1.0 and top_p = 1.0.
For those using the "Think Max" reasoning mode, the team suggests setting the context window to at least 384K tokens to avoid truncating the model's internal reasoning chains.

The release includes a dedicated encoding folder with Python scripts demonstrating how to encode messages in OpenAI-compatible format and parse the model's output, including reasoning content. DeepSeek-V4 also integrates seamlessly with leading AI agents such as Claude Code, OpenClaw, and OpenCode. This native integration underscores its role as a bedrock for developer tools, providing an open-source alternative to the proprietary ecosystems of major cloud providers.

Community reactions and what comes next

The community reaction has been one of shock and validation. Hugging Face officially welcomed the "whale" back, stating that the era of cost-effective 1M context length has arrived. Industry experts noted that the "second DeepSeek moment" has effectively reset the developmental trajectory of the entire field, placing massive pressure on closed-source providers like OpenAI and Anthropic to justify their premiums. AI evaluation firm Vals AI noted that DeepSeek-V4 is now the "#1 open-weight model on our Vibe Code Benchmark, and it's not close."

DeepSeek is moving quickly to retire its older architectures. The company announced that the legacy deepseek-chat and deepseek-reasoner endpoints will be fully retired on July 24, 2026, and all traffic is currently being rerouted to the V4-Flash architecture, signifying a total transition to the million-token standard.

DeepSeek-V4 is more than just a new model; it is a challenge to the status quo. By proving that architectural innovation can substitute for raw compute maximalism, DeepSeek has made the highest levels of AI intelligence accessible to the global developer community at far lower cost. That could benefit the whole world, even at a time when lawmakers and leaders in Washington, D.C. are raising concerns about Chinese labs "distilling" from U.S. proprietary giants to train open source models, and fears that open source or jailbroken proprietary models could be used to create weapons and commit acts of terror.

The truth is, while these are real risks, as they were with prior technologies that broadened information access, like search and the internet itself, the benefits seem to far outweigh them. DeepSeek's quest to keep frontier AI models open benefits the entire planet of potential AI users, especially enterprises looking to adopt the cutting edge at the lowest possible cost.
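For readers who want to try the recommended local-deployment settings described above, the sketch below builds an OpenAI-compatible chat request with the suggested sampling parameters. The model name and the `max_context_tokens` field are hypothetical placeholders for illustration; only the temperature, top_p, and 384K-context recommendations come from the article.

```python
import json

# Sketch of an OpenAI-compatible chat-completion payload using DeepSeek's
# recommended local-deployment sampling settings (temperature=1.0, top_p=1.0).
# "deepseek-v4" and "max_context_tokens" are assumed names, not confirmed API fields.

def build_request(messages, model="deepseek-v4", think_max=False):
    """Build a chat-completion payload with the recommended sampling parameters."""
    payload = {
        "model": model,
        "messages": messages,
        "temperature": 1.0,
        "top_p": 1.0,
    }
    if think_max:
        # The team suggests a context window of at least 384K tokens for
        # "Think Max" so internal reasoning chains are not truncated.
        payload["max_context_tokens"] = 384_000  # hypothetical parameter name
    return payload

req = build_request([{"role": "user", "content": "Summarize this repo."}], think_max=True)
print(json.dumps(req, indent=2))
```

Any OpenAI-compatible client or a plain HTTP POST could send a payload shaped like this to a locally served model.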
Analysts reset ServiceNow stock price target after earnings
ServiceNow (NOW) just delivered the kind of results in the first quarter that usually send a stock higher. Revenue and earnings beat estimates, subscription growth exceeded 20%, and management raised full-year guidance. Yet shares plummeted 18% in a single day. Investors are concerned about how AI will impact the company's organic growth, margins, and valuation multiple going forward.

Raised subscription guidance

ServiceNow's biggest positive Q1 development was the raised full-year subscription revenue outlook of $15.735 billion to $15.775 billion, which implies 22% to 22.5% year-over-year growth for the full year, after Q1 subscription revenue rose 22% to $3.671 billion.

But in the market's eyes, the good news ended there, with the stock crashing nearly 18% on April 23 following earnings. It's reassuring to see that most analysts following ServiceNow still believe in the company's product, moat, and general competitive positioning. With the stock still down roughly 60% from its 52-week high, the key question is whether ServiceNow can reignite growth and sustain its margins enough to earn back a premium valuation.

Organic growth under scrutiny

Management said delayed closures of large on-premises deals tied to the Middle East conflict cut Q1 subscription revenue growth by roughly 75 basis points, arguing the shortfall was due to timing. However, if those deals were lost rather than deferred, ServiceNow's growth visibility could fall further. Current remaining performance obligations rose 22.5% year over year to $12.64 billion, but analysts questioned how much of that growth was organic after accounting for acquisition effects.
That leaves next quarter's organic cRPO trend as the clearest test of whether the raised guide reflects real demand or a narrow timing bridge.

Following the first-quarter results, BMO Capital cut its price target from $120 to $115 but kept an Outperform rating, saying the results and guidance were largely in line, except for M&A and weaker organic cRPO guidance. Other firms also lowered targets after Q1 fiscal 2026 results, including Needham to $115, BTIG to $150, Mizuho to $140, Wolfe Research to $125, and Stifel to $120. The common themes were delayed Middle East deals, a softer organic subscription outlook, and the risk that M&A integration pressures margins.

That pullback reflects concern about the quality of growth, not a collapse in competitive positioning. Several firms maintained positive ratings, and the broader backdrop still supports the AI case, with enterprise software demand holding up despite near-term deal-timing noise.

AI upsell is turning into revenue

ServiceNow showed that its AI product, Now Assist, is moving beyond pilots and landing at scale in large accounts. Management said the number of customers generating more than $1 million in annual contract value from Now Assist jumped more than 130% year over year. It also highlighted 16 deals worth more than $5 million in net new ACV. That matters because expanding spend within ServiceNow's installed base carries lower friction, supports retention, and increases contract value without relying on new customer wins.

ServiceNow's "AI control tower" pitch also sharpens the strategic case.
The company is positioning itself as the workflow layer that coordinates activity across enterprise environments, rather than just adding AI features to separate modules. If that positioning works, AI could become a force multiplier for broader platform adoption rather than a narrow product add-on.

Security deals expand TAM but raise execution risk

The acquisitions of Veza and Armis broaden ServiceNow's push into security and identity and expand what it can sell to existing customers. Veza closed on March 2, 2026, and Armis closed on April 20, 2026. Strategically, Veza strengthens identity and access capabilities, while Armis adds cyber asset intelligence. Both give ServiceNow more products to sell into an existing enterprise customer base.
In the best case, Veza and Armis expand the company's total addressable market and increase wallet share across current accounts. In the near term, though, integration spending, go-to-market changes, and product overlap could pressure margins before cross-sell shows up in bookings. For now, the market is likely to treat acquisition-driven growth cautiously until ServiceNow proves these deals translate into clearer platform value.

What could push NOW higher

Middle East deals closing would validate management's timing explanation and support the raised outlook.
Deeper Now Assist adoption would raise ACV per customer and reduce reliance on new-logo growth.
Broader adoption of the AI control-tower model would strengthen platform centrality and improve expansion economics.
Veza and Armis could help if ServiceNow converts them into cross-sell wins without pressuring margins.

What could continue pressuring NOW shares

Results that are merely "fine" may not be enough, as investors keep selling software names that aren't clearly winning in AI.
Organic cRPO staying weak would undermine confidence in demand and guidance.
Acquisition-driven growth could mask slower core momentum and pressure the stock's premium valuation.
Integration costs from Veza and Armis could weigh on margins before revenue benefits appear.
AI adoption concentrated in a small set of large customers would limit broader revenue reacceleration.

Key takeaways for investors

ServiceNow did what investors usually want: it beat estimates and raised guidance. But that was not enough to settle the quarter's biggest question.

The bull case is clear. Delayed deals return, AI keeps lifting contract values without materially impacting demand, and Veza and Armis expand the platform without disrupting margins. In that scenario, ServiceNow keeps its premium status because growth comes from high-quality sources.

The bear case is just as clear.
Organic demand remains soft, acquisitions blur the underlying trend, and AI strength stays too narrow to offset broader weakness.

The next few quarters will decide which narrative wins. It should be noted that even though the stock looks cheap, there don't appear to be any near-term catalysts that would force a rerating.

Related: Morgan Stanley has a message for ServiceNow investors
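As a quick sanity check on the guidance arithmetic quoted earlier, the raised full-year subscription outlook of $15.735 billion to $15.775 billion against the stated 22% to 22.5% growth implies last year's subscription revenue of roughly $12.9 billion. A sketch of that back-of-the-envelope math, using only figures from the article:

```python
# Back out the implied prior-year subscription revenue from the raised
# full-year guidance ($15.735B-$15.775B) and the stated 22%-22.5%
# year-over-year growth range. Figures in billions of USD.

low_guide, high_guide = 15.735, 15.775
low_growth, high_growth = 0.22, 0.225

implied = sorted([low_guide / (1 + low_growth), high_guide / (1 + high_growth)])
print(f"Implied prior-year subscription revenue: ${implied[0]:.2f}B-${implied[1]:.2f}B")
```

Both ends of the range land within a few tens of millions of each other, which is a useful consistency check that the guided dollar figures and the quoted growth rates describe the same base.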