Testing autonomous agents (Or: how I learned to stop worrying and embrace chaos)
Look, we’ve spent the last 18 months building production AI systems, and we’ll tell you what keeps us up at night — and it’s not whether the model can answer questions. That’s table stakes now. What haunts us is the mental image of an agent autonomously approving a six-figure vendor contract at 2 a.m. because someone typo’d a config file.

We’ve moved past the era of “ChatGPT wrappers” (thank God), but the industry still treats autonomous agents like they’re just chatbots with API access. They’re not. When you give an AI system the ability to take actions without human confirmation, you’re crossing a fundamental threshold. You’re not building a helpful assistant anymore — you’re building something closer to an employee. And that changes everything about how we need to engineer these systems.

The autonomy problem nobody talks about

Here’s what’s wild: We’ve gotten really good at making models that *sound* confident. But confidence and reliability aren’t the same thing, and the gap between them is where production systems go to die.

We learned this the hard way during a pilot program where we let an AI agent manage calendar scheduling across executive teams. Seems simple, right? The agent could check availability, send invites, handle conflicts. Except, one Monday morning, it rescheduled a board meeting because it interpreted “let’s push this if we need to” in a Slack message as an actual directive. The model wasn’t wrong in its interpretation — it was plausible. But plausible isn’t good enough when you’re dealing with autonomy.

That incident taught us something crucial: The challenge isn’t building agents that work most of the time.
It’s building agents that fail gracefully, know their limitations, and have the circuit breakers to prevent catastrophic mistakes.

What reliability actually means for autonomous systems

[Figure: Layered reliability architecture]

When we talk about reliability in traditional software engineering, we’ve got decades of patterns: redundancy, retries, idempotency, graceful degradation. But AI agents break a lot of our assumptions.

Traditional software fails in predictable ways. You can write unit tests. You can trace execution paths. With AI agents, you’re dealing with probabilistic systems making judgment calls. A bug isn’t just a logic error — it’s the model hallucinating a plausible-sounding but completely fabricated API endpoint, or misinterpreting context in a way that technically parses but completely misses the human intent.

So what does reliability look like here? In our experience, it’s a layered approach.

Layer 1: Model selection and prompt engineering

This is foundational but insufficient. Yes, use the best model you can afford. Yes, craft your prompts carefully with examples and constraints. But don’t fool yourself into thinking that a great prompt is enough. We’ve seen too many teams ship “GPT-4 with a really good system prompt” and call it enterprise-ready.

Layer 2: Deterministic guardrails

Before the model does anything irreversible, run it through hard checks. Is it trying to access a resource it shouldn’t? Is the action within acceptable parameters? We’re talking old-school validation logic — regex, schema validation, allowlists. It’s not sexy, but it’s effective.

One pattern that’s worked well for us: Maintain a formal action schema. Every action an agent can take has a defined structure, required fields, and validation rules. The agent proposes actions in this schema, and we validate before execution.
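A minimal sketch of what such a validation step can look like. The action names, required fields, and internal-domain allowlist below are illustrative assumptions, not a real production schema:

```python
# Hypothetical action schema: every action an agent may propose,
# with required fields checked before anything executes.
ALLOWED_ACTIONS = {
    "send_email": {"required": {"to", "subject", "body"}},
    "create_event": {"required": {"title", "start", "attendees"}},
}

INTERNAL_DOMAIN = "@example.com"  # assumption: recipients must be internal

def validate_action(proposal: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the action may run."""
    errors = []
    spec = ALLOWED_ACTIONS.get(proposal.get("action"))
    if spec is None:
        errors.append(f"action {proposal.get('action')!r} is not on the allowlist")
        return errors
    missing = spec["required"] - proposal.get("params", {}).keys()
    if missing:
        errors.append(f"missing required fields: {sorted(missing)}")
    if proposal.get("action") == "send_email":
        to = proposal.get("params", {}).get("to", "")
        if not to.endswith(INTERNAL_DOMAIN):
            errors.append(f"recipient {to!r} is outside the internal allowlist")
    return errors
```

Because the function returns structured errors rather than a bare yes/no, those errors can be handed straight back to the agent as context for a retry.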
If validation fails, we don’t just block it — we feed the validation errors back to the agent and let it try again with context about what went wrong.

Layer 3: Confidence and uncertainty quantification

Here’s where it gets interesting. We need agents that know what they don’t know. We’ve been experimenting with agents that can explicitly reason about their confidence before taking actions. Not just a probability score, but actual articulated uncertainty: “I’m interpreting this email as a request to delay the project, but the phrasing is ambiguous and could also mean…”

This doesn’t prevent all mistakes, but it creates natural breakpoints where you can inject human oversight. High-confidence actions go through automatically. Medium-confidence actions get flagged for review. Low-confidence actions get blocked with an explanation.

Layer 4: Observability and auditability

[Figure: Action validation pipeline]

If you can’t debug it, you can’t trust it. Every decision the agent makes needs to be loggable, traceable, and explainable. Not just “what action did it take” but “what was it thinking, what data did it consider, what was the reasoning chain?”

We’ve built a custom logging system that captures the full large language model (LLM) interaction — the prompt, the response, the context window, even the model temperature settings. It’s verbose as hell, but when something goes wrong (and it will), you need to be able to reconstruct exactly what happened. Plus, this becomes your dataset for fine-tuning and improvement.

Guardrails: The art of saying no

Let’s talk about guardrails, because this is where engineering discipline really matters. A lot of teams approach guardrails as an afterthought — “we’ll add some safety checks if we need them.” That’s backwards. Guardrails should be your starting point.

We think of guardrails in three categories.

Permission boundaries

What is the agent physically allowed to do? This is your blast radius control.
Even if the agent hallucinates the worst possible action, what’s the maximum damage it can cause?

We use a principle called “graduated autonomy.” New agents start with read-only access. As they prove reliable, they graduate to low-risk writes (creating calendar events, sending internal messages). High-risk actions (financial transactions, external communications, data deletion) either require explicit human approval or are simply off-limits.

One technique that’s worked well: Action cost budgets. Each agent has a daily “budget” denominated in some unit of risk or cost. Reading a database record costs 1 unit. Sending an email costs 10. Initiating a vendor payment costs 1,000. The agent can operate autonomously until it exhausts its budget; then, it needs human intervention. This creates a natural throttle on potentially problematic behavior.

[Figure: Graduated autonomy and action cost budget]

Semantic boundaries

What should the agent understand as in-scope vs. out-of-scope? This is trickier because it’s conceptual, not just technical.

We’ve found that explicit domain definitions help a lot. Our customer service agent has a clear mandate: handle product questions, process returns, escalate complaints. Anything outside that domain — someone asking for investment advice, technical support for third-party products, personal favors — gets a polite deflection and escalation.

The challenge is making these boundaries robust to prompt injection and jailbreaking attempts. Users will try to convince the agent to help with out-of-scope requests. Other parts of the system might inadvertently pass instructions that override the agent’s boundaries. You need multiple layers of defense here.

Operational boundaries

How much can the agent do, and how fast? This is your rate limiting and resource control.

We’ve implemented hard limits on everything: API calls per minute, maximum tokens per interaction, maximum cost per day, maximum number of retries before human escalation.
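Both the cost budget and these hard limits boil down to a counter checked before each action. A minimal sketch of the budget variant, with made-up action names and costs:

```python
class ActionBudget:
    """Daily action cost budget: each action type spends units of risk.

    The action names and costs here are illustrative, not a real tariff.
    """

    COSTS = {"read_record": 1, "send_email": 10, "vendor_payment": 1_000}

    def __init__(self, daily_limit: int):
        self.daily_limit = daily_limit
        self.spent = 0

    def try_spend(self, action: str) -> bool:
        """Return True and record the cost if the action fits the remaining
        budget; otherwise leave the budget untouched so the caller can
        escalate to a human instead of executing."""
        cost = self.COSTS[action]
        if self.spent + cost > self.daily_limit:
            return False  # budget exhausted: hand off to a human
        self.spent += cost
        return True
```

The same shape covers rate limits: swap cost units for calls per minute and reset the counter on a timer.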
These might seem like artificial constraints, but they’re essential for preventing runaway behavior.

We once saw an agent get stuck in a loop trying to resolve a scheduling conflict. It kept proposing times, getting rejections, and trying again. Without rate limits, it sent 300 calendar invites in an hour. With proper operational boundaries, it would’ve hit a threshold and escalated to a human after attempt number 5.

Agents need their own style of testing

Traditional software testing doesn’t cut it for autonomous agents. You can’t just write test cases that cover all the edge cases, because with LLMs, everything is an edge case.

What’s worked for us:

Simulation environments

Build a sandbox that mirrors production but with fake data and mock services. Let the agent run wild. See what breaks. We do this continuously — every code change goes through 100 simulated scenarios before it touches production.

The key is making scenarios realistic. Don’t just test happy paths. Simulate angry customers, ambiguous requests, contradictory information, system outages. Throw in some adversarial examples. If your agent can’t handle a test environment where things go wrong, it definitely can’t handle production.

Red teaming

Get creative people to try to break your agent. Not just security researchers, but domain experts who understand the business logic. Some of our best improvements came from sales team members who tried to “trick” the agent into doing things it shouldn’t.

Shadow mode

Before you go live, run the agent in shadow mode alongside humans. The agent makes decisions, but humans actually execute the actions. You log both the agent’s choices and the human’s choices, and you analyze the delta.

This is painful and slow, but it’s worth it. You’ll find all kinds of subtle misalignments you’d never catch in testing. Maybe the agent technically gets the right answer, but with phrasing that violates company tone guidelines. Maybe it makes legally correct but ethically questionable decisions.
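The logging half of shadow mode needs very little machinery: record what the agent would have done next to what the human actually did, then measure the delta. A minimal sketch, with record field names of our own invention:

```python
import json
from datetime import datetime, timezone

def log_shadow_decision(log_file, request_id: str, agent_choice: str, human_choice: str) -> None:
    """Append one shadow-mode record as a JSON line: the agent's proposed
    decision alongside the human's actual decision."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "request_id": request_id,
        "agent": agent_choice,
        "human": human_choice,
        "match": agent_choice == human_choice,
    }
    log_file.write(json.dumps(record) + "\n")

def agreement_rate(lines) -> float:
    """Fraction of shadow records where agent and human agreed."""
    records = [json.loads(line) for line in lines]
    return sum(r["match"] for r in records) / len(records)
```

The raw agreement rate is only a starting point; the disagreements themselves are what you review by hand, since a mismatch can mean a wrong answer, a tone violation, or a human error.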
Shadow mode surfaces these issues before they become real problems.

The human-in-the-loop pattern

[Figure: Three human-in-the-loop patterns]

Despite all the automation, humans remain essential. The question is: Where in the loop?

We’re increasingly convinced that “human-in-the-loop” is actually several distinct patterns:

Human-on-the-loop: The agent operates autonomously, but humans monitor dashboards and can intervene. This is your steady state for well-understood, low-risk operations.

Human-in-the-loop: The agent proposes actions, humans approve them. This is your training-wheels mode while the agent proves itself, and your permanent mode for high-risk operations.

Human-with-the-loop: Agent and human collaborate in real time, each handling the parts they’re better at. The agent does the grunt work, the human does the judgment calls.

The trick is making these transitions smooth. An agent shouldn’t feel like a completely different system when you move from autonomous to supervised mode. Interfaces, logging, and escalation paths should all be consistent.

Failure modes and recovery

Let’s be honest: Your agent will fail. The question is whether it fails gracefully or catastrophically.

We classify failures into three categories:

Recoverable errors: The agent tries to do something, it doesn’t work, the agent realizes it didn’t work and tries something else. This is fine. This is how complex systems operate. As long as the agent isn’t making things worse, let it retry with exponential backoff.

Detectable failures: The agent does something wrong, but monitoring systems catch it before significant damage occurs. This is where your guardrails and observability pay off. The agent gets rolled back, humans investigate, you patch the issue.

Undetectable failures: The agent does something wrong, and nobody notices until much later. These are the scary ones. Maybe it’s been misinterpreting customer requests for weeks. Maybe it’s been making subtly incorrect data entries.
These accumulate into systemic issues.

The defense against undetectable failures is regular auditing. We randomly sample agent actions and have humans review them. Not just pass/fail, but detailed analysis. Is the agent showing any drift in behavior? Are there patterns in its mistakes? Is it developing any concerning tendencies?

The cost-performance tradeoff

Here’s something nobody talks about enough: Reliability is expensive.

Every guardrail adds latency. Every validation step costs compute. Multiple model calls for confidence checking multiply your API costs. Comprehensive logging generates massive data volumes.

You have to be strategic about where you invest. Not every agent needs the same level of reliability. A marketing copy generator can be looser than a financial transaction processor. A scheduling assistant can retry more liberally than a code deployment system.

We use a risk-based approach. High-risk agents get all the safeguards: multiple validation layers, extensive monitoring. Lower-risk agents get lighter-weight protections. The key is being explicit about these trade-offs and documenting why each agent has the guardrails it does.

Organizational challenges

We’d be remiss if we didn’t mention that the hardest parts aren’t technical — they’re organizational.

Who owns the agent when it makes a mistake? Is it the engineering team that built it? The business unit that deployed it? The person who was supposed to be supervising it?

How do you handle edge cases where the agent’s logic is technically correct but contextually inappropriate? If the agent follows its rules but violates an unwritten norm, who’s at fault?

What’s your incident response process when an agent goes rogue? Traditional runbooks assume human operators making mistakes. How do you adapt these for autonomous systems?

These questions don’t have universal answers, but they need to be addressed before you deploy.
Clear ownership, documented escalation paths, and well-defined success metrics are just as important as the technical architecture.

Where we go from here

The industry is still figuring this out. There’s no established playbook for building reliable autonomous agents. We’re all learning in production, and that’s both exciting and terrifying.

What we know for sure: The teams that succeed will be the ones who treat this as an engineering discipline, not just an AI problem. You need traditional software engineering rigor — testing, monitoring, incident response — combined with new techniques specific to probabilistic systems.

You need to be paranoid but not paralyzed. Yes, autonomous agents can fail in spectacular ways. But with proper guardrails, they can also handle enormous workloads with superhuman consistency. The key is respecting the risks while embracing the possibilities.

We’ll leave you with this: Every time we deploy a new autonomous capability, we run a pre-mortem. We imagine it’s six months from now and the agent has caused a significant incident. What happened? What warning signs did we miss? What guardrails failed?

This exercise has saved us more times than we can count. It forces you to think through failure modes before they occur, to build defenses before you need them, to question assumptions before they bite you.

Because in the end, building enterprise-grade autonomous AI agents isn’t about making systems that work perfectly. It’s about making systems that fail safely, recover gracefully, and learn continuously.

And that’s the kind of engineering that actually matters.

Madhvesh Kumar is a principal engineer. Deepika Singh is a senior software engineer. Views expressed are based on hands-on experience building and deploying autonomous agents, along with the occasional 3 a.m. incident response that makes you question your career choices.
Ethereum faces make-or-break moment in high-stakes balancing act as scaling, quantum and AI pressures mount
While upgrades have improved efficiency and lowered costs, the ecosystem faces deeper structural questions around fragmentation, security, and purpose, even as it continues prioritizing base-layer scaling.
Popular Florida family theme park closes down for new development
Popular theme parks have begun disappearing from the landscape, as developers anticipate huge returns from converting massive amusement park parcels of land into lucrative real estate projects.

Cedar Fair sold its valuable California Great America property in Silicon Valley to real estate developer Prologis in 2022, before it merged with Six Flags in 2024. California Great America announced it will close its gates permanently at the end of the 2027 amusement park season.
Andretti Thrill Park will close for redevelopment after 27 years of operation. Shutterstock
Andretti Thrill Park closes down

And now another beloved family theme park, Andretti Thrill Park, which was co-founded by the late NASCAR driver John Andretti in 1999, has closed down permanently and will be redeveloped into an apartment complex with over 300 units, The Space Coast Rocket reported.

The Melbourne, Fla.-based race car-themed thrill park’s owner, Eddie Hamann, who co-founded the theme park with Andretti, said economic issues were not the reason for closing the park, as it was performing well financially.

Co-founder John Andretti passed away

Andretti, the nephew of car-racing legend Mario Andretti, passed away from colon cancer in 2020.

“There was no issue with the city, no issue with the county. Sales were still strong. We never lost money here,” Hamann said, according to The Space Coast Rocket. “It’s simply that the facility is 27 years old. It requires a lot of maintenance, and it was time to look at what the future holds.”

A high demand for housing in the Melbourne area prompted the decision to redevelop the aging 16.7-acre property into apartments for students at the Florida Institute of Technology and local workers, the thrill park owner said.

Andretti Thrill Park had become a popular entertainment venue for local families on the Space Coast, offering a variety of attractions, including five go-kart tracks, 18-hole miniature golf, a laser tag arena, paddle boat rides, a rock wall, the Andretti 360 ride, a drop tower, a train ride, kiddie rides, and the Space Coast’s largest video arcade with 150 games, according to Tripadvisor.

The thrill park’s website and social media accounts have already been disabled.

Another Andretti karting chain operates

Hamann is also a partner in another Andretti family venture, Andretti Indoor Karting & Games.
The go-karting and arcade chain has 14 locations in 8 states, which are not affiliated with Andretti Thrill Park.

Fans of seven Six Flags amusement parks will be relieved to know, however, that their favorite park destinations will still be operating after the 2026 season ends.

EPR Properties buys Six Flags parks

EPR Properties purchased the parks for $342 million with plans to sign long-term leases with major theme park operators Enchanted Parks and La Ronde Operations Inc., the company announced on March 5.

“This strategic acquisition represents a compelling opportunity to expand our attractions portfolio with high-quality experiential real estate assets in established regional markets,” Gregory K. Silvers, chairman and chief executive officer of EPR Properties, said in a statement.

“These properties embody the essential characteristics we seek: delivering stable, long-term cash flows, strong drive-to accessibility, multi-generational appeal, and significant underlying land value,” Silvers said.

“This transaction aligns with our disciplined investment criteria and accelerates our strategic expansion into experiential properties that create enduring value for our shareholders,” he said.

Six Flags Parks Sold to EPR Properties

Worlds of Fun, Kansas City, Mo., leased to Enchanted Parks.
Valleyfair, Minneapolis, Minn., leased to Enchanted Parks.
Six Flags St. Louis, Mo., leased to Enchanted Parks.
Schlitterbahn Waterpark Galveston, Galveston, Texas, leased to Enchanted Parks.
Michigan Adventure, Grand Rapids, Mich., leased to Enchanted Parks.
Six Flags Great Escape, Queensbury, N.Y., leased to Enchanted Parks.
Six Flags La Ronde, Montreal, Quebec, leased to La Ronde Operations Inc.

Related: Major fried chicken franchisee closes in Chapter 11 bankruptcy
Kate Spade Outlet’s $429 quilted crossbody bag is just $94 — and it comes in 3 colors
TheStreet aims to feature only the best products and services. If you buy something via one of our links, we may earn a commission.

Why we love this deal

When springtime rolls around, you may find yourself reaching for those cute spring-colored clothes and accessories, or maybe your spring cleaning has started early and you’re in the market for some new accessories that match your favorite warm-weather clothes. If you need a stylish, high-quality bag that comes with hundreds of dollars of savings, Kate Spade Outlet is your place.

While Kate Spade Outlet has many cute items, this may be our favorite so far. The Kate Spade Carey Quilted Crossbody Bag is an adorable and chic accessory to add to almost any outfit. The pastel yellow works with blue jeans and bright spring and summer colors, or pair it with an all-black-and-yellow outfit for an edgy vibe. Whatever your preference, you’ll love the discount of $335: shoppers can get this bag for just $94 and save 78% off the original price of $429. This price is reflected in the cart during checkout.

Kate Spade Carey Quilted Crossbody Bag, $94 (was $429) at Kate Spade Outlet
Courtesy of Kate Spade Outlet
Shop at Kate Spade Outlet

Why do shoppers love it?

This bag is not only super adorable, but it’s also functional and ready for daily use. Measuring 6.2 inches tall, 3 inches deep, and 8.72 inches wide, this crossbody bag can hold the largest iPhone, a continental wallet, sunglasses, money, makeup, and other items. It has a large main zip compartment, an inner zip pocket, and an outer slip pocket for convenience. The turnlock closure keeps your items secure, and the shiny silver hardware looks chic against the bright colors while matching the intertwined silver chain.

Related: Kate Spade is selling a $359 shoulder bag from just $80 that’s available in 12 bright colors in time for spring

With an adjustable strap that features an 11.5-inch drop chain and a connector that prevents the chain from digging into your shoulder, this bag is comfortable to wear all day. The adjustability allows it to be worn over the shoulder with a double chain, or as a crossbody with a single chain. The smooth quilted leather texture looks precious and gives off the look of a frosted cake, which fits perfectly with the Lemon Fondant color name, a beautiful pastel yellow that’s perfect for spring. This bag is also available in Orange Jasper and Blue Multicolor.

Details to know

Size: It measures 6.2 inches tall, 3 inches deep, and 8.72 inches wide.
Colors: This bag is available in Lemon Fondant, Orange Jasper, and Blue Multicolor.
Storage: It features an exterior slip pocket, an interior zip pocket, and a main compartment.

The same bag is also available in black or white, with one reviewer saying, “It’s beautiful! I love how the straps can be worn as a cross-body bag or on the shoulder. It’s so classy and pretty.” Another shopper said, “It is a lovely bag that goes perfectly with any outfit. There’s enough space to carry the basics for any night out. I also take it to work.
I’m in love with it.”

Shop more deals

Kayla Small Crossbody Bag, $74 (was $300) at Kate Spade Outlet
Heart Quilted Crossbody Bag, $78 (was $429) at Kate Spade Outlet
Margot Textured Patent Leather Convertible Bag, $125 (was $329) at Kate Spade Outlet

Whether you want to update your wardrobe or find a new daily staple that can take you from work to a night out, the Kate Spade Carey Quilted Crossbody Bag is a great choice. It’s spacious, versatile, and adds a nice pop of color to any outfit. Plus, at a huge discount of 78% off, this bag is yours for just $94.
Paramount And Warner Bros. Discovery Combined Will Control 40% Of Acquired TV Viewing On Streaming.
Paramount and Warner Bros. Discovery together will own the most valuable library of television shows, accounting for 40% of viewing of the top acquired shows on streaming.
Episode 8 Of ‘The Amazing Digital Circus’ Exposes Caine’s Big Secret
Episode eight of ‘The Amazing Digital Circus’ reveals the origin of Caine and hints at the true purpose of the Circus—here are all of the twists, explained.
Goldman Sachs sends blunt message on Nvidia stock after GTC
Despite all the bearish noise, Goldman Sachs isn’t backing down on Nvidia (NVDA) stock yet. After another stellar GTC showing, the bank reiterated its $250 price target and maintained a buy rating, underscoring confidence in the AI giant’s tremendous upside from current levels.

It’s important to note that Goldman Sachs first raised its Nvidia price target to $250 back on Nov. 20, 2025. Since then, it has reiterated that target in multiple notes, including one following GTC.

At the time of writing on March 21, 2026, Nvidia stock was last trading at $172.70, per Yahoo Finance.

That said, Goldman Sachs analysts feel that CEO Jensen Huang’s keynote delivered exactly what the bulls needed to hear: clearer demand visibility and a stronger case that AI spending isn’t slowing down.

Wedbush analyst Dan Ives, who recently praised the AI bellwether following its first day at GTC 2026, echoed that sentiment.

Ives said the company is still “alone at the top of the AI mountain,” expanding its reach across everything from compute and networking to inference and robotics.

Ives also highlighted Nvidia’s massive lead over competitors in chips during a recent CNBC interview.

With greater clarity expected around hyperscaler spending and powerful new models built on Blackwell, Goldman sees a far steadier pipeline of catalysts that will keep momentum firmly on Nvidia’s side.

Wall Street updates Nvidia price targets after GTC 2026

Rosenblatt Securities: $325
Bank of America: $300
Bernstein: $300
Morgan Stanley: $260
Benchmark: $250
UBS: $245
Sources: Yahoo Finance, Investing.com
Goldman Sachs sees Nvidia’s GTC takeaways reinforcing AI dominance

Goldman Sachs analysts came away from Nvidia’s high-profile GTC event with a view that supported the stock’s earlier gains while reinforcing its bullish long-term setup.

A lot of that has to do with investors having more concrete visibility into where growth could come from next.

More Nvidia:

Nvidia stock gets major reality check on ‘$100B’ number
Nvidia CEO delivers blunt 7-word rebuttal on software stocks
Bank of America resets Nvidia price target after earnings

Naturally, a big part of that came from Nvidia’s massive $1 trillion revenue disclosure in data center sales through 2027. That alone helps answer a major concern among AI investors, especially those who believe that AI-led infrastructure spending might crest this year.

Another huge part of the conference was Nvidia’s major push into Groq’s LPX rack, a sign that the tech behemoth wants a much more sizable role in the next leg of AI demand.
Nvidia’s GTC keynote draws investor attention as analysts digest implications for future AI demand trends.Morris/Bloomberg via Getty Images
Goldman’s Nvidia bull case by the numbers

12-month price target: $250.00
Nvidia stock price in the note: $183.22
Implied upside: 36.4%
Revenue forecast: $215.0 billion for 1/26, $393.6 billion for 1/27E, $521.5 billion for 1/28E, and $634.8 billion for 1/29E
EPS forecast: 4.52 for 1/26, 8.97 for 1/27E, 12.29 for 1/28E, and 15.41 for 1/29E
P/E ratio: 35.0x for 1/26, 20.4x for 1/27E, 14.9x for 1/28E, and 11.9x for 1/29E
FCF yield, a cash flow return metric: 2.5% for 1/26, 4.1% for 1/27E, 6.5% for 1/28E, and 7.8% for 1/29E

Here are four of the biggest takeaways from Goldman’s bullish note.

The cleanest takeaway is that Nvidia has a lot more visibility into its data center business through 2027, projecting north of $1 trillion (a massive $500 billion jump from its previous outlook) in combined compute and networking revenue from its Blackwell and Rubin platforms.

Nvidia just revealed a new inference-focused system built with Groq that could handle real-world AI workloads a lot more efficiently. For perspective, it can deliver up to 35 times better performance per watt and unlock 10 times more sales potential for complex AI models.

On networking, Nvidia said it was using both copper and optical rather than choosing between the two. So the new systems, like its Spectrum-X switches and Rubin-based racks, are tailor-made to scale massive AI clusters, with setups supporting up to 576 GPUs working together.

Finally, Nvidia is pushing harder into “agentic AI” with tools such as NemoClaw, which enable businesses to run autonomous AI systems efficiently. The overall goal is to make AI agents more practical and enterprise-ready.

Related: Bank of America reveals S&P 500 ‘cheat sheet’

On top of that, the bank sees the setup supported by multiple future catalysts, including clearer hyperscaler capital spending plans and new large language models trained on Blackwell, which should strengthen Nvidia’s tremendous performance edge.

Nonetheless, Nvidia’s bull case is far from bulletproof.
The firm flagged plenty of risks, including a marked slowdown in AI infrastructure spending, growing competitive pressures that could impact its market share, margin erosion as rivals get much more aggressive, and supply constraints limiting Nvidia’s ability to meet demand.

Nvidia’s recent earnings performance history

Nvidia has delivered four consecutive quarterly EPS beats, while its top-line growth has stayed consistently above the 50% mark in each period.

So despite its detractors, it’s clear that, at least from a fundamental standpoint, Nvidia has solidified its position as the market’s most compelling AI growth story. It also underscores that demand for its AI chips and related infrastructure has been remarkably resilient.

FQ4 2026 (Jan 2026): EPS 1.62 (beat by 0.08), revenue 68.13B (beat by 1.90B), year-over-year growth 73.21%
FQ3 2026 (Oct 2025): EPS 1.30 (beat by 0.04), revenue 57.01B (beat by 2.06B), YoY growth 62.49%
FQ2 2026 (Jul 2025): EPS 1.05 (beat by 0.04), revenue 46.74B (beat by 687.48M), YoY growth 55.60%
FQ1 2026 (Apr 2025): EPS 0.81 (beat by 0.06), revenue 44.06B (beat by 807.34M), YoY growth 69.18%
Source: Seeking Alpha
Nvidia stock returns vs. Roundhill Magnificent 7 ETF

1W: Nvidia stock -4.19% versus Roundhill Magnificent 7 ETF -2.62%
1M: Nvidia stock -9.02% versus Roundhill Magnificent 7 ETF -6.76%
6M: Nvidia stock -2.25% versus Roundhill Magnificent 7 ETF -10.35%
YTD: Nvidia stock -7.40% versus Roundhill Magnificent 7 ETF -11.51%
1Y: Nvidia stock 45.70% versus Roundhill Magnificent 7 ETF 25.04%
Source: Seeking Alpha
Related: Bank of America revamps Micron stock price target post earnings
The genius and the danger of STRC: How Strategy’s new funding model bends so it doesn’t break
Strategy’s STRC has become a major bitcoin accumulation tool, but analysts warn the risks aren’t as clear as the marketing makes them out to be.
Stocks are teetering on the edge of correction territory. Why the ‘TACO trade’ could flop.
The once-reliable trade on Wall Street, that President Trump “always chickens out,” could be torpedoed by the Iran conflict.