On Saturday, April 18, In Our Own Voice: National Black Women’s Reproductive Justice Agenda brought together elected officials, TV and film writers, creators, and community leaders for a powerful conversation unpacking how media narratives can impact policy.
Live Nation Verdict Could Be A Ticketmaster Disaster
A federal jury has found that Live Nation’s ticketing business constitutes an illegal monopoly. How will this long-awaited landmark decision affect the live entertainment industry?
Going ‘Bogle Style’ in Your 50s: Jack Bogle’s Portfolio Shift You Should Know
Investing doesn’t require sifting through earnings reports and analyst predictions, trying to identify the stocks that are about to take off. In fact, the key to reaching your long-term financial goals is often to keep investing simple.
Vanguard founder Jack Bogle pioneered low-cost investing, which ushered in a new era of affordable mutual funds and exchange-traded funds (ETFs). If you’re in your 50s and nearing retirement, you may be wondering how to shift your portfolio to align with your risk tolerance, time horizon and goals. Bogle’s low-cost investing model can help – and implementing it can be fairly simple.
The compounding power of lower fees
Investing in low-cost index funds instead of funds with much higher expense ratios won’t change your returns overnight, but it can result in significant savings in the long run. There are plenty of ETFs that mirror the S&P 500 and other popular benchmarks with expense ratios below 0.10%. Funds charging 1% expense ratios look a lot less attractive in comparison.
For instance, someone with $500,000 in their portfolio invested in funds with a 1% expense ratio will pay $5,000 in fees by the end of the year. But someone investing the same amount in funds with 0.25% expense ratios will pay just $1,250.
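The math behind that comparison, and how the gap compounds over time, can be sketched in a few lines of Python. The 7% gross return and 15-year horizon below are illustrative assumptions, not figures from the article.

```python
def annual_fee(balance, expense_ratio):
    """Annual fund fee in dollars for a given balance and expense ratio."""
    return balance * expense_ratio

def future_value(balance, gross_return, expense_ratio, years):
    """Grow a balance at the gross return minus fees, compounded annually."""
    return balance * (1 + gross_return - expense_ratio) ** years

# The article's example: $500,000 at a 1% vs. a 0.25% expense ratio
high_fee = annual_fee(500_000, 0.01)    # about $5,000 per year
low_fee = annual_fee(500_000, 0.0025)   # about $1,250 per year

# Assumed 7% gross annual return over an assumed 15-year horizon
drag = future_value(500_000, 0.07, 0.0025, 15) - future_value(500_000, 0.07, 0.01, 15)
print(f"Ending-balance gap from fees alone: ${drag:,.0f}")
```

Because fees come out of the base that keeps compounding, the ending-balance gap under these assumptions grows well beyond the simple sum of the annual fee differences.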
Why simplicity reduces risk for late-stage investors
Bogle’s recommended approach is to invest in a handful of broad index funds and hold them for long periods. That way, your wealth doesn’t depend on a single stock or sector: your portfolio still rises during bull markets, while losses are often less severe during bear markets and corrections.
Consistently buying index fund shares on a fixed schedule, such as monthly (a strategy known as dollar-cost averaging), and holding them for the long haul helps you avoid making investing decisions based on emotion.
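A dollar-cost-averaging schedule like the one described can be simulated in a few lines; the monthly amount and price series below are hypothetical.

```python
def dollar_cost_average(monthly_amount, prices):
    """Buy a fixed dollar amount at each price; return total shares and average cost."""
    shares = sum(monthly_amount / price for price in prices)
    invested = monthly_amount * len(prices)
    return shares, invested / shares

prices = [100, 80, 125, 90, 110]  # hypothetical monthly fund prices
shares, avg_cost = dollar_cost_average(500, prices)
avg_price = sum(prices) / len(prices)

# Fixed-dollar buys pick up more shares when prices dip, so the average
# cost per share never exceeds the simple average of the prices paid.
assert avg_cost <= avg_price
print(f"{shares:.2f} shares at an average cost of ${avg_cost:.2f}")
```

The mechanical schedule, rather than any market-timing judgment, is what keeps emotion out of the buy decision.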
How to go ‘Bogle-style’ in your 50s
Implementing Bogle’s investing advice once you’re in your 50s may not require substantial changes. Start with an audit: see which funds you’re invested in and how much you’re paying in fees. If you aren’t diversified across asset classes (stocks and bonds, domestic and international holdings, and companies of different sizes and sectors), move into funds that offer broader diversification. If you’re paying more in fees than you’re comfortable with, you can sell shares in high-cost funds and invest in more affordable ones.
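For the fee side of that audit, one simple measure is the blended, dollar-weighted expense ratio across all holdings; the two-fund portfolio below is hypothetical.

```python
def weighted_expense_ratio(holdings):
    """Portfolio-wide expense ratio, weighted by each holding's dollar value.

    holdings: list of (dollar_value, expense_ratio) tuples.
    """
    total = sum(value for value, _ in holdings)
    return sum(value * ratio for value, ratio in holdings) / total

# Hypothetical mix: a cheap broad index fund plus a pricey active fund
portfolio = [(300_000, 0.0003), (200_000, 0.0100)]
blended = weighted_expense_ratio(portfolio)
print(f"Blended expense ratio: {blended:.2%}")
```

Even a modest allocation to a 1% fund pulls the blended ratio well above the index fund’s 0.03%, which is exactly the kind of drag an audit like this is meant to surface.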
As you’re nearing retirement, it can make sense to max out your retirement savings accounts so you can enjoy tax advantages along the way. Roth accounts shield qualified withdrawals from taxes, while traditional retirement plans offer pre-tax contributions and tax-deferred growth.
Compare your current tax bracket with the bracket you expect to be in during retirement to determine which type of account to prioritize. Tax diversification can also help reduce risk and costs in retirement: many investors spread money across employer-sponsored retirement accounts like 401(k)s, individual retirement accounts (IRAs), and taxable brokerage accounts.
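The bracket comparison can be made concrete with simple arithmetic; the contribution, growth multiple, and tax rates below are illustrative assumptions only.

```python
def after_tax_outcomes(contribution, rate_now, rate_retirement, growth_multiple):
    """After-tax value of one pre-tax dollar amount under Roth vs. traditional.

    Roth: taxed now, withdrawals tax-free.
    Traditional: pre-tax contribution, taxed on withdrawal.
    """
    roth = contribution * (1 - rate_now) * growth_multiple
    traditional = contribution * growth_multiple * (1 - rate_retirement)
    return roth, traditional

# Assumed 24% bracket today vs. 12% in retirement, money doubling by withdrawal
roth, traditional = after_tax_outcomes(10_000, 0.24, 0.12, growth_multiple=2.0)
print(f"Roth: ${roth:,.0f}  Traditional: ${traditional:,.0f}")
```

With identical rates the two outcomes are equal; a lower expected retirement bracket favors traditional contributions, and a higher one favors Roth.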
Google’s new Deep Research and Deep Research Max agents can search the web and your private data
Google on Monday unveiled the most significant upgrade to its autonomous research agent capabilities since the product’s debut, launching two new agents — Deep Research and Deep Research Max — that for the first time allow developers to fuse open web data with proprietary enterprise information through a single API call, produce native charts and infographics inside research reports, and connect to arbitrary third-party data sources through the Model Context Protocol (MCP).

The release, built on Google’s Gemini 3.1 Pro model, marks an inflection point in the rapidly intensifying race to build AI systems that can autonomously conduct the kind of exhaustive, multi-source research that has traditionally consumed hours or days of human analyst time. It also represents Google’s clearest bid yet to position its AI infrastructure as the backbone for enterprise research workflows in finance, life sciences, and market intelligence — industries where the stakes of getting information wrong are extraordinarily high.

“We are launching two powerful updates to Deep Research in the Gemini API, now with better quality, MCP support, and native chart/infographics generation,” Google CEO Sundar Pichai wrote on X. “Use Deep Research when you want speed and efficiency, and use Max when you want the highest quality context gathering & synthesis using extended test-time compute — achieving 93.3% on DeepSearchQA and 54.6% on HLE.”

Both agents are available starting today in public preview via paid tiers of the Gemini API, accessible through the Interactions API that Google first introduced in December 2025.

Why Google built two research agents instead of one

The launch introduces a tiered architecture that reflects a fundamental tension in AI agent design: the tradeoff between speed and thoroughness.

Deep Research, the standard tier, replaces the preview agent Google released in December and is optimized for low-latency, interactive use cases.
It delivers what Google describes as significantly reduced latency and cost at higher quality levels compared to its predecessor. The company positions it as ideal for applications where a developer wants to embed research capabilities directly into a user-facing interface — think a financial dashboard that can answer complex analytical questions in near-real time.

Deep Research Max occupies the opposite end of the spectrum. It leverages extended test-time compute — a technique where the model spends more computational cycles iteratively reasoning, searching, and refining its output before delivering a final report. Google designed it for asynchronous, background workflows: the kind of task where an analyst team kicks off a batch of due diligence reports before leaving the office and expects exhaustive, fully sourced analyses waiting for them the next morning.

The Google DeepMind team framed the distinction on X: “Deep Research: Optimized for speed and efficiency. Perfect for interactive apps needing quicker responses. Deep Research Max: It uses extra time to search and reason. Ideal for exhaustive context gathering and tasks happening in the background.”

“Deep Research was our first hosted agent in the API and has gained a ton of traction over the last 3 months, very excited for folks to test out the new agents and all the improvements, this is just the start of our agents journey,” Logan Kilpatrick, who leads developer relations for Google’s AI efforts, wrote on X.

MCP support lets the agents tap into private enterprise data for the first time

Perhaps the most consequential feature in today’s release is the addition of Model Context Protocol support, which transforms Deep Research from a sophisticated web research tool into something more closely resembling a universal data analyst.

MCP, an emerging open standard for connecting AI models to external data sources, allows Deep Research to securely query private databases, internal document repositories, and specialized third-party data services — all without requiring sensitive information to leave its source environment. In practical terms, this means a hedge fund could point Deep Research at its internal deal-flow database and a financial data terminal simultaneously, then ask the agent to synthesize insights from both alongside publicly available information from the web.

Google disclosed that it is actively collaborating with FactSet, S&P, and PitchBook on their MCP server designs, a signal that the company is pursuing deep integration with the data providers that Wall Street and the broader financial services industry already rely on daily.
The goal, according to the blog post authored by Google DeepMind product managers Lukas Haas and Srinivas Tadepalli, is to “let shared customers integrate financial data offerings into workflows powered by Deep Research, and to enable them to realize a leap in productivity by gathering context using their exhaustive data universes at lightning speed.”

This addresses one of the most persistent pain points in enterprise AI adoption: the gap between what a model can find on the open internet and what an organization actually needs to make decisions. Until now, bridging that gap required significant custom engineering. MCP support, combined with Deep Research’s autonomous browsing and reasoning capabilities, collapses much of that complexity into a configuration step. Developers can now run Deep Research with Google Search, remote MCP servers, URL Context, Code Execution, and File Search simultaneously — or turn off web access entirely to search exclusively over custom data. The system also accepts multimodal inputs including PDFs, CSVs, images, audio, and video as grounding context.

Native charts and infographics turn AI reports into stakeholder-ready deliverables

The second headline feature — native chart and infographic generation — may sound incremental, but it addresses a practical limitation that has constrained the usefulness of AI-generated research outputs in professional settings.

Previous versions of Deep Research produced text-only reports. Users who needed visualizations had to export the data and build charts themselves, a friction point that undermined the promise of end-to-end automation. The new agents generate high-quality charts and infographics inline within their reports, rendered in HTML or Google’s Nano Banana format, dynamically visualizing complex datasets as part of the analytical narrative.

“The agent generates HTML charts and infographics inline with the report. Not screenshots. Not suggestions to ‘visualize this data.’ Actual rendered charts inside the markdown output,” noted AI commentator Shruti Mishra on X, capturing the practical significance of the change.

For enterprise users — particularly those in finance and consulting who need to produce stakeholder-ready deliverables — this transforms Deep Research from a tool that accelerates the research phase into one that can potentially produce near-final analytical products. Combined with a new collaborative planning feature that lets users review, guide, and refine the agent’s research plan before execution, and real-time streaming of intermediate reasoning steps, the system gives developers granular control over the investigation’s scope while maintaining the transparency that regulated industries demand.

How Deep Research evolved from a consumer chatbot feature to enterprise platform infrastructure

Today’s release crystallizes a strategic narrative Google has been building for months: Deep Research is not merely a consumer feature but a piece of infrastructure that powers multiple Google products and is now being offered to external developers as a platform.

The blog post explicitly notes that when developers build with the Deep Research agent, they tap into “the same autonomous research infrastructure that powers research capabilities within some of Google’s most popular products like Gemini App, NotebookLM, Google Search and Google Finance.” This suggests that the agent available through the API is not a stripped-down version of what Google uses internally but the same system, offered at platform scale.

The journey to this point has been remarkably rapid. Google first introduced Deep Research as a consumer feature in the Gemini app in December 2024, initially powered by Gemini 1.5 Pro. At the time, the company described it as a personal AI research assistant that could save users hours by synthesizing web information in minutes.
By March 2025, Google upgraded Deep Research with Gemini 2.0 Flash Thinking Experimental and made it available for anyone to try. Then came the upgrade to Gemini 2.5 Pro Experimental, where Google reported that raters preferred its reports over competing deep research providers by more than a 2-to-1 margin. The December 2025 release was the pivot to developer access, when Google launched the Interactions API and made Deep Research available programmatically for the first time, powered by Gemini 3 Pro and accompanied by the open-source DeepSearchQA benchmark.

The underlying model driving today’s improvements is Gemini 3.1 Pro, which Google released on February 19, 2026. That model represented a significant leap in core reasoning: on ARC-AGI-2, a benchmark evaluating a model’s ability to solve novel logic patterns, 3.1 Pro scored 77.1% — more than double the performance of Gemini 3 Pro. Deep Research Max inherits that reasoning foundation and layers autonomous research behaviors on top of it, achieving 93.3% on DeepSearchQA (up from 66.1% in December) and 54.6% on Humanity’s Last Exam (up from 46.4%).

Google faces a crowded field of competitors building autonomous research agents

Google is not operating in a vacuum. The launch arrives amid intensifying competition in the autonomous research agent space. OpenAI has been developing its own agent capabilities within ChatGPT under the codename Hermes, which includes an agent builder, templates, scheduling, and Slack integration, according to reports circulating on social media. Perplexity has built its business around AI-powered research. And a growing ecosystem of startups is attacking various slices of the automated research workflow.

What distinguishes Google’s approach is the combination of its search infrastructure — which gives Deep Research access to the broadest and most current index of web information available — with the MCP-based connectivity to enterprise data sources.
No other company currently offers a research agent that can simultaneously query the open web at Google Search’s scale and navigate proprietary data repositories through a standardized protocol. The pricing structure also signals Google’s intent to drive adoption: according to Sim.ai, which tracks model pricing, the Deep Research agent in the December preview was priced at $2 per million input tokens and $2 per million output tokens with a 1 million token context window — positioning it as cost-competitive for the volume of research output it generates.

Not everyone greeted the announcement with unalloyed enthusiasm, however. Several users on X noted that the new agents are available only through the API, not in the Gemini consumer app. “Not on Gemini app,” observed TestingCatalog News, while another user wrote, “Google keeps punishing Gemini App Pro subscribers for some reason.” Others raised concerns about the presentation of benchmark results, with one user arguing that Google’s charts could be “misleading” in how they represent percentage improvements. These complaints point to a broader tension in Google’s AI strategy: the company is increasingly directing its most advanced capabilities toward developers and enterprise customers who access them through APIs, while consumer-facing products sometimes lag behind.

What Deep Research Max means for finance, biotech, and the future of knowledge work

The practical implications of today’s launch are most immediately felt in industries that depend on exhaustive, multi-source research as a core business function. In financial services, where analysts routinely spend hours assembling due diligence reports from scattered sources — SEC filings, earnings transcripts, market data terminals, internal deal memos — Deep Research Max offers the possibility of automating the initial research phase entirely.
The FactSet, S&P, and PitchBook partnerships suggest Google is serious about making this work with the data infrastructure that financial professionals already use.

In life sciences, the blog post notes that Google has collaborated with Axiom Bio, which builds AI systems to predict drug toxicity, and found that Deep Research unlocked new levels of initial research depth across biomedical literature. In market research and consulting, the ability to produce stakeholder-ready reports with embedded visualizations and granular citations could compress project timelines from days to hours.

The key question is whether the quality and reliability of these automated outputs will meet the standards that professionals in these fields demand. Google’s benchmark numbers are impressive, but benchmarks measure performance on standardized tasks — real-world research is messier, more ambiguous, and often requires the kind of judgment that remains difficult to automate.

Deep Research and Deep Research Max are available now in public preview via paid tiers of the Gemini API, with availability on Google Cloud for startups and enterprises coming soon.

Eighteen months ago, Deep Research was a feature that helped grad students avoid drowning in browser tabs. Today, Google is betting it can replace the first shift at an investment bank. The distance between those two ambitions — and whether the technology can actually close it — will define whether autonomous research agents become a transformative category of enterprise software or just another AI demo that dazzles on benchmarks and disappoints in the conference room.
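As a rough sketch of the configuration model the article describes, where web search, remote MCP servers, and other tools can be toggled independently in a single call, here is a hypothetical request-payload builder. The field names, agent identifiers, and server URL are invented for illustration and are not the Gemini API’s actual schema.

```python
def build_deep_research_request(prompt, mcp_servers=(), use_web=True):
    """Assemble a hypothetical Deep Research request payload.

    Illustrative only: the keys below mirror the capabilities described
    in the article, not Google's real API schema.
    """
    tools = []
    if use_web:
        tools.append({"type": "google_search"})
    for url in mcp_servers:
        tools.append({"type": "mcp_server", "url": url})  # private data source
    return {
        "agent": "deep-research-max",  # or "deep-research" for low latency
        "input": prompt,
        "tools": tools,  # empty list = research over custom data only
    }

request = build_deep_research_request(
    "Summarize Q1 deal flow against public comparables",
    mcp_servers=["https://mcp.example.internal/deals"],  # hypothetical MCP server
)
```

Passing use_web=False with only MCP servers would correspond to the article’s “turn off web access entirely” mode, where the agent searches exclusively over custom data.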
ChatGPT is so 2025 — here are the real AI gold mines for investors in 2026
Money is flooding into AI. Defense, healthcare and agentics are the big winners, writes Mark Minevich.
The meme-stock frenzy is a warning — these 7 high-quality stocks are better bets
Dividend-paying winners are likely to outperform a meme-themed ETF.
New York sues Coinbase, Gemini over prediction market offerings
New York has become the latest state to argue that prediction market contracts touching on sports and entertainment violate state gambling laws.
Inside The Cocktail Laboratory Where A World’s 50 Best Bar Is Rewriting Colombia’s Spirits Story
In Medellín, Mamba Negra’s new immersive Mamba Lab experience proves that Colombia has always had what it takes to shine on the global cocktail and spirits stage.
Disney Shares New Details About Its Upcoming Resort Near Magic Kingdom
Disney has announced that its newest hotel, Disney Lakeshore Lodge, will open next summer on the shores of Walt Disney World’s Bay Lake near Magic Kingdom.
GE loses $20B in market cap on earnings
GE Aerospace (GE) just delivered the kind of quarter that usually sends a stock higher. The company beat on earnings, revenue, and free cash flow, and demand remains strong across its core aerospace business. But the stock fell, wiping out roughly $20 billion in market value for the company.

The disconnect stems from management not raising its 2026 outlook, leaving investors questioning whether the company’s improving fundamentals are actually translating into higher long-term earnings power.

Q1 beat failed to lift GE’s 2026 outlook

GE Aerospace delivered strong first-quarter results. Adjusted EPS came in at $1.86, well above the roughly $1.60 consensus, while revenue and free cash flow also topped expectations.

But management held 2026 guidance steady. The company kept its adjusted EPS outlook at $7.10 to $7.40 and free cash flow at $8.0 billion to $8.4 billion, even after a clean beat across key metrics. Orders skyrocketed 87% to $23.0 billion, and deliveries rose 43% year over year, which confirms demand remains strong.

Yet that demand did not translate into stronger margins. Operating margin came in at 21.8%, down 200 basis points from a year earlier, pouring cold water on the idea that higher output is already driving operating leverage.

That decline is where most investors’ concern lies. The bull case assumes that as production scales, incremental margins improve and earnings accelerate. Instead, GE delivered a sharp increase in volume yet weaker profitability.

Analysts seemed to believe that GE would turn that backlog into higher-margin earnings, given consensus earnings per share estimates are $7.46 for 2026. Since the company did not move to close that gap, analysts may have to adjust their models and lower outlooks.

Services backlog underpins recurring earnings visibility

The clearest strength in the quarter came from GE’s commercial aftermarket business. Commercial Engines & Services revenue rose 34% to $8.92 billion, while management highlighted a services backlog now totaling over $210 billion.

That backlog is at the core of the investment case because it ties a significant portion of future earnings to an installed engine base, rather than to the timing of new aircraft deliveries. Airlines can delay new orders, but they still need to maintain engines already in service.
This dynamic makes services revenue structurally more valuable. It carries higher margins, better visibility, and less cyclicality than original equipment sales. It also provides a buffer if OEM deliveries remain uneven.

But backlog alone does not close the case. Investors already recognize that demand is strong. The next step is to see services growth translate into a larger share of total profit, not just offset weaker economics in other parts of the business.

Strong industry tailwinds raise the bar for GE’s execution

GE Aerospace operates in a commercial aerospace oligopoly alongside RTX, Boeing, and Safran, where large installed engine bases drive high-margin aftermarket revenue.

Defense & Propulsion Technologies added another layer of support, with revenue increasing 19% to $3.21 billion. On the earnings call, CEO Larry Culp emphasized that commercial aviation demand remains strong, with aftermarket activity picking up as airlines increase flying hours.

But strong industry conditions no longer set GE apart. The market’s reaction shows investors are shifting focus from demand recovery to execution. Since industry tailwinds benefit everyone in the space, what matters now is whether GE can convert demand into stronger incremental margins and cash flow than its peers.

What could drive GE shares higher

- Stronger commercial services backlog conversion, shifting mix toward higher-margin aftermarket revenue
- More shop visits across the installed base, increasing parts and maintenance intensity
- Further supply chain normalization that improves fixed-cost absorption, not just shipment volume
- Clear margin expansion as deliveries rise, showing operating leverage is kicking in
- A formal increase to 2026 EPS guidance, signaling Q1 strength was structural

What could pressure the stock

- Guidance staying below expectations despite strong demand
- Margin pressure from weak cost absorption as production scales
- A heavier mix of original equipment relative to services, diluting profitability
- Continued earnings beats without guidance increases or visible margin improvement

GE’s next test is earnings conversion

GE’s earnings beat confirmed demand strength, but the lack of higher guidance shifted the focus to execution, with investors now waiting for clear evidence that higher volume can drive stronger margins and earnings.

Related: Oracle adds $100B in market cap on major announcement
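The operating-leverage concern running through the article can be expressed as an incremental margin, the share of each additional revenue dollar that reaches operating profit. The revenue figures below are hypothetical; only the 21.8% current margin and the 200-basis-point decline come from the article.

```python
def incremental_margin(prior_revenue, prior_profit, revenue, profit):
    """Operating profit captured per incremental dollar of revenue."""
    return (profit - prior_profit) / (revenue - prior_revenue)

# Hypothetical revenue, with margins compressing from 23.8% to 21.8%
prior_revenue, revenue = 10_000, 12_000
prior_profit = prior_revenue * 0.238
profit = revenue * 0.218

im = incremental_margin(prior_revenue, prior_profit, revenue, profit)
print(f"Incremental margin: {im:.1%}")
```

Operating leverage would show up as an incremental margin above the existing margin; here it lands far below 21.8%, which is the pattern that worried investors in GE’s quarter.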