“This broker does not achieve returns better than the S&P 500.”
How to Travel in Retirement While Cutting Costs: The Slow Travel Strategy Gaining Popularity
Traveling is something many people look forward to in retirement. But high prices and the stress of planning can get in the way.
The cost of plane tickets, hotels, dining and tours can add up quickly but there are some smart strategies retirees can use to cut their travel costs without feeling restricted. One of them is embracing the slow travel strategy that is gaining popularity.
Must Read
Experts are Bullish on Gold — Here’s How to Get In
Warren Buffett on Market Volatility — and 3 Ways You Can Take Advantage
What slow travel is, and why it can slash costs
Slow travel involves taking fewer vacations but spending more time in each destination. Instead of spending five days in Rome and another five days in Sydney, a slow traveler may spend two weeks in one of those locations, immersing themselves in the culture. Hilton’s 2025 annual travel trends report found that slow travel is on the rise.
Slow travel means you won’t spend as much money on flights or other forms of transportation. You may also be able to secure lower nightly lodging costs since you are staying throughout the week and you can travel to hot spots during the offseason. Retirees do not have to abide by school schedules when planning their trips.
Some people will stay in the same place a full month or longer (and get discounts from rental property owners who offer lower per-night prices for longer stays).
Where People Are Buying Gold Right Now
American Hartfold Gold – Get an free investor kit, plus see if you qualify for $25,000 in free silver
American Silver & Gold – Free account set up, free insured shipping and free storage for up to 5 years
Explore gold exposure with a gold ETF — Public’s investing app can do this for you
How retirees can make slow travel work financially
Traveling off-peak and booking longer stays will help, but you can complement those efforts by cooking food in the vacation rental’s kitchen, doing your own laundry and relying on public transportation. If you are eager to travel internationally, prioritize vacation spots where the dollar is strong.
Monthly rentals are common for lodging, but you can get creative with home swaps, house-sitting and extended-stay hotels. Compare the total travel costs when assessing options instead of letting nightly rates dictate your decision. It’s also important to assess costs for other parts of your trip, including transportation, food, activities, insurance and cellular data.
Intentionally slowing down on your travels can reduce your cost per day. That mentality ensures that you can see new parts of the world while preserving your nest egg.
The tradeoffs, risks and smart planning tips
Slow travel can save you money, but that doesn’t make it cheap. Staying in the same vacation destination for one month can still cost thousands of dollars, even if you save money with off-peak timing and reasonable lodging. You’re also taking a different type of trip — one that might not let you see every tourist destination on your bucket list.
If you are going abroad, you may have to consider medical coverage and prescription planning. A travel medical insurance policy can be an extra layer of financial protection. You will also have to observe international rules, such as visa or stay-length requirements. Visit Travel.State.gov to prepare for these types of trips, as it contains many travel advisories.
Must Read
Experts are Bullish on Gold — Here’s How to Get In
Warren Buffett on Market Volatility — and 3 Ways You Can Take Advantage
OpenAI’s ChatGPT Images 2.0 is here and it does multilingual text, full infographics, slides, maps, even manga — seemingly flawlessly
It’s been only a few months since OpenAI released its last big improvement to AI image generations in ChatGPT and through its application programming interface (API) — namely, a new image generation model known as GPT-Image-1.5, released in December 2025, which brought about improved instruction following, colors, and lighting.Now, after weeks of testing, the company that kicked off the generative AI boom is unveiling a far more dramatic and even more impressive update: ChatGPT Images 2.0, which has been available not-so-secretly for several weeks on LM Arena AI, a third-party testing platform used by OpenAI and other major AI model providers to get early feedback, under the name “duct tape.”Throughout that time, it’s already blown early users’ minds with its capacity to generate long blocks of text or disparate text panels within the same image, its insanely realistic generation of user interfaces and screenshots from popular websites and platforms, its reproduction of real life figures like OpenAI co-founder and CEO Sam Altman, and its ability to perform web research and put the results into the image itself. Now today, it’s officially rolling out to ChatGPT users on all tiers, and OpenAI confirms it can also produce floor plans, image grids and sets of many smaller images, and character models from multiple angles, and apply almost all of these features to user-uploaded imagery as well. The update, which encompasses the new gpt-image-2 model for API users and a suite of “Thinking” features for ChatGPT subscribers, represents a fundamental shift in how the company views visual media. As the official release notes state, “Images are a language, not decoration. A good image does what a good sentence does—it selects, arranges, and reveals”.OpenAI did not release benchmarks to us ahead of time on ChatGPT Images 2.0, but it is safe to say the model is performing at the “state-of-the-art” based on all the outputs I’ve seen. The move comes as the AI image model space has seen increasing competition, especially with the release of Google’s Nano Banana 2 image generation model (also known as Gemini 3 Pro Image or Gemini 3.1 Pro Image) in February 2026, which also offered dense text options “baked into” images similar to ChatGPT Images 2.0. But the latter’s fidelity in reproducing user interfaces, screenshots, and multiple image packs at once seem to exceed even Google’s latest image model’s capabilities in my brief testing and anecdotal usage and observation of other users’ images. OpenAI spokespersons and researchers re-iterated the company’s commitments to safety and tagging its image outputs with metadata as AI generated in the face of rising reports — including one recently from The New York Times — on AI user-generated characters (AI UGC) being used as the seed for realistic AI videos posted en masse on social media as part of political influence campaigns, including showing support for historically unpopular U.S. President Donald J. Trump with an army of fictitious people masquerading as “real Americans.” When VentureBeat asked in a closed press briefing directly about this story and GPT Images 2.0’s potential for usage in deceptive campaigning or advertising/influence campaigns Adele Li, OpenAI’s Product Lead for ChatGPT Images, responded: “We take safety and security incredibly seriously. That includes anything when it comes to political or election interference. And so while other platforms and companies may not have those safeguards, ChatGPT does, and we take monitoring and protection of our users, as well as the influence that our photos as they are created, incredibly seriously..in the last couple years, we’ve seen a lot more new entrants into the image generation space with different standards and philosophies as ChatGPT, but we’ve stayed steady through all that, and we’re really proud of releasing this model as it relates to advanced capabilities, but doing so in a safe and protected way.”OpenAI has also confirmed that it is deprecating GPT-Image-1.5 as the default model across its suite, though it will remain accessible via the API for legacy support. This transition signals OpenAI’s confidence that the 2.0 model is a superior replacement for both casual and high-value creative tasks.The reasoning era of AI image generationThe most significant technical advancement in Images 2.0 is the integration of OpenAI’s “O-series” reasoning capabilities. Historically, image models have operated as black boxes: you provide a prompt, and a single output is generated. Images 2.0 introduces an “agentic” approach. When a user selects a “Thinking” model within ChatGPT, the system no longer simply “draws”; it researches, plans, and reasons through the structure of an image before the first pixel is rendered.During a live press briefing, Li demonstrated this reasoning by uploading a complex PowerPoint file regarding internal product strategies. Rather than merely creating a related image, the model synthesized the document’s core data, identified the correct logos, and produced a professional poster that preserved the specific stylistic inputs of the original file.In my brief testing — I was given access last night and tested it on a few generations this morning — ChatGPT Images 2.0 is the first image model from OpenAI and one of only two (Nano Banana 2 being the other) that can seemingly accurately reproduce a map of the extent of the Aztec, Maya, and Inca empires at their respective heights along with a fully legible legend, making it useful for educational or internal training purposes on global knowledge and geography.This reasoning capability also allows the model to search the web in real-time to ensure visual accuracy for current events or specific technical artifacts.This is supported by a significantly more recent knowledge cutoff of December 2025, a major leap from previous iterations that struggled with modern context.The underlying architecture has been “revamped from scratch,” according to Research Lead Boyuan Chen. While Chen declined to confirm if the model uses a traditional diffusion or auto-regressive technique, he described it as a “generalist model” or a “GPT for images” that can handle 3D-style perspective shifts and complex spatial reasoning through simple text prompts.Precision, multilingual support and a “wow” factorThe product experience for Images 2.0 is defined by three major pillars: typography, linguistic diversity, and sequential consistency.One of the most persistent “tells” of AI-generated imagery has been the inability to render legible text. OpenAI claims Images 2.0 marks a “step change” in this department. The model is now capable of producing readable typography even in dense compositions, such as scientific diagrams, menus, or infographic posters.A look at the provided “Magazine Cover” sample (Open Scifi) illustrates this precision: every headline, volume number, and even the “Display until” date on the barcode is rendered with crisp, professional alignment that mirrors human-designed layouts. This capability extends into the “Thinking” mode, where the model can even generate three-page educational visuals—complete with quizzes—that maintain a consistent instructional flow.OpenAI has also addressed a long-standing Western bias in AI imagery. Images 2.0 is described as a “polyglot” model with significant gains in non-Latin script rendering. Specifically, the model now supports high-fidelity text generation in Japanese, Korean, Chinese, Hindi, and Bengali.In the “Global Language” diagram provided, which explains the water cycle, the model successfully renders complex Korean characters (Hangul) within an educational layout. The text is not just translated; it is “rendered correctly but with language that flows coherently,” ensuring that labels and explanations feel natively integrated into the design.For creators working on storyboards or brand campaigns, the most impactful new feature is the ability to generate up to eight distinct images from a single prompt. Crucially, these images maintain “character and object continuity” across the series.Li noted that this solves a “cumbersome” workflow where users previously had to prompt one image at a time and manually stitch them together. This feature enables the creation of entire manga sequences, children’s books, or a family of social media graphics that share the same visual DNA.Licensing and availabilityOpenAI’s rollout strategy reflects a clear push toward professional and enterprise adoption. While the base model is available to all users—including those on the free tier—the advanced “Thinking” and “Pro” capabilities are reserved for paid tiers.Free Users: Have access to the base ImageGen 2.0 model for standard tasks.Plus and Pro Users: Can access “Thinking” capabilities, which include tool use, web search, and multi-image generation.Pro Users: Receive additional access to “ImageGen Pro” models for more advanced image generation.API Developers: Can integrate gpt-image-2, which supports resolutions up to 4K (currently in beta) and flexible aspect ratios ranging from a wide 3:1 to a tall 1:3.Pricing in the API is as follows, echoing GPT-Image-1.5, the predecessor model, but actually shaving off $2 on the output side:Image
$8.00 for inputs
$2.00 for cached inputs
$30.00 for outputs
Text
$5.00 for inputs
$1.25 for cached inputs
$10.00 for outputsWhat is clear so far is that OpenAI is describing three practical layers of access, even if it has not published a precise tier-by-tier matrix. The baseline is ChatGPT Images 2.0, which OpenAI’s blog post states is available to all ChatGPT and Codex users and includes the core model improvements: better instruction following, stronger text rendering, multilingual gains, broader aspect ratios, and more polished, production-usable outputs. Above that is “thinking”, which the release defines more concretely: when a thinking model is selected, the system can take more time, use the web, analyze uploaded materials, reason through layout before generating, and produce multiple distinct images at once, including up to eight coherent outputs with continuity. In the briefing, Li also framed thinking and Pro as “juiced-up” versions of the base model with tool use, and said these advanced modes are slower, not faster, because they do more reasoning and search behind the scenes. What remains unclear is the exact feature boundary between Thinking and Pro. The materials say Pro users get access to more advanced image generation, but they do not spell out whether that means higher quality, higher limits, higher resolution, more outputs, or some other advantage distinct from thinking itself.For enterprise users, the safest way to think about the differences is not as three totally separate products, but as a spectrum from fast default generation to slower, more agentic, more structured generation. If a team needs quick creative drafts, marketing concepts, simple graphics, or everyday image edits, the base Images 2.0 model appears to be the relevant default. If the task involves factual grounding, transforming internal documents into explainers, creating multi-image sets, or maintaining consistency across a sequence of assets, the more important distinction is whether the organization has access to thinking-enabled outputs. Until OpenAI provides a clearer Pro-versus-Thinking breakdown, enterprise buyers should treat “thinking” as the meaningful functional upgrade and treat “Pro” as a possibly higher-end access tier whose exact incremental benefits still need clarification before procurement or workflow planning.Safety standardsOpenAI’s says ChatGPT Images 2.0 offers a”multi-layered stack” of safety protocols, including:Provenance: Adhering to industry standards for watermarking so that AI-generated images are identifiable.Model Safeguards: Using advanced perception models to filter out harmful or abusive content for both adults and children.Active Monitoring: Enforcing user policies through real-time reporting.Li emphasized that while their philosophy is to “maximize user creativity,” they maintain strict policies against election interference. What it means for enterprise usersThe shift from Images 1.5 to 2.0 is more than a resolution bump. By integrating reasoning, OpenAI is attempting to solve the “intent gap” that has plagued AI art since its inception. When you ask an AI for an “infographic about supply and demand,” you aren’t just looking for a picture; you are looking for a logical layout of information.The “Interior Design” sample (Japandi Furnishing Concept) highlights this systemic thinking. The model didn’t just generate a room; it created a cohesive floor plan, a color palette, a list of materials, and “inspiration” shots that all adhere to a singular aesthetic. This is what OpenAI calls moving from a “tool” to a “visual system”. However, this increased capability comes with a trade-off in speed. For the professional user, this is likely a worthwhile exchange: waiting an extra minute for a “production-ready asset” is still significantly faster than the hours required for manual design.As ChatGPT Images 2.0 rolls out, it marks the beginning of an era where AI doesn’t just assist in making art, but in conducting “economically valuable creative tasks”. Whether it can truly replace the intentionality of a human designer remains to be seen, but with 2K resolution, multilingual fluency, and the ability to “think” before it acts, OpenAI has certainly closed the distance.
The AI governance mirage: Why 72% of enterprises don’t have the control and security they think they do
Decision makers at 72% of organizations claim to have two or more AI platforms that they identify as their “primary” layer, according to a survey of 40 enterprise companies conducted by VentureBeat last month, revealing real gaps in security and control. For enterprise management and technical leaders, and especially security leaders, these multiple AI platforms extend the attack surfaces of most enterprises at a time when AI-driven attacks have become increasingly potent.The multiple platforms — which include offerings from hyperscaler or AI labs like Microsoft Azure, Google, OpenAI or Anthropic, or big application companies like Epic, Workday or ServiceNow — reflect a state of sprawl that has emerged as these big software providers rush to offer their own AI to their enterprise customers. Those customers, in their own rush to scale AI, are finding they aren’t building a singular strategy — in fact they may be building a collection of contradictions.The strategic paradox: why leading enterprises are building around their vendorsFor example, take the strategic paradox faced by Mass General Brigham (MGB) hospital system, which has 90,000 employees and is the largest employer in Massachusetts. The hospital system last year had to shut down an uncontrolled number of internal proof of concepts that had sprouted up as employees had gotten carried away with AI projects, said CTO Nallan “Sri” Sriraman at the VentureBeat AI Impact event in Boston on March 26, which focused on the challenges of scaling AI. Instead, the company decided it was better to wait for the software giants it already uses to deliver on their AI roadmaps. Since these companies have so many resources, and were making AI a top priority themselves, it made no sense for MGB to try to build its own AI layer that would be duplicative, he said. “Why are we building it ourselves?” he asked. “Leverage it.”Yet, even then, Sriraman’s team has been forced to build workarounds, where those companies haven’t done enough. For example, MGB has just completed a “full-scaled” custom build around Microsoft’s Copilot — to get essentially everything offered by that tool — by putting a “skin” around Copilot to handle the safety and data privacy concerns the major model providers haven’t yet mastered. Specifically, MGB needed a way for employees to prompt the AI and not have their protected health information (PHI) leaked back to the Copilot LLM provider, OpenAI. The new secure platform, which can support up to 30,000 users, is really the ultimate contradiction: Even though the company has a mandate to leverage the AI provided by the bigger companies, it needs to build around its failures. The contradiction goes even further. These software vendors used by MGB — which also include Epic, Workday and ServiceNow — are all now building agents for their AI, all operating differently. So MGB has to invest in building a “control plane that coordinates and orchestrates all of these agents,” Sriraman said. “That’s where our investment is going to be.”He noted that companies like his are “discovering and experimenting as the landscape keeps shifting.” The marketplace is “still nascent,” he said, which makes decisions difficult.The “six blind men” problemSriraman explained the current vendor landscape with an analogy: “When you ask six blind men to touch an elephant and say, what does this elephant look like?” Sriraman said. “You’re gonna get six different answers.”What emerges from the research VentureBeat conducted in the first quarter, along with conversations like the one in Boston, is a situation that we at VentureBeat are calling a “governance mirage.” While many enterprises say they have adequate governance, in reality they haven’t created clear accountability or specific guardrails, evaluations or security processes to ensure that governance.The data of disconnect: confidence vs. systematic oversightThe research comes from surveys across January, February and March by VentureBeat of enterprise companies with 100 or more employees, with 40 to 70 qualified respondents per topic area — covering agentic orchestration, AI security, RAG and governance. The data lacks statistical significance in many areas and should be treated as directional.The research on governance found that a majority, or 56%, of respondents said they are “very confident” that they’d detect a misbehaving AI model, suggesting that most decision-makers believe they have sufficient basic governance at their companies. However, nearly a third of respondents have no systematic mechanism to detect AI misbehavior until it surfaces through users or audits. In a world where telemetry leakage accounts for 34% of GenAI incidents (Wiz), and the global average breach cost has hit $4.4M (IBM 2025 Cost of a Data Breach), finding out after the damage is done is the default for too many companies.Moreover, 43% of respondents say a central team owns AI governance. That sounds reassuring — until you look at what’s happening everywhere else. Twenty-three percent say governance is unclear or actively contested between teams. Twenty percent say each platform team governs independently. Six percent say no one has formally addressed it. The rest said they were unsure who owned it.More telling is the barrier data. When asked about the single biggest obstacle to governing AI across platforms, “no single owner or accountable team” ranked second at 29% — just behind vendor opacity. Accountability structure and lack of vendor transparency are the two dominant failure modes, and they compound each other: Without a central owner, no one has the mandate to demand transparency from the vendors. The day-two bill: managing sprawl, creep, and lock-inThe scaling trap: Red Hat’s warningBrian Gracely, Senior Director at Red Hat, who also spoke at the VentureBeat Boston event last month, addressed the infrastructure side of this sprawl, warning that many enterprises are falling into a trap of deceptive initial wins.Gracely noted that the barrier to entry is almost nonexistent at the start, with nearly anyone able to spin up a project using a credit card and an API key. “Day zero is very, very easy,” Gracely said. “Day two is when the bill comes due.”Red Hat is positioning its software layer (OpenShift AI) as the necessary buffer to prevent enterprises from getting buried in a single provider’s proprietary ecosystem. Gracely’s point is direct: If your control system is built entirely inside one cloud provider’s toolset, you are effectively “renting a cage.” The illusion of speed in the early pilot phase often hides a technical debt that becomes obvious the moment you try to move your AI work to a different platform.Gracely illustrated this with a recent example. A senior leader from Red Hat’s centralized CTO office spent part of her vacation contributing to an open-source agent project called OpenClaw, which became widely popular in the first quarter. Within days of her name appearing as a project maintainer, Red Hat was fielding calls from major New York banks. Their problem was immediate: They realized they already had upwards of 10,000 employees bringing “claws” — agent-based tools — into their infrastructure with zero centralized oversight.Breaches caused by employees working on these sorts of unapproved technologies are costly. These so-called “shadow AI” incidents cost on average $670K more than standard incidents, according to IBM.Red Hat’s Gracely noted that while organizations can try to shut down these unapproved ports, they eventually have to figure out how to make them productive and secure — a task that requires a serious investment in an orchestration or platform layer.The dynamic defensive: MassMutual’s refusal to betWhile some enterprise companies seek an “AI operating system” that oversees all of their AI technologies and apps, others are simply refusing to sign the check. Sears Merritt, CIO and head of enterprise technology at MassMutual, is managing the governance conundrum by intentionally staying in a state of high-velocity flexibility.”Things are so dynamic, it’s hard to know which of the AI vendors will end up on top,” Merritt said at the Boston event. For that reason, MassMutual is refusing to enter any long-term contracts with AI vendors. Merritt’s strategy of “dynamic defensive” highlights a core finding of our research: Vendor popularity is changing radically month to month. Anthropic, for example, went from 0% in January to nearly 6% in February, in the number of respondents reporting what agent orchestration technology they were using. Again, the sample size was small, at 70 respondents. Still, even if directional, the dynamic landscape suggests picking a “primary” winner today is a fool’s errand.The January figure likely reflects survey composition: Respondents represent the broader enterprise market, not the developer community where Anthropic has seen its strongest early traction.Until recently, most organizations had signed up early with leaders like Microsoft and OpenAI as their main orchestration providers, due to their early lead with Copilot. Our finding that Anthropic is just now pushing into enterprise agent orchestration may be a confirmation of the recent excitement around that platform. One possible explanation is that enterprises already using Claude for model inference are now routing through Anthropic’s native tooling rather than third-party frameworks — though the sample is too small to draw firm conclusions.The rise of “platform creep”The leading providers are also shifting toward “managed agents,” as reflected by Anthropic’s recent announcement. This offering suggests possible continued platform creep, whereby providers like OpenAI and Anthropic take over more and more of the AI infrastructure — most specifically, in this case, the memory of agentic session details. And there the trap is set. Once your session data and orchestration live inside a provider’s proprietary database, you aren’t just using a model; you are living in its ecosystem. Moreover, persistent agent memory is a prime target for memory poisoning via injected instructions that influence every future interaction. And when that memory lives in a provider’s database, you lose your own forensic capability. The security irony: The fox guarding the hen houseWe are seeing this platform creep in our data as well. The most jarring finding in our Q1 data is what we call the “Security Irony”: the fact that the providers most responsible for creating enterprise AI risk are the same ones enterprises are using to manage it.Respondents said the top selection criterion for AI orchestration platforms was “security and permissions generally” (37.1%), beating out other criteria like cost, flexibility, control and ease of development. Yet, the market is choosing convenience over sovereignty. According to our survey, 26% of enterprises in February were using OpenAI as their primary security solution — the very same provider whose models create the risks they are trying to secure. That trend only seemed to strengthen in March, though, as stated before, we want to be careful. Our sample size is small, and this data should only be taken as directional. It’s not clear whether enterprises are choosing OpenAI as a security solution, or just relying on its built-in security features offered by Microsoft Azure (which partnered with OpenAI when it pushed its Copilot solution aggressively in 2024) because customers were already on that platform.Beyond the data, there are anecdotal signs that OpenAI’s enterprise position may be shifting. Anthropic’s Claude Code drew significant attention among developers early this year alongside the Claude 4.6 model. The subsequent announcement of Mythos, its security-focused model, prompted interest from enterprise security teams given its ability to identify vulnerabilities. OpenAI has also announced a security-focused model, GPT-5.4-Cyber.Our data may also point to a drop in OpenAI’s relative position in a few enterprise AI categories. One area was data-retrieval, where OpenAI again leads among third-party providers, but we saw an increase in the number of respondents instead using in-house solutions for retrieval — perhaps a sign that AI models and agents are getting better at natively being able to use tools to call directly to companies’ existing databases, and that custom code is often a way companies are building this in. However, here again we feel our data is at best directional for now.We are asking the fox to guard the hen house. Hyperscaler security features (like those from OpenAI, Azure, and Google) are winning, because they are already integrated into the platforms enterprises are using. But it creates a single-provider dependency. As agents gain the power to modify documents, call APIs and access databases, the “governance mirage” suggests we have control, while the data shows we are simply clicking “I agree” on whatever the hyperscalers offer. The resulting risks, however, include content injection, privilege escalation and data exfiltration.The path forward: toward a unified control planeThe search for the “Dynatrace for AI”So, what is the way out? Sriraman argued that the industry desperately needs a “central observability platform” — a “Dynatrace for AI” — that provides full end-to-end visibility, including model drift and safety prompting, agent behavior analytics, privilege escalation alerts, and forensic logging. He is currently working with a number of potential providers to deliver on this.The “swivel chair” warningSriraman warned that without a unified control plane, enterprises are at risk of sliding back into a fragmented “swivel chair” world — reminiscent of the early, inefficient days of Robotic Process Automation (RPA) — where employees are forced to constantly jump between different siloed AI tools to finish a single workflow. “We don’t want to create a world where you have to switch to do something here and then go back to the platform to do something else,” he said.But that desire for a single control plane conflicts with the desire to avoid lock-in. Our data shows the market has settled on the “hybrid control plane.” In other words, the most popular situation among our respondents (at 34.3%), was to use model provider-native solutions like Copilot Studio or OpenAI assistants for some workflows, while also running external options like LangGraph or custom orchestration for others. Smaller numbers of companies reported being more dogmatic here, whether that be deliberately removing the model provider from the orchestration layer entirely, relying only on custom orchestration tools, or relying only on the model provider’s technologyEnterprises trust no single provider enough to give them full control, yet they lack the engineering capacity to build entirely from scratch.The bottom line: The “big red button”Visibility and integration are only half the battle. In a high-stakes industry like healthcare, Sriraman argues that any legitimate control plane must also offer a hard-stop capability. “We need a big red button,” he said. “Kill it. We should be able to have that … without that, don’t put anything in the operational setting.” In fact, such a kill switch was formally called for by the security community group OWASP as part of a recommended security framework.The “governance mirage” is the belief that you can scale AI without deciding who owns the control and security plane.If you are one of the 72% of organizations claiming multiple “primary” platforms, be careful because you may not have a strategy; you may have a conflict of interest. It suggests that the winner of the war between the AI behemoths — OpenAI, Anthropic, Google, Microsoft, etc. — won’t necessarily be the one with the best model, but the one that manages to sit above the models and help enterprises enforce a single version of the truth. That may be difficult to achieve, though, given that companies won’t want lock-in with a single player. The data suggests enterprises are already resisting that outcome — and may need to formalize that resistance. Enterprises arguably need to own their control plane with independent security instrumentation, not wait for a vendor to win that role for them.
Mob boss John Gotti’s grandson is headed to prison for a $1.1 million Covid fraud and crypto scheme
he mob boss’ grandson defrauded the U.S. government’s Covid-19 relief system out of $1.1 million and invested at least half of it to invest in crypto businesses.
Intel’s stock has been so strong that even skeptics have changed their minds
The chip maker’s stock is on track for the best month in at least 46 years — leading to no fewer than two analyst upgrades two days before it reports earnings.
D.C. Climate Week 2026: Climate Policy Meets Clean Energy Innovation
D.C. Climate Week is underway for the second time, with more than 250 low-cost or free events across Washington. Join and explore what it has to offer.
Billionaire Michael Dell Announces $750 Million Donation For Hospital, Research Center In Austin
The donation will fund a new hospital and medical research campus.
Prices for World Cup public transportation range from free to $150. Here’s what’s going on.
In one instance, train fares are being inflated from $26 to $150.
Crypto’s massive exploit may force big banks to rethink their blockchain plans, Jefferies warns
The $293 million Kelp DAO exploit has exposed critical infrastructure risks, leading Jefferies to suggest that traditional financial firms may pause their blockchain initiatives to prioritize security.