North Korea’s Lazarus Group has a new attack vector that allows it to exploit an apparently routine business call as a gateway into a target’s systems.
Google’s Gemini can now run on a single air-gapped server — and vanish when you pull the plug
Cirrascale Cloud Services today announced it has expanded its partnership with Google Cloud to deliver the Gemini model on-premises through Google Distributed Cloud, making it the first neocloud provider to offer Google’s most advanced AI model as a fully private, disconnected appliance. The announcement, timed to coincide with Google Cloud Next 2026 in Las Vegas, addresses a stubborn problem that has plagued regulated industries since the generative AI boom began: how to access frontier-class AI models without surrendering control of your data.

The offering packages Gemini into a Dell-manufactured, Google-certified hardware appliance equipped with eight Nvidia GPUs and wrapped in confidential computing protections. Enterprises and government agencies can deploy the system inside Cirrascale’s data centers or their own facilities, fully disconnected from the internet and from Google’s cloud infrastructure. The product enters preview immediately, with general availability expected in June or July.

In an exclusive interview with VentureBeat ahead of the announcement, Dave Driggers, CEO of Cirrascale Cloud Services, described the deployment as “the next step of the partnership” and “being able to offer their most important model they have, which is Gemini.” He was emphatic about what customers would be getting: “It is full blown Gemini. It’s not pulled,” he told VentureBeat. “Nothing’s missing from it, and it’ll be available in a private scenario, so that we can guarantee them that their data is secure, their inputs are secure, their outputs are secure.”

The move signals a deepening shift in the enterprise AI market, where the most capable models are migrating out of hyperscaler data centers and into customers’ own racks — a reversal of the cloud computing orthodoxy that defined the past decade.

The impossible tradeoff that kept banks and governments on the AI sidelines

For years, organizations in financial services, healthcare, defense and government faced a binary choice: access the most powerful AI models through public cloud APIs, exposing sensitive data to third-party infrastructure, or settle for less capable open-source models they could host themselves. Cirrascale’s new offering attempts to eliminate that tradeoff entirely.

Driggers described how the trust problem escalated in stages. First, companies worried about handing their proprietary data to hyperscalers. Then came a deeper realization. “They started realizing, holy crap, when my users type stuff in, they’re giving private information away — and the output is private too,” Driggers told VentureBeat. “And then the hyperscalers said, ‘Your prompts and the responses? That’s our stuff. We need that in order to answer your question.’” That was the moment, he argued, when the demand for fully private AI became impossible to ignore.

Unlike Google Distributed Cloud, which Google already offers as its own on-premises cloud extension, the Cirrascale deployment places the actual model — weights and all — outside of Google’s infrastructure entirely. “Google doesn’t own this hardware. We own the hardware, or the customer owns the hardware,” Driggers said. “It is completely outside of Google.”

Driggers drew a sharp distinction between this offering and what competitors provide. When asked about Microsoft Azure’s on-premises deployments with OpenAI models and AWS Outposts, he was blunt: “Those are a lot different. This is the actual model being deployed on prem outside of their cloud. It’s not a cut down version.
It’s the actual model.”

Pull the plug and the model vanishes: how confidential computing guards Google’s crown jewel

The technical underpinnings of the deployment reveal how seriously both Google and Cirrascale are treating the security question. The Gemini model resides entirely in volatile memory — not on persistent storage. “As soon as the power is off, the model is gone,” Driggers explained. User sessions operate through caches that clear automatically when a session ends. “A company’s user inputs, once that session’s over, they’re gone. They can be saved, but by default, they’re gone,” he said.

Perhaps the most striking security feature is what happens when someone attempts to tamper with the appliance. Driggers described a mechanism that effectively renders the machine inoperable: “You do anything that is against confidential compute, and it’s gone. Not only does the machine turn off, and therefore the model is gone, it actually puts in a marker that says, ‘You violated the confidential compute.’ That machine has to come back to us, or back to Dell or back to Google.” He characterized the appliance as something that “does time bomb itself if something goes wrong.”

This level of protection reflects Google’s own anxiety about releasing its flagship model’s weights into environments it doesn’t control. The appliance is effectively a vault: the model runs inside it, but nobody — not even the customer — can extract or inspect the weights. The confidential computing envelope ensures that even physical possession of the hardware doesn’t grant access to the model’s intellectual property.

When Google releases a new version of Gemini, the appliance needs to reconnect — but only briefly, and through a private channel. “It does have to get connected back to Google to load the new model. But that can go via a private connection,” Driggers said. For the most security-sensitive customers who can never allow their machine to connect to an outside network, Cirrascale offers a physical swap: “The server will be unplugged, purged, all the data gone, guaranteed it’s gone, a new server will show up with a new version of the model.”

From Wall Street to drug labs, the rush for air-gapped AI is accelerating

Driggers identified three primary drivers of demand: trust, security and guaranteed performance. Financial services institutions top the list. “They’ve got regulatory issues where they can’t have something out of their control. They’ve got to be the one who determines where everything is. It’s got to be air gap,” Driggers said. The minimum deployment footprint — a single eight-GPU server — makes the product accessible in a way that Google’s own private offerings do not. Running Gemini on Google’s TPU-based infrastructure, Driggers noted, requires a much larger commitment. “If you want a private [instance] from Google, they require a much bigger bite, because to build something private for you, Google requires a gigantic footprint. Here we can do it down to a single machine.”

Beyond finance, Driggers pointed to drug discovery, medical data, public-sector research, and any business handling personal information. He also flagged an increasingly critical use case: data sovereignty. “How about your business that’s doing business outside of the United States, and now you’ve got data sovereignty laws in places where GCP is not? We can provide private Gemini in these smaller countries where the data can’t leave.”

The public sector is another major target.
Cirrascale launched a dedicated Government Services division in March as part of its earlier partnership with Google Public Sector around the GPAR (Google Public Sector Program for Accelerated Research) initiative. That program provides higher education and research institutions access to AI tools including AlphaFold, AI Co-Scientist, and Gemini Enterprise for Education. Today’s announcement extends that relationship from the research tooling layer to the model itself.

The performance guarantee is the third pillar. Driggers noted that frontier models accessed through public APIs deliver inconsistent response times — a problem for mission-critical business applications. The private deployment eliminates that variability. Cirrascale layers management software on top of the Gemini appliance that allows administrators to prioritize users, allocate tokens by role, adjust context window sizes, and load-balance across multiple appliances and regions (a toy sketch of that kind of policy appears at the end of this article). “Your primary data scientists or your programmers may need to have really large context windows and get priority, especially maybe nine to five,” Driggers explained, “but yet, the rest of the time, they want to share the Gemini experience over a wider group of people.” He also noted that agentic AI workloads, which can run around the clock, benefit from the ability to consume unused capacity during off-peak hours — a scheduling flexibility that public cloud deployments don’t easily support.

Seat licenses, token billing and all-you-can-eat pricing: a model built for enterprise flexibility

The pricing model reflects Cirrascale’s broader philosophy of meeting customers where they are. Driggers described several consumption options: seat-based licensing (with both enterprise and standard tiers), per-token billing, and flat “all-you-can-eat” pricing per appliance. The minimum commitment is a single dedicated server — the appliances are not shared between customers in any configuration. “We’ll meet the customer, what they’re used to,” Driggers said. “If they’re currently taking a seat license, we’ll create a seat license for them.”

Customers can also choose to purchase the hardware outright while still consuming Gemini as a managed service, an arrangement Cirrascale has offered since its earliest days in the AI wave. Driggers said OpenAI has been a customer since 2016 or 2017, and in that engagement, OpenAI purchased its own GPUs while Cirrascale “took those GPUs, incorporated them into our servers and storage and networking, and then presented it back as a cloud service to them so they didn’t have to manage anything.”

That flexible ownership model is particularly relevant for universities and government-funded research institutions, where mandates often require a specific mix of capital expenditure, operating expenditure, and personnel investment. “A lot of government funding requires a mixture of CapEx, OPEX and employment development,” Driggers said. “So we allow that as well.”

Inside the neocloud that built the world’s first eight-GPU server — and just landed Google’s biggest AI model

Cirrascale’s announcement arrives during a period of explosive growth for the neocloud sector — the tier of specialized AI cloud providers that sit between the hyperscalers and traditional hosting companies. The neocloud market is projected to be worth $35.22 billion in 2026 and is growing at a compound annual growth rate of 46.37%, according to Mordor Intelligence.
Leading neocloud providers include CoreWeave, Crusoe Cloud, Lambda, Nebius and Vultr; these companies specialize in GPU-as-a-Service for AI and high-performance computing workloads.

But Cirrascale occupies a different niche within this booming category. While companies like CoreWeave have focused primarily on providing raw GPU compute at scale — CoreWeave boasts a $55.6 billion backlog — Cirrascale has positioned itself around private AI, managed services and longer-term engagements rather than on-demand elastic compute. Driggers described the company as “not an on-demand place” but rather a provider focused on “longer-term workloads where we’re really competing against somebody doing it back on prem.”

The company’s history supports that claim. Cirrascale traces its roots to a hardware company that “designed the world’s first eight GPU server in 2012 before anybody thought you’d ever need eight GPUs in a box,” as Driggers put it. It pivoted to pure cloud services roughly eight years ago and has since built a client roster that includes the Allen Institute for AI, which in August 2025 tapped Cirrascale as the managed services provider for a $152 million open AI initiative funded by the National Science Foundation and Nvidia. Earlier this month, Cirrascale announced a three-way alliance with Rafay Systems and Cisco to deliver end-to-end enterprise AI solutions combining Cirrascale’s inference platform, Rafay’s GPU orchestration, and Cisco’s networking and compute hardware.

The private AI era is arriving faster than anyone expected

The Gemini partnership is the highest-profile move yet — and it taps into a broader industry current. The push to move frontier AI out of the public cloud and into private infrastructure is no longer a niche demand. Industry analysts predict that by 2027, 40% of AI model training and inference will occur outside public cloud environments. That projection helps explain why Google is willing to let its crown-jewel model run on hardware it doesn’t own, in data centers it doesn’t operate, managed by a company in San Diego. The alternative — watching regulated enterprises default to open-source models or to Microsoft’s Azure OpenAI Service — is apparently a worse outcome.

The announcement also carries major implications for Google’s competitive positioning. Microsoft has built its enterprise AI strategy around the Azure OpenAI Service and its deep partnership with OpenAI, while AWS has invested in Amazon Bedrock and its own on-premises solutions through Outposts. Google Cloud Platform still trails both rivals in market share, though Q4 cloud revenue rose 48% year-over-year. Enabling Gemini to run on third-party infrastructure via partners like Cirrascale broadens its distribution surface in exactly the segments — government, finance, healthcare — where Microsoft and Amazon have historically held advantages. For Cirrascale, the partnership represents a chance to differentiate sharply in a market where most neoclouds are competing on GPU availability and price.

Driggers expects rapid uptake in the second half of 2026. “It’s going to be crazy towards the end of this year,” he said. “Major banks will finally do stuff like this, because they can secure it. They can do it globally. Big research institutions who have labs all over the world will do these types of things.” He predicted other frontier model providers will follow with similar offerings soon, and he doesn’t see Gemini as the end of the story.
“We really think that the enterprise have been waiting for private AI, not just Gemini, but all sorts of private AI,” Driggers said.

That may be the most telling line of all. For three years, the AI revolution has been defined by a simple bargain: send your data to the cloud and get intelligence back. Cirrascale’s bet — and increasingly, Google’s — is that the biggest customers in the world are done accepting those terms. The most powerful AI on the planet is now available on a single locked box that can sit in a bank vault, a university basement, or a government facility in a country where Google has no data center. The cloud, it turns out, is finally ready to come back down to earth.
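As promised above, here is a toy sketch of what the role-based token budgeting and time-of-day prioritization Driggers described could look like in practice. Cirrascale has not published how its management layer actually works; every role name and number below is invented purely for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical per-role policy: a daily token budget, a context-window cap,
# and the hours of day when the role gets priority. None of these names or
# numbers come from Cirrascale; they only illustrate the kind of policy
# described in the article.
@dataclass
class RolePolicy:
    daily_tokens: int      # max tokens this role may consume per day
    max_context: int       # largest context window this role may request
    priority_hours: range  # hours (0-23) when this role runs at full priority

POLICIES = {
    "data_scientist": RolePolicy(5_000_000, 1_000_000, range(9, 17)),   # nine-to-five priority
    "general_staff":  RolePolicy(200_000,   32_000,    range(0, 24)),
    "agent_workload": RolePolicy(2_000_000, 128_000,   range(17, 24)),  # soaks up off-peak capacity
}

def admit(role: str, requested_context: int, used_today: int,
          now: datetime | None = None) -> bool:
    """Return True if the request runs at full priority right now."""
    policy = POLICIES[role]
    hour = (now or datetime.now()).hour
    return (used_today < policy.daily_tokens
            and requested_context <= policy.max_context
            and hour in policy.priority_hours)

if __name__ == "__main__":
    # A data scientist asking for a 500k-token context, 2M tokens into the day:
    print(admit("data_scientist", requested_context=500_000, used_today=2_000_000))
```

A production scheduler would also handle queueing, preemption and load-balancing across appliances, but the admit-or-defer decision above captures the basic shape of the policy knobs the article describes.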
The modern data stack was built for humans asking questions. Google just rebuilt its own for agents taking action.
Enterprise data stacks were built for humans running scheduled queries. As AI agents increasingly act autonomously on behalf of businesses around the clock, that architecture is breaking down — and vendors are racing to rebuild it. Google’s answer, announced at Cloud Next on Wednesday, is the Agentic Data Cloud.

The architecture has three pillars:

Knowledge Catalog. Automates semantic metadata curation, inferring business logic from query logs without manual data steward intervention.

Cross-cloud lakehouse. Lets BigQuery query Iceberg tables on AWS S3 via a private network with no egress fees.

Data Agent Kit. Drops MCP tools into VS Code, Claude Code and Gemini CLI so data engineers describe outcomes rather than write pipelines.

“The data architecture has to change now,” Andi Gutmans, VP and GM of Data Cloud at Google Cloud, told VentureBeat. “We’re moving from human scale to agent scale.”

From system of intelligence to system of action

The core premise behind Agentic Data Cloud is that enterprises are moving from human-scale to agent-scale operations.

Historically, data platforms have been optimized for reporting, dashboarding, and some forecasting — what Google characterizes as “reactive intelligence.” In that model, humans interpret data and decide what to do.

Now, with AI agents increasingly expected to take actions directly on behalf of the business, Gutmans argued that data platforms must evolve into systems of action.
“We need to make sure that all of enterprise data can be activated with AI, that includes both structured and unstructured data,” Gutmans said. “We need to make sure that there’s the right level of trust, which also means it’s not just about getting access to the data, but really understanding the data.”

The Knowledge Catalog is Google’s answer to that problem. It is an evolution of Dataplex, Google’s existing data governance product, with a materially different architecture underneath. Where traditional data catalogs required data stewards to manually label tables, define business terms and build glossaries, the Knowledge Catalog automates that process using agents.

The practical implication for data engineering teams is that the Knowledge Catalog scales to the full data estate, not just the curated subset that a small team of data stewards can maintain by hand. The catalog covers BigQuery, Spanner, AlloyDB and Cloud SQL natively, and federates with third-party catalogs including Collibra, Atlan and Datahub. Zero-copy federation extends semantic context from SaaS applications including SAP, Salesforce Data360, ServiceNow and Workday without requiring data movement.

Google’s lakehouse goes cross-cloud

Google has had a data lakehouse called BigLake since 2022. Initially it was limited to Google data, but in recent years it has gained limited federation capabilities that let enterprises query data stored elsewhere.

Gutmans explained that the previous federation worked through query APIs, which limited the features and optimizations BigQuery could bring to bear on external data. The new approach is storage-based sharing via the open Apache Iceberg format. Whether the data sits in Amazon S3 or in Google Cloud, he argued, no longer makes a difference.
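From the client side, the promise is that a cross-cloud table gets queried exactly like a native one. Here is a minimal sketch using the google-cloud-bigquery Python library, assuming a BigLake Iceberg table backed by S3 has already been registered; the project, dataset and table names are hypothetical.

```python
# pip install google-cloud-bigquery
from google.cloud import bigquery

# Assumes application-default credentials are configured.
client = bigquery.Client(project="my-analytics-project")  # hypothetical project

# `sales_lake.orders_iceberg` stands in for a BigLake table whose Iceberg
# data files live in an Amazon S3 bucket. Nothing at the query layer marks
# it as cross-cloud; the private interconnect is invisible to this code.
query = """
    SELECT region, SUM(amount) AS revenue
    FROM `my-analytics-project.sales_lake.orders_iceberg`
    GROUP BY region
    ORDER BY revenue DESC
"""

for row in client.query(query).result():
    print(f"{row.region}: {row.revenue:,.2f}")
```

That invisibility is the point: per Google, the interconnect routing and egress handling happen below the query layer, so existing SQL and client code need not change.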
“This truly means we can bring all the goodness and all the AI capabilities to those third-party data sets,” he said.

The practical result is that BigQuery can query Iceberg tables sitting on Amazon S3 via Google’s Cross-Cloud Interconnect, a dedicated private networking layer, with no egress fees and price-performance Google says is comparable to native AWS warehouses. All BigQuery AI functions run against that cross-cloud data without modification. Bidirectional federation, now in preview, extends to Databricks Unity Catalog on S3, Snowflake Polaris and the AWS Glue Data Catalog using the open Iceberg REST Catalog standard.

From writing pipelines to describing outcomes

The Knowledge Catalog and cross-cloud lakehouse solve the data access and context problems. The third pillar addresses what happens when a data engineer actually sits down to build something with all of it.

The Data Agent Kit ships as a portable set of skills, MCP tools and IDE extensions that drop into VS Code, Claude Code, Gemini CLI and Codex. It does not introduce a new interface (a minimal sketch of the MCP mechanics follows at the end of this article).

The architectural shift it enables is a move from what Gutmans called a “prescriptive copilot experience” to intent-driven engineering. Rather than writing a Spark pipeline to move data from source A to destination B, a data engineer describes the outcome — a cleaned dataset ready for model training, a transformation that enforces a governance rule — and the agent selects whether to use BigQuery, the Lightning Engine for Apache Spark or Spanner to execute it, then generates production-ready code.

“Customers are kind of sick of building their own pipelines,” Gutmans said. “They’re truly more in the review kind of mode, than they are in the writing the code mode.”

Where Google and its rivals diverge

The premise that agents require semantic context, not just data access, is shared across the market. Databricks has Unity Catalog, which provides governance and a semantic layer across its lakehouse. Snowflake has Cortex, its AI and semantic layer offering. Microsoft Fabric includes a semantic model layer built for business intelligence and, increasingly, agent grounding.

The dispute is not over whether semantics matter — everyone agrees they do. The dispute is over who builds and maintains them.

“Our goal is just to get all the semantics you can get,” Gutmans explained, noting that Google will federate with third-party semantic models rather than require customers to start over.

Google is also positioning openness as a differentiator, with bidirectional federation into Databricks Unity Catalog and Snowflake Polaris via the open Iceberg REST Catalog standard.

What this means for enterprises

Google’s argument — and one echoed across the data infrastructure market — is that enterprises need to catch up on three fronts:

Semantic context is becoming infrastructure. If your data catalog is still manually curated, it will not scale to agent workloads — and Gutmans argues that gap will only widen as agent query volumes increase.

Cross-cloud egress costs are a hidden tax on agentic AI. Storage-based federation via open Iceberg standards is emerging as the architectural answer across Google, Databricks and Snowflake. Enterprises locked into proprietary federation approaches should be stress-testing those costs at agent-scale query volumes.

The pipeline-writing era is ending, Gutmans argues. Data engineers who move toward outcome-based orchestration now will have a significant head start.
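Google has not published the Data Agent Kit’s actual tool surface, but the MCP plumbing it relies on is an open standard with an open-source Python SDK. As a flavor of how a “describe the outcome” tool might be exposed to an IDE agent, here is a minimal sketch; the server name, tool, and returned plan are all hypothetical and are not Google’s implementation.

```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

# Hypothetical MCP server exposing one "describe the outcome" tool. This is
# not Google's Data Agent Kit; it only illustrates how MCP tools plug into
# agent-aware editors such as VS Code, Claude Code or Gemini CLI.
mcp = FastMCP("data-outcomes")

@mcp.tool()
def request_dataset(outcome: str, freshness_hours: int = 24) -> str:
    """Accept a natural-language description of a desired dataset
    (e.g. 'cleaned orders table ready for model training') and return
    a plan the engineer can review before anything executes."""
    # A real implementation would call a planning backend that picks an
    # engine (BigQuery, Spark, etc.); this stub just echoes a reviewable plan.
    return (
        f"PLAN: build dataset for outcome '{outcome}' "
        f"(refresh every {freshness_hours}h); engine selection deferred."
    )

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-aware client can attach
```

Once a client is pointed at a server like this, it discovers the tool automatically, which is what lets the same kit "drop into" several different editors without a new interface.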
Lana Del Rey’s Debut Album Reaches A Landmark Before Her New Set Drops
Lana Del Rey’s breakout debut full-length Born to Die reaches 350 weeks on the U.K.’s albums chart. It is far and away her most successful release.
Democrats Didn’t Discover The Insurance Crisis. They Created It
Health insurers have grown bigger, more powerful and more deeply embedded in our healthcare system than ever before. But if Democrats are serious about fixing the problem, they’re more than a decade late.
Why a global oil spike will hit the U.S. harder than China
China has multiple sources of energy imports and it has prepared well for a sustained period of high oil prices. Unlike America.
The signal bitcoin momentum traders have been waiting for is here
What you need to know for April 22, 2026
Tesla Full Self-Driving faces its biggest challenge yet
On X (the former Twitter) these days, much of the conversation around Tesla Full Self-Driving (Supervised) is positive.

In between retweeting race-baiting posts from white supremacist accounts, CEO Elon Musk, who also owns the social media platform, sometimes reposts videos of customers happy with Tesla FSD, and much (though not all) of the conversation around FSD lauds the technology.

But on other social media platforms, a growing number of discontented Tesla owners express frustration over a range of issues. Some drivers have organized to file a class-action lawsuit on behalf of 3,000 people in California who are being left out of the company’s latest FSD upgrade.

Tom LoSavio, the lead plaintiff in the case, purchased his Model S in 2017 for $100,000 and paid another $8,000 for lifetime access to FSD, according to The Wall Street Journal.

However, in the nine years since, he has grown disenchanted with Tesla, alleging that Musk and the company made repeated false claims about the tech’s capabilities while charging thousands of dollars for pricey upgrades that didn’t exist then, and still don’t.

At the heart of the lawsuit is Musk and Tesla’s repeated promise that most Teslas on the road today will one day be capable of autonomy. But the company’s latest move with its Hardware 4 chip is leaving old-school Tesla owners out in the cold.

Tesla lawsuit rests on 2016 Elon Musk promise

Tesla CEO Elon Musk has made plenty of promises that haven’t come to fruition and has blown past countless deadlines that he set himself, but one promise from 2016 is at the heart of a class-action lawsuit that now has 3,000 plaintiffs.

Tesla began including early versions of its self-driving tech in Teslas in 2014. Then, in 2015, Musk promised that Teslas would be able to drive themselves within two years. In 2016, according to the lawsuit, Tesla said that all new cars built from then on would have the hardware required for full self-driving capabilities, with Musk claiming that by the end of 2017, a Tesla could drive itself from Los Angeles to New York City. Last year, TheStreet covered two Tesla enthusiasts who documented their failed attempt to do just that. They didn’t even make it out of California, although plenty of Tesla owners claim they have made the trip successfully.

Tesla has broken that promise multiple times since 2017, as the more sophisticated FSD technology required hardware updates to the company’s computers and cameras, which it began offering in 2020 and 2021.

Some customers, like Tom LoSavio, who paid the $8,000 fee for lifetime access, were upgraded for free, while others who paid the monthly subscription price paid $1,000 for the 2020/2021 upgrade. But then Tesla upgraded the hardware again in 2023, for a fourth time, and started selling new cars with its latest chip, meaning those who had been either upgraded for free or paid to upgrade just a couple of years prior were once again running on outdated equipment.

Similar lawsuits are popping up internationally. Tesla FSD was approved for use in the Netherlands earlier this month, but only the version running on the latest hardware. So Tesla owners who purchased their vehicles before 2023 are out of luck.

“Why did I buy it? Because I believed they would make it happen,” one Dutch Tesla owner who paid €68,000 in 2019 for a Model 3 Performance, and an additional €6,400 for the upgraded Full Self-Driving capability, told the Journal.
“I just didn’t think it would take them seven years, and still they wouldn’t deliver.”

He’s organizing European Tesla owners into another lawsuit, and a similar class-action suit is making its way through the Australian federal court, accusing the company of selling vehicles “incapable of supporting fully autonomous or close to autonomous driving,” based on the hardware that was purchased years ago.

During Tesla’s earnings call in January 2025, Musk told investors that the company would have to upgrade the computer for customers who bought the lifetime FSD package. “That is the honest answer and that’s going to be painful and difficult. But we’ll get it done,” Musk said at the time, according to the Journal. “Now, I’m kind of glad that not that many people bought the FSD package.”
(Photo caption: A Tesla promise from 2016 is at the heart of a class-action lawsuit that now has 3,000 plaintiffs. Credit: Leong/Washington Post via Getty Images)
Elon Musk’s promises keep investors intrigued

Following a second consecutive year of falling deliveries and its first year of declining revenue in 2025, Tesla has lost a bit of its luster. Analysts at Deutsche Bank expect the bad times to stretch into 2026, but the firm remains bullish on the company, given its future ambitions.

“While the autos business at Tesla may underperform in 2026, we think more attention is directed towards the company’s robotaxi expansion and efforts at humanoid development,” Deutsche Bank analysts said in a recent note. “To the extent that the macro regime doesn’t change materially, we think investors will continue to look beyond weakness in the autos business.”

But the problem with Musk’s promises, as the lawsuits point out, is that he rarely delivers on them.

“I think we will probably have autonomous ride-hailing in probably half the population of the U.S. by the end of the year,” Musk said during the company’s second-quarter earnings call in July 2025. Tesla has about 500 active Tesla Robotaxis operating in pilot programs as of March 2026.

During the company’s third-quarter call, Musk dangled Tesla’s Optimus robot like shiny keys in front of investors, saying Tesla is “on the cusp of something really tremendous” with Optimus, and calling it the “biggest product of all time.”

Musk even made a near-term promise. Tesla will be unveiling Optimus V3 “probably in Q1,” he said. “It won’t even seem like a robot. It’ll seem like a person in a robot suit,” Musk assured investors on the call.

Tesla’s first-quarter earnings call is scheduled for Wednesday, April 22, after the closing bell. There is still no sign of Optimus V3, though the company recently shared news about two patents it filed in 2024 that supposedly give the robot “human-like dexterity.”

Related: Tesla reality plays catch-up with Elon Musk’s promises
Blue Jays’ Rising Star Sends Phillies’ Bryce Harper Message As Concerns Mount
The Toronto Blue Jays’ unique two-way talent singled out the Philadelphia Phillies’ franchise slugger as his team struggles.
Is Living on a Cruise Ship Still a Retirement Bargain?
You’ve finally hit retirement, but you’re realizing it’s proving to be more expensive than you thought.
Lately, we’ve talked a lot about the math of moving to a foreign country to lower your cost of living. But a topic that has roared back into the conversation is retiring at sea. People want to know: Is living on a cruise ship full-time, year-round, actually a deal?
I’ll give you my perspective, but I’ll warn you upfront: The math has changed.
The “Golden Age” of Cheap Living at Sea
Back in the teens — after the Great Recession and before COVID hit — I got questions about this constantly. There was so much buzz about people saving money by living on ships.
For those of you who are long-time listeners, you might remember that my son was obsessed with cruise ships from the age of six. Because of that, we were on ships often, and we met people who were literally living on them all year long. They were booking one cruise after another and paying a fraction of what it would cost to live on land.
I once met a woman who was finishing her second full year on the same ship, in the same cabin. The crew adored her. At that time, it was costing her $40,000 a year for her housing, all her meals, and all her entertainment. It was a legitimate “hack” 10 or 15 years ago.
Why the Math Doesn’t Work Today
If you are looking at this today, you’re dealing with a totally different set of numbers.
The cruise lines survived a near-death experience during COVID. They had to take on monstrous piles of debt just to stay afloat. While some smaller lines went extinct, the major players aren’t just surviving—they are in a new “golden era.” Demand is sky-high, and as a result, so are the prices.
Here is how the cost of living at sea compares to living on land today:
The budget option: That woman I met who spent $40,000? Today, for a tiny, 120-square-foot inside cabin with no window on a budget line, you’re looking at $90,000 to $120,000 a year.
The luxury option: If you want a balcony and a higher-end experience, you are looking at $250,000 or more.
If your goal is to reduce your expenses, living on a ship is a “was,” not an “is.” It is no longer a way to save money compared to living in a modest home in the U.S., and it certainly isn’t cheaper than moving to an affordable country overseas.
The Reality Check: My Brother’s Five-Week Adventure
Even if you have the money — maybe you’re a wealthy retiree and you’re happiest at sea — you have to consider the lifestyle.
About a decade ago, my oldest brother and his wife decided they were going to live on cruise ships full-time. They went all in:
They sold their home.
They sold almost all their possessions.
They reduced their entire lives down to two storage units.
Do you know how long they lasted? Five weeks. After five weeks, they realized that the reality of ship life is very different from the idea. In fact, in the decade since that experiment ended, I don’t think they have set foot on a cruise ship.
Final Thoughts
If you have saved a lot and you truly love the ocean, go live at sea for a while. But don’t do it because you think you’re beating the system.
The days of the $40,000-a-year cruise retirement are over. Today, the cruise lines are the ones making the money — not the passengers. If you’re looking to protect your nest egg, you’re better off keeping your feet on solid ground.