From March 21 through March 31, LINE FRIENDS hosts the 2nd U.S. ZO&FRIENDS Pop-Up “ZOAFUL DAYS” at LINE FRIENDS New York City, LINE FRIENDS LA Hollywood, and online in the U.S.
5 Career Lessons We Can Learn From Autumn Durald Arkapaw, The Cinematographer Behind ‘Sinners,’ Who Just Made History
Autumn Durald Arkapaw, director of photography for Ryan Coogler’s Sinners and the first Black woman ever recognized in the Oscars’ cinematography category, took home an Oscar for her impactful work.
Wall Street found an ETF tax loophole worth $8.7 billion
Picture this: you’ve held a stock for 20 years, and now it’s worth millions. Your cost basis is almost nothing, so selling triggers a massive tax bill, and you hold.

That’s the trap millions of long-term investors face. The more successful the investment, the more painful the exit.

But a growing number of wealthy Americans have found a way out. They’re not selling and they’re not donating. They’re seeding brand-new ETFs with their appreciated stock and walking away with diversified fund shares.

Here’s what you need to know before this door potentially closes.

Morningstar identifies 39 ETFs seeded with $8.7 billion in appreciated assets

The strategy is called a Section 351 exchange. A new Morningstar analysis, based on research by Brent Sullivan and Elliot Rozner, reveals how fast it’s growing. Between 2021 and 2025, 39 U.S. ETFs launched with roughly $8.7 billion in seed assets from individual investors.

This is no longer a niche technique. Wealth managers are transferring clients’ concentrated, low-basis stocks into newly formed ETFs. The investor receives ETF shares in return. No gain is recognized at the time of transfer if the exchange satisfies Section 351 requirements.

The appeal is straightforward: you reposition appreciated holdings into a diversified strategy, you avoid an immediate capital gains event, and you access the structural tax benefits of the ETF wrapper, including in-kind redemptions that minimize future fund-level realizations.

How Section 351 lets investors swap stock for ETF shares tax-free

Section 351 of the Internal Revenue Code dates back to 1921. Congress created it to help small business owners incorporate without triggering a taxable event. The rule says you can transfer property to a corporation in exchange for stock, tax-free, if the transferors collectively own at least 80% of the new entity.

Wealth managers realized this old rule applies to ETFs. A client contributes a portfolio of appreciated securities to a newly created ETF before launch.
The client receives ETF shares, and the cost basis carries over.

The basic mechanics:
The investor transfers stocks from a taxable account into a brand-new ETF.
The investor receives ETF shares with their original cost basis.
The ETF manager later rebalances into a target portfolio using in-kind transactions.
Capital gains are deferred until the investor eventually sells the ETF shares.

Once inside the ETF wrapper, the manager can rebalance without triggering gains for shareholders. The ETF’s in-kind creation and redemption mechanism handles that. It’s a double layer of tax efficiency stacked on a single transaction.
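The deferral mechanics above reduce to simple arithmetic. Here is a minimal sketch comparing an immediate sale against a 351 exchange with basis carryover. This is illustration, not tax advice: the flat 20% long-term capital gains rate and the dollar amounts are assumptions, and real outcomes depend on the investor’s full tax situation.

```python
# Illustrative sketch (not tax advice): an immediate sale of an appreciated
# position versus seeding a Section 351 ETF, which defers the gain.
# The 20% rate and dollar figures below are hypothetical assumptions.

LTCG_RATE = 0.20  # assumed flat long-term capital gains rate

def immediate_sale(market_value: float, cost_basis: float) -> float:
    """After-tax proceeds if the investor simply sells today."""
    gain = market_value - cost_basis
    return market_value - gain * LTCG_RATE

def section_351_exchange(market_value: float, cost_basis: float) -> tuple[float, float]:
    """Value received and basis carried over when the stock seeds a new ETF.

    No gain is recognized at transfer; the original cost basis carries over
    to the ETF shares, so the tax is deferred, not eliminated.
    """
    etf_share_value = market_value   # full value moves into the wrapper
    carryover_basis = cost_basis     # the embedded gain travels with it
    return etf_share_value, carryover_basis

# A hypothetical $5M position with a $200K basis:
print(immediate_sale(5_000_000, 200_000))   # → 4040000.0 after tax
value, basis = section_351_exchange(5_000_000, 200_000)
print(value, basis)                          # → 5000000 200000
```

The difference is the point of the strategy: the full $5 million stays invested inside the ETF, while the deferred gain comes due only when (or if) the ETF shares are eventually sold.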
The strict diversification test you must pass to qualify

Section 351(e) includes a critical guardrail. The tax-free treatment is denied if the transfer results in “diversification” of the investor’s interests. Treasury regulations enforce this through what’s known as the 25/50 test.

The 25/50 diversification rules:
No single stock can represent more than 25% of the assets you contribute.
Your top five holdings cannot exceed 50% of total contributed assets.
Cash is excluded from the calculation.
ETF holdings are evaluated on a look-through basis.

Related: How to balance your portfolio with global exposure

If your portfolio is already diversified, the exchange qualifies. If it’s concentrated in one or two stocks, it doesn’t. You can’t dump $10 million of Nvidia into an ETF and call it a 351 exchange, as CNBC reported; the portfolio must meet the test at the time of contribution.

The investors and firms driving the $8.7 billion wave

This strategy is not for everyday retail investors, as minimum contributions typically start at $1 million. Alpha Architect, one of the early movers, recommends a minimum portfolio of that size. Cambria Funds’ first 351-seeded ETF, launched in December 2024, carried the same floor.

Large wealth management firms create private ETFs via 351 conversions for their clients. Smaller firms now participate through publicly traded ETFs. Recent launches include Stance’s Sustainable Beta ETF (November 2024), Cambria’s Tax Aware ETF (December 2024), and Longview’s Advantage ETF (February 2025).

More dividend stocks:
Tim Cook quietly hands Apple investors a surprise pay raise
Nancy Pelosi sells $1M of struggling dividend stock
Verizon’s $20 billion acquisition resets dividend outlook

Morningstar senior analyst Daniel Sotiroff told CNBC he expects the trend to accelerate.
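The 25/50 test described above is mechanical enough to check with a few lines of code. The sketch below uses hypothetical tickers and values, excludes cash before computing percentages, and deliberately omits the look-through treatment of contributed ETF holdings, which a real analysis would also require.

```python
# A minimal sketch (not tax advice) of the 25/50 diversification test.
# Holdings maps ticker -> market value of contributed non-cash assets.
# Simplification: no look-through of contributed ETF positions.

def passes_25_50_test(holdings: dict[str, float]) -> bool:
    total = sum(holdings.values())
    if total == 0:
        return False
    # Portfolio weights, largest first
    weights = sorted((v / total for v in holdings.values()), reverse=True)
    top_one_ok = weights[0] <= 0.25          # no single stock above 25%
    top_five_ok = sum(weights[:5]) <= 0.50   # top five at or below 50%
    return top_one_ok and top_five_ok

# A concentrated portfolio fails the test:
print(passes_25_50_test({"NVDA": 10_000_000, "AAPL": 1_000_000}))  # → False

# Twenty equal positions of $100K each (5% apiece) pass easily:
diversified = {f"STOCK{i}": 100_000 for i in range(20)}
print(passes_25_50_test(diversified))  # → True
```

This is exactly why the $10 million-of-Nvidia scenario fails: one position at roughly 91% of contributed assets blows through the 25% single-stock cap.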
For investors with taxable accounts full of embedded gains, 351 exchanges solve a problem that traditional tax-loss harvesting can’t fix.

The IRS is paying attention, and so is Congress

The tax-free treatment is not guaranteed. Morningstar’s analysis flags two aggressive patterns that could invite IRS scrutiny.

Red flags the IRS is watching:
“Stuffing”: Packing a new ETF with highly appreciated assets that don’t fit the fund’s stated strategy.
“Sequential seeding”: Repeatedly creating new ETFs solely to cycle appreciated stock into tax-deferred wrappers.

Both patterns could lead the IRS to recharacterize the transaction and impose immediate capital gains tax. The agency has broad authority under the economic substance doctrine to challenge transactions that lack a business purpose beyond tax avoidance.

Congress has noticed too. Senator Ron Wyden introduced legislation aimed at limiting 351 exchanges’ access to ETFs, Bloomberg reported. The U.S. Treasury Department held early discussions with the Investment Company Institute about potential guidance. Wyden called the strategy a loophole that Congress has tried to close at least three times before.

How the loophole becomes permanent through the step-up in basis

Here’s the part that makes tax experts uneasy. If you defer gains through a 351 exchange and hold those ETF shares until death, your heirs receive a stepped-up cost basis under current law. The embedded gain disappears entirely. No one ever pays the tax.

Under Section 1014 of the Internal Revenue Code, inherited assets reset to fair market value at the date of death. Combined with the 351 exchange, this creates a pathway from concentrated stock to diversified ETF to tax-free inheritance.

Related: Dividend-paying restaurant stock stumbles as gas prices surge

Morningstar’s analysis quotes a 1999 congressional press release comparing these strategies to “a phoenix rising from the ashes.” Each time Congress closes one path, the industry finds another.
The ETF seed is the latest version.

What this means if you’re sitting on large unrealized gains

You probably don’t have $1 million in a single taxable account. Most readers don’t. But the mechanics of this strategy still matter to you, for two reasons.

Why everyday investors should pay attention:
If Congress restricts 351 exchanges, the broader ETF tax advantage (in-kind redemptions) could also come under scrutiny. That affects every ETF you own.
If you hold concentrated positions from stock compensation, startup equity, or long-term holdings, smaller-scale versions of this concept may eventually reach lower minimums.

Understanding how the wealthy defer taxes helps you pressure-test your own tax strategy. If you’re paying more than necessary, a conversation with a tax advisor is worth the fee.

The standard ETF tax advantage already works in your favor. U.S. stock ETFs avoided tax on more than $211 billion in gains in a recent year, Bloomberg found. That’s not just for the ultra-rich. Every time you hold an S&P 500 ETF that distributes zero capital gains, you benefit from the same structural feature.

The risks you should weigh before assuming this strategy is safe

The 351 exchange is legal today. Whether it stays fully intact depends on regulatory and legislative action that is already underway. Here are the practical risks, whether you’re considering the strategy directly or evaluating ETFs that were seeded this way.

Key risks to consider:
IRS recharacterization: If the agency determines a 351 exchange lacks economic substance, the investor owes capital gains tax retroactively.
Legislative risk: Pending proposals from Senator Wyden and Treasury discussions could narrow or eliminate the strategy.
Liquidity lock: Once your assets are in the ETF, you can’t easily pull them out.
Selling ETF shares triggers the deferred gain.
Rebalancing lag: Newly seeded ETFs can take up to 12 months to fully align with their target allocation, per Kitces.com research.
High minimums: Most publicly traded ETFs require $1 million or more to participate.

CFP Charles Sachs of Imperio Wealth Advisors told CNBC he avoids the strategy because it limits client flexibility. Once you’re inside, switching strategies is difficult without triggering the very gains you deferred.

Related: Vanguard Dividend ETF quietly outperforms amid market panic
Elon Musk makes a stunning pledge about his OpenAI lawsuit winnings
Elon Musk has been fighting one of the most closely watched lawsuits in tech history. Now, just weeks before it goes to trial, he has made a pledge that changed the conversation entirely.

In an X post on Sunday, Musk said that if he wins his lawsuit against OpenAI and Microsoft, every dollar of the proceeds will go to charity. “Btw, the proceeds of any legal victory in the OpenAI case will be donated to charity,” he wrote. “I will in no way enrich myself.”

The case is scheduled to go to trial on April 27 in Oakland, California, with jury selection beginning that day and proceedings expected to run through May.

What the Musk-OpenAI lawsuit is actually about

Musk co-founded OpenAI in 2015 alongside Sam Altman and Greg Brockman, contributing approximately $38 million, roughly 60% of the organization’s early seed funding. He says he did so on the understanding that OpenAI would remain a nonprofit dedicated to developing AI for the benefit of humanity.

More Tesla:
Top-rated analyst drops curt 8-word take on Tesla stock
Tesla investors may miss game-changing move
Judge orders Tesla to make major change or halt sales in California

He left the board in 2018, citing potential conflicts of interest with Tesla’s AI development. Since then, OpenAI created a for-profit subsidiary in 2019, accepted billions from Microsoft, and completed a full restructuring into a Public Benefit Corporation in October 2025.

Musk argues that the transformation amounted to fraud. He claims Altman and Brockman induced him to fund and build the organization under false pretenses, then steered it toward a commercial model that enriched themselves and their corporate partners.

The $134 billion damages claim

Musk is seeking between $78 billion and $135 billion in damages from OpenAI and Microsoft, based on calculations from his expert witness, economist C. Paul Wazzan. The theory is that Musk’s early contributions entitled him to a proportional share of what OpenAI subsequently became.

U.S.
District Judge Yvonne Gonzalez Rogers has already raised serious doubts about that number. At a pretrial hearing on March 13, she said a jury would likely see the damages methodology as “pulling these numbers out of the air,” and added she did not find it particularly convincing, Pymnts reported.

Despite those reservations, she declined to throw out the expert testimony, noting that doing so would effectively end the trial before it began. The jury will hear it all.

What OpenAI and Microsoft are saying

OpenAI has called the lawsuit “baseless” and described it as part of an “ongoing pattern of harassment” by Musk, who now runs xAI, a direct competitor to ChatGPT.

The company has also argued that Musk pushed for a for-profit structure at various points and left the organization after demanding a controlling equity stake or the CEO role, both of which the other founders rejected. Microsoft, which holds a $135 billion stake in OpenAI, has denied any wrongdoing and said there is no evidence it aided and abetted any breach of obligations, Computerworld noted.

Why the charity pledge matters

The timing is hard to ignore. With trial less than six weeks away, the pledge reframes the entire narrative around the case.

For months, OpenAI has portrayed the lawsuit as the work of a disgruntled competitor trying to hobble a rival while building his own for-profit AI company. Musk’s pledge undercuts that argument directly. It is difficult to accuse someone of a money grab when they have publicly committed to giving all of it away.

Musk did not specify which organizations would receive the funds, saying only that the focus would be on “safe AGI development,” which aligns with the original mission he claims OpenAI abandoned.

The irony at the center of the case

One of OpenAI’s sharpest lines of attack has been pointing out that Musk is suing over a for-profit pivot while running xAI, his own for-profit AI company.
His counterargument is that there is a difference between building a for-profit company from scratch and converting a nonprofit explicitly founded on different terms.

A judge already found that argument credible enough to put before a jury. Judge Gonzalez Rogers noted at the January hearing that there was “plenty of evidence” that OpenAI’s leadership made assurances the nonprofit structure would be maintained.
OpenAI and Microsoft have said Elon Musk’s lawsuit lacks merit.
Greg Brockman’s diary, now part of the court record, includes a 2017 entry: “I cannot believe that we committed to non-profit if three months later we’re doing b-corp then it was a lie.” Whether a jury finds that damning, or simply the private doubts of a founder navigating a difficult transition, will be one of the central questions when the trial begins.

Key facts in the Musk vs. OpenAI case
Musk’s contribution: $38 million, roughly 60% of OpenAI’s early seed funding
Trial date: April 27, 2026, Oakland, California
Musk’s pledge: All winnings to charity focused on safe AGI development

What happens between now and April 27

The pretrial period is already producing significant legal fireworks. Judge Gonzalez Rogers spent four hours at the March 13 hearing working through dozens of motions, signaling she intends to run a tight trial. She also indicated she is unlikely to allow Musk to pursue punitive damages.

Altman and Brockman are expected to testify. Jurors will review internal communications, including diary entries, emails, and texts exchanged during OpenAI’s transition years.

Musk’s charity pledge has added one more element to an already charged atmosphere. Whatever the jury decides on the legal merits, Musk has made sure this trial will be about more than money. He has framed it as a fight over who controls the most consequential technology of the century, and he has pledged to hand the spoils back to humanity if he wins.

Related: Elon Musk must deliver on Tesla promise in 2026, Deutsche Bank says
Senator Tim Scott says market structure negotiations are advancing
The South Carolina Republican said he might see a draft of stablecoin yield language as soon as this week, and other issues continue to be negotiated.
AMC Brings Back ‘Sound Of Freedom’ For Limited Run—Years After Movie Sparked Political Firestorm
Critics accused “Sound of Freedom” of having ties to the QAnon conspiracy movement upon its release in 2023.
Mistral AI launches Forge to help companies build proprietary AI models, challenging cloud giants
Mistral AI on Monday launched Forge, an enterprise model training platform that allows organizations to build, customize, and continuously improve AI models using their own proprietary data — a move that positions the French AI lab squarely against the hyperscale cloud providers in one of the most consequential and least understood markets in enterprise technology.

The announcement caps a remarkably aggressive week for Mistral, which also released its Mistral Small 4 model, unveiled Leanstral — an open-source code agent for formal verification — and joined the newly formed Nvidia Nemotron Coalition as a co-developer of the coalition’s first open frontier base model. Together, these moves paint the picture of a company that is no longer content to compete on model benchmarks alone and is instead racing to become the infrastructure backbone for organizations that want to own their AI rather than rent it.

Forge goes significantly beyond the fine-tuning APIs that Mistral and its competitors have offered for the past year. The platform supports the full model training lifecycle: pre-training on large internal datasets; post-training through supervised fine-tuning, DPO, and ODPO; and — critically — reinforcement learning pipelines designed to align models with internal policies, evaluation criteria, and operational objectives over time.

“Forge is Mistral’s model training platform,” said Maliena Guy, head of product at Mistral AI, in an exclusive interview with VentureBeat ahead of the launch. “We’ve been building this out behind the scenes with our AI scientists.
What Forge actually brings to the table is that it lets enterprises and governments customize AI models for their specific needs.”

Why Mistral says fine-tuning APIs are no longer enough for serious enterprise AI

The distinction Mistral is drawing — between lightweight fine-tuning and full-cycle model training — is central to understanding why Forge exists and whom it serves.

For the past two years, most enterprise AI adoption has followed a familiar pattern: companies select a general-purpose model from OpenAI, Anthropic, Google, or an open-source provider, then apply fine-tuning through a cloud API to adjust the model’s behavior for a narrow set of tasks. This approach works well for proof-of-concept deployments and many production use cases. But Guy argues that it fundamentally plateaus when organizations try to solve their hardest problems.

“We had a fine-tuning API relying on supervised fine-tuning. I think it was kind of what was the standard a couple of months ago,” Guy told VentureBeat. “It gets you to a proof-of-concept state. Whenever you actually want to have the performance that you’re targeting, you need to go beyond. AI scientists today are not using fine-tuning APIs. They’re using much more advanced tools, and that’s what Forge is bringing to the table.”

What Forge packages, in Guy’s telling, is the training methodology that Mistral’s own AI scientists use internally to build the company’s flagship models — including data mixing strategies, data generation pipelines, distributed computing optimizations, and battle-tested training recipes. She drew a sharp line between Forge and the open-source tools and community tutorials that are freely available today.

“There’s no platform out there that provides you real-world training recipes that work,” Guy said.
“Other open-source repositories or other tools can give you generic configurations or community tutorials, but they don’t give you the recipe that’s been validated — that we’ve been doing for all of our flagship models today.”

From ancient manuscripts to hedge fund quant languages, early customers reveal what off-the-shelf AI can’t do

The obvious question facing any product like Forge is demand. In a market where GPT-5, Claude, Gemini, and a growing fleet of open-source models can handle an enormous range of tasks, why would an enterprise invest the time, compute, and expertise required to train its own model from scratch?

Guy acknowledged the question head-on but argued that the need emerges quickly once companies move beyond generic use cases. “A lot of the existing models can get you very far,” she said. “But when you’re looking at what’s going to make you competitive compared to your competition — everyone can adopt and use the models that are out there. When you want to go a step beyond that, you actually need to create your own models. You need to leverage your proprietary information.”

The real-world examples she cited illustrate the edges of the current model ecosystem. In one case, Mistral worked with a public institution that had ancient manuscripts with missing text from damaged sections. “The models that were available were not able to do this because they’ve never seen the data,” Guy explained. “Digitization was not very good. There were some unique patterns and characters, and so we actually created a model for them to fill in the spans. This is now used by their researchers, and it’s accelerating their publication and understanding of these documents.”

In another engagement, Mistral partnered with Ericsson to customize its Codestral model for legacy-to-modern code translation.
Ericsson, Guy said, has built up half a decade of proprietary knowledge around an internal calling language — a codebase so specialized that no off-the-shelf model has ever encountered it. “The concrete impact is like turning a year-long manual migration process, where each engineer needs six months of onboarding, to something that’s really more scalable and faster,” she said.

Perhaps the most telling example involves hedge funds. Guy described working with financial firms to customize models for proprietary quantitative languages — the kind of deeply guarded intellectual property that these firms keep on-premises and never expose to cloud-hosted AI services. Using Forge’s reinforcement learning capabilities, Mistral helped one hedge fund develop custom benchmarks and then trained the model to outperform on them, producing what Guy called “a unique model that was able to give them the competitive edge that was needed.”

How Forge makes money: license fees, data pipelines, and embedded AI scientists

Forge’s business model reflects the complexity of enterprise model training. According to Guy, it operates across several revenue streams. For customers who run training jobs on their own GPU clusters — a common requirement in highly regulated or IP-sensitive industries — Mistral does not charge for compute. Instead, the company charges a license fee for the Forge platform itself, along with optional fees for data pipeline services and what Mistral calls “forward-deployed scientists” — embedded AI researchers who work alongside the customer’s team.

“No competitor out there today is kind of selling this embedded scientist as part of their training platform offering,” Guy said.

This model has clear echoes of Palantir’s early playbook, where forward-deployed engineers served as the critical bridge between powerful software and the messy reality of enterprise data.
It also suggests that Mistral recognizes a fundamental truth about the current state of enterprise AI: the technology alone is not enough. Most organizations lack the internal expertise to design effective training recipes, curate data at scale, or navigate the treacherous optimization landscape of distributed GPU training.

The infrastructure itself is flexible. Training can happen on Mistral’s own clusters, on Mistral Compute (the company’s dedicated infrastructure offering), or entirely on-premises within the customer’s own data centers. “We have all these different cases, and we support everything,” Guy said.

Keeping proprietary data off the cloud is Forge’s sharpest selling point

One of the sharpest points of differentiation Mistral is pressing with Forge is data privacy. When customers train on their own infrastructure, Guy emphasized, Mistral never sees the data at all.

“It’s on their clusters, it’s with their data — we don’t see anything of it, and so it’s completely under their control,” she said. “I think this is something that sets us apart from the competition, where you actually need to upload your data, and you have a black box effect.”

This matters enormously in sectors like defense, intelligence, financial services, and healthcare, where the legal and reputational risks of exposing proprietary data to a third-party cloud service can be deal-breakers. Mistral has already partnered with organizations including ASML, DSO National Laboratories Singapore, the European Space Agency, Home Team Science and Technology Agency Singapore, and Reply — a roster that suggests the company is deliberately targeting the most data-sensitive corners of the enterprise market.

Forge also includes data pipeline capabilities that Mistral has developed through its own model training: data acquisition, curation, and synthetic data generation. “Data is a critical piece of any training job today,” Guy said. “You need to have good data.
You need to have a good amount of data to make sure that the model is going to be good performing. We’ve acquired, as a company, really great knowledge building out these data pipelines.”

In the age of AI agents, Mistral argues that custom models still matter more than MCP servers

The timing of Forge’s launch raises an important strategic question. The AI industry in 2026 has been consumed by agents — autonomous AI systems that can use tools, navigate multi-step workflows, and take actions on behalf of users. If the future belongs to agents, why does the underlying model matter? Can’t companies simply plug into the best available frontier model through an MCP server or API and focus their energy on orchestration?

Guy pushed back on this framing with conviction. “The customers that we’ve been working on — some of these specific problems are things that no MCP server would ever solve,” she said. “You actually need that intelligence. You actually need to create that model that will help you solve your most critical business problem.”

She also argued that model customization is essential even in purely agentic architectures. “There are some agentic behaviors that you need to bring to the model,” Guy said. “It can be about reasoning patterns, specific types of documentation, making sure that you have the right reasoning traces. Even in these cases where people are going completely agentic, you still need model customization — like reinforcement learning techniques — to actually get the right level of performance.”

Mistral’s press release makes this connection explicit, arguing that custom models make enterprise agents more reliable by providing deeper understanding of internal environments: more precise tool selection, more dependable multi-step workflows, and decisions that reflect internal policies rather than generic assumptions.

The platform also supports an “agent-first” design philosophy.
Forge exposes interfaces that allow autonomous agents — including Mistral’s own Vibe coding agent — to launch training experiments, find optimal hyperparameters, schedule jobs, and generate synthetic data. “We’ve actually been building Forge in an AI-native way,” Guy said. “We’re already testing out how autonomous agents can actually launch training experiments.”

Mistral Small 4, Leanstral, and the Nvidia coalition: the week that redefined the company’s ambitions

To fully appreciate Forge’s significance, it helps to view it alongside the other announcements Mistral made in the same week — a barrage of releases that together represent the most ambitious expansion in the company’s short history.

Just yesterday, Mistral released Leanstral, the first open-source code agent for Lean 4, the proof assistant used in formal mathematics and software verification. Leanstral operates with just 6 billion active parameters and is designed for realistic formal repositories — not isolated math competition problems. On the same day, Mistral launched Mistral Small 4, a mixture-of-experts model with 119 billion total parameters but only 6 billion active per query, running 40 percent faster than its predecessor while handling three times more queries per second. Both models ship under the Apache 2.0 license — one of the most permissive open-source licenses in wide use.

And then there is the Nvidia Nemotron Coalition. Announced at Nvidia’s GTC conference, the coalition is a first-of-its-kind collaboration between Nvidia and a group of AI labs — including Mistral, Perplexity, LangChain, Cursor, Black Forest Labs, Reflection AI, Sarvam, and Thinking Machines Lab — to co-develop open frontier models.
The coalition’s first project is a base model co-developed by Mistral AI and Nvidia, trained on Nvidia DGX Cloud, which will underpin the upcoming Nvidia Nemotron 4 family of open models.

“Open frontier models are how AI becomes a true platform,” said Arthur Mensch, cofounder and CEO of Mistral AI, in Nvidia’s announcement. “Together with Nvidia, we will take a leading role in training and advancing frontier models at scale.”

This coalition role is strategically significant. It positions Mistral not merely as a consumer of Nvidia’s compute infrastructure but as a co-creator of the foundational models that the broader ecosystem will build upon. For a company that is still a fraction of the size of its American competitors, this is an outsized seat at the table.

Forge takes aim at Amazon, Microsoft, and Google — and says they can’t go deep enough

Forge enters a market that is already crowded — at least on the surface. Amazon Bedrock, Microsoft Azure AI Foundry, and Google Cloud Vertex AI all offer model training and customization capabilities. But Guy argued that these offerings are fundamentally limited in two respects.

First, they are cloud-only. “In one set of cases, it’s very easy to answer — they want to run this on their premises, and so all these tools that are available on the cloud are just not available for them,” Guy said. Second, she argued that the hyperscalers’ training tools largely offer simplified API interfaces that don’t provide the depth of control that serious model training requires.

There is also the dependency question. Guy described digital-native companies that had built products on top of closed-source models, only to have a new model release — more verbose than its predecessor — crash their production pipelines.
“When you’re relying on closed-source models, you are also super dependent on the updates of the model that have side effects,” she warned.

This argument resonates with the broader sovereignty narrative that has powered Mistral’s rise in Europe and beyond. The company has positioned itself as the alternative for organizations that want to own their AI stack rather than lease it from American hyperscalers. Forge extends that argument from inference to training: not just running models you own, but building them in the first place.

The open-source foundation matters here, too. Mistral has been releasing models under permissive licenses since its founding, and Guy emphasized that the company is building Forge as an open platform. While it currently works with Mistral’s own models, she confirmed that support for other open-source architectures is planned. “We’re deeply rooted into open source. This has been part of our DNA since the beginning, and we have been building Forge to be an open platform — it’s just a matter of time that we’ll be opening this to other open-source models.”

A co-founder’s departure to xAI underscores why Mistral is turning expertise into a product

Forge’s launch also arrives against a backdrop of fierce talent competition. As FinTech Weekly reported on March 14, Devendra Singh Chaplot — a co-founder of Mistral AI who headed the company’s multimodal group and contributed to training Mistral 7B, Mixtral 8x7B, and Mistral Large — left to join Elon Musk’s xAI, where he will work on Grok model training. Chaplot had previously been a founding member of Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati.

The loss of a co-founder is never insignificant, but Mistral appears to be compensating with institutional capability rather than individual brilliance.
Forge is, in essence, a productization of the company’s collective training expertise — the recipes, the pipelines, the distributed computing optimizations — in a form that can scale beyond any single researcher. By packaging this knowledge into a platform and pairing it with forward-deployed scientists, Mistral is attempting to build a durable competitive asset that doesn’t walk out the door when a key hire departs.

Mistral’s big bet: the companies that own their AI models will be the ones that win

Forge is a bet on a specific theory of the enterprise AI future: that the most valuable AI systems will be those trained on proprietary knowledge, governed by internal policies, and operated under the organization’s direct control. This stands in contrast to the prevailing paradigm of the past two years, in which enterprises have largely consumed AI as a cloud service — powerful but generic, convenient but uncontrolled.

The question is whether enough enterprises will be willing to make the investment. Model training is expensive, technically demanding, and requires sustained organizational commitment. Forge lowers the barriers — through its infrastructure automation, its battle-tested recipes, and its embedded scientists — but it does not eliminate them.

What Mistral appears to be banking on is that the organizations with the most to gain from AI — the ones sitting on decades of proprietary knowledge in highly specialized domains — are precisely the ones for whom generic models are least sufficient. These are the companies where the gap between what a general-purpose model can do and what the business actually needs is widest, and where the competitive advantage of closing that gap is greatest.

Forge supports both dense and mixture-of-experts architectures, accommodating different trade-offs between performance, cost, and operational constraints. It handles multimodal inputs.
It is designed for continuous adaptation rather than one-time training, with built-in evaluation frameworks that let enterprises test models against internal benchmarks before production deployment.

For the past two years, the enterprise AI playbook has been straightforward: pick a model, call an API, ship a feature. Mistral is now asking a harder question — whether the organizations willing to do the difficult, expensive, unglamorous work of training their own models will end up with something the API-callers never get.

An unfair advantage.
Vertigo Sidelines Alex Bowman For 3 More NASCAR Cup Series Races
Hendrick Motorsports NASCAR Cup Series driver Alex Bowman will miss three more races as he continues to recover from vertigo that he first experienced at COTA on March 1.
Lululemon’s stock falls as a weak outlook underscores the need for a transformation
Yoga-wear maker reports results as founder tries to shake up board. The company said it had appointed a former Levi’s CEO as a new director.
3 signals will reveal if the Iran oil shock is just a blip — or the new normal
Beyond the $100 barrel, there are “second-round” effects that can hit your portfolio.