Meta CEO Mark Zuckerberg entered a downtown Los Angeles courthouse in largely the same way as all the attorneys, reporters, and advocates who’d come to watch his landmark trial testimony, but with one notable difference: he was flanked by an entourage that appeared to be wearing Meta’s Ray-Ban smart glasses. To get to the courtroom, he walked past a crowd of parents whose children died after struggling with issues they attribute to the design of social media platforms including those that Meta makes. He would spend the next eight hours often answering questions in his signature matter-of-fact (or less charitably, monotone) cadence, denying h …
Read the full story at The Verge.
The RAM crunch could kill products and even entire companies, memory exec admits
Phison is one of the leading makers of controller chips for SSDs and other flash memory devices – and CEO Pua Khein-Seng has now become a leading voice for just how bad the RAM shortage might get.
Companies may need to cut back their product lines in the second half of 2026, and some will even die if they can’t get the components they need, he agreed in a televised interview with Ningguan Chen of Taiwanese broadcaster Next TV.
While the interview’s entirely in Chinese, friends of The Verge stepped forward to confirm parts of a machine-translated summary that’s been making headlines. They also note, importantly, that it’s the int …
Read the full story at The Verge.
Beijing Blasts Trump After US Releases New Details On Alleged 2020 Chinese Nuclear Test
Update: Despite the Lunar New Year holiday, Beijing has made it known it is not best pleased with Washington digging up nuke blasts from the past.
Issuing a statement via state mouthpiece (@HuXijin_GT), the CCP suggested an ulterior motive for the timing of this announcement:
“Trump is eager to resume nuclear testing and needs a plausible reason, and accusing China of conducting nuclear tests is the perfect pretext.
Assistant Secretary of State Christopher Yeaw stated on Tuesday that the US is prepared to conduct low-yield nuclear tests in response to alleged secret nuclear tests by China and Russia.
The US is being far too hasty; having just fabricated rumors that China conducted an explosive nuclear test nearly six years ago, they are already announcing their own low-yield nuclear test.
Washington’s motives for spreading these rumors are too clear; they can’t even be bothered to feign it.”
Hard to disagree with the latter point.
* * *
As Kimberley Hayek detailed earlier via The Epoch Times, a senior State Department official released additional evidence Tuesday in support of U.S. allegations that China conducted an underground nuclear test in June 2020, as global arms control frameworks unravel.
Assistant Secretary of State Christopher Yeaw, while speaking to a Hudson Institute meeting, discussed data from a remote seismic station in Kazakhstan that recorded a magnitude 2.75 “explosion” approximately 450 miles from China’s Lop Nur test grounds on June 22, 2020.
“I’ve looked at additional data since then. There is very little possibility I would say that it is anything but an explosion, a singular explosion,” Yeaw said, underscoring that the data were not consistent with blasts from mining.
“It’s also entirely not consistent with an earthquake,” said Yeaw, a former intelligence analyst and defense official who holds a doctorate in nuclear engineering. “It is … what you would expect with a nuclear explosive test.”
Yeaw argued that China tried to hide the event through decoupling, detonating the device in a spacious underground cavity to diminish seismic waves.
Under Secretary of State for Arms Control Thomas DiNanno earlier this month accused China of performing such secretive nuclear arms tests and implementing measures to restrict seismic evidence.
“Today, I can reveal that the U.S. Government is aware that China has conducted nuclear explosive tests, including preparing for tests with designated yields in the hundreds of tons,” DiNanno said.
These claims back up Yeaw’s assertions of concealment tactics.
The Comprehensive Nuclear-Test-Ban Treaty Organization, which monitors global explosions, noted that available data do not allow for firm conclusions.
Executive Secretary Robert Floyd said in a statement that the seismic monitoring station in Kazakhstan captured “two very small seismic events” 12 seconds apart on June 22, 2020.
The organization’s network detects events equivalent to 551 tons (500 metric tons) of TNT or more, according to Floyd.
“These two events were far below that level,” Floyd said. “As a result, with this data alone, it is not possible to assess the cause of these events with confidence.”
China, a signatory to the 1996 Comprehensive Nuclear-Test-Ban Treaty but not a ratifier, rejected the initial U.S. accusation at an international conference this month. Beijing’s last acknowledged underground test occurred in 1996.
The United States, which also signed but did not ratify the treaty, is legally bound to its terms under international norms. America’s final underground test was in 1992, with subsequent reliance on sophisticated simulations and supercomputers for warhead maintenance.
President Donald Trump recently called on China to take part in trilateral talks with Russia to support the New Strategic Arms Reduction Treaty (New START), which ended Feb. 5.
China refused the invitation, arguing that its arsenal is far smaller than those of the United States and Russia. The Pentagon estimates China’s current operational warheads at more than 600. The stockpile is expected to exceed 1,000 by 2030.
The Federation of American Scientists, an organization working to minimize the risks of nuclear threats, tracks Russia as currently having 5,459 warheads, while the United States has 5,177.
The New START accord expiration removes caps on deployed strategic warheads and delivery vehicles, potentially accelerating buildups. Russia and the United States said they would informally observe limits.
Tyler Durden
Thu, 02/19/2026 – 04:15
Eni Considers Return To Oil Trading As Rivals Reap Billions
By Charles Kennedy of OilPrice.com,
Italy’s Eni is considering reopening its oil-trading business as it misses out on the profits that its fellow European supermajors are generating from selling the commodities they produce, the company’s chief executive told the Financial Times.
“I stopped trading in 2019, but the other big companies are all traders,” Claudio Descalzi told the publication in an interview. “BP, Shell, Total are big traders, and they make billions from that.”
Indeed, trading has been especially profitable for the other supermajors, so Eni is pivoting via partnerships.
Descalzi told the FT that Eni was in preliminary talks with a number of commodity trading houses, including Mercuria.
“It is not in our DNA. We are not very commercial,” Descalzi explained.
“So I thought to become commercial, we have to have a partnership to understand the business.”
“If we can offer physical hedging, that is a big advantage for them. We can complement each other,” the chief executive of the supermajor added, noting the amount of oil and gas that Eni produces should make it an attractive partner.
Despite oil trading being a major profit source for Big Oil, Shell, for one, flagged a weaker performance of its trading division ahead of its fourth-quarter results announcement. BP also said its trading business has weakened over the final three months of last year.
TotalEnergies, meanwhile, recently sealed a trading joint venture deal with Bahrain’s BapcoEnergies backed by production flows from Bapco Energies’ refinery. The new entity is positioned as a competitive regional trading player, designed to maximize downstream value and broaden access to international markets for Bahraini oil products.
Big Oil, and especially European Big Oil, has recently pivoted away from its low-carbon energy ventures and back to its core business of producing and refining oil and gas amid slowing energy transition momentum. Shareholders are now pushing for growth as predictions for peak oil move into the more distant future.
Tyler Durden
Thu, 02/19/2026 – 03:30
Chinese EVs Flood Europe, Signaling Hollowing Out Of Bloc’s Industrial Core
The rapid growth of China’s electric vehicles on Europe’s streets and highways isn’t just a market share story. In fact, it’s an industrial security threat for the bloc. When Chinese manufacturers undercut domestic car brands, the damage goes well beyond margin pressure and shuttered production lines. The much larger and alarming issue is the hollowing out of Europe’s industrial core.
While Europe deindustrialises and focuses on Wokeism
Chinese company BYD is building a mega factory larger than San Francisco (Not AI)
At this scale, and such low costs, vast human resources, and its own market, it will become impossible for Europe to compete. pic.twitter.com/SnRjvO0Wp9
— Chay Bowes (@BowesChay) February 3, 2026
Goldman analyst Christian Frenes released the latest Chinese OEM Competition Monitor, which covers January registrations of Chinese EVs across Europe.
Even though Chinese brand EV sales softened in January, volumes remain elevated at 31,000 units in Jan-26 versus 40,100 in Dec-25 and 8,700 a year earlier, representing a whopping 257% year-over-year growth.
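The year-over-year figure is easy to sanity-check from the registration counts above (the cited 257% presumably reflects unrounded underlying data):

```python
# Sanity check of the YoY growth figure from the registration counts above.
jan_26, jan_25 = 31_000, 8_700
growth = (jan_26 - jan_25) / jan_25 * 100
print(round(growth))  # prints 256
```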
In Europe’s Big 5 markets (Germany, the United Kingdom, France, Italy, Spain), Chinese domestic brands approached 5% market share in January, up from 3.64% a year earlier.
Market share growth of Chinese domestic brands outpaces that of local car companies.
“Heading into 2026, we expect Chinese OEMs to further intensify their European expansion plans, e.g., BYD offering c.30% price discounts while aiming to double its volume in Germany this year,” Frenes said.
Here’s the demand for Chinese and local car brands in January.
Where these Chinese car brands are invading Europe.
Frenes highlights several key developments of Chinese brand expansion across the bloc:
Chery & Jaguar Land Rover (JLR): Chery is reportedly exploring a manufacturing partnership with JLR that would leverage spare capacity at JLR’s Halewood plant in the UK (link). The plant, which has an annual capacity of c.200,000 units, was significantly underutilized in the past few years. We estimate the average utilization rate at c.60%. This initiative would build on the existing Chery–JLR relationship, as the two companies have operated a manufacturing joint venture in China since 2012. Discussions reportedly involve the UK government and JLR leadership, and Chery has publicly highlighted the UK as a potential production base as part of its localization strategy. No definitive agreement has been announced.
Geely & Ford: Both companies are reportedly in advanced discussions for a partnership under which Geely would utilize Ford’s manufacturing facilities in Valencia, Spain (link). We estimate the average utilization rate of this factory at c.70% over the last 5 years. This approach is consistent with Geely’s established strategy of partnering with other automakers, such as its existing deal with Renault to leverage their factories in South Korea and Brazil. No definitive agreement has been announced.
Uncertainty over reported suspension of BYD’s Turkey plant: BYD has reportedly halted plans for its USD 1bn EV factory in Manisa, Turkey. Media reports cite a dispute over core technology transfer requirements, along with parliamentary scrutiny, as possible contributors to the investment being paused. The report was rejected by the Turkish Trade Minister, while the company has not issued any official confirmation.
BYD is planning for explosive demand across the bloc this year.
Our assessment here is much deeper than just market share; the fact that Brussels is allowing this invasion to occur in the first place puts severe pressure on European OEMs and suppliers.
Anduril Industries founder Palmer Luckey outlined exactly this threat in a recent Joe Rogan podcast.
“If you let them (US car manufacturers) freely compete, like if you let them go toe to toe, China would be thrilled if they could subsidise their way into destroying the American automotive apparatus, partly for economic reasons. But there’s another reason,” Luckey said.
He continued, “How did the United States win World War II … Manufacturing – some of it was new factories, but most of it was taking over old factories.”
“We took all of our farm implement factories, like John Deere and Caterpillar. They were building tanks and guns. We took all our automotive factories. We had them building aircraft, we had them building weapons, we had them building missiles,” he said.
He said, “In fact, we even designed those weapons so they could be manufactured by those plants … We won because we had all of this automotive and other industrial capacity.”
Luckey warned, “China would love to wipe out the American automotive industry, partly for economic reasons, because it also means we will never be able to fight a war against them. Imagine in America with like, we’ve lost a lot of manufacturing … If China could wipe out our industrial capacity entirely, they never need to worry about fighting a war with the US again because they know that we wouldn’t be able to get back in the game fast enough to matter.”
It’s as if Brussels is allowing its own decline, whether by letting a flood of Chinese EVs pour onto European streets or by pursuing climate policies that have weakened reliable power generation and eroded core industrial capacity.
However, we do think the bloc is starting to recognize where this trajectory ends as the world fractures and the war in Eastern Europe grinds on. That reality was reflected last week, when industrial leaders urged Brussels to dial back its carbon pricing regime to restore competitiveness.
Professional subscribers can read the full Goldman note on our new Marketdesk.ai portal
Tyler Durden
Thu, 02/19/2026 – 02:45
The Real Reason Vet Bills Are Skyrocketing — and What You Can Do About It
Taking your pet to a veterinarian is an important part of being a good pet owner. But the cost of bringing your furry friend in to get checked may surprise you.
Vet bills have skyrocketed of late, and diagnostics and surgery can cost thousands of dollars. Adapting to this new reality and planning in advance in case of any emergencies can leave you more prepared for necessary visits.
Why vet costs are rising
Estimated lifetime pet care costs have jumped nearly 12% for dogs and 20% for cats since 2022, Synchrony Bank found in a report published in 2025.
Vet costs are rising in part because clinics are facing higher costs for medical supplies, pharmaceuticals, utilities and more, according to the American Veterinary Medical Association. Technological innovation has also allowed for more advanced treatments — but those treatments come at a higher cost. Another factor that has contributed to higher vet costs is that pets are living longer. Older pets require more care and more frequent checkups, and all of those visits can add up. There’s also a vet shortage challenging the industry.
How to prepare
Bank of America recently found that 29% of lower-income households are living paycheck to paycheck. If you’re in that situation, a surprise bill could be detrimental to your finances.
You never know when your pet may need medical attention or a regular visit to the vet. That’s why it is important to prepare early, if possible, so you don’t get caught by surprise and have to rack up credit card debt. Setting an emergency fund for your pet that you gradually add to can help. Setting aside $50 per month in a special bank account for pet-related expenses adds up to $600 per year. Your pet may not need as many vet visits when it’s young, so you can save this money and use it if your pet needs a more complicated procedure in old age.
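The fund math above is simple enough to check, assuming a flat $50 a month with no interest:

```python
# Back-of-the-envelope for the pet emergency fund described above.
# Assumes a flat $50/month deposit and no interest, for illustration only.
monthly_deposit = 50
per_year = monthly_deposit * 12       # $600 saved per year
after_ten_years = per_year * 10       # $6,000 by the time your pet is older
print(per_year, after_ten_years)      # prints 600 6000
```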
Review your budget and cut expenses where you can, such as cancelling unused subscriptions. It may also be a good idea to see if you can trim costs in expensive categories, such as housing and transportation, to save the most money. Another way to build savings is to boost your income. Negotiating a raise, job hopping, and side hustling are three viable options.
You can also explore pet insurance policies that reduce how much you have to pay for emergencies. Pet insurance can help via reimbursements for diagnostic tests, hospitalization, surgery and more. But you should compare policies for their premiums, coverage and other factors. You can also ask your vet about any payment programs that let you break up a big bill into several smaller monthly payments.
SEO Secrets That Separate Struggling Hustlers from Thriving Winners
One guy pours endless hours into blog posts, tweaking meta tags, begging for links – traffic flatlines. Another quietly builds something solid, updates once a quarter with fresh proof, gets cited in AI answers… and suddenly leads roll in without him lifting a finger for ads.
Same grind, different worlds. The split is brutal and obvious: winners treat SEO like building unbreakable trust. Strugglers treat it like a video game cheat code that stopped working ages ago.
Organic search still pulls in around 50-55% of site traffic for most businesses (yeah, even now), but the clicks? Vanishing. Zero-click searches hover at 60% overall, spiking to 80-85% when AI Overviews kick in.
Google’s AI summaries slash organic CTR for top spots by up to 58% compared to no-AI queries. Winners don’t panic – they pivot to becoming the source AI loves to quote. Strugglers keep optimizing for blue links that nobody clicks anymore.
The Brutal Mindset Flip Winners Make
Old-school hustlers chase rankings like it’s still 2018. Low KD keywords, 1,200-word filler, outreach spam. Winners? They laugh at that noise.
Rankings are nice, but the real prize is authority – the kind that makes ChatGPT, Perplexity, Gemini, and Google AI Overviews name-drop you without hesitation.
Stats don’t lie. AI search referrals exploded over 500% in recent years. But generic slop gets ignored; depth with real proof wins citations. E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) isn’t a guideline anymore – it’s table stakes.
Add real experience signals (case studies, original data, credentials) and you get preferential treatment in AI answers.
Take the mindset coach who stalled at 600 visitors/month. He ditched keyword roulette, built one beast of a pillar on “how entrepreneurs actually beat burnout” – raw stories, fresh 2026 stats, expert quotes, updated every few months.
Topical authority exploded. Branded searches shot up. Traffic? 14k+ monthly now, compounding quietly.
Or the SaaS guy gunning for “best remote tools 2026.” Skipped fluffy lists; added real benchmarks, user screenshots, video snippets. Even with AI stealing clicks, his brand got cited directly – visibility held, conversions climbed.
Bottom line: SEO isn’t tricking algorithms anymore. It’s becoming the obvious, trustworthy answer.
The Four Pillars Winners Lock Down in 2026
Miss one and you’re toast in this AI era.
1. Technical basics bulletproof – boring but deadly if ignored. Core Web Vitals green, mobile responsive, loads under 2.5s, no crawl waste.
Fix duplicates, broken links, add schema (Article, FAQ, HowTo). One e-com shop did a quick audit cleanup – organic sessions up 38% in weeks. Strugglers let tech rot for years.
2. Content that feels human and answers fast – lead with the solution in the first 50 words. Short paragraphs, scannable headings, visuals (charts, screenshots – not stock).
80% evergreen pillars for depth, 20% timely hooks (like “AI tools entrepreneurs swear by right now”). Make it quotable: tables, lists, bold stats.
3. E-E-A-T screaming from every page – author bios with real creds, inline sources, fresh testimonials, off-site proof (Reddit mentions, G2 reviews, podcast nods). One consultant landed a roundup quote – AI tools suddenly treated him as the voice.
For a straightforward, no-BS rundown pulling technical, content, and AI readiness together, this practical guide nails it: check the current steps on how to improve search engine optimization.
4. Show up everywhere search happens – YouTube shorts + long-form, Reddit threads dropping value, LinkedIn native posts, even quick TikToks. Branded search volume is your shield when algorithms swing.
Quarterly ritual winners run:
Speed audit (90+ PageSpeed target)
Refresh 5-10 older posts with current data
Schema updates
AI bot crawl check (allow for citations if you want ’em)
Branded vs. non-branded query tracking
New review/testimonial push
Internal link tightening in clusters
Skip it? Visibility erodes quietly.
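The schema step in the ritual above (Article, FAQ, HowTo markup) boils down to emitting JSON-LD that crawlers and AI engines can parse. A minimal sketch in Python, with hypothetical page details you’d swap for your own:

```python
import json

# Minimal sketch of generating schema.org Article markup (JSON-LD).
# All values here are hypothetical placeholders, not a real page.

def article_schema(headline, author, date_published, url):
    """Build a schema.org Article object ready to serialize as JSON-LD."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
        "mainEntityOfPage": url,
    }

snippet = article_schema(
    headline="How Entrepreneurs Actually Beat Burnout",
    author="Jane Doe",
    date_published="2026-02-19",
    url="https://example.com/beat-burnout",
)

# Embed the output in the page head inside:
# <script type="application/ld+json"> ... </script>
print(json.dumps(snippet, indent=2))
```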
Proof in the Numbers: Real Hustlers Who Turned It Around
Seen it repeat: fitness creator flatlined – switched to real-talk long-tails like “why gym motivation crashes after 30” + transcript embeds. Traffic tripled, signups poured in.
Side-hustle blogger clustered “scaling without burning out” – pillars feeding satellites. Shares brought natural links. Revenue? 280%+ in a year.
Tiny tweak example: writer added emotional hooks to titles (“How I Finally Quit…”) – CTR bumped 22%. Small pivot, real money.
Thread? Consistent value + fast adaptation. Ignore hype, execute boringly well.
Final Thoughts
February 2026 draws the line sharp: strugglers hunt loopholes, new plugins, viral bait – they end up ghosts. Winners stack real assets – solid tech, human-depth content, loud expertise, footprints across platforms.
No massive team needed. No fat ad budget. Just relentless trust-building over tricks. Do it right and the payoff compounds: traffic that doesn’t cost monthly, leads that land while you’re offline, a business that grows with you, not against you.
The playbook’s open. Only execution decides which side you land on.
The post SEO Secrets That Separate Struggling Hustlers from Thriving Winners appeared first on Addicted 2 Success.
New agent framework matches human-engineered AI systems — and adds zero inference cost to deploy
Agents built on top of today’s models often break with simple changes — a new library, a workflow modification — and require a human engineer to fix it. That’s one of the most persistent challenges in deploying AI for the enterprise: creating agents that can adapt to dynamic environments without constant hand-holding. While today’s models are powerful, they are largely static.

To address this, researchers at the University of California, Santa Barbara have developed Group-Evolving Agents (GEA), a new framework that enables groups of AI agents to evolve together, sharing experiences and reusing their innovations to autonomously improve over time.

In experiments on complex coding and software engineering tasks, GEA substantially outperformed existing self-improving frameworks. Perhaps most notably for enterprise decision-makers, the system autonomously evolved agents that matched or exceeded the performance of frameworks painstakingly designed by human experts.

The limitations of ‘lone wolf’ evolution

Most existing agentic AI systems rely on fixed architectures designed by engineers. These systems often struggle to move beyond the capability boundaries imposed by their initial designs. To solve this, researchers have long sought to create self-evolving agents that can autonomously modify their own code and structure to overcome their initial limits. This capability is essential for handling open-ended environments where the agent must continuously explore new solutions.

However, current approaches to self-evolution have a major structural flaw. As the researchers note in their paper, most systems are inspired by biological evolution and are designed around “individual-centric” processes. These methods typically use a tree-structured approach: a single “parent” agent is selected to produce offspring, creating distinct evolutionary branches that remain strictly isolated from one another.

This isolation creates a silo effect. An agent in one branch cannot access the data, tools, or workflows discovered by an agent in a parallel branch. If a specific lineage fails to be selected for the next generation, any valuable discovery made by that agent, such as a novel debugging tool or a more efficient testing workflow, dies out with it.

In their paper, the researchers question the necessity of adhering to this biological metaphor. “AI agents are not biological individuals,” they argue. “Why should their evolution remain constrained by biological paradigms?”

The collective intelligence of Group-Evolving Agents

GEA shifts the paradigm by treating a group of agents, rather than an individual, as the fundamental unit of evolution.

The process begins by selecting a group of parent agents from an existing archive. To ensure a healthy mix of stability and innovation, GEA selects these agents based on a combined score of performance (competence in solving tasks) and novelty (how distinct their capabilities are from others).

Unlike traditional systems where an agent only learns from its direct parent, GEA creates a shared pool of collective experience. This pool contains the evolutionary traces from all members of the parent group, including code modifications, successful solutions to tasks, and tool invocation histories. Every agent in the group gains access to this collective history, allowing them to learn from the breakthroughs and mistakes of their peers.

A “Reflection Module,” powered by a large language model, analyzes this collective history to identify group-wide patterns. For instance, if one agent discovers a high-performing debugging tool while another perfects a testing workflow, the system extracts both insights. Based on this analysis, the system generates high-level “evolution directives” that guide the creation of the child group. This ensures the next generation possesses the combined strengths of all their parents, rather than just the traits of a single lineage.

However, this hive-mind approach works best when success is objective, such as in coding tasks. “For less deterministic domains (e.g., creative generation), evaluation signals are weaker,” Zhaotian Weng and Xin Eric Wang, co-authors of the paper, told VentureBeat in written comments. “Blindly sharing outputs and experiences may introduce low-quality experiences that act as noise. This suggests the need for stronger experience filtering mechanisms” for subjective tasks.

GEA in action

The researchers tested GEA against the current state-of-the-art self-evolving baseline, the Darwin Godel Machine (DGM), on two rigorous benchmarks. The results demonstrated a massive leap in capability without increasing the number of agents used.

This collaborative approach also makes the system more robust against failure. In their experiments, the researchers intentionally broke agents by manually injecting bugs into their implementations. GEA was able to repair these critical bugs in an average of 1.4 iterations, while the baseline took 5 iterations. The system effectively leverages the “healthy” members of the group to diagnose and patch the compromised ones.

On SWE-bench Verified, a benchmark consisting of real GitHub issues including bugs and feature requests, GEA achieved a 71.0% success rate, compared to the baseline’s 56.7%. This translates to a significant boost in autonomous engineering throughput, meaning the agents are far more capable of handling real-world software maintenance. Similarly, on Polyglot, which tests code generation across diverse programming languages, GEA achieved 88.3% against the baseline’s 68.3%, indicating high adaptability to different tech stacks.

For enterprise R&D teams, the most critical finding is that GEA allows AI to design itself as effectively as human engineers. On SWE-bench, GEA’s 71.0% success rate effectively matches the performance of OpenHands, the top human-designed open-source framework. On Polyglot, GEA significantly outperformed Aider, a popular coding assistant, which achieved 52.0%. This suggests that organizations may eventually reduce their reliance on large teams of prompt engineers to tweak agent frameworks, as the agents can meta-learn these optimizations autonomously.

This efficiency extends to cost management. “GEA is explicitly a two-stage system: (1) agent evolution, then (2) inference/deployment,” the researchers said. “After evolution, you deploy a single evolved agent… so enterprise inference cost is essentially unchanged versus a standard single-agent setup.”

The success of GEA stems largely from its ability to consolidate improvements. The researchers tracked specific innovations invented by the agents during the evolutionary process. In the baseline approach, valuable tools often appeared in isolated branches but failed to propagate because those specific lineages ended. In GEA, the shared experience model ensured these tools were adopted by the best-performing agents. The top GEA agent integrated traits from 17 unique ancestors (representing 28% of the population), whereas the best baseline agent integrated traits from only 9. In effect, GEA creates a “super-employee” that possesses the combined best practices of the entire group.

“A GEA-inspired workflow in production would allow agents to first attempt a few independent fixes when failures occur,” the researchers explained regarding this self-healing capability. “A reflection agent (typically powered by a strong foundation model) can then summarize the outcomes… and guide a more comprehensive system update.”

Furthermore, the improvements discovered by GEA are not tied to a specific underlying model. Agents evolved using one model, such as Claude, maintained their performance gains even when the underlying engine was swapped to another model family, such as GPT-5.1 or GPT-o3-mini. This transferability offers enterprises the flexibility to switch model providers without losing the custom architectural optimizations their agents have learned.

For industries with strict compliance requirements, the idea of self-modifying code might sound risky. To address this, the authors said: “We expect enterprise deployments to include non-evolvable guardrails, such as sandboxed execution, policy constraints, and verification layers.”

While the researchers plan to release the official code soon, developers can already begin implementing the GEA architecture conceptually on top of existing agent frameworks. The system requires three key additions to a standard agent stack: an “experience archive” to store evolutionary traces, a “reflection module” to analyze group patterns, and an “updating module” that allows the agent to modify its own code based on those insights.

Looking ahead, the framework could democratize advanced agent development. “One promising direction is hybrid evolution pipelines,” the researchers said, “where smaller models explore early to accumulate diverse experiences, and stronger models later guide evolution using those experiences.”
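The selection mechanics described in the article (scoring parents on performance plus novelty, then pooling their evolutionary traces so every child can learn from them) can be sketched roughly in Python. Everything below is a hypothetical illustration of the idea, not the authors' code, which has not yet been released; the class and function names are invented.

```python
from dataclasses import dataclass, field

# Rough, hypothetical sketch of GEA-style parent selection and a shared
# experience pool. Names (AgentRecord, select_parent_group, ...) are
# illustrative only; the paper's implementation is not public yet.

@dataclass
class AgentRecord:
    name: str
    performance: float            # task success rate in [0, 1]
    capability_vector: set        # tools/workflows this agent has discovered
    traces: list = field(default_factory=list)  # code edits, tool calls, etc.

def novelty(agent, archive):
    """Novelty = how distinct an agent's capabilities are from the archive."""
    others = [a for a in archive if a is not agent]
    if not others:
        return 1.0
    overlaps = [
        len(agent.capability_vector & o.capability_vector)
        / max(1, len(agent.capability_vector | o.capability_vector))
        for o in others
    ]
    return 1.0 - sum(overlaps) / len(overlaps)

def select_parent_group(archive, k=3, w_perf=0.7, w_nov=0.3):
    """Combined score of performance and novelty; take the top-k as parents."""
    scored = sorted(
        archive,
        key=lambda a: w_perf * a.performance + w_nov * novelty(a, archive),
        reverse=True,
    )
    return scored[:k]

def shared_experience_pool(parents):
    """Pool the traces of ALL parents, so no lineage's discovery dies out."""
    return [t for p in parents for t in p.traces]
```

In a full loop, a reflection step (an LLM call in the paper's design) would read this pooled history and emit the "evolution directives" that shape the child group.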
Alibaba’s Qwen3.5-397B-A17B beats its larger trillion-parameter model — at a fraction of the cost
Alibaba dropped Qwen3.5 earlier this week, timed to coincide with the Lunar New Year, and the headline numbers alone are enough to make enterprise AI buyers stop and pay attention.

The new flagship open-weight model — Qwen3.5-397B-A17B — packs 397 billion total parameters but activates only 17 billion per token. It claims benchmark wins against Alibaba’s own previous flagship, Qwen3-Max, a model the company itself has acknowledged exceeded one trillion parameters. The release marks a meaningful moment in enterprise AI procurement. For IT leaders evaluating AI infrastructure for 2026, Qwen3.5 presents a different kind of argument: that the model you can actually run, own, and control can now trade blows with the models you have to rent.

A New Architecture Built for Speed at Scale

The engineering story underneath Qwen3.5 starts with its ancestry. The model is a direct successor to last September’s experimental Qwen3-Next, an ultra-sparse MoE model that was previewed but widely regarded as half-trained. Qwen3.5 takes that architectural direction and scales it aggressively, jumping from 128 experts in the previous Qwen3 MoE models to 512 experts in the new release.

The practical implication of this sparsity, combined with a better attention mechanism, is dramatically lower inference latency. Because only 17 billion of those 397 billion parameters are active for any given forward pass, the compute footprint is far closer to that of a 17B dense model than a 400B one — while the model can still draw on the full depth of its expert pool for specialized reasoning.

The speed gains are substantial. At 256K context lengths, Qwen3.5 decodes 19 times faster than Qwen3-Max and 7.2 times faster than Qwen3’s 235B-A22B model. Alibaba also claims the model is 60% cheaper to run than its predecessor and eight times more capable of handling large concurrent workloads, figures that matter enormously to any team paying attention to inference bills.
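To make that sparsity concrete, here is a toy Python sketch of top-k expert routing in an MoE layer. The 512-expert count comes from the release; the top-k value and the scalar “experts” are purely illustrative, since Alibaba has not detailed its exact routing mechanism here.

```python
import math

def topk_gate(router_logits, k):
    """Pick the k highest-scoring experts and softmax-normalize their weights.
    All other experts contribute nothing, so their parameters never execute."""
    ranked = sorted(range(len(router_logits)),
                    key=lambda i: router_logits[i], reverse=True)
    chosen = ranked[:k]
    exps = [math.exp(router_logits[i]) for i in chosen]
    total = sum(exps)
    return {i: e / total for i, e in zip(chosen, exps)}

def moe_layer(x, experts, router_logits, k):
    """Layer output is the gate-weighted sum of only the selected experts."""
    gates = topk_gate(router_logits, k)
    return sum(weight * experts[i](x) for i, weight in gates.items())
```

Because only k of the 512 experts run per token, compute scales with the active parameter count (17B here), not the total (397B), which is where the latency and cost advantages come from.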
It’s also about 1/18th the cost of Google’s Gemini 3 Pro.

Two other architectural decisions compound these gains. Qwen3.5 adopts multi-token prediction — an approach pioneered in several proprietary models — which accelerates pre-training convergence and increases throughput. It also inherits the attention system from last year’s Qwen3-Next, designed specifically to reduce memory pressure at very long context lengths. The result is a model that can comfortably operate within a 256K context window in the open-weight version, and up to 1 million tokens in the hosted Qwen3.5-Plus variant on Alibaba Cloud Model Studio.

Native Multimodal, Not Bolted On

For years, Alibaba took the standard industry approach: build a language model, then attach a vision encoder to create a separate VL variant. Qwen3.5 abandons that pattern entirely. The model is trained from scratch on text, images, and video simultaneously, meaning visual reasoning is woven into the model’s core representations rather than grafted on.

This matters in practice. Natively multimodal models tend to outperform their adapter-based counterparts on tasks that require tight text-image reasoning — think analyzing a technical diagram alongside its documentation, processing UI screenshots for agentic tasks, or extracting structured data from complex visual layouts. On MathVista, the model scores 90.3; on MMMU, 85.0. It trails Gemini 3 on several vision-specific benchmarks but surpasses Claude Opus 4.5 on multimodal tasks and posts competitive numbers against GPT-5.2, all while carrying a fraction of the parameter count.

Qwen3.5’s benchmark performance against larger proprietary models is the number that will drive enterprise conversations. On the evaluations Alibaba has published, the 397B-A17B model outperforms Qwen3-Max — a model with over a trillion parameters — across multiple reasoning and coding tasks.
It also claims competitive results against GPT-5.2, Claude Opus 4.5, and Gemini 3 Pro on general reasoning and coding benchmarks.

Language Coverage and Tokenizer Efficiency

One underappreciated detail in the Qwen3.5 release is its expanded multilingual reach. The model’s vocabulary has grown to 250K tokens, up from 150K in prior Qwen generations and now comparable to Google’s ~256K tokenizer. Language support expands from 119 languages in Qwen3 to 201 languages and dialects.

The tokenizer upgrade has direct cost implications for global deployments. Larger vocabularies encode non-Latin scripts — Arabic, Thai, Korean, Japanese, Hindi, and others — more efficiently, reducing token counts by 15–40% depending on the language. For IT organizations running AI at scale across multilingual user bases, this is not an academic detail. It translates directly to lower inference costs and faster response times.

Agentic Capabilities and the OpenClaw Integration

Alibaba is positioning Qwen3.5 explicitly as an agentic model — one designed not just to respond to queries but to take multi-step autonomous action on behalf of users and systems. The company has open-sourced Qwen Code, a command-line interface that lets developers delegate complex coding tasks to the model in natural language, roughly analogous to Anthropic’s Claude Code.

The release also highlights compatibility with OpenClaw, the open-source agentic framework that has surged in developer adoption this year.
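The cost arithmetic behind that tokenizer claim is easy to sketch. The function below is illustrative: the traffic volume and per-token price in the example are hypothetical, and only the 15–40% reduction range comes from the release.

```python
def monthly_savings(requests_per_month, avg_tokens_per_request,
                    price_per_million_tokens, token_reduction):
    """Estimate monthly inference savings from a more efficient tokenizer.
    token_reduction is the fractional drop in token count (e.g. 0.25 for 25%)."""
    baseline_tokens = requests_per_month * avg_tokens_per_request
    baseline_cost = baseline_tokens / 1_000_000 * price_per_million_tokens
    return baseline_cost * token_reduction

# Hypothetical example: 10M requests/month averaging 800 tokens each,
# priced at $0.50 per million tokens, with a 25% token reduction for a
# non-Latin-script language:
#   monthly_savings(10_000_000, 800, 0.50, 0.25)  ->  1000.0 ($1,000/month)
```

The savings scale linearly with volume, which is why the effect is negligible for a pilot but material for a large multilingual deployment.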
With 15,000 distinct reinforcement learning training environments used to sharpen the model’s reasoning and task execution, the Qwen team has made a deliberate bet on RL-based training to improve practical agentic performance — a trend consistent with what MiniMax demonstrated with M2.5.

The Qwen3.5-Plus hosted variant also offers adaptive inference modes: a fast mode for latency-sensitive applications, a thinking mode that enables extended chain-of-thought reasoning for complex tasks, and an auto (adaptive) mode that selects between them dynamically. That flexibility matters for enterprise deployments where the same model may need to serve both real-time customer interactions and deep analytical workflows.

Deployment Realities: What IT Teams Actually Need to Know

Running Qwen3.5’s open weights in-house requires serious hardware. A quantized version demands approximately 256GB of RAM, and realistically 512GB for comfortable headroom. This is not a model for a workstation or a modest on-prem server. What it is suitable for is a GPU node — a configuration that many enterprises already operate for inference workloads, and one that now offers a compelling alternative to API-dependent deployments.

All open-weight Qwen3.5 models are released under the Apache 2.0 license. This is a meaningful distinction from models with custom or restricted licenses: Apache 2.0 allows commercial use, modification, and redistribution without royalties and with no meaningful strings attached. For legal and procurement teams evaluating open models, that clean licensing posture simplifies the conversation considerably.

What Comes Next

Alibaba has confirmed this is the first release in the Qwen3.5 family, not the complete rollout. Based on the pattern from Qwen3 — which featured models down to 600 million parameters — the industry expects smaller dense distilled models and additional MoE configurations to follow over the next several weeks and months.
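A back-of-the-envelope check on those memory figures is straightforward, assuming a given quantization bit-width. The overhead factor below (covering KV cache, activations, and runtime buffers) is our assumption for illustration, not an Alibaba figure.

```python
def weight_memory_gib(total_params_billions, bits_per_param, overhead=1.2):
    """Rough GiB needed to hold quantized model weights, padded by an
    assumed overhead factor for KV cache, activations, and runtime buffers."""
    weight_bytes = total_params_billions * 1e9 * bits_per_param / 8
    return weight_bytes * overhead / 1024**3

# 397B parameters at 4-bit quantization works out to roughly 185 GiB of raw
# weights, or about 222 GiB with a 20% overhead allowance — broadly consistent
# with the "~256GB minimum, 512GB comfortable" guidance above.
```

Running the same arithmetic at 8-bit roughly doubles the footprint, which is why quantization choice, not just parameter count, determines whether a given GPU node can host the model.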
The Qwen3-Next 80B model from last September was widely considered undertrained, suggesting a 3.5 variant at that scale is a likely near-term release.

For IT decision-makers, the trajectory is clear. Alibaba has demonstrated that open-weight models at the frontier are no longer a compromise. Qwen3.5 is a genuine procurement option for teams that want frontier-class reasoning, native multimodal capabilities, and a 1M-token context window — without locking into a proprietary API. The next question is not whether this family of models is capable enough. It is whether your infrastructure and team are ready to take advantage of it.

Qwen3.5 is available now on Hugging Face under the model ID Qwen/Qwen3.5-397B-A17B. The hosted Qwen3.5-Plus variant is available via Alibaba Cloud Model Studio. Qwen Chat at chat.qwen.ai offers free public access for evaluation.
Ledn raises $188M with first bitcoin-backed bond sale in asset-backed market
Crypto lender packages more than 5,400 bitcoin-collateralized loans into the first asset-backed securities transaction of its kind.