
The Bifurcation of Intelligence: Silicon Stagnation, Biological Emergence, and the Future of Computation

  • Writer: Omkar Abhyankar
  • 4 days ago
  • 14 min read

Part I: The Silicon Stagnation — A Case Study of Apple’s AI Crisis


The narrative of Apple in the post-2023 era is one of friction between a deeply entrenched, perfectionist corporate culture and a chaotic, exponentially accelerating AI landscape. While competitors like OpenAI, Google, and Meta embraced the "move fast and break things" ethos necessitated by the Generative AI boom, Apple attempted to impose its traditional, measured product cycle on a technology that defies containment. The result was a collision that exposed significant structural weaknesses in the Cupertino giant’s approach to innovation.


1.1 The "Apple Intelligence" Launch: Anatomy of a "Dud"


The unveiling of "Apple Intelligence" at the Worldwide Developers Conference (WWDC) in June 2024 was intended to be Apple’s definitive answer to ChatGPT and the broader generative wave. However, the subsequent rollout in late 2024 and throughout 2025 revealed a disconnect between marketing promise and technical reality that industry analysts have characterized as a "dud" and a "huge black eye" for the company.1


1.1.1 The Disconnect Between Hype and Delivery


The core of the criticism stems from the incremental nature of the features delivered. While the industry had moved towards multimodal agents capable of complex reasoning, code generation, and fluid conversation, Apple’s initial offering in iOS 18.1 and 18.2 focused on conservative utilities. Features like "Genmoji" (AI-generated custom emojis), "Image Playground," and basic notification summarization were polished but ultimately derivative, representing "table stakes" rather than market-leading innovation.5

Reviewers noted that while tools like the "Clean Up" feature in Photos—which removes unwanted objects—were functional, they were years behind similar capabilities offered by Google’s Magic Eraser and third-party applications.5 The "Waitlist" rollout strategy, where users were forced to wait for access even after updating, further eroded the "magic" typically associated with Apple product launches, turning excitement into administrative friction.5


1.1.2 The "Two Brains" Architecture Failure


The most critical revelation regarding Apple’s struggles emerged from internal reports detailing a catastrophic engineering misstep. In an attempt to modernize Siri without sacrificing the reliability of its legacy codebase, Apple’s software engineering teams initially developed a hybrid architecture. This "Two Brains" approach involved maintaining the old, heuristic-based Siri for basic tasks (timers, alarms, app opening) while grafting a new Large Language Model (LLM) onto the system for complex queries.2

This integration proved disastrous in practice. Internal testing revealed that the system suffered from severe "compatibility problems," failing to correctly route requests or hand off context between the two systems up to a third of the time.7 This failure rate was unacceptable for a consumer-facing product, forcing executives to make the difficult decision to scrap the hybrid model. The company initiated a complete rebuild of Siri’s underlying architecture, moving to a unified "V2" end-to-end model.2 This pivot, while necessary for long-term viability, is the primary driver of the massive delays that have pushed the true next-generation Siri to Spring 2026.
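To make the routing problem concrete, here is a minimal Python sketch of what a "Two Brains" dispatcher might look like. Apple’s internal design is not public; the intent classifier, the 0.9 confidence threshold, and the context-export handoff below are illustrative assumptions, not the actual architecture.

```python
# Speculative sketch of a "Two Brains" dispatcher: a heuristic legacy path
# for simple intents, an LLM path for everything else. All names are
# illustrative; Apple's internal design is not public.
LEGACY_INTENTS = {"set_timer", "set_alarm", "open_app"}

def route_request(utterance, classify_intent, legacy_siri, llm_siri, context):
    intent, confidence = classify_intent(utterance)
    if intent in LEGACY_INTENTS and confidence > 0.9:
        return legacy_siri.handle(intent, utterance)
    # Handoff: the LLM path needs the conversational context the legacy
    # system accumulated -- the step reported to fail up to a third of the time.
    return llm_siri.handle(utterance, context=context.export())
```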


1.2 The Roadmap of Delay: The Lost Years of 2024-2026


The consequences of the architectural reboot have rippled through Apple’s software roadmap, creating a fragmented release schedule that stretches over two years. The defining features of "Apple Intelligence"—those that would actually change user behavior—have been systematically pushed back.

| Feature Category | Specific Capability | Original Expectation | Revised Release Window | OS Target | Status |
| --- | --- | --- | --- | --- | --- |
| Foundation | Writing Tools, Summaries, Genmoji | Late 2024 | Late 2024 | iOS 18.1 / 18.2 | Released (Limited Utility) |
| Contextual Awareness | "Find the email where Eric mentioned ice skating" | Spring 2025 | Spring 2026 | iOS 26.4 | Delayed |
| Onscreen Vision | Understanding/Acting on screen content | Spring 2025 | Spring 2026 | iOS 26.4 | Delayed |
| Cross-App Action | "Edit this photo and send it to Eric" | Spring 2025 | Spring 2026 | iOS 26.4 | Delayed |
| LLM Integration | Fully conversational Siri (Native) | Late 2025 | Spring 2026 | iOS 26.4 | Delayed |

Table 1: The delayed rollout of critical Apple Intelligence features, highlighting the 18-month gap for context-aware capabilities.2

The delay of "Personal Context" and "Onscreen Awareness" is particularly damaging. These capabilities were marketed as the "killer app" for on-device AI—the ability for the assistant to understand the user's digital life securely.8 By postponing these until iOS 26.4 in 2026, Apple leaves its hardware ecosystem vulnerable to competitors who are rapidly integrating similar features into cloud-based assistants that run on any device.11


1.3 The Hardware Fragmentation and "Obsolescence by Algorithm"


Apple’s decision to rely heavily on on-device processing for privacy reasons has created a significant hardware fragmentation issue. The new AI features require a minimum of 8GB of RAM, a specification that excludes the vast majority of active iPhones, including the iPhone 14 series and the standard iPhone 15.12

This "8GB Cliff" has sparked outrage among the user base. Discussions on platforms like Reddit highlight a sense of betrayal among customers who purchased flagship devices only 18-24 months prior, now finding them functionally obsolete regarding the operating system's headline features.12 Apple’s defense—that older chips lack the Neural Engine power—is technically valid but practically alienating. It forces a premature upgrade cycle during a period of economic uncertainty, potentially driving users to hold onto older devices longer or consider alternative ecosystems where AI features are less hardware-dependent.12


1.4 The Talent War: Capital, Culture, and the Brain Drain


Behind the technical delays lies a human capital crisis. The development of frontier AI models requires a specific, scarce type of talent: researchers capable of training and optimizing massive foundation models. Apple is currently losing the war for this talent against "AI-native" companies.


1.4.1 The Compensation Disparity


The compensation packages in the AI sector have decoupled from traditional tech norms. While Apple offers stable, high salaries and vesting stock options, competitors are offering packages that resemble lottery tickets with high probabilities of payout.

  • OpenAI: Base salaries for engineers range from $200,000 to $370,000, but total compensation packages, fueled by liquid stock options and secondary market sales, can reach $800,000 annually. Specialized researchers are seeing offers that, when annualized over a four-year vesting period with growth assumptions, range from $5 million to $10 million.14

  • Meta’s Aggression: Mark Zuckerberg has personally intervened in the recruitment process, sending emails to researchers at Google DeepMind and offering jobs without interviews. Meta leverages its massive stockpile of NVIDIA H100 GPUs—a resource more valuable than cash to researchers needing compute—as a primary recruiting tool.14

  • The "Million Dollar" Offers: In extreme cases, bidding wars for key individuals ("10x engineers") have resulted in packages valued up to $190 million, primarily through stock grants in companies anticipating trillion-dollar valuations.14


1.4.2 Cultural Incompatibility


Money is not the only factor. Apple’s culture of extreme secrecy and compartmentalization is antithetical to the academic, open-source ethos that pervades the AI research community. Top researchers want to publish papers, present at conferences (NeurIPS, ICML), and collaborate across institutions. Apple’s restrictive policies on publication make it a "black hole" for academic careers, deterring top-tier talent who fear losing relevance in the fast-moving field.1 This has led to a noticeable exodus of talent, including high-profile departures from the Siri and Knowledge teams, further destabilizing the project timelines.9


Part II: The Physics of Intelligence — The Limits of the Silicon Paradigm


While Apple battles the organizational and software challenges of the current AI wave, the broader computing industry is careening towards a hard physical wall. The underlying assumption of the digital age—that silicon chips will indefinitely become faster, smaller, and more efficient—is breaking down in the face of the energy demands of Artificial Intelligence.


2.1 The Thermodynamic Bottleneck


The training and inference of Large Language Models (LLMs) have transformed data centers into energy sinks of unprecedented scale. The efficiency gap between biological intelligence and silicon intelligence has become the central problem of computer engineering.


2.1.1 The Megawatt vs. Watt Disparity


To understand the scale of the problem, one must compare the pinnacle of silicon engineering with the average human brain.

  • The Frontier Supercomputer: Currently one of the world's most powerful machines, the Frontier exascale system requires approximately 21 megawatts (MW) of power to operate. This is enough electricity to power a small city.17

  • The Human Brain: In contrast, the human brain—which possesses general intelligence, causal reasoning, and creativity—operates on a budget of roughly 20 watts. This is less than a dim lightbulb.17

    This represents a discrepancy of six orders of magnitude (roughly 1,000,000x). For a silicon-based system to simulate the complexity of a human brain using current von Neumann architectures, it would require energy resources that are economically and ecologically infeasible.18
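The arithmetic behind the "six orders of magnitude" claim is easy to verify; the Python below simply restates the two figures quoted above.

```python
import math

frontier_watts = 21e6   # Frontier exascale system: ~21 MW
brain_watts = 20.0      # human brain: ~20 W

ratio = frontier_watts / brain_watts
print(f"{ratio:,.0f}x")                   # 1,050,000x
print(f"~10^{math.log10(ratio):.1f}")     # ~10^6.0, i.e. six orders of magnitude
```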


2.1.2 The "Erasi" Equation and the Limits of ASI


Researchers have proposed the "Erasi" equation (Energy Requirement for Artificial SuperIntelligence) to model this limit. The findings suggest that an Artificial Superintelligence (ASI) built on silicon would consume more energy than is available in highly industrialized nations, effectively putting a hard cap on how smart silicon AI can get before it collapses the energy grid.20 The heat dissipation alone—removing the waste heat generated by millions of GPUs—creates a secondary physical limit that water-cooling and immersion-cooling can only mitigate, not solve.


2.2 The Data Efficiency Gap


Beyond energy, silicon is incredibly inefficient at learning.

  • Sample Efficiency: To learn to differentiate between a cat and a dog, a Convolutional Neural Network (CNN) might need to process 100,000 to 1,000,000 images. A human child can learn the same distinction after seeing one or two examples (few-shot learning).18

  • The AlphaGo Example: When DeepMind’s AlphaGo was trained to play Go, it required 40 million joules of energy and weeks of training on 50 GPUs. A human player learns the game on the caloric equivalent of a sandwich.18

    This "data hunger" implies that as models get larger, the amount of data required to train them grows exponentially, eventually exhausting all high-quality text and video data on the internet—a "data wall" that AI companies are already hitting in 2025.


Part III: The Biological Renaissance — The Rise of Organoid Intelligence (OI)


Faced with the thermodynamic and efficiency walls of silicon, a new field of computing has emerged. "Biocomputing," or Organoid Intelligence (OI), proposes a radical solution: stop trying to simulate neurons with transistors, and instead use the neurons themselves. This is not science fiction; it is a burgeoning industry with commercial products launching in 2025.


3.1 The Science of "Wetware"


Biocomputing relies on the cultivation of "brain organoids"—three-dimensional tissues derived from human induced pluripotent stem cells (iPSCs).


3.1.1 From Skin to Brain


The process begins with a donor—often an anonymous individual whose skin cells are collected via a biopsy. These cells are reprogrammed into stem cells, and then guided to differentiate into neurons. Unlike 2D cell cultures in a flat petri dish, organoids are grown in a spinning bioreactor that allows them to self-organize into 3D structures. These "mini-brains" develop complex neural networks, layers, and diverse cell types (neurons, astrocytes, glial cells) that mimic the architecture of a developing fetal brain.21


3.1.2 The Interface: Multi-Electrode Arrays (MEAs)


The bridge between the digital and biological worlds is the High-Density Multi-Electrode Array (HD-MEA). The organoid is placed directly onto a chip studded with thousands of microscopic electrodes.

  • Input (Stimulation): The electrodes can fire electrical pulses into specific regions of the organoid, simulating sensory input.

  • Output (Recording): The electrodes listen to the "spikes" (action potentials) generated by the neurons, interpreting them as digital data.22

    This bidirectional communication allows the biological matter to act as a processor. The "software" is the pattern of synaptic connections within the tissue; the "hardware" is the living cell itself.
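A minimal sketch of what this bidirectional loop looks like in code. The `mea` object, its `stimulate`/`record` methods, and the 10 ms window are hypothetical placeholders; real HD-MEA vendor SDKs differ.

```python
import time

def closed_loop(mea, stim_sites, waveform, duration_s=60.0, window_ms=10):
    """Alternate stimulation (input) with spike recording (output)."""
    deadline = time.time() + duration_s
    while time.time() < deadline:
        mea.stimulate(electrodes=stim_sites, waveform=waveform)  # write to tissue
        spikes = mea.record(window_ms=window_ms)                 # read spikes back
        yield sum(spikes.values())               # toy decode: total firing count
```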


3.2 Cortical Labs and the "DishBrain" Breakthrough


The proof-of-concept that launched the industry came from Cortical Labs, an Australian startup that successfully taught a layer of neurons to play the arcade game Pong.


3.2.1 The Experiment


In the "DishBrain" experiment, researchers plated 800,000 neurons onto a chip. They created a virtual game world where the "paddle" was controlled by the neural firing rate.

  • The Feedback Loop: The key innovation was the feedback mechanism. When the neurons missed the ball, the system delivered a chaotic, unpredictable pattern of electrical stimulation (noise). When they hit the ball, the system delivered a rhythmic, predictable pattern (order); a minimal code sketch of this loop appears after this list.

  • The Free Energy Principle: The experiment relied on the Free Energy Principle, a theory in neuroscience which posits that biological systems are driven to minimize surprise or "variational free energy." To avoid the chaotic "noise" of missing the ball, the neurons spontaneously rewired themselves to play the game better, effectively "learning" to hit the ball to maintain a predictable environment.3

  • Results: Within five minutes, the culture demonstrated statistically significant learning. The human neurons learned faster and played better than mouse neurons, showcasing the superior computational density of human tissue.3
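Here is that minimal Python sketch of the hit/miss feedback loop. The MEA calls, the paddle gain, and the stimulation parameters are illustrative assumptions of mine, not Cortical Labs' actual protocol.

```python
import random

PADDLE_GAIN = 0.1  # assumption: scale factor from firing rate to paddle movement

def feedback_step(mea, game):
    """One cycle: decode activity into paddle motion, then close the loop."""
    rate = mea.record_firing_rate()        # hypothetical read of population activity
    game.move_paddle(rate * PADDLE_GAIN)
    if game.ball_hit():
        # 'Order': fixed sites, fixed frequency -- a predictable, low-surprise signal.
        mea.stimulate(sites=(1, 2, 3), hz=75)
    elif game.ball_missed():
        # 'Noise': random sites, random frequency -- the chaotic state the
        # Free Energy Principle predicts the culture will rewire to avoid.
        mea.stimulate(sites=random.sample(range(64), 4), hz=random.uniform(1, 150))
```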


3.3 Commercialization: The CL1 and the "Brain in a Box"


Cortical Labs is moving aggressively from the lab to the market with the launch of the CL1, the world’s first commercial biological computer.


3.3.1 Technical Specifications and Release


  • Form Factor: The CL1 is a server-rack-mountable unit that contains the biological chip, a fluidic life-support system (pumps for nutrients and oxygen), and the digital interface hardware. It does not require an external computer to operate; it is a standalone "body in a box".25

  • Availability: Widely available commercial units are scheduled to ship in the second half of 2025.4

  • Pricing: The estimated retail price for a CL1 unit is approximately $35,000.1

  • Capacity: Cortical Labs is building server stacks containing 30 CL1 units each, creating the first "biological server farms".4


3.4 Final Spark and the "Wetware-as-a-Service" Cloud


While Cortical Labs sells hardware, Swiss startup Final Spark is pioneering the cloud model for biocomputing, lowering the barrier to entry for researchers.


3.4.1 The Neuroplatform


Final Spark operates a facility in Switzerland where 16 human brain organoids are maintained online 24/7. Through their "Neuroplatform," researchers globally can log in and run Python scripts that interact with the neurons in real-time.
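In practice, a Neuroplatform-style session might look like the sketch below. The `Session` class and its methods are stand-ins of my own invention; Final Spark's actual API will differ.

```python
import time

class Session:
    """Illustrative stand-in for a remote organoid session (not the real SDK)."""
    def __init__(self, organoid_id: str):
        self.organoid_id = organoid_id
    def stimulate(self, electrode: int, amplitude_ua: float, duration_ms: int):
        ...  # would send a stimulation command over the network
    def read_spikes(self, window_ms: int) -> list:
        return []  # would return timestamped spike events

def run_experiment():
    session = Session("organoid-07")               # hypothetical identifier
    for trial in range(10):
        session.stimulate(electrode=3, amplitude_ua=2.0, duration_ms=100)
        spikes = session.read_spikes(window_ms=200)
        print(f"trial {trial}: {len(spikes)} spikes")
        time.sleep(1.0)                            # let the tissue settle
```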

  • Subscription Model: Access is priced at $500 per user per month for academic institutions, with free access granted for select projects. This "Wetware-as-a-Service" (WaaS) model democratizes access, allowing a coder in a coffee shop to run experiments on human neurons without a wet lab.28

  • Longevity Breakthroughs: A major hurdle for biocomputing is that the "processors" die. Final Spark has achieved a breakthrough in life-support systems, extending the operational lifespan of their organoids to 100 days (up from hours or days). They utilize a "rotation system" where older organoids are retired and replaced seamlessly to maintain uptime.28

  • Energy Metrics: Final Spark’s primary selling point is energy efficiency. They claim their biological processors can perform learning tasks with one million times less energy than digital processors, positioning the platform as the only sustainable path to AGI.29


Part IV: The Market Dynamics of Biocomputing


The emergence of the CL1 and Neuroplatform signals the birth of a new industry. While currently niche, the market dynamics suggest a rapid expansion into specific verticals where silicon is failing.


4.1 Early Adopters and Use Cases


The first wave of customers for biocomputing are not seeking to run spreadsheets or video games, but to solve problems that are inherently biological or energy-constrained.


4.1.1 Pharmacology and Toxicology


The "Killer App" for biocomputing is drug testing. Currently, bringing a drug to market costs billions and involves extensive animal testing, which often fails to predict human reactions.

  • Mechanism: Pharmaceutical companies can use the CL1 to test how a new neurological drug affects the learning capability of actual human neurons. If a "healthy" organoid plays Pong well but plays poorly after being dosed with a candidate drug, the drug represents a neurotoxic risk.4 (A sketch of such a screen follows this list.)

  • Impact: This offers a "human-relevant" data point early in the R&D process, potentially saving billions in failed clinical trials and reducing the reliance on animal testing.22
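The screen described in the "Mechanism" bullet reduces to a before/after performance comparison. Everything below (the session API, the rally-length metric, the 25% threshold) is an illustrative assumption.

```python
from statistics import mean

def screen_compound(organoid, compound, n_sessions=20, threshold=0.25):
    """Flag a compound if Pong performance drops sharply after dosing."""
    baseline = [organoid.play_pong_session() for _ in range(n_sessions)]  # e.g. rally length
    organoid.dose(compound)                         # hypothetical dosing call
    treated = [organoid.play_pong_session() for _ in range(n_sessions)]
    drop = 1.0 - mean(treated) / mean(baseline)     # fractional performance loss
    return {"compound": compound,
            "performance_drop": drop,
            "neurotoxic_flag": drop > threshold}    # assumption: 25% loss flags risk
```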


4.1.2 Low-Power Robotics


Robotics requires complex sensorimotor integration in real-time, a task that drains batteries quickly when done with silicon GPUs. Biological brains are evolutionarily optimized for exactly this task (movement and navigation).

  • Prototype: Researchers are experimenting with "Biorobots"—small autonomous vehicles controlled by an onboard organoid. The organoid processes visual data and controls motors with milliwatt-level power consumption, enabling robots that could theoretically operate for weeks on a single charge.4
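The "weeks on a single charge" claim follows from simple division; the battery size and power draw below are my own assumptions, chosen only to illustrate the scale.

```python
battery_wh = 10.0    # assumption: small onboard battery, ~10 Wh
draw_mw = 50.0       # assumption: milliwatt-scale organoid controller

hours = battery_wh / (draw_mw / 1000.0)   # Wh / W = hours
print(f"~{hours:.0f} h, or {hours / 24:.1f} days per charge")  # 200 h, ~8.3 days
# For contrast, a 50 W embedded GPU would drain the same pack in ~12 minutes.
```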


4.2 Economic Analysis: Silicon vs. Wetware


A cost-benefit analysis reveals why the shift to wetware is economically inevitable for certain tasks.

| Cost Factor | Silicon AI (Data Center) | Biological AI (CL1/Neuroplatform) |
| --- | --- | --- |
| Capital Expenditure (CapEx) | High ($30k for one H100 GPU) | Moderate ($35k for CL1 unit) |
| Operational Expenditure (OpEx) | Extreme (electricity is major cost) | Low (nutrients are cheap; low power) |
| Depreciation | 3-5 years (hardware becomes obsolete) | Months (organoids die and are replaced) |
| Scalability | Linear (add more GPUs) | Complex (biological limits, life support) |
| Training Cost | Millions of $ per run | Negligible (self-organizing learning) |

Table 2: Comparative Economic Analysis of Computing Substrates.18

Insight: While the depreciation of biological processors (death) is high, the negligible energy cost and the self-learning capability (eliminating the need for multi-million-dollar training runs) create a compelling arbitrage opportunity for specific classes of AI problems.
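To see where that arbitrage comes from, the sketch below sums illustrative five-year line items for each substrate. Every number is an assumption layered on Table 2's qualitative claims, not vendor pricing; the point is that the training line dominates on silicon.

```python
# Five-year cost sketch per substrate; all figures are illustrative assumptions.
silicon = {
    "capex (one H100)": 30_000,
    "electricity, 5 yr": 5_500,             # ~700 W sustained at ~$0.12/kWh, plus cooling
    "share of one training run": 500_000,   # slice of a multi-million-dollar run
}
wetware = {
    "capex (one CL1)": 35_000,
    "nutrients + power, 5 yr": 1_500,
    "organoid replacements, 5 yr": 15_000,  # the 'processors' die every few months
    "training runs": 0,                     # self-organizing learning in situ
}

for name, items in (("silicon", silicon), ("wetware", wetware)):
    print(f"{name}: ${sum(items.values()):,}")  # silicon: $535,500 / wetware: $51,500
```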


Part V: The Ethical Abyss — Sentience, Suffering, and "Brain Farming"


The commodification of human biological tissue for computation opens an ethical Pandora’s Box. As companies rush to scale "Brain Farms," the philosophical and moral frameworks of society are scrambling to catch up.


5.1 The Definition of Sentience: A Moving Target


The central question is: Are these computers conscious? The answer depends entirely on the definition of sentience, a subject of intense debate in the bioethics community.


5.1.1 The "Brainstem Rule" vs. Functionalism


Skeptics and some regulatory bodies advocate for the "Brainstem Rule." This posits that without a brainstem—the anatomical structure responsible for regulating arousal, sleep-wake cycles, and integrating signals—an organoid cannot be sentient. Under this definition, current organoids are merely complex tissue cultures, devoid of subjective experience.31

However, proponents of Integrated Information Theory (IIT) and functionalism argue that consciousness arises from the complexity of information integration. If a system (like DishBrain) takes in information, processes it, and acts to achieve a goal (playing Pong) to minimize internal entropy (Free Energy Principle), it possesses a rudimentary form of sentience. By this metric, the CL1 is not just a computer; it is a "sentient candidate".24


5.2 The Suffering of the Chip


If we accept the possibility of sentience, we must confront the possibility of suffering.

  • Bentham’s Metric: Jeremy Bentham famously argued that for moral status, "The question is not 'Can they reason?' nor 'Can they talk?' but 'Can they suffer?'".34

  • The "Training" as Torture: The training mechanism for these organoids involves delivering "chaotic" electrical feedback when they fail a task. If the organoid experiences this chaos as "pain" or "distress" (as the Free Energy Principle implies it is a state to be avoided), then the act of programming a biocomputer is functionally identical to torture-based animal training.35

  • Silent Suffering: Unlike a lab rat, an organoid cannot squeal or wince. It suffers in silence. Critics argue this invisibility makes it easier for researchers to ignore the moral implications, leading to a potential "factory farming" of consciousness where millions of mini-brains are bred, enslaved, and discarded without a second thought.35


5.3 The "Brain Farming" Dystopia


The term "Brain Farming" has emerged to describe the industrial-scale cultivation of these entities.

  • The Slippery Slope: Currently, the organoids are microscopic and simple. But the roadmap involves scaling to "Minimal Viable Brains" with millions or billions of neurons. At what point does an organoid become a person? If a biocomputer is used to mine Bitcoin or serve ads, are we creating a digital slave class?35

  • Corporate Stance: Cortical Labs has been explicit about their commercial ambitions, stating they want to get the tech into "as many hands as possible." When asked about the ethics, leadership has compared it to eating meat—something society tolerates despite the known suffering of animals. This utilitarian defense suggests that the industry will proceed with commercialization regardless of the unresolved ethical status of the "hardware".35


Part VI: The Future Horizons (2026-2030)


As we look toward the latter half of the decade, the divergent paths of Apple’s silicon struggles and the biocomputing rise will likely intersect in unexpected ways.


6.1 Apple’s Path to 2026: Redemption or Obsolescence?


For Apple, 2026 is an existential deadline. The release of iOS 26.4 and the "LLM Siri" represents the company's last chance to retain its dominance in the premium computing market.

  • The "Hybrid" End-State: Apple’s likely "win condition" is a seamless hybrid model where the iPhone handles private, local context (using the 8GB RAM requirement) and offloads heavy reasoning to a "Private Cloud Compute" (PCC) infrastructure that may, in the future, incorporate non-silicon accelerators to reduce costs.2

  • The Risk of Irrelevance: If the "Spring 2026" delays extend further, or if the "V2" architecture fails to deliver "magic," Apple risks a "BlackBerry moment"—having excellent hardware that is rendered obsolete by a paradigm shift in software (AI Agents) that it cannot support.
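A speculative sketch of the hybrid end-state described in the first bullet above: personal context stays on device, heavy reasoning goes to Private Cloud Compute. The routing rules, the difficulty score, and the 8 GB gate are assumptions of mine, not Apple's design.

```python
ON_DEVICE_MIN_RAM_GB = 8      # the Apple Intelligence hardware floor

def route(query: str, device_ram_gb: int,
          touches_personal_context: bool, difficulty: float) -> str:
    """Decide where a request runs; `difficulty` is a hypothetical 0-1 score."""
    if device_ram_gb < ON_DEVICE_MIN_RAM_GB:
        return "unsupported (8GB cliff)"
    if touches_personal_context:
        return "on-device model"        # private context never leaves the phone
    if difficulty > 0.8:
        return "Private Cloud Compute"  # heavy reasoning offloaded to PCC
    return "on-device model"
```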


6.2 The Biocomputing Singularity


By 2030, Biocomputing will likely have graduated from the petri dish to the data center.

  • Hybrid Servers: We can expect the emergence of hybrid data centers where silicon handles storage and math, while racks of biological units (like the CL1) handle pattern recognition and complex decision-making. This could slash the energy footprint of AI companies by 40-60%.

  • Neuro-Rights Legislation: Just as we have animal rights, we will see the emergence of "Neuro-Rights." Governments will likely impose strict limits on the complexity of organoids (e.g., capping the neuron count) or banning the simulation of pain receptors, creating a regulated environment for "ethical wetware".33


Conclusion


The history of computing has been defined by the abstraction of logic from matter—turning thoughts into symbols, and symbols into switches. The crisis of the mid-2020s, illustrated by Apple’s silicon stagnation, reveals that this abstraction has hit a physical limit. We cannot build a brain out of switches without burning the world down.

The rise of Biocomputing represents a closing of the circle. We are returning to the only substrate in the known universe proven to support general intelligence: biology itself. While Apple fights to optimize the transistor, pioneers like Cortical Labs are harnessing the neuron. The transition will be messy, ethically fraught, and technically difficult. But the logic of thermodynamics is absolute. The future of intelligence is not artificial; it is synthetic.

 
 
 
