The Fungal Robot: Cornell's Mycelium Experiment and the Regulatory Void Around Non-Neural Intelligence
Cornell's Organic Robotics Lab put a living king oyster mushroom in a robot's control loop. No existing AI or bioethics law knows how to audit it.
In August 2024, Cornell's Organic Robotics Lab published a paper in Science Robotics describing a robot whose movements are controlled not by a microcontroller algorithm or a neural network, but by the live electrophysiological activity of a king oyster mushroom. It is the first time a non-neural biological system has been placed inside a robot's control loop — and the first such system that no existing ethical or regulatory framework knows how to audit.
The lead author is Anand Mishra, a research associate in the lab of Rob Shepherd at Cornell Engineering, with the University of Florence as co-author institution. The paper describes two machines: a five-legged soft robot that walks like a spider, and a wheeled rover. Both are driven by a cluster of living Pleurotus eryngii mycelium integrated into an electrical interface that shields against vibration and electromagnetic noise. The interface continuously reads the spike potentials the mycelium generates on its own, parses their rhythm, and translates them into motor commands. It is a closed loop. A follow-up experiment published in April 2025 showed that shining UV light on the mycelium's "head" changes the robot's gait — the fungus is not merely a signal source, it is making environmental judgments.
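The loop described above — read the mycelium's spontaneous spike potentials, parse their rhythm, translate it into a motor command — can be sketched in a few lines. This is a minimal illustrative sketch, not Cornell's actual pipeline: the threshold-crossing detector, the sampling rate, and the inter-spike-interval-to-speed mapping are all assumptions made for the example.

```python
import random

def detect_spikes(signal, threshold=0.5):
    """Return sample indices where the trace crosses the threshold upward
    (a deliberately simple stand-in for real spike detection)."""
    return [i for i in range(1, len(signal))
            if signal[i - 1] < threshold <= signal[i]]

def spikes_to_gait(spike_indices, sample_rate_hz=10.0, base_speed=1.0):
    """Map the mean inter-spike interval to a motor speed command.
    Faster spiking -> faster gait (an assumed mapping, not the paper's)."""
    if len(spike_indices) < 2:
        return 0.0  # no rhythm detected: hold still
    intervals = [(b - a) / sample_rate_hz
                 for a, b in zip(spike_indices, spike_indices[1:])]
    mean_isi = sum(intervals) / len(intervals)
    return base_speed / mean_isi  # shorter intervals -> higher speed

# Simulated recording: one spike every 2 s at 10 Hz sampling, plus noise.
random.seed(0)
signal = [1.0 if i % 20 == 0 else random.uniform(0.0, 0.3)
          for i in range(200)]
spikes = detect_spikes(signal)      # indices 20, 40, ..., 180
command = spikes_to_gait(spikes)    # 2 s mean ISI -> speed 0.5
```

The point of the sketch is the architecture, not the numbers: the fungus supplies the rhythm, and silicon does nothing but threshold, average, and scale it.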
The reason this matters is not the novelty of the image. It is that the experiment quietly overturns an assumption that has gone largely unchallenged since Darwin: that intelligence must be housed in a nervous system. Fungi have no brain, no neurons, and no structures that traditional biology recognizes as signal-conducting cells. Yet the ionic gradients inside mycelium produce measurable spike potentials in response to light, humidity, chemicals, and mechanical touch — and those spikes carry stable temporal patterns. In Mishra's apparatus, the mycelium is not playing the role of a sensor. It is playing the role of a controller. It perceives, and it decides. That is the distinction.
The inversion is a conceptual one, not a mechanical one
Biohybrid robotics is not a new field. Over the past decade researchers have produced xenobots made from frog stem cells, cyborg moon jellies with implanted electronics, Shoji Takeuchi's self-repairing living skin at the University of Tokyo, Rice University's necrobotics built from dead spiders repurposed as grippers, and various other schemes for "using living tissue as a mechanical part." The shared premise of all of them was a clean division of labor: the biological half provided muscle, nerve, or soft material, and silicon electronics provided the brain. Biology was body; silicon was mind. Cornell's experiment inverts this. Silicon is demoted to a signal amplifier and a motor driver, and the actual decision-maker is the fungus. The roles have been swapped.
The inversion matters because the assumption it overturns has never been seriously confronted by engineering ethics. The EU Artificial Intelligence Act, which entered into force in 2024, regulates AI systems by risk tier — but its opening articles define the subject as systems based on machine learning, logic, or knowledge-based approaches, with every risk classification keyed to training data and model architecture. Mycelium-driven robots are neither. They are not algorithmic, and they are not symbolic; they are driven by live electrophysiology. Directive 2010/63/EU on the protection of animals used for scientific purposes covers live vertebrates and cephalopods — fungi are not even candidates for discussion, and neither are plants. The US federal research-ethics framework covers human subjects (the Common Rule, which triggers IRB review) and vertebrate animals (the PHS Policy, which triggers IACUC review); fungal research requires neither. In other words: a living system that is actively making decisions sits, from Brussels to Bethesda, entirely outside any legal framework that thinks it needs supervision. The law is empty.
A living system that is actively making decisions sits, from Brussels to Bethesda, entirely outside any legal framework that thinks it needs supervision.
The commercial timeline is shorter than regulators expect
The first-order applications Mishra's team describes are agricultural. A mycelium robot can be deployed to a field to sense soil moisture, pH, residual NPK, and the presence of pathogens, and to decide — on the basis of the fungus's own electrical response — whether and how much to fertilize. This is not marketing copy. Mycelium is already a component of the soil microbial community, and for some analytes its chemical response is more sensitive than that of a metal-oxide sensor. The global agricultural sensor market was roughly $2.3 billion in 2025 and is projected to reach $6.9 billion by 2035; the precision farming market sits at $14.2 billion in 2025 with a $48 billion forecast for 2035. A fungal robot that can compress sensing, decision, and response into a single living unit will eat the middle layer of that market whole. The middle layer is where the margins live.
The second track is deep-space exploration. Over the past five years NASA and ESA have repeatedly cited "long-duration biological sensing" as a priority in public roadmaps. Conventional electronic sensors fail at high rates in Martian surface conditions or under the ice of Europa, while fungi's tolerance to low temperatures and ionizing radiation has been demonstrated on Earth by the radiotrophic species found in the Chernobyl exclusion zone. A mycelium probe that can survive at minus fifty degrees Celsius for years, requires no external power, and sustains itself on sub-surface liquid water and organic matter is a hard currency for the next generation of planetary missions. This is not metaphor. Cornell's current engineering record is one month of viability; Mishra has stated in interviews that multi-year lifespans are not a technical obstacle.
Neither timeline is long. Agricultural pilots are three to five years out; deep-space pre-studies are already in JPL's internal roadmap. If the regulatory framework continues to operate on the old "ship first, legislate later" rhythm, then by the time mycelium sensors reach cornfields in the American Midwest, the ethics conference on "whether fungi have moral standing" will not even have convened.
The ethical vacuum is not a philosophical problem. It is a commercial risk.
One might object that fungi are not people, and that worrying about their "welfare" is sentimental. The objection is intuitively solid, and legally and commercially unsound. The question is not whether mycelium "feels." The question is: when a living system is deployed to make decisions in a real environment, and the decision goes wrong, who does the law hold responsible? If a mycelium robot misreads the soil state in a field and over-fertilizes in a way that contaminates a neighboring farm's water supply, who is the defendant — the Cornell lab, the manufacturer, the farmer, or the mycelium itself? This is not a hypothetical. The responsibility chain for traditional industrial robots is clear: hardware vendors answer for physical defects, software vendors answer for algorithmic defects, users answer for misuse. The chain closes in three links. A mycelium controller falls outside all three — its "judgment" emerges from the self-organizing reactions of a living organism, bound by no human programming. And this is precisely the most dangerous state a technology can occupy: the capability has arrived, but the accountability mechanism is still at draft stage.
The Oxford philosopher of information Luciano Floridi proposed the concept of "morally significant artifacts" as early as 2023 — the idea that any system that makes decisions affecting human welfare, regardless of whether it possesses moral agency itself, must be accompanied by a corresponding accountability framework. The fungal robot is the extreme case of this concept. It is artificial, because the mycelium was cultivated. It is alive, because the mycelium continues to grow autonomously. It is decisional, because the electrical signal directly drives the motors. Three identities, all active at once. Classify it as a "device" and you ignore its living character; classify it as an "organism" and you understate its engineered nature. No existing regulatory paradigm leaves room for this in-between state. The slot is missing.
Whoever defines "intelligence" defines the rules
The second regulatory crisis triggered by fungal robots is thornier than the accountability question. It challenges the definitional boundary of the phrase "artificial intelligence" itself. The EU AI Act defines its regulated object as "systems based on machine learning, logic, or knowledge-based approaches" — the unspoken premise being that intelligence is a product of symbols or statistics. The Cornell experiment demonstrates a different possibility: intelligence as the self-organizing product of electrochemical gradients, requiring no humanly readable "model" at all. If the regulatory definition cannot accommodate this form, the next generation of biohybrid systems will grow freely in the shadow of the law.
China's posture in this field is worth watching as well. The Chinese Academy of Sciences has made serious investments in both synthetic biology and brain-inspired computing, and Fudan and Westlake Universities have set up young teams working on fungal electrophysiology — but as of April 2026 no Chinese body has issued any policy statement on "non-neural intelligent biological systems," and the public agenda of the Ministry of Science and Technology's ethics committee contains no such item. That is an absence, not a silence. Once the United States and Europe let Cornell's model capture the first wave of the agricultural sensing market, China's fastest route to catch up will not be to build better mycelium robots; it will be to step directly over the regulatory blank — the same strategy it has used in gene editing, autonomous driving, and large language models over the past decade. Post-He Jiankui, the strategy has grown quieter, but it has not disappeared. The stated intention is late-mover catch-up; the operational reality is rule-avoidance.
The crux of the matter has never been the fungus itself. Whether mycelium possesses "feeling," whether fungi warrant moral standing — these are questions philosophers can argue at their leisure. The genuinely urgent matter is something else: a living control system is about to enter the real world in the form of a commercial product, and not a single regulator on the planet knows which statute book to audit it from. Cornell's paper was published two years ago, and the first generation of prototypes already walks, responds to light, and survives in the lab for over a month. The next generation will last years; the one after that, possibly decades. When the first mycelium sensor makes an erroneous judgment about a pesticide application in a Midwest cornfield, the lawyers will finally open the statute books — and find that not a single provision names it.
SharpPost's assessment is blunt: the fastest-moving tracks in technology are almost always the ones where ethics is slowest to rise from its chair. The window this time is four to six years, not ten. Regulators should use the next three years to build a baseline category for "non-neural living control systems" and grant such systems a legal identity, instead of waiting until after the first incident to convene an expert panel. The mycelium will not wait.
If you've read this far, you care about what actually matters.
SharpPost delivers weekly deep analysis straight to your inbox: finance, geopolitics, technology, cutting through the surface. Zero ads, zero filler.