As the global population has expanded over time, agricultural modernisation has been humanity’s prevailing approach to staving off famine.
A variety of mechanical and chemical innovations delivered during the 1950s and 1960s represented the third agricultural revolution. The adoption of pesticides, fertilisers and high-yield crop breeds, among other measures, transformed agriculture and ensured a secure food supply for many millions of people over several decades.
Using AI in agriculture
When assisting humans in fields and factories, AI can process, synthesise and analyse large amounts of data steadily and ceaselessly. It can outperform humans in detecting and diagnosing anomalies, such as plant diseases, and in making predictions, for example about yield and weather.
Across several agricultural tasks, AI may relieve growers of labour entirely, automating tilling (preparing the soil), planting, fertilising, monitoring and harvesting.
Algorithms already regulate drip-irrigation grids, command fleets of topsoil-monitoring robots, and supervise weed-detecting rovers, self-driving tractors and combine harvesters. A fascination with the prospects of AI creates incentives to delegate ever more agency and autonomy to it.
This technology is hailed as the way to revolutionise agriculture. The World Economic Forum, an international nonprofit promoting public-private partnerships, has set AI and AI-powered agricultural robots (called “agbots”) at the forefront of the fourth agricultural revolution.
From hackers to accidents
First, given that these technologies are connected to the internet, criminals may try to hack them.
Disrupting certain types of agbots would cause hefty damage. In the US alone, soil erosion costs US$44 billion (£33.6 billion) annually. This has been a growing driver of demand for precision agriculture, including swarm robotics, which can help farms manage and lessen erosion's effects. But these swarms of topsoil-monitoring robots rely on interconnected computer networks and are therefore vulnerable to cyber-sabotage and shutdown.
Similarly, tampering with weed-detecting rovers would let weeds loose at a considerable cost. We might also see interference with sprayers, autonomous drones or robotic harvesters, any of which could cripple cropping operations.
Beyond the farm gate, with increasing digitisation and automation, entire agrifood supply chains are susceptible to malicious cyber-attacks. At least 40 malware and ransomware attacks targeting food manufacturers, processors and packagers were registered in the US in 2021. The most notable was the US$11m ransomware attack against the world’s largest meatpacker, JBS.
Then there are accidental risks. Before a rover is sent into the field, its human operator instructs it to sense certain parameters and detect particular anomalies, such as plant pests. Whether through its own mechanical limitations or by command, it disregards all other factors.
The same applies to wireless sensor networks deployed on farms, designed to notice and act on particular parameters, for example soil nitrogen content. If imprudently designed, these autonomous systems might prioritise short-term crop productivity over long-term ecological integrity. To increase yields, they might apply excessive herbicides, pesticides and fertilisers to fields, which could harm soil and waterways.
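The contrast between a yield-maximising objective and one that respects an ecological limit can be sketched in a few lines. This is a hypothetical illustration only: the function name, the target level and the dosage cap are assumptions for the example, not a real agbot API or agronomic recommendation.

```python
def fertiliser_dose(soil_nitrogen_ppm: float,
                    target_ppm: float = 40.0,
                    max_dose_kg_per_ha: float = 50.0) -> float:
    """Return a nitrogen dose (kg/ha) bounded by an ecological cap.

    All thresholds here are illustrative assumptions.
    """
    # How far the measured soil nitrogen falls short of the target.
    deficit = max(0.0, target_ppm - soil_nitrogen_ppm)
    # A purely yield-driven controller would apply this unconditionally.
    naive_dose = deficit * 2.5  # illustrative conversion factor
    # Capping the dose encodes the long-term constraint the text calls for:
    # short-term productivity is no longer allowed to override soil health.
    return min(naive_dose, max_dose_kg_per_ha)
```

The cap is the design choice that matters: without it, a sensor reading of zero would translate directly into the largest possible application of fertiliser.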
Rovers and sensor networks may also malfunction, as machines occasionally do, sending commands based on erroneous data to sprayers and agrochemical dispensers. And there is always the possibility of human error in programming the machines.
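One common safeguard against such malfunctions is to validate sensor readings before any command is dispatched. The sketch below is a minimal, hypothetical example of that idea; the plausible range and jump threshold are invented for illustration and would differ on any real system.

```python
# Plausible range for a soil-moisture reading, in percent (assumed values).
PLAUSIBLE_MOISTURE = (0.0, 100.0)

def safe_to_act(readings: list[float],
                plausible: tuple[float, float] = PLAUSIBLE_MOISTURE,
                max_jump: float = 20.0) -> bool:
    """Return True only if readings look trustworthy enough to act on.

    Rejects values outside the physically plausible range and
    implausibly sudden jumps between consecutive readings, which
    often indicate a failing probe rather than a real change.
    """
    low, high = plausible
    if any(not (low <= r <= high) for r in readings):
        return False
    jumps = [abs(b - a) for a, b in zip(readings, readings[1:])]
    return all(j <= max_jump for j in jumps)
```

A dispenser guarded this way would hold off and flag the anomaly for a human operator instead of spraying on the basis of a faulty reading.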
Safety over speed
Agriculture is too vital a domain for us to allow the hasty deployment of potent but insufficiently supervised, often experimental technologies. If we do, they may intensify harvests but undermine ecosystems. As we emphasise in our paper, the most effective way to treat these risks is prediction and prevention.
We should be careful in how we design AI for agricultural use and should involve experts from different fields in the process. For example, applied ecologists could advise on possible unintended environmental consequences of agricultural AI, such as nutrient exhaustion of topsoil, or excessive use of nitrogen and phosphorus fertilisers.
Also, hardware and software prototypes should be carefully tested in supervised environments (called "digital sandboxes") before they are deployed more widely. In these spaces, ethical hackers, also known as white-hat hackers, could probe them for vulnerabilities in safety and security.
This precautionary approach may slightly slow the diffusion of AI. Yet it should ensure that the machines that graduate from the sandbox are sufficiently sensitive, safe and secure. Half a billion farms, global food security and a fourth agricultural revolution hang in the balance.