96% Accuracy, 0% Impact: The Promise and Paradox of AI in Food Security

The 96% Accuracy Illusion

Imagine a world where artificial intelligence can predict food crises with 96% accuracy. Neural networks process vast datasets, identifying patterns invisible to human analysts, and deliver precise forecasts about when and where hunger will strike. Researchers celebrate these breakthrough results, media outlets herald the dawn of AI-powered food security, and funding agencies invest billions in algorithmic solutions. Meanwhile, 281.6 million people faced hunger in Africa alone in 2020—an increase of 46.3 million from the previous year (Tamasiga et al., 2023).

This is not a hypothetical scenario. Christensen et al. (2021) developed prediction models achieving 96% precision, recall, and F1 scores under laboratory conditions. Yet when we examine the real-world performance of established early warning systems, a starkly different picture emerges, one that reveals the fundamental limitations of algorithmic approaches to food security. This staggering gap between laboratory brilliance and field failure exemplifies what we might call “technological myopia”: the systematic focus on technical performance metrics while remaining blind to implementation realities and structural barriers.
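To make the laboratory benchmark concrete, the minimal sketch below shows how scores of this kind are typically produced: a model is trained and tested on random splits of the same historical dataset, so the test data is guaranteed to resemble the training data. The features, labels, and model here are synthetic placeholders of our own invention, not Christensen et al.’s actual pipeline.

```python
# Illustrative sketch (not Christensen et al.'s method): how the precision,
# recall, and F1 scores reported in laboratory studies are typically computed
# from a held-out test split of historical data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score

rng = np.random.default_rng(0)

# Hypothetical features (e.g., rainfall anomaly, price index, NDVI) and a
# binary "food crisis" label -- purely synthetic data for illustration.
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=2000) > 1).astype(int)

# A random split assumes future cases look like past ones -- the assumption
# that real deployments routinely violate.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

print(f"precision={precision_score(y_test, pred):.2f}",
      f"recall={recall_score(y_test, pred):.2f}",
      f"f1={f1_score(y_test, pred):.2f}")
```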

The contemporary discourse surrounding artificial intelligence and food security suffers from this peculiar form of technological myopia. While researchers develop increasingly sophisticated algorithms and celebrate impressive accuracy scores, the fundamental architecture of global hunger remains largely unchanged. The persistent gap between AI’s demonstrated capabilities and its real-world impact reveals deeper structural problems that algorithmic sophistication cannot address. Rather than asking how AI can solve food insecurity, we must confront a more uncomfortable question: Why does AI’s promise consistently exceed its delivery, and what does this systematic failure reveal about our approach to global hunger?

Understanding Technological Myopia in AI Food Security

Technological myopia operates through a predictable cycle that characterizes much of contemporary AI food security research. The cycle begins with impressive laboratory results—like the 96% accuracy achieved by Christensen et al.—which generate media attention and research funding. This attention leads to more sophisticated algorithmic development, producing even more impressive technical metrics, which in turn attract additional investment and research interest. However, this cycle systematically avoids engagement with the messy realities of deployment, the political economy of food systems, and the structural causes of hunger.

The myopia manifests in several ways that become clear when we examine the research landscape systematically. Yang et al. (2025) conducted a comprehensive systematic review of AI applications in food banks and pantries, the institutions that most directly serve food-insecure populations, and identified only five peer-reviewed papers published between 2015 and 2024. This scarcity is particularly striking given that 28.9% of the global population experiences moderate or severe food insecurity, an enormous population that AI-driven solutions could in principle serve. The disconnect between technical capability and practical application suggests that researchers are optimizing for publication metrics rather than real-world impact.

The geographic distribution of research efforts further illuminates this technological myopia. Sarku et al. (2023) identified a troubling pattern in AI food security research: while most studies focus on sub-Saharan Africa, the research organizations conducting this work are predominantly based in Europe and the Americas. This creates a fundamental disconnect between the communities facing food insecurity and those developing technological solutions for them. The resulting research ecosystem prioritizes publishable algorithmic innovations over deployable interventions, generating impressive technical metrics while failing to address the practical barriers that prevent AI implementation in food-insecure communities.

The Laboratory-to-Field Performance Gap: Authoritative Evidence

To understand why technological myopia persists, we must examine the systematic differences between laboratory conditions and real-world deployment environments. The most comprehensive evaluation of early warning system performance comes from Backer and Billing’s (2021) rigorous analysis of the Famine Early Warning Systems Network (FEWS NET) across 25 African countries from 2009 to 2020. Their findings reveal a troubling pattern that exemplifies the implementation challenges facing AI food security applications.

FEWS NET, established by USAID in 1985, represents one of the most sophisticated and well-resourced early warning systems in operation. The system integrates satellite data, weather monitoring, market analysis, and field reports to generate food security projections that guide humanitarian response across much of Africa. If any system should demonstrate the potential for algorithmic approaches to food security, it would be FEWS NET. Yet Backer and Billing’s independent evaluation reveals systematic performance degradation precisely when the system is most needed.

FEWS NET Performance by Food Security Level (2009-2020)
Food Security Level     Accuracy Rate   Context
Level 1 (Minimal)       92.65%          Normal conditions
Level 2 (Stressed)      74.38%          Moderate food insecurity
Level 3 (Crisis)        65.84%          Significant food insecurity
Level 4 (Emergency)     41.23%          Severe food insecurity
Level 5 (Famine)        29.21%          Catastrophic conditions
Overall Average         83.64%          All conditions

Source: Backer & Billing (2021), Global Food Security

This performance pattern reveals what might be termed “inverse effectiveness”: the system performs worst precisely when accuracy matters most. At Level 5 (Famine conditions), FEWS NET achieves only 29.21% accuracy, meaning it fails to correctly predict outcomes in more than two-thirds of cases. Even at Level 4 (Emergency), accuracy drops to 41.23%, so the system is wrong in nearly six of every ten emergency assessments. This systematic degradation occurs despite decades of refinement, substantial funding, and access to increasingly sophisticated data sources and analytical tools.
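A short calculation clarifies how a respectable overall average can coexist with catastrophic performance at the severe end: overall accuracy is a prevalence-weighted mean, dominated by the common low-severity levels. In the sketch below, the per-level accuracies are rounded from the table above, but the prevalence shares are our own assumption, chosen only to show the arithmetic.

```python
# Sketch of how a strong overall accuracy can mask failure on rare, severe
# classes. Per-level accuracies are rounded from the table above; the class
# prevalences are hypothetical assumptions.
import numpy as np

levels = ["Minimal", "Stressed", "Crisis", "Emergency", "Famine"]
share = np.array([0.60, 0.22, 0.12, 0.05, 0.01])          # assumed prevalence
per_class_acc = np.array([0.93, 0.74, 0.66, 0.41, 0.29])  # from the table

# Overall accuracy is a prevalence-weighted average, so it is dominated by
# the common, low-severity classes.
overall = float(share @ per_class_acc)
for lvl, s, a in zip(levels, share, per_class_acc):
    print(f"{lvl:<10} prevalence={s:.0%}  accuracy={a:.0%}")
print(f"overall accuracy = {overall:.1%}")  # ~82%, despite 29% on famine
```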

The contrast with laboratory conditions becomes even more striking when we consider the controlled environment in which Christensen et al. achieved their 96% accuracy. Laboratory evaluations feature clean historical datasets, standardized formats, complete information, and the luxury of retrospective analysis. Researchers can optimize algorithms without confronting the political, economic, and logistical constraints that characterize actual humanitarian response. The 96% accuracy represents performance on carefully curated data under ideal conditions that bear little resemblance to the chaotic environments where early warning systems must actually operate.

Backer and Billing’s analysis illuminates several factors that contribute to the systematic performance degradation observed in real-world deployment. The study documents a consistent bias toward over-projection of severe food insecurity, suggesting that early warning systems may be optimized for avoiding false negatives rather than maximizing accuracy. This creates what humanitarian practitioners recognize as the “cry wolf” effect, where repeated over-projections may undermine system credibility and reduce responsiveness to genuine emergencies.
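The arithmetic behind this bias is simple, as the sketch below shows: under an assumed, and here entirely invented, 100:1 cost ratio between a missed famine and a false alarm, the expected-cost-minimizing threshold for issuing a warning falls to about one percent.

```python
# A minimal decision-theory sketch of the over-projection bias the study
# documents. The 100:1 cost ratio is an assumption for illustration only.
COST_FN = 100.0   # assumed cost of failing to warn before a real crisis
COST_FP = 1.0     # assumed cost of a false alarm

# Warn when the expected cost of staying silent exceeds that of warning:
#   p * COST_FN > (1 - p) * COST_FP   =>   p > COST_FP / (COST_FP + COST_FN)
threshold = COST_FP / (COST_FP + COST_FN)
print(f"warn whenever predicted crisis probability exceeds {threshold:.3f}")
# With a ~1% threshold, warnings fire on many cases that never become
# crises -- the statistical root of the "cry wolf" effect described above.
```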

More fundamentally, the study reveals how implementation realities confound technical performance metrics. Backer and Billing note that “intervening humanitarian assistance contributes to explanation for over-projections,” highlighting a paradox inherent in early warning evaluation. When early warning systems successfully trigger humanitarian response, they may appear technically “wrong” because the intervention changes the predicted outcome. This creates a measurement problem that laboratory evaluations systematically avoid: success in the real world may look like failure in technical metrics.
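A toy simulation, entirely of our own construction rather than from Backer and Billing, makes the paradox explicit: when aid averts a correctly predicted crisis, naive retrospective scoring records the warning as a false positive.

```python
# Toy simulation (author's assumption, not from Backer & Billing) of the
# evaluation paradox: a correct warning that triggers aid can change the
# outcome it predicted, and then gets scored as a false positive.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
would_be_crisis = rng.random(n) < 0.10             # latent outcome without aid
warning = would_be_crisis & (rng.random(n) < 0.8)  # 80% of crises flagged
aid = warning & (rng.random(n) < 0.5)              # aid reaches half of those
observed_crisis = would_be_crisis & ~aid           # aid averts the crisis

# Naive retrospective scoring against *observed* outcomes:
false_positives = warning & ~observed_crisis
print(f"warnings issued:             {warning.sum()}")
print(f"scored as 'false positives': {false_positives.sum()} "
      f"(every one was genuinely correct)")
```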

The study also documents how “unanticipated climate and conflict shocks hinder the accuracy of projections,” revealing fundamental constraints on what algorithmic systems can realistically achieve. These constraints reflect the inherent unpredictability of the complex social, political, and environmental systems that drive food insecurity. Unlike laboratory conditions, real-world environments feature incomplete data, political interference, resource constraints, and the complex human factors that influence both crisis development and response capacity.

The FEWS NET performance data exemplifies a broader pattern documented across humanitarian early warning systems. Maxwell’s (2020) analysis of early warning and early action in East Africa identifies systematic gaps between technical prediction capabilities and implementation effectiveness. The study documents how early warning systems may achieve impressive technical metrics while failing to trigger timely or appropriate responses, creating what Maxwell terms “anticipatory information systems” that generate data without enabling action.

Similarly, Whittall’s (2010) critical examination of humanitarian early warning systems challenges the “myth” that technical sophistication translates to humanitarian effectiveness. Whittall argues that early warning systems often serve institutional needs for information and accountability rather than addressing the structural causes of humanitarian crises. This institutional capture helps explain why early warning systems may achieve impressive technical performance while humanitarian outcomes remain largely unchanged.

The pattern extends beyond early warning to encompass broader AI applications in humanitarian contexts. Krishnamurthy et al.’s (2020) analysis of food security prediction in the Greater Horn of Africa shows how climate and conflict events systematically degrade algorithmic performance, with the largest deviations occurring in the most food-insecure regions. This suggests that the environments most in need of AI intervention are precisely those where algorithmic approaches face the greatest limitations.

The authoritative evidence from FEWS NET evaluation reveals fundamental problems with how AI food security research approaches the relationship between technical performance and real-world impact. The focus on laboratory metrics like the 96% accuracy achieved by Christensen et al. systematically obscures the implementation challenges that determine whether AI systems can actually address food insecurity. The performance degradation documented by Backer and Billing suggests that current approaches to AI food security research may be optimizing for the wrong objectives. Rather than pursuing ever-higher accuracy scores in controlled conditions, research might focus on understanding why performance degrades in real-world deployment and developing approaches that maintain effectiveness under the chaotic conditions that characterize food crises.

More fundamentally, the FEWS NET evidence challenges the assumption that improved prediction necessarily leads to improved outcomes. Even if AI systems could achieve perfect prediction accuracy, the systematic gaps between early warning and early action documented by Maxwell and others suggest that technical solutions alone cannot address the political and institutional barriers that prevent effective humanitarian response. This analysis reveals how technological myopia operates in practice: impressive laboratory results generate research funding and media attention while systematic implementation failures remain largely invisible to the research community. The result is a research ecosystem that prioritizes technical sophistication over practical effectiveness, generating increasingly impressive algorithmic performance while the fundamental architecture of global hunger remains largely unchanged.

Four Fundamental Flaws in AI Food Security Approaches

The technological myopia that characterizes AI food security research manifests through four systematic flaws that prevent effective deployment and real-world impact. These flaws operate independently but reinforce each other, creating a research ecosystem that prioritizes technical sophistication over practical effectiveness.

1. The Implementation Paradox

The most striking feature of contemporary AI food security research is the inverse relationship between technical sophistication and implementation success. As algorithms become more sophisticated and achieve higher accuracy scores, their deployment becomes increasingly difficult and their real-world impact diminishes. This implementation paradox reflects the systematic avoidance of deployment challenges in favor of technical optimization.

The research literature reveals a consistent pattern of algorithmic advancement without corresponding implementation progress. Studies focus on improving accuracy from 90% to 95% while ignoring the fundamental barriers that prevent deployment of existing 90% accurate systems. The systematic review by Sarku et al. reveals three distinct patterns in AI model application: exclusive utilization of AI models without stakeholder involvement, partial stakeholder engagement in specific aspects of the modeling process, and iterative collaboration between AI developers and affected communities. Critically, the vast majority of studies remain experimental, lacking real-world implementation and feedback mechanisms that could validate and improve model effectiveness.

This pattern extends beyond individual studies to encompass the broader research ecosystem. The concentration of AI expertise in wealthy institutions creates what might be termed “technological colonialism,” where solutions are developed for, rather than with, food-insecure communities. Priyadarshini et al. (2018) developed neural network approaches to identify food insecure zones in Madhya Pradesh, India, using remote sensing data and socio-demographic indicators. While technically sophisticated, their approach exemplifies the top-down methodology that characterizes much AI food security research. Local communities become data sources rather than partners in solution development, reinforcing existing power imbalances rather than challenging them.

2. The Equity Illusion

The second fundamental flaw lies in AI’s systematic reinforcement of existing inequalities while claiming to address them. Despite rhetoric about democratizing access to food security solutions, AI applications often exacerbate the very disparities they purport to solve. This equity illusion operates through multiple mechanisms that remain largely unexamined in the research literature.

The digital divide represents the most obvious barrier to equitable AI deployment. Precision agriculture technologies, celebrated for their potential to transform smallholder farming, require infrastructure investments that remain prohibitively expensive for the very populations most affected by food insecurity. Alamu (2024) notes that in developing countries, where 80% of food is produced by smallholder farmers using rudimentary technologies, precision agriculture adoption remains extremely low due to financial constraints, technical knowledge gaps, and insufficient government support. The celebrated success stories—such as West African farmers achieving 60% savings in agrochemical use and 30% reductions in mineral fertilizer consumption—represent exceptional cases rather than scalable solutions.

More troubling is the systematic absence of ethical considerations in AI food security research. Yang et al. (2025) found that none of the studies in their systematic review addressed AI ethics, including model bias and fairness, or discussed intervention and policy implications in depth. This ethical blindness is particularly problematic given the vulnerable populations that AI food security systems claim to serve. Algorithmic bias in food distribution systems could systematically disadvantage already marginalized communities, while predictive models trained on historical data may perpetuate past inequities in resource allocation.

The economic structure of AI development creates additional barriers to equitable deployment. The high costs of developing and maintaining AI systems favor solutions that serve profitable markets rather than food-insecure populations. Commercial precision agriculture systems target large-scale farmers who can afford subscription services and equipment upgrades, while smallholder farmers—who produce most of the world’s food—remain underserved. This market logic ensures that AI advances will primarily benefit those already privileged with resources and technology access, potentially widening rather than narrowing food security gaps.

3. The Complexity Reduction Fallacy

The third critical flaw stems from AI’s systematic reduction of complex socio-political problems to technical optimization challenges. This complexity reduction fallacy manifests in the persistent focus on algorithmic performance metrics while avoiding engagement with the structural causes of food insecurity.

Food insecurity is fundamentally a problem of distribution and access, not production. Grewal et al. (2024) note that approximately 30% of the 430 billion pounds of food produced annually in the United States—worth $162 billion—goes uneaten, while nearly 200 million people globally lack consistent access to adequate nutrition. This paradox of abundance and scarcity reveals that food insecurity stems from economic and political structures rather than technical inefficiencies. Yet AI research consistently frames hunger as an information problem requiring algorithmic solutions rather than a justice problem requiring structural reform.

The bibliometric analysis by Tamasiga et al. (2023) reveals how AI food security research has evolved to focus on technical optimization while avoiding political economy considerations. Their analysis of publication trends from 1973 to 2022 shows increasing emphasis on machine learning applications, predictive modeling, and supply chain optimization, with limited attention to power dynamics, land rights, or economic inequality. This technical focus allows researchers to develop sophisticated models while avoiding the uncomfortable reality that food insecurity often results from deliberate policy choices rather than technical limitations.

Climate change provides a particularly clear example of this complexity reduction fallacy. While AI researchers develop increasingly sophisticated models to predict climate impacts on agricultural production, they systematically avoid engaging with the political and economic factors that determine vulnerability to climate change. Christensen et al. (2021) note that under the worst-case Representative Concentration Pathway (RCP 8.5) scenario, the area of moderate to very high desertification risk is projected to increase by 23% by century’s end. However, their analysis focuses on improving prediction accuracy rather than addressing the structural factors that make some populations more vulnerable to climate impacts than others.

4. The Validation Crisis

The fourth fundamental problem involves a systematic validation crisis that undermines claims about algorithmic effectiveness. Despite impressive laboratory results, the field lacks rigorous evidence that AI interventions improve food security outcomes in real-world settings. This validation crisis reflects deeper methodological problems that compromise the credibility of AI food security research.

The most obvious manifestation of this crisis is the persistent gap between algorithmic performance and system effectiveness, exemplified by the contrast between Christensen et al.’s 96% laboratory accuracy and FEWS NET’s 29% field accuracy under famine conditions. This gap suggests that the relationship between algorithmic accuracy and system effectiveness is far more complex than current research acknowledges. Technical performance metrics may be entirely irrelevant to real-world impact if deployment barriers prevent implementation or if the systems address the wrong problems.
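The sketch below, using synthetic data with a deliberately drifting relationship between predictors and outcomes, illustrates one mechanism behind the gap: a random split can flatter a model that a year-based holdout exposes. Nothing here reproduces any published pipeline; the drift, model, and data are assumptions.

```python
# Hedged sketch: why "laboratory" accuracy from a random split can exceed
# performance under a year-independent (temporal) holdout when conditions
# drift. All data and drift parameters are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
years = np.repeat(np.arange(2009, 2021), 500)
drift = 0.25 * (years - 2009)            # the relationship shifts over time
X = rng.normal(size=(len(years), 2))
y = (X[:, 0] + drift * X[:, 1]
     + rng.normal(scale=0.5, size=len(years)) > 0).astype(int)

# Random split: train and test mix all years, hiding the drift.
Xa, Xb, ya, yb = train_test_split(X, y, random_state=0)
random_acc = accuracy_score(
    yb, LogisticRegression(max_iter=1000).fit(Xa, ya).predict(Xb))

# Temporal holdout: train on 2009-2017, test on 2018-2020.
train, test = years <= 2017, years > 2017
temporal_acc = accuracy_score(
    y[test],
    LogisticRegression(max_iter=1000).fit(X[train], y[train]).predict(X[test]))

print(f"random-split accuracy:     {random_acc:.2f}")
print(f"temporal-holdout accuracy: {temporal_acc:.2f}")  # typically lower
```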

The absence of long-term impact studies further undermines validation claims. Yang et al. (2025) found that AI food security research consistently lacks follow-up studies to assess sustained impact beyond pilot projects. The celebrated success stories—such as the threefold efficiency increases reported by farmers using the CocoaSense platform in Ghana—represent short-term outcomes rather than sustained transformations (Grewal et al., 2024). Without longitudinal studies, it remains unclear whether AI interventions produce lasting improvements in food security or merely temporary optimizations that fade as novelty effects diminish.

Perhaps most troubling is the systematic absence of comparative studies that evaluate AI interventions against alternative approaches. The research literature treats AI as inherently superior to conventional methods without providing empirical evidence for this assumption. It remains unclear whether the resources invested in AI development would achieve greater food security impact if directed toward direct interventions such as cash transfers, infrastructure development, or policy reform. This comparative blindness reflects the technological determinism that characterizes much AI food security research.

Case Studies in Technological Myopia

Remote Sensing and Agricultural Monitoring

The remote sensing and neural network approaches developed by Bhadra (2023) exemplify technological myopia in agricultural monitoring. The PROSAIL-Net system demonstrates impressive capabilities in estimating leaf chlorophyll and leaf angle from UAV hyperspectral images, achieving high accuracy in controlled experimental conditions. The sophisticated transfer learning algorithms and 3D CNN architectures represent genuine technical innovations that advance the state of agricultural monitoring technology.
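For readers unfamiliar with the pattern, the fragment below sketches transfer learning in its generic form: freeze a backbone pretrained on simulated spectra and fine-tune a small head on field measurements. It is a schematic stand-in, not PROSAIL-Net; the layer sizes, band count, and training data are all invented.

```python
# Generic transfer-learning sketch (not PROSAIL-Net): freeze a "pretrained"
# backbone and fine-tune a small regression head on a new trait.
import torch
import torch.nn as nn

# Stand-in backbone over 200 hypothetical hyperspectral bands.
backbone = nn.Sequential(
    nn.Linear(200, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
)
for p in backbone.parameters():
    p.requires_grad = False   # freeze the pretrained weights

head = nn.Linear(32, 1)       # new task head: leaf chlorophyll estimate
model = nn.Sequential(backbone, head)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
x = torch.randn(16, 200)      # a batch of field spectra (synthetic)
y = torch.rand(16, 1) * 60.0  # chlorophyll targets in ug/cm^2 (synthetic)

for _ in range(5):            # a few illustrative fine-tuning steps
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(f"fine-tuning loss after 5 steps: {loss.item():.2f}")
```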

However, this technical sophistication exists in isolation from the socio-economic factors that determine whether improved crop monitoring translates into enhanced food security for vulnerable populations. The system requires expensive UAV equipment, hyperspectral cameras, and sophisticated data processing capabilities that remain inaccessible to most smallholder farmers. More fundamentally, the approach treats agricultural productivity as a purely technical challenge, ignoring the land tenure, market access, and credit availability issues that often determine whether farmers can benefit from improved monitoring information.

Food Bank and Pantry Applications

The systematic review by Yang et al. (2025) reveals technological myopia in AI applications for food banks and pantries. Despite identifying only five peer-reviewed papers on this topic, the existing research focuses primarily on technical optimization of donation processes and distribution logistics. Four of the five studies applied machine learning algorithms to structured data, including neural networks, K-means clustering, random forests, and Bayesian additive regression trees, while the fifth employed text-based topic modeling.
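As a concrete if hypothetical example of the genre, the sketch below clusters invented household visit records with K-means, the kind of segmentation a pantry might use to plan stocking. No data, features, or code from the studies in Yang et al.’s review are used here.

```python
# Hedged sketch of one technique the reviewed studies used (K-means):
# grouping pantry visit records to tailor stocking. All data are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Hypothetical per-household features: visits/month, household size,
# share of visits requesting culturally specific foods.
visits = np.clip(
    rng.normal([2.0, 3.5, 0.3], [1.0, 1.5, 0.2], size=(500, 3)), 0, None)

X = StandardScaler().fit_transform(visits)   # put features on one scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for k in range(3):
    center = visits[labels == k].mean(axis=0)
    print(f"cluster {k}: n={np.sum(labels == k):3d}, "
          f"visits/mo={center[0]:.1f}, hh size={center[1]:.1f}, "
          f"special-diet share={center[2]:.2f}")
```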

This technical focus systematically avoids engagement with the structural causes of food bank reliance and the political economy of charitable food distribution. The research treats food insecurity as a logistics problem requiring algorithmic optimization rather than examining why people need food banks in the first place. None of the studies addressed the ethical implications of using AI to manage charitable food distribution or considered whether algorithmic optimization might reinforce stigmatization of food bank users.

Supply Chain Disruption Prediction

The bibliometric analysis by Tamasiga et al. (2023) reveals how AI research on supply chain disruptions exemplifies complexity reduction fallacy. Their analysis of publication trends shows increasing emphasis on machine learning applications for predicting and managing supply chain disruptions, with particular focus on COVID-19 impacts and climate change effects. The research demonstrates genuine technical sophistication in developing predictive models and optimization algorithms.

However, this technical focus systematically avoids engagement with the political and economic factors that create supply chain vulnerabilities in the first place. The research treats disruptions as natural phenomena requiring technical prediction rather than examining how power imbalances, trade policies, and economic structures create differential vulnerability to disruptions. The emphasis on prediction and optimization obscures questions about who benefits from current supply chain arrangements and who bears the costs of disruptions.

The Political Economy of Technological Myopia

Understanding why technological myopia persists requires examining the institutional and economic incentives that shape AI food security research. The academic reward system prioritizes publication in high-impact journals, which favor technically sophisticated studies with impressive performance metrics over implementation-focused research with messy real-world results. Researchers advance their careers by developing novel algorithms rather than by deploying existing solutions effectively.

Funding agencies compound this problem by evaluating proposals based on technical innovation rather than implementation potential. The emphasis on “breakthrough” technologies and “cutting-edge” approaches systematically disadvantages research focused on deployment challenges, stakeholder engagement, and incremental improvements to existing systems. This funding structure creates perverse incentives that reward technical sophistication over practical effectiveness.

The concentration of AI expertise in wealthy institutions further reinforces technological myopia by creating physical and cultural distance between researchers and the communities they claim to serve. Researchers in well-funded laboratories develop solutions for problems they understand primarily through academic literature rather than direct experience. This distance enables the development of technically sophisticated solutions that ignore practical constraints and cultural contexts.

Commercial interests also contribute to technological myopia by promoting AI solutions that serve profitable markets rather than urgent human needs. The venture capital funding that drives much AI development seeks scalable technologies with large addressable markets, not solutions for marginalized populations with limited purchasing power. This market logic ensures that AI development prioritizes applications that can generate revenue rather than those that address the most pressing food security challenges.

Toward Accountable AI: Moving Beyond Technological Myopia

Addressing technological myopia requires fundamental changes in how we conceptualize, develop, and evaluate AI applications for food security. Rather than pursuing ever-more sophisticated algorithms, the field must develop what we might call “accountable AI”—approaches that prioritize implementation effectiveness, equity outcomes, and democratic participation over technical performance metrics.

Accountable AI begins with honest acknowledgment of technology’s limitations. Food insecurity is fundamentally a problem of political economy, not technical optimization. While AI can provide valuable tools for monitoring, prediction, and resource allocation, it cannot address the structural inequalities that create and maintain hunger. Effective food security interventions require political action to redistribute resources, reform land tenure systems, and challenge the economic structures that concentrate food access among privileged populations.

The development of accountable AI requires genuine partnership with affected communities rather than extractive research relationships. This means moving beyond consultation toward shared ownership of research agendas, with communities playing central roles in defining problems, designing solutions, and evaluating outcomes. Such partnerships require long-term commitments, capacity building investments, and willingness to cede control over research directions to community priorities.

Implementation accountability demands rigorous validation standards that prioritize real-world effectiveness over laboratory performance. Future AI food security research must include longitudinal impact studies, comparative analyses with alternative interventions, and comprehensive cost-benefit assessments. The field needs systematic evaluation of implementation barriers and honest assessment of when AI interventions are inappropriate or counterproductive.

The equity implications of AI deployment must become central to research design rather than peripheral considerations. This means conducting systematic bias audits of algorithmic systems, ensuring diverse representation in development teams, and prioritizing solutions that serve the most marginalized populations. AI food security research must grapple seriously with questions of digital justice, asking not just whether technologies work but whether they reinforce or challenge existing inequalities.
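A bias audit need not be elaborate. The minimal sketch below compares recall across two hypothetical demographic groups, the kind of per-group check Yang et al. found absent from the literature. The group labels, miss rates, and data are invented for illustration.

```python
# Minimal sketch of a bias audit: compare a model's error rates across
# demographic groups before deployment. All data and rates are assumptions.
import numpy as np
from sklearn.metrics import recall_score

rng = np.random.default_rng(4)
n = 4000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
y_true = rng.random(n) < 0.15
# Assume the model misses more true cases in the under-represented group.
miss_rate = np.where(group == "A", 0.10, 0.35)
y_pred = y_true & (rng.random(n) > miss_rate)

for g in ("A", "B"):
    mask = group == g
    tpr = recall_score(y_true[mask], y_pred[mask])
    print(f"group {g}: recall (true positive rate) = {tpr:.2f}")
# A gap this large means the system systematically under-serves group B;
# an audit would flag it before deployment, not after.
```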

Practical Steps Toward Implementation

For Researchers

Researchers must fundamentally reorient their approach to prioritize implementation over innovation. This requires developing new methodologies that center community partnership, long-term impact assessment, and comparative effectiveness research. Academic institutions should create incentive structures that reward deployment success rather than just publication metrics.

Specific changes include requiring community partnership agreements for all AI food security research, implementing mandatory five-year follow-up studies for any deployed system, and establishing ethical review processes that examine power dynamics and equity implications. Researchers should also develop new metrics that capture implementation success, community empowerment, and sustainable impact rather than focusing exclusively on technical performance.

For Funding Agencies

Funding agencies must shift evaluation criteria to prioritize implementation potential over technical novelty. This requires developing new review processes that include community representatives, implementation experts, and practitioners alongside technical reviewers. Funding should be structured to support long-term partnerships rather than short-term projects, with continued funding contingent on demonstrated real-world impact.

Geographic equity must become a central consideration, with significant portions of funding directed to institutions in the Global South and requirements for meaningful technology transfer and capacity building. Funding agencies should also support comparative effectiveness research that evaluates AI interventions against alternative approaches to determine optimal resource allocation.

For Policymakers

Policymakers must develop regulatory frameworks that ensure AI food security systems serve public interests rather than commercial priorities. This includes establishing ethical review requirements for AI systems that serve vulnerable populations, mandating community consent processes for data collection and system deployment, and creating accountability mechanisms for algorithmic bias and discrimination.

Policy frameworks should also address the structural causes of food insecurity rather than relying solely on technological solutions. This requires coordinated approaches that combine AI applications with land reform, social protection systems, and economic development initiatives that address the root causes of hunger.

Conclusion: Beyond the Promise and Paradox

The choice facing the AI food security community is not between embracing or rejecting technology, but between perpetuating solutions that serve existing power structures or developing accountable approaches that genuinely serve food-insecure communities. The 96% laboratory accuracy that characterizes much contemporary research represents both the promise and the paradox of algorithmic intervention: impressive technical capability that remains disconnected from real-world impact.

Moving beyond technological myopia requires intellectual humility about technology’s limitations and moral courage to confront the structural causes of food insecurity. AI can play a valuable supporting role in food security interventions, but only when deployed within broader strategies that address the political and economic roots of hunger. The sophisticated algorithms that achieve 96% accuracy in laboratory conditions must be evaluated not by their technical performance but by their contribution to reducing hunger among the 281.6 million people who faced it in Africa in 2020.

With climate change accelerating and global inequality deepening, the window for effective action on food security is rapidly closing. We cannot afford to waste another decade pursuing technically sophisticated solutions that fail to address hunger’s fundamental causes. The time has come to move beyond the promise and paradox of algorithmic intervention toward approaches that match technological capability with political commitment to justice. Only then can AI fulfill its potential to contribute meaningfully to the urgent work of ensuring food security for all.

References

Alamu, S. A. (2024). Systematic review of current trends in precision agricultural model to address food insecurity challenges. Journal of Applied Science and Environmental Management, 28(12), 4181-4192.

Backer, D., & Billing, T. (2021). Validating Famine Early Warning Systems Network projections of food security in Africa, 2009–2020. Global Food Security, 29, 100510. https://doi.org/10.1016/j.gfs.2021.100510

Bhadra, S. (2023). Informed AI for food insecurity: Applications of remote sensing, neural networks and transfer learning for digital agricultural monitoring [Doctoral dissertation]. Saint Louis University.

Christensen, C., Wagner, T., & Langhals, B. (2021). Year-independent prediction of food insecurity using classical and neural network machine learning methods. AI, 2, 244-260.

Grewal, D., Guha, A., Noble, S. M., & Bentley, K. (2024). The food production–consumption chain: Fighting food insecurity, loss, and waste with technology. Journal of the Academy of Marketing Science, 52, 1412-1430.

Krishnamurthy, P. K., Choularton, R. J., & Kareiva, P. (2020). How complex events affect food security early warning skill in the Greater Horn of Africa. Global Food Security, 26, 100274. https://doi.org/10.1016/j.gfs.2020.100274

Maxwell, D. (2020). Notes on early warning and early action in East Africa: Towards anticipatory information systems and action. Feinstein International Center, Tufts University.

Priyadarshini, K. N., Kumar, M., & Kumaraswamy, K. (2018). Identification of food insecure zones using remote sensing and artificial intelligence techniques. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-5, 659-664.

Sarku, R., Clemen, U. A., & Clemen, T. (2023). The application of artificial intelligence models for food security: A review. Agriculture, 13, 2037.

Tamasiga, P., Ouassou, E. H., Onyeaka, H., Bakwena, M., Happonen, A., & Molala, M. (2023). Forecasting disruptions in global food value chains to tackle food insecurity: The role of AI and big data analytics – A bibliometric and scientometric analysis. Journal of Agriculture and Food Research, 14, 100819.

Whittall, J. (2010). Humanitarian early warning systems: Myth and reality. Disasters, 34(2), 227-239. https://doi.org/10.1111/j.1467-7717.2009.01130.x

Yang, Y., An, R., Fang, C., & Ferris, D. (2025). Artificial intelligence in food bank and pantry services: A systematic review. Nutrients, 17, 1461.
