By Amina Ferati

Amina Ferati is the President of International Advisory, Products and Systems (i-APS). i-APS is a woman-owned consulting firm that combines global expertise with local presence. i-APS is a certified 8(a) and Economically Disadvantaged Women-Owned Small Business (EDWOSB).

An i-APS Yemen staff member collecting survey data.

As humanitarian crises become increasingly complex and multifaceted, the need for innovative and efficient solutions has never been more apparent. Just as artificial intelligence (AI) and machine learning (ML) are transforming the way we live, these technologies are also emerging as transformative forces in the humanitarian field. Our firm, International Advisory, Products, and Systems (i-APS) – with over a decade of experience providing monitoring, evaluation and learning services to donors including USAID, UN entities and INGOs – is pioneering the use of AI and ML in the humanitarian field.

AI refers to the use of computers to perform tasks that have historically required human intelligence. ML is the most frequently encountered subset of AI: it enables computers to make predictions from data using a general learning algorithm, without programming specific to the task, in fields such as vision and language. Both are powerful tools that can support more effective, efficient, and data-driven decision-making, helping us better understand the challenges faced by vulnerable populations and the effectiveness of interventions.
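The core idea, learning from examples rather than hand-written rules, can be shown with a minimal sketch. The nearest-neighbor rule below labels a new case purely from past examples; the household data and labels are invented for illustration and do not come from any i-APS program.

```python
# Minimal illustration of the ML idea: a generic learning rule
# (nearest neighbor) labels new cases from examples alone, with no
# task-specific rules coded by hand. All data here is hypothetical.

def nearest_neighbor(examples, query):
    """Return the label of the training example closest to `query`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: sq_dist(ex[0], query))
    return label

# Hypothetical training data: (household size, damage score) -> need level
examples = [
    ((2, 0.1), "low_need"),
    ((3, 0.2), "low_need"),
    ((6, 0.8), "high_need"),
    ((5, 0.9), "high_need"),
]

# A new household is classified by its closest known example.
print(nearest_neighbor(examples, (4, 0.85)))  # -> high_need
```

Swapping in different examples changes the predictions without changing a line of logic; that is what "no explicit programming specific to the task" means in practice.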

AI and ML have already made significant contributions to the humanitarian sector, accelerating time-sensitive or laborious activities and improving response efforts. For example, AI and ML solutions have automated satellite imagery analysis to provide real-time damage assessments for refugee camps in South Sudan[1], and have significantly reduced the exclusion of vulnerable populations from humanitarian services in Togo[2]. Beyond such innovative applications, these technologies can also enhance and optimize conventional monitoring and evaluation activities, such as transcription analysis. Transcription analysis involves extracting key insights from hours of qualitative interviews and hundreds of pages of transcripts, a task that typically requires a team of several people and weeks of work. Using AI and ML, the same process takes only a few days with the assistance of data engineers, while also surfacing subtle patterns and trends that might escape human analysts.
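One building block of automated transcript analysis can be sketched simply: surfacing the most frequent substantive terms across a set of interviews. The transcripts and stopword list below are invented, and a production pipeline would use trained language models rather than raw word counts; this is only a hedged illustration of the principle.

```python
# Hypothetical sketch: find the most common substantive terms across
# interview transcripts. Real transcript analysis would use trained NLP
# models; raw frequency counts are shown here only to illustrate the idea.
from collections import Counter
import re

# A small, illustrative stopword list (real pipelines use larger ones).
STOPWORDS = {"the", "a", "and", "to", "of", "we", "in", "is",
             "was", "for", "our", "but", "now"}

def top_terms(transcripts, n=3):
    """Return the n most frequent non-stopword tokens across transcripts."""
    counts = Counter()
    for text in transcripts:
        tokens = re.findall(r"[a-z']+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return [term for term, _ in counts.most_common(n)]

# Invented example transcripts, not from any real program.
interviews = [
    "Access to clean water was the main concern in our village.",
    "Water trucking stopped, and clean water access is now limited.",
    "The clinic reopened, but water access remains a concern.",
]

print(top_terms(interviews, n=2))  # -> ['water', 'access']
```

Even this crude count shows how recurring themes ("water", "access") rise out of free-text responses; model-based approaches extend the same idea to synonyms, sentiment, and topics.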

i-APS leverages this technology through KAPinsights, our AI and ML platform that provides humanitarian and development organizations with tools for analyzing, and learning from, large amounts of structured and unstructured data. Through KAPinsights, we have shortened the tool-development process by over 65%, enabling better quality control and a seamless flow of data from raw inputs to dashboards within an unprecedented timeframe. The platform has also reduced workload and work pressure by more than 60%, enhancing efficiency and productivity. Currently, our teams are fine-tuning natural language processing models to extract key insights from open-ended questions in beneficiary surveys, key informant interviews, and focus groups, and have built an AI assistant for creating survey tools, suggesting relevant questions, and translating them into several languages. Our teams have already used these tools to deliver insights into USAID-funded health systems strengthening programs in Yemen and over 60 projects funded by UN entities.

While we see the benefits of these technologies, they also carry risks and raise serious ethical concerns. AI and ML driven solutions are only as good as the data they are built on. If that data contains factual errors, gaps, misconceptions, or the biases of its designers and developers, those same flaws carry through into the results, ultimately making our decisions poorer and harming people and communities.

Additionally, the designs of the models themselves can reflect biases, including racial and ethnic bias, which have already raised concerns in other sectors. This has been seen in software used for surveillance, policing, and incarceration[3], and in automated CV screening for hiring, college admissions, and credit lending in the financial sector[4]. When, for instance, a model is trained to predict who has historically been hired, rather than who would contribute effectively to the organization if hired, the model's very goal bakes in racial, ethnic, and other biases. As a consequence, many model designs have favored outcomes for white men while excluding underrepresented communities and women. Fixing these design decisions requires careful curation of training datasets, deliberate bias correction, and strong accountability from developers. A further challenge lies in the 'black box' nature of AI and ML: understanding how these systems arrive at their decisions can be complex, making it difficult to identify and rectify inherent errors or biases.

The result is a field with great potential, but one rife with ethical, legal, and practical challenges. At i-APS, we seek to better understand how AI and ML can support donors and implementing partners by applying the technology to monitoring, evaluation, and learning: analyzing their data, drawing inferences, and supporting humanitarian response and programs. At the same time, we remain vigilant about the risks. This entails thorough consideration of training data quality, the potential biases inherent in both the data and the algorithms used, and a comprehensive understanding among our staff of the intricacies of AI analysis before the technology is deployed in our services.

  1. Quinn, J. A., et al. (2018). "Humanitarian applications of machine learning with remote-sensing data: review and case study in refugee settlement mapping." Phil. Trans. R. Soc. A 376: 20170363.

  2. Aiken, E., Bellue, S., Karlan, D., et al. (2022). "Machine learning and phone data can improve targeting of humanitarian aid." Nature 603, 864–870.
  3. Eubanks, Virginia (2019). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY: Picador, St. Martin's Press.
  4. O'Neil, Cathy (2018). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London, UK: Penguin Books.