Artificial Intelligence in International Development: Avoiding Ethical Pitfalls

Written by
Lindsey Andersen
May 20, 2019



Artificial intelligence (AI) will soon be at the center of the international development field. Amidst this transformation, neither the international development sector nor the growing AI and ethics field has given sufficient consideration to the unique ethical issues AI initiatives face in the development context. This paper argues that the multiple stakeholder layers in international development projects, as well as the role of third-party AI vendors, result in particular ethical concerns related to fairness and inclusion; transparency, explainability, and accountability; data limitations; and privacy and security. It concludes with a series of principles that build on the information and communication technology for development (ICT4D) community’s Principles for Digital Development to guide international development funders and implementers in the responsible, ethical implementation of AI initiatives.


Artificial intelligence (AI) has the potential to help solve some of the world’s most intractable international development problems. From helping farmers adapt to climate change, to predicting disease outbreaks, to making congested urban centers more livable, international development implementers have begun turning to AI for more effective solutions. With this potential, however, comes the possibility for abuse, misuse, and unintended consequences.

AI systems are powerful analytical tools, and without proper consideration of the ethical risks, they can harm the very communities they are designed to help.

For instance, a tool designed to monitor crop well-being could be misused to surveil and repress marginalized groups. The field of artificial intelligence and ethics has emerged to address these risks. However, because AI development and implementation is still heavily concentrated in the developed world, notably the United States, Europe, and China, little focus has been directed toward the unique ethical issues that arise when using AI tools in a development context. Because international development typically involves foreign organizations implementing projects in other countries, there are multiple layers of accountability and responsibility to consider. This is compounded by the fact that AI systems are typically built and managed by outside vendors. Perhaps most important to consider are the individual people whose data are used to fuel the AI system, and for whom the system is designed.

In light of this complicated context, this paper seeks to inform how international development funders and implementers engage with ethical considerations surrounding the use of artificial intelligence in a development context. It will begin by briefly explaining what artificial intelligence is and the state of the field today, followed by a review of how these tools are currently being developed for, and implemented in, international development. It will also examine the primary ethical concerns related to AI in development work, before concluding with a set of principles to guide the responsible use of AI in international development. These recommendations build off the Principles for Digital Development, a set of widely accepted best practices in the ICT4D field.


What is AI?

Although artificial intelligence may still seem like a futuristic fantasy, in truth, AI is all around us – though in mundane ways we rarely notice.

It recommends TV shows on Netflix, suggests friends on Facebook, filters spam from our inboxes, identifies faces in photos, powers voice assistants, enables targeted advertising, and translates text.

Marvin Minsky, one of the founding AI scholars, defines AI as “the science of making machines do things that would require intelligence if done by men” (COMEST 2017, 17). However, there is no agreed-upon definition of artificial intelligence. AI is considered more of a field than an easily definable “thing,” and is made up of many subfields. Although AI has been an active field for decades (National Science and Technology Council 2016, 5), it has only taken off in the last ten years due to the availability of massive amounts of data and an increase in computing power. The increased availability of data, largely thanks to our ever-expanding internet use, has significantly improved the capabilities of algorithms. Meanwhile, the rapid increase in computing power has enabled more cost-effective deployment of more powerful AI systems.

Currently, we have what is called “narrow AI”—single-task applications of artificial intelligence such as image recognition, language translation, and autonomous vehicles. In the future, researchers hope to achieve “artificial general intelligence” (AGI). This would involve systems that exhibit intelligent behavior across a range of cognitive tasks. However, researchers do not expect to achieve these capabilities for decades (National Science and Technology Council 2016, 7).

Machine Learning

Machine learning (ML) is the basis of most of the major advancements in artificial intelligence and constitutes the vast majority of AI we interact with today; indeed, the terms ML and AI are often used interchangeably. At its most basic, machine learning is a “statistical process that starts with a body of data and tries to derive a rule or procedure that explains the data or can predict future data” (National Science and Technology Council 2016, 18). Essentially, it is a machine that learns from data. This differs from the traditional approach to artificial intelligence, in which a programmer tries to translate the way humans make decisions into software code. Machine learning is particularly useful in cases where it is difficult for a human programmer to write down explicit rules to solve a problem. Today, many ML systems can be more accurate than humans at a variety of tasks, from driving to diagnosing disease.
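The idea of deriving a rule from data rather than hand-coding it can be sketched in a few lines. The example below is a deliberately minimal toy (ordinary least squares on synthetic points, not any production ML system): the program is handed data generated by a hidden rule and recovers that rule itself.

```python
# Toy illustration of "learning" a rule from data rather than hand-coding it.
# We fit y = a*x + b by ordinary least squares on synthetic points; the
# program derives the rule (a, b) from the data instead of a programmer
# writing the rule down explicitly.

def fit_line(points):
    """Return the slope and intercept minimizing squared error."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Synthetic data generated by the hidden rule y = 2x + 1
data = [(x, 2 * x + 1) for x in range(10)]
slope, intercept = fit_line(data)
print(round(slope, 6), round(intercept, 6))  # the model recovers ~2 and ~1
```

Real ML systems fit vastly more complex functions to vastly more data, but the principle is the same: the rule comes out of the data, not out of a programmer's head.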

As machine learning has been applied to certain tasks, specific ML approaches have developed. For instance, natural language processing helps computers understand, interpret, and manipulate human language—enabling common tools such as Google Translate and chatbots. Speech recognition allows computers to translate spoken language into text. It is often paired with natural language processing and is used in virtual assistants such as Apple’s Siri and Amazon’s Alexa. Finally, machine vision or computer vision allows computers to recognize and analyze images. It is used by Google Photos to help search for specific photos, as well as by Facebook to automatically tag friends in uploaded photos.


How AI is Used in International Development

The ability of AI tools to conduct complicated data analysis from massive amounts of data, at scale, presents a number of opportunities to help solve the world’s most challenging international development problems.

Today, the majority of AI initiatives in international development are still in the research, development, and piloting stage. Most rely on a few broadly available data sources such as satellite imagery, mobile phone data, and survey data. These data sources have enabled the development of AI systems in areas such as agriculture and healthcare. The use of AI in international development is likely to become more prevalent now that Amazon, Google, and Microsoft have all introduced cloud-based AI, significantly lowering the cost of running AI systems (Vosloo 2018). Although the most promising applications of AI in international development are still to come, a number of existing initiatives have begun experimenting with AI-powered solutions:

  • mCrops is using image processing tools to help farmers in Uganda diagnose crop disease.
  • Geekie, an adaptive learning start-up in Brazil, is using AI to provide tailored virtual tutoring to students.
  • A Nigerian chatbot system allows people to make payments and send money via messaging.
  • Aajoh, another Nigerian product, is developing an AI system for remote medical diagnosis to deal with a massive shortage of doctors in the country.
  • South African start-up Aerobotics uses drones and satellite images to help farmers optimize crop yields in Malawi, Zimbabwe, and Mozambique.
  • The United Nations is using natural language processing to analyze radio content in Uganda and gain insight into public opinion and the effectiveness of UN programs.
  • UNICEF is working on a facial recognition system to detect malnutrition in children around the world.
  • IBM has committed $100 million as part of its Project Lucy to help improve infrastructure across Africa. It is currently using AI to help farmers improve crop yields.

In the future, we can anticipate the continued development and improvement of existing AI models to play an even greater role in predicting disease outbreaks, drought, famine, and potentially even armed conflict. Particularly promising areas of use include remote medical diagnosis and disease outbreak management. By identifying patterns in disease transmission, AI can assist health workers to target and plan treatment more effectively. Access to satellite imagery and developments in image recognition can help farmers increase crop yields with models that suggest optimal times to plant, fertilize, water, and harvest. AI can also be used to model the dynamics of urbanization and address pressing issues such as informal settlement expansion and transportation congestion. And it could help governments optimize their budgets through smart resource allocation and management.

The potential applications of AI in international development are endless, and this enthusiasm is driving the transition from hypothetical to mainstream integration of AI into development practitioners’ toolkits. In April 2018, virtual ICT4D training company TechChange ran a four-week course on AI in International Development.[1]

It is clear that AI for international development has arrived. Now is the time to start thinking critically about how to implement it responsibly.


Ethics and AI: An Overview

Concerns about the ethics of AI stem from how the use of artificial intelligence has the potential to affect human lives. Not all uses of AI are controversial, however. For example, it is difficult to find ethical issues with systems that use satellite imagery and meteorological information to aid farmers in optimizing crop yields.

Rather, the primary ethical issues facing AI today are due to “the idea of AI, machines, making consequential decisions about people, often replacing decisions made by human-driven bureaucratic processes” (National Science and Technology Council 2016, 2).

This is known as algorithmic decision-making, and the use of AI in such systems often involves issues of justice, fairness, and accountability.

An algorithm, at its simplest, is “a set of guidelines that describe how to perform a task.” Within computer science, an algorithm is a sequence of instructions that tells a computer what to do (Brogan 2016). It is important not to equate algorithms with artificial intelligence. AI works through algorithms; however, not all algorithms involve artificial intelligence.

The Risks of AI in Algorithmic Decision-Making

Algorithmic decision-making has been used for decades. Before the advent of AI, algorithms were deterministic—pre-programmed and unchanging. Because of their basis in statistical modeling, these algorithms can suffer from the same problems as traditional statistical models, such as sampling bias, unrepresentative data, biased data, or measurement error. Fortunately, because they are pre-programmed, it is possible to examine how these algorithms arrived at a given recommendation. One of the earliest forms of algorithmic decision-making, which is still in use in the United States, is federal sentencing guidelines. This involves nothing more than a weighted statistical formula, which recommends sentence length based on the attributes of the crime.[2] Today, algorithmic decision-making is largely digital. In many cases it employs statistical methods similar to the sentencing algorithm.

Machine learning algorithms also use statistical formulations and are therefore susceptible to these same issues. However, ML systems differ in a few key ways. First, whereas traditional statistical modeling involves a simple equation, machine learning captures a multitude of patterns beyond the boundaries of linearity. Second, unlike deterministic algorithms, machine learning algorithms calibrate themselves. Because they identify so many patterns, they are often too complex for humans to understand, making it difficult or impossible to trace the decisions or recommendations they make (Srivastava 2015). In addition, many machine learning algorithms re-calibrate themselves through feedback, constantly changing the ways in which they arrive at outputs. One example is email spam filters, which continually learn and improve their spam detection capabilities as users mark messages as spam.
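The spam-filter example can be made concrete with a toy sketch. The class below is an invented, minimal stand-in (real filters are far more sophisticated): each piece of user feedback shifts per-word weights, so the rule the system applies tomorrow is not the rule it applied yesterday.

```python
# Minimal sketch of a system that re-calibrates itself through feedback,
# in the spirit of a spam filter. Every time a user marks a message, the
# word weights shift, changing how the system scores future messages.

from collections import defaultdict

class OnlineFilter:
    def __init__(self):
        self.weights = defaultdict(float)  # per-word evidence of spam

    def score(self, message):
        """Positive scores suggest spam; negative suggest not-spam."""
        return sum(self.weights[w] for w in message.split())

    def feedback(self, message, is_spam, lr=1.0):
        """User feedback nudges every word's weight up or down."""
        delta = lr if is_spam else -lr
        for w in message.split():
            self.weights[w] += delta

f = OnlineFilter()
f.feedback("win free money", is_spam=True)
f.feedback("project meeting notes", is_spam=False)
print(f.score("free money now") > 0)  # flagged after feedback
print(f.score("meeting notes") > 0)   # not flagged
```

The key point for accountability is that the decision rule is a moving target: auditing the system's weights today says little about the outputs it produced last month.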

The proliferation of AI in data analytics has come with the rise of big data. Machine learning in algorithmic decision-making is ubiquitous in the West, from assigning credit scores, to identifying the best candidates for a job position, to ranking students for college admissions. Traditionally, statistical analysis and prediction were overseen by academic statisticians who were trained to spot issues in sampling methods and bias in the data. The monetization of data analytics has come with significantly less care and attention to potential problems (O’Neil 2016). Algorithmic decision-making systems are increasingly employing machine learning, and they are spreading rapidly. Because ML systems rely on mathematics and remove biased human decision-making, they are often seen as objective. The decisions they make and the outputs they produce are often not questioned. However, as this paper will show, their outputs can be far from objective. Though they face the same issues as traditional statistical analysis, the scale and reach of AI systems, the trend of rapid, careless deployment, and the immediate impact they have on many people’s lives pose a series of new problems.


Ethics and AI in International Development

In this section, I address the main ethical concerns of AI in international development. I examine how the existing AI and ethics areas of fairness, transparency, and accountability should be considered in the context of international development work. I also consider additional areas relevant to the use of AI in international development that have not yet been addressed by the literature or within broader public discussion on these topics. The main ethical concerns are divided into four categories: 1) fairness and inclusion; 2) transparency, explainability, responsibility, and accountability; 3) data limitations; and 4) privacy and security.

Fairness and Inclusion

One key ethical concern related to AI is fairness. Unfair systems have a disparate impact on different groups of people, and are especially concerning when results disproportionately impact and reinforce existing patterns of group marginalization. These unfair systems are often the result of bias. AI can be biased both at the system level and the data level, resulting in biased outputs. These biased outputs can then create a harmful feedback loop, in which the system produces increasingly biased results over time.

Bias at the system level refers to developers creating an AI system that is biased, intentionally or unintentionally, by building their own personal biases into the parameters they consider or the labels they define. This problem often occurs when developers allow systems to equate correlation with causation (Executive Office of the President of the United States 2016, 8-10). For example, imagine that a microfinance NGO that provides loans to low-income populations in Brazil is developing a credit scoring algorithm that looks at non-financial data sources because its target population tends to lack credit history. If the system includes the credit scores of Facebook friends as a parameter, it is more likely to punish those with low incomes simply because of the credit scores of their friends. System-level bias also occurs when developers include parameters that are proxies for known bias (O’Neil 2016, 155-160). For example, although that Brazilian credit scoring algorithm may seek to avoid racial bias by not including race as a parameter, it will still have racially biased results if it includes common proxies for race in Brazil, such as income, education level, or postal code.
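Proxy bias is easy to demonstrate with invented numbers. In the sketch below (all groups, districts, and weights are hypothetical, not drawn from any real credit model), group membership is deliberately excluded from the model's inputs, yet because district correlates with group in the data, average scores still diverge by group.

```python
# Hypothetical illustration of a proxy variable. The model never sees the
# protected attribute, but "district" correlates with it in this synthetic
# data, so scores still differ sharply by group. All numbers are invented.

applicants = [
    # (group, district, repaid_loans)
    ("A", "north", 5), ("A", "north", 4), ("A", "south", 5),
    ("B", "south", 5), ("B", "south", 4), ("B", "south", 5),
]

# District-level "risk" weight learned from historical lending that
# favored the north, so the proxy carries the old bias forward.
district_bonus = {"north": 30, "south": 0}

def credit_score(district, repaid_loans):
    # Group membership is deliberately excluded as an input...
    return 500 + 10 * repaid_loans + district_bonus[district]

def mean_score(group):
    rows = [r for r in applicants if r[0] == group]
    return sum(credit_score(d, n) for _, d, n in rows) / len(rows)

# ...yet group A still averages a higher score than group B,
# purely because of where its members live.
print(mean_score("A") - mean_score("B"))  # → 20.0 with this toy data
```

Dropping the protected attribute from the inputs is therefore not enough; developers must also examine which remaining parameters encode it indirectly.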

Bias can also occur at the input level, with the data itself. Because ML systems use an existing body of data to identify patterns, any bias in that data is naturally reproduced in the outputs. For example, a system used to recommend job candidates in India that uses the data of current and former successful employees to train the model is likely to recommend Hindu men while disfavoring women and ethnic and religious minorities.

Another issue is selection bias, which occurs when the input data is unrepresentative of the target population. This results in conclusions that could favor certain groups over others. For example, if a GPS mapping system used only input data from smartphone users to estimate travel times and distances, it might be more accurate in wealthier areas of a city with a higher concentration of smartphone users and less accurate in poorer areas or informal settlements, where smartphone penetration is low.
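The GPS example can be simulated in a few lines. The numbers below are entirely synthetic (an invented 70/30 population split and invented smartphone-ownership rates), but they show the mechanism: when only smartphone owners generate data, the estimated citywide travel time drifts toward the conditions of wealthier areas.

```python
# Selection-bias sketch: estimating average travel time from only the
# subpopulation that generates data (smartphone users) skews the estimate
# toward the areas where those users live. All numbers are synthetic.

import random
random.seed(0)

# True population: 70% live in informal settlements with slower trips
# (~55 min), 30% in wealthy areas with faster trips (~25 min).
population = (
    [("informal", random.gauss(55, 5)) for _ in range(700)] +
    [("wealthy", random.gauss(25, 5)) for _ in range(300)]
)

true_mean = sum(t for _, t in population) / len(population)

# But only smartphone owners emit GPS data: assume 10% ownership in
# informal areas versus 80% in wealthy areas.
sampled = [t for area, t in population
           if random.random() < (0.8 if area == "wealthy" else 0.1)]
biased_mean = sum(sampled) / len(sampled)

# The sampled estimate badly understates citywide travel times.
print(round(true_mean), round(biased_mean))
```

A system trained on the sampled data would "see" a much faster city than the one most residents actually live in, and its recommendations would be tuned accordingly.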

An additional problem at the input level is poorly selected data, when it is determined that some data is important but not others that are also relevant. In the GPS mapping example, this could involve deciding to include information related to cars moving on roads, but not public transportation schedules or bike paths. This would result in a system that favors car use and discourages public transportation and biking.

A final example is incomplete, incorrect, or outdated data. If there is insufficient data, or data that is out of date, the results will naturally be inaccurate. Additionally, if the model is not continually updated with new data, it will naturally become less accurate over time because it is based on inputs that no longer reflect reality (Executive Office of the President of the United States 2016, 7-8). This is of particular concern in international development, given the lack of data and poor data collection and management practices in many developing countries.

As AI fever sweeps the globe, there is justifiable concern in the developing world that foreign-developed AI will serve only to exacerbate already high levels of inequality and social division. Advocates in developing countries see the economic power of AI in the West and remain concerned about the ethics of systems designed in contexts so vastly different from their homelands.

Brazilian social justice organization Desabafo Social has highlighted potential concerns in a video campaign that showed bias in stock photo search algorithms. One video showed a search on Getty Images for the term “baby.” The results presented only photos of white babies. In a country like Brazil, which has a white minority, we must ask: what are the implications of these searches? (Desabafo Social 2017). The majority of the world’s population lives in the developing world, and they should not be merely passive consumers of artificial intelligence. Unfortunately, biased data and biased parameters are the rule rather than the exception. Although researchers have begun examining how best to address bias, or whether it is possible to teach machines to learn without bias, there have yet to be concrete results. AI developers must be aware of all the ways in which bias can enter their systems and skew results, and they should actively involve potential non-Western users in the development process (Talbot et al. 2017).

Transparency, Explainability, Responsibility, and Accountability

One of the major challenges of AI-powered systems is accountability. When these systems make recommendations or decisions that affect people’s lives, it is not always clear who is ultimately accountable. There are a number of issues at play here. One is the inherent lack of transparency and explainability of AI systems themselves. Although there is a new area of research focused on creating systems that can explain themselves, researchers have not yet figured out how to make them completely transparent. How can there be accountability when it is impossible to know how the system is making decisions?

Another issue is the perceived objectivity of these systems and overconfidence in the results they produce. This often eliminates direct responsibility because human operators mentally distance themselves from the outputs the system produces. In most cases, the people who use the system did not design it and their understanding of how it works is likely to be limited.

For example, returning to the alternative credit scoring algorithm, employees overseeing the loan approval process may have little insight into the parameters the system uses to evaluate credit risk. However, because the system is based on statistics, they see it as objective. To avoid the mental toll of turning down loan applicants, the employees may attempt to absolve themselves of responsibility by relying ever more heavily on the algorithmic credit risk assessment. In this case, who is ultimately responsible for the loan approval decision? The microfinance organization that is implementing the system, or the developer who chose its problematic parameters?

Furthermore, what happens if the system gets it wrong? Can the person appeal the decision? All ML systems have error rates. Even if the error rates are close to zero, in a tool with millions of users, thousands could be affected by the errors. Although in many cases ML systems are far more accurate than human beings, there is danger in ignoring errors and assuming that just because a system’s predictions are more accurate than a human’s, it will lead to a better outcome. Additionally, as Cathy O’Neil (2016) points out in Weapons of Math Destruction, in most cases, the people whom these systems are directly affecting rarely even know they are being evaluated by algorithms. How can there be any accountability if people do not even know these systems exist?

A related concern is how to establish responsibility for AI systems in international development initiatives. Typically, systems are built and managed by third-party vendors, usually private companies. This trend is likely to continue because AI talent is in short supply and many ICT4D organizations and teams are not likely to have capacity to develop their own systems in the near future. This can result in many potential layers of influence for a given project: the funder, the foreign implementing organization, the local implementing organization, the implementing country government, and the software vendor. Caught up in all of this, of course, are the beneficiaries for whom the AI system is designed. And when there is an inherent lack of transparency in a system because it uses machine learning, it is difficult to achieve the kind of transparency and accountability expected in well-run international development initiatives.

Given this context, who is ultimately responsible for the ethical use of an AI system? How can this responsibility be managed given the number of players who may touch the system? These issues are particularly important when thinking about the eventual transfer of a system from the vendor who has developed it to the implementer who will be using it. The implementer could be an international organization, a local organization, or a local government actor. Implementers must be properly trained not just in the use of the system, but also its potential implications.

Data Limitations

As has already been mentioned, developing countries tend to lack comprehensive data. One of the major challenges to developing AI systems for use in developing countries is that currently good AI requires an incredible amount of data. Although researchers are experimenting with approaches that require much less data (Science Daily 2018), they are not yet widely proven.

Many sectors that could most benefit from AI solutions, such as healthcare and education, do not yet have enough data. Local data that does exist and can be made available to developers is frequently owned by governments or companies and is not open to the public. It also may be incomplete or generated for political ends. Aside from obvious issues with bias, this hampers the ability of local actors to be involved in the development of AI and limits accountability (World Wide Web Foundation 2017). These data issues also mean that in some cases, developers would need to use foreign data sources in order to have sufficient data to build an accurate model. This, of course, carries risks related to all the various types of bias mentioned previously.

When comprehensive data is readily available, it tends to come from wealthier parts of the population who have regular internet access and smartphones. This naturally excludes the poor, the elderly, rural areas, as well as other traditionally marginalized groups who may have less access to technology. In its 2018 Gender Gap study, the GSMA found that women in low- and middle-income countries were 10% less likely than men to own a mobile phone, equating to 184 million fewer women than men owning mobile phones in these markets (GSMA 2018). Given that mobile phone data is one of the few widely available data sources in developing countries, an AI system that uses this data as an input is producing outputs based disproportionately on the habits of men.

Privacy and Security

The final concern has to do with privacy and security. How should data be handled? Who is responsible for keeping it safe? What is the best way to protect the privacy of those whose data is being collected? Are there different considerations for different cultural contexts? While privacy and security are important for all ICT4D projects, there are additional risks to consider with AI systems. The first stems from the multiple layers of responsibility and the transfer of responsibility described previously. How can privacy and security be maintained as the system is transferred from vendor to implementer? Given the high levels of corruption and weak data security laws throughout the developing world, this is particularly important.

There are additional risks when implementing an AI system in a country with an authoritarian government or with authoritarian-leaning institutions such as the police, military, or intelligence services. A significant percentage of developing countries have these characteristics, and the risk of function creep in such contexts is strong. AI systems are powerful, and even a seemingly innocuous system like satellite imagery for crop monitoring could be used to conduct surveillance on a massive scale. Additionally, by finding patterns in data and parsing through the noise, an AI system could allow governments to more easily identify and categorize people as belonging to a particular group. This information could be used to deny services to certain groups or target them for more nefarious aims (World Wide Web Foundation 2017). In some cases, AI systems are already being used explicitly for this purpose. China has been exporting its AI surveillance technology to security forces in African countries with a history of repressing political opponents and ethnic and religious minorities (Gwagwa and Garbe 2018). In such a context, it is vital that AI for development initiatives have strong privacy and security measures to prevent abuse of their systems.


How should we approach AI in International Development?

How can AI be used ethically in international development? How can international development implementers address the risks related to fairness and inclusion; accountability, transparency, explainability, and responsibility; data limitations; and privacy and security? How can funders identify responsibly developed AI projects and monitor their implementation? AI in international development is fundamentally an ICT4D initiative, and the Principles for Digital Development can help address many of the potential ethical concerns of AI in development work. The Principles are a set of living guidelines meant to help ICT4D practitioners successfully use digital solutions to solve development challenges. They have become the norm within the ICT4D community, and the guidance they provide is a helpful starting point for addressing the potential issues with using AI in international development.

Principles for Digital Development

1. Design with the user: “Successful digital initiatives are rooted in an understanding of user characteristics, needs and challenges.”
2. Understand the ecosystem: “Consider the particular str…”

Below, I build on the best practices outlined by the Principles for Digital Development and propose recommendations for addressing AI-related gaps.

Because the specific risks of a given AI system are both sector and context specific, there is no one-size-fits-all solution. Nevertheless, the following principles provide a good starting point for the responsible use of AI in international development for both funders and implementers alike.


Principles for Responsible AI in International Development

1. Consider whether an AI solution is appropriate.

It is tempting to apply a promising new technology to every problem. However, given the potential negative impacts, it is important to consider whether an AI solution is appropriate in the first place. This could include the following steps: 1) Determine whether an AI intervention is applicable; 2) Ensure the intervention is feasible; 3) Assess whether the system could produce biased outcomes and identify the potential consequences of those outcomes; 4) Consider any unintended consequences; 5) Conduct a cost-benefit analysis; 6) Conduct a risk assessment.

Implementers should first consider whether an AI intervention is applicable by determining if the problem can be solved with simpler technology or even no technology at all. Given the unique transparency and accountability challenges of AI, the simplest solution is likely preferable. Even if AI is an appropriate tool to solve a problem, it is important to consider whether it is feasible. First, think about the data context. Is there enough relevant, quality data? If not, can it be collected? If not, it is simply not possible to build an accurate AI model. Also consider the financial cost and time commitment. Remember that AI tools require an iterative development process, as well as continual retesting and updating of both the model and the data throughout the lifecycle of the tool. This can be both costly and time consuming, but is necessary to avoid quickly ending up with a tool that is out of date and inaccurate.

Also consider issues of fairness. Are there aspects of the existing data that could lead to biased outcomes? If so, what could be the consequences of these biased outcomes? And can these consequences be sufficiently mitigated? No international development implementer should roll out a system that might entrench existing patterns of discrimination and marginalization. Also think about what else could go wrong. Could there be any other unintended consequences? Is the context suitable culturally, politically, and security-wise? What is the risk of misuse?

Conduct a cost-benefit analysis to identify the pros and cons of using AI for a particular project, followed by a detailed risk assessment to identify the security and privacy risks and assess whether they can be sufficiently mitigated. Do not make the decision to move forward with an AI project quickly or lightly.

2. Involve stakeholders throughout the development process.

When AI systems are used to make predictions and decisions that directly impact human lives, those people must be involved in the development process. This requires going above and beyond a typical user-centered design process. Both users of the tool and members of the target community should be involved at every step of the process by providing input and voicing concerns. Regular interaction with the target community is key to a well-informed contextual analysis and can help flag potential issues early. It also helps build understanding of the tool and trust within the community, which is key to success.

When possible, include local tech talent in the development of an AI system. Not only will this help include more developing-country voices in the field of AI, it will also make for a better product. Locals who understand the context and culture, as well as the technical side, can help develop a more relevant system and flag potential concerns that would otherwise be missed. In some cases, however, this talent may not exist or may be underdeveloped. International development practitioners should do their part to help develop this talent to the extent possible. This includes teaching local AI developers responsible practices, including how to think about the ethical issues highlighted previously, and working closely with them throughout the development process.[3]

3. Develop rigorous standards for openness and accountability in AI projects.

Using open data and software standards is an important ICT4D norm that should also apply to AI projects. These standards encourage innovation and collaboration and are key for allowing technical experts to audit a system for bias or other issues.

Practitioners should also strive to publish non-sensitive training data after addressing privacy concerns.[4] Given the lack of available high-quality data in much of the developing world, international development organizations have a duty to share whatever data they safely can so that others can use it.
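"Addressing privacy concerns" before publication can be made operational with checks such as k-anonymity, which asks whether any combination of quasi-identifying fields is shared by too few people to be safely released. The sketch below is a minimal illustration; the field names, records, and the choice of k=5 are assumptions, and real de-identification requires far more than this single test.

```python
from collections import Counter

# Illustrative pre-publication check: verify k-anonymity on quasi-identifiers
# (fields that could re-identify someone in combination). The field names
# and k=5 are hypothetical choices for the sketch.
def violates_k_anonymity(records, quasi_identifiers, k=5):
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    # Any combination shared by fewer than k people risks re-identification
    return [combo for combo, n in combos.items() if n < k]

rows = [{"district": "north", "age_band": "20-29", "yield": 1.2}] * 6 \
     + [{"district": "south", "age_band": "60-69", "yield": 0.8}]  # one unique person
print(violates_k_anonymity(rows, ["district", "age_band"]))
```

Records flagged by a check like this would need to be generalized (e.g., coarser age bands) or withheld before the dataset is shared.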

However, transparency requires more than publishing the code and releasing documentation that only engineers can decipher. Develop standards for transparency that make sense for the context. The goal should be to ensure all relevant stakeholders understand the tool: what it does, what data it processes, and generally how it works.

Although total explainability of ML-based systems is not currently possible, developers can still provide valuable information about how a system works. Publish easy-to-understand explainers in the local language. Hold community meetings to explain the tool and allow community members to ask questions and provide feedback. Take care to consider literacy levels and the broader information ecosystem. An effective public educational process utilizes the existing ways in which a community receives and shares information, whether that be print, radio, word of mouth, or other channels.

Build accountability into the project and figure out how to institutionalize it long-term. This means establishing responsibility among project stakeholders, a process for ensuring the tool is used ethically, and a mechanism for stakeholders to voice concerns. This could include an independent ethical review of how the tool is being implemented, a publicly available validation study that details the impact and performance of the tool (Vosloo, March 2018), and a redress mechanism for stakeholders to file a complaint or appeal a decision.

4. Build in privacy and security by design.

Use the risk assessment to develop appropriate privacy and security approaches for the context. Make sure to think about the system as a whole, and not just the data. In addition to political and security factors, also consider the legal context. Are there any data protection laws that need to be followed? Or are there any laws that prohibit the use of encryption or would enable the government or other actors to access sensitive data? Take special care with data by defining ownership and access before collecting or analyzing data. Consider the potential of the tool to be misused or abused and the potential consequences of unauthorized access by different actors. Even seemingly innocuous data points can identify highly personal information about people’s lives when analyzed by an AI system.

Keep the best interests of users and of the people whose data the tool is using at the forefront of plans to uphold privacy and protect personal data. This is particularly important when working with marginalized communities, who may not have had a say in how their data has been collected, used, or shared. Obtain informed consent prior to data collection, ensuring individuals know why their data is being collected, how it will be used and shared, how they can access or correct the data collected, and that they can refuse to participate. Consent forms and informational materials should be written in the local language and be easily understandable.[5]

5. Clearly establish roles and create a protocol for transfer of responsibility.

As part of privacy and security planning, the risk assessment can be used to establish roles, define which stakeholders are responsible for AI project components, and delineate access to the tool and the data. Create a detailed protocol for the transfer of responsibility of the AI tool from the developers to the implementer that ensures security and privacy controls are maintained. This should include an extensive and context-appropriate training process that involves not only educating the implementer about how to use the AI tool, but also discussing the potential ethical issues that could arise and the exact use for which the tool is intended. Consider how the iterative development process and the need to continually test and update the model will affect stakeholder roles over time. Think ahead to prevent function creep and deter misuse of the tool. Develop contingency plans to address worst-case scenarios, and do not fully transfer responsibility to an implementer without being confident they will use the tool appropriately.



Artificial intelligence is on the verge of rapid growth in international development, and it is time for the international development community to direct resources toward driving its ethical implementation. When AI is used to make predictions and recommendations that impact people's lives, we should be concerned about justice, fairness, and accountability. In the context of international development in particular, we also need to consider the various stakeholders and their overlapping layers of influence. We must consider where responsibility lies and how to establish transparency and accountability. There are also issues related to the scarcity of data in developing countries and the ways in which the data that is available excludes poor and marginalized groups.

Given the economic concentration of AI in the developed world, we must also be concerned about global inclusion and ensuring target communities are not just passive consumers of AI but rather active participants in the development process.

As a step toward addressing these issues, this paper introduced a series of principles to guide the responsible use of AI in international development. Ideally, these principles would be incorporated into the broader Principles for Digital Development so that they become standard best practice in the ICT4D field. Updating the Principles will require a broader discussion among the ICT4D community. Moving forward, members of the international development community and the broader AI and ethics field should convene. Cross-disciplinary collaboration is critical not just to establish what ethical AI means in non-Western contexts, but also to further define the role of artificial intelligence in international development.

About the Author

Lindsey Andersen is a 2019 Master in Public Affairs graduate of the Woodrow Wilson School at Princeton University. She can be reached at [email protected].

[3] See “Build for Sustainability | Principles for Digital Development,” accessed May 14, 2018,

[4] See “Use Open Standards, Open Data, Open Source, and Open Innovation | Principles for Digital Development,” accessed May 14, 2018,

[5] See “Address Privacy & Security | Principles for Digital Development,” accessed May 14, 2018,


Brogan, Jacob. "What's the Deal With Algorithms?" Slate, February 2, 2016.

COMEST. "Report of COMEST on Robotics Ethics, 2017." World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), UNESCO, 17.

Committee on Technology, National Science and Technology Council. "Preparing for the Future of Artificial Intelligence." Executive Office of the President of the United States, October 2016, 5.

Desabafo Social. "Let's Talk about Your Search Algorithm, Getty Images?" YouTube, March 28, 2017.

Executive Office of the President of the United States. "Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights." May 2016, 7–8.

GSMA. "Connected Women: The Mobile Gender Gap Report 2018." February 2018.

Gwagwa, Arthur, and Lisa Garbe. "Exporting Repression? China's Artificial Intelligence Push into Africa." Council on Foreign Relations, December 17, 2018.

"Minimalist Machine Learning Algorithms Analyze Images from Very Little Data." Science Daily, February 21, 2018.

O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Books, 2016, 155–160.

Srivastava, Tavish. "Difference Between Machine Learning & Statistical Modeling." Analytics Vidhya, July 1, 2015.

Talbot, David, et al. "Charting a Roadmap to Ensure Artificial Intelligence (AI) Benefits All." November 30, 2017.

Vosloo, Steve. "The Rise of Artificial Intelligence for International Development." ICTworks, January 5, 2018.

Vosloo, Steve. "How Algorithmic Accountability Is Possible in International Development." ICTworks (blog), March 17, 2018.

World Wide Web Foundation. "Artificial Intelligence: The Road Ahead in Low and Middle-Income Countries." June 2017.