March 27, 2023
Reading time: 53 minutes
Preface
Please note: this is a reproduction of an article from the book Missing links in AI governance, published by UNESCO and Mila – Quebec Artificial Intelligence Institute. The article is available under the Creative Commons CC BY-SA 3.0 IGO license, which permits others to distribute, remix, adapt and build upon the work, even commercially, as long as appropriate credit is given for the original creation. By using the content of this publication, users agree to be bound by the terms of use of the UNESCO Open Access Repository, with the exception of the Re-use/Adaptation/Translation section, where the following clause prevails: For any derivative work, please include the following disclaimer: “The present work is not an official UNESCO or Mila publication and shall not be considered as such.”
[Use of the UNESCO or Mila logo on derivative works is not permitted. The creator of the derivative work is solely liable in connection with any legal action or proceedings, and will indemnify UNESCO and Mila and hold them harmless against all injury, loss or damages occasioned to UNESCO or Mila in consequence thereof. The designations employed and the presentation of material throughout this publication do not imply the expression of any opinion whatsoever on the part of UNESCO and Mila concerning the status, name, or sovereignty over any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. The ideas and opinions expressed in this publication are those of the authors; they are not necessarily those of UNESCO or Mila, its Board of Directors, or their respective member countries.]
Citation: Ubalijoro, E., Poisson, G., Curran, N., Baek, K., Sabet-Kassouf, N. and Teng, M. (2023) Inclusive innovation in artificial intelligence: from fragmentation to wholeness. In B. Prud’homme, C. Régis, G. Farnadi, V. Dreier, S. Rubel and C. d’Oultremont (eds.), Missing links in AI governance. Paris, France and Montréal, Canada: UNESCO and Mila – Quebec Artificial Intelligence Institute, pp. 247-268. https://unesdoc.unesco.org/ark:/48223/pf0000384787
Authors
Éliane Ubalijoro
Executive Director for Sustainability in the Digital Age and Canada Hub Director for Future Earth, Montreal, Canada.
Guylaine Poisson
Associate Professor, Information and Computer Sciences Department, University of Hawaii at Manoa, Honolulu, Hawaii, USA.
Nahla Curran
Undergraduate student, Department of Economics, Philosophy and Political Science, University of British Columbia, Okanagan, Kelowna, Canada.
Kyungim Baek
Associate Professor, Information and Computer Sciences Department, University of Hawaii at Manoa, Honolulu, Hawaii, USA.
Nilufar Sabet-Kassouf
Strategic Programs Manager for Sustainability in the Digital Age and Future Earth, Montreal, Canada.
Mélisande Teng
PhD student, Mila, University of Montreal, and LEADS Intern with Future Earth, Montreal, Canada.
Publisher
UNESCO, Mila – Quebec Artificial Intelligence Institute
Abstract
Artificial intelligence is shaping the future of humanity. But what happens when only a fraction of society is at the table where that future is being defined? Trends in innovation are centered on profit and growth, with inclusivity loosely at their edges. The result is a fragmented digital age that increases biases, disparities and inequalities, trading away global well-being on the societal side and AI performance and reliability on the technological side. In this chapter, we explore the role of the digital divide, the lack of diversity and representation in AI and STEM, and the influence of innovation on funding and research incentives within academia, government and industry when it comes to defining AI policy and stakeholder engagement. We consider the impacts of siloed efforts to increase diversity and inclusion, and we examine how they fall short of moving the needle towards systemic change. We argue for shifting the understanding of innovation to one of inclusive innovation and offer examples of how we might begin to drive this shift. Placing inclusivity at the heart of the future we are shaping through digital technology will allow us to move from a fragmented digital age to one of wholeness that benefits all.
This article relates to the following Sustainable Development Goals: SDG5 – Gender Equality; SDG9 – Industry, Innovation and Infrastructure; SDG10 – Reduced Inequalities; SDG11 – Sustainable Cities and Communities; SDG16 – Peace, Justice and Strong Institutions and SDG17 – Partnerships for the Goals.
Introduction
Artificial intelligence (AI) is a driver of innovation in a wide variety of sectors and industries that have different needs and different problems to solve. As such, it should not be developed in the silos of the tech world or any single discipline. The design, development, deployment and assessment of technologies such as AI are complex and require interdisciplinarity. But what is innovation? The answer to this question is important for understanding what is currently driving, or steering, the development of new technologies.
Innovations through AI-based technologies and applications are rapidly changing many aspects of our lives, and not always for the good of humanity. In this chapter, we argue that the unintended consequences of unfit AI (including increased biases, disparities and inequalities) can be addressed by shifting our understanding of innovation to one of inclusive innovation. “Inclusive innovation” refers to “the means by which new goods and services are developed for and/or by those who have been excluded from the development mainstream; particularly the billions living on lowest incomes,” ultimately broadening and diversifying the scope of stakeholders (Heeks et al., 2013, p. 1). This shift in thinking should be applied at all stages of AI-based technology development and deployment, as well as in policymaking and system improvement. Innovations in AI should be intrinsically inclusive and interdisciplinary. According to Dr. Katia Walsh, “artificial intelligence is the result of human intelligence, enabled by its vast talents and also susceptible to its limitations. Therefore, it is imperative that all teams that work in technology and AI are as diverse as possible” (as cited in Larsen, 2021). The degree to which AI can truly benefit the entire planet and beyond is tied to how much it accounts for this diversity.
In this chapter, we explore some of the major barriers to truly inclusive innovation and the measures needed to ensure no one is left behind. These barriers include the divides and inherent biases already present in AI, how and why AI projects are currently funded, and the underlying investment incentives driving the direction of AI. The impact of technology that is not inclusive enough, together with the lack of effort to address the root causes, is largely underestimated and stands as a barrier to developing AI that benefits all. AI technologies are shaping the future of humanity, and thoughtful reflection on this issue is needed if we are to progress responsibly.
The quest for innovation
Since Alan Turing (1950) discussed how to build and test intelligent machines and the term “artificial intelligence” was coined in 1956 (McCarthy et al., 1955), there have been successes and setbacks throughout the seventy years of modern AI history. The surge of excitement around AI in the past decade has been driven by access to large amounts of data, cheaper and faster computers, and the development of machine learning techniques, in particular deep learning. Nowadays, AI has permeated many aspects of our lives, from social media newsfeeds and online shopping to drug discovery (Fleming, 2018; Jiménez-Luna et al., 2021; Lada et al., 2021) and the fight against epidemics (Cho, 2020; Zeng et al., 2021). AI is one of the major forces revolutionizing human society, bringing in its wake a new era: the digital age.
Sadly, these advances have created a new type of global divide between the tech-rich and the tech-poor. The fast-paced advances of AI have deepened and widened the digital divide and amplified existing biases related to, for example, academia, gender, race and the gap between rich and poor countries or populations (Carter et al., 2020). The current biases and divides in AI mirror some of the biases and divides that have plagued our societies for centuries. Just as advances in technology throughout history have often amplified colonialism, there is a real danger of moving towards new forms of colonialism reinforced by digital technologies and the current drivers of innovation in this field. Digital colonialism occurs when “large scale tech companies extract, analyze, and own user data for profit and market influence with nominal benefit to the data source” (Coleman, 2019, p. 417). It is the “exercise of imperial control at the architecture level of the digital ecosystem: software, hardware and network connectivity, which then gives rise to related forms of domination” (Kwet, 2019, p. 1). Consider, for example, the division of labor: the invisible workers of AI, such as data annotators, often come from less privileged communities and endure isolation and difficult working conditions (Gray and Suri, 2019). Consider, too, the biases in the data used to train AI systems: as we design, develop and gather the data fed to machines so they can learn, we inevitably transfer our biases to AI. Addressing the neglected issues in AI development and policies will not only serve to improve the technology itself but could be instrumental in addressing both current and future systemic biases and divides (May, 2020).
It is clear to us that AI-based technologies present two possible avenues: either we inadvertently perpetuate new forms of colonialism in the digital age (Voskoboynik, 2018), or humanity moves forward, in an inclusive manner, to pursue the common goal of resolving present global challenges and to drive impactful and beneficial innovations together. How can we make sure the right path is followed?
Reimagining the key stakeholders of AI
A key to success for researchers in the academic world is getting funding for their research and publishing in prominent journals and conference proceedings. To achieve this, many early-career researchers are advised to prioritize innovative research in their work. But what is innovation? When evaluating research proposals, funding agencies describe innovation as “creative, original, and transformative concepts and activities” or “unique and innovative methods, approaches, concepts, or advanced technologies” (National Science Foundation (NSF) and National Aeronautics and Space Administration (NASA), as cited in Falk-Krzesinski and Tobin, 2015, p. 15). With limited funding opportunities, a project often needs to show significant advances in the field to be rated as innovative. In practice, this means striving for the newest and fastest methods: technologies, algorithms and systems that demand abundant resources such as high computing power, large storage, and fast, reliable internet or cellular access (Thompson et al., 2020, p. 2).
This, however, narrows the scope of AI and its capacity to meet global needs; currently, its use is largely limited to what drives profit in the Global North. In addition to the inherent issues this poses for a just and equitable society, it also presents technical challenges in developing “trustworthy and verifiable AI” (Dengel et al., 2021, p. 91) that is adaptable to limited-resource settings. As stated by Dengel et al. (2021, p. 93),
current research evaluation methods and academic criteria tend to favor vertical, short-term, narrow, highly focused, community- and discipline-dependent research. It is the responsibility of all scientists in the academic world to foster a methodological shift that facilitates (or at least does not penalize) long-term, horizontal, interdisciplinary, and very ambitious research.
This is also true for industry. As stated in the United Nations Conference on Trade and Development’s Technology and Innovation Report, “as with any new technology, many companies, when they innovate and develop new goods and services, they tend to focus on higher-income consumers that can bear the higher initial prices of these products” (UNCTAD, 2021, p. 125). Unfortunately, the suitability of those new technologies for developing countries is often overlooked (’Utoikamanu, 2018).
To achieve the needed shifts in research and development, we need to include all stakeholders impacted by innovations in AI, not just those currently benefiting from them. In this way, we can broaden the perception of innovation to include adaptability and new applications of existing techniques.
In order to widen the capacities of AI and shift to more inclusive innovations, we first need to grasp the inherent biases that both drive and are perpetuated by the current standards for innovation.
The digital divide and inclusive innovation
The digital divide is defined by the Organization for Economic Co-operation and Development (2001, p. 5) as “the gap between individuals, households, businesses and geographic areas at different socio-economic levels with regard to both their opportunities to access information and communication technologies (ICTs) and to their use of the Internet for a wide variety of activities.”
This divide is most pronounced between the Global North and the Global South. For example, in 2018, 80 percent of the population in Europe was using the internet, compared to only 25 percent of the population in Sub-Saharan Africa (UNCTAD, 2021, p. 78). Resources, both financial and technological, are predominantly concentrated in and directed to the Global North, often excluding stakeholders in the Global South from the global scientific research and innovation scene (Chan et al., 2021; Garcia, 2021; Mishra, 2021; Reidpath and Allotey, 2019; Skupien and Rüffin, 2020). However, significant divides within countries are also a major factor. The impacts of these divides were brought to light by the COVID-19 pandemic as the world shifted to life online, with work, shopping, healthcare services and education all requiring a computer and an internet connection (United Nations, 2020). While the divide is a general issue and the pandemic example a recent one, within AI research and education the lack of resources caused by the tech divide is an especially pressing problem.
The technological divide contributes significantly to the lack of diversity in AI innovation; put another way, it causes some great innovations to be overlooked or undervalued. Because the field and its funding depend on up-to-date technology, the divide favors students and researchers from more privileged socioeconomic backgrounds (American University, 2020). Those who cannot afford computers, or who have slow or no internet access, are excluded, and that impoverishes the field as a whole. When the vast majority of researchers in AI-based technologies come from similar backgrounds, the rest of the world is squeezed out. Technology ends up designed and developed by scientists for one sub-section of their country in one part of the world, and the result is inequality fueled by each new generation of technology.
To shrink this tech divide, we need to reconsider the criteria for evaluating the quality of innovation in AI so that inclusive innovation is considered key. This would foster better AI-based technologies adapted to different communities and would widen the pool of stakeholders. Given that the grand challenges of this century disproportionately affect marginalized people, having them at the forefront of innovation is critical to scaling tech innovation for good in the service of humanity and the planet.
Diversity and representation in AI and STEM
When inclusion is perceived as an act of charity, or as an agreement to lower our ambitions in order to advance the technology, we will always end up with biased technologies. Inclusive innovation needs to be seen for what it is: striving for a more inclusive, balanced technology that benefits all. If we want machines capable of solving complex problems, we need to expose them to a wide variety of data. This means that people with diverse backgrounds, expertise and experiences should be involved in all aspects of the development process of AI and AI-based technologies, from the acquisition of the data used to train AI systems through design, development, deployment, operation, monitoring and maintenance. This level of diversity should also be represented at all levels of AI-related policy- and decision-making. A lack of diversity and a failure to represent all stakeholders allow for omission by ignorance rather than intent, which makes such omissions all the more difficult to address explicitly (Coded Bias, 2020).
Diversity and representation issues are, of course, not inherent to AI. Historically, the fields of science, technology, engineering, and mathematics (STEM) have had a predominantly white, male base (Dancy et al., 2020, p. 1). Marginalization in STEM fields undeniably affects many communities, including Indigenous people, people with disabilities and the LGBTQ+ community (Miller and Downey, 2020; Schneiderwind and Johnson, 2020). Our focus in this chapter is on race, gender and socioeconomic status.
| TABLE 1 |
Percentage of people employed in the US in computer and mathematical occupations (Bureau of Labor Statistics, 2010; 2020).
|  | 2010 | 2020 |
|---|---|---|
| Women | 25.8 | 25.2 |
| Men | 74.2* | 74.8* |
| White | 77.2* | 65.4 |
| Black or African-American | 6.7 | 9.1 |
| Asian | 16.1 | 23.0 |
Gender bias
Table 1 shows an uneven distribution of labor in computer and mathematical occupations. It comes as no surprise that women represent only one-fourth of the field; alarmingly, their representation has even decreased slightly over the last ten years. Although we could attribute these poor numbers to science-driven organizations hiring fewer women than men (Picture a Scientist, 2020), the gender gap in this field begins far earlier. As children, girls navigate the stereotypes attached to STEM by their parents, social norms and teachers, which demotivates many from pursuing interests in the field (Hill, C., 2020). Girls who do have an interest or perform well in math or science may still not go into any STEM field because they believe these occupations are “inappropriate for their gender” (Hill et al., 2010, p. 22). Severe gender bias in the workplace, including the work environment itself, family responsibilities and implicit bias, may later cause women to leave STEM-related careers (Hill et al., 2010, pp. 24–25).
Implicit bias against women can be a significant impediment to success and advancement in a career; it can even be a factor in women’s choice to leave. One example of the consequences of this bias is the tone of recommendation letters for women, where personality traits are often highlighted over technical expertise (Trix and Psenka, 2003, p. 215). This and other consequences of implicit bias reduce the involvement of women in AI-based technology design and their presence as policymakers engaged in science. Furthermore, women considered successful in their field are more derogated and less well-liked than successful men, which contributes to a negative workplace environment and can make it almost impossible for women to move forward. In the private sector, women in STEM leave due to unclear advancement opportunities, feeling isolated, an unsupportive environment and an extreme schedule (Hill et al., 2010, p. 24). When a workplace tries to push you out, with a lack of opportunities for advancement compounded by constant microaggressions, there is no real reason to stay.
In terms of marital status and family responsibilities, there are also clear differences between men and women. In STEM academia, single women are more likely to hold a tenure-track position than their married counterparts. As well, due to the demands of the field and the tradition of women as primary caregivers, women abstain from having children or delay maternity (Hill et al., 2010, p. 26). Furthermore, a study of retention in engineering found that women were more likely to leave due to “time and family-related issues” (Frehill et al., 2008). These gender-based factors all contribute to the small number of women applicants and the low retention of women in STEM.
Racial bias
From 2010 to 2020, there was an upward trend in the number of non-white people employed in computer and mathematical occupations. The factors behind the small percentage of people of color in this field are similar to those outlined for the gender gap. In this section, we focus on the underrepresentation of Black, Asian and Hispanic or Latinx people in STEM. From a young age, implicit bias can make the difference between a student continuing their education or dropping out. One study found that low-income Black students who have at least one Black teacher in third, fourth or fifth grade are 29 percent less likely to drop out of high school (Dodge, 2018). High school, where STEM is usually introduced, is when students can begin pursuing their interests before going to college; but those opportunities are not equal for all. A study by Teach for America found that only “one in four schools [in the US] offers computer science courses” (Dodge, 2018). Typically, schools in upper-class neighborhoods with a predominantly white student body have this exposure to computer science, leaving minority and low-income students behind. Without this early exposure in school, it is difficult for students to cultivate their interest in the subject and to believe that they can pursue a college education in a field perceived as requiring high innate ability (Leslie et al., 2015; Miller, 2017; Riegle-Crumb et al., 2019). They also often have a weaker sense of identity and of belonging to the “typical computer scientist” culture (Metcalf et al., 2018, p. 613). This perpetuates the belief (as with women in STEM) that STEM careers are not appropriate for minorities, despite their interest (Dodge, 2018). We can see the consequences of such biases in the education system (in the United States, for example) in the low percentage of Black, Asian and Hispanic people entering the STEM workforce (Barber et al., 2020; Clark and Hurd, 2020).
The workplace itself can be another battleground, even once the hurdles of the education system are overcome. The racism and racial bias found in STEM significantly affect diversity in the field (McGee and Bentley, 2017; McGee, 2020). In San Francisco, for example, 60 percent of Black people and 42 percent of Asians and Hispanics in STEM have experienced some form of racially motivated discrimination (Dodge, 2018). This discrimination is not always in the form of hate speech. As with women, it comes in the form of wage gaps, microaggressions, not nominating minorities for advancement, not giving minorities important projects, or placing less value on their work (Dodge, 2018). These factors all contribute to a negative workplace that not only harms minorities but also diminishes their interest in the field, often leading them to leave it altogether (Dodge, 2018). Attracting and retaining more minority populations in STEM education is therefore a necessary first step towards ameliorating the biases in AI.
We have highlighted two major gaps that occur early in the education system: implicit biases among educators and the lack of access to computer science courses for minority children. Educators are susceptible to unintentional, implicit bias, and the role they play in whether or not children pursue their interests in STEM cannot be overstated (Bushweller, 2021). It is therefore important that proper bias training be prioritized early in the education system, as gaps in this space amplify those we see later in STEM (Warikoo et al., 2016).
As for the lack of computer science courses, one possible solution is to support non-profit, ideally minority-led, organizations that offer computer science programs. Hiring teachers, donating up-to-date technology and finding suitable spaces are all essential for this to succeed. The advantage of such programs being minority-led is that minority children perform better when taught by someone with a similar background (Rosen, 2018). Supporting community-led extracurricular programs is another way to incentivize the uptake of such courses. Higher demand from communities will, in turn, increase the likelihood that computer science courses are offered in the academic curriculum. This is part of the solution for inclusive innovation in AI and should not be neglected.
Socioeconomic bias
Finally, another significant bias in the field relates to socioeconomic status. A Yale study found that the way an individual pronounces certain words is telling of their social status (Cummings, 2019). While this is not a major issue in and of itself, a person’s perceived socioeconomic status can influence an employer’s decision to hire them. The same study, involving 274 “individuals with hiring experience,” found that, when lacking any information about qualifications, employers judged candidates from a high socioeconomic status as better fits for the job than those from a lower status (Cummings, 2019). Additionally, candidates from a higher social class were offered better pay and more opportunities for bonuses.
This bias applies to the workforce as a whole. However, returning to the question of racial bias in STEM, we see that race and income intersect (as does gender with both). In the United States, many low-income neighborhoods are dominated by minorities, more specifically Black and Latinx people, due to a long history of discrimination that segregated and ghettoized minorities (Firebaugh and Acciai, 2016, p. 13372). The result is poorly funded schools and limited access to jobs. When this is compounded by employers biased towards high-income applicants, a young person’s socioeconomic status can shape their long-term future. There is no requirement for employers to hire a certain percentage of candidates from low-income neighborhoods. But without socioeconomic diversity in STEM, another sizable portion of the population goes unrepresented. In addition, STEM-related technology developed to help these low-income neighborhoods will only be provided through a high-income lens.
Accounting for diversity in STEM is especially important for resolving some of the current major challenges in AI. One of these challenges is “the lack of highly skilled experts in building AI systems” (Dengel et al., 2021, p. 93). As we have discussed, a significant portion of the population is currently excluded from developing and contributing talent and expertise to the AI field as a result of systemic biases (gender, race, socioeconomic status), even within countries on the tech-rich side of the digital divide. Another major challenge is the effectiveness of AI systems, given the insufficient representativeness of the data fed into them (Kuhlman et al., 2020).
Humans feed their limited experiences and prejudices to a blank-slate algorithm and little by little it learns to reproduce this behavior. In the end, we have technology that is unreliable through no fault of its own. It simply did what it was supposed to: learn and replicate.
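To make this “learn and replicate” dynamic concrete, consider the short Python sketch below. It is our illustration, not drawn from the chapter’s sources: the group labels, the 0.8 penalty term and the use of scikit-learn are all assumptions made purely for demonstration.

```python
# A minimal sketch: a classifier trained on biased historical decisions
# reproduces the bias on new, equally skilled candidates. All data are
# synthetic; nothing here comes from the studies cited in this chapter.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)      # true ability, identical across groups
group = rng.integers(0, 2, n)    # 0 = majority, 1 = minority (illustrative)

# Historical decisions tracked skill but penalized group 1.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score new candidates drawn from the same skill distribution.
new_skill = rng.normal(0, 1, 2000)
for g in (0, 1):
    X_new = np.column_stack([new_skill, np.full(2000, g)])
    print(f"group {g}: predicted hire rate {model.predict(X_new).mean():.2f}")
# Group 0 is recommended roughly twice as often as group 1, despite
# identical skill: the model faithfully learned the prejudice in its data.
```

The model is not malfunctioning; it is accurately modeling a biased process, and that is precisely the problem.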
Algorithmic bias applied
The lack of resources and the technological divide that drive the lack of diversity in research come to a head with the issue of learned bias in AI-based technologies. These algorithmic biases manifest themselves in applications as diverse as facial recognition technologies and hiring tools. In the documentary Coded Bias, Joy Buolamwini discovers that the AI in her Aspire Mirror project – a “device that enables you to look at yourself and see a reflection on your face based on what inspires you or what you hope to empathize with” built on face detection software51 – does not recognize her face as a Black woman (Coded Bias, 2020). She resorts to wearing a plain white mask for her face to be seen. While this may seem to be a mere software error or bug, such technology has already begun to be applied to real-life uses, and Buolamwini’s experience is replicated a thousandfold.
One of the most common uses of AI-based technologies is in surveillance and security, typically facial recognition. Coded Bias explores this issue in detail. As Buolamwini explains, because facial recognition algorithms are programmed largely by white men, they are fed predominantly white, male faces. After Buolamwini raised this issue with companies like Microsoft and IBM, IBM improved its algorithm’s accuracy across both skin color and gender, as seen in Table 2.
| TABLE 2 |
IBM algorithm accuracy in 2017 and 2018 (Buolamwini, 2019).
| Skin color and gender | 2017 | 2018 |
|---|---|---|
| Darker male | 88.0% | 99.4% |
| Lighter male | 99.7% | 99.7% |
| Darker female | 65.3% | 83.5% |
| Lighter female | 92.9% | 97.6% |
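Gaps like those in Table 2 are surfaced by disaggregated evaluation: scoring a system separately on each demographic subgroup instead of reporting a single aggregate accuracy. A minimal sketch of that bookkeeping follows; the predictions are invented for illustration and are not the audit’s actual data.

```python
# Disaggregated evaluation: report accuracy per subgroup rather than one
# aggregate figure, which can mask large gaps. Invented data for illustration.
from collections import defaultdict

results = [  # (subgroup, prediction correct?) pairs
    ("lighter male", True), ("lighter male", True), ("lighter male", True),
    ("darker male", True), ("darker male", True), ("darker male", False),
    ("lighter female", True), ("lighter female", True), ("lighter female", False),
    ("darker female", True), ("darker female", False), ("darker female", False),
]

tally = defaultdict(lambda: [0, 0])        # subgroup -> [correct, total]
for subgroup, correct in results:
    tally[subgroup][0] += int(correct)
    tally[subgroup][1] += 1

overall = sum(c for c, _ in tally.values()) / len(results)
print(f"overall: {overall:.0%}")           # a single number looks acceptable...
for subgroup, (c, t) in tally.items():
    print(f"{subgroup:>15}: {c / t:.0%}")  # ...until accuracy is broken out
```

In this toy data, the headline figure of 67 percent conceals a spread from 100 percent (lighter male) down to 33 percent (darker female), the same pattern of disparity that the audit behind Table 2 exposed.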
In 2020, Robert Williams, a Black man, was arrested in Michigan, United States, for larceny after facial recognition software was run on images of the robber (Hill, 2020). Due to the police’s confidence in the algorithm, they arrested him without doing due diligence (i.e., checking his alibi, questioning witnesses, and so on). He was subsequently released and the charges dropped, but the mistake made by the algorithm, combined with poor police work, could have cost Robert Williams his life.52 With an overabundance of cameras installed, the use of facial recognition as a surveillance tool is steadily becoming a reality, and with it, the misidentification and prosecution of innocent people may skyrocket (Raji et al., 2020). Following the thread of bias in policing and security, AI-based technology has also been found to allocate police officers unequally across communities (Heaven, 2020). There has historically been an over-policing of non-white communities, the so-called “ghettoized” locations. An algorithm used in such contexts will learn where police and resources should be allocated based on this historical data. It will learn to “increase vigilance in areas with a higher perceived propensity for crime, and will lead to an inequitable distribution of police and, in turn, inequitable criminalization” (Osoba and Welser IV, 2017, pp. 14–15).
Consequently, the algorithm can and will lead to an increase in the number of minorities imprisoned for petty crimes, such as marijuana possession, speeding or being homeless, amplifying the inherent biases in the system (Heaven, 2020; O’Donnell, 2019). Failing to correct for these biases will reinforce them within AI systems and perpetuate the uneven assignment of police to marginalized communities.
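This self-reinforcing loop can be made visible with a toy simulation. The sketch below is our simplification, loosely in the spirit of published work on “runaway feedback loops” in predictive policing rather than a model from the sources cited here; the crime rate, patrol counts and starting records are arbitrary assumptions.

```python
# Toy simulation of the predictive-policing feedback loop: patrols go where
# past incidents were RECORDED, new records accrue only where patrols stand,
# and an initial disparity becomes self-confirming. Purely illustrative.
import random

random.seed(1)
TRUE_RATE = 0.1          # identical underlying crime rate in both areas
recorded = [60, 40]      # historical records skewed toward area 0
PATROLS = 100

for year in range(10):
    # Send patrols where the records say crime is concentrated.
    target = 0 if recorded[0] >= recorded[1] else 1
    # Patrols observe some of the crime occurring wherever they are posted;
    # the unpatrolled area generates no new records at all.
    observed = sum(random.random() < TRUE_RATE for _ in range(PATROLS * 20))
    recorded[target] += observed

print(f"after 10 years: area 0 = {recorded[0]} records, area 1 = {recorded[1]}")
# Area 0 accumulates thousands of records while area 1 stays frozen at 40,
# "confirming" the skew the algorithm started with.
```

Because the unpatrolled area never produces records, no amount of additional data collection under this policy corrects the initial skew; correction has to come from changing the allocation rule or the data itself.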
There is also a substantial amount of algorithmic bias in AI-enabled hiring processes. Employers are overconfident in algorithms that will, in fact, expand the gaps created by previous hiring biases, often without the employers’ awareness (Hickok, 2020). As Miranda Bogen (2019) explains, AI in hiring operates at multiple levels before an applicant even applies. Targeted job ads on Facebook, LinkedIn and Indeed reinforce racial and gender stereotypes by predicting “who is most likely to click on the ad” (Bogen, 2019). A joint study by Northeastern University and the University of Southern California looked into the skewed delivery of job ads on Facebook: in the most extreme cases, ads for cashier jobs “reach an 85 percent female audience” and positions in taxi companies “reach a 75 percent Black audience,” despite the employers’ openness to all demographics (Ali et al., 2019, p. 4). The algorithm has learned from recruiters’ preferences among past applicants and targets people who match those preferences. Once again, its job is to adapt, learn and replicate the data it receives.
Further along the hiring process, the algorithm can eliminate a significant number of candidates who have relevant experience but do not use the keywords or phrases the algorithm was trained on (Bogen, 2019). Some algorithms also use past hiring decisions as guidance on whom to reject, which can perpetuate discrimination (Dastin, 2018). Other hiring tools attempt to predict who will be successful in a position, using past experience, performance reviews, tenure and sometimes the absence of negative signals such as disciplinary action (Bogen, 2019). These hiring algorithms are, of course, also used in the field of AI itself. The human biases discussed in this chapter (gender, race and socioeconomic status) are compounded and replicated by hiring algorithms, perpetuating the vicious cycle that fuels the lack of representation in computer science and AI programming.
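A crude version of the keyword filter Bogen describes fits in a few lines and makes the failure mode visible: two candidates with comparable experience, described in different vocabulary, receive different outcomes. The required keywords and résumé snippets below are invented for illustration.

```python
# A naive keyword screener: candidates who do not echo the expected
# vocabulary are dropped regardless of actual experience. The keyword list
# and resume texts are invented for illustration only.
REQUIRED_KEYWORDS = {"python", "machine learning", "agile"}

def passes_screen(resume_text: str, min_hits: int = 2) -> bool:
    """Advance a candidate only if enough keywords appear verbatim."""
    text = resume_text.lower()
    return sum(kw in text for kw in REQUIRED_KEYWORDS) >= min_hits

candidate_a = "Built machine learning pipelines in Python on an agile team."
candidate_b = ("Developed statistical prediction models in R and led "
               "iterative, customer-driven release cycles.")

print(passes_screen(candidate_a))  # True  -> advances
print(passes_screen(candidate_b))  # False -> rejected, despite comparable
                                   #          experience in other words
```

Real screening systems are more sophisticated than this sketch, but the underlying vulnerability is the same: whatever vocabulary and signals dominated past hires become the gate for future ones.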
The challenges with AI that we have discussed so far (lack of diversity, applied algorithmic bias, siloed and discipline-dependent research, and non-inclusive innovation) are often undervalued in terms of their impact on the quality of the technology and workforce, as well as on the future of humanity. By forgoing truly inclusive innovation, we are essentially trading away global well-being and prosperity, along with higher standards for AI performance and reliability (Dengel et al., 2021), in exchange for short-term profit. A major driver of this trade-off lies within the current funding structure for AI research, an issue we address in the next section.
51. For further information about the Aspire Mirror Project, see: http://www.aspiremirror.com/
52. Another such incident happened in 2017. A Palestinian worker was wrongfully arrested because Facebook’s automated translation system mistranslated a “good morning” post written in Arabic as “attack them” in Hebrew and “hurt them” in English. See Y. Berger (2017).
AI funding structures and incentives
At the root of the barriers to truly inclusive AI are questions of how and why AI research is funded. As it stands, the projects selected for funding by industry or government agencies sadly do not prioritize inclusion and diversity. They are often neither interdisciplinary nor collaborative, and they fail to account for growing human, social and natural capital to the same degree as they look for returns on investment. The resulting innovations influence policymakers’ views of what should drive the direction of new technologies, which then feeds back into how funding is directed. And thus a complex vicious cycle is perpetuated.
Though not built intentionally, a vicious cycle is created from the interconnectedness of AI research projects, funding sources and policies and is reinforced by the limited diversity of stakeholders benefiting from and influencing AI innovations.
AI and academia
Most research in AI and new technologies is directly or indirectly supported or conducted by corporations. According to the Congressional Research Service’s report on federal research and development funding for FY2021 (2020), 54 percent of applied research and 85 percent of development in the U.S. were funded by business (see Figure 1). A recent assessment of AI policy and funding in Canada shows that even government funding of AI is primarily directed to “industry and academia with links to industry. Academia often acts as an intermediary between industry and government. Indirectly, these funds can still benefit for-profit organizations” (Brandusescu, 2021, p. 37). This can have a very strong impact on academic research, on policymaking and on the extent to which corporations influence innovation in AI.
The presence of the private sector in the academic field of AI is inextricable. According to the AI Index Report produced by the Stanford Institute for Human-Centered Artificial Intelligence (Zhang et al., 2021, p. 21), more than 15 percent of peer-reviewed AI publications in 2019 came from corporations in every major country and region of the world. Industry is also absorbing the majority of AI expertise coming out of academia: in 2019, 65 percent of AI Ph.D. graduates in North America went into industry (Zhang et al., 2021, p. 4). Corporations also sponsor, or are highly present at, many conferences and workshops in the field (Alford, 2021). For example, at the International Conference on Learning Representations (ICLR) in 2021, nearly 30 percent of the accepted papers came from corporations such as Google, Amazon, IBM and Facebook; Google also had four of the eight outstanding paper award winners, and Facebook had one (ICLR, 2021).
Most corporations working in AI are driven by R&D agendas heavily influenced by market demands and return on investment. Innovations that revolutionize the field, bring in new assets and broaden horizons are central to these agendas, and the expertise and skillsets developed in academia are a key resource, which is one reason industry funds academic research. It follows that industry needs ultimately shape many funded academic research projects. This dynamic between industry and academia creates a twofold tension. First, given that most major players in the AI and technology industry are concentrated in the Global North (Chan et al., 2021), the gaps limiting innovation that truly benefits all are further amplified. Second, the industry-academia dynamic disproportionately steers the field towards private sector interests rather than the public good.
Close collaborations between industry and academia are not inherently problematic and can benefit research and education in academic institutions (Etzioni, 2019). Stakeholder capitalism—“a form of capitalism in which companies seek long-term value creation by taking into account the needs of all their stakeholders, and society at large”—can be seen as a solution that works for people and the planet (Schwab and Vanham, 2021). However, this requires industries to place interdisciplinarity and inclusive innovation at the heart of their AI strategy, ultimately shifting market demand towards the public good and including marginalized people and communities among key stakeholders.
Feedback loops: Government funding, private sector incentives and policy
Excitement about AI is shifting funding away from basic research towards applied research and “big innovation” that can be commercialized in the short and medium terms. Thus, AI is rapidly reshaping the funding playing field for both the public and private sectors. Applied R&D is often incentivized by the potential for return on investment in terms of both profit and growth. Most applied research is currently funded by private industry (Congressional Research Service, 2020; see Figure 1). Since industry also indirectly funds academic research (for example, by supporting government funding programs; Brandusescu, 2021), it is difficult to dismiss the strong influence of the private sector on the direction of AI. Further, this influence feeds back into policy strategies for economic growth, which in turn influence government funding programs (see Figure 1).
| FIGURE 1 |
AI funding structures and incentives. Adapted from models by Kimatu, J. N. (2016) and Ondimu, S. (2012).
As such, “it is worth questioning how the innovation economy is influenced by private interests and private power—and by extension, how AI public policy gets written” (Brandusescu, 2021, p. 38). Considering the feedback mechanism between public or government funding and innovations within the private AI sector, the need to place inclusivity at the heart of these innovations has never been more pressing if we want to have AI-based technologies that benefit all and are trusted by all. Innovation is a critical driver of research and funding incentives with direct impacts on academia, government and industry. Shifting from “big innovation” to inclusive innovation can shift funding and research dynamics, AI policy and stakeholder engagement to ensure that no one is left behind.
There are advantages to extending the focus of AI beyond its current siloed emphasis on science and technology into fields such as neuroscience, computational linguistics, ethics, sociology and anthropology (Rahwan et al., 2019, p. 477). These advantages include increased interdisciplinarity and the integration of skillsets beyond the technical, which are sorely missing from AI in general and whose absence is hindering the field’s progress (Dengel et al., 2021).
The U.S. National Artificial Intelligence Initiative Act of 2020 (United States Congress, 2020) seeks to spread funding for AI research and its applications across a wider range of government agencies beyond national defense, previously the main driver of AI policies in the U.S. (Delgado and Levy, 2021). This act and other policies and initiatives are starting to change how funding agencies operate, recognizing the needed change in funding priorities: “Artificial intelligence is increasingly becoming a highly interdisciplinary field with expertise required from a diverse range of scientific and other scholarly disciplines that traditionally work independently and continue to face cultural and institutional barriers to large scale collaboration” (United States Congress, 2020).
However, as stated in one of the U.S. Congress findings, “current federal investments and funding mechanisms are largely insufficient to incentivize and support the large-scale interdisciplinary and public-private collaborations that will be required to advance trustworthy artificial intelligence systems in the United States” (United States Congress, 2020, pp. 3–4). This is not surprising when we consider the heavy influence of private-sector interests on the funding criteria for research and innovation outlined above. Given these criteria, it is also not uncommon for researchers to adapt their work to fit the available funding opportunities. Therefore, in addition to AI public policy being caught in this vicious cycle, the quality of the AI itself only needs to satisfy the demands of the market. And unfortunately, the AI agenda presently rests in the hands of stakeholders limited in both number and diversity (Delgado and Levy, 2021). To shift the direction of incentives and break the cycle, funding criteria should make inclusive innovation a focal point, and profit and growth need to be weighed against opportunities for scaling thriving human, natural and social capital.
One way to prioritize inclusive innovation is for government and industry to support more projects that are community-based, collaborative and interdisciplinary. Unfortunately, at present, reaching for highly rated “innovative” projects too often means passing over inclusive AI designed for local impact in limited-resource settings, since such projects do not visibly revolutionize the field in the short term and therefore attract little funding. For example, the new National Science Foundation (NSF, 2021) commitment to increase funding for applied AI may seek to diversify research; however, without anchoring such initiatives in inclusive innovation, it may actually move money towards tech innovation and away from fundamental research that lacks short-term commercial viability, disadvantaging students interested in research focused on the public good (Viglione, 2020). Changing the incentives currently driving the vicious cycle in funding can allow more early- to mid-career researchers to take on projects that prioritize inclusive innovation and nurture expertise for inclusive AI. We argue that inclusive innovation is the only innovation that should count in AI if we want it to benefit all. When AI innovation occurs in silos and is mostly incentivized by industry, pressured by shareholders and profit, this outcome is not possible.
The issue of trickle-down science
To justify the current lack of inclusion in innovation, the concept of trickle-down economics has at times been extended to “trickle-down science.” Having a high concentration of resources and scholars in the Global North is expected to “produce the best science” whose “methods, theories, and insights” will trickle down into the Global South (Reidpath and Allotey, 2019, p. 1). Just as with trickle-down economics, this is not viable, and in fact the opposite is happening (Reidpath and Allotey, 2019, p. 1), partly because of the funding incentives and the current drivers of market demands discussed earlier in this chapter. For example, the high demand for resources coupled with fewer regulations and privacy protections in the Global South is driving the increased exploitation of both human resources (for instance, to perform activities such as data mining) and natural resources, such as the extraction of minerals (Arezki, 2021; Arun, 2020, p. 594; Mishra, 2021).
It is also important to consider the political environment of regions where AI-based applications are deployed. Oftentimes technology developed for the privileged few can be harmful in less-resourced regions. For example, the UN investigators’ report on the genocide of the Rohingya population in Myanmar in 2017 noted that “Facebook [had] been a useful instrument for those seeking to spread hate” (Human Rights Council, 2018, p. 34). This demonstrates the powerful effect that social media technologies can have on human rights when used in a place where the political and media environments are not healthy.
Rapid innovation often occurs at the expense of those who would supposedly benefit from the trickle-down philosophy, and for whom the harmful repercussions disproportionately outweigh any potential benefit (Schia, 2018, p. 827). This is only reinforced by the continued exclusion of marginalized people and communities as key stakeholders. As Shirley Chisholm stated, “If they don’t give you a seat at the table, bring a folding chair.” The importance of ensuring representation, however challenging it may be, cannot be overstated. But even then, the work is far from done.
What it really means to have a seat at the table
The challenges described in this chapter are of course not just related to AI or how we perceive innovation. They are representative of broader systemic issues that are evolving daily. A key barrier to addressing them is that current efforts are siloed rather than approached from a systems perspective.
Many AI initiatives are already making a lot of progress toward having AI benefit all and including more voices. These include AI4ALL,53 the African Master’s in Machine Intelligence,54 Quantum Leap Africa,55 the Centro de Excelencia en Inteligencia Artificial en Medellín56 and the African Supercomputing Center at Morocco’s UM6P university57.
Corporations such as Google and Microsoft, as well as foundations, policymakers and government funding agencies, do invest in various ways in AI projects for the social good,58 and in doing so they fund projects that would probably not otherwise be selected under current funding guidelines. However, when these initiatives are not anchored in inclusive innovation, they can backfire and intensify the marginalization of minorities (Latonero, 2019). For example, when such funding opportunities are placed exclusively outside mainstream funding cycles, the notion that these projects are marginal is reinforced rather than addressed.
This occurs on an individual level as well. When the only entry points into innovative research for marginalized people are through specialized programs, the feelings of imposter syndrome (Tulshyan and Burey, 2021) so common among minorities in science and high-level positions are accentuated. These programs intend to reduce the gap and include more minorities behind the scenes and as decision-makers. But they rarely address the broader systemic issues that produce the prejudices, toxic environments and toxic colleagues perpetuating the idea that minorities need to be invited into the circle of scientists and leaders. As discussed in previous sections, personal self-doubt that starts in childhood, coupled with a panoply of social barriers and expectations, contributes to the silencing of the minority voices that are at the table.
Unfortunately, these inclusion programs are often perceived as enough to bridge the gaps (Puritty et al., 2017). In practice, of course, they are not: witness the percentage of women in math and computer science jobs in the U.S. dropping from 25.8 percent in 2010 to 25.2 percent in 2020 (see Table 1). They are good initiatives, so why are they not working as intended? As Dengel et al. (2021, p. 90) put it, “we still need a lot of work in research and a paradigm shift in AI to develop a real AI for humanity—a human-centric AI.” We argue that an essential requirement for this paradigm shift is to place inclusivity at the center of innovation rather than on the peripheries or as an afterthought.
We are not the first to point out the biases and problems in AI technologies, nor the first to note how much progress has been made. But it is important to keep raising the standard for inclusivity and innovation; this can only improve the systems in which we operate and the innovations we strive for (as we saw with facial recognition systems, for example). According to Giridharadas (2021), an author known for his critique of elites’ exclusionary take on world issues that should be addressed through collective action:
all grand challenges […] require public, institutional, democratic and universal solutions. They need to solve the problem at the root and for everyone. What we do together is more interesting, compelling, more powerful, more valuable than what we do alone. Current neo-liberal myth is that what we do alone is better and more beautiful than what we do together. We need to bring back the notion that we live in society within which we have interdependence. Valuing what we do together needs to be reclaimed. Only this collective intelligence will allow us to solve the grand challenges we face.
53. See the AI4ALL website (2021) for more information.
54. For more information on AIMS and the African Master’s in Machine Intelligence, see AIMS (2021).
55. See Quantum Leap Africa (2021) for additional details.
56. This Center was set up in partnership between Ruta N in Colombia and the Institute for Robotic Process Automation & Artificial Intelligence (IRPAAI). See Ruta N (2018) for more information.
57. The ASCC is at Mohammed VI Polytechnic University; more information can be found on its website (see ASCC, 2020).
58. Examples include the Google Impact Challenge for Women and Girls (2021), the Microsoft AI for Good Research Lab (Microsoft Research, 2021), and the Creating Sustained Social Impact, by their Corporate Citizenship branch (Microsoft Corporate Citizenship, 2021).
Conclusion
The development of new algorithms, the advancement in computational resources and the availability of abundant data have driven the recent surge of innovation in AI. This is driving transformations in a wide range of industries and sectors that will likely revolutionize society as did past industrial revolutions. As such, humanity is once again confronted with the danger of perpetuating the repercussions of inequitable systems change driven by colonial mentalities and socio-economic divides. In particular, the digital divide is amplifying inequalities in terms of access to AI and the harmful consequences of human bias in AI-based technologies.
As discussed throughout the chapter, addressing the neglected issues in AI development starts with addressing our own human biases. The quality and accuracy of AI-based systems are compromised by the lack of diversity in data and in human resources at all stages of AI development. This is amplified by the marginalization of entire groups within STEM based on gender, race and socioeconomic status, which also worsens the talent shortage currently challenging the AI field.
As a result of skyrocketing demand for AI, a vicious cycle in funding, both private and public, is reinforced by underlying incentives for rapid, short-term profit and economic growth, steering the direction of AI and the digital age. This vicious cycle is characteristic of the rapid-growth mentality focused on “big innovation.” As we argue in this chapter, the focal point needs to shift to inclusive innovation, thereby increasing the diversity of voices and enabling greater capacity-building, especially within marginalized, resource-poor communities. For AI to truly reflect the power of human consciousness, it should represent the beauty and the power of diversity.
The increasing interconnectedness of global systems and challenges is shifting the emphasis from pure profit to valuing natural, human and social capital. There is no way for people and the planet to thrive without this shift. Prioritizing local solutions that embody universal ethical principles of trust, responsibility and empathy is key. Though it may be tempting to prioritize rapid growth and short-term profit for the sake of innovation, this approach will inevitably limit our AI systems to benefiting a privileged few rather than humanity as a whole. Once AI research and development is driven by inclusive innovation, we will be able to shift from a fragmented AI to one of wholeness that benefits all, including future generations.
References
AI4ALL. 2021. Home page. https://ai-4-all.org/
AIMS (African Masters in Machine Intelligence). 2021. Home page. https://aimsammi.org
Alford, A. 2021. AI conference recap: Google, Microsoft, Facebook, and others at ICLR 2021. InfoQ. June 8. https://www.infoq.com/news/2021/06/conference-recap-iclr-2021/
Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A. and Rieke, A. 2019. Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes. Proceedings of the ACM on Human-Computer Interaction, Vol. 3, No. 199, pp. 1–30. https://dl.acm.org/doi/pdf/10.1145/3359301
American University. 2020. Understanding the digital divide in education. School of Education Online blog, December 15. https://soeonline.american.edu/blog/digital-divide-in-education
Arezki, R. 2021. Transnational governance of natural resources for the 21st century. Brookings Institution blog, July 7. https://www.brookings.edu/blog/future-development/2021/07/07/transnational-governance-of-natural-resources-for-the-21st-century/
Arun, C. 2020. AI and the Global South: Designing for other worlds. M. D. Dubber, F. Pasquale and S. Das (eds), The Oxford Handbook of Ethics of AI. Oxford, Oxford University Press, pp. 589–606.
ASCC (African SuperComputing Center). 2020. Home page.
Barber, P. H., Hayes, T. B., Johnson, T. L. and Márquez-Magaña, L. 2020. Systemic racism in higher education. Science, Vol. 369, No. 6510, pp. 1440–1441. https://www.science.org/doi/pdf/10.1126/science.abd7140
Berger, Y. 2017. Israel arrests Palestinian because Facebook translated “good morning” to “attack them.” Haaretz, October 22. https://www.haaretz.com/israel-news/palestinian-arrested-over-mistranslated-good-morning-facebook-post-1.5459427
Bogen, M. 2019. All the ways hiring algorithms can introduce bias. Harvard Business Review, May 6. https://hbr.org/2019/05/all-the-ways-hiring-algorithms-can-introduce-bias
Brandusescu, A. 2021. Artificial intelligence policy and funding in Canada: Public investments, private interests. Centre for Interdisciplinary Research on Montreal, pp. 11–51. https://www.mcgill.ca/centre-montreal/files/centre-montreal/aipolicyandfunding_report_updated_mar5.pdf
Buolamwini, J. 2019. Compassion through computation: Fighting algorithmic bias. Video, World Economic Forum. https://youtu.be/5PGYOYZKsdY
Bureau of Labor Statistics (United States). 2010. Employed persons by detailed occupation, sex, race, and Hispanic or Latino ethnicity, labor force statistics from the Current Population Survey. https://www.bls.gov/cps/aa2010/cpsaat11.pdf
——. 2020. Employed persons by detailed occupation, sex, race, and Hispanic or Latino ethnicity, labor force statistics from the Current Population Survey. https://www.bls.gov/cps/cpsaat11.pdf
Bushweller, K. 2021. How to get more students of color into STEM: Tackle bias, expand resources. Education Week web article, March 2. https://www.edweek.org/technology/how-to-get-more-students-of-color-into-stem-tackle-bias-expand-resources/2021/03
Carter, L., Liu, D. and Cantrell, C. 2020. Exploring the intersection of the digital divide and artificial intelligence: A hermeneutic literature review. AIS Transactions on Human-Computer Interaction, Vol. 12, No. 4, pp. 253–275. https://aisel.aisnet.org/thci/vol12/iss4/5/
Chan, A., Okolo, C. T., Terner, Z. and Wang, A. 2021. The limits of global inclusion in AI development. Association for the Advancement of Artificial Intelligence. https://arxiv.org/pdf/2102.01265.pdf
Cho, A. 2020. Artificial intelligence systems aim to sniff out signs of COVID-19 outbreaks. Science, May 12. https://www.sciencemag.org/news/2020/05/artificial-intelligence-systems-aim-sniff-out-signs-covid-19-outbreaks
Clark, U. S. and Hurd, Y. L. 2020. Addressing racism and disparities in the biomedical sciences. Nature Human Behaviour, Vol. 4, No. 8, pp. 774–777. https://www.nature.com/articles/s41562-020-0917-7
Coded Bias. 2020. Motion picture, 7th Empire Media, Brooklyn, directed by Shalini Kantayya.
Coleman, D. 2019. Digital colonialism: The 21st century scramble for Africa through the extraction and control of user data and the limitations of data protection laws. Michigan Journal of Race and Law, Vol. 24, No. 2, pp. 417–439. https://repository.law.umich.edu/mjrl/vol24/iss2/6
Congressional Research Service (United States). 2020. Federal Research and Development (R&D) Funding: FY2021. https://fas.org/sgp/crs/misc/R46341.pdf
Cummings, M. 2019. Yale study shows class bias in hiring based on few seconds of speech. YaleNews, October 21. https://news.yale.edu/2019/10/21/yale-study-shows-class-bias-hiring-based-few-seconds-speech
Dancy, M., Rainey, K., Stearns, E., Mickelson, R. and Moller, S. 2020. Undergraduates’ awareness of white and male privilege in STEM. International Journal of STEM Education, Vol. 7, No. 52, pp. 1–17. https://stemeducationjournal.springeropen.com/articles/10.1186/s40594-020-00250-3
Dastin, J. 2018. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, October 10. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Delgado, F. A. and Levy, K. 2021. A community-centered research agenda for AI innovation policy. Cornell Policy Review, May 4. https://www.cornellpolicyreview.com/a-community-centered-research-agenda-for-ai-innovation-policy/
Dengel, A., Etzioni, O., DeCario, N., Hoos, H., Li, F., Tsujii, J. and Traverso, P. 2021. Next big challenges in core AI technology. B. Braunschweig and M. Ghallab (eds.), Reflections on Artificial Intelligence for Humanity. Lecture Notes in Computer Science, Vol. 12600, Springer, Cham, pp. 90–115. https://doi.org/10.1007/978-3-030-69128-8_7
Dodge, A. 2018. What you need to know about the STEM race gap. Ozobot blog, February 20. https://ozobot.com/blog/need-know-stem-race-gap
Etzioni, O. 2019. AI academy under siege. Inside Higher Ed, November 20. https://www.insidehighered.com/views/2019/11/20/how-stop-brain-drain-artificial-intelligence-experts-out-academia-opinion
Falk-Krzesinski, H. J. and Tobin, S. C. 2015. How do I review thee? Let me count the ways: A comparison of research grant proposal review criteria across US federal funding agencies. The Journal of Research Administration, Vol. 46, No. 2, pp. 79–94. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4892374/
Firebaugh, G. and Acciai, F. 2016. For blacks in America, the gap in neighborhood poverty has declined faster than segregation. Proceedings of the National Academy of Sciences, Vol. 113, No. 47, pp. 13372–13377. https://www.pnas.org/content/pnas/113/47/13372.full.pdf
Fleming, N. 2018. How artificial intelligence is changing drug discovery. Nature, Vol. 557, No. 7706, pp. 55–57. https://link.gale.com/apps/doc/A572639347/AONE
Frehill, L. M., Di Fabio, N., Hill, S., Traeger, K., and Buono, J. 2008. Women in engineering: A review of the 2007 literature. SWE Magazine, Vol. 54, pp. 6–30.
Garcia, E. 2021. The international governance of AI: Where is the Global South? The Good AI blog, January 28. https://thegoodai.co/2021/01/28/the-international-governance-of-ai-where-is-the-global-south/
Giridharadas, A. 2021. Philanthropy and the state: Who is funding what and why? Video, UCL Institute for Innovation and Public Purpose. https://www.youtube.com/watch?v=fOAkNu7Y6f4
Google. 2021. Google Impact Challenge for Women and Girls: Introduction. https://impactchallenge.withgoogle.com/womenandgirls2021
Gray, M. L. and Suri, S. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston, Houghton Mifflin Harcourt.
Heaven, W. D. 2020. Predictive policing algorithms are racist. They need to be dismantled. MIT Technology Review, July 17. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice
Heeks, R., Amalia, M., Kintu, R. and Shah, N. 2013. Inclusive innovation: Definition, conceptualisation and future research priorities. Manchester Centre for Development Informatics, No. 53, pp. 1–28. https://www.researchgate.net/publication/334613068_Inclusive_Innovation_Definition_Conceptualisation_and_Future_Research_Priorities
Hickok, M. 2020. Why was your job application rejected? Bias in recruitment algorithms, part 1. Montreal Ethics AI Institute blog, July 12. https://montrealethics.ai/why-was-your-job-application-rejected-bias-in-recruitment-algorithms-part-1/
Hill, C. 2020. The STEM gap: Women and girls in science, technology, engineering and math. AAUW resources section. https://www.aauw.org/resources/research/the-stem-gap/
Hill, C., Corbett, C. and St. Rose, A. 2010. Why So Few? Women in Science, Technology, Engineering, and Mathematics. Washington DC, AAUW. https://www.aauw.org/app/uploads/2020/03/why-so-few-research.pdf
Hill, K. 2020. Wrongfully accused by an algorithm. The New York Times, June 24. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html
Human Rights Council. 2018. Report of the Independent International Fact-Finding Mission on Myanmar. Geneva. https://www.ohchr.org/Documents/HRBodies/HRCouncil/FFM-Myanmar/A_HRC_39_64.pdf
ICLR (International Conference on Learning Representations). 2021. Announcing ICLR 2021 Outstanding Paper Awards. https://iclr-conf.medium.com/announcing-iclr-2021-outstanding-paper-awards-9ae0514734ab
Jiménez-Luna, J., Grisoni, F., Weskamp, N. and Schneider, G. 2021. Artificial intelligence in drug discovery: Recent advances and future perspectives. Expert Opinion on Drug Discovery, Vol. 16, No. 9, pp. 1–11. https://www.tandfonline.com/doi/pdf/10.1080/17460441.2021.1909567
Kimatu, J. N. 2016. Evolution of strategic interactions from the triple to quad helix innovation models for sustainable development in the era of globalization. Journal of Innovation and Entrepreneurship, Vol. 5, No. 16, pp. 1–7. https://innovation-entrepreneurship.springeropen.com/articles/10.1186/s13731-016-0044-x
Kuhlman, C., Jackson, L. and Chunara, R. 2020. No computation without representation: Avoiding data and algorithm biases through diversity. arXiv Preprint. https://arxiv.org/pdf/2002.11836.pdf
Kwet, M. 2019. Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, Vol. 60, No. 4, pp. 3–26. https://journals.sagepub.com/doi/pdf/10.1177/0306396818823172
Lada, A., Wang, M. and Yan, T. 2021. How machine learning powers Facebook’s news feed ranking algorithm. Engineering at Meta blog, January 26. https://engineering.fb.com/2021/01/26/ml-applications/news-feed-ranking/
Larsen, J. 2021. Levi-Strauss’ Dr. Katia Walsh on why diversity in AI and ML is non-negotiable. VentureBeat, August 2. https://venturebeat.com/2021/08/02/levi-strauss-dr-katia-walsh-on-why-diversity-is-non-negotiable-in-ai-and-machine-learning/
Latonero, M. 2019. Opinion: AI for good is often bad. Wired, November 18. https://www.wired.com/story/opinion-ai-for-good-is-often-bad/
Leslie, S., Cimpian, A., Meyer, M. and Freeland, E. 2015. Expectations of brilliance underlie gender distributions across academic disciplines. Science, Vol. 347, No. 6219, pp. 262–265. https://www.science.org/doi/full/10.1126/science.1261375
May, A. 2020. Dr. Fei-Fei Li: “We can make humanity better in so many ways.” Artificial Intelligence in Medicine, December 12. https://ai-med.io/ai-champions/dr-fei-fei-li-we-can-make-humanity-better-in-so-many-ways/
McCarthy, J., Minsky, M. L., Rochester, N. and Shannon, C. E. 1955. A proposal for the Dartmouth Summer Research Project on Artificial Intelligence. http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
McGee, E. O. 2020. Interrogating structural racism in STEM higher education. Educational Researcher, Vol. 49, No. 9, pp. 633–644. https://journals.sagepub.com/doi/full/10.3102/0013189X20972718
McGee, E. and Bentley, L. 2017. The troubled success of Black women in STEM. Cognition and Instruction, Vol. 35, No. 4, pp. 265–289. https://www.tandfonline.com/doi/pdf/10.1080/07370008.2017.1355211
Metcalf, H. E., Crenshaw, T. L., Chambers, E. W. and Heeren, C. 2018. Diversity across a decade: A case study on undergraduate computing culture at the University of Illinois. Proceedings of the 49th ACM Technical Symposium on Computer Science Education. Association for Computing Machinery, pp. 610–615. https://dl.acm.org/doi/abs/10.1145/3159450.3159497
Microsoft Corporate Citizenship. 2021. Creating sustained societal impact. https://www.microsoft.com/en-hk/sparkhk/creating-sustained-societal-impact
Microsoft Research. 2021. AI for Good Research Lab: Overview. https://www.microsoft.com/en-us/research/group/ai-for-good-research-lab/
Miller, O. 2017. The myth of innate ability in tech. Personal blog, January 9. http://omojumiller.com/articles/The-Myth-Of-Innate-Ability-In-Tech
Miller, R. A. and Downey, M. 2020. Examining the STEM climate for queer students with disabilities. Journal of Postsecondary Education and Disability, Vol. 33, No. 2, pp. 169–181. https://www.researchgate.net/publication/334654579_Examining_the_STEM_Climate_for_Queer_Students_with_Disabilities
Mishra, S. 2021. Opinion: Is AI deepening the divide between the Global North and South? Newsweek, March 9. https://www.newsweek.com/ai-deepening-divide-between-global-north-south-opinion-1574141
NSF. 2021. National Science Foundation Graduate Research Fellowship Program. https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=6201
O’Donnell, R. M. 2019. Challenging racist predictive policing algorithms under the equal protection clause. New York University Law Review, Vol. 94, No. 3, pp. 544–580. https://www.nyulawreview.org/wp-content/uploads/2019/06/NYULawReview-94-3-ODonnell.pdf
Ondimu, S. 2012. Possible approaches to commercialisable university research in Kenya. The 7th KUAT Scientific, Technological and Industrialization Conference, pp. 1–16. https://www.researchgate.net/publication/328095915_Possible_Approaches_to_Commercialisable_University_Research_in_Kenya
Organization for Economic Co-operation and Development. 2001. Understanding the Digital Divide. https://www.oecd.org/digital/ieconomy/1888451.pdf
Osoba, O. and Welser IV, W. 2017. An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. Santa Monica, RAND Corporation. https://www.rand.org/pubs/research_reports/RR1744.html
Picture a Scientist. 2020. Motion picture, Uprising Production, Antarctica, directed by Ian Cheney and Sharon Shattuck.
Puritty, C., Strickland, L. R., Alia, E., Blonder, B., Klein, E., Kohl, M. T., McGee, E., Quintana, M., Ridley, R. E., Tellman, B. and Gerber, L. R. 2017. Without inclusion, diversity initiatives may not be enough. Science, Vol. 357, No. 6356, pp. 1101–1102. https://www.science.org/doi/full/10.1126/science.aai9054
Quantum Leap Africa. 2021. Preparing Africa for the Coming Quantum Revolution. https://quantumleapafrica.org
Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., Mcelreath, R., Mislove, A., Parkes, D. C., Pentland, A. S., Roberts, M. E., Shariff, A., Tenenbaum, J. B. and Wellman, M. 2019. Machine behaviour. Nature, Vol. 568, No. 7753, pp. 477–486. https://doi.org/10.1038/s41586-019-1138-y
Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J. and Denton, E. 2020. Saving face: Investigating the ethical concerns of facial recognition auditing. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 145–151. https://dl.acm.org/doi/pdf/10.1145/3375627.3375820
Reidpath, D. and Allotey, P. 2019. The problem of “trickle-down science” from the Global North to the Global South. BMJ Global Health, Vol. 4, No. 4, pp. 1–3. https://gh.bmj.com/content/bmjgh/4/4/e001719.full.pdf
Riegle-Crumb, C., King, B. and Irizarry, Y. 2019. Does STEM stand out? Examining racial/ethnic gaps in persistence across postsecondary fields. Educational Researcher, Vol. 48, No. 3, pp. 133–144. https://journals.sagepub.com/doi/pdf/10.3102/0013189X19831006
Rosen, J. 2018. Black students who have one Black teacher are more likely to go to college. Johns Hopkins University Hub, November 12. https://hub.jhu.edu/2018/11/12/black-students-black-teachers-college-gap/
Ruta N. 2018. Ruta N Medellín: Centro de Innovación y Negocios [Innovation and Business Center], home page. https://www.rutanmedellin.org/es/
Schia, N. N. 2018. The cyber frontier and digital pitfalls in the Global South. Third World Quarterly, Vol. 39, No. 5, pp. 821–837. https://www.tandfonline.com/doi/pdf/10.1080/01436597.2017.1408403
Schneiderwind, J. and Johnson, J. M. 2020. Why are students with disabilities so invisible in STEM education? Education Week, July 27. https://www.edweek.org/education/opinion-why-are-students-with-disabilities-so-invisible-in-stem-education/2020/07
Schwab, K. and Vanham, P. 2021. What is stakeholder capitalism? European Business Review, January 22. https://www.europeanbusinessreview.eu/page.asp?pid=4603
Skupien, S. and Rüffin, N. 2019. The geography of research funding: Semantics and beyond. Journal of Studies in International Education, Vol. 24, No. 1, pp. 24–38. https://journals.sagepub.com/doi/pdf/10.1177/1028315319889896
Thompson, N. C., Greenewald, K., Lee, K. and Manso, G. F. 2020. The computational limits of deep learning. MIT Initiative on the Digital Economy Research Brief, Vol. 4, pp. 1–16. https://arxiv.org/pdf/2007.05558.pdf
Trix, F. and Psenka, C. 2003. Exploring the color of glass: Letters of recommendation for female and male medical faculty. Discourse & Society, Vol. 14, No. 2, pp. 191–220. https://journals.sagepub.com/doi/pdf/10.1177/0957926503014002277
Tulshyan, R. and Burey, J. A. 2021. Stop telling women they have imposter syndrome. Harvard Business Review, February 11. https://hbr.org/2021/02/stop-telling-women-they-have-imposter-syndrome
Turing, A. M. 1950. Computing machinery and intelligence. Mind, Vol. 59, No. 236, pp. 433–460.
United Nations. 2020. Digital divide “a matter of life and death” amid COVID-19 crisis, Secretary-General warns virtual meeting, stressing universal connectivity key for health, development. Press release, June 11. https://www.un.org/press/en/2020/sgsm20118.doc.htm
UNCTAD. 2021. Technology and Innovation Report: Catching Technological Waves—Innovation with Equity. New York, United Nations Publications. https://unctad.org/webflyer/technology-and-innovation-report-2021
United States Congress. 2020. H.R.6216 – National Artificial Intelligence Initiative Act of 2020, pp. 1–56. https://www.congress.gov/bill/116th-congress/house-bill/6216/text#toc-H7A238FDF26594A338CB94267854F51D4
‘Utoikamanu, F. 2018. Closing the technology gap in least developed countries. UN Chronicle, December. https://www.un.org/en/chronicle/article/closing-technology-gap-least-developed-countries
Viglione, G. 2020. NSF grant changes raise alarm about commitment to basic research. Nature, Vol. 584, No. 7820, pp. 177–178. https://www.nature.com/articles/d41586-020-02272-x
Voskoboynik, D. M. 2018. To fix the climate crisis, we must face up to our imperial past. OpenDemocracy, October 8. https://www.opendemocracy.net/en/opendemocracyuk/to-fix-climate-crisis-we-must-acknowledge-our-imperial-past/
Warikoo, N., Sinclair, S., Fei, J. and Jacoby-Senghor, D. 2016. Examining racial bias in education: A new approach. Educational Researcher, Vol. 45, No. 9, pp. 508–514. https://journals.sagepub.com/doi/full/10.3102/0013189X16683408
Zeng, D., Cao, Z. and Neill, D. B. 2021. Artificial intelligence-enabled public health surveillance – from local detection to global epidemic monitoring and control. L. Xing, M. L. Giger and J. K. Min (eds.), Artificial Intelligence in Medicine, pp. 437–453. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7484813/
Zhang, D., Mishra, S., Brynjolfsson, E., Etchemendy, J., Ganguli, D., Grosz, B., Lyons, T., Manyika, J., Niebles, J. C., Sellitto, M., Shoham, Y., Clark, J. and Perrault, R. 2021. Artificial intelligence index report. Stanford Institute for Human-Centered Artificial Intelligence. https://aiindex.stanford.edu/report/