Responsible AI in the wild: Lessons learned at AWS

Real-world deployment requires notions of fairness that are task relevant and responsive to the available data, recognition of unforeseen variation in the “last mile” of AI delivery, and collaboration with AI activists.

When we first joined AWS AI/ML as Amazon Scholars over three years ago, we had already been doing scientific research in the area now known as responsible AI for a while. We had authored a number of papers proposing mathematical definitions of fairness and machine learning (ML) training algorithms enforcing them, as well as methods for ensuring strong notions of privacy in trained models. We were well versed in adjacent subjects like explainability and robustness and were generally denizens of the emerging responsible-AI research community. We even wrote a general-audience book on these topics to try to explain their importance to a broader audience.


So we were excited to come to AWS in 2020 to apply our expertise and methodologies to the ongoing responsible-AI efforts here — or at least, that was our mindset on arrival. But our journey has taken us somewhere quite different, somewhere more consequential and interesting than we expected. It’s not that the definitions and algorithms we knew from the research world aren’t relevant — they are — but rather that they are only one component of a complex AI workstream comprising data, models, services, enterprise customers, and end-users. It’s also a workstream in which AWS is uniquely situated due to its pioneering role in cloud computing generally and cloud AI services specifically.

Our time here has revealed to us some practical challenges of which we were previously unaware. These include diverse data modalities, “last mile” effects with customers and end-users, and the recent emergence of AI activism. Like many good interactions between industry and academia, what we’ve learned at AWS has altered our research agenda in healthy ways. In case it’s useful to anyone else trying to parse the burgeoning responsible-AI landscape (especially in the generative-AI era), we thought we’d detail some of our experiences here.

Modality matters

One of our first important practical lessons might be paraphrased as “modality matters”. By this we mean that the particular medium in which an AI service operates (such as visual images or spoken or written language) matters greatly in how we analyze and understand it from both performance and responsible-AI perspectives.

Consider specifically the desire for trained models to be “fair”, or free of significant demographic bias. Much of the scientific literature on ML fairness assumes that the features used to compare performance across groups (which might include gender, race, age, and other attributes) are readily available, or can be accurately estimated, in both training and test datasets.


If this is indeed the case (as it might be for some spreadsheet-like “tabular” datasets recording things like medical or financial records, in which a person’s age and gender might be explicit columns), we can more easily test a trained model for bias. For instance, in a medical diagnosis application we might evaluate the model to make sure the error rates are approximately the same across genders. If these rates aren’t close enough, we can augment our data or retrain the model in various ways until it passes the evaluation.
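The kind of check described above can be sketched in a few lines. This is a minimal illustration, not an AWS implementation; the data, group labels, and the 0.05 tolerance are all hypothetical:

```python
# Minimal sketch: checking that a model's error rates are approximately
# equal across genders, as in the medical-diagnosis example. All data
# and the tolerance threshold here are illustrative.
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Return the error rate of the predictions within each demographic group."""
    rates = {}
    for g in set(groups):
        mask = np.array([grp == g for grp in groups])
        rates[g] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # ground-truth diagnoses (toy data)
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # model predictions (toy data)
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]

rates = group_error_rates(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
# Flag the model for data augmentation or retraining if the gap between the
# best- and worst-served groups exceeds a chosen tolerance (0.05 here).
needs_work = gap > 0.05
```

In practice the tolerance, the error metric (overall error, false-negative rate, etc.), and the grouping all depend on the task; the point is only that when group labels are explicit columns, the audit itself is straightforward.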

But many cloud AI/ML services operate on data that simply does not contain explicit demographic information. Rather, these services live in entirely different modalities such as speech, natural language, and vision. Applications such as our speech recognition and transcription services take as input time series of frequencies that capture spoken utterances. Consequently, there are no direct annotations in the data of attributes like gender, race, or age.

But what can be more readily detected from speech data, and are also more directly related to performance, are regional dialects and accents — of which there are dozens in North American English alone. English-language speech can also feature non-native accents, influenced more by the first languages of the speakers than by the regions in which they currently live. This presents an even more diverse landscape, given the large number of first languages and the international mobility of speakers. And while spoken accents may be weakly correlated or associated with one or more ancestry groups, they are usually uninformative about attributes like age and gender (speakers with a Philadelphia accent may be young or old; male, female, or nonbinary; etc.). Finally, the speech of even a particular person may exhibit many other sources of variation, such as situational stress and fatigue.

[Figure] Data — such as regional variations in word choice and accents — may lead toward alternative notions of fairness that are more task-relevant, as with word error rates across dialects and accents.

What is the responsible-AI practitioner to do when confronted with so many different accents and other moving parts, in a task as complex as speech transcription? At AWS, our answer is to meet the task and data on their own terms, which in this case involves some heavy lifting: meticulously gathering samples from large populations of representative speakers with different accents and carefully transcribing each word. The word “representative” is important here: while it might be more expedient to (for instance) gather this data from professional actors trained in diction, such data would not be typical of spoken language in the wild.


We also gather speech data that exhibits variability along other important dimensions, including the acoustic conditions during recording (varying amounts and types of background noise, recordings made via different mobile-phone handsets, whose microphones may vary in quality, etc.). The sheer number of combinations makes obtaining sufficient coverage challenging. (In some domains such as computer vision, coverage issues that are similar — variability across visual properties such as skin tone, lighting conditions, indoor vs. outdoor settings, and so on — have led to increased interest in synthetic data to augment human-generated data, including for fairness testing here at AWS.)
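The coverage problem above is multiplicative: each new dimension of variation multiplies the number of condition combinations to populate with data. A toy enumeration makes this concrete (the dimension values below are hypothetical, not an actual AWS collection plan):

```python
# Illustrative only: the number of recording-condition combinations grows
# multiplicatively with each dimension of variation (accents, background
# noise, handset microphones, ...). The values here are made up.
from itertools import product

accents = ["midwestern", "southern", "philadelphia", "non-native-a", "non-native-b"]
noise_conditions = ["quiet", "street", "cafe", "machinery"]
handsets = ["mic_a", "mic_b", "mic_c"]

# Every cell of this grid needs enough representative speech samples.
combinations = list(product(accents, noise_conditions, handsets))
n_cells = len(combinations)  # 5 x 4 x 3 = 60 cells
```

Even this tiny example yields 60 cells; realistic numbers of accents, acoustic conditions, and devices push the grid far larger, which is one motivation for synthetic-data augmentation.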

Once curated, such datasets can be used for training a transcription model that is not only good overall but also roughly equally performant across accents. And “performant” here means something more complex than in a simple prediction task; speech recognition typically uses a measure like the word error rate. On top of all the curation and annotations above, we also annotate some data with self-reported speaker demographics to make sure we’re fair not just by accent but by race and gender as well, as detailed in the service’s accompanying service card.
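For readers unfamiliar with the metric, word error rate (WER) is the word-level edit distance between the reference transcript and the model’s hypothesis, divided by the reference length; comparing fairness across accents then means comparing average WER per accent group. A minimal sketch, with purely illustrative accent labels and transcripts:

```python
# Sketch of word error rate (WER) computed per accent group. The standard
# WER definition is (substitutions + deletions + insertions) / reference
# words, computed via a word-level Levenshtein distance.

def word_error_rate(reference, hypothesis):
    """WER between a reference transcript and a model hypothesis."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic program over word sequences (Levenshtein distance).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def wer_by_group(samples):
    """samples: iterable of (accent_label, reference, hypothesis) triples."""
    per_group = {}
    for accent, ref, hyp in samples:
        per_group.setdefault(accent, []).append(word_error_rate(ref, hyp))
    return {a: sum(v) / len(v) for a, v in per_group.items()}
```

A fairness evaluation would then compare the per-accent averages returned by `wer_by_group` and look for groups whose WER is meaningfully worse than the rest.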

Our overarching point here is twofold. First, while as a society we tend to focus on dimensions such as race and gender when speaking about and assessing fairness, sometimes the data simply doesn’t permit such assessments, and it may not be a good idea to impute such dimensions to the data (for instance, by trying to infer race from speech signals). And second, in such cases the data may lead us toward alternative notions of fairness that might be more task-relevant, as with word error rates across dialects and accents.

The last mile of responsible AI

The specific properties of individuals that can or cannot (or should not) be gleaned from a particular dataset or modality are not the only things that may be out of the direct control of AI developers — especially in the era of cloud computing. As we have seen above, it’s challenging work to get coverage of everything you can anticipate. It’s even harder to anticipate everything.

The supply chain phrase “the last mile” refers to the fact that “upstream” providers of goods and products may have limited control over the “downstream” suppliers that directly connect to end-users or consumers. The emergence of cloud providers like AWS has created an AI service supply chain with its own last-mile challenges.


AWS AI/ML provides enterprise customers with API access to services like speech transcription because many want to integrate such services into their own workflows but don’t have the resources, expertise, or interest to build them from scratch. These enterprise customers sit between the general-purpose services of a cloud provider like AWS and the final end-users of the technology. For example, a health care system might want to provide cloud speech transcription services optimized for medical vocabulary to allow doctors to take verbal notes during their patient rounds.

As diligent as we are at AWS at battle-testing our services and underlying models for state-of-the-art performance, fairness, and other responsible-AI dimensions, it is obviously impossible to anticipate all possible downstream use cases and conditions. Continuing our health care example, perhaps there is a floor of a particular hospital that has new and specialized imaging equipment that emits background noise at a specific regularity and acoustic frequency. In the likely event that these exact conditions were not represented in either the training or test data, it’s possible that overall word error rates will not only be higher but will be unevenly so across accents and dialects.

Such last-mile effects can be as diverse as the enterprise customers themselves. With time and awareness of such conditions, we can use targeted training data and customer-side testing to improve downstream performance. But due to the proliferation of new use cases, it is an ever-evolving process, not one that is ever “finished”.

AI activism: from bugs to bias

It’s not only cloud customers whose last miles may present conditions that differ from those during training and testing. We live in a (healthy) era of what might be called AI activism, in which not only enterprises but individual citizens — including scientists, journalists, and members of nonprofit organizations — can obtain API or open-source access to ML services and models and perform their own evaluations on their own curated datasets. Such tests are often done to highlight weaknesses of the technology, including shortfalls in overall performance and fairness but also potential security and privacy vulnerabilities. As such, they are typically performed without the AI developer’s knowledge and may be first publicized in both research and mainstream media outlets. Indeed, we have been on the receiving end of such critical publicity in the past.


To date, the dynamic between AI developers and activists has been somewhat adversarial: activists design and conduct a private experimental evaluation of a deployed AI model and report their findings in open forums, and developers are left to evaluate the claims and make any needed improvements to their technology. It is a dynamic that is somewhat reminiscent of the historical tensions between more traditional software and security developers and the ethical and unethical hacker communities, in which external parties probe software, operating systems, and other platforms for vulnerabilities and either expose them for the public good or exploit them privately for profit.

Over time the software community has developed mechanisms to alter these dynamics to be more productive than adversarial, in particular in the form of bug bounty programs. These are formal events or competitions in which software developers invite the hacker community to deliberately find vulnerabilities in their technology and offer financial or other rewards for reporting and describing them to the developers.

[Figure] In a fair-ML (“bias bounty”) competition, different teams (x-axis) focus on different demographic features (y-axis) in the dataset, indicating that crowdsourced bias mitigation can help contend with the breadth of possible sources of bias. (The darker the blue, the greater the use of the feature.)

In the last couple of years, the ideas and motivations behind bug bounties have been adopted and adapted by the AI development community, in the form of “bias bounties”. Rather than finding bugs in traditional software, participants are invited to help identify demographic or other biases in trained ML models and systems. Early versions of this idea were informal hackathons of short duration focused on finding subsets of a dataset on which a model underperformed. But more recent proposals incubated at AWS and elsewhere include variants that are more formal and algorithmic in nature. The explosion of generative-AI models, and the surge of interest in and concern about them, have also led to more codified and institutionalized responsible-AI methodologies, such as the HELM framework for evaluating large language models.
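The core exercise in those early hackathon-style bounties — finding a data slice on which a model underperforms — can be sketched as a simple search over single-feature slices. This is an illustrative toy, not any particular bounty’s scoring rule; the feature names and data are made up:

```python
# Illustrative sketch of the "find an underperforming subset" task from
# early bias-bounty hackathons: scan single-feature slices of a dataset
# and report the slice with the worst error rate. Toy data throughout.
import numpy as np

def worst_slice(features, y_true, y_pred, min_size=2):
    """Return (feature, value, error_rate) for the worst-performing slice."""
    worst = (None, None, 0.0)
    for feature, column in features.items():
        for value in set(column):
            mask = np.array([v == value for v in column])
            if mask.sum() < min_size:
                continue  # skip slices too small to be statistically meaningful
            err = float(np.mean(y_true[mask] != y_pred[mask]))
            if err > worst[2]:
                worst = (feature, value, err)
    return worst
```

Real bounty formats go further — searching over intersections of features, weighting slices by size, or requiring the submitter to supply a model improvement — but the underlying objective is the same.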

We view these recent developments — AI developers opening up their technology and its evaluation to a wider community of stakeholders than just enterprise customers, and those stakeholders playing an active role in identifying necessary improvements in both technical and nontechnical ways — as healthy and organic, a natural outcome of the complex and evolving AI industry. Indeed, such collaborations are in keeping with our recent White House commitments to external testing and model red-teaming.

Responsible AI is neither a problem to be “solved” once and for all, nor a problem that can be isolated to a single location in the pipeline stretching from developers to their customers to end-users and society at large. Developers are certainly the first line where best practices must be established and implemented and responsible-AI principles defended. But the keys to the long-term success of the AI industry lie in community, communication, and cooperation among all those affected by it.
