RecSys: Rajeev Rastogi on three recommendation system challenges

In a keynote address, the Amazon International vice president will discuss recommendations in directed graphs, training models whose target labels change, and using prediction uncertainty to improve model performance.

Rajeev Rastogi, vice president of applied science in Amazon’s International Emerging Stores division.

In a keynote address at this year’s ACM Conference on Recommender Systems (RecSys), which starts next week, Rajeev Rastogi, vice president of applied science in Amazon’s International Emerging Stores division, will discuss three problems his organization has faced in its work on recommendation algorithms: recommendations in directed graphs; training machine learning models when target labels change over time; and leveraging estimates of prediction uncertainty to improve models’ accuracy.

“The connections are that these are general techniques that cut across many different recommendation problems,” Rastogi explains. “And these are things that we actually use in practice. They make a difference in the real world.”

Directed graphs

The first problem involves directed graphs, or graphs whose edges describe relationships that run in only one direction.

“Directed graphs have applications in many different domains — from citation networks, where an edge U-V indicates that paper U cites paper V, to social networks, where an edge U-V shows that user U follows user V, to e-commerce, where an edge U-V indicates that customers bought product U before they bought product V,” Rastogi explains.

Although the problem of exploring directed graphs is general, the researchers in Rastogi’s organization focused on this last case: related-products recommendation, where the goal is to predict what other products might interest a customer who has just made a purchase.

“The interesting part here is that the related-products relationship is actually asymmetric,” Rastogi explains. “If you have, say, two nodes, a phone and a phone case, given a phone, you want to recommend a phone case. But if the customer has bought a phone case, you don't want to recommend a phone, because they most likely already have one.”

Like many graph-based applications, the Amazon team’s solution to the problem of asymmetric related-product recommendation involves graph neural networks (GNNs), in which each node of a graph is embedded in a representational space where geometric relationships between nodes carry information about their relationships in the network. The embedding process is iterative, with each iteration factoring in information about nodes at greater removes, until each node’s embedding carries information about its neighborhood.
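
The iterative aggregation at the heart of a GNN can be sketched in a few lines. The version below is deliberately simplified — a plain mean aggregator with no learned parameters — whereas real GNN layers apply trained transformations at each step:

```python
import numpy as np

def aggregate(node_feats, neighbors, num_layers=2):
    """Mean-aggregation sketch: each layer mixes a node's embedding with the
    average of its neighbors', so after k layers every embedding reflects its
    k-hop neighborhood. A real GNN would apply learned weights at each layer."""
    h = {v: np.asarray(f, dtype=float) for v, f in node_feats.items()}
    for _ in range(num_layers):
        h_next = {}
        for v, own in h.items():
            neigh = neighbors.get(v, [])
            neigh_mean = np.mean([h[u] for u in neigh], axis=0) if neigh else np.zeros_like(own)
            h_next[v] = 0.5 * own + 0.5 * neigh_mean   # combine self and neighborhood information
        h = h_next
    return h

# Toy graph: a phone (P) linked to a case (C) and a screen guard (S).
feats = {"P": [1.0, 0.0], "C": [0.0, 1.0], "S": [0.5, 0.5]}
adj = {"P": ["C", "S"], "C": ["P"], "S": ["P"]}
print(aggregate(feats, adj))
```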

“A single embedding space does not have the expressive power to model the asymmetric relationships between nodes in directed graphs,” Rastogi explains. “Something that we borrowed from past work is to represent each node with dual embeddings, and one of our novel contributions is really to learn these dual embeddings in a GNN setting that leverages the entire graph structure.”
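
A rough illustration of why dual embeddings capture asymmetry: each product gets one vector for its role as a recommendation source and another for its role as a recommendation target. The embeddings below are random stand-ins; in BLADE they are learned by the GNN from the full graph structure.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical dual embeddings: a "source" vector (the item just bought)
# and a "target" vector (the candidate recommendation) for each product.
emb_s = {"phone": rng.normal(size=dim), "case": rng.normal(size=dim)}
emb_t = {"phone": rng.normal(size=dim), "case": rng.normal(size=dim)}

def score(src, dst):
    """Relevance of recommending dst to a customer who just bought src."""
    return float(emb_s[src] @ emb_t[dst])

# Asymmetry: score("phone", "case") need not equal score("case", "phone").
print(score("phone", "case"), score("case", "phone"))
```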

At center is a graph indicating the relationships between cell phones and related products such as a case, a power adaptor, and a screen guard. At left is a schematic illustrating the embedding (vector representation) of node A in a traditional graph neural network (GNN); at right is the dual embedding of A, as both a recommendation target (A-t) and a recommendation source (A-s), in BLADE. From "BLADE: Biased neighborhood sampling based graph neural network for directed graphs".

“Then we had additional techniques, like adaptive sampling,” Rastogi adds. “These vanilla GNNs sample fixed neighborhood sizes for every node. But we found that low-degree nodes” — that is, nodes with few connections to other nodes — “have suboptimal performance when you have fixed neighborhood sizes for every node, because low-degree nodes have sparse connectivity structures. And so less information gets transmitted when you're aggregating information from neighbors and so on.

“So we actually choose to sample larger neighborhoods for low-degree nodes and smaller neighborhoods for high-degree nodes. It's a little bit counterintuitive, but it gives us much better results.”
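
A sketch of that degree-aware sampling idea follows; the sizing rule here is illustrative, not the one used in BLADE:

```python
import random

def sample_neighbors(node, adj, budget=10, low_degree_boost=2.0, degree_cutoff=5):
    """Degree-aware sampling sketch: low-degree nodes get a larger sampled
    neighborhood so more of their sparse connectivity is preserved, while
    high-degree nodes are subsampled down to the budget."""
    neigh = adj.get(node, [])
    if len(neigh) <= degree_cutoff:
        size = int(budget * low_degree_boost)   # sample more for sparse nodes
    else:
        size = budget                           # cap dense nodes at the budget
    if len(neigh) <= size:
        return list(neigh)                      # keep everything if already small
    return random.sample(neigh, size)
```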

Delayed feedback

A typical machine learning (ML) model is trained on labeled data, and the model must learn to predict the labels — its training targets — from the data. The second problem Rastogi addresses in his talk is how best to train a model when you know that some of the target labels are going to change in the near future.

“This is, again, a very common problem across many different domains,” Rastogi says. “In recommendations, there can be a time lag of a few days between customers viewing a recommendation and purchasing the product.

“There's a trade-off here: If you use all the training data in real time, some of those more recent training examples may have target labels that are incorrect, because they are going to change over time. On the other hand, if you ignore all the training examples you got in the last five days, then you're missing out on recent data, and your model isn't going to be as good — especially in environments where models need to be retrained frequently.

An illustration of true negatives, delayed positives and true positives, from "Modelling delayed redemption with importance sampling and pre-redemption engagement".

“Here, what we've done is come up with an importance-sampling strategy that essentially assigns every training example an importance weight. Let P(X,Y) be the true data distribution, and Q(X,Y) be the data distribution that you observe in the training set. Our importance-sampling strategy uses the ratio P(X,Y) divided by Q(X,Y) as the importance weight.
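
In training terms, the weight simply scales each example's contribution to the loss. A generic sketch is below; the hard part — estimating P(X,Y)/Q(X,Y) per example — is the team's contribution and is represented here only by hypothetical values:

```python
import numpy as np

def importance_weighted_loss(y_true, y_pred, p_over_q):
    """Weighted binary cross-entropy: each example's loss is scaled by the
    importance weight w(x, y) = P(x, y) / Q(x, y), where P is the true
    (eventual) label distribution and Q is the observed training distribution."""
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1 - eps)
    ce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return float(np.mean(p_over_q * ce))

# Toy usage: recent negatives that may still convert get weights below 1.
y_true = np.array([0.0, 1.0, 0.0])
y_pred = np.array([0.3, 0.7, 0.6])
weights = np.array([0.6, 1.0, 0.9])   # hypothetical P/Q estimates
print(importance_weighted_loss(y_true, y_pred, weights))
```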

“Our key innovation centers on techniques to compute these importance weights in new scenarios. One is where we take into account preconversion signals. People tend to do something before they convert; they may add to cart, or they may click on the product to research it before completing the purchase. So we take into account those signals, and that helps us overcome data sparsity.

“But then it makes the computation of importance weights a little bit more complex. If it's very likely that the target label will actually change from 0 — a negative example — to 1, then the importance weight should be much lower than if the likelihood of the example changing were very low. Essentially, what you're trying to do is learn from the data the likelihood that the target label is going to change in the future and capture that in the importance weights.”
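
A toy illustration of that intuition, with an entirely hypothetical pre-conversion model and weighting rule (the paper's actual estimator is more sophisticated):

```python
def estimate_flip_probability(added_to_cart, clicked_detail_page):
    """Hypothetical pre-conversion model: engagement signals such as
    add-to-cart or detail-page clicks raise the estimated chance that a
    currently negative label will flip to positive."""
    p = 0.05
    if clicked_detail_page:
        p += 0.15
    if added_to_cart:
        p += 0.40
    return min(p, 0.95)

def negative_example_weight(p_flip):
    """Illustrative weighting rule: an observed negative that is likely to
    convert later (high p_flip) should count less as a negative, so its
    importance weight is lower."""
    return 1.0 - p_flip

# A recent negative with an add-to-cart gets a much smaller weight
# than one with no engagement at all.
print(negative_example_weight(estimate_flip_probability(True, True)))    # 0.40
print(negative_example_weight(estimate_flip_probability(False, False)))  # 0.95
```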

Prediction uncertainty

Finally, Rastogi says, the third technique he’ll discuss in his talk is the use of uncertainty estimates to improve the accuracy of model predictions.

“ML models typically will return point estimates,” Rastogi explains. “But usually you have a probability distribution. In some cases, you could know there's a 0.5 chance this customer is going to buy the product. But in other cases, it could be anywhere between 0.2 and 0.8. What we found is that if you're able to generate uncertainty estimates for model predictions, you can exploit them to improve model accuracy.

“We trained a binary classifier to predict ad click probability for an ads recommendation application. For every sample in the holdout set, we generated both the model score, which is the probability prediction, and also an uncertainty estimate, which is how certain I am about the predicted probability.

“If I looked at a lot of examples in the holdout set with a model score of 0.5, you would expect that about 50% of them resulted in clicks: that’s the empirical positivity rate. If it were 0.8, then the empirical positivity rate should be around 80%.

“But what we found is that as the variance of the model score increased, the empirical positivity rates went down. If I have a score of 0.8, I could say, well, it's between 0.79 and 0.81, which corresponds to a low variance. Or I could say, it's between 0.65 and 0.95, which indicates a high variance. We found that for the same model score, as the confidence intervals became larger, the empirical positivity rate started dropping.
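
One way to see that effect on a holdout set is to bucket examples by both model score and uncertainty and compute the click rate in each bucket. The sketch below makes that concrete; the binning choices are illustrative:

```python
import numpy as np

def positivity_by_bucket(scores, uncertainties, clicks, n_bins=5):
    """Bucket holdout examples by model score and by uncertainty, then compute
    the empirical positivity rate (fraction clicked) in each (score, uncertainty)
    bucket. The pattern described above shows up as the rate falling with
    uncertainty even within a fixed score bucket."""
    scores, uncertainties, clicks = map(np.asarray, (scores, uncertainties, clicks))
    score_edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
    unc_edges = np.quantile(uncertainties, np.linspace(0, 1, n_bins + 1))
    s_idx = np.clip(np.digitize(scores, score_edges[1:-1]), 0, n_bins - 1)
    u_idx = np.clip(np.digitize(uncertainties, unc_edges[1:-1]), 0, n_bins - 1)
    rates = {}
    for si in range(n_bins):
        for ui in range(n_bins):
            mask = (s_idx == si) & (u_idx == ui)
            if mask.any():
                rates[(si, ui)] = float(clicks[mask].mean())
    return rates
```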

“That has implications for selecting the decision boundary for binary classifiers. Traditionally, binary classifiers use a single threshold on model scores. But now, since the empirical positivity rate depends on both the model score and the uncertainty estimate, selecting a single threshold value turns out to be suboptimal. If we instead select multiple thresholds, one per uncertainty level, we found that we can get much higher recall for a given precision.”
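
A minimal sketch of choosing one threshold per uncertainty bucket on held-out data; the target precision and the bucketing are placeholders:

```python
import numpy as np

def per_bucket_thresholds(scores, unc_bucket_ids, labels, target_precision=0.8):
    """Pick, for each uncertainty bucket, the lowest score threshold whose
    precision on held-out data meets the target. Recall is non-increasing in
    the threshold, so the lowest qualifying threshold maximizes recall."""
    scores, unc_bucket_ids, labels = map(np.asarray, (scores, unc_bucket_ids, labels))
    thresholds = {}
    for b in np.unique(unc_bucket_ids):
        mask = unc_bucket_ids == b
        s, y = scores[mask], labels[mask]
        for t in np.unique(s):                  # candidate thresholds, ascending
            pred = s >= t
            if pred.sum() and y[pred].mean() >= target_precision:
                thresholds[int(b)] = float(t)
                break
    return thresholds
```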

Members of Rastogi’s organization are currently writing a paper on their prediction uncertainty work — but the method is already in production.

“There are a lot of things that people publish papers about, and they're forgotten and never really used,” Rastogi says. “Coming from Amazon, we do science that actually makes a difference to customers and solves customer pain points. These are three examples of doing customer-obsessed science that actually makes a difference in the real world.”
