April 2024
Welcome to our first LinkedIn newsletter.
You can find more information about our research areas, career opportunities, and academic collaborations on our website, along with FAQs.
Call for proposals
- Amazon Research Awards issues call for proposals: We're looking for proposals in AI for Information Security and Sustainability. The deadline for submissions is 11:59pm PT on May 7. Proposals related to theory, practice, and novel techniques are welcome, and they will be reviewed for the quality of their scientific content, creativity, and their potential for impact at scale. Grant recipients receive unrestricted funds and AWS promotional credits.
Deep dives
- Diffuse-to-Choose: Amazon's new "virtual-try-all" product visualization model is the first to work across a wide range of products and settings. Key to its success is a secondary U-Net encoder that extracts fine-grained product details from a rough copy-paste collage.
- A quick guide to Amazon's 20+ papers at ICASSP 2024: At this year's conference, Amazon researchers will present on topics including speech enhancement, spoken language understanding, dialogue, paralinguistics, and pitch estimation.
- Using Amazon web traffic to track the eclipse: To trace the path of the 2024 solar eclipse, Amazon researchers created a visualization that projects fluctuations in Amazon website traffic onto the U.S. map. The times and locations at which the total eclipse was visible correlate strongly with decreased website activity.
- The science behind Echo Frames: From the outside, Echo Frames look like a pair of regular eyeglasses. Responding to customer feedback, Amazon engineers and product designers built a new generation with enhanced audio playback—including custom-built speech-processing technology that dramatically improves word recognition—and a significant boost in battery life.
- Amazon Scholar solves century-old problem with automated reasoning: Using a SAT solver, Amazon Scholar and Carnegie Mellon University professor Marijn Heule has solved a century-old geometry problem. Along the way, he and his AWS colleagues developed a new proof-checking mechanism that's 10 to 20 times as efficient as its predecessor.
- Preskill wins prize for work on learning and quantum computing: Congratulations to Caltech professor and Amazon Scholar John Preskill for winning the 2024 Bell Prize for fundamental research on quantum mechanics. Preskill explains how he and his colleagues use both classical and quantum computing techniques to learn about quantum systems.
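The eclipse visualization above rests on a simple idea: compare each region's web traffic during the eclipse window to its own baseline and flag significant dips. A minimal sketch of that idea, using invented region names and traffic counts purely for illustration (this is not Amazon's actual pipeline):

```python
# Hypothetical sketch: flag regions whose web traffic dips sharply during a
# given time window, as one might to trace the eclipse path. All data below
# is made up for illustration.
from statistics import mean, stdev

def traffic_dips(series_by_region, window, z_threshold=-2.0):
    """Return regions whose mean traffic inside `window` (a slice of time
    indices) falls more than `z_threshold` standard deviations below that
    region's own baseline."""
    flagged = []
    for region, counts in series_by_region.items():
        # Baseline = all samples outside the eclipse window.
        baseline = counts[:window.start] + counts[window.stop:]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; z-score undefined
        z = (mean(counts[window]) - mu) / sigma
        if z <= z_threshold:
            flagged.append(region)
    return flagged

# Toy data: "Dallas" sits in the path of totality; "Seattle" does not.
data = {
    "Dallas":  [100, 98, 101, 99, 60, 58, 100, 102],
    "Seattle": [80, 82, 79, 81, 80, 83, 81, 80],
}
print(traffic_dips(data, window=slice(4, 6)))  # ['Dallas']
```

Projecting the flagged regions onto a map over time would reproduce the moving shadow the researchers observed.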
Challenges
- Multi-Task Online Shopping Challenge for LLMs: Introducing the Amazon KDD Cup 2024 challenge—an opportunity to harness LLMs for an enhanced online shopping journey. With 57 tasks and over 20,000 questions derived from actual Amazon data, participants will tackle aspects like shopping concept understanding, user behavior alignment, multilingual abilities, and more. Compete for a share of the $41,500 prize pool and the chance to showcase your work at the KDD Cup Workshop 2024.
Upcoming conferences
- ICASSP: April 14 - 19
- ICLR: May 7 - 11
- The Web Conference: May 13 - 17
- LREC-COLING: May 20 - 25
- NAACL: June 16 - 21
- CVPR: June 17 - 21
Awards and recognitions
- Joan Feigenbaum, an Amazon Scholar and the Grace Murray Hopper Professor of Computer Science at Yale University, was elected a Fellow of the International Association for Cryptologic Research (IACR).
- Cristiana Lara, an Amazon senior research scientist, received the inaugural INFORMS Early Career Practitioner Award, which recognized her "exceptional contributions to business/industry, government and consulting, and service to the operations research and analytics profession".
- Kai Yeung, an Amazon Healthcare Services senior research scientist, received an Award of Excellence from the Journal of Managed Care & Specialty Pharmacy (JMCP), which recognizes the author(s) of the best article to appear in the journal in the prior calendar year.
New publications
- A preference-driven paradigm for enhanced translation with large language models
- Accept the modality gap: An exploration in the hyperbolic space
- Adaptive slot attention: Object discovery with dynamic slot number
- Amazon MemoryDB: A fast and durable memory-first cloud database
- Automated multidimensional data layouts in Amazon Redshift
- Beyond boundaries: A human-like approach for question answering over structured and unstructured information sources
- Bring your own KG: Self-supervised program synthesis for zero-shot KGQA
- Can contrastive learning refine embeddings?
- Can small language models help large language models reason better?: LM-guided chain-of-thought
- Cedar: A new language for expressive, fast, safe, and analyzable authorization
- CoCoMIC: Code completion by jointly modeling in-file and cross-file context
- CoMM: Collaborative multi-agent, multi-reasoning-path prompting for complex problem solving
- ConEC: Earnings call dataset with real-world contexts for benchmarking contextual speech recognition
- CPR: Retrieval augmented generation for copyright protection
- DEED: Dynamic early exit on decoder for accelerating encoder-decoder transformer models
- Differentially private conditional independence testing
- Diffusion models for multi-modal generative modeling
- Don’t just translate, summarize too: Cross-lingual product title generation in e-commerce
- EIVEN: Efficient implicit attribute value extraction using multimodal LLM
- Enhancing contextual understanding in large language models through contrastive decoding
- Enhancing low-resource LLMs classification with PEFT and synthetic data
- Evaluating human-AI partnership for LLM-based code migration
- FLAP: Flow-adhering planning with constrained decoding in LLMs
- GDA: Generalized diffusion for robust test-time adaptation
- GROUNDHOG: Grounding large language models to holistic segmentation
- How lexical is bilingual lexicon induction?
- Hyperbolic learning with synthetic captions for open-world detection
- Intelligent scaling in Amazon Redshift
- ITERALIGN: Iterative constitutional alignment of large language models
- Less is more for improving automatic evaluation of factual consistency
- Leveraging customer feedback for multi-modal insight extraction
- Leveraging large language models for multimodal search
- Leveraging uncertainty estimates to improve classifier performance
- Low-cost generation and evaluation of dictionary example sentences
- MAGID: An automated pipeline for generating synthetic multi-modal datasets
- MICo: Preventative detoxification of large language models through inhibition control
- Multi-review fusion-in-context
- Multi-stage multi-modal pre-training for automatic speech recognition
- Multiple-question multiple-answer text-VQA
- No more ambiguity in 360° room layout via bi-layout estimation
- On the scalability of diffusion-based text-to-image generation
- Proximal causal inference for synthetic control with surrogates
- RecMind: Large language model powered agent for recommendation
- REXEL: An end-to-end model for document-level relation extraction and entity linking
- RS-DPO: A hybrid rejection sampling and direct preference optimization method for alignment of large language models
- Semi-supervised dialogue abstractive summarization via high-quality pseudolabel selection
- Session-aware product filter ranking in ecommerce search
- Sharpness-aware optimization for real-world adversarial attacks for diverse compute platforms with enhanced transferability
- TAIL: Task-specific Adapters for Imitation Learning with large pretrained models
- The steerability of large language models toward data-driven personas
- Toward informal language processing: Knowledge of slang in large language models
- Towards equitable natural language understanding systems for dialectal cohorts: Debiasing training data
- Towards improved multi-source attribution for long-form answer generation
- Towards translating objective product attributes into customer
- Towards unified multi-modal personalization: Large vision-language models for generative recommendation and beyond
- Variance-reduced zeroth-order methods for fine-tuning language models
- ViewFusion: Towards multi-view consistency via interpolated denoising
For more resources, see our code and datasets section.
© 1996-2024 Amazon.com, Inc. or its affiliates | Privacy | Conditions of Use