Visual image search

You can let your users search for products using images, not just words. This querying technique is commonly referred to as reverse image search. It allows users to quickly find more information about a product.

This guide shows how to use a third-party API or platform to turn images into Algolia search queries. It uses the general-purpose Google Cloud Vision API, but you could use other providers, such as Amazon Rekognition, or services built for specific use cases, such as ViSenze for retail. Before implementing reverse image search, enrich your records with the same image recognition platform you plan to use for querying. Once you've tagged your records, those classifications help retrieve the correct records when a user searches with an image.

Before you begin

This guide assumes certain data and software requirements:

  • Algolia records enriched using image classification. This guide doesn't cover the UI for this experience: you need to let users upload an image to your search box and display the results yourself.
  • Access to an image recognition platform such as Google Cloud Vision API.

You can’t store images directly in Algolia. Instead, store the image on a content delivery network (CDN) or web server and add the image URL to a field in your records. When you retrieve a record from Algolia, use this URL to display the image in your app.
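
For example, a product record with an image might look like the following sketch. The attribute names (imageURL, labels) and values are illustrative; use whatever matches your own schema:

// Example product record referencing an image hosted on a CDN
// (attribute names and values are illustrative)
const productRecord = {
  objectID: "439784001",
  imageURL: "https://images-na.ssl-images-amazon.com/images/I/41uIVaJOLdL.jpg",
  labels: [
    { description: "Outerwear", score: 0.9513528 },
    { description: "Sleeve", score: 0.8724504 }
  ]
};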

Image classification

The Algolia engine searches for records using a textual query or set of query parameters. The first step in transforming an image file into a set of query parameters is to use an image recognition platform. Image recognition platforms such as Google Cloud Vision API take an image and return a set of classifications—or “labels”—for it.

When a user uploads an image to search with, run it through the same image recognition platform you used to enrich your records. You can reuse the function that classified the images in your records:

// Import the Google Cloud client libraries
const vision = require('@google-cloud/vision');

// Instantiate Google Vision client
const client = new vision.ImageAnnotatorClient();

// Retrieve labels
async function getImageLabels(imageURL, objectID, scoreLimit) {
  const [result] = await client.labelDetection(imageURL);
  const labels = result.labelAnnotations
    .filter((label) => label.score > scoreLimit)
    .map((label) => ({
      description: label.description,
      score: label.score
    }));
  return { imageURL, objectID, labels };
}

const classifiedImage = await getImageLabels("https://images-na.ssl-images-amazon.com/images/I/41uIVaJOLdL.jpg", "439784001", 0.5);

For this example image, the function returns the following object:

const classifiedImage = {
  imageURL: "https://images-na.ssl-images-amazon.com/images/I/41uIVaJOLdL.jpg",
  objectID: "439784001",
  labels: [
    {
      "description": "Outerwear",
      "score": 0.9513528,
    },
    {
      "description": "Azure",
      "score": 0.89286935,
    },
    {
      "description": "Sleeve",
      "score": 0.8724504,
    },
    {
      "description": "Bag",
      "score": 0.86443543,
    },
    {
      "description": "Grey",
      "score": 0.8404184,
    }
  ]
}

Turn classifications into an Algolia query

Once you’ve extracted classifications from an image, the next step is to turn them into an Algolia query. For this, you can send an empty query with classifications as optionalFilters. Optional filters boost results with matching values.

To use classifications as optionalFilters, you must first declare the classification attributes in attributesForFaceting.
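
For example, with the JavaScript API client, a minimal sketch for declaring the classification attribute as a facet could look like this (the credentials and index name are placeholders):

// Sketch: declare the classification attribute as a facet
// so it can be used in optionalFilters
// (credentials and index name are placeholders)
const algoliasearch = require('algoliasearch');

const searchClient = algoliasearch('YourApplicationID', 'YourWriteAPIKey');
const index = searchClient.initIndex('products');

index.setSettings({
  attributesForFaceting: ['labels.description']
});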

First, you need to take each classification and format it properly:

function reduceLabelsToFilters(labels) {
  const optionalFilters = labels.map(label => `labels.description:'${label.description}'`);
  return optionalFilters;
}

In this example, image classifications are stored in the labels.description nested attribute of each product record. Update the filter strings to match your own record format.
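
For the example labels above, the function returns an array of filter strings like this:

// reduceLabelsToFilters(classifiedImage.labels)
[
  "labels.description:'Outerwear'",
  "labels.description:'Azure'",
  "labels.description:'Sleeve'",
  "labels.description:'Bag'",
  "labels.description:'Grey'"
]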

Then, pass these filters as the optionalFilters parameter at query time. How you do this depends on your frontend implementation.

If you’re using InstantSearch, you can use the configure widget:

instantsearch.widgets.configure({
  optionalFilters: reduceLabelsToFilters(labels)
});

If you’re using an API client, you can pass it as a parameter in the search method:

index.search('', {
  optionalFilters: reduceLabelsToFilters(labels)
}).then(({ hits }) => {
  console.log(hits);
});
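
Putting it all together, a search triggered by an uploaded image could look like the following sketch. It reuses the getImageLabels and reduceLabelsToFilters functions defined above; the index object and the uploaded image URL are placeholders:

// Sketch: classify an uploaded image, then boost matching products
async function searchByImage(uploadedImageURL) {
  // Get labels for the uploaded image (no objectID, since it isn't a record)
  const { labels } = await getImageLabels(uploadedImageURL, null, 0.5);

  // Send an empty query and boost records whose labels match
  const { hits } = await index.search('', {
    optionalFilters: reduceLabelsToFilters(labels)
  });

  return hits;
}

searchByImage('https://example.com/uploaded-image.jpg')
  .then((hits) => console.log(hits));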