American Online Phonebook


Once VMware is free from Dell, who might fancy buying it?

April 18, 2021

TechCrunch has spilled much digital ink tracking the fate of VMware since it was brought into Dell’s orbit by the latter company’s epic purchase of EMC in 2016 for $58 billion. That transaction saddled the well-known Texas tech company with heavy debts. Because the deal left VMware a public company, albeit one controlled by Dell, how it might be used to pay down some of its parent company’s arrears was a constant question.

Dell made its move earlier this week, agreeing to spin out VMware in exchange for a huge one-time dividend, a five-year commercial partnership agreement, lots of stock for existing Dell shareholders and Michael Dell retaining his role as chairman of its board.

So, where does the deal leave VMware in terms of independence, and in terms of Dell influence? Under the deal, Dell will no longer hold formal control over VMware, though its shareholders will retain a large stake in the virtualization giant. And with Michael Dell staying on VMware’s board, Dell will retain influence.

Here’s how VMware described it to shareholders in a presentation this week. The graphic shows that under the new agreement, VMware is no longer a subsidiary of Dell and will now be an independent company.

Chart showing before and after structure of Dell spinning out VMware. In the after scenario, VMware is an independent company.

Image Credits: VMware

But with VMware tipped to become independent once again, it could become something of a takeover target. When Dell controlled VMware through majority ownership, a hostile takeover felt out of the question. Now, VMware is a more plausible target for the right company with the right offer, provided that the Dell spinout works as planned.

Buying VMware would be an expensive effort, however. It’s worth around $67 billion today. Presuming a large premium would be needed to take this particular technology chess piece off the competitive board, it could cost $100 billion or more to snag VMware from the public markets.

So VMware will soon be more free to pursue a transaction that might be favorable to its shareholders — which will still include every Dell shareholder, because they are receiving stock in VMware as part of its spinout — without worrying about its parent company simply saying no.


Enterprise – TechCrunch


How Google Might Rank Image Search Results

August 24, 2020

Changes to How Google Might Rank Image Search Results

We are seeing more machine learning in how Google ranks pages and images in search results.

That shift may leave what we know as traditional, or old-school, ranking signals behind.

It is still worth looking at those older ranking signals because they may continue to play a role in how images are ranked.

Since I am writing about this new patent on ranking image search results, I decided to include what I used to look at when optimizing images.

Images can rank in image search, and they can help the pages they appear on rank higher, making a page more relevant for the query terms it targets.

Here are signals that I would include when trying to rank image search results:

  • Use meaningful images that reflect what the page is about, and make them relevant to the query
  • Use an image file name that is relevant to what the image is about (I separate words in image file names with hyphens, too)
  • Use alt text in the alt attribute that describes the image well, uses text relevant to the query, and avoids keyword stuffing
  • Use a caption that is helpful and relevant to what the page is about and the query term it targets
  • Use a title and associated on-page text that are relevant to what the page is about and what the image shows
  • Use a decent-sized image at a decent resolution that won’t be mistaken for a thumbnail

Those signals help rank image search results and help that page rank as well.
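As a rough illustration of that checklist (this is my own sketch, not anything from the patent), here is a minimal Python audit of the on-page signals for a single image against a target query. The image record, field names, and thresholds are all assumptions made up for the example.

```python
# A minimal, hypothetical audit of old-school on-page image signals.
# The dict keys and thresholds are assumptions for illustration only.

def audit_image_signals(image: dict, query: str) -> dict:
    """Return a pass/fail report for the checklist items above."""
    query_words = set(query.lower().split())

    def mentions_query(text: str) -> bool:
        return bool(query_words & set(text.lower().replace("-", " ").split()))

    return {
        "relevant_file_name": mentions_query(image.get("file_name", "")),
        "hyphenated_file_name": "-" in image.get("file_name", ""),
        "descriptive_alt_text": mentions_query(image.get("alt_text", ""))
                                and len(image.get("alt_text", "").split()) <= 16,  # avoid stuffing
        "relevant_caption": mentions_query(image.get("caption", "")),
        "relevant_title": mentions_query(image.get("page_title", "")),
        "not_a_thumbnail": image.get("width", 0) >= 300 and image.get("height", 0) >= 300,
    }


if __name__ == "__main__":
    report = audit_image_signals(
        {
            "file_name": "blue-heron-fishing.jpg",
            "alt_text": "A great blue heron fishing in a tide pool",
            "caption": "Blue heron fishing at low tide",
            "page_title": "Watching a Blue Heron Fishing",
            "width": 1200,
            "height": 800,
        },
        query="blue heron fishing",
    )
    print(report)
```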

A new patent application describes using machine learning to rank image search results. It doesn’t list the features that help images rank, such as alt text, captions, or file names, but it does refer to “features” that likely include those as well as other signals.

These machine learning patents will likely become more common from Google.

Machine Learning Models to Rank Image Search Results

The patent tells us that this machine learning model can be one of many different types of models.

Those models can be:

  • A deep machine learning model (e.g., a neural network that includes many layers of non-linear operations)
  • Another type of model (e.g., a generalized linear model, a random forest, a decision tree model, and so on)

The patent says this machine learning model is “used to accurately generate relevance scores for image-landing page pairs in the index database.”
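To make the “many different types of models” point concrete, here is a small sketch using off-the-shelf scikit-learn regressors as stand-ins for the model families the patent names. The feature matrix and relevance scores are fabricated, and Google’s internal models are certainly not these; this only shows that any fit/predict regressor could fill the scorer’s role.

```python
# Hypothetical stand-ins for the model families named in the patent,
# using scikit-learn regressors; the training data here is fabricated.
import numpy as np
from sklearn.linear_model import LinearRegression          # "generalized linear model"
from sklearn.ensemble import RandomForestRegressor         # "random forest"
from sklearn.tree import DecisionTreeRegressor             # "decision tree model"
from sklearn.neural_network import MLPRegressor            # "deep" model: layers of non-linear ops

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))          # 12 made-up query/image/landing-page features
y = rng.uniform(size=500)               # made-up ground-truth relevance scores in [0, 1]

candidates = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=50, random_state=0),
    "decision_tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "neural_net": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
}

for name, model in candidates.items():
    model.fit(X, y)                                   # any of these can play the scorer's role
    print(name, model.predict(X[:3]).round(3))        # relevance scores for three pairs
```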

The patent tells us about an image search system, which includes a training engine.

The training engine trains the machine learning model using training data from image-landing page pairs already associated with ground truth or known values of the relevance score.

The patent shows an example of the machine learning model generating a relevance score for an image search result from image, landing page, and query features. In that example, a searcher submits an image search query, and the system generates image query features based on the user-submitted query.

Rank Image Search Results includes Image Query Features

That system learns landing page features for the landing page identified by the particular image search result as well as image features for the image identified by that image search result.

The image search system then provides the query features, the landing page features, and the image features as input to the machine learning model.
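In other words, the three feature groups become a single input to one model. Here is a minimal sketch of that idea; the feature names, dimensions, and the squashing function are assumptions for illustration, and a trained model’s predict() call would replace the stand-in scorer.

```python
# Hypothetical sketch: query, image, and landing-page features are concatenated
# into one input vector and scored by a single model.
import numpy as np

def relevance_score(query_feats, image_feats, page_feats, weights):
    """Stand-in scorer: a trained model's predict() would go here."""
    x = np.concatenate([query_feats, image_feats, page_feats])
    return float(1 / (1 + np.exp(-weights @ x)))       # squash to (0, 1)

# Made-up feature vectors for one image search result.
query_feats = np.array([1.0, 0.0, 0.3])     # e.g., language id, query length, time-of-day bucket
image_feats = np.array([0.7, 0.2, 0.9])     # e.g., embedding summary values
page_feats  = np.array([0.4, 0.8])          # e.g., freshness, context-of-image score

weights = np.ones(8) * 0.5                  # a trained model would supply these
print(relevance_score(query_feats, image_feats, page_feats, weights))
```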

Google may rank image search results based on various factors

Conventionally, those factors come from separate signals:

  1. Features of the image
  2. Features of the landing page
  3. A combination of those separate signals following a fixed weighting scheme that is the same for each received search query

This patent describes how it would rank image search results in this manner (a rough end-to-end sketch in code follows the list):

  1. Obtain many candidate image search results for the image search query
  2. Note that each candidate image search result identifies a respective image and a respective landing page for that image
  3. For each of the candidate image search results, process:
    • Features of the image search query
    • Features of the respective image identified by the candidate image search result
    • Features of the respective landing page identified by the candidate image search result
    using an image search result ranking machine learning model trained to generate a relevance score measuring the relevance of the candidate image search result to the image search query
  4. Rank the candidate image search results based on the relevance scores generated by the image search result ranking machine learning model
  5. Generate an image search results presentation that displays the candidate image search results ordered according to the ranking
  6. Provide the image search results for presentation by a user device
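Here is that rough end-to-end sketch. Every helper in it, including the feature extractors and the trained model, is a made-up stand-in; only the overall shape of the loop follows the patent’s steps.

```python
# Hypothetical end-to-end loop for the patented flow. The feature extractors and
# the trained model are stand-ins; only the overall shape follows the patent's steps.
from dataclasses import dataclass

@dataclass
class Candidate:
    image_url: str
    landing_page_url: str

def query_features(query):            # step 3, query features (assumed extractor)
    return [float(len(query.split()))]

def image_features(candidate):        # step 3, image features (assumed extractor)
    return [float(len(candidate.image_url))]

def page_features(candidate):         # step 3, landing page features (assumed extractor)
    return [float(len(candidate.landing_page_url))]

def model_relevance(features):        # the trained ranking model (deterministic stand-in)
    return (0.3 * features[0] + 0.07 * features[1] + 0.05 * features[2]) % 1.0

def rank_image_results(query, candidates):
    scored = []
    for c in candidates:                                           # steps 1-2: candidates in hand
        feats = query_features(query) + image_features(c) + page_features(c)
        scored.append((model_relevance(feats), c))                 # step 3: one score per pair
    scored.sort(key=lambda pair: pair[0], reverse=True)            # step 4: rank by relevance
    return [c for _, c in scored]                                  # steps 5-6: ordered presentation

results = rank_image_results("blue heron", [
    Candidate("a.example/heron.jpg", "a.example/birds"),
    Candidate("b.example/egret.jpg", "b.example/wading-birds"),
])
print([c.image_url for c in results])
```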

Advantages to Using a Machine Learning Model to Rank Image Search Results

If Google can rank image search results based on relevance scores generated by a machine learning model, it can improve the relevance of the results returned in response to the image search query.

This differs from conventional methods of ranking resources because the machine learning model receives a single input that includes features of the image search query, the landing page, and the image identified by a given image search result to predict the relevance of the image search result to the received query.

This process allows the machine learning model to be more dynamic and give more weight to landing page features or image features in a query-specific manner, improving the quality of the image search results that are returned to the user.

By using a machine learning model, the image search engine does not apply the same fixed weighting scheme for landing page features and image features for each received query. Instead, it combines the landing page and image features in a query-dependent manner.
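The contrast the patent draws might look something like this. Both toy scorers below are invented for illustration; neither comes from the patent, but they show the difference between one fixed weighting for every query and a combination that depends on the query itself.

```python
# Toy contrast: fixed weights vs. a query-dependent combination.
# Both functions are invented for illustration; neither comes from the patent.

def fixed_weight_score(image_signal, page_signal):
    # Conventional approach: the same 60/40 split for every query.
    return 0.6 * image_signal + 0.4 * page_signal

def query_dependent_score(query, image_signal, page_signal):
    # A learned model can, in effect, shift weight per query. Here we fake that
    # by leaning on image signals for short queries and page signals for long ones.
    image_weight = 0.8 if len(query.split()) <= 2 else 0.3
    return image_weight * image_signal + (1 - image_weight) * page_signal

print(fixed_weight_score(0.9, 0.2))
print(query_dependent_score("heron", 0.9, 0.2))
print(query_dependent_score("how to photograph a heron at dawn", 0.9, 0.2))
```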

The patent also tells us that a trained machine learning model can easily and optimally adjust weights assigned to various features based on changes to the initial signal distribution or additional features.

In a conventional image search, we are told that significant engineering effort is required to adjust the weights of a traditional manually tuned model based on changes to the initial signal distribution.

But under this patented process, adjusting the weights of a trained machine learning model based on changes to the signal distribution is significantly easier, thus improving the ease of maintenance of the image search engine.

Also, if a new feature is added, manually tuned functions must adjust for the new feature independently against an objective (i.e., a loss function), while holding existing feature functions constant.

A trained machine learning model, on the other hand, can automatically adjust feature weights if a new feature is added.

It can incorporate the new feature and rebalance all of its existing weights appropriately to optimize for the final objective.

Thus, the accuracy, efficiency, and maintenance of the image search engine can be improved.

The Rank Image Search Results patent application can be found at:

Ranking Image Search Results Using Machine Learning Models
US Patent Application Number: 16263398
Filed: January 31, 2019
Publication Number: US20200201915
Publication Date: June 25, 2020
Applicant: Google LLC
Inventors: Manas Ashok Pathak, Sundeep Tirumalareddy, Wenyuan Yin, Suddha Kalyan Basu, Shubhang Verma, Sushrut Karanjkar, and Thomas Richard Strohmann

Abstract

Methods, systems, and apparatus including computer programs encoded on a computer storage medium, for ranking image search results using machine learning models. In one aspect, a method includes receiving an image search query from a user device; obtaining a plurality of candidate image search results; for each of the candidate image search results: processing (i) features of the image search query and (ii) features of the respective image identified by the candidate image search result using an image search result ranking machine learning model to generate a relevance score that measures a relevance of the candidate image search result to the image search query; ranking the candidate image search results based on the relevance scores; generating an image search results presentation; and providing the image search results for presentation by a user device.

The Indexing Engine

The search engine may include an indexing engine and a ranking engine.

The indexing engine indexes image-landing page pairs, and adds the indexed image-landing page pairs to an index database.

That is, the index database includes data identifying images and, for each image, a corresponding landing page.

The index database also associates the image-landing page pairs with:

  • Features of the image search query
  • Features of the images, i.e., features that characterize the images
  • Features of the landing pages, i.e., features that characterize the landing page

Optionally, the index database also associates the indexed image-landing page pairs with values of image search engine ranking signals for those pairs.

Each image search engine ranking signal is used by the ranking engine in ranking the image-landing page pair in response to a received search query.

The ranking engine generates respective ranking scores for image-landing page pairs indexed in the index database based on the values of image search engine ranking signals for the image-landing page pair, e.g., signals accessed from the index database or computed at query time, and ranks the image-landing page pair based on the respective ranking scores. The ranking score for a given image-landing page pair reflects the relevance of the image-landing page pair to the received search query, the quality of the given image-landing page pair, or both.
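One way to picture an entry in that index database, and how a ranking engine might fold its stored and query-time signals into one ranking score. The field names and the additive scoring formula are my assumptions, not anything the patent specifies.

```python
# Hypothetical shape of an indexed image-landing page pair and a toy ranking score.
from dataclasses import dataclass, field

@dataclass
class IndexedPair:
    image_url: str
    landing_page_url: str
    image_features: dict = field(default_factory=dict)      # characterize the image
    page_features: dict = field(default_factory=dict)       # characterize the landing page
    ranking_signals: dict = field(default_factory=dict)     # precomputed engine signals

def ranking_score(pair: IndexedPair, query_time_signals: dict) -> float:
    """Toy combination of stored and query-time signals into one score."""
    stored = sum(pair.ranking_signals.values())
    at_query_time = sum(query_time_signals.values())
    return stored + at_query_time

pair = IndexedPair(
    "example.com/heron.jpg",
    "example.com/wading-birds",
    ranking_signals={"page_quality": 0.7, "image_quality": 0.6},
)
print(ranking_score(pair, query_time_signals={"query_match": 0.9}))
```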

The image search engine can use a machine learning model to rank image-landing page pairs in response to received search queries.

The machine learning model is configured to receive an input that includes:

(i) features of the image search query,
(ii) features of an image, and
(iii) features of the landing page of the image,

and to generate a relevance score that measures the relevance of the candidate image search result to the image search query.

Once the machine learning model generates the relevance score for the image-landing page pair, the ranking engine can then use the relevance score to generate ranking scores for the image-landing page pair in response to the received search query.

The Ranking Engine behind the Process to Rank Image Search Results

In some implementations, the ranking engine generates an initial ranking score for each of multiple image-landing page pairs using the signals in the index database.

The ranking engine can then select a certain number of the highest-scoring image-landing page pairs for processing by the machine learning model.

The ranking engine can then rank candidate image-landing page pairs based on relevance scores from the machine learning model, or use those relevance scores as additional signals to adjust the initial ranking scores for the candidate image-landing page pairs.

The machine learning model would receive a single input that includes features of the image search query, the landing page, and the image to predict the relevance (i.e., the relevance score) of the particular image search result to the user’s image query.

We are told that this allows the machine learning model to give more weight to landing page features, image features, or image search query features in a query-specific manner, which can improve the quality of the image search results returned to the user.
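A minimal sketch of that two-stage flow, assuming an initial score already exists for each pair and treating the machine learning model as a black-box relevance function. The blend at the end is one plausible way to use the relevance score as an additional signal, not the patent’s formula.

```python
# Hypothetical two-stage ranking: cheap initial scores first, then the machine
# learning model re-scores only the top candidates. All numbers are made up.

def rerank(pairs_with_initial_scores, relevance, top_n=3, blend=0.5):
    """pairs_with_initial_scores: list of (pair_id, initial_score)."""
    ranked = sorted(pairs_with_initial_scores, key=lambda p: p[1], reverse=True)
    head, tail = ranked[:top_n], ranked[top_n:]

    # Stage 2: the model scores only the selected head; blend with initial scores.
    rescored = [(pid, blend * init + (1 - blend) * relevance(pid)) for pid, init in head]
    rescored.sort(key=lambda p: p[1], reverse=True)
    return rescored + tail          # un-rescored tail keeps its initial order

fake_relevance = {"a": 0.9, "b": 0.2, "c": 0.8, "d": 0.1}.get
print(rerank([("a", 0.5), ("b", 0.7), ("c", 0.6), ("d", 0.4)], fake_relevance))
```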

Features That May Be Used from Images and Landing Pages to Rank Image Search Results

The first step is to receive the image search query.

Once that happens, the image search system may identify initial image-landing page pairs that satisfy the image search query.

It would do that using pairs indexed in a search engine index database, based on signals measuring the quality of the pairs, the relevance of the pairs to the search query, or both.

For those pairs, the search system identifies:

  • Features of the image search query
  • Features of the image
  • Features of the landing page

Features Extracted From the Image

These features can include vectors that represent the content of the image.

Vectors to represent the image may be derived by processing the image through an embedding neural network.

Or those vectors may be generated through other image processing techniques for feature extraction. Examples of feature extraction techniques include edge, corner, ridge, and blob detection. Feature vectors can also include vectors generated using shape extraction techniques (e.g., thresholding, template matching, and so on). Instead of, or in addition to, the feature vectors, when the machine learning model is a neural network, the features can include the pixel data of the image.
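As a toy stand-in for those extraction techniques, here is a sketch that turns raw pixel data into a small feature vector using per-channel means and a crude edge-strength measure. Real embedding networks and edge/corner/blob detectors are far richer; nothing in this snippet comes from the patent.

```python
# Toy image feature extraction from raw pixel data: per-channel means plus a crude
# edge-strength measure. A real system would use an embedding network or proper
# edge/corner/blob detectors; this is a stand-in for illustration.
import numpy as np

def toy_image_features(pixels: np.ndarray) -> np.ndarray:
    """pixels: H x W x 3 array of values in [0, 255]."""
    channel_means = pixels.mean(axis=(0, 1)) / 255.0               # coarse color summary
    gray = pixels.mean(axis=2)
    edge_strength = np.abs(np.diff(gray, axis=1)).mean() / 255.0   # horizontal gradients
    return np.concatenate([channel_means, [edge_strength]])

fake_image = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3)).astype(float)
print(toy_image_features(fake_image))
```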

Features Extracted From the Landing Page

These aren’t the kinds of features that I have usually thought about when optimizing images. They can include (a rough sketch of such a feature record follows this list):

  • The date the page was first crawled or updated
  • Data characterizing the author of the landing page
  • The language of the landing page
  • Features of the domain that the landing page belongs to
  • Keywords representing the content of the landing page
  • Features of the links to the image and landing page, such as the anchor text or source page for the links
  • Features that describe the context of the image in the landing page
  • And so on
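Here is the rough sketch of a landing page feature record mentioned above. Every field name is my assumption based on the patent’s list, not a schema the patent defines.

```python
# Hypothetical landing-page feature record mirroring the list above.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LandingPageFeatures:
    first_crawled: date
    author: str
    language: str
    domain: str
    keywords: list = field(default_factory=list)
    inbound_link_anchor_texts: list = field(default_factory=list)
    image_context: dict = field(default_factory=dict)   # see the next section

page = LandingPageFeatures(
    first_crawled=date(2020, 8, 1),
    author="Example Author",
    language="en",
    domain="example.com",
    keywords=["heron", "wading birds"],
    inbound_link_anchor_texts=["blue heron photos"],
    image_context={"xpath": "/html/body/article/figure[1]/img", "prominence": 0.22},
)
print(page.language, page.image_context["prominence"])
```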

Features Extracted From The Landing Page That Describe The Context of the Image in the Landing Page

The patent interestingly separated these features out:

  • Data characterizing the location of the image within the landing page
  • Prominence of the image on the landing page
  • Textual descriptions of the image on the landing page
  • Etc.

More Details on the Context of the Image on the Landing Page

The patent points out some alternative ways that the location of the image within the landing page might be determined:

  • Using pixel-based geometric location in horizontal and vertical dimensions
  • User-device based length (e.g., in inches) in horizontal and vertical dimensions
  • An HTML/XML DOM-based XPATH-like identifier
  • A CSS-based selector
  • Etc.

The prominence of the image on the landing page can be measured using the relative size of the image as displayed on a generic device and a specific user device.

The textual descriptions of the image on the landing page can include alt-text labels for the image, text surrounding the image, and so on.
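A tiny sketch of one way “prominence” could be computed as the image’s share of the viewport on different devices. The viewport sizes and the formula are my assumptions; the patent does not define a calculation.

```python
# Toy prominence measure: the image's displayed area as a fraction of the viewport.
# Device viewports and the formula are assumptions for illustration.

def prominence(image_w: int, image_h: int, viewport_w: int, viewport_h: int) -> float:
    visible_w = min(image_w, viewport_w)
    visible_h = min(image_h, viewport_h)
    return (visible_w * visible_h) / (viewport_w * viewport_h)

# The same image is far more prominent on a phone than on a desktop display.
print(prominence(600, 400, viewport_w=1920, viewport_h=1080))   # "generic" desktop
print(prominence(600, 400, viewport_w=390, viewport_h=844))     # specific phone
```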

Features Extracted from the Image Search Query

The features from the image search query can include:

  • Language of the search query
  • Some or all of the terms in the search query
  • Time that the search query was submitted
  • Location from which the search query was submitted
  • Data characterizing the user device from which the query was received
  • And so on

How the Features from the Query, the Image, and the Landing Page Work Together

  • The features may be represented categorically or discretely
  • Additional relevant features can be created through pre-existing features (Relationships may be created between one or more features through a combination of addition, multiplication, or other mathematical operations.)
  • For each image-landing page pair, the system processes the features using an image search result ranking machine learning model to generate a relevance score output
  • The relevance score measures the relevance of the candidate image search result to the image search query (i.e., the relevance score of a candidate image search result measures the likelihood that a user submitting the search query would click on or otherwise interact with that search result; a higher relevance score indicates the user submitting the search query would find the candidate image search result more relevant and click on it)
  • The relevance score of the candidate image search result can be a prediction of a score generated by a human rater to measure the quality of the result for the image search query

Adjusting Initial Ranking Scores

The system may adjust initial ranking scores for the image search results based on the relevance scores to:

  • Promote search results having higher relevance scores
  • Demote search results having lower relevance scores
  • Or both

Training a Ranking Machine Learning Model to Rank Image Search Results

The system receives a set of training image search queries and, for each training image search query, training image search results that are each associated with a ground truth relevance score.

A ground truth relevance score is the relevance score that should be generated for the image search result by the machine learning model (i.e., when the relevance scores measure a likelihood that a user would select a search result in response to a given search query, each ground truth relevance score can identify whether a user submitting the given search query selected the image search result or a proportion of times that users submitting the given search query select the image search result.)

The patent provides another example of how ground-truth relevance scores might be generated:

When the relevance scores generated by the model are a prediction of a score assigned to an image search result by a human, the ground truth relevance scores are actual scores assigned to the search results by human raters.
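Both flavors of ground truth can be sketched in a few lines. The log format and the normalization of rater scores below are my assumptions, not definitions from the patent.

```python
# Hypothetical ground-truth relevance scores, two ways: click-through proportion
# from query logs, or a normalized human-rater score. Data formats are made up.

def ground_truth_from_clicks(impressions: int, clicks: int) -> float:
    """Proportion of times users issuing the query selected this result."""
    return clicks / impressions if impressions else 0.0

def ground_truth_from_rater(rating: int, max_rating: int = 4) -> float:
    """Normalize a human rater's 0..max_rating quality score to [0, 1]."""
    return rating / max_rating

print(ground_truth_from_clicks(impressions=1200, clicks=300))   # 0.25
print(ground_truth_from_rater(rating=3))                        # 0.75
```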

For each of the training image search queries, the system may generate features for each associated image-landing page pair.

For each of those pairs, the system may identify:

(i) features of the image search query
(ii) features of the image and
(iii) features of the landing page.

We are told that extracting, generating, and selecting features may take place before training or using the machine learning model. Examples of features are the ones I listed above related to the images, landing pages, and queries.

The ranking engine trains the machine learning model by processing, for each image search query:

  • Features of the image search query
  • Features of the respective image identified by the candidate image search result
  • Features of the respective landing page identified by the candidate image search result, together with the respective ground truth relevance score that measures the relevance of the candidate image search result to the image search query

The patent provides some specific implementation processes that might differ based upon the machine learning system used.
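Pulling the pieces together, here is a minimal training-loop sketch under the same assumptions as the earlier snippets: fabricated features, fabricated ground-truth relevance scores, and an off-the-shelf regressor standing in for whatever model Google actually uses.

```python
# Hypothetical training loop: build (features, ground-truth relevance) rows from
# training queries and their image-landing page pairs, then fit one model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)

def fake_features(query, pair):          # stand-in for the query/image/page extractors
    return rng.normal(size=8)

# Fabricated training set: each query has a few pairs with ground-truth scores.
training_queries = {
    "blue heron": [("heron.jpg|birds.html", 0.9), ("egret.jpg|birds.html", 0.4)],
    "tide pool":  [("pool.jpg|coast.html", 0.7), ("crab.jpg|coast.html", 0.6)],
}

X, y = [], []
for query, pairs in training_queries.items():
    for pair_id, ground_truth in pairs:
        X.append(fake_features(query, pair_id))
        y.append(ground_truth)

model = GradientBoostingRegressor(random_state=0).fit(np.array(X), np.array(y))
print(model.predict(np.array([fake_features("blue heron", "new.jpg|birds.html")])).round(3))
```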

Takeaways to Rank Image Search Results

I’ve provided some information about the kinds of features Google may have used in the past to rank image search results.

Under a machine learning approach, Google may be paying more attention to features from the image query, features from the images themselves, and features from the landing pages those images appear on. The patent lists many of those features, and if you spend time comparing the older features with the ones under the machine learning model approach, you can see there is overlap, but the machine learning approach covers considerably more options.



The post How Google Might Rank Image Search Results appeared first on SEO by the Sea ⚓.


SEO by the Sea ⚓




How Google Might Rank Image Search Results

August 5, 2020 No Comments

Changes to How Google Might Rank Image Search Results

We are seeing more references to machine learning in how Google is ranking pages and other documents in search results.

That seems to be a direction that will leave what we know as traditional, or old school signals that are referred to as ranking signals behind.

It’s still worth considering some of those older ranking signals because they may play a role in how things are ranked.

As I was going through a new patent application from Google on ranking image search results, I decided that it was worth including what I used to look at when trying to rank images.

Images can rank highly in image search, and they can also help pages that they appear upon rank higher in organic web results, because they can help make a page more relevant for the query terms that page may be optimized for.

Here are signals that I would include when I rank image search results:

  • Use meaningful images that reflect what the page those images appear on is about – make them relevant to that query
  • Use a file name for your image that is relevant to what the image is about (I like to separate words in file names for images with hyphens, too)
  • Use alt text for your alt attribute that describes the image well, and uses text that is relevant to the query terms that the page is optimized for) and avoid keyword stuffing
  • Use a caption that is helpful to viewers and relevant to what the page it is about, and the query term that the page is optimized for
  • Use a title and associated text on the page the image appears upon that is relevant for what the page is about, and what the image shows
  • Use a decent sized image at a decent resolution that isn’t mistaken for a thumbnail

Those are signals that I would consider when I rank image search results and include images on a page to help that page rank as well.

A patent application that was published this week tells us about how machine learning might be used in ranking image search results. It doesn’t itemize features that might help an image in those rankings, such as alt text, captions, or file names, but it does refer to “features” that likely include those as well as other signals. It makes sense to start looking at these patents that cover machine learning approaches to ranking because they may end up becoming more common.

Machine Learning Models to Rank Image Search Results

Giving Google a chance to try out different approaches, we are told that the machine learning model can use many different types of machine learning models.

The machine learning model can be a:

  • Deep machine learning model (e.g., a neural network that includes multiple layers of non-linear operations.)
  • Different type of machine learning model (e.g., a generalized linear model, a random forest, a decision tree model, and so on.)

We are told more about this machine learning model. It is “used to accurately generate relevance scores for image-landing page pairs in the index database.”

We are told about an image search system, which includes a training engine.

The training engine trains the machine learning model on training data generated using image-landing page pairs that are already associated with ground truth or known values of the relevance score.

The patent shows an example of the machine learning model generating a relevance score for a particular image search result from an image, landing page, and query features. In this image, a searcher submits an image search query. The system generates image query features based on the user-submitted image search query.

Rank Image Search Results includes Image Query Features

That system also learns about landing page features for the landing page that has been identified by the particular image search result as well as image features for the image identified by that image search result.

The image search system would then provide the query features, the landing page features, and the image features as input to the machine learning model.

Google may rank image search results based on various factors

Those may be separate signals from:

  1. Features of the image
  2. Features of the landing page
  3. A combining the separate signals following a fixed weighting scheme that is the same for each received search query

This patent describes how it would rank image search results in this manner:

  1. Obtaining many candidate image search results for the image search query
  2. Each candidate image search result identifies a respective image and a respective landing page for the respective image
  3. For each of the candidate image search results processing
    • Features of the image search query
    • Features of the respective image identified by the candidate image search result
  4. Features of the respective landing page identified by the candidate image search result using an image search result ranking machine learning model that has been trained to generate a relevance score that measures a relevance of the candidate image search result to the image search query
  5. Ranking the candidate image search results based on the relevance scores generated by the image search result ranking machine learning model
  6. – Generating an image search results presentation that displays the candidate image search results ordered according to the ranking
    – Providing the image search results for presentation by a user device

Advantages to Using a Machine Learning Model to Rank Image Search Results

If Google can rank image search query pairs based on relevance scores using a machine learning model, it can improve the relevance of the image search results in response to the image search query.

This differs from conventional methods to rank resources because the machine learning model receives a single input that includes features of the image search query, landing page, and the image identified by a given image search result to predicts the relevance of the image search result to the received query.

This process allows the machine learning model to be more dynamic and give more weight to landing page features or image features in a query-specific manner, improving the quality of the image search results that are returned to the user.

By using a machine learning model, the image search engine does not apply the same fixed weighting scheme for landing page features and image features for each received query. Instead, it combines the landing page and image features in a query-dependent manner.

The patent also tells us that a trained machine learning model can easily and optimally adjust weights assigned to various features based on changes to the initial signal distribution or additional features.

In a conventional image search, we are told that significant engineering effort is required to adjust the weights of a traditional manually tuned model based on changes to the initial signal distribution.

But under this patented process, adjusting the weights of a trained machine learning model based on changes to the signal distribution is significantly easier, thus improving the ease of maintenance of the image search engine.

Also, if a new feature is added, the manually tuned functions adjust the function on the new feature independently on an objective (i.e., loss function, while holding existing feature functions constant.)

But, a trained machine learning model can automatically adjust feature weights if a new feature is added.

Instead, the machine learning model can include the new feature and rebalance all its existing weights appropriately to optimize for the final objective.

Thus, the accuracy, efficiency, and maintenance of the image search engine can be improved.

The Rank Image Search results patent application can be found at

Ranking Image Search Results Using Machine Learning Models
US Patent Application Number 16263398
File Date: 31.01.2019
Publication Number US20200201915
Publication Date June 25, 2020
Applicants Google LLC
Inventors Manas Ashok Pathak, Sundeep Tirumalareddy, Wenyuan Yin, Suddha Kalyan Basu, Shubhang Verma, Sushrut Karanjkar, and Thomas Richard Strohmann

Abstract

Methods, systems, and apparatus including computer programs encoded on a computer storage medium, for ranking image search results using machine learning models. In one aspect, a method includes receiving an image search query from a user device; obtaining a plurality of candidate image search results; for each of the candidate image search results: processing (i) features of the image search query and (ii) features of the respective image identified by the candidate image search result using an image search result ranking machine learning model to generate a relevance score that measures a relevance of the candidate image search result to the image search query; ranking the candidate image search results based on the relevance scores; generating an image search results presentation; and providing the image search results for presentation by a user device.

The Indexing Engine

The search engine may include an indexing engine and a ranking engine.

The indexing engine indexes image-landing page pairs, and adds the indexed image-landing page pairs to an index database.

That is, the index database includes data identifying images and, for each image, a corresponding landing page.

The index database also associates the image-landing page pairs with:

  • Features of the image search query
  • Features of the images, i.e., features that characterize the images
  • Features of the landing pages, i.e., features that characterize the landing page

Optionally, the index database also associates the indexed image-landing page pairs in the collections of image-landing pairs with values of image search engine ranking signals for the indexed image-landing page pairs.

Each image search engine ranking signal is used by the ranking engine in ranking the image-landing page pair in response to a received search query.

The ranking engine generates respective ranking scores for image-landing page pairs indexed in the index database based on the values of image search engine ranking signals for the image-landing page pair, e.g., signals accessed from the index database or computed at query time, and ranks the image-landing page pair based on the respective ranking scores. The ranking score for a given image-landing page pair reflects the relevance of the image-landing page pair to the received search query, the quality of the given image-landing page pair, or both.

The image search engine can use a machine learning model to rank image-landing page pairs in response to received search queries.

The machine learning model is a machine learning model that is configured to receive an input that includes

(i) features of the image search query
(ii) features of an image and
(iii) features of the landing page of the image and generate a relevance score that measures the relevance of the candidate image search result to the image search query.

Once the machine learning model generates the relevance score for the image-landing page pair, the ranking engine can then use the relevance score to generate ranking scores for the image-landing page pair in response to the received search query.

The Ranking Engine behind the Process to Rank Image Search Results

In some implementations, the ranking engine generates an initial ranking score for each of multiple image—landing page pairs using the signals in the index database.

The ranking engine can then select a certain number of the highest-scoring image—landing pair pairs for processing by the machine learning model.

The ranking engine can then rank candidate image—landing page pairs based on relevance scores from the machine learning model or use those relevance scores as additional signals to adjust the initial ranking scores for the candidate image—landing page pairs.

The machine learning model would receive a single input that includes features of the image search query, the landing page, and the image to predict the relevance (i.e., relevance score, of the particular image search result to the user image query.)

We are told that this allows the machine learning model to give more weight to landing page features, image features, or image search query features in a query-specific manner, which can improve the quality of the image search results returned to the user.

Features That May Be Used from Images and Landing Pages to Rank Image Search Results

The first step is to receive the image search query.

Once that happens, the image search system may identify initial image-landing page pairs that satisfy the image search query.

It would do that from pairs that are indexed in a search engine index database from signals measuring the quality of the pairs, and the relevance of the pairs to the search query, or both.

For those pairs, the search system identifies:

  • Features of the image search query
  • Features of the image
  • Features of the landing page

Features Extracted From the Image

These features can include vectors that represent the content of the image.

Vectors to represent the image may be derived by processing the image through an embedding neural network.

Or those vectors may be generated through other image processing techniques for feature extraction. Examples of feature extraction techniques can include edge, corner, ridge, and blob detection. Feature vectors can include vectors generated using shape extraction techniques (e.g., thresholding, template matching, and so on.) Instead of or in addition to the feature vectors, when the machine learning model is a neural network the features can include the pixel data of the image.

Features Extracted From the Landing Page

These aren’t the kinds of features that I usually think about when optimizing images historically. These features can include:

  • The date the page was first crawled or updated
  • Data characterizing the author of the landing page
  • The language of the landing page
  • Features of the domain that the landing page belong to
  • Keywords representing the content of the landing page
  • Features of the links to the image and landing page such as the anchor text or source page for the links
  • Features that describe the context of the image in the landing page
  • So on

Features Extracted From The Landing Page That Describes The Context of the Image in the Landing Page

The patent interestingly separated these features out:

  • Data characterizing the location of the image within the landing page
  • Prominence of the image on the landing page
  • Textual descriptions of the image on the landing page
  • Etc.

More Details on the Context of the Image on the Landing Page

The patent points out some alternative ways that the location of the image within the Landing Page might be found:

  • Using pixel-based geometric location in horizontal and vertical dimensions
  • User-device based length (e.g., in inches) in horizontal and vertical dimensions
  • An HTML/XML DOM-based XPATH-like identifier
  • A CSS-based selector
  • Etc.

The prominence of the image on the landing page can be measured using the relative size of the image as displayed on a generic device and a specific user device.

The textual descriptions of the image on the landing page can include alt-text labels for the image, text surrounding the image, and so on.
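
Here is a rough sketch of how such context features might be pulled from a landing page's HTML with BeautifulSoup. The function, the feature names, and the 200-character text window are my own illustrative choices, not anything specified in the patent.

```python
# Illustrative only: extract alt text, nearby text, and a crude position
# signal for one image on a landing page.
from bs4 import BeautifulSoup

def image_context_features(html: str, image_src: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    imgs = soup.find_all("img")
    for position, img in enumerate(imgs):
        if img.get("src") == image_src:
            parent_text = img.parent.get_text(" ", strip=True) if img.parent else ""
            return {
                "alt_text": img.get("alt", ""),
                "surrounding_text": parent_text[:200],  # text near the image
                "position_index": position,             # order among images on the page
                "total_images": len(imgs),
            }
    return {}

html = '<p>Lighthouse at dusk <img src="light.jpg" alt="lighthouse"></p>'
print(image_context_features(html, "light.jpg"))
```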

Features Extracted from the Image Search Query

The features from the image search query can include:

  • Language of the search query
  • Some or all of the terms in the search query
  • Time that the search query was submitted
  • Location from which the search query was submitted
  • Data characterizing the user device from which the query was received
  • And so on

How the Features from the Query, the Image, and the Landing Page Work Together

  • The features may be represented categorically or discretely
  • Additional relevant features can be created from pre-existing features (relationships between one or more features may be created through a combination of addition, multiplication, or other mathematical operations; see the sketch after this list)
  • For each image-landing page pair, the system processes the features using an image search result ranking machine learning model to generate a relevance score output
  • The relevance score measures the relevance of the candidate image search result to the image search query (i.e., it measures the likelihood that a user submitting the search query would click on or otherwise interact with the search result; a higher relevance score indicates that the user would find the candidate image search result more relevant and click on it)
  • The relevance score of the candidate image search result can also be a prediction of a score generated by a human rater to measure the quality of the result for the image search query
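
As a small illustration of the second bullet above, the sketch below builds crossed features by multiplying and adding pre-existing ones. Every feature name here is hypothetical.

```python
# Illustrative feature crossing: derive new features from existing ones by
# simple arithmetic combinations. Feature names are invented for this example.
def crossed_features(feats: dict) -> dict:
    crosses = dict(feats)
    crosses["prominence_x_query_match"] = feats["image_prominence"] * feats["query_term_match"]
    crosses["freshness_plus_quality"] = feats["page_freshness"] + feats["page_quality"]
    return crosses

example = {
    "image_prominence": 0.8,   # relative size of the image on the page
    "query_term_match": 0.6,   # overlap between query terms and page keywords
    "page_freshness": 0.3,
    "page_quality": 0.9,
}
print(crossed_features(example))
```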

Adjusting Initial Ranking Scores

The system may adjust initial ranking scores for the image search results based on the relevance scores (a simple blend is sketched after this list) to:

  • Promote search results having higher relevance scores
  • Demote search results having lower relevance scores
  • Or both
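
A minimal sketch of one way this adjustment could work, assuming a simple weighted blend of the initial ranking score and the model's relevance score. The patent does not specify how the scores are combined, so the blend weight and data layout are assumptions.

```python
# Illustrative re-ranking: blend each candidate's initial ranking score with
# the model's relevance score, then sort by the blended score.
def adjust_and_rank(candidates, weight=0.5):
    """candidates: list of dicts with 'initial_score' and 'relevance_score'."""
    for c in candidates:
        c["final_score"] = (1 - weight) * c["initial_score"] + weight * c["relevance_score"]
    return sorted(candidates, key=lambda c: c["final_score"], reverse=True)

results = [
    {"id": "img-a", "initial_score": 0.72, "relevance_score": 0.40},
    {"id": "img-b", "initial_score": 0.65, "relevance_score": 0.90},  # promoted
]
print([c["id"] for c in adjust_and_rank(results)])
```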

Training a Ranking Machine Learning Model to Rank Image Search Results

The system receives a set of training image search queries and, for each training query, training image search results that are each associated with a ground truth relevance score.

A ground truth relevance score is the relevance score that should be generated for the image search result by the machine learning model (i.e., when the relevance scores measure the likelihood that a user would select a search result in response to a given search query, each ground truth relevance score can identify whether a user submitting the given search query selected the image search result, or the proportion of times that users submitting the given search query selected it).

The patent provides another example of how ground-truth relevance scores might be generated:

When the relevance scores generated by the model are a prediction of a score assigned to an image search result by a human, the ground truth relevance scores are actual scores assigned to the search results by human raters.
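
Here is a hedged sketch of deriving click-based ground truth scores, the proportion of times users issuing a query selected a given result, from a hypothetical interaction log. The log format is invented for illustration.

```python
# Illustrative ground-truth labels from click logs: for each (query, result)
# pair, the fraction of impressions that led to a click.
from collections import defaultdict

def ground_truth_from_logs(log_entries):
    """log_entries: iterable of (query, result_id, was_clicked) tuples."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for query, result_id, was_clicked in log_entries:
        key = (query, result_id)
        shown[key] += 1
        clicked[key] += int(was_clicked)
    return {key: clicked[key] / shown[key] for key in shown}

logs = [
    ("lighthouse", "img-a", True),
    ("lighthouse", "img-a", False),
    ("lighthouse", "img-b", True),
]
print(ground_truth_from_logs(logs))  # {('lighthouse', 'img-a'): 0.5, ('lighthouse', 'img-b'): 1.0}
```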

For each of the training image search queries, the system may generate features for each associated image-landing page pair.

For each of those pairs, the system may identify:

(i) features of the image search query
(ii) features of the image and
(iii) features of the landing page.

We are told that extracting, generating, and selecting features may take place before training or using the machine learning model. Examples of features are the ones I listed above related to the images, landing pages, and queries.

The ranking engine trains the machine learning model by processing, for each training image search query:

  • Features of the image search query
  • Features of the respective image identified by the candidate image search result
  • Features of the respective landing page identified by the candidate image search result
  • The respective ground truth relevance score, which measures the relevance of the candidate image search result to the image search query

The patent provides some specific implementation processes that might differ based upon the machine learning system used.
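
For instance, here is a minimal training sketch using scikit-learn's GradientBoostingRegressor as a stand-in learning system. The training data, feature dimensions, and model family are all assumptions for illustration; the patent does not commit to any particular library or model type here.

```python
# Illustrative training loop: each row is the concatenated query/image/
# landing-page feature vector for one pair; the target is its ground truth
# relevance score.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: 200 image-landing page pairs, 32 features each.
X = rng.normal(size=(200, 32))        # query + image + landing-page features
y = rng.uniform(0.0, 1.0, size=200)   # ground truth relevance scores

model = GradientBoostingRegressor().fit(X, y)

# At query time, the trained model scores each candidate pair.
candidate_features = rng.normal(size=(5, 32))
print(model.predict(candidate_features))  # predicted relevance scores
```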

Takeaways on Ranking Image Search Results

I've provided some information about the kinds of features Google may have used in the past when ranking image search results.

Under a machine learning approach, Google may be paying more attention to features from the image query, features from the images themselves, and features from the landing pages those images appear on. The patent lists many of those features, and if you compare the older features with the ones used under the machine learning approach, you can see there is overlap, but the machine learning approach covers considerably more options.




SEO by the Sea ⚓


An American Pickle Might Have Been Fresher in the 2010s

August 3, 2020 No Comments

Seth Rogen’s new movie is funny, although some gags meant to skewer hip Brooklynites seem strangely dated.
Feed: All Latest


Apple Might Owe You $25 for Slowing Down Your iPhone

March 3, 2020 No Comments

Cupertino insists it did nothing wrong when it throttled aging handsets. But the decision will cost Apple hundreds of millions in payouts.
Feed: All Latest


Why it might have been time for new leadership at SAP

October 13, 2019 No Comments

SAP CEO Bill McDermott announced he was stepping down last night after a decade at the helm in an announcement that shocked many. It’s always tough to measure the performance of an enterprise leader when he or she leaves. Some people look at stock price over their tenure. Some at culture. Some at the acquisitions made. Whatever the measure, it will be up to the new co-CEOs Jennifer Morgan and Christian Klein to put their own mark on the company.

What form that will take remains to be seen. McDermott’s tenure ended without much warning, but it also happened against a wider backdrop that includes other top executives and board members leaving the company over the last year, an activist investor coming on board and some controversial licensing changes in recent years.

Why now?

The timing certainly felt sudden. McDermott, who was interviewed at TechCrunch Sessions: Enterprise last month, sounded more like a man who was fully engaged in the job, not one ready to leave, but a month later he's gone.

But as McDermott told our own Frederic Lardinois last night, after 10 years, it seemed like the right time to leave. “The consensus was 10 years is about the right amount of time for a CEO because you’ve accomplished a lot of things if you did the job well, but you certainly didn’t stay too long. And if you did really well, you had a fantastic success plan,” he said in the interview.

There is no reason to doubt that, but you should at least look at the context and get a sense of what has been going on in the company. As the new co-CEOs take over for McDermott, several other executives, including SAP SuccessFactors COO Brigette McInnis-Day; Robert Enslin, president of its cloud business and a board member; CTO Björn Goerke; and Bernd Leukert, a member of the executive board, have all left this year.


Enterprise – TechCrunch


6 Reasons Why Your Google Analytics Reports Might Be Wrong

July 25, 2019 No Comments


  1. Missing Tags – This is the most common cause of wrong data. It generally happens when new pages are added or existing pages are redesigned/recoded and the developer forgets to include the tags. Make sure all of your pages are tagged with Google Analytics code. You can use a tool like GAChecker to verify whether the Google Analytics tags are missing on any pages of your site.
  2. Mistagged Pages – Incorrect implementation or double tagging leads to wrong data in Google Analytics. Double tagging results in inflated page views and a low bounce rate. If your bounce rate is lower than 20%, then that's the first thing you should check.
  3. Location of GA Tags – Placing the tag towards the bottom of the page could result in no data, particularly for users with slow connections or pages that are slow to load. This happens when a user tries to load a page and clicks on another link before the first page has finished loading. Since the Google Analytics tag is towards the bottom of the page, it might not get a chance to execute. To avoid this issue, put your Google Analytics JavaScript in the <head> section of the page.
  4. Incorrect Filters – Wrong filters can mess up the data and distort the view. Always create an unfiltered view so that you have correct data to fall back on.
  5. Tags Not Firing Properly – This can happen when your page(s) have JavaScript errors. A JavaScript error on any part of the page can cause the Google Analytics code to fail. Verify the JavaScript on your site to make sure there are no errors.
  6. Sampling – Sampling happens on highly trafficked sites. Sampling in Google Analytics is the practice of selecting a subset of data from your traffic and reporting on the trends found in that sample set. For most purposes this might be a non-issue, but it can be a concern on eCommerce sites, where sampling can (and will) result in wrong sales figures. You can get more information about GA sampling in "How Sampling Works".


Google Analytics Premium


©2018-2020 - American Online Phonebook - All Rights Reserved

Privacy Policy | Terms & Conditions