
Ranking SVM


In machine learning, a Ranking SVM is a variant of the support vector machine algorithm, used to solve certain ranking problems (via learning to rank). The Ranking SVM algorithm was published by Thorsten Joachims in 2002. The original purpose of the algorithm was to improve the performance of an internet search engine, but it was later found that Ranking SVM can also be used to solve other problems, such as Rank SIFT.

Description

The Ranking SVM algorithm learns a retrieval function that employs pair-wise ranking to adaptively sort results based on how 'relevant' they are for a specific query. The Ranking SVM uses a mapping function to describe the match between a search query and the features of each of the possible results. This mapping function projects each data pair (such as a search query and a clicked web page) onto a feature space. These features are combined with the corresponding click-through data (which can act as a proxy for how relevant a page is for a specific query) and can then be used as the training data for the Ranking SVM algorithm.

Generally, Ranking SVM includes three steps in the training period:

  1. It maps the similarities between queries and the clicked pages onto a certain feature space.
  2. It calculates the distances between any two of the vectors obtained in step 1.
  3. It forms an optimization problem similar to a standard SVM classification problem and solves it with a regular SVM solver.
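
The following is a minimal sketch of these three steps for a single query, assuming hypothetical two-dimensional feature vectors and using scikit-learn's LinearSVC as the regular SVM solver; all names and values are illustrative, not from Joachims' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Step 1: hypothetical feature vectors Phi(q, c) describing the match
# between one query and each of its results.
features = np.array([[0.9, 0.3],    # result A
                     [0.7, 0.8],    # result B
                     [0.2, 0.1]])   # result C
ranks = np.array([1, 2, 3])         # target ranking: A above B above C

# Step 2: pairwise difference vectors between the mapped results.
# Step 3: treat each difference as a classification example and solve
# with an ordinary soft-margin SVM.
X_pairs, y_pairs = [], []
for i in range(len(ranks)):
    for j in range(len(ranks)):
        if ranks[i] < ranks[j]:                    # c_i ranked above c_j
            X_pairs.append(features[i] - features[j]); y_pairs.append(+1)
            X_pairs.append(features[j] - features[i]); y_pairs.append(-1)

svm = LinearSVC(C=1.0).fit(np.array(X_pairs), np.array(y_pairs))
w = svm.coef_.ravel()        # weight vector of the learned retrieval function
print(features @ w)          # higher score = higher rank
```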

Ranking Method

Suppose $\mathbb{C}$ is a data set containing $|\mathbb{C}|$ elements $c_i$, and $r$ is a ranking method applied to $\mathbb{C}$. Then $r$ on $\mathbb{C}$ can be represented as a $|\mathbb{C}| \times |\mathbb{C}|$ asymmetric binary matrix: if the rank of $c_i$ is higher than the rank of $c_j$, i.e. $r(c_i) < r(c_j)$, the corresponding element of the matrix is set to "1"; otherwise it is set to "0".
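
For illustration, a small sketch of this matrix representation, where the ranking is given as a list of ranks, one per element:

```python
import numpy as np

def ranking_matrix(ranks):
    """Binary |C| x |C| matrix: entry (i, j) is 1 iff c_i outranks c_j,
    i.e. r(c_i) < r(c_j); all other entries are 0."""
    n = len(ranks)
    M = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if ranks[i] < ranks[j]:
                M[i, j] = 1
    return M

print(ranking_matrix([2, 1, 3]))
# [[0 0 1]
#  [1 0 1]
#  [0 0 0]]
```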

Kendall’s Tau

Kendall's tau, also known as the Kendall tau rank correlation coefficient, is commonly used to compare two ranking methods for the same data set.

Suppose $r_1$ and $r_2$ are two ranking methods applied to a data set $\mathbb{C}$. The Kendall's tau between $r_1$ and $r_2$ can be represented as follows:

$$\tau(r_1, r_2) = \frac{P - Q}{P + Q} = 1 - \frac{2Q}{P + Q}$$

where $P$ is the number of concordant pairs and $Q$ is the number of discordant pairs (inversions). A pair $d_i$ and $d_j$ is concordant if $r_1$ and $r_2$ agree in how they order $d_i$ and $d_j$; it is discordant if they disagree.
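
A direct sketch of this definition, with SciPy's kendalltau as a cross-check (assuming rankings without ties):

```python
from itertools import combinations
from scipy.stats import kendalltau

def kendall_tau(r1, r2):
    """Kendall's tau = (P - Q) / (P + Q), where r1[i] and r2[i] are the
    ranks of element i under the two methods (no ties assumed)."""
    P = Q = 0
    for i, j in combinations(range(len(r1)), 2):
        if (r1[i] - r1[j]) * (r2[i] - r2[j]) > 0:
            P += 1   # concordant: both methods order i and j the same way
        else:
            Q += 1   # discordant pair (inversion)
    return (P - Q) / (P + Q)

r1, r2 = [1, 2, 3, 4, 5], [3, 2, 1, 4, 5]
print(kendall_tau(r1, r2))     # 0.4
print(kendalltau(r1, r2)[0])   # same value from SciPy
```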

Information Retrieval Quality

Information retrieval quality is usually evaluated by the following three measurements:

  1. Precision
  2. Recall
  3. Average Precision

For a specific query to a database, let $P_{\mathrm{relevant}}$ be the set of relevant information elements in the database and $P_{\mathrm{retrieved}}$ be the set of retrieved information elements. Then the above three measurements can be represented as follows:

$$\mathrm{Precision} = \frac{|P_{\mathrm{relevant}} \cap P_{\mathrm{retrieved}}|}{|P_{\mathrm{retrieved}}|}; \quad \mathrm{Recall} = \frac{|P_{\mathrm{relevant}} \cap P_{\mathrm{retrieved}}|}{|P_{\mathrm{relevant}}|}; \quad \mathrm{AveragePrecision} = \int_{0}^{1} \mathrm{Prec}(\mathrm{Recall}) \, d\,\mathrm{Recall},$$

where $\mathrm{Prec}(\mathrm{Recall})$ is the precision as a function of recall.
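
A sketch of these measures for a single query, with the integral computed in its usual discrete form (the mean of the precision values at the ranks where relevant elements are retrieved); the document identifiers are hypothetical:

```python
def precision_recall(relevant, retrieved):
    """Set-based precision and recall for one query."""
    hit = len(set(relevant) & set(retrieved))
    return hit / len(retrieved), hit / len(relevant)

def average_precision(relevant, retrieved):
    """Discrete form of the integral: mean precision at each rank
    where a relevant element is retrieved."""
    relevant = set(relevant)
    hits, precisions = 0, []
    for k, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant)

rel = ["d1", "d3", "d5"]
ret = ["d1", "d2", "d3", "d4", "d5"]
print(precision_recall(rel, ret))   # (0.6, 1.0)
print(average_precision(rel, ret))  # (1/1 + 2/3 + 3/5) / 3 ~ 0.756
```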

Let $r^*$ and $r_{f(q)}$ be the expected and proposed ranking methods of a database, respectively. The lower bound of the average precision of method $r_{f(q)}$ can be represented as follows:

$$\mathrm{AvgPrec}(r_{f(q)}) \ge \frac{1}{R}\left[Q + \binom{R+1}{2}\right]^{-1} \left(\sum_{i=1}^{R} \sqrt{i}\right)^{2}$$

where $Q$ is the number of discordant pairs, i.e. the number of positions at which the upper triangular parts of the matrices of $r^*$ and $r_{f(q)}$ differ, and $R$ is the number of relevant elements in the data set.
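
A small sketch of this bound as reconstructed above, using $\binom{R+1}{2} = R(R+1)/2$; fewer discordant pairs $Q$ give a higher guaranteed average precision:

```python
import math

def avgprec_lower_bound(Q, R):
    """Lower bound on AvgPrec given Q discordant pairs and R relevant
    elements: (1/R) * [Q + (R+1 choose 2)]^{-1} * (sum_i sqrt(i))^2."""
    numerator = sum(math.sqrt(i) for i in range(1, R + 1)) ** 2
    return numerator / (R * (Q + R * (R + 1) / 2))

print(avgprec_lower_bound(0, 5))   # ~0.94 with no inversions
print(avgprec_lower_bound(4, 5))   # ~0.74 with four inversions
```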

SVM Classifier

Suppose $(x_i, y_i)$ is an element of a training data set, where $x_i$ is the feature vector and $y_i$ is the label (which classifies the category of $x_i$). A typical SVM classifier for such a data set can be defined as the solution of the following optimization problem:

$$\begin{aligned}
\text{minimize: } & V(w, \xi) = \frac{1}{2} w \cdot w + C \sum_i \xi_i \\
\text{s.t. } & y_i (w \cdot x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0,
\end{aligned}$$

where $b$ is a scalar and $y_i \in \{-1, +1\}$.

The solution of the above optimization problem can be represented as a linear combination of the feature vectors $x_i$:

$$w^* = \sum_i \alpha_i y_i x_i$$

where the $\alpha_i$ are coefficients to be determined.
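
This expansion can be checked numerically with scikit-learn's SVC, whose dual_coef_ attribute stores the products $\alpha_i y_i$ for the support vectors; the toy data here is made up:

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class training set (x_i, y_i) with y_i in {-1, +1}.
X = np.array([[2.0, 2.0], [1.5, 2.5], [0.0, 0.5], [-0.5, 0.0]])
y = np.array([1, 1, -1, -1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# dual_coef_ holds alpha_i * y_i for each support vector, so
# w = sum_i alpha_i y_i x_i is recovered as a single matrix product:
w = clf.dual_coef_ @ clf.support_vectors_
print(w)          # identical to clf.coef_
print(clf.coef_)
```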

Loss Function

Let $\tau_P(f)$ be the Kendall's tau between the expected ranking method $r^*$ and a proposed method $r_{f(q)}$. It can be proved that maximizing $\tau_P(f)$ helps to minimize the lower bound of the average precision of $r_{f(q)}$.

Expected Loss Function

The negative $\tau_P(f)$ can be selected as the loss function, so that minimizing it minimizes the lower bound of the average precision of $r_{f(q)}$:

$$L_{\mathrm{expected}} = -\tau_P(f) = -\int \tau(r_{f(q)}, r^*) \, d\Pr(q, r^*)$$

where $\Pr(q, r^*)$ is the joint distribution of queries $q$ and target rankings $r^*$.

Empirical Loss Function

Since the expected loss function is not directly computable, the following empirical loss function over the training data is used in practice:

$$L_{\mathrm{empirical}} = -\tau_S(f) = -\frac{1}{n} \sum_{i=1}^{n} \tau(r_{f(q_i)}, r_i^*)$$
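
A sketch of this empirical loss over a toy training set of $n = 2$ queries, using SciPy's kendalltau for each query's tau term:

```python
from scipy.stats import kendalltau

def empirical_loss(target_rankings, predicted_rankings):
    """Negative mean Kendall's tau over the n training queries."""
    taus = [kendalltau(r_star, r_pred)[0]
            for r_star, r_pred in zip(target_rankings, predicted_rankings)]
    return -sum(taus) / len(taus)

# Two queries: perfect agreement on the first, one inversion on the second.
print(empirical_loss([[1, 2, 3], [1, 2, 3]],
                     [[1, 2, 3], [2, 1, 3]]))   # -(1 + 1/3) / 2 ~ -0.667
```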

Collecting Training Data

$n$ i.i.d. queries are applied to a database, and each query has a corresponding target ranking. The training data set thus has $n$ elements, each containing a query and its corresponding ranking method.

Feature Space

A mapping function $\Phi(q, d)$ is required to map each query $q$ and each database element $d$ to a feature space. Each point in the feature space is then labelled with a rank by the ranking method.
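
What $\Phi(q, d)$ looks like depends entirely on the application; the following hypothetical mapping, with made-up match features, is only meant to show the shape of such a function:

```python
import numpy as np

def phi(query_terms, doc_terms, doc_length, pagerank):
    """Hypothetical Phi(q, d): query-document match features plus
    query-independent document features, stacked into one vector."""
    overlap = len(set(query_terms) & set(doc_terms))
    return np.array([
        overlap,                              # raw term overlap
        overlap / max(len(query_terms), 1),   # fraction of query matched
        1.0 / (1.0 + doc_length),             # document length prior
        pagerank,                             # query-independent signal
    ])

print(phi(["ranking", "svm"], ["svm", "tutorial"], 120, 0.3))
```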

Optimization Problem

The points generated from the training data lie in the feature space and carry rank information (their labels). These labeled points can be used to find a boundary (classifier) that specifies their order. In the linear case, such a boundary (classifier) is a vector.

Suppose $c_i$ and $c_j$ are two elements in the database, and denote $(c_i, c_j) \in r$ if the rank of $c_i$ is higher than that of $c_j$ under ranking method $r$. Let the vector $w$ be the linear classifier candidate in the feature space. Then the ranking problem can be translated into the following SVM classification problem (note that one ranking method corresponds to one query):

$$\begin{aligned}
\text{minimize: } & V(w, \xi) = \frac{1}{2} w \cdot w + C \sum \xi_{i,j,k} \\
\text{s.t. } & \forall (c_i, c_j) \in r_1^*: \; w \cdot (\Phi(q_1, c_i) - \Phi(q_1, c_j)) \ge 1 - \xi_{i,j,1}; \\
& \quad \vdots \\
& \forall (c_i, c_j) \in r_n^*: \; w \cdot (\Phi(q_n, c_i) - \Phi(q_n, c_j)) \ge 1 - \xi_{i,j,n}; \\
& \xi_{i,j,k} \ge 0, \quad k \in \{1, 2, \ldots, n\}, \quad i, j \in \{1, 2, \ldots\}.
\end{aligned}$$

The above optimization problem is identical to the classical SVM classification problem, which is why this algorithm is called Ranking-SVM.
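
A sketch of this translation across several queries, assuming precomputed feature vectors $\Phi(q_k, c_i)$ and preference pairs; each constraint becomes one (mirrored) classification example on a difference vector:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical precomputed feature vectors Phi(q_k, c_i), one dict per query.
phi = [
    {"a": np.array([0.9, 0.1]), "b": np.array([0.4, 0.6])},   # query q_1
    {"a": np.array([0.2, 0.8]), "c": np.array([0.7, 0.3])},   # query q_2
]
# Preference pairs (c_i, c_j) in r_k: c_i should rank above c_j.
prefs = [[("a", "b")], [("c", "a")]]

X, y = [], []
for feats, pairs in zip(phi, prefs):
    for ci, cj in pairs:
        # Constraint w . (Phi(q,c_i) - Phi(q,c_j)) >= 1 - xi becomes a
        # positive example; the mirrored difference is a negative one.
        X.append(feats[ci] - feats[cj]); y.append(+1)
        X.append(feats[cj] - feats[ci]); y.append(-1)

w = LinearSVC(C=1.0).fit(np.array(X), np.array(y)).coef_.ravel()
print(w)
```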

Retrieval Function

The optimal vector $w^*$ obtained from the training sample is

$$w^* = \sum \alpha_{k,\ell}^* \Phi(q_k, c_\ell)$$

so the retrieval function is formed from this optimal classifier.
For a new query $q$, the retrieval function first projects all elements of the database onto the feature space, then orders these feature points by the values of their inner products with the optimal vector $w^*$. The rank of each feature point is the rank of the corresponding database element for the query $q$.
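
A sketch of such a retrieval function, with a made-up weight vector and feature map standing in for the trained $w^*$ and $\Phi$:

```python
import numpy as np

def retrieve(w, query, docs, phi_fn):
    """Order database elements for a new query by the inner product of
    w* with each projected feature point; sort position = rank."""
    return sorted(docs, key=lambda d: -np.dot(w, phi_fn(query, d)))

# Hypothetical trained weight vector and feature map, for illustration.
w_star = np.array([1.0, -0.5])
phi_fn = lambda q, d: np.array([d.count(q), len(d) / 10.0])
print(retrieve(w_star, "svm", ["svm tutorial", "svm svm paper", "cooking"]))
# ['svm svm paper', 'svm tutorial', 'cooking']
```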

Application of Ranking SVM

Ranking SVM can be applied to rank pages according to a query. The algorithm is trained using click-through data, which consists of three parts:

  1. The query.
  2. The ranking of search results as presented to the user.
  3. The search results clicked on by the user.

The combination of parts 2 and 3 cannot provide the full ordering of the training data that the standard SVM algorithm requires; instead, it provides only partial ranking information. The algorithm can therefore be slightly revised as follows.

$$\begin{aligned}
\text{minimize: } & V(w, \xi) = \frac{1}{2} w \cdot w + C \sum \xi_{i,j,k} \\
\text{s.t. } & \forall (c_i, c_j) \in r_1': \; w \cdot (\Phi(q_1, c_i) - \Phi(q_1, c_j)) \ge 1 - \xi_{i,j,1}; \\
& \quad \vdots \\
& \forall (c_i, c_j) \in r_n': \; w \cdot (\Phi(q_n, c_i) - \Phi(q_n, c_j)) \ge 1 - \xi_{i,j,n}; \\
& \xi_{i,j,k} \ge 0, \quad k \in \{1, 2, \ldots, n\}, \quad i, j \in \{1, 2, \ldots\}.
\end{aligned}$$

Here $r_k'$ does not provide the ranking information for the whole data set; it is a subset of the full ranking method. Consequently, the constraint set of this optimization problem is more relaxed than that of the original Ranking-SVM.
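
One common way to extract such partial pairs from click logs, following the heuristic in Joachims' clickthrough analysis that a clicked result is preferred over the unclicked results presented above it (function and document names here are illustrative):

```python
def clickthrough_pairs(presented, clicked):
    """Partial preference pairs (preferred, other): each clicked result
    is preferred over every unclicked result ranked above it."""
    clicked = set(clicked)
    pairs = []
    for pos, doc in enumerate(presented):
        if doc in clicked:
            pairs.extend((doc, other) for other in presented[:pos]
                         if other not in clicked)
    return pairs

print(clickthrough_pairs(["d1", "d2", "d3", "d4"], ["d3"]))
# [('d3', 'd1'), ('d3', 'd2')]
```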
