public class RankingMetrics<T>
extends Object
implements scala.Serializable
Java users should use RankingMetrics$.of to create a RankingMetrics instance.

param: predictionAndLabels an RDD of (predicted ranking, ground truth set) pairs.
| Constructor and Description |
| --- |
| RankingMetrics(RDD&lt;scala.Tuple2&lt;Object,Object&gt;&gt; predictionAndLabels, scala.reflect.ClassTag&lt;T&gt; evidence$1) |
| Modifier and Type | Method and Description |
| --- | --- |
| double | meanAveragePrecision() Returns the mean average precision (MAP) of all the queries. |
| double | ndcgAt(int k) Compute the average NDCG value of all the queries, truncated at ranking position k. |
| static &lt;E,T extends Iterable&lt;E&gt;&gt; RankingMetrics&lt;E&gt; | of(JavaRDD&lt;scala.Tuple2&lt;T,T&gt;&gt; predictionAndLabels) Creates a RankingMetrics instance (for Java users). |
| double | precisionAt(int k) Compute the average precision of all the queries, truncated at ranking position k. |
public static <E,T extends Iterable<E>> RankingMetrics<E> of(JavaRDD<scala.Tuple2<T,T>> predictionAndLabels)

Creates a RankingMetrics instance (for Java users).

Parameters:
predictionAndLabels - a JavaRDD of (predicted ranking, ground truth set) pairs
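For example, a Java caller might build a RankingMetrics instance from such pairs roughly as follows. This is a minimal sketch: the item IDs, the local[2] master, and the RankingMetricsExample class name are illustrative assumptions, not part of this API reference.

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.evaluation.RankingMetrics;

import scala.Tuple2;

public class RankingMetricsExample {
  public static void main(String[] args) {
    // Local context for the sketch; a real job would reuse an existing SparkContext.
    JavaSparkContext jsc = new JavaSparkContext("local[2]", "RankingMetricsExample");

    // Each pair holds (predicted ranking, ground truth set); the item IDs are made up.
    List<Tuple2<List<Integer>, List<Integer>>> data = Arrays.asList(
        new Tuple2<List<Integer>, List<Integer>>(
            Arrays.asList(1, 6, 2, 7, 8, 3, 9, 10, 4, 5), Arrays.asList(1, 2, 3, 4, 5)),
        new Tuple2<List<Integer>, List<Integer>>(
            Arrays.asList(4, 1, 5, 6, 2, 7, 3, 8, 9, 10), Arrays.asList(1, 2, 3)));
    JavaRDD<Tuple2<List<Integer>, List<Integer>>> predictionAndLabels = jsc.parallelize(data);

    // Create the metrics instance via the Java-friendly factory method.
    RankingMetrics<Integer> metrics = RankingMetrics.of(predictionAndLabels);

    System.out.println("Precision@5 = " + metrics.precisionAt(5));
    System.out.println("MAP         = " + metrics.meanAveragePrecision());
    System.out.println("NDCG@5      = " + metrics.ndcgAt(5));

    jsc.stop();
  }
}
```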
public double precisionAt(int k)

Compute the average precision of all the queries, truncated at ranking position k.

If for a query, the ranking algorithm returns n (n < k) results, the precision value will be computed as #(relevant items retrieved) / k. This formula also applies when the size of the ground truth set is less than k.

If a query has an empty ground truth set, zero will be used as precision together with a log warning.

See the following paper for detail:

IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen

Parameters:
k - the position to compute the truncated precision, must be positive

public double meanAveragePrecision()

Returns the mean average precision (MAP) of all the queries.
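As a rough illustration of the truncated-precision formula above, the sketch below computes precision at k for a single query; the precisionAt helper, item IDs, and k = 5 are assumptions made for this example, not part of the API (the class itself averages this value over all queries).

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PrecisionAtSketch {
  // Precision@k for one query, following the #(relevant items retrieved) / k formula.
  static double precisionAt(List<Integer> predicted, Set<Integer> groundTruth, int k) {
    int n = Math.min(k, predicted.size());           // the algorithm may return fewer than k results
    int relevantRetrieved = 0;
    for (int i = 0; i < n; i++) {
      if (groundTruth.contains(predicted.get(i))) {
        relevantRetrieved++;
      }
    }
    return (double) relevantRetrieved / k;           // divide by k even when fewer than k were returned
  }

  public static void main(String[] args) {
    List<Integer> predicted = Arrays.asList(1, 6, 2);               // only 3 results returned
    Set<Integer> groundTruth = new HashSet<>(Arrays.asList(1, 2, 3, 5));
    // 2 of the 3 returned items are relevant, so precision@5 = 2 / 5 = 0.4
    System.out.println(precisionAt(predicted, groundTruth, 5));
  }
}
```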
public double ndcgAt(int k)

Compute the average NDCG value of all the queries, truncated at ranking position k.

If a query has an empty ground truth set, zero will be used as NDCG together with a log warning.

See the following paper for detail:

IR evaluation methods for retrieving highly relevant documents. K. Jarvelin and J. Kekalainen

Parameters:
k - the position to compute the truncated NDCG, must be positive
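The sketch below illustrates a truncated NDCG computation for a single query under the common binary-relevance convention (gain 1 for items in the ground truth set, discount 1 / log2(position + 1)); the ndcgAt helper and sample data are assumptions for illustration, not a statement of the exact internal implementation.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class NdcgAtSketch {
  // NDCG@k for one query, assuming binary relevance and a 1 / log2(position + 1) discount.
  static double ndcgAt(List<Integer> predicted, Set<Integer> groundTruth, int k) {
    if (groundTruth.isEmpty()) {
      return 0.0;                                        // doc: empty ground truth set yields zero
    }
    double dcg = 0.0;
    int n = Math.min(k, predicted.size());
    for (int i = 0; i < n; i++) {
      if (groundTruth.contains(predicted.get(i))) {
        dcg += 1.0 / (Math.log(i + 2) / Math.log(2));    // position i holds rank i + 1
      }
    }
    // Ideal DCG: all relevant items ranked first, truncated at k.
    double idcg = 0.0;
    int ideal = Math.min(k, groundTruth.size());
    for (int i = 0; i < ideal; i++) {
      idcg += 1.0 / (Math.log(i + 2) / Math.log(2));
    }
    return dcg / idcg;
  }

  public static void main(String[] args) {
    List<Integer> predicted = Arrays.asList(1, 6, 2, 7, 3);
    Set<Integer> groundTruth = new HashSet<>(Arrays.asList(1, 2, 3, 5));
    // DCG@5  = 1/log2(2) + 1/log2(4) + 1/log2(6)             ~= 1.887
    // IDCG@5 = 1/log2(2) + 1/log2(3) + 1/log2(4) + 1/log2(5) ~= 2.562
    System.out.println(ndcgAt(predicted, groundTruth, 5));    // ~0.737
  }
}
```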