Solr scoring - fieldNorm

I have the following records and the scores against them when I search for "iphone":

Record1: FieldName - DisplayName : "Iphone" FieldName - Name : "Iphone"

11.654595 = (MATCH) sum of:
  11.654595 = (MATCH) max plus 0.01 times others of:
    7.718274 = (MATCH) weight(DisplayName:iphone^10.0 in 915195), product of:
      0.6654692 = queryWeight(DisplayName:iphone^10.0), product of:
        10.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      11.598244 = (MATCH) fieldWeight(DisplayName:iphone in 915195), product of:
        1.0 = tf(termFreq(DisplayName:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        1.0 = fieldNorm(field=DisplayName, doc=915195)
    11.577413 = (MATCH) weight(Name:iphone^15.0 in 915195), product of:
      0.99820393 = queryWeight(Name:iphone^15.0), product of:
        15.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      11.598244 = (MATCH) fieldWeight(Name:iphone in 915195), product of:
        1.0 = tf(termFreq(Name:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        1.0 = fieldNorm(field=Name, doc=915195)

Record2: FieldName - DisplayName : "The Iphone Book" FieldName - Name : "The Iphone Book"

7.284122 = (MATCH) sum of:
  7.284122 = (MATCH) max plus 0.01 times others of:
    4.823921 = (MATCH) weight(DisplayName:iphone^10.0 in 453681), product of:
      0.6654692 = queryWeight(DisplayName:iphone^10.0), product of:
        10.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      7.2489023 = (MATCH) fieldWeight(DisplayName:iphone in 453681), product of:
        1.0 = tf(termFreq(DisplayName:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.625 = fieldNorm(field=DisplayName, doc=453681)
    7.2358828 = (MATCH) weight(Name:iphone^15.0 in 453681), product of:
      0.99820393 = queryWeight(Name:iphone^15.0), product of:
        15.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      7.2489023 = (MATCH) fieldWeight(Name:iphone in 453681), product of:
        1.0 = tf(termFreq(Name:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.625 = fieldNorm(field=Name, doc=453681)

Record3: FieldName - DisplayName: "iPhone" FieldName - Name: "iPhone"

7.284122 = (MATCH) sum of:
  7.284122 = (MATCH) max plus 0.01 times others of:
    4.823921 = (MATCH) weight(DisplayName:iphone^10.0 in 5737775), product of:
      0.6654692 = queryWeight(DisplayName:iphone^10.0), product of:
        10.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      7.2489023 = (MATCH) fieldWeight(DisplayName:iphone in 5737775), product of:
        1.0 = tf(termFreq(DisplayName:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.625 = fieldNorm(field=DisplayName, doc=5737775)
    7.2358828 = (MATCH) weight(Name:iphone^15.0 in 5737775), product of:
      0.99820393 = queryWeight(Name:iphone^15.0), product of:
        15.0 = boost
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.0057376726 = queryNorm
      7.2489023 = (MATCH) fieldWeight(Name:iphone in 5737775), product of:
        1.0 = tf(termFreq(Name:iphone)=1)
        11.598244 = idf(docFreq=484, maxDocs=19431244)
        0.625 = fieldNorm(field=Name, doc=5737775)

Why do Record2 and Record3 have the same score, when Record2 has three words and Record3 has just one? Record3 should have higher relevancy than Record2. Why is the fieldNorm of Record2 and Record3 the same?

QueryParser: DisMax. FieldType: the default "text" field type (defined in schema.xml).
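For context, the "max plus 0.01 times others" line in the explain output is the DisMax tie-breaker: the winning field's score plus tie (here 0.01) times the scores of the other fields. A minimal sketch of that combination (the class and method names are illustrative, not a Solr API):

```java
public class DisMaxTie {
    // DisMax combines per-field scores as: max + tie * (sum of the others).
    // With tie = 0.01, Record1's score is 11.577413 + 0.01 * 7.718274 = 11.654595.
    static float dismaxScore(float tie, float... fieldScores) {
        float max = Float.NEGATIVE_INFINITY;
        float sum = 0f;
        for (float s : fieldScores) {
            max = Math.max(max, s);
            sum += s;
        }
        return max + tie * (sum - max);
    }

    public static void main(String[] args) {
        System.out.println(dismaxScore(0.01f, 11.577413f, 7.718274f)); // ~11.654595 (Record1)
        System.out.println(dismaxScore(0.01f, 7.2358828f, 4.823921f)); // ~7.284122  (Record2/3)
    }
}
```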

Adding DataFeed:

Record1: Iphone

{
        "ListPrice":1184.526,
        "ShipsTo":1,
        "OID":"190502",
        "EAN":"9780596804299",
        "ISBN":"0596804296",
        "Author":"Pogue, David",
        "product_type_fq":"Books",
        "ShipmentDurationDays":"21",
        "CurrencyValue":"24.9900",
        "ShipmentDurationText":"NORMALLY SHIPS IN 21 BUSINESS DAYS",
        "Availability":0,
        "COD":0,
        "PublicationDate":"2009-08-07 00:00:00.0",
        "Discount":"25",
        "SubCategory_fq":"Hardware",
        "Binding":"Paperback",
        "Category_fq":"Non Classifiable",
        "ShippingCharges":"0",
        "OIDType":8,
        "Pages":"397",
        "CallOrder":"0",
        "TrackInventory":"Ingram",
        "Author_fq":"Pogue, David",
        "DisplayName":"Iphone",
        "url":"/iphone-pogue-david/books/9780596804299.htm",
        "CurrencyType":"USD",
        "SubSubCategory":"Handheld Devices",
        "Mask":0,
        "Publisher":"Oreilly & Associates Inc",
        "Name":"Iphone",
        "Language":"English",
        "DisplayPriority":"999",
        "rowid":"books_9780596804299"
        }

Record2: The Iphone Book

{
        "ListPrice":1184.526,
        "ShipsTo":1,
        "OID":"94694",
        "EAN":"9780321534101",
        "ISBN":"0321534107",
        "Author":"Kelby, Scott/ White, Terry",
        "product_type_fq":"Books",
        "ShipmentDurationDays":"21",
        "CurrencyValue":"24.9900",
        "ShipmentDurationText":"NORMALLY SHIPS IN 21 BUSINESS DAYS",
        "Availability":1,
        "COD":0,
        "PublicationDate":"2007-08-13 00:00:00.0",
        "Discount":"25",
        "SubCategory_fq":"Handheld Devices",
        "Binding":"Paperback",
        "BAMcategory_src":"Computers",
        "Category_fq":"Computers",
        "ShippingCharges":"0",
        "OIDType":8,
        "Pages":"219",
        "CallOrder":"0",
        "TrackInventory":"Ingram",
        "Author_fq":"Kelby, Scott/ White, Terry",
        "DisplayName":"The Iphone Book",
        "url":"/iphone-book-kelby-scott-white-terry/books/9780321534101.htm",
        "CurrencyType":"USD",
        "SubSubCategory":" Handheld Devices",
        "BAMcategory_fq":"Computers",
        "Mask":0,
        "Publisher":"Pearson P T R",
        "Name":"The Iphone Book",
        "Language":"English",        
        "DisplayPriority":"999",
        "rowid":"books_9780321534101"
        }

Record 3: iPhone

{
        "ListPrice":278.46,
        "ShipsTo":1,
        "OID":"694715",
        "EAN":"9781411423527",
        "ISBN":"1411423526",
        "Author":"Quamut (COR)",
        "product_type_fq":"Books",
        "ShipmentDurationDays":"21",
        "CurrencyValue":"5.9500",
        "ShipmentDurationText":"NORMALLY SHIPS IN 21 BUSINESS DAYS",
        "Availability":0,
        "COD":0,
        "PublicationDate":"2010-08-03 00:00:00.0",
        "Discount":"25",
        "SubCategory_fq":"Hardware",
        "Binding":"Paperback",
        "Category_fq":"Non Classifiable",
        "ShippingCharges":"0",
        "OIDType":8,
        "CallOrder":"0",        
        "TrackInventory":"BNT",
        "Author_fq":"Quamut (COR)",
        "DisplayName":"iPhone",
        "url":"/iphone-quamut-cor/books/9781411423527.htm",
        "CurrencyType":"USD",
        "SubSubCategory":"Handheld Devices",
        "Mask":0,
        "Publisher":"Sterling Pub Co Inc",
        "Name":"iPhone",
        "Language":"English",
        "DisplayPriority":"999",
        "rowid":"books_9781411423527"
        }         

asked Nov 8 '11 at 17:11

I don't see any difference between Record 1 and Record 3, so I see no reason the results would differ. Can you confirm the scores belong to Record 3, and provide more info on the query parser, field types, and the data fed? -

Hi, I am using the dismax query parser, and the scores do belong to Record 3. I have cross-checked, and the scores of all three records are as mentioned above. The field types are the default "text" field type defined in schema.xml. -

Can you also provide the data feed? -

Hi Jayendra, I have edited my question with the feed. Thanks. -

2 Answers

fieldNorm takes the field length into account, i.e. the number of terms.
The field type used for the DisplayName and Name fields is text, which applies the stopword and word delimiter filters.

Record 1 - "Iphone"
Generates a single token: iphone

Record 2 - "The Iphone Book"
Generates 2 tokens: iphone, book
"The" is removed by the stopword filter.

Record 3 - "iPhone"
Also generates 2 tokens: i, phone
Because iPhone has a case change, the word delimiter filter with splitOnCaseChange splits it into the two tokens i and phone, which produces the same fieldNorm as Record 2.
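The combined effect of the two filters can be imitated with a tiny sketch (a crude stand-in for Solr's real analysis chain; the class name, the regex split, and the one-entry stopword set are illustrative assumptions, not the actual StopFilter/WordDelimiterFilter code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Set;

public class TokenSketch {
    // Assumed stopword set; a real deployment reads stopwords.txt.
    static final Set<String> STOPWORDS = Set.of("the");

    static List<String> analyze(String text) {
        List<String> tokens = new ArrayList<>();
        for (String word : text.split("\\s+")) {
            // Crude stand-in for WordDelimiterFilter's splitOnCaseChange:
            // break at every lowercase-to-uppercase transition.
            for (String part : word.split("(?<=\\p{Lower})(?=\\p{Upper})")) {
                String token = part.toLowerCase(Locale.ROOT);
                if (!STOPWORDS.contains(token)) {
                    tokens.add(token);
                }
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(analyze("Iphone"));          // [iphone]        -> 1 term
        System.out.println(analyze("The Iphone Book")); // [iphone, book]  -> 2 terms
        System.out.println(analyze("iPhone"));          // [i, phone]      -> 2 terms
    }
}
```

Records 2 and 3 both end up with two indexed terms, hence identical field lengths and identical fieldNorm values.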

answered Nov 9 '11 at 11:10

Hi Jayendra, thanks, that makes sense. But for another data set I am getting the wrong relevance. Record1: "The Real Da Vinci Code" and Record2: "The Da Vinci Code". When I search for "da vinci code", I get the same score for both records because the fieldNorm is the same. Both use the same "text" field type. Using Solr analysis I see that at index time Record1 becomes "real da vinci code" and Record2 becomes "da vinci code". Why, then, are the two scores the same? - user1021590

Can you provide the data field and the debug score? - Jayendra

Hi Jayendra, the data field is "text". I have added the debug score as part of an answer. - user1021590

This is the answer to user1021590's follow-up question on the "da vinci code" search example.

The reason both documents get the same score is a subtle implementation detail of lengthNorm. The Lucene TFIDFSimilarity doc states the following about norm(t, d):

the resulted norm value is encoded as a single byte before being stored. At search time, the norm byte value is read from the index directory and decoded back to a float norm value. This encoding/decoding, while reducing index size, comes with the price of precision loss - it is not guaranteed that decode(encode(x)) = x. For instance, decode(encode(0.89)) = 0.75.

If you dig into the code, you see that this float-to-byte encoding is implemented as follows:

public static byte floatToByte315(float f)
{
    int bits = Float.floatToRawIntBits(f);
    int smallfloat = bits >> (24 - 3);  // keep sign, exponent, and top 3 mantissa bits
    if (smallfloat <= ((63 - 15) << 3))
    {
        // underflow: clamp to the smallest representable values
        return (bits <= 0) ? (byte) 0 : (byte) 1;
    }
    if (smallfloat >= ((63 - 15) << 3) + 0x100)
    {
        // overflow: clamp to the largest representable value (0xFF)
        return -1;
    }
    return (byte) (smallfloat - ((63 - 15) << 3));
}

and the decoding of that byte to float is done as:

public static float byte315ToFloat(byte b)
{
    if (b == 0)
        return 0.0f;                      // zero maps back to zero
    int bits = (b & 0xff) << (24 - 3);    // restore mantissa/exponent position
    bits += (63 - 15) << 24;              // re-apply the exponent offset
    return Float.intBitsToFloat(bits);
}

lengthNorm is computed as 1 / sqrt(number of terms in the field), and the result is encoded for storage using floatToByte315. For a field with 3 terms, we get:

floatToByte315( 1/sqrt(3.0) ) = 120

and for a field with 4 terms, we get:

floatToByte315( 1/sqrt(4.0) ) = 120

so both of them get decoded to:

byte315ToFloat(120) = 0.5.
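Putting the two methods above together makes the collision easy to verify (NormRoundTrip is just a scratch class wrapping the snippets; it is not part of Lucene):

```java
public class NormRoundTrip {
    // Same encode/decode as the Lucene snippets quoted above.
    public static byte floatToByte315(float f) {
        int bits = Float.floatToRawIntBits(f);
        int smallfloat = bits >> (24 - 3);
        if (smallfloat <= ((63 - 15) << 3)) {
            return (bits <= 0) ? (byte) 0 : (byte) 1;
        }
        if (smallfloat >= ((63 - 15) << 3) + 0x100) {
            return -1;
        }
        return (byte) (smallfloat - ((63 - 15) << 3));
    }

    public static float byte315ToFloat(byte b) {
        if (b == 0) return 0.0f;
        int bits = (b & 0xff) << (24 - 3);
        bits += (63 - 15) << 24;
        return Float.intBitsToFloat(bits);
    }

    public static void main(String[] args) {
        byte n3 = floatToByte315((float) (1.0 / Math.sqrt(3.0))); // 3-term field, ~0.577
        byte n4 = floatToByte315((float) (1.0 / Math.sqrt(4.0))); // 4-term field, 0.5
        System.out.println(n3 + " " + n4);        // both encode to 120
        System.out.println(byte315ToFloat(n3));   // both decode to 0.5
    }
}
```

So "real da vinci code" (4 terms) and "da vinci code" (3 terms) store the same one-byte norm and decode to the same fieldNorm of 0.5.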

The doc also states this:

The rationale supporting such lossy compression of norm values is that given the difficulty (and inaccuracy) of users to express their true information need by a query, only big differences matter.

UPDATE: As of Solr 4.10, this implementation and corresponding statements are part of DefaultSimilarity.

answered Dec 25 '14 at 7:12

Hey, I am facing the same issue: for documents with 3 and 4 terms the fieldNorm is the same, which affects the search results. Is there any solution for this? - sravan_kumar

I will override the lengthNorm method and try to handle the 3- and 4-term cases specially. - sravan_kumar
