
Introduction to Information Retrieval
Jian-Yun Nie
University of Montreal, Canada

Outline
- What is the IR problem?
- How to organize an IR system? (the main processes in IR)
- Indexing
- Retrieval
- System evaluation
- Some current research topics

The problem of IR
Goal: find documents relevant to an information need from a large document set.
(Diagram: an information need is expressed as a query; the IR system matches the query against the document collection and returns an answer list.)

Example
(Screenshot: a Google search over the Web.)

IR problem
First applications: in libraries (1950s). Example catalogue record:
  ISBN: 0-201-12227-8
  Author: Salton, Gerard
  Title: Automatic text processing: the transformation, analysis, and retrieval of information by computer
  Publisher: Addison-Wesley
  Date: 1989
  Content: text
A document has external attributes and an internal attribute (its content).
- Search by external attributes: database search
- IR: search by content

Possible approaches
1. String matching (linear search in documents)
   - Slow
   - Difficult to improve
2. Indexing (*)
   - Fast
   - Open to further improvement

Indexing-based IR
(Diagram: the document goes through indexing and the query goes through indexing/query analysis; both yield keyword representations, which are compared in the query evaluation step.)

Main problems in IR
- Document and query indexing: how to best represent their contents?
- Query evaluation (or retrieval process): to what extent does a document correspond to a query?
- System evaluation: how good is a system? Are the retrieved documents relevant (precision)? Are all the relevant documents retrieved (recall)?

Document indexing
Goal: find the important meanings and create an internal representation.
Factors to consider:
- Accuracy in representing meanings (semantics)
- Exhaustiveness (coverage of all the contents)
- Ease of manipulation by computer
What is the best representation of contents?
- Character string (character trigrams): not precise enough
- Word: good coverage, not precise
- Phrase: less coverage, more precise
- Concept: poor coverage, precise
(Figure: string, word, phrase and concept plotted on axes of coverage (recall) vs. accuracy (precision).)

Keyword selection and weighting
How to select important keywords?
Simple method: use middle-frequency words.
(Figure: frequency and informativity curves plotted against word rank; informativity is highest for words of middle frequency.)

tf*idf weighting scheme
- tf (term frequency): frequency of a term/keyword in a document. The higher the tf, the higher the importance (weight) of the term for the document.
- df (document frequency): number of documents containing the term; describes the distribution of the term.
- idf (inverse document frequency): the unevenness of the term's distribution in the corpus, i.e. the specificity of the term to a document. The more evenly a term is distributed, the less specific it is to any document.
weight(t,D) = tf(t,D) * idf(t)

Some common tf*idf schemes
- tf(t,D) = freq(t,D)
- tf(t,D) = log[freq(t,D)]
- tf(t,D) = log[freq(t,D)] + 1
- tf(t,D) = freq(t,D) / Max[freq(t,D)]
- idf(t) = log(N/n), where n = number of documents containing t and N = number of documents in the corpus
weight(t,D) = tf(t,D) * idf(t)
Normalization: cosine normalization, division by max, ...
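To make the weighting concrete, here is a minimal sketch in Python of one of the schemes above (raw tf with idf = log(N/n)). The toy corpus and whitespace tokenization are illustrative assumptions, not part of the original slides.

```python
import math
from collections import Counter

# Toy corpus (illustrative only).
docs = {
    "D1": "computer architecture and computer networks",
    "D2": "network protocols and network security",
}

N = len(docs)                                              # documents in the corpus
tf = {d: Counter(text.split()) for d, text in docs.items()}
df = Counter(t for counts in tf.values() for t in counts)  # document frequency

def tfidf(term, doc):
    """weight(t, D) = tf(t, D) * log(N / n), one of the schemes above."""
    if df[term] == 0:
        return 0.0
    return tf[doc][term] * math.log(N / df[term])

print(tfidf("computer", "D1"))   # occurs only in D1 -> positive weight
print(tfidf("and", "D1"))        # occurs in both docs -> idf = log(1) = 0
```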

Document length normalization
Sometimes additional normalizations are applied, e.g. for document length (pivoted normalization):
  pivoted_normalized_weight(t,D) = weight(t,D) / ((1 - slope) * pivot + slope * normalization_factor(D))
(Figure: probability of relevance and probability of retrieval plotted against document length; the two curves cross at the pivot, and the slope corrects the discrepancy.)
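A small sketch of pivoted length normalization as reconstructed above, using document length as the normalization factor; the slope and pivot values are arbitrary assumptions for illustration.

```python
def pivoted_weight(weight, doc_length, pivot, slope=0.2):
    # Divide the raw weight by a pivoted normalization factor:
    #   (1 - slope) * pivot + slope * doc_length
    # Documents longer than the pivot are penalized, shorter ones are boosted.
    return weight / ((1 - slope) * pivot + slope * doc_length)

# Same raw weight, different document lengths (pivot = average length).
print(pivoted_weight(1.0, doc_length=50,  pivot=100))   # short doc -> higher weight
print(pivoted_weight(1.0, doc_length=300, pivot=100))   # long doc  -> lower weight
```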

Stopwords / stoplist
Function words do not bear useful information for IR: of, in, about, with, I, although, ...
Stoplist: contains stopwords, which are not used as index terms:
- Prepositions
- Articles
- Pronouns
- Some adverbs and adjectives
- Some frequent words (e.g. "document")
The removal of stopwords usually improves IR effectiveness.
A few "standard" stoplists are commonly used.

Stemming
Reason: different word forms may bear similar meaning (e.g. search, searching); create a "standard" representation for them.
Stemming: removing some endings of words, e.g.
  computer, compute, computes, computing, computed, computation -> comput

Porter algorithm (Porter, M.F., 1980, An algorithm for suffix stripping, Program, 14(3):130-137)
Step 1: plurals and past participles
  SSES -> SS              caresses -> caress
  (*v*) ING ->            motoring -> motor
Step 2: adj -> n, n -> v, n -> adj, ...
  (m>0) ATIONAL -> ATE    relational -> relate
  (m>0) OUSNESS -> OUS    callousness -> callous
Step 3:
  (m>0) ICATE -> IC       triplicate -> triplic
Step 4:
  (m>1) AL ->             revival -> reviv
  (m>1) ANCE ->           allowance -> allow
Step 5:
  (m>1) E ->              probate -> probat
  (m>1 and *d and *L) -> single letter    controll -> control
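The Porter stemmer is available in common NLP toolkits; a minimal example using NLTK's implementation (assuming nltk is installed) reproduces the conflation shown in the stemming slide above.

```python
from nltk.stem import PorterStemmer   # requires: pip install nltk

stemmer = PorterStemmer()
words = ["computer", "compute", "computes", "computing", "computed", "computation"]

# All forms are reduced to (nearly) the same stem, e.g. "comput".
print([stemmer.stem(w) for w in words])
```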

Lemmatization
Transform words to a standard form according to their syntactic category, e.g. verb+ing -> verb, noun+s -> noun.
Needs POS tagging; more accurate than stemming, but needs more resources.
It is crucial to choose the stemming/lemmatization rules well: a compromise between noise and recognition rate, i.e. between precision and recall.
- Light/no stemming: lower recall, higher precision
- Severe stemming: higher recall, lower precision

Result of indexing
Each document is represented by a set of weighted keywords (terms):
  D1 -> {(t1, w1), (t2, w2), ...}
e.g.
  D1 -> {(comput, 0.2), (architect, 0.3), ...}
  D2 -> {(comput, 0.1), (network, 0.5), ...}
Inverted file:
  comput -> {(D1, 0.2), (D2, 0.1), ...}
The inverted file is used during retrieval for higher efficiency.
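A minimal sketch of building an inverted file from weighted document representations like those above; the weights are the illustrative values from the slide.

```python
from collections import defaultdict

# Documents as sets of weighted keywords (values taken from the slide).
documents = {
    "D1": {"comput": 0.2, "architect": 0.3},
    "D2": {"comput": 0.1, "network": 0.5},
}

# Inverted file: term -> list of (document, weight) postings.
inverted = defaultdict(list)
for doc_id, terms in documents.items():
    for term, weight in terms.items():
        inverted[term].append((doc_id, weight))

print(inverted["comput"])   # [('D1', 0.2), ('D2', 0.1)]
```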

Retrieval
The problems underlying retrieval:
- Retrieval model: how is a document represented with the selected keywords? How are document and query representations compared to calculate a score?
- Implementation

Cases
1-word query: the documents to be retrieved are those that include the word.
- Retrieve the inverted list for the word
- Sort in decreasing order of the weight of the word
Multi-word query?
- Combine several lists
- How to interpret the weights? (IR model)

IR models
Matching score model:
- Document D = a set of weighted keywords
- Query Q = a set of non-weighted keywords
- R(D, Q) = Σi w(ti, D), where ti is in Q

Boolean model
- Document = logical conjunction of keywords
- Query = Boolean expression of keywords
- R(D, Q) = 1 iff D implies Q
e.g. D = t1 AND t2 AND ... AND tn
     Q = (t1 AND t2) OR (t3 AND t4)
     D implies Q, thus R(D, Q) = 1.
Problems:
- R is either 1 or 0 (an unordered set of documents); too many or too few documents are returned
- End-users cannot manipulate Boolean operators correctly, e.g. for documents about "kangaroos and koalas"

Extensions to Boolean model (for document ordering)
D = {..., (ti, wi), ...}: weighted keywords
Interpretation: D is a member of class ti to degree wi; in terms of fuzzy sets, ti(D) = wi.
A possible evaluation:
  R(D, ti) = ti(D)
  R(D, Q1 AND Q2) = min(R(D, Q1), R(D, Q2))
  R(D, Q1 OR Q2) = max(R(D, Q1), R(D, Q2))
  R(D, NOT Q1) = 1 - R(D, Q1)
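A small sketch of the fuzzy-set evaluation above. The nested-tuple query form ("AND", q1, q2) is a hypothetical representation chosen for the example, not something prescribed by the slides.

```python
def R(doc, query):
    """Fuzzy evaluation of a Boolean query against weighted keywords."""
    if isinstance(query, str):                 # a single term ti
        return doc.get(query, 0.0)             # membership degree wi
    op, *args = query
    if op == "AND":
        return min(R(doc, q) for q in args)
    if op == "OR":
        return max(R(doc, q) for q in args)
    if op == "NOT":
        return 1.0 - R(doc, args[0])
    raise ValueError(op)

D = {"t1": 0.8, "t2": 0.3, "t3": 0.6}
Q = ("OR", ("AND", "t1", "t2"), ("NOT", "t3"))
print(R(D, Q))   # max(min(0.8, 0.3), 1 - 0.6) = 0.4
```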

Vector space model
Vector space = all the keywords encountered: <t1, t2, t3, ..., tn>
Document D = <a1, a2, a3, ..., an>, where ai = weight of ti in D
Query Q = <b1, b2, b3, ..., bn>, where bi = weight of ti in Q
R(D, Q) = Sim(D, Q)

Matrix representation
Document space (rows = documents, columns = terms t1 ... tn):
  D1: a11 a12 a13 ... a1n
  D2: a21 a22 a23 ... a2n
  D3: a31 a32 a33 ... a3n
  ...
  Dm: am1 am2 am3 ... amn
  Q:  b1  b2  b3  ... bn
Each row is a vector in the term vector space.

Some formulas for Sim
Dot product:  Sim(D, Q) = Σi (ai * bi)
Cosine:       Sim(D, Q) = Σi (ai * bi) / sqrt(Σi ai^2 * Σi bi^2)
Dice:         Sim(D, Q) = 2 * Σi (ai * bi) / (Σi ai^2 + Σi bi^2)
Jaccard:      Sim(D, Q) = Σi (ai * bi) / (Σi ai^2 + Σi bi^2 - Σi (ai * bi))
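The formulas above in a minimal Python sketch, assuming the document and query are dense weight vectors over the same term space (the example vectors are invented for illustration).

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))

def dice(a, b):
    return 2 * dot(a, b) / (sum(x * x for x in a) + sum(y * y for y in b))

def jaccard(a, b):
    return dot(a, b) / (sum(x * x for x in a) + sum(y * y for y in b) - dot(a, b))

D = [0.2, 0.0, 0.3]   # weights of t1..t3 in the document
Q = [0.5, 0.5, 0.0]   # weights of t1..t3 in the query
print(dot(D, Q), cosine(D, Q), dice(D, Q), jaccard(D, Q))
```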

Implementation (space)
The matrix is very sparse: a few hundred terms per document and a few terms per query, while the term space is large (> 100k terms).
Stored as:
  D1 -> {(t1, a1), (t2, a2), ...}   (document file)
  t1 -> {(D1, a1), ...}             (inverted file)

Implementation (time)
Implementation of the VSM with dot product:
- Naive implementation: O(m*n)
- Implementation using the inverted file. Given a query {(t1, b1), (t2, b2)}:
  1. Find the sets of related documents through the inverted file for t1 and t2
  2. Calculate the score of each document for each weighted query term: (t1, b1) -> {(D1, a1*b1), ...}
  3. Combine the sets and sum the weights
  Complexity: O(|Q| * n)
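A sketch of steps 1 to 3 above: scoring documents with a dot product by walking only the inverted lists of the query terms. The postings and weights are illustrative.

```python
from collections import defaultdict

# Inverted file: term -> list of (doc_id, term weight in that doc).
inverted = {
    "t1": [("D1", 0.2), ("D3", 0.4)],
    "t2": [("D1", 0.1), ("D2", 0.5)],
}

def retrieve(query):
    """query: dict term -> query weight. Returns docs sorted by dot-product score."""
    scores = defaultdict(float)
    for term, b in query.items():                 # 1. fetch the inverted list per term
        for doc_id, a in inverted.get(term, []):  # 2. partial score a_i * b_i
            scores[doc_id] += a * b               # 3. accumulate over query terms
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)

print(retrieve({"t1": 1.0, "t2": 0.5}))   # only docs containing t1 or t2 are touched
```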

Other similarities
Cosine:
  Sim(D, Q) = Σi (ai * bi) / (sqrt(Σj aj^2) * sqrt(Σj bj^2))
            = Σi [ai / sqrt(Σj aj^2)] * [bi / sqrt(Σj bj^2)]
One can use ai / sqrt(Σj aj^2) and bi / sqrt(Σj bj^2) to normalize the weights after indexing; the cosine then reduces to a dot product.
(Similar operations do not apply to Dice and Jaccard.)

Probabilistic model
Given D, estimate P(R|D) and P(NR|D).
P(R|D) = P(D|R) * P(R) / P(D)   (P(D) and P(R) constant)
D = {t1 = x1, t2 = x2, ...}, where xi = 1 if ti is present in D and 0 if absent.
P(D|R)  = Πi P(ti = xi | R)  = Πi pi^xi * (1 - pi)^(1 - xi)
P(D|NR) = Πi P(ti = xi | NR) = Πi qi^xi * (1 - qi)^(1 - xi)
where pi = P(ti = 1 | R) and qi = P(ti = 1 | NR).

Probabilistic model (cont'd)
For document ranking:
Odd(D) = log [P(D|R) / P(D|NR)]
       = log Πi [pi^xi (1 - pi)^(1 - xi)] / [qi^xi (1 - qi)^(1 - xi)]
       = Σi xi * log [pi (1 - qi) / (qi (1 - pi))] + Σi log [(1 - pi) / (1 - qi)]
       ∝ Σi xi * log [pi (1 - qi) / (qi (1 - pi))]

Probabilistic model (cont'd)
How to estimate pi and qi? Use a set of N judged (relevant and irrelevant) samples:

                     relevant     irrelevant          total
  docs with ti       ri           ni - ri             ni
  docs without ti    Ri - ri      N - Ri - ni + ri    N - ni
  total              Ri           N - Ri              N

  pi = ri / Ri
  qi = (ni - ri) / (N - Ri)

Probabilistic model (cont'd)
Odd(D) = Σ(ti in D) xi * log [pi (1 - qi) / (qi (1 - pi))]
       = Σ(ti in D) xi * log [ri (N - Ri - ni + ri) / ((Ri - ri)(ni - ri))]
Smoothing (Robertson-Sparck-Jones formula):
Odd(D) = Σ(ti in D) xi * log [(ri + 0.5)(N - Ri - ni + ri + 0.5) / ((Ri - ri + 0.5)(ni - ri + 0.5))] = Σ(ti in D) wi
When no sample is available: pi = 0.5, qi = (ni + 0.5)/(N + 0.5) ≈ ni/N.
May be implemented as a VSM.
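A minimal sketch of the smoothed Robertson-Sparck-Jones weight above, given the counts ri, Ri, ni, N from the contingency table; the example numbers are invented for illustration.

```python
import math

def rsj_weight(r, R, n, N):
    """Smoothed Robertson-Sparck-Jones term weight.
    r: relevant docs containing the term, R: relevant docs,
    n: docs containing the term,          N: docs in the sample."""
    return math.log(((r + 0.5) * (N - R - n + r + 0.5)) /
                    ((R - r + 0.5) * (n - r + 0.5)))

# With no relevance information (r = R = 0) this reduces to an idf-like weight.
print(rsj_weight(r=8, R=10, n=20, N=1000))   # term concentrated in relevant docs
print(rsj_weight(r=0, R=0,  n=20, N=1000))   # ~ log((N - n + 0.5) / (n + 0.5))
```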

BM25
Score(D, Q) = Σ(t in Q) w * [(k1 + 1) tf / (K + tf)] * [(k3 + 1) qtf / (k3 + qtf)] + k2 * |Q| * (avdl - dl) / (avdl + dl)
where K = k1 * ((1 - b) + b * dl / avdl)
- w: term weight (e.g. Robertson-Sparck-Jones weight)
- k1, k2, k3, b: parameters
- qtf: query term frequency
- dl: document length
- avdl: average document length
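A sketch of the BM25 score as reconstructed above, dropping the optional k2 document-length correction term. The w values stand for RSJ/idf-style term weights, and the parameter defaults are common choices, not values prescribed by the slide.

```python
def bm25_score(query_tf, doc_tf, dl, avdl, w, k1=1.2, b=0.75, k3=8.0):
    """query_tf / doc_tf: dicts term -> frequency; w: dict term -> idf-like weight."""
    K = k1 * ((1 - b) + b * dl / avdl)               # length-dependent normalizer
    score = 0.0
    for t, qtf in query_tf.items():
        tf = doc_tf.get(t, 0)
        if tf == 0:
            continue
        score += (w[t]
                  * ((k1 + 1) * tf) / (K + tf)       # document term frequency part
                  * ((k3 + 1) * qtf) / (k3 + qtf))   # query term frequency part
    return score

w = {"information": 2.0, "retrieval": 3.0}
print(bm25_score({"information": 1, "retrieval": 1},
                 {"information": 3, "retrieval": 1, "system": 2},
                 dl=6, avdl=10, w=w))
```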

(Classic) presentation of results
The query evaluation result is a list of documents sorted by their similarity to the query, e.g.
  doc1  0.67
  doc2  0.65
  doc3  0.54
  ...

System evaluation
- Efficiency: time, space
- Effectiveness: how capable is the system of retrieving relevant documents? Is one system better than another?
Metrics often used (together):
- Precision = |retrieved ∩ relevant| / |retrieved|
- Recall = |retrieved ∩ relevant| / |relevant|

General form of precision/recall
(Figure: precision (0 to 1.0) plotted against recall (0 to 1.0), a decreasing curve.)
- Precision changes with recall (it is not a fixed point)
- Systems cannot be compared at a single precision/recall point
- Use average precision (at 11 points of recall: 0.0, 0.1, ..., 1.0)

An illustration of P/R calculation
Assume 5 relevant documents in total, and the following ranked list:
  Rank  Doc   Relevant?
  1     Doc1  Yes
  2     Doc2  No
  3     Doc3  Yes
  4     Doc4  Yes
  5     Doc5  No
(Figure: the resulting (recall, precision) points: (0.2, 1.0), (0.2, 0.5), (0.4, 0.67), (0.6, 0.75), (0.6, 0.6).)
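A small sketch that reproduces the precision/recall points of the example above from a ranked list of relevance judgments.

```python
def precision_recall_points(relevance, total_relevant):
    """relevance: list of booleans in rank order; returns (recall, precision) per rank."""
    points, hits = [], 0
    for rank, rel in enumerate(relevance, start=1):
        hits += rel
        points.append((hits / total_relevant, hits / rank))
    return points

# Ranked list from the slide: Doc1 relevant, Doc2 not, Doc3 and Doc4 relevant, Doc5 not.
print(precision_recall_points([True, False, True, True, False], total_relevant=5))
# [(0.2, 1.0), (0.2, 0.5), (0.4, 0.666...), (0.6, 0.75), (0.6, 0.6)]
```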

MAP (Mean Average Precision)
MAP = (1/n) * Σ(Qi) [ (1/|Ri|) * Σ(Dj in Ri) (j / rij) ]
where rij = rank of the j-th relevant document for query Qi, |Ri| = number of relevant documents for Qi, and n = number of test queries.
E.g. two queries; the relevant documents of the first are ranked 1, 5, 10, those of the second are ranked 4, 8:
MAP = (1/2) * [ (1/3)(1/1 + 2/5 + 3/10) + (1/2)(1/4 + 2/8) ]
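The MAP example above, computed by a short sketch that takes the ranks of the relevant documents for each query.

```python
def average_precision(relevant_ranks):
    """relevant_ranks: sorted ranks of the relevant documents for one query."""
    return sum(j / rank for j, rank in enumerate(relevant_ranks, start=1)) / len(relevant_ranks)

def mean_average_precision(queries):
    return sum(average_precision(ranks) for ranks in queries) / len(queries)

# The two queries of the slide: relevant docs at ranks 1, 5, 10 and at ranks 4, 8.
print(mean_average_precision([[1, 5, 10], [4, 8]]))
# = 1/2 * [ 1/3 * (1/1 + 2/5 + 3/10) + 1/2 * (1/4 + 2/8) ]
```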

Some other measures
- Noise = retrieved irrelevant docs / retrieved docs
- Silence = non-retrieved relevant docs / relevant docs
  (Noise = 1 - Precision; Silence = 1 - Recall)
- Fallout = retrieved irrelevant docs / irrelevant docs
Single-value measures:
- F-measure = 2 P * R / (P + R)
- Average precision = average at 11 points of recall
- Precision at n documents (often used for Web IR)
- Expected search length (number of irrelevant documents to read before obtaining n relevant documents)

Test corpus
Compare different IR systems on the same test corpus.
A test corpus contains:
- A set of documents
- A set of queries
- Relevance judgments for every document-query pair (the desired answers for each query)
The results of a system are compared with the desired answers.

An evaluation example (SMART output)
                                Run 1     Run 2
  Num queries:                  52        52
  Total number of documents over all queries:
    Retrieved:                  780       780
    Relevant:                   796       796
    Rel ret:                    246       229
  Recall - Precision Averages:
    at 0.00                     0.7695    0.7894
    at 0.10                     0.6618    0.6449
    at 0.20                     0.5019    0.5090
    at 0.30                     0.3745    0.3702
    at 0.40                     0.2249    0.3070
    at 0.50                     0.1797    0.2104
    at 0.60                     0.1143    0.1654
    at 0.70                     0.0891    0.1144
    at 0.80                     0.0891    0.1096
    at 0.90                     0.0699    0.0904
    at 1.00                     0.0699    0.0904
  Average precision (11-pt avg, all points):
                                0.2859    0.3092    (% change: 8.2)
  Recall:
    Exact                       0.4139    0.4166
    at 5 docs                   0.2373    0.2726
    at 10 docs                  0.3254    0.3572
    at 15 docs                  0.4139    0.4166
    at 30 docs                  0.4139    0.4166
  Precision:
    Exact                       0.2936    0.3154
    at 5 docs                   0.4308    0.4192
    at 10 docs                  0.3538    0.3327
    at 15 docs                  0.3154    0.2936
    at 30 docs                  0.1577    0.1468

The TREC experiments
Once per year:
- A set of documents and queries is distributed to the participants; the standard answers are unknown (April)
- Participants work (very hard) to build and fine-tune their systems, and submit their answers (1000 documents per query) by the deadline (July)
- NIST assessors manually evaluate the answers and provide the correct answers (and a ranking of the IR systems) (July - August)
- TREC conference (November)

TREC evaluation methodology
- Known document collection (> 100K documents) and query set (50 queries)
- Each participant submits 1000 documents for each query
- The first 100 documents of each submission are merged into a global pool
- Human relevance judgment of the global pool; the other documents are assumed to be irrelevant
- Evaluation of each system (on its 1000 answers)
The relevance judgments are partial, but stable enough for system ranking.

Tracks (tasks)
- Ad hoc track: fixed document collection, different topics
- Routing (filtering): stable interests (user profile), incoming document flow
- CLIR: ad hoc, but with queries in a different language
- Web: a large set of Web pages
- Question answering: e.g. "When did Nixon visit China?"
- Interactive: put users in the loop with the system
- Spoken document retrieval
- Image and video retrieval
- Information tracking: detect a new topic and follow it up

CLEF and NTCIR
CLEF (Cross-Language Evaluation Forum):
- for European languages
- organized by Europeans
- once per year (March - October)
NTCIR:
- organized by NII (Japan)
- for Asian languages
- cycle of 1.5 years

Impact of TREC
- Provides large collections for further experiments
- Allows different systems/techniques to be compared on realistic data
- Developed a new methodology for system evaluation
- Similar experiments are organized in other areas (NLP, machine translation, summarization, ...)

Some techniques to improve IR effectiveness
Interaction with the user (relevance feedback):
- Keywords only cover part of the contents
- The user can help by indicating relevant/irrelevant documents
The use of relevance feedback to improve the query expression:
  Qnew = α*Qold + β*Rel_d - γ*NRel_d
where Rel_d = centroid of the relevant documents and NRel_d = centroid of the non-relevant documents.
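A minimal sketch of the feedback formula above (Rocchio-style), with illustrative values for α, β, γ; queries and documents are represented as dicts of term weights, an assumption made for the example.

```python
def rocchio(q_old, rel_docs, nonrel_docs, alpha=1.0, beta=0.75, gamma=0.15):
    """Qnew = alpha*Qold + beta*centroid(rel) - gamma*centroid(nonrel)."""
    def centroid(docs):
        if not docs:
            return {}
        terms = {t for d in docs for t in d}
        return {t: sum(d.get(t, 0.0) for d in docs) / len(docs) for t in terms}

    rel_c, nonrel_c = centroid(rel_docs), centroid(nonrel_docs)
    terms = set(q_old) | set(rel_c) | set(nonrel_c)
    q_new = {t: alpha * q_old.get(t, 0.0)
                + beta * rel_c.get(t, 0.0)
                - gamma * nonrel_c.get(t, 0.0) for t in terms}
    return {t: w for t, w in q_new.items() if w > 0}   # drop negative weights

q = {"information": 1.0, "retrieval": 1.0}
rel = [{"retrieval": 0.8, "index": 0.5}]
nonrel = [{"furniture": 0.9}]
print(rocchio(q, rel, nonrel))
```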

Effect of RF
(Figure: document space with relevant (R) and non-relevant (NR) documents around the query Q; after feedback the query moves to Qnew, and the 2nd retrieval covers more relevant documents than the 1st retrieval.)

Modified relevance feedback
Users usually do not cooperate (e.g. AltaVista in its early years).
Pseudo-relevance feedback (blind RF):
- Use the top-ranked documents as if they were relevant
- Select m terms from the n top-ranked documents
One can usually obtain about 10% improvement.

Query expansion
A query contains only part of the important words, so add new (related) terms to the query:
- Using a manually constructed knowledge base/thesaurus (e.g. WordNet):
  Q = information retrieval
  Q' = (information OR data OR knowledge OR ...) AND (retrieval OR search OR seeking OR ...)
- Using corpus analysis:
  - two terms that often co-occur are related (mutual information)
  - two terms that co-occur with the same words are related (e.g. T-shirt and coat both co-occur with wear, ...)
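A sketch of thesaurus-based expansion using WordNet through NLTK (assuming nltk is installed and the wordnet corpus has been downloaded); it simply adds the synonyms of each query word, which is only one of the strategies listed above.

```python
from nltk.corpus import wordnet as wn   # requires: pip install nltk; nltk.download('wordnet')

def expand(query_words):
    expanded = set(query_words)
    for word in query_words:
        for synset in wn.synsets(word):          # all senses of the word
            for lemma in synset.lemma_names():   # synonyms for each sense
                expanded.add(lemma.replace("_", " ").lower())
    return expanded

print(expand(["information", "retrieval"]))
# e.g. adds terms such as "data", "recovery", ...
```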

Global vs. local context analysis
- Global analysis: use the whole document collection to calculate term relationships
- Local analysis: use the query to retrieve a subset of documents, then calculate term relationships within it; combines pseudo-relevance feedback and term co-occurrence analysis; more effective than global analysis

Some current research topics: go beyond keywords
Keywords are not perfect representatives of concepts:
- Ambiguity: does "table" mean a data structure or furniture?
- Lack of precision: "operating" and "system" are less precise than "operating system"
Suggested solutions:
- Word sense disambiguation (difficult due to the lack of contextual information)
- Using compound terms (no complete dictionary of compound terms, variation in form)
- Using noun phrases (syntactic patterns + statistics)
Still a long way to go.

Theory
- Bayesian inference networks: estimate P(Q|D) by inference in a network that connects documents (D1, D2, ..., Dm) to terms (t1, ..., tn), concepts (c1, ..., cl) and the query Q; evidence is propagated through the network (inference/revision).
  (Figure: the inference network.)
- Language models

Logical models
How can the relevance relation be described as a logical relation D -> Q?
- What are the properties of this relation?
- How can uncertainty be combined with a logical framework?
The underlying problem: what is relevance?

Related applications: information filtering
- IR: changing queries against a stable document collection
- IF: an incoming document flow matched against stable interests (queries); a yes/no decision (instead of ordering documents)
Advantage: the description of the user's interest may be improved using relevance feedback (the user is more willing to cooperate).
Difficulty: adjusting the threshold for keeping/ignoring a document.
The basic techniques used for IF are the same as those for IR: "two sides of the same coin".
(Diagram: the document flow doc3, doc2, doc1 enters the IF system, which keeps or ignores each document based on the user profile.)

IR for (semi-)structured documents
- Use structural information to assign weights to keywords (introduction, conclusion, ...)
- Hierarchical indexing
- Querying within some structure (search in title, etc.)
- INEX experiments
- Using hyperlinks in indexing and retrieval (e.g. Google)

PageRank in Google
  PR(A) = (1 - d) + d * Σi PR(Ii) / C(Ii)
where I1, ..., In are the pages linking to A, C(Ii) is the number of outgoing links of Ii, and d is a damping factor (0.85).
- Assigns a numeric value to each page
- The more a page is referred to by important pages, the more important the page is
Many other criteria are also used, e.g. the proximity of query words: "information retrieval" as adjacent words scores better than "information" and "retrieval" far apart.
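A minimal power-iteration sketch of the formula above on a tiny hypothetical link graph; the graph and iteration count are arbitrary choices for illustration.

```python
def pagerank(links, d=0.85, iterations=50):
    """links: dict page -> list of pages it links to. Returns PR values."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    pr = {p: 1.0 for p in pages}
    for _ in range(iterations):
        new_pr = {p: (1 - d) for p in pages}          # the (1 - d) base value
        for page, targets in links.items():
            if targets:
                share = d * pr[page] / len(targets)   # d * PR(I) / C(I)
                for t in targets:
                    new_pr[t] += share
        pr = new_pr
    return pr

# Tiny illustrative graph: A and C link to B, B links to C.
print(pagerank({"A": ["B"], "B": ["C"], "C": ["B"]}))
```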

IR on the Web
- No stable document collection (spiders, crawlers)
- Invalid documents, duplication, etc.
- Huge number of documents (only a partial collection can be indexed)
- Multimedia documents
- Great variation in document quality
- Multilingual problems

Final remarks on IR
- IR is related to many areas: NLP, AI, databases, machine learning, user modeling, ..., as well as library science, the Web, multimedia search, ...
- Relatively weak theories
- Very strong tradition of experiments
- Many remaining (and exciting) problems
- A difficult area: intuitive methods do not necessarily improve effectiveness in practice

Why is IR difficult?
- Vocabulary mismatch
  - Synonymy: e.g. car vs. automobile
  - Polysemy: e.g. table
- Queries are ambiguous; they are only a partial specification of the user's need
- Content representation may be inadequate and incomplete
- The user is the ultimate judge, but we don't know how the judge judges: the notion of relevance is imprecise, and context- and user-dependent
But how rewarding it is to gain a 10% improvement!
