There are numerous pieces of duplicate information served by multiple sources on the web. Many news stories that we receive from the media tend to originate from the same source, such as the Associated Press. When such content is scraped off the web for archiving, a need may arise to categorize documents by their similarity (not in the sense of the meaning of the text, but of character-level or lexical matching).
Here, we build a prototype for a near-duplicate document detection system. This article presents the background material on an algorithm called MinHash and a method for probabilistic dimension reduction through locality-sensitive hashing. A future article presents their implementation with Python and CouchDB.
(Note that all the numbers generated for the tables in this article are made up for illustration purposes; they may not be mathematically consistent with any hashing algorithm.)
Jaccard Similarity Index
A similarity is represented by the Jaccard index:

$$
J(A, B) = \frac{|A \cap B|}{|A \cup B|},
$$

where $A$ and $B$ are sets representing the two documents in our context.
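As a quick illustration (with arbitrary example sets, not values from this article's tables), the index can be computed directly from two Python sets:

def jaccard(a, b):
    # Jaccard index |A ∩ B| / |A ∪ B| of two sets.
    if not a and not b:
        return 1.0  # convention for two empty sets
    return len(a & b) / len(a | b)

# Two small shingle sets sharing three of five distinct shingles:
print(jaccard({"ab", "bc", "cd", "da"}, {"ab", "bc", "cd", "bd"}))  # 0.6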
Shingling
A useful way to construct a set representing a document is by shingling. To construct a set of $k$-shingles from a text, a sliding window of $k$ characters is applied over the text. For example, if the text is “abcdabd,” the resulting set of 2-shingles is {ab, bc, cd, da, bd} (note that “ab” appears twice in the text but only once in the set).
The value of $k$ is arbitrary, but it should be large enough that the probability of any given shingle appearing in any random document is low. That is, if the number of available characters is $c$ and the character length of a typical document is $n$, we should at least ensure $c^k \gg n$. Since each character has a different frequency of appearance in a typical text, a suitable value for $k$ depends on the nature of the documents and should be tuned accordingly. A good rule of thumb for an order-of-magnitude estimate is to assume $c \approx 20$ for English texts.
Instead of using individual characters, shingles can also be constructed from words. For example, in a math textbook we may often see a sentence beginning with a terse expression “it is trivial to show,” whose 3-shingle set is {“it is trivial”, “is trivial to”, “trivial to show”}. This has an advantage in that shingles built this way are more sensitive to the styles of writing. The style sensitivity may aid in identifying similarities between domain-specific texts buried in other types of documents.
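Here is a minimal sketch of both the character-level and word-level variants; the function names and default values of k are my own choices for illustration:

def char_shingles(text, k=2):
    # Set of k-character shingles from a sliding window over the text.
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def word_shingles(text, k=3):
    # Set of k-word shingles.
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

print(char_shingles("abcdabd"))                # {'ab', 'bc', 'cd', 'da', 'bd'}
print(word_shingles("it is trivial to show"))  # {'it is trivial', 'is trivial to', 'trivial to show'}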
Hashing Shingles
Typically, shingles are all hashed and grouped into buckets, each represented by an integer. The use of an integer is a huge advantage in terms of data compression. For example, a 4-shingle (of characters) typically uses 4 bytes, one byte per character, and this is good for representing the $20^4 = 160{,}000$ possible 4-shingles (i.e., $k = 4$). With the same 4 bytes used as an integer, however, about 4 billion ($2^{32}$) buckets and therefore shingles can be represented, which is a good enough size for $k$ up to about 7 (i.e., $20^7 \approx 1.3 \times 10^9 < 2^{32}$). If a tiny probability of collision into the same bucket can be tolerated, $k$ can be chosen even larger. From here on, we assume a random hash function does not produce any collision between any pair of randomly chosen shingles, i.e., the mapping from a shingle to an integer is effectively unique.
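One possible way to bucket shingles into 4-byte integers; zlib.crc32 is used here only as a convenient stand-in for a reasonably uniform 32-bit hash, not a choice prescribed by this article:

import zlib

def shingle_bucket(shingle):
    # Map a shingle to an unsigned 32-bit integer bucket in [0, 2**32).
    return zlib.crc32(shingle.encode("utf-8"))

print({shingle_bucket(s) for s in {"ab", "bc", "cd", "da", "bd"}})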
Characteristic Matrix
Suppose we have a random hash function $h$ and all possible shingles $s_1, s_2, \dots, s_m$ from documents $d_1, d_2, \dots, d_N$, for a total of $N$ documents. We can summarize this in a characteristic matrix, in which each row corresponds to a shingle hash value $h(s_i)$ and each column to a document $d_j$. An entry of 1 indicates that the document contains the shingle for which that hash value exists. (An entry of 0 means the shingle does not appear in that document.) It is trivial to compute the Jaccard index for any pair of documents from this matrix. In practice, however, the need to compare all the hash values across a large number of documents makes the process prohibitive.
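A sketch of the characteristic matrix as a boolean array, with the Jaccard index read off any two of its columns; the document contents below are placeholders, not examples from this article:

import numpy as np

docs = {"d1": "abcdabd", "d2": "abcdabe", "d3": "xyzwxyzw"}
shingle_sets = {name: {t[i:i + 2] for i in range(len(t) - 1)} for name, t in docs.items()}

# Rows are all shingles seen in any document; columns are the documents.
all_shingles = sorted(set().union(*shingle_sets.values()))
names = list(docs)
matrix = np.array([[s in shingle_sets[d] for d in names] for s in all_shingles], dtype=bool)

def jaccard_from_columns(i, j):
    # Rows where both columns are 1, over rows where at least one column is 1.
    both = np.sum(matrix[:, i] & matrix[:, j])
    either = np.sum(matrix[:, i] | matrix[:, j])
    return both / either

print(jaccard_from_columns(0, 1))  # d1 vs d2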
MinHash as a Jaccard Index Estimator
Let us focus on a pair of documents, $d_1$ and $d_2$, for which the shingles $s_1, s_2, \dots, s_m$ have been hashed by a function $h$. The relevant rows of the characteristic matrix fall into three types: (a) both columns have 1, (b) exactly one of the columns has 1, and (c) both columns have 0. We let $n_a$, $n_b$, and $n_c$ denote the numbers of rows categorized this way, respectively. For $d_1$ and $d_2$, $n_a$ is the cardinality of their intersection and $n_a + n_b$ is that of their union. Hence the Jaccard index is $J(d_1, d_2) = n_a / (n_a + n_b)$.
Now, consider an experiment in which the rows of the matrix are randomly permuted. Remove the rows of type (c), since they do not contribute at all to the union of the two sets. We look at the first row of the matrix thus constructed and note its type as defined above, either (a) or (b). Repeat the process $n$ times. What is the chance that the first row found this way is of type (a)? The probability is given by $n_a / (n_a + n_b)$, which is exactly the Jaccard index. This is the property that we use as a Jaccard index estimator.
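This can be checked with a small Monte Carlo simulation; the row counts below are arbitrary illustrative numbers, not values from this article:

import random

n_a, n_b = 30, 20                   # illustrative counts of type (a) and type (b) rows
rows = ["a"] * n_a + ["b"] * n_b    # rows of type (c) have already been removed

trials, hits = 10_000, 0
for _ in range(trials):
    random.shuffle(rows)
    if rows[0] == "a":              # first row after the permutation is of type (a)
        hits += 1

print(hits / trials, n_a / (n_a + n_b))   # both close to 0.6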
In practice, randomly permuting a huge number of rows is very inefficient. Instead, we prepare a set of $n$ random hash functions $h_i$ (for $i = 1, 2, \dots, n$, one for each of the $n$ measurements) that effectively permute the row order given the same set of shingles, sorting rows in ascending order by hash value. (For this to hold, the hash functions need to be well chosen and produce few collisions.) Picking the row with the minimum hash value then corresponds to picking the first row in the experiment above.
What we have shown is that, for estimating Jaccard indices, we only need to keep the minimum hash values generated from the $n$ different hash functions. Therefore the very sparse characteristic matrix can be condensed into an $n \times N$ signature matrix of minimum hash values with entries given by

$$
h_i^{\min}(d_j) = \min_{s \in S_j} h_i(s),
$$

where $S_j$ is the set of shingles from the document $d_j$.
For supposedly near-duplicate documents such as $d_1$ and $d_2$ above, most MinHash values agree, and the fraction of matching values is an estimate of the Jaccard index. This is the gist of the MinHash algorithm. In other words, the probability that a pair of MinHash values from two documents $d_1$ and $d_2$ match is equal to their Jaccard index:

$$
P\left[\, h_i^{\min}(d_1) = h_i^{\min}(d_2) \,\right] = J(d_1, d_2).
$$
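A minimal end-to-end sketch of MinHash, assuming a family of hash functions of the form $h_i(x) = (a_i x + b_i) \bmod P$ applied to 32-bit shingle buckets; the specific choices (128 hash functions, a Mersenne prime modulus, crc32 for bucketing) are common conventions, not values taken from this article:

import random
import zlib

P = (1 << 61) - 1   # a large Mersenne prime used as the modulus
N_HASHES = 128      # number of MinHash functions (n)

random.seed(0)
PARAMS = [(random.randrange(1, P), random.randrange(P)) for _ in range(N_HASHES)]

def shingles(text, k=4):
    # Character k-shingles hashed to 32-bit buckets, as in the earlier sections.
    return {zlib.crc32(text[i:i + k].encode("utf-8")) for i in range(len(text) - k + 1)}

def minhash_signature(text):
    # Entry i is the minimum of h_i over the document's shingle buckets.
    buckets = shingles(text)
    return [min((a * x + b) % P for x in buckets) for (a, b) in PARAMS]

def estimate_jaccard(sig1, sig2):
    # The fraction of matching MinHash values estimates the Jaccard index.
    return sum(v1 == v2 for v1, v2 in zip(sig1, sig2)) / len(sig1)

s1 = minhash_signature("the quick brown fox jumps over the lazy dog")
s2 = minhash_signature("the quick brown fox jumped over the lazy dog")
print(estimate_jaccard(s1, s2))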
Locality-Sensitive Hashing
While the information necessary to compute document similarity has been compressed quite nicely into a signature matrix, examining all documents would still take $N(N-1)/2$ pairwise comparisons, each involving $n$ comparisons of signature entries. The vast majority of documents may not be near-duplicates, however, and in that case not every pair needs to be compared. Locality-sensitive hashing (LSH) offers a method of reducing the number of dimensions in high-dimensional MinHash signatures.
The idea is to partition the MinHash signature matrix into $b$ bands, each with $r$ rows (such that $n = b\,r$ is the total number of rows), and to hash the MinHash integer sequences grouped by band. For example, if we have $n$ MinHash values per document, we can partition them into $b$ bands of $r$ rows each. Then we use some good hash function which takes the $r$ MinHash values within a band and summarizes them into a single integer: one for band 1, one for band 2, and so on. This reduces the $n \times N$ signature matrix into a $b \times N$ matrix of band hashes.
Near-duplicate documents will be hashed into the same bucket within each band: for example, $d_1$ and $d_2$ may share a bucket in bands 1 and 2. (Note that if some $d_j$ in band 3 happens to have the same hash value as $d_1$ and $d_2$ in band 2, they are not considered to be in the same bucket, since the bands are different.) The documents that share a bucket within a band are considered candidates for further examination. The advantage is that, since the candidate pairs are in general only a small fraction of all $N(N-1)/2$ pairs, the number of required full comparisons is much smaller. LSH thus provides a way to select candidates for near-duplicate detection before full signature comparisons are carried out.
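A sketch of the banding step, assuming each document already has a MinHash signature of length $b \cdot r$; the function name and the use of Python's built-in hash on each band are illustrative choices, not prescriptions from this article:

from collections import defaultdict
from itertools import combinations

def lsh_candidates(signatures, b, r):
    # signatures: dict mapping a document id to its MinHash signature of length b*r.
    buckets = defaultdict(set)
    for doc_id, sig in signatures.items():
        for band in range(b):
            # Keying on the band index keeps buckets from different bands separate.
            key = (band, hash(tuple(sig[band * r:(band + 1) * r])))
            buckets[key].add(doc_id)
    candidates = set()
    for members in buckets.values():
        for pair in combinations(sorted(members), 2):
            candidates.add(pair)
    return candidates

# For example, 128 MinHash values split into b=32 bands of r=4 rows:
# lsh_candidates({"doc1": s1, "doc2": s2}, b=32, r=4)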
The assumption is that a pair of documents, if near-duplicate, has a total of $b$ chances of being hashed into a common bucket in at least one of the available bands. Recall that the probability that a pair of MinHash values from two documents match is equal to their Jaccard index $p$. The probability that a pair of documents share a bucket in a band of $r$ rows is then given by $p^r$. Its complement, $1 - p^r$, is the probability that the document pair does not get hashed into the same bucket for that band. Then the probability that two documents become candidates in at least one of the $b$ bands is given by $1 - (1 - p^r)^b$. Plotting this for varying $r$ and $b$, the function forms a series of S-curves [1].
The figure provides an intuition as to how the values of $r$ and $b$ should be chosen for a target Jaccard similarity threshold $t$ (above which two documents are considered near-duplicate). Let $f(p) = 1 - (1 - p^r)^b$. The value of $p$ at the steepest slope is obtained by setting the second derivative to zero, $f''(p) = 0$, which gives

$$
p = \left( \frac{r - 1}{r b - 1} \right)^{1/r} \approx \left( \frac{1}{b} \right)^{1/r}
$$

for $r, b \gg 1$. As a rule of thumb, we want $t \approx (1/b)^{1/r}$, but the exact values of $b$ and $r$ can be adjusted based on rejection criteria. Choosing $(1/b)^{1/r}$ above $t$ reduces false positives, whereas choosing it below $t$ reduces false negatives at the candidate selection step.
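As a quick numerical check of the approximation against the exact steepest-slope point, for a few arbitrary choices of $(r, b)$:

# Exact steepest-slope point versus the (1/b)**(1/r) rule of thumb.
for r, b in [(4, 32), (5, 20), (8, 16)]:
    exact = ((r - 1) / (r * b - 1)) ** (1 / r)
    approx = (1 / b) ** (1 / r)
    print(f"r={r}, b={b}: exact={exact:.3f}, approx={approx:.3f}")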
Reference
Anand Rajaraman and Jeffrey David Ullman (2011). Mining of Massive Datasets. Cambridge University Press. ISBN 978-1-107-01535-7
[1] The figure is generated by the following Python script:
import matplotlib.pyplot as plt
import numpy as np

ps = np.arange(0, 1, 0.01)
for r in [5, 10]:
    for b in [2, 4, 8]:
        ys = 1 - (1 - ps**r)**b
        plt.plot(ps, ys, label=f"r={r}, b={b}")
plt.xlabel("Jaccard index")
plt.ylabel("Probability of being chosen as candidates")
plt.legend(loc="upper left")
plt.show()