A Quick Note on Getting Better at Near Duplicates

Finding near-duplicates on the web is important for several unrelated reasons, including archiving and versioning, plagiarism and copyright, SEO, and optimization of storage capacity. It is instructive to revisit the original groundbreaking research in the space – an eleven-year-old, highly cited study of 1.6 billion distinct web pages.

What’s interesting about this work is that it takes two relatively faulty heuristics and combines them to produce a third one that is superior to either.

Specifically, let A and B be the two heuristics in question, and (p1, p2) – a random pair of web pages that gets evaluated for near-duplication (in one of its domain-specific interpretations). Each method effectively computes a similarity between the two pages – let’s denote these as sim-A(p1, p2) and sim-B(p1, p2), respectively.

Next, based on their own configured tunables, A and B would separately determine whether p1 and p2 are near-duplicates:

  • duplicate-A(p1, p2) = sim-A(p1, p2) > threshold-A
  • duplicate-B(p1, p2) = sim-B(p1, p2) > threshold-B

The combined algorithm – let’s call it C – evaluates as follows from left to right:

  • duplicate-C(p1, p2) = duplicate-A(p1, p2) && sim-B(p1, p2) > threshold-C

where threshold-C is yet another constant that gets carefully tuned to minimize both false negatives and false positives (the latter being cases where a pair of different web pages is falsely reported as a near-duplicate).
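
To make the formulas above concrete, here is a minimal Python sketch of the combined check. The sim-A and sim-B stand-ins (word-shingle and character-shingle Jaccard similarity) and all threshold values are illustrative assumptions, not the heuristics or tunables of the original study:

    def _jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if (a or b) else 1.0

    def _word_shingles(p: str, k: int = 3) -> set:
        words = p.split()
        return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

    def _char_shingles(p: str, k: int = 4) -> set:
        return {p[i:i + k] for i in range(max(1, len(p) - k + 1))}

    def sim_a(p1: str, p2: str) -> float:      # stand-in for heuristic A
        return _jaccard(_word_shingles(p1), _word_shingles(p2))

    def sim_b(p1: str, p2: str) -> float:      # stand-in for heuristic B
        return _jaccard(_char_shingles(p1), _char_shingles(p2))

    THRESHOLD_A = 0.90   # tunable of heuristic A (illustrative value)
    THRESHOLD_B = 0.90   # tunable of heuristic B (illustrative value)
    THRESHOLD_C = 0.75   # the extra constant tuned for the combined heuristic C

    def duplicate_a(p1, p2): return sim_a(p1, p2) > THRESHOLD_A
    def duplicate_b(p1, p2): return sim_b(p1, p2) > THRESHOLD_B

    def duplicate_c(p1, p2):
        # evaluated left to right: sim_b() only runs for pairs A already flagged
        return duplicate_a(p1, p2) and sim_b(p1, p2) > THRESHOLD_C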

It is easy to notice that, given enough time to experiment and large enough datasets, this can work for more than two unrelated heuristics utilizing minhash, simhash, and various other popular near-duplicate detection techniques.
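
For reference, here is a hedged sketch of two such plug-in heuristics: a toy MinHash estimator of Jaccard similarity and a toy 64-bit SimHash fingerprint compared by Hamming distance. The shingle size, the number of hash functions, and the hashing scheme are all illustrative choices, not anyone's production settings:

    import hashlib

    def _h(s: str, seed: int) -> int:
        # deterministic 64-bit hash, keyed by the seed
        digest = hashlib.blake2b(s.encode(), digest_size=8,
                                 key=seed.to_bytes(8, "little")).digest()
        return int.from_bytes(digest, "big")

    def shingles(text: str, k: int = 4) -> set:
        return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

    def minhash_signature(text: str, num_hashes: int = 64) -> list:
        sh = shingles(text)
        return [min(_h(s, seed) for s in sh) for seed in range(num_hashes)]

    def minhash_similarity(sig1: list, sig2: list) -> float:
        # fraction of matching minima approximates the Jaccard similarity
        return sum(a == b for a, b in zip(sig1, sig2)) / len(sig1)

    def simhash(text: str, bits: int = 64) -> int:
        counts = [0] * bits
        for s in shingles(text):
            h = _h(s, 0)
            for b in range(bits):
                counts[b] += 1 if (h >> b) & 1 else -1
        return sum(1 << b for b in range(bits) if counts[b] > 0)

    def simhash_similarity(f1: int, f2: int, bits: int = 64) -> float:
        # similarity as the fraction of fingerprint bits that agree
        return 1.0 - bin(f1 ^ f2).count("1") / bits

Either estimator can then be dropped in as one of the sim-X(p1, p2) functions above, each with its own threshold.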

Maybe not so obvious is another observation: given real-life time and space limitations, any near-duplicate detection algorithm will have to remain information-lossy and irreversible. And since the information that is lost cannot itself be compared for duplication (or non-duplication), we cannot rule out non-identical (and maybe even disjoint) sets of the respective false positives.

Therefore, given algorithms A and B and their respective false-positive sets FP-A and FP-B, we should always be able to come up with a combined heuristic C(A, B) that optimizes its own inevitable FP-C as a function of (FP-A, FP-B).
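
A toy illustration of that set algebra, with made-up page pairs: because duplicate-C requires A's verdict and B's similarity check to agree, FP-C is confined to the pairs that sit in FP-A and also clear B's threshold – the less the two heuristics' mistakes overlap, the smaller FP-C gets.

    # hypothetical page pairs, purely for illustration
    fp_a     = {("p1", "p2"), ("p3", "p4"), ("p5", "p6")}  # pairs A falsely flags
    passes_b = {("p3", "p4"), ("p7", "p8")}                # non-duplicate pairs with sim-B > threshold-C
    fp_c     = fp_a & passes_b                             # -> {("p3", "p4")}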

PS. A blog post that (at the time of this writing) comes up on top when searching for “near-duplicate detection”. Easy to follow and well illustrated.

Map-free Multi-site Replication

A distributed storage system has to do a great job of replicating content. Building a highly available storage service out of mildly reliable storage devices requires replicating after each failure. Replication is not an exception; it is a continuous process. So it’s important.

The design of the CCOW (Cloud Copy-on-Write) object storage system deals with replication within a local network very well. What it does not deal with anywhere near as well is spreading content over links with very long round-trip times, as between different sites. I needed a feature that would allow customers to federate multiple sites without requiring 40 GbE links connecting the sites.

This is not specific to CCOW object storage. The need to provide efficient storage locally will force any design to partition between “local” updating, which is fairly fast, and “remote” updating, which will be considerably slower. If you don’t recognize this difference, you’ll end up with a design that does all updates slowly.

But I needed a solution for CCOW. My first thought on how to replicate content between federated sites was to just layer on top of software bridges.

The problem actually maps fairly well:

  • Each site can be thought of as a switch.
  • The “switch” can deliver frames (chunks) to local destinations, or to links to other switches (sites).
  • A frame (chunk) is never echoed to the same link that it arrived upon.
  • There is no need to provision a site topology; the Spanning Tree Protocol will discover it. The later spanning tree algorithms will do it even better.

If you are not familiar with the Spanning Tree algorithms I suggest you go read up on them. Spanning Tree made Ethernet networking possible. See https://en.wikipedia.org/wiki/Spanning_Tree_Protocol.

Basically, we start with a set of linked Sites:

MultiSite1

Spanning Tree algorithms effectively prevent loops by logically de-activating some of the links.

MultiSite2

So re-using existing software (software switches and existing long-haul tunnels) was a very tempting solution, especially since it directly re-used networking-layer code to solve storage-layer problems. Pontificating that networking solutions are applicable to storage is one thing; actually leveraging network-layer code would prove it.

This leads to the following distribution pattern, in this example for a frame originating from Site V.

MultiSite3

But it turns out that the storage layer problem is fundamentally simpler.

Radia Perlman’s spanning tree prevents loops by discovering the links between the switches and, without central decision making, de-activating the subset of links that would have enabled forwarding loops. It does this no matter how the switches are linked. This is needed because Ethernet frames do not have a time-to-live marker. Without spanning tree it would be very easy for Switch A to forward a frame to Switch B, which would forward it to Switch C, which would forward it back to Switch A. While rare, such loops would crash any Ethernet network. Spanning tree was a simple algorithm that prevented that totally.

The fundamental assumption behind spanning tree was that links had to be de-activated because switches could not possibly remember which frames they had previously forwarded. Recognizing that a frame was being looped around would require remembering frames that had been forwarded eons previously, possibly even milliseconds.

If you understand how switches work, the last thing any switch wants to do is remember anything. Remembering things takes RAM, but more critically it takes time to update that RAM. RAM may get cheaper, but remembering things always takes longer than not remembering them, and switches will kill to trim a microsecond from their relay time.

But remembering things is exactly what a storage cluster is designed to do.

It turns out that the multi-site forwarding algorithm for multi-site storage of fingerprinted chunks is stunningly simple:

On receipt of a new Chunk X on one inter-site link:
    If Chunk X is already known to this cluster:
        Do nothing.
    Otherwise:
        Replicate Chunk X on all other inter-site links.
        Remember Chunk X locally.

That’s it. Having a cryptographic hash fingerprint of each chunk makes it easy to identify already-seen chunks. This allows multiple sites to forward chunks over ad hoc links without having to build a site-wide map, and all links can be utilized at all times.
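
A minimal Python sketch of that rule, under stated assumptions: the Site class, the in-memory link and chunk bookkeeping, and the SHA-256 fingerprint below are illustrative stand-ins, not CCOW internals. The point is that the “already known? do nothing” check is what breaks forwarding loops without any spanning tree:

    import hashlib

    class Site:
        def __init__(self, name):
            self.name = name
            self.links = []    # sites this site has inter-site links to
            self.chunks = {}   # fingerprint -> chunk payload ("remember locally")

        def connect(self, other):
            self.links.append(other)
            other.links.append(self)

        def receive(self, chunk, arrived_from=None):
            fp = hashlib.sha256(chunk).hexdigest()
            if fp in self.chunks:
                return                        # already known to this cluster: do nothing
            self.chunks[fp] = chunk           # remember Chunk X locally
            for link in self.links:
                if link is not arrived_from:  # never echo to the link it arrived upon
                    link.receive(chunk, arrived_from=self)

    # An ad hoc mesh with a loop: the fingerprint check stops the chunk from
    # circulating forever, with no spanning tree and no de-activated links.
    a, b, c = Site("A"), Site("B"), Site("C")
    a.connect(b); b.connect(c); c.connect(a)
    a.receive(b"chunk payload")
    print([len(s.chunks) for s in (a, b, c)])   # -> [1, 1, 1]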

Now the forwarding pattern is enhanced and there is no delay for building a tree:

Multisite4