Eigen‐WOT, or linear algebraic trust metrics

Last night we got into an interesting discussion on #rdfig about trust and the use of eigenspace methods in implementing trust metrics. I promised to whip up a little tutorial/discussion piece on the issue, so here goes…

Eigen…huh?

Yes, it’s math, and no, I won’t try to explain it in whole. I will presume the reader has some familiarity with the concepts of vectors, linear transformations and matrix representations thereof, but I’ll also try to make this as painless as possible. First-timers might do well to skip the parenthesized stuff.

The easiest way to think of linear transforms is that they are composites of shears, rotations, origin-symmetric scalings and reflections about origin-intersecting lines, planes and, more generally, subspaces of the original space. Linear transforms will keep parallel lines parallel, won’t tear anything up (with respect to the normal Euclidean topology, linear mappings in finite dimensional real spaces are continuous), and will map the origin of our space to itself. In the following examples I’ll use our normal three dimensional space, but one should remember that there is no inherent limit to the dimensionality of a vector space.

Now, presume you sit at the origin of our space. What sort of changes do linear transformations effect on things around you? Rotations will revolve stuff around you (and not around positions other than yours). Scaling will stretch things symmetrically around you, making them bigger in some directions and smaller in others. Reflections will swap opposite directions around you. Shearing causes things in orthogonal directions to move so that they’re no longer orthogonal. Projections will cast everything onto the nearest point of some chosen subspace (e.g. project everything in the space straight down onto the x-y plane, or even onto some origin-intersecting line).
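To make this concrete, here’s a quick sketch of these transforms as matrices (using NumPy; the specific angles and coefficients are just toy values I picked):

```python
# A toy sketch (assumed example): the transforms above as 3x3 matrices.
import numpy as np

theta = np.radians(30)
rotation = np.array([                      # rotation about the z axis
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
scaling    = np.diag([2.0, 0.5, 1.0])      # stretch x, shrink y, around the origin
reflection = np.diag([1.0, 1.0, -1.0])     # reflection about the x-y plane
shear      = np.array([[1.0, 1.0, 0.0],    # shear: x picks up a share of y
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
projection = np.diag([1.0, 1.0, 0.0])      # cast everything onto the x-y plane

point = np.array([1.0, 2.0, 3.0])
for name, A in [("rotation", rotation), ("scaling", scaling),
                ("reflection", reflection), ("shear", shear),
                ("projection", projection)]:
    print(name, A @ point)                 # where the point lands under each transform
```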

Now, some transforms are nicer than others. The nicest ones will give rise to directions which do not change under the mapping. Such directions may well be scaled, but nevertheless they will stay put as seen from your point of view. A vector sticking in such a direction is an eigen, or characteristic, vector of the transformation. Sometimes whole subspaces, like planes in our three dimensional example, stay still. We call these eigenspaces, and the simpler example of individual directions standing still is a special case of them: any vector sticking in the direction of an eigenvector will be an eigenvector, too, so the whole origin-intersecting line (a one-dimensional subspace) in that direction will be an eigenspace. Finally, while the direction of an eigenvector will not change under our mapping, its length very well may. The coefficient by which the length of a given eigenvector changes is called its corresponding eigenvalue.

As an example, a reflection about an origin-intersecting plane in three dimensional space will have the plane as an eigenspace, with 1 as its corresponding eigenvalue. An anisotropic scaling of two of the axes would make for a mapping with two separate eigenvectors, lying in the directions of those axes, and their corresponding eigenvalues would be the coefficients by which we stretched the axes. A pure shear which maps the coordinate axes to a 45 degree spread will have its two eigenvectors pointing halfway between the images of the axes, in the plane of the shear. A plane rotation of, say, one degree will not have real eigenvectors or eigenvalues, since all nonzero vectors in the plane change direction, but in 3-D we can always find an invariant line, and so the rotation will indeed have a one-dimensional eigenspace with a corresponding eigenvalue of 1.
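These examples are easy to check numerically. Here’s a small sketch (again with NumPy and made-up matrices) that asks for the eigenvalues of a reflection, an anisotropic scaling and a one degree rotation:

```python
# Checking the eigen-examples above with NumPy (toy matrices, assumed).
import numpy as np

# Reflection about the x-y plane: the plane is an eigenspace with eigenvalue 1,
# and the z axis is an eigenvector with eigenvalue -1.
reflection = np.diag([1.0, 1.0, -1.0])
print(np.linalg.eig(reflection)[0])        # two 1's and a -1

# Anisotropic scaling of two axes: eigenvectors along the axes, eigenvalues
# equal to the stretch coefficients (the untouched z axis keeps eigenvalue 1).
scaling = np.diag([2.0, 0.5, 1.0])
print(np.linalg.eig(scaling)[0])           # 2, 0.5 and 1

# A one degree rotation in the x-y plane: no real eigenvectors in that plane,
# but the z axis is an invariant line with eigenvalue 1.
t = np.radians(1)
rotation = np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,         0.0,      1.0]])
print(np.linalg.eig(rotation)[0])          # two complex values and a plain 1
```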

What’s so special about eigenvectors and eigenvalues, then? That’s the tricky part. One way to see it is that many linear mappings can be completely characterized by their real valued eigenstuff. (The plane rotation cannot, as we already saw.) More generally, any real-linear mapping can be decomposed into a rotation and something which can be characterized by its eigenvectors and eigenvalues. Finally, using complex numbers, nearly any linear mapping can be expressed via its complex eigenstuff. (Here, rotations lead to complex eigenvalues.)
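If you want to see that decomposition claim in action, here’s one way to sketch it (my own illustration via the singular value decomposition; the matrix A is an arbitrary toy example): a real matrix splits into an orthogonal, rotation-like factor times a symmetric factor that is fully described by real eigenvectors and eigenvalues.

```python
# A sketch (toy matrix, assumed): split a real mapping A into an orthogonal,
# rotation-like factor R and a symmetric factor P, so that A = R @ P.
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                  # a simple shear: no full set of real eigenvectors
U, s, Vt = np.linalg.svd(A)                 # singular value decomposition
R = U @ Vt                                  # the orthogonal (rotation/reflection) part
P = Vt.T @ np.diag(s) @ Vt                  # the symmetric, eigen-characterizable part
print(np.allclose(A, R @ P))                # True
print(np.linalg.eigvalsh(P))                # the real eigenvalues of the symmetric part
```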

Eigen-based characterizations are extremely useful, because they give rise to diagonalizations. To understand what this means, we pick an easy transform. Let’s say it scales x-coordinates to one half their original value. Things are squashed to half their original width, circles turn into ellipses, and so on. Now apply this mapping several times in a row. We see that in the direction of the mapping’s eigenvectors, all that happens is repeated scaling. In other directions, things also end up pointing in new directions. The first part is easy to handle, but we’d rather forget about the second one. The point about eigen-representations and diagonalization is that we break transforms down into two parts. The outer part maps stuff so that the eigenvectors align with the coordinate axes, and back. The inner part simply scales stuff along the coordinate axes. The outer part lets us look at the transform only along the directions which are easiest to handle, and the inner part tells us exactly how the transform acts along those directions. In this representation, repeated application of the transformation boils down to repeated scaling at the inner level, which simply means that we raise the individual scaling coefficients to the right integer power. If one compares this to the hard way, repeated matrix multiplication, it’s much, much nicer. (The name diagonalization comes from the fact that once we’ve broken down a mapping like this, it becomes a product of three matrices, the middle one of which will only have non-zero elements on its diagonal.)
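In code, the whole idea fits in a few lines (toy matrix of my own choosing, assuming NumPy):

```python
# Diagonalization sketch (toy matrix, assumed): A = P D P^-1 with D diagonal.
import numpy as np

A = np.array([[0.5, 0.0],
              [0.3, 0.8]])                       # some diagonalizable transform
vals, P = np.linalg.eig(A)                       # columns of P are eigenvectors
D = np.diag(vals)                                # the inner, purely scaling part
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True

# Applying A ten times boils down to powering the scaling coefficients:
k = 10
A_k_easy = P @ np.diag(vals ** k) @ np.linalg.inv(P)
A_k_hard = np.linalg.matrix_power(A, k)          # repeated matrix multiplication
print(np.allclose(A_k_easy, A_k_hard))           # True
```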

Popularity

What we have so far seems to have very little to do with trust or popularity indeed. This isn’t the case, though, when we think about it a little. In calculating popularity metrics, we have a bunch of people recommending stuff to each other. If there are n people, the relationships between them can be modelled as an n×n matrix where the entry at row i and column j tells whether (or how much) person i likes what person j says. The matrix represents a linear mapping from an assignment of popularity values to another such assignment. This mapping is presumed to be linear in an n-dimensional vector space (i.e. if a single person’s popularity maps to a given assignment, doubling it will map to double the outcome; incoming votes from multiple people sum). When we apply the transform, votes are propagated from the original assignment to other people based on the coefficients. If we originally place popularity in just one person, after the application it will have spread in certain proportions to those people who like the original one. A second application spreads it onwards to those who trust them, and so on. There will be loops, and some of the goodwill will come back to you. The real question is, will the assignment stabilize, and what will a stable assignment look like?
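Here’s a tiny sketch of that setup (all the numbers are toy data I made up, following the row/column convention above):

```python
# Toy popularity propagation (made-up data): M[i, j] says how much person i
# likes what person j says; applying M propagates an assignment of popularity.
import numpy as np

M = np.array([
    [0.0, 1.0, 0.5],   # person 0 likes person 1 a lot, person 2 somewhat
    [1.0, 0.0, 0.0],   # person 1 likes person 0
    [0.5, 1.0, 0.0],   # person 2 likes persons 0 and 1
])

v = np.array([1.0, 0.0, 0.0])   # place all popularity in person 0
print(M @ v)                    # it spreads to the people who like person 0
print(M @ (M @ v))              # ...and onwards on the second application

# Linearity: doubling an input doubles the outcome, and separate inputs sum.
w = np.array([0.0, 1.0, 0.0])
print(np.allclose(M @ (2 * v + w), 2 * (M @ v) + M @ w))   # True
```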

The first condition is that we don’t get blowup. A stable configuration must be one which doesn’t acquire extra trust while passing through the mapping. Similarly, it shouldn’t subtract from the total amount of trust. The relative amounts of trust/popularity assigned to each participant must stay constant. If we look back at what we’ve been talking about, relative amounts correspond to a direction of the popularity vector as seen from the origin of our space. Since this cannot change, the configuration must actually be an eigenvector of the trust mapping. Since the total trust must be invariant, the stable configuration must have a corresponding eigenvalue of precisely one.
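A quick numerical check of this, using the same made-up matrix as above, normalized so that each column sums to one and each application therefore conserves the total amount of popularity:

```python
# Checking the stability condition (same toy data): after normalization the
# leading eigenvalue is one, and its eigenvector survives the mapping unchanged.
import numpy as np

M = np.array([
    [0.0, 1.0, 0.5],
    [1.0, 0.0, 0.0],
    [0.5, 1.0, 0.0],
])
M = M / M.sum(axis=0, keepdims=True)     # each column sums to one: total popularity is conserved

vals, vecs = np.linalg.eig(M)
k = int(np.argmax(vals.real))            # pick the leading eigenvalue
print(vals[k].real)                      # approximately 1.0

stable = np.abs(vecs[:, k].real)
stable = stable / stable.sum()           # relative amounts of trust per person
print(np.allclose(M @ stable, stable))   # True: the assignment does not change
```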

In practice this condition will rarely hold by itself. Instead we make it hold by normalizing the transform to have a unity maximum eigenvalue. The other eigenvalues will then be smaller than unity in magnitude, and the only popularity vector which can actually survive repeated application of the transform will be the eigenvector corresponding to the leading eigenvalue. One way to calculate the stable popularity distribution is to repeatedly apply the mapping, beginning from a random starting point, while constantly rescaling the vector to keep it from blowing up. This will almost always suffice to turn the vector into the direction of the leading eigenvector.
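In code, the recipe looks roughly like this (same toy matrix as before; the per-step rescaling plays the role of the normalization mentioned above):

```python
# Power iteration sketch (same toy data): repeatedly apply the mapping to a
# random starting assignment, rescaling each time so nothing blows up; the
# vector turns toward the leading eigenvector, i.e. the stable distribution.
import numpy as np

M = np.array([
    [0.0, 1.0, 0.5],
    [1.0, 0.0, 0.0],
    [0.5, 1.0, 0.0],
])

rng = np.random.default_rng(0)
v = rng.random(3)                        # a random starting point
for _ in range(100):
    v = M @ v                            # propagate the votes once more
    v = v / v.sum()                      # rescale so the total stays fixed

print(v)                                 # relative popularity of each person
print((M @ v) / v)                       # each ratio approximates the leading eigenvalue
```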

What I’ve described so far is Google’s patented PageRank algorithm, minus a couple of slight complications having to do with loops (i.e. rotations) and random jumps within the web. Google uses pages as the things which acquire popularity values, and links as binary coefficients along which popularity is propagated. Here links from popular pages are most important, and lots of links from all over signify a popular page. Nice and simple.
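For the random-jump part, the usual trick is to blend the normalized matrix with a uniform jump-anywhere matrix. The sketch below does this on the same toy data; the 0.85 damping value is just the commonly quoted one, and none of this is meant to reproduce Google’s exact formulation.

```python
# Random-jump sketch (toy data; not Google's exact formulation): blend the
# normalized matrix with a uniform "jump anywhere" matrix before iterating.
import numpy as np

M = np.array([
    [0.0, 1.0, 0.5],
    [1.0, 0.0, 0.0],
    [0.5, 1.0, 0.0],
])
n = M.shape[0]
M = M / M.sum(axis=0, keepdims=True)          # conserve total popularity
d = 0.85                                      # damping factor (assumed, commonly quoted value)
G = d * M + (1 - d) / n * np.ones((n, n))     # with probability 1 - d, jump anywhere

v = np.full(n, 1.0 / n)                       # start from a uniform assignment
for _ in range(100):
    v = G @ v                                 # G keeps the total at one by itself
print(v)                                      # the stable, jump-smoothed ranking
```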

Trust

Raph’s trust metric is an outgrowth of the reasoning behind Google. It ranks people based on the trust links they make to each other.