
Using hyperboloids to embed knowledge graphs


Knowledge graphs are an extremely efficient way to represent information, but the standard techniques for analyzing them, which involve tracing connections through a graph one hop at a time, don’t scale well.

A two-dimensional projection of hyperboloid embeddings of elements of the Amazon product graph. For ease of interpretation, the graph itself is superimposed on the embeddings. The size of the hyperboloids indicates how densely connected the corresponding graph elements are to other elements, which in this case indicates terms’ generality. The projection renders elements lower in the graph hierarchy as closer to the boundary of the embedding circle.

Recently, knowledge graph embeddings, which represent elements of the graph as points in a multidimensional space, have provided a way to analyze knowledge graphs more efficiently. But they give up much of the graphs’ informational richness.

At this year’s Web Conference, my colleagues and I presented a new way to embed knowledge graphs, as hyperboloids on a Poincaré hypersphere.

Hyperboloids are bounded curved surfaces — curved analogues of rectangles — and characteristics of hyperbolic space enable them to capture hierarchical information that conventional knowledge graph embeddings lose.

Our paper describes a neural-network architecture for learning hyperboloid embeddings that enables us to logically compose knowledge graph queries. We could, for instance, search a product graph for all footwear from either brand A or brand B, a query that could be logically represented as the union of the intersections of the embeddings for brand A and brand B with the embedding for footwear.

In this product graph, both brand names (A and B) and product types (footwear) are network nodes. A node that descends from two parents (say, brand B and footwear) fits into two categories. The set of nodes with the same two parents constitutes the intersection of the parents’ categories.

Credit: Glynis Condon
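As a toy illustration of that composition, here is the same query carried out over ordinary Python sets rather than learned embeddings. The product names are invented; in HypE itself, the composition happens on hyperboloid embeddings, not explicit sets.

```python
# Toy set semantics for "footwear from either brand A or brand B".
# All names are invented for illustration.

footwear = {"sneaker_1", "boot_1", "sandal_1"}
brand_a = {"sneaker_1", "hat_1"}
brand_b = {"boot_1", "shirt_1"}

# Union of the two brand/footwear intersections.
result = (brand_a & footwear) | (brand_b & footwear)
print(result)  # {'sneaker_1', 'boot_1'}
```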

In experiments, we compared our approach to four other graph embedding schemes, using five different datasets. On each dataset, we posed nine different queries, for a total of 45 queries. Our approach outperformed its predecessors on 44 of those queries. Some of the improvements were dramatic: on individual queries, gains of 20% to 50% over the best-performing predecessor were common.

Horocycle intersection

A knowledge graph consists of nodes, which represent entities, connected by edges, which represent relationships between entities. A typical graph embedding would represent nodes and edges as vectors in the embedding space. With a good embedding, summing the vectors for a node and one of its edges should approximate the vector for the node that shares that edge.
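A minimal sketch of that translation property, in the style of conventional vector embeddings such as TransE (not the hyperbolic scheme described below), using synthetic vectors:

```python
import numpy as np

# With a good translational embedding, head + relation ≈ tail for every
# edge (head, relation, tail). The vectors here are synthetic.

rng = np.random.default_rng(0)
dim = 8

head = rng.normal(size=dim)
relation = rng.normal(size=dim)
tail = head + relation + rng.normal(scale=0.01, size=dim)  # a true edge

# A small residual norm suggests the edge holds; a large one suggests not.
print(np.linalg.norm(head + relation - tail))                  # small: likely edge
print(np.linalg.norm(head + relation - rng.normal(size=dim)))  # large: unlikely edge
```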

Our embedding scheme, which we call HypE, instead embeds nodes and edges as hyperboloids on a Poincaré hypersphere. More specifically, each hyperboloid is defined by the intersections of two pairs of parallel arc-aligned horocycles.

An arc-aligned horocycle is a partial circle that is parallel to the diameter of a Poincaré hypersphere and intersects the hypersphere boundary orthogonally. (On a Poincaré hypersphere, parallelism is that of a curved space: parallel lines may exhibit different degrees of curvature; what defines them is that they never intersect.)

At left are nine arc-aligned horocycles on a Poincaré disk; the other eight horocycles are parallel to the disk diameter CD, in that they do not intersect it. At right is a hyperboloid (CDEF) defined by the intersection of two pairs of parallel horocycles.
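For concreteness, the orthogonal-intersection condition can be checked numerically. This sketch assumes the Poincaré disk is the unit disk and uses the standard fact that a Euclidean circle with center c and radius r meets the unit circle at right angles exactly when |c|² = 1 + r².

```python
import numpy as np

# Orthogonality check for the unit Poincaré disk: a Euclidean circle
# with center c and radius r meets the unit circle at right angles
# exactly when |c|^2 = 1 + r^2 (the orthogonal-circles condition).

def meets_boundary_orthogonally(center, radius, tol=1e-9):
    center = np.asarray(center, dtype=float)
    return bool(abs(center @ center - (1.0 + radius**2)) < tol)

print(meets_boundary_orthogonally([1.5, 0.0], np.sqrt(1.25)))  # True
print(meets_boundary_orthogonally([1.5, 0.0], 0.5))            # False
```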

The hyperboloid determined by the intersection of horocycles can be described by the location of its center and the limit of its boundary (see figure above).
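A minimal sketch of that (center, limit) description. The field names are assumptions for illustration, and the containment test is a box-like stand-in for the actual hyperbolic geometry.

```python
from dataclasses import dataclass
import numpy as np

# Assumed parameterization: a region described by its center and the
# per-dimension extent ("limit") of its boundary.

@dataclass
class Hyperboloid:
    center: np.ndarray  # where the region sits in the embedding space
    limit: np.ndarray   # per-dimension extent of its boundary

    def contains(self, point: np.ndarray) -> bool:
        # Box-like stand-in for the hyperbolic containment test.
        return bool(np.all(np.abs(point - self.center) <= self.limit))

h = Hyperboloid(center=np.zeros(2), limit=np.array([0.5, 0.25]))
print(h.contains(np.array([0.3, 0.1])))  # True
print(h.contains(np.array([0.3, 0.4])))  # False
```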

The intersection of hyperboloids.

Because hyperboloid embeddings are extended in space, HypE can learn embeddings whose spatial overlap represents the logical intersection of categories encoded in the graph. To extend the example above, the embedding for brand A footwear would be the intersection of the embedding for brand A and the embedding for footwear.
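One plausible way to picture that spatial overlap is to treat each (center, limit) region as an axis-aligned box; in HypE itself, the intersection operator is learned by the network rather than computed in closed form like this.

```python
import numpy as np

# Overlap of two (center, limit) regions, treated as axis-aligned boxes.
# Returns the (center, limit) of the overlap, or None if disjoint.

def intersect(center_a, limit_a, center_b, limit_b):
    lower = np.maximum(center_a - limit_a, center_b - limit_b)
    upper = np.minimum(center_a + limit_a, center_b + limit_b)
    if np.any(lower > upper):
        return None  # the regions do not overlap
    return (lower + upper) / 2, (upper - lower) / 2  # new center, new limit

brand_a = (np.array([0.2, 0.0]), np.array([0.3, 0.3]))
footwear = (np.array([0.4, 0.1]), np.array([0.3, 0.2]))
print(intersect(*brand_a, *footwear))
# (array([0.3, 0.1]), array([0.2, 0.2])) -> the "brand A footwear" region
```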

Architecture

We train a neural network that takes as inputs an entity, one of its relations, an arbitrary number of additional entities, and a control signal that indicates one of three operations: translation, union, or intersection. Union and intersection are the standard logical operations; translation simply means the traversal of some number of hops (in our experiments, one to three) through the graph.

The HypE architecture, with an operator signal that controls a sequence of switches within the network. Yellow represents the centers of hyperboloids, pink their limits.

The control signal sets a series of switches within the network that determine which inputs contribute to the output in what capacity. The network thus explicitly learns embeddings that encode information about particular types of logical relationships.
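The following sketch caricatures that switching mechanism with made-up operators: a single step routes its inputs through a translation, intersection, or union path depending on the control signal. The real HypE operators act on hyperboloid centers and limits in hyperbolic space, not on plain vectors like these.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
W_rel = rng.normal(scale=0.1, size=(dim, dim))  # hypothetical relation transform

def hype_step(op, entity, relation=None, others=()):
    """Route inputs through one of three operator paths, per the control signal."""
    if op == "translation":          # one hop along a relation
        return entity + relation @ W_rel
    stack = np.stack([entity, *others])
    if op == "intersection":         # stand-in: contract toward a common point
        return stack.mean(axis=0)
    if op == "union":                # stand-in: cover all arguments
        return stack.max(axis=0)
    raise ValueError(f"unknown operation: {op}")

e1, e2 = rng.normal(size=dim), rng.normal(size=dim)
r = rng.normal(size=dim)
print(hype_step("translation", e1, relation=r).shape)    # (8,)
print(hype_step("intersection", e1, others=[e2]).shape)  # (8,)
```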

Our experiments compared the performance of embedding schemes on nine different queries: one-hop, two-hop, and three-hop translations; two- and three-entity intersections; two-entity unions; and three different combinations of translation and the logical operators.
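A toy composed query in set semantics, illustrating the "translation plus logical operator" query shapes listed above; the graph and names are invented.

```python
from collections import defaultdict

# Hypothetical toy graph as (head, relation, tail) triples.
triples = [
    ("brand_A", "makes", "sneaker_1"),
    ("brand_B", "makes", "boot_1"),
    ("brand_B", "makes", "sneaker_1"),
]

adjacency = defaultdict(set)
for head, relation, tail in triples:
    adjacency[(head, relation)].add(tail)

def translate(nodes, relation):
    """One-hop translation: follow `relation` from every node in `nodes`."""
    return set().union(*(adjacency[(n, relation)] for n in nodes))

# Translation + intersection: products made by both brand A and brand B.
result = translate({"brand_A"}, "makes") & translate({"brand_B"}, "makes")
print(result)  # {'sneaker_1'}
```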

Across five different datasets, HypE was the top performer on all but one test, a combination of union and translation; on that test, it finished second. On the five datasets, its average improvements over the second-best embedding schemes were 7% to 33%.


