This repository was archived by the owner on Aug 5, 2019. It is now read-only.

Commit c027843

IgnasAusiejus authored and committed
Update library.md
fix minor typos Signed-off-by: IgnasAusiejus <ignas@zenitech.co.uk>
1 parent 2d07e95 commit c027843

1 file changed

Lines changed: 3 additions & 3 deletions

File tree

library.md

@@ -82,7 +82,7 @@ Details:
 
 * The authors propose a procedure for (i) determining the node sequences for which neighborhood graphs are created and (ii) computing a normalization of neighborhood graphs.
 * Node sequence selection: sort nodes according to some labeling (e.g. color refinement a.k.a. naive vertex classification), then traverse this sequence with some stride and generate receptive fields for each selected node.
-* For each selecte node we assemble its neighborhood by BFS.
+* For each selected node we assemble its neighborhood by BFS.
 * Each neighborhood is normalized to produce a receptive field: pick neighboring nodes according to the receptive field size and canonize the subgraph for these nodes.
 * We can interpret node and edge features as channels, thus we can feed the generated receptive fields to a CNN.
 
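The selection-and-BFS steps described in the hunk above can be sketched in plain Python. Everything here is illustrative: the adjacency-dict graph format, the `labeling` ranks, and the parameter names are assumptions, and the canonical subgraph labeling step from the notes is omitted (padding stands in for full normalization).

```python
from collections import deque

def receptive_fields(adj, labeling, width, stride, field_size):
    """Sketch: select a node sequence by a labeling, then build
    fixed-size neighborhoods by BFS from each selected node.

    adj       -- dict mapping node -> set of neighbor nodes (assumed format)
    labeling  -- dict mapping node -> sortable rank (e.g. a color-refinement rank)
    width     -- how many receptive fields to generate
    stride    -- step used when traversing the sorted node sequence
    field_size -- number of nodes kept per neighborhood
    """
    # Sort nodes by the labeling, then pick every `stride`-th node.
    sequence = sorted(adj, key=lambda n: labeling[n])
    selected = sequence[::stride][:width]

    fields = []
    for root in selected:
        # Assemble the neighborhood around the selected node by BFS,
        # visiting neighbors in labeling order for determinism.
        seen, queue, order = {root}, deque([root]), []
        while queue and len(order) < field_size:
            node = queue.popleft()
            order.append(node)
            for nb in sorted(adj[node], key=lambda n: labeling[n]):
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        # Crude normalization: truncate or pad to a fixed size.
        fields.append((order + [None] * field_size)[:field_size])
    return fields
```

Since each receptive field has a fixed size and ordering, node and edge attributes over these fields can be treated as channels of a regular tensor, which is what makes the CNN step possible.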

@@ -126,7 +126,7 @@ http://dl.acm.org/citation.cfm?id=2806512
 Details:
 
 * The authors propose the same loss as in skip-gram, but with Noise Contrastive Estimation.
-* Turns out optimizing this loss is equivalent to factorizing PMI for transition probability matrix, thus we ccould use lower dimensional representation of our nodes.
+* Turns out optimizing this loss is equivalent to factorizing PMI for transition probability matrix, thus we could use lower dimensional representation of our nodes.
 * We can generate multiple k-step transition probability matrices (it contains probabilities for reaching other vertices in exactly k steps), and concatenate their respective lower dimensional approximations.
 
 Thoughts: Matrix factorization based methods can't learn complex non-linear interactions, unless it's explicitly encoded in the matrix itself. This method overcomes some of these limitations by utilizing info from many transition probability matrices, but it feels that "Deep Neural Networks for Learning Graph Representations" offers a better way to handle non-linear dependencies in data.
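The k-step idea in the hunk above can be sketched with NumPy. This is a loose illustration, not the paper's exact objective: the PMI variant, the clamping constant, and the use of a plain SVD for the low-rank factorization are all simplifying assumptions.

```python
import numpy as np

def kstep_pmi_embeddings(A, k_max, dim):
    """Sketch: build k-step transition probability matrices, take a
    PMI-style log transform of each, factorize with truncated SVD,
    and concatenate the per-step low-dimensional representations.

    A     -- dense adjacency matrix (every node assumed to have degree > 0)
    k_max -- number of transition steps to use
    dim   -- embedding dimension per step (output has k_max * dim columns)
    """
    n = A.shape[0]
    # Row-normalize to get the 1-step transition probability matrix.
    P = A / A.sum(axis=1, keepdims=True)
    Pk = np.eye(n)
    parts = []
    for _ in range(k_max):
        Pk = Pk @ P  # probabilities of reaching other vertices in exactly k steps
        # PMI-style matrix: log of transition prob over its column average,
        # clamped to avoid log(0) and kept positive (PPMI-like).
        X = np.log(np.maximum(Pk / Pk.mean(axis=0, keepdims=True), 1e-12))
        X = np.maximum(X, 0.0)
        # Low-rank factorization via SVD, keeping the top `dim` components.
        U, S, _ = np.linalg.svd(X)
        parts.append(U[:, :dim] * np.sqrt(S[:dim]))
    # Concatenate the per-step representations into one embedding per node.
    return np.hstack(parts)
```

The concatenation across several k is what lets this approach capture more than a single transition matrix would, which is the point the "Thoughts" paragraph above weighs against neural alternatives.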
@@ -178,4 +178,4 @@ https://arxiv.org/abs/1702.06921v1
 
 Details:
 
-*
+*

0 commit comments
