QUANTIFYING THE ALIGNMENT OF GRAPH AND FEATURES IN DEEP LEARNING
Abstract: We show that the classification performance of graph convolutional networks (GCNs) is related to the alignment between features, graph, and ground truth, which we quantify using a subspace alignment measure (SAM) corresponding to the Frobenius norm of the matrix of pairwise chordal distances between three subspaces associated with features, graph, and ground truth. The proposed measure is based on the principal angles between subspaces and has both spectral and geometrical interpretations. We showcase the relationship between the SAM and the classification performance through the study of limiting cases of GCNs and systematic randomizations of both features and graph structure applied to a constructive example and several examples of citation networks of different origins. The analysis also reveals the relative importance of the graph and features for classification purposes.
The pairwise chordal distances D_ij are computed from a number of principal angles equal to the smaller of the two dimensions of the subspaces being compared. Hence, the dimensions of the subspaces (k_X, k_A, k_Y) need to be defined in order to compute the distance matrix D. Here, we are interested in finding low-dimensional subspaces of features, graph, and ground truth with dimensions (k*_X, k*_A, k*_Y) such that they provide maximum discriminatory power between the original problem and the fully randomized (null) model. To do this, we propose the following criterion
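The computation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each subspace is given as a matrix whose columns form a basis, uses SciPy's `subspace_angles` for the principal angles, takes the chordal distance as the root-sum-square of the sines of those angles, and returns the SAM as the Frobenius norm of the pairwise distance matrix. The function names are illustrative.

```python
import numpy as np
from scipy.linalg import subspace_angles


def chordal_distance(U, V):
    """Chordal distance between the column spaces of U and V.

    subspace_angles returns min(dim U, dim V) principal angles,
    matching the 'smaller of the two dimensions' in the text.
    """
    theta = subspace_angles(U, V)
    return np.sqrt(np.sum(np.sin(theta) ** 2))


def sam(subspaces):
    """Subspace alignment measure: Frobenius norm of the matrix of
    pairwise chordal distances between the given subspaces."""
    n = len(subspaces)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = chordal_distance(subspaces[i], subspaces[j])
    return np.linalg.norm(D, "fro")
```

For three subspaces (features, graph, ground truth), `sam([U_X, U_A, U_Y])` aggregates the three pairwise distances; orthogonal subspaces contribute distance 1 per shared dimension, identical subspaces contribute 0.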
A GCN is a semi-supervised method, in which a small subset of the node ground-truth labels is used in the training phase to infer the class of unlabeled nodes. This type of learning paradigm, where only a small amount of labeled data is available, therefore lies between supervised and unsupervised learning.
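A single GCN propagation layer can be sketched as below. This is an assumed minimal NumPy version of the standard layer H' = ReLU(Â H W), where Â is the symmetrically normalized adjacency matrix with self-loops; it is illustrative only and omits training, weight initialization, and the semi-supervised loss over the labeled subset.

```python
import numpy as np


def gcn_layer(A, X, W):
    """One GCN layer: ReLU(A_hat @ X @ W), where
    A_hat = D^{-1/2} (A + I) D^{-1/2} adds self-loops and
    symmetrically normalizes by node degree."""
    A_tilde = A + np.eye(A.shape[0])          # add self-loops
    d = A_tilde.sum(axis=1)                   # degrees of A_tilde
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt
    return np.maximum(A_hat @ X @ W, 0.0)     # ReLU activation
```

In the semi-supervised setting, the cross-entropy loss is evaluated only on the small labeled subset of nodes, while the propagation step mixes information from all nodes, labeled or not.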
We used 12 categories from the application programming interface (API) (People, Geography, Culture, Society, History, Nature, Sports, Technology, Health, Religion, Mathematics, and Philosophy) and assigned each document to one of them. As part of our investigation, we split this large Wikipedia data set into two smaller subsets of non-overlapping categories.
