HSGAN: Hierarchical Graph Learning for Point Cloud Generation

Abstract: Point clouds are among the most general data representations of real and abstract objects and have a wide variety of applications across science and engineering. They also provide the most scalable multi-resolution composition for geometric structures. Although point cloud learning has shown remarkable results in shape estimation and semantic segmentation, the unsupervised generation of 3D object parts still poses significant challenges for 3D shape understanding. We address this problem by proposing a novel Generative Adversarial Network (GAN), named HSGAN (Hierarchical Self-Attention GAN), with remarkable properties for 3D shape generation. Our generative model takes a random code and hierarchically transforms it into a representation graph by incorporating both a Graph Convolution Network (GCN) and self-attention. By embedding the global graph topology in shape generation, the proposed model exploits latent topological information to fully construct the geometry of 3D object shapes. Unlike existing generative pipelines, our deep learning architecture offers three significant properties: (1) HSGAN effectively deploys compact latent topology information as a graph representation in the generative learning process and generates realistic point clouds; (2) HSGAN avoids multiple discriminator updates per generator update; and (3) HSGAN preserves the most dominant geometric structures of 3D shapes within the same hierarchical sampling process. We demonstrate the performance of our approach with both quantitative and qualitative evaluations. We further present a new adversarial loss that maintains training stability and overcomes the potential mode collapse of traditional GANs. Finally, we explore the use of HSGAN as a plug-and-play decoder in an auto-encoding architecture.
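To make the pipeline concrete, below is a minimal PyTorch sketch of an HSGAN-style hierarchical generator: a latent code is treated as a single root node, repeatedly branched into child nodes, and refined with self-attention before being projected to 3D coordinates. The layer sizes, branching factors, and module names (`SelfAttention`, `HierarchicalGenerator`) are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch of a hierarchical point cloud generator (PyTorch).
# All sizes and the branching schedule are illustrative assumptions.
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head self-attention over the nodes of the current graph."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, N, C)
        attn = torch.softmax(
            self.q(x) @ self.k(x).transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
        return x + attn @ self.v(x)             # residual connection

class HierarchicalGenerator(nn.Module):
    """Expands a latent code into a point cloud by repeatedly branching
    each node and refining node features with self-attention."""
    def __init__(self, z_dim=96, dim=96, branches=(1, 2, 2, 64)):
        super().__init__()
        self.branches = branches
        self.splits = nn.ModuleList(nn.Linear(dim, dim * b) for b in branches)
        self.attns = nn.ModuleList(SelfAttention(dim) for _ in branches)
        self.to_xyz = nn.Linear(dim, 3)

    def forward(self, z):                       # z: (B, z_dim)
        x = z.unsqueeze(1)                      # start from a single root node
        for split, attn, b in zip(self.splits, self.attns, self.branches):
            B, N, C = x.shape
            x = split(x).reshape(B, N * b, C)   # each node spawns b children
            x = attn(x)                         # propagate global topology
        return self.to_xyz(x)                   # (B, N_total, 3) point cloud

g = HierarchicalGenerator()
points = g(torch.randn(4, 96))                  # -> (4, 256, 3)
```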
EXISTING SYSTEM:
• Differently from existing methods, we enhance the high-level representation of a point cloud by capturing the relations between points and the fine-grained local information along its channels.
• With the development of deep learning, existing end-to-end neural networks have overcome many challenges stemming from 3D data and achieved great breakthroughs for point clouds.
• Convolutional neural networks are at the core of highly successful models in image generation and understanding.
• This success is due to the ability of the convolution operation to exploit the principles of locality, stationarity, and compositionality that hold true for many data of interest (see the sketch below for how locality transfers to point clouds).
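The locality principle carries over to point clouds by building a k-NN graph and aggregating features over each point's neighborhood. The sketch below follows an EdgeConv-style formulation; the neighborhood size `k`, the feature widths, and the helper names are illustrative assumptions.

```python
# Hedged sketch of local feature aggregation on a k-NN graph (PyTorch).
import torch
import torch.nn as nn

def knn_indices(xyz, k):
    """Indices of the k nearest neighbors of every point. xyz: (B, N, 3)."""
    dist = torch.cdist(xyz, xyz)                # (B, N, N) pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]  # drop self

class LocalAggregation(nn.Module):
    def __init__(self, in_dim, out_dim, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, xyz, feats):              # feats: (B, N, C)
        idx = knn_indices(xyz, self.k)          # (B, N, k)
        B, N, C = feats.shape
        nbrs = feats.gather(
            1, idx.reshape(B, -1, 1).expand(-1, -1, C)
        ).reshape(B, N, self.k, C)              # neighbor features
        center = feats.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([center, nbrs - center], dim=-1)  # local edge features
        return self.mlp(edge).max(dim=2).values  # (B, N, out_dim), max over nbrs
```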
DISADVANTAGE:
• Learning-based approaches have been proposed to solve various 3D vision problems, e.g., shape classification, scene semantic/instance segmentation, and 3D object detection.
• PointCNN explores convolution on point clouds and addresses the point-ordering issue by permuting and weighting input points and features with the X-Conv operator (a simplified sketch follows below).
• Three dimensions (e.g., geometric positions) have very limited impact in a high-dimensional feature space if all features (including "positions") are co-treated with merely an MLP.
• As the number of points on the objects decreases, the impact of the ProRe Module gradually becomes apparent.
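For reference, here is a simplified sketch of an X-Conv-style operator: a small network predicts a K x K transform from the K neighbor coordinates, which reweights (approximately permutes) the neighbor features before a shared linear layer. For brevity, max-pooling stands in for PointCNN's final convolution across the K neighbors; all sizes and names are illustrative assumptions, not PointCNN's exact operator.

```python
# Hedged sketch of an X-Conv-like operator (PyTorch).
import torch
import torch.nn as nn

class XConvLike(nn.Module):
    def __init__(self, in_dim, out_dim, k=8):
        super().__init__()
        self.k = k
        self.to_x = nn.Linear(3 * k, k * k)     # predicts the K x K matrix X
        self.lift = nn.Linear(3, in_dim)        # lifts coordinates to features
        self.conv = nn.Linear(2 * in_dim, out_dim)

    def forward(self, local_xyz, local_feats):
        # local_xyz:   (B, M, K, 3) neighbor coords relative to each center
        # local_feats: (B, M, K, C) neighbor features, C == in_dim
        B, M, K, _ = local_xyz.shape
        X = self.to_x(local_xyz.reshape(B, M, -1)).reshape(B, M, K, K)
        f = torch.cat([self.lift(local_xyz), local_feats], dim=-1)
        f = X @ f                               # weight/permute the K neighbors
        return self.conv(f).max(dim=2).values   # (B, M, out_dim)
```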
PROPOSED SYSTEM:
• The first spatial-based GCN aggregated the neighborhood information of vertices by direct summation.
• VoteNet proposed a new voting method that predicts object centers from learned features, which helps aggregate distant semantic information.
• In addition, many attention-based GCNs have been proposed; GINs assign different weights to the central vertex and its neighboring vertices (both aggregation styles are sketched below).
• A new shape-attentive GConv is proposed to capture local shape semantics.
• We propose a novel framework, HGNet, which learns semantics via hierarchical graph modelling.
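The two aggregation styles named above can be contrasted in a few lines: plain neighborhood summation (the first spatial-based GCN) versus a GIN-style update that weights the central vertex separately via a learnable epsilon. A dense (N, N) adjacency matrix is assumed here for clarity; the layer widths are illustrative.

```python
# Hedged sketch contrasting sum-aggregation GCN and a GIN-style layer.
import torch
import torch.nn as nn

class SumGCNLayer(nn.Module):
    """h_v' = MLP( sum over neighbors u of h_u )"""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, h, adj):                  # h: (N, C), adj: (N, N)
        return self.mlp(adj @ h)                # neighborhood summation

class GINLayer(nn.Module):
    """h_v' = MLP( (1 + eps) * h_v + sum over neighbors u of h_u )"""
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # central-vertex weight
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, h, adj):
        return self.mlp((1 + self.eps) * h + adj @ h)
```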
ADVANTAGE:
• The decent performance of our method compared with all existing point-based neural networks on large-scale scene-labeling datasets, i.e., Stanford Large-Scale 3D Indoor Spaces (S3DIS) and ScanNet, demonstrates the effectiveness of our framework.
• The performance gain of the graph-convolution-style methods is lower than that of max-pooling followed by concatenation.
• Therefore, directly voxelizing 3D scenes and extending deep neural network operations from 2D to 3D is inefficient.
• Several voxel-based methods, such as Submanifold Sparse Convolution and O-CNN, improve 3D convolution efficiency.
• Point features from the encoder layers are also used in the process, with skip connections to the corresponding decoder layers (a minimal sketch follows below).
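The skip connection mentioned in the last item can be sketched as follows: encoder features at a given resolution are concatenated with the upsampled decoder features at the matching resolution, then fused by a shared MLP (U-Net style). All dimensions and names here are illustrative assumptions.

```python
# Hedged sketch of an encoder-to-decoder skip connection for point features.
import torch
import torch.nn as nn

class DecoderStage(nn.Module):
    def __init__(self, dec_dim, skip_dim, out_dim):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(dec_dim + skip_dim, out_dim), nn.ReLU())

    def forward(self, dec_feats, skip_feats):
        # dec_feats:  (B, N, dec_dim)  upsampled decoder features
        # skip_feats: (B, N, skip_dim) encoder features at the same resolution
        return self.fuse(torch.cat([dec_feats, skip_feats], dim=-1))

stage = DecoderStage(dec_dim=128, skip_dim=64, out_dim=128)
out = stage(torch.randn(2, 1024, 128), torch.randn(2, 1024, 64))  # (2, 1024, 128)
```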
