DEEPKEYGEN: A DEEP LEARNING-BASED STREAM CIPHER GENERATOR FOR MEDICAL IMAGE ENCRYPTION AND DECRYPTION
ABSTRACT :
The need for medical image encryption is increasingly pronounced, for example to safeguard the privacy of patients' medical imaging data. In this paper, a novel deep learning-based key generation network (DeepKeyGen) is proposed as a stream cipher generator to produce the private key, which can then be used for the encryption and decryption of medical images. In DeepKeyGen, the generative adversarial network (GAN) is adopted as the learning network to generate the private key. Furthermore, the transformation domain (which represents the "style" of the private key to be generated) is designed to guide the learning network in realizing the private key generation process. The goal of DeepKeyGen is to learn the mapping that transfers the initial image to the private key. We evaluate DeepKeyGen using three datasets, namely the Montgomery County chest X-ray dataset, the Ultrasonic Brachial Plexus dataset, and the BraTS18 dataset. The evaluation findings and security analysis show that the proposed key generation network can achieve a high security level in generating the private key.
Index Terms—Key generator, deep learning, generative adversarial network, image-to-image translation
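To illustrate how a generated private key can serve as a stream cipher for images, the following is a minimal sketch rather than the paper's exact scheme: the key image is treated as a keystream and XOR-ed with the plaintext pixels, and applying the same operation with the same key recovers the original image. The array sizes and the function name are illustrative assumptions.

```python
import numpy as np

def xor_stream_cipher(image: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Encrypt or decrypt an 8-bit image by XOR-ing it with a key image.

    XOR is its own inverse, so applying this function twice with the same
    key recovers the original image (stream-cipher behaviour).
    """
    assert image.shape == key.shape, "key must match the image dimensions"
    return np.bitwise_xor(image.astype(np.uint8), key.astype(np.uint8))

# Illustrative usage with random placeholders for the medical image and
# the DeepKeyGen-produced key (both assumed to be 256x256 grayscale here).
plain = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
key = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

cipher = xor_stream_cipher(plain, key)       # encryption
recovered = xor_stream_cipher(cipher, key)   # decryption
assert np.array_equal(plain, recovered)
```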
EXISTING SYSTEM :
• The discriminator is used to distinguish between the key generated by the generator and the data from the transformation domain. Due to the randomness of the deep learning training process, the generated private keys differ even under the same training conditions; in other words, the proposed DeepKeyGen can be considered a one-time pad. Moreover, DeepKeyGen uses unlabeled and unpaired images to train the learning network, thus overcoming the data availability issue in training the GAN (see the training-step sketch below).
• In the next section, we will introduce the related literature, prior to presenting our proposed DeepKeyGen in the third section. We then describe our security and performance evaluations in the next two sections, before concluding this paper in the last section.
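To make the adversarial roles above concrete, here is a minimal, hypothetical PyTorch training step in which the discriminator D learns to tell generated keys apart from transformation-domain samples, while the generator G learns to fool it. The tiny networks, tensor shapes, and hyperparameters are placeholders, not the actual DeepKeyGen implementation.

```python
import torch
import torch.nn as nn

# Placeholder networks standing in for DeepKeyGen's generator and discriminator.
G = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.Tanh())            # initial image -> key
D = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

initial_img = torch.rand(4, 1, 64, 64)    # batch from the source domain
target_style = torch.rand(4, 1, 64, 64)   # batch from the transformation domain

# Discriminator step: "real" = transformation-domain data, "fake" = generated key.
fake_key = G(initial_img).detach()
loss_d = bce(D(target_style), torch.ones(4, 1)) + bce(D(fake_key), torch.zeros(4, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: make the generated key indistinguishable from the target style.
loss_g = bce(D(G(initial_img)), torch.ones(4, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```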
DISADVANTAGES :
• Inspired by the concept of cycle consistency, Zhu et al. proposed the CycleGAN model to remove the need for specially paired training datasets in style transfer tasks. It can be trained on unpaired and unlabeled data, which facilitates the application of GANs to image-to-image translation (a minimal cycle-consistency sketch follows this list).
• Kim et al. proposed DiscoGAN to ensure that certain features of the image are preserved when the image is transferred to another domain. Inspired by the dual learning method originally developed for natural language processing, Yi et al. proposed the DualGAN model, which can translate images between two domains with different characteristics using unlabeled and unpaired data.
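As referenced above, the cycle-consistency idea behind CycleGAN and DualGAN can be sketched as follows: two generators translate between domains X and Y, and translating a sample to the other domain and back should reproduce it, which is what allows training on unpaired, unlabeled data. The toy convolutional generators below are placeholders, not the cited models.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two generators used in CycleGAN-style training:
# G maps domain X -> Y, F maps domain Y -> X (unpaired, unlabeled data).
G = nn.Conv2d(1, 1, 3, padding=1)
F = nn.Conv2d(1, 1, 3, padding=1)
l1 = nn.L1Loss()

x = torch.rand(4, 1, 64, 64)   # samples from domain X
y = torch.rand(4, 1, 64, 64)   # samples from domain Y (not paired with x)

# Cycle-consistency loss: translating to the other domain and back
# should reproduce the original sample, even without paired examples.
cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)
cycle_loss.backward()
```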
PROPOSED SYSTEM :
• These existing key generation algorithms require the generator to be manually designed; the generator realization process is then repeated several times, with the underlying mathematical formulas constantly refined, to achieve or approach the desired style. This is clearly time- and resource-expensive.
• The value ranges of p_i, x, and y are from 0 to 255, and the value range of c is from 0 to 2. The private key differs from those of existing stream ciphers in that it is actually a four-dimensional key. The key values (which contain the pixel value information and the three-dimensional spatial position information) ensure that the private key is complex and the key space is sufficiently large, thus significantly enhancing the key's security level (see the sketch after this list).
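Under the assumption that the generated key is a 256x256 image with three channels, the sketch below shows one way such a four-dimensional key can be laid out, with each element carrying a pixel value p_i together with its three-dimensional position (x, y, c). The layout is an illustration consistent with the value ranges stated above, not the paper's exact key format.

```python
import numpy as np

# Hypothetical generated key image: 256x256 pixels with 3 channels, matching
# the stated ranges (pixel values 0..255, x and y in 0..255, channel c in 0..2).
key_image = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# Flatten it into a four-dimensional key stream of (p_i, x, y, c) tuples,
# so each key element carries a pixel value plus its 3-D spatial position.
xs, ys, cs = np.meshgrid(np.arange(256), np.arange(256), np.arange(3), indexing="ij")
four_d_key = np.stack([key_image, xs, ys, cs], axis=-1).reshape(-1, 4)

print(four_d_key.shape)   # (196608, 4): 256 * 256 * 3 key elements
print(four_d_key[0])      # e.g. [p_0, 0, 0, 0]
```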
ADVANTAGES :
• The generator network G is used to transfer the initial image from the source domain to the transformation domain. The output of the generator network G is the private key, which holds the same security characteristics as the target "style" represented by the transformation domain.
• The generator network G is composed of three downsampling layers, six residual blocks, two transposed convolutional layers, and an ordinary convolutional layer. The initial image is first processed with the three downsampling layers, which are aimed at extracting features from the initial image (a sketch of this architecture follows this list).
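A minimal PyTorch sketch of a generator with this layout (three downsampling layers, six residual blocks, two transposed convolutions, and a final ordinary convolution) is given below. The channel widths, strides, normalization, and the choice of keeping the first downsampling layer at stride 1 so that two transposed convolutions restore the input resolution are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.InstanceNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)

def build_generator(in_ch: int = 3, out_ch: int = 3) -> nn.Sequential:
    """Sketch of a generator with 3 downsampling layers, 6 residual blocks,
    2 transposed convolutions, and a final ordinary convolution."""
    layers = [
        # Three downsampling (feature-extraction) layers; the first keeps the
        # spatial resolution, the next two halve it (assumed configuration).
        nn.Conv2d(in_ch, 64, 7, stride=1, padding=3), nn.InstanceNorm2d(64), nn.ReLU(inplace=True),
        nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.InstanceNorm2d(128), nn.ReLU(inplace=True),
        nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.InstanceNorm2d(256), nn.ReLU(inplace=True),
    ]
    layers += [ResidualBlock(256) for _ in range(6)]   # six residual blocks
    layers += [
        # Two transposed convolutions restore the original spatial resolution.
        nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1), nn.InstanceNorm2d(128), nn.ReLU(inplace=True),
        nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1), nn.InstanceNorm2d(64), nn.ReLU(inplace=True),
        # Ordinary convolution producing the key image.
        nn.Conv2d(64, out_ch, 7, padding=3), nn.Tanh(),
    ]
    return nn.Sequential(*layers)

key = build_generator()(torch.rand(1, 3, 256, 256))
print(key.shape)   # torch.Size([1, 3, 256, 256])
```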