A Privacy-Preserving Federated Learning Scheme for Multiparty Data Sharing in Social IoTs
ABSTRACT :
As 5G and mobile computing grow rapidly, deep learning services in Social Computing and the Social Internet of Things (IoT) have enriched our lives over the past few years. Mobile and IoT devices with computing capabilities can join social computing anytime and anywhere. Federated learning makes full use of decentralized training devices without requiring access to raw data, helping to break down data silos and deliver more precise services. However, a variety of attacks show that the current federated learning training process is still threatened by disclosures at both the data and content levels. In this paper, we propose a new hybrid privacy-preserving method for federated learning to meet these challenges. First, we employ an advanced functional encryption algorithm that protects not only the characteristics of the data uploaded by each client but also the weight of each participant in the weighted-summation procedure. Second, by designing a local Bayesian differential privacy noise mechanism, we effectively improve adaptability across differently distributed data sets. In addition, we use sparse differential gradients to improve transmission and storage efficiency during federated learning training. Experiments show the efficacy and efficiency of the proposed scheme.
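The abstract combines per-client noise injection with sparse gradient updates. A minimal sketch of that idea, assuming a top-k magnitude sparsifier with Gaussian noise on the kept entries (the function name, ratio, and noise scale are illustrative, not the paper's actual algorithm):

```python
import numpy as np

def sparsify_and_noise(grad, k_ratio=0.01, sigma=0.1, rng=None):
    """Keep only the top-k largest-magnitude gradient entries and add
    zero-mean Gaussian noise to them before upload. This is a generic
    stand-in for the sparse-differential-gradient + DP-noise steps."""
    rng = rng or np.random.default_rng(0)
    k = max(1, int(k_ratio * grad.size))
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k entries
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx] + rng.normal(0.0, sigma, size=k)  # noise only kept entries
    return sparse.reshape(grad.shape)

grad = np.arange(-50, 50, dtype=float).reshape(10, 10)
update = sparsify_and_noise(grad, k_ratio=0.05)  # transmit 5 of 100 entries
```

Only the k retained coordinates need to be transmitted and stored, which is where the efficiency gain over sending the dense gradient comes from.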
EXISTING SYSTEM :
• The Mobile Crowdsensing network uses the user's existing equipment for sensing; therefore, the sensed data may be neither accurate nor reliable.
• Dedicated sensing systems often incur high expenditures for initial deployment and recurring costs for maintenance, whereas non-dedicated sensors eliminate these upfront costs by using participants' pre-existing devices.
• A further challenge for Mobile Crowdsensing is the process of recruiting and incentivizing users.
• Redeploying existing dedicated sensors, or manufacturing dedicated sensors with higher processing capabilities, would be expensive and inefficient.
DISADVANTAGE :
• To address the aforementioned security and privacy issues, blockchain and differential privacy are adopted.
• Moreover, FL has attracted substantial attention recently, and one of its most important issues is privacy protection, which has been explored in the literature.
• The impact of the incentive mechanism on customers' reward and reputation must be evaluated.
• To avoid the data-deficiency problem while maintaining the machine learning model's accuracy and performance, a decentralized approach to machine learning, federated learning (FL), has been proposed: data remain distributed and scattered among different users, and no single node stores the whole dataset.
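The decentralized training described above is usually realized by aggregating locally trained parameters rather than raw data. A minimal sketch of the standard weighted-averaging (FedAvg-style) aggregation rule, with hypothetical client weights and data sizes:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters, proportional to each
    client's local data size. The server sees only trained weights,
    never the raw data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# three clients with different amounts of local data (illustrative values)
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
global_w = fed_avg(weights, sizes)  # -> [4. 5.]
```

Clients with more local data contribute proportionally more to the global model, which is why protecting each participant's weight in the summation (as the abstract proposes) matters.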
PROPOSED SYSTEM :
• Many studies have built on this incentive mechanism, for example using game theory to explore user habits and preferences, and evaluating and improving the relevance of online search engines.
• FedGRU was proposed for small-scale Federated Learning applications, specifically joint traffic control, where private information is often not shared between organizations.
• Reputation models have been proposed to ensure the reliability and trustworthiness of mobile devices.
• Although these issues are mitigated by various proposed methods, they have not been eliminated completely.
ADVANTAGE :
• Blockchain and federated learning (FL) techniques have been widely used to train neural networks on distributed data.
• Many studies focus on privacy-preserving crowdsourcing, leveraging fog computing or edge computing to improve performance as these paradigms have gained popularity.
• Therefore, we use the MNIST dataset, which has been widely used for testing the performance of IoT systems.
• Our system treats home appliances of the same brand in a family as a unit; a mobile phone periodically collects data from the appliances and trains the machine learning model locally.
• The Laplace mechanism can be used to ensure differential privacy by adding independent zero-mean Laplace noise with scale λ (typically Δf/ε, the query's sensitivity divided by the privacy budget) to each dimension of the output.
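The Laplace mechanism above can be sketched in a few lines. This is a generic illustration, assuming a scalar count query with sensitivity Δf = 1 and an illustrative privacy budget ε = 0.5 (the function name and parameter values are not from the paper):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Add zero-mean Laplace noise with scale = sensitivity / epsilon to
    each coordinate of the output, giving epsilon-differential privacy
    for a query with the given L1-sensitivity."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale, size=np.shape(value))

# count query: adding or removing one record changes the count by at most 1
true_count = 120.0
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Smaller ε means a larger noise scale and stronger privacy; averaged over many runs, the noisy answer remains unbiased around the true count.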