Friend-as-Learner: Socially-Driven Trustworthy and Efficient Wireless Federated Edge Learning
ABSTRACT :
Recently, wireless edge networks have realized intelligent operation and management with edge artificial intelligence (AI) techniques such as federated edge learning. However, the trustworthiness and effective incentive mechanisms of federated edge learning (FEL) have not been fully studied. As a result, the current FEL framework still suffers from untrustworthy or low-quality learning parameters submitted by malicious or inactive learners, which undermines the viability and stability of FEL. To address these challenges, the potential social attributes among edge devices and their users can be exploited, which have not been considered in previous works. In this paper, we propose a novel Social Federated Edge Learning framework (SFEL) over wireless networks, which recruits trustworthy social friends as learning partners. First, we build a social graph model to find like-minded friends, comprehensively considering mutual trust and learning-task similarity. Second, we propose a social-effect-based incentive mechanism that encourages better personal federated learning behaviors under both complete and incomplete information. Finally, we conduct extensive simulations with the Erdos-Renyi random network, the Facebook network, and the classic MNIST/CIFAR-10 datasets. Simulation results demonstrate that our framework realizes trustworthy and efficient federated learning over wireless edge networks and outperforms existing FEL incentive mechanisms that ignore social effects.
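Illustration (not from the paper): a minimal sketch of how a friend-as-learner selection step could combine mutual trust and learning-task similarity over a set of social-graph neighbors. The cosine-similarity proxy for task similarity, the trade-off weight alpha, the threshold, and all identifiers are assumptions made for illustration, not SFEL's actual formulation.

```python
import numpy as np

def task_similarity(label_dist_a, label_dist_b):
    """Cosine similarity between two devices' label distributions,
    used here as a hypothetical proxy for learning-task similarity."""
    a, b = np.asarray(label_dist_a, float), np.asarray(label_dist_b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def friend_score(trust, similarity, alpha=0.5):
    """Weighted combination of mutual trust and task similarity;
    alpha is an assumed trade-off weight, not a value from the paper."""
    return alpha * trust + (1 - alpha) * similarity

def select_partners(candidates, requester_dist, threshold=0.6):
    """Keep social friends whose combined score exceeds a threshold."""
    partners = []
    for dev_id, (trust, label_dist) in candidates.items():
        score = friend_score(trust, task_similarity(requester_dist, label_dist))
        if score >= threshold:
            partners.append((dev_id, score))
    return sorted(partners, key=lambda x: -x[1])

# Example: three candidate friends, each with (mutual trust, label distribution).
candidates = {
    "dev_1": (0.9, [0.5, 0.5, 0.0]),
    "dev_2": (0.4, [0.1, 0.1, 0.8]),
    "dev_3": (0.8, [0.4, 0.4, 0.2]),
}
print(select_partners(candidates, requester_dist=[0.5, 0.4, 0.1]))
```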
EXISTING SYSTEM :
• Existing multiple access technologies such as orthogonal frequency-division multiple access (OFDMA) and code division multiple access (CDMA) are designed purely for rate-driven communication and fail to adapt to the actual learning task.
• The key innovation underpinning learning-driven multiple access is to exploit the insight that the learning task involves computing some aggregating function (e.g., averaging or finding the maximum) of multiple data samples, rather than decoding individual samples as in existing schemes (a minimal aggregation sketch follows this list).
• Based on the traditional approach of communication-computing separation, existing radio resource management (RRM) methods are designed to maximize the efficiency of spectrum utilization by carefully allocating scarce radio resources such as power, frequency band, and access time.
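To make the aggregating-function idea concrete, here is a minimal sketch of server-side weighted averaging of model-parameter vectors, the kind of function a learning-driven access scheme targets instead of decoding each user's samples separately. The sample-count weighting and all names are illustrative assumptions, not a specific scheme from the text.

```python
import numpy as np

def aggregate(updates, weights):
    """Weighted average of model-parameter vectors: an example of an
    aggregating function computed over many users' contributions."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()          # normalize to a convex combination
    return sum(w * np.asarray(u, float) for w, u in zip(weights, updates))

# Example: three users' local parameter vectors, weighted by local sample count.
local_updates = [[0.2, 1.0], [0.4, 0.8], [0.3, 0.9]]
sample_counts = [100, 300, 200]
print(aggregate(local_updates, sample_counts))
```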
DISADVANTAGE :
• We consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place.
• We address the problem of how to efficiently utilize the limited computation and communication resources at the edge for optimal learning performance.
• Related studies on distributed optimization that are applicable to machine learning also exist, where a separate solver is used to solve a local problem.
• The main focus of these studies is the trade-off between communication and optimality, while the complexity of solving the local problem (such as the number of local updates needed) is not studied (a toy sketch of this trade-off follows the list).
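A toy sketch of the communication-vs-optimality trade-off, assuming a least-squares objective and synthetic data: the number of local updates per round (tau) trades local computation against how often the edge nodes must communicate with the server. This is illustrative only, not the cited works' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_steps(w, X, y, tau, lr=0.05):
    """Run `tau` local gradient steps on a least-squares loss; tau is the
    knob behind the communication/optimality trade-off noted above."""
    for _ in range(tau):
        w = w - lr * (2 * X.T @ (X @ w - y) / len(y))
    return w

# Three edge nodes hold disjoint local data; raw data never leaves a node.
nodes = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w_global = np.zeros(3)

for _ in range(10):                              # communication rounds
    local_models = [local_steps(w_global.copy(), X, y, tau=5) for X, y in nodes]
    w_global = np.mean(local_models, axis=0)     # server-side averaging
print(w_global)
```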
PROPOSED SYSTEM :
• This intuition has been captured by a recently proposed technique called over-the-air computation (AirComp).
• By allowing simultaneous transmission, AirComp can dramatically reduce the multiple-access latency by a factor equal to the number of users (e.g., 100 times for 100 users).
• It provides a promising solution for overcoming the communication latency bottleneck in edge learning.
• Two multiple access schemes, namely the conventional OFDMA and the proposed AirComp, are compared; they differ mainly in how the available sub-channels are shared (a simplified comparison sketch follows this list).
• This illustrates the proposed design principle and shows its effectiveness in adapting retransmission to data importance.
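A simplified numerical sketch of the AirComp-vs-OFDMA contrast, assuming perfectly pre-compensated channels and additive Gaussian noise: with AirComp, all K users transmit at once and the receiver recovers the average from a single superposed observation, whereas an orthogonal scheme needs K separate channel uses before averaging. This is a toy model, not the proposed system's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def aircomp_average(local_values, noise_std=0.01):
    """Toy over-the-air computation: simultaneous analog transmissions
    superpose on the channel, so the receiver observes their sum plus
    noise in one channel use. Channel gains are assumed to be perfectly
    pre-compensated, which real AirComp must handle explicitly."""
    superposed = np.sum(local_values, axis=0)          # waveform superposition
    received = superposed + rng.normal(0, noise_std, size=superposed.shape)
    return received / len(local_values)                # average over K users

def ofdma_average(local_values):
    """Baseline: each user gets its own orthogonal resource, so K users
    require K channel uses before the server can average."""
    return np.mean(local_values, axis=0)

K = 100
updates = rng.normal(size=(K, 4))                      # K users' parameter vectors
print("AirComp (1 channel use): ", aircomp_average(updates))
print("OFDMA   (K channel uses):", ofdma_average(updates))
```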
ADVANTAGE :
• The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment.
• The latter have shown very promising performance in recent years for complex tasks such as image classification.
• The limited computation and communication resources at the edge are utilized efficiently to achieve optimal learning performance.
• We evaluate the performance of the proposed control algorithm via extensive experiments using real datasets, both on a hardware prototype and in a simulated environment. The results confirm that our approach provides near-optimal performance for different data distributions, various machine learning models, and system configurations with different numbers of edge nodes.