Federated Continuous Learning With Broad Network Architecture

ABSTRACT :

Federated learning (FL) is a machine-learning setting in which multiple clients collaboratively train a model under the coordination of a central server. The clients' raw data are stored locally, and each client only uploads its trained weights to the server, which mitigates the privacy risks of centralized machine learning. However, most existing FL models focus on one-time learning and do not consider continuous learning. Continuous learning supports learning from streaming data, so it can adapt to environmental changes and provide better real-time performance. In this article, we present a federated continuous learning scheme based on broad learning (FCL-BL) to support efficient and accurate federated continuous learning (FCL). In FCL-BL, we propose a weighted processing strategy to solve the catastrophic forgetting problem, so FCL-BL can handle continuous learning. We then develop a local-independent training solution to support fast and accurate training in FCL-BL. The proposed solution avoids a time-consuming synchronous approach while addressing the inaccurate-training issue rooted in previous asynchronous approaches. Moreover, we introduce a batch-asynchronous approach and the broad learning (BL) technique to guarantee the high efficiency of FCL-BL: the batch-asynchronous approach reduces the number of client-server interaction rounds, and the BL technique supports incremental learning without retraining when newly produced data are learned. Finally, theoretical analysis and experimental results illustrate that FCL-BL is superior to existing FL schemes in terms of efficiency and accuracy for FCL.
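The following is a minimal sketch of the kind of weighted, incremental update described above. The class names, the old_weight down-weighting rule, and the ridge-style broad-learning solve are illustrative assumptions for exposition, not the paper's exact FCL-BL algorithm.

```python
# Sketch only: names, weighting rule, and ridge-style solve are assumptions.
import numpy as np

class BroadClient:
    """One client: keeps sufficient statistics so new batches are absorbed
    incrementally, without retraining on previously seen raw data."""

    def __init__(self, n_features, n_outputs, reg=1e-2):
        self.AtA = np.zeros((n_features, n_features))  # running A^T A
        self.AtY = np.zeros((n_features, n_outputs))   # running A^T Y
        self.reg = reg
        self.n_seen = 0

    def learn_batch(self, A, Y, old_weight=0.5):
        # Weighted processing (assumed form): down-weight accumulated knowledge
        # so earlier data are retained while new data drive the update.
        self.AtA = old_weight * self.AtA + A.T @ A
        self.AtY = old_weight * self.AtY + A.T @ Y
        self.n_seen += len(A)

    def local_weights(self):
        # Ridge-regression-style output weights, in the spirit of broad learning.
        d = self.AtA.shape[0]
        return np.linalg.solve(self.AtA + self.reg * np.eye(d), self.AtY)

def server_aggregate(clients):
    """Server averages client weights, weighted by how much data each client saw."""
    total = sum(c.n_seen for c in clients)
    return sum((c.n_seen / total) * c.local_weights() for c in clients)
```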

EXISTING SYSTEM :

• We provide definitions, architectures, and applications for the federated learning framework, together with a comprehensive survey of existing works on the subject.
• We survey existing works on federated learning and propose definitions, categorizations, and applications for a comprehensive secure federated learning framework.
• Federated transfer learning (FTL) is an important extension of existing federated learning systems because it deals with problems that exceed the scope of existing federated learning algorithms.
• As a novel technology, federated learning has several threads of originality, some of which are rooted in existing fields.

DISADVANTAGE :

• Architectural patterns present reusable solutions to commonly occurring problems within a given context during software architecture design.
• The client registry provides status tracking for all devices, which is essential for identifying problematic nodes (a minimal registry sketch follows this list).
• The central server is burdened by communicating with a massive number of widely distributed client devices in every round.
• Building machine learning models is becoming an engineering discipline in which practitioners take advantage of tried-and-proven methods to address recurring problems.
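Below is a hypothetical client-registry sketch illustrating the status tracking mentioned above; the field names, staleness threshold, and failure rule are assumptions, not a prescribed design.

```python
# Hypothetical registry: fields and thresholds are illustrative assumptions.
import time

class ClientRegistry:
    """Tracks last-seen time and failure counts so the server can flag
    problematic nodes instead of waiting on them every round."""

    def __init__(self, stale_after=300.0, max_failures=3):
        self.stale_after = stale_after
        self.max_failures = max_failures
        self.clients = {}  # client_id -> {"last_seen": float, "failures": int}

    def heartbeat(self, client_id):
        entry = self.clients.setdefault(client_id, {"last_seen": 0.0, "failures": 0})
        entry["last_seen"] = time.time()

    def report_failure(self, client_id):
        entry = self.clients.setdefault(client_id, {"last_seen": 0.0, "failures": 0})
        entry["failures"] += 1

    def problematic(self):
        # A node is problematic if it has gone silent or failed repeatedly.
        now = time.time()
        return [cid for cid, e in self.clients.items()
                if now - e["last_seen"] > self.stale_after
                or e["failures"] >= self.max_failures]
```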

PROPOSED SYSTEM :

• A multi-task-style federated learning system has been proposed to allow multiple sites to complete separate tasks while sharing knowledge and preserving security (a minimal sketch of this shared-knowledge idea appears after this list).
• The proposed multi-task learning model can additionally address high communication costs, stragglers, and fault-tolerance issues.
• Privacy-preserving machine learning algorithms have been proposed for vertically partitioned data, including cooperative statistical analysis, association rule mining, secure linear regression, classification, and gradient descent.
• The federated database concept has been proposed to achieve interoperability across multiple independent databases.
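The following sketch illustrates, under assumed parameter names and a very simplified model structure, how sites can solve separate tasks while sharing knowledge: only a shared representation is exchanged, while each task-specific head and all raw data stay local.

```python
# Illustrative sketch: model structure and names are assumptions.
import numpy as np

def init_client(n_features, hidden, n_outputs, rng):
    """Each site trains its own task head locally; only 'shared' is exchanged."""
    return {"shared": 0.01 * rng.standard_normal((n_features, hidden)),
            "head":   0.01 * rng.standard_normal((hidden, n_outputs))}

def aggregate_shared(clients):
    """Average only the shared representation across sites, so knowledge is
    shared while task-specific heads (and raw data) never leave the site."""
    avg = np.mean([c["shared"] for c in clients], axis=0)
    for c in clients:
        c["shared"] = avg.copy()

# Usage: clients = [init_client(20, 8, 3, np.random.default_rng(i)) for i in range(4)]
#        aggregate_shared(clients)
```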

ADVANTAGE :

• Performance-based client selection can be conducted through local model performance assessment (e.g., the performance of the latest local model or the historical record of local model performance); a simple selection/clustering sketch follows this list.
• Selecting clients with higher local model performance or lower data heterogeneity increases the global model quality.
• A client cluster groups the client devices (i.e., model trainers) based on their similarity in certain characteristics (e.g., available resources, data distribution, features, geolocation) to increase model performance and training efficiency.
• By creating clusters of clients with similar data patterns, the generated global model performs better on a severely non-IID client network, without accessing the local data.
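A minimal sketch of the two ideas above, performance-based selection and similarity-based clustering; the selection score (mean of the last three accuracies) and the use of label distributions as the similarity feature are assumptions chosen for illustration.

```python
# Sketch only: scoring rule and clustering feature are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def select_clients(histories, k):
    """Pick the k clients whose recent local-model accuracy is highest.
    histories: dict of client_id -> list of past local accuracies."""
    scores = {cid: float(np.mean(accs[-3:])) for cid, accs in histories.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def cluster_clients(label_distributions, n_clusters):
    """Group clients by similarity of their label distributions (a proxy for
    non-IID data patterns); a model can then be trained per cluster."""
    ids = list(label_distributions)
    X = np.stack([label_distributions[cid] for cid in ids])
    assignments = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    return {cid: int(a) for cid, a in zip(ids, assignments)}
```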
