
NeurIPS 2024, the Thirty-eighth Annual Conference on Neural Information Processing Systems, will be held at the Vancouver Convention Center from Monday, Dec 9 through Sunday, Dec 15. Monday is an industry expo.


Registration

  • Pricing » Registration
  • 2024 Registration Cancellation Policy »
  • Certificate of Attendance

Our Hotel Reservation page is currently under construction and will be released shortly. NeurIPS has contracted hotel guest rooms for the conference at group pricing, and reservations at these rates can be made only through this page. Please do not make room reservations through any other channel, as doing so impedes our ability to put on the best conference for you. We thank you for helping us protect the NeurIPS conference.

Announcements

  • The Call for High School Projects has been released
  • The Call for Papers has been released
  • See the Visa Information page for changes to the visa process for 2024.

Latest NeurIPS Blog Entries [ All Entries ]

Important Dates

If you have questions about supporting the conference, please contact us.

View NeurIPS 2024 exhibitors » Become a 2024 Exhibitor » Exhibitor Info »

Organizing Committee

General Chair, Program Chair, Workshop Chair, Workshop Chair Assistant, Tutorial Chair, Competition Chair, Data and Benchmark Chair, Diversity, Inclusion and Accessibility Chair, Affinity Chair, Ethics Review Chair, Communication Chair, Social Chair, Journal Chair, Creative AI Chair, Workflow Manager, Logistics and IT

Mission Statement

The Neural Information Processing Systems Foundation is a non-profit corporation whose purpose is to foster the exchange of research advances in Artificial Intelligence and Machine Learning, principally by hosting an annual interdisciplinary academic conference with the highest ethical standards for a diverse and inclusive community.

About the Conference

The conference was founded in 1987 and is now a multi-track interdisciplinary annual meeting that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. Along with the conference is a professional exposition focusing on machine learning in practice, a series of tutorials, and topical workshops that provide a less formal setting for the exchange of ideas.

More about the Neural Information Processing Systems foundation »


Computer Science > Machine Learning

Title: KoReA-SFL: Knowledge Replay-based Split Federated Learning Against Catastrophic Forgetting

Abstract: Although Split Federated Learning (SFL) is good at enabling knowledge sharing among resource-constrained clients, it suffers from low training accuracy due to the neglect of data heterogeneity and catastrophic forgetting. To address this issue, we propose a novel SFL approach named KoReA-SFL, which adopts a multi-model aggregation mechanism to alleviate gradient divergence caused by heterogeneous data and a knowledge replay strategy to deal with catastrophic forgetting. Specifically, in KoReA-SFL, cloud servers (i.e., fed server and main server) maintain multiple branch model portions rather than a global portion for local training, along with an aggregated master-model portion for knowledge sharing among branch portions. To avoid catastrophic forgetting, the main server of KoReA-SFL selects multiple assistant devices for knowledge replay according to the training data distribution of each server-side branch-model portion. Experimental results obtained from non-IID and IID scenarios demonstrate that KoReA-SFL significantly outperforms conventional SFL methods (by up to 23.25% test accuracy improvement).
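The branch-and-master arrangement the abstract describes can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the parameter names, the uniform branch averaging, and the `mix` blending factor are all assumptions made for the sketch.

```python
# Rough sketch of the multi-branch aggregation idea: the server keeps
# several branch model portions, periodically fuses them into a master
# portion, and blends the master back into each branch so knowledge is
# shared among branches. All names are illustrative; this is not the
# KoReA-SFL reference code.

def aggregate_branches(branches):
    """branches: list of dicts mapping parameter name -> float."""
    k = len(branches)
    return {name: sum(b[name] for b in branches) / k for name in branches[0]}

def redistribute(master, branches, mix=0.5):
    """Blend each branch toward the master portion (mix = share of master)."""
    return [
        {name: mix * master[name] + (1 - mix) * b[name] for name in b}
        for b in branches
    ]

branches = [{"fc": 1.0}, {"fc": 3.0}]
master = aggregate_branches(branches)      # {"fc": 2.0}
branches = redistribute(master, branches)  # [{"fc": 1.5}, {"fc": 2.5}]
```

In the real method, aggregation and the replay of stored knowledge are driven by each branch's training-data distribution; here the fusion step is reduced to a plain average just to show the information flow.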





COMMENTS

  1. FedFa: A Fully Asynchronous Training Paradigm for Federated Learning

Federated learning has been identified as an efficient decentralized training paradigm for scaling machine learning model training to a large number of devices while guaranteeing the data privacy of the trainers. FedAvg has become a foundational parameter-update strategy for federated learning and has shown promise in mitigating the effect of heterogeneous data across clients and ...
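The FedAvg update mentioned in this snippet is a weighted average of client parameters, with each client weighted by its local dataset size. A minimal sketch follows; the function and variable names are illustrative, not taken from any particular FL library:

```python
# Minimal FedAvg sketch: the server averages client model parameters,
# weighting each client by its number of local training samples.

def fed_avg(client_params, client_sizes):
    """client_params: list of dicts mapping layer name -> list of floats.
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    avg = {}
    for layer in client_params[0]:
        dim = len(client_params[0][layer])
        avg[layer] = [
            sum(p[layer][i] * n / total
                for p, n in zip(client_params, client_sizes))
            for i in range(dim)
        ]
    return avg

# Two clients; the second holds three times as much data, so it
# contributes three quarters of the average.
clients = [{"w": [0.0, 0.0]}, {"w": [4.0, 8.0]}]
global_model = fed_avg(clients, client_sizes=[1, 3])
print(global_model["w"])  # -> [3.0, 6.0]
```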

  2. [2404.12710] FedMeS: Personalized Federated Continual Learning

    We focus on the problem of Personalized Federated Continual Learning (PFCL): a group of distributed clients, each with a sequence of local tasks on arbitrary data distributions, collaborate through a central server to train a personalized model at each client, with the model expected to achieve good performance on all local tasks. We propose a novel PFCL framework called Federated Memory ...

  3. Fedcmp: Byzantine-Robust Federated Learning Through Clustering ...

    Federated learning (FL) is a type of distributed machine learning that allows multiple clients to collaboratively train a machine learning model without uploading their own private data. However, the distributed nature of FL makes it vulnerable to Byzantine attacks.
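A common way to blunt such Byzantine attacks is to replace plain averaging with a robust aggregate. The sketch below uses a coordinate-wise median, a simple standard baseline; it is illustrative only and is not the clustering mechanism this particular paper proposes:

```python
# Byzantine-robust aggregation baseline: coordinate-wise median of
# client updates. A poisoned update that pushes one coordinate to an
# extreme value cannot move the median past the honest majority.
from statistics import median

def robust_aggregate(updates):
    """updates: list of equal-length parameter vectors (lists of floats)."""
    return [median(u[i] for u in updates) for i in range(len(updates[0]))]

honest = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]]
poisoned = [[100.0, -100.0]]                # Byzantine client
print(robust_aggregate(honest + poisoned))  # stays near the honest values
```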

  4. A Review of Federated Learning in Agriculture

This study reviews FL applications that address various agricultural problems, comparing the types of data partitioning, types of FL, architectures, levels of federation, and the aggregation algorithms used across the reviewed approaches and applications of FL in agriculture. Federated learning (FL), with the aim of training machine learning models using data and computational ...

  5. Introduction to the ACSAC'22 Special Issue

For this special issue we invited authors of papers that appeared at ACSAC 2022 and that successfully passed an evaluation of their software and/or data artifacts to submit an extended version of their papers. This selection criterion ensured that the research has a high potential for being deployed in real-world environments and used to ...

  6. Machine learning

    Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance. Machine learning approaches have been applied ...

  7. Wavelet-based harmonization of local and global model shifts in

Federated Learning (FL) is a promising machine learning approach for developing a data-driven global model from collaborative local models across multiple institutions. However, the heterogeneity of medical imaging data is one of the challenges within FL. This heterogeneity is caused by variation in imaging scanner protocols across institutions, which may result in weight shift ...

  8. [2404.12850] CaBaFL: Asynchronous Federated Learning via Hierarchical

    Federated Learning (FL) as a promising distributed machine learning paradigm has been widely adopted in Artificial Intelligence of Things (AIoT) applications. However, the efficiency and inference capability of FL is seriously limited due to the presence of stragglers and data imbalance across massive AIoT devices, respectively. To address the above challenges, we present a novel asynchronous ...

  9. SAFe‐Health: Guarding federated learning‐driven smart healthcare with

    Federated learning (FL) serves as a decentralized training framework for machine learning (ML) models, preserving data privacy in critical domains such as smart healthcare. However, it has been found that attackers can exploit this decentralized learning framework to perform data and model poisoning attacks, specifically in FL-driven smart ...

  10. List of important publications in computer science

Machine learning: An Inductive Inference Machine. Ray Solomonoff; IRE Convention Record, Section on Information Theory, Part 2, pp. 56-62, 1957 (a longer version of this, a privately circulated report from 1956, is online). Description: the first paper written on machine learning; emphasized the importance of training sequences, and the use of ...

  11. Using machine learning for security issues in cognitive IoT

    Cognitive learning is progressively prospering in the field of Internet of Things (IoT). With the advancement in IoT, data generation rate has also increased, whereas issues like performance, attacks on the data, security of the data, and inadequate data resources are yet to be resolved. Recent studies are mostly focusing on the security of the data which can be handled by machine learning.

  12. 2024 Conference

    The Neural Information Processing Systems Foundation is a non-profit corporation whose purpose is to foster the exchange of research advances in Artificial Intelligence and Machine Learning, principally by hosting an annual interdisciplinary academic conference with the highest ethical standards for a diverse and inclusive community.

  13. FedPFT: Federated Proxy Fine-Tuning of Foundation Models

    In this paper, we propose Federated Proxy Fine-Tuning (FedPFT), a novel method enhancing FMs adaptation in downstream tasks through FL by two key modules. First, the sub-FM construction module employs a layer-wise compression approach, facilitating comprehensive FM fine-tuning across all layers by emphasizing those crucial neurons. Second, the ...

  14. Has machine learning defeated trend following strategies in the ...

    Our findings indicate that machine learning models, particularly MLP, significantly outperformed traditional strategies, achieving higher annualized excess returns and Sharpe ratios. Machine learning's ability to capture complex patterns and leverage both volume and price data proved crucial, especially in managing higher frequency trading that ...

  15. PDF Abstract

Federated learning (FL) is a rapidly growing research field in machine learning. However, existing FL libraries cannot adequately support diverse algorithmic development; inconsistent dataset and model usage makes fair algorithm comparison challenging. In this work, we introduce FedML, an open research library and benchmark ...

  16. FedAuxHMTL: Federated Auxiliary Hard-Parameter Sharing Multi-Task

    This paper introduces a new framework for federated auxiliary hard-parameter sharing multi-task learning, namely, FedAuxHMTL. The introduced framework incorporates model parameter exchanges between edge server and base stations, enabling base stations from distributed areas to participate in the FedAuxHMTL process and enhance the learning ...

  17. KoReA-SFL: Knowledge Replay-based Split Federated Learning Against

    Although Split Federated Learning (SFL) is good at enabling knowledge sharing among resource-constrained clients, it suffers from the problem of low training accuracy due to the neglect of data heterogeneity and catastrophic forgetting. To address this issue, we propose a novel SFL approach named KoReA-SFL, which adopts a multi-model aggregation mechanism to alleviate gradient divergence ...