Faster, Incentivized, and Efficient Federated Learning: Theory and Applications

  • Additional information
    • Subject:
      2024
    • Collection:
      KiltHub Research from Carnegie Mellon University
    • Abstract:
      Artificial intelligence (AI) is becoming increasingly ubiquitous, with a plethora of applications such as recommendation systems, image/video generation, and chatbots changing modern society. The majority of these AI-based applications rely on centralized machine learning (ML). In centralized ML, a large amount of data is collected at a central server to train a large ML model with hundreds of billions of parameters, using a massive amount of system resources (e.g., CPUs, GPUs). Despite the success of this pipeline for training models in the centralized setting, centralized ML has critical limitations. First, data collection can raise serious privacy concerns, since the desired data is often personal (e.g., medical histories or financial records) and users may opt out of sharing it. Second, the exponentially increasing economic and environmental cost of training colossal models in the centralized setting raises sustainability concerns. These major drawbacks of centralized ML call for a new paradigm that shifts the server-based ML pipeline to the edge, where both data collection and computation happen on-device at the edge client. Moreover, decentralizing ML to the edge can help democratize its benefits by enabling better personalized models for individual users. Federated learning (FL) is a well-known method that achieves such decentralization: clients (e.g., mobile phones, hospitals, banks) locally train the server's ML model(s) on their private data and send only the local gradient updates to the server, which updates its model with the aggregation of these local updates. Clients can also privately fine-tune the server's model further to obtain models personalized to their own data.
      In this thesis, we aim to address the three main challenges that arise in FL but do not exist in the conventional centralized ML setting: i) clients' limited communication and availability, ii) clients' data heterogeneity, and iii) clients' limited and ...
    • Relation:
      https://figshare.com/articles/thesis/Faster_Incentivized_and_Efficient_Federated_Learning_Theory_and_Applications/25922326
    • Identifier:
      10.1184/r1/25922326.v1
    • Rights:
      CC BY 4.0
    • Identifier:
      edsbas.C2027CD1
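
The federated learning loop the abstract describes (clients run local training on private data, the server aggregates their updates) can be sketched as a minimal federated averaging round in plain NumPy. This is an illustrative toy, not the thesis's method: the linear model, client sizes, learning rate, and round counts are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=5):
    """One client's local training: a few gradient steps on squared error."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Synthetic private datasets for three clients (heterogeneous sizes),
# all drawn around the same ground-truth model for this toy example.
true_w = np.array([2.0, -1.0])
clients = []
for n in (20, 30, 50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

# Server loop: broadcast the model, collect locally trained copies,
# and average them weighted by each client's data size (FedAvg-style).
w = np.zeros(2)
for _ in range(20):
    local_models = [local_update(w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    w = np.average(local_models, axis=0, weights=sizes)

print(w)  # approaches true_w = [2, -1]
```

Only the model parameters travel between clients and server; the raw `(X, y)` data never leaves each client, which is the privacy property motivating FL in the abstract.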