Our workshop aims to develop secure, privacy-preserving, and fairness-aware techniques that span both the optimization and learning domains.

Topic and Content

Optimization and machine learning problems are pervasive in economic, scientific, and engineering applications. While significant advances have been made in both fields, traditional approaches often assume that all resources and data for a task are centralized on a single device. Unfortunately, this assumption is violated in many applications, as personal data is increasingly stored on edge devices whose computational power continues to grow. In recent years, federated learning (FL) has become a popular machine learning paradigm that can leverage distributed data without leaking sensitive information. Similarly, federated optimization techniques are being developed to solve complex optimization problems using distributed data and computational resources. Both approaches aim to harness collective intelligence while preserving individual privacy.

Furthermore, jointly addressing optimization and learning tasks across multiple edge devices with distributed data raises concerns about data security, privacy protection, and fairness. In both federated learning and data-driven optimization, outcomes can be affected by data or algorithmic biases, potentially producing unfair results. When the outcomes of these federated processes are tied to real-world rewards (e.g., financial gains or resource allocation), participants may be reluctant to collaborate if they perceive a risk of receiving disproportionately smaller benefits than others. As a result, it is crucial to develop new privacy-preserving and fairness-aware optimization and learning paradigms that leverage the power of distributed computing and storage.

The topics of this workshop include, but are not limited to, the following:

• Privacy-preserving Bayesian optimization and distributed optimization
• Privacy-preserving evolutionary algorithms
• Secure federated data-driven optimization
• Fairness-aware Bayesian optimization and data-driven optimization
• Fairness-aware federated optimization
• Fairness-aware multi-objective machine learning and optimization
• Client selection in large-scale cross-device FL
• Data valuation for FL
• Federated machine unlearning and transfer learning
• Malicious attacks and defense in FL
• Optimization strategies in large-scale FL
• Privacy-preserving techniques in FL
• Scalability and reliability of FL systems
• Algorithms for training and fine-tuning large language models in FL
• Algorithms for training foundation models in FL

Paper Submission

The workshop plans to call for paper submissions and expects around 25 posters, of which about 5 will also be selected for oral presentation. The tentative program interleaves invited speaker sessions and oral presentations, with poster sessions scheduled in the middle of the program. The submission procedure, deadlines, and paper format follow the same guidelines as the IEEE CAI'2025 main conference. Submissions must be made via the IEEE CAI'2025 online system.

Submission Format: Submitted papers (.pdf format) must follow the IEEE CAI article author instructions. The workshop considers two types of submissions: (1) full papers [6 pages] and (2) short papers [2 pages], including figures, tables, and references.

Submission Due: 15 January 2025, AoE