A Comprehensive Survey On Client Selections in Federated Learning
Federated Learning (FL) is a rapidly growing field in machine learning that
allows models to be trained across multiple decentralized devices without
centralizing the data. The selection of clients to participate in each
training round is a critical factor in the performance of the overall system.
In this survey, we provide a comprehensive overview of state-of-the-art
client selection techniques in FL, including their strengths and limitations,
as well as the challenges and open issues that need to be addressed. We cover
conventional techniques such as random selection, where all clients or a
random subset of them are used for training. We also cover performance-aware
selection as well as resource-aware selection for resource-constrained and
heterogeneous networks. We further discuss the use of client selection to
enhance model security. Lastly, we discuss open issues and challenges related
to client selection in dynamic, constrained, and heterogeneous networks.
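To make the distinction between the selection families concrete, here is a minimal illustrative sketch, not drawn from any surveyed system: the helper name `select_clients` and its parameters are assumptions. With no scores it performs plain random selection; with per-client scores (e.g. available bandwidth or past accuracy) it becomes a simple resource- or performance-aware variant.

```python
import random

def select_clients(clients, fraction=0.1, scores=None, seed=None):
    """Pick a subset of clients for one FL training round.

    scores=None  -> uniform random selection of a fraction of clients.
    scores given -> sample proportionally to each client's score,
                    a toy performance-/resource-aware selection.
    """
    rng = random.Random(seed)
    k = max(1, int(fraction * len(clients)))
    if scores is None:
        return rng.sample(clients, k)
    # Weighted sampling without replacement via repeated single draws.
    pool = list(clients)
    weights = [scores[c] for c in pool]
    chosen = []
    for _ in range(k):
        c = rng.choices(pool, weights=weights, k=1)[0]
        i = pool.index(c)
        pool.pop(i)
        weights.pop(i)
        chosen.append(c)
    return chosen
```

In practice the scores would come from profiling (battery, link quality, local loss), but the round structure is the same: score, sample, train, aggregate.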
Visual Crowd Analysis: Open Research Problems
Over the last decade, there has been a remarkable surge in interest in
automated crowd monitoring within the computer vision community. Modern
deep-learning approaches have made it possible to develop fully-automated
vision-based crowd-monitoring applications. However, despite the magnitude of
the issue at hand, the significant technological advancements, and the
consistent interest of the research community, there are still numerous
challenges that need to be overcome. In this article, we delve into six major
areas of visual crowd analysis, emphasizing the key developments in each of
these areas. We outline the crucial unresolved issues that must be tackled in
future works, in order to ensure that the field of automated crowd monitoring
continues to progress and thrive. Several surveys related to this topic have
been conducted in the past. Nonetheless, this article thoroughly examines and
presents a more intuitive categorization of works, while also depicting the
latest breakthroughs within the field, incorporating more recent studies
carried out within the last few years in a concise manner. By carefully
choosing prominent works with significant contributions in terms of novelty or
performance gains, this paper presents a more comprehensive exposition of
advancements in the current state-of-the-art.

Comment: Accepted in AI Magazine, published by Wiley Periodicals LLC on behalf
of the Association for the Advancement of Artificial Intelligence.
Location privacy preservation in secure crowdsourcing-based cooperative spectrum sensing
Spectrum sensing is one of the most essential components of cognitive radio since it detects whether the spectrum is available or not. However, spectrum sensing accuracy is often degraded by path loss, interference, and shadowing. Cooperative spectrum sensing (CSS) is one of the proposed solutions to overcome these challenges. It is a key function for dynamic spectrum access that can largely increase reliability in cognitive radio networks: several users cooperate to detect the availability of a wireless channel by exploiting spatial diversity. However, cooperative sensing also faces a series of security threats. In this paper, we focus on two major problems. The first is preserving the location privacy of secondary users. Malicious users can exploit spatial diversity to localize a secondary user by linking his location-dependent sensing report to his physical position, and the existing solutions have a high level of complexity that decreases system performance. The second is the data injection attack, in which malicious CR users may affect the decisions taken by cognitive users by providing false information, introducing spectrum sensing data falsification (SSDF). They can submit false sensing reports containing power measurements much larger (or smaller) than the true value to inflate (or deflate) the final average, in which case the fusion center may falsely determine that the channel is busy (or vacant), which increases the false alarm and miss detection probabilities. In this paper, we propose a novel scheme to overcome these problems: iterative per-cluster malicious detection (IPCMD). It uses applied cryptographic techniques to allow the fusion center (FC) to securely obtain the aggregated result from the various secondary users without learning each individual report. IPCMD combines the aggregated sensing reports with their reputation scores during data fusion.
The proposed scheme is based on a new key generation algorithm that can significantly reduce key management complexity and consequently increase system performance. It thus enables secure cooperative spectrum sensing while improving secondary user location privacy.
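IPCMD's cryptographic machinery is not detailed in this abstract, so the following is only an illustrative sketch of the reputation-weighted fusion idea it describes; the names `fuse_reports` and `update_reputations` and the threshold model are assumptions. It shows how reports from low-reputation users lose leverage over the fused busy/vacant decision, which is the property that blunts SSDF attacks.

```python
def fuse_reports(reports, reputations, threshold):
    """Reputation-weighted average of power reports; True means 'channel busy'.

    reports     -- {user: measured power}
    reputations -- {user: weight in [0, 1]}
    """
    total_w = sum(reputations[u] for u in reports)
    avg = sum(reputations[u] * p for u, p in reports.items()) / total_w
    return avg >= threshold

def update_reputations(reports, reputations, decision, threshold, step=0.1):
    """Reward users whose individual report agrees with the fused decision."""
    for u, p in reports.items():
        agrees = (p >= threshold) == decision
        delta = step if agrees else -step
        reputations[u] = min(1.0, max(0.0, reputations[u] + delta))
    return reputations
```

With three honest users reporting around 10 and one attacker reporting 100, equal reputations let the attacker inflate the average past a threshold of 20; once the attacker's reputation drops, the fused decision flips back to the honest one.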
Adaptive Network Topology for Data Centers
Data centers play an important role in supporting cloud computing services (such as email, social networking, and web search), enterprise computing needs, and infrastructure-based services. Data center networking is a research topic that aims at improving the overall performance of data centers, and it is of high interest and importance to both academia and industry. Several architectures such as FatTree, FiConn, DCell, BCube, and SprintNet have been proposed. However, these topologies try to improve scalability without any concern for the energy that data centers use or the network infrastructure cost, which are critical parameters that impact data center performance. In fact, companies suffer from the huge energy consumption of their data centers and from the network infrastructure cost, which operators see as a key driver for maximizing data center profits. According to industry estimates, the United States data center market reached almost US\$39 billion in 2009, growing from US\$16.2 billion in 2005. Moreover, studies show that the installed base of servers has been increasing 12 percent a year, from 14 million in 2000 to 35 million in 2008. Yet that growth is not keeping up with the demands placed on data centers for computing power and the amount of data they can handle: almost 30 percent of respondents to a 2008 survey of data center managers said their centers would reach their capacity limits in three years or sooner. Infrastructure cost and power consumption are thus first-order design concerns for data center operators; they represent an important fraction of the initial capital investment while not contributing directly to future revenues. The design goals of data center architectures, as seen by operators, are therefore high scalability, low latency, low average path length, and especially low energy consumption and low infrastructure cost (the number of interface cards, switches, and links).
Motivated by these challenges, we propose a new data center architecture, called VacoNet, that combines the advantages of previous architectures while avoiding their limitations. VacoNet is a reliable, high-performance, and scalable data center topology that improves network performance in terms of average path length (APL), network capacity, and network latency. VacoNet can connect more than 12 times the number of nodes in FlatNet without increasing the APL, and it achieves a good network capacity even with a bottleneck effect (greater than 0.3 even for 1000 servers). Furthermore, VacoNet reduces the infrastructure cost by about 50% and decreases power consumption by more than 50,000 watts compared with all the previous architectures. In addition, thanks to the proposed fault-tolerant algorithm, the new architecture performs well even at a failure rate of 0.3: when about one third of the links fail, the connection failure rate is only 15%. By using VacoNet, operators can save up to US\$2 million compared with FlatNet, DCell, BCube, and FatTree. Both theoretical analysis and simulation experiments have been conducted to validate the overall performance of the proposed architecture.
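Average path length, one of the metrics above, can be checked for any candidate topology with a plain breadth-first search. This is a generic sketch, not VacoNet-specific code; the adjacency-list representation and the function name are illustrative.

```python
from collections import deque

def average_path_length(adj):
    """Mean shortest-path hop count over all ordered node pairs.

    adj -- {node: list of neighbor nodes}, assumed connected and undirected.
    """
    nodes = list(adj)
    total = pairs = 0
    for src in nodes:
        # BFS from src gives hop distances to every other node.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                total += dist[dst]
                pairs += 1
    return total / pairs
```

For example, a 4-node ring has APL 4/3, while a fully connected 4-node graph has APL 1; comparing such values across topologies is how claims like "12x the nodes without increasing the APL" are evaluated.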
A Novel UAV-Aided Network Architecture Using Wi-Fi Direct
The use of unmanned aerial vehicles (UAVs) in future wireless networks is gaining attention due to their quick deployment without requiring existing infrastructure. Earlier studies on UAV-aided communication consider generic scenarios, and very few studies evaluate UAV-aided communication in practical networks. The existing studies also have several limitations, and hence an extensive evaluation of the benefits of UAV communication in practical networks is needed. In this paper, we propose a UAV-aided Wi-Fi Direct network architecture. In the proposed architecture, a UAV equipped with a Wi-Fi Direct group owner (GO) device, the so-called Soft-AP, is deployed in the network to serve a set of Wi-Fi stations. We propose a simple yet efficient algorithm for the optimal placement of the UAV. The proposed algorithm dynamically places the UAV in the network to reduce the distance between the GO and client devices. The expected benefits of the proposed scheme are maintaining the connectivity of client devices, increasing the overall network throughput, and improving energy efficiency. As a proof of concept, realistic simulations are performed in the NS-3 network simulator to validate the claimed benefits of the proposed scheme. The simulation results report major improvements of 23% in client association, 54% in network throughput, and 33% in energy consumption using a single UAV relative to a stationary or randomly moving GO. Further improvements are achieved by increasing the number of UAVs in the network. To the best of our knowledge, no prior work exists on the evaluation of UAV-aided Wi-Fi Direct networks.

This work was supported by the NPRP through the Qatar National Research Fund (a member of Qatar Foundation) under Grant NPRP 8-627-2-260.
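The paper's placement algorithm is not reproduced in this abstract; the following is only a minimal sketch of the general idea of dynamically moving a GO toward its clients, using a centroid target (which minimizes the mean squared GO-client distance) and a bounded per-update step. The helper names `place_uav` and `step_toward` are hypothetical.

```python
def place_uav(clients):
    """Centroid of 2-D client positions: target that minimizes mean
    squared GO-client distance."""
    n = len(clients)
    return (sum(x for x, _ in clients) / n,
            sum(y for _, y in clients) / n)

def step_toward(current, target, max_step):
    """Move the UAV from `current` at most `max_step` toward `target`,
    modelling a bounded per-update relocation."""
    dx, dy = target[0] - current[0], target[1] - current[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= max_step:
        return target
    return (current[0] + dx / dist * max_step,
            current[1] + dy / dist * max_step)
```

A real controller would recompute the target as clients move and would also account for altitude, coverage radius, and link quality rather than geometric distance alone.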