151 research outputs found
Decentralization of a Multi Data Source Distributed Processing System Using a Distributed Hash Table
A distributed processing system (DPS) contains many autonomous nodes, each contributing its own computing power. A DPS is considered a unified logical structure operating in a distributed manner; processing tasks are divided into fragments and assigned to various nodes for processing. This mode of operation involves a great deal of communication. We propose a decentralized approach, based on a distributed hash table, to reduce the communication overhead and remove the server unit, thus avoiding a single point of failure in the system. This paper proposes a mathematical model and algorithms that are implemented in a dedicated experimental system. Using the decentralized approach, this study demonstrates the efficient operation of a decentralized system, which results in reduced energy emission.
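The role the distributed hash table plays can be illustrated with a generic consistent-hashing sketch (not the paper's actual model; the class and node names are hypothetical): every node can compute a task fragment's owner locally, so no central server unit is needed.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Minimal consistent-hash ring: maps task keys to nodes so any
    node can locate the owner of a task fragment without a coordinator."""

    def __init__(self, nodes):
        # Place each node on the ring at the hash of its name.
        self._ring = sorted((self._hash(n), n) for n in nodes)
        self._keys = [k for k, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def owner(self, key: str) -> str:
        # The owner is the first node clockwise from the key's hash.
        i = bisect_right(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[i][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.owner("task-42"))  # deterministic owner, computed locally
```

Because every participant runs the same hash function over the same node list, assignment decisions agree everywhere without any server round-trip.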
Cognitive Congestion Control for Data Portal
Network congestion is one of the most challenging problems in communication networks, leading to queuing delay, packet loss, or the blocking of new connections. Here, a data portal is considered as an application-based network, and a cognitive method is proposed to improve bandwidth sharing and deal with congestion in this kind of network. When the data portal serves climate change data, congestion control is especially important, because scientific climate data is voluminous and there is heavy traffic to and from the data portal from the scientific community, research groups, and general readers.
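The abstract does not specify the cognitive method itself; for context, here is a minimal sketch of the classic additive-increase/multiplicative-decrease (AIMD) rule that adaptive congestion-control schemes typically build on or aim to improve (all parameter values are illustrative, not from the paper):

```python
def aimd_step(cwnd: float, loss: bool, add: float = 1.0, mult: float = 0.5) -> float:
    """One AIMD update of a congestion window: grow additively while the
    network is healthy, shrink multiplicatively on a loss signal."""
    return cwnd * mult if loss else cwnd + add

cwnd = 10.0
for loss in [False, False, True, False]:
    cwnd = aimd_step(cwnd, loss)
print(cwnd)  # 10 -> 11 -> 12 -> 6 -> 7, i.e. 7.0
```

A "cognitive" controller would replace the fixed `add`/`mult` constants with values learned from observed traffic patterns.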
A Cognitive Framework to Secure Smart Cities
The advancement in technology has transformed cyber-physical systems and their interface with the IoT into a more sophisticated and challenging paradigm. As a result, vulnerabilities and potential attacks manifest themselves considerably more than before, forcing researchers to rethink the conventional strategies currently in place to secure such physical systems. This manuscript studies the complex interweaving of sensor networks and physical systems and suggests a foundational innovation in the field. In sharp contrast to existing IDS and IPS solutions, this paper employs a preventive and proactive method to stay ahead of attacks by constantly monitoring network data patterns and identifying imminent threats. Here, by capitalizing on the significant progress in the processing power (e.g., petascale computing) and storage capacity of computer systems, we propose a deep learning approach to predict and identify various security breaches before they occur. The learning process takes place by collecting a large number of files of different types and running tests on them to classify them as benign or malicious. The prediction model obtained in this way can then be used to identify attacks. Our project articulates a new framework for interactions between physical systems and sensor networks, in which malicious packets are repeatedly learned over time while the system continually operates in the presence of imperfect security mechanisms.
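As a simplified illustration of the kind of static features a benign/malicious file classifier might consume (the feature choice here is an assumption for illustration, not the paper's actual pipeline):

```python
import collections
import math

def byte_histogram(data: bytes) -> list:
    """Normalized 256-bin byte-frequency histogram: a simple, fixed-size
    feature vector that can be fed to a learned classifier."""
    counts = collections.Counter(data)
    n = max(len(data), 1)
    return [counts.get(b, 0) / n for b in range(256)]

def entropy(data: bytes) -> float:
    """Shannon entropy of the byte stream in bits per byte; packed or
    encrypted payloads (common in malware) tend toward the maximum of 8."""
    return sum(-p * math.log2(p) for p in byte_histogram(data) if p > 0)

print(entropy(b"AAAA"))            # 0.0 — a single repeated byte
print(entropy(bytes(range(256))))  # 8.0 — maximally mixed bytes
```

In a deep-learning setting, such per-file vectors would form the input layer, with the benign/malicious labels from the collected test files as supervision.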
Digital Watermarking Security
As creative works (e.g. books, films, music, photographs) become increasingly available in digital formats in a highly connected world, it also becomes increasingly difficult to secure intellectual property rights. Digital watermarking is one potential technology to aid intellectual property owners in controlling and tracking the use of their works. This article surveys the state of digital watermarking research and examines the attacks the technology faces and how it fares against them. Digital watermarking is an inherently difficult design problem subject to many constraints, and the technology currently faces an uphill battle to be secure against even relatively simple attacks.
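A minimal least-significant-bit (LSB) sketch illustrates both how embedding works and why naive schemes fall to simple attacks — the mark survives only as long as no one re-quantizes the pixels (this textbook scheme is purely illustrative, not one the survey endorses):

```python
def embed_lsb(pixels, bits):
    """Hide watermark bits in the least-significant bit of each pixel
    value; each pixel changes by at most 1, so the image looks unchanged."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels, n):
    """Recover the first n embedded bits by reading each pixel's LSB."""
    return [p & 1 for p in pixels[:n]]

cover = [120, 121, 122, 123]          # toy 4-pixel grayscale "image"
marked = embed_lsb(cover, [1, 0, 1, 1])
print(extract_lsb(marked, 4))         # [1, 0, 1, 1]
```

A single round of lossy compression or bit-plane truncation destroys such a mark, which is exactly the fragility-versus-robustness tension the survey examines.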
Machine Learning and Radiomic Features to Predict Overall Survival Time for Glioblastoma Patients
Glioblastoma is an aggressive brain tumor with a low survival rate. Understanding tumor behavior by predicting prognosis outcomes is a crucial factor in deciding a proper treatment plan. In this paper, an automatic overall survival time (OST) prediction system for glioblastoma patients is developed on the basis of radiomic features and machine learning (ML). The system predicts prognosis outcomes by classifying a glioblastoma patient into one of three survival groups: short-term, mid-term, and long-term. To develop the prediction system, a medical dataset combining imaging information from magnetic resonance imaging (MRI) with non-imaging information is used. A novel radiomic feature extraction method is proposed, based on the volumetric and location information of brain tumor subregions extracted from MRI scans. The method calculates volumetric features from two brain sub-volumes obtained from the whole brain volume in MRI images using the brain sectional planes (sagittal, coronal, and horizontal). Many experiments are conducted with various ML methods and combinations of feature extraction methods to develop the best OST system, and fusions of radiomic and non-imaging features are examined to improve the accuracy of the prediction system. The best performance was achieved by the neural network with feature fusion.
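The sectional-plane idea can be sketched as follows, assuming a binary tumor mask and an arbitrary choice of splitting axis (a simplified stand-in for the paper's actual feature set; the function name and axis convention are hypothetical):

```python
import numpy as np

def sectional_volume_features(mask, axis=0):
    """Split a binary tumor mask by the mid-plane along `axis` (axis 0
    standing in for the sagittal plane here) and return the tumor volume
    in each half plus their ratio as simple volumetric/location features."""
    mid = mask.shape[axis] // 2
    half_a = mask.take(range(mid), axis=axis).sum()
    half_b = mask.take(range(mid, mask.shape[axis]), axis=axis).sum()
    return int(half_a), int(half_b), half_a / max(half_b, 1)

mask = np.zeros((4, 4, 4), dtype=int)
mask[0:1, :, :] = 1  # hypothetical tumor confined to one half of the brain
print(sectional_volume_features(mask))  # (16, 0, 16.0)
```

Repeating this split for the coronal and horizontal planes yields one small feature vector per plane, which can then be fused with the non-imaging features before classification.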
Hypercube-Based Topologies With Incremental Link Redundancy
Hypercube structures have received a great deal of attention due to the attractive properties inherent in their topology. Parallel algorithms targeted at this topology can be partitioned into many tasks, each of which runs on one node processor. A high degree of performance is achievable by running every task individually and concurrently on each node processor available in the hypercube. Nevertheless, performance can be greatly degraded if the node processors spend much of their time just communicating with one another. The goal in designing hypercubes is, therefore, to achieve a high ratio of computation time to communication time. This dissertation primarily addresses ways to enhance system performance by minimizing the communication time among processors. The need for improving the performance of hypercube networks is clearly explained, and three novel topologies related to hypercubes with improved performance are proposed and analyzed. First, the Bridged Hypercube (BHC) is introduced. It is shown that this design is remarkably more efficient and cost-effective than the standard hypercube due to its low diameter. Basic routing algorithms such as one-to-one routing and broadcasting are developed for the BHC and proven optimal. Shortcomings of the BHC, such as its asymmetry and limited application, are clearly discussed. Second, the Folded Hypercube (FHC), a symmetric network with low diameter and low node degree, is introduced. This topology is shown to support highly efficient communication among the processors. For the FHC, optimal routing algorithms are developed and proven to be remarkably more efficient than those of the conventional hypercube. For both the BHC and the FHC, network parameters such as average distance, message traffic density, and communication delay are derived and comparatively analyzed. Lastly, to enhance the fault tolerance of the hypercube, a new design called the Fault Tolerant Hypercube (FTH) is proposed. The FTH is shown to exhibit graceful degradation in performance in the presence of faults. Probabilistic models based on Markov chains are employed to characterize the fault tolerance of the FTH, and the results are verified by Monte Carlo simulation. The most attractive feature of all the new topologies is the asymptotically zero overhead associated with them. The designs are simple and implementable, and they lend themselves to many parallel processing applications requiring a high degree of performance.
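Routing in the standard hypercube, the baseline these topologies improve on, can be sketched with the classic dimension-ordered (e-cube) algorithm: nodes are binary addresses, neighbors differ in one bit, and each hop corrects one differing bit, so the path length equals the Hamming distance between source and destination.

```python
def ecube_route(src: int, dst: int, dim: int) -> list:
    """Dimension-ordered routing in a dim-dimensional hypercube: flip
    differing address bits from lowest to highest dimension, one per hop."""
    path, node = [src], src
    for i in range(dim):
        if (node ^ dst) & (1 << i):  # bit i still differs from dst
            node ^= 1 << i           # traverse the link along dimension i
            path.append(node)
    return path

print(ecube_route(0b000, 0b101, 3))  # [0, 1, 5] — two hops, Hamming distance 2
```

The BHC and FHC shorten exactly these worst-case paths: their extra (bridge or folding) links let a message cross several dimensions' worth of distance in a single hop, lowering the diameter below `dim`.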
Similarity of climate change data for Antarctica and Nevada
The correlation between temperature and carbon dioxide concentration over the past one hundred years is studied. Separate graphs containing data from Vostok, Antarctica and from the Mojave Desert/mountain west (Nevada region) are presented. Using data obtained from these graphs, an attempt is made to explain the results and investigate the similarity of the results for Antarctica and Nevada. The importance of this study lies in the fact that if the data show the same trend in the two regions, many findings on climate change in Antarctica may readily be validated and employed for Nevada.
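The correlation in question is the standard sample Pearson coefficient; a minimal sketch with hypothetical numbers (illustrative values only, not the Vostok or Nevada data):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length
    series: covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical temperature (deg C) and CO2 (ppm) series for illustration
temps = [13.8, 13.9, 14.1, 14.4, 14.6]
co2 = [300, 315, 340, 370, 400]
print(round(pearson_r(temps, co2), 3))  # 0.998 — strong positive correlation
```

Computing the same coefficient for each region's digitized graph data, and comparing the two values, is one direct way to quantify the "same trend" claim.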