37 research outputs found

    Fault Tolerant Network Constructors

    In this work, we consider adversarial crash faults of nodes in the network constructors model [Michail and Spirakis, 2016]. We first show that, without further assumptions, the class of graph languages that can be (stably) constructed under crash faults is non-empty but small. In particular, if an unbounded number of crash faults may occur, we prove that (i) the only constructible graph language is that of spanning cliques and (ii) a strong impossibility result holds even if the size of the graphs that the protocol outputs in populations of size n need only grow with n (the remaining nodes being waste). When there is a finite upper bound f on the number of faults, we show that it is impossible to construct any non-hereditary graph language. On the positive side, by relaxing our requirements we prove that: (i) permitting linear waste enables the construction, on n/(2f) - f nodes, of any graph language that is constructible in the fault-free case; (ii) partial constructibility (i.e., not having to generate all graphs in the language) allows the construction of a large class of graph languages. We then extend the original model with a minimal form of fault notifications. Our main result here is a fault-tolerant universal constructor: we develop a fault-tolerant protocol for the spanning line and use it to simulate a linear-space Turing machine M. This allows a fault-tolerant construction of any graph accepted by M in linear space, with waste min{n/2 + f(n), n}, where f(n) is the number of faults in the execution. We then prove that increasing the permissible waste to min{2n/3 + f(n), n} allows the construction of graphs accepted by an O(n^2)-space Turing machine, which is asymptotically the maximum simulation space that we can hope for in this model. Finally, we show that logarithmic local memories can be exploited for a no-waste fault-tolerant simulation of any such protocol.
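The spanning-clique result admits a compact illustration. The following is a minimal sketch (not the paper's protocol; the function name and scheduler are illustrative assumptions) of why unbounded crashes leave at most a clique on the surviving nodes: under a fair scheduler every pair of alive nodes eventually interacts and activates its shared edge, while crashed nodes never interact again and become waste.

```python
import itertools

def construct_spanning_clique(nodes, crashed=frozenset()):
    """Toy illustration: the rule 'any interacting alive pair activates
    its edge' stabilizes to a clique on the surviving nodes; crashed
    nodes drop out of the population and become waste."""
    alive = [v for v in nodes if v not in crashed]
    edges = set()
    # A fair scheduler eventually selects every pair, so we enumerate them.
    for u, v in itertools.combinations(alive, 2):
        edges.add((u, v))
    return edges

edges = construct_spanning_clique(range(6), crashed={4})
# the 5 surviving nodes end up pairwise connected: C(5, 2) = 10 edges
```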

    A REAL-TIME TRAFFIC CONDITION ASSESSMENT AND PREDICTION FRAMEWORK USING VEHICLE-INFRASTRUCTURE INTEGRATION (VII) WITH COMPUTATIONAL INTELLIGENCE

    This research developed a real-time traffic condition assessment and prediction framework using Vehicle-Infrastructure Integration (VII) with computational intelligence to improve the existing traffic surveillance system. Because a field experiment of such a system would be prohibitively expensive and complex, this study adopted state-of-the-art simulation tools as an efficient alternative. This work developed an integrated traffic and communication simulation platform to facilitate the design and evaluation of a wide range of online traffic surveillance and management systems in both the traffic and communication domains. Using the integrated simulator, the author evaluated the performance of different combinations of communication media and architectures. This evaluation led to the development of a hybrid VII framework with a hierarchical architecture, which is expected to eliminate single points of failure, enhance scalability, and ease the integration of control functions for traffic condition assessment and prediction. In the proposed VII framework, the vehicle on-board equipment and roadside units (RSUs) work collaboratively, based on an intelligent paradigm known as the 'Support Vector Machine (SVM),' to determine the occurrence and characteristics of an incident from the kinetics data generated by vehicles. In addition to incident detection, this research also integrated the computational intelligence paradigm called 'Support Vector Regression (SVR)' within the hybrid VII framework to improve travel time prediction and to support online learning functions that improve its performance over time. Two simulation models that fully implemented the functionalities of real-time traffic surveillance were developed on calibrated and validated simulation networks for study sites in Greenville and Spartanburg, South Carolina. The simulation models' encouraging performance on traffic condition assessment and prediction justifies further research on field experiments of such a system to address various research issues in the areas covered by this work, such as the availability and accuracy of vehicle kinetic and maneuver data, the reliability of wireless communication, and the maintenance of RSUs and wireless repeaters. This research provides a reliable alternative to traditional traffic sensors for assessing and predicting the condition of the transportation system. The integrated simulation methodology and open-source software provide a tool for the design and evaluation of real-time traffic surveillance and management systems. Additionally, the developed VII simulation models will be made available for use by future researchers and designers of other similar VII systems. Future implementation of the research in the private and public sectors will result in new VII-related equipment in vehicles, greater control of traffic loading, faster incident detection, improved safety, mitigated congestion, and reduced emissions and fuel consumption.
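As a rough illustration of the SVM component, the sketch below trains a linear SVM on toy vehicle-kinetics features using a Pegasos-style sub-gradient method. The feature names, data values, and function names are hypothetical stand-ins, not the dissertation's actual VII pipeline.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM (hinge loss)."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):  # shuffled pass over the data
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [(1.0 - eta * lam) * wj for wj in w]  # shrink (regularization)
            if margin < 1:  # margin violated: step toward this example
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# hypothetical per-segment kinetics features: [normalized speed drop, stop density]
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, -1, -1]  # +1 = incident, -1 = normal traffic
w, b = train_linear_svm(X, y)
```

In the framework described above, such a classifier would run at the RSUs over features aggregated from vehicle-reported kinetics.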

    WIRELESS NETWORK COCAST: COOPERATIVE COMMUNICATIONS WITH SPACE-TIME NETWORK CODING

    Traditional cooperative communications can greatly improve communication performance. However, transmissions from multiple relay nodes are challenging in practice. Sequential transmissions using time-division multiple access (TDMA) cause large transmission delay, while simultaneous transmissions from two or more nodes using frequency-division multiple access (FDMA), code-division multiple access (CDMA), or distributed space-time codes suffer from imperfect frequency and timing synchronization due to the asynchronous nature of cooperation. In this dissertation, we propose a novel concept of wireless network cocast (WNC) and develop its associated space-time network codes (STNCs) to overcome the aforementioned issues. In WNC networks, each node is allocated a time slot for its transmission, and thus the issues of imperfect synchronization are eliminated. To reduce the large transmission delay, each relay node forms a unique signal, a combination of the overheard information, and transmits it to the intended destination. The combining functions at the relay nodes form an STNC that ensures full spatial diversity for the transmitted information, as in traditional cooperative communications. Various traditional combining techniques are utilized to design the STNCs, including FDMA-like and CDMA-like techniques and transform-based techniques using Hadamard and Vandermonde matrices. A major distinction, however, is that the combination of information from different sources happens within a relay node rather than over the air, as in traditional cooperative communications. We consider a general case of multiuser relay wireless networks, where user nodes transmit and receive their information to and from a common base node with the assistance of relay nodes. We then apply the STNCs to multiuser cooperative networks, in which the user nodes are also relay nodes helping each other in their transmissions. Since the cooperative nodes are distributed around the network, the node locations can be an important aspect of designing an STNC. Therefore, we propose a location-aware WNC scheme to reduce the aggregate transmit power and achieve an even power distribution among the user nodes in the network. WNC networks and their associated STNCs provide spatial diversity that dramatically reduces the required transmit power. However, due to the additional power consumed in receiving and retransmitting each other's information, cooperation is not energy-efficient for all nodes and WNC networks. Therefore, we first examine the power consumption in WNC networks. We then offer a TDMA-based merge process, based on coalition formation games, to orderly and efficiently form cooperative groups in WNC networks. The proposed merge process substantially reduces the network power consumption and improves the network lifetime.
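A minimal sketch of the Hadamard-based combining idea (a hypothetical simplification of the STNCs described above, ignoring channel noise, fading, and modulation): each relay transmits a distinct row-weighted sum of the overheard symbols in its own time slot, and the destination recovers the symbols by exploiting the orthogonality of the Hadamard matrix.

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def relay_combine(symbols, H):
    # relay k transmits, in its own slot, its row-weighted sum of overheard symbols
    return [sum(h * s for h, s in zip(row, symbols)) for row in H]

def destination_decode(received, H):
    # columns of H are orthogonal (H^T H = n I), so decoding is a correlation
    n = len(H)
    return [sum(H[k][j] * received[k] for k in range(n)) / n
            for j in range(len(H[0]))]

symbols = [3.0, -1.0, 2.0, 5.0]   # one symbol per source node
H = hadamard(4)
rx = relay_combine(symbols, H)    # noiseless channel assumed
decoded = destination_decode(rx, H)
# decoded recovers the original symbols exactly
```

The combining happens inside each relay, matching the distinction drawn above from over-the-air combining.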

    Social Media and Public Discourse Participation in Restrictive Environments

    This dissertation investigates citizens' use of social media to participate in public discourse (i.e., to access, share, and comment on socio-political content) in restrictive environments: societies ruled by a hegemonic government in which users face economic and infrastructure barriers to using digital technologies. Theoretical propositions are built inductively from an interpretive case study of how Cuban citizens use Twitter to participate in socio-political conversations. The case study resulted in the identification of nine affordances (i.e., action potentials) for participating in public discourse that Cubans perceive on Twitter. The findings also showed that the identified affordances enabled Cubans to achieve citizen goals: positive outcomes that made them more effective in counteracting the government's hegemonic rule. The case study also identified six obstacle-circumvention use strategies that Cubans apply to realize Twitter's affordances, along with the conditions informing these strategies. The case findings were abstracted into a conceptual framework that explains social media-enabled participation in public discourse as a mechanism of empowerment in restrictive environments. One research contribution is the proposition that social media empowers citizens in restrictive spaces by allowing them to take, in the virtual world, actions related to participating in socio-political conversations that they cannot take in offline settings. Moreover, this work argues that social media empowers citizens in restrictive environments because it increases their self-efficacy and motivation to counteract the government, as well as the knowledge of and access to valuable resources needed to be more effective in pursuing this goal. Another contribution is the insight that media use in restrictive environments is an involved process requiring users to devise optimization strategies, which usually involve supportive technologies in addition to the social media app. These use strategies are informed by limiting societal, individual user-level, and circumstantial conditions. One of this work's practical contributions is offering pro-democracy advocates in restrictive environments a clearer understanding of the effects of using social media. This dissertation reaffirms that social media-mediated participation in public discourse empowers citizens because it provides the emotional fuel and the knowledge that they need to engage in the tiring battle of pushing back against the government's domination.

    Using Malware Analysis to Evaluate Botnet Resilience

    Bos, H.J. [Promotor]; Steen, M.R. van [Promotor]

    Learning deep embeddings by learning to rank

    We study the problem of embedding high-dimensional visual data into low-dimensional vector representations. This is an important component in many computer vision applications involving nearest neighbor retrieval, as embedding techniques not only perform dimensionality reduction, but can also capture task-specific semantic similarities. In this thesis, we use deep neural networks to learn vector embeddings, and develop a gradient-based optimization framework that is capable of optimizing ranking-based retrieval performance metrics, such as the widely used Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG). Our framework is applied in three applications. First, we study Supervised Hashing, which is concerned with learning compact binary vector embeddings for fast retrieval, and propose two novel solutions. The first solution optimizes Mutual Information as a surrogate ranking objective, while the other directly optimizes AP and NDCG, based on the discovery of their closed-form expressions for discrete Hamming distances. These optimization problems are NP-hard, therefore we derive their continuous relaxations to enable gradient-based optimization with neural networks. Our solutions establish the state-of-the-art on several image retrieval benchmarks. Next, we learn deep neural networks to extract Local Feature Descriptors from image patches. Local features are used universally in low-level computer vision tasks that involve sparse feature matching, such as image registration and 3D reconstruction, and their matching is a nearest neighbor retrieval problem. We leverage our AP optimization technique to learn both binary and real-valued descriptors for local image patches. Compared to competing approaches, our solution eliminates complex heuristics, and performs more accurately in the tasks of patch verification, patch retrieval, and image matching. 
Lastly, we tackle Deep Metric Learning, the general problem of learning real-valued vector embeddings using deep neural networks. We propose a learning-to-rank solution that optimizes a novel quantization-based approximation of AP. For downstream tasks such as retrieval and clustering, we demonstrate promising results on standard benchmarks, especially in the few-shot learning scenario, where the number of labeled examples per class is limited.
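As an illustration of the retrieval metric being optimized, the sketch below computes exact AP for a Hamming-distance ranking over binary codes. The codes, labels, and helper names are hypothetical; the thesis optimizes differentiable relaxations of this discrete quantity rather than this form directly.

```python
def hamming(a, b):
    # number of differing bits between two equal-length binary codes
    return sum(x != y for x, y in zip(a, b))

def average_precision(query, codes, labels, query_label):
    """AP of a ranking by Hamming distance to the query; ties broken by index."""
    ranked = sorted(range(len(codes)), key=lambda i: (hamming(query, codes[i]), i))
    hits, precisions = 0, []
    for rank, i in enumerate(ranked, start=1):
        if labels[i] == query_label:       # relevant item retrieved
            hits += 1
            precisions.append(hits / rank)  # precision at this rank
    return sum(precisions) / hits if hits else 0.0

codes = [(0, 0, 0, 0), (0, 0, 0, 1), (1, 1, 1, 1), (1, 1, 1, 0)]
labels = ["a", "a", "b", "b"]
ap = average_precision((0, 0, 0, 0), codes, labels, "a")
# both relevant items rank 1st and 2nd, so AP = 1.0
```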

    7. GI/ITG KuVS Fachgespräch Drahtlose Sensornetze

    This proceedings volume collects the contributions to the Fachgespräch Drahtlose Sensornetze (expert meeting on wireless sensor networks) 2008. The goal of this meeting is to give researchers in this field the opportunity for an informal exchange; participants from industrial research are always welcome as well, and are again taking part this year. The Fachgespräch is a deliberately informal event of the GI/ITG special interest group "Kommunikation und Verteilte Systeme" (www.kuvs.de). It is expressly not yet another conference, with its large overhead and the expectation of presenting finished and preferably "watertight" results; rather, it explicitly also serves to discuss with newcomers still searching for their topic, and to find out where the challenges for future research actually lie. The Fachgespräch Drahtlose Sensornetze 2008 takes place in Berlin, on the premises of Freie Universität Berlin, in cooperation with ScatterWeb GmbH. This, too, is a novelty, and it shows that the Fachgespräch is clearly more than just a pleasant get-together under a common motto. For organizing the setting and the evening event, thanks are due to the two members of the organizing committee, Kirsten Terfloth and Georg Wittenburg, as well as to Stefanie Bahe, who took on the editorial supervision of the proceedings, to many other members of the AG Technische Informatik at FU Berlin, and of course to its head, Prof. Jochen Schiller.

    Computational Proteomics Using Network-Based Strategies

    This thesis examines the productive application of networks to proteomics, with a specific biological focus on liver cancer. Contemporary (shotgun) proteomics is plagued by coverage and consistency issues, which can be resolved via network-based approaches. The application of three classes of network-based approaches is examined: a traditional cluster-based approach termed the Proteomics Expansion Pipeline (PEP), a generalization of PEP termed Maxlink, and a feature-based approach termed Proteomics Signature Profiling (PSP). PEP is an improvement on prevailing cluster-based approaches. It uses a state-of-the-art cluster identification algorithm as well as network-cleaning approaches to identify the critical network regions indicated by the liver cancer data set. The top PARP1-associated cluster was identified and independently validated. Maxlink allows the identification of undetected proteins based on their number of links to identified differential proteins. It is more sensitive than PEP due to its more relaxed requirements. Here, the novel roles of ARRB1/2 and ACTB are identified and discussed in the context of liver cancer. Both PEP and Maxlink are unable to deal with consistency issues; PSP is the first method able to deal with both, and is termed feature-based since the network-based clusters it uses are predicted independently of the data. It is also capable of using real complexes or predicted pathway subnets. By combining pathways and complexes, a novel basis of liver cancer progression was identified, implicating nucleotide pool imbalance aggravated by mutations of key DNA repair complexes. Finally, comparative evaluations suggested that pure network-based methods are vastly outperformed by feature-based network methods utilizing real complexes. This indicates that the quality of current networks is insufficient to provide strong biological rigor for data analysis, and that they should be carefully evaluated before further validation.
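A minimal sketch of the Maxlink idea as summarized above (simplified; protein names other than PARP1, ARRB1/2, and ACTB are placeholders, and the real method also applies significance filtering): undetected proteins are ranked by their number of network links to the set of identified differential proteins.

```python
from collections import Counter

def maxlink_rank(network, seeds):
    """Rank proteins outside the seed set by their number of links to the
    seed set of identified differential proteins (Maxlink-style, simplified).
    network: iterable of undirected edges (u, v)."""
    seeds = set(seeds)
    links = Counter()
    for u, v in network:
        if u in seeds and v not in seeds:
            links[v] += 1
        elif v in seeds and u not in seeds:
            links[u] += 1
    return links.most_common()  # candidates sorted by seed-link count

# hypothetical interaction edges; "P2", "X", "Y", "Z", "W" are placeholders
edges = [("PARP1", "X"), ("P2", "X"), ("PARP1", "Y"), ("Z", "W")]
ranking = maxlink_rank(edges, {"PARP1", "P2"})
# X links to 2 seeds, Y to 1; Z and W are not reported
```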