806 research outputs found

    Technologie RFID a Blockchain v dodavatelském řetězci (RFID and Blockchain Technology in the Supply Chain)

    Get PDF
    The paper discusses the possibility of combining RFID and Blockchain technology to prevent counterfeiting of products or raw materials more effectively, and to solve problems related to production, logistics and storage. Linking these technologies can lead to better planning through greater transparency and traceability of industrial or logistical processes, and, for example, to more efficient detection of critical points in the chain.
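    As an illustration of how the two technologies might be linked (a minimal sketch, not code from the paper; the event fields and reader names are hypothetical), the Python snippet below appends RFID scan events to a simple hash-chained ledger, so every step of an item's journey is traceable and tampering with an earlier record invalidates all later hashes.

        import hashlib, json, time

        def make_block(prev_hash, rfid_event):
            # Append-only record: each block commits to the previous block's hash,
            # so altering an earlier scan event breaks every later hash.
            block = {"prev_hash": prev_hash, "timestamp": time.time(), "event": rfid_event}
            block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
            return block

        # Hypothetical scan events for one tagged item moving through the chain.
        ledger = [make_block("0" * 64, {"tag": "EPC-0001", "reader": "factory-gate", "step": "produced"})]
        ledger.append(make_block(ledger[-1]["hash"], {"tag": "EPC-0001", "reader": "warehouse-7", "step": "inbound"}))
        ledger.append(make_block(ledger[-1]["hash"], {"tag": "EPC-0001", "reader": "retail-12", "step": "delivered"}))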

    Approximate algorithms for efficient indexing, clustering, and classification in Peer-to-peer networks

    Get PDF
    [no abstract]

    Exploring heterogeneity of unreliable machines for p2p backup

    Full text link
    P2P architecture is a viable option for enterprise backup. In contrast to dedicated backup servers, nowadays the standard solution, making backups directly on an organization's workstations should be cheaper (as existing hardware is used), more efficient (as there is no single bottleneck server) and more reliable (as the machines are geographically dispersed). We present the architecture of a p2p backup system that uses pairwise replication contracts between a data owner and a replicator. In contrast to standard p2p storage systems that use a DHT directly, the contracts allow our system to optimize replica placement according to a specific optimization strategy, and thus to take advantage of the heterogeneity of the machines and the network. Such optimization is particularly appealing in the context of backup: replicas can be geographically dispersed, the load sent over the network can be minimized, or the optimization goal can be to minimize the backup/restore time. However, managing the contracts, keeping them consistent and adjusting them in response to a dynamically changing environment is challenging. We built a scientific prototype and ran experiments on 150 workstations in the university's computer laboratories and, separately, on 50 PlanetLab nodes. We found that the main factor affecting the quality of the system is the availability of the machines. Yet our main conclusion is that it is possible to build an efficient and reliable backup system on highly unreliable machines (our computers had just 13% average availability).
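    A minimal sketch of the pairwise-contract idea described above, assuming hypothetical Machine and Contract records and one possible placement strategy (prefer machines at another site, then the most available ones); the paper's actual optimization strategies may differ.

        from dataclasses import dataclass

        @dataclass
        class Machine:
            name: str
            availability: float   # observed fraction of time the machine is online (0..1)
            site: str             # e.g. lab or building, used for geographic dispersion

        @dataclass
        class Contract:
            owner: str            # machine holding the original data chunk
            replicator: str       # machine that agrees to store a replica
            chunk_id: str

        def place_replicas(owner, chunk_id, candidates, n_replicas=3):
            # Rank candidate replicators: different site first, then higher availability,
            # and sign a pairwise contract with the top n_replicas of them.
            ranked = sorted(candidates,
                            key=lambda m: (m.site != owner.site, m.availability),
                            reverse=True)
            return [Contract(owner.name, m.name, chunk_id) for m in ranked[:n_replicas]]

        peers = [Machine("pc-a", 0.13, "lab-1"), Machine("pc-b", 0.42, "lab-2"), Machine("pc-c", 0.30, "lab-2")]
        contracts = place_replicas(Machine("owner-pc", 0.25, "lab-1"), "chunk-17", peers, n_replicas=2)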

    CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines

    Get PDF
    Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective. The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines. From a socio-economic perspective we inventory the impact and legal consequences of these technical advances and point out future directions of research.

    Numerical Analysis for Relevant Features in Intrusion Detection (NARFid)

    Get PDF
    Identification of cyber attacks and network services is a robust field of study in the machine learning community. Less effort has been focused on understanding the domain space of real network data and on identifying important features for cyber attack and network service classification. Such work allows for anomaly detection systems with fewer requirements on data “sniffed” off the network and on feature extraction from the traffic, reduced learning time of algorithms, and ideally increased classification performance on anomalous behavior. This thesis evaluates the usefulness of a good feature subset for the general classification task of identifying cyber attacks and network services. The generality of the selected features elucidates the relevance or irrelevance of the feature set for the classification task of intrusion detection. Additionally, the thesis provides an extension to the Bhattacharyya method, which selects features by means of inter-class separability (the Bhattacharyya coefficient). The extension for multiple-class problems selects a minimal set of features with the best separability across all class pairs. Several feature selection algorithms (e.g., accuracy rate with a genetic algorithm, RELIEF-F, GRLVQI, and the median Bhattacharyya and minimum surface Bhattacharyya methods) create feature subsets that describe the decision boundary for intrusion detection problems. The selected feature subsets maintain or improve the classification performance for at least three out of the four anomaly detectors (i.e., classifiers) under test. The feature subsets, which illustrate generality for the intrusion detection problem, range in size from 12 to 27 features; the original feature set consists of 248 features. Of the feature subsets demonstrating generality, the extension to the Bhattacharyya method generates the second smallest. This thesis quantitatively demonstrates that a relatively small feature set may be used for intrusion detection with machine learning classifiers.
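    To illustrate the kind of selection the Bhattacharyya extension performs, the sketch below (a simplification under a univariate Gaussian assumption, not the thesis's exact algorithm) scores each feature by its Bhattacharyya distance on every class pair and keeps the k features whose worst-separated pair is still best separated; scoring by the minimum over pairs mirrors the goal of good separability across all class pairs rather than only the easiest one.

        import numpy as np
        from itertools import combinations

        def bhattacharyya_distance(x_a, x_b):
            # Univariate Gaussian Bhattacharyya distance between two classes for one feature.
            mu_a, mu_b = x_a.mean(), x_b.mean()
            var_a, var_b = x_a.var() + 1e-12, x_b.var() + 1e-12
            return (0.25 * (mu_a - mu_b) ** 2 / (var_a + var_b)
                    + 0.5 * np.log((var_a + var_b) / (2.0 * np.sqrt(var_a * var_b))))

        def select_features(X, y, k):
            # Score feature j by its minimum separability over all class pairs,
            # then return the indices of the k best-scoring features.
            classes = np.unique(y)
            scores = [min(bhattacharyya_distance(X[y == a, j], X[y == b, j])
                          for a, b in combinations(classes, 2))
                      for j in range(X.shape[1])]
            return np.argsort(scores)[::-1][:k]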

    WebSocket vs WebRTC in the stream overlays of the Streamr Network

    Get PDF
    The Streamr Network is a decentralized publish-subscribe system. This thesis experimentally compares WebSocket and WebRTC as transport protocols in the system's unstructured, d-regular random-graph stream overlays. The thesis explores common designs for publish-subscribe and decentralized P2P systems. Underlying network protocols, including NAT traversal, are explored to understand how the WebSocket and WebRTC protocols function. The requirements set for the Streamr Network, and how its design and implementations fulfill them, are discussed. The design and implementations are validated with simulations, emulations and real-world experiments deployed on AWS. The performance metrics measured in the real-world experiments are compared to related work. As the implementations using the two protocols are separate, incompatible versions, the differences between them were taken into account during analysis of the experiments. Although the WebSocket version's overlay construction is known to be inefficient and vulnerable to churn, it was found to be unintentionally topology aware, which caused the WebSocket stream overlays to perform better in terms of latency. The WebRTC stream overlays were found to be more predictable and more optimized for small payloads, as estimates for message propagation delays had a MEPA of 1.24% compared to WebSocket's 3.98%. Moreover, the WebRTC version enables P2P connections between hosts behind NATs. As the WebRTC version's overlay construction is more accurate, reliable, scalable, and churn tolerant, it can be used to create intentionally topology-aware stream overlays that overtake the results of the WebSocket implementation.
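    For intuition about the overlays being compared, here is a rough sketch (using Python and networkx with hypothetical node identifiers; it is not Streamr code) that builds one d-regular random-graph overlay per stream and floods a published message over it; in the real network each hop is a WebSocket or WebRTC send, which is what the thesis measures.

        import networkx as nx

        def build_stream_overlay(node_ids, degree=4, seed=42):
            # One overlay per stream: a d-regular random graph over the stream's nodes
            # (networkx requires degree < len(node_ids) and degree * len(node_ids) even).
            g = nx.random_regular_graph(degree, len(node_ids), seed=seed)
            return nx.relabel_nodes(g, dict(enumerate(node_ids)))

        def propagate(overlay, source):
            # Flood a message from the publisher hop by hop until every reachable node has it.
            seen, frontier = {source}, [source]
            while frontier:
                nxt = []
                for node in frontier:
                    for peer in overlay.neighbors(node):
                        if peer not in seen:
                            seen.add(peer)
                            nxt.append(peer)
                frontier = nxt
            return seen

        overlay = build_stream_overlay([f"node-{i}" for i in range(20)], degree=4)
        reached = propagate(overlay, "node-0")   # with degree >= 3 the overlay is almost surely connected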

    Reducing Internet Latency: A Survey of Techniques and their Merit

    Get PDF
    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint.