
    Copernicus high-resolution layers for land cover classification in Italy

    The high-resolution layers (HRLs) are land cover maps produced for the entire Italian territory (approximately 30 million hectares) in 2012 by the European Environment Agency, aimed at monitoring soil imperviousness and natural cover, such as forest, grassland, wetland, and water surfaces, at a high spatial resolution of 20 m. This study presents the methodologies developed for the production, verification, and enhancement of the HRLs in Italy. The innovative approach is mainly based on (a) the use of available reference data for the enhancement process, (b) the reduction of manual work by operators through a semi-automatic approach, and (c) an overall increase in cost-efficiency in the production and updating of land cover maps. The results show the reliability of these methodologies in assessing and enhancing the quality of the HRLs. Finally, the individual HRLs were integrated in order to produce a National High-Resolution Land Cover map.
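    To make the final integration step concrete, here is a minimal sketch of a priority-based merge of binary HRL masks into a single national map, assuming co-registered 20 m rasters; the class codes and the priority order are illustrative assumptions, not the rules used in the study.

```python
import numpy as np

# Illustrative class codes and priority order (assumptions, not the
# study's actual integration rules).
CLASS_CODES = {"imperviousness": 1, "water": 2, "wetland": 3,
               "forest": 4, "grassland": 5}
PRIORITY = ["imperviousness", "water", "wetland", "forest", "grassland"]

def integrate_layers(masks: dict) -> np.ndarray:
    """Merge binary HRL masks (name -> 2D array) into one land cover map.

    Pixels left at 0 are unclassified; higher-priority layers overwrite
    lower-priority ones where masks overlap.
    """
    shape = next(iter(masks.values())).shape
    land_cover = np.zeros(shape, dtype=np.uint8)
    # Paint from lowest to highest priority so the last write wins.
    for name in reversed(PRIORITY):
        land_cover[masks[name] > 0] = CLASS_CODES[name]
    return land_cover
```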

    A symmetric cryptographic scheme for data integrity verification in cloud databases

    Cloud database services represent a great opportunity for companies and organizations in terms of management and cost savings. However, outsourcing private data to external providers leads to risks of confidentiality and integrity violations. We propose an original solution based on encrypted Bloom filters that addresses the latter problem by allowing a cloud service user to detect unauthorized modifications to their outsourced data. Moreover, we propose an original analytical model that can be used to minimize storage and network overhead depending on the database structure and workload. We assess the effectiveness of the proposal, as well as its performance improvements with respect to existing solutions, by evaluating storage and network costs through micro-benchmarks and the standard TPC-C workload.
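    As an illustration of how a keyed Bloom filter can support integrity verification, the sketch below derives filter positions from an HMAC so that the provider cannot forge consistent modifications; this is a simplified stand-in for the paper's scheme, and the filter size and hash count are illustrative, not the values from the analytical model.

```python
import hmac
import hashlib

M_BITS = 2 ** 16   # filter size (illustrative)
K_HASHES = 4       # number of hash functions (illustrative)

def _positions(key: bytes, record: bytes) -> list:
    """Derive K_HASHES filter positions from HMACs of the record."""
    return [
        int.from_bytes(
            hmac.new(key, record + bytes([i]), hashlib.sha256).digest()[:4],
            "big",
        ) % M_BITS
        for i in range(K_HASHES)
    ]

def build_filter(key: bytes, records: list) -> set:
    """Client-side: build the integrity filter before outsourcing the data."""
    bits = set()
    for rec in records:
        bits.update(_positions(key, rec))
    return bits

def verify(key: bytes, record: bytes, bits: set) -> bool:
    """True if a retrieved record is consistent with the stored filter."""
    return all(p in bits for p in _positions(key, record))
```

    An unauthorized modification changes the record bytes, so with high probability at least one of its HMAC-derived positions falls outside the filter and verification fails; the false-negative rate is the usual Bloom false-positive rate, tunable via M_BITS and K_HASHES.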

    Detection and Threat Prioritization of Pivoting Attacks in Large Networks

    Several advanced cyber attacks adopt the technique of "pivoting", through which attackers create a command propagation tunnel through two or more hosts in order to reach their final target. Identifying such malicious activities is one of the toughest research problems because of several challenges: command propagation is a rare event that cannot be detected through signatures, the huge amount of internal communications facilitates attacker evasion, and timely pivoting discovery is computationally demanding. This paper describes the first pivoting detection algorithm that is based on network flow analyses, does not rely on any a priori assumption about protocols and hosts, and leverages an original problem formalization in terms of temporal graph analytics. We also introduce a prioritization algorithm that ranks the detected paths on the basis of a threat score, thus letting security analysts investigate only the most suspicious pivoting tunnels. The feasibility and effectiveness of our proposal are assessed through a broad set of experiments that demonstrate its higher accuracy and performance with respect to related algorithms.
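    The temporal-graph formalization can be pictured as a search for time-increasing flow paths: a tunnel A -> B -> C is plausible when the flow B -> C starts shortly after A -> B. The naive sketch below follows that intuition only; the propagation-delay threshold, flow format, and search strategy are assumptions for illustration, not the paper's algorithm.

```python
from collections import defaultdict

EPSILON = 2.0  # max seconds between consecutive hops (illustrative)

def find_pivot_paths(flows, max_len=4):
    """flows: list of (src, dst, t) tuples.

    Returns candidate pivoting paths of hosts, each at least three
    hosts long (i.e., at least one intermediate pivot host).
    """
    by_src = defaultdict(list)
    for src, dst, t in flows:
        by_src[src].append((dst, t))
    paths = []

    def extend(path, last_t):
        if len(path) > max_len:
            return
        if len(path) >= 3:
            paths.append(list(path))
        # Follow only flows that start within EPSILON of the last hop.
        for nxt, t in by_src[path[-1]]:
            if last_t < t <= last_t + EPSILON and nxt not in path:
                path.append(nxt)
                extend(path, t)
                path.pop()

    for src, dst, t in flows:
        extend([src, dst], t)
    return paths
```

    A threat-prioritization step would then rank the returned paths, for example by length and by how unusual the involved hosts' communication patterns are, so that analysts inspect only the top-scoring tunnels.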

    Frequency Estimation in OFDM Direct-Conversion Receivers Using a Repeated Preamble

    This paper investigates the problem of carrier frequency offset (CFO) recovery in an OFDM receiver affected by frequency-selective in-phase/quadrature (I/Q) imbalances. The analysis is based on maximum-likelihood (ML) methods and relies on the transmission of a training preamble with a repetitive structure in the time domain. After assessing the accuracy of the conventional ML (CML) scheme in a scenario characterized by I/Q impairments, we review the joint ML (JML) estimator of all unknown parameters and evaluate its theoretical performance. In order to improve the estimation accuracy, we also present a novel CFO recovery method that exploits side-information about the signal-to-interference ratio. It turns out that both CML and JML can be derived from this scheme by properly adjusting the value of a design parameter. The accuracy of the investigated methods is compared with the relevant Cramér-Rao bound. Our results can be used to check whether conventional CFO recovery algorithms can work properly in the presence of I/Q imbalances and also to evaluate the potential gain attainable by more sophisticated schemes.
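    For reference, the classical correlation-based CFO estimator for a preamble made of two identical halves, to which the conventional ML scheme essentially reduces when impairments are absent, can be sketched as follows; function and variable names are illustrative.

```python
import numpy as np

def estimate_cfo(rx: np.ndarray, half_len: int, sample_rate: float) -> float:
    """Estimate the carrier frequency offset in Hz from a repeated preamble.

    rx: received preamble samples (length >= 2 * half_len)
    half_len: number of samples in each repeated half
    """
    first = rx[:half_len]
    second = rx[half_len:2 * half_len]
    # Without impairments, second = first * exp(j*2*pi*cfo*half_len/fs),
    # so the phase of the correlation reveals the CFO.
    corr = np.vdot(first, second)  # sum(conj(first) * second)
    return np.angle(corr) * sample_rate / (2 * np.pi * half_len)

# Example: synthesize a preamble with a 1 kHz offset at 20 MHz sampling.
fs, n = 20e6, 64
base = np.exp(2j * np.pi * np.random.rand(n))
rx = np.tile(base, 2) * np.exp(2j * np.pi * 1e3 * np.arange(2 * n) / fs)
print(estimate_cfo(rx, n, fs))  # approximately 1000.0
```

    Under I/Q imbalance the mirror-image interference biases this phase measurement, which is precisely the regime where the JML estimator and the side-information-aided scheme are expected to help.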

    Vehicle Safe-Mode, Limp-Mode in the Service of Cyber Security

    This paper describes a concept for vehicle safe-mode that may help reduce the potential damage of an identified cyber-attack. Unlike other defense mechanisms that try to block the attack or simply notify of its existence, our mechanism responds to the detected breach by limiting the vehicle's functionality to relatively safe operations and optionally activating additional security counter-measures. This is done by adopting the already existing mechanism of Limp-mode, which was originally designed to limit the potential damage of either a mechanical or an electrical malfunction and let the vehicle "limp back home" in relative safety. We further introduce two modes of safe-mode operation: in Transparent-mode, when a cyber-attack is detected the vehicle enters its pre-configured Limp-mode; in Extended-mode, we suggest using custom messages that offer additional flexibility in both the reaction and the recovery plans. While Extended-mode requires modifications to the participating ECUs, Transparent-mode may be applicable to existing vehicles since it does not require any changes to the vehicle's systems; in other words, it may even be deployed as an external component connected through the OBD-II port. We suggest an architectural design for the given modes, and include guidelines for a safe-mode manager, its clients, possible reactions, and recovery plans. We note that our system can rely upon any deployed anomaly-detection system to identify the potential attack.
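    One possible shape for such a safe-mode manager is sketched below as a small state machine driven by an anomaly-detection callback; the mode names follow the abstract, but the bus abstraction and the message strings are hypothetical placeholders rather than the paper's design.

```python
from enum import Enum, auto

class Mode(Enum):
    TRANSPARENT = auto()  # reuse the vehicle's built-in Limp-mode
    EXTENDED = auto()     # send custom reaction/recovery messages to ECUs

class SafeModeManager:
    def __init__(self, mode: Mode, bus):
        self.mode = mode
        self.bus = bus       # hypothetical abstraction over CAN / OBD-II
        self.active = False

    def on_attack_detected(self, alert: dict) -> None:
        """Callback invoked by any deployed anomaly-detection system."""
        if self.active:
            return
        self.active = True
        if self.mode is Mode.TRANSPARENT:
            # Trigger the pre-configured Limp-mode; no ECU changes needed,
            # so this could even run on an external OBD-II component.
            self.bus.send("TRIGGER_LIMP_MODE")
        else:
            # Extended-mode: tailor the reaction to the reported alert.
            self.bus.send(f"SAFE_MODE_ENTER level={alert.get('severity', 1)}")

    def on_recovery(self) -> None:
        """Leave safe-mode once the recovery plan has completed."""
        if self.active:
            self.bus.send("SAFE_MODE_EXIT")
            self.active = False
```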

    A comprehensive theoretical framework for the optimization of neural networks classification performance with respect to weighted metrics

    In many contexts, customized and weighted classification scores are designed in order to evaluate the goodness of the predictions carried out by neural networks. However, there exists a discrepancy between the maximization of such scores and the minimization of the loss function in the training phase. In this paper, we provide a complete theoretical setting that formalizes weighted classification metrics and then allows the construction of losses that drive the model to optimize these metrics of interest. After a detailed theoretical analysis, we show that our framework includes as particular instances well-established approaches such as classical cost-sensitive learning, weighted cross-entropy loss functions, and value-weighted skill scores.
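    As a concrete instance, a per-class weighted cross-entropy, one of the special cases the framework recovers, can be written as follows; the example weights are illustrative, and this sketch does not reproduce the paper's general construction.

```python
import numpy as np

def weighted_cross_entropy(probs: np.ndarray,
                           labels: np.ndarray,
                           class_weights: np.ndarray) -> float:
    """Cost-sensitive cross-entropy.

    probs: (n_samples, n_classes) predicted class probabilities
    labels: (n_samples,) integer class labels
    class_weights: (n_classes,) importance of each class in the metric
    """
    eps = 1e-12  # numerical guard against log(0)
    picked = probs[np.arange(len(labels)), labels]
    return float(np.mean(-class_weights[labels] * np.log(picked + eps)))

# Example: penalize errors on class 1 three times more than on class 0.
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 1])
print(weighted_cross_entropy(probs, labels, np.array([1.0, 3.0])))
```

    Training against this loss, instead of the unweighted one, aligns gradient descent with the weighted metric that will eventually be used for evaluation, which is exactly the discrepancy the framework is designed to remove.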

    Identifying malicious hosts involved in periodic communications

    After many research efforts, Network Intrusion Detection Systems still have much room for improvement. This paper proposes a novel method for automatic and timely analysis of traffic generated by large networks, which is able to identify malicious external hosts even if their activities do not raise any alert in existing defensive systems. Our proposal focuses on periodic communications, since our experimental evaluation shows that they are more often related to malicious activities, and it can be easily integrated with other detection systems. We highlight that periodic network activities can occur at very different intervals, ranging from seconds to hours, hence a timely analysis of long time-windows of the traffic generated by a large organization is a challenging task in itself. Existing work is primarily focused on identifying botnets, whereas the method proposed in this paper has a broader target and aims to detect external hosts that are likely involved in any malicious operation. Since malware-related network activities can be considered rare events in the overall traffic, the output of the proposed method is a manageable graylist of external hosts that are characterized by a considerably higher likelihood of being malicious compared to the entire set of external hosts contacted by the monitored network. A thorough evaluation on real traffic from a large network demonstrates the effectiveness of our proposal, which is capable of automatically selecting only dozens of suspicious hosts from hundreds of thousands, thus allowing security operators to focus their analyses on a few likely malicious targets.
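    One simple way to score periodicity from flow timestamps is the coefficient of variation of the inter-arrival times per external host: beaconing malware tends to call home at near-constant intervals. The sketch below follows that idea only; the thresholds are illustrative assumptions and not the paper's scoring method.

```python
from collections import defaultdict
import numpy as np

CV_THRESHOLD = 0.1  # relative jitter tolerated around the period
MIN_EVENTS = 5      # too few contacts cannot establish periodicity

def periodic_hosts(flows):
    """flows: iterable of (external_host, timestamp) pairs.

    Returns a graylist of (host, estimated_period_seconds) for hosts
    contacted at near-constant intervals.
    """
    times = defaultdict(list)
    for host, t in flows:
        times[host].append(t)
    graylist = []
    for host, ts in times.items():
        if len(ts) < MIN_EVENTS:
            continue
        gaps = np.diff(np.sort(ts))
        if gaps.mean() == 0:
            continue  # duplicate timestamps carry no periodicity signal
        cv = gaps.std() / gaps.mean()
        if cv < CV_THRESHOLD:
            graylist.append((host, float(gaps.mean())))
    return graylist
```

    Because the test needs only per-host timestamp lists, it scales to long time-windows of large-network traffic and its output can feed, rather than replace, existing detection systems.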

    Application of data fusion techniques to direct geographical traceability indicators

    A hierarchical data fusion approach has been developed, proposing multivariate curve resolution (MCR) as a variable reduction tool. The case study presented concerns the characterization of soil samples from the Modena district. It was performed in order to understand, at a pilot-study stage, the geographical variability of the zone prior to planning a representative soil sampling to derive geographical traceability models for Lambrusco wines. Soil samples were collected from four producers of Lambrusco wines, located in plain and hill areas. Depending on the extent of the sampled fields, the number of points collected varied from three to five and, for each point, five depth levels were considered. The different data blocks consisted of X-ray powder diffraction (XRDP) spectra, the concentrations of thirty-four metal elements, and the 87Sr/86Sr isotopic abundance ratio, a very promising geographical traceability marker. A multi-step data fusion strategy was adopted. Firstly, the metal concentrations dataset was weighted, concatenated with the strontium isotopic ratio values, and compressed. The resolved components described common patterns of variation of the metal content and the strontium isotopic ratio. The X-ray powder spectra were resolved into three main components that can be attributed to calcite, quartz, and clay contributions. Then, a high-level data fusion approach was applied by combining the components arising from the previous data sets. The results reveal interesting links among the components arising from XRDP and the metal patterns, and indicate which of these components the 87Sr/86Sr isotopic ratio variation follows most closely. The combined information made it possible to capture the variability of the analyzed soil samples.
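    The hierarchical pipeline can be pictured as block-wise compression followed by concatenation of the resolved components. The sketch below uses truncated SVD as a simple stand-in for MCR, with block names and component counts as illustrative assumptions rather than the study's actual settings.

```python
import numpy as np

def compress(block: np.ndarray, n_components: int) -> np.ndarray:
    """Autoscale a (samples x variables) block and return its scores."""
    scaled = (block - block.mean(0)) / block.std(0)
    # Truncated SVD as a generic stand-in for MCR variable reduction.
    u, s, _ = np.linalg.svd(scaled, full_matrices=False)
    return u[:, :n_components] * s[:n_components]

def fuse(blocks: dict, n_components: dict) -> np.ndarray:
    """High-level fusion: concatenate per-block scores sample-wise."""
    return np.hstack([compress(b, n_components[name])
                      for name, b in blocks.items()])

# Example with random stand-ins for the XRDP and metals + Sr-ratio blocks.
rng = np.random.default_rng(0)
blocks = {"xrdp": rng.normal(size=(20, 500)),
          "metals_sr": rng.normal(size=(20, 35))}
fused = fuse(blocks, {"xrdp": 3, "metals_sr": 4})
print(fused.shape)  # (20, 7)
```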