    Computational environment for modeling and analysing network traffic behaviour using the divide and recombine framework

    There are two essential goals of this research. The first is to design and build a computational environment for studying large and complex datasets in the cybersecurity domain. The second is to analyse the Spamhaus blacklist query dataset, which includes uncovering the properties of blacklisted hosts and understanding how blacklisted hosts behave over time. The analytical environment enables deep analysis of very large and complex datasets by exploiting the divide and recombine framework. The capability to analyse data in depth allows research to go beyond summary statistics: the analysis operates at the highest level of granularity without any compromise on the size of the data. The environment is also fully capable of processing the raw data into a data structure suited for analysis. Spamhaus is an organisation that identifies malicious hosts on the Internet. Information about malicious hosts is stored by Spamhaus in a distributed database and served through DNS query-responses. Spamhaus and other malicious-host-blacklisting organisations have replaced the smaller malicious-host databases that individual organisations once curated independently for their internal needs. Spamhaus services are popular due to their free access, exhaustive information, historical records, simple DNS-based implementation, and reliability. The malicious-host information obtained from these databases is used as a first step in weeding out potentially harmful hosts on the Internet. During this research a detailed packet-level analysis was carried out on the Spamhaus blacklist data. The query-responses were observed to display some peculiar behaviours. These anomalies were studied and modeled, and found to exhibit definite patterns; these patterns are empirical evidence of a systemic or statistical phenomenon.
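    The DNS query-response mechanism described above follows the general DNSBL convention: a client reverses the octets of an IPv4 address and prepends them to the blocklist zone, then issues an ordinary DNS lookup for the resulting name. A minimal sketch of the query-name construction (the zone name shown is the public Spamhaus aggregate zone; the example address is illustrative):

```python
# Sketch of how a DNSBL-style query name is built for a blacklist lookup.
# An actual lookup would then resolve this name as a DNS A record; the
# returned address encodes which list (if any) the host appears on.

def dnsbl_query_name(ipv4: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNS name queried for a blacklist lookup:
    the IPv4 octets are reversed and prepended to the zone."""
    octets = ipv4.split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255 for o in octets):
        raise ValueError(f"not a valid IPv4 address: {ipv4!r}")
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("192.0.2.99"))  # 99.2.0.192.zen.spamhaus.org
```

    Each blacklist query in the dataset analysed here is one such name lookup, which is why the raw data arrives as packet-level DNS query-response pairs.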

    A Tale of Three Signatures: practical attack of ECDSA with wNAF

    One way of attacking an ECDSA implementation that uses wNAF for scalar multiplication is to perform a side-channel analysis to collect information, then use a lattice-based method to recover the secret key. In this paper, we reinvestigate the construction of the lattice used in one of these methods, the Extended Hidden Number Problem (EHNP). We find the secret key with only 3 signatures, thus reaching the theoretical bound given by Fan, Wang and Cheng, whereas the best previous methods required at least 4 signatures in practice. Our attack is more efficient than previous attacks, in particular compared to the times reported by Fan et al. at CCS 2016, and in most cases it has a better probability of success. To obtain these results, we perform a detailed analysis of the parameters used in the attack and introduce a preprocessing method which, for some parameters, reduces the overall time to recover the secret key by a factor of up to 7. We also perform an error-resilience analysis, which had never been done before in the EHNP setting: our construction still finds the secret key from a small number of erroneous traces, with up to 2% false digits, and up to 4% for a specific type of error. Finally, we investigate Coppersmith's methods as a potential alternative to EHNP and explain why, to the best of our knowledge, EHNP goes beyond the limitations of Coppersmith's methods.
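    For context, the attack targets the wNAF (width-w non-adjacent form) recoding of the secret scalar: the side channel leaks the positions of the nonzero wNAF digits, which feed the EHNP lattice. A minimal sketch of standard width-w NAF recoding (this is the textbook recoding only, not the paper's lattice step):

```python
# Standard width-w NAF recoding of a scalar, as used in ECDSA scalar
# multiplication. Digits are odd and bounded by 2**(w-1) in absolute
# value; any w consecutive digits contain at most one nonzero entry.

def wnaf(k: int, w: int = 3) -> list[int]:
    """Return the wNAF digits of k, least-significant first."""
    digits = []
    while k > 0:
        if k & 1:
            d = k % (1 << w)          # residue mod 2^w ...
            if d >= (1 << (w - 1)):   # ... mapped to the signed range
                d -= (1 << w)
            k -= d                    # subtracting d makes k even
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

print(wnaf(7))  # [-1, 0, 0, 1], i.e. 7 = -1 + 2**3
```

    The positions of the nonzero digits are exactly what a cache-timing trace of the scalar multiplication reveals, and small errors in those positions are what the error-resilience analysis above tolerates.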

    Digenean parasites of Chinese marine fishes: a list of species, hosts and geographical distribution

    In the literature, 630 species of Digenea (Trematoda) have been reported from Chinese marine fishes, belonging to 209 genera and 35 families. The names of these species, along with their hosts, geographical distribution and records, are listed in this paper.

    Identifying the structure of Zn-N-2 active sites and structural activation

    Identification of active sites is one of the main obstacles to the rational design of catalysts for diverse applications, and fundamental insight into the structure of active sites and their contribution to catalytic performance is still lacking. Recently, X-ray absorption spectroscopy (XAS) and density functional theory (DFT) have provided important tools to disclose the electronic, geometric and catalytic nature of active sites. Herein, we demonstrate the structural identification of Zn-N-2 active sites using both experimental and theoretical X-ray absorption near-edge structure (XANES) and extended X-ray absorption fine structure (EXAFS) spectra. Further DFT calculations reveal that the activation of oxygen species on Zn-N-2 active sites is significantly enhanced, which can accelerate the reduction of oxygen with high selectivity, in good agreement with the experimental results. This work highlights the identification and investigation of Zn-N-2 active sites, providing a general principle for gaining deep insight into the nature of catalysts for various catalytic applications.

    A New Microsphere-Based Immunoassay for Measuring the Activity of Transcription Factors

    There are several traditional and well-developed methods for analyzing the activity of transcription factors, such as EMSA, enzyme-linked immunosorbent assays, and reporter gene activity assays. All of these methods have their own distinct disadvantages, and none can analyze the changes in transcription factors in the few cells that are cultured in the wells of 96-well titer plates. Thus, a new microsphere-based immunoassay to measure the activity of transcription factors (MIA-TF) was developed. In MIA-TF, NeutrAvidin-labeled microspheres were used as the solid phase to capture biotin-labeled double-stranded DNA fragments that contain particular transcription-factor binding elements. The activity of transcription factors was detected by immunoassay using a transcription-factor-specific antibody to monitor binding to the DNA probe, followed by analysis with flow cytometry. The targets hypoxia-inducible factor-1α (HIF-1α) and nuclear factor-kappa B (NF-κB) were applied and detected with this MIA-TF method; the results demonstrated that the method can monitor changes in NF-κB or HIF within 50 or 100 ng of nuclear extract. Furthermore, MIA-TF could detect changes in NF-κB or HIF in cells cultured in the wells of a 96-well plate without purification of the nuclear protein, an important consideration for applying this method to high-throughput assays in the future. The development of MIA-TF would support further progress in clinical analysis and drug-screening systems. Overall, MIA-TF is a method with high potential to detect the activity of transcription factors.

    Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV

    The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation.

    Precise measurement of the W-boson mass with the CDF II detector

    We have measured the W-boson mass MW using data corresponding to 2.2/fb of integrated luminosity collected in proton-antiproton collisions at 1.96 TeV with the CDF II detector at the Fermilab Tevatron collider. Samples consisting of 470126 W->enu candidates and 624708 W->munu candidates yield the measurement MW = 80387 +- 12 (stat) +- 15 (syst) = 80387 +- 19 MeV. This is the most precise measurement of the W-boson mass to date and significantly exceeds the precision of all previous measurements combined.
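    The quoted total uncertainty is the statistical and systematic components combined in quadrature, the standard treatment for independent uncertainty sources. A one-line check against the quoted +-19 MeV:

```python
import math

# Combine independent statistical and systematic uncertainties in
# quadrature, reproducing the quoted total uncertainty on MW.
stat, syst = 12.0, 15.0  # MeV, as quoted above
total = math.sqrt(stat**2 + syst**2)
print(round(total))  # 19
```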
