
    Development of method of matched morphological filtering of biomedical signals and images

    A formalized approach to the analysis of biomedical signals and images with locally concentrated features is developed on the basis of matched morphological filtering. By taking models of the useful signal into account, the approach generalizes existing methods for the digital processing and analysis of such signals and images. The proposed matched morphological filter has been adapted to problems such as localizing the sought structural elements in biomedical signals with locally concentrated features, estimating the irregular background to improve the visualization quality of biological objects in X-ray biomedical images, and selecting pathological structures in mammograms. The efficiency of the proposed matched morphological filtering methods is demonstrated experimentally.
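
    The paper's matched morphological filter is not reproduced here, but the background-estimation idea it applies to X-ray images can be sketched with a generic grey-scale morphological opening: a structuring element larger than the features of interest removes them, leaving an estimate of the irregular background that can then be subtracted (a top-hat style operation). The Python sketch below uses scipy.ndimage; the window size and the synthetic image are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def enhance_local_features(image: np.ndarray, background_size: int = 25) -> np.ndarray:
    """Suppress an irregular background so locally concentrated features stand out.

    The structuring element is chosen larger than the features of interest,
    so grey-scale opening removes them; what remains approximates the slowly
    varying background, which is then subtracted (a top-hat operation).
    """
    footprint = np.ones((background_size, background_size))
    background = ndimage.grey_opening(image, footprint=footprint)  # background estimate
    return image - background  # residual: the locally concentrated features

if __name__ == "__main__":
    # Synthetic example: a bright blob (local feature) on a sloped background.
    y, x = np.mgrid[0:128, 0:128]
    ramp = 0.5 * x                                                  # irregular background
    blob = 40.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 30.0)   # local feature
    enhanced = enhance_local_features(ramp + blob)
    print(enhanced.max())  # the blob dominates once the ramp is removed
```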

    A heuristic-based approach to code-smell detection

    Encapsulation and data hiding are central tenets of the object-oriented paradigm. Deciding what data and behaviour to form into a class, and where to draw the line between its public and private details, can make the difference between a class that is an understandable, flexible and reusable abstraction and one which is not. This decision is a difficult one and may easily result in poor encapsulation, which can then have serious implications for a number of system qualities. It is often hard to identify such encapsulation problems within large software systems until they cause a maintenance problem (which is usually too late), and attempting to perform such analysis manually can be tedious and error prone. Two of the common encapsulation problems that can arise as a consequence of this decomposition process are data classes and god classes. Typically, the two problems occur together – data classes lack functionality that has typically been sucked into an over-complicated and domineering god class. This paper describes the architecture of a tool, developed as a plug-in for the Eclipse IDE, which automatically detects data and god classes. The technique has been evaluated in a controlled study on two large open-source systems, comparing the tool's results to similar work by Marinescu, who employs a metrics-based approach to detecting such features. The study provides some valuable insights into the strengths and weaknesses of the two approaches.
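
    As a rough illustration of the metrics-based side of such detection (in the spirit of Marinescu's approach, not the heuristic plug-in described in the paper), a classifier over per-class metrics might look like the following Python sketch. The metric names and thresholds are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class ClassMetrics:
    name: str
    num_methods: int              # all methods declared by the class
    num_accessors: int            # getters/setters among them
    weighted_complexity: int      # e.g. summed cyclomatic complexity (WMC)
    num_foreign_fields_used: int  # fields accessed on other classes

# Illustrative thresholds -- real detectors calibrate these empirically.
GOD_WMC = 47               # very high overall complexity
GOD_FOREIGN = 5            # heavy use of other classes' data
DATA_ACCESSOR_RATIO = 0.7  # class consists mostly of getters/setters

def is_god_class(m: ClassMetrics) -> bool:
    """Flag complex classes that pull in other classes' data."""
    return m.weighted_complexity >= GOD_WMC and m.num_foreign_fields_used >= GOD_FOREIGN

def is_data_class(m: ClassMetrics) -> bool:
    """Flag classes that expose data but carry little behaviour."""
    if m.num_methods == 0:
        return True
    return m.num_accessors / m.num_methods >= DATA_ACCESSOR_RATIO

candidate = ClassMetrics("OrderManager", 60, 4, 120, 9)
print(is_god_class(candidate), is_data_class(candidate))  # True False
```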

    CODEWEAVE: exploring fine-grained mobility of code

    This paper is concerned with an abstract exploration of code mobility constructs designed for use in settings where the level of granularity associated with the mobile units exhibits significant variability. Units of mobility that are both finer and coarser grained than the unit of execution are examined. To accomplish this, we take the extreme view that every line of code and every variable declaration is potentially mobile, i.e., each may be duplicated or moved from one program context to another on the same host or across the network. We also assume that complex code assemblies may move with equal ease. The result is CODEWEAVE, a model that shows how to develop new forms of code mobility, assign them precise meaning, and facilitate formal verification of programs employing them. The design of CODEWEAVE relies heavily on Mobile UNITY, a notation and proof logic for mobile computing. Mobile UNITY offers a computational milieu for examining a wide range of constructs and semantic alternatives in a clean abstract setting, i.e., unconstrained by the compilation and performance considerations traditionally associated with programming language design. Ultimately, the notation offered by CODEWEAVE is given an exact semantic definition by means of a direct mapping to the underlying Mobile UNITY model. The abstract and formal treatment of code mobility offered by CODEWEAVE establishes a technical foundation for examining competing proposals and for the subsequent integration of some of the mobility constructs, both at the language level and within middleware for mobility.
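
    CODEWEAVE itself is a formal notation grounded in Mobile UNITY and is not reproduced here; the following Python sketch merely illustrates the core idea of units of mobility finer than a whole program – individual declarations or statements that can be moved or duplicated between program contexts on the same or different hosts. All names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class CodeUnit:
    """A fine-grained mobile unit: a single statement or declaration."""
    text: str

@dataclass
class Context:
    """A program context (e.g. a process on some host) holding code units."""
    host: str
    units: list = field(default_factory=list)

def move(unit: CodeUnit, src: Context, dst: Context) -> None:
    """Relocate a unit from one context to another (possibly across hosts)."""
    src.units.remove(unit)
    dst.units.append(unit)

def duplicate(unit: CodeUnit, dst: Context) -> None:
    """Copy a unit into another context, leaving the original in place."""
    dst.units.append(CodeUnit(unit.text))

a, b = Context("host-a"), Context("host-b")
decl = CodeUnit("x: int = 0")
a.units.append(decl)
move(decl, a, b)           # the declaration now lives in context b
duplicate(b.units[0], a)   # and a copy exists back in context a
print(a.units, b.units)
```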

    Development and Algorithmization of a Method for Analyzing the Degree of Uniqueness of Personal Medical Data

    The purpose of this investigation is to develop a method for the quantitative assessment of the uniqueness of personal medical data (PMD) in order to improve their protection in medical information systems (MIS). The goal is relevant because de-personalized PMD can form unique combinations that are potentially of interest to intruders and threaten to reveal the patient's identity and breach medical confidentiality. Existing approaches were analyzed, and a new method for quantifying the degree of uniqueness of PMD is proposed. A weakness of existing approaches is the assumption that an attacker will use exact matching to identify people. The novelty of the method proposed in this paper lies in the fact that it is not limited to this hypothesis, although it has its own limitation: it is not applicable to small samples. The developed method for determining the PMD uniqueness coefficient assumes a multivariate normal distribution of features characterized by a covariance matrix, which most reliably reflects the existing relationships between features when analyzing large data samples. Computational experiments show that the method's efficiency is no worse than that of focus groups of specialized experts. DOI: 10.28991/HIJ-2023-04-01-09
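
    A minimal sketch of the underlying idea, assuming records are numeric feature vectors: fit a multivariate normal with the sample covariance matrix and treat low-density records as rare, potentially re-identifying combinations. The mapping from density to a "uniqueness score" below is an illustrative assumption, not the paper's exact coefficient.

```python
import numpy as np
from scipy.stats import multivariate_normal

def uniqueness_scores(records: np.ndarray) -> np.ndarray:
    """Score each record's rarity under a multivariate normal model.

    The covariance matrix estimated from the sample captures relationships
    between features; records with low model density correspond to rare,
    potentially re-identifying combinations. Assumes a large sample (the
    method is explicitly not meant for small ones).
    """
    mean = records.mean(axis=0)
    cov = np.cov(records, rowvar=False)
    density = multivariate_normal(mean=mean, cov=cov).pdf(records)
    return 1.0 / (1.0 + density)  # higher score = more unique combination

rng = np.random.default_rng(0)
data = rng.multivariate_normal([40, 120], [[90, 30], [30, 200]], size=5000)
data[0] = [95, 220]  # an unusual feature combination (hypothetical outlier)
scores = uniqueness_scores(data)
print(scores[0] > np.median(scores))  # the outlier scores as highly unique
```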

    Semantic hierarchies for extracting, modeling, and connecting compliance requirements in information security control standards

    Companies and government organizations are increasingly compelled, if not required by law, to ensure that their information systems comply with various federal and industry regulatory standards, such as the NIST Special Publication on Security Controls for Federal Information Systems (NIST SP-800-53) or the Common Criteria (ISO 15408-2). Such organizations operate business- or mission-critical systems where a lack of or lapse in security protections translates to serious confidentiality, integrity, and availability risks that, if exploited, could result in information disclosure, loss of money, or, at worst, loss of life. To mitigate these risks and ensure that their information systems meet regulatory standards, organizations must be able to (a) contextualize regulatory documents in a way that extracts the relevant technical implications for their systems, (b) formally represent their systems and demonstrate that they meet the extracted requirements following an accreditation process, and (c) ensure that all third-party systems, which may exist outside of the information system enclave as web or cloud services, also implement appropriate security measures consistent with organizational expectations. This paper introduces a step-wise process, based on semantic hierarchies, that systematically extracts relevant security requirements from control standards to build a certification baseline for organizations to use in conjunction with formal methods and service agreements for accreditation. The approach is demonstrated in a case study of all audit-related controls in NIST SP-800-53, ISO 15408-2, and related documents. The accuracy, applicability, consistency, and efficacy of the approach were evaluated using controlled qualitative and quantitative methods in two separate studies.
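
    As a toy illustration of a semantic hierarchy connecting audit-related controls across standards, consider the Python sketch below. The control identifiers (NIST SP-800-53 AU-2, Common Criteria FAU_GEN.1) are real, but the fragment and its refinement into concrete requirements are illustrative assumptions, not the paper's extraction process.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One level of a semantic hierarchy: a concept refined by its children."""
    label: str
    controls: dict = field(default_factory=dict)   # standard -> control id
    children: list = field(default_factory=list)

# Hypothetical fragment: audit controls from two standards connected under
# a shared concept and refined into concrete, checkable requirements.
audit = Node("audit", controls={"NIST SP-800-53": "AU", "ISO 15408-2": "FAU"})
audit.children.append(
    Node("audit event generation",
         controls={"NIST SP-800-53": "AU-2", "ISO 15408-2": "FAU_GEN.1"},
         children=[Node("record a timestamp for each auditable event"),
                   Node("record the identity of the subject causing the event")])
)

def baseline(node: Node, depth: int = 0) -> None:
    """Walk the hierarchy and emit a flat certification baseline."""
    refs = ", ".join(f"{s} {c}" for s, c in node.controls.items())
    print("  " * depth + node.label + (f"  [{refs}]" if refs else ""))
    for child in node.children:
        baseline(child, depth + 1)

baseline(audit)
```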