    Monitoring SOA Applications with SOOM Tools: A Competitive Analysis

    Background: Monitoring systems decouple monitoring functionality from the application and infrastructure layers and provide a set of tools that can invoke operations on the application being monitored. Objectives: Our monitoring system is a powerful yet agile solution that can observe and manipulate SOA (Service-Oriented Architecture) applications online. The basic monitoring functionality is implemented via lightweight components inserted into SOA frameworks, keeping the monitoring impact minimal. Methods/Approach: Our solution is software that hides the complexity of the SOA applications being monitored through an architecture in which designated components deal with specific SOA aspects such as distribution and communication. Results: We implement application-level, end-to-end monitoring with the end-user experience in focus. Our tools are connected to a single monitoring system that provides consistent operations, resolves concurrent requests, and abstracts away the underlying mechanisms that cater for the SOA paradigm. Conclusions: Owing to their flexible architecture and design, our monitoring tools are capable of monitoring SOA applications in cloud environments without significant modifications. Comparisons with related systems show that agility is the area in which our monitoring system excels.
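
    The "lightweight components inserted into SOA frameworks" idea can be pictured as interceptors wrapped around service operations that report to one central monitor. Below is a minimal sketch of that pattern in Python; the `MonitoringSystem` registry, the `monitored` decorator, and the reported metrics are hypothetical illustrations, not the actual SOOM API.

```python
import time
from collections import defaultdict

class MonitoringSystem:
    """Hypothetical central registry that the lightweight probes report to."""
    def __init__(self):
        self.metrics = defaultdict(list)  # operation name -> list of (latency_s, ok)

    def record(self, operation, latency, ok):
        self.metrics[operation].append((latency, ok))

    def summary(self, operation):
        samples = self.metrics[operation]
        latencies = [lat for lat, _ in samples]
        return {
            "calls": len(samples),
            "avg_latency_s": sum(latencies) / len(latencies),
            "failures": sum(1 for _, ok in samples if not ok),
        }

MONITOR = MonitoringSystem()

def monitored(operation):
    """Interceptor: wraps a service operation and reports latency and outcome."""
    def decorate(func):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = func(*args, **kwargs)
                MONITOR.record(operation, time.perf_counter() - start, ok=True)
                return result
            except Exception:
                MONITOR.record(operation, time.perf_counter() - start, ok=False)
                raise
        return wrapper
    return decorate

@monitored("orders.place")
def place_order(order_id):
    time.sleep(0.01)  # stand-in for real service work
    return f"order {order_id} placed"

place_order(42)
print(MONITOR.summary("orders.place"))
```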

    Model-Based Security Testing

    Security testing aims at validating software system requirements related to security properties such as confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, enable guidance on test identification and specification, and support automated test generation. Model-based security testing (MBST) is a relatively new field dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a research challenge and of high interest for industrial applications. MBST includes, for example, security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey of MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS. (Comment: In Proceedings MBT 2012, arXiv:1202.582)
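
    To make model-based security test generation concrete: from a small behavioural model of a login flow, one can enumerate action sequences that reach a protected state and mark those that bypass authentication as negative tests. The state model, actions, and "authenticate must precede account access" property below are invented for illustration and are not tooling from the DIAMONDS project.

```python
from collections import deque

# Hypothetical behavioural model of a login flow: state -> [(action, next_state)].
MODEL = {
    "start":   [("open_login", "login"), ("deep_link", "account")],  # deep_link: suspected flaw
    "login":   [("authenticate", "session"), ("reset_password", "login")],
    "session": [("open_account", "account")],
    "account": [],
}

def generate_security_tests(model, start="start", target="account", max_depth=6):
    """Enumerate action sequences reaching the protected state; any sequence
    that arrives without an 'authenticate' step becomes a negative test."""
    tests = []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if state == target:
            authorized = "authenticate" in path
            tests.append({"actions": path, "expected": "allow" if authorized else "deny"})
            continue
        if len(path) >= max_depth:  # bound cycles such as repeated reset_password
            continue
        for action, nxt in model[state]:
            queue.append((nxt, path + [action]))
    return tests

for test in generate_security_tests(MODEL):
    print(test)  # e.g. {'actions': ['deep_link'], 'expected': 'deny'}
```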

    Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges

    If the last decade viewed computational services as a utility, then surely this decade has transformed computation into a commodity. Computation is now progressively integrated into physical networks in a seamless way that enables cyber-physical systems (CPS) and the Internet of Things (IoT) to meet their latency requirements. Similar to the concepts of "platform as a service" and "software as a service", both cloudlets and fog computing have found their own use cases. Edge devices (which we call end or user devices for disambiguation) play the role of personal computers, dedicated to a user and to a set of correlated applications. In this new scenario, the boundaries between the network node, the sensor, and the actuator are blurring, driven primarily by the computational power of IoT nodes such as single-board computers and smartphones. The growing volume of data generated in this type of network needs clever, scalable, and possibly decentralized computing solutions that can scale independently as required. Any node can be seen as part of a graph, with the capacity to serve as a computing node, a network router node, or both. Complex applications can be distributed over this graph, or network of nodes, to improve overall performance measures such as the amount of data processed over time. In this paper, we identify this new computing paradigm, which we call Social Dispersed Computing, analyzing key themes in it, including a new outlook on its relation to agent-based applications. We architect this new paradigm by providing supportive application examples, including next-generation electrical energy distribution networks, next-generation mobility services for transportation, and applications for distributed analysis and identification of non-recurring traffic congestion in cities. The paper analyzes the existing computing paradigms (e.g., cloud, fog, edge, mobile edge, social), resolving the ambiguity of their definitions, and analyzes and discusses the relevant foundational software technologies, the remaining challenges, and research opportunities.
    Garcia Valls, M. S.; Dubey, A.; Botti, V. (2018). Introducing the new paradigm of Social Dispersed Computing: Applications, Technologies and Challenges. Journal of Systems Architecture, 91:83-102. https://doi.org/10.1016/j.sysarc.2018.05.007
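
    As an illustration of the "any node can compute and/or route" view, the sketch below models a dispersed network as a weighted graph and greedily places a task on the node with the most spare capacity within a latency budget. The topology, capacities, and placement policy are assumptions made for this example; the paper surveys paradigms rather than prescribing such an algorithm.

```python
import heapq

# Hypothetical dispersed-computing graph: every node can compute and/or route.
NODES = {
    "phone":    {"compute_capacity": 2,  "load": 1},
    "sbc":      {"compute_capacity": 4,  "load": 1},   # single-board computer
    "cloudlet": {"compute_capacity": 8,  "load": 5},
    "cloud":    {"compute_capacity": 64, "load": 10},
}
EDGES = {  # adjacency with per-hop latency in milliseconds
    "phone":    {"sbc": 5, "cloudlet": 10},
    "sbc":      {"phone": 5, "cloudlet": 8},
    "cloudlet": {"phone": 10, "sbc": 8, "cloud": 40},
    "cloud":    {"cloudlet": 40},
}

def place_task(source, demand, latency_budget_ms):
    """Greedy placement: among nodes reachable within the latency budget
    (Dijkstra over hop latencies), pick the one with the most spare capacity."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, lat in EDGES[node].items():
            nd = d + lat
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    candidates = [
        (NODES[n]["compute_capacity"] - NODES[n]["load"], n)
        for n, d in dist.items()
        if d <= latency_budget_ms
        and NODES[n]["compute_capacity"] - NODES[n]["load"] >= demand
    ]
    return max(candidates)[1] if candidates else None

print(place_task("phone", demand=2, latency_budget_ms=15))  # -> 'sbc'
```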

    Business process model customisation using domain-driven controlled variability management and rule generation

    Business process models are abstract descriptions and as such should be applicable in different situations. In order for a single process model to be reused, we need support for configuration and customisation. Often, process objects and activities are domain-specific. We use this observation and allow domain models to drive the customisation. Process variability models, known from product line modelling and manufacturing, can control this customisation by taking the domain models into account. While activities and objects have already been studied, we investigate here the constraints that govern a process execution. In order to integrate these constraints into a process model, we use a rule-based constraints language for workflow and process models. A modelling framework is presented as a development approach for customised rules through a feature model. Our use case is content processing, represented by an abstract ontology-based domain model in the framework and implemented by a customisation engine. The key contribution is a conceptual definition of a domain-specific rule variability language.
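
    One way to picture "customised rules through a feature model" is a mapping from selected features to generated process constraint rules, with feature-model constraints validating the selection first. The feature names, rule syntax, and exclusion constraint below are hypothetical stand-ins for the paper's domain-specific rule variability language.

```python
# Hypothetical feature model for a content-processing pipeline:
# feature name -> constraint-rule template it contributes when selected.
RULE_TEMPLATES = {
    "review_required": "IF activity(publish) THEN precede(activity(review))",
    "premium_content": "IF object(content).tier = premium THEN require(activity(drm_wrap))",
    "audit_logging":   "ON complete(activity(*)) DO emit(audit_record)",
}

MUTUALLY_EXCLUSIVE = [("fast_track", "review_required")]  # feature-model constraint

def generate_rules(selected_features):
    """Validate a feature selection against the feature model, then turn it
    into concrete process constraint rules for the customisation engine."""
    for a, b in MUTUALLY_EXCLUSIVE:
        if a in selected_features and b in selected_features:
            raise ValueError(f"features {a!r} and {b!r} exclude each other")
    return [RULE_TEMPLATES[f] for f in selected_features if f in RULE_TEMPLATES]

for rule in generate_rules({"review_required", "audit_logging"}):
    print(rule)
```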

    Artificial Intelligence based Anomaly Detection of Energy Consumption in Buildings: A Review, Current Trends and New Perspectives

    Enormous amounts of data are being produced every day by sub-meters and smart sensors installed in residential buildings. If leveraged properly, these data could assist end-users, energy producers and utility companies in detecting anomalous power consumption and understanding the causes of each anomaly. Anomaly detection could therefore stop a minor problem from becoming overwhelming. Moreover, it aids better decision-making to reduce wasted energy and promote sustainable and energy-efficient behavior. In this regard, this paper is an in-depth review of existing anomaly detection frameworks for building energy consumption based on artificial intelligence. Specifically, an extensive survey is presented, in which a comprehensive taxonomy is introduced to classify existing algorithms based on the modules and parameters adopted, such as machine learning algorithms, feature extraction approaches, anomaly detection levels, computing platforms and application scenarios. To the best of the authors' knowledge, this is the first review article that discusses anomaly detection in building energy consumption. Moving forward, important findings, along with domain-specific problems, difficulties and challenges that remain unresolved, are thoroughly discussed, including the absence of: (i) precise definitions of anomalous power consumption, (ii) annotated datasets, (iii) unified metrics to assess the performance of existing solutions, (iv) platforms for reproducibility and (v) privacy preservation. Insights about current research trends are then discussed with a view to widening the applications and effectiveness of anomaly detection technology, before future directions attracting significant attention are derived. This article serves as a comprehensive reference for understanding the current technological progress in anomaly detection of energy consumption based on artificial intelligence. (Comment: 11 figures, 3 tables)
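
    As a deliberately simple instance of the detector family the survey classifies, a rolling z-score can flag consumption readings that deviate strongly from recent behaviour. The window size and threshold are arbitrary illustrative choices; the surveyed systems rely on much richer machine-learning models.

```python
import numpy as np

def rolling_zscore_anomalies(consumption, window=24, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away from the
    mean of the preceding `window` readings (e.g., 24 hourly values)."""
    consumption = np.asarray(consumption, dtype=float)
    flags = np.zeros(len(consumption), dtype=bool)
    for i in range(window, len(consumption)):
        past = consumption[i - window:i]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(consumption[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags

# One day of normal ~1.0 kWh hourly readings, then a 6 kWh spike
# (e.g., a faulty appliance left running).
rng = np.random.default_rng(0)
readings = list(rng.normal(1.0, 0.1, 24)) + [6.0]
print(np.nonzero(rolling_zscore_anomalies(readings))[0])  # -> [24]
```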

    Understanding the bi-directional relationship between analytical processes and interactive visualization systems

    Interactive visualizations leverage the human visual and reasoning systems to increase the scale of information with which we can effectively work, thereby improving our ability to explore and analyze large amounts of data. Interactive visualizations are often designed with target domains in mind, such as analyzing unstructured textual information, which is a main thrust of this dissertation. Since each domain has its own existing procedures for analyzing data, a good start to a well-designed interactive visualization system is to understand the domain experts' workflow and analysis processes. This dissertation recasts the importance of understanding domain users' analysis processes and of incorporating such understanding into the design of interactive visualization systems. To meet this aim, I first introduce considerations guiding the gathering of general and domain-specific analysis processes in text analytics. Two interactive visualization systems are designed by following these considerations. The first system is Parallel-Topics, a visual analytics system supporting analysis of large collections of documents by extracting semantically meaningful topics. Based on lessons learned from Parallel-Topics, this dissertation further presents a general visual text analysis framework, I-Si, which presents meaningful topical summaries and temporal patterns and can handle large-scale textual information. Both systems have been evaluated by expert users and deemed successful in addressing domain analysis needs. The second contribution lies in preserving domain users' analysis processes while they use interactive visualizations. Our research suggests this preservation could serve multiple purposes. On the one hand, it could further improve the current system. On the other hand, users often need help in recalling and revisiting their complex, and sometimes iterative, analysis process with an interactive visualization system. This dissertation introduces multiple types of evidence available for capturing a user's analysis process within an interactive visualization and analyzes the cost/benefit ratios of the capturing methods. It concludes that tracking interaction sequences is the least intrusive and most feasible way to capture part of a user's analysis process. To validate this claim, a user study is presented to theoretically analyze the relationship between interactions and problem-solving processes. The results indicate that constraining the way a user interacts with a mathematical puzzle does have an effect on the problem-solving process. As later evidenced in an evaluative study, a fair amount of high-level analysis can be recovered through merely analyzing interaction logs.
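
    Since the dissertation concludes that tracking interaction sequences is the most feasible way to capture part of a user's analysis process, a minimal front-end logger might look like the sketch below. The event names and fields are hypothetical rather than the instrumentation used in the reported studies.

```python
import json
import time

class InteractionLogger:
    """Append-only log of (timestamp, action, target, detail) interaction events."""
    def __init__(self):
        self.events = []

    def log(self, action, target, **detail):
        self.events.append({
            "t": time.time(),
            "action": action,   # e.g., 'select', 'filter', 'zoom'
            "target": target,   # e.g., a topic, document, or axis id
            "detail": detail,
        })

    def dump(self, path):
        with open(path, "w") as f:
            json.dump(self.events, f, indent=2)

    def action_sequence(self):
        """The bare action sequence: the raw material for recovering
        higher-level analysis steps from the log."""
        return [e["action"] for e in self.events]

log = InteractionLogger()
log.log("filter", "topic:energy", keyword="consumption")
log.log("select", "doc:1042")
log.log("zoom", "timeline", range=("2010-01", "2010-06"))
print(log.action_sequence())  # -> ['filter', 'select', 'zoom']
```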

    Securing Cloud Storage by Transparent Biometric Cryptography

    With the capability of storing huge volumes of data over the Internet, cloud storage has become a popular and desirable service for individuals and enterprises. Its security, nevertheless, has been the subject of intense debate within the cloud community. Significant attacks can take place, the most common being the guessing of (poor) passwords. Given such weaknesses in verification credentials, malicious attacks have occurred across a variety of well-known storage services (e.g. Dropbox and Google Drive), resulting in loss of the privacy and confidentiality of files. Whilst today's third-party cryptographic applications can independently encrypt data, they arguably place a significant burden upon the user in terms of manually ciphering/deciphering each file and administering numerous keys in addition to the login password. The field of biometric cryptography applies biometric modalities within cryptography to produce robust bio-crypto keys that users do not have to remember. There are, nonetheless, still specific flaws associated with the security of the established bio-crypto key and with its usability. Users currently must present their biometric modalities intrusively each time a file needs to be encrypted/decrypted, leading to cumbersomeness and inconvenience throughout usage. Transparent biometrics seeks to eliminate the explicit interaction for verification and thereby remove this inconvenience. However, applying transparent biometrics within bio-cryptography can increase the variability of the biometric sample, creating further challenges in reproducing the bio-crypto key. An innovative bio-cryptographic approach is developed to non-intrusively encrypt/decrypt data with a bio-crypto key established from transparent biometrics on the fly, without storing the key anywhere, using a backpropagation neural network. This approach seeks to handle the shortcomings of password login and concurrently removes the usability issues of third-party cryptographic applications, thus enabling a more secure and usable, user-oriented level of encryption to reinforce the security controls within cloud-based storage. The challenge lies in the ability of this approach to generate a reproducible bio-crypto key from selected transparent biometric modalities, namely fingerprint, face and keystrokes, which are inherently noisier than their traditional counterparts. Accordingly, sets of experiments using functional and practical datasets reflecting transparent and unconstrained sample collection are conducted to determine the reliability of creating a non-intrusive and repeatable bio-crypto key of 256-bit length. With numerous samples being acquired non-intrusively, the system is able to capture six samples within a one-minute window of time. It is then possible to trade off the false rejection rate against the false acceptance rate to tackle the high error, as long as the correct key can be generated via at least one successful sample. The experiments demonstrate that a correct key can be generated for the genuine user once a minute, with an average FAR of 0.9%, 0.06%, and 0.06% for fingerprint, face, and keystrokes respectively. To further reinforce the effectiveness of the key generation approach, additional experiments determine what impact a multibiometric approach has upon performance when fusing at the feature phase versus the matching phase.
Holistically, the multibiometric key generation approach demonstrates superiority in generating the 256-bit bio-crypto key in comparison with the single-biometric approach. In particular, feature-level fusion outperforms matching-level fusion at producing the valid correct key while limiting illegitimate attempts to compromise it, with an overall FAR of 0.02%. Accordingly, the thesis proposes an innovative bio-cryptosystem architecture by which cloud-independent encryption is provided to protect users' personal data in a more reliable and usable fashion using non-intrusive multimodal biometrics. (Funding: Higher Committee of Education Development in Iraq, HCED)
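
    To show the basic shape of biometric key generation in a deliberately simplified form: quantize a biometric feature vector into a stable bit string, then hash it to a 256-bit key. Real systems, including the backpropagation-neural-network approach in this thesis, must tolerate far more sample noise; the thresholds and feature values here are invented for illustration.

```python
import hashlib

def quantize(features, thresholds):
    """Map each real-valued biometric feature to one bit by thresholding.
    Stable as long as sample noise does not push a feature across its threshold."""
    return "".join("1" if f >= t else "0" for f, t in zip(features, thresholds))

def derive_key(features, thresholds):
    """Derive a 256-bit key by hashing the quantized bit string.
    Any single flipped bit yields a completely different key."""
    bits = quantize(features, thresholds)
    return hashlib.sha256(bits.encode()).digest()  # 32 bytes = 256 bits

# Hypothetical enrolled thresholds (e.g., per-user medians of keystroke timings).
thresholds = [0.50, 0.30, 0.80, 0.60]

enrol_sample = [0.62, 0.21, 0.95, 0.70]
login_sample = [0.58, 0.25, 0.91, 0.66]  # noisy re-capture, same side of each threshold
print(derive_key(enrol_sample, thresholds) == derive_key(login_sample, thresholds))  # True
```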

    Something's Brewing within the Commercial Speech Doctrine
