
    Cloud BI: A Multi-party Authentication Framework for Securing Business Intelligence on the Cloud

    Business intelligence (BI) has emerged as a key technology to be hosted on Cloud computing. BI offers a method to analyse data, thereby enabling informed decision making to improve business performance and profitability. However, within the shared domains of Cloud computing, BI is exposed to increased security and privacy threats because an unauthorised user may be able to gain access to highly sensitive, consolidated business information. The business process contains collaborating services and users from multiple Cloud systems in different security realms which need to be engaged dynamically at runtime. If heterogeneous Cloud systems located in different security realms do not have direct authentication relationships, it is technically difficult to enable secure collaboration. To address these security challenges, a new authentication framework is required that establishes trust relationships among these BI service instances and users by distributing a common session secret to all participants of a session. The author addresses this challenge by designing and implementing a multi-party authentication framework for dynamic secure interactions when members of different security realms want to access services. The framework takes advantage of the trust relationships between session members in different security realms to enable a user to obtain security credentials to access Cloud resources in a remote realm. This mechanism helps Cloud session users authenticate their session membership, improving the authentication processes within multi-party sessions. The correctness of the proposed framework has been verified using BAN logic. The performance and overhead have been evaluated via simulation in a dynamic environment. A prototype authentication system has been designed, implemented and tested based on the proposed framework. The research concludes that the proposed framework and its supporting protocols are an effective functional basis for practical implementation testing, as the framework achieves good scalability and imposes only a minimal performance overhead, comparable with other state-of-the-art methods.
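    The core mechanism described above is the distribution of a common session secret that members from different security realms can later use to prove their session membership. The following minimal Python sketch illustrates that general idea only; the names (SessionAuthority, join_session, prove_membership) are invented for illustration and do not reproduce the thesis's actual protocol, which also relies on the realms' own trust relationships.

```python
import hmac
import hashlib
import secrets

class SessionAuthority:
    """Trusted coordinator that distributes a common session secret."""

    def __init__(self):
        self.sessions = {}  # session_id -> (secret, set of member ids)

    def create_session(self):
        session_id = secrets.token_hex(8)
        self.sessions[session_id] = (secrets.token_bytes(32), set())
        return session_id

    def join_session(self, session_id, member_id):
        # In the real framework the member would first be vouched for by
        # its home realm; here we simply register it and hand out the secret.
        secret, members = self.sessions[session_id]
        members.add(member_id)
        return secret

def prove_membership(secret, member_id, challenge):
    """A member answers a challenge using the shared session secret."""
    msg = member_id.encode() + challenge
    return hmac.new(secret, msg, hashlib.sha256).digest()

def verify_membership(secret, member_id, challenge, proof):
    expected = prove_membership(secret, member_id, challenge)
    return hmac.compare_digest(expected, proof)

# Usage: a member from a remote realm proves its session membership.
authority = SessionAuthority()
sid = authority.create_session()
key = authority.join_session(sid, "analytics-service@realmB")
nonce = secrets.token_bytes(16)
proof = prove_membership(key, "analytics-service@realmB", nonce)
assert verify_membership(key, "analytics-service@realmB", nonce, proof)
```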

    Handling Live Sensor Data on the Semantic Web

    The increased linking of objects in the Internet of Things and the ubiquitous flood of data and information require new technologies for data processing and data storage, particularly on the Internet and the Semantic Web. Because of human limitations in data collection and analysis, automatic methods are used more and more. Above all, sensors and similar data producers are very accurate, fast and versatile, and can provide continuous monitoring even in places that are hard for people to reach. Traditional information processing, however, has focused on documents or document-related information, which have different requirements from sensor data: the main focus is static information of a certain scope, in contrast to large quantities of live data that are only meaningful when combined with other data and background information. The paper evaluates the current status quo in the processing of sensor and sensor-related data with the help of the promising approaches of the Semantic Web and the Linked Data movement. This includes the use of existing sensor standards such as Sensor Web Enablement (SWE) as well as the utilization of various ontologies. Based on a proposed abstract approach for the development of a semantic application, covering the process from data collection to presentation, important points such as modeling, deploying and evaluating semantic sensor data are discussed. Besides related work on current and future developments around known difficulties of RDF/OWL, such as the handling of time, space and physical units, a sample application demonstrates the key points. In addition, techniques for the spread of information, such as polling, notification and streaming, are covered, providing examples of data stream management systems (DSMS) for processing real-time data. Finally, the overview points out remaining weaknesses and thereby enables the improvement of existing solutions, making it easier to develop semantic sensor applications in the future.
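    To make the modeling side concrete, here is a small hedged sketch that publishes a single sensor observation as RDF using the rdflib library and the W3C SOSA/SSN vocabulary; the instance namespace and sensor names are invented, and the paper itself surveys several ontologies rather than prescribing this one.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")  # W3C SOSA/SSN vocabulary
EX = Namespace("http://example.org/sensors/")   # hypothetical instance namespace

g = Graph()
g.bind("sosa", SOSA)
g.bind("ex", EX)

# One temperature observation, linked to its sensor, property and time.
obs = EX["obs/42"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, EX.thermometer1))
g.add((obs, SOSA.observedProperty, EX.airTemperature))
g.add((obs, SOSA.hasSimpleResult, Literal(21.5, datatype=XSD.double)))
g.add((obs, SOSA.resultTime,
       Literal("2012-06-01T12:00:00Z", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```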

    MODEL DRIVEN SOFTWARE PRODUCT LINE ENGINEERING: SYSTEM VARIABILITY VIEW AND PROCESS IMPLICATIONS

    Software Product Line Engineering (SPLE) is a software development technique that seeks to apply the principles of industrial manufacturing to the production of software applications: that is, a Software Product Line (SPL) is used to produce a family of products with common characteristics, whose members may nevertheless have distinguishing features. Identifying these common and distinguishing characteristics up front maximizes reuse, reducing development time and cost. Describing these relationships with sufficient expressiveness therefore becomes a fundamental aspect of success. In recent years, Model Driven Engineering (MDE) has emerged as a paradigm for working effectively with software artifacts at a high level of abstraction. Thanks to this, SPLs can benefit greatly from the standards and tools that have arisen within the MDE community. However, a good integration between SPLE and MDE has not yet been achieved, and as a consequence, variability management mechanisms are not expressive enough. Variability therefore cannot be integrated efficiently into complex software development processes in which the different views of a system, model transformations and code generation play a fundamental role. This thesis presents MULTIPLE, a framework and a tool that aim to integrate the variability management mechanisms of SPLs precisely and efficiently into MDE processes. MULTIPLE provides domain-specific languages for specifying different views of software systems. Among these, special emphasis is placed on the variability view, since it is decisive for the specification of SPLs.

    Gómez Llana, A. (2012). MODEL DRIVEN SOFTWARE PRODUCT LINE ENGINEERING: SYSTEM VARIABILITY VIEW AND PROCESS IMPLICATIONS [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/15075
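    As a toy illustration of the variability view, the following Python sketch encodes a miniature feature model with requires/excludes constraints and validates a product configuration against it. The features and constraints are invented for the example; MULTIPLE itself expresses variability through domain-specific languages, not through code like this.

```python
# Minimal sketch of a variability (feature-model) check over a toy product line.

FEATURES = {"reporting", "dashboard", "export_pdf", "export_csv", "offline"}

# Each rule is (description, predicate over a selected-feature set).
CONSTRAINTS = [
    ("export_pdf requires reporting",
     lambda s: "export_pdf" not in s or "reporting" in s),
    ("export_csv requires reporting",
     lambda s: "export_csv" not in s or "reporting" in s),
    ("dashboard excludes offline",
     lambda s: not ({"dashboard", "offline"} <= s)),
]

def validate_product(selected):
    """Return the list of violated constraints for a product configuration."""
    unknown = selected - FEATURES
    if unknown:
        raise ValueError(f"unknown features: {unknown}")
    return [desc for desc, ok in CONSTRAINTS if not ok(selected)]

print(validate_product({"reporting", "export_pdf"}))             # []
print(validate_product({"export_csv", "offline", "dashboard"}))
# ['export_csv requires reporting', 'dashboard excludes offline']
```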

    First CLIPS Conference Proceedings, volume 2

    The topics of volume 2 of the First CLIPS Conference are associated with the following applications: quality control; intelligent databases and networks; Space Station Freedom; Space Shuttle and satellite; user interface; artificial neural systems and fuzzy logic; parallel and distributed processing; enhancements to CLIPS; aerospace; simulation and defense; advisory systems and tutors; and intelligent control.

    Specification and derivation of key performance indicators for business analytics: A semantic approach

    Key Performance Indicators (KPIs) measure the performance of an enterprise relative to its objectives, thereby enabling corrective action where there are deviations. In current practice, KPIs are manually integrated within the dashboards and scorecards used by decision makers. This practice entails various shortcomings. First, KPIs are not related to their business objectives and strategy; consequently, decision makers often obtain a scattered view of the business status and business concerns. Second, while KPIs are defined by decision makers, their implementation is performed by IT specialists, which often results in discrepancies that are difficult to identify. In this paper, we propose an approach that provides decision makers with an integrated view of strategic business objectives and conceptual data warehouse KPIs. The main benefit of our proposal is that it links strategic business models to the data used for monitoring and assessing them. In our proposal, KPIs are defined using a modeling language in which decision makers specify KPIs in business terminology, but can also perform quick modifications and even navigate data while maintaining a strategic view. This enables monitoring and what-if analysis, thereby helping analysts compare expectations with reported results.

    This research has been supported by the European Research Council (ERC) through Advanced Grant 267856, "Lucretius: Foundations for Software Evolution" (04/2011-03/2016), and the national project GEODAS-BI (TIN2012-37493-C03-03) from the Spanish Ministry of Economy and Competitiveness (MINECO). Alejandro Maté is funded by the Generalitat Valenciana under an APOSTD grant (APOSTD/2014/064).
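    As an illustration of the idea of specifying KPIs declaratively and linking them to objectives and warehouse data, here is a small hedged Python sketch; the Kpi structure, the normalization rule, and the numbers are assumptions for the example, not the paper's modeling language.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Kpi:
    name: str
    objective: str                 # the strategic objective the KPI monitors
    target: float
    worst: float
    current: Callable[[], float]   # pulls the value from the warehouse

    def performance(self) -> float:
        """Normalize the current value to [0, 1] between worst and target."""
        value = self.current()
        span = self.target - self.worst
        return max(0.0, min(1.0, (value - self.worst) / span))

# Stand-in for a conceptual data-warehouse query.
def quarterly_revenue() -> float:
    return 1.8e6

kpi = Kpi(
    name="Quarterly revenue",
    objective="Increase market share",
    target=2.0e6,
    worst=1.0e6,
    current=quarterly_revenue,
)
print(f"{kpi.name}: {kpi.performance():.0%} of target")  # 80% of target
```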

    An evaluation of the challenges of Multilingualism in Data Warehouse development

    In this paper we discuss Business Intelligence and define what is meant by support for multilingualism in a Business Intelligence reporting context. We identify support for multilingualism as a challenging issue with implications for data warehouse design and reporting performance. Data warehouses are a core component of most Business Intelligence systems, and the star schema is the most widely used approach for developing data warehouses and dimensional data marts. We discuss the ways in which multilingualism can be supported in the star schema and show that current approaches have serious limitations, including data redundancy and data manipulation, performance and maintenance issues. We propose a new approach to enable the optimal application of multilingualism in Business Intelligence. The proposed approach produced satisfactory results when used in a proof-of-concept environment. Future work will include testing the approach in an enterprise environment.
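    One common way to avoid duplicating dimension rows or columns per language is to factor labels into a separate translation table keyed by member and language. The following sqlite3 sketch illustrates that general pattern (table and column names invented); it is not necessarily the specific approach proposed in the paper.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_product (
    product_key INTEGER PRIMARY KEY,
    sku TEXT NOT NULL                 -- language-neutral attributes only
);
CREATE TABLE dim_product_i18n (
    product_key INTEGER REFERENCES dim_product(product_key),
    lang TEXT NOT NULL,               -- e.g. 'en', 'fr'
    product_name TEXT NOT NULL,
    PRIMARY KEY (product_key, lang)
);
""")

cur.execute("INSERT INTO dim_product VALUES (1, 'SKU-001')")
cur.executemany(
    "INSERT INTO dim_product_i18n VALUES (?, ?, ?)",
    [(1, "en", "Mountain bike"), (1, "fr", "Vélo de montagne")],
)

# A report query resolves labels for the requested language at join time.
cur.execute("""
    SELECT p.sku, t.product_name
    FROM dim_product p
    JOIN dim_product_i18n t ON t.product_key = p.product_key
    WHERE t.lang = ?
""", ("fr",))
print(cur.fetchall())  # [('SKU-001', 'Vélo de montagne')]
```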

    Improving knowledge about the risks of inappropriate uses of geospatial data by introducing a collaborative approach in the design of geospatial databases

    Nowadays, the increased availability of geospatial information is a reality that many organizations, and even the general public, are trying to turn into a financial benefit. The reusability of datasets is now a viable alternative that may help organizations achieve cost savings. The quality of these datasets may vary depending on the usage context, and the issue of geospatial data misuse becomes even more important given the disparity between the expertises of geospatial data end-users.

    Managing the risks of geospatial data misuse has been the subject of several studies over the past fifteen years. In this context, several approaches have been proposed to address these risks, namely preventive approaches and palliative approaches, the latter managing a risk after its consequences have occurred. However, these approaches are often based on ad hoc, non-systemic initiatives. Thus, during the design process of a geospatial database, risk analysis is not always carried out in accordance with the principles and guidelines of requirements engineering or with the recommendations of ISO standards. In this thesis, we hypothesize that it is possible to define a preventive approach for the identification and analysis of risks associated with inappropriate uses of geospatial data. We believe that the expertise and knowledge held by experts (i.e., geo-IT experts) and by professional users of geospatial data within their institutional roles (i.e., application-domain experts) are key elements for assessing the risks of geospatial data misuse; hence the importance of enriching that knowledge. We therefore review the design process of geospatial databases and propose a collaborative, user-centric approach to requirements analysis. Under this approach, expert and professional users are involved in a collaborative process that supports the a priori identification of inappropriate uses of the underlying data. Then, by reviewing research in risk analysis, we propose a systematic integration of risk analysis, using the Delphi technique, into the design process of geospatial databases. Finally, still within a collaborative approach, an ontological risk repository is proposed to enrich the knowledge about the risks of data misuse and to disseminate this knowledge to the design team, developers and end-users. The approach is implemented on a web platform in order to demonstrate its feasibility within a concrete prototype.
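    The Delphi technique mentioned above iterates expert ratings until the group converges. The following Python sketch shows one conventional consensus rule (stop when the interquartile range of the ratings falls below a threshold); the ratings, the threshold, and the example risk are assumptions for illustration.

```python
import statistics

def iqr(ratings):
    """Interquartile range of a list of ratings."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    return q3 - q1

def delphi_consensus(rounds, threshold=1.0):
    """Walk through rating rounds; report when consensus is reached."""
    for i, ratings in enumerate(rounds, start=1):
        spread = iqr(ratings)
        median = statistics.median(ratings)
        print(f"round {i}: median={median}, IQR={spread:.2f}")
        if spread <= threshold:
            return median
    return None  # no consensus within the given rounds

# Hypothetical severity ratings (1-5) for the risk
# "dataset used at an unsupported map scale".
rounds = [
    [1, 2, 3, 5, 5, 4],   # wide disagreement
    [2, 3, 4, 4, 4, 5],   # experts revise after seeing the group summary
    [3, 4, 4, 4, 4, 4],   # consensus
]
print("consensus severity:", delphi_consensus(rounds))
```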

    BioIMAX : a Web2.0 approach to visual data mining in bioimage data

    Loyek C. BioIMAX : a Web2.0 approach to visual data mining in bioimage data. Bielefeld: Universität Bielefeld; 2012

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing, custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support, and challenges arising from the transformation.
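    To give a feel for what cross-system process automation means at the code level, here is a deliberately simplified Python sketch in which a process is a sequence of named steps, each standing in for a service call on a different system. A real solution of the kind studied here would express the process in BPMN/BPEL and invoke the systems through WSDL interfaces; all names below are invented.

```python
from typing import Callable

Step = tuple[str, Callable[[dict], dict]]

def call_erp(ctx: dict) -> dict:          # stand-in for a WSDL service call
    return {**ctx, "order_id": "ORD-17"}

def call_warehouse(ctx: dict) -> dict:
    return {**ctx, "reserved": True}

def call_billing(ctx: dict) -> dict:
    return {**ctx, "invoice": f"INV-{ctx['order_id']}"}

PROCESS: list[Step] = [
    ("create order", call_erp),
    ("reserve stock", call_warehouse),
    ("issue invoice", call_billing),
]

def run_process(steps: list[Step], ctx: dict) -> dict:
    """Execute the steps in order, threading the process data through."""
    for name, service in steps:
        print(f"step: {name}")
        ctx = service(ctx)        # each step runs on a different system
    return ctx

print(run_process(PROCESS, {"customer": "ACME"}))
```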

    Towards a Reference Architecture with Modular Design for Large-scale Genotyping and Phenotyping Data Analysis: A Case Study with Image Data

    With the rapid advancement of computing technologies, various scientific research communities have been extensively using cloud-based software tools and applications. Cloud-based applications allow users to access software from a web browser while relieving them of installing any software in their desktop environment. For example, Galaxy, GenAP, and iPlant Collaborative are popular cloud-based systems for scientific workflow analysis in the domain of plant genotyping and phenotyping. These systems are used for conducting research, devising new techniques, and sharing computer-assisted analysis results among collaborators. Researchers need to integrate their new workflows/pipelines, tools, or techniques with the base system over time. Moreover, large-scale data need to be processed within a given timeline for effective analysis. Recently, Big Data technologies have been emerging to facilitate large-scale data processing on commodity hardware. Among the above-mentioned systems, GenAP utilizes Big Data technologies for specific cases only. The structure of such a cloud-based system is highly variable and complex in nature; software architects and developers need to consider quite different properties and challenges during the development and maintenance phases compared to traditional business/service-oriented systems. Recent studies report that software engineers and data engineers confront challenges in developing analytic tools that support large-scale and heterogeneous data analysis. Unfortunately, software researchers have given little attention to devising well-defined methodologies and frameworks for the flexible design of cloud systems in the genotyping and phenotyping domain. More effective design methodologies and frameworks are therefore urgently needed for developing cloud-based genotyping and phenotyping analysis systems that also support large-scale data processing. In this thesis, we conduct several studies in order to devise a stable reference architecture and modularity model for software developers and data engineers in the genotyping and phenotyping domain. In the first study, we analyze the architectural changes of existing candidate systems to identify stability issues. Then, we extract architectural patterns from the candidate systems and propose a conceptual reference architectural model. Finally, we present a case study on the modularity of computation-intensive tasks as an extension of data-centric development. We show that the data-centric modularity model is at the core of the flexible development of a genotyping and phenotyping analysis system. Our proposed model and a case study with thousands of images provide a useful knowledge base for software researchers, developers, and data engineers building cloud-based genotyping and phenotyping analysis systems.
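    The data-centric modularity idea can be illustrated with a small hedged sketch: analysis stages are registered as pluggable modules over a shared data context, so computation-intensive steps can be added or swapped without touching the pipeline core. The module names and context layout below are assumptions for the example, not the thesis's reference architecture.

```python
from typing import Callable

Module = Callable[[dict], dict]

REGISTRY: dict[str, Module] = {}

def module(name: str):
    """Register an analysis stage under a name so pipelines stay declarative."""
    def wrap(fn: Module) -> Module:
        REGISTRY[name] = fn
        return fn
    return wrap

@module("load")
def load_images(ctx: dict) -> dict:
    ctx["images"] = [f"plot_{i}.png" for i in range(3)]  # stand-in for I/O
    return ctx

@module("segment")
def segment(ctx: dict) -> dict:
    ctx["masks"] = [f"mask:{img}" for img in ctx["images"]]
    return ctx

@module("traits")
def extract_traits(ctx: dict) -> dict:
    ctx["traits"] = {m: {"area_px": 1234} for m in ctx["masks"]}
    return ctx

def run(pipeline: list[str], ctx: dict) -> dict:
    """Run the named stages in order over the shared data context."""
    for name in pipeline:
        ctx = REGISTRY[name](ctx)
    return ctx

result = run(["load", "segment", "traits"], {})
print(result["traits"])
```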