
    Towards a Reference Architecture with Modular Design for Large-scale Genotyping and Phenotyping Data Analysis: A Case Study with Image Data

    With the rapid advancement of computing technologies, scientific research communities have come to rely extensively on cloud-based software tools and applications. Cloud-based applications let users access software from a web browser, relieving them of installing anything in their desktop environment. For example, Galaxy, GenAP, and iPlant Collaborative are popular cloud-based systems for scientific workflow analysis in the domain of plant Genotyping and Phenotyping. These systems are used for conducting research, devising new techniques, and sharing computer-assisted analysis results among collaborators. Over time, researchers need to integrate new workflows/pipelines, tools, or techniques with the base system. Moreover, large-scale data must be processed within strict time constraints for effective analysis. Big Data technologies have recently emerged to facilitate large-scale data processing on commodity hardware. Among the above-mentioned systems, only GenAP uses Big Data technologies, and only for specific cases. The structure of such a cloud-based system is highly variable and complex. Software architects and developers must consider very different properties and challenges during the development and maintenance phases than in traditional business or service-oriented systems. Recent studies report that software engineers and data engineers struggle to develop analytic tools that support large-scale, heterogeneous data analysis. Unfortunately, software researchers have given little attention to devising well-defined methodologies and frameworks for the flexible design of cloud systems in the Genotyping and Phenotyping domain. More effective design methodologies and frameworks are therefore urgently needed for developing cloud-based Genotyping and Phenotyping analysis systems that also support large-scale data processing. In this thesis, we conduct several studies to devise a stable reference architecture and modularity model for software developers and data engineers in the Genotyping and Phenotyping domain. In the first study, we analyze the architectural changes of existing candidate systems to identify stability issues. We then extract architectural patterns from the candidate systems and propose a conceptual reference architectural model. Finally, we present a case study on the modularity of computation-intensive tasks as an extension of data-centric development. We show that the data-centric modularity model is at the core of the flexible development of a Genotyping and Phenotyping analysis system. Our proposed model and case study with thousands of images provide a useful knowledge base for software researchers, developers, and data engineers building cloud-based Genotyping and Phenotyping analysis systems.
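
    The abstract does not spell out the data-centric modularity model, but its core idea, processing modules coupled only through the data they consume and produce, can be sketched briefly. The following Python sketch is a hypothetical illustration; the module names, data keys, and stage logic are assumptions, not the thesis's actual design:

        # A minimal sketch of data-centric modularity: each module declares the
        # data key it consumes and the key it produces, and a driver chains
        # modules through a shared data store. All names are illustrative.

        def run_pipeline(modules, store):
            """Execute modules in order, reading and writing a shared store."""
            for in_key, out_key, fn in modules:
                store[out_key] = fn(store[in_key])
            return store

        # Hypothetical phenotyping stages over image data.
        pipeline = [
            ("raw_images", "segmented",  lambda imgs: [f"seg({i})" for i in imgs]),
            ("segmented",  "features",   lambda segs: [f"feat({s})" for s in segs]),
            ("features",   "phenotypes", lambda fs: {"n_features": len(fs)}),
        ]

        result = run_pipeline(pipeline, {"raw_images": ["img001.png", "img002.png"]})
        print(result["phenotypes"])  # -> {'n_features': 2}

    Because modules touch only named data, a new workflow can be integrated by appending a stage rather than modifying existing code, which is the flexibility property the abstract argues for.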

    Audio Transcription and Summarization System using Cloud Computing and Artificial Intelligence

    In the modern era, organizations increasingly rely on virtual meetings to address customer issues promptly and effectively. However, dealing with recorded customer calls can be arduous. This review introduces a methodology to summarize audio data from customer interactions, which can streamline virtual meetings. Using a speech recognizer, such as AssemblyAI's API, the methodology converts audio data into text and then employs a graph-theoretic approach to generate concise summaries. The review also examines the growing prominence of cloud-based AI and ML services in the tech industry. It highlights the distinct competitive strategies and focuses of the major players, Amazon, Microsoft, and Google, in AI and ML platform development. The analysis explores these companies' internal applications and external ecosystems, dissecting their respective AI and ML development strategies. Finally, it predicts future directions for AI and ML platforms, including potential business models and emerging trends, and considers how Amazon, Microsoft, and Google align their platform development strategies with these prospects.
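
    The abstract names the speech-recognition service but not the exact graph algorithm, so the following is a hedged sketch of one common graph-theoretic summarizer (TextRank-style): sentences become nodes, word-overlap similarity becomes edge weights, and PageRank centrality selects the summary sentences. The tokenization, similarity measure, and summary length are assumptions.

        # TextRank-style extractive summarization sketch (requires networkx).
        import math
        import re
        from itertools import combinations

        import networkx as nx

        def summarize(text, n_sentences=3):
            # Naive sentence split; a production system would use a real tokenizer.
            sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
            tokens = [set(re.findall(r"\w+", s.lower())) for s in sentences]
            graph = nx.Graph()
            graph.add_nodes_from(range(len(sentences)))
            # Edge weight: length-normalized word overlap between sentence pairs.
            for i, j in combinations(range(len(sentences)), 2):
                overlap = len(tokens[i] & tokens[j])
                denom = math.log(len(tokens[i]) + 1) + math.log(len(tokens[j]) + 1)
                if overlap and denom:
                    graph.add_edge(i, j, weight=overlap / denom)
            # Rank sentences by PageRank and keep the top n in original order.
            scores = nx.pagerank(graph, weight="weight")
            top = sorted(sorted(scores, key=scores.get, reverse=True)[:n_sentences])
            return " ".join(sentences[i] for i in top)

    A call such as summarize(transcript_text, n_sentences=5) would then condense the transcript returned by the speech recognizer.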

    Service Level Agreement-based GDPR Compliance and Security assurance in (multi)Cloud-based systems

    Compliance with the new European General Data Protection Regulation (Regulation (EU) 2016/679) and security assurance are currently two major challenges for Cloud-based systems. GDPR compliance requires the definition, enforcement, and control of both privacy and security mechanisms, including evidence collection. This paper presents a novel DevOps framework aimed at supporting Cloud consumers in designing, deploying, and operating (multi)Cloud systems that include the necessary privacy and security controls for ensuring transparency to end-users, third parties in service provision (if any), and law enforcement authorities. The framework relies on the risk-driven specification, at design time, of privacy and security level objectives in the system Service Level Agreement (SLA), and on their continuous monitoring and enforcement at runtime. The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No 644429 and No 780351, the MUSA and ENACT projects, respectively. We would also like to thank all members of the MUSA and ENACT Consortia for their valuable help.
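
    The paper's actual SLA schema is not given in the abstract, so the sketch below only illustrates the general pattern it describes: declarative privacy and security level objectives paired with a runtime compliance check. All metric names and thresholds are invented for illustration and are not the MUSA/ENACT schema.

        # Hypothetical SLA of privacy/security Service Level Objectives (SLOs)
        # with a runtime compliance check; all fields are illustrative.
        from dataclasses import dataclass

        @dataclass
        class SLO:
            metric: str      # monitored metric name
            operator: str    # "==", ">=", or "<="
            threshold: object

            def satisfied(self, observed):
                if self.operator == "==": return observed == self.threshold
                if self.operator == ">=": return observed >= self.threshold
                if self.operator == "<=": return observed <= self.threshold
                raise ValueError(f"unknown operator {self.operator!r}")

        sla = [
            SLO("encryption_at_rest", "==", "enabled"),
            SLO("breach_notification_hours", "<=", 72),  # GDPR Art. 33 deadline
            SLO("availability_percent", ">=", 99.9),
        ]

        observed = {"encryption_at_rest": "enabled",
                    "breach_notification_hours": 48,
                    "availability_percent": 99.95}

        for slo in sla:
            status = "OK" if slo.satisfied(observed[slo.metric]) else "VIOLATION"
            print(f"{slo.metric}: {status}")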

    Applied business analytics approach to IT projects – Methodological framework

    The design and implementation of a big data project differ from those of a typical business intelligence project that might run concurrently within the same organization. A big data initiative typically triggers a large-scale IT project that is expected to deliver the desired outcomes. The industry has identified two major methodologies for running a data-centric project: SEMMA (Sample, Explore, Modify, Model, and Assess) and CRISP-DM (Cross-Industry Standard Process for Data Mining). More generally, the professional organizations PMI (Project Management Institute) and IIBA (International Institute of Business Analysis) have defined methods for project management and business analysis based on current industry best practices. However, big data projects pose new challenges that the existing methodologies do not consider. Building an end-to-end big data analytical solution for optimizing the supply chain, pricing and promotion, product launch, shop potential, and customer value faces both business and technical challenges. The most common business challenges are unclear and/or poorly defined business cases; irrelevant data; poor data quality; overlooked data granularity; improper contextualization of data; unprepared or badly prepared data; non-meaningful results; and a lack of skills. Some of the technical challenges relate to a lack of resources and technology limitations; availability of data sources; storage difficulties; security issues; performance problems; limited flexibility; and ineffective DevOps. This paper discusses an applied business analytics approach to IT projects and addresses the aspects described above. The authors present their work on the research and development of a new methodological framework and analytical instruments applicable to both business endeavors and educational initiatives targeting big data. The proposed framework is based on a proprietary methodology and advanced analytics tools. It focuses on the development and implementation of practical solutions for project managers, business analysts, IT practitioners, and Business/Data Analytics students. Also under discussion are the skills and knowledge needed by a successful big data business analyst, and some of the main organizational and operational aspects of big data projects, including continuous model deployment.
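
    Since the paper's framework is proprietary, the sketch below only illustrates the phase ordering of SEMMA (Sample, Explore, Modify, Model, Assess) named in the abstract; every function body is a deliberately trivial placeholder, not part of the authors' methodology.

        # Schematic SEMMA pipeline; all implementation details are placeholders.
        import random

        def sample(population, k=100):        # Sample: draw a manageable subset
            return random.sample(population, k=min(k, len(population)))

        def explore(data):                    # Explore: descriptive statistics
            return {"n": len(data), "mean": sum(data) / len(data)}

        def modify(data):                     # Modify: normalize to [0, 1]
            lo, hi = min(data), max(data)
            return [(x - lo) / ((hi - lo) or 1) for x in data]

        def model(data):                      # Model: trivial threshold "model"
            threshold = sum(data) / len(data)
            return lambda x: x >= threshold

        def assess(predict, data):            # Assess: positive-classification rate
            return sum(predict(x) for x in data) / len(data)

        population = [random.gauss(50, 10) for _ in range(1000)]
        subset = sample(population)
        print(explore(subset))
        normalized = modify(subset)
        classifier = model(normalized)
        print("positive rate:", assess(classifier, normalized))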

    Microservices Validation: Methodology and Implementation

    The widespread adoption of cloud computing raises pressing questions about the architecture, design, and implementation of cloud applications. The microservice model describes the design and development of loosely coupled cloud applications whose computing resources are provided by automated IaaS and PaaS cloud platforms. Such applications consist of hundreds or thousands of service instances, so automated validation and testing of cloud applications developed on the microservice model is a pressing issue. New methods for testing both individual microservices and cloud applications as a whole are constantly being developed. This article presents our vision of a framework for validating microservice cloud applications, providing an integrated approach to implementing various testing methods for such applications, from basic unit tests to continuous stability testing.
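
    The framework is described only at the level of a vision, so the sketch below merely illustrates two rungs of the testing ladder the abstract mentions: a unit test of a single microservice's logic and a naive stability probe that repeatedly polls a health endpoint. The business logic, URL, and endpoint are assumptions.

        # Unit test plus repeated health-check "stability" probe (illustrative).
        import urllib.request

        def parse_order(payload):
            """Microservice business logic under unit test."""
            qty = int(payload.get("qty", 0))
            if qty <= 0:
                raise ValueError("quantity must be positive")
            return {"item": payload["item"].strip().lower(), "qty": qty}

        def test_parse_order_unit():
            assert parse_order({"item": " Widget ", "qty": "3"}) == {"item": "widget", "qty": 3}

        def stability_check(url, attempts=100):
            """Return the fraction of successful health-check responses."""
            ok = 0
            for _ in range(attempts):
                try:
                    with urllib.request.urlopen(url, timeout=2) as resp:
                        ok += resp.status == 200
                except OSError:
                    pass
            return ok / attempts

        # e.g. assert stability_check("http://orders.internal/healthz") >= 0.99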

    When IoT Meets DevOps: Fostering Business Opportunities

    The Internet of Things (IoT) is the new digital revolution for the near-future society, the second after the creation of the Internet itself. The software industry is converging towards the large-scale deployment of IoT devices and services, and there is broad support from the business environment for this engineering vision. The Development and Operations (DevOps) project management methodology, with continuous delivery and integration, is the preferred approach for building and deploying applications at all levels of the IoT architecture. In this paper we also discuss the promising trend of associating devices with microservices, which are further encapsulated into functional packages called containers. Docker is considered the market leader in container-based service delivery, though other important software companies are promoting this concept as part of the technology solution for their IoT customers. In the experimental section we propose a business-oriented, three-layer IoT model, distributed over multiple cloud environments and comprising the Physical, Fog/Edge, and Application layers.

    Keywords: Internet of Things, software technologies, project management, business environment
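
    To make the three-layer model concrete, here is a hypothetical sketch of how a reading might flow through it: a Physical-layer sensor produces data, a Fog/Edge node preprocesses it locally, and the Application layer consumes the reduced stream. The class names, threshold, and filtering rule are illustrative assumptions, not the paper's experimental setup.

        # Illustrative Physical -> Fog/Edge -> Application data flow.
        import random

        class Sensor:                       # Physical layer
            def read(self):
                return round(random.uniform(18.0, 32.0), 2)  # e.g. temperature (C)

        class FogNode:                      # Fog/Edge layer: local preprocessing
            def __init__(self, threshold):
                self.threshold = threshold
            def filter(self, readings):
                # Forward only anomalous readings upstream to save bandwidth.
                return [r for r in readings if r > self.threshold]

        class Application:                  # Application layer: business logic
            def alert(self, anomalies):
                for r in anomalies:
                    print(f"ALERT: reading {r} exceeded threshold")

        sensor, fog, app = Sensor(), FogNode(threshold=30.0), Application()
        app.alert(fog.filter([sensor.read() for _ in range(20)]))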

    Cost modelling for cloud computing utilisation in long term digital preservation

    The rapid increase in the volume of digital information raises concerns among organisations regarding the manageability, cost, and security of their information in the long term. Because cloud computing technology is often used for digital preservation and is still evolving, its long-term costs are difficult to determine. This paper presents the development of a generic cost model for using public and private clouds in long-term digital preservation (LTDP), considering the impact of uncertainties and obsolescence issues. The cost model consists of rules and assumptions and was built using a combination of activity-based and parametric cost estimation techniques. After generating cost breakdown structures for both cloud types, uncertainties and obsolescence issues were categorised. To quantify the impact of uncertainties on cost, the three-point estimate technique was employed, and Monte Carlo simulation was applied to generate the probability distribution for each cost driver. A decision-support cost estimation tool with a dashboard representation of results was developed.
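
    The paper's actual cost breakdown structure is not reproduced in the abstract, but the costing mechanics it names, three-point estimates per cost driver combined by Monte Carlo simulation, can be sketched directly. The drivers and figures below are invented for illustration; a triangular distribution is one common way to model a (min, most likely, max) estimate.

        # Monte Carlo over three-point (triangular) cost-driver estimates.
        import random
        import statistics

        # Hypothetical annual LTDP cost drivers: (min, most_likely, max).
        drivers = {
            "storage":        (10_000, 15_000, 25_000),
            "data_migration": ( 2_000,  5_000, 12_000),  # obsolescence-driven
            "staff":          (30_000, 35_000, 45_000),
            "security":       ( 3_000,  4_000,  8_000),
        }

        N = 100_000
        totals = [
            sum(random.triangular(lo, hi, mode) for lo, mode, hi in drivers.values())
            for _ in range(N)
        ]

        print(f"mean total cost: {statistics.mean(totals):,.0f}")
        print(f"90th percentile: {sorted(totals)[int(0.9 * N)]:,.0f}")

    The resulting distribution of totals is what a dashboard like the one described could plot, rather than a single point estimate.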

    A case study in open source innovation: developing the Tidepool Platform for interoperability in type 1 diabetes management.

    OBJECTIVE: Develop a device-agnostic cloud platform to host diabetes device data and catalyze an ecosystem of software innovation for type 1 diabetes (T1D) management. MATERIALS AND METHODS: An interdisciplinary team decided to establish a nonprofit company, Tidepool, and build open-source software. RESULTS: Through a user-centered design process, the authors created a software platform, the Tidepool Platform, to upload and host T1D device data in an integrated, device-agnostic fashion, as well as an application ("app"), Blip, to visualize the data. Tidepool's software utilizes the principles of modular components, modern web design including REST APIs and JavaScript, cloud computing, agile development methodology, and robust privacy and security. DISCUSSION: By consolidating the currently scattered and siloed T1D device data ecosystem into one open platform, Tidepool can improve access to the data and enable new possibilities and efficiencies in T1D clinical care and research. The Tidepool Platform decouples diabetes apps from diabetes devices, allowing software developers to build innovative apps without requiring them to design a unique back-end (e.g., database and security) or unique ways of ingesting device data. It allows people with T1D to choose to use any preferred app regardless of which device(s) they use. CONCLUSION: The authors believe that the Tidepool Platform can solve two current problems in the T1D device landscape: 1) limited access to T1D device data and 2) poor interoperability of data from different devices. If proven effective, Tidepool's open-source, cloud model for health data interoperability is applicable to other healthcare use cases.
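
    The abstract's key architectural point is decoupling apps from devices behind a device-agnostic REST back-end. The sketch below shows that pattern in general terms only; the endpoint, token handling, and JSON shape are hypothetical and are not Tidepool's actual API.

        # Hypothetical device-agnostic upload to a REST back-end
        # (illustrative only; not Tidepool's real API).
        import json
        import urllib.request

        def upload_reading(base_url, token, reading):
            req = urllib.request.Request(
                url=f"{base_url}/data",          # hypothetical endpoint
                data=json.dumps(reading).encode(),
                headers={"Content-Type": "application/json",
                         "Authorization": f"Bearer {token}"},
                method="POST",
            )
            with urllib.request.urlopen(req) as resp:
                return resp.status

        # Any meter or pump uploader maps its records into one common shape,
        # so apps never need device-specific ingestion code.
        reading = {"type": "glucose", "value": 6.2, "units": "mmol/L",
                   "time": "2024-01-01T12:00:00Z", "deviceId": "example-device"}
        # upload_reading("https://api.example.org", "TOKEN", reading)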

    ClouNS - A Cloud-native Application Reference Model for Enterprise Architects

    The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations, which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well-standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, and research and development processes for cloud-native applications and for vendor lock-in-aware enterprise architecture engineering methodologies.

    Big data analytics:Computational intelligence techniques and application areas

    Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and economy, and discuss the challenges arising from Big Data utilization. Different computational intelligence techniques have been considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications for real-world smart city problems can be developed using these powerful tools and techniques. We present a case study on intelligent transportation in the context of a smart city, and a novel data modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation, and commercialization related to Big Data, its applications, and its deployment.