
    Database Security Issues and Challenges in Cloud Computing

    The majority of enterprises have recently embraced cloud computing, and with it, the database has moved to the cloud. This cloud database paradigm can lower data administration expenses and free businesses to concentrate on the product being delivered. Furthermore, cloud computing can resolve issues with scalability, flexibility, performance, availability, and affordability. Security, however, has been identified as a serious risk to cloud databases and remains central to public acceptance of cloud computing. Several security factors should be considered before adopting any cloud database management system, including, but not limited to, data privacy, data isolation, data availability, data integrity, confidentiality, and defense against insider threats. In this paper, we discuss the most recent research on the security risks and problems associated with adopting cloud databases. To better understand these problems and how they affect cloud databases, we also provide a conceptual model. Additionally, we examine these problems where relevant and describe two vendors and the security features they applied to their cloud-based databases. Finally, we provide an overview of the security risks associated with open cloud databases and suggest possible future directions.

    Data management in cloud environments: NoSQL and NewSQL data stores

    Advances in Web technology and the proliferation of mobile devices and sensors connected to the Internet have resulted in immense processing and storage requirements. Cloud computing has emerged as a paradigm that promises to meet these requirements. This work focuses on the storage aspect of cloud computing, specifically on data management in cloud environments. Traditional relational databases were designed in a different hardware and software era and face challenges in meeting the performance and scale requirements of Big Data. NoSQL and NewSQL data stores present themselves as alternatives that can handle huge volumes of data. Because of the large number and diversity of existing NoSQL and NewSQL solutions, it is difficult to comprehend the domain and even more challenging to choose an appropriate solution for a specific task. Therefore, this paper reviews NoSQL and NewSQL solutions with the objectives of: (1) providing a perspective on the field, (2) providing guidance for practitioners and researchers in choosing the appropriate data store, and (3) identifying challenges and opportunities in the field. Specifically, the most prominent solutions are compared with a focus on data models, querying, scaling, and security-related capabilities. Features driving the ability to scale read and write requests, or to scale data storage, are investigated, in particular partitioning, replication, consistency, and concurrency control. Furthermore, use cases and scenarios in which NoSQL and NewSQL data stores have been used are discussed, and the suitability of various solutions for different sets of applications is examined. Consequently, this study has identified challenges in the field, including the immense diversity and inconsistency of terminologies, limited documentation, sparse comparison and benchmarking criteria, and the absence of standardized query languages.
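
    To make the partitioning and replication mechanisms the survey compares more concrete, here is a minimal, hedged sketch of hash-based partitioning with replication in TypeScript; the node names, replication factor, and hash choice are illustrative assumptions, not taken from any particular data store.

```typescript
import { createHash } from "node:crypto";

// Hash-based partitioning: a key maps to a primary shard, and each shard
// is replicated on the next N-1 nodes of the ring (assumed topology).
const nodes = ["node-a", "node-b", "node-c", "node-d"];
const replicationFactor = 2;

function shardFor(key: string): number {
  const digest = createHash("md5").update(key).digest();
  // Use the first 4 bytes of the digest as an unsigned integer.
  return digest.readUInt32BE(0) % nodes.length;
}

function replicasFor(key: string): string[] {
  const primary = shardFor(key);
  // The primary plus the next replicationFactor - 1 nodes on the ring.
  return Array.from(
    { length: replicationFactor },
    (_, i) => nodes[(primary + i) % nodes.length],
  );
}

// A write is sent to every replica; a quorum read would contact a majority.
console.log(replicasFor("user:42"));
```

    How reads and writes are spread over these replicas, and how strictly they are kept consistent, is exactly where the surveyed stores differ.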

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, fundamental design decisions in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution for a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL seems to be the right technology to fulfil its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
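
    As a hedged illustration of the four data models being compared, the TypeScript sketch below shows how the same "user" information might be shaped under each; the field, label, and column-family names are assumptions for illustration only.

```typescript
// Key-value: an opaque value addressed only by its key.
type KeyValueStore = Map<string, Uint8Array>;

// Document: a self-describing, possibly nested record addressed by id.
interface UserDocument {
  _id: string;
  name: string;
  logins: { device: string; at: string }[]; // nested data lives with the record
}

// Wide column: rows grouped into column families; columns are sparse per row.
interface WideColumnRow {
  rowKey: string;
  columnFamilies: Record<string, Record<string, string>>; // family -> column -> value
}

// Graph: entities and the relationships between them are both first class.
interface GraphNode { id: string; labels: string[]; props: Record<string, unknown> }
interface GraphEdge { from: string; to: string; type: string }
```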

    IoT database forensics: an investigation on HarperDB security

    The data generated by the many devices in the IoT realm require careful and real-time processing. Recently, researchers have concentrated on the use of cloud databases for storing such data to improve efficiency. HarperDB aims to produce a DBMS that is simultaneously relational and non-relational, to help journeyman developers create products and servers in the IoT space. Much of what the HarperDB team has talked about has been achieved, but from a security perspective a lot of improvements need to be made. The team has clearly focused on the problems that exist from a database and data point of view, creating a structure that is unique, fast, easy to use, and has great potential to grow with a startup. The functionality and ease of use of this DBMS is not in question; however, this does entail an impact on security. In this paper, using multiple forensic methodologies, we performed an in-depth forensic analysis of HarperDB and found several areas of extreme concern, such as a lack of logging functionality, a basic level of authorisation, and the exposure of users' access rights to any party using the database. Due to the nature of the flaws found in HarperDB, the focus had to be on preventative advice rather than reactive workarounds. As such, we provide a number of recommendations for users and developers.
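
    Given the missing logging functionality the authors flag, one user-side, preventative mitigation is to wrap database calls in an application-level audit log. The sketch below is a generic TypeScript illustration under an assumed client interface; it is not HarperDB's actual API.

```typescript
// Hypothetical shape of a database client; the real HarperDB client differs.
interface DbClient {
  execute(operation: string, payload: unknown): Promise<unknown>;
}

interface AuditEntry {
  user: string;
  operation: string;
  at: string;
}

// Wrap every call so that who did what, and when, is recorded on the
// application side, compensating for the absent built-in audit logging.
function withAuditLog(client: DbClient, user: string, sink: AuditEntry[]): DbClient {
  return {
    async execute(operation, payload) {
      sink.push({ user, operation, at: new Date().toISOString() });
      return client.execute(operation, payload);
    },
  };
}
```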

    A software architecture for electro-mobility services: a milestone for sustainable remote vehicle capabilities

    To face tough competition and changing markets and technologies in the automotive industry, automakers have to be highly innovative. In previous decades, innovations were electronics- and IT-driven, which exponentially increased the complexity of the vehicle's internal network. Furthermore, the growing expectations and preferences of customers oblige these manufacturers to adapt their business models and also propose mobility-based services. On the other hand, there is increasing pressure from regulators to significantly reduce the environmental footprint of transportation and mobility, down to zero in the foreseeable future. This dissertation investigates an architecture for communication and data exchange within a complex and heterogeneous ecosystem. This communication takes place between various third-party entities on one side, and between these entities and the infrastructure on the other. The proposed solution considerably reduces the complexity of vehicle communication and of the interactions among the parties involved in the ODX life cycle. In such a heterogeneous environment, particular attention is paid to the protection of confidential and private data. Confidential data here refers to the OEM's know-how enclosed in vehicle projects, while the data delivered by a car during a vehicle communication session may contain customers' private data. Our solution ensures that every entity in this ecosystem has access only to the data it has the right to. We designed our solution to be technology-agnostic so that it can be implemented on any platform and benefit from the environment best suited to each task. We also propose a data model for vehicle projects which improves query time during a vehicle diagnostic session. Scalability and backwards compatibility were also taken into account during the design phase. We propose the necessary algorithms and workflow to perform efficient vehicle diagnostics with considerably lower latency and substantially better time and space complexity than current solutions. To prove the practicality of our design, we present a prototypical implementation and analyze the results of a series of tests performed on several vehicle models and projects. We also evaluate the prototype against software-engineering quality attributes.
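
    As a rough sketch of the "each entity sees only the data it is entitled to" property described above, the TypeScript below filters a diagnostic record by the requesting party's role; the party roles and record fields are illustrative assumptions, not the dissertation's actual data model.

```typescript
// Illustrative parties in the ecosystem and a diagnostic record mixing
// OEM know-how with customer-private data (assumed fields).
type Party = "oem" | "workshop" | "third-party-service";

interface DiagnosticRecord {
  vin: string;
  faultCodes: string[];     // needed by workshops for repairs
  calibrationData: string;  // OEM know-how, confidential
  ownerContact: string;     // customer-private data
}

// Each party receives only the fields its role entitles it to see.
function viewFor(record: DiagnosticRecord, party: Party): Partial<DiagnosticRecord> {
  switch (party) {
    case "oem":
      return record;
    case "workshop":
      return { vin: record.vin, faultCodes: record.faultCodes };
    case "third-party-service":
      return { faultCodes: record.faultCodes };
  }
}
```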

    Using the MEAN Stack to implement a RESTful service for an Internet of Things Application

    This paper examines the components of the MEAN development stack (MongoDB, Express.js, Angular.js, and Node.js) and demonstrates their benefits and suitability for implementing RESTful web-service APIs for Internet of Things (IoT) appliances. In particular, we show an end-to-end example of this stack and discuss the various components required in detail. The paper also describes an approach to establishing a secure mechanism for communicating with IoT devices using pull-communications.
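
    As a minimal sketch of the RESTful pattern the paper describes, the following TypeScript uses Express and Mongoose (the E and M of the stack) to expose sensor readings over HTTP; the model name, schema fields, route paths, and connection string are assumptions, not taken from the paper.

```typescript
import express from "express";
import mongoose from "mongoose";

// Assumed schema for an IoT sensor reading; the paper's actual model may differ.
const Reading = mongoose.model(
  "Reading",
  new mongoose.Schema({
    deviceId: String,
    value: Number,
    at: { type: Date, default: Date.now },
  }),
);

const app = express();
app.use(express.json());

// Create a reading (e.g. submitted by a device or gateway).
app.post("/api/readings", async (req, res) => {
  const reading = await Reading.create(req.body);
  res.status(201).json(reading);
});

// List the most recent readings for one device.
app.get("/api/readings/:deviceId", async (req, res) => {
  const readings = await Reading.find({ deviceId: req.params.deviceId })
    .sort({ at: -1 })
    .limit(10);
  res.json(readings);
});

async function main() {
  await mongoose.connect("mongodb://localhost:27017/iot"); // assumed local MongoDB
  app.listen(3000);
}
main();
```

    An Angular.js front end would then consume these same JSON endpoints, which is what makes the stack attractive as a single-language, end-to-end solution.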

    Towards Secure and Intelligent Diagnosis: Deep Learning and Blockchain Technology for Computer-Aided Diagnosis Systems

    Cancer is the second leading cause of death across the world after cardiovascular disease. The survival rate of patients with cancerous tissue can significantly decrease due to late-stage diagnosis. Nowadays, advances in whole slide imaging scanners have resulted in a dramatic increase of patient data in the domain of digital pathology. Large-scale histopathology images need to be analyzed promptly for early cancer detection, which is critical for improving patients' survival rates and treatment planning. Advances in medical image processing and deep learning methods have facilitated the extraction and analysis of high-level features from histopathological data that could assist in life-critical diagnosis and reduce the considerable healthcare cost associated with cancer. In clinical trials, due to the complexity and large variance of collected image data, developing computer-aided diagnosis systems to support quantitative medical image analysis is an area of active research. The first goal of this research is to automate the classification and segmentation of cancerous regions in histopathology images of different cancer tissues by developing models based on deep learning architectures. A framework with four modules is proposed: (1) data pre-processing, (2) data augmentation, (3) feature extraction, and (4) deep learning architectures. Four validation studies were designed for this research: (1) differentiating benign and malignant lesions in breast cancer, (2) differentiating between immature leukemic blasts and normal cells in leukemia, (3) differentiating benign and malignant regions in lung cancer, and (4) differentiating benign and malignant regions in colorectal cancer. Training machine learning models, diagnosing disease, and planning treatment often require collecting patients' medical data, and concerns about privacy and trusted authenticity make data owners reluctant to share their personal and medical data. Motivated by the advantages of Blockchain technology in healthcare data-sharing frameworks, the second part of this research integrates Blockchain technology into computer-aided diagnosis systems to address the problems of managing access control, authentication, provenance, and confidentiality of sensitive medical data. To do so, a hierarchical identity- and attribute-based access control mechanism using smart contracts and the Ethereum Blockchain is proposed to securely process healthcare data without revealing sensitive information to unauthorized parties, leveraging the trustworthiness of transactions in a collaborative healthcare environment. The proposed access control mechanism addresses the challenges associated with centralized access control systems and ensures data transparency, traceability, and ownership for secure data sharing.
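
    As a hedged sketch of the kind of hierarchical identity- and attribute-based check described above, written in TypeScript rather than as an on-chain smart contract, the code below grants access only when the requester's role (directly or via the hierarchy) and attributes satisfy a policy; the role names, attributes, and hierarchy are illustrative assumptions, not the proposed contract.

```typescript
// Illustrative role hierarchy: an admin inherits clinician and researcher rights.
const roleHierarchy: Record<string, string[]> = {
  admin: ["clinician", "researcher"],
  clinician: [],
  researcher: [],
};

interface Requester {
  id: string;
  role: string;
  attributes: Record<string, string>; // e.g. { department: "pathology" }
}

interface Policy {
  allowedRole: string;
  requiredAttributes: Record<string, string>;
}

// A role satisfies a requirement if it matches directly or inherits it.
function roleSatisfies(role: string, required: string): boolean {
  if (role === required) return true;
  return (roleHierarchy[role] ?? []).some((r) => roleSatisfies(r, required));
}

// Access is granted only if both the role and all required attributes match;
// in the proposed system a smart contract would record this decision on-chain.
function canAccess(requester: Requester, policy: Policy): boolean {
  const roleOk = roleSatisfies(requester.role, policy.allowedRole);
  const attrsOk = Object.entries(policy.requiredAttributes).every(
    ([key, value]) => requester.attributes[key] === value,
  );
  return roleOk && attrsOk;
}
```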