
    Artificial Intelligence in the Tourism Industry: A privacy impasse

    Artificial Intelligence (AI) adoption in the tourism industry has resulted in privacy concerns as companies feed vast amounts of consumer data into AI systems, creating sensitive customer information. This research therefore aims to investigate the adequacy of the Personal Data Protection Act 2010 in addressing the privacy challenges raised by AI. Combining a doctrinal methodology with a case study, this research produced a systematic means of legal reasoning pertinent to AI applications in the tourism industry. Ensuring privacy and security through every phase of the data lifecycle is pivotal for tourism players to avoid legal liability while preserving customer confidence. Keywords: Artificial Intelligence and Law, Privacy and Artificial Intelligence, Privacy Engineering Model, Data Protection and Artificial Intelligence. eISSN: 2398-4287 © 2022. The Authors. Published for AMER ABRA cE-Bs by e-International Publishing House, Ltd., UK. This is an open-access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer review under the responsibility of AMER (Association of Malaysian Environment-Behaviour Researchers), ABRA (Association of Behavioural Researchers on Asians), and cE-Bs (Centre for Environment-Behaviour Studies), Faculty of Architecture, Planning & Surveying, Universiti Teknologi MARA, Malaysia. DOI: https://doi.org/10.21834/ebpj.v7iSI7.381

    Towards an Accountable Web of Personal Information: the Web-of-Receipts

    Consent is a cornerstone of any privacy practice or public policy. Far beyond a simple "accept" button, we show in this paper that obtaining and demonstrating valid consent is a complex, multifaceted problem. This matters to both organisations and users. As recent cases have shown, not only can individuals not prove what they accepted at any point in time, but organisations also struggle to prove that such consent was obtained, leading to inefficiencies and non-compliance. To a large extent, this problem has not received sufficient visibility or research effort. In this paper, we review the current state of consent and tie it to a problem of accountability. We argue for a different approach to how the Web of Personal Information operates: the need for an accountable Web in the form of personal data receipts, which can protect both individuals and organisations. We call this evolution the Web-of-Receipts: online actions, from registration to real-time usage, are preceded by valid consent and are auditable (for users) and demonstrable (for organisations) at any moment, using secure protocols and locally stored artefacts such as receipts. The key contribution of this paper is to elaborate on this unique perspective, present proof-of-concept results, and lay out a research agenda.
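The receipt mechanism the abstract describes can be illustrated with a minimal sketch: an organisation signs a record of what was consented to, the user stores it locally, and either party can later verify it. The field names, the HMAC-based signature, and the `issue_receipt`/`verify_receipt` helpers are illustrative assumptions, not the paper's actual protocol.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"org-signing-key"  # hypothetical organisation signing key

def issue_receipt(user_id, purposes):
    """Create a signed consent receipt the user can store locally."""
    receipt = {
        "user": user_id,
        "purposes": sorted(purposes),
        "issued_at": 1700000000,  # fixed timestamp for a reproducible sketch
    }
    payload = json.dumps(receipt, sort_keys=True).encode()
    receipt["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return receipt

def verify_receipt(receipt):
    """Either party can later demonstrate what was consented to."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

r = issue_receipt("alice", ["newsletter", "analytics"])
print(verify_receipt(r))  # True
```

Canonical serialization (`sort_keys=True`) matters here: both sides must hash byte-identical payloads, which is the kind of detail a real receipt standard would pin down.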

    Coordinated Evolution of Ontologies of Informed Consent

    Informed consent, whether for health or behavioral research or clinical treatment, rests on notions of voluntarism, information disclosure and understanding, and the decision-making capacity of the person providing consent. Whether consent is for research or treatment, informed consent serves as a safeguard of trust that permissions given by the research participant or patient are upheld across the informed consent (IC) lifecycle. The IC lifecycle involves not only documentation of the consent when originally obtained, but also actions that require clear communication of permissions, from the initial acquisition of data and specimens through handoffs to, for example, secondary researchers, allowing them access to data or biospecimens referenced in the terms of the original consent.

    Task analysis and application services for client relationship management in national level information sharing for social care

    Use of confidential client information in health and social services requires, by legislation, a client relationship. Approaches for verifying this relationship between care provider and client vary between countries: in some cases access logs are analyzed, and in others access to information is determined based on the existence and proof of a client relationship. We present an approach to client relationship management from the national project for social services IT in Finland. The approach is based on an analysis of the client relationship and case management tasks of users and information systems, and on the use of application services and system roles that support dynamic access management, with the client relationship as one of the key constituents of access to information. Traditional user rights and access management is not the focus of this article.

    Is my medical software allowed to go to market?

    The central importance of software in healthcare practice is highlighted by the increasing regulation of medical software to safeguard medical activities and patients' rights. Medical software suppliers need to meet regulatory requirements from different countries to gain market access and offer compliant solutions. This report focuses on providing methods and tools that allow software suppliers to evaluate which software products should be sold in which countries. Using design science and behavioral science, two artifacts are presented that integrate the influence of regulatory requirements on market access and product lifecycle management. All regulatory information must be presented in an actionable way so that it can be operationalized by business and engineering staff within a company.

    Information technology, contract and knowledge in the networked economy: a biography of packaged software for contract management

    In this research I investigate the intersection of information and communication technology (ICT), contract and knowledge in the networked economy, as illuminated by the "life" of contract management software (CMS). The failure of CMS to fulfill market expectations provides the motivating question for this study. Based on interview, survey and archival data, I construct a "biography" of CMS from a market perspective informed by the theory of commoditization as well as studies of markets from economic sociology. From the latter, I draw upon the theory of performativity in markets to identify in the failure of CMS a series of breakdowns in the performative assumptions and operations normally at work in the making of a packaged software market, ranging from a failure in classification performativity to a detachment of marketized criteria, in the form of analyst ratings, from the underlying software product and vendors. This catalog of breakdowns indicates that packaged software production implicates multiple levels of commoditization, including financialized meta-commodities and marketized criteria, in a dynamic I theorize as substitution of performance. I explore the implications of my findings for packaged software and for process commodities more generally, suggesting, inter alia, that process commoditization may revolve around contract and information exchange rather than product definition. I go on to propose an open theorization of contract as a technology of connectedness, in a relationship of potential convergence, complementarity and substitution with ICT, interpenetrating and performative. My contributions are to information systems and organizations research on the topics of packaged software and the relationship of ICT, contract and organizational knowledge; and to economic sociology on the topics of performativity in markets and product qualification in process commoditization.

    Identity in eHealth - from the reality of physical identification to digital identification.

    Master's programme in Medical Informatics (Mestrado em Informática Médica).

    Challenges and requirements of heterogenous research data management in environmental sciences:a qualitative study

    Abstract. The research focuses on the challenges and requirements of heterogeneous research data management in environmental sciences. Environmental research involves diverse data types, and the effective management and integration of these data sets are crucial. The issue at hand is the lack of specific guidance on how to select and plan an appropriate data management practice to address the challenges of handling and integrating diverse data types in environmental research. The objective of the research is to identify the issues associated with the current data storage approach in research data management and to determine the requirements for an appropriate system to address these challenges. The research adopts a qualitative approach, utilizing semi-structured interviews to collect data. Content analysis is employed to analyze the gathered data and identify relevant issues and requirements. The study reveals various issues in the current data management process, including inconsistencies in data treatment, the risk of unintentional data deletion, loss of knowledge due to staff turnover, lack of guidelines, and data scattered across multiple locations. The requirements identified through the interviews emphasize the need for a data management system that integrates automation, open access, centralized storage, online electronic lab notes, systematic data management, secure repositories, reduced hardware storage, and version control with metadata support. The research identifies the current challenges faced by researchers in heterogeneous data management and compiles a list of requirements for an effective solution. The findings contribute to existing knowledge on research-related problems and provide a foundation for developing tailored solutions to meet the specific needs of researchers in environmental sciences.

    Automating and Optimizing Data-Centric What-If Analyses on Native Machine Learning Pipelines

    Software systems that learn from data with machine learning (ML) are used in critical decision-making processes. Unfortunately, real-world experience shows that the pipelines for data preparation, feature encoding and model training in ML systems are often brittle with respect to their input data. As a consequence, data scientists have to run different kinds of data-centric what-if analyses to evaluate the robustness and reliability of such pipelines, e.g., with respect to data errors or preprocessing techniques. These what-if analyses follow a common pattern: they take an existing ML pipeline, create a pipeline variant by introducing a small change, and execute this pipeline variant to see how the change impacts the pipeline's output score. The application of existing analysis techniques to ML pipelines is technically challenging, as they are hard to integrate into existing pipeline code and their execution introduces large overheads due to repeated work. We propose mlwhatif to address these integration and efficiency challenges for data-centric what-if analyses on ML pipelines. mlwhatif enables data scientists to declaratively specify what-if analyses for an ML pipeline, and to automatically generate, optimize and execute the required pipeline variants. Our approach employs pipeline patches to specify changes to the data, operators and models of a pipeline. Based on these patches, we define a multi-query optimizer for efficiently executing the resulting pipeline variants jointly, with four subsumption-based optimization rules. Subsequently, we detail how to implement the pipeline variant generation and optimizer of mlwhatif. For that, we instrument native ML pipelines written in Python to extract dataflow plans with re-executable operators. We experimentally evaluate mlwhatif and find that its speedup scales linearly with the number of pipeline variants in applicable cases, and is invariant to the input data size. In end-to-end experiments with four analyses on more than 60 pipelines, we show speedups of up to 13x compared to sequential execution, and find that the speedup is invariant to the model and featurization in the pipeline. Furthermore, we confirm the low instrumentation overhead of mlwhatif.
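The what-if pattern described above (take a pipeline, apply a small patch, re-run, compare scores) can be sketched in a few lines. This toy stands in for the idea only: the stage names, the `patch` helper, and the length-based "score" are invented for illustration and are not mlwhatif's actual API or optimizer.

```python
# Toy pipeline: a list of named stages applied in order.
def drop_missing(rows):
    return [r for r in rows if r is not None]

def featurize(rows):
    return [len(str(r)) for r in rows]

def score(features):
    # stand-in for a trained model's output score
    return sum(features) / len(features)

BASE = [("clean", drop_missing), ("featurize", featurize), ("score", score)]

def run(stages, data):
    out = data
    for _, fn in stages:
        out = fn(out)
    return out

def patch(stages, name, replacement):
    """Create a pipeline variant by swapping a single operator."""
    return [(n, replacement if n == name else fn) for n, fn in stages]

data = ["abc", None, "de", "fghi"]
baseline = run(BASE, data)
# What-if: impute missing values instead of dropping them.
variant = patch(BASE, "clean", lambda rows: [r if r is not None else "" for r in rows])
print(baseline, run(variant, data))  # 3.0 2.25
```

mlwhatif's contribution is what this sketch omits: generating many such variants declaratively and sharing the work of the unchanged stages across them instead of re-executing each variant from scratch.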