13,964 research outputs found

    Towards a generic platform for developing CSCL applications using Grid infrastructure

    The goal of this paper is to explore the possibility of using CSCL component-based software under a Grid infrastructure. Merging these technologies is an attractive but probably quite laborious enterprise, given not only the benefits but also the barriers that must be overcome. This work presents a step in this direction by developing a generic platform of CSCL components and discussing the advantages that could be obtained by adapting it to the Grid. We then propose a means of making this adaptation possible, building on the high degree of genericity that our component library gains from being based on the generic programming paradigm. Finally, an application of our library is proposed, both to validate the adequacy of the platform it is based on and to indicate the possibilities gained by using it under the Grid.
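
    The paper itself includes no code; purely as a loose illustration of the genericity it relies on, the Python sketch below shows a collaborative component written against an abstract transport, so that a Grid-backed implementation could be swapped in without touching the component. All names here (Transport, SharedWhiteboard, LocalTransport) are invented for illustration, not taken from the paper's library.

```python
from typing import Protocol

class Transport(Protocol):
    """Anything that can broadcast an event to collaborators."""
    def send(self, channel: str, payload: dict) -> None: ...

class LocalTransport:
    """Trivial in-process transport; a Grid service could replace it."""
    def send(self, channel: str, payload: dict) -> None:
        print(f"[local] {channel}: {payload}")

class SharedWhiteboard:
    """A CSCL component written against the Transport abstraction only."""
    def __init__(self, transport: Transport) -> None:
        self.transport = transport
        self.strokes: list[dict] = []

    def draw(self, stroke: dict) -> None:
        self.strokes.append(stroke)
        # Broadcasting goes through the injected transport, so swapping
        # in a Grid-backed transport requires no change to the component.
        self.transport.send("whiteboard", stroke)

board = SharedWhiteboard(LocalTransport())
board.draw({"from": (0, 0), "to": (10, 10)})
```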

    Operations research and computers

    operational research

    CEPS Task Force on Artificial Intelligence and Cybersecurity: Technology, Governance and Policy Challenges. Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report, 22 January 2020

    The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing both on AI for cybersecurity and on cybersecurity for AI. The Task Force is multi-stakeholder by design and composed of academics, industry players from various sectors, policymakers and civil society. The Task Force is currently discussing issues such as the state and evolution of the application of AI in cybersecurity and of cybersecurity for AI; the debate on the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and for dealing with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe. As part of these activities, this report assesses the High-Level Expert Group (HLEG) on AI's Ethics Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, it analyses and makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed at helping the public and the private sector operationalise Trustworthy AI. The list is composed of 131 items intended to guide AI designers and developers throughout the process of design, development, and deployment of AI, although it is not intended as guidance for ensuring compliance with applicable laws. The list is in its piloting phase and is currently undergoing a revision that will be finalised in early 2020. This report aims to contribute to that revision by addressing in particular the interplay between AI and cybersecurity. The evaluation was made according to specific criteria: whether and how the items of the Assessment List refer to existing legislation (e.g. the GDPR, the EU Charter of Fundamental Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks are fundamentally different from traditional cyberattacks; whether they are compatible with different risk levels; whether they are flexible enough in terms of clear and easy measurement and of implementation by AI developers and SMEs; and, overall, whether they are likely to create obstacles for the industry. The HLEG is a diverse group, with more than 50 members representing different stakeholders such as think tanks, academia, EU agencies, civil society, and industry, who were given the difficult task of producing a simple checklist for a complex issue. The public engagement exercise looks successful overall, in that more than 450 stakeholders have signed up and are contributing to the process. The next sections of this report present the items listed by the HLEG, followed by the analysis and suggestions raised by the Task Force (see the list of Task Force members in Annex 1).

    Unified System on Chip RESTAPI Service (USOCRS)

    This thesis investigates the development of a Unified System on Chip RESTAPI Service (USOCRS) to enhance the efficiency and effectiveness of SoC verification reporting. The research aims to overcome the challenges associated with the transfer, utilization, and interpretation of SoC verification reports by creating a unified platform that integrates various tools and technologies. The study follows a design science research methodology. A thorough literature review was conducted to explore existing approaches and technologies related to SoC verification reporting, automation, data visualization, and API development; the review revealed gaps in the current state of the field, providing a basis for further investigation. Using the insights gained from the literature review, a system design and implementation plan were developed. This plan makes use of technologies such as FastAPI, SQL and NoSQL databases, Azure Active Directory for authentication, and cloud services. The Verification Toolbox was employed to validate SoC reports against the organization's standards. The system underwent manual testing, and user satisfaction was evaluated to ensure its functionality and usability. The results of this study demonstrate the successful design and implementation of the USOCRS, offering SoC engineers a unified and secure platform for uploading, validating, storing, and retrieving verification reports. The USOCRS facilitates seamless communication between users and the API, granting easy access to vital information, including successes, failures, and test coverage, derived from submitted SoC verification reports. By automating and standardizing the SoC verification reporting process, the USOCRS eliminates the manual and repetitive tasks usually done by developers, thereby enhancing productivity and establishing a robust and reliable framework for report storage and retrieval. Through the integration of diverse tools and technologies, the USOCRS presents a comprehensive solution that adheres to the SoC schema used within the organization. Furthermore, the USOCRS significantly improves the efficiency and effectiveness of SoC verification reporting: it streamlines the submission process, reduces latency through optimized data storage, and enables meaningful extraction and analysis of report data.
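
    As a rough sketch of the upload/validate/retrieve flow described above (not the thesis's actual API), the snippet below shows a minimal FastAPI service: an in-memory dict stands in for the SQL/NoSQL stores, and a toy validate_report() stands in for the Verification Toolbox checks. The endpoint paths and report fields are assumptions.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="USOCRS sketch")

REPORTS: dict[str, dict] = {}  # stand-in for the SQL/NoSQL databases

class VerificationReport(BaseModel):
    report_id: str
    design: str
    passed: int
    failed: int
    coverage_pct: float

def validate_report(report: VerificationReport) -> list[str]:
    """Minimal stand-in for the Verification Toolbox schema checks."""
    errors = []
    if not 0.0 <= report.coverage_pct <= 100.0:
        errors.append("coverage_pct must be within [0, 100]")
    if report.passed < 0 or report.failed < 0:
        errors.append("pass/fail counts must be non-negative")
    return errors

@app.post("/reports")
def upload_report(report: VerificationReport) -> dict:
    # Reports that fail validation are rejected before storage.
    errors = validate_report(report)
    if errors:
        raise HTTPException(status_code=422, detail=errors)
    REPORTS[report.report_id] = report.model_dump()  # pydantic v2
    return {"stored": report.report_id}

@app.get("/reports/{report_id}")
def get_report(report_id: str) -> dict:
    if report_id not in REPORTS:
        raise HTTPException(status_code=404, detail="unknown report")
    return REPORTS[report_id]
```

    In the real system, authentication (Azure Active Directory) would sit in front of these routes and the store would be a persistent database rather than a dict.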

    Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability

    Artificial Intelligence (AI) is rapidly integrating into various aspects of our daily lives, influencing decision-making processes in areas such as targeted advertising and matchmaking algorithms. As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial. Functional transparency is a fundamental aspect of algorithmic decision-making systems, allowing stakeholders to comprehend the inner workings of these systems and enabling them to evaluate their fairness and accuracy. However, achieving functional transparency poses significant challenges that need to be addressed. In this paper, we propose a design for user-centered, compliant-by-design transparency in transparent systems. We emphasize that the development of transparent and explainable AI systems is a complex and multidisciplinary endeavor, necessitating collaboration among researchers from diverse fields such as computer science, artificial intelligence, ethics, law, and social science. By providing a comprehensive understanding of the challenges associated with transparency in AI systems and proposing a user-centered design framework, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values.

    Comment: Hosain, M. T., Anik, M. H., Rafi, S., Tabassum, R., Insia, K. & Sıddıky, M. M. (). Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability. Journal of Metaverse, 3(2), 166-180. DOI: 10.57019/jmv.130668

    Intangible trust requirements - how to fill the requirements trust "gap"?

    Previous research efforts have been expended on the capture and subsequent instantiation of "soft" trust requirements that relate to HCI usability concerns, or on "hard" tangible security requirements that primarily relate to security assurance and security protocols. Little direct focus has been paid to managing intangible trust-related requirements per se. This 'gap' is perhaps most evident in the public B2C (Business to Consumer) E-Systems we all use on a daily basis. Some speculative suggestions are made as to how to fill the 'gap': visual card sorting is suggested as a suitable evaluative tool, whilst deontic logic trust norms and UML extended notation are the suggested (methodologically invariant) means by which software development teams can more fully capture, and hence visualize, intangible trust requirements.
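
    Purely as an illustration of what a captured deontic-logic trust norm might look like (the abstract gives no notation), here is a small Python sketch; the TrustNorm structure and the example B2C norms are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    OBLIGED = "O"     # the actor must perform the action
    PERMITTED = "P"   # the actor may perform the action
    FORBIDDEN = "F"   # the actor must not perform the action

@dataclass(frozen=True)
class TrustNorm:
    actor: str
    modality: Modality
    action: str
    rationale: str  # ties the norm back to the stakeholder's trust concern

# Hypothetical norms for a B2C e-commerce system.
norms = [
    TrustNorm("merchant", Modality.OBLIGED,
              "display the total price before checkout",
              "consumers distrust hidden fees"),
    TrustNorm("merchant", Modality.FORBIDDEN,
              "share customer email with third parties without consent",
              "perceived loss of control erodes trust"),
]

for n in norms:
    print(f"{n.modality.value}({n.actor}, {n.action})  # {n.rationale}")
```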

    Designing Integrated Conflict Management Systems: Guidelines for Practitioners and Decision Makers in Organizations

    A committee of the ADR (alternative dispute resolution) in the Workplace Initiative of the Society of Professionals in Dispute Resolution (SPIDR) prepared this document for employers, managers, labor representatives, employees, civil and human rights organizations, and others who interact with organizations. In this document we explain why organizations should consider developing integrated conflict management systems to prevent and resolve conflict, and we provide practical guidelines for designing and implementing such systems. The principles identified in this document can also be used to manage external conflict with customers, clients, and the public. Indeed, we recommend that organizations focus simultaneously on preventing and managing both internal and external conflict. SPIDR recognizes that an integrated conflict management system will work only if designed with input from users and decision makers at all levels of the organization. Each system must be tailored to fit the organization's needs, circumstances, and culture. In developing these systems, experimentation is both necessary and healthy. We hope that this document will provide guidance, encourage experimentation, and contribute to the evolving understanding of how best to design and implement these systems.

    Secure Face and Liveness Detection with Criminal Identification for Security Systems

    The advancement of computer vision, machine learning, and image processing techniques has opened new avenues for enhancing security systems. This research work focuses on developing a robust and secure framework for face and liveness detection with criminal identification, specifically designed for security systems. Machine learning algorithms and image processing techniques are employed for accurate face detection and liveness verification, and advanced facial recognition methods are utilized for criminal identification. The framework incorporates machine learning techniques to help ensure data integrity and reliable identification. Experimental evaluations demonstrate the system's effectiveness in detecting faces, verifying liveness, and identifying potential criminals. The proposed framework has the potential to enhance security systems by providing reliable and secure face and liveness detection for improved safety and security. The algorithm achieves an accuracy of 94.30 percent, which remains satisfactory even when human-written rules are combined with conventional machine learning classification algorithms. Still, there is scope for improving the system and classifying attacks more precisely.
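
    The paper's models and rule set are not reproduced here; as a minimal sketch of the pipeline's shape, the snippet below pairs an off-the-shelf OpenCV Haar cascade for face detection with a crude frame-differencing liveness cue. Both are stand-ins for the authors' method, and the motion threshold is arbitrary.

```python
import cv2

# Standard Haar cascade shipped with opencv-python.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)

    # Crude liveness cue: a printed photo held to the camera produces
    # almost no inter-frame change, while a live face does.
    live = False
    if prev_gray is not None:
        diff = cv2.absdiff(gray, prev_gray)
        live = diff.mean() > 2.0  # arbitrary threshold for this sketch
    prev_gray = gray

    for (x, y, w, h) in faces:
        color = (0, 255, 0) if live else (0, 0, 255)  # green = "live"
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```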

    Enhancing Confidentiality and Privacy Preservation in e-Health to Enhanced Security

    Electronic health (e-health) system use is growing, which has improved healthcare services significantly but has raised questions about the privacy and security of sensitive medical data. This research suggests a novel strategy to overcome these difficulties and strengthen the security of e-health systems while maintaining the privacy and confidentiality of patient data by utilising machine learning techniques. The comprehensive framework we propose in this paper strengthens the security layers of e-health systems by incorporating machine learning algorithms. The suggested framework includes three main elements: data encryption, access control, and anomaly detection. First, to prevent unauthorised access during transmission and storage, patient data is secured using strong encryption technologies. Second, to ensure that only authorised staff can access sensitive medical records, access control mechanisms are strengthened using machine learning models that examine user behaviour patterns. The most inventive feature of this research is its inclusion of machine learning-based anomaly detection: by training models on past e-health data, the system can identify deviations from typical data access and usage patterns, quickly spotting potential security breaches or unauthorised activity. This proactive strategy improves the system's capacity to address new threats effectively. Extensive experiments were carried out on a broad dataset made up of real-world e-health scenarios to verify the efficacy of the suggested approach. The findings showed a marked improvement in the protection of confidentiality and privacy, along with a considerable decline in security breaches and unauthorised access events.
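
    The abstract does not name its anomaly-detection model, so the sketch below uses scikit-learn's IsolationForest on synthetic access-log features as one plausible shape for that component; the feature choices and thresholds are assumptions, not the paper's.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# One row per access event:
# [hour_of_day, records_touched, distinct_patients]
normal = np.column_stack([
    rng.normal(11, 2, 500),  # mostly daytime access
    rng.poisson(5, 500),     # a handful of records per session
    rng.poisson(2, 500),     # few distinct patients per session
])

# Train only on historical (assumed-normal) access logs.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A 3 a.m. bulk export touching many patients should be flagged.
suspicious = np.array([[3, 400, 180]])
print(model.predict(suspicious))    # -1 means flagged as an anomaly
print(model.predict([[10, 4, 2]]))  # 1 means consistent with normal use
```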