    Self-ordered nanoporous lattice formed by chlorine atoms on Au(111)

    A self-ordered nanoporous lattice formed by individual chlorine atoms on the Au(111) surface has been studied with low-temperature scanning tunneling microscopy, low-energy electron diffraction, and density functional theory calculations. We found that room-temperature adsorption of 0.09–0.30 monolayers of chlorine on Au(111), followed by cooling below 110 K, results in the spontaneous formation of a nanoporous quasihexagonal structure with a periodicity of 25–38 Å depending on the initial chlorine coverage. The driving force of the superstructure formation is attributed to the substrate-mediated elastic interaction.

    The future Sion health campus: toward collaboration between documentation services? Current situation and avenues for reflection

    The HES-SO Valais Wallis plans to construct, by 2020, a new building bringing together its degree programs in health (nursing and physiotherapy) and social work (higher-education-level programs in early childhood education and socio-professional work). The chosen location for this new infrastructure is the Champsec site in Sion, which currently hosts several Valais health institutions (Hôpital du Valais, SUVA, Observatoire valaisan de la santé, etc.). There is thus potential for synergy among these partners and future neighbors. The objective of this work is, first, to determine the interest of the various partners in collaborating on documentation; then to establish an overview of the current situation; and finally to propose avenues for future institutional collaboration.

    AI Education: Open-Access Educational Resources on AI

    Open-access AI educational resources are vital to the quality of the AI education we offer. Avoiding the reinvention of wheels is especially important to us because of the special challenges of AI education. AI could be said to be “the really interesting miscellaneous pile of Computer Science.” While “artificial” is well understood to encompass engineered artifacts, “intelligence” could be said to encompass any problem difficult enough to require an intelligent approach and yet not falling neatly into established Computer Science subdisciplines. AI thus consists of so many diverse topics that we would be hard-pressed to create quality learning experiences for each topic from scratch. In this column, we focus on a few online resources that we would recommend to AI educators looking for good starting points for course development. [excerpt]

    Generating Rembrandt: Artificial Intelligence, Copyright, and Accountability in the 3A Era. The Human-like Authors Are Already Here: A New Model

    Artificial intelligence (AI) systems are creative, unpredictable, independent, autonomous, rational, evolving, capable of data collection, communicative, efficient, accurate, and have free choice among alternatives. Similar to humans, AI systems can autonomously create and generate creative works. The use of AI systems in the production of works, whether for personal or manufacturing purposes, has become common in the 3A era of automated, autonomous, and advanced technology. Despite this progress, there is a deep and common concern in modern society that AI technology will become uncontrollable. There is therefore a call for social and legal tools for controlling AI systems’ functions and outcomes. This Article addresses the questions of the copyrightability of artworks generated by AI systems: ownership and accountability. The Article debates who should enjoy the benefits of copyright protection and who should be responsible for the infringement of rights and damages caused by AI systems that independently produce creative works. Subsequently, this Article presents the AI Multi-Player paradigm, arguing against the imposition of these rights and responsibilities on the AI systems themselves or on the various stakeholders, mainly the programmers who develop such systems. Most importantly, this Article proposes the adoption of a new model of accountability for works generated by AI systems: the AI Work Made for Hire (WMFH) model, which views the AI system as a creative employee or independent contractor of the user. Under this proposed model, ownership, control, and responsibility would be imposed on the humans or legal entities that use AI systems and enjoy their benefits. This model accurately reflects the human-like features of AI systems; it is justified by the theories behind copyright protection; and it serves as a practical solution to assuage the fears surrounding AI systems. In addition, this model unveils the powers behind the operation of AI systems; hence, it efficiently imposes accountability on clearly identifiable persons or legal entities. Since AI systems are copyrightable algorithms, this Article also reflects on accountability for AI systems in other legal regimes, such as tort or criminal law, and in the various industries using these systems.

    Global Solutions vs. Local Solutions for the AI Safety Problem

    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI elsewhere. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created among many AIs, so that they evolve as a net and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented or made part of AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of an “AI Nanny” (a non-self-improving global control AI system able to prevent the creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales to a global solution, nor does every one scale ethically and safely. The choice of the best local solution should therefore include an understanding of the ways in which it will be scaled up. Human-AI teams, or a superintelligent AI Service as suggested by Drexler, may be examples of such ethically scalable local solutions, but the final choice depends on unknown variables such as the speed of AI progress.