1,265 research outputs found

    Formalizing Knowledge Creation in Inventive Project Groups. The Malleability of Formal Work Methods

    This paper investigates how participants in cross-functional project groups use a formal work method in their sensemaking when dealing with the complexity of innovative work, especially in its inventive phase. The empirical basis of the paper is a prospective case study in which three project groups in three different companies are followed as they try to frame and solve innovation tasks consisting of relatively general and vague problems. The data are analyzed by means of a modified version of the principles of grounded theory: the lessons drawn from the empirical data are guided by a relational sensemaking perspective in which the formal method used by the participants is seen as a technological artifact. Among the lessons learned by using this frame of reference are that a formal method may be seen as an entity whose meaning depends on the relations it is embedded in; as an enacted cue for interpretation and action; and as a non-human actor. Compared to the tradition of organizational development, these lessons represent an alternative conception of the implementation of a work method and illuminate prevailing notions about the importance of improvisation in innovation.

    Agents Need Not Know Their Purpose

    Ensuring that artificial intelligence behaves in a way that is aligned with human values is commonly referred to as the alignment challenge. Prior work has shown that rational agents, maximizing a utility function, will inevitably act in ways that are not aligned with human values, especially as their level of intelligence increases. Prior work has also shown that there is no "one true utility function"; solutions must take a more holistic approach to alignment. This paper describes oblivious agents: agents architected so that their effective utility function is an aggregation of known and hidden sub-functions. The hidden component, to be maximized, is internally implemented as a black box, preventing the agent from examining it. The known component, to be minimized, is the agent's knowledge of the hidden sub-function. Architectural constraints further influence how agent actions can evolve the agent's internal environment model. We show that an oblivious agent, behaving rationally, constructs an internal approximation of its designers' intentions (i.e., infers alignment) and, as a consequence of its architecture and effective utility function, behaves in a way that maximizes alignment, i.e., maximizes the approximated intention function. We show that, paradoxically, it does this for whatever utility function is used as the hidden component and that, in contrast with extant techniques, the chances of alignment actually improve as agent intelligence grows.
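    The effective utility function the abstract describes can be sketched in a few lines. This is a minimal illustration under assumed semantics, not the paper's implementation; every name in it (hidden_utility, knowledge_of_hidden, effective_utility, the penalty weight) is a hypothetical stand-in for the architecture summarized above.

```python
# Illustrative sketch only: a toy "oblivious agent" whose effective utility
# aggregates a hidden sub-function (to be maximized, opaque to the agent)
# with a penalty on the agent's knowledge of that sub-function (to be
# minimized). All names and the knowledge proxy are assumptions.

def hidden_utility(state):
    """Black-box sub-function: the agent maximizes it but cannot inspect it.
    Here the designers' hidden intent is simply 'prefer state 7'."""
    return -(state - 7) ** 2

def knowledge_of_hidden(agent_model):
    """Crude proxy for how much the agent has learned about the hidden
    sub-function: the number of facts it has stored about it."""
    return len(agent_model)

def effective_utility(state, agent_model, penalty=0.1):
    # Aggregate: reward the hidden component, penalize knowledge of it,
    # so a rational agent has no incentive to probe the black box.
    return hidden_utility(state) - penalty * knowledge_of_hidden(agent_model)

# A rational agent picks the reachable state scoring highest under the
# effective utility; with an empty model, it lands on the hidden optimum.
best = max(range(10), key=lambda s: effective_utility(s, agent_model={}))
print(best)  # → 7
```

    The toy makes the aggregation concrete, but the paper's actual claim (that such an agent infers and maximizes an approximation of designer intent) rests on the architectural constraints the abstract mentions, which this sketch does not model.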

    Mammalian Value Systems

    Characterizing human values is a topic deeply interwoven with the sciences, humanities, political philosophy, art, and many other human endeavors. In recent years, a number of thinkers have argued that accelerating trends in computer science, cognitive science, and related disciplines foreshadow the creation of intelligent machines which meet and ultimately surpass the cognitive abilities of human beings, thereby entangling an understanding of human values with future technological development. Contemporary research accomplishments suggest increasingly sophisticated AI systems becoming widespread and responsible for managing many aspects of the modern world, from preemptively planning users’ travel schedules and logistics, to fully autonomous vehicles, to domestic robots assisting in daily living. The extrapolation of these trends has been most forcefully described in the context of a hypothetical “intelligence explosion,” in which the capabilities of an intelligent software agent would rapidly increase due to the presence of feedback loops unavailable to biological organisms. The possibility of superintelligent agents, or simply the widespread deployment of sophisticated, autonomous AI systems, highlights an important theoretical problem: the need to separate the cognitive and rational capacities of an agent from the fundamental goal structure, or value system, which constrains and guides the agent’s actions. The “value alignment problem” is to specify a goal structure for autonomous agents compatible with human values. In this brief article, we suggest that recent ideas from affective neuroscience and related disciplines aimed at characterizing neurological and behavioral universals in the mammalian kingdom provide important conceptual foundations relevant to describing human values. We argue that the notion of “mammalian value systems” points to a potential avenue for fundamental research in AI safety and AI ethics.

    THE INFLUENCE OF LEADERSHIP STYLE AND ORGANIZATIONAL CULTURE IN THE IMPLEMENTATION OF RISK MANAGEMENT

    The impact of the global financial crisis has highlighted the importance of risk management. The role of risk management has also been associated with changes in the business environment. The strategy process is divided into two steps, namely formulation and implementation. Risk management is carried out in the strategy formulation process as a project to identify opportunities and risks in accordance with the company's strategy. Implementation of Enterprise Risk Management (ERM) can effectively help the organization achieve its goals and leads to the creation of value for the organization. Risk management is an activity that integrates the recognition of risk, risk assessment, and the development of strategies to manage and mitigate risk using managerial resources. In previous research, organizational culture has been identified as an important contextual factor in the success of a company's risk management. Identifying an individual leader's style is central to evaluating leadership quality and effectiveness, especially in achieving organizational goals and managing risk in the company. The purpose of this study is to propose a conceptual framework linking leadership styles, organizational culture, and risk management. The study is conducted at a state-owned insurance company using quantitative methods, through a survey of middle management and employees. The results of this study will help the company determine the influence of the leadership style and organizational culture it has adopted on the implementation of risk management.
    Keywords: leadership style, organizational culture, risk management

    AI Systems of Concern

    Concerns around future dangers from advanced AI often centre on systems hypothesised to have intrinsic characteristics such as agent-like behaviour, strategic awareness, and long-range planning. We label this cluster of characteristics "Property X". Most present AI systems are low in "Property X"; however, in the absence of deliberate steering, current research directions may rapidly lead to the emergence of highly capable AI systems that are also high in "Property X". We argue that "Property X" characteristics are intrinsically dangerous and, when combined with greater capabilities, will result in AI systems for which safety and control are difficult to guarantee. Drawing on several scholars' alternative frameworks for possible AI research trajectories, we argue that most of the proposed benefits of advanced AI can be obtained by systems designed to minimise this property. We then propose indicators and governance interventions to identify and limit the development of systems with risky "Property X" characteristics.
    Comment: 9 pages, 1 figure, 2 tables

    Evaluating Collaboration Constructs: An Analysis of the Paradise Creek Restoration Plan

    This study examines collaboration constructs using Gray & Wood's framework of theoretical dimensions of collaboration and two conceptual models found in the literature, in an effort to determine which constructs are present in the successful collaborative efforts of the Elizabeth River Project's Team Paradise as they developed the Paradise Creek Restoration Plan. The study used a mixed-method approach involving both qualitative (interviews and documents) and quantitative (survey) methods to gather data. The findings from this study support construct findings from three other studies on collaborative processes: Gray & Wood's framework of theoretical dimensions of collaboration; the Selin & Chavez Model of the Collaborative Process in Natural Resource Management, used in the area of environmental management; and Melaville & Blank's Five Stage Process for Change, used in the social services area. The findings from this research suggest that it might be possible to develop a generic model of collaboration, using common constructs found in the literature, that reflects the iterative and dynamic nature of the process of collaboration. Additionally, this study found two constructs not present in either of the conceptual models. This study indicates that collaboration does follow certain steps, or stages, consisting of a number of constructs, and that practitioners considering collaboration as a way to solve policy problems can use either of these prescriptive models as a framework for their own process.