42 research outputs found

    TOWARDS A HOLISTIC RISK MODEL FOR SAFEGUARDING THE PHARMACEUTICAL SUPPLY CHAIN: CAPTURING THE HUMAN-INDUCED RISK TO DRUG QUALITY

    Counterfeit, adulterated, and misbranded medicines in the pharmaceutical supply chain (PSC) are a critical problem. Regulators charged with safeguarding the supply chain face shrinking inspection resources while concurrently facing increasing demands posed by new drug products being manufactured at more sites in the US and abroad. To mitigate risk, the University of Kentucky (UK) Central Pharmacy Drug Quality Study (DQS) tests injectable drugs dispensed within the UK hospital. Using FT-NIR spectrometry coupled with machine learning techniques, the team identifies and flags potentially contaminated drugs for further testing and possible removal from the pharmacy. Teams like the DQS are always working with limited equipment, time, and staffing resources. Scanning every vial immediately before use is infeasible, and drugs must be prioritized for analysis. A risk scoring system coupled with batch sampling techniques is currently used in the DQS. However, a risk scoring system only tells the team about the risks to the PSC today; it does not predict what the risks will be in the future. To begin bridging this gap in predictive modeling capabilities, the authors assert that models must incorporate the human element. A sister project to the DQS, the Drug Quality Game (DQG), enables humans and all of their unpredictability to be inserted into a virtual PSC. The DQG approach was adopted as a means of capturing human creativity, imagination, and problem-solving skills. Current methods of prioritizing drug scans rely heavily on drug cost, sole-source status, warning letters, and equipment and material specifications. However, humans, not machines, commit fraud. Given that even one defective drug product could have catastrophic consequences, this project will improve risk-based modeling by equipping future models to identify and incorporate human-induced risks, expanding the overall landscape of risk-based modeling.
This exploratory study tested the following hypotheses: (1) a useful game system able to simulate real-life humans and their actions in a pharmaceutical manufacturing process can be designed and deployed; (2) there are variables in the game that are predictive of human-induced risks to the PSC; and (3) the game can identify ways in which bad actors can “game the system” (GTS) to produce counterfeit, adulterated, and misbranded drugs. A commercial-off-the-shelf (COTS) game, Big Pharma, was used as the basis of a game system able to simulate human subjects and their actions in a pharmaceutical manufacturing process. Big Pharma was selected because it provides a low-cost, time-efficient virtual environment that captures the major elements of a pharmaceutical business: research, marketing, and manufacturing/processing. Running Big Pharma within a Python shell enables researchers to implement specific GxP-related tasks (Good x Practice, where x = Manufacturing, Clinical, Research, etc.) not provided in the COTS Big Pharma game. Results from players' interaction with the Python shell/Big Pharma environment suggest that the game can identify both variables predictive of human-induced risks to the PSC and ways in which bad actors may GTS. For example, company profitability emerged as one variable predictive of successful GTS. Players' unethical in-game techniques matched well with observations from the DQS.
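A minimal sketch of the kind of risk-scoring prioritization the abstract describes: drugs are ranked for scanning by a weighted score over attributes such as cost, sole-source status, and warning letters. The weights, field names, and normalization below are illustrative assumptions, not the DQS's actual model.

```python
# Illustrative weights over the attributes the abstract names; the real
# DQS weighting scheme is not published here.
WEIGHTS = {"cost": 0.4, "sole_source": 0.3, "warning_letters": 0.3}

def risk_score(drug):
    # All inputs are assumed pre-normalized to [0, 1].
    return (WEIGHTS["cost"] * drug["cost_norm"]
            + WEIGHTS["sole_source"] * drug["sole_source"]
            + WEIGHTS["warning_letters"] * drug["warning_letters_norm"])

def prioritize(drugs):
    """Order drugs so the highest-risk ones are scanned first."""
    return sorted(drugs, key=risk_score, reverse=True)

drugs = [
    {"name": "A", "cost_norm": 0.9, "sole_source": 1, "warning_letters_norm": 0.0},
    {"name": "B", "cost_norm": 0.2, "sole_source": 0, "warning_letters_norm": 0.8},
]
print([d["name"] for d in prioritize(drugs)])  # ['A', 'B']
```

Such a static score captures today's risk picture, which is exactly the limitation the abstract raises: it has no term for the human-induced risks the DQG is meant to surface.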

    Startup dilemmas - Strategic problems of early-stage platforms on the internet


    Essays on trust and online peer-to-peer markets

    The internet has led to the rapid emergence of new organizational forms such as the sharing economy, crowdfunding, crowdlending, and forms based on the blockchain. Using a variety of methods, this dissertation empirically explores trust and legitimacy in these new markets as they relate to investor decision making.

    Social Operative System (sOS): The Use of Technology to Develop New Forms of Governance

    The research analyzes current forms of governance based on centralized models and develops an emergent model based on a) individuals' data, b) AI forms of learning, and c) control through a DAO.

    Collaborative Innovation: strategy, technology, and social practice


    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
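The mapper/reducer decomposition behind such algorithms can be sketched in miniature for one generation of discrete Conway's life: the mapper emits a neighbor contribution for each live cell, and the reducer sums contributions and applies the birth/survival rules. This is a single-process illustration under assumed function names; the abstract's actual algorithms run as MR streaming jobs on the cloud with optimizations such as strip partitioning.

```python
from collections import defaultdict

def mapper(live_cells):
    # For each live cell, emit a "this cell is alive" marker (value 0)
    # and one contribution (value 1) to each of its eight neighbors.
    for (x, y) in live_cells:
        yield (x, y), 0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    yield (x + dx, y + dy), 1

def reducer(pairs):
    # Shuffle/sum phase: total live-neighbor count per cell, plus the
    # set of currently live cells recovered from the 0-valued markers.
    counts = defaultdict(int)
    alive = set()
    for cell, v in pairs:
        if v == 0:
            alive.add(cell)
        else:
            counts[cell] += v
    # Conway's rules: birth on 3 neighbors, survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

def step(live_cells):
    return reducer(mapper(live_cells))

# A blinker oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

In a real Hadoop streaming deployment the mapper and reducer would read and write key/value lines on stdin/stdout, with the framework performing the shuffle between them.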

    Engineering coordination: eine Methodologie für die Koordination von Planungssystemen (a methodology for the coordination of planning systems)

    Planning problems, such as real-world planning and scheduling problems, are complex tasks. The 'divide and conquer' strategy has been identified as an efficient way of handling such problems: the overall problem is decomposed into sub-problems, each of which is then solved independently, typically in a linear sequence. This approach enables the generation of sub-optimal plans for a number of real-world problems. Today it is widely accepted and has been institutionalized, e.g., in the organizational structure of companies. However, existing interdependencies between the sub-problems are not sufficiently taken into account, as the sub-problems are solved sequentially and no feedback information is given. Coordination has been studied in a number of academic fields, such as distributed artificial intelligence, economics, and game theory. An important result is that no single method leads to optimal results for every coordination problem. Consequently, a suitable coordination mechanism has to be identified for each coordination problem. Up to now, however, no process has existed for selecting a coordination mechanism, neither in the engineering of distributed systems nor in agent-oriented software engineering. Within the scope of this work, the ECo process is presented, which addresses exactly this selection problem. The ECo process comprises the following five steps:
• Modeling the coordination problem
• Defining the coordination requirements
• Selecting/designing the coordination mechanism
• Implementation
• Evaluation
Each of these steps is detailed in the thesis. Modeling is done to make the coordination problem accessible to systematic analysis. Coordination mechanisms have to respect the given situation and the context in which coordination takes place; the requirements imposed by this context are formalized as coordination requirements, which drive the selection process.
Using these requirements as the criterion for selecting a coordination mechanism is a central aspect of this thesis. The requirements can also serve to document design decisions. It is therefore reasonable to annotate coordination mechanisms with the coordination requirements they fulfill and fail, to ease the selection process for a given situation. For that reason, this thesis presents a new classification scheme for coordination methods, which classifies existing methods according to a set of criteria identified as important for distinguishing between them. The implementation phase of the ECo process is supported by the CoPS process and the CoPS framework, which were also developed within this thesis. The CoPS process structures the decision making required during the implementation phase, while the CoPS framework provides a set of basic features software agents need to realize the selected coordination method. Within the CoPS process, techniques are presented for the design and implementation of conversations between agents; conversation design goes well beyond questions of message formatting, as regulated e.g. in the FIPA standards, and these techniques can be applied not only to the coordination of planning systems but to multiagent systems in general. The ECo-CoPS approach has been successfully validated in two case studies from the logistics domain.
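The requirement-driven selection step can be illustrated with a small sketch: each candidate coordination mechanism is annotated with the requirements it fulfills, and candidates are filtered against the requirements of the problem at hand. The mechanism names and requirement labels below are hypothetical, not taken from the thesis's classification scheme.

```python
# Hypothetical annotations: mechanism -> set of coordination
# requirements it fulfills (labels are illustrative only).
mechanisms = {
    "auction": {"decentralized", "self-interested-agents"},
    "central-scheduler": {"global-optimum", "full-information"},
    "negotiation": {"decentralized", "privacy-preserving"},
}

def select(required):
    """Return, sorted, the mechanisms whose annotations cover all
    of the given coordination requirements."""
    return sorted(name for name, fulfilled in mechanisms.items()
                  if required <= fulfilled)

print(select({"decentralized"}))  # ['auction', 'negotiation']
```

The surviving candidates would then be examined in detail, as the thesis describes for its case studies; the annotation doubles as documentation of the design decision.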

    Understanding Organizational Responses to Innovative Deviance: A Case Study of HathiTrust.

    This thesis traces the emergence and evolution of HathiTrust as a way of generating deeper insights into the processes of sociotechnical transformation. HathiTrust emerged from the groundbreaking and legally contentious Google mass digitization project as an organization operated by the University of Michigan. It grew into a partnership of over 100 research institutions that support a shared digital repository, oversee a digital library comprising over thirteen million volumes, and run a research center for non-consumptive computational research. This dissertation combines traditional legal research and analysis with social scientific approaches. Primary data for this case study were generated from in-depth interviews and review of relevant documents such as contracts, judicial opinions, press releases, and organizational reports. It develops an analytic framework blending the sociological concept of innovative deviance with organizational sensemaking theories and copyright doctrine. It describes and explains how and why organizations make sense of and make decisions with respect to risk and opportunity under conditions of uncertainty, ambiguity, and disequilibrium. This explains how slow-moving institutions such as laws and academic research libraries change and adapt in accordance with changes in technology and social practices. It describes the dynamic, non-linear, and mutually constitutive relationships among technology, social practice, and law that shaped and were shaped by HathiTrust. In so doing, it offers insights into the processes of sociotechnical transformation. PhD, Information, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133351/1/acentiva_1.pd

    The ambivalences of piracy: BitTorrent media piracy and anti-capitalism

    This thesis argues that a more nuanced study of online media piracy is necessary in order to augment the dominant focus on piracy's relationship to copyright. Copyright as a frame for understanding piracy's relationship to capitalism has left potentially more crucial areas of study neglected. An approach to understanding the relationship of media piracy to anticapitalist projects must engage with forms of media piracy in their specificity and not as a homogeneous field. The thesis argues that it is possible and necessary to push beyond the constraints of copyright activism and intellectual property, and in so doing opens up new areas of inquiry into online media piracy's potential to challenge logics of property and commodification. Original research is presented in the form of a highly detailed description and analysis of private BitTorrent filesharing sites. These sites are secretive and have yet to receive scholarly attention in such a detailed and systematic way. This research finds both public and private variants of BitTorrent media piracy to be highly ambivalent with regard to their transformative potentials in relation to capital, and thus tempers both the more extreme views of piracy as wholly revolutionary and emancipatory and those that see piracy as a 'simple' form of theft. Public and private BitTorrent filesharing are theorised through the lens of Autonomist Marxism, a perspective with a novel view of technology as both a tool of domination and a force for potential emancipation. Piracy is analysed for its capacity to refuse the valorisation of the enjoyment of music or film via the surveillance and tracking of audiences, which has become typical of contemporary legal online distribution venues. The thesis further analyses BitTorrent piracy's relationship to the 'common': the shared capacities for creating knowledge, ideas, and affects.
The thesis concludes that further scholarly research must move beyond concerns for creators' remuneration and its focus on reforming existing copyright policy and instead engage with the emergent institutional structures of organised media piracy. Though publicly accessible BitTorrent piracy has contributed to a broadening of awareness about issues of access to information, such an awareness often leaves in place logics of private property and capitalist accumulation. Finally, the thesis argues that the richness and complexity of private sites' organisational valences carry with them greater potential for radically destabilising capitalist social relations with regard to the distribution of cultural production.