
    Lean and green – a systematic review of the state of the art literature

    The move towards greener operations and products has forced companies to seek alternatives that balance efficiency gains with environmental friendliness in their operations and products. The exploration of the sequential or simultaneous deployment of lean and green initiatives is the result of this balancing act. However, the lean-green topic is relatively new and lacks a clear and structured research definition. This paper's main contribution is therefore a systematic review of the existing literature on lean and green, aimed at providing guidance on the topic, uncovering gaps and inconsistencies in the literature, and identifying new paths for research. The paper identifies and structures, through a concept map, six main research streams that comprise both conceptual and empirical research conducted within the context of various organisational functions and industrial sectors. Important issues for future research are then suggested in the form of research questions. The paper also aims to stimulate scholars to study this area in greater depth, leading to a better understanding of the compatibility of lean and green initiatives and their impact on organisational performance. It also holds important implications for industrialists, who can develop a deeper and richer knowledge of lean and green to help them formulate more effective deployment strategies.

    PhenDisco: phenotype discovery system for the database of genotypes and phenotypes.

    The database of genotypes and phenotypes (dbGaP) developed by the National Center for Biotechnology Information (NCBI) is a resource that contains information on various genome-wide association studies (GWAS) and is currently available via NCBI's dbGaP Entrez interface. The database is an important resource, providing GWAS data that can be used for new exploratory research or cross-study validation by authorized users. However, finding studies relevant to a particular phenotype of interest is challenging, as phenotype information is presented in a non-standardized way. To address this issue, we developed PhenDisco (phenotype discoverer), a new information retrieval system for dbGaP. PhenDisco consists of two main components: (1) text processing tools that standardize phenotype variables and study metadata, and (2) information retrieval tools that support queries from users and return ranked results. In a preliminary comparison involving 18 search scenarios, PhenDisco showed promising performance in both unranked and ranked comparisons with dbGaP's search engine, Entrez. The system can be accessed at http://pfindr.net.
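
    As a rough illustration of the two-component design described above (standardization followed by ranked retrieval), the sketch below maps free-text phenotype variable labels to canonical terms via a synonym table and then ranks studies by overlap with the standardized query terms. The synonym table, study records and scoring rule are invented for illustration only; they are not drawn from PhenDisco or dbGaP.

    # Toy standardize-then-rank pipeline (not the actual PhenDisco code).
    # Synonym table and study records below are hypothetical.
    SYNONYMS = {
        "bmi": "body mass index",
        "body mass index": "body mass index",
        "t2d": "type 2 diabetes",
        "type 2 diabetes": "type 2 diabetes",
    }

    STUDIES = {  # hypothetical study accessions -> raw phenotype variable labels
        "study_001": ["BMI", "T2D", "age"],
        "study_002": ["body mass index", "blood pressure"],
    }

    def standardize(label: str) -> str:
        """Map a raw variable label to a canonical phenotype term when known."""
        key = label.strip().lower()
        return SYNONYMS.get(key, key)

    def search(query_terms):
        """Rank studies by the number of standardized query terms they contain."""
        query = {standardize(t) for t in query_terms}
        scored = []
        for accession, labels in STUDIES.items():
            terms = {standardize(l) for l in labels}
            score = len(query & terms)
            if score:
                scored.append((score, accession))
        return [acc for score, acc in sorted(scored, reverse=True)]

    print(search(["bmi", "type 2 diabetes"]))  # -> ['study_001', 'study_002']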

    Large-scale event extraction from literature with multi-level gene normalization

    Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique genes and proteins and broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated on two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/). Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from http://evexdb.org/download/, under the Creative Commons Attribution-ShareAlike (CC BY-SA) license.
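
    To make the idea of multi-level normalization concrete, here is a minimal sketch that resolves a gene/protein mention to identifiers at three levels of granularity: a canonicalized symbol, a species-specific gene identifier, and a broader gene family. The lookup table is a toy stand-in with a single example entry (human ESR1, Entrez Gene 2099); the actual pipeline combines trained mention detectors and disambiguation components rather than a static dictionary.

    # Toy illustration of multi-level gene normalization (not the pipeline used in the paper).
    from dataclasses import dataclass

    @dataclass
    class Normalization:
        symbol: str          # canonicalized gene symbol
        entrez_gene_id: int  # species-specific gene identifier
        family: str          # broader gene family

    # Hypothetical lexicon keyed by lower-cased surface forms seen in text.
    LEXICON = {
        "esr1": Normalization("ESR1", 2099, "nuclear hormone receptors"),
        "estrogen receptor alpha": Normalization("ESR1", 2099, "nuclear hormone receptors"),
    }

    def normalize(mention: str):
        """Return identifiers at several levels of granularity, or None if unknown."""
        return LEXICON.get(mention.strip().lower())

    hit = normalize("Estrogen receptor alpha")
    if hit:
        print(hit.symbol, hit.entrez_gene_id, hit.family)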

    A Language and Hardware Independent Approach to Quantum-Classical Computing

    Heterogeneous high-performance computing (HPC) systems offer novel architectures that accelerate specific workloads through judicious use of specialized coprocessors. A promising architectural approach for future scientific computations is provided by heterogeneous HPC systems integrating quantum processing units (QPUs). To this end, we present XACC (eXtreme-scale ACCelerator), a programming model and software framework that enables quantum acceleration within standard or HPC software workflows. XACC follows a coprocessor machine model that is independent of the underlying quantum computing hardware, thereby enabling quantum programs to be defined and executed on a variety of QPU types through a unified application programming interface. Moreover, XACC defines a polymorphic low-level intermediate representation and an extensible compiler frontend that enable language-independent quantum programming, thus promoting integration and interoperability across the quantum programming landscape. In this work we define the software architecture enabling our hardware- and language-independent approach, and demonstrate its usefulness across a range of quantum computing models through illustrative examples involving the compilation and execution of gate- and annealing-based quantum programs.
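
    For context, the listing below shows roughly what this unified, hardware-agnostic interface looks like from Python: a quantum kernel is expressed once and offloaded to whichever accelerator backend is requested by name. It is a sketch based on the XACC Python bindings as shown in the project's public examples; decorator and function names may differ across versions, and the 'qpp' simulator backend is assumed to be installed.

    # Sketch of the coprocessor-style workflow described above, using the XACC
    # Python bindings (names follow the project's examples and may vary by version).
    import xacc

    # Select a backend by name; swapping 'qpp' for another installed accelerator
    # plugin leaves the kernel definition below unchanged.
    qpu = xacc.getAccelerator('qpp', {'shots': 1024})
    qubits = xacc.qalloc(2)

    @xacc.qpu(accelerator=qpu)
    def bell(q):
        H(q[0])          # Hadamard on qubit 0
        CX(q[0], q[1])   # entangle qubits 0 and 1
        Measure(q[0])
        Measure(q[1])

    bell(qubits)   # compile for the chosen backend and execute
    print(qubits)  # inspect the measurement counts stored in the buffer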

    Integrated sustainability management for organizations

    Purpose – The purpose of this paper is to propose the Viable System Model (VSM) as an effective basis for analysing organizational sustainability (long-term viability). It is specifically proposed as a model for integrating the various sustainability tools, and as the basis for designing a unified Sustainability Management System. Design/methodology/approach – The VSM is used as an organizational model to examine three prominent sustainability standards: ISO 26000, ISO 14001 and ISO 14044. A generic manufacturing company is used as a template, and its typical business processes are related to each of the VSM's components. Each clause of the three sustainability standards is then mapped onto the VSM model. These three models are integrated into one by analysing the differences, similarities and complementarities in the context of each VSM component, and by identifying common invariant functions. Findings – In all, 12 generic sustainability functions are identified. ISO 26000 has the widest scope; ISO 14001 is focused primarily on internal measurement and control (System 3), while ISO 14044 is a complex performance indicator at the System 3 level. There is a general absence of System 2. Each standard can be regarded as a distinct management layer, which needs to be integrated with the Business Management layer. Research limitations/implications – Further research is needed to explore the specifics of integration. Practical implications – This integration should not be based on creating distinct roles for each management layer. Originality/value – The paper uses the insights of organizational cybernetics to examine prominent sustainability standards and advance sustainability management at the business level.

    Sustainability management : insights from the viable system model

    A review of the current literature on sustainability standards reveals a significant gap between their adoption and the implementation of sustainability at every level of the organisation. In this paper, it is argued that an appropriate model of the organisation is needed to overcome this challenge. The Viable System Model (VSM) is proposed as such a model and, to illustrate this argument, is used to interpret the ISO 26000 standard on Social Responsibility (SR). First, the VSM theory is introduced and presented by modelling the hypothetical company Widget Co. Then, the clauses of ISO 26000 are mapped onto the Widget Co. model, together with detailed descriptions and examples of the organisational and managerial implications of adopting the standard's guidelines. The result is the identification of generic SR functions that need to be performed by the various organisational governance systems, as well as their dynamic interrelations, thus clarifying implementation issues. Moreover, by identifying different SR management layers, the VSM is suggested as a way forward to develop an integration model for SR issues and the respective sustainability tools. Finally, the implications of using this approach to integrate sustainability standards are discussed, along with the ways in which this research contributes to recent developments in sustainability research.

    A computer vision model for visual-object-based attention and eye movements

    This paper presents a new computational framework for modelling visual-object-based attention and attention-driven eye movements within an integrated system, in a biologically inspired approach. Attention operates at multiple levels of visual selection, by space, feature, object and group, depending on the nature of the targets and the visual tasks. Attentional shifts and gaze shifts are built on common processing circuits and control mechanisms but are also distinguished by their different functional roles, working together to fulfil flexible visual selection tasks in complicated visual environments. The framework integrates the important aspects of human visual attention and eye movements, resulting in sophisticated performance in complicated natural scenes. The proposed approach aims at exploring a useful visual selection system for computer vision, especially for use in cluttered natural visual environments. This work was supported in part by the National Natural Science Foundation of China.
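
    The paper's object- and group-based selection mechanisms are beyond a short snippet, but the following sketch illustrates the simpler, widely used building block of saliency-driven attention shifts with inhibition of return: a crude saliency map is scanned, the most salient location is fixated, and that region is suppressed before the next shift. This is a generic space-based illustration under our own assumptions (difference-of-Gaussians contrast, greedy winner-take-all), not the authors' framework.

    # Generic saliency-plus-inhibition-of-return sketch (not the model proposed in the paper).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def toy_saliency(image: np.ndarray) -> np.ndarray:
        """Crude centre-surround contrast: difference of Gaussian-blurred intensities."""
        centre = gaussian_filter(image.astype(float), sigma=2)
        surround = gaussian_filter(image.astype(float), sigma=8)
        return np.abs(centre - surround)

    def scanpath(image: np.ndarray, n_fixations: int = 3, ior_radius: int = 10):
        """Select fixation points greedily, suppressing each attended region afterwards."""
        saliency = toy_saliency(image)
        ys, xs = np.mgrid[0:saliency.shape[0], 0:saliency.shape[1]]
        fixations = []
        for _ in range(n_fixations):
            y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
            fixations.append((y, x))
            # Inhibition of return: zero out a disc around the attended location.
            saliency[(ys - y) ** 2 + (xs - x) ** 2 <= ior_radius ** 2] = 0
        return fixations

    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))
    frame[20:28, 40:48] += 2.0   # a bright patch that should attract the first fixation
    print(scanpath(frame))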