130 research outputs found

    Systematic review of mRNA expression in human oocytes: understanding the molecular mechanisms underlying oocyte competence

    The biggest cell in the human body, the oocyte, encloses almost the complete machinery to start life. Despite all the research performed to date, defining oocyte quality is still a major goal of reproductive science. It is the consensus that mature oocytes are transcriptionally silent, although during its growth the cell goes through stages of active transcription and translation, which endow the oocyte with the competence to undergo nuclear maturation, and the oocyte and embryo to initiate timely translation before the embryonic genome is fully activated (cytoplasmic maturation). A systematic search was conducted across three electronic databases and the literature was critically appraised using the KMET score system. The aim was to identify quantitative differences in the transcriptome of human oocytes that may link to patient demographics that could affect oocyte competence. Data were analysed following the principles of thematic analysis. Differences in the transcriptome were identified with respect to age or pathological conditions and affected chromosome mis-segregation, perturbations of the nuclear envelope, premature maturation, and alterations in metabolic pathways, amongst others, in human oocytes.

    Scaling a convolutional neural network for classification of adjective noun pairs with TensorFlow on GPU clusters

    Deep neural networks have gained popularity in recent years, obtaining outstanding results in a wide range of applications such as computer vision, in both academia and multiple industry areas. The progress made in recent years cannot be understood without taking into account the technological advancements seen in key domains such as High Performance Computing, more specifically in the Graphics Processing Unit (GPU) domain. These kinds of deep neural networks need massive amounts of data to effectively train the millions of parameters they contain, and this training can take days or weeks depending on the computer hardware used. In this work, we present how the training of a deep neural network can be parallelized on a distributed GPU cluster. The effect of distributing the training process is addressed from two different points of view. First, the scalability of the task and its performance in the distributed setting are analyzed. Second, the impact of distributed training methods on the training times and final accuracy of the models is studied. We used TensorFlow on a GPU cluster of servers, each with two K80 GPU cards, at the Barcelona Supercomputing Center (BSC). The results show an improvement in both focus areas. On one hand, the experiments show promising results for training a neural network faster: the training time decreased from 106 hours to 16 hours in our experiments. On the other hand, we observe that increasing the number of GPUs in one node raises the throughput (images per second) in a near-linear way. Moreover, an additional distributed speedup of 10.3 is achieved with 16 nodes, taking the speedup of one node as baseline. This work is partially supported by the Spanish Ministry of Economy and Competitivity under contract TIN2012-34557, by the BSC-CNS Severo Ochoa program (SEV-2011-00067), by the SGR programmes (2014-SGR-1051 and 2014-SGR-1421) of the Catalan Government, and by the framework of the project BigGraph TEC2013-43935-R, funded by the Spanish Ministerio de Economia y Competitividad and the European Regional Development Fund (ERDF). We would also like to thank the technical support team at the Barcelona Supercomputing Center (BSC), especially Carlos Tripiana. Peer Reviewed. Postprint (author's final draft).
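    The synchronous data-parallel scheme described above can be sketched in a few lines: each worker computes a gradient on its shard of the batch, and the shard gradients are averaged (an all-reduce) before a single shared update. The linear model, shard count, and sizes below are illustrative assumptions, not the paper's actual network or cluster setup.

    ```python
    import numpy as np

    def grad(w, X, y):
        """Gradient of mean squared error for a linear model y ~ X @ w."""
        return 2 * X.T @ (X @ w - y) / len(y)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true
    w = np.zeros(3)

    # Single-device full-batch gradient.
    g_full = grad(w, X, y)

    # Synchronous data parallelism: each of 4 "workers" holds an equal
    # shard of the batch, computes a local gradient, and the results are
    # averaged before the shared weights are updated.
    shards = np.split(np.arange(len(y)), 4)
    g_avg = np.mean([grad(w, X[s], y[s]) for s in shards], axis=0)

    # With equal shard sizes the averaged gradient matches the full-batch
    # one, so synchronous SGD keeps the same update while the work scales out.
    print(np.allclose(g_full, g_avg))  # True
    ```

    The near-linear throughput scaling reported above corresponds to the per-shard gradient computations running concurrently, with only the averaging step requiring communication.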

    A maturity model for the information-driven SME

    Purpose: This article presents a maturity model for the evaluation of the information-driven decision-making process (DMP) in small and medium enterprises (SMEs). This model is called “Simplified Holistic Approach to DMP Evaluation” (SHADE). The SHADE model is based on the “Circumplex Hierarchical Representation of the Organization Maturity Assessment” (CHROMA) framework for characterizing the information-driven DMP in organizations. Design/methodology/approach: The CHROMA-SHADE provides a competency evaluation methodology regarding the SME’s use of data for making better-informed decisions. This model groups the main factors influencing the information-driven DMP and classifies them into five dimensions: data availability, data quality, data analysis and insights, information use, and decision-making. It addresses these dimensions systematically, delivering a framework for positioning the organization from an uninitiated to a completely embedded stage. The assessment consists of interviews with key company personnel based on a standardized open-ended questionnaire, followed by an analysis and scoring of the answers by an expert evaluator. Findings: The results of its application indicate this model is well adapted to SMEs, proving useful for identifying strengths and weaknesses and thereby providing insights for prioritizing improvement actions. Originality/value: The CHROMA-SHADE model follows a novel, holistic approach that embraces the complexities inherent in a multiplicity of factors that, at the technological and management levels, converge to enable more objective and better-supported decisions through the intelligent use of information. Peer Reviewed.
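    As a toy illustration of how per-dimension scores might roll up into an overall stage, the sketch below averages scores across the five SHADE dimensions and maps the mean to a stage label. Only the dimension names and the two end stages come from the abstract; the 1-5 scale, the intermediate stage names, and the cut-offs are assumptions for illustration, not the model's actual scoring rules.

    ```python
    # Illustrative scoring sketch; scale, intermediate stage names, and
    # cut-offs are hypothetical, not taken from CHROMA-SHADE itself.
    DIMENSIONS = ["data availability", "data quality",
                  "data analysis and insights", "information use",
                  "decision-making"]

    STAGES = ["uninitiated", "initiated", "developing", "advanced",
              "completely embedded"]

    def maturity_stage(scores):
        """Map per-dimension scores (1-5) to an overall stage label."""
        if set(scores) != set(DIMENSIONS):
            raise ValueError("one score per dimension is required")
        mean = sum(scores.values()) / len(scores)
        return STAGES[min(int(mean) - 1, len(STAGES) - 1)]

    example = dict(zip(DIMENSIONS, [3, 4, 2, 3, 3]))
    print(maturity_stage(example))  # developing
    ```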

    Chronological evolution of the information-driven decision-making process (1950–2020)

    The version of record is available online at: https://doi.org/10.1007/s13132-022-00917-y. The decision-making process (DMP) is essential in organizations and has changed due to multidisciplinary research, greatly influenced by progress in information technologies and computational science. This work’s objective is to analyse the progressive interaction between the DMP and information technologies, and the consequent breakthroughs in how business has been conducted from 1950 to recent times. Therefore, a chronological review of the evolution of the information-driven DMP is presented. The major landmarks that defined how technology influenced how information is generated, stored, managed, and used for making better decisions, minimizing uncertainty and gaining knowledge, are covered. The findings showed that even if current data-driven trends in managerial decision making have led to competitive advantages and business opportunities, there is still a gap between technological capabilities and organizational needs. Nowadays, it has been reported that the adoption of technology solutions in many companies is faster than their capacity to adapt at the managerial level. Aware of this reality, the “Circumplex Hierarchical Representation of Organization Maturity Assessment” (CHROMA) model has been developed. This tool makes it possible to evaluate whether the management of organizations is making decisions using the available data correctly and optimizing their information systems. Peer Reviewed. Postprint (published version).

    Budget-aware semi-supervised semantic and instance segmentation

    Methods that move towards less supervised scenarios are key for image segmentation, as dense labels demand significant human intervention. Generally, the annotation burden is mitigated by labeling datasets with weaker forms of supervision, e.g. image-level labels or bounding boxes. Another option is semi-supervised settings, which commonly leverage a few strong annotations and a huge number of unlabeled or weakly-labeled data. In this paper, we revisit semi-supervised segmentation schemes and narrow down significantly the annotation budget (in terms of total labeling time of the training set) compared to previous approaches. With a very simple pipeline, we demonstrate that at low annotation budgets, semi-supervised methods outperform weakly-supervised ones by a wide margin for both semantic and instance segmentation. Our approach also outperforms previous semi-supervised works at a much reduced labeling cost. We present results for the Pascal VOC benchmark and unify weakly and semi-supervised approaches by considering the total annotation budget, thus allowing a fairer comparison between methods. Peer Reviewed. Postprint (author's final draft).
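    The budget accounting idea can be made concrete with back-of-the-envelope arithmetic: under a fixed total labeling time, cheap weak labels buy many more images than dense masks, which is what makes comparing methods at equal budgets meaningful. The per-image annotation times below are hypothetical placeholders, not the paper's measurements.

    ```python
    # Hypothetical per-image annotation times (seconds); the figures are
    # assumptions for illustration, not measured costs from the paper.
    TIME_FULL_MASK = 240     # dense segmentation mask
    TIME_IMAGE_LABEL = 20    # image-level class labels only

    def images_within_budget(budget_s, time_per_image):
        """How many images a fixed labeling-time budget can cover."""
        return budget_s // time_per_image

    budget = 100 * 3600  # a 100-hour total labeling budget
    strong = images_within_budget(budget, TIME_FULL_MASK)
    weak = images_within_budget(budget, TIME_IMAGE_LABEL)
    print(strong, weak)  # 1500 fully-masked vs 18000 weakly-labeled images
    ```

    A semi-supervised scheme spends the same budget on a small pool of dense masks plus a large unlabeled set, which is the comparison the unified budget view enables.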

    Skip RNN: learning to skip state updates in recurrent neural networks

    Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges like slow inference, vanishing gradients, and difficulty in capturing long-term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model, which extends existing RNN models by learning to skip state updates, shortening the effective size of the computational graph. This model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models. Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/
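    The core mechanism can be sketched as a binary gate that either applies the RNN cell or copies the previous state through unchanged. In the actual model the gate is learned (and can be regularized by a budget constraint); in this sketch it is a hand-set constant, and the plain tanh cell and weight shapes are illustrative assumptions.

    ```python
    import numpy as np

    def rnn_cell(h, x, W_h, W_x):
        """Plain tanh RNN state update (illustrative cell)."""
        return np.tanh(W_h @ h + W_x @ x)

    def skip_rnn_step(h, x, u, W_h, W_x):
        """One Skip RNN step: the binary gate u selects between updating
        the state (u = 1) and copying it unchanged (u = 0). Skipped steps
        contribute no new cell computation, shortening the effective
        computational graph."""
        return u * rnn_cell(h, x, W_h, W_x) + (1 - u) * h

    rng = np.random.default_rng(0)
    W_h = rng.normal(size=(4, 4)) * 0.1
    W_x = rng.normal(size=(4, 3)) * 0.1
    h = np.zeros(4)
    x = rng.normal(size=3)

    h_updated = skip_rnn_step(h, x, 1, W_h, W_x)  # state changes
    h_skipped = skip_rnn_step(h, x, 0, W_h, W_x)  # state copied through
    print(np.allclose(h_skipped, h))  # True
    ```

    Because a skipped step is an identity map, gradients flow through it unchanged, which is also why skipping can ease the optimization issues on long sequences.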

    Tackling low-resourced sign language translation: UPC at WMT-SLT 22

    This paper describes the system developed at the Universitat Politècnica de Catalunya for the Workshop on Machine Translation 2022 Sign Language Translation Task, in particular for the sign-to-text direction. We use a Transformer model implemented with the Fairseq modeling toolkit. We have experimented with the vocabulary size, data augmentation techniques, and pretraining the model with the PHOENIX-14T dataset. Our system obtains a 0.50 BLEU score on the test set, improving the organizers’ baseline by 0.38 BLEU. We remark on the poor results of both the baseline and our system, and thus the unreliability of our findings. This research was partially supported by research grant Adavoice PID2019-107579RB-I00 / AEI / 10.13039/501100011033 and research grants PRE2020-094223, PID2021-126248OB-I00 and PID2019-107255GB-C21. Peer Reviewed. Postprint (published version).

    Phosphatidylinositol 3-Kinase inhibitors block differentiation of skeletal muscle cells

    Skeletal muscle differentiation involves myoblast alignment, elongation, and fusion into multinucleate myotubes, together with the induction of regulatory and structural muscle-specific genes. Here we show that two phosphatidylinositol 3-kinase inhibitors, LY294002 and wortmannin, blocked an essential step in the differentiation of two skeletal muscle cell models. Both inhibitors abolished the capacity of L6E9 myoblasts to form myotubes, without affecting myoblast proliferation, elongation, or alignment. Myogenic events such as the induction of myogenin and of the glucose carrier GLUT4 were also blocked, and myoblasts could not exit the cell cycle, as measured by the lack of mRNA induction of the cyclin-dependent kinase inhibitor p21. Overexpression of MyoD in 10T1/2 cells was not sufficient to bypass the myogenic differentiation blockade by LY294002. Upon serum withdrawal, 10T1/2-MyoD cells formed myotubes and showed increased levels of myogenin and p21. In contrast, LY294002-treated cells exhibited none of these myogenic characteristics and maintained high levels of Id, a negative regulator of myogenesis. These data indicate that whereas phosphatidylinositol 3-kinase is not indispensable for cell proliferation or for the initial events of myoblast differentiation, i.e. elongation and alignment, it appears to be essential for terminal differentiation of muscle cells.

    Distributed training strategies for a computer vision deep learning algorithm on a distributed GPU cluster

    Deep learning algorithms base their success on building high learning capacity models with millions of parameters that are tuned in a data-driven fashion. These models are trained by processing millions of examples, so the development of more accurate algorithms is usually limited by the throughput of the computing devices on which they are trained. In this work, we explore how the training of a state-of-the-art neural network for computer vision can be parallelized on a distributed GPU cluster. The effect of distributing the training process is addressed from two different points of view. First, the scalability of the task and its performance in the distributed setting are analyzed. Second, the impact of distributed training methods on the final accuracy of the models is studied. This work is partially supported by the Spanish Ministry of Economy and Competitivity under contract TIN2012-34557, by the BSC-CNS Severo Ochoa program (SEV-2011-00067), by the SGR programmes (2014-SGR-1051 and 2014-SGR-1421) of the Catalan Government, and by the framework of the project BigGraph TEC2013-43935-R, funded by the Spanish Ministerio de Economia y Competitividad and the European Regional Development Fund (ERDF). We would also like to thank the technical support team at the Barcelona Supercomputing Center (BSC), especially Carlos Tripiana. Peer Reviewed. Postprint (published version).
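    Scalability results of this kind are conventionally summarized as speedup and parallel efficiency. The helper below applies the standard definitions to the figures reported for the related TensorFlow experiments earlier in this listing (training time reduced from 106 hours to 16 hours, and a 10.3x speedup over one node when using 16 nodes); the mapping of those numbers to these formulas is illustrative.

    ```python
    def speedup(t_baseline_h, t_parallel_h):
        """Ratio of baseline to parallel wall-clock time."""
        return t_baseline_h / t_parallel_h

    def efficiency(s, n_workers):
        """Fraction of ideal linear scaling achieved by speedup s."""
        return s / n_workers

    # Reported figures: 106 h down to 16 h overall, and a 10.3x
    # speedup over one node when scaling to 16 nodes.
    s_time = speedup(106, 16)            # roughly 6.6x from wall-clock times
    e_nodes = efficiency(10.3, 16)       # roughly 0.64 of ideal scaling
    print(s_time, e_nodes)
    ```

    An efficiency below 1.0 reflects communication and synchronization overhead, which is exactly the trade-off between throughput and accuracy that the two points of view above examine.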

    Explore, discover and learn: unsupervised discovery of state-covering skills

    Acquiring abilities in the absence of a task-oriented reward function is at the frontier of reinforcement learning research. This problem has been studied through the lens of empowerment, which draws a connection between option discovery and information theory. Information-theoretic skill discovery methods have garnered much interest from the community, but little research has been conducted into understanding their limitations. Through theoretical analysis and empirical evidence, we show that existing algorithms suffer from a common limitation: they discover options that provide poor coverage of the state space. In light of this, we propose 'Explore, Discover and Learn' (EDL), an alternative approach to information-theoretic skill discovery. Crucially, EDL optimizes the same information-theoretic objective derived from the empowerment literature, but addresses the optimization problem using different machinery. We perform an extensive evaluation of skill discovery methods on controlled environments and show that EDL offers significant advantages, such as overcoming the coverage problem, reducing the dependence of learned skills on the initial state, and allowing the user to define a prior over which behaviors should be learned. This work was partially supported by the Spanish Ministry of Science and Innovation and the European Regional Development Fund under contracts TEC2016-75976-R and TIN2015-65316-P, by the BSC-CNS Severo Ochoa program SEV-2015-0493, and by Generalitat de Catalunya under contracts 2017-SGR-1414 and 2017-DI-011. Víctor Campos was supported by Obra Social “la Caixa” through the La Caixa-Severo Ochoa International Doctoral Fellowship program. Peer Reviewed. Postprint (published version).
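    The coverage limitation has a compact information-theoretic reading: for a fixed number of skills z, the mutual information I(S;Z) between states and skills is larger when each skill visits its own region of the state space than when the skills collapse onto the same states. The toy discrete computation below illustrates this; it is a hand-built example, not EDL's actual objective estimator.

    ```python
    import numpy as np

    def mutual_information(p_sz):
        """I(S;Z) in nats for a discrete joint distribution p(s, z),
        given as an array with one row per state and one column per skill."""
        p_s = p_sz.sum(axis=1, keepdims=True)   # marginal over states
        p_z = p_sz.sum(axis=0, keepdims=True)   # marginal over skills
        mask = p_sz > 0
        return float((p_sz[mask] * np.log(p_sz[mask] / (p_s @ p_z)[mask])).sum())

    # Two skills over four states, uniform p(z) = 1/2 in both cases.
    # Distinct skills: each skill occupies its own pair of states.
    distinct = np.array([[0.25, 0.0], [0.25, 0.0], [0.0, 0.25], [0.0, 0.25]])
    # Collapsed skills: both skills visit the same two states.
    collapsed = np.array([[0.25, 0.25], [0.25, 0.25], [0.0, 0.0], [0.0, 0.0]])

    print(mutual_information(distinct) > mutual_information(collapsed))  # True
    ```

    Here the distinct assignment attains I(S;Z) = log 2 while the collapsed one attains 0: skills that cannot be told apart from the states they visit carry no information, which is the coverage failure EDL targets.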