2,707 research outputs found

    Ultra-Low-Power Superconductor Logic

    Full text link
    We have developed a new superconducting digital technology, Reciprocal Quantum Logic, that uses AC power carried on a transmission line, which also serves as a clock. In simple experiments we have demonstrated zero static power dissipation, thermally limited dynamic power dissipation, high clock stability, high operating margins and a low bit-error rate. These features indicate that the technology is scalable to far more complex circuits at a significant level of integration. At the system level, Reciprocal Quantum Logic combines the high speed and low-power signal levels of Single-Flux-Quantum signals with the design methodology of CMOS, including low static power dissipation, low-latency combinational logic, and an efficient device count. Comment: 7 pages, 5 figures

    A grid-based infrastructure for distributed retrieval

    Get PDF
    In large-scale distributed retrieval, challenges of latency, heterogeneity, and dynamicity emphasise the importance of infrastructural support in reducing the development costs of state-of-the-art solutions. We present a service-based infrastructure for distributed retrieval which blends middleware facilities and a design framework to ‘lift’ the resource sharing approach and the computational services of a European Grid platform into the domain of e-Science applications. In this paper, we give an overview of the DILIGENT Search Framework and illustrate its exploitation in the field of Earth Science

    The influence of feature selection methods on accuracy, stability and interpretability of molecular signatures

    Get PDF
    Motivation: Biomarker discovery from high-dimensional data is a crucial problem with enormous applications in biology and medicine. It is also extremely challenging from a statistical viewpoint, yet surprisingly few studies have investigated the relative strengths and weaknesses of the plethora of existing feature selection methods. Methods: We compare 32 feature selection methods on 4 public gene expression datasets for breast cancer prognosis, in terms of the predictive performance, stability and functional interpretability of the signatures they produce. Results: We observe that the feature selection method has a significant influence on the accuracy, stability and interpretability of signatures. Simple filter methods generally outperform more complex embedded or wrapper methods, and ensemble feature selection generally has no positive effect. Overall, a simple Student's t-test seems to provide the best results. Availability: Code and data are publicly available at http://cbio.ensmp.fr/~ahaury/
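    As a rough illustration of the filter approach the abstract favours, the sketch below ranks genes by a Welch t-test between the two outcome classes and estimates signature stability as the average overlap between signatures selected on bootstrap resamples. It is a minimal sketch on synthetic data, not the authors' code; the dataset shape, the signature size k and the overlap-based stability measure are illustrative assumptions.

```python
# Minimal sketch of a t-test filter for gene signatures, plus a naive
# stability estimate via bootstrap overlap (illustrative, not the paper's code).
import numpy as np
from scipy import stats

def t_test_filter(X, y, k):
    """Rank genes by absolute Welch t-statistic between the two classes
    and return the indices of the top-k genes (the 'signature')."""
    t, _ = stats.ttest_ind(X[y == 0], X[y == 1], axis=0, equal_var=False)
    return np.argsort(-np.abs(t))[:k]

def signature_stability(X, y, k, n_resamples=50, seed=0):
    """Average pairwise overlap of signatures selected on bootstrap resamples."""
    rng = np.random.default_rng(seed)
    sigs = []
    for _ in range(n_resamples):
        idx = rng.choice(len(y), size=len(y), replace=True)
        sigs.append(set(t_test_filter(X[idx], y[idx], k)))
    overlaps = [len(a & b) / k for i, a in enumerate(sigs) for b in sigs[i + 1:]]
    return float(np.mean(overlaps))

if __name__ == "__main__":
    # Synthetic stand-in for a gene-expression matrix (samples x genes).
    rng = np.random.default_rng(42)
    X = rng.normal(size=(100, 2000))
    y = rng.integers(0, 2, size=100)
    X[y == 1, :30] += 1.0  # 30 genes weakly associated with the outcome
    signature = t_test_filter(X, y, k=50)
    print("signature size:", len(signature))
    print("bootstrap stability:", round(signature_stability(X, y, k=50), 3))
```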

    Cellular Automata Applications in Shortest Path Problem

    Full text link
    Cellular Automata (CAs) are computational models that can capture the essential features of systems in which global behaviour emerges from the collective effect of simple components that interact locally. Over the last decades, CAs have been extensively used to mimic several natural processes and systems and to find good solutions to many complex, hard-to-solve problems in computer science and engineering. Among them, the shortest path problem is one of the most prominent and highly studied problems, which scientists have tried to tackle with a plethora of methodologies and even unconventional approaches. The proposed solutions are mainly justified by their ability to provide a correct solution with better time complexity than the renowned Dijkstra's algorithm. Although the suggested algorithms vary widely in algorithmic complexity, spanning from simplistic graph traversal algorithms to complex nature-inspired and bio-mimicking algorithms, in this chapter we focus on the successful application of CAs to the shortest path problem as found in diverse disciplines such as computer science, swarm robotics, computer networks, decision science and the bio-mimicking of biological organisms' behaviour. In particular, the first CA-based algorithm tackling the shortest path problem is introduced in detail. After a short presentation of shortest path algorithms arising from the relaxation of CA principles, the application of the CA-based shortest path definition to the coordinated motion of swarm robotics is also introduced. Moreover, the CA-based application of shortest path finding in computer networks is presented in brief. Finally, a CA that models the behaviour of a biological organism, namely Physarum, finding the minimum-length path between two points in a labyrinth is given. Comment: To appear in the book: Adamatzky, A (Ed.) Shortest path solvers. From software to wetware. Springer, 201
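    As a hedged illustration (not the specific algorithms surveyed in the chapter), the sketch below captures the classic CA flavour of the problem: every free cell of a grid repeatedly applies a local rule that takes the minimum arrival time of its von Neumann neighbours plus one, so a wavefront spreads out from the source; once the iteration reaches a fixed point, a shortest path is recovered by walking back along decreasing arrival times. The grid layout, neighbourhood and tie-breaking rule are illustrative assumptions.

```python
# Hedged sketch of a CA-style wavefront shortest-path search on a grid
# (illustrative; not the chapter's algorithms).
import numpy as np

def ca_shortest_path(grid, src, dst):
    """grid: 2D array with 0 = free cell, 1 = obstacle. Returns the list of
    cells from src to dst, or None if no path exists (4-neighbour rule)."""
    INF = np.iinfo(np.int32).max
    t = np.full(grid.shape, INF, dtype=np.int32)  # arrival time of the wavefront
    t[src] = 0
    changed = True
    while changed:  # iterate the local rule until no cell changes
        changed = False
        for r in range(grid.shape[0]):
            for c in range(grid.shape[1]):
                if grid[r, c]:
                    continue
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]:
                        if t[nr, nc] != INF and t[nr, nc] + 1 < t[r, c]:
                            t[r, c] = t[nr, nc] + 1
                            changed = True
    if t[dst] == INF:
        return None
    path, cell = [dst], dst
    while cell != src:  # walk back along strictly decreasing arrival times
        r, c = cell
        neighbours = [(nr, nc)
                      for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                      if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1]]
        cell = min(neighbours, key=lambda p: t[p])
        path.append(cell)
    return path[::-1]

if __name__ == "__main__":
    maze = np.array([[0, 0, 0, 1, 0],
                     [1, 1, 0, 1, 0],
                     [0, 0, 0, 0, 0],
                     [0, 1, 1, 1, 0],
                     [0, 0, 0, 0, 0]])
    print(ca_shortest_path(maze, (0, 0), (4, 4)))
```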

    Extensive remodeling of DC function by rapid maturation-induced transcriptional silencing.

    Get PDF
    The activation, or maturation, of dendritic cells (DCs) is crucial for the initiation of adaptive, T-cell-mediated immune responses. Research on the molecular mechanisms implicated in DC maturation has focused primarily on inducible gene-expression events promoting the acquisition of new functions, such as cytokine production and enhanced T-cell-stimulatory capacity. In contrast, mechanisms that modulate DC function by inducing widespread gene silencing remain poorly understood, yet the termination of key functions is known to be critical for the function of activated DCs. Genome-wide analysis of activation-induced histone deacetylation, combined with genome-wide quantification of activation-induced silencing of nascent transcription, led us to identify a novel inducible transcriptional-repression pathway that makes major contributions to the DC-maturation process. This silencing response is a rapid primary event distinct from the repression mechanisms known to operate at later stages of DC maturation. The repressed genes function in pivotal processes, including antigen presentation, extracellular signal detection, intracellular signal transduction and lipid-mediator biosynthesis, underscoring the central contribution of the silencing mechanism to the rapid reshaping of DC function. Interestingly, promoters of the repressed genes exhibit a surprisingly high frequency of PU.1-occupied sites, suggesting a novel role for this lineage-specific transcription factor in marking genes poised for inducible repression.

    From Social Data Mining to Forecasting Socio-Economic Crisis

    Full text link
    Socio-economic data mining has great potential in terms of gaining a better understanding of problems that our economy and society are facing, such as financial instability, shortages of resources, or conflicts. Without large-scale data mining, progress in these areas seems hard or impossible. Therefore, a suitable, distributed data mining infrastructure and research centers should be built in Europe. It also appears appropriate to build a network of Crisis Observatories. These can be imagined as laboratories devoted to the gathering and processing of enormous volumes of data on both natural systems, such as the Earth and its ecosystem, and human techno-socio-economic systems, so as to gain early warnings of impending events. Reality mining provides the chance to adapt more quickly and more accurately to changing situations. Further opportunities arise from individually customized services, which however should be provided in a privacy-respecting way. This requires the development of novel ICT (such as a self-organizing Web), but most likely new legal regulations and suitable institutions as well. As long as such regulations are lacking on a world-wide scale, it is in the public interest that scientists explore what can be done with the huge data available. Big data do have the potential to change or even threaten democratic societies. The same applies to sudden and large-scale failures of ICT systems. Therefore, dealing with data must be done with a large degree of responsibility and care. Self-interests of individuals, companies or institutions have limits where the public interest is affected, and public interest is not a sufficient justification to violate the human rights of individuals. Privacy is a high good, as confidentiality is, and damaging it would have serious side effects for society. Comment: 65 pages, 1 figure, Visioneer White Paper, see http://www.visioneer.ethz.c

    Annotation Search: the FAST Way

    Get PDF
    This paper discusses how annotations can be exploited to develop information access and retrieval algorithms that take them into account. The paper proposes a general framework for developing such algorithms that specifically deals with the problem of accessing and retrieving topical information from annotations and annotated documents.
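    As a rough sketch of the idea (not the FAST framework's actual scoring model), annotations can be brought into retrieval by blending a document's own relevance score with the relevance of the annotations attached to it; the toy term-overlap score and the mixing weight alpha below are illustrative assumptions.

```python
# Hedged sketch of annotation-aware retrieval scoring (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    annotations: list = field(default_factory=list)  # free-text annotations

def term_overlap_score(query, text):
    """Toy relevance score: fraction of query terms that occur in the text."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def annotation_aware_score(query, doc, alpha=0.7):
    """Blend the document's own score with the best annotation score."""
    doc_score = term_overlap_score(query, doc.text)
    ann_score = max((term_overlap_score(query, a) for a in doc.annotations),
                    default=0.0)
    return alpha * doc_score + (1 - alpha) * ann_score

if __name__ == "__main__":
    d = Document("d1", "grid infrastructure for search",
                 annotations=["relevant to earth science retrieval"])
    print(round(annotation_aware_score("earth science search", d), 3))
```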

    Patient and public involvement in reducing health and care research waste

    Get PDF
    Background: Eighty-five per cent of health research expenditure is potentially wasted due to failure to publish research, unclear reporting of the research that is published, failure of new research studies to systematically review previous research in the same topic area, and poor study design and conduct. A great deal of progress has been made to address this issue, but the role of patients and the public has not been considered. Main: A small survey was undertaken, as part of a larger programme of work on reducing health and care research waste, to understand the role of patients in reducing research waste. The study showed that patients are interested in this issue, particularly in relation to the prioritisation of research and patient and public involvement. Conclusions: Patients undertake key roles in the research process, including co-applicancy, project management, and work as co-researchers. This brings responsibility for ensuring high-quality research and value for money. Responsibility for recognising the potential for wasteful practices is part of the conduct and operation of research studies.

    Algebraic Comparison of Partial Lists in Bioinformatics

    Get PDF
    The outcome of a functional genomics pipeline is usually a partial list of genomic features, ranked by their relevance in modelling the biological phenotype in terms of a classification or regression model. Due to resampling protocols, or simply within a meta-analysis comparison, it is often the case that sets of alternative feature lists (possibly of different lengths) are obtained instead of a single list. Here we introduce a method, based on the algebraic theory of symmetric groups, for studying the variability between lists ("list stability") in the case of lists of unequal length. We provide algorithms that evaluate stability for lists embedded in the full feature set or limited to the features occurring in the partial lists. The method is demonstrated first on synthetic data in a gene-filtering task and then for finding gene profiles on a recent prostate cancer dataset.
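    The sketch below is only an illustration of the problem setting, not the authors' symmetric-group construction: it compares partial ranked lists of unequal length by extending each one to the full feature set (unlisted features share a tied bottom rank) and averaging a rank-based Canberra distance over all pairs, so smaller values indicate more stable (more similar) lists. The extension rule and the choice of distance are illustrative assumptions.

```python
# Hedged sketch: comparing partial ranked feature lists of unequal length
# via extended rank vectors and a mean pairwise Canberra distance.
import itertools
import numpy as np

def extend_ranks(partial, n_features):
    """Map a partial ranked list (feature ids, best first) to a full rank
    vector of length n_features; missing features get the average of the
    remaining ranks (a tie at the bottom of the list)."""
    ranks = np.zeros(n_features)
    for pos, feat in enumerate(partial):
        ranks[feat] = pos + 1
    missing = ranks == 0.0
    if missing.any():
        ranks[missing] = (len(partial) + 1 + n_features) / 2.0  # mean of ranks k+1..n
    return ranks

def canberra(r1, r2):
    """Canberra distance between two rank vectors (all entries are >= 1)."""
    return float(np.sum(np.abs(r1 - r2) / (np.abs(r1) + np.abs(r2))))

def list_stability(lists, n_features):
    """Mean pairwise Canberra distance between the extended rank vectors;
    smaller values mean more stable (more similar) lists."""
    ranks = [extend_ranks(l, n_features) for l in lists]
    pairs = list(itertools.combinations(ranks, 2))
    return sum(canberra(a, b) for a, b in pairs) / len(pairs)

if __name__ == "__main__":
    # Three alternative signatures of different lengths over 10 hypothetical features.
    lists = [[3, 1, 7], [1, 3, 7, 2], [3, 7]]
    print(round(list_stability(lists, n_features=10), 3))
```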