
    Projector space optimization in quantum control

    We investigate in this work the numerical resolution of a quantum control problem. The specificity of the approach is that, instead of searching directly for the optimal laser intensity that drives the system toward its target, we take as the main variable the evolution semigroup, i.e. the set of propagators indexed by time. The precise form of the generator of the semigroup (e.g. dipolar) is then enforced as a constraint. We present both an algorithm and the associated numerical results.
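
    The abstract gives no equations, but the approach it describes can be sketched in a generic bilinear (dipolar) control model; all notation below is assumed for illustration, not taken from the paper:

```latex
% Propagator dynamics under a dipolar generator (notation assumed):
i\,\frac{d}{dt}U(t) \;=\; \bigl(H_0 + \epsilon(t)\,\mu\bigr)\,U(t),
\qquad U(0) = I .
% Instead of optimizing the field \epsilon(t) directly, optimize over the
% family of propagators \{U(t)\}_{t\in[0,T]} and enforce the dipolar form
% of the generator as a constraint:
\max_{U(\cdot)}\ \bigl\langle \psi_0,\; U(T)^{\dagger} O\, U(T)\,\psi_0 \bigr\rangle
\quad\text{s.t.}\quad
i\,\dot U(t)\,U(t)^{-1} \;\in\; \bigl\{\, H_0 + e\,\mu \;:\; e \in \mathbb{R} \,\bigr\}.
```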

    Approches numériques et théoriques en contrôle quantique (Numerical and theoretical approaches in quantum control)


    Lyapunov control of Schrödinger equations: beyond the dipole approximation

    We analyse in this paper the Lyapunov trajectory tracking of the Schrödinger equation for a second-order coupling operator. We present a theoretical convergence result; for situations not covered by the theory, we propose a numerical approach that is tested and works well in practice.
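
    The Lyapunov tracking strategy described above can be sketched for a generic second-order (beyond-dipole) coupling; the model and functional below are standard illustrations, with notation assumed rather than taken from the paper:

```latex
% Schrödinger dynamics with a second-order coupling (notation assumed):
i\,\partial_t \psi(t) \;=\;
\bigl(H_0 + \epsilon(t)\,\mu_1 + \epsilon(t)^2\,\mu_2\bigr)\,\psi(t) .
% Track a reference trajectory \phi(t) by decreasing the Lyapunov functional
V(t) \;=\; 1 - \bigl\lvert \langle \phi(t),\, \psi(t) \rangle \bigr\rvert^2 ,
% choosing the field \epsilon(t) at each time so that
\frac{d}{dt} V(t) \;\le\; 0 .
```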

    SIENA: Semi-automatic semantic enhancement of datasets using concept recognition

    Background: The amount of data available to answer scientific research questions is growing. However, the formats in which data are published are proliferating as well, which creates a serious challenge when multiple datasets must be integrated to answer a question. Results: This paper presents a semi-automated framework that provides semantic enhancement of biomedical data, specifically gene datasets. The framework combines a machine-learning concept recognition task with the BioPortal annotator. Compared to methods that rely on the BioPortal annotator alone, the proposed framework achieves the best results. Conclusions: By combining concept recognition with machine-learning techniques and annotation against a biomedical ontology, the proposed framework helps datasets reach their full potential of providing meaningful information that can answer scientific research questions.
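
    The idea of combining an ontology annotator with a learned recognizer can be sketched as follows; both components here are toy stand-ins (the function names, lexicon, and concept IDs are illustrative, not from the paper, and the real framework calls the BioPortal annotator service):

```python
# Hypothetical sketch of semantic enhancement by combining two
# concept-recognition passes over a free-text dataset field.

def dictionary_annotate(text, lexicon):
    """Exact-match pass, standing in for an ontology annotator."""
    hits = {}
    for term, concept_id in lexicon.items():
        if term.lower() in text.lower():
            hits[term] = concept_id
    return hits

def merge_annotations(dict_hits, ml_hits):
    """Union of the two passes; ML hits win on conflicts (a design assumption)."""
    merged = dict(dict_hits)
    merged.update(ml_hits)
    return merged

# Placeholder lexicon and concept IDs, for illustration only.
lexicon = {"BRCA1": "EX:0001", "apoptosis": "EX:0002"}
text = "BRCA1 mutations disrupt apoptosis in tumour cells"
dict_hits = dictionary_annotate(text, lexicon)
ml_hits = {"tumour": "EX:0003"}  # pretend output of a trained recognizer
print(merge_annotations(dict_hits, ml_hits))
```

    The merged concept IDs can then be attached to the dataset rows as ontology annotations.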

    Abusive Language on Social Media Through the Legal Looking Glass

    Abusive language is a growing phenomenon on social media platforms. Its effects can reach beyond the online context, contributing to mental or emotional stress in users. Automatic tools for detecting abuse can alleviate the issue. In practice, developing automated methods to detect abusive language relies on good-quality data. However, there is currently a lack of standards for creating datasets in the field, including definitions of what is considered abusive language, annotation guidelines, and reporting on the process. This paper introduces an annotation framework inspired by legal concepts to define abusive language in the context of online harassment. The framework uses a 7-point Likert scale for labelling instead of class labels. We also present ALYT, a dataset of Abusive Language on YouTube. ALYT includes English YouTube comments extracted from videos on different controversial topics and labelled by law students. The comments were sampled from the actual collected data, without artificial methods for increasing the abusive content. The paper describes the annotation process thoroughly, including all its guidelines and training steps.
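
    Labelling on a 7-point Likert scale rather than with class labels invites a simple aggregation step across annotators; a minimal sketch, where the aggregation rule and decision threshold are our assumptions, not the paper's:

```python
from statistics import mean, stdev

def aggregate_likert(scores, abusive_threshold=5.0):
    """Aggregate per-annotator 1-7 Likert ratings for one comment.

    The mean-plus-threshold rule below is illustrative; the paper itself
    only specifies the scale, not how labels should be aggregated.
    """
    avg = mean(scores)
    return {
        "mean": avg,
        "spread": stdev(scores) if len(scores) > 1 else 0.0,
        "abusive": avg >= abusive_threshold,
    }

print(aggregate_likert([6, 7, 5]))  # high agreement, flagged as abusive
print(aggregate_likert([2, 3, 1]))  # clearly non-abusive
```

    Keeping the spread alongside the mean preserves annotator disagreement, which a hard class label would discard.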

    Data2Services: enabling automated conversion of data to services

    While data are becoming increasingly easy to find and access on the Web, significant effort and skill are still required to process the amount and diversity of data into convenient formats that are friendly to the user. Moreover, these efforts are often duplicated and are hard to reuse. Here, we describe Data2Services, a new framework to semi-automatically process heterogeneous data into target data formats, databases and services. Data2Services uses Docker to faithfully execute data transformation pipelines. These pipelines automatically convert target data into a semantic knowledge graph that can be further refined to conform to a particular data standard. The data can be loaded into a number of databases and are made accessible through native and autogenerated APIs. We describe the architecture and a prototype implementation for data in the life sciences.
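
    The core step of such a pipeline, converting tabular records into knowledge-graph triples, can be sketched with the standard library alone; the base URI and the column-to-predicate mapping below are illustrative assumptions, whereas the actual framework runs Docker-packaged converters targeting RDF:

```python
import csv
import io

def rows_to_triples(csv_text, subject_col, base_uri="http://example.org/"):
    """Map each CSV row to N-Triples-style statements.

    One triple is emitted per non-subject column:
    <base_uri + subject> <base_uri + column> "value" .
    """
    triples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        subj = f"<{base_uri}{row[subject_col]}>"
        for col, value in row.items():
            if col == subject_col:
                continue
            triples.append(f'{subj} <{base_uri}{col}> "{value}" .')
    return triples

data = "gene,organism\nTP53,human\nBRCA1,human\n"
for triple in rows_to_triples(data, "gene"):
    print(triple)
```

    A real converter would also escape literals and map columns to terms from an existing ontology rather than minting ad-hoc predicate URIs.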