
    Extension complexity of stable set polytopes of bipartite graphs

    The extension complexity xc(P) of a polytope P is the minimum number of facets of a polytope that affinely projects to P. Let G be a bipartite graph with n vertices, m edges, and no isolated vertices. Let STAB(G) be the convex hull of the stable sets of G. It is easy to see that n ≤ xc(STAB(G)) ≤ n + m. We improve both of these bounds. For the upper bound, we show that xc(STAB(G)) is O(n²/log n), which is an improvement when G has quadratically many edges. For the lower bound, we prove that xc(STAB(G)) is Ω(n log n) when G is the incidence graph of a finite projective plane. We also provide examples of 3-regular bipartite graphs G such that the edge vs. stable set matrix of G has a fooling set of size |E(G)|. Comment: 13 pages, 2 figures
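The three bounds claimed in the abstract above can be restated side by side in display form:

```latex
n \;\le\; \mathsf{xc}(\mathsf{STAB}(G)) \;\le\; n + m
\qquad\text{(trivial bounds)},
```
```latex
\mathsf{xc}(\mathsf{STAB}(G)) = O\!\left(\tfrac{n^2}{\log n}\right)
\qquad\text{(new upper bound),}
```
```latex
\mathsf{xc}(\mathsf{STAB}(G)) = \Omega(n \log n)
\qquad\text{(new lower bound, for incidence graphs of finite projective planes).}
```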

    Energy management for user's thermal and power needs: A survey

    The increasing world energy consumption, the diversity in energy sources, and pressing environmental goals have made the energy supply–demand balance a major challenge. Additionally, as reducing energy costs is a crucial target in the short term, while sustainability is essential in the long term, the challenge is twofold and contains clashing goals. A more sustainable system and more sustainable end-user behavior can be promoted by offering economic incentives to manage energy use while saving on energy bills. In this paper, we survey the state of the art in energy management systems for scheduling the operation of distributed energy resources and satisfying end-users' electrical and thermal demands. We address questions such as: How can the energy management problem be formulated? What are the most common optimization methods, and how can forecast uncertainties be handled? Quantitatively, what kind of improvements can be obtained? We provide a novel overview of concepts, models, techniques, and potential economic and emission savings to enhance the design of energy management systems.

    Automatic detection of procedural knowledge in robotic-assisted surgical texts

    Purpose: The automatic extraction of knowledge about intervention execution from surgical manuals would be of the utmost importance to develop expert surgical systems and assistants. In this work we assess the feasibility of automatically identifying the sentences of a surgical intervention text containing procedural information, a subtask of the broader goal of extracting intervention workflows from surgical manuals. Methods: We frame the problem as a binary classification task. We first introduce a new public dataset of 1958 sentences from robotic surgery texts, manually annotated as procedural or non-procedural. We then apply different classification methods, from classical machine learning algorithms to more recent neural-network approaches and classification methods exploiting transformers (e.g., BERT, ClinicalBERT). We also analyze the benefits of applying balancing techniques to the dataset. Results: The architectures based on neural networks fed with FastText embeddings and the one based on ClinicalBERT outperform all the tested methods, empirically confirming the feasibility of the task. Adopting balancing techniques does not lead to substantial improvements in classification. Conclusion: This is the first work experimenting with machine/deep learning algorithms for automatically identifying procedural sentences in surgical texts. It also introduces the first public dataset that can be used for benchmarking different classification methods for the task.
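To make the binary "procedural vs. non-procedural" framing concrete, here is a minimal sketch of a trivial keyword baseline. The cue words and sentences are hypothetical illustrations; the paper itself evaluates classical ML, neural, and transformer-based classifiers, not this heuristic.

```python
# Hypothetical keyword baseline for the procedural-sentence detection task.
# Cue words below are illustrative, not the paper's feature set.
PROCEDURAL_CUES = {"insert", "incise", "dissect", "suture", "retract", "clamp"}

def is_procedural(sentence: str) -> bool:
    """Label a sentence procedural if it contains an action cue word."""
    tokens = {t.strip(".,;").lower() for t in sentence.split()}
    return bool(tokens & PROCEDURAL_CUES)

examples = [
    ("Insert the trocar under direct vision.", True),   # procedural
    ("The da Vinci system was introduced in 2000.", False),  # descriptive
]
for text, expected in examples:
    assert is_procedural(text) == expected
```

A real system would replace the keyword lookup with a learned classifier over sentence embeddings, but the input/output contract (sentence in, binary label out) is the same.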

    How force perception changes in different refresh rate conditions

    In this work we consider the role of different refresh rates of the force-feedback physics engine for haptic environments, such as robotic surgery and virtual reality surgical training systems. Two experimental force feedback tasks are evaluated in a virtual environment. Experiment I is a passive contact task, where the hand-grip is held while waiting for the force feedback perception produced by contact with virtual objects. Experiment II is an active contact task, where a tool is moved in a direction until contact with a pliable object is perceived. Different stiffnesses and refresh rates are factorially manipulated. To evaluate differences in the two tasks, we account for latency time inside the wall, penetration depth, and maximum force exerted against the object surface. The overall result of these experiments shows improved sensitivity in almost all variables considered at refresh rates of 500 and 1,000 Hz compared with a refresh rate of 250 Hz, but no improved sensitivity is shown between the two higher rates.

    Surgicberta: a pre-trained language model for procedural surgical language

    Pre-trained language models are now ubiquitous in natural language processing, being successfully applied to many different tasks and in several real-world applications. However, even though there is a wealth of high-quality written material on surgery, and the scientific community has shown a growing interest in the application of natural language processing techniques in surgery, a pre-trained language model specific to the surgical domain is still missing. The creation and public release of such a model would serve numerous useful clinical applications. For example, it could enhance existing surgical knowledge bases employed for task automation, or assist medical students in summarizing complex surgical descriptions. For this reason, in this paper, we introduce SurgicBERTa, a pre-trained language model specific to the English surgical language, i.e., the language used in the surgical domain. SurgicBERTa has been obtained from RoBERTa through continued pre-training with the masked language modeling objective on 300k sentences taken from English surgical books and papers, for a total of 7 million words. By publicly releasing SurgicBERTa, we make available a resource built from the content collected in many high-quality surgical books, online textual resources, and academic papers. We performed several assessments in order to evaluate SurgicBERTa, comparing it with the general-domain RoBERTa. First, we intrinsically assessed the model in terms of perplexity, accuracy, and evaluation loss resulting from the continued pre-training on the masked language modeling task. Then, we extrinsically evaluated SurgicBERTa on several downstream tasks, namely (i) procedural sentence detection, (ii) procedural knowledge extraction, (iii) ontological information discovery, and (iv) surgical terminology acquisition. Finally, we conducted some qualitative analysis of SurgicBERTa, showing that it contains a lot of surgical knowledge that could be useful to enrich existing state-of-the-art surgical knowledge bases or to extract surgical knowledge. All the assessments show that SurgicBERTa deals with surgical language better than a general-purpose pre-trained language model such as RoBERTa, and can therefore be effectively exploited in many computer-assisted applications in the surgical domain.
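The masked language modeling objective used for continued pre-training can be illustrated with a minimal token-masking sketch. This only shows how training examples are corrupted; the actual SurgicBERTa training uses RoBERTa's subword tokenizer and a full transformer training stack, and the example sentence below is hypothetical.

```python
# Sketch of MLM-style input corruption: hide some tokens and record the
# originals as prediction targets the model must recover.
import random

MASK = "<mask>"

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """Replace each token with <mask> with probability mask_prob.
    Returns the corrupted sequence and a {position: original_token} map."""
    rng = rng or random.Random(0)
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            targets[i] = tok  # the model is trained to recover this token
        else:
            corrupted.append(tok)
    return corrupted, targets

corrupted, targets = mask_tokens("divide the gastrocolic ligament".split())
```

RoBERTa-style training typically masks about 15% of tokens per sequence, which is why `mask_prob` defaults to 0.15 here.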

    Machine understanding surgical actions from intervention procedure textbooks

    The automatic extraction of procedural surgical knowledge from surgery manuals, academic papers, or other high-quality textual resources is of the utmost importance to develop knowledge-based clinical decision support systems, to automatically execute procedure steps, or to summarize the procedural information, spread throughout the texts, in a structured form usable as a study resource by medical students. In this work, we propose a first benchmark on extracting detailed surgical actions from available intervention procedure textbooks and papers. We frame the problem as a Semantic Role Labeling task. Exploiting a manually annotated dataset, we apply different Transformer-based information extraction methods. Starting from RoBERTa and BioMedRoBERTa pre-trained language models, we first investigate a zero-shot scenario and compare the obtained results with a full fine-tuning setting. We then introduce a new ad-hoc surgical language model, named SurgicBERTa, pre-trained on a large collection of surgical materials, and we compare it with the previous ones. In the assessment, we explore different dataset splits (one in-domain and two out-of-domain) and we also investigate the effectiveness of the approach in a few-shot learning scenario. Performance is evaluated on three correlated sub-tasks: predicate disambiguation, semantic argument disambiguation, and predicate-argument disambiguation. Results show that fine-tuning a pre-trained domain-specific language model achieves the highest performance on all splits and on all sub-tasks. All models are publicly released.
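To illustrate the Semantic Role Labeling framing described above, here is a hypothetical predicate-argument structure for a surgical instruction; the sentence and role labels are illustrative inventions, not the paper's actual annotation scheme.

```python
# Hypothetical SRL output for one surgical instruction: the predicate is the
# surgical action, and each argument fills a semantic role relative to it.
sentence = "Divide the short gastric vessels with the harmonic scalpel."
frame = {
    "predicate": "Divide",                                # the surgical action
    "ARG-patient": "the short gastric vessels",           # what is acted upon
    "ARG-instrument": "with the harmonic scalpel",        # the tool used
}
assert frame["predicate"] in sentence  # arguments are spans of the sentence
```

The three sub-tasks in the paper map onto this structure: identifying the predicate, identifying the argument spans and their roles, and doing both jointly.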

    Do LLMs Dream of Ontologies?

    Large language models (LLMs) have recently revolutionized automated text understanding and generation. The performance of these models relies on the high number of parameters of the underlying neural architectures, which allows LLMs to memorize part of the vast quantity of data seen during training. This paper investigates whether, and to what extent, general-purpose pre-trained LLMs have memorized information from known ontologies. Our results show that LLMs partially know ontologies: they can, and indeed do, memorize concepts from ontologies mentioned in the text, but the level of memorization seems to vary in proportion to the concepts' popularity on the Web, the primary source of their training material. We additionally propose new metrics to estimate the degree of memorization of ontological information in LLMs by measuring the consistency of the output produced across different prompt repetitions, query languages, and degrees of determinism.
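A consistency-based memorization signal of the kind described above can be sketched with a simple agreement score over repeated model answers. This is a hypothetical illustration of the general idea; the paper's actual metric definitions may differ.

```python
# Hypothetical repetition-consistency score: query the model several times
# with the same prompt and measure how often answers agree with the mode.
from collections import Counter

def repetition_consistency(answers):
    """Fraction of answers matching the most frequent answer (0.0 if empty)."""
    if not answers:
        return 0.0
    counts = Counter(answers)
    return counts.most_common(1)[0][1] / len(answers)

# Four repeated prompts, three agreeing answers -> high consistency,
# suggesting the queried fact may be memorized rather than guessed.
score = repetition_consistency(["GO:0008150", "GO:0008150", "GO:0003674", "GO:0008150"])
```

The same score can be computed across query languages or sampling temperatures to probe the other two consistency dimensions the paper mentions.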