713 research outputs found
On the Parameterized Complexity of Red-Blue Points Separation
We study the following geometric separation problem: given a set R of red points and a set B of blue points in the plane, find a minimum-size set of lines that separates R from B. We show that, in its full generality, parameterized by the number of lines k in the solution, the problem is unlikely to be solvable significantly faster than the brute-force n^O(k)-time algorithm, where n is the total number of points. Indeed, we show that an algorithm running in time f(k)·n^o(k/log k), for any computable function f, would disprove the Exponential Time Hypothesis (ETH). Our reduction crucially relies on selecting lines from a set with a large number of different slopes (i.e., this number is not a function of k). Conjecturing that the problem variant where the lines are required to be axis-parallel is FPT in the number of lines, we show the following preliminary result: separating R from B with a minimum-size set of axis-parallel lines is FPT in the size of either set, and can be solved in time O*(9^|B|) (assuming that B is the smaller set).
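The core feasibility test behind the axis-parallel variant is easy to state: a set of axis-parallel lines separates R from B exactly when no red and blue point fall in the same cell of the grid the lines induce. The sketch below illustrates this test plus a naive exponential search; all function names are illustrative, and this is not the paper's O*(9^|B|) algorithm, only the brute force it improves upon.

```python
import bisect
from itertools import combinations

def separates(red, blue, vlines, hlines):
    """True iff the axis-parallel lines x=c (c in vlines) and y=c
    (c in hlines) separate R from B, i.e. no red and blue point
    share a cell of the grid the lines induce."""
    vs, hs = sorted(vlines), sorted(hlines)

    def cell(p):
        # Cell signature: how many vertical lines lie left of p,
        # how many horizontal lines lie below p.
        return (bisect.bisect(vs, p[0]), bisect.bisect(hs, p[1]))

    red_cells = {cell(p) for p in red}
    return all(cell(q) not in red_cells for q in blue)

def min_axis_parallel_separation(red, blue):
    """Naive brute force: try every subset of candidate lines placed
    between consecutive distinct coordinates, smallest first.
    Exponential; only for tiny illustrative instances."""
    pts = red + blue

    def mids(coords):
        c = sorted(set(coords))
        return [(a + b) / 2 for a, b in zip(c, c[1:])]

    cand = [('v', x) for x in mids(p[0] for p in pts)] + \
           [('h', y) for y in mids(p[1] for p in pts)]
    for k in range(len(cand) + 1):
        for subset in combinations(cand, k):
            v = [c for o, c in subset if o == 'v']
            h = [c for o, c in subset if o == 'h']
            if separates(red, blue, v, h):
                return v, h
    return None
```

For the "checkerboard" instance red = {(0,0), (2,2)}, blue = {(0,2), (2,0)}, no single line suffices, but one vertical plus one horizontal line does.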
Geometric Multicut
We study the following separation problem: given a collection of colored objects in the plane, compute a shortest "fence" F, i.e., a union of curves of minimum total length, that separates every two objects of different colors. Two objects are separated if F contains a simple closed curve that has one object in the interior and the other in the exterior. We refer to the problem as GEOMETRIC k-CUT, where k is the number of different colors, as it can be seen as a geometric analogue of the well-studied multicut problem on graphs. We first give an O(n^4 log^3 n)-time algorithm that computes an optimal fence for the case where the input consists of polygons of two colors and n corners in total. We then show that the problem is NP-hard for the case of three colors. Finally, we give a (2 - 4/(3k))-approximation algorithm.
Estimation in high dimensions: a geometric perspective
This tutorial provides an exposition of a flexible geometric framework for high-dimensional estimation problems with constraints. The tutorial develops geometric intuition about high-dimensional sets, justifies it with some results of asymptotic convex geometry, and demonstrates connections between geometric results and estimation problems. The theory is illustrated with applications to sparse recovery, matrix completion, quantization, linear and logistic regression, and generalized linear models.
Linking healthcare and societal resilience during the Covid-19 pandemic
Coronavirus disease 2019 (Covid-19) has highlighted the link between public healthcare and the broader context of operational response to complex crises. Data are needed to support the work of the emergency services and enhance governance.
This study develops a Europe-wide analysis of perceptions, needs and priorities of the public affected by the Covid-19 emergency. An online multilingual survey was conducted from mid-May until mid-July 2020. The questionnaire investigates perceptions of public healthcare, emergency management and societal resilience.
In total, N = 3029 valid answers were collected. They were analysed both as a whole and with a focus on the most represented countries (Italy, Romania, Spain and the United Kingdom). Our findings highlight some perceived weaknesses in emergency management that are associated with the underlying vulnerability of the globally interconnected society and public healthcare systems. The spread of the epidemic in Italy represented a "tipping point" for perceiving Covid-19 as an "emergency" in the surveyed countries. The respondents uniformly expressed a preference for gradually restarting activities. We observed a tendency to ignore the cascading effects of Covid-19 and the possible concurrence of threats.
Our study highlights the need for practices designed to address the next phases of the Covid-19 crisis and to prepare for future systemic shocks. Cascading effects that could compromise operational capacity need to be considered more carefully. We make the case for reinforcing cross-border coordination of public health initiatives, for standardization in business continuity management, and for handling the recovery at the European level.
The residual STL volume as a metric to evaluate accuracy and reproducibility of anatomic models for 3D printing: application in the validation of 3D-printable models of maxillofacial bone from reduced radiation dose CT images.
Background: The effect of reduced radiation dose CT on the generation of maxillofacial bone STL models for 3D printing is currently unknown. Images of two full-face transplantation patients scanned with non-contrast 320-detector row CT were reconstructed at fractions of the acquisition radiation dose using noise simulation software and both filtered back-projection (FBP) and Adaptive Iterative Dose Reduction 3D (AIDR3D). The maxillofacial bone STL model segmented by thresholding from AIDR3D images at 100 % dose was considered the reference. For all other dose/reconstruction-method combinations, a "residual STL volume" was calculated as the topologic subtraction of the STL model derived from that dataset from the reference, and correlated to radiation dose.
Results: The residual volume decreased with increasing radiation dose and was lower for AIDR3D than for FBP reconstructions at all doses. As a fraction of the reference STL volume, the residual volume decreased from 2.9 % (20 % dose) to 1.4 % (50 % dose) in patient 1, and from 4.1 % to 1.9 %, respectively, in patient 2 for AIDR3D reconstructions. For FBP reconstructions it decreased from 3.3 % (20 % dose) to 1.0 % (100 % dose) in patient 1, and from 5.5 % to 1.6 %, respectively, in patient 2. Its morphology resembled a thin shell on the osseous surface with an average thickness of <0.1 mm.
Conclusion: The residual volume, a topologic difference metric for STL models of tissue depicted in DICOM images, supports that a reduction of CT dose by up to 80 % of the clinical acquisition, in conjunction with iterative reconstruction, yields maxillofacial bone models accurate for 3D printing.
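On a voxelized representation, the residual-volume idea reduces to a Boolean subtraction: count the voxels that belong to the reference segmentation but not to the test segmentation, scaled by the voxel volume. A minimal sketch under that assumption (the study itself operates on STL surface meshes, not voxel masks, and these function names are illustrative):

```python
def residual_volume(reference, test, voxel_vol=1.0):
    """Topologic subtraction reference minus test on flattened
    boolean voxel masks; returns the residual volume in units
    of voxel_vol."""
    if len(reference) != len(test):
        raise ValueError("masks must cover the same grid")
    # Voxels present in the reference but missing from the test model.
    missing = sum(1 for r, t in zip(reference, test) if r and not t)
    return missing * voxel_vol

def residual_fraction(reference, test):
    """Residual volume as a fraction of the reference volume,
    the normalization the study reports (e.g. 2.9 % at 20 % dose)."""
    ref_vol = sum(1 for r in reference if r)
    return residual_volume(reference, test) / ref_vol
```

Note that voxels present only in the test model do not contribute: the metric is a one-sided difference, which is why it can shrink toward zero as the test model converges to the reference.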
Expression of HLA-G in patients with B-cell chronic lymphocytic leukemia (B-CLL).
The expression of HLA-G has been reported in certain malignancies, and a role in escape from immunosurveillance in cancers has been proposed, since HLA-G is a non-conventional HLA class I molecule that protects the fetus from immunorecognition during pregnancy. Recent studies proposed HLA-G as a novel prognostic marker for patients with B-CLL. HLA-G was shown to carry even better prognostic information than Zeta-chain-associated protein of 70 kDa (ZAP-70) and CD38, although some other authors did not find HLA-G expression in CLL. Therefore, in this study we characterized the expression of HLA-G at both the RNA and the protein level. In most of the 20 B-CLL patients we were able to detect an HLA-G signal using flow cytometry analysis. The expression of HLA-G was confirmed at the messenger-RNA level by real-time RT-PCR experiments. No correlation was detected between HLA-G expression and the expression of well-established prognostic factors such as ZAP-70 and CD38. These results confirm that HLA-G is expressed on CLL leukemic cells. Furthermore, the expression of HLA-G on CLL cells suggests that this molecule might be involved in the escape of CLL cells from immunosurveillance.
Taxonomic corpus-based concept summary generation for document annotation.
Semantic annotation is an enabling technology which links documents to concepts that unambiguously describe their content. Annotation improves access to document contents for both humans and software agents. However, the annotation process is a challenging task, as annotators often have to select from thousands of potentially relevant concepts in controlled vocabularies. The best approaches to assisting in this task rely on reusing the annotations of an annotated corpus. In the absence of a pre-annotated corpus, alternative approaches suffer from the insufficient descriptive text available for concepts in most vocabularies. In this paper, we propose an unsupervised method for recommending document annotations based on generating node descriptors from an external corpus. We exploit knowledge of the taxonomic structure of a thesaurus to ensure that effective descriptors (concept summaries) are generated for concepts. Our evaluation on recommending annotations shows that the content we generate effectively represents the concepts. Moreover, our approach outperforms those that rely on information from a thesaurus alone and is comparable with supervised approaches.
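The general idea of corpus-based concept summaries can be sketched simply: gather corpus passages that mention a concept's label or the labels of its taxonomic neighbours, and pool their terms into a descriptor. This is only a bag-of-words caricature of the approach, with a toy thesaurus format and illustrative names, not the paper's method:

```python
from collections import Counter

def concept_summary(concept, thesaurus, corpus, top_k=10):
    """Build a bag-of-words descriptor for `concept` from corpus
    passages mentioning its label or the labels of its taxonomic
    neighbours (here, thesaurus maps a label to related labels)."""
    labels = {concept} | set(thesaurus.get(concept, ()))
    terms = Counter()
    for passage in corpus:
        text = passage.lower()
        if any(lbl.lower() in text for lbl in labels):
            terms.update(text.split())
    # Drop the labels themselves so the summary contributes new terms.
    for lbl in labels:
        for word in lbl.lower().split():
            terms.pop(word, None)
    return [w for w, _ in terms.most_common(top_k)]
```

The taxonomy matters because a bare label like "maize" may be rare in a corpus that consistently writes "corn"; pooling neighbour labels recovers descriptive context that the thesaurus entry alone lacks.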
Semantic Boosting: Enhancing Deep Learning Based LULC Classification
The classification of land use and land cover (LULC) is a well-studied task within the domains of remote sensing and geographic information science. It traditionally relies on remotely sensed imagery and therefore models land cover classes with respect to their electromagnetic reflectances, aggregated in pixels. This paper introduces a methodology which enables the inclusion of geographic object semantics (from vector data) into the LULC classification procedure. As such, information on the types of geographic objects (e.g., Shop, Church, Peak, etc.) can improve LULC classification accuracy. In this paper, we demonstrate how semantics can be fused with imagery to classify LULC. Three experiments were performed to explore and highlight the impact and potential of semantics for this task. In each experiment, CORINE LULC data were used as ground truth and predicted with deep learning from Sentinel-2 imagery and LinkedGeoData semantics. Our results reveal that LULC can be classified from semantics only, and that fusing semantics with imagery ("Semantic Boosting") improved the classification, with significantly higher LULC accuracies. The results show that some LULC classes are better predicted using only semantics, others with just imagery, and, importantly, that much of the improvement was due to the ability to separate similar land use classes. A number of key considerations are discussed.
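The fusion step can be pictured as simple feature concatenation: each pixel's spectral band values are extended with a fixed-order vector of nearby geographic-object-type counts before classification. A minimal sketch, assuming this concatenation scheme (the paper's actual deep-learning architecture is not reproduced here, and all names are illustrative):

```python
def fuse_features(spectral, semantic_counts, object_types):
    """Concatenate a pixel's spectral band values with a fixed-order
    vector of nearby geographic-object counts (e.g. from vector data),
    yielding one fused feature vector for a downstream classifier."""
    # Missing object types contribute a count of zero, so every
    # pixel gets a vector of the same length.
    sem = [semantic_counts.get(t, 0) for t in object_types]
    return list(spectral) + sem
```

Keeping the object-type order fixed across all pixels is what lets a classifier learn, for instance, that a high Shop count pushes a spectrally ambiguous pixel toward an urban land use class.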
- …