428 research outputs found

    Decision Support System For Search & Rescue Operations

    A SAR (search and rescue) operation is a complex process, often carried out under severe constraints on resources and time. Every minute matters, as delay puts the lost person in greater danger. It is therefore crucial to plan and coordinate a SAR operation effectively. Because the search area is often very extensive, any leads about where to look first are invaluable. Such leads can be obtained by modelling the lost person's behaviour based on data from past operations. The generated results are probabilities of finding the subject in different segments of the search area, which can support both the planning and the execution phases of the operation. The authors evaluate one of the commonly used modelling methods and propose several ways to improve it, together with preliminary evaluation results and an already implemented system that incorporates the described methodology.
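
The behaviour-modelling idea above can be sketched as a simple frequency model: estimate, from past operations, how often subjects were found in each terrain type, then rank the current search segments accordingly. All data, terrain categories, and segment names below are illustrative assumptions, not the authors' actual model.

```python
from collections import Counter

# Hypothetical historical data: terrain type of the segment where each
# past subject was found (values are illustrative only).
past_finds = ["forest", "forest", "road", "water", "forest", "road", "field"]

# Candidate segments of the current search area, labeled by terrain type.
segments = {"S1": "forest", "S2": "road", "S3": "water", "S4": "field"}

def segment_probabilities(past_finds, segments):
    """Assign each segment a probability proportional to how often past
    subjects were found in that terrain type, normalized over segments."""
    freq = Counter(past_finds)
    total = sum(freq[t] for t in segments.values())
    return {sid: freq[t] / total for sid, t in segments.items()}

probs = segment_probabilities(past_finds, segments)
# Segments sorted by descending probability suggest where to search first.
plan = sorted(probs, key=probs.get, reverse=True)
```

A real model would condition on many more factors (subject profile, weather, terrain accessibility), but the output has the same shape: a probability per segment that drives the search plan.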

    REALIZATION OF A SYSTEM OF EFFICIENT QUERYING OF HIERARCHICAL DATA TRANSFORMED INTO A QUASI-RELATIONAL MODEL

    Extensible Markup Language (XML) was mainly designed to represent documents; however, it has evolved and is now widely used to represent arbitrary data structures. There are many Application Programming Interfaces (APIs) that aid software developers in processing XML data, and many languages for querying and transforming XML, such as XPath or XQuery, are widely used in this field. However, because of the great flexibility of XML documents, there are no unified standards, tools, or systems for storing and processing such data. On the other hand, the relational model is still the most common and widely used standard for storing and querying data. Many Database Management Systems include components for loading and transforming hierarchical data; DB2 pureXML and Oracle SQLX are some of the most recognized examples. Unfortunately, all of them require knowledge of additional tools, standards, and languages dedicated to accessing hierarchical data (for example, XPath or XQuery). Transforming XML documents into a (quasi-)relational model and then querying the transformed documents with SQL or SQL-like queries would significantly simplify the development of data-oriented systems and applications. In this paper, an implementation of the SQLxD query system is proposed. The XML documents are converted into a quasi-relational model (preserving their hierarchical structure), and an SQL-like language based on SQL-92 allows for efficient data querying.
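
A minimal sketch of the transformation idea (not the actual SQLxD implementation): each repeating XML element becomes a row in a relational table, after which plain SQL queries the formerly hierarchical data. The document structure and table name are invented for illustration; Python's standard sqlite3 and xml.etree modules stand in for the real system.

```python
import sqlite3
import xml.etree.ElementTree as ET

# A small XML document with a repeating <book> element.
xml_doc = """
<library>
  <book><title>A</title><year>1999</year></book>
  <book><title>B</title><year>2005</year></book>
</library>
"""

root = ET.fromstring(xml_doc)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (title TEXT, year INTEGER)")

# Each <book> element becomes one relational row.
for book in root.iter("book"):
    conn.execute("INSERT INTO book VALUES (?, ?)",
                 (book.findtext("title"), int(book.findtext("year"))))

# Plain SQL over what was hierarchical data.
rows = conn.execute("SELECT title FROM book WHERE year > 2000").fetchall()
```

Deeper hierarchies would map to several tables linked by foreign keys to preserve parent-child structure, which is the part a system like SQLxD automates.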

    A new approach to storing dynamic data in relational databases using JSON

    JavaScript Object Notation (JSON) was originally designed to transfer data; however, it soon found another use as a way to persist data in NoSQL databases. Recently, the most popular relational databases introduced JSON as a native column type, which makes it easier to store and query data with a dynamic schema. In this paper, we review the currently popular techniques for storing data with a dynamic model and a large number of relationships between entities in relational databases. We focus on creating a simple dynamic schema with JSON in the most popular relational databases and compare it with the well-known EAV/CR data model and with a document database. The results of carefully selected tests in the field of criminal data suggest that using JSON for a dynamic database schema greatly simplifies queries and reduces their execution time compared to the widely used approaches.
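
The contrast between the EAV model and a native JSON column can be sketched with SQLite (assuming it is built with the JSON1 functions, as recent builds are); table and attribute names are illustrative, not taken from the paper. Note how the EAV layout needs a self-join to combine two attributes of one entity, while the JSON column answers the same question in a single WHERE clause.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# EAV: one row per (entity, attribute, value) triple.
conn.execute("CREATE TABLE eav (entity INTEGER, attr TEXT, value TEXT)")
conn.executemany("INSERT INTO eav VALUES (?, ?, ?)",
                 [(1, "name", "Alice"), (1, "city", "Krakow"),
                  (2, "name", "Bob"), (2, "city", "Warsaw")])

# JSON column: the whole dynamic record in one field.
conn.execute("CREATE TABLE person (id INTEGER, data TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(1, '{"name": "Alice", "city": "Krakow"}'),
                  (2, '{"name": "Bob", "city": "Warsaw"}')])

# EAV needs a self-join to combine two attributes of one entity ...
eav_q = conn.execute(
    """SELECT a.value FROM eav a JOIN eav b
       ON a.entity = b.entity
       WHERE a.attr = 'name' AND b.attr = 'city' AND b.value = 'Krakow'"""
).fetchall()

# ... while the JSON column answers the same question in one clause.
json_q = conn.execute(
    "SELECT json_extract(data, '$.name') FROM person "
    "WHERE json_extract(data, '$.city') = 'Krakow'"
).fetchall()
```

With each additional attribute in the filter, the EAV query grows by one more self-join, which is exactly the overhead the JSON approach avoids.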

    Cassiopeia – Towards a Distributed and Composable Crawling Platform, Journal of Telecommunications and Information Technology, 2014, no. 2

    When designing and implementing crawling systems or Internet robots, it is of the utmost importance to first address efficiency and scalability issues (from a technical and architectural point of view), due to the enormous size and structural complexity of the World Wide Web. There is, however, a significant number of users for whom flexibility and ease of execution are as important as efficiency. Running, defining, and composing Internet robots and crawlers according to dynamically changing requirements and use-cases in the easiest possible way (e.g. in a graphical, drag & drop manner) is necessary, especially for criminal analysts. The goal of this paper is to present the idea, design, crucial architectural elements, Proof-of-Concept (PoC) implementation, and preliminary experimental assessment of the Cassiopeia framework, i.e. an all-in-one studio addressing both of the above-mentioned aspects.
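
The composability idea can be sketched as a pipeline of small processing stages, the way a drag & drop editor might wire crawler components together. The stage names are hypothetical, and the fetcher is a stub rather than a real HTTP client.

```python
def fetch(urls):
    # Stand-in for an HTTP fetcher: here it just fabricates page bodies.
    return [(u, f"<html>content of {u}</html>") for u in urls]

def extract_text(pages):
    # Toy tag stripping; a real component would parse the HTML properly.
    return [(u, body.replace("<html>", "").replace("</html>", ""))
            for u, body in pages]

def keyword_filter(docs, keyword):
    # Keep only documents whose text mentions the keyword.
    return [(u, text) for u, text in docs if keyword in text]

def pipeline(urls, *stages):
    """Chain stages: each consumes the previous stage's output."""
    data = urls
    for stage in stages:
        data = stage(data)
    return data

result = pipeline(
    ["http://example.org/a", "http://example.org/b"],
    fetch,
    extract_text,
    lambda docs: keyword_filter(docs, "example.org/a"),
)
```

Because every stage has the same list-in, list-out shape, stages can be reordered, swapped, or distributed across workers without changing the others, which is the property a composable crawling studio builds on.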

    Processing XML documents based on a quasi-relational model and the SQLxD language

    The authors propose to use the quasi-relational model as a basis for processing data from XML documents. The XML data is transformed into relational data sets according to intuitive rules that make use of the syntax and structure of the source document. Part of the solution is a query language called SQLxD, which is based on the popular SQL syntax and serves as a tool for both data transformation and its further processing.

    Sparse data classifier based on first-past-the-post voting system

    A point of interest (POI) is a general term for objects that describe places from the real world. POI matching (i.e., determining whether two sets of attributes represent the same location) is not a trivial challenge due to the large variety of data sources: the representations of POIs may vary depending on how they are stored. A manual comparison of objects is not achievable in real time; therefore, multiple solutions for automatic merging exist. However, no efficient solution that handles missing attributes has been proposed so far. In this paper, we propose a multi-layered hybrid classifier that is composed of machine-learning and deep-learning techniques and supported by a first-past-the-post voting system. We examined different weights for the constituencies that were taken into consideration during a majority (or supermajority) decision. As a result, we achieved slightly higher accuracy than the best current model (random forest), which is also based on voting.
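
A minimal sketch of the first-past-the-post voting layer, assuming each constituent classifier casts a weighted vote and an optional supermajority threshold can veto the decision; classifier names and weights are illustrative, not the paper's tuned values.

```python
from collections import Counter

def fptp_vote(predictions, weights, supermajority=None):
    """First-past-the-post vote over constituent classifiers.

    predictions: {classifier_name: predicted_label}
    weights: number of votes each constituency carries
    Returns the winning label, or None when a required supermajority
    (fraction of the total weight) is not reached."""
    tally = Counter()
    for name, label in predictions.items():
        tally[label] += weights.get(name, 1)
    label, votes = tally.most_common(1)[0]
    if supermajority is not None and votes / sum(tally.values()) < supermajority:
        return None
    return label

preds = {"random_forest": "match", "svm": "match", "mlp": "no_match"}
w = {"random_forest": 2, "svm": 1, "mlp": 1}

decision = fptp_vote(preds, w)                    # simple plurality wins
strict = fptp_vote(preds, w, supermajority=0.8)   # 3/4 of votes < 0.8
```

In the simple case "match" wins with 3 of 4 weighted votes; under an 0.8 supermajority requirement the same tally is insufficient and no decision is returned.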

    Towards Automatic Points of Interest Matching

    Complementing information about particular points, places, or institutions, i.e., so-called Points of Interest (POIs), can be achieved by matching data from the growing number of geospatial databases; these include Foursquare, OpenStreetMap, Yelp, and Facebook Places. Doing this potentially allows for the acquisition of more accurate and more complete information about POIs than would be possible by merely extracting the information from each of the systems alone. Problem: The task of Points of Interest matching, and the development of an algorithm to perform this automatically, are quite challenging problems due to the prevalence of different data structures, data incompleteness, conflicting information, naming differences, data inaccuracy, and cultural and language differences; in short, the difficulties experienced in the process of obtaining (complementary) information about a POI from different sources are due, in part, to the lack of standardization among Points of Interest descriptions; a further difficulty stems from the vast and rapidly growing amount of data to be assessed on each occasion.
Research design and contributions: To propose an efficient algorithm for automatic Points of Interest matching, we: (1) analyzed available data sources—their structures, models, attributes, number of objects, the quality of data (number of missing attributes), etc.—and defined a unified POI model; (2) prepared a fairly large experimental dataset consisting of 50,000 matching and 50,000 non-matching points, taken from different geographical, cultural, and language areas; (3) comprehensively reviewed metrics that can be used for assessing the similarity between Points of Interest; (4) proposed and verified different strategies for dealing with missing or incomplete attributes; (5) reviewed and analyzed six different classifiers for Points of Interest matching, conducting experiments and follow-up comparisons to determine the most effective combination of similarity metric, strategy for dealing with missing data, and POIs matching classifier; and (6) presented an algorithm for automatic Points of Interest matching, detailing its accuracy and carrying out a complexity analysis. Results and conclusions: The main results of the research are: (1) comprehensive experimental verification and numerical comparisons of the crucial Points of Interest matching components (similarity metrics, approaches for dealing with missing data, and classifiers), indicating that the best Points of Interest matching classifier is a combination of random forest algorithm coupled with marking of missing data and mixing different similarity metrics for different POI attributes; and (2) an efficient greedy algorithm for automatic POI matching. At a cost of just 3.5% in terms of accuracy, it allows for reducing POI matching time complexity by two orders of magnitude in comparison to the exact algorithm
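
One of the building blocks above, computing a per-attribute similarity vector while marking missing attributes with a sentinel value, can be sketched as follows. The attributes, the metrics (Jaccard over name tokens, distance decay over coordinates), and the sentinel value are illustrative assumptions, not the exact choices evaluated in the paper.

```python
import math

MISSING = -1.0  # sentinel marking a missing attribute for the classifier

def name_similarity(a, b):
    # Jaccard similarity over word sets as a simple name metric.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def distance_similarity(p, q, scale_m=100.0):
    # Decaying similarity over an equirectangular distance in metres
    # (adequate at city scale; p and q are (lat, lon) pairs).
    dy = (p[0] - q[0]) * 111_320
    dx = (p[1] - q[1]) * 111_320 * math.cos(math.radians(p[0]))
    return math.exp(-math.hypot(dx, dy) / scale_m)

def feature_vector(poi_a, poi_b):
    """Mix a different similarity metric per attribute; mark gaps."""
    feats = []
    for attr, metric in [("name", name_similarity),
                         ("coords", distance_similarity)]:
        va, vb = poi_a.get(attr), poi_b.get(attr)
        feats.append(MISSING if va is None or vb is None
                     else metric(va, vb))
    return feats  # fed to a classifier such as a random forest

a = {"name": "Cafe Rynek", "coords": (50.0617, 19.9373)}
b = {"name": "Rynek Cafe Bar", "coords": None}
vec = feature_vector(a, b)
```

Marking missing values instead of imputing them lets the downstream classifier learn how informative an absent attribute is, which is one of the strategies the paper compares.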

    LINK – a criminal analysis environment using geospatial analysis tools

    The paper presents the LINK application, a decision-support system dedicated to the operational and investigative activities of homeland security services. The paper briefly discusses the issues of criminal analysis and the possibilities of utilizing spatial (geographical) information together with crime mapping and geospatial analysis tools and methods.

    Measurement of the angle between jet axes in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV

    This letter presents the first measurement of the angle between different jet axes (denoted $\Delta R$) in Pb-Pb collisions. The measurement is carried out in the 0-10% most-central events at $\sqrt{s_{\rm NN}} = 5.02$ TeV. Jets are assembled by clustering charged particles at midrapidity using the anti-$k_{\rm T}$ algorithm with resolution parameters $R = 0.2$ and $0.4$ and transverse momenta in the intervals $40 < p_{\rm T}^{\rm ch\,jet} < 140$ GeV/$c$ and $80 < p_{\rm T}^{\rm ch\,jet} < 140$ GeV/$c$, respectively. Measurements at these low transverse momenta enhance the sensitivity to quark-gluon plasma (QGP) effects. A comparison to models implementing various mechanisms of jet energy loss in the QGP shows that the observed narrowing of the Pb-Pb distribution relative to pp can be explained if quark-initiated jets are more likely to emerge from the medium than gluon-initiated jets. These new measurements disfavor intra-jet $p_{\rm T}$ broadening, as described in a model calculation with the BDMPS formalism, as the main mechanism of energy loss in the QGP. The data are sensitive to the angular scale at which the QGP can resolve two independent splittings, favoring mechanisms that incorporate incoherent energy loss.