
    Requirements for the data transfer during the examination of design documentation

    When design documents are transferred to the examination office, the number of incompatible electronic documents increases dramatically. The article discusses a way to solve the problem of transferring the text and graphic data of design documentation for state and non-state expert review, as well as verifying estimates and managing requirements. Methods for recognizing system elements, and requirements for transferring text and graphic design documents, are provided. The need to use classification and coding of the various elements of information systems (structures, objects, resources, requirements, contracts, etc.) in data transfer systems is noted separately. The authors have developed a sequence for document processing and data transmission during the examination, and propose a language for describing the construction of the facility that takes into account the classification criteria of the structures and construction works.

    A hybrid data structure for searching in metric spaces

    The concept of “approximate” searching has applications in a vast number of fields. Some examples are non-traditional databases (e.g. storing images, fingerprints or audio clips, where the concept of exact search is of no use and we search instead for similar objects), text searching, information retrieval, machine learning and classification, image quantization and compression, computational biology, and function prediction. Track: Databases. Red de Universidades con Carreras en Informática (RedUNCI).
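    A core building block of many metric-space index structures is pruning with the triangle inequality via precomputed pivot distances. The sketch below is illustrative only (a single pivot, a toy 1-D metric) and is an assumption about the general technique, not the hybrid structure the abstract proposes:

```python
# Pivot-based range search: precompute d(pivot, x) for every object, then use
# the triangle inequality lower bound |d(q,p) - d(p,x)| <= d(q,x) to skip
# objects without computing their (possibly expensive) distance to the query.

def pivot_index(objects, dist, pivot):
    """Precompute the distance from the pivot to every object."""
    return [(x, dist(pivot, x)) for x in objects]

def range_search(index, dist, pivot, query, radius):
    """Return all objects within `radius` of `query`, pruning with the pivot."""
    d_qp = dist(query, pivot)
    results = []
    for x, d_px in index:
        # Lower bound on d(query, x); if it already exceeds the radius,
        # the real distance cannot be within it either.
        if abs(d_qp - d_px) > radius:
            continue
        if dist(query, x) <= radius:
            results.append(x)
    return results

# Illustrative 1-D metric space with absolute-difference distance.
objs = [0.0, 1.0, 2.5, 7.0, 9.0]
dist = lambda a, b: abs(a - b)
idx = pivot_index(objs, dist, pivot=0.0)
print(range_search(idx, dist, 0.0, 2.0, 1.0))  # → [1.0, 2.5]
```

    Real metric indexes combine many pivots and a tree or hash layout, but the pruning test is the same.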

    Detection of topic on Health News in Twitter Data

    Abstract: The development and rapid popularization of the internet has led to exponential growth of data on the network, making text mining increasingly important. Users search for information within the immense amount available online, and the need to obtain valuable information and to classify, organize and manage vast text data automatically makes text processing even more difficult. To address these problems and requirements, intelligent information processing has been studied extensively. Topic modelling has been widely employed in the field of natural language processing. Current research focuses on improving the speed and accuracy of text classification and topic detection, and on selecting feature methods that achieve better dimension reduction. The Latent Dirichlet Allocation (LDA) topic model works well for data noise reduction, and LDA is widely used as a feature model combined with a classifier to achieve good classification performance. This study aims to conduct data mining and reduce the load on the huge database. Three supervised learning algorithms are run: Naïve Bayes, Decision Tree and Random Forest. The Random Forest classifier outperforms the other two classifiers with 99.99% accuracy. Seven clusters for topic modelling have been revealed using the Random Forest classifier. Each topic is represented by its four highest-weighted words, showing each term and its weight; the most frequent term in the dataset is ‘Ebola’. The findings show that the combination of LDA and a supervised learning algorithm effectively solves the problem of data sparseness in short text sets. Selecting the microblogs most likely to discuss news topics significantly reduces the size of the data objects of concern and, to a certain extent, eliminates the interference of non-news blogs.
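    The supervised step can be sketched with a from-scratch multinomial Naïve Bayes, one of the three classifiers the study compares. The LDA feature extraction and the health-news dataset are omitted, and the tiny corpus below is purely illustrative:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels, alpha=1.0):
    """Multinomial Naive Bayes over bag-of-words, with Laplace smoothing."""
    vocab = {w for d in docs for w in d.split()}
    counts = defaultdict(Counter)          # label -> word counts
    for d, y in zip(docs, labels):
        counts[y].update(d.split())
    label_totals = Counter(labels)
    priors = {y: math.log(n / len(docs)) for y, n in label_totals.items()}
    likelihood = {}
    for y in label_totals:
        total = sum(counts[y].values()) + alpha * len(vocab)
        likelihood[y] = {w: math.log((counts[y][w] + alpha) / total)
                         for w in vocab}
        likelihood[y]["<unk>"] = math.log(alpha / total)  # unseen words
    return priors, likelihood

def predict_nb(model, doc):
    """Pick the label with the highest log-posterior for the document."""
    priors, likelihood = model
    def score(y):
        ll = likelihood[y]
        return priors[y] + sum(ll.get(w, ll["<unk>"]) for w in doc.split())
    return max(priors, key=score)

# Illustrative two-class corpus, standing in for the health-news tweets.
docs = ["ebola outbreak spreads", "new ebola cases reported",
        "stock market rallies today", "market hits record high"]
labels = ["health", "health", "finance", "finance"]
model = train_nb(docs, labels)
print(predict_nb(model, "ebola cases confirmed"))  # → health
```

    In the study, LDA topic proportions (rather than raw counts) would serve as the feature vector fed to each classifier.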

    A simple yet effective baseline for non-attributed graph classification

    Graphs are complex objects that do not lend themselves easily to typical learning tasks. Recently, a range of approaches based on graph kernels or graph neural networks have been developed for graph classification and for representation learning on graphs in general. As the developed methodologies become more sophisticated, it is important to understand which components of the increasingly complex methods are necessary or most effective. As a first step, we develop a simple yet meaningful graph representation, and explore its effectiveness in graph classification. We test our baseline representation for the graph classification task on a range of graph datasets. Interestingly, this simple representation achieves performance similar to state-of-the-art graph kernels and graph neural networks for non-attributed graph classification. Its performance on classifying attributed graphs is slightly weaker, as it does not incorporate attributes. However, given its simplicity and efficiency, we believe that it still serves as an effective baseline for attributed graph classification. Our graph representation is efficient (linear-time) to compute. We also provide a simple connection with graph neural networks. Note that these observations hold only for the task of graph classification, while existing methods are often designed for a broader scope including node embedding and link prediction. The results are also likely biased by the limited number of benchmark datasets available. Nevertheless, the good performance of our simple baseline calls for the development of new, more comprehensive benchmark datasets so as to better evaluate and analyze different graph learning methods. Furthermore, given the computational efficiency of our graph summary, we believe that it is a good candidate as a baseline method for future graph classification (or even other graph learning) studies.
    Comment: 13 pages. A shorter version appears at the 2019 ICLR Workshop on Representation Learning on Graphs and Manifolds. arXiv admin note: text overlap with arXiv:1810.00826 by other authors.
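    One linear-time, attribute-free graph summary of the kind the abstract describes is a normalized degree histogram, which maps graphs of any size to a fixed-length vector usable with any off-the-shelf classifier. This is an illustrative stand-in, not necessarily the paper's exact representation:

```python
from collections import Counter

def degree_histogram(edges, num_nodes, max_degree):
    """Linear-time graph summary: normalized histogram of node degrees.
    Degrees above max_degree share the last bin, so every graph maps to a
    vector of length max_degree + 1, comparable across graphs of any size."""
    deg = Counter()
    for u, v in edges:                 # one pass over the edge list
        deg[u] += 1
        deg[v] += 1
    hist = [0.0] * (max_degree + 1)
    for node in range(num_nodes):      # isolated nodes count as degree 0
        hist[min(deg[node], max_degree)] += 1
    return [h / num_nodes for h in hist]

# Two toy graphs: a triangle and a 4-node star.
triangle = [(0, 1), (1, 2), (2, 0)]
star = [(0, 1), (0, 2), (0, 3)]
print(degree_histogram(triangle, 3, 3))  # every node has degree 2
print(degree_histogram(star, 4, 3))      # one hub (deg 3), three leaves (deg 1)
```

    The resulting vectors can be fed directly to, e.g., a random forest or SVM for graph classification.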


    Bayesian Methods and Machine Learning for Processing Text and Image Data

    Classification/clustering is an important class of unstructured data processing problems. Classification (supervised, semi-supervised and unsupervised) aims to discover clusters and group similar data into categories for information organization and knowledge discovery. My work focuses on using Bayesian methods and machine learning techniques to classify free-text and image data, and addresses how to overcome the limitations of traditional methods. The Bayesian approach allows the use of more types of variables (numerical or categorical) and estimates probabilities instead of explicit rules, which is beneficial in ambiguous cases. MAP (maximum a posteriori) estimation is used to deal with local-maximum problems for which ML (maximum likelihood) estimation gives inaccurate estimates. The EM (expectation-maximization) algorithm can be applied with MAP estimation for incomplete/missing data problems. Our proposed framework can be used in both supervised and unsupervised classification. For natural language processing (NLP), we applied machine learning techniques to sentence/text classification. For 3D CT image segmentation, a MAP-EM clustering approach is proposed to automatically detect the number of objects in a 3D CT luggage image, and prior knowledge and constraints in the MAP estimation are used to avoid or mitigate local-maximum problems. The algorithm automatically determines the number of classes and finds the optimal parameters for each class; as a result, it can automatically detect the number of objects and produce a better segmentation of each object in the image. For segmented-object recognition, we applied machine learning techniques to classify each object as target or non-target, achieving good results with 90% PD (probability of detection) and 6% PFA (probability of false alarm). For image restoration in X-ray imaging, scatter can produce noise, artifacts, and decreased contrast. In practice, hardware such as an anti-scatter grid is often used to reduce scatter, but the remaining scatter can still be significant, so additional software-based correction is desirable. Furthermore, good software solutions can potentially reduce the amount of anti-scatter hardware needed, thereby reducing cost. In this work, scatter correction is formulated as a Bayesian MAP problem with a non-local prior, which leads to better preservation of textural detail during scatter reduction. The efficacy of our algorithm is demonstrated through experimental and simulation results.
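    The MAP-EM idea can be sketched on a 1-D two-component Gaussian mixture with a Gaussian prior on each mean: the M-step shrinks each mean toward its prior, which regularizes the update against poor local maxima of plain ML. The 3D CT specifics are beyond an abstract-sized example; all numbers below are illustrative:

```python
import math

def map_em_1d(data, mu, prior_mu, prior_strength=1.0, sigma=1.0, iters=50):
    """EM for a two-component 1-D Gaussian mixture (equal weights, fixed
    sigma), where the M-step is a MAP update for each mean: a weighted
    average of the data assigned to the component and its prior mean."""
    mu = list(mu)
    for _ in range(iters):
        # E-step: responsibility of each component for each data point.
        resp = []
        for x in data:
            w = [math.exp(-((x - m) ** 2) / (2 * sigma ** 2)) for m in mu]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step with MAP: prior acts like prior_strength pseudo-observations
        # at prior_mu[k], pulling each mean toward its prior.
        for k in range(2):
            n_k = sum(r[k] for r in resp)
            s_k = sum(r[k] * x for r, x in zip(resp, data))
            mu[k] = (s_k + prior_strength * prior_mu[k]) / (n_k + prior_strength)
    return mu

# Two clearly separated clusters; priors encode rough knowledge of their locations.
data = [0.1, -0.2, 0.3, 9.8, 10.1, 10.3]
mu = map_em_1d(data, mu=[1.0, 8.0], prior_mu=[0.0, 10.0])
print(mu)  # means converge near 0 and 10
```

    With prior_strength set to 0 the update reduces to the ordinary ML M-step, making the regularizing role of the prior easy to see.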

    Analysis of Information Visualization Techniques for Abstract data on Mobile Devices

    To perform visualization on mobile devices, different types of data may be used, such as text, pictures, maps, physical objects and abstract data. Data visualization is categorized into two areas: scientific visualization and information visualization. Scientific visualization refers to specific types of data, such as physical data, and is used for computer modeling and simulation. Information visualization refers to abstract data and is used in business and finance, administration, digital media and other abstract domains. Physical versus abstract data is only one classification; there are others, such as static and dynamic data, structured and unstructured data, or hierarchical and non-hierarchical data. This paper focuses on information visualization of abstract data on mobile devices.