
    Cyber–Physical–Social Frameworks for Urban Big Data Systems: A Survey

    The integration of things’ data on the Web, and Web linking for things’ description and discovery, is leading the way towards smart Cyber–Physical Systems (CPS). The data generated in CPS represent observations gathered by sensor devices about the ambient environment, which can be manipulated by computational processes of the cyber world. Alongside this, the growing use of social networks offers near real-time citizen-sensing capabilities as a complementary information source. The resulting Cyber–Physical–Social System (CPSS) can help to understand the real world and provide proactive services to users. The nature of CPSS data brings new requirements and challenges at different stages of data manipulation, including the identification of data sources and the processing and fusion of data of different types and scales. To provide an understanding of the existing methods and techniques that can be useful for a data-oriented CPSS implementation, this paper presents a survey of existing research and commercial solutions. We define a conceptual framework for a data-oriented CPSS and detail the various solutions for building human–machine intelligence.
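    A minimal sketch of the kind of data fusion such a framework implies, assuming a hypothetical unified schema in which physical-sensor observations and citizen (social-media) reports are normalized before being combined; the field names and the confidence-weighted fusion rule are illustrative, not taken from any surveyed system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical unified record for CPSS data fusion: physical-sensor
# observations and citizen (social-media) reports are normalized to the
# same schema before downstream processing. Field names are illustrative.
@dataclass
class Observation:
    source_type: str    # "physical" (sensor) or "social" (citizen report)
    phenomenon: str     # e.g. "traffic_congestion", "air_temperature"
    value: float        # numeric reading or derived score
    latitude: float
    longitude: float
    timestamp: datetime
    confidence: float   # sensors ~1.0; social reports usually lower

def fuse(observations, phenomenon):
    """Confidence-weighted average of co-located observations of one phenomenon."""
    relevant = [o for o in observations if o.phenomenon == phenomenon]
    if not relevant:
        return None
    total_weight = sum(o.confidence for o in relevant)
    return sum(o.value * o.confidence for o in relevant) / total_weight

# Example: a fixed sensor reading and a citizen report about the same street.
obs = [
    Observation("physical", "traffic_congestion", 0.8, 48.85, 2.35,
                datetime.now(timezone.utc), confidence=0.9),
    Observation("social", "traffic_congestion", 1.0, 48.85, 2.35,
                datetime.now(timezone.utc), confidence=0.4),
]
print(fuse(obs, "traffic_congestion"))  # ~0.86
```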

    A Smart Products Lifecycle Management (sPLM) Framework - Modeling for Conceptualization, Interoperability, and Modularity

    Autonomy and intelligence have been built into many of today’s mechatronic products, taking advantage of low-cost sensors and advanced data analytics technologies. Designing product intelligence (enabled by analytics capabilities) is no longer a trivial or optional part of product development. This research addresses the challenges raised by the new data-driven design paradigm for smart products, in which the product itself and its smartness must be carefully co-constructed. A smart product can be seen as a specific composition and configuration of its physical components, which form the body, and its analytics models, which implement the intelligence, evolving together along its lifecycle stages. Based on this view, the contribution of this research is to expand the Product Lifecycle Management (PLM) concept, traditionally applied to physical products, to data-based products. As a result, a Smart Products Lifecycle Management (sPLM) framework is conceptualized based on a high-dimensional Smart Product Hypercube (sPH) representation and decomposition. First, the sPLM addresses interoperability issues by developing a Smart Component data model to uniformly represent and compose physical component models created by engineers and analytics models created by data scientists. Second, the sPLM implements an NPD3 process model that incorporates a formal data analytics process into the new product development (NPD) process model, in order to support the transdisciplinary information flows and team interactions between engineers and data scientists. Third, the sPLM addresses the issues related to product definition, modular design, product configuration, and lifecycle management of analytics models by adapting the theoretical frameworks and methods of traditional product design and development. An sPLM proof-of-concept platform has been implemented to validate the concepts and methodologies developed throughout the research work. The sPLM platform provides a shared data repository to manage product-, process-, and configuration-related knowledge for smart products development. It also provides a collaborative environment to facilitate transdisciplinary collaboration between product engineers and data scientists.
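    A minimal sketch of what a Smart Component data model could look like, assuming hypothetical class and field names rather than the dissertation's actual schema; it simply composes a physical component description (the engineer's view) with the analytics models (the data scientist's view) and filters them by lifecycle stage.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative "Smart Component" record: one object composing a physical
# component model with the analytics models that give it its intelligence.
# All names, fields and URIs below are assumptions for the sketch.
@dataclass
class AnalyticsModel:
    name: str               # e.g. "bearing_wear_predictor"
    version: str
    training_data_uri: str  # provenance, needed for lifecycle management
    lifecycle_stage: str    # "development", "deployed", "retired"

@dataclass
class PhysicalComponent:
    part_number: str
    cad_model_uri: str
    material: str

@dataclass
class SmartComponent:
    component_id: str
    physical: PhysicalComponent
    analytics: List[AnalyticsModel] = field(default_factory=list)

    def configure(self, stage: str) -> "SmartComponent":
        """Return a configuration exposing only analytics models valid for a lifecycle stage."""
        kept = [m for m in self.analytics if m.lifecycle_stage == stage]
        return SmartComponent(self.component_id, self.physical, kept)

# Example: a motor spindle whose wear predictor is already deployed.
spindle = SmartComponent(
    "SC-001",
    PhysicalComponent("P-7741", "plm://cad/spindle_v3.step", "steel"),
    [AnalyticsModel("bearing_wear_predictor", "1.2", "plm://data/run_42", "deployed"),
     AnalyticsModel("thermal_drift_model", "0.9", "plm://data/run_17", "development")],
)
print(len(spindle.configure("deployed").analytics))  # 1
```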

    Characterization of Enhanced Interferometric Gravitational Wave Detectors and Studies of Numeric Simulations for Compact-Binary Coalescences

    Gravitational waves are a consequence of the general theory of relativity. Direct detection of such waves will provide a wealth of information about physics, astronomy, and cosmology. A worldwide effort is currently underway to make the first direct detection of gravitational waves. The global network of detectors includes the Laser Interferometer Gravitational-wave Observatory (LIGO), which recently completed its sixth science run. A particularly promising source of gravitational waves is a binary system consisting of two neutron stars and/or black holes. As the objects orbit each other they emit gravitational radiation, lose energy, and spiral inwards. This produces a characteristic "chirp" signal for which we can search in the LIGO data. Currently this is done using matched-filter techniques, which correlate the detector data against analytic models of the emitted gravitational waves. Several choices must be made in constructing a search for signals from such binary coalescences. Any discrepancy between the signals and the models used will reduce the effectiveness of the matched filter. However, the analytic models are based on approximations which are not valid through the entire evolution of the binary. In recent years numerical relativity has had impressive success in simulating the final phases of the coalescence of binary black holes. While numerical relativity is too computationally expensive to use directly in the search, this progress has made it possible to perform realistic tests of the LIGO searches. The results of such tests can be used to improve the efficiency of searches. Conversely, noise in the LIGO and Virgo detectors can reduce that efficiency. This must be addressed by characterizing the quality of the data from the detectors and removing from the analysis times that would be detrimental to the search. In this thesis we utilize recent results from numerical relativity to study both the degree to which analytic models match realistic waveforms and the ability of LIGO searches to make detections. We also apply the matched-filter search to the problem of removing times of excess noise from the search.
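    A minimal sketch of the matched-filter idea described above, under simplifying assumptions (white noise, a toy chirp template, no template bank); real LIGO searches whiten the data with the measured noise spectrum and correlate against banks of analytic waveforms.

```python
import numpy as np

def matched_filter_snr(data, template):
    """Correlate data against a template and return an SNR time series.

    Assumes white noise; the normalization makes pure noise give SNR ~ 1.
    """
    n = len(data)
    tmpl = np.zeros(n)
    tmpl[:len(template)] = template
    # Frequency-domain (circular) correlation of data with the template.
    corr = np.fft.ifft(np.fft.fft(data) * np.conj(np.fft.fft(tmpl))).real
    sigma = np.sqrt(np.sum(template ** 2) * np.var(data))
    return corr / sigma

# Toy example: a short "chirp" buried in white noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4096)
chirp = np.sin(2 * np.pi * (30 * t + 40 * t ** 2))[:1024]  # frequency sweeps upward
data = rng.normal(scale=1.0, size=8192)
data[3000:3000 + len(chirp)] += 0.5 * chirp                # inject a weak signal
snr = matched_filter_snr(data, chirp)
print("loudest SNR", snr.max(), "at sample", snr.argmax())  # peak near sample 3000
```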

    D6.2 Integrated Final Version of the Components for Lexical Acquisition

    The PANACEA project has addressed one of the most critical bottlenecks that threaten the development of technologies to support multilingualism in Europe, and to process the huge quantity of multilingual data produced annually. Any attempt at automated language processing, particularly Machine Translation (MT), depends on the availability of language-specific resources. Such Language Resources (LR) contain information about the language’s lexicon, i.e. the words of the language and the characteristics of their use. In Natural Language Processing (NLP), LRs contribute information about the syntactic and semantic behaviour of words - i.e. their grammar and their meaning - which informs downstream applications such as MT. To date, many LRs have been generated by hand, requiring significant manual labour from linguistic experts. However, proceeding manually, it is impossible to supply LRs for every possible pair of European languages, textual domain, and genre needed by MT developers. Moreover, an LR for a given language can never be considered complete or final because of the characteristics of natural language, which continually undergoes changes, especially spurred on by the emergence of new knowledge domains and new technologies. PANACEA has addressed this challenge by building a factory of LRs that progressively automates the stages involved in the acquisition, production, updating and maintenance of LRs required by MT systems. The existence of such a factory will significantly cut down the cost, time and human effort required to build LRs. WP6 has addressed the lexical acquisition component of the LR factory, that is, the techniques for automated extraction of key lexical information from texts and the automatic collation of lexical information into LRs in a standardized format. The goal of WP6 has been to take existing techniques capable of acquiring syntactic and semantic information from corpus data, improve upon them, adapt and apply them to multiple languages, and turn them into powerful and flexible techniques capable of supporting massive applications. One focus for improving the scalability and portability of lexical acquisition techniques has been to extend existing techniques with more powerful, less "supervised" methods. In NLP, the amount of supervision refers to the amount of manual annotation which must be applied to a text corpus before machine learning or other techniques are applied to the data to compile a lexicon. More manual annotation means more accurate training data, and thus a more accurate LR. However, given that it is impractical from a cost and time perspective to manually annotate the vast amounts of data required for multilingual MT across domains, it is important to develop techniques which can learn from corpora with less supervision. Less supervised methods are capable of supporting both large-scale acquisition and efficient domain adaptation, even in domains where data is scarce. Another focus of lexical acquisition in PANACEA has been the need for LR users to tune the accuracy level of LRs. Some applications may require increased precision, or accuracy, where the application requires a high degree of confidence in the lexical information used. At other times a greater level of coverage may be required, with information about more words at the expense of some degree of accuracy. Lexical acquisition in PANACEA has therefore investigated confidence thresholds for lexical acquisition, to ensure that the ultimate users of LRs can generate lexical data from the PANACEA factory at the desired level of accuracy.
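    A minimal sketch of the precision/coverage trade-off described above, assuming hypothetical acquired entries and confidence scores rather than actual PANACEA output: raising the confidence threshold keeps fewer, more reliable entries; lowering it keeps more entries at the cost of accuracy.

```python
# Invented example entries: lemmas with automatically acquired subcategorization
# frames and an acquisition confidence score (all values are illustrative).
acquired_entries = [
    {"lemma": "absorb", "subcat_frame": "NP V NP", "confidence": 0.93},
    {"lemma": "absorb", "subcat_frame": "NP V PP(into)", "confidence": 0.71},
    {"lemma": "glisten", "subcat_frame": "NP V NP", "confidence": 0.18},
]

def build_lexicon(entries, threshold):
    """Keep only entries whose acquisition confidence meets the threshold."""
    return [e for e in entries if e["confidence"] >= threshold]

print(len(build_lexicon(acquired_entries, 0.9)))   # 1 entry: high precision
print(len(build_lexicon(acquired_entries, 0.5)))   # 2 entries: more coverage
print(len(build_lexicon(acquired_entries, 0.1)))   # 3 entries: includes noise
```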

    Seventh Biennial Report: June 2003 - March 2005


    PREDON Scientific Data Preservation 2014

    LPSC14037. Scientific data collected with modern sensors or dedicated detectors very often exceed the scope of the initial scientific design. These data are increasingly obtained through large material and human efforts. A large class of scientific experiments is in fact unique because of its large scale, with very small chances of being repeated or superseded by new experiments in the same domain: for instance, high energy physics and astrophysics experiments involve multi-year development, and simply duplicating the effort in order to reproduce old data is not affordable. Other scientific experiments are unique by nature (earth science, medical sciences, etc.), since the collected data are "time-stamped" and thereby non-reproducible by new experiments or observations. In addition, scientific data collection has increased dramatically in recent years, contributing to the so-called "data deluge" and inviting common reflection in the context of "big data" investigations. The new knowledge obtained using these data should be preserved for the long term, so that access and re-use remain possible and enhance the initial investment. Data observatories, based on open access policies and coupled with multi-disciplinary techniques for indexing and mining, may lead to truly new paradigms in science. It is therefore of utmost importance to pursue a coherent and vigorous approach to preserving scientific data over the long term. Preservation nevertheless remains a challenge due to the complexity of the data structures, the fragility of custom-made software environments, and the lack of rigorous approaches in workflows and algorithms. To address this challenge, the PREDON project was initiated in France in 2012 within the MASTODONS program, a Big Data scientific challenge initiated and supported by the Interdisciplinary Mission of the National Centre for Scientific Research (CNRS). PREDON is a study group formed by researchers from different disciplines and institutes. Several meetings and workshops have led to a rich exchange of ideas, paradigms and methods. The present document includes contributions of the participants to the PREDON Study Group, as well as invited papers, related to the scientific case, methodology and technology. This document should be read as a "fact-finding" resource pointing to a concrete and significant scientific interest in long-term research data preservation, as well as to cutting-edge methods and technologies to achieve this goal. A sustained, coherent and long-term action in the area of scientific data preservation would be highly beneficial.