117 research outputs found

    A Web3D Enabled Information Integration Framework for Facility Management

    Managing capital oil and gas and civil engineering facilities requires a large amount of heterogeneous information that is generated by different project stakeholders across the facility lifecycle phases and is stored in various databases and technical documents. The amount of information peaks during the commissioning and handover phases, when the project is handed over to the operator. The operational phase of a facility spans multiple decades, and the way facilities are used and maintained has a huge impact on costs, the environment, productivity, and health and safety. Thus, the client and the operator bear most of the additional costs associated with incomplete, incorrect, or not immediately usable information. Web applications can provide quick and convenient access to information regardless of user location. However, the integration and delivery of engineering information, including 3D content, over the Web is still in its infancy and is affected by numerous technical (i.e. data and tools) and procedural (i.e. process and people) challenges. This paper addresses the technical issues and proposes a Web3D-enabled information integration framework that delivers engineering information together with 3D content without any plug-ins. In the proposed framework, a class library defines the engineering data requirements, and a semi-structured database provides the means to integrate heterogeneous technical asset information. The framework also enables separating the 3D model content into fragments, storing them together with the digital assets, and delivering them to the client browser on demand. Such a framework partially alleviates the current limitations of JavaScript-based 3D content delivery, such as application speed and latency. Hence, the proposed framework is particularly valuable to petroleum and civil engineering companies working with large amounts of data.
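    The fragment-on-demand idea can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the class names, asset tags, and JSON-like layout are all assumptions made for the example.

```python
class FragmentStore:
    """Semi-structured store keeping engineering attributes and 3D model
    fragments side by side, keyed by asset tag (hypothetical sketch)."""

    def __init__(self):
        self._docs = {}  # asset tag -> document with attributes + geometry

    def add_asset(self, tag, attributes, mesh_fragment):
        # Engineering data and the asset's slice of the 3D model live together.
        self._docs[tag] = {"attributes": attributes, "fragment": mesh_fragment}

    def fragment(self, tag):
        """Return only one asset's geometry, so a Web3D viewer can request
        fragments on demand instead of downloading the whole model."""
        return self._docs[tag]["fragment"]

    def attributes(self, tag):
        return self._docs[tag]["attributes"]

store = FragmentStore()
store.add_asset("P-101", {"type": "pump", "duty": "crude transfer"},
                {"vertices": [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                 "faces": [[0, 1, 2]]})
print(store.fragment("P-101")["faces"])  # only this asset's geometry is shipped
```

    Serving each fragment as a separate document is what lets the browser fetch geometry incrementally, which is the property the abstract credits with reducing latency.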

    Community detection applied on big linked data

    The Linked Open Data (LOD) Cloud has more than tripled its sources in just six years (from 295 sources in 2011 to 1,163 datasets in 2017). The current Web of Data contains more than 150 billion triples. We are witnessing a staggering growth in the production and consumption of LOD and the generation of increasingly large datasets. In this scenario, providing researchers, domain experts, but also businesses and citizens with visual representations and intuitive interactions can significantly aid the exploration and understanding of the domains and knowledge represented by Linked Data. Various tools and web applications have been developed to enable the navigation and browsing of the Web of Data. However, these tools fall short in producing high-level representations of large datasets and in supporting users in the exploration and querying of these big sources. Following this trend, we devised a new method and a tool called H-BOLD (High-level visualizations on Big Open Linked Data). H-BOLD enables the exploratory search and multilevel analysis of Linked Open Data by offering different levels of abstraction on Big Linked Data. Through user interaction and the dynamic adaptation of the graph representing the dataset, users can effectively explore the dataset, starting from a small set of classes and adding new ones. The performance and portability of H-BOLD have been evaluated on the SPARQL endpoints listed on SPARQL Endpoint Status. The effectiveness of H-BOLD as a visualization tool is demonstrated through a user study.
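    The high-level abstraction can be illustrated by collapsing instance-level triples into a class-level summary graph, counting how many links connect each pair of classes. This is a toy sketch of the general idea, not H-BOLD's code; the sample data and function name are invented for the example.

```python
from collections import Counter

def class_summary(triples, types):
    """Collapse instance-level (subject, predicate, object) triples into a
    class-level summary: (subject class, predicate, object class) -> count."""
    summary = Counter()
    for s, p, o in triples:
        if s in types and o in types:  # keep only typed resources
            summary[(types[s], p, types[o])] += 1
    return summary

types = {"alice": "Person", "bob": "Person", "acme": "Company"}
triples = [("alice", "worksFor", "acme"),
           ("bob", "worksFor", "acme"),
           ("alice", "knows", "bob")]
print(class_summary(triples, types))
```

    A user exploring the summary sees a few class nodes ("Person", "Company") instead of millions of instances, and can expand a class to pull in its neighbours, which mirrors the incremental exploration the abstract describes.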

    Weiterentwicklung analytischer Datenbanksysteme

    This thesis contributes to the state of the art in analytical database systems. First, we identify and explore extensions to better support analytics on event streams. Second, we propose a novel polygon index to enable efficient geospatial data processing in main memory. Third, we contribute a new deep learning approach to cardinality estimation, which is the core problem in cost-based query optimization.
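    The geospatial workload behind the proposed polygon index boils down to point-in-polygon tests. The sketch below shows the standard ray-casting (crossing-number) test that such an index accelerates; it is not the thesis's index structure, just the underlying primitive.

```python
def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test: count how many polygon edges a
    horizontal ray from pt crosses; an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge straddle the horizontal line through pt?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:  # crossing lies to the right of pt
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))  # True
print(point_in_polygon((5, 2), square))  # False
```

    The linear scan over edges is what an in-memory polygon index avoids repeating for every query point, typically by pre-partitioning the polygon's interior.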

    Educational Technology and Related Education Conferences for January to June 2011 - November 11, 2010

    If you attend the same conferences each year, you don’t need to scan this list. This list is your opportunity to “push the envelope” by trying something new. There are hundreds of professional development events that may give you a different perspective or help you learn a new skill. Rather than attend the same event you always do, scan this list and investigate conferences, symposiums, or workshops you have never attended. The list below covers selected events focused primarily on the use of technology in educational settings and on teaching, learning, and educational administration. Only listings through June 2011 are complete, as dates, locations, or URLs are not yet available for a number of events held after June 2011. The Word 2003 format is used so that people who do not have access to Word 2007 or a higher version, and those with limited or high-cost Internet access, can find a conference that is congruent with their interests or obtain conference proceedings. (If you are seeking a more interactive listing, refer to online conference sites.) Consider using the “Find” tool under Microsoft Word’s “Edit” tab, or the similar tab in OpenOffice, to locate the name of a particular conference, association, city, or country. If you enter the country “United Kingdom” in the “Find” tool, all conferences that occur in the United Kingdom will be highlighted. Then “cut and paste” a list of suitable events for yourself and your colleagues. Please note that events, dates, titles, and locations may change; thus, CHECK the specific conference website. Note also that some events may be cancelled at a later date. All Internet addresses were verified at the time of publication. No liability is assumed for any errors that may have been introduced inadvertently during the assembly of this conference list. If possible, please do not remove the contact information when you redistribute the list, as that is how I receive updates and corrections. If you publish the list on the web, please note its source.

    Finite Automata Algorithms in Map-Reduce

    In this thesis, the intersection of several large nondeterministic finite automata (NFAs) and the minimization of a large deterministic finite automaton (DFA) in map-reduce are studied. We derive a lower bound on the replication rate for computing NFA intersections and provide three concrete algorithms for the problem. Our investigation of the replication rate of each of the three algorithms, through detailed experiments on large datasets of finite automata, shows where each algorithm is best applied. Denoting by n the number of states of a DFA A, we propose an algorithm to minimize A in n map-reduce rounds in the worst case. Our experiments, however, indicate that in practice the number of rounds is much smaller than n for all DFAs we examined: the algorithm converges in d iterations by computing the equivalence classes of the states, where d is the diameter of the input DFA.
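    The round-by-round refinement of equivalence classes can be sketched in plain Python (Moore-style partition refinement; one loop iteration stands in for one map-reduce round). This is an illustrative reconstruction under that assumption, not the thesis's map-reduce implementation.

```python
def minimize_classes(states, alphabet, delta, accepting):
    """Iteratively refine state equivalence classes until stable.
    Each iteration corresponds to one map-reduce round in the scheme above;
    the partition converges once no round splits any class further."""
    # Round 0: two classes, accepting vs. non-accepting.
    cls = {q: int(q in accepting) for q in states}
    rounds = 0
    while True:
        # Signature of a state: its own class plus the class reached
        # on each input symbol (the "map" key in a map-reduce round).
        sig = {q: (cls[q],) + tuple(cls[delta[q][a]] for a in alphabet)
               for q in states}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_cls = {q: ids[sig[q]] for q in states}
        rounds += 1
        if len(set(new_cls.values())) == len(set(cls.values())):
            return new_cls, rounds  # no class was split: fixed point
        cls = new_cls

# Toy DFA over {a}: 0 -a-> 1 -a-> 2 -a-> 2, accepting state 2.
states = [0, 1, 2]
delta = {0: {"a": 1}, 1: {"a": 2}, 2: {"a": 2}}
cls, rounds = minimize_classes(states, ["a"], delta, {2})
print(len(set(cls.values())), rounds)  # 3 classes, converged in 2 rounds
```

    The number of classes can only grow between rounds, so the loop terminates after at most n rounds, matching the worst-case bound in the abstract; the early fixed point corresponds to the diameter-bounded convergence observed experimentally.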

    How Fast Can We Play Tetris Greedily With Rectangular Pieces?

    Consider a variant of Tetris played on a board of width w and infinite height, where the pieces are axis-aligned rectangles of arbitrary integer dimensions, the pieces can only be moved before letting them drop, and a row does not disappear once it is full. Suppose we want to follow a greedy strategy: let each rectangle fall where it will end up the lowest given the current state of the board. To do so, we want a data structure which can always suggest a greedy move. In other words, we want a data structure which maintains a set of O(n) rectangles, supports queries which return where to drop the rectangle, and updates which insert a rectangle dropped at a certain position and return the height of the highest point in the updated set of rectangles. We show via a reduction to the Multiphase problem [Pătrașcu, 2010] that on a board of width w = Θ(n), if the OMv conjecture [Henzinger et al., 2015] is true, then both operations cannot be supported in time O(n^{1/2−Δ}) simultaneously. The reduction also implies polynomial bounds from the 3-SUM conjecture and the APSP conjecture. On the other hand, we show that there is a data structure supporting both operations in O(n^{1/2} log^{3/2} n) time on boards of width n^{O(1)}, matching the lower bound up to an n^{o(1)} factor.
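    The greedy move itself is easy to state with a naive skyline scan, which makes the problem concrete even though it misses the paper's bounds: the sketch below costs O(w) per operation, whereas the paper's data structure supports both operations in roughly O(n^{1/2}) time. Function names and the example board are invented for illustration.

```python
def drop_rectangle(heights, width, piece_height):
    """Greedy move on a skyline: let a (width x piece_height) rectangle
    fall where it ends up lowest, update the skyline, and return the new
    maximum height. Naive O(w) scan per operation."""
    best_x, best_top = 0, float("inf")
    for x in range(len(heights) - width + 1):
        top = max(heights[x:x + width])  # the rectangle rests on this level
        if top < best_top:               # leftmost lowest position wins
            best_x, best_top = x, top
    for x in range(best_x, best_x + width):
        heights[x] = best_top + piece_height
    return max(heights)

board = [0] * 6                      # board of width 6, initially empty
print(drop_rectangle(board, 3, 2))   # 3-wide piece lands at the left
print(drop_rectangle(board, 2, 1))   # next piece falls into the empty columns
```

    The paper's contribution is precisely to beat this linear scan: answering "where is the lowest landing spot?" and applying the update, each in sublinear time, is what the reduction shows cannot be done much faster than n^{1/2} under the OMv conjecture.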

    Data-driven and production-oriented tendering design using artificial intelligence

    Construction projects are facing an increase in requirements, since projects are getting larger, more technology is integrated into the buildings, and new sustainability and CO2-equivalent emissions requirements are introduced. As a result, requirement management quickly gets overwhelming, and instead of having systematic requirement management, the construction industry tends to trust craftsmanship. One method for a more systematic requirement management approach, successful in other industries, is the systems engineering approach, focusing on requirement decomposition and linking proper verifications and validations. This research project explores whether a systems engineering approach, supported by natural language processing techniques, can enable more systematic requirement management in construction projects and facilitate knowledge transfer from completed projects to new tendering projects.

    The first part of the project explores how project requirements can be extracted, digitised, and analysed in an automated way and how this can benefit tendering specialists. The study is conducted by first developing a work support tool targeting tendering specialists and then evaluating the challenges and benefits of such a tool through a workshop and surveys. The second part of the project explores inspection data generated in production software as a requirement and quality verification method. First, a dataset containing over 95,000 production issues is examined to understand the level of data quality and standardisation. Second, a survey addressing production specialists evaluates the current benefits of digital inspection reporting. Third, future benefits of using inspection data for knowledge transfer are explored by applying the Knowledge Discovery in Databases method and clustering techniques.

    The results show that applying natural language processing techniques can be a helpful tool for analysing construction project requirements, facilitating the identification of essential requirements, and enabling benchmarking between projects. The results from the clustering process suggested in this thesis show that inspection data can be used as a knowledge base for future projects and quality improvement within a project-based organisation. However, higher data quality and standardisation would benefit the knowledge-generation process. This research project provides insights into how artificial intelligence can facilitate knowledge transfer, enable data-informed design choices in tendering projects, and automate requirements analysis in construction projects, as a possible step towards more systematic requirements management.
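    Clustering free-text inspection issues can be illustrated with a deliberately simple sketch: token-set similarity plus a greedy single-pass grouping. This is not the thesis's method (which applies the KDD process and dedicated clustering techniques); the similarity measure, threshold, and sample issues here are all assumptions for illustration.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two issue descriptions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def cluster_issues(issues, threshold=0.3):
    """Greedy single-pass clustering: attach each issue to the first
    cluster whose representative (first member) is similar enough,
    otherwise start a new cluster."""
    clusters = []  # each cluster is a list of issue strings
    for issue in issues:
        for c in clusters:
            if jaccard(issue, c[0]) >= threshold:
                c.append(issue)
                break
        else:
            clusters.append([issue])
    return clusters

issues = ["crack in concrete wall",
          "crack found in concrete slab",
          "missing fire seal in shaft",
          "fire seal missing at penetration"]
clusters = cluster_issues(issues)
print(len(clusters))  # the four issues collapse into 2 thematic groups
```

    Grouping recurring issue themes in this way is what turns raw inspection records into a reusable knowledge base; the abstract's point about data quality applies directly, since inconsistent wording in the free text weakens any similarity measure.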
    • 

    corecore