
    Designing a Framework for Exchanging Partial Sets of BIM Information on a Cloud-Based Service

    The rationale behind this research study was the recognised difficulty of exchanging data at element or object level, caused by incompatibilities between hardware and software. Interoperability describes the need to pass data between applications, allowing multiple types of experts and applications to contribute to the work at hand. The only way that file exchanges between two applications can produce consistent data and change management results for large projects is through a building model repository. The overall aim of this thesis was to design and develop an integrated process that would advance key decisions at an early design stage through faster information exchanges during collaborative work. In the construction industry, Building Information Modeling (BIM) is the most integrated shared model across all disciplines. It is based on a manufacturing-like process in which standardised deliverables are used throughout the life cycle, with effective collaboration as its main driving force. The dilemma, however, is how to share the properties of BIM applications asynchronously on a single platform. Cloud Computing is a centralized heterogeneous network that enables different applications to be connected to each other. The research methodology was based on triangulation of data, incorporating a mixture of quantitative and qualitative techniques. The results identified the need to re-engineer Simplified Markup Language in order to exchange partial data sets of intelligent object architecture on an integrated platform. The designed and tested prototype produced findings that enhanced project decisions at a relatively early design stage, improved communication and collaboration techniques, and strengthened cross-discipline co-ordination.
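
    The abstract describes the prototype only at a high level. As a rough, hypothetical sketch of the central idea, exchanging only a partial set of model objects through a cloud service rather than the whole model, the snippet below filters a toy building model down to one discipline's elements and prepares them for upload; the model structure, element names, and endpoint URL are invented for illustration and are not the thesis's actual schema or platform.

```python
import json
from urllib import request

# Toy in-memory building model: each element is an object tagged with a discipline.
model = {
    "W-101": {"type": "Wall", "discipline": "architecture", "height_mm": 3000},
    "D-042": {"type": "Duct", "discipline": "mechanical", "diameter_mm": 250},
    "B-007": {"type": "Beam", "discipline": "structural", "span_mm": 6000},
}

def partial_set(model, discipline):
    """Select only the elements relevant to one discipline (a 'partial set')."""
    return {eid: e for eid, e in model.items() if e["discipline"] == discipline}

def push_to_cloud(elements, url="https://bim-hub.example.com/api/partial-exchange"):
    """POST the partial set to a hypothetical cloud model repository."""
    payload = json.dumps({"elements": elements}).encode("utf-8")
    req = request.Request(url, data=payload, headers={"Content-Type": "application/json"})
    return request.urlopen(req)  # would require a real endpoint to succeed

structural = partial_set(model, "structural")
print(json.dumps(structural, indent=2))  # only B-007 is exchanged
# push_to_cloud(structural) would upload it to the (hypothetical) service.
```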

    Survey over Existing Query and Transformation Languages

    A widely acknowledged obstacle to realizing the vision of the Semantic Web is the inability of many current Semantic Web approaches to cope with data available in such diverging representation formalisms as XML, RDF, or Topic Maps. A common query language is the first step towards transparent access to data in any of these formats. To further the understanding of the requirements and approaches proposed for query languages in the conventional Web as well as the Semantic Web, this report surveys a large number of query languages for accessing XML, RDF, or Topic Maps. This is the first systematic survey to consider query languages from all these areas. From the detailed survey of these query languages, a common classification scheme is derived that is useful for understanding and differentiating languages within and among all three areas.
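
    As a small illustration of why a common query language matters, the hedged sketch below asks the same question of the same fact stored once as XML and once as RDF, using Python's standard xml.etree module and the third-party rdflib package; the data and queries are toy examples, not ones taken from the survey.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph  # third-party: pip install rdflib

# The same fact ("Alice knows Bob") in two representation formalisms.
xml_doc = "<people><person name='Alice'><knows>Bob</knows></person></people>"
rdf_doc = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
_:alice foaf:name "Alice" ; foaf:knows [ foaf:name "Bob" ] .
"""

# XML side: navigate the tree with an XPath-like expression.
root = ET.fromstring(xml_doc)
print([k.text for k in root.findall("./person[@name='Alice']/knows")])  # ['Bob']

# RDF side: the same question asked in SPARQL.
g = Graph().parse(data=rdf_doc, format="turtle")
q = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?friend WHERE { ?p foaf:name "Alice" ; foaf:knows [ foaf:name ?friend ] . }
"""
print([str(row.friend) for row in g.query(q)])  # ['Bob']
```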

    Text search using semantic similarity in the biological and biomedical context

    Undergraduate monograph (graduação), Universidade de Brasília, Faculdade UnB Gama, Software Engineering programme, 2015. With the growing volume of biological and biomedical literature, there is an increasing need for search mechanisms that return more than a traditional keyword search can offer. This work studies a semantic search method in the biological and biomedical context, and implements and evaluates an algorithm with the aim of proposing improvements. A bibliographical study of the concepts involved was conducted, followed by a characterisation of the demand for such an algorithm in this context. The algorithm uses ontologies: a search query and the articles in a corpus, both containing terms present in the ontologies used, are compared in order to find the texts that are most semantically similar. The implementation also uses the Semantic Measures Library. In the first part of the work, the bibliographical study, the characterisation of the demand, and the implementation of the algorithm were completed. The second part addressed the improvements to and evaluation of the algorithm. With the implementation obtained so far, the running times are not yet satisfactory.
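
    The Semantic Measures Library used in the work is a Java library; purely as a language-agnostic illustration of the underlying idea, comparing the ontology terms of a query against the terms annotated on each article, the following Python sketch ranks toy articles with a simple ancestor-overlap similarity. The ontology fragment, articles, and similarity measure are all assumptions made for the example, not the measures or data used in the monograph.

```python
# Toy ontology as a parent map (a fragment of a Gene Ontology-like hierarchy).
PARENT = {
    "insulin_signaling": "signal_transduction",
    "mapk_cascade": "signal_transduction",
    "signal_transduction": "biological_process",
    "dna_repair": "biological_process",
}

def ancestors(term):
    """Return the term plus all of its ancestors up to the root."""
    out = {term}
    while term in PARENT:
        term = PARENT[term]
        out.add(term)
    return out

def term_sim(a, b):
    """Jaccard overlap of ancestor sets: a crude stand-in for an SML measure."""
    A, B = ancestors(a), ancestors(b)
    return len(A & B) / len(A | B)

def doc_sim(query_terms, doc_terms):
    """Best-match average: how well each query term is covered by the document."""
    return sum(max(term_sim(q, d) for d in doc_terms) for q in query_terms) / len(query_terms)

articles = {
    "paper_1": ["mapk_cascade", "dna_repair"],
    "paper_2": ["dna_repair"],
}
query = ["insulin_signaling"]
ranking = sorted(articles, key=lambda p: doc_sim(query, articles[p]), reverse=True)
print(ranking)  # paper_1 ranks first: mapk_cascade shares ancestors with insulin_signaling
```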

    New IR & Ranking Algorithm for Top-K Keyword Search on Relational Databases ‘Smart Search’

    Database management systems are as old as computers, and research and development in databases remains extensive and of interest to many database vendors and researchers. Much of this work aims to build new modules and frameworks for more efficient and effective information retrieval based on free-form search by users who have no knowledge of the structure of the database. Our work extends previous efforts by introducing new algorithms and components on top of existing databases, enabling users to search for keywords with high performance and effective top-k results. This work introduces a new table structure for indexing keywords, which helps the algorithms understand the semantics of keywords and generate only the correct CNs (Candidate Networks) for fast retrieval of information, with results ranked according to the user's history, the semantics of the keywords, the distance between keywords, and keyword matches. Three modules were developed for this purpose. We implemented the three proposed modules, created the necessary tables, and developed a web search interface called 'Smart Search' to test our work with different users. The interface records all user interaction with Smart Search for analysis, and the analysis of results shows improvements in performance and in the effectiveness of the results returned to the user. We ran hundreds of randomly generated search queries of different sizes with multiple users; all results recorded and analysed by the system were based on different factors and parameters. We also compared our results with previous work by other researchers on the DBLP database, which we used in our research. The final analysis shows the importance of introducing new components to the database for top-k keyword search and demonstrates the performance of our proposed system with highly effective results.
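
    The thesis's exact index-table structure and three modules are not given in the abstract; the sketch below shows, under assumed names and a generic schema, how a keyword index table over a relational database can map free-form keywords to the tables, columns, and rows where they occur, which is the general idea behind generating candidate networks without requiring the user to know the schema.

```python
import sqlite3

# A minimal keyword index over a relational database: each row records where a
# keyword occurs (table, column, row id), so a free-form query can be mapped to
# candidate tuples. The schema is a generic illustration, not the thesis's table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE papers(id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE authors(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE kw_index(keyword TEXT, src_table TEXT, src_column TEXT, row_id INTEGER);
""")
con.executemany("INSERT INTO papers VALUES (?, ?)",
                [(1, "Keyword Search in Databases"), (2, "Ranking Query Results")])
con.executemany("INSERT INTO authors VALUES (?, ?)", [(1, "Ada Keyword")])

def index_column(table, column):
    """Tokenise one text column and record every keyword occurrence in kw_index."""
    for row_id, value in con.execute(f"SELECT id, {column} FROM {table}"):
        for kw in value.lower().split():
            con.execute("INSERT INTO kw_index VALUES (?, ?, ?, ?)", (kw, table, column, row_id))

index_column("papers", "title")
index_column("authors", "name")

# A user query is answered by probing the index, not by scanning every table.
for hit in con.execute("SELECT src_table, src_column, row_id FROM kw_index WHERE keyword = ?",
                       ("keyword",)):
    print(hit)  # occurrences in papers.title (row 1) and authors.name (row 1)
```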

    On the execution of high level formal specifications

    Executable specifications can serve as prototypes of the specified system and as oracles for automated testing of implementations, and so are more useful than non-executable specifications. Executable specifications can also be debugged in much the same way as programs, allowing errors to be detected and corrected at the specification level rather than in later stages of software development. However, existing executable specification languages often force the specifier to work at a low level of abstraction, which negates many of the advantages of non-executable specifications. This dissertation shows how to execute specifications written at a level of abstraction comparable to that found in specifications written in non-executable specification languages. The key innovation is an algorithm for evaluating and satisfying first-order predicate logic assertions written over abstract model types. This is important because many specification languages use such assertions. Some of the features of this algorithm were inspired by techniques from constraint logic programming.
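
    The dissertation's satisfaction algorithm is not reproduced in the abstract. Purely to illustrate the kind of problem it addresses, the naive sketch below satisfies an existentially quantified first-order assertion over small finite domains by enumeration; a real specification execution engine would prune this search with constraint-solving techniques, as the abstract notes.

```python
from itertools import product

def satisfy(domains, predicate):
    """Naively search for a binding of variables to domain values satisfying predicate.

    domains: dict mapping variable name -> finite collection of candidate values.
    predicate: callable taking keyword arguments and returning True/False.
    Returns a satisfying binding, or None. Brute-force enumeration stands in here
    for the smarter constraint-based satisfaction an execution engine would use.
    """
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        binding = dict(zip(names, values))
        if predicate(**binding):
            return binding
    return None

# Spec fragment: "there exist x, y in S such that x < y and x + y == 10"
S = range(10)
print(satisfy({"x": S, "y": S}, lambda x, y: x < y and x + y == 10))  # {'x': 1, 'y': 9}
```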

    Learning classifiers from linked data

    The emergence of many interlinked, physically distributed, and autonomously maintained linked data sources amounts to the rapid growth of the Linked Open Data (LOD) cloud, which offers unprecedented opportunities for predictive modeling and knowledge discovery from such data. However, existing machine learning approaches are limited in their applicability because it is neither desirable nor feasible to gather all of the data in a centralized location for analysis due to access, memory, bandwidth, or computational restrictions. In some applications, additional schema information such as subclass hierarchies may be available and can be exploited by the learner. Furthermore, in other applications, the attributes that are relevant for specific prediction tasks are not known a priori and hence need to be discovered by the algorithm. Against this background, we present a series of approaches that address such scenarios. First, we show how to learn Relational Bayesian Classifiers (RBCs) from a single but remote data store using statistical queries, and we extend this to the setting where the attributes relevant for prediction are not known a priori by selectively crawling the data store for attributes of interest. Next, we introduce an algorithm for learning classifiers from a remote data store enriched with subclass hierarchies. Our algorithm encodes the constraints specified in a subclass hierarchy using latent variables in a directed graphical model, and adopts the Variational Bayesian EM approach to learn parameters efficiently. In retrospect, we observe that in learning from linked data it is often useful to represent an instance as tuples of bags of attribute values. With this inspiration, we introduce, formulate, and present solutions for a novel type of learning problem which we call distributional instance classification. Finally, building on these foundations, we consider the problem of learning predictive models from multiple interlinked data stores. We introduce a distributed learning framework, identify three special cases of linked data fragmentation, and describe effective strategies for learning predictive models in each case. Further, we consider a novel application of a matrix reconstruction technique from the field of Computerized Tomography to approximate the statistics needed by the learning algorithm from projections obtained using count queries, thus dramatically reducing the amount of information transmitted from the remote data sources to the learner.
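
    One recurring idea above is that the learner sees only aggregate statistics (count queries) rather than the raw linked data. As a loose, assumption-laden illustration of that idea, and not of the thesis's actual RBC or variational algorithms, the sketch below estimates naive-Bayes-style parameters from a store that answers nothing but count queries.

```python
# Pretend this table lives on a remote linked-data store we cannot download.
REMOTE = [
    {"topic": "ml", "venue": "conf", "label": "cited"},
    {"topic": "ml", "venue": "journal", "label": "cited"},
    {"topic": "db", "venue": "conf", "label": "uncited"},
    {"topic": "db", "venue": "conf", "label": "cited"},
]

def count_query(**conditions):
    """The only operation the learner is allowed: 'how many instances match?'"""
    return sum(all(r[k] == v for k, v in conditions.items()) for r in REMOTE)

def learn(attributes, labels):
    """Estimate P(label) and P(attribute value | label) purely from count queries."""
    n = count_query()
    model = {}
    for y in labels:
        ny = count_query(label=y)
        conditionals = {
            (a, v): (count_query(label=y, **{a: v}) + 1) / (ny + 2)  # Laplace smoothing
            for a, values in attributes.items() for v in values
        }
        model[y] = (ny / n, conditionals)
    return model

attrs = {"topic": ["ml", "db"], "venue": ["conf", "journal"]}
model = learn(attrs, ["cited", "uncited"])

def predict(instance):
    """Pick the label with the highest naive-Bayes score under the learned model."""
    def score(y):
        prior, cond = model[y]
        p = prior
        for a, v in instance.items():
            p *= cond[(a, v)]
        return p
    return max(model, key=score)

print(predict({"topic": "ml", "venue": "conf"}))  # 'cited'
```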

    Generating mock skeletons for lightweight Web service testing : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Manawatū New Zealand

    Modern application development allows applications to be composed using lightweight HTTP services. Testing such an application requires the availability of the services that the application makes requests to. However, continued access to dependent services during testing may be constrained, making adequate testing a significant and non-trivial engineering challenge. The concept of Service Virtualisation is gaining popularity for testing such applications in isolation. It is the practice of simulating the behaviour of dependent services by synthesising responses using semantic models inferred from recorded traffic. Replacing services with their respective mocks is therefore useful for addressing their absence and proceeding with application testing. In reality, however, it is unlikely that fully automated service virtualisation solutions can produce highly accurate proxies. We therefore recommend using service virtualisation to infer some attributes of HTTP service responses, and we acknowledge that engineers often want to fine-tune the result. This requires algorithms that produce readily interpretable and customisable output. We assume that if service virtualisation is based on simple logical rules, engineers will be able to understand and customise those rules. In this regard, symbolic machine learning approaches are worth investigating because of the clear provenance of their results. Accordingly, this thesis examines the appropriateness of symbolic machine learning algorithms for automatically synthesising mock skeletons of HTTP services from network traffic recordings. We consider four commonly used symbolic techniques: the C4.5 decision tree algorithm, the RIPPER and PART rule learners, and the OCEL description logic learning algorithm. The experiments are performed on network traffic datasets extracted from several successful, large-scale HTTP services, and the experimental design focuses on the generation of reproducible results. The chosen algorithms prove suitable for training highly accurate and human-readable semantic models that predict key aspects of HTTP service responses, such as the status and response headers. Human-readable logic makes interpreting the response properties simpler, and the resulting mock skeletons can easily be customised into mocks that generate service responses suitable for testing.
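
    As a hedged illustration of the approach, the sketch below trains a small decision tree on toy recorded traffic to predict a response status code and prints the learned, human-readable rules; it uses scikit-learn's CART implementation rather than C4.5, RIPPER, PART, or OCEL, and the features, data, and service behaviour are invented for the example.

```python
from sklearn.tree import DecisionTreeClassifier, export_text  # third-party: scikit-learn

# Toy "recorded traffic": request features paired with the observed response status.
traffic = [
    {"method": "GET",    "has_auth": 1, "path_depth": 2},
    {"method": "GET",    "has_auth": 0, "path_depth": 2},
    {"method": "DELETE", "has_auth": 1, "path_depth": 3},
    {"method": "POST",   "has_auth": 1, "path_depth": 1},
]
statuses = [200, 401, 204, 201]

methods = ["GET", "POST", "DELETE"]
def encode(r):
    """Turn one recorded request into a numeric feature vector."""
    return [methods.index(r["method"]), r["has_auth"], r["path_depth"]]

tree = DecisionTreeClassifier(max_depth=3).fit([encode(r) for r in traffic], statuses)

# The learned rules are readable, so an engineer can inspect and fine-tune them
# before the model is used to synthesise responses for an absent service.
print(export_text(tree, feature_names=["method", "has_auth", "path_depth"]))
print(tree.predict([encode({"method": "GET", "has_auth": 0, "path_depth": 5})]))  # likely [401]
```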