4 research outputs found

    Semantic querying and search in distributed ontologies

    We have observed in recent years continuous growth in the quantity of RDF data accessible on the web. This growth is driven primarily by the publication of data by different sectors such as governments, life-science researchers, and academic institutes. RDF data is mainly created by converting existing data resources, such as relational databases, into RDF, and these data sets are typically published as linked data URIs and SPARQL endpoints. With the continuing growth in the number of SPARQL endpoints, accessing sets of distributed RDF data repositories is becoming increasingly popular. This research offers an extensive analysis of accessing RDF data across distributed ontologies. Existing approaches do not combine RDF indexing and retrieval of distributed RDF data in a single package; in addition, they are not dynamic and depend mainly on fixed, manually defined strategies for accessing RDF data in a distributed environment. The literature review identified the need for a robust, reliable, dynamic, and comprehensive mechanism for accessing distributed RDF data using RDF indexing. This thesis presents a conceptual framework for SPARQL query execution that accesses data in distributed RDF sets through a stored index. It introduces the semantic algebra involved in converting a traditional SPARQL query through its different processing phases. The proposed framework elaborates the selection, projection, join, specialisation, and generalisation operators, which assist in processing and converting a SPARQL query. The thesis also introduces the algorithms behind the framework, which convert the main SPARQL query into sub-queries, send each sub-query to the appropriate distributed repository to fetch the data, and merge the sub-query results. The proposed framework was tested using unit and functional testing strategies. The author developed and used a Museum ontology to test and evaluate the system and to demonstrate how the complete system works. Different tests were performed, including tests of the algebraic operators (select, join, outer join, generalisation, and specialisation) and of the proposed algorithms. Comprehensive testing showed that all system units worked as expected and that no errors were found in any phase of the framework. Finally, the thesis presents the implemented framework's performance and accuracy by comparing it to similar systems. The evaluation demonstrated that the proposed framework handles distributed SPARQL queries effectively. The author compared the developed system with the existing FedX, ANAPSID, and ADERIS frameworks and presented the results graphically to illustrate the performance and accuracy of all systems.
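    To make the decompose-and-merge step concrete, the sketch below is a minimal Python illustration, not the thesis implementation: it splits a query's triple patterns across repositories using a hypothetical predicate-to-endpoint index, runs each sub-query with SPARQLWrapper, and joins the partial bindings on shared variables. The endpoint URLs, index contents, and triple patterns are invented placeholders.

```python
# Minimal sketch of distributed SPARQL sub-query execution and merging.
# Not the thesis framework: the index, endpoints, and patterns are hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical stored index: which endpoint can answer which predicate.
PREDICATE_INDEX = {
    "<http://example.org/ont#locatedIn>": "http://museum-a.example.org/sparql",
    "<http://example.org/ont#exhibits>":  "http://museum-b.example.org/sparql",
}

def decompose(triple_patterns):
    """Group triple patterns by the endpoint that indexes their predicate."""
    groups = {}
    for s, p, o in triple_patterns:
        endpoint = PREDICATE_INDEX[p]
        groups.setdefault(endpoint, []).append((s, p, o))
    return groups

def run_subquery(endpoint, patterns):
    """Build and execute one SELECT sub-query against one repository."""
    body = " . ".join(f"{s} {p} {o}" for s, p, o in patterns)
    sparql = SPARQLWrapper(endpoint)
    sparql.setQuery(f"SELECT * WHERE {{ {body} }}")
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    # Flatten SPARQL JSON results to plain {variable: value} dictionaries.
    return [{var: cell["value"] for var, cell in row.items()} for row in rows]

def merge(left, right):
    """Nested-loop join of two binding sets on their shared variables."""
    joined = []
    for l in left:
        for r in right:
            shared = set(l) & set(r)
            if all(l[v] == r[v] for v in shared):
                joined.append({**l, **r})
    return joined

def execute(triple_patterns):
    """Decompose, dispatch each sub-query, and merge the partial results."""
    results = None
    for endpoint, patterns in decompose(triple_patterns).items():
        bindings = run_subquery(endpoint, patterns)
        results = bindings if results is None else merge(results, bindings)
    return results

if __name__ == "__main__":
    patterns = [
        ("?museum", "<http://example.org/ont#locatedIn>", '"London"'),
        ("?museum", "<http://example.org/ont#exhibits>", "?artifact"),
    ]
    print(execute(patterns))
```

    A nested-loop join keeps the sketch short; a full query engine would typically use hash joins, push selections and projections into the sub-queries, and consult the index dynamically rather than through a fixed table.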

    Semi-Automatic Generation of Tests for Assessing Correct Integration of Security Mechanisms in the Internet of Things

    The Internet of Things (IoT) is expanding globally and its influence on our daily lives is increasing. This fast expansion, with companies competing to be the first to deploy new IoT systems, has led to the majority of the software being created without due attention to security considerations and without adequate security testing. Software quality and security testing are inextricably linked. The most successful approach to achieving secure software is to adhere to secure development, deployment, and maintenance principles and practices throughout the development process. Security testing is a procedure for ensuring that a system keeps the user's data secure and performs as expected. However, extensively testing a system can be a daunting task that usually requires professionals well versed in the subject to be performed correctly. Moreover, not all development teams have access to a security expert to perform security testing on their IoT systems. Automating security testing emerged as a potential means of addressing this issue. This dissertation describes the process undertaken to design and develop a module entitled Assessing Correct Integration of Security Mechanisms (ACISM) that aims to provide system developers with the means to improve system security by anticipating and preventing potential attacks. Using the list of threats to which the system is vulnerable as input, the tool provides developers with a set of security tests and tools for assessing how susceptible the system is to each of those threats. It outputs a set of possible attacks derived from the threats and the tools that could be used to simulate these attacks. The tool developed in this dissertation is intended to function as a plugin of a framework called Security Advising Modules (SAM), which advises users in the development of secure IoT, cloud, and mobile systems during the design phases of these systems. SAM is a modular framework composed of a set of modules that advise the user at different stages of the security engineering process. To validate the usefulness of the ACISM module in real life, it was tested by 17 computer science practitioners. The feedback received from these users was very positive. The great majority of the participants found the tool extremely helpful in facilitating the execution of security tests on IoT systems. The principal contributions of this dissertation are: a tool that, starting from the threats an IoT system is susceptible to, outputs a set of attacks and the penetration-testing tools needed to execute them, with each identified tool accompanied by a brief instructional guide; and an extensive review of the state of the art in security testing.
    Portuguese abstract (translated): The Internet of Things (IoT) is one of the paradigms with the greatest worldwide expansion at the time of writing, with an unavoidable influence on everyday life. Companies want to be the first to deploy new IoT systems as a result of this rapid expansion, which means that most software is created and produced without adequate security considerations or security testing. Software quality and security testing are closely linked. The most successful approach to obtaining secure software is to adhere to secure development, deployment, and maintenance principles and practices throughout the development process. Security testing is a procedure for ensuring that a system protects the user's data and performs as expected. This dissertation describes the effort spent on the design and development of a tool that, taking into account the threats to which a system is vulnerable, produces a set of tests and identifies a set of security tools to verify the system's susceptibility to them. The tool was developed in Python and takes as input a list of threats to which the system is vulnerable. After processing this information, it produces a set of attacks derived from the threats and possible tools to be used to simulate those attacks. To verify the usefulness of the tool in real scenarios, it was tested by 17 people with a background in computer science. The test subjects evaluated the tool very positively, and the great majority of participants found it extremely useful in assisting the execution of security tests on IoT. The principal contributions of this dissertation were: the creation of a tool that, from the threats to which an IoT system is susceptible, produces a set of attacks and penetration tools to execute those attacks, each tool accompanied by a brief instructional guide; and an extensive review of the state of the art in testing.
    The work described in this dissertation was carried out at the Instituto de Telecomunicações, Multimedia Signal Processing – Covilhã Laboratory, at Universidade da Beira Interior, Covilhã, Portugal. This research work was funded by the SECURIoTESIGN Project through FCT/COMPETE/FEDER under Reference Number POCI-01-0145-FEDER030657 and by a Fundação para Ciência e Tecnologia (FCT) research grant with reference BIL/Nº11/2019-B00701.
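    To illustrate the kind of threat-to-attack-to-tool mapping the ACISM module provides, the following is a minimal Python sketch rather than the module itself; the threat categories, attack entries, and guidance strings are hypothetical examples, although the named tools (ettercap, arpspoof, sqlmap, wireshark, tcpdump, hydra) are real penetration-testing tools.

```python
# Minimal sketch of a threat -> attacks -> tools lookup, in the spirit of the
# ACISM module described above (not its actual implementation).
# Threat and attack entries are hypothetical; the tools listed are real
# penetration-testing tools, but their mapping to threats is illustrative.
from dataclasses import dataclass

@dataclass
class AttackSuggestion:
    attack: str          # attack derived from the threat
    tools: list[str]     # tools that can simulate the attack
    guide: str           # short instructional hint for the tester

THREAT_CATALOGUE = {
    "spoofing": [
        AttackSuggestion("ARP spoofing", ["ettercap", "arpspoof"],
                         "Poison the ARP cache between device and gateway."),
    ],
    "tampering": [
        AttackSuggestion("SQL injection", ["sqlmap"],
                         "Probe input fields that reach a database query."),
    ],
    "information disclosure": [
        AttackSuggestion("Network sniffing", ["wireshark", "tcpdump"],
                         "Capture traffic and check for unencrypted data."),
    ],
    "elevation of privilege": [
        AttackSuggestion("Credential brute force", ["hydra"],
                         "Try default or weak credentials on exposed services."),
    ],
}

def suggest_tests(threats):
    """Return the attacks and tools to try for each input threat."""
    return {threat: THREAT_CATALOGUE.get(threat.lower(), []) for threat in threats}

if __name__ == "__main__":
    for threat, suggestions in suggest_tests(["Spoofing", "Tampering"]).items():
        for s in suggestions:
            print(f"{threat}: {s.attack} -> {', '.join(s.tools)} ({s.guide})")
```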

    Automated Extraction of Behaviour Model of Applications

    Highly replicated cloud applications are deployed only when they are deemed to be functional. That is, they generally perform their task and their failure rate is relatively low. However, even though failure is rare, it does occur and is very difficult to diagnose. We devise a tool for failure diagnosis which learns the normal behaviour of an application in terms of the statistical properties of variables used throughout its execution, and then monitors it for deviation from these statistical properties. Our study reveals that many variables have unique statistical characteristics that amount to an invariant of the program. Therefore, any significant deviation from these characteristics reflects abnormal behaviour of the application, which may be caused by a program error. It is difficult to obtain such invariants from static code analysis alone. For example, the name of a person usually does not include a semicolon; however, an intruder may attempt a SQL injection (which will include a semicolon) through the 'name' field while entering his information, and succeed if there is no check for this case. This scenario can only be captured at runtime and may not be tested by the application developer. The character range of the 'name' variable is one of its statistical properties; by learning this range from the execution of the application it is possible to detect the abnormal input described above. Hence, monitoring the statistics of the values taken by the different variables of an application is an effective way to detect anomalies that can help to diagnose the failure of the application. We build a tool that collects frequent snapshots of the application's heap and builds a statistical model solely from the extensional knowledge of the application. The extensional knowledge is obtainable only from runtime data of the application, without any description or explanation of the application's execution flow. The model characterizes the application's normal behaviour. Collecting snapshots in the form of memory dumps and determining the application's behaviour model from them without code instrumentation makes our tool applicable in cases where instrumentation is computationally expensive. Our approach allows a behaviour model to be built automatically and efficiently from the monitoring data alone. We evaluate the utility of our approach by applying it to an e-commerce application and an online bidding system, and then derive different statistical properties of variables from their runtime-exhibited values. Our experimental results demonstrate 96% accuracy in the generated statistical model with a maximum 1% performance overhead. This accuracy is measured on the basis of generating fewer false-positive alerts when the application is running without any anomaly. The high accuracy and low performance overhead indicate that our tool can successfully determine the application's normal behaviour without affecting the performance of the application and can be used to monitor it in production. Moreover, our tool also correctly detected two anomalous conditions while monitoring the application with a small number of injected faults. In addition to anomaly detection, our tool logs all the variables of the application that violate the learned model. The log file can help to diagnose any failure caused by these variables and gives our tool source-code granularity in fault localization.
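    As an illustration of the per-variable invariants described above (not the tool's actual heap-snapshot pipeline), the following minimal Python sketch learns a character set and length range for a hypothetical 'name' variable from observed values and flags inputs that deviate, such as a SQL-injection string containing a semicolon.

```python
# Minimal sketch of learning per-variable statistical invariants from observed
# values and flagging deviations. Variable names and sample values are
# hypothetical; the real tool derives them from heap snapshots.

class StringInvariant:
    """Tracks the character set and length range seen for one variable."""
    def __init__(self):
        self.chars = set()
        self.min_len = None
        self.max_len = None

    def learn(self, value: str):
        """Update the invariant with one observed value."""
        self.chars.update(value)
        n = len(value)
        self.min_len = n if self.min_len is None else min(self.min_len, n)
        self.max_len = n if self.max_len is None else max(self.max_len, n)

    def violates(self, value: str) -> bool:
        """True if the value contains unseen characters or an unseen length."""
        new_chars = set(value) - self.chars
        too_short = self.min_len is not None and len(value) < self.min_len
        too_long = self.max_len is not None and len(value) > self.max_len
        return bool(new_chars) or too_short or too_long

# Training phase: values observed for the 'name' field during normal runs.
model = {"name": StringInvariant()}
for observed in ["Alice Smith", "Bob O'Neil", "Chen Wei"]:
    model["name"].learn(observed)

# Monitoring phase: a SQL-injection-style input introduces characters
# (';', '-') never seen during normal execution, so it is flagged.
suspicious = "x'; DROP TABLE users; --"
if model["name"].violates(suspicious):
    print(f"anomaly in 'name': {suspicious!r}")
```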

    BIM mediated mass customization applied to engineered-to-order of building component : custom kitchens and cabinetry solutions

    Advisor: Regina Coeli Ruschel. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Civil, Arquitetura e Urbanismo.
    Abstract (Portuguese, translated): An increasing number of fabricators have pursued Mass Customization (MC). MC is a production strategy that uses information technology, flexible processes, and suitable organizational structures to provide products designed specifically for a customer at a cost close to that of mass production. The higher the level of customization, the greater the benefits, but also the operational costs. Therefore, the objective of this research is to evaluate the benefits of Building Information Modeling (BIM) for achieving a high level of MC. The research focuses on formulating a solution that enables the reconfiguration of the structures and processes of building-component fabrication systems into BIM-mediated MC production systems. This solution provides knowledge on how to determine the appropriate level of customization for a given product by addressing the following problems: (a) the value of a given level of customization according to customer specifications; (b) the system's capacity to offer that level of customization; and (c) the combination of these apparently conflicting problems. The proposed solution is being developed through Design Science Research, a research method consisting of a rigorous process of designing artifacts (constructs, models, methods, instantiations, and design propositions) with practical relevance and theoretical contribution. This investigation creates artifacts that are validated by an instantiation through action research at a manufacturer of engineered-to-order kitchens and modular cabinetry.
    Abstract: An increasing number of fabricators have pursued Mass Customization (MC). MC is a system that uses information technology, flexible processes, and suitable organizational structures to provide individually designed products at a cost near that of mass production. The higher the customization level, the more significant the benefits, but also the operational costs. Therefore, the objective of this research was to evaluate the benefits of Building Information Modeling (BIM) for achieving a higher level of MC. The research focused on formulating a solution that enables the reconfiguration of fabricators' structures and processes into a BIM-mediated mass-customized production system. Such a solution provided knowledge on how to determine the appropriate level of customization for a specific product by addressing the following issues: (a) the value placed on a level of customization by the customer's requirements, (b) the system's ability to deliver that level of customization, and (c) the combination of these apparently conflicting issues. The development of this solution was carried out through Design Science Research, a method that is a rigorous scientific process of designing artifacts (constructs, models, methods, instantiations, and design propositions) with practical relevance and theoretical contribution. This investigation created artifacts that were validated by an instantiation through action research at a kitchen and furniture custom cabinetry fabricator.
    Doctoral programme: Arquitetura, Tecnologia e Cidade. Degree: Doutor em Arquitetura, Tecnologia e Cidade.