
    The Unsupervised Acquisition of a Lexicon from Continuous Speech

    We present an unsupervised learning algorithm that acquires a natural-language lexicon from raw speech. The algorithm is based on the optimal encoding of symbol sequences in an MDL framework, and uses a hierarchical representation of language that overcomes many of the problems that have stymied previous grammar-induction procedures. The forward mapping from symbol sequences to the speech stream is modeled using features based on articulatory gestures. We present results on the acquisition of lexicons and language models from raw speech, text, and phonetic transcripts, and demonstrate that our algorithm compares very favorably to other reported results with respect to segmentation performance and statistical efficiency. Comment: 27-page technical report.
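
    To make the MDL idea concrete, here is a toy sketch (not the authors' algorithm): it scores a candidate segmentation by a lexicon cost plus a unigram code for the corpus, under the simplifying assumption that a lexicon entry costs about five bits per character. A segmentation that reuses whole words yields a shorter total description length than a character-by-character one.

        import math
        from collections import Counter

        def description_length(corpus_words, lexicon):
            """Total cost (in bits) of a lexicon plus the corpus encoded with it.

            Toy MDL model: each lexicon entry costs ~5 bits per character
            (a crude stand-in for a phoneme code), and each corpus token is
            coded with its negative log unigram probability.
            """
            lexicon_cost = sum(5 * len(w) for w in lexicon)
            counts = Counter(corpus_words)
            total = sum(counts.values())
            corpus_cost = -sum(c * math.log2(c / total) for c in counts.values())
            return lexicon_cost + corpus_cost

        # Unsegmented toy "speech" rendered as characters.
        utterance = "thedogthedogthecat"

        # Candidate segmentations: one reuses whole words, one does not.
        seg_words = ["the", "dog", "the", "dog", "the", "cat"]
        seg_chars = list(utterance)

        print(description_length(seg_words, set(seg_words)))  # word lexicon: shorter
        print(description_length(seg_chars, set(seg_chars)))  # characters only: longer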

    A survey of intrusion detection system technologies

    This paper provides an overview of IDS types and how they work, as well as configuration considerations and issues that affect them. Advanced methods of increasing the performance of an IDS are explored, such as specification-based IDS for protecting Supervisory Control And Data Acquisition (SCADA) and Cloud networks. By reviewing varied studies, ranging from configuration issues and specific problems to custom techniques and cutting-edge work, the paper can serve as a reference for others interested in learning about and developing IDS solutions. Intrusion detection is an area of much-needed study, providing solutions that satisfy evolving services and the networks and systems that support them. This paper aims to be a reference on IDS technologies for researchers and developers interested in the field of intrusion detection.
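
    As an illustration of the specification-based approach mentioned above, the sketch below flags any message falling outside an explicitly allowed behaviour profile, rather than matching known attack signatures. The whitelisted function codes, register range, and message format are illustrative assumptions, not taken from the paper.

        # Minimal specification-based intrusion detector for SCADA-style traffic.
        # The allowed function codes and address range below are illustrative.
        ALLOWED_FUNCTION_CODES = {3, 6}         # e.g. Modbus read/write holding register
        ALLOWED_REGISTER_RANGE = range(0, 100)  # registers this RTU is expected to serve

        def check_message(msg: dict) -> list[str]:
            """Return a list of specification violations for one parsed message."""
            violations = []
            if msg.get("function_code") not in ALLOWED_FUNCTION_CODES:
                violations.append(f"unexpected function code {msg.get('function_code')}")
            if msg.get("register") not in ALLOWED_REGISTER_RANGE:
                violations.append(f"register {msg.get('register')} outside expected range")
            return violations

        traffic = [
            {"function_code": 3, "register": 40},    # conforms to the specification
            {"function_code": 8, "register": 40},    # diagnostic code not in the profile
            {"function_code": 6, "register": 5000},  # write far outside expected registers
        ]
        for msg in traffic:
            for violation in check_message(msg):
                print("ALERT:", violation, "-", msg)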

    On Evaluating Commercial Cloud Services: A Systematic Review

    Background: Cloud Computing is booming in industry, with many competing providers and services. Accordingly, evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic: there is tremendous confusion, and a gap between practice and theory, in Cloud services evaluation. Aim: To help relieve this chaos, this work aims to synthesize the existing evaluation implementations to outline the state of the practice and to identify research opportunities in Cloud services evaluation. Method: Based on a conceptual evaluation model comprising six steps, the Systematic Literature Review (SLR) method was employed to collect relevant evidence and investigate Cloud services evaluation step by step. Results: This SLR identified 82 relevant evaluation studies. The overall data collected from these studies essentially represent the current practical landscape of implementing Cloud services evaluation, and in turn can be reused to facilitate future evaluation work. Conclusions: Evaluation of commercial Cloud services has become a worldwide research topic. Some findings of this SLR identify research gaps in Cloud services evaluation (e.g., the Elasticity and Security evaluation of commercial Cloud services could be a long-term challenge), while other findings suggest trends in applying commercial Cloud services (e.g., compared with PaaS, IaaS seems more suitable for customers and is particularly important in industry). This SLR study itself also confirms some previous experiences and reveals new Evidence-Based Software Engineering (EBSE) lessons.

    Application of multi-agents to power distribution systems

    The electric power system has become a very complicated network because of restructuring and the penetration of distributed energy resources. In addition, due to increasing demand for power, issues such as transmission congestion have left the power system stressed. A single fault can lead to massive cascading effects, affecting the power supply and power quality. An overall solution for these issues can be obtained through a new artificial-intelligence mechanism: the multi-agent system. A multi-agent system is a collection of agents that sense environmental changes and act diligently on the environment in order to achieve their objectives. Owing to the increasing speed and decreasing cost of communication and of computation over complex matrices, multi-agent systems promise to be a viable solution for today's intrinsic network problems.

    A multi-agent system model for fault detection and reconfiguration is presented in this thesis. The models are developed based on graph theory and mathematical programming, and a mathematical model is developed to specify the objective function and the constraints.

    The multi-agent models are simulated in the Java Agent Development Framework and Matlab, and are applied to a power system model designed in the commercial software Distributed Engineering Workstation. The circuit used to model the power distribution system is the Circuit of the Future, developed by Southern California Edison.

    The multi-agent system model can precisely detect the fault location and, according to the type of fault, reconfigure the system to supply as much load as possible while satisfying the power-balance and line-capacity constraints. The model is also capable of handling the assignment of load priorities.

    All possible fault cases were tested, and a few critical test scenarios are presented in this thesis. The results obtained were promising and as expected.
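
    The reconfiguration objective above (restore as much load as possible subject to power-balance and line-capacity constraints, honouring load priorities) can be illustrated with a small sketch. This greedy heuristic is a hypothetical stand-in for the thesis's mathematical-programming model; the feeder capacity, loads, and priorities are invented for illustration, and a real solver would find the provably optimal restoration set.

        # Toy post-fault reconfiguration: pick loads to restore on a backup feeder.
        FEEDER_CAPACITY_KW = 500

        # (name, demand in kW, priority: lower number = more critical)
        loads = [
            ("hospital",    200, 1),
            ("water_pump",  150, 1),
            ("mall",        250, 3),
            ("residential", 120, 2),
        ]

        def reconfigure(loads, capacity):
            """Greedy restoration: critical loads first, then by smallest demand."""
            restored, used = [], 0
            for name, demand, priority in sorted(loads, key=lambda l: (l[2], l[1])):
                if used + demand <= capacity:   # line-capacity constraint
                    restored.append(name)
                    used += demand              # power-balance bookkeeping
            return restored, used

        restored, used = reconfigure(loads, FEEDER_CAPACITY_KW)
        print(f"Restored {restored}, using {used} of {FEEDER_CAPACITY_KW} kW")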

    A contribution for data processing and interoperability in Industry 4.0

    Master's dissertation in Systems Engineering. Industry 4.0 is expected to drive a significant change in companies' growth. The idea is to cluster important information from across the company's supply chain, enabling valuable decision-making while permitting interactions between machines and humans in real time. Autonomous systems powered by Information Technologies are enablers of Industry 4.0, such as the Internet of Things (IoT), Cyber-Physical Systems (CPS), and Big Data and analytics. IoT gathers information from every piece of the big puzzle that is the manufacturing process. Cloud Computing stores all that information in one place. People share information across the company, its supply chain, and its hierarchical levels through the integration of systems. Finally, Big Data and analytics provide the intelligence that will improve Industry 4.0. Methods and tools in Industry 4.0 are designed to increase interoperability among industrial stakeholders. To make the complete process possible, standardisation must be implemented across the company. Two reference models for Industry 4.0 were studied: RAMI 4.0 and IIRA. RAMI 4.0, a German initiative, focuses on industrial digitalisation, while IIRA, an American initiative, focuses on the "Internet of Things" world, i.e. energy, healthcare, and transportation. The two initiatives aim to obtain intelligent data from processes while enabling interoperability among systems. Representatives of the two reference models are working together on the technological interface standards that could be used by companies joining this new era. This study targets the interoperability between systems. Even though there must be a model to guide a company into Industry 4.0, this model ought to be mutable and flexible enough to handle differences in manufacturing processes; as an example, automotive Industry 4.0 will not have the same approach as aviation Industry 4.0.

    Effectiveness of OPC for systems integration in the process control information architecture

    A Process is defined as the progression to some particular end or objective through a logical and orderly sequence of events. Various devices (e.g., actuators, limit switches, motors, sensors, etc.) play a significant role in making sure that the process attains its objective (e.g., maintaining the furnace temperature within an acceptable limit). To do these things effectively, manufacturers need to access data from the plant floor or devices and integrate those data into their control applications, which may be one of the off-the-shelf tools such as Supervisory Control and Data Acquisition (SCADA), Distributed Control Systems (DCS), or Programmable Logic Controllers (PLC). A number of vendors have devised their own Data Acquisition Networks or Process Control Architectures (e.g., PROFIBUS, DeviceNet, INTERBUS, EtherNet/IP, etc.) that claim to be open to, or interoperable with, a number of third-party devices or products that make process data available to the Process or Business Management level. In reality this is far from what is claimed. Due to the problem of interoperability, a manufacturer is forced to be bound either to the solutions provided by a single vendor or to writing a driver for each hardware device accessed by a process application. Today's manufacturers are looking for advanced distributed object technologies that allow for seamless exchange of information across plant networks as a means of integrating the islands of automation that exist in their manufacturing operations. OLE for Process Control (OPC) works to significantly reduce the time, cost, and effort required in writing custom interfaces for the hundreds of different intelligent devices and networks in use today. The objective of this thesis is to explore OPC technology in depth by highlighting its need in industry and by using OPC in an application in which data from a process controlled by a Siemens Simatic S7 PLC are shared with a client application running in LabVIEW 6i.
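
    The core value of OPC described here is that applications are written against one standard interface instead of one driver per device. The following sketch is a hypothetical plain-Python illustration of that decoupling (it does not use a real OPC library): two vendor-specific "drivers" are hidden behind a single uniform read interface, so the client code never changes when the underlying device does.

        # Hypothetical illustration of the OPC idea: clients see one uniform
        # interface, while vendor-specific drivers hide behind it.
        from abc import ABC, abstractmethod

        class DeviceServer(ABC):
            """Stand-in for an OPC server: one read interface for all devices."""
            @abstractmethod
            def read(self, tag: str) -> float: ...

        class SiemensS7Server(DeviceServer):
            def read(self, tag: str) -> float:
                # Real code would speak the S7 protocol; here we fake a value.
                return {"furnace.temp": 512.0}.get(tag, 0.0)

        class LegacyFieldbusServer(DeviceServer):
            def read(self, tag: str) -> float:
                return {"pump.flow": 13.7}.get(tag, 0.0)

        def client_app(server: DeviceServer, tag: str) -> None:
            # The client is written once, against the uniform interface.
            print(f"{tag} = {server.read(tag)}")

        client_app(SiemensS7Server(), "furnace.temp")
        client_app(LegacyFieldbusServer(), "pump.flow")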

    Advances in Information Security and Privacy

    With the recent pandemic emergency, many people have shifted to remote working and increased their use of digital resources for both work and entertainment. The result is that the amount of digital information handled online has dramatically increased, and we can observe a significant rise in the number of attacks, breaches, and hacks. This Special Issue aims to establish the state of the art in protecting information by mitigating information risks. This objective is reached by presenting both surveys on specific topics and original approaches and solutions to specific problems. In total, 16 papers have been published in this Special Issue.

    Data integrity: an often-ignored aspect of safety systems: executive summary

    Data is all-pervasive and is found in all aspects of modern computer systems, and yet many engineers seem reluctant to recognise the importance of data integrity. The conventional view of data, as simply an aspect of software, underestimates the role played by data errors in the behaviour of the system and their potential effect on the integrity of the overall system. In many cases hazard analysis is not applied to data in the same way that it is applied to other system components. Without data integrity requirements, data development and data provision may not attract the degree of rigour that would be required of other system components of similar integrity. This omission also has implications for safety assessment, where the data is often ignored or neglected. This position becomes self-reinforcing, as without integrity requirements the importance of data integrity remains hidden.

    This research provides a wide-ranging overview of the use (and abuse) of data within safety systems, and proposes a range of strategies and techniques to improve the safety of such systems. A literature review and a survey of industrial practice confirmed the conventional view of data, and showed that there is little consistency in the methods used for data development.

    To tackle these problems this work proposes a novel paradigm, in which data is considered as a separate and distinct system component. This approach not only ensures that data is given the importance that it deserves, but also simplifies the task of providing guidance that is specific to data. Having developed this conceptual framework for data, the work then goes on to develop lifecycle models to assist with data development, and to propose a range of techniques appropriate for the various lifecycle phases. An important aspect of the development of any safety-related system is the production of a safety argument, and this research looks in some detail at the treatment of data, and data development, within this justification.

    The industrial survey reveals that in data-intensive systems data is often developed quite separately from other elements of the system. It also reveals that data is often produced by an extended data supply chain that may involve a number of disparate organisations. These characteristics of data distinguish it from other system components and greatly complicate the achievement and demonstration of safety. This research proposes methods of modelling complex data supply chains and proposes techniques for tackling the difficult task of safety justification for such systems.
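
    One concrete way to treat data as a distinct component with its own integrity requirements, in the spirit of this research, is to attach integrity evidence at each hand-off in the data supply chain. The checksum scheme below is an illustrative assumption, not a technique taken from the thesis: each organisation verifies the parcel it receives before using or extending it.

        # Illustrative data supply chain where each hand-off carries a checksum,
        # so corruption introduced between organisations is caught downstream.
        import hashlib
        import json

        def package(data: dict) -> dict:
            payload = json.dumps(data, sort_keys=True)
            return {"payload": payload,
                    "sha256": hashlib.sha256(payload.encode()).hexdigest()}

        def verify(parcel: dict) -> dict:
            digest = hashlib.sha256(parcel["payload"].encode()).hexdigest()
            if digest != parcel["sha256"]:
                raise ValueError("data integrity violation at hand-off")
            return json.loads(parcel["payload"])

        # Organisation A produces navigation data; organisation B consumes it.
        parcel = package({"waypoint": "ALPHA", "lat": 52.1, "lon": -1.3})
        data = verify(parcel)                  # intact parcel: passes
        parcel["payload"] = parcel["payload"].replace("52.1", "53.1")
        try:
            verify(parcel)                     # tampered in transit: rejected
        except ValueError as err:
            print("ALERT:", err)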