
    A web services based framework for efficient monitoring and event reporting.

    Network and Service Management (NSM) is a research discipline that has produced significant contributions over the last 25 years. Despite the numerous standardised solutions that have been proposed for NSM, the quest for an "all-encompassing technology" continues. One technology introduced recently to address NSM problems is Web Services (WS). Despite the research effort put into WS and their potential for addressing NSM objectives, issues of efficiency, interoperability and the like need to be solved before WS can be used for NSM. This thesis examines two techniques for increasing the efficiency of WS management applications so that they can be used for efficient monitoring and event reporting. The first is a query tool we built for efficient retrieval of management state data close to the devices where they are hosted. The second is a set of policies used to delegate a number of tasks from a manager to an agent, making WS-based event reporting systems more efficient. We tested the performance of these mechanisms by incorporating them in a custom monitoring and event reporting framework and supporting systems we built, comparing them against other mechanisms (XPath) proposed for the same tasks, as well as against previous technologies such as SNMP. These tests show that the mechanisms allow WS to be used efficiently in various monitoring and event reporting scenarios. Having shown the potential of our techniques, we also present the design and implementation challenges of building a GUI tool to support and enhance the above systems with extra capabilities. In summary, we expect the remaining problems WS face to be solved in the near future, making WS a capable platform for NSM.
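A rough sketch of the kind of near-device query the first technique describes: management state exposed as an XML document and filtered with an XPath expression close to where it is hosted, so only the matching values need to travel to the manager. The element names and document shape here are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch of near-device retrieval of management state data.
# The XML layout and element names (device/interfaces/interface/...) are
# hypothetical, not the thesis's actual schema.
import xml.etree.ElementTree as ET

DEVICE_STATE = """
<device>
  <interfaces>
    <interface><name>eth0</name><inOctets>91234</inOctets><status>up</status></interface>
    <interface><name>eth1</name><inOctets>1200</inOctets><status>down</status></interface>
  </interfaces>
</device>
"""

def query(xml_state: str, xpath: str) -> list[str]:
    """Evaluate a (limited, stdlib-supported) XPath expression on device state."""
    root = ET.fromstring(xml_state)
    return [el.text for el in root.findall(xpath)]

# Retrieve the names of all interfaces that are currently up.
up_names = query(DEVICE_STATE, ".//interface[status='up']/name")
```

Filtering at (or near) the agent in this way is what makes the approach cheaper than shipping the whole state document to the manager and selecting there.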

    Initial design and concept of operations for a clandestine data relay UUV to circumvent jungle canopy effects on satellite communications

    Communications within jungle environments have always been a difficult proposition. This is especially true for collection assets beneath triple-canopy jungle that need to communicate with overhead national assets. The traditional methods of countering the negative effects of the canopy on EM signals have been to increase the power to offset the losses, or to utilize new, more canopy-transparent portions of the EM spectrum. However, there are complications with both of these methods. Simply increasing transmitted power increases the drain on the system's power supply, thus lowering effective on-station time. Shifting to a different portion of the EM spectrum can negatively affect the transmission rate of the system and requires specialized equipment such as antennas and modulators. This work addresses the issue by designing a semi-autonomous UUV that will clandestinely relay data from the embedded jungle systems to overhead national assets. Rather than trying to punch through the canopy directly, the proposed UUV takes advantage of the fact that most jungle waterways have, at the very least, a thinner canopy overhead, if not a clear view of the sky, allowing less lossy satellite communications. This shifts the primary communications from an Earth-sky problem to a lateral-wave model in which the communications travel parallel to the canopy. While the jungle is still not an ideal medium for communications, other methods can be used to address these losses. The proposed UUV will be designed to be cheap and constructed from existing systems. It will also be small and lightweight enough to be delivered and deployed in theater via aircraft, boats, and operators on the ground. Additionally, it will be capable of long on-station times due to its ability to recharge on station.
    http://archive.org/details/initialdesignndc109455537
    Approved for public release; distribution is unlimited.

    Analysis of the efficiency of wireless networks

    The work is published in accordance with the rector's order No. 580/од of 29.12.2020, "On placing higher-education qualification works in the NAU repository". Project supervisor: Vasyl Ivanovych Nadtochii, Candidate of Technical Sciences, Associate Professor. The growing interest in network efficiency analysis is driven by the need to automate control functions; the search is therefore on for network-efficiency principles that can be implemented by means of computer systems. One of the most promising directions for solving this problem is based on the use of modern protocols, programming languages and monitoring tools, and the problem is addressed by choosing an appropriate architecture and learning method. The analysis demonstrates that there is still no model that is sensitive to all types of distortion.

    An ontology-driven architecture for data integration and management in home-based telemonitoring scenarios

    The shift from traditional medical care to the use of new technology and engineering innovations is nowadays an interesting and growing research area, motivated mainly by a growing population with chronic conditions and disabilities. By means of information and communications technologies (ICTs), telemedicine systems offer a good solution for providing medical care at a distance to any person, in any place, at any time. Although significant contributions have been made in this field in recent decades, telemedicine, and e-health scenarios in general, still pose numerous challenges that need to be addressed by researchers in order to take maximum advantage of the benefits these systems provide and to support their long-term implementation. The goal of this research thesis is to make contributions in the field of home-based telemonitoring scenarios. By periodically collecting patients' clinical data and transferring them to physicians located at remote sites, patient health status supervision and feedback provision become possible. This type of telemedicine system guarantees patient supervision while reducing costs (enabling more autonomous patient care and avoiding hospital overflows). Furthermore, patients' quality of life and empowerment are improved. Specifically, this research investigates how a new architecture based on ontologies can be used to address the main challenges presented by home-based telemonitoring scenarios: data integration, personalized care, multiple chronic conditions, and clinical and technical management. These are the principal issues presented and discussed in this thesis. The proposed ontology-based architecture takes into account both practical and conceptual integration issues and the transfer of data between the end points of the telemonitoring scenario (i.e., communication and message exchange). The architecture comprises two layers: 1) a conceptual layer and 2) a data and communication layer.
    On the one hand, the conceptual layer, based on ontologies, is proposed to unify the management procedure and integrate incoming data from all the sources involved in the telemonitoring process. On the other hand, the data and communication layer, based on web service technologies, is proposed to provide practical back-up for the use of the ontology, to provide a real implementation of the tasks it describes, and thus to provide a means of exchanging data. This architecture takes advantage of the combination of ontologies, rules, web services and the autonomic computing paradigm, all of which are well-known technologies and popular solutions in the semantic web domain and the network management field. A review of these technologies, and of related works that have made use of them, is presented in this thesis in order to understand how they can be combined successfully to provide a solution for telemonitoring scenarios. The design and development of the ontology used in the conceptual layer led to the study of the autonomic computing paradigm and its combination with ontologies. In addition, the OWL (Web Ontology Language) language was studied and selected to express the required knowledge in the ontology, while the SPARQL language was examined for its effective use in defining rules. As an outcome of these research tasks, the HOTMES (Home Ontology for Integrated Management in Telemonitoring Scenarios) ontology, presented in this thesis, was developed. The combination of the HOTMES ontology with SPARQL rules to provide a flexible solution for personalizing management tasks and adapting the methodology to different management purposes is also discussed. The use of Web Services (WSs) was investigated to support the exchange of information defined in the conceptual layer of the architecture, and a generic ontology-based solution was designed to integrate data and management procedures in the data and communication layer.
    This is an innovative REST-inspired architecture that allows information contained in an ontology to be exchanged in a generic manner. This layer structure and its communication method give the approach scalability and re-usability. The application of the HOTMES-based architecture has been studied for clinical purposes following three simple methodological stages described in this thesis, thus addressing data and management integration for context-aware and personalized monitoring services for patients with chronic conditions in the telemonitoring scenario. In particular, an extension of the HOTMES ontology defines a patient profile; these profiles, in combination with individual rules, encode clinical guidelines for monitoring and evaluating the evolution of the patient's health status. This research entailed a multi-disciplinary collaboration in which clinicians had an essential role both in the ontology definition and in the validation of the proposed approach. Patient profiles were defined for 16 different diseases. Finally, two solutions were explored and compared in this thesis to address the remote technical management of all the devices that comprise the telemonitoring scenario. The first solution was based on the HOTMES ontology-based architecture; the second was based on the most popular TCP/IP management architecture, SNMP (Simple Network Management Protocol). As a general conclusion, it has been demonstrated that the combination of ontologies, rules, WSs and the autonomic computing paradigm exploits the main benefits these technologies offer in terms of knowledge representation, workflow organization, data transfer, personalization of services and self-management capabilities. It has been shown that ontologies can be successfully used to provide clear descriptions of managed data (both clinical and technical) and of the ways such information is managed.
    This represents a further step towards establishing more effective home-based telemonitoring systems and thus improving the remote care of patients with chronic diseases.
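The rule-over-ontology idea can be illustrated with a minimal sketch: clinical observations stored as subject-predicate-object triples, with a plain Python predicate standing in for a SPARQL rule that personalizes the evaluation of a patient's status. All identifiers and the threshold below are invented for illustration; HOTMES itself is an OWL ontology queried with SPARQL.

```python
# Illustrative triple store and one "rule" over it. Names (hasObservation,
# systolicBP, ...) and the 160 mmHg threshold are hypothetical examples.
Triple = tuple[str, str, str]

def match(graph: set[Triple], s=None, p=None, o=None) -> list[Triple]:
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in graph
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

graph: set[Triple] = {
    ("patient1", "hasCondition", "hypertension"),
    ("patient1", "hasObservation", "obs1"),
    ("obs1", "measures", "systolicBP"),
    ("obs1", "value", "165"),
}

def high_bp_rule(g: set[Triple]) -> list[str]:
    """Rule: flag patients with a systolic blood pressure reading above 160."""
    alerts = []
    for (patient, _, obs) in match(g, p="hasObservation"):
        if match(g, s=obs, p="measures", o="systolicBP"):
            for (_, _, v) in match(g, s=obs, p="value"):
                if int(v) > 160:
                    alerts.append(patient)
    return alerts

alerts = high_bp_rule(graph)
```

Swapping the hard-coded predicate for a SPARQL query over an OWL graph gives the personalization the thesis describes: the rule, not the code, carries the clinical guideline.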

    Real Time Control for Intelligent 6G Networks

    The benefits of telemetry for optical networking have been shown in the literature, and several telemetry architectures have been defined. In general, telemetry data is collected from observation points in the devices and sent to a central system running alongside the Software Defined Networking (SDN) controller. In this project, we develop a telemetry architecture that supports intelligent data aggregation and nearby data collection. Several frameworks and technologies have been explored to ensure that they fit well into the architecture's composition; a description of these technologies is presented in this work, along with a comparison of their main features and downsides. Some intelligent techniques, i.e. algorithms, have been specified and tested within the architecture, showing their benefits by reducing the amount of data processed. In the design of this architecture, the main issues related to distributed systems have been faced, and some initial solutions have been proposed. In particular, several security solutions have been explored to deal with threats, but also with scalability and performance issues, in an attempt to find a balance between performance and security. Finally, two use cases are presented, showing a real implementation of the architecture that has been presented at conferences and validated during the project's development.
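One common aggregation technique of the kind such an architecture might apply is deadband (change-based) filtering: a telemetry sample is forwarded to the collector only when it differs from the last forwarded value by more than a threshold. This is a generic sketch of the technique, not the project's actual algorithm; parameter names and values are illustrative.

```python
# Deadband filtering: forward a sample only on a significant change,
# reducing the volume of telemetry reaching the central system.
def deadband_filter(samples: list[float], threshold: float) -> list[float]:
    """Keep only samples that deviate from the last forwarded one."""
    forwarded: list[float] = []
    last = None
    for s in samples:
        if last is None or abs(s - last) > threshold:
            forwarded.append(s)
            last = s
    return forwarded

raw = [10.0, 10.1, 10.05, 12.0, 12.1, 15.0]
kept = deadband_filter(raw, threshold=1.0)  # only significant changes survive
```

Here six raw samples shrink to three forwarded ones; the trade-off is a bounded error (at most the threshold) in the collector's view of the signal.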

    Network anomalies detection via event analysis and correlation by a smart system

    The multidisciplinary nature of contemporary societies compels us to regard Information Technology (IT) systems as one of the greatest assets in living memory. Their growth, however, demands a corresponding security effort for users, in the form of effective and robust tools to combat the cybercrime to which users, individual or collective, are exposed almost daily. Monitoring and detection of this kind of problem must be ensured in real time, allowing companies to intervene fruitfully, quickly and in unison. The proposed framework is based on an organic symbiosis between credible, affordable and effective open-source tools for data analysis, relying on Security Information and Event Management (SIEM), Big Data and Machine Learning (ML) techniques commonly applied in the development of real-time monitoring systems. Dissecting this framework: first, it comprises a system based on the SIEM methodology that monitors data in real time while simultaneously storing the information to assist forensic investigation teams; secondly, the Big Data concept is applied to manipulate and organise the flow of data; lastly, ML techniques help create mechanisms to detect possible attacks or anomalies on the network. This framework is intended to provide a real-time analysis application at the institution ISCTE – Instituto Universitário de Lisboa (Iscte), offering more complete, efficient and secure monitoring of the data from the different devices comprising the network.
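A minimal sketch of the detection step such a framework might start from: a statistical baseline (z-score on per-interval event counts) that flags intervals deviating strongly from the historical mean. A real deployment of this kind would use richer features and models; the counts and threshold below are invented.

```python
# Z-score outlier detection on event counts per interval: a simple stand-in
# for the ML-based anomaly detection layer of a SIEM pipeline.
import statistics

def anomalous_intervals(counts: list[int], z_threshold: float = 3.0) -> list[int]:
    """Return indices of intervals whose event count is a statistical outlier."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > z_threshold]

events_per_minute = [120, 118, 125, 122, 119, 950, 121]
suspicious = anomalous_intervals(events_per_minute, z_threshold=2.0)
```

The flagged interval (the 950-event spike) is the sort of signal that would be correlated with other events before raising an alert to analysts.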

    Harmonisation of computer network monitoring

    As modern society is increasingly dependent on computer networks, especially as the Internet of Things gains popularity, the need to monitor computer networks and their associated devices increases. Additionally, the number of cyber attacks is growing, and certain malware, such as Mirai, targets network devices in particular. In order to monitor computer networks and devices effectively, effective solutions are required for collecting and storing the information. This thesis designs and implements a novel network monitoring system. The presented system is capable of utilizing state-of-the-art network monitoring protocols and harmonising the collected information using a common data model, a design that allows effective queries and further processing of the collected information. The system is evaluated by comparing it against the requirements imposed on it, by assessing the amount of information that can be harmonised from several protocols, and by assessing the suitability of the chosen data model; additionally, the protocol overheads of the network monitoring protocols used are evaluated. The presented system was found to fulfil the imposed requirements. Approximately 21% of the information provided by the chosen network monitoring protocols could be harmonised into the chosen data model format. This proportion is sufficient for effective querying and combining of the information, as well as for processing it further, and can be improved by extending the data model and improving the information processing. The chosen data model was also shown to be suitable for the use case presented in this thesis.
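The harmonisation step can be sketched as a set of per-protocol mappers that translate incoming records into one common data model, so records from different sources become jointly queryable. The common-model fields and the per-protocol input fields below are invented for illustration; they are not the thesis's actual data model.

```python
# Hypothetical common data model and two per-protocol mappers. Field names
# (agent_addr, oid, facility, ...) are illustrative stand-ins.
COMMON_FIELDS = {"device", "metric", "value", "timestamp"}

def from_snmp(rec: dict) -> dict:
    return {"device": rec["agent_addr"], "metric": rec["oid"],
            "value": rec["val"], "timestamp": rec["ts"]}

def from_syslog(rec: dict) -> dict:
    return {"device": rec["hostname"], "metric": "log." + rec["facility"],
            "value": rec["message"], "timestamp": rec["ts"]}

def harmonise(records: list[tuple[str, dict]]) -> list[dict]:
    """Map (protocol, record) pairs into the common model."""
    mappers = {"snmp": from_snmp, "syslog": from_syslog}
    out = [mappers[proto](rec) for proto, rec in records]
    assert all(set(r) == COMMON_FIELDS for r in out)  # every record conforms
    return out

unified = harmonise([
    ("snmp", {"agent_addr": "10.0.0.1", "oid": "ifInOctets.1", "val": 4242, "ts": 1}),
    ("syslog", {"hostname": "sw1", "facility": "daemon", "message": "link up", "ts": 2}),
])
```

Fields with no counterpart in the common model are simply dropped by such mappers, which is one reason only a fraction (here, the thesis's roughly 21%) of protocol-provided information ends up harmonised.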

    A Generic Network and System Management Framework

    Networks and distributed systems have formed the basis of an ongoing communications revolution that has led to the genesis of a wide variety of services. The constantly increasing size and complexity of these systems does not come without problems. In some organisations, the deployment of Information Technology has reached a state where the benefits from downsizing and rightsizing by adding new services are undermined by the effort required to keep the system running. Management of networks and distributed systems in general has a straightforward goal: to provide a productive environment in which work can be performed effectively. The work required for management should be a small fraction of the total effort. Most IT systems are still managed in an ad hoc style without any carefully elaborated plan. In such an environment the success of management decisions depends totally on the qualification and knowledge of the administrator. The thesis provides an analysis of the state of the art in the area of Network and System Management and identifies the key requirements that must be addressed for the provisioning of Integrated Management Services. These include the integration of the different management related aspects (i.e. integration of heterogeneous Network, System and Service Management). The thesis then proposes a new framework, INSMware, for the provision of Management Services. It provides a fundamental basis for the realisation of a new approach to Network and System Management. It is argued that Management Systems can be derived from a set of pre-fabricated and reusable Building Blocks that break up the required functionality into a number of separate entities rather than being developed from scratch. It proposes a high-level logical model in order to accommodate the range of requirements and environments applicable to Integrated Network and System Management that can be used as a reference model. 
    A development methodology is introduced that reflects the principles of the proposed approach and provides guidelines to structure the analysis, design and implementation phases of a management system. The INSMware approach can further be combined with the componentware paradigm for the implementation of the management system. Based on these principles, a prototype for the management of SNMP systems has been implemented using industry-standard middleware technologies. It is argued that developing a management system based on componentware principles can offer a number of benefits: INSMware components may be re-used, and system solutions become more modular and thereby easier to construct and maintain.
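The building-block idea can be sketched as components sharing a uniform interface and composed into a management pipeline, rather than a monolith. The component names and interface below are hypothetical illustrations, not the actual INSMware design.

```python
# Hypothetical building-block composition: each component implements one
# uniform interface and the management system is assembled from a list of them.
class Component:
    """Uniform interface every building block implements."""
    def handle(self, event: dict) -> dict:
        raise NotImplementedError

class Poller(Component):
    def handle(self, event: dict) -> dict:
        event["polled"] = True  # a real block would fetch values, e.g. via SNMP
        return event

class ThresholdCheck(Component):
    def __init__(self, limit: float):
        self.limit = limit
    def handle(self, event: dict) -> dict:
        event["alarm"] = event.get("value", 0) > self.limit
        return event

def run_pipeline(components: list[Component], event: dict) -> dict:
    """Pass an event through each building block in turn."""
    for c in components:
        event = c.handle(event)
    return event

result = run_pipeline([Poller(), ThresholdCheck(limit=80)], {"value": 95})
```

Because every block honours the same interface, blocks can be re-used across management systems and swapped without touching the rest of the pipeline, which is the modularity benefit the abstract argues for.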

    Major: Electronics and Communication Engineering

    Today, information technology is strategically important to the goals and aspirations of business enterprises, government and higher-education institutions such as universities. Universities face new challenges in the emerging global economy, characterized by the importance of providing faster communication services and improving the productivity and effectiveness of individuals, including the challenge of providing an information network that supports the demands and diversity of university activities. A network architecture, that is, a set of design principles for building a network, is one of the pillars of such an effort: the cornerstone that enables the university's faculty, researchers, students, administrators and staff to discover, learn, reach out and serve society. This thesis focuses on network architecture definitions and fundamental components. The three most important characteristics of a high-quality architecture are that it is an open network architecture, that it is service-oriented, and that it is an IP network based on packets. The architecture has four important components: Services and Network Management, Network Control, Core Switching and Edge Access. The theoretical contribution of this study is a reference model for a university campus network architecture that can be followed or adapted to build a robust yet flexible network that responds to next-generation requirements. The results provide a complete reference guide to the process of building a campus network, which nowadays plays a very important role. The research gives university networks a structured modular model that is reliable, robust and can easily grow.