124 research outputs found

    Internet protocol over wireless sensor networks, from myth to reality

    Internet Protocol (IP) is the standard network-layer protocol of the Internet architecture, allowing communication among heterogeneous networks. For a given network to be accessible from the Internet, it must have a router that complies with this protocol. Wireless sensor networks consist of many smart sensing nodes with computational, communication and sensing capabilities. Such smart sensors cooperate to gather relevant data and present it to the user. The connection of sensor networks to the Internet has so far been realized using gateway- or proxy-based approaches. Historically, several routing protocols were created specifically for sensor networks, discarding IP. However, recent research, prototypes and even implementation tools show that it is possible to reconcile the advantages of IP access with the challenges of sensor networks, with a major contribution from the 6LoWPAN Working Group. This paper presents the advantages and challenges of IP on sensor networks, surveys the state of the art with some implementation examples, and points out further research topics in this area.
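    The survey's central point, that standard IPv6 can run over constrained sensor links once its 40-byte header is compressed, can be illustrated with a conceptual sketch. The field choices below are illustrative assumptions, not the actual RFC 6282 (6LoWPAN IPHC) bit-level encoding:

```python
# Conceptual illustration of 6LoWPAN-style header compression (illustrative only,
# not the RFC 6282 bit-level encoding): header fields that can be derived from
# the link layer or from well-known defaults are elided before transmission.

FULL_IPV6_HEADER_BYTES = 40  # a full IPv6 header is large for 127-byte 802.15.4 frames

def elidable_fields(pkt):
    """Return the IPv6 header fields that need not be sent explicitly."""
    elided = []
    if pkt["version"] == 6:
        elided.append("version")                    # implicit: always IPv6
    if pkt["traffic_class"] == 0 and pkt["flow_label"] == 0:
        elided += ["traffic_class", "flow_label"]   # common default values
    if pkt["hop_limit"] in (1, 64, 255):
        elided.append("hop_limit")                  # common values get a short code
    if pkt["src"].startswith("fe80::") and pkt["dst"].startswith("fe80::"):
        elided.append("addresses")                  # derivable from link-layer addresses
    return elided

pkt = {"version": 6, "traffic_class": 0, "flow_label": 0,
       "hop_limit": 64, "src": "fe80::1", "dst": "fe80::2"}
print("Elided fields:", elidable_fields(pkt))
```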

    Ship availability oriented contract management model for in-service support contracts of naval vessels

    The rapid development of the shipbuilding and ship repair industry in recent years has transformed the way organizations perceive future industry growth, and greater growth of naval technology is also clearly noticeable. Disappointingly, worldwide the availability of naval vessels has remained lower than expected. The Royal Malaysian Navy (RMN) vessels currently maintained under in-service support (ISS) contracts suffer the same fate, despite continuous yearly efforts to improve the ships’ availability. The complexity of the naval ship itself and its ever-changing roles and missions make the situation more complex. Previous studies have focused mostly on availability calculations and on availability modelling of only a few factors; there has not been any holistic study of all human and equipment factors impacting availability. The aim of the research is to demystify the complex naval ship availability issue by developing a decision-making model for improving the operational availability of naval vessels under ISS contracts. Besides introducing a simplified view of this complex naval issue, this multiple-staged, mixed-method, sequential Delphi exploratory research has determined and ranked various downtime influence factors (DIFs), viewed holistically from both human and equipment perspectives, and has determined the impact of the DIFs from the contract and project management perspectives. A panel of 30 experts and five top-management experts in ISS contracts in Malaysia participated in the research. Fifty DIFs were identified, and a severity index (SI) was developed for each of the 15 severe DIFs determined. The developed SI highlights that almost 45% of the downtime causes are due to the top five severe DIFs, with corrective maintenance (SI 0.142) ranked first, spares availability (SI 0.082) second, cash flow shortages (SI 0.078) third, maintenance budget allocation (SI 0.075) fourth and knowledge management, including training and skills, (SI 0.070) fifth. In this study, an availability-oriented model has been developed to assist policymakers in decision making and to help maintainers and logisticians appreciate their individual contribution to improving availability. Contract managers are provided with a tool to manage the contract closer to real time, with prioritization of the severe issues and recovery recommendations to improve the ongoing availability situation. The simple approach and model are more appealing to practitioners than the complex mathematical results and algorithms previously available. An interesting finding is that availability could be improved even under budget constraints.
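    The "almost 45%" figure follows directly from summing the five reported severity indices; a quick check, assuming (as the abstract implies) that each SI expresses a fraction of total downtime causes:

```python
# Severity indices (SI) of the top five downtime influence factors, as reported above.
top_five_difs = {
    "corrective maintenance": 0.142,
    "spares availability": 0.082,
    "cash flow shortages": 0.078,
    "maintenance budget allocation": 0.075,
    "knowledge management (training and skills)": 0.070,
}

share = sum(top_five_difs.values())
print(f"Top-five share of downtime causes: {share:.3f} ({share:.0%})")  # 0.447, i.e. ~45%
```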

    The AdCIM framework: extraction, integration and persistence of the configuration of distributed systems

    [Abstract] This abstract consists of an introduction explaining the approach and context of the Thesis, followed by a section on its organization into parts and chapters, then an enumeration of the contributions collected in it, and finally the conclusions and future work. Introduction: System administrators have to work with the great diversity of hardware and software present in today's organizations. From the administrator's point of view, homogeneous infrastructures are much simpler to administer and therefore more desirable. But, apart from the intrinsic difficulty of maintaining that homogeneity as technology progresses and the consequences of being tied to a single vendor, homogeneity itself carries risks; for example, monoculture installations are more vulnerable to viruses and trojans, and securing them requires introducing random differences in system calls to create artificial diversity, a measure that can cause instability (see Birman and Schneider). This makes heterogeneity practically inevitable and a characteristic of real systems that is hard to ignore, but it does entail more complexity. In many installations, a mix of Windows and Unix derivatives is common, whether combined or clearly divided into clients and servers. Administration tasks on the two systems differ because of differences in ecosystem and in the way computer systems are conceptualized, the result of years of divergence in interfaces, configuration systems, commands and abstractions. Over time there have been many attempts to close that gap, some of them by emulating or porting the Unix tools, proven over many years. For example, Microsoft's solution, Windows Services for Unix, allows the use of NIS, the Network File System (NFS), Perl and the Korn shell on Windows, but does not really integrate them into Windows, since it is oriented more towards application migration. Cygwin supports more tools, such as Bash and the GNU Autotools, but focuses on the direct translation of POSIX-based Unix programs to Windows using gcc. Outwit is a very interesting port of the Unix toolset that integrates Unix pipelines into Windows and allows access to the Registry, ODBC drivers and the clipboard from Unix shells, but scripts developed for this system are not directly usable on Unix systems. The separation therefore persists in spite of these attempts. In this Thesis we present a framework, called AdCIM, for the configuration management of heterogeneous systems. As such, its goal is to integrate and unify the administration of these systems by abstracting their differences, while at the same time being flexible and easy to adapt in order to support new systems quickly. To achieve these goals, the architecture of AdCIM follows the model-driven paradigm, which proposes designing applications from an initial model that is transformed into various "artifacts", such as code, documentation, database schemas, etc., which together form the application. In the case of AdCIM, the model is CIM, and the transformations are performed using the declarative language XSLT, which can express transformations over XML data.
    AdCIM performs all of its transformations with XSLT, except the initial conversion of plain-text files to XML, which is done with a special text-to-XML parser. XSLT programs, also called stylesheets, match and transform specific parts of the input XML tree and support recursive execution, forming a declarative-functional programming model with great expressive power. The model chosen to represent the administration domains covered by the framework is CIM (Common Information Model), a standard, extensible, object-oriented model created by the Distributed Management Task Force (DMTF). Using CIM schemas, the many different configuration formats and administration data are translated by the AdCIM infrastructure into CIM instances. The CIM schemas also serve as a basis for generating web forms and other specific schemas for data validation and persistence. The development of AdCIM as a model-driven framework evolved from our previous work, which extracted configuration data and stored it in an LDAP repository using Perl scripts. Subsequent work adopted the model-driven approach and demonstrated the adaptive nature of the framework through adaptations to Grid environments and to Wireless Mesh Networks. The approach and implementation of this framework are novel, and it uses technologies defined as standards by international organizations such as the IETF, the DMTF and the W3C. We see the use of these technologies as an advantage rather than a limitation on the framework's possibilities; it adds generality and applicability, especially compared with ad-hoc or very specific-purpose solutions. Despite this flexibility, we have tried as far as possible to define and pin down all implementation aspects, to establish appropriate usage practices, and to evaluate the impact of the choice of the different standard technologies on the performance and scalability of the framework.
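    As a rough sketch of the pipeline described above, plain-text configuration parsed into XML and then mapped onto CIM-style instances with XSLT, the following Python fragment uses lxml with a toy parser and a toy stylesheet; it is not AdCIM's own code or schema mapping:

```python
# Minimal sketch of the text -> XML -> XSLT pipeline (toy parser and toy
# stylesheet; not AdCIM's actual parser, stylesheets or CIM schema mapping).
from lxml import etree

def keyvalue_to_xml(text):
    """Parse 'key = value' lines of a plain-text config file into an XML tree."""
    root = etree.Element("config")
    for line in text.splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition("=")
            entry = etree.SubElement(root, "entry", name=key.strip())
            entry.text = value.strip()
    return root

# Toy stylesheet standing in for AdCIM's CIM-mapping stylesheets.
XSL = b"""<xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/config">
    <CIM_Instances>
      <xsl:for-each select="entry">
        <CIM_Setting PropertyName="{@name}" Value="{.}"/>
      </xsl:for-each>
    </CIM_Instances>
  </xsl:template>
</xsl:stylesheet>"""

config_xml = keyvalue_to_xml("hostname = node01\nntp_server = 10.0.0.1\n")
transform = etree.XSLT(etree.fromstring(XSL))
print(etree.tostring(transform(config_xml), pretty_print=True).decode())
```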

    Modelling and Design of Resilient Networks under Challenges

    Communication networks, in particular the Internet, face a variety of challenges that can disrupt our daily lives, in the worst cases resulting in the loss of human life and significant financial costs. We define challenges as external events that trigger faults that eventually result in service failures. Understanding these challenges is essential for improving current networks and for designing Future Internet architectures. This dissertation presents a taxonomy of challenges that can help evaluate design choices for the current and Future Internet. Graph models for analysing critical infrastructures are examined, and a multilevel graph model is developed to study interdependencies between different networks. Furthermore, graph-theoretic heuristic optimisation algorithms are developed. These heuristic algorithms add links to increase the resilience of networks in the least costly manner and are computationally less expensive than an exhaustive search. The performance of networks under random failures, targeted attacks, and correlated area-based challenges is evaluated by the challenge simulation module that we developed. The GpENI Future Internet testbed is used to conduct experiments evaluating the performance of the heuristic algorithms developed.
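    A minimal sketch of the class of greedy link-addition heuristics referred to above, using networkx and algebraic connectivity as an illustrative resilience proxy (the dissertation's own cost model, metrics and algorithms are not reproduced here):

```python
# Greedy heuristic: repeatedly add the candidate link that most improves a
# resilience proxy (here algebraic connectivity; requires networkx + scipy).
# Illustrative of the class of heuristics described, not the exact algorithms.
import itertools
import networkx as nx

def add_links_greedily(G, budget):
    G = G.copy()
    for _ in range(budget):
        candidates = [e for e in itertools.combinations(G.nodes, 2) if not G.has_edge(*e)]
        if not candidates:
            break
        # Pick the missing link whose addition maximises algebraic connectivity.
        best = max(candidates,
                   key=lambda e: nx.algebraic_connectivity(nx.Graph(list(G.edges) + [e])))
        G.add_edge(*best)
    return G

ring = nx.cycle_graph(8)                       # a sparse, fragile topology
improved = add_links_greedily(ring, budget=2)
print("Added links:", sorted(set(improved.edges) - set(ring.edges)))
```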

    Mobile Oriented Future Internet (MOFI)

    This Special Issue consists of seven papers that discuss how to enhance mobility management and its associated performance in the mobile-oriented future Internet (MOFI) environment. The first two papers deal with the architectural design and experimentation of mobility management schemes, in which new schemes are proposed and real-world testbed experiments are performed. The subsequent three papers focus on the use of software-defined networking (SDN) for effective service provisioning in the MOFI environment, together with real-world practices and testbed experimentation. The remaining two papers discuss network engineering issues in newly emerging mobile networks, such as flying ad-hoc networks (FANET) and connected vehicular networks.

    A Generic Network and System Management Framework

    Networks and distributed systems have formed the basis of an ongoing communications revolution that has led to the genesis of a wide variety of services. The constantly increasing size and complexity of these systems does not come without problems. In some organisations, the deployment of Information Technology has reached a state where the benefits of downsizing and rightsizing by adding new services are undermined by the effort required to keep the system running. Management of networks and distributed systems in general has a straightforward goal: to provide a productive environment in which work can be performed effectively, with the work required for management being only a small fraction of the total effort. Most IT systems are still managed in an ad hoc style, without any carefully elaborated plan, and in such an environment the success of management decisions depends entirely on the qualifications and knowledge of the administrator. The thesis provides an analysis of the state of the art in Network and System Management and identifies the key requirements that must be addressed for the provision of Integrated Management Services, including the integration of the different management-related aspects (i.e. the integration of heterogeneous Network, System and Service Management). The thesis then proposes a new framework, INSMware, for the provision of Management Services, which provides a fundamental basis for the realisation of a new approach to Network and System Management. It is argued that Management Systems can be derived from a set of pre-fabricated and reusable Building Blocks that break the required functionality up into a number of separate entities, rather than being developed from scratch. It proposes a high-level logical model that accommodates the range of requirements and environments applicable to Integrated Network and System Management and can be used as a reference model. A development methodology is introduced that reflects the principles of the proposed approach and provides guidelines to structure the analysis, design and implementation phases of a management system. The INSMware approach can further be combined with the componentware paradigm for the implementation of the management system. Based on these principles, a prototype for the management of SNMP systems has been implemented using industry-standard middleware technologies. It is argued that development of a management system based on componentware principles can offer a number of benefits: INSMware Components may be re-used, and system solutions become more modular and thereby easier to construct and maintain.
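    The building-block idea, management functionality assembled from reusable components behind a common interface rather than written from scratch, could be sketched as follows; the interface and component names are hypothetical, not INSMware's:

```python
# Hypothetical sketch of the componentware idea: a management service composed
# from interchangeable building blocks behind one interface (not INSMware code).
from abc import ABC, abstractmethod

class ManagementBlock(ABC):
    """Common contract every reusable management building block implements."""
    @abstractmethod
    def poll(self, target: str) -> dict: ...

class SnmpBlock(ManagementBlock):
    def poll(self, target):
        # A real block would issue SNMP GET requests here.
        return {"target": target, "source": "SNMP", "status": "up"}

class LogFileBlock(ManagementBlock):
    def poll(self, target):
        # A real block would parse system logs here.
        return {"target": target, "source": "syslog", "errors": 0}

class ManagementService:
    """Integrated service assembled from pre-fabricated blocks."""
    def __init__(self, blocks):
        self.blocks = blocks

    def status(self, target):
        return [block.poll(target) for block in self.blocks]

print(ManagementService([SnmpBlock(), LogFileBlock()]).status("router-1"))
```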

    Performance metrics and routing in vehicular ad hoc networks

    The aim of this thesis is to propose a method for enhancing the performance of Vehicular Ad hoc Networks (VANETs). The focus is on a routing protocol in which performance metrics are used to inform the routing decisions made. The thesis begins by analysing routing protocols in a random mobility scenario with a wide range of node densities. A Cellular Automata algorithm is subsequently applied to create a mobility model of a highway, and a wide range of densities and transmission ranges is tested. Performance metrics are introduced to assist the prediction of likely route failure. The Good Link Availability (GLA) and Good Route Availability (GRA) metrics are proposed, which can be used for pre-emptive action that has the potential to give better performance. The implementation framework for this method using the AODV routing protocol is also discussed. The main outcomes of this research can be summarised as identifying and formulating methods for pre-emptive action, using Cellular Automata with NS-2 to simulate VANETs, and the implementation method within the AODV routing protocol.
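    A compact illustration of a cellular-automaton highway mobility model of the kind mentioned above; this sketch uses Nagel-Schreckenberg-style rules as a stand-in and does not reproduce the thesis's exact CA rules or its NS-2 coupling:

```python
# Nagel-Schreckenberg-style single-lane CA as a stand-in for the highway
# mobility model described (illustrative rules, not the thesis's exact model).
import random

def step(positions, speeds, road_len, v_max=5, p_slow=0.3):
    """One parallel update of all vehicles on a circular single-lane road."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_speeds = speeds[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max, gap)        # accelerate, but keep a safe gap
        if v > 0 and random.random() < p_slow:    # random dawdling
            v -= 1
        new_speeds[i] = v
    new_positions = [(positions[i] + new_speeds[i]) % road_len for i in range(len(positions))]
    return new_positions, new_speeds

positions, speeds = [0, 10, 20, 30], [0, 0, 0, 0]
for _ in range(50):
    positions, speeds = step(positions, speeds, road_len=100)
print(positions, speeds)
```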

    A Remote Capacity Utilization Estimator for WLANs

    In WLANs, the capacity of a node is not fixed and can vary dramatically due to the shared nature of the medium under the IEEE 802.11 MAC mechanism. There are two main methods of capacity estimation in WLANs: active methods, based upon probing packets, which consume channel bandwidth and do not scale well; and passive methods, based upon analyzing the transmitted packets, which avoid the overhead of transmitting probe packets and perform with greater accuracy. Furthermore, passive methods can be implemented locally or remotely. Local passive methods require an additional dissemination mechanism to communicate the capacity information to other network nodes, which adds complexity and can be unreliable under adverse network conditions. Remote passive methods, on the other hand, do not require a dissemination mechanism, so they can be simpler to implement and do not suffer from communication reliability issues. Many applications (e.g. ANDSF) can benefit from this capacity information. Therefore, in this thesis we propose a new remote passive Capacity Utilization estimator performed by neighbour nodes. However, there will be an error associated with the measurements owing to the differences in the wireless medium as observed from the different nodes’ locations, and the main undertaking of this thesis is to address this issue. An error model is developed to analyse the main sources of error and to determine their impact on the accuracy of the estimator. Arising from this model, a number of modifications are implemented to improve the accuracy of the estimator. The network simulator ns2 is used to investigate the performance of the estimator, and the results from a range of different test scenarios indicate its feasibility and accuracy as a passive remote method. Finally, the estimator is deployed in a node saturation detection scheme, where it is shown to outperform two other similar schemes based upon queue observation and probing with ping packets.
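    A rough sketch of the passive, airtime-based idea behind such an estimator, summing the durations of frames overheard on the channel within an observation window; the figures and formula below are illustrative and omit the error-model corrections the thesis develops:

```python
# Passive channel-utilization sketch: fraction of an observation window occupied
# by overheard frames (illustrative; omits the thesis's error model and corrections).

def frame_airtime_us(payload_bytes, phy_rate_mbps, overhead_us=50.0):
    """Approximate on-air time of one frame: fixed PHY/MAC overhead plus payload time."""
    return overhead_us + (payload_bytes * 8) / phy_rate_mbps  # bits / (Mbit/s) = microseconds

def channel_utilization(frames, window_us):
    """frames: iterable of (payload_bytes, phy_rate_mbps) tuples overheard in the window."""
    busy_us = sum(frame_airtime_us(size, rate) for size, rate in frames)
    return min(busy_us / window_us, 1.0)

overheard = [(1500, 54.0)] * 200 + [(100, 6.0)] * 50   # hypothetical one-second capture
print(f"Estimated utilization: {channel_utilization(overheard, window_us=1_000_000):.1%}")
```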