
    An Overview on Wireless Sensor Networks Technology and Evolution

    Wireless sensor networks (WSNs) enable new applications and, due to several constraints, require non-conventional paradigms for protocol design. Owing to the requirement for low device complexity together with low energy consumption (i.e., long network lifetime), a proper balance between communication and signal/data processing capabilities must be found. This has motivated a huge effort in research activities, standardization processes, and industrial investment in this field over the last decade. This survey paper reports an overview of WSN technologies, main applications and standards, features of WSN design, and evolutions. In particular, some peculiar applications, such as those based on environmental monitoring, are discussed and design strategies highlighted; a case study based on a real implementation is also reported. Trends and possible evolutions are traced. Emphasis is given to the IEEE 802.15.4 technology, which enables many applications of WSNs. Some examples of the performance characteristics of 802.15.4-based networks are shown and discussed as a function of the size of the WSN and the data type to be exchanged among nodes.
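    As a rough illustration of why frame size and data type matter at these rates (an assumption-laden sketch, not a result from the paper), the best-case payload throughput of a single IEEE 802.15.4 link can be estimated from the 250 kbit/s PHY rate and the per-frame header overhead; the MAC header size below is an assumed typical value.

    ```python
    # Back-of-the-envelope estimate of best-case IEEE 802.15.4 payload
    # throughput; ignores CSMA/CA back-off, ACKs, inter-frame spacing,
    # and multi-hop forwarding, so real networks achieve less.

    PHY_RATE_BPS = 250_000   # 2.4 GHz O-QPSK PHY data rate
    MAX_FRAME = 127          # max PHY payload (aMaxPHYPacketSize), bytes
    PHY_OVERHEAD = 6         # preamble + SFD + length field, bytes
    MAC_OVERHEAD = 11        # assumed MAC header + FCS with short addressing

    def payload_throughput(payload_bytes: int) -> float:
        """Best-case payload throughput in bit/s for one frame size."""
        if MAC_OVERHEAD + payload_bytes > MAX_FRAME:
            raise ValueError("payload does not fit in a single frame")
        frame_time = (PHY_OVERHEAD + MAC_OVERHEAD + payload_bytes) * 8 / PHY_RATE_BPS
        return payload_bytes * 8 / frame_time

    for size in (10, 50, 116):  # 116 B is the largest payload that fits here
        print(f"{size:3d} B payload -> {payload_throughput(size) / 1000:6.1f} kbit/s")
    ```

    Small payloads pay a large relative header tax, which is one reason the data type exchanged among nodes affects the achievable performance.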

    On the performance of markup language compression

    Data compression is used in our everyday life to improve computer interaction or simply for storage purposes. Lossless data compression refers to techniques that compress a file in such a way that the decompressed output is a replica of the original. These techniques, unlike lossy data compression, are necessary and heavily used to reduce resource usage and improve storage and transmission speeds. Prior research has led to huge improvements in compression performance and efficiency for general-purpose tools, which are mainly based on statistical and dictionary encoding techniques. Extensible Markup Language (XML) contains redundant data which is parsed as normal text by general-purpose compressors. Several tools for compressing XML data have been developed, resulting in improvements in compression size and speed using different compression techniques. These tools are mostly based on algorithms that rely on variable-length encoding. XML Schema is a language used to define the structure and data types of an XML document; as a result, it provides XML compression tools with additional information that can be used to improve compression efficiency. In addition, XML Schema is also used for validating XML data. For document compression, the schema needs to be generated dynamically for each XML file; this solution can be applied to improve the efficiency of XML compressors. This research investigates a dynamic approach to compressing XML data using a hybrid compression tool. This model allows the compression of XML data using variable- and fixed-length encoding techniques when their best use cases are triggered. The aim of this research is to investigate the use of fixed-length encoding techniques to support general-purpose XML compressors. The results demonstrate the possibility of improving on compression size when a fixed-length encoder is used to compress most XML data types.
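    The core idea can be sketched in a few lines (a hypothetical illustration, not the thesis' actual tool): when the schema declares an element to be a fixed-width type such as xs:int, its values can be packed with a fixed-length binary encoding rather than compressed as free text. The element name and data below are made up.

    ```python
    # Sketch of schema-driven fixed-length encoding vs. general-purpose
    # text compression of the same XML content.
    import struct
    import zlib

    values = list(range(100_000, 101_000))            # e.g. xs:int element contents
    as_text = "".join(f"<v>{v}</v>" for v in values).encode()

    fixed = struct.pack(f"<{len(values)}i", *values)  # 4 bytes per value
    print("raw XML text:", len(as_text), "bytes")
    print("zlib(text):  ", len(zlib.compress(as_text, 9)), "bytes")
    print("fixed-length:", len(fixed), "bytes")
    print("zlib(fixed): ", len(zlib.compress(fixed, 9)), "bytes")
    ```

    The fixed-length form is both smaller than the raw text and a better input for a general-purpose compressor, which is the kind of hybrid gain the thesis investigates.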

    Data Communications and Network Technologies

    This open access book is written according to the examination outline for the Huawei HCIA-Routing & Switching V2.5 certification, aiming to help readers master the basics of network communications and use Huawei network devices to set up enterprise LANs and WANs, wired and wireless networks, ensure network security for enterprises, and grasp cutting-edge computer network technologies. The content of this book includes: network communication fundamentals, the TCP/IP protocol suite, the Huawei VRP operating system, IP addresses and subnetting, static and dynamic routing, Ethernet networking technology, ACL and AAA, network address translation, DHCP servers, WLAN, IPv6, the WAN PPP and PPPoE protocols, typical networking architectures and design cases for campus networks, the SNMP protocol used in network management, operation and maintenance, the Network Time Protocol (NTP), SDN and NFV, programming, and automation. As the world’s leading provider of ICT (information and communication technology) infrastructure and smart terminals, Huawei offers products ranging from digital data communication, cyber security, wireless technology, data storage, cloud computing, and smart computing to artificial intelligence.
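    As a small taste of the subnetting material such an outline covers (an illustrative example, not taken from the book), Python's standard ipaddress module can compute the usual addressing parameters:

    ```python
    # Computing basic subnet parameters for an IPv4 block.
    import ipaddress

    net = ipaddress.ip_network("192.168.10.0/26")
    print("network address:  ", net.network_address)    # 192.168.10.0
    print("broadcast address:", net.broadcast_address)  # 192.168.10.63
    print("subnet mask:      ", net.netmask)            # 255.255.255.192
    print("usable hosts:     ", net.num_addresses - 2)  # 62

    # Splitting the /26 into four /28 subnets, e.g. one per department:
    for subnet in net.subnets(new_prefix=28):
        print("subnet:", subnet)
    ```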

    Visualisation of PF firewall logs using open source

    If you cannot measure, you cannot manage. This is an age-old saying, but still very true, especially within the current South African cybercrime scene and the ever-growing Internet footprint. Due to the significant increase in cybercrime across the globe, information security specialists are starting to see the intrinsic value of logs that can ‘tell a story’. Logs do not only tell a story, but also provide a tool to measure a normally dark force within an organisation. The collection of current logs from installed systems, operating systems, and devices is imperative in the event of a hacking attempt, data leak, or even data theft, whether the attempt is successful or unsuccessful. No logs mean no evidence, and in many cases not even the opportunity to find the mistake or fault in the organisation’s defence systems. It remains difficult to choose which logs an organisation requires. A number of questions should be considered: should a centralised or decentralised approach for collecting these logs be followed, or a combination of both? How many events will be collected, how much additional bandwidth will be required, and will the log collection be near real time? How long must the logs be kept, and what hashing and encryption (for data integrity), if any, should be used? Lastly, what system must be used to correlate and analyse the logs and to make alerts and reports available? This thesis addresses these myriad questions, examining the current lack of log analysis and practical implementations in modern organisations, and how this need can be fulfilled by means of a basic approach. South African organisations must use the technology at hand in order to know what electronic data are sent into and out of their networks. Concentrating only on FreeBSD PF firewall logs, this thesis demonstrates that excellent results are possible when logs are collected to obtain a visual display of what data is traversing the corporate network and which parts of that data pose a threat to it. This threat is easily determined via a visual interpretation of statistical outliers. This thesis aims to show that in the field of corporate data protection, if you can measure, you can manage.
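    The outlier idea can be illustrated with a short sketch (assumed for illustration; the thesis works from real pflog data): given per-source counts of blocked packets aggregated from PF logs, a robust statistic such as the median absolute deviation separates a scanner from ordinary traffic. The addresses, counts, and cut-off factor below are hypothetical.

    ```python
    # Flagging statistical outliers in per-source-IP block counts;
    # in practice the counts would be aggregated from pflog records.
    from statistics import median

    counts = {
        "10.0.0.5": 12, "10.0.0.9": 15, "10.0.0.17": 9,
        "10.0.0.23": 11, "203.0.113.44": 480,  # one suspiciously busy source
    }

    values = list(counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)  # robust spread estimate
    threshold = med + 10 * mad                  # assumed cut-off factor

    for ip, n in counts.items():
        if n > threshold:
            print(f"outlier: {ip} blocked {n} times (threshold {threshold})")
    ```

    Plotted rather than printed, such outliers are exactly the points that stand out in a visual display of the traffic.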

    On-demand transmission model using image-based rendering for remote visualization

    Get PDF
    Interactive distributed visualization is an emerging technology with numerous applications. However, many of the present approaches to interactive distributed visualization have limited performance since they are based on the traditional polygonal-processing graphics pipeline. In contrast, image-based rendering uses multiple images of the scene instead of a 3D geometrical representation, and so has the key advantage that the cost of producing the final output is independent of the scene complexity and depends only on the desired final image resolution. These multiple images are referred to as the light field dataset. In this thesis we propose an on-demand solution for efficiently transmitting visualization data to remote users/clients. This is achieved by sending selected parts of the dataset based on the current client viewpoint, instead of downloading a complete replica of the light field dataset to each client, or remotely sending a single rendered view back from a central server each time the user updates their viewing parameters. The on-demand approach shows stable performance as the number of clients increases because the load on the server and the network traffic are reduced. Furthermore, detailed performance studies show that the proposed on-demand scheme outperforms the current local and remote solutions in terms of interactivity, measured in frames per second. In addition, a performance study based on a theoretical cost model is presented. The model was able to provide realistic estimates of the results for different ranges of dataset sizes. These results also indicate that the model can be used as a predictive tool for estimating timings for the visualization process, enabling the improvement of the process and product quality, as well as the further development of models for larger systems and datasets. In discussing the strengths and weaknesses of each of the models, we see that running the system at larger dataset resolutions involves a trade-off between the generality of the hardware (the server and network) and the dataset resolution: larger dataset resolutions cannot achieve interactive frame rates on current COTS infrastructure. Finally, we conclude that the design of our 3D visualization system, based on image-based rendering coupled with an on-demand transmission model, is a contribution to the field and a good basis for the future development of collaborative, distributed visualization systems.
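    A minimal sketch of the on-demand selection step (one plausible reading of the approach, not the thesis' actual protocol): model the light field as images captured from a grid of camera positions and transmit only the images nearest the client's current viewpoint. The 8x8 grid and k=4 are made-up values.

    ```python
    # Selecting the subset of light-field images to transmit for a
    # given client viewpoint.
    import math

    cameras = [(x, y) for x in range(8) for y in range(8)]  # capture positions

    def images_to_send(viewpoint, k=4):
        """Return the k camera positions closest to the client viewpoint."""
        return sorted(cameras, key=lambda c: math.dist(c, viewpoint))[:k]

    # A client viewing from (2.4, 3.1) needs only the 4 surrounding images,
    # not the full 64-image dataset, which keeps per-client server load
    # and network traffic roughly constant.
    print(images_to_send((2.4, 3.1)))
    ```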