    An FPGA implementation of an adaptive data reduction technique for wireless sensor networks

    Wireless sensor networking (WSN) is an emerging technology with a wide range of potential applications including environment monitoring, surveillance, medical systems, and robotic exploration. These networks consist of large numbers of distributed nodes that organize themselves into a multihop wireless network. Each node is equipped with one or more sensors, embedded processors, and low-power radios, and is normally battery operated. Reporting constant measurement updates incurs high communication costs for each individual node, resulting in significant communication overhead and energy consumption. A solution to reduce power requirements is to select, among all data produced by the sensor network, a subset of sensor readings that is relayed to the user such that the original observation data can be reconstructed within some user-defined accuracy. This paper describes the implementation of an adaptive data reduction algorithm for WSN on a Xilinx Spartan-3E FPGA. A feasibility study is carried out to determine the benefits of this solution.
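
    The abstract does not spell out the reduction rule itself, so the following is only a minimal sketch of one common error-bounded approach: a node relays a reading only when it drifts from the last relayed value by more than a user-defined tolerance, and the sink reconstructs the gaps with a zero-order hold. The function names and the tol parameter are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of threshold-based data reduction (not the paper's exact
# algorithm): relay a reading only when it drifts more than `tol` from the
# last relayed value; the sink fills gaps with a zero-order hold.

def reduce_readings(readings, tol):
    """Return the (index, value) pairs that must be transmitted."""
    transmitted = []
    last_sent = None
    for i, value in enumerate(readings):
        if last_sent is None or abs(value - last_sent) > tol:
            transmitted.append((i, value))
            last_sent = value
    return transmitted

def reconstruct(transmitted, length):
    """Rebuild the full series from the relayed subset (zero-order hold)."""
    sent = dict(transmitted)
    series, current = [], None
    for i in range(length):
        if i in sent:
            current = sent[i]
        series.append(current)
    return series

if __name__ == "__main__":
    data = [20.0, 20.1, 20.1, 20.6, 21.5, 21.6, 21.4, 19.9]
    sent = reduce_readings(data, tol=0.5)
    print(f"sent {len(sent)}/{len(data)} samples:", sent)
    approx = reconstruct(sent, len(data))
    assert all(abs(a - b) <= 0.5 for a, b in zip(data, approx))
```

    Because a reading is suppressed only when it lies within tol of the last transmitted value, the reconstruction error never exceeds tol, which mirrors the idea of recovering the observation data within a user-defined accuracy.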

    Design and implementation of an intelligent USB peripheral controller

    This paper presents the design and implementation of an intelligent, re-programmable device that is capable of automatically detecting USB peripherals on insertion and performing various tasks accordingly. Examples include the automatic transfer of data between pen drives or the automatic printing of a file located on a pen drive. The performance of the system was analyzed and results for the execution time and CPU utilization of the programs performing the tasks were obtained. A comparison was made with the same programs running on a laptop, which was set as a benchmark.
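
    As a rough illustration of the insertion-detection-and-dispatch behaviour described above (not the embedded controller itself), the sketch below polls a mount directory for newly attached drives and runs an example task on each one. The paths and the copy task are assumptions for the sake of the example.

```python
import os
import shutil
import time

# Hypothetical illustration only: poll a mount directory for newly attached
# drives and dispatch a task for each one. Paths and the chosen task are
# assumptions, not taken from the paper.

MOUNT_ROOT = "/media"          # where removable drives appear (assumed to exist)
DEST_DIR = "/tmp/usb_backup"   # example task target: mirror new drives here

def handle_new_drive(mount_point):
    """Example task: copy the drive's top-level files to a backup folder."""
    target = os.path.join(DEST_DIR, os.path.basename(mount_point))
    os.makedirs(target, exist_ok=True)
    for name in os.listdir(mount_point):
        src = os.path.join(mount_point, name)
        if os.path.isfile(src):
            shutil.copy2(src, target)
    print(f"copied files from {mount_point} to {target}")

def watch(poll_interval=2.0):
    known = set(os.listdir(MOUNT_ROOT))
    while True:
        current = set(os.listdir(MOUNT_ROOT))
        for new in current - known:     # a drive appeared since the last poll
            handle_new_drive(os.path.join(MOUNT_ROOT, new))
        known = current
        time.sleep(poll_interval)

if __name__ == "__main__":
    watch()
```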

    Programming compensations for system-monitor synchronisation

    In security-critical systems such as online establishments, runtime analysis is crucial to detect and handle any unexpected behaviour. Due to resource-intensive operations carried out by such systems, particularly during peak times, synchronous monitoring is not always an option. Asynchronous monitoring, on the other hand, would not compete for system resources, but it might detect anomalies only once the system has progressed further and it is already too late to apply a remedy. A conciliatory approach is to apply asynchronous monitoring but to synchronise when there is a high risk of a problem arising. Although this does not solve the issue of problems arising while in asynchronous mode, compensations have been shown to be useful for restoring the system to a sane state when this occurs. In this paper we propose a novel notation, compensating automata, which enables the user to program the compensation logic within the monitor, extending our earlier results by allowing for richer compensation structures. This approach moves the compensation closer to the violation information while simultaneously relieving the system of the additional burden.
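
    The compensating automata notation itself is not reproduced here; the following is only a minimal sketch of the underlying idea under stated assumptions: every observed event registers an undo action with the monitor, and when the property check fails the monitor replays the registered compensations in reverse to return the system to a sane state. The class, event and state names are hypothetical.

```python
# Minimal sketch of compensation-aware monitoring (an illustration of the idea,
# not the paper's compensating automata notation): each observed event registers
# an undo action, and a property violation triggers the registered compensations
# in reverse order to restore a sane state.

class CompensatingMonitor:
    def __init__(self, property_check):
        self.property_check = property_check  # returns False on violation
        self.compensations = []               # stack of undo actions

    def observe(self, event, compensation):
        """Record an event together with the action that undoes it."""
        self.compensations.append(compensation)
        if not self.property_check(event):
            self.compensate()
            return False
        return True

    def compensate(self):
        """Replay the compensations most-recent-first."""
        while self.compensations:
            self.compensations.pop()()

if __name__ == "__main__":
    balance = {"amount": 100}          # hypothetical system state

    def withdraw(amount):
        balance["amount"] -= amount

    def undo_withdraw(amount):
        def action():
            balance["amount"] += amount
        return action

    monitor = CompensatingMonitor(lambda _event: balance["amount"] >= 0)
    for amount in (30, 50, 40):        # the last withdrawal overdraws the account
        withdraw(amount)
        if not monitor.observe(("withdraw", amount), undo_withdraw(amount)):
            break
    print(balance)                     # {'amount': 100}: restored by compensation
```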

    Fast user-level inter-thread communication and synchronisation

    This project concerns the design and implementation of user-level inter-thread synchronisation and communication algorithms. A number of these algorithms have been implemented on the SMASH user-level thread scheduler for symmetric multiprocessors and multicore processors. Each inter-thread communication primitive considered has two implementations: a lock-based implementation and a lock-free implementation. The performance of concurrent programs using these user-level primitives is measured and analyzed against the performance of programs using kernel-level inter-thread communication primitives. In addition, the differences between the lock-based and lock-free implementations are analyzed.
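
    As an illustrative stand-in for this kind of comparison (not the SMASH primitives themselves), the toy benchmark below contrasts a lock-protected buffer with collections.deque, whose append() and popleft() are atomic in CPython and therefore behave like a lock-free-style primitive at the Python level.

```python
import threading
import time
from collections import deque

# Toy micro-benchmark, not the SMASH primitives from the paper: it contrasts a
# lock-protected buffer with collections.deque, whose append() and popleft()
# are atomic in CPython and so act here as a lock-free-style primitive.

N = 50_000  # messages passed from producer to consumer (arbitrary choice)

def run(producer, consumer):
    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# Lock-based buffer: every access goes through an explicit mutex.
lock, buf = threading.Lock(), []

def locked_producer():
    for i in range(N):
        with lock:
            buf.append(i)

def locked_consumer():
    received = 0
    while received < N:
        with lock:
            if buf:
                buf.pop(0)
                received += 1

# Deque-based buffer: relies on the atomicity of append()/popleft().
dq = deque()

def deque_producer():
    for i in range(N):
        dq.append(i)

def deque_consumer():
    received = 0
    while received < N:
        try:
            dq.popleft()
            received += 1
        except IndexError:
            pass  # buffer momentarily empty; spin until the producer catches up

if __name__ == "__main__":
    print("lock-based :", run(locked_producer, locked_consumer), "s")
    print("deque-based:", run(deque_producer, deque_consumer), "s")
```

    The printed times are only indicative; real user-level primitives avoid kernel involvement altogether, which this sketch cannot show.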

    A software development framework for hardware centric applications: an architectural perspective

    Throughout the history of software engineering, software development has been looked at from various perspectives: usability, suitability for the proposed problem, speed of development, relevance to real-world scenarios, as well as the hardware it needs to manifest itself in the real world. This paper delves deeper into a core concept of software engineering, that of mapping software onto hardware [1], focusing specifically on Hardware Centric Systems (HCS), systems in which the hardware dictates, to an influential degree, the actual nature of the software. It examines the frameworks and concepts that exist for describing this mapping from an architectural point of view, so as to establish whether there is a need for a more complete and/or effective framework. It also proposes a roadmap towards a base architecture framework for the development of Hardware Centric applications, which is then used to determine whether a suitable framework already exists.

    Offline handwritten signature verification using radial basis function neural network

    This study investigates the effectiveness of Radial Basis Function Neural Networks (RBFNNs) for Offline Handwritten Signature Verification (OHSV). A signature database covering intrapersonal variations is collected for evaluation. Global, grid and texture features are used as feature sets. A number of experiments were carried out to compare the effectiveness of each separate set and their combination. The system is extensively tested with random signature forgeries, and the high recognition rates obtained demonstrate the effectiveness of the architecture. The best results are obtained when global and grid features are combined, producing a feature vector of 592 elements. In this case a Mean Error Rate (MER) of 2.04% with a False Rejection Rate (FRR) of 1.58% and a False Acceptance Rate (FAR) of 2.5% are achieved, which are generally better than those reported in the literature.
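
    The feature extraction and RBFNN training are not reproduced here, but the reported error figures can be made concrete with a small sketch: given similarity scores for genuine and forged signatures, a threshold yields the FRR and FAR, and the MER is taken here as their mean. The scores below are randomly generated assumptions, not the paper's data.

```python
import numpy as np

# Hedged sketch of how verification error rates are typically computed, using
# made-up scores rather than the paper's data: genuine and forged signatures
# receive similarity scores, a threshold accepts or rejects them, and FRR, FAR
# and a mean error rate (MER, here the average of the two) follow.

def error_rates(genuine_scores, forgery_scores, threshold):
    frr = np.mean(genuine_scores < threshold)   # genuines wrongly rejected
    far = np.mean(forgery_scores >= threshold)  # forgeries wrongly accepted
    mer = (frr + far) / 2.0
    return frr, far, mer

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    genuine = rng.normal(0.8, 0.1, 500)   # hypothetical similarity scores
    forgery = rng.normal(0.4, 0.1, 500)
    # sweep the threshold and keep the operating point with the lowest MER
    best = min((error_rates(genuine, forgery, t) + (t,)
                for t in np.linspace(0.0, 1.0, 101)),
               key=lambda r: r[2])
    frr, far, mer, t = best
    print(f"threshold={t:.2f}  FRR={frr:.3f}  FAR={far:.3f}  MER={mer:.3f}")
```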

    Utilization of information and communication technology among undergraduate nursing students in Tanta University, Egypt

    The use of ICT to enhance learning and teaching has become increasingly important. Information and communication technology in education is a modern, efficient and cost-effective process which has created a need to transform how students and teachers in higher institutions learn and teach respectively. This study was conducted to assess the pattern and utilization of information and communication technology among undergraduate nursing students at Tanta University, Egypt. A descriptive cross-sectional design was used, in which 504 fourth-year students enrolled in the 2015/2016 session participated. A validated structured questionnaire was used for data collection, and the data collected were analyzed using the Statistical Package for the Social Sciences (SPSS) version 20. The results indicated that 80% of the surveyed students utilized ICT in performing their study assignments and research. The majority of the female students (79.0%) rated themselves as good in computer skills, while only one fifth (21.0%) of the male students did so. Students whose parents had secondary education or above scored significantly higher in self-rated computer operation skills than those whose parents had below secondary education. The study concluded that the majority of the students made good use of ICT, with variation by residence and family income. It is therefore recommended that the university ensure strict compliance with the rules of e-learning courses and proper application by each student.

    A simplified model of QuickCheck automata

    Placing guarantees on a program’s correctness is as hard as it is essential. Several approaches to verification exist, with testing being a popular, if imperfect, solution. The following is a formal description of QuickCheck finite state automata, which can be used to model a system and automatically derive command sequences over which properties can be checked. Understanding and describing such a model will aid in integrating this verification approach with other methodologies, notably runtime verification.
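
    As a rough, hand-rolled analogue of such an automaton (not the formal model from the paper, and not using the QuickCheck library itself), the sketch below defines commands with preconditions over a model state, derives random command sequences from whichever commands are enabled, and checks a property after every step. The counter system and command set are illustrative assumptions.

```python
import random

# Minimal hand-rolled analogue of a QuickCheck-style finite state automaton
# (illustrative only): commands carry preconditions over a model state, random
# command sequences are derived from the enabled commands, and a property is
# checked after every step against the real system.

class Counter:
    """The system under test; a stand-in example, not from the paper."""
    def __init__(self):
        self.value = 0
    def inc(self):
        self.value += 1
    def reset(self):
        self.value = 0

COMMANDS = {
    "inc":   {"pre": lambda m: True,   "run": lambda s: s.inc(),
              "next": lambda m: m + 1},
    "reset": {"pre": lambda m: m > 0,  "run": lambda s: s.reset(),
              "next": lambda m: 0},
}

def run_sequence(length, seed=None):
    rng = random.Random(seed)
    system, model = Counter(), 0
    trace = []
    for _ in range(length):
        enabled = [c for c, spec in COMMANDS.items() if spec["pre"](model)]
        cmd = rng.choice(enabled)
        COMMANDS[cmd]["run"](system)        # drive the real system
        model = COMMANDS[cmd]["next"](model)  # advance the abstract model
        trace.append(cmd)
        assert system.value == model, f"property violated after {trace}"
    return trace

if __name__ == "__main__":
    for i in range(100):                    # 100 randomly derived sequences
        run_sequence(length=20, seed=i)
    print("all generated command sequences satisfied the property")
```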

    Data processing: challenges and tools

    Data has grown at incredible rates in the last few years, especially with the increasing popularity of social media and video streaming services such as YouTube. This paper looks at some of the challenges associated with the acquisition and processing of data. These challenges differ from the opportunities that can be exploited from the acquired data and are areas that can benefit from scientific research. In particular, the need to process large amounts of data in real time is becoming critical in many areas such as social network trends, website statistics and intrusion detection in large data centres.
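
    To make the real-time requirement concrete, the following is a toy sketch (not a tool from the paper) of trend counting over a sliding time window: events stream in, counts are updated immediately, and entries falling outside the window are evicted so that the trending view stays current. The terms and timestamps are invented.

```python
import time
from collections import Counter, deque

# Toy illustration of real-time trend counting over a sliding window
# (an assumption-based example, not a tool described in the paper).

class TrendCounter:
    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.events = deque()      # (timestamp, term) pairs inside the window
        self.counts = Counter()

    def add(self, term, now=None):
        now = time.time() if now is None else now
        self.events.append((now, term))
        self.counts[term] += 1
        # evict events that have fallen outside the window
        while self.events and now - self.events[0][0] > self.window:
            _, old = self.events.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def top(self, n=3):
        return self.counts.most_common(n)

if __name__ == "__main__":
    trends = TrendCounter(window_seconds=10)
    stream = [(0, "fpga"), (2, "wsn"), (3, "fpga"), (11, "usb"), (13, "fpga")]
    for t, term in stream:
        trends.add(term, now=t)
    print(trends.top())   # the "wsn" event and the first "fpga" event expired
```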

    Parallel processing over a peer-to-peer network: constructing the poor man’s supercomputer

    The aggregation of typical home computers through a peer-to-peer (P2P) framework over the Internet would yield a virtual supercomputer of unmatched processing power, 95% of which is presently left unutilized. However, the global community still appears hesitant to tap into the well of unharnessed potential offered by distributed computing. Reasons include the lack of personal incentive for participants and the high degree of expertise required from application developers. Our vision is to tackle these obstacles by building a P2P system capable of deploying user-defined tasks onto the network for distributed execution. Users would only be expected to write standard concurrent code against our application programming interface, and may rely on the system to transparently provide optimal task distribution, process migration, message delivery, global state, fault tolerance, and recovery. Strong mobility during process migration is achieved by pre-processing the source code. Our results indicate that near-linear efficiencies – approximately 94% ± 2% of the optimal – may be obtained for adequately coarse-grained applications, even when deployed on a heterogeneous network.
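
    The P2P framework itself is not sketched here, but the efficiency figure can be illustrated with a much simpler stand-in: distributing coarse-grained tasks across local worker processes and reporting parallel efficiency as speedup divided by the number of workers. The task and workload sizes below are assumptions chosen only to keep overheads small relative to compute time.

```python
import time
from multiprocessing import Pool, cpu_count

# Toy illustration of the quoted efficiency figure (not the paper's P2P system):
# run coarse-grained tasks serially and in parallel, then report
# efficiency = speedup / number of workers.

def coarse_task(n):
    """A deliberately CPU-heavy task so distribution overhead stays small."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(workers, tasks):
    start = time.perf_counter()
    if workers == 1:
        results = [coarse_task(t) for t in tasks]
    else:
        with Pool(workers) as pool:
            results = pool.map(coarse_task, tasks)
    return time.perf_counter() - start, results

if __name__ == "__main__":
    tasks = [2_000_000] * 16                  # arbitrary coarse-grained workload
    serial_time, _ = timed(1, tasks)
    workers = cpu_count()
    parallel_time, _ = timed(workers, tasks)
    speedup = serial_time / parallel_time
    efficiency = speedup / workers
    print(f"workers={workers} speedup={speedup:.2f} efficiency={efficiency:.0%}")
```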