    Coherent, automatic address resolution for vehicular ad hoc networks

    Published in: Int. J. of Ad Hoc and Ubiquitous Computing, 2017, Vol. 25, No. 3, pp. 163-179. DOI: 10.1504/IJAHUC.2017.10001935
    Interest in vehicular communications has increased notably. In this paper, the use of address resolution (AR) procedures is studied for vehicular ad hoc networks (VANETs). We analyse the poor performance of AR transactions in such networks and present a new proposal called coherent, automatic address resolution (CAAR). Our approach inhibits the use of AR transactions and instead increases the usefulness of routing signalling to automatically match IP and MAC addresses. Through extensive simulations in realistic VANET scenarios using the EstiNet simulator, we compare CAAR to classical AR and to another of our proposals, called AR+, which enhances AR for mobile wireless networks. In addition, we present a performance evaluation of the behaviour of CAAR, AR and AR+ with unicast traffic of a reporting service for VANETs. Results show that CAAR outperforms the other two solutions in terms of packet losses and, furthermore, does not introduce additional overhead.
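    To make the CAAR idea concrete, here is a minimal sketch in Python, assuming (as the abstract implies but does not specify) that routing beacons carry the sender's IP and MAC addresses; the class name, fields and timeout value are hypothetical, not the paper's implementation:

    import time

    class CaarNeighborCache:
        """Hypothetical IP -> MAC cache fed by routing signalling (not the paper's code)."""

        def __init__(self, ttl_s: float = 5.0):
            self.ttl_s = ttl_s                            # assumed binding lifetime
            self._table: dict[str, tuple[str, float]] = {}

        def on_routing_beacon(self, ip: str, mac: str) -> None:
            # Routing messages already identify their sender, so recording the
            # (IP, MAC) pair here makes a separate AR transaction redundant.
            self._table[ip] = (mac, time.monotonic())

        def resolve(self, ip: str) -> "str | None":
            entry = self._table.get(ip)
            if entry is None:
                return None   # unknown: classical AR would now broadcast a request
            mac, last_seen = entry
            if time.monotonic() - last_seen > self.ttl_s:
                del self._table[ip]   # stale binding, e.g. the vehicle drove away
                return None
            return mac

    Because the cache is refreshed by messages the routing layer sends anyway, no request/reply packets are added, which is consistent with the no-extra-overhead result reported above.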

    Just below the surface: developing knowledge management systems using the paradigm of the noetic prism

    In this paper we examine how the principles embodied in the paradigm of the noetic prism can illuminate the construction of knowledge management systems. We draw on the formalism of the prism to examine three successful tools: frames, spreadsheets and databases. We show how their power, and also their shortcomings, arise from their domain representation, and how any organisational system based on integrating these tools and converting between them is inevitably lossy. We suggest how a late-binding, hybrid knowledge-based management system (KBMS) could be designed that draws on the lessons learnt from these tools, by maintaining noetica at an atomic level and storing the combinatory processes necessary to create higher-level structure as the need arises. We outline the “just-below-the-surface” system design, and describe its implementation in an enterprise-wide knowledge-based system that has all of the conventional office automation features.
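    As a rough illustration of the late-binding idea, the sketch below (plain Python, with entirely hypothetical names) stores atomic noetica flat and applies stored combinatory functions only when a higher-level structure is requested:

    from typing import Any, Callable

    class NoeticStore:
        """Hypothetical late-binding store: atomic items plus stored view builders."""

        def __init__(self) -> None:
            self.noetica: list = []                     # atomic knowledge items
            self.combinators: dict[str, Callable] = {}  # late-bound structure builders

        def add(self, item: dict) -> None:
            self.noetica.append(item)

        def register(self, name: str, build: Callable) -> None:
            self.combinators[name] = build

        def view(self, name: str) -> Any:
            # Higher-level structure is created only at request time, so no lossy
            # conversion between frame-, spreadsheet- or database-shaped stores occurs.
            return self.combinators[name](self.noetica)

    store = NoeticStore()
    store.add({"subject": "pump-3", "attribute": "status", "value": "failed"})
    store.register("by_subject", lambda items: {n["subject"]: n for n in items})
    print(store.view("by_subject"))   # structure built on demand, not stored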

    Colour Text Segmentation in Web Images Based on Human Perception

    There is a significant need to extract and analyse the text in images on Web documents, for effective indexing, semantic analysis and even presentation by non-visual means (e.g., audio). This paper argues that the challenging segmentation stage for such images benefits from a human perspective of colour perception in preference to RGB colour space analysis. The proposed approach enables the segmentation of text in complex situations, such as in the presence of varying colour and texture in both characters and background. More precisely, characters are segmented as distinct regions with separate chromaticity and/or lightness by performing a layer decomposition of the image. The method described here is a result of the authors’ systematic approach to approximating human colour perception characteristics for the identification of character regions. In this instance, the image is decomposed by performing histogram analysis of hue and lightness in the HLS colour space, and layers are merged using information on human discrimination of wavelength and luminance.
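    An illustrative sketch (not the authors' code) of this kind of HLS layer decomposition in Python; the bin counts standing in for human wavelength and luminance discrimination thresholds are assumed values:

    import colorsys
    from collections import defaultdict

    HUE_BINS = 24     # assumed stand-in for perceptual hue discrimination
    LIGHT_BINS = 16   # assumed stand-in for perceptual lightness discrimination

    def decompose_layers(pixels):
        """pixels: iterable of (r, g, b) tuples, components in 0..255.
        Returns {(hue_bin, lightness_bin): [pixel indices]}, one entry per layer."""
        layers = defaultdict(list)
        for i, (r, g, b) in enumerate(pixels):
            h, l, _s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
            key = (min(int(h * HUE_BINS), HUE_BINS - 1),
                   min(int(l * LIGHT_BINS), LIGHT_BINS - 1))
            layers[key].append(i)
        return layers

    # Pixels falling into the same chromaticity/lightness layer would then be
    # merged into connected regions and tested as character candidates.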

    Doctor of Philosophy

    Detailed clinical models (DCMs) are the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. DCMs are also the basis for shared computable meaning when clinical data are referenced in decision support logic, and they provide a basis for data consistency in a longitudinal electronic medical record. Intermountain Healthcare has a long history in the design and evolution of these models, beginning with PAL (PTXT Application Language) and then the Clinical Event Model, which was developed in partnership with 3M. After the partnership between Intermountain and 3M dissolved, Intermountain decided to design a next-generation architecture for DCMs. The aim of this research is to develop a detailed clinical model architecture that meets the needs of Intermountain Healthcare and other healthcare organizations. The approach was as follows:
    1. An updated version of the Clinical Event Model was created using XML Schema as a formalism to describe models.
    2. In response to problems with XML Schema, the Clinical Element Model was designed and created using the Clinical Element Modeling Language as a formalism to describe models.
    3. To verify that our model met the needs of Intermountain Healthcare and others, a set of desiderata for detailed clinical models was developed.
    4. The Clinical Element Model was then critiqued using the desiderata as a guide, and suggestions for further refinements to the Clinical Element Model are described.
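    For readers unfamiliar with detailed clinical models, here is a much-simplified, hypothetical illustration in Python of what a model instance might carry: a coded key, a typed value with units, and coded qualifiers that refine its meaning. This is not CEML or the actual Clinical Element Model schema:

    from dataclasses import dataclass, field

    @dataclass
    class Qualifier:
        code: str    # e.g. a code meaning "body position"
        value: str   # e.g. a code meaning "sitting"

    @dataclass
    class ClinicalElement:
        key: str     # coded identifier of the model, e.g. "SystolicBP"
        value: float
        units: str
        qualifiers: list = field(default_factory=list)

    bp = ClinicalElement(key="SystolicBP", value=120.0, units="mmHg",
                         qualifiers=[Qualifier("BodyPosition", "Sitting")])

    Constraining data to such shared structures is what lets decision support logic and longitudinal records reference the same value unambiguously across systems.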

    The Family of MapReduce and Large Scale Data Processing Systems

    In the last two decades, the continuous increase of computational power has produced an overwhelming flow of data, which has called for a paradigm shift in computing architectures and large-scale data processing mechanisms. MapReduce is a simple and powerful programming model that enables the easy development of scalable parallel applications to process vast amounts of data on large clusters of commodity machines. It isolates the application from the details of running a distributed program, such as data distribution, scheduling and fault tolerance. However, the original implementation of the MapReduce framework had some limitations that have been tackled by many research efforts in follow-up work since its introduction. This article provides a comprehensive survey of a family of approaches and mechanisms for large-scale data processing that have been implemented based on the original idea of the MapReduce framework and are currently gaining a lot of momentum in both the research and industrial communities. We also cover a set of systems that have been introduced to provide declarative programming interfaces on top of the MapReduce framework. In addition, we review several large-scale data processing systems that resemble some of the ideas of the MapReduce framework for different purposes and application scenarios. Finally, we discuss some of the future research directions for implementing the next generation of MapReduce-like solutions.
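    The programming model itself is compact enough to sketch. The canonical word-count example below, written as a single-process Python stand-in, shows the user-supplied map and reduce functions and the shuffle a real framework performs, distributed and fault-tolerantly, between them:

    from collections import defaultdict

    def map_fn(document: str):
        for word in document.split():
            yield word, 1                 # emit (key, value) pairs

    def reduce_fn(word: str, counts: list):
        yield word, sum(counts)           # aggregate all values for one key

    def run_mapreduce(documents):
        groups = defaultdict(list)        # single-process stand-in for the shuffle
        for doc in documents:
            for key, value in map_fn(doc):
                groups[key].append(value)
        return dict(kv for key, values in groups.items()
                    for kv in reduce_fn(key, values))

    print(run_mapreduce(["to be or not to be"]))
    # {'to': 2, 'be': 2, 'or': 1, 'not': 1}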

    Reconfigurable Instruction Cell Architecture Reconfiguration and Interconnects


    REAL-TIME EMBEDDED SYSTEM OF MULTI-TASK CNN FOR ADVANCED DRIVING ASSISTANCE

    In this research, we've engineered a real-time embedded system for advanced driving assistance. Our approach involves employing a multi-task Convolutional Neural Network (CNN) capable of simultaneously executing three tasks: object detection, semantic segmentation, and disparity estimation. Confronted with the limitations of edge computing, we've streamlined resource usage by sharing a common encoder and decoder among these tasks. To enhance computational efficiency, we've opted for a blend of depth-wise separable convolution and bilinear interpolation, departing from the conventional transposed convolution. This strategic change reduced the multiply-accumulate operations to 23.3% and the convolution parameters to 16.7% of their original counts. Our experimental findings demonstrate that the decoder's complexity reduction not only avoids compromising recognition accuracy but, in fact, enhances it. Furthermore, we've embraced a semi-supervised learning approach to heighten network accuracy when the network is deployed in a target domain divergent from the source domain used during training. Specifically, we've employed manually crafted ground truth only for object detection to train the whole network for optimal performance in the target domain. For the foreground object categories, we generate pseudo-labels for semantic segmentation by employing bounding boxes from object detection and iteratively refining them. Conversely, for the background categories, we rely on the initial inference outcomes as pseudo-labels, abstaining from further adjustments. This method enables semantic segmentation of object classes with widely different appearances, because it conveys each object's rough position, size, and shape to the segmentation task. Our experimental results substantiate that incorporating this semi-supervised learning technique leads to improvements in both object detection and semantic segmentation accuracy. We implemented this multi-task CNN on an embedded Graphics Processing Unit (GPU) board, added multi-object tracking functionality, and achieved a throughput of 18 fps with 26 W power consumption.
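    A sketch of the decoder substitution described above, written in PyTorch as an assumed framework; the class name, channel sizes and block structure are illustrative, not the paper's architecture:

    import torch
    import torch.nn as nn

    class LightDecoderBlock(nn.Module):
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__()
            # bilinear interpolation replaces the learned transposed convolution
            self.upsample = nn.Upsample(scale_factor=2, mode="bilinear",
                                        align_corners=False)
            # depthwise: one 3x3 spatial filter per channel
            self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
            # pointwise: 1x1 convolution mixes channels
            self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.act(self.pointwise(self.depthwise(self.upsample(x))))

    # For a 3x3 kernel with C input and C output channels, the separable pair
    # costs 9*C + C*C weights versus 9*C*C for a standard (or transposed)
    # convolution, which is where reductions of the reported magnitude come from.
    out = LightDecoderBlock(64, 32)(torch.randn(1, 64, 32, 32))
    print(out.shape)   # torch.Size([1, 32, 64, 64])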