514 research outputs found

    Modeling the Behavior of Multipath Components Pertinent to Indoor Geolocation

    Recently, a number of empirical models have been introduced in the literature for the behavior of the direct path used in the design of algorithms for RF-based indoor geolocation. The frequent absence of the direct path has been a major burden on the performance of these algorithms, directing researchers to discover algorithms using multipath diversity. However, there is no reliable model for the behavior of multipath components pertinent to precise indoor geolocation. In this dissertation, we first examine the absence of the direct path by statistical analysis of empirical data. Then we show how the concept of path persistency can be exploited to obtain accurate ranging using multipath diversity. We analyze the effects of building architecture on the multipath structure by demonstrating the effects of wall length and wall density on path persistency. Finally, we introduce a comprehensive model for the spatial behavior of multipath components. We use statistical analysis of empirical data obtained by a measurement-calibrated ray-tracing tool to model the time-of-arrival, angle-of-arrival and path gains. The relationship between the transmitter-receiver separation and the number of paths is also incorporated in our model. In addition, principles of ray optics are applied to explain the spatial evolution of path gains, time-of-arrival and angle-of-arrival of individual multipath components as a mobile terminal moves inside a typical indoor environment. We also use statistical modeling for the persistency and birth/death rate of the paths.
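The birth/death behavior of multipath components described above can be illustrated with a toy simulation: as a terminal moves, existing paths occasionally vanish and new paths appear. The rates below are purely hypothetical placeholders, not values from the dissertation's measurements.

```python
import random

def simulate_path_count(steps, birth_rate=0.3, death_prob=0.1, seed=42):
    """Toy birth/death simulation of multipath components along a route.

    birth_rate: per-step probability that a new path appears (hypothetical).
    death_prob: per-step probability that each live path disappears.
    Returns the number of live paths observed at each spatial step.
    """
    rng = random.Random(seed)
    paths = []   # each entry represents one currently-live multipath component
    counts = []
    for _ in range(steps):
        # deaths: each live path vanishes independently with death_prob
        paths = [p for p in paths if rng.random() > death_prob]
        # births: a Bernoulli trial per step approximates sparse arrivals
        if rng.random() < birth_rate:
            paths.append(object())
        counts.append(len(paths))
    return counts

counts = simulate_path_count(200)
```

A real model would tie both rates to the geometry (wall length, wall density) rather than treating them as constants; this sketch only shows the bookkeeping.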

    Sensitivity Analysis for Measurements of Multipath Parameters Pertinent to TOA based Indoor Geolocation

    Recently, indoor geolocation technologies have been attracting tremendous attention. For indoor environments, the fine time resolution of ultra-wideband (UWB) signals enables the potential of accurate distance measurement of the direct path (DP) between a number of reference sources and the people or assets of interest. However, once the DP is not available or is shadowed, substantial errors will be introduced into the ranging measurements, leading to large localization errors when measurements are combined from multiple sources. The measurement accuracy in undetected direct path (UDP) conditions can be improved in some cases by exploiting the geolocation information contained in the indirect path measurements. Therefore, the dynamic spatial behavior of paths is an important issue for positioning techniques based on the TOA of indirect paths. The objectives of this thesis are twofold. The first is to analyze the sensitivity of TOA estimation techniques based on the TOA of the direct path. We studied the effect of distance, bandwidth and multipath environment on the accuracy of various TOA estimation techniques. The second is to study the sensitivity of multipath parameters pertinent to TOA estimation techniques based on the TOA of the indirect paths. We mainly looked into the effect of distance, bandwidth, threshold for picking paths, and multipath environment on the number of multipath components (MPCs) and path persistency. Our results are based on data from a new measurement campaign conducted on the 3rd floor of AK laboratory. For the TOA estimation techniques based on the DP, the line of sight (LOS) scenario provides the greatest accuracy, and these TOA estimation techniques are most sensitive to bandwidth availability in the obstructed line of sight (OLOS) scenario. All the TOA estimation algorithms perform poorly in the UDP scenario, although the use of higher bandwidth can reduce the ranging error to some extent. Based on our processed results, we propose guidelines for selecting the appropriate TOA estimation technique under given constraints. The sensitivity study of multipath parameters pertinent to indirect-path-based TOA estimation techniques shows that the number of MPCs is very sensitive to the threshold for picking paths and to the noise threshold. It generally decreases as the distance increases, while larger bandwidth always resolves more MPCs. The multipath components behave more persistently in LOS and OLOS scenarios than in UDP scenarios, and the use of larger bandwidth and a higher threshold for picking paths also results in more persistent paths.
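The threshold-based path picking that the sensitivity study revolves around can be sketched as a first-crossing TOA estimator on a channel impulse response: the first sample whose power exceeds a threshold relative to the strongest peak is taken as the direct-path arrival. The -20 dB threshold and 1 ns sample period below are illustrative assumptions, not values from the measurement campaign.

```python
def estimate_toa(cir, threshold_db=-20.0, sample_period_ns=1.0):
    """First-crossing TOA estimate from a sampled channel impulse response.

    cir: sequence of (possibly signed) amplitude samples.
    threshold_db: detection threshold relative to the strongest sample
                  (hypothetical default).
    Returns the TOA in nanoseconds, or None if nothing crosses the threshold.
    """
    peak = max(abs(x) for x in cir)
    if peak == 0:
        return None
    thresh = peak * 10 ** (threshold_db / 20.0)
    for i, x in enumerate(cir):
        if abs(x) >= thresh:
            return i * sample_period_ns
    return None

# In a UDP scenario the true direct path may sit below the threshold,
# so the estimator locks onto a later (indirect) path and overestimates range.
toa = estimate_toa([0.0, 0.01, 0.5, 1.0, 0.3])
```

Raising the threshold suppresses weak MPCs (fewer detected paths), which is exactly the sensitivity the abstract reports.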

    Characterization and surface reconstruction of objects in tomographic images of composite materials

    Dissertation submitted for the degree of Master in Computer Engineering. In the scope of the Tomo-GPU project, supported by FCT/MCTES, the aim is to build an interactive graphical environment that allows a Materials specialist to define their own programs for the analysis of 3D tomographic images. This project aims to build a tool to characterize and investigate the identified objects, where the user can define search criteria such as size, orientation, and bounding boxes, among others. All this processing will be done on a desktop computer equipped with a graphics card with some processing power. In the proposed solution, the modules for characterizing objects received from the identification phase will be implemented using existing software libraries, most notably the CGAL library. The characterization modules with the longest execution times will be implemented using OpenCL and GPUs. With this work, the characterization and reconstruction of objects and their analysis can now be done on conventional machines, using GPUs to accelerate the most time-consuming computations. After the conclusion of this thesis, new tools that will help improve the current development cycle of new materials will be available to Materials Science specialists.
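One of the characterization criteria mentioned above, axis-aligned bounding boxes of identified objects, can be sketched in a few lines over a labeled 3D volume. This is a minimal CPU sketch of the idea only; the thesis targets CGAL and OpenCL/GPU implementations, and the nested-list volume format here is an assumption for illustration.

```python
def bounding_boxes(volume):
    """Axis-aligned bounding boxes per nonzero label in a labeled 3D volume.

    volume: nested lists indexed as volume[z][y][x]; 0 means background,
            any other integer is an object label (from the identification phase).
    Returns {label: ((zmin, ymin, xmin), (zmax, ymax, xmax))}.
    """
    boxes = {}
    for z, plane in enumerate(volume):
        for y, row in enumerate(plane):
            for x, label in enumerate(row):
                if label == 0:
                    continue
                if label not in boxes:
                    boxes[label] = ((z, y, x), (z, y, x))
                else:
                    (zmin, ymin, xmin), (zmax, ymax, xmax) = boxes[label]
                    boxes[label] = (
                        (min(zmin, z), min(ymin, y), min(xmin, x)),
                        (max(zmax, z), max(ymax, y), max(xmax, x)),
                    )
    return boxes

# A 2x2x2 volume with one object touching opposite corners:
boxes = bounding_boxes([[[1, 0], [0, 0]], [[0, 0], [0, 1]]])
```

Size and orientation criteria would be computed similarly per label (voxel counts, principal axes), which is where GPU offload pays off on large tomograms.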

    Leveraging Kubernetes in Edge-Native Cable Access Convergence

    Public clouds provide infrastructure services and deployment frameworks for modern cloud-native applications. As the cloud-native paradigm has matured, containerization, orchestration and Kubernetes have become its fundamental building blocks. For the next step of cloud-native, interest in extending it to edge computing is emerging. The primary reasons for this are low-latency use cases and the desire for uniformity in the cloud-edge continuum. Cable access networks, as a specialized type of edge network, are no exception. As the cable industry transitions to distributed architectures and plans the next steps to virtualize its on-premises network functions, there are opportunities to achieve synergy advantages from the convergence of access technologies and services. Distributed cable networks deploy resource-constrained devices like RPDs and RMDs deep in the edge networks. These devices can be redesigned to support more than one access technology and to provide computing services for other edge tenants with MEC-like architectures. Both of these cases benefit from virtualization. It is here that cable access convergence and the cloud-native transition to edge-native intersect. However, adapting cloud-native in the edge presents a challenge, since cloud-native container runtimes and native Kubernetes are not optimal solutions in diverse edge environments. Therefore, this thesis takes as its goal to describe the current landscape of lightweight cloud-native runtimes and tools targeting the edge. While edge-native as a concept is taking its first steps, tools like KubeEdge, K3s and Virtual Kubelet can be seen as the most mature reference projects for edge-compatible solution types. Furthermore, as the container runtimes are not yet fully edge-ready, WebAssembly seems like a promising alternative runtime for lightweight, portable and secure Kubernetes-compatible workloads.

    Reverse Engineering Heterogeneous Applications

    Nowadays, a large majority of software systems are built using various technologies that in turn rely on different languages (e.g., Java, XML, SQL). We call such systems heterogeneous applications (HAs). By contrast, we call software systems that are written in one language homogeneous applications. In HAs, the information regarding the structure and the behaviour of the system is spread across various components and languages, and the interactions between different application elements can be hidden. In this context, applying existing reverse engineering and quality assurance techniques developed for homogeneous applications is not enough. These techniques were created to measure quality or provide information about one aspect of the system, and they cannot grasp the complexity of HAs. In this dissertation we present our approach to support the analysis and evolution of HAs based on: (1) a unified first-class description of HAs and (2) a meta-model that reifies the concept of horizontal and vertical dependencies between application elements at different levels of abstraction. We implemented our approach in two tools, MooseEE and Carrack. The first is an extension of the Moose platform for software and data analysis and contains our unified meta-model for HAs. The latter is an engine to infer derived dependencies that can support the analysis of associations among the heterogeneous elements composing HAs. We validate our approach and tools by case studies on industrial and open-source JEAs, which demonstrate how we can handle the complexity of such applications and how we can solve problems deriving from their heterogeneous nature.
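The idea of inferring derived dependencies from explicit ones can be sketched as reachability over a labeled dependency graph: an element depends transitively on everything reachable through horizontal (same-level) and vertical (cross-level) edges. The element names and edge labels below are hypothetical illustrations, not the actual MooseEE/Carrack meta-model.

```python
from collections import defaultdict

def derived_dependencies(edges, source):
    """Infer derived (transitive) dependencies from explicit ones.

    edges: list of (src, dst, kind) triples, where kind is "horizontal"
           or "vertical" (the kind is carried for modeling purposes but
           both contribute to reachability in this sketch).
    Returns the set of all elements `source` transitively depends on.
    """
    graph = defaultdict(list)
    for src, dst, _kind in edges:
        graph[src].append(dst)
    seen, stack = set(), [source]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Hypothetical Java/XML/SQL chain in a heterogeneous application:
deps = derived_dependencies(
    [("Servlet", "web.xml", "vertical"),
     ("web.xml", "JSP", "horizontal"),
     ("JSP", "SQLQuery", "vertical")],
    "Servlet",
)
```

The point of reifying the edges as first-class entities is that queries like this one can mix levels of abstraction that a single-language analyzer would never connect.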

    Machine-Learning-Powered Cyber-Physical Systems

    In the last few years, we witnessed the revolution of the Internet of Things (IoT) paradigm and the consequent growth of Cyber-Physical Systems (CPSs). IoT devices, which include a plethora of smart interconnected sensors, actuators, and microcontrollers, have the ability to sense physical phenomena occurring in an environment and provide copious amounts of heterogeneous data about the functioning of a system. As a consequence, the large amounts of generated data represent an opportunity to adopt artificial intelligence and machine learning techniques that can be used to make informed decisions aimed at the optimization of such systems, thus enabling a variety of services and applications across multiple domains. Machine learning processes and analyses such data to generate feedback, which represents the status the environment is in. Feedback given to the user in order to make an informed decision is called open-loop feedback. Thus, an open-loop CPS is characterized by the lack of an actuation directed at improving the system itself. Feedback used by the system itself to actuate a change aimed at optimizing the system is called closed-loop feedback. Thus, a closed-loop CPS pairs feedback based on sensing data with an actuation that impacts the system directly. In this dissertation, we propose several applications in the context of CPSs. We propose open-loop CPSs designed for the early prediction, diagnosis, and persistency detection of Bovine Respiratory Disease (BRD) in dairy calves, and for gait activity recognition in horses. These works use sensor data, such as pedometers and automated feeders, to perform valuable real-field data collection. Data are then processed by a mix of state-of-the-art approaches as well as novel techniques, before being fed to machine learning algorithms for classification, which informs the user on the status of their animals. Our work further evaluates a variety of trade-offs.
In the context of BRD, we adopt optimization techniques to explore the trade-offs of using sensor data as opposed to manual examination performed by domain experts. Similarly, we carry out an extensive analysis of the cost-accuracy trade-offs, which farmers can use to make informed decisions on their barn investments. In the context of horse gait recognition, we evaluate the benefits of lighter classification algorithms for energy and storage usage, and their impact on classification accuracy. With respect to closed-loop CPSs, we propose an incentive-based demand response approach for Heating, Ventilation and Air Conditioning (HVAC) designed for peak load reduction in the context of smart grids. Specifically, our approach uses machine learning to process power data from smart thermostats deployed in user homes, along with their personal temperature preferences. Our machine learning models predict power savings due to thermostat changes, which are then plugged into our optimization problem that uses auction theory coupled with behavioral science. This framework selects the set of users who fulfill the power saving requirement, while minimizing financial incentives paid to the users and, as a consequence, their discomfort. Our work on BRD has been published in IEEE DCOSS 2022 and Frontiers in Animal Science. Our work on gait recognition has been published in IEEE SMARTCOMP 2019 and Elsevier PMC 2020, and our work on energy management and energy prediction has been published in IEEE PerCom 2022 and IEEE SMARTCOMP 2022. Several other works were under submission at the time this thesis was written, and are included in this document as well.
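The user-selection step of the demand-response framework can be illustrated with a greedy sketch: rank users by incentive cost per predicted kW saved and pick the cheapest until the target is met. The actual framework uses auction theory and behavioral science; this is only a simplified stand-in with hypothetical numbers to show the shape of the optimization.

```python
def select_users(users, target_savings):
    """Greedy selection of users for peak load reduction.

    users: list of (user_id, predicted_saving_kw, incentive_cost) tuples,
           where predicted_saving_kw comes from the ML power-saving model
           (all values here are hypothetical).
    Returns (selected_ids, total_saving_kw, total_cost) meeting the target,
    or None if the target is infeasible even with every user selected.
    """
    ranked = sorted(users, key=lambda u: u[2] / u[1])  # cost per kW saved
    chosen, saving, cost = [], 0.0, 0.0
    for uid, s, c in ranked:
        if saving >= target_savings:
            break
        chosen.append(uid)
        saving += s
        cost += c
    if saving < target_savings:
        return None
    return chosen, saving, cost

users = [("a", 2.0, 1.0), ("b", 1.0, 2.0), ("c", 3.0, 1.5)]
result = select_users(users, target_savings=4.0)
```

A greedy ratio rule is not optimal in general (this is a covering problem), which is one reason the thesis turns to auction-based mechanisms that also account for user discomfort.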

    Database and System Design for Emerging Storage Technologies

    Emerging storage technologies offer an alternative to disk that is durable and allows faster data access. Flash memory, made popular by mobile devices, provides block access with low-latency random reads. New nonvolatile memories (NVRAM) are expected in upcoming years, presenting DRAM-like performance alongside persistent storage. Whereas both technologies accelerate data accesses due to increased raw speed, used merely as disk replacements they may fail to achieve their full potential. Flash's asymmetric read/write access (i.e., reads execute faster than writes) opens new opportunities to optimize Flash-specific access. Similarly, NVRAM's low-latency persistent accesses allow new designs for high-performance failure-resistant applications. This dissertation addresses software and hardware system design for such storage technologies. First, I investigate analytics query optimization for Flash, expecting Flash's fast random access to require new query planning. While intuition suggests scan and join selection should shift between disk and Flash, I find that query plans chosen assuming disk are already near-optimal for Flash. Second, I examine new opportunities for durable, recoverable transaction processing with NVRAM. Existing disk-based recovery mechanisms impose large software overheads, yet updating data in place requires frequent device synchronization that limits throughput. I introduce a new design, NVRAM Group Commit, to amortize synchronization delays over many transactions, increasing throughput at some cost to transaction latency. Finally, I propose a new framework for persistent programming and memory systems to enable high-performance recoverable data structures with NVRAM, extending memory consistency with persistent semantics to introduce memory persistency.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/107114/1/spelley_1.pd
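The amortization idea behind group commit can be sketched in a few lines: buffer committed transactions and pay the expensive persist barrier once per batch rather than once per transaction. This is a generic sketch of the technique, not the dissertation's NVRAM Group Commit design; `persist` stands in for whatever device synchronization primitive the hardware provides, and the batch size is arbitrary.

```python
class GroupCommit:
    """Toy group commit: one persist barrier covers a whole batch of
    transactions, trading added commit latency for fewer synchronizations."""

    def __init__(self, batch_size, persist):
        self.batch_size = batch_size
        self.persist = persist      # callable standing in for a device sync
        self.pending = []
        self.syncs = 0              # how many barriers we actually paid for

    def commit(self, txn):
        self.pending.append(txn)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.persist(self.pending)   # one barrier for the whole batch
            self.syncs += 1
            self.pending = []

# 10 transactions with batch_size=4 cost only 3 barriers instead of 10:
log = []
gc = GroupCommit(batch_size=4, persist=log.append)
for t in range(10):
    gc.commit(f"txn-{t}")
gc.flush()
```

The latency cost is visible in the sketch too: a transaction is not durable until its batch flushes, which is exactly the throughput/latency trade-off the abstract names.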