332 research outputs found

    Load Balancing Hotspots in Sensor Storage Systems

    Sensor networks provide us with the means of effectively monitoring and interacting with the physical world. A sensor network usually consists of a large number of small, inexpensive, battery-operated sensors deployed in a geographic area. This dissertation considers a sensor network deployed to monitor a disaster area, where first responders continuously issue ad-hoc queries while moving through the area. In such an environment, it is often more beneficial to store sensor readings and process ad-hoc queries within the sensor network rather than outside it. This has recently led to the increased popularity of Data-Centric Storage (DCS). A DCS scheme is built on a function that maps each reading to a sensor based on the reading's attribute values; this mapping function defines the DCS index structure. Two significant problems arising in this DCS network model due to data and traffic skewness are storage hotspots and query hotspots. Storage hotspots form when many sensor readings are mapped for storage to a relatively small number of sensor nodes; query hotspots occur when many user queries target a few sensor nodes. Both types of hotspots are hard to predict. Storage hotspots result in uncontrolled reading shedding that decreases the Quality of Data (QoD). Because of the limited wireless bandwidth of sensors, hotspots further decrease QoD by increasing collisions (and thus losses) of reading and query packets. When they last long enough, hotspots also affect the Quality of Service (QoS) by unevenly depleting energy in the sensor network. This dissertation addresses both hotspot problems through load balancing. The main hypothesis is that data migration resulting from local or global load balancing of the DCS index structure can effectively solve the hotspot problems. The contributions lie in developing two schemes: the Zone Sharing/Zone Partitioning/Zone Partial Replication (ZS/ZP/ZPR) scheme and the K-D tree based Data-Centric Storage (KDDCS) scheme. ZS/ZP/ZPR detects and decomposes both types of hotspots through load balancing in the hotspot area, while KDDCS avoids the formation of hotspots by globally load-balancing the underlying DCS index structure. Experimental evaluation shows the effectiveness of the proposed schemes in coping with hotspots compared to state-of-the-art DCS schemes.
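
    To make the hotspot mechanism concrete, the sketch below shows a generic attribute-hashed DCS mapping of the kind the abstract describes; it is not the dissertation's ZS/ZP/ZPR or KDDCS scheme, and the node list, bin width, and attribute name are assumptions for illustration only.

        # A minimal attribute-to-node DCS mapping: readings with similar attribute
        # values land on the same storage node, so a skewed value distribution
        # concentrates load on few nodes -- exactly how storage hotspots form.
        import hashlib

        NODES = [f"sensor_{i}" for i in range(64)]   # hypothetical sensor population

        def dcs_map(attribute: str, value: float, bin_width: float = 5.0) -> str:
            """Map a reading to a storage node from its attribute value."""
            bucket = int(value // bin_width)
            digest = hashlib.sha1(f"{attribute}:{bucket}".encode()).hexdigest()
            return NODES[int(digest, 16) % len(NODES)]

        # Many readings near 21-24 degrees all map to the same node:
        print({dcs_map("temperature", t) for t in (21.0, 22.5, 23.9)})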

    Data gathering techniques on wireless sensor networks

    The nearly exponential growth of the performance/price and performance/size ratios of computers has given rise to the development of inexpensive, miniaturized systems with wireless and sensing capabilities. Such wireless sensors can produce a wealth of information about our personal environment, in agricultural and industrial monitoring, and in many other scenarios. Because of its miniature nature, each sensor has severe resource constraints in terms of processing power, storage space, battery capacity, and radio bandwidth. Our goal in this research is to maximize the extraction of information from the sensor network through efficient resource utilization.

    Unified Role Assignment Framework For Wireless Sensor Networks

    Wireless sensor networks are made possible by the continuing improvements in embedded sensor, VLSI, and wireless radio technologies. Currently, one of the important challenges in sensor networks is the design of a systematic network management framework that allows localized and collaborative resource control uniformly across all application services such as sensing, monitoring, tracking, data aggregation, and routing. Research in wireless sensor networks is currently oriented toward a cross-layer network abstraction that supports appropriate fine- or coarse-grained resource controls for energy efficiency. In that regard, we have designed a unified role-based service paradigm for wireless sensor networks. We pursue this by first developing a Role-Based Hierarchical Self-Organization (RBHSO) protocol that organizes a connected dominating set (CDS) of nodes called dominators. This is done by hierarchically selecting nodes that possess cumulatively high energy, connectivity, and sensing capabilities in their local neighborhood. The RBHSO protocol then assigns specific tasks such as sensing, coordination, and routing to appropriate dominators, which end up playing a certain role in the network. Roles, though abstract and implicit, expose role-specific resource controls by way of role assignment and scheduling. Based on this concept, we have designed a Unified Role-Assignment Framework (URAF) to model application services as roles played by local in-network sensor nodes, with sensor capabilities used as rules for role identification. The URAF abstracts domain-specific role attributes by three models: the role energy model, the role execution time model, and the role service utility model. The framework then generalizes resource management for services by providing abstractions for controlling the composition of a service in terms of roles, their assignment, reassignment, and scheduling. To the best of our knowledge, a generic role-based framework that provides a simple and unified network management solution for wireless sensor networks has not been proposed previously.
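
    The sketch below illustrates the general idea of electing well-endowed nodes as dominators by scoring energy, connectivity, and sensing capability; it is a greedy dominating-set toy rather than the RBHSO protocol itself (which builds a connected dominating set), and the node attributes and weights are assumptions.

        # Greedy election of dominators by a cumulative fitness score.
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            node_id: int
            energy: float                 # residual battery, 0..1
            sensing: float                # sensing capability score, 0..1
            neighbors: set = field(default_factory=set)   # ids within radio range

        def fitness(n: Node) -> float:
            # Hypothetical weighting of energy, connectivity, and sensing.
            return 0.5 * n.energy + 0.3 * min(len(n.neighbors) / 10.0, 1.0) + 0.2 * n.sensing

        def elect_dominators(nodes: dict) -> set:
            """A node becomes a dominator unless a higher-ranked node already covers it."""
            dominators, covered = set(), set()
            for n in sorted(nodes.values(), key=fitness, reverse=True):
                if n.node_id not in covered:
                    dominators.add(n.node_id)
                    covered |= n.neighbors | {n.node_id}
            return dominators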

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to prove that aspiration followed by the injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that could generate different recovery rates for the patients. Therefore, this study adopts statistical methods and decision tree techniques together to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since our collected data set is small, containing only 212 records, we use all of these data as the training data. Therefore, instead of using the resulting tree to generate rules directly, we first use the value of each node as a cut point to generate all possible rules from the tree, and then use the t-test to verify these rules and discover useful descriptive rules. Experimental results show that our approach can find new and interesting knowledge about recurrent ovarian endometriomas under different conditions.
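
    A hedged sketch of that rule-mining idea follows: fit a decision tree on the full (small) data set, take the split thresholds at its internal nodes as cut points, and keep only the cut points where a t-test indicates a significant difference in outcome. The column names, tree depth, and significance level are assumptions, not the paper's exact settings.

        # Candidate cut points from a decision tree, verified with Welch's t-test.
        from scipy.stats import ttest_ind
        from sklearn.tree import DecisionTreeClassifier

        def significant_cut_points(X, y, feature_names, alpha=0.05):
            """X: (n_samples, n_features) array, y: numeric outcome (e.g. recurrence 0/1)."""
            tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
            rules = []
            for feat, thr in zip(tree.tree_.feature, tree.tree_.threshold):
                if feat < 0:                                   # leaf nodes carry no split
                    continue
                left, right = y[X[:, feat] <= thr], y[X[:, feat] > thr]
                if len(left) > 1 and len(right) > 1:
                    _, p = ttest_ind(left, right, equal_var=False)
                    if p < alpha:
                        rules.append((feature_names[feat], float(thr), float(p)))
            return rules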

    A Big-Data based and process-oriented decision support system for traffic management

    Data analysis and monitoring of road networks in terms of reliability and performance are valuable but hard to achieve, especially when the analytical information has to be available to decision makers on time. Gathering and analysing the observable facts can be used to infer knowledge about traffic congestion over time and to gain insights into road safety. However, the continuous monitoring of live traffic information produces a vast amount of data that makes it difficult for business intelligence (BI) tools to generate metrics and key performance indicators (KPIs) in nearly real time. To overcome these limitations, we propose the application of a big-data based and process-centric approach that integrates with operational traffic information systems to give insights into the road network's efficiency. This paper demonstrates how the adoption of an existing process-oriented DSS solution with big-data support can be leveraged to monitor and analyse live traffic data with acceptable response times.
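
    As a minimal illustration of computing such a KPI from live traffic data, the sketch below aggregates average speed per road over five-minute windows; the event fields, window size, and KPI choice are assumptions, not the paper's actual process-oriented DSS.

        # Windowed average-speed KPI over a stream of traffic events.
        from collections import defaultdict
        from datetime import datetime, timedelta

        def window_start(ts: datetime) -> datetime:
            """Truncate a timestamp to its 5-minute window boundary."""
            return ts - timedelta(minutes=ts.minute % 5, seconds=ts.second,
                                  microseconds=ts.microsecond)

        def average_speed_kpi(events):
            """events: iterable of dicts like {'ts': datetime, 'road': str, 'speed_kmh': float}.
            Returns average speed per (road, window), a simple congestion indicator."""
            acc = defaultdict(lambda: [0.0, 0])          # (road, window) -> [sum, count]
            for e in events:
                key = (e["road"], window_start(e["ts"]))
                acc[key][0] += e["speed_kmh"]
                acc[key][1] += 1
            return {k: total / n for k, (total, n) in acc.items()}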

    Information discovery in multi-dimensional autonomous wireless sensor networks

    The thesis proposes four novel information-discovery algorithms for Multidimensional Autonomous Wireless Sensor Networks (WSNs) that can significantly increase network lifetime and minimize query-processing latency, resulting in quality-of-service improvements that are of immense benefit to multidimensional autonomous WSNs deployed in complex environments (e.g., mission-critical applications).

    Low-latency, query-driven analytics over voluminous multidimensional, spatiotemporal datasets

    Ubiquitous data collection from sources such as remote sensing equipment, networked observational devices, location-based services, and sales tracking has led to the accumulation of voluminous datasets; IDC projects that by 2020 we will generate 40 zettabytes of data per year, while Gartner and ABI estimate that 20-35 billion new devices will be connected to the Internet in the same time frame. The storage and processing requirements of these datasets far exceed the capabilities of modern computing hardware, which has led to the development of distributed storage frameworks that can scale out by assimilating more computing resources as necessary. While challenging in its own right, storing and managing voluminous datasets is only the precursor to a broader field of study: extracting knowledge, insights, and relationships from the underlying datasets. The basic building block of this knowledge discovery process is analytic queries, encompassing both query instrumentation and evaluation. This dissertation is centered around query-driven exploratory and predictive analytics over voluminous, multidimensional datasets. Both of these types of analysis represent a higher-level abstraction over classical query models; rather than indexing every discrete value for subsequent retrieval, our framework autonomously learns the relationships and interactions between dimensions in the dataset (including time-series and geospatial aspects) and makes the information readily available to users. This functionality includes statistical synopses, correlation analysis, hypothesis testing, probabilistic structures, and predictive models that not only enable the discovery of nuanced relationships between dimensions, but also allow future events and trends to be predicted. This requires specialized data structures and partitioning algorithms, along with adaptive reductions in the search space and management of the inherent trade-off between timeliness and accuracy. The algorithms presented in this dissertation were evaluated empirically on real-world geospatial time-series datasets in a production environment, and are broadly applicable across other storage frameworks.
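
    The sketch below shows the kind of constant-space statistical synopsis such a framework might maintain per spatiotemporal partition, so that exploratory queries can be answered from summaries rather than raw records; Welford's online update is standard, while the partitioning key mentioned in the comment is an assumption.

        # Running mean/variance synopsis for one feature of one partition (Welford's method).
        class Synopsis:
            def __init__(self):
                self.n, self.mean, self.m2 = 0, 0.0, 0.0

            def update(self, x: float) -> None:
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (x - self.mean)

            @property
            def variance(self) -> float:
                return self.m2 / (self.n - 1) if self.n > 1 else 0.0

        # e.g. keep one Synopsis per (geohash prefix, hour-of-day) partition of a
        # temperature feed and answer range queries from the summaries.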

    Machine Learning Methods for Product Quality Monitoring in Electric Resistance Welding

    Electric Resistance Welding (ERW) is a group of fully automated manufacturing processes in which metallic materials are joined by heat generated from electric current and resistance. Accurate quality monitoring of ERW can often only be carried out partially, using destructive methods. There is great industrial and economic potential in developing data-driven approaches to quality monitoring in ERW in order to reduce maintenance costs and improve quality control. Data-driven approaches such as machine learning (ML) have attracted much attention thanks to the enormous amount of data made available by Industry 4.0 technologies. When a sufficient amount of precise data is available, data-driven approaches enable non-destructive, comprehensive, and accurate quality monitoring, making possible comprehensive online quality monitoring that is otherwise extremely difficult with conventional empirical methods. However, many challenges remain in adopting such approaches in the manufacturing industry. These challenges include efficient data collection, which requires knowing how much data and which sensors are needed for successful machine learning; the demanding task of understanding complex processes and multifaceted data; and the careful selection of suitable ML methods and the integration of domain knowledge for predictive quality monitoring with inhomogeneous data structures. Existing ML solutions for ERW do not provide a systematic procedure for method selection: every ML development effort requires comprehensive process and data understanding and is tailored to a specific scenario that is difficult to generalize. Semantic solutions exist for process and data understanding and for data management, but they treat data analysis as an isolated phase and do not provide system-level solutions for process and data understanding, data preparation, and ML improvement that would enable configurable and generalizable machine-learning solutions. This thesis addresses these challenges by proposing a machine-learning framework for ERW and demonstrates five industrial use cases that apply and validate it. The framework examines the questions and data specifics at hand, proposes simulation-assisted data acquisition, and discusses machine-learning methods divided into two groups: feature engineering and feature learning. It builds on semantic technologies that enable standardized process and data descriptions, ontology-aware data preparation, and semi-automated, user-configurable ML solutions. The thesis also demonstrates the transferability of the framework to a high-precision laser process. This work is a first step on the path toward intelligent manufacturing of ERW, in line with the fourth industrial revolution.
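
    To make the feature-engineering branch concrete, the sketch below extracts a few summary statistics from a recorded process curve (e.g. dynamic resistance over time) and trains a classifier to predict weld quality; the feature set, labels, and model are illustrative assumptions, not the framework's actual interface.

        # Hand-crafted features from a per-weld process curve, fed to a classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def curve_features(resistance_curve: np.ndarray) -> np.ndarray:
            return np.array([
                resistance_curve.mean(),
                resistance_curve.std(),
                resistance_curve.max() - resistance_curve.min(),      # amplitude of drop/rise
                np.argmax(resistance_curve) / len(resistance_curve),  # relative peak position
            ])

        def train_quality_model(curves, labels):
            """curves: list of 1-D arrays, labels: 0 = acceptable weld, 1 = defective."""
            X = np.vstack([curve_features(c) for c in curves])
            return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)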

    Distributed AOP middleware for large-scale scenarios

    In this PhD dissertation we present a distributed middleware proposal for large-scale application development. Our main aim is to separate the distributed concerns of these applications, such as replication, so that they can be integrated independently and transparently. Our approach is based on implementing these concerns with the paradigm of distributed aspects, and it benefits from peer-to-peer (P2P) network and aspect-oriented programming (AOP) substrates to provide these concerns in a decentralized, decoupled, efficient, and transparent way. Our middleware architecture is divided into two layers: a composition model and a scalable deployment platform for distributed aspects. Finally, we demonstrate the viability and applicability of our model through the implementation and experimentation of prototypes in real large-scale networks.
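
    A minimal sketch of the core idea follows: a distributed concern such as replication is woven around application operations instead of being coded into them. A Python decorator stands in for the middleware's distributed pointcut/advice machinery, and the peer objects and their put method are hypothetical.

        # Replication expressed as an aspect woven around a local write operation.
        import functools

        def replicate_to(peers):
            """Advice that forwards every successful local write to a set of remote peers."""
            def aspect(write_op):
                @functools.wraps(write_op)
                def woven(key, value, *args, **kwargs):
                    result = write_op(key, value, *args, **kwargs)   # original join point
                    for peer in peers:
                        peer.put(key, value)                         # replication concern
                    return result
                return woven
            return aspect

        # Usage: the application code stays oblivious to replication.
        # @replicate_to(overlay_neighbours)
        # def put(key, value):
        #     local_store[key] = value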