59 research outputs found

    Personalized Temporal Medical Alert System

    The continuously increasing needs of telemedicine and healthcare accentuate the need for well-adapted medical alert systems. Such alert systems may be used by a variety of patients and medical actors, and should allow monitoring a wide range of medical variables. This paper proposes Tempas, a personalized temporal alert system. It facilitates customized alert configuration through linguistic trends. The trend detection algorithm is based on data normalization, time series segmentation, and segment classification. It improves on the state of the art by treating both irregular and regular time series appropriately, thanks to the introduction of an observation-variable valid time. Alert detection is enriched with quality and applicability measures, which allow personalized tuning of the system to help reduce false-negative and false-positive alerts.
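As an illustration of the trend-detection pipeline described above (normalization, segmentation, classification), here is a minimal sketch; the function name, the fixed-size point windows, and the slope threshold are illustrative assumptions, not Tempas' actual design:

```python
def classify_trends(times, values, window=3, slope_threshold=0.1):
    """Toy linguistic-trend pipeline: normalize, segment, classify each segment."""
    # Normalize values to [0, 1] so the slope threshold is scale-free.
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    norm = [(v - lo) / span for v in values]

    labels = []
    # Segment the series into fixed-size point windows.
    for i in range(0, len(norm) - 1, window):
        seg_t = times[i:i + window + 1]
        seg_v = norm[i:i + window + 1]
        if len(seg_v) < 2:
            continue
        # The slope uses real timestamps, so irregular sampling is handled too.
        slope = (seg_v[-1] - seg_v[0]) / (seg_t[-1] - seg_t[0])
        if slope > slope_threshold:
            labels.append("increasing")
        elif slope < -slope_threshold:
            labels.append("decreasing")
        else:
            labels.append("stable")
    return labels
```

A series that rises and then flattens, e.g. `classify_trends([0, 1, 2, 3, 4, 5, 6], [0, 1, 2, 3, 3, 3, 3])`, yields one "increasing" and one "stable" segment.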

    Astral: An algebraic approach for sensor data stream querying

    Sensor-based applications are expanding in many contexts, at scales ranging from the individual (e.g., personal monitoring, smart homes) to regional and even worldwide settings (e.g., logistics, natural resource monitoring and forecasting). Easy and efficient management of the data streams produced by large numbers of heterogeneous sensors is a key issue in supporting such applications. Numerous solutions for query processing on data streams have been proposed by the scientific community, and several query processors have been implemented, offering heterogeneous querying capabilities and semantics. Our work contributes to the formalization of queries on data streams in general, and on sensor data in particular. This paper proposes the Astral algebra, which defines operators on temporal relations and streams that allow the expression of a large variety of queries. This proposal extends several aspects of existing results: it gives precise formal definitions of operators that are (or may be) semantically ambiguous, and it demonstrates several properties of those operators. Such properties are important for query optimization, as they help with query rewriting and operator sharing. The formalization deepens the understanding of the queries and facilitates comparison of the semantics implemented by existing systems, an essential step in building mediation solutions involving heterogeneous data stream processing systems, where cross-system data exchange and application coupling would be facilitated. This paper discusses existing proposals and presents the Astral algebra and several properties of its operators.
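To make the notion of an operator mapping a stream to a temporal relation concrete, here is a toy sliding-window operator; it illustrates the general stream-to-relation idea only, not Astral's actual operator definitions:

```python
def sliding_window(stream, size):
    """Stream-to-relation sketch: given a stream of (timestamp, value) tuples,
    return a function that, at instant t, yields the finite relation of tuples
    whose timestamp lies in the interval (t - size, t]."""
    def at(t):
        return [(ts, v) for ts, v in stream if t - size < ts <= t]
    return at
```

For example, a window of size 2 over the stream `[(1, "a"), (2, "b"), (4, "c")]` contains both early tuples at instant 2, but only the last tuple at instant 4.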

    Petits textes pour grandes masses de données

    When controlled, the omnipresence of data offers a potential for services never reached before. We propose a user-driven approach to take advantage of massive data streams. Our solution, named Stream2Text, relies on a personalized and continual refinement of data to generate texts (in natural language) that provide a tailored synthesis of the data relevant to the user. This textual stream enables monitoring suited to a wide range of users; it can also be shared on social networks or used individually on mobile devices.
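A minimal sketch of the refine-then-verbalize idea, with an invented threshold-based notion of relevance and a fixed sentence template (neither is Stream2Text's actual design):

```python
def summarize(readings, threshold):
    """Toy textual synthesis of a stream window: keep only readings the user
    cares about (here: above a personal threshold), then verbalize them."""
    relevant = [r for r in readings if r["value"] > threshold]  # personalized refinement
    if not relevant:
        return "Nothing notable in this period."
    peak = max(relevant, key=lambda r: r["value"])
    return (f"{len(relevant)} reading(s) exceeded {threshold}; "
            f"peak was {peak['value']} from {peak['sensor']}.")
```

Run continuously over successive windows, such a function would produce the textual stream the abstract describes.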

    A probabilistic model for assigning queries at the edge

    Data management at the edge of the network can increase application performance, as processing is realized close to end users, limiting the latency observed in the provision of responses. Typical data processing involves the execution of queries/tasks, defined by users or applications, that ask for responses in the form of analytics. Query/task execution can be realized at edge nodes, which can undertake the responsibility of delivering the desired analytics to the interested users or applications. In this paper, we deal with the problem of allocating queries to a number of edge nodes. The aim is to further reduce latency by allocating queries to nodes that exhibit a low load and a high processing speed, so that they can respond in the minimum time. Before any allocation, we propose a method for estimating the computational burden that a query/task will add to a node, and afterwards we proceed with the final assignment. The allocation is concluded with the assistance of an ensemble similarity scheme, responsible for delivering the complexity class of each query/task, and a probabilistic decision-making model. The proposed scheme matches the characteristics of incoming queries and edge nodes to conclude the optimal allocation. We discuss our mechanism and, through a large set of simulations and the adoption of benchmarking queries, reveal the potential of the proposed model, supported by numerical results.
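The allocation goal above, picking the node that minimizes the estimated response time given its load and processing speed, can be sketched as follows, using an illustrative linear cost model rather than the paper's probabilistic one:

```python
def assign(query_cost, nodes):
    """Allocate a query to the edge node with minimum estimated response time.
    query_cost is the estimated computational burden of the query; each node
    carries its current load and processing speed (illustrative units)."""
    def eta(node):
        # Queued work plus this query's burden, divided by processing speed.
        return (node["load"] + query_cost) / node["speed"]
    return min(nodes, key=eta)["name"]
```

With two nodes, one heavily loaded but fast and one lightly loaded but slow, the function trades the two factors off as the abstract suggests.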

    Edge-centric queries stream management based on an ensemble model

    The Internet of Things (IoT) involves numerous devices that interact with each other or with their environment to collect and process data. The collected data streams are guided to the cloud for further processing and the production of analytics. However, any processing in the cloud, even when supported by improved computational resources, suffers from increased latency: the data must travel to the cloud infrastructure, and the provided analytics must travel back to end users or devices. To minimize latency, we can perform data processing at the edge of the network, i.e., at the edge nodes. The aim is to deliver analytics and build knowledge close to end users and devices, minimizing the time required to realize responses. Edge nodes are thus transformed into distributed processing points where analytics queries can be served. In this paper, we deal with the problem of allocating queries, defined for producing knowledge, to a number of edge nodes. The aim is to further reduce latency by allocating queries to nodes that exhibit low load (both current and estimated), so that they can provide the final response in the minimum time. Before the allocation, however, we must decide the computational burden that a query will cause. The allocation is concluded with the assistance of an ensemble similarity scheme responsible for delivering the complexity class of each query; the complexity class can then be matched against the current load of every edge node. We discuss our scheme and, through a large set of simulations and the adoption of benchmarking queries, reveal the potential of the proposed model, supported by numerical results.
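A toy version of the ensemble similarity idea: each similarity measure votes for the complexity class of the most similar already-classified query, and the majority wins. The two measures and the labeled query set below are illustrative assumptions, not the paper's actual ensemble:

```python
from difflib import SequenceMatcher

def complexity_class(query, labeled):
    """Ensemble similarity sketch: classify a query's complexity by majority
    vote over similarity measures, each matching against labeled queries."""
    def token_overlap(a, b):
        # Jaccard similarity over whitespace tokens.
        ta, tb = set(a.split()), set(b.split())
        return len(ta & tb) / len(ta | tb)

    def char_ratio(a, b):
        # Character-level similarity ratio.
        return SequenceMatcher(None, a, b).ratio()

    votes = []
    for measure in (token_overlap, char_ratio):
        best = max(labeled, key=lambda item: measure(query, item[0]))
        votes.append(best[1])
    # Majority vote across the ensemble's members.
    return max(set(votes), key=votes.count)
```

The returned class would then be matched against each node's current load to pick the target node.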

    Wireless Localization in Narrowband-IoT Networks

    The Internet of Things (IoT) is an emerging technology that connects devices to the internet, and with the arrival of 5G even more devices will be connected. Narrowband IoT (NB-IoT) is a promising cellular technology that supports the connection of IoT devices and their integration with existing long-term evolution (LTE) networks. Demand for location-based services that require localization of IoT devices is growing along with the number of IoT devices and applications. This thesis considers the localization of IoT devices in the NB-IoT wireless network. A localization emulation was produced in which software-defined radios (SDRs) were used to implement base stations (BSs) and user equipment (UE), a channel emulator was used to emulate wireless channel conditions, and a personal computer (PC) was used to calculate the UE location. The distance from each BS to the UE is calculated using the time of arrival (TOA), and a triangulation method estimates the UE's position from the distances between the different BSs and the UE. The positioning accuracy is analysed in various simulation scenarios, and the results are compared with the Third Generation Partnership Project (3GPP) Release 14 standards for NB-IoT. The 3GPP-standardized positioning accuracy requirement of 50 m horizontal accuracy for localization in NB-IoT was achieved under line-of-sight (LOS) full triangulation in scenarios 1 and 2.
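The TOA-based positioning step can be sketched as follows: each TOA gives a BS-to-UE distance, and subtracting the first circle equation from the others yields a linear system for the 2D position. This is the textbook linearized trilateration, shown here as a simplified stand-in for the thesis' emulated setup and assuming ideal LOS measurements:

```python
C = 299_792_458.0  # speed of light, m/s

def locate(bs, toas):
    """Estimate a 2D UE position from TOAs at three base stations.
    bs: list of three (x, y) BS coordinates; toas: matching TOAs in seconds."""
    d = [C * t for t in toas]  # TOA -> distance to each BS
    (x1, y1), (x2, y2), (x3, y3) = bs
    # Subtracting circle 1 from circles 2 and 3 cancels the quadratic terms,
    # leaving two linear equations A @ [x, y] = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d[0]**2 - d[1]**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d[0]**2 - d[2]**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    # Cramer's rule for the 2x2 system.
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With BSs at (0, 0), (60, 0), and (0, 80) and a UE at (30, 40), all three distances equal 50 m, and the solver recovers the UE position.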

    Reconfigurable FPGA Architecture for Computer Vision Applications in Smart Camera Networks

    Smart Camera Networks (SCNs) are an emerging research field that represents the natural evolution of centralized computer vision applications towards fully distributed and pervasive systems. In this vision, one of the biggest efforts is the definition of a flexible and reconfigurable SCN node architecture able to remotely update, at run-time, both the application parameters and the computer vision application being performed. In this respect, we present a novel SCN node architecture based on a device in which a microcontroller manages all the network functionality as well as the remote configuration, while an FPGA implements all the necessary modules of a full computer vision pipeline. In this work the envisioned architecture is first detailed in general terms, then a real implementation is presented to show the feasibility and the benefits of the proposed solution. Finally, performance evaluation results underline the potential of a hardware/software codesign approach in achieving flexibility and reduced processing time.

    Realtime image noise reduction FPGA implementation with edge detection

    The purpose of this dissertation was to develop and implement, in a Field Programmable Gate Array (FPGA), a noise reduction algorithm for real-time sensor-acquired images. A Moving Average filter was chosen for its low computational expenditure, speed, good precision, and low-to-medium hardware resource utilization. The technique is simple to implement; however, if all pixels are indiscriminately filtered, the result is an undesirably blurry image. Since the human eye is more sensitive to contrasts, a technique was introduced to preserve sharp contour transitions, which, in the author's opinion, is the dissertation's contribution. Synthetic and real images were tested. The synthetic images, composed of both sharp and soft tone transitions, were generated with a developed algorithm, while the real images were captured with a high resolution sensor with 8192 shades (13 bits), scaled up to 10 × 10³ shades. A least-squares polynomial data smoothing filter, Savitzky-Golay, was used for comparison. It can be adjusted using three degrees of freedom: the window frame length, which varies the size of the filtering relation within a pixel's neighborhood; the derivative order, which varies the curviness; and the polynomial coefficients, which change the adaptability of the curve. The Moving Average filter permits only one degree of freedom, the window frame length. Tests revealed promising results with 2nd and 4th polynomial orders. Higher qualitative results were achieved with Savitzky-Golay, whose signal characteristics are better preserved, especially at high frequencies. The FPGA algorithms were implemented in 64-bit integer registers, serving two purposes: increasing precision, hence reducing the error compared with performing the computation in floating-point registers; and accommodating the registers' growing cumulative multiplications. Results were then compared with MATLAB's double-precision 64-bit floating-point computations to verify the error difference between the two.
    The comparison parameters used were the Mean Squared Error, the Signal-to-Noise Ratio, and a similarity coefficient.
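A one-dimensional sketch of the dissertation's central idea: a moving-average filter that leaves pixels sitting on strong contrast transitions untouched. The window size, the threshold value, and the simple neighborhood-range edge test are illustrative choices, not the implemented design:

```python
def smooth_preserve_edges(pixels, window=3, edge_threshold=50):
    """Moving-average smoothing that skips pixels on sharp tone transitions,
    so contours stay crisp while flat regions are denoised."""
    half = window // 2
    out = list(pixels)
    for i in range(half, len(pixels) - half):
        neighborhood = pixels[i - half:i + half + 1]
        # Treat a large local swing as an edge and leave the pixel untouched.
        if max(neighborhood) - min(neighborhood) > edge_threshold:
            continue
        out[i] = sum(neighborhood) // len(neighborhood)  # integer math, as on FPGA
    return out
```

On a signal with one sharp step, the flat regions are averaged while the two pixels straddling the step keep their original values.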

    Modeling and Optimization of Next-Generation Wireless Access Networks

    The ultimate goal of next-generation access networks is to provide all network users, whether fixed or mobile, indoor or outdoor, with high-data-rate connectivity while ensuring a high quality of service. In order to realize this ambitious goal, delay, jitter, error rate, and packet loss should be minimized, a goal that can only be achieved by integrating different technologies, including passive optical networks, 4th-generation wireless networks, and femtocells, among others. This thesis focuses on the medium access control and physical layers of future networks. The first part of this thesis discusses techniques to improve the end-to-end quality of service in hybrid optical-wireless networks. In these hybrid networks, users are connected to a wireless base station that relays their data to the core network through an optical connection. Hence, by integrating the wireless and optical parts of these networks, a smart scheduler can predict the traffic incoming to the optical network. The prediction data generated here is then used to propose a traffic-aware dynamic bandwidth assignment algorithm for reducing the end-to-end delay. The second part of this thesis addresses the challenging problem of interference management in a two-tier macrocell/femtocell network. A high-quality, high-speed connection for indoor users is ensured only if the network has a high signal-to-noise ratio, a requirement that can be fulfilled by using femtocells in cellular networks. However, since femtocells generate harmful interference to nearby macrocell users, careful analysis and realistic models should be developed to manage the introduced interference. Thus, a realistic model for femtocell interference outside suburban houses is proposed, and several performance measures, e.g., the signal-to-interference-and-noise ratio and the outage probability, are derived mathematically for further analysis.
    The quality of service of cellular networks can be degraded by several factors. For example, in industrial environments, simultaneous fading and strong impulsive noise significantly deteriorate the error rate performance. In the third part of this thesis, a technique is presented to improve the bit error rate of orthogonal frequency division multiplexing (OFDM) systems in industrial environments. OFDM is the most widely used technology in next-generation networks, and it is very susceptible to impulsive noise, especially in fading channels. Mathematical analysis proves that the proposed method can effectively mitigate the degradation caused by impulsive noise and significantly improve the signal-to-interference-and-noise ratio and the bit error rate, even in frequency-selective fading channels.
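One classic countermeasure in this setting is a blanking nonlinearity applied to the time-domain samples before the FFT, so that an impulse's energy is not spread across all subcarriers. It is shown here as the textbook baseline for impulsive-noise mitigation, not necessarily the specific method proposed in the thesis:

```python
def blank_impulses(samples, threshold):
    """Blanking nonlinearity: zero any received time-domain sample whose
    magnitude exceeds the threshold, on the assumption that such outliers
    are impulsive noise rather than signal."""
    return [0.0 if abs(s) > threshold else s for s in samples]
```

A practical receiver would tune the threshold to the expected signal envelope; too low a threshold blanks useful signal, too high a threshold lets impulses through.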

    Champs-Multizone and Virtual Building for Integrated Building Systems Design and Performance Evaluation

    The ultimate goal of this research was to develop an integrated framework that facilitates performance-based, multi-stage design of buildings and comparison between the performance predicted at the design stage and that monitored at the operation stage. Such an integrated framework would not only enable design optimization, but also enable confirmation of design intent or diagnosis of performance deficiencies, thus providing feedback for future building design. This dissertation study represents the first step toward this ultimate goal, and had the following specific objectives: 1) developing a combined heat, air, moisture, and pollutant transport model for whole-building performance simulation; 2) developing a real-time building IEQ and energy performance monitoring system using a Virtual Building structure to facilitate fast comparison between design and monitored performance; 3) developing a methodology to use CHAMPS-Multizone for green building design throughout its initial and final design stages. The CHAMPS-Multizone model consists of a building envelope model, a room model, an HVAC model, and an airflow model, and has efficient and accurate numeric solvers. The model was tested on different building cases, including the ASHRAE 140 standard test and a three-zone building test, and compared with EnergyPlus calculation results. The Virtual Building is a digital representation of the physical building with a hierarchical data structure, containing both static data, such as enclosure assemblies and internal layout, and dynamic data, such as occupant activity schedules, outdoor weather conditions, indoor environmental parameters, HVAC operation data, and energy consumption data. The Virtual Building approach has been demonstrated in a LEED office building with its monitoring system.
    Finally, a multi-stage design process was formulated that considers the impact of climate and site, form and massing, external enclosure, internal configuration, and environmental systems on whole-building performance as simulated by CHAMPS-Multizone. Using the testbed building, the simulation results were compared with the results monitored by the Virtual Building monitoring system. Future research includes refining CHAMPS-Multizone's simulation capability, adding modules such as water loop calculation, and integrating the HVAC calculation with EnergyPlus.