61 research outputs found

    QoS-aware approximate query processing for smart cities spatial data streams

    Large amounts of georeferenced data streams arrive daily at stream processing systems, owing to the overabundance of affordable IoT devices. In addition, practitioners want to exploit Internet of Things (IoT) data streams for strategic decision-making. However, mobility data are highly skewed and their arrival rates fluctuate, which poses an extra challenge for data stream processing systems that must meet prespecified latency and accuracy goals. In this paper, we propose ApproxSSPS, a system for approximate processing of georeferenced mobility data at scale, with quality-of-service guarantees. We focus on stateful aggregations (e.g., means, counts) and top-N queries. ApproxSSPS features a controller that interactively learns the latency statistics and calculates proper sampling rates to meet latency and/or accuracy targets. An overarching trait of ApproxSSPS is its ability to strike a plausible balance between latency and accuracy targets. We evaluate ApproxSSPS on Apache Spark Structured Streaming with real mobility data, and we compare ApproxSSPS against a state-of-the-art online adaptive processing system. Our extensive experiments show that ApproxSSPS can fulfill latency and accuracy targets under varying parameter configurations and load intensities (i.e., transient peaks in data loads versus slowly arriving streams). Moreover, our results show that ApproxSSPS outperforms the baseline by significant margins. In short, ApproxSSPS is a novel spatial data stream processing system that can deliver accurate results in a timely manner by dynamically specifying the limits on data samples.
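
    The core idea of a controller that trades sampling rate against a latency target can be sketched as follows. This is a minimal illustrative sketch, not the actual ApproxSSPS controller: the function names, the fixed step size, and the bounds are assumptions for illustration.

```python
import random

def update_sampling_rate(rate, observed_latency_ms, target_latency_ms,
                         step=0.05, min_rate=0.05, max_rate=1.0):
    """Adjust the sampling rate toward a latency target (hypothetical rule).

    If the observed batch latency exceeds the target, sample less data to
    speed up processing; if there is headroom, sample more to improve
    accuracy. A real controller would also learn latency statistics online.
    """
    if observed_latency_ms > target_latency_ms:
        rate -= step
    else:
        rate += step
    return max(min_rate, min(max_rate, rate))

def sample_stream(records, rate, rng=random.Random(42)):
    """Uniformly sample a fraction `rate` of incoming records."""
    return [r for r in records if rng.random() < rate]

# Latency above target -> the controller reduces the sampling rate.
new_rate = update_sampling_rate(0.8, observed_latency_ms=120,
                                target_latency_ms=100)
```

    In practice such a controller would run once per micro-batch, feeding the measured batch latency back into the next rate decision.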

    PV Cell Characteristic Extraction to Verify Power Transfer Efficiency in Indoor Harvesting System

    A method is proposed to verify the efficiency of low-power harvesting systems based on Photovoltaic (PV) cells for indoor applications, using a Fractional Open-Circuit Voltage (FOCV) technique to track the Maximum Power Point (MPP). It relies on an algorithm that reconstructs the PV cell Power versus Voltage (P-V) characteristic by measuring the open-circuit voltage and the voltage/current operating point, without the short-circuit current required by state-of-the-art algorithms. This way, the characteristic is reconstructed from the two values corresponding to standard operation modes of dc-dc converters implementing the FOCV Maximum Power Point Tracking (MPPT) technique. The method is applied to a prototype system: an external board is connected between the transducer and the dc-dc converter to measure the open-circuit voltage and the voltage/current operating values. Experimental comparisons between the reconstructed and the measured P-V characteristics validate the reconstruction algorithm. Experimental results show that the method clearly identifies the error between the transducer operating point and the point of maximum power transfer, while also suggesting corrective action on the programmable factor of the FOCV technique. The proposed technique therefore provides a way to estimate MPPT efficiency without sampling the full P-V characteristic.
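
    The FOCV technique approximates the MPP voltage as a fixed fraction of the open-circuit voltage, V_MPP ≈ k · V_OC, with k (the programmable factor mentioned above) typically in the 0.7-0.8 range for PV cells. A minimal sketch of that relation and of the operating-point error it lets one quantify (the function names and the default k are illustrative assumptions, not the paper's implementation):

```python
def focv_mpp_voltage(v_oc, k=0.76):
    """FOCV estimate of the MPP voltage: a fixed fraction k of the
    open-circuit voltage. k = 0.76 is an assumed mid-range value; the
    right k is cell- and irradiance-dependent."""
    return k * v_oc

def mppt_error(v_operating, v_oc, k=0.76):
    """Relative error between the measured operating voltage and the
    FOCV target voltage; a nonzero value suggests correcting k."""
    v_target = focv_mpp_voltage(v_oc, k)
    return (v_operating - v_target) / v_target
```

    A negative error means the converter is operating below the estimated MPP voltage, which is the kind of deviation the external board in the prototype is designed to reveal.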

    A smart water metering deployment based on the fog computing paradigm

    In this paper, we look into smart water metering infrastructures that enable continuous, on-demand and bidirectional data exchange between metering devices, water flow equipment, utilities and end-users. We focus on the design, development and deployment of such infrastructures as part of larger smart city infrastructures. Until now, such critical smart city infrastructures have been developed following a cloud-centric paradigm, where all data are collected and processed centrally using cloud services to create real business value. Cloud-centric approaches must address several performance issues at all levels of the network, as massive metering datasets are transferred to distant machine clouds, while also respecting security and data privacy. Our solution uses the fog computing paradigm to provide a system in which the computational resources already available throughout the network infrastructure are utilized to greatly facilitate the analysis of fine-grained water consumption data collected by the smart meters, thus significantly reducing the overall load on network and cloud resources. Details of the system's design are presented along with a pilot deployment in a real-world environment. The performance of the system is evaluated in terms of network utilization and computational performance. Our findings indicate that the fog computing paradigm can be applied to a smart grid deployment to effectively reduce the data volume exchanged between the different layers of the architecture and provide better overall computational, security and privacy capabilities to the system.
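
    The data-volume reduction that fog nodes provide can be illustrated with a simple pre-aggregation step: instead of forwarding every raw meter reading to the cloud, a fog node forwards one aggregate per window. This is a hypothetical sketch of the principle, not the paper's actual pipeline; the function name and window size are assumptions.

```python
from statistics import mean

def aggregate_readings(readings, window=4):
    """Fog-node pre-aggregation: replace each `window` consecutive raw
    meter readings (e.g., liters/hour samples) with their mean before
    forwarding upstream, cutting upstream traffic by a factor of ~window."""
    return [mean(readings[i:i + window])
            for i in range(0, len(readings), window)]

# Eight raw readings from a smart meter become two upstream values.
raw = [12.0, 12.4, 11.8, 12.2, 13.0, 12.8, 13.2, 13.4]
upstream = aggregate_readings(raw)
```

    Fine-grained data stays at the fog node for local analysis, which is also where the privacy benefit comes from: raw consumption traces never leave the edge.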

    Chatbot Quality Assurance Using RPA

    Chatbots are becoming mainstream consumer engagement tools, and well-developed chatbots are already transforming user experience and personalization. Chatbot Quality Assurance (QA) is an essential part of the development and deployment process, regardless of whether it is conducted by one entity (business) or two (developers and business), to ensure ideal results. Robotic Process Automation (RPA) can be explored as a potential facilitator to improve, augment, streamline, or optimize chatbot QA. RPA is ideally suited for tasks that can be clearly defined (rule-based) and are repetitive in nature. This limits its ability to become an all-encompassing technology for chatbot QA testing, but it can still be useful in replacing part of the manual QA testing of chatbots. Chatbot QA is a complex domain in its own right and has its own challenges, including the lack of streamlined/standardized testing protocols and quality measures, though traits like intent recognition, responsiveness, conversational flow, etc., are usually tested, especially at the end-user testing phase. RPA can be useful in certain areas of chatbot QA, including increasing the sample size for training and testing datasets, generating input variations, splitting testing/conversation datasets, testing for typo resiliency, etc. The general rule is that the easier a testing process is to clearly define and set rules for, the better a candidate it is for RPA-based testing. This naturally biases RPA toward technical testing and makes it largely unsuitable as an end-user testing alternative. It has the potential to optimize chatbot QA in conjunction with AI and ML testing tools.
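
    One of the rule-based tasks mentioned above, generating typo variants of a test utterance for typo-resiliency testing, is easy to express as a deterministic procedure, which is exactly what makes it a good RPA candidate. The sketch below uses simple adjacent-character swaps; it is illustrative and not tied to any specific RPA product's API.

```python
import random

def typo_variants(utterance, n=3, rng=random.Random(0)):
    """Generate `n` typo variants of a test utterance by swapping adjacent
    characters -- a rule-based input-variation step an RPA bot could run
    when expanding a chatbot test set. Each variant is then fed to the
    chatbot to check that intent recognition still succeeds."""
    variants = set()
    while len(variants) < n:
        i = rng.randrange(len(utterance) - 1)
        chars = list(utterance)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        v = "".join(chars)
        if v != utterance:
            variants.add(v)
    return sorted(variants)

variants = typo_variants("track my order")
```

    Richer variation rules (dropped characters, keyboard-adjacent substitutions) follow the same pattern: as long as the rule is explicit, the test generation can be automated.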

    Latency-Sensitive 5G RAN Slicing for Deterministic Aperiodic Traffic in Smart Manufacturing

    5G and beyond networks will support the digitalization of smart manufacturing thanks to their capacity to simultaneously serve different types of traffic with distinct QoS requirements. This can be achieved using Network Slicing, which creates different logical network partitions (or slices) over a common infrastructure, each of which can be tailored to support a particular type of traffic. The configuration of the Radio Access Network (RAN) slices strongly impacts the capacity of 5G and beyond networks to support critical services with stringent QoS requirements, and in particular deterministic requirements. Existing RAN slicing solutions consider only the transmission rate (or bandwidth) requirements of the different services when partitioning the radio resources. This study demonstrates that this approach is not suitable for guaranteeing the stringent latency requirements of the deterministic aperiodic traffic that is characteristic of industrial critical applications. We then propose designing RAN slices using descriptors that consider both the services' transmission rate and latency requirements, and demonstrate that this approach can support critical services that generate deterministic aperiodic traffic.
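
    The gap between rate-only and rate-plus-latency slice descriptors can be seen with a toy dimensioning rule. The sketch below is a simplified illustration under stated assumptions (per-resource-block capacity, a worst-case burst drained within the latency budget); it is not the paper's actual slicing model, and all function names and parameter values are hypothetical.

```python
import math

def prbs_for_rate(rate_mbps, prb_capacity_mbps=1.0):
    """Resource blocks needed to satisfy only the slice's average
    transmission-rate requirement (the rate-only descriptor)."""
    return math.ceil(rate_mbps / prb_capacity_mbps)

def prbs_for_latency(burst_bits, latency_budget_ms, prb_bits_per_ms=1000.0):
    """Resource blocks needed so a worst-case aperiodic burst can be
    drained within the latency budget (toy latency descriptor)."""
    return math.ceil(burst_bits / (latency_budget_ms * prb_bits_per_ms))

def slice_prbs(rate_mbps, burst_bits, latency_budget_ms):
    """Combined descriptor: provision for the stricter of the two needs."""
    return max(prbs_for_rate(rate_mbps),
               prbs_for_latency(burst_bits, latency_budget_ms))
```

    With these toy numbers, a slice needing 3.5 Mb/s on average but required to drain a 10 kb burst within 2 ms needs 5 resource blocks, not the 4 a rate-only descriptor would allocate; deterministic aperiodic traffic is exactly the case where the latency term dominates.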

    A survey on mobility-induced service migration in the fog, edge, and related computing paradigms

    The final publication is available at ACM via http://dx.doi.org/10.1145/3326540. With the advent of fog and edge computing paradigms, computation capabilities have been moved toward the edge of the network to support the requirements of highly demanding services. To ensure that the quality of such services is still met in the event of users' mobility, migrating services across different computing nodes becomes essential. Several studies have emerged recently to address service migration in different edge-centric research areas, including fog computing, multi-access edge computing (MEC), cloudlets, and vehicular clouds. Since existing surveys in this area focus on either VM migration in general or migration in a single research field (e.g., MEC), the objective of this survey is to bring together studies from different, yet related, edge-centric research fields while capturing the different facets they address. More specifically, we examine the diversity characterizing the landscape of migration scenarios at the edge, present an objective-driven taxonomy of the literature, and highlight contributions that focused on architectural design and implementation. Finally, we identify a list of gaps and research opportunities based on the current state of the literature. One such opportunity lies in joining efforts from both the networking and computing research communities to facilitate future research in this area.

    Web 3.0 and its Potential Impact on Privacy Shifting Left in the Development Process

    The concept of Web 3.0 as the semantic web has been around since the early 2000s, and its decentralized interpretation gained more traction when the term was coined by Ethereum's co-founder Gavin Wood. New programming languages were identified as the first enablers of Web 3.0, but under its new interpretation, other enabling technologies were identified, the three most significant of which are Artificial Intelligence (AI), Machine Learning (ML), and blockchain. IoT is both a concurrent technology and an enabler. Security and privacy-related challenges with the enabler technologies are already being identified and addressed. The privacy challenges associated with Web 3.0 as a whole are more difficult to identify for multiple reasons, including the nascent form of the technology, the non-standardized definition of Web 3.0, and the privacy (and compliance) concerns associated with decentralization. A decentralized version of the internet has the potential to evoke new, unprecedented privacy challenges, some of which may be addressed with further advances in blockchain (a key enabler). Other challenges and trends are associated with the other Web 3.0 enabler, i.e., artificial intelligence. Despite this wide variety of privacy challenges, Web 3.0 is highly likely to push privacy left in the development process. Many of the identified challenges with underlying Web 3.0 technologies can be better addressed at the early stages of the development process. Even though we have yet to see how development culture, our approach to privacy, and Web 3.0 as a technology will evolve, especially considering the myriad of new ethical concerns associated with AI, these factors may not impede privacy's shift to the left in Web 3.0.

    An efficient RAN slicing strategy for a heterogeneous network with eMBB and V2X services

    Emerging 5G wireless technology will support services and use cases with vastly heterogeneous requirements. Network slicing, which allows composing multiple dedicated logical networks with specific functionality running on top of a common infrastructure, is introduced as a solution to cope with this heterogeneity. At the radio access network (RAN), network slicing involves the assignment of radio resources to each slice in accordance with its expected requirements and functionalities. Therefore, RAN slicing will provide the required design flexibility and will be necessary for any network slicing solution. This paper investigates the RAN slicing problem for providing two generic services of 5G, namely enhanced mobile broadband (eMBB) and vehicle-to-everything (V2X). In this respect, we propose an efficient RAN slicing scheme based on off-line reinforcement learning followed by a low-complexity heuristic algorithm, which allocates radio resources to the different slices with the target of maximizing resource utilization while ensuring the availability of resources to fulfill the traffic requirements of each RAN slice. A simulation-based analysis is presented to assess the performance of the proposed solution. The simulation results show that the proposed algorithm improves the network performance in terms of resource utilization, the latency of V2X services, achievable data rate, and outage probability.
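
    The shape of a low-complexity allocation heuristic of this kind can be sketched as follows. This is a simplified greedy stand-in written for illustration, not the paper's algorithm, and it omits the off-line reinforcement learning stage entirely; the function name and the minimum-guarantee scheme are assumptions.

```python
def allocate_rbs(total_rbs, demands, minimums):
    """Greedy heuristic: first grant each slice its guaranteed minimum of
    radio resource blocks, then hand the remaining blocks to slices with
    the largest unmet demand. Maximizes utilization while keeping every
    slice's baseline availability."""
    alloc = dict(minimums)
    remaining = total_rbs - sum(minimums.values())
    assert remaining >= 0, "infeasible: minimum guarantees exceed capacity"
    # Serve slices in order of unmet demand, largest first.
    for s in sorted(demands, key=lambda s: demands[s] - alloc[s],
                    reverse=True):
        extra = min(remaining, max(0, demands[s] - alloc[s]))
        alloc[s] += extra
        remaining -= extra
    return alloc

# 10 resource blocks shared by an eMBB slice and a V2X slice.
alloc = allocate_rbs(10, demands={"eMBB": 8, "V2X": 4},
                     minimums={"eMBB": 2, "V2X": 3})
```

    In the paper's scheme, the role sketched here as fixed `minimums` would instead come from the off-line learning stage, which estimates how many resources each slice needs to meet its traffic requirements.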
