LIPIcs, Volume 251, ITCS 2023, Complete Volume
The consequences of climate change have emphasized the need for a power network centered around clean, green, and renewable sources of energy. Currently, photovoltaics (PV) and wind turbines are the only two technologies that can convert the renewable energy of the sun and wind, respectively, into large-scale power for the electricity network. This dissertation provides a novel approach to deploying these sources of power (mainly PV), coupled with lithium-ion battery storage, in an efficient and sustainable manner. Such a power network can deliver efficiency, reliability, low cost, and sustainability with minimal impact on the environment. The first chapter illustrates the use of PV- and battery-based local power networks for low-voltage loads, as well as the significance of local DC power in the transportation sector. Chapter two focuses on the most efficient and maximum utilization of PV and battery power in an AC infrastructure; a simulated use case for load satisfaction and feasibility analysis of 10 university-scale buildings is presented. The role of PV- and battery-based networks in meeting the new demand from the electrification of surface transportation is discussed in Chapter three. Chapter four analyzes PV- and battery-based networks from a global perspective and proposes a DC power network with PV and complementary wind power to meet power needs across the globe. Finally, the role of SiC power electronics and the design concept for a SiC-based DC-to-DC converter for maximum utilization of PV/wind and battery power through enabling HVDC transmission are discussed in Chapter six.
In data analysis, recognizing unusual patterns (outlier analysis or anomaly detection) plays a crucial role in identifying critical events. Because of its widespread use in many applications, it remains an important and active branch of data mining research. As a result, numerous techniques for finding anomalies have been developed, and more are still being worked on. By identifying anomalies, researchers can gain vital knowledge that helps them carry out more meaningful data analyses. However, anomaly detection is even more challenging when the datasets are high-dimensional and multivariate. In the literature, anomaly detection has received much attention in general, but far less in high-dimensional and multivariate settings. This paper systematically reviews the existing related techniques and presents extensive coverage of the challenges and perspectives of anomaly detection within high-dimensional and multivariate data. At the same time, it provides clear insight into the techniques developed for anomaly detection problems, and aims to help readers select the technique best suited to their purpose. It has been found that PCA, DOBIN, the Stray algorithm, and DAE-KNN have a high learning rate compared to random projection, ROBEM, and OCP methods. Overall, most methods have shown an excellent ability to tackle the curse of dimensionality and multivariate features when performing anomaly detection. Moreover, a comparison of the algorithms for anomaly detection is provided to guide the choice of a suitable algorithm. Finally, a line of future study would be to compare the methods on other domain-specific datasets and to offer a comprehensive interpretation that describes the true nature of the anomalies.
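As a rough illustration of one of the reviewed ideas, a PCA-based detector can score each point by its reconstruction error after projecting onto the leading principal components; points far from the low-dimensional subspace get large errors. The synthetic data, the number of components, and the quantile threshold below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 200 inliers lying on a 2-D subspace of a 10-D space,
# plus 5 large-magnitude outliers off that subspace.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
X = np.vstack([X, rng.normal(scale=8.0, size=(5, 10))])

def pca_anomaly_scores(X, k=2):
    """Score each row by its reconstruction error after projecting onto
    the top-k principal components; large errors suggest anomalies."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T                # (d, k) projection basis
    recon = Xc @ P @ P.T        # project onto the subspace, reconstruct
    return np.linalg.norm(Xc - recon, axis=1)

scores = pca_anomaly_scores(X, k=2)
# Flag points whose error exceeds a simple quantile threshold.
threshold = np.quantile(scores, 0.97)
outliers = np.flatnonzero(scores > threshold)
print(sorted(outliers.tolist()))
```

With a quantile threshold, a few borderline inliers may be flagged alongside the injected outliers; in practice the threshold would be tuned per dataset.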
In recent years, wireless sensor communication has grown rapidly in its capability to gather information and to communicate and transmit data effectively. Clustering is the main technique for improving the network lifespan of a wireless sensor network; it involves grouping the nodes into clusters and selecting a cluster head for each cluster. The cluster head gathers data from the normal nodes in its cluster, and the gathered information is then transmitted to the base station. However, many factors can lead to unstable cluster head selection and dead nodes. A cluster head selection technique must take into account factors including residual energy, neighbouring nodes, and the distance between the base station and the regular nodes. In this study, we thoroughly investigate a number of methods for selecting a cluster head and constructing a cluster. Additionally, a concise performance assessment of the techniques is given, together with their criteria, advantages, and future directions.
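The three selection criteria named above (residual energy, neighbour count, distance to the base station) are often combined into a single weighted fitness score. A minimal sketch follows; the node records, weights, and normalization constants are illustrative assumptions, not values from any of the surveyed methods.

```python
# Hypothetical node records: (node_id, residual_energy_J, neighbour_count,
# distance_to_base_station_m); values are illustrative only.
nodes = [
    (1, 0.48, 6, 120.0),
    (2, 0.95, 4, 200.0),
    (3, 0.70, 9, 80.0),
    (4, 0.20, 3, 150.0),
]

def cluster_head_score(energy, neighbours, dist_to_bs,
                       w_e=0.5, w_n=0.3, w_d=0.2,
                       e_max=1.0, n_max=10, d_max=250.0):
    """Weighted fitness over the three criteria: more residual energy,
    more neighbours, and a shorter distance to the base station all
    raise the score (weights and maxima are assumptions)."""
    return (w_e * energy / e_max
            + w_n * neighbours / n_max
            + w_d * (1.0 - dist_to_bs / d_max))

# The node with the highest score becomes the cluster head for this round.
head = max(nodes, key=lambda n: cluster_head_score(n[1], n[2], n[3]))
print(head[0])   # → 3: best balance of energy, neighbours, and distance
```

Rotating this election every round, with weights biased toward residual energy, is one common way to avoid draining a single node.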
As embedded processors become more powerful, a growing number of embedded systems equipped with artificial intelligence (AI) algorithms are being used in radiation environments to perform routine tasks and reduce radiation risk for human workers. On the one hand, because of their low price, commercial off-the-shelf devices and components are becoming increasingly popular for making such tasks more affordable. On the other hand, this presents new challenges: improving radiation tolerance, supporting multiple AI tasks, and delivering power efficiency for embedded systems in harsh environments. Three strands of research work have been completed in this thesis: 1) a fast simulation method for analysing single event effects (SEE) in integrated circuits, 2) a self-refresh scheme to detect and correct bit-flips in random access memory (RAM), and 3) a hardware AI system with dynamic hardware accelerators and AI models for increased flexibility and efficiency. The variance of the physical parameters in practical implementations, such as the nature of the particle, the linear energy transfer, and the circuit characteristics, can have a large impact on the final simulation accuracy, which significantly increases the complexity and cost of transistor-level simulation workflows for large-scale circuits and makes SEE simulation of such circuits difficult. Therefore, in the first research work, a new SEE simulation scheme is proposed that offers a fast and cost-efficient way to evaluate and compare the performance of large-scale circuits subject to the effects of radiation particles. The advantages of transistor-level and hardware description language (HDL) simulations are combined to produce accurate SEE digital error models for rapid error analysis in large-scale circuits. Under the proposed scheme, time-consuming back-end steps are skipped, and SEE analysis for large-scale circuits can be completed in just a few hours.
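The kind of HDL-level fault-injection campaign that such a scheme accelerates can be sketched as follows: flip an internal net of a small netlist and check whether the upset propagates to an output or is logically masked. The full-adder netlist, the two strike sites, and the trial count are illustrative assumptions, not the thesis's actual error models.

```python
import random

random.seed(0)

# Golden model: a 1-bit full adder expressed as Boolean internal nets.
def full_adder(a, b, cin):
    p = a ^ b            # propagate net
    g = a & b            # generate net
    return p ^ cin, g | (cin & p)

def full_adder_see(a, b, cin, flip_net):
    """Same netlist with a single-event upset injected on one internal
    net ('p' or 'g'); logical masking may stop the upset from reaching
    an output."""
    p = a ^ b
    g = a & b
    if flip_net == "p":
        p ^= 1
    elif flip_net == "g":
        g ^= 1
    return p ^ cin, g | (cin & p)

# Monte Carlo campaign: random inputs and strike locations; count how
# often the upset propagates to a visible output error.
trials = 10_000
errors = 0
for _ in range(trials):
    a, b, cin = (random.getrandbits(1) for _ in range(3))
    if full_adder(a, b, cin) != full_adder_see(a, b, cin, random.choice("pg")):
        errors += 1
print(errors / trials)   # near 0.875: upsets on 'g' are sometimes masked
```

Upsets on the propagate net always corrupt the sum, while upsets on the generate net are masked whenever the OR gate's other input is already 1; the measured rate reflects that mix.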
In high-radiation environments, bit-flips in RAMs not only occur but may also accumulate, and typical error mitigation methods cannot handle high error rates at low hardware cost. In the second work, an adaptive scheme combining error-correcting codes and refreshing techniques is proposed to correct errors and mitigate error accumulation in extreme radiation environments. The scheme continuously refreshes the data in RAMs so that errors cannot accumulate. Furthermore, because the proposed design can share the same ports as the user module without changing the timing sequence, it can easily be applied to systems whose hardware modules are designed with fixed read and write latency. Implementing intelligent systems with constrained hardware resources is a challenge. In the third work, an adaptive hardware resource management system for multiple AI tasks in harsh environments was designed. Inspired by the "refreshing" concept of the second work, we utilise a key feature of FPGAs, partial reconfiguration, to improve the reliability and efficiency of the AI system. More importantly, this feature provides the capability to manage the hardware resources for deep learning acceleration. In the proposed design, the on-chip hardware resources are dynamically managed to improve the flexibility, performance, and power efficiency of deep learning inference systems. The deep learning units provided by Xilinx are used to perform multiple AI tasks simultaneously, and the experiments show significant improvements in power efficiency across a wide range of scenarios with different workloads. To further improve the performance of the system, the concept of reconfiguration was extended and an adaptive DL software framework was designed. This framework provides a significant level of adaptability for various deep learning algorithms on an FPGA-based edge computing platform.
To meet the specific accuracy and latency requirements derived from the running applications and operating environments, the platform can dynamically update its hardware and software (e.g., processing pipelines) to achieve better cost, power, and processing efficiency than a static system.
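The second work's combination of an error-correcting code with periodic refreshing can be sketched with a Hamming(7,4) code and a scrubbing pass that corrects and rewrites every stored word; the code choice and word layout here are illustrative assumptions, since the thesis's actual code and RAM interface are not specified in the abstract.

```python
# Hamming(7,4): 4 data bits plus 3 parity bits, correcting any single
# bit-flip per word. Periodic scrubbing rewrites corrected words so
# single-bit errors cannot accumulate into uncorrectable double errors.
def encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7,
    parity bits at positions 1, 2, and 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4        # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def correct(c):
    """Recompute the parity checks; the syndrome is the 1-based
    position of a flipped bit (0 means no error). Fix it in place."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

def scrub(memory):
    """One refresh pass over the whole RAM: correct and write back
    every word, clearing accumulated single-bit errors."""
    for i, word in enumerate(memory):
        memory[i] = correct(word)
    return memory

word = encode([1, 0, 1, 1])
word[4] ^= 1                 # inject a single-event bit-flip
scrub([word])
print(word == encode([1, 0, 1, 1]))   # → True: the upset was scrubbed
```

The scrubbing interval must be short relative to the upset rate, so that the probability of two flips landing in the same word between passes stays negligible.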
The present doctoral thesis fits into the energy harvesting framework, presenting the development of low-power nodes that meet the energy autonomy requirement and share common technologies and architectures but rely on different energy sources and sensing mechanisms. The adopted approach evaluates the system in its entirety (i.e., the energy harvesting mechanism, the choice of harvester, the study of the sensing process, the selection of the electronic devices for processing, acquisition, and measurement, the electronic design, and the microcontroller unit (MCU) programming techniques), while accounting for very challenging constraints such as the low amounts of harvested power (in the µW-to-mW range), the careful management of the available energy, and the coexistence of sensing and radio transmission features with ultra-low-power requirements. Commercial sensors are mainly used to meet cost-effectiveness and large-scale reproducibility requirements; however, customized sensors for a specific application (soil moisture measurement), together with appropriate characterization and reading circuits, are also presented. Two different strategies have been pursued, leading to two types of sensor nodes, referred to as 'sensor tags' and 'self-sufficient sensor nodes'. The first term refers to completely passive sensor nodes without an on-board battery as a storage element, which operate only in the presence of the energy source and draw their energy from it. In this thesis, an RFID (Radio Frequency Identification) sensor tag for soil moisture monitoring, powered by the impinging electromagnetic field, is presented. The second term identifies sensor nodes equipped with a battery, rechargeable through energy scavenging, that works as a secondary reserve in the absence of the primary energy source.
In this thesis, quasi-real-time multi-purpose monitoring LoRaWAN nodes harvesting energy from thermoelectricity, diffused solar light, indoor white light, and artificial colored light are presented.
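To make the µW-to-mW budget concrete, the average draw of a duty-cycled node can be estimated as a duty-cycle-weighted mean of sleep and active power, to be checked against the harvested power. All numbers below are illustrative assumptions, not measurements from the thesis hardware.

```python
# Hypothetical duty-cycled LoRaWAN node budget (values are illustrative,
# not measured from the thesis hardware).
P_SLEEP_UW = 2.0       # µW, MCU + RTC in deep sleep
P_ACTIVE_MW = 40.0     # mW, sensing + LoRa transmission burst
T_ACTIVE_S = 1.5       # s, active window per cycle
PERIOD_S = 600.0       # s, one transmission every 10 minutes

def average_power_uw(p_sleep_uw, p_active_mw, t_active_s, period_s):
    """Duty-cycle-weighted average power draw, in µW."""
    t_sleep = period_s - t_active_s
    return (p_sleep_uw * t_sleep + p_active_mw * 1e3 * t_active_s) / period_s

avg = average_power_uw(P_SLEEP_UW, P_ACTIVE_MW, T_ACTIVE_S, PERIOD_S)
print(round(avg, 1))   # → 102.0 µW: must stay below the harvested power
```

With these assumed figures the node needs roughly 100 µW on average, which shows why the transmission period, not the burst power, dominates the feasibility of a given harvester.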
A Wireless Body Area Network (WBAN) is a network that may be worn on or implanted in the human body to transmit data, audio, and video in real time for assessing how vital organs are performing. WBAN communication may be either intra-WBAN or inter-WBAN. Intra-WBAN communication occurs when the various body sensors share information with each other; inter-WBAN communication occurs when two or more WBANs exchange data with one another. One difficulty is getting data traffic from the wireless sensor nodes to the gateway with as little wasted energy, packet loss, and downtime as possible. In this paper, existing WBAN protocols are compared with a WBAN under Particle Swarm Optimization (PSO), and the performance of various parameters is analysed for different simulation areas. The WBAN under the PSO protocol reduces energy consumption by 43.2% compared with the SIMPLE protocol, thanks to the effective selection of forwarding nodes based on PSO optimization. In addition, an experimental WBAN testbed is deployed in an indoor environment to study the performance of the routing metrics with respect to energy and packet reception.
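A minimal sketch of PSO-driven forwarder selection follows, assuming a simple normalized cost over candidate weights that penalizes low residual energy and long distance to the sink; the candidate data, cost function, and swarm parameters are illustrative assumptions, not the paper's actual fitness function.

```python
import random

random.seed(1)

# Hypothetical candidate forwarders: (residual_energy_J, dist_to_sink_m).
candidates = [(0.9, 40.0), (0.4, 20.0), (0.8, 15.0), (0.3, 10.0)]

def cost(weights):
    """Normalized weighted cost: low residual energy and long distance
    to the sink are both penalized (illustrative, not the paper's)."""
    s = sum(weights)
    if s == 0.0:
        return float("inf")
    return sum(w / s * (d / 50.0 + (1.0 - e))
               for w, (e, d) in zip(weights, candidates))

def pso(dim, n_particles=20, iters=80, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO over the box [0, 1]^dim."""
    xs = [[random.random() for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for k in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][k] = (w * vs[i][k]
                            + c1 * r1 * (pbest[i][k] - xs[i][k])
                            + c2 * r2 * (gbest[k] - xs[i][k]))
                xs[i][k] = min(1.0, max(0.0, xs[i][k] + vs[i][k]))
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i][:]
                if cost(xs[i]) < cost(gbest):
                    gbest = xs[i][:]
    return gbest

best = pso(dim=len(candidates))
forwarder = best.index(max(best))   # candidate with the largest weight
print(forwarder)
```

The swarm concentrates weight on the candidate with the best energy/distance trade-off; in a real protocol the cost would also account for link quality and the traffic already routed through each node.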