738 research outputs found

    An ultra-low voltage FFT processor using energy-aware techniques

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004. Page 170 blank. Includes bibliographical references (p. 165-169). In a number of emerging applications such as wireless sensor networks, system lifetime depends on the energy efficiency of computation and communication. The key metric in such applications is the energy dissipated per function rather than traditional ones such as clock speed or silicon area. Hardware designs are shifting focus toward enabling energy-awareness, allowing the processor to be energy-efficient for a variety of operating scenarios. This is in contrast to conventional low-power design, which optimizes for the worst-case scenario. Here, three energy-quality scalable hooks are designed into a real-valued FFT processor: variable FFT length (N = 128 to 1024 points), variable bit precision (8 or 16 bit), and variable voltage supply with variable clock frequency (VDD = 180 mV to 0.9 V, f = 164 Hz to 6 MHz). A variable-bit-precision, variable-FFT-length scalable FFT ASIC built from an off-the-shelf standard-cell logic library and memory only scales down to 1 V operation. Further energy savings are achieved through ultra-low-voltage operation. As performance requirements are relaxed, the operating voltage supply is scaled down, possibly even below the threshold voltage into the subthreshold region. When lower frequencies cause leakage energy dissipation to exceed the active energy dissipation, there is an optimal operating point that minimizes energy consumption. Logic and memory design techniques allowing ultra-low-voltage operation are employed to study the optimal frequency/voltage operating point for the FFT. A full-custom implementation with circuit techniques optimized for deep voltage scaling into the subthreshold regime is fabricated in a standard 0.18 µm CMOS logic process and functions down to 180 mV. At the optimal operating point, where the voltage supply is 350 mV, the FFT processor dissipates 155 nJ/FFT. The custom FFT is 8x more energy-efficient than the ASIC implementation and 350x more energy-efficient than a low-power microprocessor implementation. by Alice Wang. Ph.D.
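    A minimal numerical sketch of the energy trade-off described above, under an arbitrary toy model (the threshold voltage, subthreshold-slope term, and leakage weight below are illustrative constants, not values from the thesis): the CV^2 switching energy falls as the supply is scaled down, while the leakage energy grows because subthreshold delay stretches the operation time, so a sweep over VDD exposes a minimum-energy operating point.

        import numpy as np

        # Toy constants chosen only so the curve shows a clear minimum; not from the thesis.
        VT, N_VTH = 0.4, 0.039      # threshold voltage (V) and subthreshold slope n*kT/q (V)
        K_LEAK = 2e-3               # relative weight of leakage vs. switching energy

        def energy_per_fft(vdd):
            """Normalized energy/FFT: CV^2 switching term plus a leakage term that
            grows as subthreshold delay stretches the operation time."""
            e_active = vdd ** 2
            t_rel = np.exp(np.maximum(VT - vdd, 0.0) / N_VTH)   # relative operation time
            return e_active + K_LEAK * vdd * t_rel

        vdd = np.linspace(0.15, 0.9, 300)
        energy = energy_per_fft(vdd)
        print(f"energy-optimal supply in this toy model: {vdd[np.argmin(energy)]:.2f} V")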

    A Networked Dataflow Simulation Environment for Signal Processing and Data Mining Applications

    In networked signal processing systems, dataflow graphs can be used to describe the processing on individual network nodes. However, to analyze the correctness and performance of these systems, designers must understand the interactions across these individual "node-level" dataflow graphs, as they communicate across the network, in addition to the characteristics of the individual graphs. In this thesis, we present a novel simulation environment, called the NS-2 -- TDIF SIMulation environment (NT-SIM). NT-SIM provides integrated co-simulation of networked systems and combines the network analysis capabilities provided by the Network Simulator (ns) with the scheduling capabilities of a dataflow-based framework, thereby providing novel features for more comprehensive simulation of networked signal processing systems. Through this integration of advanced tools for network and dataflow graph simulation, our NT-SIM environment allows comprehensive simulation and analysis of networked systems. We present two case studies that concretely demonstrate the utility of NT-SIM in the contexts of heterogeneous signal processing and data mining system design.
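    A conceptual sketch of the co-simulation idea, not the NT-SIM or TDIF API (the Actor and NetworkLink classes and their methods are hypothetical): node-level dataflow actors fire locally, and tokens on edges that cross the network are routed through a latency-adding link object that stands in for the network-simulator side of the channel.

        from collections import deque

        class Actor:
            """One node-level dataflow actor with a single input FIFO."""
            def __init__(self, name, func):
                self.name, self.func = name, func
                self.inbox = deque()

            def fireable(self):
                return bool(self.inbox)

            def fire(self):
                return self.func(self.inbox.popleft())

        class NetworkLink:
            """Stand-in for the network-simulator channel: delivers tokens after a delay."""
            def __init__(self, dst_actor, latency):
                self.dst, self.latency, self.in_flight = dst_actor, latency, []

            def send(self, token, now):
                self.in_flight.append((now + self.latency, token))

            def deliver(self, now):
                for due, tok in [p for p in self.in_flight if p[0] <= now]:
                    self.dst.inbox.append(tok)
                    self.in_flight.remove((due, tok))

        # Two "network nodes": a sensing actor on node A feeds a filtering actor on node B.
        sense = Actor("sense", lambda x: 2 * x)
        filt = Actor("filter", lambda x: x + 1)
        link = NetworkLink(filt, latency=3)

        sense.inbox.extend(range(4))
        for tick in range(10):                     # simple shared co-simulation clock
            if sense.fireable():
                link.send(sense.fire(), tick)      # cross-network edges go via the link
            link.deliver(tick)
            while filt.fireable():
                print(f"t={tick}: node B produced {filt.fire()}")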

    NASA Tech Briefs, January 2013

    Topics include: Single-Photon-Sensitive HgCdTe Avalanche Photodiode Detector; Surface-Enhanced Raman Scattering Using Silica Whispering-Gallery Mode Resonators; 3D Hail Size Distribution Interpolation/Extrapolation Algorithm; Color-Changing Sensors for Detecting the Presence of Hypergolic Fuels; Artificial Intelligence Software for Assessing Postural Stability; Transformers: Shape-Changing Space Systems Built with Robotic Textiles; Fibrillar Adhesive for Climbing Robots; Using Pre-Melted Phase Change Material to Keep Payloads in Space Warm for Hours without Power; Development of a Centrifugal Technique for the Microbial Bioburden Analysis of Freon (CFC-11); Microwave Sinterator Freeform Additive Construction System (MS-FACS); DSP/FPGA Design for a High-Speed Programmable S-Band Space Transceiver; On-Chip Power-Combining for High-Power Schottky Diode-Based Frequency Multipliers; FPGA Vision Data Architecture; Memory Circuit Fault Simulator; Ultra-Compact Transputer-Based Controller for High-Level, Multi-Axis Coordination; Regolith Advanced Surface Systems Operations Robot Excavator; Magnetically Actuated Seal; Hybrid Electrostatic/Flextensional Mirror for Lightweight, Large-Aperture, and Cryogenic Space Telescopes; System for Contributing and Discovering Derived Mission and Science Data; Remote Viewer for Maritime Robotics Software; Stackfile Database; Reachability Maps for In Situ Operations; JPL Space Telecommunications Radio System Operating Environment; RFI-SIM: RFI Simulation Package; ION Configuration Editor; Dtest Testing Software; IMPaCT - Integration of Missions, Programs, and Core Technologies; Integrated Systems Health Management (ISHM) Toolkit; Wind-Driven Wireless Networked System of Mobile Sensors for Mars Exploration; In Situ Solid Particle Generator; Analysis of the Effects of Streamwise Lift Distribution on Sonic Boom Signature; Rad-Tolerant, Thermally Stable, High-Speed Fiber-Optic Network for Harsh Environments; Towed Subsurface Optical Communications Buoy; High-Collection-Efficiency Fluorescence Detection Cell; Ultra-Compact, Superconducting Spectrometer-on-a-Chip at Submillimeter Wavelengths; UV Resonant Raman Spectrometer with Multi-Line Laser Excitation; Medicine Delivery Device with Integrated Sterilization and Detection; Ionospheric Simulation System for Satellite Observations and Global Assimilative Model Experiments - ISOGAME; Airborne Tomographic Swath Ice Sounding Processing System; flexplan: Mission Planning System for the Lunar Reconnaissance Orbiter; Estimating Torque Imparted on Spacecraft Using Telemetry; PowderSim: Lagrangian Discrete and Mesh-Free Continuum Simulation Code for Cohesive Soils; Multiple-Frame Detection of Subpixel Targets in Thermal Image Sequences; Metric Learning to Enhance Hyperspectral Image Segmentation; Basic Operational Robotics Instructional System; Sheet Membrane Spacesuit Water Membrane Evaporator; Advanced Materials and Manufacturing for Low-Cost, High-Performance Liquid Rocket Combustion Chambers; Motor Qualification for Long-Duration Mars Missions

    SOFTWARE UPDATE MANAGEMENT IN WIRELESS SENSOR NETWORKS

    Wireless sensor networks (WSNs) have recently emerged as a promising platform for many non-traditional applications, such as wildfire monitoring and battlefield surveillance. Due to bug fixes, feature enhancements, and demand changes, the code running on deployed wireless sensors often needs to be updated, which is done through energy-consuming wireless communication. Since the energy supply of battery-powered sensors is limited, the network lifetime is reduced if more energy is consumed for software updates, especially at the early stage of a WSN’s life, when bug fixes and feature enhancements are frequent, or in WSNs that support multiple applications and frequently demand that a subset of sensors fetch and run different applications. In this dissertation, I propose an energy-efficient software update management framework for WSNs. The diff-based software update process can be divided into three phases: new binary generation, diff-patch generation, and patch distribution. I identify the energy-saving opportunities in each phase and develop a set of novel schemes to achieve overall energy efficiency. In the phase of generating the new binary after source code changes, I design an update-conscious compilation approach to improve the code similarity between the new and old binaries. In the phase of generating the update patch, I adopt simple primitives from the literature and develop a set of advanced primitives. I then study energy-efficient patch distribution in WSNs and develop a multicast-based code distribution protocol to effectively disseminate the patch to individual sensors. In summary, this dissertation successfully addresses an important problem in WSNs. Update-conscious compilation is the first work that compiles code with the goal of improving code similarity, and it proves to be effective. The other components in the proposed framework also advance the state of the art. The proposed software update management framework benefits all WSN users, as software update is indispensable in WSNs. The techniques developed in this framework can also be adapted to other platforms, such as smartphone networks.
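    As a simplified illustration of the diff-based update flow, not the dissertation's actual primitive set (make_patch, apply_patch, and the COPY/ADD commands are hypothetical names), the sketch below encodes the new binary as copy-from-old-image and add-literal commands, so only a small patch has to be transmitted to each sensor.

        def make_patch(old: bytes, new: bytes, block: int = 16):
            """Encode the new image as COPY (reuse bytes already on the node) and ADD (literal) commands."""
            patch = []
            for i in range(0, len(new), block):
                chunk = new[i:i + block]
                j = old.find(chunk)
                if j >= 0:
                    patch.append(("COPY", j, len(chunk)))   # bytes the node already stores
                else:
                    patch.append(("ADD", chunk))            # literal bytes that must be radioed
            return patch

        def apply_patch(old: bytes, patch) -> bytes:
            out = bytearray()
            for cmd in patch:
                if cmd[0] == "COPY":
                    _, off, length = cmd
                    out += old[off:off + length]
                else:
                    out += cmd[1]
            return bytes(out)

        old_image = b"ABCDEFGH" * 8
        new_image = old_image[:32] + b"PATCHED!" + old_image[40:]
        patch = make_patch(old_image, new_image)
        assert apply_patch(old_image, patch) == new_image
        literal = sum(len(c[1]) for c in patch if c[0] == "ADD")
        print(f"{literal} literal bytes sent instead of {len(new_image)}")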

    Exploring Path Computation Techniques in Software-Defined Networking: A Review and Performance Evaluation of Centralized, Distributed, and Hybrid Approaches

    Software-Defined Networking (SDN) is a networking paradigm that allows network administrators to dynamically manage network traffic flows and optimize network performance. One of the key benefits of SDN is the ability to compute and direct traffic along efficient paths through the network. In recent years, researchers have proposed various SDN-based path computation techniques to improve network performance and reduce congestion. This review paper provides a comprehensive overview of SDN-based path computation techniques, including both centralized and distributed approaches. We discuss the advantages and limitations of each approach and provide a critical analysis of the existing literature. In particular, we focus on recent advances in SDN-based path computation techniques, including Dynamic Shortest Path (DSP), Distributed Flow-Aware Path Computation (DFAPC), and Hybrid Path Computation (HPC). We evaluate three SDN-based path computation algorithms: centralized, distributed, and hybrid, focusing on optimal path determination for network nodes. Test scenarios with random graph simulations are used to compare their performance. The centralized algorithm employs global network knowledge, the distributed algorithm relies on local information, and the hybrid approach combines both. Experimental results demonstrate the hybrid algorithm's superiority in minimizing path costs, striking a balance between optimization and efficiency. The centralized algorithm ranks second, while the distributed algorithm incurs higher costs due to limited local knowledge. This research offers insights into efficient path computation and informs future SDN advancements. We also discuss the challenges associated with implementing SDN-based path computation techniques, including scalability, security, and interoperability. Furthermore, we highlight the potential applications of SDN-based path computation techniques in various domains, including data center networks, wireless networks, and the Internet of Things (IoT). Finally, we conclude that SDN-based path computation techniques have the potential to significantly improve network performance and reduce congestion. However, further research is needed to evaluate the effectiveness of these techniques under different network conditions and traffic patterns. With the rapid growth of SDN technology, we expect to see continued development and refinement of SDN-based path computation techniques in the future.
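    A toy comparison in the spirit of the random-graph experiments summarized above, not the reviewed DSP, DFAPC, or HPC algorithms themselves (the graph generator and the greedy next-hop rule are assumptions): a controller-side Dijkstra with global knowledge versus a forwarding rule that sees only each node's local edge weights. The local rule can take a costlier route or even dead-end, which mirrors the reported gap between the distributed and centralized results.

        import heapq, random

        def random_graph(n, extra=0.3, seed=1):
            """Ring for guaranteed connectivity plus random chords with random weights."""
            random.seed(seed)
            g = {v: {} for v in range(n)}
            for u in range(n):
                v = (u + 1) % n
                g[u][v] = g[v][u] = random.randint(1, 10)
            for u in range(n):
                for v in range(u + 2, n):
                    if random.random() < extra:
                        g[u][v] = g[v][u] = random.randint(1, 10)
            return g

        def centralized_cost(g, src, dst):
            """Dijkstra with full (controller-side) knowledge of the topology."""
            dist, pq = {src: 0}, [(0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if u == dst:
                    return d
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in g[u].items():
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(pq, (d + w, v))
            return float("inf")

        def local_greedy_cost(g, src, dst, max_hops=100):
            """Each node forwards over its cheapest unvisited local edge (no global view)."""
            cost, u, visited = 0, src, {src}
            while u != dst and max_hops > 0:
                options = [(w, v) for v, w in g[u].items() if v not in visited]
                if not options:
                    return float("inf")        # dead end: the price of local-only knowledge
                w, u = min(options)
                cost += w
                visited.add(u)
                max_hops -= 1
            return cost if u == dst else float("inf")

        g = random_graph(12)
        print("centralized path cost:", centralized_cost(g, 0, 6))
        print("local-greedy path cost:", local_greedy_cost(g, 0, 6))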

    Challenges and opportunities of introducing Internet of Things and Artificial Intelligence applications into Supply Chain Management

    The study examines the challenges and opportunities of introducing Artificial Intelligence (AI) and the Internet of Things (IoT) into Supply Chain Management (SCM), with a focus on logistics management. The central research question is: "What are the key challenges and opportunities of introducing AI and IoT applications into Supply Chain Management?" The goal of this research is to collect the most appropriate literature to help create a conceptual framework for integrating IoT and AI applications into contemporary supply chain management, with an emphasis on logistics management. Additionally, the role of the 5G network is closely studied in order to indicate its capabilities and the processing capacity it can provide to AI and IoT operations. In addition, semi-structured online interviews with top managers from several companies were conducted in order to identify the degree of readiness of the companies for AI and IoT applications in SCM. From the retrieved results, the major challenges of integrating the IoT into SCM are security and privacy issues, the sensitivity of the data, and the high initial costs of implementation. Moreover, the research results show that IoT applications can positively affect SCM activities, in particular through high visibility across the supply chain, effective traceability, and automated data collection. Furthermore, the predictive analysis of AI programs can help SCM eliminate potential errors and failures in its processes.

    Cryptographic key distribution in wireless sensor networks: a hardware perspective

    In this work, the suitability of different methods of symmetric key distribution for application in wireless sensor networks is discussed. Each method is considered in terms of its security implications for the network. It is concluded that an asymmetric scheme is the optimum choice for key distribution. In particular, Identity-Based Cryptography (IBC) is proposed as the most suitable of the various asymmetric approaches. A protocol for key distribution using an identity-based Non-Interactive Key Distribution Scheme (NIKDS) and an Identity-Based Signature (IBS) scheme is presented. The protocol is analysed on the ARM920T processor, and measurements were taken of the run time and energy of its component parts. It was found that the Tate pairing component of the NIKDS consumes significant amounts of energy, and so it should be ported to hardware. An accelerator was implemented in 65 nm Complementary Metal Oxide Semiconductor (CMOS) technology, and area, timing, and energy figures have been obtained for the design. Initial results indicate that a hardware implementation of IBC would meet the strict energy constraint of a wireless sensor network node.
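    To make the non-interactive key agreement concrete, here is a toy numeric stand-in for the identity-based NIKDS idea, in the style of the Sakai-Ohgishi-Kasahara construction: the modular exponentiation below only mimics the bilinearity of a real Tate pairing, and the modulus, generator, and master secret are arbitrary illustrative values, so nothing here is cryptographically meaningful.

        import hashlib

        P = 2**127 - 1          # toy prime modulus (illustrative only)
        Q = P - 1               # exponent range for the toy "pairing"
        G = 5                   # toy group generator

        def h(identity: str) -> int:
            """Hash an identity string to an exponent (stand-in for hash-to-point)."""
            return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % Q

        def toy_pairing(x: int, y: int) -> int:
            """Bilinear in the exponents: e(x, y) = G^(x*y), so e(s*a, b) == e(s*b, a)."""
            return pow(G, (x * y) % Q, P)

        master_secret = 0x1234567                     # held only by the key generation centre
        priv_A = (master_secret * h("node-A")) % Q    # extracted private key for node A
        priv_B = (master_secret * h("node-B")) % Q

        # Each node derives the pairwise key locally, with no messages exchanged.
        key_at_A = toy_pairing(priv_A, h("node-B"))
        key_at_B = toy_pairing(priv_B, h("node-A"))
        assert key_at_A == key_at_B
        print("shared key derived non-interactively on both nodes")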

    Development and Evaluation of a Multistatic Ultrawideband Random Noise Radar

    This research studies the AFIT noise network (NoNET) radar node design and the feasibility of processing the bistatic channel information of a cluster of widely distributed noise radar nodes. A system characterization is used to predict theoretical localization performance metrics. Design and integration of a distributed and central signal and data processing architecture enables Matlab®-driven signal data acquisition, digital processing, and multi-sensor image fusion. Experimental evaluation of the monostatic localization performance reveals a range measurement error standard deviation of 4.8 cm and a range resolution of 87.2 (±5.9) cm. The 16-channel multistatic solution results in a 2-dimensional localization error of 7.7 (±3.1) cm, and a comparative analysis is performed against the netted monostatic solution. Results show that active sensing with a low probability of intercept (LPI) multistatic radar, like the NoNET, is capable of producing sub-meter accuracy and near-meter-resolution imagery.
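    A minimal sketch of the ranging principle behind a random-noise radar, not the NoNET processing chain (the sample rate, echo delay, and noise levels are made-up values): cross-correlate the stored transmit noise record with the received echo and convert the lag of the correlation peak into a round-trip range.

        import numpy as np

        FS = 200e6                      # assumed sample rate, Hz
        C = 3e8                         # propagation speed, m/s
        rng = np.random.default_rng(0)

        tx = rng.standard_normal(4096)              # transmitted random-noise waveform
        true_delay = 37                             # echo delay in samples
        rx = np.zeros_like(tx)
        rx[true_delay:] = 0.5 * tx[:-true_delay]    # attenuated, delayed echo
        rx += 0.3 * rng.standard_normal(tx.size)    # receiver noise

        xcorr = np.correlate(rx, tx, mode="full")   # matched-filter style correlation
        lag = int(np.argmax(xcorr)) - (tx.size - 1) # lag of the correlation peak
        print(f"estimated range: {lag / FS * C / 2:.1f} m "
              f"(true: {true_delay / FS * C / 2:.1f} m)")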

    Explainable AI over the Internet of Things (IoT): Overview, State-of-the-Art and Future Directions

    Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey of the use of XAI for IoT. To bridge this gap, in this paper we address XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest the implementation choice of XAI models over IoT systems in these applications with appropriate examples and summarize the key inferences for future works. Moreover, we present cutting-edge developments in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored for the demands of future IoT use cases. Comment: 29 pages, 7 figures, 2 tables. IEEE Open Journal of the Communications Society (2022).

    Runtime adaptive iomt node on multi-core processor platform

    The Internet of Medical Things (IoMT) paradigm is becoming mainstream in multiple clinical trials and healthcare procedures. Thanks to innovative technologies, latest-generation communication networks, and state-of-the-art portable devices, IoMT opens up new scenarios for data collection and continuous patient monitoring. Two very important aspects should be considered to make the most of this paradigm. First, moving the processing task from the cloud to the edge leads to several advantages, such as responsiveness, portability, scalability, and reliability of the sensor node. Second, in order to increase the accuracy of the system, state-of-the-art cognitive algorithms based on artificial intelligence and deep learning must be integrated. Sensor nodes often need to be battery powered and must remain active for a long time without an external power source. Therefore, one of the challenges to be addressed during the design and development of IoMT devices concerns energy optimization. Our work proposes an implementation of cognitive data analysis based on deep learning techniques on a resource-constrained computing platform. To handle power efficiency, we introduce a component called the Adaptive runtime Manager (ADAM). This component takes care of dynamically reconfiguring the hardware and software of the device during execution, in order to better adapt it to the workload and the required operating mode. To test a high computational load on a multi-core system, cognitive analysis of electrocardiogram (ECG) traces was adopted on the Orlando prototype board by STMicroelectronics, considering single-channel and six-channel simultaneous cases. Experimental results show that by managing the sensor node configuration at runtime, energy savings of at least 15% can be achieved.
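    A conceptual sketch of the runtime-adaptation idea, not the ADAM implementation (the operating-point table, per-inference cycle cost, and parallel-scaling model are assumptions): pick the lowest-power core/clock configuration that still meets the inference deadline for the current ECG workload, and fall back to the fastest configuration if none does.

        from dataclasses import dataclass

        @dataclass
        class OperatingPoint:
            cores: int
            mhz: int
            power_mw: float          # assumed platform power at this configuration

        # Hypothetical configuration table for a multi-core accelerator board.
        POINTS = [
            OperatingPoint(1, 200, 35.0),
            OperatingPoint(2, 200, 55.0),
            OperatingPoint(4, 400, 120.0),
            OperatingPoint(8, 400, 210.0),
        ]

        def cycles_needed(channels: int) -> float:
            """Assumed per-inference cost: ECG network cost grows with channel count."""
            return 2e6 * channels

        def choose_point(channels: int, deadline_ms: float) -> OperatingPoint:
            feasible = []
            for p in POINTS:
                # Optimistic scaling model: work parallelizes across active cores.
                runtime_ms = cycles_needed(channels) / (p.cores * p.mhz * 1e6) * 1e3
                if runtime_ms <= deadline_ms:
                    feasible.append((p.power_mw, p))
            if not feasible:
                return POINTS[-1]           # fall back to the fastest configuration
            return min(feasible)[1]         # lowest-power point that meets the deadline

        for ch in (1, 6):
            p = choose_point(channels=ch, deadline_ms=20.0)
            print(f"{ch}-channel ECG -> {p.cores} cores @ {p.mhz} MHz ({p.power_mw} mW)")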