Enabling Deep Neural Network Inferences on Resource-constraint Devices
Department of Computer Science and Engineering
While deep neural networks (DNNs) are widely used on various devices, including resource-constrained devices such as IoT, AR/VR, and mobile devices, running DNNs on such devices remains challenging. There are three approaches to DNN inference on resource-constrained devices: 1) lightweight DNNs for on-device computing, 2) offloading DNN inferences to a cloud server, and 3) split computing, which uses computation and network resources efficiently.
Designing a lightweight DNN without compromising accuracy is challenging because of the trade-off between latency and accuracy: more computation is required to achieve higher accuracy. One way to overcome this challenge is pre-processing that extracts and transfers only the information needed to achieve high DNN accuracy. We design a pre-processing pipeline consisting of three steps. The first step is finding the best input source. The second is input processing, which extracts the information important for DNN inference from everything gained from the input source. The last is choosing or designing a lightweight DNN suited to the processed input. As an instance of applying this pre-processing, in Sec 2 we present DeepVehicleSense, a new transportation mode recognition system for smartphones that aims at three performance objectives at once: high accuracy, low latency, and low power consumption. It exploits sound characteristics captured by the built-in microphone while the user is on candidate transportation modes. To achieve high accuracy and low latency, DeepVehicleSense uses non-linear filters that best extract transportation sound samples. For the recognition of five different transportation modes, we design a deep learning-based sound classifier with a novel multi-branch deep neural network architecture. Our staged inference technique significantly reduces runtime and energy consumption while maintaining high accuracy for the majority of samples.
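The staged (early-exit) inference idea above can be sketched as follows. This is a minimal illustration, not DeepVehicleSense's actual implementation: the stub branch classifiers, the 0.9 confidence threshold, and the five-class output are all assumptions made for the example.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def staged_inference(branches, x, threshold=0.9):
    """Evaluate classifier branches in order of depth; return at the first
    branch whose top-class confidence clears the threshold (early exit).
    The final branch always answers, confident or not."""
    for depth, branch in enumerate(branches, start=1):
        probs = softmax(branch(x))
        if probs.max() >= threshold or depth == len(branches):
            return int(probs.argmax()), depth, float(probs.max())

# Stub branches standing in for sub-networks of increasing depth.
shallow = lambda x: np.zeros(5)                # unsure: uniform output
deep = lambda x: np.array([9.0, 0, 0, 0, 0])   # confident in class 0
label, depth, p = staged_inference([shallow, deep], x=None)
```

Easy samples exit at a shallow branch, so most inputs never pay for the full network; only ambiguous ones reach the deeper, costlier branches.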
Offloading DNN inferences to a server is another solution for resource-constrained devices, but it raises a latency concern due to data transmission. To reduce transmission latency, recent studies have made offloading more efficient by compressing the data to be offloaded. However, conventional compression techniques are designed for human perception: they compress data so that it can be restored to look like the original to the human eye. As a result, the compressed data contains redundancy beyond the information necessary for DNN inference.
In other words, the fundamental question of extracting and offloading the minimal amount of necessary information without degrading inference accuracy has remained unanswered. To answer it, in Sec 3 we call such ideal offloading semantic offloading and propose N-epitomizer, a new offloading framework that enables semantic offloading, thus achieving more reliable and timely inferences even over highly fluctuating or low-bandwidth wireless networks. To realize N-epitomizer, we design an autoencoder-based scalable encoder trained to extract the most informative data and to scale its output size to meet the latency and accuracy requirements of inferences over a network.
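The size-scaling aspect of such an encoder can be illustrated with a simple selection rule. This is only a sketch of bandwidth-aware output sizing under stated assumptions (a hypothetical `choose_latent_dim` helper and illustrative latent sizes); N-epitomizer's actual encoder is a trained autoencoder, not this heuristic.

```python
def choose_latent_dim(bandwidth_bps, latency_budget_s,
                      bits_per_feature=32, dims=(16, 64, 256, 1024)):
    """Pick the largest encoder output size whose transmission time fits
    the latency budget; the smallest size acts as a floor when none fits."""
    best = min(dims)
    for d in sorted(dims):
        if d * bits_per_feature / bandwidth_bps <= latency_budget_s:
            best = d
    return best

# On a 1 Mbit/s link with a 10 ms budget, 256 features fit (8.2 ms),
# while 1024 would need ~32.8 ms.
dim = choose_latent_dim(1_000_000, 0.010)
```

Bigger latents preserve more task-relevant information (higher accuracy); smaller latents transmit faster, which is exactly the accuracy-latency knob the framework exposes.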
Even though our lightweight DNN and our offloading framework with the essential-information extractor achieve low latency while preserving DNN performance, they alone cannot realize latency-guaranteed DNN inference. To do so, the computational complexity of the lightweight DNN and the compression level of the offloading encoder should be selected adaptively according to the current computation resources and network conditions, exploiting the DNN's trade-off between computational complexity and performance and the encoder's trade-off between compression and performance. To this end, we propose LG-DI, a new framework for latency-guaranteed DNN inference that, given a latency budget, predicts DNN performance degradation in advance and chooses the better of the two methods: the lightweight DNN or offloading with compression. As a result, our framework can guarantee latency regardless of changes in computation and network resources while maintaining DNN performance as much as possible.
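The selection step of such a framework can be sketched as a feasibility check over candidate inference paths. The option names, latencies, and degradation figures below are hypothetical, and LG-DI's actual degradation predictor is described as a predictive model, not a static table.

```python
def select_inference_path(latency_budget_s, options):
    """options: list of (name, expected_latency_s, predicted_accuracy_drop).
    Among options meeting the budget, choose the one with the smallest
    predicted accuracy degradation; if none fits, fall back to the fastest."""
    feasible = [o for o in options if o[1] <= latency_budget_s]
    if feasible:
        return min(feasible, key=lambda o: o[2])[0]
    return min(options, key=lambda o: o[1])[0]

# Hypothetical candidates under current compute/network conditions.
options = [
    ("local-lite",         0.030, 0.05),  # on-device lightweight DNN
    ("offload-full",       0.080, 0.00),  # uncompressed offloading
    ("offload-compressed", 0.045, 0.02),  # offloading with compression
]
choice = select_inference_path(0.050, options)
```

As the network slows down, the offloading latencies grow and the selection naturally shifts toward the on-device path, which is the adaptive behavior the framework aims for.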
Evaluation of mixed microalgae species biorefinery of Desmodesmus sp. and Scenedesmus sp. for bioproducts synthesis
Microalgae are known to produce numerous bioactive compounds, for instance proteins, fatty acids, polysaccharides, enzymes, sterols, and antioxidants. Due to their valuable biochemical composition, microalgae are regarded as a very intriguing source for novel food products and can be used to improve the nutritional content of traditional foods. Additionally, microalgae are used as animal feed and as additives in the cosmetics, pharmaceutical, and nutraceutical industries. Compared to terrestrial plants and other microorganisms, microalgae possess several advantages: (1) rapid growth rate; (2) the ability to grow on non-arable land and under harsh cultivation conditions; (3) low nutritional requirements; (4) high productivity; and (5) reduced carbon dioxide emissions. Despite the large number of microalgae species found in nature, only a few, such as Chlorella sp., Spirulina sp., Haematococcus pluvialis, Nannochloropsis sp., and Chlamydomonas reinhardtii, have been identified and commercialized, which is one of the major obstacles preventing the full utilisation of microalgae-based technology.
This thesis provides information on the overall composition of the mixed microalgae species Desmodesmus sp. and Scenedesmus sp., covering protein, carbohydrate, lipid, antioxidants, and pigment. It first introduces the application of triphasic partitioning (TPP) to the extraction and partitioning of biomolecules from the microalgae; the technology has lately evolved from liquid biphasic flotation (LBF) to TPP. In TPP, t-butanol and ammonium sulphate are used to precipitate the desired biomolecules from the aqueous solution, with the formation of three layers. TPP is a simple, time- and cost-efficient, and scalable process that does not require toxic organic solvents. Lipase is abundantly produced by bacteria, fungi, yeast, mammals, and plants, and is widely used in the oleochemical, detergent, dairy, leather, cosmetics, paper, and nutraceutical industries. Therefore, this thesis also discusses the possibility of identifying and extracting lipase from the microalgae using LBF. Several parameters (volume and concentration of solvents, weight of biomass, flotation kinetics, solvent types, etc.) have been investigated to optimise the lipase extraction by LBF.
Chlorophyll is the main pigment in microalgae, and its content strongly reflects the physiological health of the culture as well as its suitability for by-product production. This work therefore proposes a digital imaging approach to determine the chlorophyll concentration in microalgae rapidly. Lastly, microalgae oil can be used as a feedstock for biodiesel as well as for nutraceutical, pharmaceutical, and health-care products. The challenge in lipid extraction is the co-extraction of chlorophyll into the oil, which can have serious consequences for downstream processing. Therefore, the removal of chlorophyll from the microalgae using activated clay or sodium chlorite in a pre-treatment procedure is examined. The research achievements of these works and future opportunities are highlighted in the last chapter of the thesis.
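As a rough illustration of a digital imaging approach (not the thesis's actual method), a greenness index can be computed from an RGB photograph of the culture and later mapped to concentration through a calibration curve fitted against a reference method such as spectrophotometry. The index definition below is an assumption made for the sketch.

```python
import numpy as np

def chlorophyll_index(rgb):
    """Greenness index of an RGB image of a microalgae culture:
    mean excess of the green channel over the red/blue average,
    clipped to [0, 1]. Must be calibrated (index -> mg/L) against
    a reference measurement before quantitative use."""
    rgb = rgb.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return float(np.clip(g - (r + b) / 2.0, 0.0, 1.0).mean())
```

A neutral grey frame scores 0, a saturated green frame scores 1; real cultures fall in between, and the fitted curve turns that score into an estimated concentration.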
IoT Transmission Technologies for Distributed Measurement Systems in Critical Environments
Distributed measurement systems are found in the most diverse application scenarios, and Internet of Things (IoT) transmission equipment is usually the enabling technology for measurement systems that need wireless connectivity to ensure pervasiveness. Because wireless measurement systems have in recent years been deployed even in critical environments, assessing the performance of transmission technologies in such contexts is fundamental. Indeed, these are the most challenging environments for wireless data transmission due to their intrinsic attenuation.
Several scenarios in which measurement systems can be deployed are analysed. Firstly, marine contexts are treated by considering above-the-sea wireless links. Such a setting arises in any application requiring remote monitoring of facilities and assets installed offshore, for instance offshore sea-farming plants or remote video monitoring systems on seamark buoys. Secondly, wireless communications from the underground to the aboveground are covered. This scenario is typical of precision agriculture, where accurate measurements of underground physical parameters must be sent remotely to optimise crops while reducing the waste of fundamental resources (e.g., irrigation water). Thirdly, wireless communications from the underwater to the abovewater are addressed. Such a situation is inevitable for infrastructures monitoring the conservation status of underwater species such as algae, seaweeds, and reefs. Then, wireless links traversing metal surfaces and structures are tackled, a context commonly encountered in asset tracking and monitoring (e.g., containers) and in smart metering (e.g., utility meters). Lastly, sundry harsh environments typical of industrial monitoring (e.g., vibrating machinery, rooms with harsh temperature and humidity, corrosive atmospheres) are tested to validate pervasive measurement infrastructures even in contexts usually experienced in Industrial Internet of Things (IIoT) applications. The performance of wireless measurement systems in these scenarios is assessed through ad hoc measurement campaigns. Finally, IoT measurement infrastructures deployed in above-the-sea and underground-to-aboveground settings, respectively, are described to provide real applications in which such facilities can be effectively installed.
Nonetheless, the aforementioned application scenarios are only a few among many. Nowadays, distributed pervasive measurement systems must be conceived broadly, resulting in countless instances: predictive maintenance, smart healthcare, smart cities, industrial monitoring, smart agriculture, and so on.
This Thesis aims at showing how distributed measurement systems in critical environments can set up pervasive monitoring infrastructures enabled by IoT transmission technologies. First, the transmission technologies are presented; then the harsh environments are introduced, along with the theoretical analysis modelling path loss in such conditions. It must be underlined that this Thesis aims neither at finding better path loss models than the existing ones nor at improving them. Path loss models are exploited as they are, to derive loss estimates and understand the effectiveness of the deployed infrastructure. Transmission tests in those contexts are described, together with examples of these applications in the field, showing the measurement infrastructures and the critical environments serving as deployment sites. The scientific relevance of this Thesis is evident since the literature currently lacks a comparative study of this kind, showing both transmission performance in critical environments and the deployment of real IoT distributed wireless measurement systems in such contexts.
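As an example of using a path loss model as-is to derive loss estimates, the widely used log-distance model PL(d) = PL(d0) + 10·n·log10(d/d0) can be plugged into a link budget. The transmit power, receiver sensitivity, and path loss exponents below are illustrative assumptions (LoRa-like numbers), not measurements from this Thesis.

```python
import math

def path_loss_db(d_m, pl_d0_db=40.0, d0_m=1.0, n=2.0):
    """Log-distance model: PL(d) = PL(d0) + 10 * n * log10(d / d0)."""
    return pl_d0_db + 10.0 * n * math.log10(d_m / d0_m)

def link_margin_db(tx_power_dbm, sensitivity_dbm, d_m, n=2.0):
    """Link budget: a positive margin suggests the link closes at distance d."""
    return tx_power_dbm - path_loss_db(d_m, n=n) - sensitivity_dbm

# Same 1 km link, two environments: a free-space-like exponent (n = 2)
# for an above-the-sea path, and a harsher assumed exponent (n = 3.5)
# standing in for an underground-to-aboveground path.
margin_sea = link_margin_db(14.0, -137.0, 1000.0, n=2.0)
margin_soil = link_margin_db(14.0, -137.0, 1000.0, n=3.5)
```

The comparison shows why the critical environments matter: the larger exponent erodes most of the margin at the same distance, which is exactly what the measurement campaigns set out to quantify.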
Algorithms for light applications: from theoretical simulations to prototyping
Although the first LED dates to the middle of the 20th century, it has not been until the last decade that the market has been flooded with LED solutions of high efficiency and durability compared to previous technologies. In addition, luminaires that combine LEDs of different hues or colors have already appeared. These luminaires offer new possibilities for colorimetric or non-visual capabilities not seen to date.
Due to the enormous number of LEDs on the market, with very different spectral characteristics, the spectrometer has become a popular measuring device for determining LED properties. Obtaining colorimetric information about a luminaire is a necessary step towards commercialising it, so the spectrometer is a tool commonly used by many LED manufacturers.
This doctoral thesis advances the state of the art and the knowledge of LED technology at the level of combined spectral emission, and applies innovative spectral reconstruction techniques to a commercial multichannel colorimetric sensor. On the one hand, new spectral simulation algorithms that yield a very large number of results have been developed, capable of producing optimized values of colorimetric and non-visual parameters for multichannel light sources. The MareNostrum supercomputer has been used, and new relationships between colorimetric and non-visual parameters in commercial white LED datasets have been found through data analysis. On the other hand, the functional improvement of a multichannel colorimetric sensor has been explored by providing it with a neural network for spectral reconstruction. A large amount of generated data has allowed simulations and statistical studies of the error committed in the spectral reconstruction process using different techniques. This improvement increases the spectral resolution measured by the sensor, allowing better accuracy in the calculation of colorimetric parameters. Prototypes of the light sources and the colorimetric sensor have been developed to experimentally demonstrate the theoretical framework. All the prototypes have been characterized, and the errors with respect to the theoretical models have been evaluated. The results have been validated by applying different industry standards and by comparison with calibrated commercial devices.
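For context, spectral reconstruction from a few sensor channels can be posed as an inverse problem: recover a finely sampled spectrum from a handful of broadband channel readings. The ridge-regression baseline below is a deliberately simple linear stand-in (the thesis uses a neural network for this step), and the synthetic channel responses are an assumption for the sketch.

```python
import numpy as np

def fit_reconstructor(responses, spectra, lam=1e-3):
    """Fit W minimising ||responses @ W - spectra||^2 + lam * ||W||^2
    (ridge regression). responses: (samples, channels); spectra:
    (samples, wavelengths). Returns W of shape (channels, wavelengths)."""
    c = responses.shape[1]
    A = responses.T @ responses + lam * np.eye(c)
    return np.linalg.solve(A, responses.T @ spectra)

def reconstruct(W, reading):
    """Map one (or a batch of) channel readings to estimated spectra."""
    return reading @ W

# Synthetic training set: 8 sensor channels, 64 spectral bins.
rng = np.random.default_rng(0)
R = rng.normal(size=(200, 8))
S = R @ rng.normal(size=(8, 64))
W = fit_reconstructor(R, S, lam=1e-9)
```

A neural network replaces the single matrix W with a nonlinear mapping, which is what lets it outperform this linear baseline on real, nonlinear sensor responses.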
Enhancing Security in Internet of Healthcare Application using Secure Convolutional Neural Network
The ubiquity of Internet of Things (IoT) devices has transformed the healthcare industry by presenting previously unheard-of potential for remote patient monitoring and individualized care. In this regard, we propose a method that makes use of Secure Convolutional Neural Networks (SCNNs) to improve security in Internet-of-Healthcare (IoH) applications. IoT-enabled healthcare has advanced as a result of the integration of IoT technologies, gaining impressive data processing power and large data storage capacity. With the ongoing advancement of the Industrial Internet of Things (IIoT), this synergy has led to an intelligent healthcare system intended to remotely monitor a patient's medical well-being via a wearable device. This paper focuses on safeguarding user privacy and easing data analysis. Sensitive data is carefully separated from user-generated data before being gathered. Convolutional neural network (CNN) technology is used to analyse health-related data thoroughly in the cloud while scrupulously protecting the privacy of the users. The paper provides a secure access-control module that operates on user attributes within the IoT-Healthcare system to strengthen security. This module improves the system's overall security and privacy by ensuring that only authorised personnel may access and interact with sensitive health data. Thanks to this integrated architecture, the IoT-enabled healthcare system gains the capacity to offer seamless remote monitoring while ensuring the confidentiality and integrity of user information.
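An attribute-based access check of the kind such a module performs can be sketched as follows. The attribute names and policy values are hypothetical illustrations, not the paper's actual design.

```python
def can_access(user_attrs, policy):
    """Attribute-based access control (ABAC) check: grant access only if
    the user's attributes satisfy every requirement in the policy.
    A missing attribute fails the corresponding requirement."""
    return all(user_attrs.get(attr) in allowed
               for attr, allowed in policy.items())

# Hypothetical policy guarding a cardiology patient's sensor data.
policy = {"role": {"physician", "nurse"}, "department": {"cardiology"}}
```

The policy is evaluated per request, so revoking an attribute (e.g., a department transfer) immediately revokes access without touching per-user permission lists.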
Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study
This quantitative study examined the complex nature of modern cyber threats to propose the establishment of cyber as an interdisciplinary field of public policy, initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and to maintain ideological balance), there must be recognition that public policies are at a transition point where the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research is the fusion of humanistic principles with Internet of Things (IoT) technologies, altering our perception of the machine from an instrument of human engineering into a thinking peer and elevating cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used, with the frequency of malicious software attacks as the dependent variable and the diversity of intrusion techniques as the independent variable for RQ1. For RQ2, the frequency of detection events was the dependent variable and the diversity of intrusion techniques the independent variable. Self-Determination Theory is the theoretical framework, as the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine's ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor into, and entity influenced by public policy theories, philosophical constructs, and societal initiatives.
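Operationally, the correlational design for RQ1 amounts to computing a correlation coefficient between two archival series. The data below are synthetic, invented purely to illustrate the computation; they are not the study's archival data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two archival series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Illustrative (synthetic) series per observation period:
# RQ1 IV: diversity of intrusion techniques; DV: attack frequency.
diversity = [3, 5, 4, 8, 10, 7, 12]
attacks = [14, 22, 19, 35, 41, 30, 52]
r = pearson_r(diversity, attacks)
```

RQ2 follows the same pattern with the frequency of detection events substituted as the dependent variable.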
Tradition and Innovation in Construction Project Management
This book is a reprint of the Special Issue 'Tradition and Innovation in Construction Project Management' that was published in the journal Buildings.
Engineered polysaccharide, Alpha-1,3 Glucan, as a Functional Filler of Rubber Composites
Rubber products represent an essential and highly functional class of performance materials required in many daily applications. However, growing interest in improving the overall sustainability of rubber products has sharpened the focus on compatible, lightweight, and environmentally sustainable fillers. As part of the effort to design sustainable rubber composites, an enzymatic-polymerization-derived polysaccharide filler, alpha-1,3 glucan, with engineered fibrid, platelet, and spherical morphologies and high crystallinity, was employed as a novel sustainable filler system in natural rubber (NR) films. The alpha-1,3 glucan is supplied by International Flavors & Fragrances (IFF), formerly E.I. DuPont Industrial Biosciences. Initially, lightly crosslinked NR films reinforced with 0–10 phr microcrystalline glucan (MCG) were fabricated using dipping and casting processes, and the effect of MCG on the physicochemical properties, chemical stability, and thermo-mechanical properties of the composite films was investigated. In the subsequent project, colloidal alpha-1,3 glucan with spherical morphology was employed as a functional filler in NR coating films. Coating formulations containing NR latex and 10–100 phr colloidal alpha-1,3 glucan were prepared and applied to paper substrates at three different thicknesses, and the effect of the various formulations on the barrier properties against water vapor, oxygen, and oil, as well as on dry and wet mechanical properties, was investigated. To study the impact of alpha-1,3 glucan's morphology on the barrier properties of the paper coating, MCG was then employed as a functional additive in NR-based coating formulations: coatings containing 0–50 wt.% MCG were fabricated at a constant coating thickness with a constant solid content.
The influence of MCG on the wet and dry strength, rheology, adhesion strength, and barrier properties such as moisture, oxygen, and grease barrier of the formulated coatings was investigated. Also, further study on the effect of solid content and low crosslinking on the barrier properties was conducted.
The last stage of the study involved a solvent-free, batch-mixer-based reactive process for the reaction of epoxidized natural rubber (ENR) with glucan. To enhance the dispersion and bonding of the polar filler in the nonpolar natural rubber matrix, in situ melt grafting of ENR onto the polysaccharide was employed to achieve enhanced material properties. Temperature- and shear-mediated melt grafting in the presence of two catalysts (sodium hydroxide (NaOH) and dicumyl peroxide (DCP)) was investigated. Analytical characterization techniques, including Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and solvent swelling, were employed to confirm the formation of covalent bonds between alpha-1,3 glucan and ENR. Selected ENR-glucan masterbatch samples were then melt-mixed with NR formulations to produce NR composites. Overall, this study aimed to develop a sustainable rubber composite incorporating alpha-1,3 glucan as a functional filler, targeting dipped rubber, packaging, and footwear applications and indicating the potential of this work to alleviate the environmental pollution caused by traditional polymers.