
    COMPUTATION OFFLOADING DESIGN FOR DEEP NEURAL NETWORK INFERENCE ON IoT DEVICES

    In recent times, advances in Internet-of-Things (IoT) and Deep Neural Network (DNN) technologies have significantly increased the accuracy and speed of a variety of smart applications. However, one barrier to deploying DNNs on IoT is the computational limitation of IoT devices relative to the computationally expensive task of DNN inference. Computation offloading addresses this problem by offloading DNN computation tasks to cloud servers. In this thesis, we propose a collaborative computation offloading solution in which some of the work is done on the IoT device and the remainder is done by the cloud server. This collaborative approach has two components. First, the input image to the DNN is partitioned into multiple pieces, allowing the pieces to be processed in parallel and speeding up inference. Second, the DNN is split between two of its layers, so that layers before the split point are processed on the IoT device and layers after the split point are processed by the cloud server. We investigated several strategies for partitioning the image and splitting the DNN, and evaluated the results using several commonly used DNNs: LeNet-5, AlexNet, and VGG-16. The results show that collaborative computation offloading sped up the inference time of IoT devices by 35-40% compared with non-collaborative methods.
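    The two-stage scheme the abstract describes (partition the input image into tiles, then run the layers before the split point on the device and the rest in the cloud) can be sketched roughly as follows. The layer functions, tile counts, and split point here are illustrative assumptions, not the thesis's actual implementation.

    ```python
    def partition(image, rows, cols):
        """Split a 2D image (list of lists) into rows*cols tiles,
        so the tiles can be processed in parallel."""
        h, w = len(image), len(image[0])
        th, tw = h // rows, w // cols
        tiles = []
        for r in range(rows):
            for c in range(cols):
                tile = [row[c * tw:(c + 1) * tw]
                        for row in image[r * th:(r + 1) * th]]
                tiles.append(tile)
        return tiles

    def run_layers(x, layers):
        """Apply a sequence of layer functions to an input tile."""
        for layer in layers:
            x = layer(x)
        return x

    # Toy stand-ins for DNN layers: each maps a tile to a tile-shaped output.
    double = lambda t: [[2 * v for v in row] for row in t]
    inc    = lambda t: [[v + 1 for v in row] for row in t]
    layers = [double, inc, double]          # the "full network"

    split = 1   # layers[:split] run on the IoT device, the rest in the cloud
    image = [[1, 2, 3, 4], [5, 6, 7, 8]]

    outputs = []
    for tile in partition(image, rows=1, cols=2):
        on_device = run_layers(tile, layers[:split])        # edge computation
        in_cloud  = run_layers(on_device, layers[split:])   # offloaded remainder
        outputs.append(in_cloud)
    ```

    In a real deployment the per-tile device outputs would be serialized and sent over the network, and the choice of split point would trade device compute time against transfer size of the intermediate activations.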

    Chapter 34 - Biocompatibility of nanocellulose: Emerging biomedical applications

    Nanocellulose has already proved to be a highly relevant material for biomedical applications, owing to its outstanding mechanical properties and, more importantly, its biocompatibility. Nevertheless, despite previous intensive research, a notable number of emerging applications are still being developed. Interestingly, this drive is not based solely on the features of nanocellulose but also depends heavily on sustainability. The three core nanocelluloses are cellulose nanocrystals (CNCs), cellulose nanofibrils (CNFs), and bacterial nanocellulose (BNC). All of these types display highly interesting biomedical properties per se, after modification, and when used in composite formulations. Novel applications that use nanocellulose include well-known areas, namely wound dressings, implants, indwelling medical devices, scaffolds, and novel printed scaffolds. Their cytotoxicity and biocompatibility, assessed using recent methodologies, are thoroughly analyzed to reinforce their near-future applicability. Of the pristine core nanocelluloses, none displays cytotoxicity. However, CNF has the highest potential to fail long-term biocompatibility, since it tends to trigger inflammation. On the other hand, never-dried BNC displays remarkable biocompatibility. Despite this, all nanocelluloses clearly represent flag bearers of future superior biomaterials, elite materials in the urgently needed replacement of our petrochemical dependence.

    Ionosphere Monitoring with Remote Sensing

    This book focuses on the characterization of the physical properties of the Earth's ionosphere, contributing to unveiling the nature of several processes responsible for a plethora of space weather-related phenomena that take place across a wide range of spatial and temporal scales. This is made possible by the exploitation of a huge amount of high-quality data derived from both remote sensing and in situ facilities such as ionosondes, radars, satellites, and Global Navigation Satellite System (GNSS) receivers.

    2022 roadmap on neuromorphic computing and engineering

    Modern computation based on the von Neumann architecture is now a mature, cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks that exchange data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale, with 10^18 calculations per second. Even though these future computers will be incredibly powerful, if they are based on von Neumann-type architectures they will consume between 20 and 30 megawatts of power and will not have intrinsic, physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems, which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to store and process large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and to provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives in which leading researchers in the neuromorphic community provide their own views of the current state and future challenges of each research area. We hope that this roadmap will be a useful resource, providing a concise yet comprehensive introduction for readers outside this field and for those who are just entering it, as well as future perspectives for those who are well established in the neuromorphic computing community.
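    The quoted figures imply an energy budget per operation that can be checked with a back-of-envelope calculation. This assumes, purely for illustration, that the entire 20-30 MW budget goes to the 10^18 operations per second.

    ```python
    # Back-of-envelope energy per operation implied by the roadmap's figures.
    ops_per_second = 1e18          # exascale: 10^18 calculations per second
    power_watts = 25e6             # midpoint of the 20-30 MW range (assumption)

    # Energy per operation = power / throughput.
    joules_per_op = power_watts / ops_per_second   # about 2.5e-11 J, i.e. ~25 pJ
    ```

    By comparison, biological synaptic events are often estimated at a few femtojoules to picojoules, which is the gap neuromorphic hardware aims to close.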

    Proceedings of the 11th Toulon-Verona International Conference on Quality in Services

    The Toulon-Verona Conference was founded in 1998 by Prof. Claudio Baccarani of the University of Verona, Italy, and Prof. Michel Weill of the University of Toulon, France. It has been organized each year in a different place in Europe in cooperation with a host university (Toulon 1998, Verona 1999, Derby 2000, Mons 2001, Lisbon 2002, Oviedo 2003, Toulon 2004, Palermo 2005, Paisley 2006, Thessaloniki 2007, Florence 2008). Originally focused on higher education institutions, the research themes have over the years been extended to the health sector, local government, tourism, logistics, and banking services. Around a hundred delegates from about twenty different countries participate each year, and nearly one thousand research papers have been published over the last ten years, making the conference one of the major events in the field of quality in services.

    Implementing machine ethics: using machine learning to raise ethical machines

    As more decisions and tasks are delegated to the artificially intelligent machines of the 21st century, we must ensure that these machines are, on their own, able to engage in ethical decision-making and behaviour. This dissertation makes the case that bottom-up reinforcement learning methods are best suited for implementing machine ethics by raising ethical machines. This is the first of three main theses in the dissertation: that we must seriously consider how machines themselves, as moral agents that can impact human well-being and flourishing, might make ethically preferable decisions and take ethically preferable actions. The second thesis is that artificially intelligent machines are different in kind from all previous machines. The conjunction of autonomy and intelligence, along with other unique features such as the ability to learn and their general-purpose nature, is what sets artificially intelligent machines apart from all previous machines and tools. The third thesis concerns the limitations of artificially intelligent machines: as impressive as they are, their abilities are still derived from humans, and as such they lack the sort of normative commitments humans have. In short, we ought to care deeply about artificially intelligent machines, especially those used in times and places where considered human judgment is required, because otherwise we risk lapsing into a state of moral complacency.

    Heuristics in Entrepreneurial Opportunity Evaluation: A Comparative Case Study of the Middle East and Germany

    Heuristics are mental shortcuts applied, consciously, subconsciously, or both, to save time and effort at the expense of risking the accuracy of the outcome. One might therefore argue that heuristics are simply an accuracy-effort trade-off. Nonetheless, we ought to recognize the distinction between circumstances of risk, where all choices, outcomes, and probabilities are generally known, and circumstances of uncertainty, where at least some are not. Traditional models such as Subjective Expected Utility (SEU) work best for decisions under risk but not under uncertainty, which characterizes most situations people must tackle. Uncertainty requires simple heuristics that are sufficient rather than perfect. In this dissertation, the notion of heuristics was researched through a comprehensive historical review that unfolded the heuristics-linked ideas of significant scholars. An explicit distinction between deliberate and automatic heuristics was drawn, with chronological categories of pre- and post-introduction of SEU theory, providing a new perspective and opening a discussion for future research to consider. Additionally, qualitative and quantitative studies were conducted that produced an unsophisticated heuristic set used by entrepreneurs in the Middle East and Germany. Perhaps entrepreneurs, and people in general, do not always know or acknowledge their use of heuristics; still, they use them extensively and may exchange heuristics with others. That may lead us to think that in a world where uncertainty prevails, Homo heuristicus might become a real threat to Homo economicus.
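    The contrast the abstract draws between SEU (which needs full probabilities) and a simple heuristic (which ignores them) can be made concrete in a small sketch. The options, utilities, probabilities, and aspiration level below are invented for illustration only.

    ```python
    def seu(option):
        """Subjective expected utility: sum of p(outcome) * u(outcome).
        Requires every probability to be known, i.e. a decision under risk."""
        return sum(p * u for p, u in option["outcomes"])

    # Hypothetical entrepreneurial options: lists of (probability, utility).
    options = {
        "venture_a": {"outcomes": [(0.3, 100), (0.7, -10)]},  # SEU = 23
        "venture_b": {"outcomes": [(0.6, 40), (0.4, 0)]},     # SEU = 24
    }

    # Under risk, SEU picks the expected-utility maximizer.
    best_by_seu = max(options, key=lambda k: seu(options[k]))

    def satisfice(options, aspiration):
        """A satisficing heuristic for uncertainty: ignore probabilities and
        accept the first option whose best payoff clears an aspiration level."""
        for name, opt in options.items():
            if max(u for _, u in opt["outcomes"]) >= aspiration:
                return name
        return None

    chosen = satisfice(options, aspiration=50)
    ```

    The two rules can disagree: SEU favours venture_b on expected value, while the heuristic, needing no probability estimates at all, accepts venture_a for its high potential payoff. Under genuine uncertainty, where the probabilities are unknowable anyway, only the second rule remains applicable.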