Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
The rapid growth of demanding applications in domains applying multimedia
processing and machine learning has marked a new era for edge and cloud
computing. These applications involve massive data and compute-intensive tasks,
and thus, typical computing paradigms in embedded systems and data centers are
stressed to meet the worldwide demand for high performance. Concurrently, over
the last 15 years the semiconductor field has established power as a
first-class design concern. As a result, the computing systems community is
forced to seek alternative design approaches that facilitate
high-performance and/or power-efficient computing. Among the examined
solutions, Approximate Computing has attracted an ever-increasing interest,
with research works applying approximations across the entire traditional
computing stack, i.e., at the software, hardware, and architectural levels.
Over the last decade, a plethora of approximation techniques has emerged in
software (programs, frameworks, compilers, runtimes, languages), hardware
(circuits, accelerators), and architectures (processors, memories). The current
article is Part I of our comprehensive survey on Approximate Computing: it
reviews the motivation, terminology, and principles of the field, and it
classifies and presents the technical details of the state-of-the-art software
and hardware approximation techniques.
Comment: Under review at ACM Computing Surveys
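As a concrete illustration of the software-level techniques such a survey classifies, the sketch below shows loop perforation, a classic software approximation: a computation deliberately skips part of its input to trade a small accuracy loss for less work. The function names and the perforation stride are our own illustrative choices, not taken from the survey.

```python
def exact_mean(values):
    """Exact mean over all elements."""
    return sum(values) / len(values)

def perforated_mean(values, stride=2):
    """Approximate mean that reads only every `stride`-th element."""
    sampled = values[::stride]
    return sum(sampled) / len(sampled)

data = list(range(1, 1001))               # 1..1000, exact mean = 500.5
approx = perforated_mean(data, stride=4)  # touches only 25% of the data
rel_error = abs(approx - exact_mean(data)) / exact_mean(data)
```

Here the perforated loop reads a quarter of the data yet stays within about 0.3% of the exact mean, the kind of quality/effort trade-off that approximate computing exploits.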
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched in the upper broad wall so that the structure radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulations employing the actual properties of a commercial LC medium
Pathophysiology of Spinal Cord Injury (SCI)
Spinal cord injury (SCI) leads to paralysis and to sensory and autonomic nervous system dysfunctions. However, the pathophysiology of SCI is complex and not limited to the nervous system. Indeed, several other organs and tissues are also affected by the injury, directly or indirectly, acutely or chronically, which induces numerous health complications. Although a lot of research has been performed to repair motor and sensory functions, SCI-induced health issues are less studied, even though they represent a major concern among patients. There is a knowledge gap in pre-clinical models studying these SCI-induced health complications, which limits translational applications in humans. This reprint describes several aspects of the pathophysiology of spinal cord injuries. This includes, but is not limited to, the impact of SCI on cardiovascular and respiratory functions, bladder and bowel function, autonomic dysreflexia, liver pathology, metabolic syndrome, bone and muscle loss, and cognitive functions
The development and applications of ceragenins and bone-binding antimicrobials to prevent osteomyelitis in orthopaedic patients
Bone infection remains a high-burden disease in orthopaedic and trauma patients with fractures and implantations. Osteomyelitis is difficult to cure in clinical settings, especially if antimicrobial resistance or biofilm is involved, which may prolong the treatments with antibiotics and require multiple surgeries, severely affecting the patients' quality of life and mobility. Osteomyelitis can lead to osteonecrosis, septicaemia, amputation, multi-organ dysfunction, and death in severe cases.
Preclinical models are essential for efficacy testing to develop new prophylactic and therapeutic interventions. Previous bone infection models in rats involved fractures and implantations, making them complicated to perform. In this study, we have developed and optimised murine models with a tibial drilled hole (TDH) and needle insertion surgery (NIS) that are reliable, reproducible, and cost-effective for studying implant-related and biofilm bone infections and for efficacy testing.
Ceragenins (CSAs) are a novel class of broad-spectrum antimicrobials that mimic the activities of antimicrobial peptides. They are effective against bacterial, viral, fungal, and parasitic infections with low minimum inhibitory concentrations (MICs) and minimum bactericidal concentrations (MBCs). CSAs can also penetrate biofilm and kill antimicrobial-resistant bacteria, such as methicillin-resistant Staphylococcus aureus (MRSA) and methicillin-resistant Staphylococcus epidermidis (MRSE). In recent years, CSA-131 has been approved by the FDA for endotracheal tube coating to prevent infection in intubated and critically ill patients. In our study, we applied CSA-90 (which belongs to the same family as CSA-131) to implant coating, prevented osteomyelitis in a mouse model, and demonstrated the osteogenic properties of CSA-90, which promotes bone healing and union of bone defects.
CSA-90 has been classified as a potential drug to prevent and treat osteomyelitis. However, conventional methods of antibiotic delivery to the bone are inefficient. To increase the bone-binding property of CSA-90, we invented a new molecule by attaching alendronate (bisphosphonate) to CSA-90 and named it bone-binding antimicrobial-1 (BBA-1). In vitro, we determined the bone-binding properties of BBA-1 and confirmed its antimicrobial activities against S. aureus. Later, we conducted a preclinical trial to test the in vivo efficacy of BBA-1 and showed that BBA-1 could prevent osteomyelitis in mice and has low cytotoxicity.
Multiple myeloma (MM) is an aggressive cancer of plasma cells. Although chemotherapy, corticosteroids, and radiation therapy can manage multiple myeloma, MM has no cure. Most MM patients (>90%) suffer from myeloma skeletal disease, including local osteolytic lesions and osteomyelitis. Thus, we direct the clinical application of BBA-1 towards MM patients. Before clinical trials can be pursued, preclinical trials must be conducted. To this end, we proposed a feasible murine model that can induce bone infections in MM mice and elucidated how MM patients would benefit from BBA-1
A 3-step Low-latency Low-Power Multichannel Time-to-Digital Converter based on Time Residual Amplifier
This paper proposes and evaluates a novel architecture for a low-power
Time-to-Digital Converter with high resolution, optimized for both integration
in multichannel chips and high rate operation (40 Mconversion/s/channel). This
converter is based on a three-step architecture. The first step uses a counter
whereas the following ones are based on two kinds of Delay Line structures. A
programmable time amplifier is used between the second and third steps to reach
the final resolution of 24.4 ps in the standard mode of operation. The system
makes use of shared, continuously stabilized master blocks that control
trimmable slave blocks in each channel, compensating for the effects of global
PVT variations. Thanks to this structure, the power consumption of a channel is
considerably reduced when it does not process a hit, and limited to 2.2 mW when
it processes a hit. In the 130 nm CMOS technology used for the prototype, the
area of a TDC channel is only 0.051 mm2. This compactness combined with low
power consumption is a key advantage for integration in multi-channel front-end
chips. The performance of this new structure has been evaluated on prototype
chips. Measurements show excellent timing performance over a wide range of
operating temperatures (-40 °C to 60 °C) in agreement with our
expectations. For example, the measured timing integral nonlinearity is better
than 1 LSB (25 ps) and the overall timing precision is better than 21 ps RMS
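The coarse-to-fine principle of such a multi-step TDC can be sketched numerically. The model below assumes a 40 MHz counter clock, a 64-bin first delay line, and a time-amplifier gain of 16: purely illustrative values, chosen only so that the final LSB works out to roughly the 24.4 ps quoted above, and not taken from the actual chip.

```python
# Hypothetical numeric model of a three-step TDC: a coarse counter,
# a delay-line step, then a time amplifier that stretches the residual
# so a second delay line can resolve it at much finer granularity.
CLK_PERIOD = 25_000.0           # ps, coarse counter step (40 MHz assumed)
DL_STEP    = CLK_PERIOD / 64    # ps, delay-line resolution (390.625 ps)
AMP_GAIN   = 16.0               # time-amplifier gain
FINE_STEP  = DL_STEP / AMP_GAIN # ~24.4 ps effective final LSB

def convert(t_ps):
    """Return (coarse, mid, fine) codes and the reconstructed time."""
    coarse = int(t_ps // CLK_PERIOD)            # step 1: counter
    res1 = t_ps - coarse * CLK_PERIOD
    mid = int(res1 // DL_STEP)                  # step 2: delay line
    res2 = (res1 - mid * DL_STEP) * AMP_GAIN    # amplified residual
    fine = int(res2 // DL_STEP)                 # step 3: delay line again
    t_rec = coarse * CLK_PERIOD + mid * DL_STEP + fine * FINE_STEP
    return coarse, mid, fine, t_rec
```

In this idealized model the reconstructed time is always within one fine LSB of the input; the real chip must additionally stabilize its delay elements against PVT variations, which is what the master/slave scheme described above addresses.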
The dual environmental and economic effects of the emission trading scheme under local fiscal pressure: "efficient markets" and "promising governments"
Compared with developed economies, China implements the Emission Trading Scheme (ETS) within a fundamentally distinct political-economic-institutional context. This study aims to investigate the internal mechanisms and external constraints of emission trading scheme in achieving the dual benefits of environmental preservation and economic advancement within the institutional context of fiscal decentralization. We demonstrate that the transmission from emission reduction to economic returns inherently facilitates the realization of dual benefits, and further propose a restrictive effect of local fiscal pressure on the effectiveness of the emission trading scheme. Using panel data of 284 prefectural-level cities from 2003 to 2017, we conduct a quasi-experiment based on China's emission trading scheme pilot policy in 2007. The results indicate three primary conclusions: First, the implementation of emission trading scheme in China generally yields dual environmental-economic benefits, with emission reduction serving as a transmission channel for realizing economic gains. Second, high fiscal pressure on local governments not only directly undermines policy effects but also indirectly affects the transmission channel. Finally, the dual benefits have been realized in eastern China, but not yet in the central and western regions. This study contributes to the research on market-oriented environmental governance under fiscal decentralization. The theoretical logic of this study can be applied to a wide range of market-based mechanisms for green factors trading, providing valuable insights for countries facing similar challenges
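The core identification step of such a quasi-experiment can be illustrated with a minimal 2x2 difference-in-differences sketch: the policy effect is the change in pilot cities minus the change in non-pilot cities. All numbers below are fabricated for illustration and are not the paper's data.

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Classic 2x2 DiD: (treated group change) minus (control group change)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean emission indices per group and period:
effect = did_estimate(treat_pre=100.0, treat_post=88.0,
                      ctrl_pre=100.0, ctrl_post=95.0)
# effect = -7.0: pilot cities' emissions fell 7 points more than controls'
```

A full panel analysis like the one described would instead regress city-year outcomes on a treatment-by-post interaction with city and year fixed effects, but the interaction coefficient captures the same contrast as this 2x2 sketch.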
Variables controlling the resurgence of previously reinforced behaviour in hens
Resurgence is defined as the occurrence of previously reinforced behaviours when reinforcer delivery ceases for a recently reinforced behaviour. In five experiments, variables suggested to control the degree of occurrence of a first-trained behaviour during the extinction of a second-trained behaviour (resurgence) were investigated. All experiments used hens, and behaviours were selected from door push, key peck, and head bob. In Experiment 1, using 6 naive hens, Behaviour 1 was reinforced on a random-interval (RI) 60-s schedule followed by two sessions of extinction. Each occurrence of Behaviour 2 was then reinforced, followed by another period of extinction. The degree of occurrence of Behaviour 1 during the final extinction was less than that which occurred during the period of Behaviour 1 extinction, suggesting that the extinction of Behaviour 2 did not increase the occurrence of Behaviour 1. This result failed to support the idea that resurgence is induced by the extinction of Behaviour 2. In Experiment 2, using the same hens and an additional hen, Experiment 1 was repeated five times and then there were either 0 or 9 sessions of Behaviour 1 extinction in a further five conditions. The degree of resurgence was generally less after 9 sessions than after no sessions, while after 2 sessions it was not consistently different from either. Experiment 3 used six naive hens. Two first-trained behaviours were initially reinforced on RI 45-s schedules under a multiple schedule. One first behaviour then received a period of extinction, and then each occurrence of two second behaviours was reinforced under the multiple schedule, followed by extinction. The sequence from training of the first behaviours to the extinction of the second behaviours was repeated 10 times, with the number of occurrences of the component for which extinction was in effect for the first behaviour varying across conditions from 12 to 0.
The degree of resurgence was an inverse function of the amount of Behaviour 1 extinction. Experiment 4 used six naive hens. In a multiple schedule, two first behaviours were reinforced on RI 20-s schedules and then two second behaviours were reinforced, followed by extinction. This was repeated 8 times with the RI schedule in effect for one of the second behaviours varying from 80 s to 10 s across conditions while the other remained at 40 s. The degree of occurrence of Behaviour 1 when Behaviour 2 was reinforced was a direct function of the varied RI schedule of Behaviour 2. The degree of resurgence of Behaviour 1 in extinction was an inverse function of the varied RI schedule of Behaviour 2. The degree of resurgence was also inversely related to the degree of occurrence of Behaviour 1 when Behaviour 2 was reinforced. Experiment 5 used five naive hens and one hen from Experiment 3 in a multiple-schedule design where the length of training of the second behaviours varied from 124 to 0 occurrences of a component over three conditions. No effect of this was found on the degree of resurgence. The results are consistent with the hypothesis that resurgence is the result of the prevention of extinction of Behaviour 1 by the reinforcement of Behaviour 2, but they are not definitive proof that this hypothesis is correct. Models derived from the Generalised Matching Law and Behavioural Momentum are also proposed as descriptions of resurgence
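For readers unfamiliar with the schedules used throughout these experiments, the sketch below simulates a random-interval (RI) 60-s schedule: a reinforcer is set up at unpredictable times averaging one per 60 s, and the next response collects it. The one-second tick, response probability, and seed are arbitrary choices of ours, not parameters from the experiments.

```python
import random

def simulate_ri(mean_interval_s=60, session_s=3600,
                resp_prob=0.5, seed=1):
    """Count reinforcers earned on an RI schedule in one session."""
    rng = random.Random(seed)
    armed = False       # whether a reinforcer is currently set up
    reinforcers = 0
    for _ in range(session_s):                  # one tick per second
        if not armed and rng.random() < 1 / mean_interval_s:
            armed = True                        # reinforcer set up
        if armed and rng.random() < resp_prob:
            reinforcers += 1                    # next response collects it
            armed = False
    return reinforcers

earned = simulate_ri()   # roughly session_s / mean_interval_s expected
```

Because the reinforcer waits once set up, moderate changes in response rate barely change the reinforcers earned, which is why RI schedules give the steady, comparable baselines these experiments rely on.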
Machine Learning for Gravitational-Wave Astronomy: Methods and Applications for High-Dimensional Laser Interferometry Data
Gravitational-wave astronomy is an emerging field of observational astrophysics concerned with the study of gravitational signals proposed nearly a century ago by Albert Einstein but only recently confirmed to exist. Such signals were theorized to result from astronomical events such as the collisions of black holes, but they were long thought to be too faint to measure on Earth. In recent years, the construction of extremely sensitive detectors, including those of the Laser Interferometer Gravitational-Wave Observatory (LIGO) project, has enabled the first direct detections of these gravitational waves, corroborating the theory of general relativity and heralding a new era of astrophysics research.
As a result of their extraordinary sensitivity, the instruments used to study gravitational waves are also subject to noise that can significantly limit their ability to detect the signals of interest with sufficient confidence. The detectors continuously record more than 200,000 time series of auxiliary data describing the state of a vast array of internal components and sensors, the environmental state in and around the detector, and so on. This data offers significant value for understanding the nearly innumerable potential sources of noise and ultimately reducing or eliminating them, but it is clearly impossible to monitor, let alone understand, so much information manually. The field of machine learning offers a variety of techniques well-suited to problems of this nature.
In this thesis, we develop and present several machine learning-based approaches to automate the process of extracting insights from the vast, complex collection of data recorded by LIGO detectors. We introduce a novel problem formulation for transient noise detection and show for the first time how an efficient and interpretable machine learning method can accurately identify detector noise using all of these auxiliary data channels but without observing the noise itself. We present further work employing more sophisticated neural network-based models, demonstrating how they can reduce error rates by over 60% while also providing LIGO scientists with interpretable insights into the detector's behavior. We also illustrate the methods' utility by demonstrating their application to a specific, recurring type of transient noise; we show how we can achieve a classification accuracy of over 97% while also independently corroborating the results of previous manual investigations into the origins of this type of noise.
The methods and results presented in the following chapters are applicable not only to the specific gravitational-wave data considered but also to a broader family of machine learning problems involving prediction from similarly complex, high-dimensional data containing only a few relevant components in a sea of irrelevant information. We hope this work proves useful to astrophysicists and other machine learning practitioners seeking to better understand gravitational waves, extremely complex and precise engineered systems, or any of the innumerable extraordinary phenomena of our civilization and universe
Special Topics in Information Technology
This open access book presents thirteen outstanding doctoral dissertations in Information Technology from the Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy. Information Technology has always been highly interdisciplinary, as many aspects have to be considered in IT systems. The doctoral studies program in IT at Politecnico di Milano emphasizes this interdisciplinary nature, which is becoming more and more important in recent technological advances, in collaborative projects, and in the education of young researchers. Accordingly, the focus of advanced research is on pursuing a rigorous approach to specific research topics starting from a broad background in various areas of Information Technology, especially Computer Science and Engineering, Electronics, Systems and Control, and Telecommunications. Each year, more than 50 PhDs graduate from the program. This book gathers the outcomes of the thirteen best theses defended in 2020-21 and selected for the IT PhD Award. Each of the authors provides a chapter summarizing his/her findings, including an introduction, description of methods, main achievements and future work on the topic. Hence, the book provides a cutting-edge overview of the latest research trends in Information Technology at Politecnico di Milano, presented in an easy-to-read format that will also appeal to non-specialists