10 research outputs found

    Design Techniques for Energy-Quality Scalable Digital Systems

    Energy efficiency is one of the key design goals in modern computing. Increasingly complex tasks are being executed in mobile devices and Internet of Things end-nodes, which are expected to operate for long time intervals, in the order of months or years, with the limited energy budgets provided by small form-factor batteries. Fortunately, many such tasks are error resilient, meaning that they can tolerate some relaxation in the accuracy, precision or reliability of internal operations without a significant impact on the overall output quality. The error resilience of an application may derive from a number of factors. The processing of analog sensor inputs measuring quantities from the physical world may not always require maximum precision, as the amount of information that can be extracted is limited by the presence of external noise. Outputs destined for human consumption may also tolerate small or occasional errors, thanks to the limited capabilities of our vision and hearing systems. Finally, some computational patterns commonly found in domains such as statistics, machine learning and operational research naturally tend to reduce or eliminate errors. Energy-Quality (EQ) scalable digital systems systematically trade off the quality of computations against energy efficiency, by relaxing the precision, the accuracy, or the reliability of internal software and hardware components in exchange for energy reductions. This design paradigm is believed to offer one of the most promising solutions to the pressing need for low-energy computing. Despite these high expectations, the current state of the art in EQ scalable design suffers from important shortcomings. First, the great majority of techniques proposed in the literature focus only on processing hardware and software components. Nonetheless, for many real devices, processing contributes only a small portion of the total energy consumption, which is dominated by other components (e.g. I/O, memory or data transfers). Second, in order to fulfill its promises and become widespread in commercial devices, EQ scalable design needs to achieve industrial-level maturity. This involves moving from purely academic research based on high-level models and theoretical assumptions to engineered flows compatible with existing industry standards. Third, the time-varying nature of error tolerance, both among different applications and within a single task, should become more central in the proposed design methods. This involves designing “dynamic” systems in which the precision or reliability of operations (and consequently their energy consumption) can be tuned at runtime, rather than “static” solutions in which the output quality is fixed at design time. This thesis introduces several new EQ scalable design techniques for digital systems that take the previous observations into account. Besides processing, the proposed methods apply the principles of EQ scalable design also to interconnects and peripherals, which are often significant contributors to the total energy in sensor nodes and mobile systems, respectively. Regardless of the target component, the presented techniques pay special attention to the accurate evaluation of the benefits and overheads deriving from EQ scalability, using industrial-level models, and to the integration with existing standard tools and protocols. Moreover, all the works presented in this thesis allow the dynamic reconfiguration of output quality and energy consumption.
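To make the energy-quality knob concrete, the sketch below illustrates one generic form of it, dynamic precision scaling (a hypothetical Python illustration, not code from the thesis): data are re-quantized to a runtime-selectable number of fractional bits and the quality loss is measured as a signal-to-noise ratio, while in real hardware the narrower bit-width would translate into a smaller, lower-energy datapath.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round an array to a fixed-point grid with `frac_bits` fractional bits.

    Fewer fractional bits give coarser values; in hardware this would mean a
    narrower datapath and less switching energy, at the cost of output quality.
    """
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

rng = np.random.default_rng(0)
signal = rng.normal(size=1000)

# Sweep the precision knob: quality (SNR in dB) degrades gracefully as the
# bit-width shrinks, which is exactly the energy-quality tradeoff being tuned.
for frac_bits in (12, 8, 4, 2):
    error = signal - quantize(signal, frac_bits)
    snr_db = 10 * np.log10(np.mean(signal ** 2) / np.mean(error ** 2))
    print(f"{frac_bits:2d} fractional bits -> SNR = {snr_db:5.1f} dB")
```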
More specifically, the contribution of this thesis is divided into three parts. In the first body of work, the design of EQ scalable modules for processing hardware data paths is considered. Three design flows are presented, targeting different technologies and exploiting different ways to achieve EQ scalability, i.e. timing-induced errors and precision reduction. These works are inspired by previous approaches from the literature, namely Reduced-Precision Redundancy and Dynamic Accuracy Scaling, which are re-thought to make them compatible with standard Electronic Design Automation (EDA) tools and flows, providing solutions to overcome their main limitations. The second part of the thesis investigates the application of EQ scalable design to serial interconnects, which are the de facto standard for data exchanges between processing hardware and sensors. In this context, two novel bus encodings are proposed, called Approximate Differential Encoding and Serial-T0, that exploit the statistical characteristics of data produced by sensors to reduce the energy consumption on the bus at the cost of controlled data approximations. The two techniques achieve different results for data of different origins, but share the common features of allowing runtime reconfiguration of the allowed error and being compatible with standard serial bus protocols. Finally, the last part of the manuscript is devoted to the application of EQ scalable design principles to displays, which are often among the most energy-hungry components in mobile systems. The two proposals in this context leverage the emissive nature of Organic Light-Emitting Diode (OLED) displays to save energy by altering the displayed image, thus inducing an output quality reduction that depends on the amount of such alteration. The first technique implements an image-adaptive form of brightness scaling, whose outputs are optimized in terms of the balance between power consumption and similarity with the input. The second approach achieves concurrent power reduction and image enhancement by means of an adaptive polynomial transformation. Both solutions focus on minimizing the overheads associated with a real-time implementation of the transformations in software or hardware, so that these do not offset the savings in the display. For each of these three topics, results show that the aforementioned goal of building EQ scalable systems compatible with existing best practices and mature enough to be integrated in commercial devices can be effectively achieved. Moreover, they also show that very simple and similar principles can be applied to design EQ scalable versions of different system components (processing, peripherals and I/O), and to equip these components with knobs for the runtime reconfiguration of the energy versus quality tradeoff.
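As a rough illustration of the serial-interconnect idea, the toy encoder below transmits clipped differences between consecutive sensor samples so that slowly varying data toggles fewer bus bits, with the clipping width acting as a runtime error knob. This is only a hedged sketch of the general principle, not the actual Approximate Differential Encoding or Serial-T0 schemes proposed in the thesis.

```python
def encode_approx_diff(samples, diff_bits):
    """Toy approximate differential encoder (illustrative only).

    After the first sample, each transmitted word is the difference from the
    previously reconstructed sample, clipped to the range representable with
    `diff_bits` bits. Narrow differences toggle fewer lines on a serial bus;
    the clipping introduces a bounded error that `diff_bits` tunes at runtime.
    """
    lo, hi = -(1 << (diff_bits - 1)), (1 << (diff_bits - 1)) - 1
    prev = samples[0]
    words = [samples[0]]              # first sample is sent in full
    for s in samples[1:]:
        d = max(lo, min(hi, s - prev))
        words.append(d)
        prev += d                     # track what the receiver reconstructs
    return words

def decode_approx_diff(words):
    """Receiver side: accumulate the clipped differences."""
    value = words[0]
    rec = [value]
    for d in words[1:]:
        value += d
        rec.append(value)
    return rec

# Slowly varying "sensor" data: most differences fit in a 4-bit word.
data = [100, 102, 101, 105, 110, 118, 119, 117]
print(decode_approx_diff(encode_approx_diff(data, diff_bits=4)))
# -> [100, 102, 101, 105, 110, 117, 119, 117]  (bounded error on the jump of 8)
```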

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), that it yields a remarkable improvement in classification accuracy. © 2011 IEEE
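The benefit of the heavier tails can be seen with a small, generic example (not the paper's HMM implementation): a single gross outlier, of the kind produced by a failed feature extraction, drags down a Gaussian observation likelihood far more than a Student's t likelihood with the same location and scale.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(1)
features = rng.normal(loc=0.0, scale=1.0, size=200)
features = np.append(features, 15.0)     # one gross tracking outlier

# Same location and scale for both models; only the tail behaviour differs.
gauss = norm(loc=0.0, scale=1.0)
student = t(df=3, loc=0.0, scale=1.0)    # low degrees of freedom -> heavy tails

print("mean log-lik per sample (Gaussian)  :", gauss.logpdf(features).mean())
print("mean log-lik per sample (Student t) :", student.logpdf(features).mean())
print("outlier log-density (Gaussian)      :", gauss.logpdf(15.0))
print("outlier log-density (Student t)     :", student.logpdf(15.0))
```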

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model addresses face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals, in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we recast this theory for the case in which one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised w.r.t. the original model. The model is applied to Berrick, a recent open-source low-cost robotic head platform, where gaze is the social signal to be considered.
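As a hedged, toy illustration of this kind of quantitative evaluation (the variable names and noise levels are invented, and this is not the paper's code): in a lens-model setting one can correlate the interactant's true attentional state with the externalized gaze cue and with the value decoded by the robot's recognizer, so that weak correlations indicate whether the cue itself or the recognition phase limits the interaction.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy data: the human's true attentional state, the gaze cue
# they externalize, and the value the robot's recognizer decodes from vision.
true_state = rng.normal(size=300)
gaze_cue = true_state + rng.normal(scale=0.3, size=300)   # encoding noise
recognized = gaze_cue + rng.normal(scale=0.5, size=300)   # recognition noise

ecological_validity = np.corrcoef(true_state, gaze_cue)[0, 1]    # cue quality
recognition_quality = np.corrcoef(gaze_cue, recognized)[0, 1]    # decoder quality
overall_achievement = np.corrcoef(true_state, recognized)[0, 1]  # end-to-end

print(f"cue validity        : {ecological_validity:.2f}")
print(f"recognition quality : {recognition_quality:.2f}")
print(f"overall achievement : {overall_achievement:.2f}")
```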

    Earth Observation Data, Processing and Applications. Volume 2A. Processing - Basic Image Operations

    Eds. Harrison, B.A., Jupp, D.L.B., Lewis, M.M., Sparks, T., Phinn, S.F., Mueller, N., Byrne, G.

    Pre-processing, classification and semantic querying of large-scale Earth observation spaceborne/airborne/terrestrial image databases: Process and product innovations.

    By the Wikipedia definition, “big data is the term adopted for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications. The big data challenges typically include capture, curation, storage, search, sharing, transfer, analysis and visualization”. Proposed by the intergovernmental Group on Earth Observations (GEO), the visionary goal of the Global Earth Observation System of Systems (GEOSS) implementation plan for the years 2005-2015 is the systematic transformation of multi-source Earth Observation (EO) “big data” into timely, comprehensive and operational EO value-adding products and services, submitted to the GEO Quality Assurance Framework for Earth Observation (QA4EO) calibration/validation (Cal/Val) requirements. To date, the GEOSS mission cannot be considered fulfilled by the remote sensing (RS) community. This is tantamount to saying that past and existing EO image understanding systems (EO-IUSs) have been outpaced by the rate of collection of EO sensory big data, whose quality and quantity are ever-increasing. This fact is supported by several observations. For example, no European Space Agency (ESA) EO Level 2 product has ever been systematically generated at the ground segment. By definition, an ESA EO Level 2 product comprises a single-date multi-spectral (MS) image radiometrically calibrated into surface reflectance (SURF) values corrected for geometric, atmospheric, adjacency and topographic effects, stacked with its data-derived scene classification map (SCM), whose thematic legend is general-purpose, user- and application-independent and includes quality layers, such as cloud and cloud-shadow. Since no GEOSS exists to date, present EO content-based image retrieval (CBIR) systems lack EO image understanding capabilities. Hence, no semantic CBIR (SCBIR) system exists to date either, where semantic querying is synonym of semantics-enabled knowledge/information discovery in multi-source big image databases. In set theory, if set A is a strict superset of (or strictly includes) set B, then A ⊃ B. This doctoral project started from the working hypothesis that SCBIR ⊃ computer vision (CV), where vision is synonym of scene-from-image reconstruction and understanding, ⊃ EO image understanding (EO-IU) in operating mode, synonym of GEOSS, ⊃ ESA EO Level 2 product ⊃ human vision. Meaning that a necessary but not sufficient pre-condition for SCBIR is CV in operating mode, this working hypothesis has two corollaries. First, human visual perception, encompassing well-known visual illusions such as the Mach bands illusion, acts as a lower bound of CV within the multi-disciplinary domain of cognitive science, i.e., CV is conditioned to include a computational model of human vision. Second, a necessary but not sufficient pre-condition for a yet-unfulfilled GEOSS development is the systematic generation at the ground segment of the ESA EO Level 2 product. Starting from this working hypothesis, the overarching goal of this doctoral project was to contribute to research and technical development (R&D) toward filling the analytic and pragmatic information gap from EO big sensory data to EO value-adding information products and services. This R&D objective was conceived to be twofold. First, to develop an original EO-IUS in operating mode, synonym of GEOSS, capable of systematic ESA EO Level 2 product generation from multi-source EO imagery.
EO imaging sources vary in terms of: (i) platform, either spaceborne, airborne or terrestrial, and (ii) imaging sensor, either (a) optical, encompassing radiometrically calibrated or uncalibrated images, panchromatic or color images, either true- or false-color red-green-blue (RGB), multi-spectral (MS), super-spectral (SS) or hyper-spectral (HS) images, featuring spatial resolutions from low (> 1 km) to very high (< 1 m), or (b) synthetic aperture radar (SAR), specifically bi-temporal RGB SAR imagery. The second R&D objective was to design and develop a prototypical implementation of an integrated closed-loop EO-IU for semantic querying (EO-IU4SQ) system as a GEOSS proof-of-concept in support of SCBIR. The proposed closed-loop EO-IU4SQ system prototype consists of two subsystems for incremental learning. A primary (dominant, necessary but not sufficient) hybrid (combined deductive/top-down/physical model-based and inductive/bottom-up/statistical model-based) feedback EO-IU subsystem in operating mode requires no human-machine interaction to automatically transform, in linear time, a single-date MS image into an ESA EO Level 2 product as initial condition. A secondary (dependent) hybrid feedback EO Semantic Querying (EO-SQ) subsystem is provided with a graphic user interface (GUI) to streamline human-machine interaction in support of spatiotemporal EO big data analytics and SCBIR operations. EO information products generated as output by the closed-loop EO-IU4SQ system monotonically increase their value-added with closed-loop iterations.
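A minimal structural sketch of the closed-loop arrangement described above is given below; all class and method names are hypothetical placeholders (the thesis does not expose this API) and the bodies are stubs, so it only shows how the automatic EO-IU stage feeds Level 2 products to the human-in-the-loop EO-SQ stage.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Level2Product:
    """Hypothetical container for an ESA EO Level 2 product."""
    surf_image: Any   # MS image calibrated into surface reflectance (SURF)
    scm: Any          # scene classification map with quality layers

class EOImageUnderstanding:
    """Primary hybrid feedback subsystem: fully automatic, no user interaction."""
    def process(self, ms_image: Any) -> Level2Product:
        surf = self._calibrate(ms_image)   # physical model-based step
        scm = self._classify(surf)         # statistical model-based step
        return Level2Product(surf, scm)
    def _calibrate(self, ms_image: Any) -> Any: ...
    def _classify(self, surf: Any) -> Any: ...

class EOSemanticQuerying:
    """Secondary subsystem: GUI-driven semantic queries over Level 2 products."""
    def query(self, products: List[Level2Product], expression: str) -> List[Level2Product]: ...

class ClosedLoopEOIU4SQ:
    """Every ingested image becomes a Level 2 product before any querying."""
    def __init__(self) -> None:
        self.eo_iu = EOImageUnderstanding()
        self.eo_sq = EOSemanticQuerying()
        self.archive: List[Level2Product] = []
    def ingest(self, ms_image: Any) -> None:
        self.archive.append(self.eo_iu.process(ms_image))
    def search(self, expression: str) -> List[Level2Product]:
        return self.eo_sq.query(self.archive, expression)
```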

    Endoscopic Fluorescence Imaging: Spectral Optimization and in vivo Characterization of Positive Sites by Magnifying Vascular Imaging

    For several decades, physicians have been able to access hollow organs with endoscopic methods, which serve as both diagnostic and surgical means in a wide range of disciplines of modern medicine (e.g. urology, pneumology, gastroenterology). Unfortunately, white light (WL) endoscopy displays a limited sensitivity to early pre-cancerous lesions. Hence, several endoscopic methods based on fluorescence imaging have been developed to overcome this limitation. Both endogenous and exogenously-induced fluorescence have been investigated, leading to commercial products. Indeed, autofluorescence bronchoscopy, as well as porphyrin-based fluorescence cystoscopy, are now on the market. As a matter of fact, fluorescence-based endoscopic detection methods show very high sensitivity to pre-cancerous lesions, which are often overlooked in WL endoscopy, but they still lack specificity, mainly due to their high false-positive rate. Although most of these false positives can easily be rejected under WL observation, tissue abnormalities such as inflammation, hyperplasia and metaplasia are more difficult to identify, often resulting in supplementary biopsies. Therefore, the purpose of this thesis is to study a novel, fast and convenient method to characterize fluorescence-positive spots in situ during fluorescence endoscopy and, more generally, to optimize the existing endoscopic setup. In this thesis, several clinical evaluations were conducted in both the tracheo-bronchial tree and the urinary bladder. In the urinary bladder, fluorescence imaging for the detection of non-muscle-invasive bladder cancer is based on the selective production and accumulation of fluorescing porphyrins, mainly protoporphyrin IX (PpIX), in cancerous tissues after instillation of Hexvix® for one hour. In this thesis, we adapted a rigid cystoscope to perform high magnification (HM) cystoscopy in order to discriminate false from true fluorescence-positive findings. Both white light and fluorescence modes are possible with the magnification cystoscope, allowing observation of the bladder wall with magnifications ranging between 30×, for standard observation, and 650×. The optical zooming setup allows the magnification to be adjusted continuously in situ. In the high magnification regime, the smallest diameter of the field of view is 600 microns and the resolution is 2.5 microns when in contact with the bladder wall. With this HM cystoscope, we characterized the superficial vascularization of the fluorescing sites in WL (370–700 nm) reflectance imaging in order to discriminate cancerous from non-cancerous tissues. This procedure allowed us to establish a classification based on the observed vascular patterns. 72 patients undergoing Hexvix® fluorescence cystoscopy were included in the study. Comparison of the HM cystoscopy classification with histopathology results confirmed 32/33 (97%) cancerous biopsies, and rejected 17/20 (85%) non-cancerous lesions. No vascular alteration could be observed on the only positive lesion that was negative in HM mode, probably because this sarcomatoid carcinoma did not originate in the bladder mucosa. We established with this study that a magnification ranging between 80× and 100× is an optimal tradeoff to perform both macroscopic PDD and HM reflectance imaging. In order to make this approach more quantitative, different image processing algorithms (vessel segmentation and skeletonisation, global information extraction) were also implemented in this thesis.
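A generic vessel segmentation and skeletonisation pipeline of the kind mentioned above can be sketched with scikit-image (a hedged illustration only; the file name is a placeholder and the thesis's actual algorithms and parameters are not reproduced here).

```python
from skimage import io, color, filters, morphology

# Load an endoscopic frame (the path is illustrative).
frame = io.imread("hm_cystoscopy_frame.png")
gray = color.rgb2gray(frame[..., :3])

# Vessels absorb strongly (hemoglobin) and so appear dark: invert first, then
# enhance tubular structures with a Frangi vesselness filter.
vesselness = filters.frangi(1.0 - gray)

# Threshold the vesselness map and reduce the vessel mask to its centerlines.
mask = vesselness > filters.threshold_otsu(vesselness)
skeleton = morphology.skeletonize(mask)

# Simple global descriptors of the vascular pattern.
vessel_density = mask.mean()            # fraction of vessel pixels
total_length_px = int(skeleton.sum())   # summed centerline length in pixels
print(f"vessel density: {vessel_density:.3f}, skeleton length: {total_length_px} px")
```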
In order to better visualize the vessels, we improved their contrast with respect to the background. Since hemoglobin is a very strong absorber, we targeted the two hemoglobin absorption peaks by placing appropriate bandpass filters (blue 405±50 nm, green 550±50 nm) in the light source. HM cystoscopy was then performed sequentially with WL, blue and green illumination. The latter two showed higher vessel-to-background contrast, revealing different layers of vascularization due to the different light penetration depths. During fluorescence cystoscopy, we often observed that the images are somewhat "blurred" by a greenish haze between the endoscope tip and the bladder mucosa. Since this effect is enhanced by urine production, it is more visible with flexible scopes (lower flushing capabilities) and with imaging systems that collect only autofluorescence as background. Indeed, when the bladder is not flushed regularly, greenish flows coming out of the ureters can easily be observed. For this reason, it is supposed that some fluorophores contained in the urine are excited by the photodetection excitation light and appear greenish on the screen. This effect may impair the visualization of the bladder mucosa, and thus of cancerous lesions, and lowers the sensitivity of fluorescence cystoscopy. In this thesis, we identified the main metabolites responsible for this liquid fluorescence, and optimized the spectral design accordingly. In the tracheo-bronchial tree, the fluorescence contrast is based on the sharp autofluorescence (AF) decrease at early cancerous lesions in the green spectral region (around 500 nm) and a relatively smaller decrease in the red spectral region (> 600 nm) when excited with blue-violet light (around 410 nm). It has been shown in recent years that this contrast may be attributed to a combined effect of epithelium thickening and a higher concentration of hemoglobin in the tissues underneath the (pre-)cancerous lesions. In this thesis, we contributed to the definition of the design of several new prototypes, which were subsequently tested in the clinical environment. We first showed that narrow-band excitation in the blue-violet could increase the tumor-to-normal spectral contrast in the green spectral region. Then, we quantified the intra- and inter-patient variations in the AF intensities in order to optimize the spectral response of the endoscopic fluorescence imaging system. For this purpose, we developed an endoscopic reference to be placed close to the bronchial mucosa during bronchoscopy. Finally, we evaluated a novel AF bronchoscope with blue-backscattered light on 144 patients. This new device showed increased sensitivity for pre-neoplastic lesions. Similar to what we observed in the bladder, it is likely that developing new imaging capabilities (including vascular imaging) will facilitate discriminating true from false positives in AF bronchoscopy. Here, we demonstrated that this magnification allowed us to resolve vessels with a diameter of about 30 µm. This resolution is likely to be sufficient to identify Shibuya's vascular criteria (loops, meshes, dotted vessels) on AF-positive lesions. These criteria make it possible to recognize pre-cancerous lesions, and thus can potentially decrease the false-positive rate of our AF imaging system. This magnification was also shown to be better suited for routine bronchoscopy, since it delivers sharper and more structured images to the operator.
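One simple way to quantify the higher vessel-to-background contrast seen under the blue and green band-pass illumination is a Weber-style contrast computed over a vessel mask; the helper below is an illustrative stand-in, not the metric used in the thesis.

```python
import numpy as np

def vessel_to_background_contrast(image, vessel_mask):
    """Weber contrast between vessel and background pixels (illustrative metric).

    Hemoglobin absorbs strongly in the blue (around 420 nm) and green (around
    540-580 nm), so under narrow-band illumination vessels are darker relative
    to the mucosa and this value is expected to be larger than for broadband
    white-light frames of the same site.
    """
    vessels = image[vessel_mask].mean()
    background = image[~vessel_mask].mean()
    return (background - vessels) / background

# Usage sketch: `frame_wl` and `frame_green` are grayscale frames of the same
# site and `mask` is a boolean vessel mask (e.g., from the segmentation above).
# print(vessel_to_background_contrast(frame_wl, mask))
# print(vessel_to_background_contrast(frame_green, mask))
```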

    Intelligent Circuits and Systems

    ICICS-2020 is the third conference initiated by the School of Electronics and Electrical Engineering at Lovely Professional University. It explored recent innovations of researchers working on the development of smart and green technologies in the fields of Energy, Electronics, Communications, Computers, and Control. ICICS provides innovators with opportunities to identify new avenues for the social and economic benefit of society. This conference bridges the gap between academia, R&D institutions, social visionaries, and experts from all strata of society, allowing them to present their ongoing research activities and fostering research relations between them. It provides opportunities for the exchange of new ideas, applications, and experiences in the field of smart technologies and for finding global partners for future collaboration. ICICS-2020 was conducted in two broad tracks: Intelligent Circuits & Intelligent Systems, and Emerging Technologies in Electrical Engineering.

    Dataset shift in land-use classification for optical remote sensing

    Multimodal dataset shifts consisting of both concept and covariate shifts are addressed in this study to improve texture-based land-use classification accuracy for optical panchromatic and multispectral remote sensing. Multitemporal and multisensor variations between training and test data are caused by atmospheric, phenological, sensor, illumination and viewing-geometry differences, which lead to supervised classification inaccuracies. The first dataset-shift reduction strategy involves input modification through shadow removal before feature extraction with gray-level co-occurrence matrix and local binary pattern features. Components of a Rayleigh quotient-based manifold alignment framework are investigated to reduce multimodal dataset shift at the input level of the classifier through unsupervised classification, followed by manifold matching to transfer classification labels by finding across-domain cluster correspondences. The ability of weighted hierarchical agglomerative clustering to partition poorly separated feature spaces is explored, and weight-generalized internal validation is used for unsupervised cardinality determination. Manifold matching is solved with the Hungarian algorithm using a cost matrix of geometric similarity measurements that assume the preservation of intrinsic structure across the dataset shift. Local neighborhood geometric co-occurrence frequency information is recovered and a novel integration thereof is shown to improve matching accuracy. A final strategy for addressing multimodal dataset shift is multiscale feature learning, which is used within a convolutional neural network to obtain optimal hierarchical feature representations instead of engineered texture features that may be sub-optimal. Feature learning is shown to produce features that are robust against multimodal acquisition differences in a benchmark land-use classification dataset. A novel multiscale input strategy is proposed for an optimized convolutional neural network that improves classification accuracy to a competitive level for the UC Merced benchmark dataset and outperforms single-scale input methods. All the proposed strategies for addressing multimodal dataset shift in land-use image classification have resulted in significant accuracy improvements for various multitemporal and multimodal datasets. Thesis (PhD), University of Pretoria, 2016.
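The cluster-correspondence step can be illustrated with SciPy's Hungarian-algorithm solver; the sketch below uses plain Euclidean distances between cluster centroids as the cost matrix, whereas the thesis builds the cost from geometric similarity measures, so treat it as a simplified stand-in.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_clusters(centroids_train, centroids_test):
    """Assign each test-domain cluster to a train-domain cluster.

    The cost matrix here is Euclidean distance between cluster centroids; the
    assumption is that the intrinsic structure of the clusters is preserved
    across the dataset shift, so low-cost assignments transfer class labels.
    """
    cost = cdist(centroids_train, centroids_test)   # shape (n_train, n_test)
    rows, cols = linear_sum_assignment(cost)        # Hungarian algorithm
    return dict(zip(cols, rows))                    # test cluster -> train cluster

# Toy usage: three texture clusters per domain, shifted between acquisitions.
train = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
test = train + 0.15                                 # simulated covariate shift
print(match_clusters(train, test))                  # {0: 0, 1: 1, 2: 2}
```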

    LIPIcs, Volume 274, ESA 2023, Complete Volume

    LIPIcs, Volume 274, ESA 2023, Complete Volume