
    Sustainability Indicators Past and Present: What Next?

    This paper discusses the current state of thought in the Sustainability Indicator (SI) community: what has been achieved, and where we are succeeding and failing. Recent years have witnessed the rise of “alternative facts” and “fake news”, and this paper discusses how SIs fit into this maelstrom, especially as they are designed to condense complexity into compact signals and it has long been known that they can be used selectively to support polarized sides of a debate. The paper draws on chapters in a new edited volume, the “Routledge Handbook of Sustainability Indicators and Indices”, edited by the authors. The book has 34 chapters written by a total of 59 SI experts from a wide range of backgrounds, and attempts to provide a picture of the past and present, and of the strengths and weaknesses, of SI development today. This paper is an “analysis of those analyses”: a mindful reflection on reflection, and an assessment of the malign and benign forces at work in 2018 within the SI arena. Finally, we seek to identify where SIs may be going over the coming, unpredictable years.

    Using effective medium theories to design tailored nanocomposite materials for optical systems

    Modern optical systems are subject to very restrictive performance, size, and cost requirements. Especially in portable systems, size is often the most important factor, which necessitates elaborate designs to achieve the desired specifications. However, current designs already operate very close to the physical limits, and further progress is difficult to achieve by changing only the complexity of the design. Another way of improving performance is to tailor the optical properties of materials specifically to the application at hand. Nanocomposites are a class of novel, customizable materials that enable such tailoring of the optical properties and promise to overcome many of the intrinsic disadvantages of polymers. However, despite considerable past research effort, these materials remain largely underutilized in optical systems. To shed light on this issue, we discuss in this paper how nanocomposites can be modeled using effective medium theories. In the second part, we investigate the fundamental requirements that have to be fulfilled to make nanocomposites suitable for optical applications, and show that it is indeed possible to fabricate such a material using existing methods. Furthermore, we show how nanocomposites can be used to tailor the refractive index and dispersion properties towards specific applications.
    Comment: This is a draft manuscript of a paper published in Proc. SPIE (Proceedings Volume 10745, Current Developments in Lens Design and Optical Engineering XIX, SPIE Optical Engineering + Applications, 2018).
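
    The abstract does not single out a specific effective medium theory, but the Maxwell Garnett mixing rule is a standard choice for dilute spherical inclusions and illustrates the idea. The sketch below is a minimal, hypothetical example: the host and inclusion indices (a PMMA-like polymer, TiO2-like nanoparticles) and the 5% fill fraction are illustrative assumptions, not values from the paper.

    import numpy as np

    def maxwell_garnett(eps_host, eps_incl, f):
        """Maxwell Garnett effective permittivity for spherical
        inclusions of permittivity eps_incl embedded at volume
        fraction f in a host of permittivity eps_host."""
        num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
        den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
        return eps_host * num / den

    # Illustrative values only: a PMMA-like host (n ~ 1.49) loaded
    # with TiO2-like nanoparticles (n ~ 2.5) at a 5% volume fraction.
    n_host, n_incl, f = 1.49, 2.5, 0.05
    eps_eff = maxwell_garnett(n_host**2, n_incl**2, f)
    print("effective refractive index:", np.sqrt(eps_eff))

    Sweeping the fill fraction, and evaluating the constituent indices at several wavelengths, is one way such a model can be used to tailor refractive index and dispersion toward an application, as the abstract describes.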

    Practical Volume Estimation by a New Annealing Schedule for Cooling Convex Bodies

    We study the problem of estimating the volume of convex polytopes, focusing on H- and V-polytopes, as well as zonotopes. Although a lot of effort has been devoted to practical algorithms for H-polytopes, there is no such method for the latter two representations. We propose a new, practical algorithm for all representations, which is faster than existing methods. It relies on Hit-and-Run sampling and combines a new simulated annealing method with the Multiphase Monte Carlo (MMC) approach. Our method introduces the following key features to make it adaptive: (a) it defines the sequence of convex bodies in MMC by a new annealing schedule whose length is, with high probability, shorter than in previous methods, and it removes the need to compute an enclosing and an inscribed ball; (b) it exploits statistical properties of rejection sampling and proposes a better empirical convergence criterion for specifying each step; (c) for zonotopes, it may use a sequence of convex bodies for MMC other than balls, where the chosen body adapts to the input. We offer an open-source, optimized C++ implementation and analyze its performance to show that it outperforms state-of-the-art software for H-polytopes by Cousins-Vempala (2016) and Emiris-Fisikopoulos (2018), while it undertakes volume computations that were intractable until now, as it is the first polynomial-time, practical method for V-polytopes and zonotopes that scales to high dimensions (currently 100). We further focus on zonotopes and characterize them by their order (number of generators over dimension), because this largely determines sampling complexity. We also analyze a related application, evaluating methods of zonotope approximation in engineering.
    Comment: 20 pages, 12 figures, 3 tables.
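
    The paper's algorithm builds on Hit-and-Run sampling; below is a minimal sketch of one Hit-and-Run step inside an H-polytope {x : Ax <= b}. This shows only the basic walk, with none of the paper's annealing schedule, convergence criteria, or optimizations; the unit-cube example is an illustrative assumption.

    import numpy as np

    def hit_and_run_step(A, b, x, rng):
        """One Hit-and-Run step inside the H-polytope {y : Ay <= b}.
        Picks a uniform random direction d, intersects the line
        x + t*d with every facet, and samples t uniformly on the
        resulting chord."""
        d = rng.standard_normal(A.shape[1])
        d /= np.linalg.norm(d)
        Ad = A @ d
        slack = b - A @ x              # nonnegative for an interior point
        with np.errstate(divide="ignore", invalid="ignore"):
            t = slack / Ad             # line-facet intersection parameters
        t_max = t[Ad > 0].min()        # nearest facet in direction +d
        t_min = t[Ad < 0].max()        # nearest facet in direction -d
        return x + rng.uniform(t_min, t_max) * d

    # Example: a random walk in the 3-dimensional unit cube [0, 1]^3.
    rng = np.random.default_rng(0)
    A = np.vstack([np.eye(3), -np.eye(3)])
    b = np.concatenate([np.ones(3), np.zeros(3)])
    x = np.full(3, 0.5)
    for _ in range(1000):
        x = hit_and_run_step(A, b, x, rng)
    print(x)

    In an MMC-style volume method, samples from walks like this are used to estimate the ratios of volumes of successive bodies in the annealing sequence, whose product (times a known base volume) gives the polytope volume.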

    Statistical analysis driven optimized deep learning system for intrusion detection

    Attackers have developed ever more sophisticated and intelligent ways to hack information and communication technology systems. The extent of damage an individual hacker can cause upon infiltrating a system is well understood. A potentially catastrophic scenario can be envisaged where a nation-state intercepting encrypted financial data gets hacked. Thus, intelligent cybersecurity systems have become essential for improved protection against malicious threats. However, as malware attacks continue to increase dramatically in volume and complexity, it has become ever more challenging for traditional analytic tools to detect and mitigate threats. Furthermore, the huge amount of data produced by large networks has made the recognition task even more complicated and challenging. In this work, we propose an innovative statistical-analysis-driven optimized deep learning system for intrusion detection. The proposed intrusion detection system (IDS) extracts optimized, more highly correlated features using big-data visualization and statistical analysis methods (human-in-the-loop), followed by a deep autoencoder for potential threat detection. Specifically, a pre-processing module eliminates outliers and converts categorical variables into one-hot-encoded vectors. The feature extraction module discards features with null values and selects the most significant features as input to the deep autoencoder model, which is trained in a greedy, layer-wise manner. The NSL-KDD dataset from the Canadian Institute for Cybersecurity is used as a benchmark to evaluate the feasibility and effectiveness of the proposed architecture. Simulation results demonstrate the potential of our proposed system and show that it outperforms existing state-of-the-art methods and recently published approaches. Ongoing work includes further optimization and real-time evaluation of our proposed IDS.
    Comment: To appear in the 9th International Conference on Brain Inspired Cognitive Systems (BICS 2018).
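
    The abstract does not give the autoencoder's architecture, so the following Keras sketch is a generic, hypothetical stand-in for the detection stage: the layer sizes, the synthetic "normal" training data, the end-to-end training (rather than the greedy, layer-wise scheme the abstract mentions), and the reconstruction-error threshold are all assumptions made for illustration.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Hypothetical feature dimension after one-hot encoding and
    # feature selection; the paper's actual sizes are not given here.
    n_features = 64

    # A small symmetric deep autoencoder. Reconstruction error on a
    # new record can serve as an anomaly (potential-threat) score.
    encoder = keras.Sequential([
        keras.Input(shape=(n_features,)),
        layers.Dense(32, activation="relu"),
        layers.Dense(8, activation="relu"),
    ])
    decoder = keras.Sequential([
        layers.Dense(32, activation="relu"),
        layers.Dense(n_features, activation="sigmoid"),
    ])
    autoencoder = keras.Sequential([encoder, decoder])
    autoencoder.compile(optimizer="adam", loss="mse")

    # Train on normal traffic only (synthetic stand-in data here),
    # then flag records whose reconstruction error is unusually high.
    x_normal = np.random.rand(1000, n_features).astype("float32")
    autoencoder.fit(x_normal, x_normal, epochs=5, batch_size=32, verbose=0)

    x_test = np.random.rand(10, n_features).astype("float32")
    errors = np.mean((autoencoder.predict(x_test) - x_test) ** 2, axis=1)
    threshold = errors.mean() + 3 * errors.std()   # simple heuristic cutoff
    print("flagged records:", np.where(errors > threshold)[0])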

    Do Complexity Measures of Frontal EEG Distinguish Loss of Consciousness in Geriatric Patients Under Anesthesia?

    While geriatric patients have a high likelihood of requiring anesthesia, they carry an increased risk of adverse cognitive outcomes from its use. Previous work suggests this could be mitigated by better intraoperative monitoring using indexes defined by several processed electroencephalogram (EEG) measures. Unfortunately, inconsistencies between patients and anesthetic agents in current analysis techniques have limited the adoption of EEG as a standard of care. In an attempt to identify new analyses that discriminate clinically relevant anesthesia timepoints, we tested 1/f frequency scaling as well as measures of complexity from nonlinear dynamics. Specifically, we tested whether analyses that characterize time-delayed embeddings, correlation dimension (CD), phase-space geometric analysis, and multiscale entropy (MSE) capture loss-of-consciousness changes in EEG activity. We performed these analyses on EEG activity collected from a traditionally hard-to-monitor patient population: geriatric patients on beta-adrenergic blockade who were anesthetized using a combination of fentanyl and propofol. We compared these analyses to traditional frequency-derived measures to test how well they discriminated EEG states before and after loss of response to verbal stimuli. We found spectral changes similar to those reported previously during loss of response, as well as significant changes in 1/f frequency scaling. Additionally, our phase-space geometric characterization of time-delayed embeddings showed significant differences before and after loss of response, as did measures of MSE. Our results suggest that these new spectral and complexity measures can capture subtle differences in EEG activity with anesthesia administration, differences that future work may leverage to improve geriatric patient monitoring.
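
    Of the measures listed, 1/f frequency scaling has the most compact generic implementation: estimate the power spectral density and fit a line in log-log space. The sketch below shows one common way to do this, not the authors' pipeline; the Welch parameters, the 1-40 Hz fitting band, and the 250 Hz sampling rate are assumptions.

    import numpy as np
    from scipy.signal import welch

    def one_over_f_slope(eeg, fs, fmin=1.0, fmax=40.0):
        """Estimate the 1/f scaling exponent of an EEG segment by
        fitting a line to log10(power) vs. log10(frequency) from a
        Welch periodogram, restricted to [fmin, fmax] Hz."""
        freqs, psd = welch(eeg, fs=fs, nperseg=4 * int(fs))
        band = (freqs >= fmin) & (freqs <= fmax)
        slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
        return slope   # more negative means steeper 1/f decay

    # Synthetic check: white noise has a slope near 0, whereas real
    # EEG typically yields a clearly negative exponent.
    fs = 250.0                         # assumed sampling rate in Hz
    segment = np.random.randn(int(60 * fs))
    print("estimated 1/f slope:", one_over_f_slope(segment, fs))

    Comparing such slope estimates before and after loss of response to verbal stimuli is the kind of contrast the study describes.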