A conceptual framework for development of sustainable development indicators
There was a boom in the development of sustainable development indicators (SDIs) after the notion of sustainability became popular through the Brundtland Commission's report. Since then, numerous efforts have been made worldwide to construct SDIs at global, national, and local scales, but in India not a single city has registered an initiative for indicator development. Motivated by this dearth of studies, and by the prevailing sustainability risks in India's million-plus cities, research is being undertaken at the Indira Gandhi Institute of Development Research (IGIDR), Mumbai, India, to develop a set of sustainable development indicators to study the resource dynamics of the city of Mumbai. As a first step in the process, the ground for the development of SDIs is prepared through the construction of a framework. A multi-view black box (MVBB) framework has been constructed by eliminating the system component from the extended urban metabolism model (EUMM) and introducing three-dimensional views of economic efficiency (EE), social wellbeing (SW), and ecological acceptability (EA). A domain-based classification was adopted to facilitate a scientifically credible set of indicators. The important domain areas are identified and, applying the MVBB framework, a model has been developed for each domain.
Keywords: Urban metabolism, Resources transformation, Economic efficiency, Society, Ecology, Monitoring and evaluation, City development, Black box, Productization of process
SoK: Design Tools for Side-Channel-Aware Implementations
Side-channel attacks that leak sensitive information through a computing
device's interaction with its physical environment have proven to be a severe
threat to devices' security, particularly when adversaries have unfettered
physical access to the device. Traditional approaches for leakage detection
measure the physical properties of the device. Hence, they cannot be used
during the design process and fail to provide root cause analysis. An
alternative approach that is gaining traction is to automate leakage detection
by modeling the device. The demand to understand the scope, benefits, and
limitations of the proposed tools intensifies with the increase in the number
of proposals.
In this SoK, we classify approaches to automated leakage detection based on
the model's source of truth. We classify the existing tools on two main
parameters: whether the model includes measurements from a concrete device and
the abstraction level of the device specification used for constructing the
model. We survey the proposed tools to determine the current knowledge level
across the domain and identify open problems. In particular, we highlight the
absence of evaluation methodologies and metrics that would compare proposals'
effectiveness from across the domain. We believe that our results help
practitioners who want to use automated leakage detection and researchers
interested in advancing the knowledge and improving automated leakage
detection.
A Spitzer Unbiased Ultradeep Spectroscopic Survey
We carried out an unbiased, spectroscopic survey using the low-resolution
module of the infrared spectrograph (IRS) on board Spitzer targeting two 2.6
square arcminute regions in the GOODS-North field. IRS was used in spectral
mapping mode with 5 hours of effective integration time per pixel. One region
was covered between 14 and 21 microns and the other between 20 and 35 microns.
We extracted spectra for 45 sources. About 84% of the sources have reported
detections by GOODS at 24 microns, with a median F_nu(24um) ~ 100 uJy. All but
one source are detected in all four IRAC bands, 3.6 to 8 microns. We use a new
cross-correlation technique to measure redshifts and estimate IRS spectral
types; this was successful for ~60% of the spectra. Fourteen sources show
significant PAH emission, four show mostly SiO absorption, eight present mixed
spectral signatures (low PAH and/or SiO), and two show a single line in
emission. For the remaining 17, no spectral features were detected. Redshifts
range from z ~ 0.2 to z ~ 2.2, with a median of 1. IR luminosities are roughly
estimated from the 24 micron flux densities, and have median values of 2.2 x
10^{11} L_{\odot} and 7.5 x 10^{11} L_{\odot} at z ~ 1 and z ~ 2 respectively.
This sample has fewer AGN than previous faint samples observed with IRS, which
we attribute to the fainter luminosities reached here.
Comment: Published in Ap
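The cross-correlation idea behind the redshift measurement can be illustrated with a toy sketch (this is a generic illustration, not the survey's actual pipeline; the function, grids, and Gaussian feature are all assumptions): redshift a template spectrum onto the observed wavelength grid for each trial z and pick the trial that maximizes the normalized correlation.

```python
import numpy as np

def xcorr_redshift(wave_obs, flux_obs, wave_tpl, flux_tpl, z_grid):
    """Toy cross-correlation redshift estimator (illustrative only):
    for each trial z, stretch the rest-frame template by (1 + z),
    resample it onto the observed grid, and correlate."""
    def norm(f):
        f = f - f.mean()
        n = np.linalg.norm(f)
        return f / (n if n > 0 else 1.0)

    fo = norm(flux_obs)
    scores = []
    for z in z_grid:
        # template wavelengths redshift as lambda_obs = lambda_rest * (1 + z)
        ft = np.interp(wave_obs, wave_tpl * (1 + z), flux_tpl, left=0.0, right=0.0)
        scores.append(float(norm(ft) @ fo))
    return z_grid[int(np.argmax(scores))]
```

With a single emission feature in the template, the correlation peaks at the trial redshift that aligns the feature with its observed counterpart; real pipelines work with many features and noise-weighted spectra.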
Robust techniques and applications in fuzzy clustering
This dissertation addresses issues central to fuzzy classification. The sensitivity to noise and outliers of least-squares-minimization-based clustering techniques, such as Fuzzy c-Means (FCM) and its variants, is addressed. In this work, two novel and robust clustering schemes are presented and analyzed in detail. They approach the problem of robustness from different perspectives. The first scheme scales down the FCM memberships of data points based on the distance of the points from the cluster centers. The scaling applied to outliers reduces their membership in true clusters. This scheme, known as mega-clustering, defines a conceptual mega-cluster, which is a collective cluster of all data points but views outliers and good points differently (as opposed to the concept of Dave's noise cluster). The scheme is presented and validated with experiments, and its similarities with Noise Clustering (NC) are also presented. The other scheme is based on the feasible solution algorithm that implements the Least Trimmed Squares (LTS) estimator. The LTS estimator is known to be resistant to noise and has a high breakdown point. The feasible solution approach also guarantees convergence of the solution set to a global optimum. Experiments show the practicability of the proposed schemes in terms of their computational requirements and the attractiveness of their simple frameworks.
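The baseline both robust schemes build on is standard Fuzzy c-Means. A minimal sketch of plain FCM follows (not the dissertation's membership-scaling variant; the function name and defaults are illustrative), alternating the center update and the membership update:

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy c-Means sketch: alternate center and membership updates.
    X: (n, d) data, c: number of clusters, m: fuzzifier (> 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # each point's memberships sum to 1
    for _ in range(n_iter):
        W = U ** m                             # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # guard against division by zero
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=2)
    return U, centers
```

The robust schemes described above intervene exactly at the membership step: scaling down U for points far from all centers (mega-clustering), or trimming the largest residuals before the center update (the LTS-based scheme).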
The issue of validation of clustering results has often received less attention than clustering itself. Fuzzy and non-fuzzy cluster validation schemes are reviewed and a novel methodology for cluster validity using a test for random position hypothesis is developed. The random position hypothesis is tested against an alternative clustered hypothesis on every cluster produced by the partitioning algorithm. The Hopkins statistic is used as a basis to accept or reject the random position hypothesis, which is also the null hypothesis in this case. The Hopkins statistic is known to be a fair estimator of randomness in a data set. The concept is borrowed from the clustering tendency domain and its applicability to validating clusters is shown here.
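The Hopkins statistic compares nearest-neighbor distances from uniform random probe points against those from sampled data points; values near 0.5 indicate spatial randomness, values near 1 indicate clustering. A minimal sketch under the simple convention (no dimension exponent; conventions vary, and the function name is illustrative):

```python
import numpy as np

def hopkins(X, m=None, seed=0):
    """Hopkins statistic sketch: ~0.5 for spatially random data,
    approaching 1 for clustered data (simple convention)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    m = m or max(1, n // 10)
    # uniform probe points over the bounding box of the data
    probes = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, d))
    sample = X[rng.choice(n, size=m, replace=False)]

    def nearest(points, ref, exclude_self=False):
        dist = np.linalg.norm(points[:, None, :] - ref[None, :, :], axis=2)
        if exclude_self:
            dist[dist == 0] = np.inf   # a sample point is its own neighbor
        return dist.min(axis=1)

    u = nearest(probes, X)                     # probe -> nearest data point
    w = nearest(sample, X, exclude_self=True)  # sample -> nearest other point
    return u.sum() / (u.sum() + w.sum())
```

For clustered data the sample-to-neighbor distances w shrink while the probe distances u do not, pushing the ratio toward 1; this is the randomness test the validation methodology above builds on.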
A unique feature selection procedure for use with large, high-dimensional molecular conformational datasets is also developed. The intelligent feature extraction scheme not only reduces the dimensionality of the feature space but also eliminates contentious issues such as those associated with labeling symmetric atoms in the molecule. The feature vector is converted to a proximity matrix and used as input to the fuzzy relational clustering (FRC) algorithm, with very promising results. Results are also validated using several cluster validity measures from the literature. Another application of fuzzy clustering considered here is image segmentation. Image analysis on extremely noisy images is carried out as a precursor to the development of an automated real-time condition-state monitoring system for underground pipelines. A two-stage FCM with intelligent feature selection is implemented as the segmentation procedure, and results on a test image are presented. A conceptual framework for automated condition state assessment is also developed.
IROF: a low resource evaluation metric for explanation methods
The adoption of machine learning in health care hinges on the transparency of
the algorithms used, creating a need for explanation methods. However,
despite a growing literature on explaining neural networks, no consensus has
been reached on how to evaluate those explanation methods. We propose IROF, a
new approach to evaluating explanation methods that circumvents the need for
manual evaluation. Compared to other recent work, our approach requires several
orders of magnitude fewer computational resources and no human input, making it
accessible to lower-resource groups and robust to human bias.
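The flavor of such an automated evaluation can be illustrated with a toy sketch: remove image segments in the order an explanation ranks them and track how the model's class score degrades. Everything here is an illustrative assumption (the mean-value fill, the segment labels, and the `model_score` callable), not the paper's exact procedure:

```python
import numpy as np

def irof_curve(image, segments, importance, model_score):
    """Toy degradation curve: replace segments with the image mean,
    most-important-first, and record the relative class score.
    segments: integer label per pixel; importance: score per segment;
    model_score: image -> scalar class score (hypothetical callable)."""
    order = np.argsort(importance)[::-1]      # most important segment first
    img = image.copy()
    baseline = model_score(img)
    scores = [1.0]
    fill = image.mean()
    for seg in order:
        img[segments == seg] = fill           # ablate one segment
        scores.append(model_score(img) / baseline)
    return np.array(scores)

def irof_aoc(scores):
    """Area over the curve: higher means the explanation found the
    decision-relevant regions sooner."""
    return 1.0 - scores.mean()
```

A good explanation ranks the truly decision-relevant segments first, so the score collapses early and the area over the curve is large; no human judgment enters the loop, which is the point of the metric.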