Trustworthiness in Mobile Cyber Physical Systems
Computing and communication capabilities are increasingly embedded in diverse objects and structures in the physical environment, linking the ‘cyberworld’ of computing and communications with the physical world. Such applications are called cyber physical systems (CPS). The increased involvement of real-world entities leads to a greater demand for trustworthy systems; by "system trustworthiness" we mean the ability to guarantee continuous service in the presence of internal errors or external attacks. Mobile CPS (MCPS) is a prominent subcategory of CPS in which the physical component has no permanent location. Mobile Internet devices already provide ubiquitous platforms for building novel MCPS applications. The objective of this Special Issue is to contribute to research on modern and future trustworthy MCPS, including their design, modeling, simulation, and dependability. It is imperative to address the issues critical to their mobility, report significant advances in the underlying science, and discuss the challenges of development and implementation in various applications of MCPS.
Efficient Monte Carlo Based Methods for Variability Aware Analysis and Optimization of Digital Circuits
Process variability is of increasing concern in modern nanometer-scale CMOS. The
suitability of Monte Carlo based algorithms for efficient analysis and optimization of
digital circuits under variability is explored in this work. Random sampling based Monte
Carlo techniques incur a high computational cost due to the large sample sizes required
to achieve target accuracy. This motivates the need for intelligent sample selection
techniques to reduce the number of samples. As these techniques depend on information
about the system under analysis, there is a need to tailor the techniques to fit the specific
application context. We propose efficient smart sampling based techniques for timing and
leakage power consumption analysis of digital circuits. For timing analysis, we
show that the proposed method requires 23.8X fewer samples on average to achieve
accuracy comparable to a random sampling approach on the benchmark circuits studied. It is
further illustrated that the parallelism available in such techniques can be exploited using
parallel machines, especially graphics processing units (GPUs). We show that SH-QMC
implemented on multiple GPUs is twice as fast as a single static timing analysis (STA)
run on a CPU for the benchmark circuits considered. Next, we study the possibility of using
such information from statistical analysis to optimize digital circuits under variability,
for example to achieve minimum silicon area through gate sizing while meeting a timing constraint. Though
several techniques to optimize circuits have been proposed in the literature, it is not clear how
much gain these approaches obtain specifically through the utilization of statistical
information. Therefore, an effective lower bound computation technique is proposed to
enable efficient comparison of statistical design optimization techniques. It is shown that
even techniques which use only limited statistical information can achieve results to
within 10% of the proposed lower bound. We conclude that future optimization research
should shift focus from using more statistical information to achieving greater efficiency
and parallelism to obtain speed-ups.
Ph.D. Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/78936/1/tvvin_1.pd
Algorithmic techniques for nanometer VLSI design and manufacturing closure
As Very Large Scale Integration (VLSI) technology moves to the nanoscale
regime, design and manufacturing closure becomes very difficult to achieve due to
increasing chip and power density. Imperfections due to process, voltage, and temperature variations aggravate the problem. Uncertainty in the electrical characteristics of
individual devices and wires may cause significant performance deviations or even functional failures. These impose tremendous challenges to the continuation of Moore's
law and to the growth of the semiconductor industry.
Efforts are needed in both the deterministic and the variation-aware design stages.
This research proposes various innovative algorithms addressing both stages to obtain
a design with high frequency, low power, and high robustness. For deterministic optimization, new buffer insertion and gate sizing techniques are proposed. For
variation-aware optimizations, new lithography-driven and post-silicon tuning-driven
design techniques are proposed.
For buffer insertion, a new slew buffering formulation is presented and proved
to be NP-hard. Despite this, a highly efficient algorithm that runs over 90x faster
than the best alternatives is proposed. The algorithm is also extended to handle
continuous buffer locations and blockages.
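Buffer insertion algorithms of this kind typically keep, at each node, a set of candidate solutions and prune dominated ones. A minimal sketch of that pruning step (a generic van Ginneken-style dominance check, not the thesis's specific slew-buffering algorithm):

```python
def prune(candidates):
    """Drop dominated buffering solutions. A candidate (load, required_time)
    is dominated if another candidate has no larger load and no smaller
    required arrival time."""
    kept = []
    for load, req in sorted(candidates):
        # Sorted by increasing load: every kept candidate has smaller (or
        # equal) load, so this one survives only if its required arrival
        # time is strictly larger than all kept so far.
        if not kept or req > kept[-1][1]:
            kept.append((load, req))
    return kept
```

Keeping only the non-dominated frontier is what keeps the candidate sets, and hence the runtime, small as the dynamic program walks up the routing tree.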
For gate sizing, a new algorithm is proposed to handle a discrete gate library, in
contrast to the unrealistic continuous gate library assumed by most existing algorithms. Our approach is a continuous-solution-guided dynamic programming approach, which
combines the high solution quality of dynamic programming with the short runtime
of rounding a continuous solution.
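The rounding step that a continuous-solution-guided approach relies on can be sketched as follows; the library values are hypothetical:

```python
import bisect

# Hypothetical discrete cell library of available drive strengths.
LIBRARY = [1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0]

def nearest_discrete(size, library=LIBRARY):
    """Snap a continuous gate size to the closest library entry."""
    i = bisect.bisect_left(library, size)
    if i == 0:
        return library[0]
    if i == len(library):
        return library[-1]
    lo, hi = library[i - 1], library[i]
    return lo if size - lo <= hi - size else hi

def dp_candidates(size, library=LIBRARY):
    """The library entries bracketing the continuous optimum -- the small
    neighbourhood a guided dynamic program would explore per gate,
    instead of the whole library."""
    i = bisect.bisect_left(library, size)
    return library[max(i - 1, 0):i + 1]
```

Restricting the dynamic program to the bracketing candidates per gate is what lets the continuous solution guide the discrete search.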
For lithography-driven optimization, the problem of cell placement considering
manufacturability is studied. Three algorithms are proposed to handle cell flipping
and relocation. They are based on dynamic programming and graph-theoretic approaches, and can provide different tradeoffs between variation reduction and wirelength increase.
For post-silicon tuning-driven optimization, the problem of unified adaptivity
optimization over logic and clock signal tuning is studied, which enables significant
resource savings. The new algorithm is based on a novel linear programming
formulation, solved by an advanced robust linear programming technique.
The continuous solution is then discretized using binary-search-accelerated dynamic
programming, batch-based optimization, and Latin hypercube sampling based fast
simulation.
Data Mining Framework for Monitoring Attacks in Power Systems
Vast deployment of Wide Area Measurement Systems (WAMS) has facilitated increased understanding and intelligent management of today's complex power systems. Phasor Measurement Units (PMUs), the integral part of WAMS, transmit high-quality system information to control centers every second. With the North American SynchroPhasor Initiative (NASPI), the number of PMUs deployed across the system has been growing rapidly, and with it the amount of accumulated data. This growth necessitates sophisticated data processing, data reduction, data analysis, and data mining techniques. WAMS is also closely associated with information and communication technologies capable of implementing intelligent protection and control actions to improve the reliability and efficiency of existing power systems. Along with the myriad advantages that these measurement, information, and communication technologies bring, they create a close synergy between heterogeneous physical and cyber components, opening access points for cyber intrusions. This easy access has resulted in various cyber attacks on control equipment, consequently increasing the vulnerability of power systems.
This research proposes a data mining based methodology capable of identifying attacks in the system using real-time data. The proposed methodology employs an online clustering technique to monitor only a limited number of measuring units (PMUs) deployed across the system. Two different classification algorithms are implemented to detect the occurrence of attacks along with their location. This research also proposes a methodology to differentiate physical attacks from malicious data attacks and to declare attack severity and criticality.
The proposed methodology is implemented on the IEEE 24-bus Reliability Test System using data generated for attacks at different locations, under different system topologies and operating conditions. Cross-validation studies are performed to determine all the user-defined variables involved in the data mining studies. The performance of the proposed methodology is thoroughly analyzed and results are demonstrated. Finally, the strengths and limitations of the proposed approach are discussed.
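As an illustration of how an online clustering stage might flag anomalous measurements, here is a minimal running-centroid detector. This is a deliberately simplified stand-in, not the thesis's actual clustering and classification algorithms, and the threshold would in practice come from cross-validation:

```python
import math

class OnlineAnomalyDetector:
    """Maintain a running centroid of PMU measurement vectors and flag
    any sample whose distance from it exceeds a fixed threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.centroid = None
        self.count = 0

    def observe(self, sample):
        """Return True if the sample looks anomalous (possible attack)."""
        if self.centroid is None:
            self.centroid = list(sample)
            self.count = 1
            return False
        if math.dist(sample, self.centroid) > self.threshold:
            return True  # far from normal operating cluster
        # Normal sample: fold it into the running centroid.
        self.count += 1
        self.centroid = [c + (x - c) / self.count
                         for c, x in zip(self.centroid, sample)]
        return False
```

A classification stage would then take over on flagged samples to label the attack type and location.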
True scale-free networks hidden by finite size effects
We analyze about two hundred naturally occurring networks with distinct
dynamical origins to formally test whether the commonly assumed hypothesis of
an underlying scale-free structure is generally viable. This hypothesis has recently
been questioned on the basis of statistical tests of the validity of power-law
degree distributions against real data. Specifically, we
analyze by finite-size scaling analysis the datasets of real networks to check
whether purported departures from the power law behavior are due to the
finiteness of the sample size. If so, power laws would be recovered as the
sample-size-induced cutoff grows progressively larger. We
find that a large number of the networks studied follow a finite size scaling
hypothesis without any self-tuning. This is the case for biological protein
interaction networks, technological computer and hyperlink networks, and
informational networks in general. Marked deviations appear in other cases,
especially infrastructure and transportation but also social networks. We
conclude that underlying scale invariance properties of many naturally
occurring networks are extant features, often clouded by finite-size effects due
to the nature of the sample data.
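The kind of fitting involved, estimating a power-law exponent from degree data above a lower cutoff, can be sketched with the standard continuous maximum-likelihood estimator; the synthetic data generator is for illustration only:

```python
import math
import random

def sample_power_law(n, gamma, xmin=1.0, seed=1):
    """Draw n values from a continuous power law p(x) ~ x**-gamma for
    x >= xmin, via inverse-transform sampling."""
    rng = random.Random(seed)
    return [xmin * (1.0 - rng.random()) ** (-1.0 / (gamma - 1.0))
            for _ in range(n)]

def mle_exponent(xs, xmin=1.0):
    """Maximum-likelihood (Hill-type) estimate of the exponent for the
    tail x >= xmin, as used in statistical tests of power-law fits."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)
```

A finite-size scaling check would repeat such a fit while raising `xmin` with the sample size and asking whether the estimated exponent stabilizes.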
Self-organised criticality via retro-synaptic signals in complex neural networks
The brain is a complex system par excellence. Its intricate structure has become clearer
recently, and it has been reported that it shares some properties common to complex
networks, such as the small-world property, the presence of hubs, and assortative mixing,
among others. These properties provide the brain with a robust architecture appropriate
for efficient information transmission across different brain regions. Nevertheless,
how these topological properties emerge in neural networks is still an open
question.
Moreover, in the last decade the observation of neuronal avalanches in neocortical
circuits suggested the presence of self-organised criticality in neural systems. The
occurrence of this kind of dynamics implies several benefits to neural computation.
However, the mechanisms that give rise to critical behaviour in these systems, and
how they interact with other neuronal processes such as synaptic plasticity are not
fully understood.
In this thesis, we study self-organised criticality and neural systems in the context
of complex networks. Our work differs from other similar approaches by stressing the
importance of analysing the influence of hubs, high clustering coefficients, and synaptic
plasticity on the collective dynamics of the system. Additionally, we introduce a
metric that we call node success to assess the effectiveness of a spike in terms of its
capacity to trigger cascading behaviour. We present a synaptic plasticity rule based
on this metric, which enables the system to reach the critical state of its collective dynamics
without the need to fine-tune any control parameter. Our results suggest that
retro-synaptic signals could be responsible for the emergence of self-organised criticality
in brain networks. Furthermore, based on the measure of node success, we find
what kind of topology allows nodes to be more successful at triggering cascades of
activity. Our study comprises four different scenarios: i) static synapses, ii) dynamic
synapses under spike-timing-dependent plasticity (STDP), iii) dynamic synapses under
node-success-driven plasticity (NSDP), and iv) dynamic synapses under both NSDP
and STDP mechanisms. We observe that small-world structures emerge when critical
dynamics are combined with STDP mechanisms in a particular type of topology.
Moreover, we go beyond simple spike pairs of STDP, and implement spike triplets to
assess their influence on the dynamics of the system. To the best of our knowledge,
this is the first study to implement this version of STDP in the context of critical
dynamics in complex networks.
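Critical dynamics of this kind are often characterized by the branching ratio and the resulting avalanche-size statistics. A minimal sketch of how avalanche sizes are measured in a plain branching process (not the thesis's network model with plasticity):

```python
import random

def avalanche_size(sigma, rng, fan_out=10, max_size=100_000):
    """Size of one avalanche in a branching process: each active unit
    tries to excite fan_out targets, each with probability sigma/fan_out,
    so sigma is the branching ratio (mean offspring per active unit)."""
    active, size = 1, 1
    while active and size < max_size:
        offspring = sum(1 for _ in range(active * fan_out)
                        if rng.random() < sigma / fan_out)
        active = offspring
        size += offspring
    return size
```

At sigma = 1 (the critical point) avalanche sizes follow a power law; below it the mean size is 1 / (1 - sigma), which a subcritical simulation reproduces. A self-organised plasticity rule, such as the node-success-driven rule above, would adjust synaptic weights so the effective branching ratio drifts toward 1 without external tuning.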
An Introduction to Recursive Partitioning: Rationale, Application and Characteristics of Classification and Regression Trees, Bagging and Random Forests
Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Random forests in particular, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine, and bioinformatics within the past few years.
High dimensional problems are common not only in genetics, but also in some areas of psychological research, where only few subjects can be measured due to time or cost constraints, yet a large amount of data is generated for each subject. Random forests have been shown to achieve a high prediction accuracy in such applications, and provide descriptive variable importance measures reflecting the impact of each variable in both main effects and interactions.
The aim of this work is to introduce the principles of the standard recursive partitioning methods as well as recent methodological improvements, to illustrate their usage for low and high dimensional data exploration, but also to point out limitations of the methods and potential pitfalls in their practical application.
Application of the methods is illustrated using freely available implementations in the R system for statistical computing.
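A minimal from-scratch sketch of the bagging idea, bootstrap resampling plus majority voting over weak learners, is shown below in Python (the work itself points to R implementations); this toy uses decision stumps rather than full trees:

```python
import random
from collections import Counter

def fit_stump(X, y):
    """Exhaustively pick the single-feature threshold split with the
    fewest misclassifications (labels are 0/1)."""
    best = None  # (error, feature, threshold, label_for_left_side)
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            for left in (0, 1):
                err = sum((left if row[j] <= t else 1 - left) != yi
                          for row, yi in zip(X, y))
                if best is None or err < best[0]:
                    best = (err, j, t, left)
    _, j, t, left = best
    return lambda row: left if row[j] <= t else 1 - left

def bagged_classifier(X, y, n_learners=11, seed=0):
    """Bagging: fit each stump on a bootstrap resample of the training
    data, then predict by majority vote over the ensemble."""
    rng = random.Random(seed)
    n = len(X)
    stumps = []
    for _ in range(n_learners):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap sample
        stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: Counter(s(row) for s in stumps).most_common(1)[0][0]
```

Random forests add one further ingredient on top of bagging: each split considers only a random subset of the predictor variables, which decorrelates the trees.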
Field-control, phase-transitions, and life's emergence
Instances of critical-like characteristics in living systems at each
organizational level, as well as the spontaneous emergence of computation
(Langton), indicate the relevance of self-organized criticality (SOC). But
extrapolating complex bio-systems to life's origins, brings up a paradox: how
could simple organics--lacking the 'soft matter' response properties of today's
bio-molecules--have dissipated energy from primordial reactions in a controlled
manner for their 'ordering'? Nevertheless, a causal link of life's macroscopic
irreversible dynamics to the microscopic reversible laws of statistical
mechanics is indicated via the 'functional-takeover' of a soft magnetic
scaffold by organics (c.f. Cairns-Smith's 'crystal-scaffold'). A
field-controlled structure offers a mechanism for bootstrapping--bottom-up
assembly with top-down control: its super-paramagnetic components obey
reversible dynamics, but its dissipation of H-field energy for aggregation
breaks time-reversal symmetry. The responsive adjustments of the controlled
(host) mineral system to environmental changes would bring about mutual
coupling between random organic sets supported by it; here the generation of
long-range correlations within organic (guest) networks could include SOC-like
mechanisms. And, such cooperative adjustments enable the selection of the
functional configuration by altering the inorganic network's capacity to assist
a spontaneous process. A non-equilibrium dynamics could now drive the
kinetically-oriented system towards a series of phase-transitions with
appropriate organic replacements 'taking over' its functions.
Comment: 54 pages, PDF file