Energy-efficient switching of nanomagnets for computing: Straintronics and other methodologies
The need for increasingly powerful computing hardware has spawned many ideas
stipulating, primarily, the replacement of traditional transistors with
alternate "switches" that dissipate miniscule amounts of energy when they
switch and provide additional functionality that are beneficial for information
processing. An interesting idea that has emerged recently is the notion of
using two-phase (piezoelectric/magnetostrictive) multiferroic nanomagnets with
bistable (or multi-stable) magnetization states to encode digital information
(bits), and switching the magnetization between these states with small
voltages (that strain the nanomagnets) to carry out digital information
processing. The switching delay is ~1 ns and the energy dissipated in the
switching operation can be a few to tens of aJ, which is comparable to, or
smaller than, the energy dissipated in switching a modern-day transistor.
Unlike a transistor, a nanomagnet is "non-volatile", so a nanomagnetic
processing unit can store the result of a computation locally without refresh
cycles, thereby allowing it to double as both logic and memory. These dual-role
elements promise new, robust, energy-efficient, high-speed computing and signal
processing architectures (usually non-Boolean and often non-von-Neumann) that
can be more powerful, architecturally superior (fewer circuit elements needed
to implement a given function) and sometimes faster than their traditional
transistor-based counterparts. This topical review covers the important
advances in computing and information processing with nanomagnets with emphasis
on strain-switched multiferroic nanomagnets acting as non-volatile and
energy-efficient switches - a field known as "straintronics". It also outlines
key challenges in straintronics.
Comment: This is a commissioned topical review article published in Nanotechnology
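As a rough consistency check on the energy figures quoted above, and assuming illustrative values that are not taken from the review (gate capacitance C ≈ 0.1 fF and supply voltage V_DD ≈ 0.7 V for a modern transistor), the transistor switching energy is of the order

$E_{\text{switch}} \approx C V_{DD}^{2} \approx (0.1\,\text{fF}) \times (0.7\,\text{V})^{2} \approx 5 \times 10^{-17}\,\text{J} \approx 50\,\text{aJ},$

which is indeed comparable to the "few to tens of aJ" range cited for strain-switched nanomagnets.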
Optimal Intelligent Control for Wind Turbulence Rejection in WECS Using ANNs and Genetic Fuzzy Approach
One of the disadvantages of connecting wind energy conversion systems
(WECSs) to transmission networks is the strong turbulence of the wind speed.
The effects of this turbulence must therefore be controlled. Nowadays,
pitch-controlled WECSs are increasingly used for variable speed and pitch wind
turbines. Megawatt-class wind turbines in wind farms generally turn at variable
speed. Turbine operation must therefore be controlled in order to maximize the
conversion efficiency below rated power and reduce loading on the drive-train.
Due to random and non-linear nature of the wind turbulence and the ability of
Multi-Layer Perceptron (MLP) and Radial Basis Function (RBF) Artificial Neural
Networks (ANNs) in the modeling and control of this turbulence, in this study,
widespread changes of the wind have been studied using MLP and RBF artificial NNs.
In addition, a new genetic fuzzy system has been successfully applied in this
study to identify the wind disturbance at the turbine input. The output power
is thus regulated within its optimal and nominal range by pitch-angle
regulation. Consequently, the proposed approaches regulate the output
aerodynamic power and torque within the nominal range.
Comment: International journal of soft computing & soft engineering, 201
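The abstract does not give the network configurations used, so the following is only a minimal sketch of RBF-based wind modelling: a numpy-only Gaussian RBF network trained by least squares to predict a synthetic wind-speed series one step ahead. The series, lag length, number of centres and kernel width are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "wind speed" series: slowly varying mean plus turbulence (illustrative only).
t = np.linspace(0.0, 60.0, 600)
wind = 12.0 + 2.0 * np.sin(0.3 * t) + rng.normal(0.0, 0.8, t.size)

# One-step-ahead prediction dataset built from a sliding window of past samples.
lag = 5
X = np.column_stack([wind[i:len(wind) - lag + i] for i in range(lag)])
y = wind[lag:]

# RBF network: Gaussian hidden units at randomly sampled centres, linear output layer.
n_centers = 30
centers = X[rng.choice(len(X), n_centers, replace=False)]
width = np.mean(np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2))

def hidden(X):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)

# Output weights by least squares, a common way to train the RBF output layer.
H = np.column_stack([hidden(X), np.ones(len(X))])   # bias column appended
w, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ w
print("RMS one-step prediction error:", np.sqrt(np.mean((pred - y) ** 2)))
```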
Mathematical Software: Past, Present, and Future
This paper provides some reflections on the field of mathematical software on
the occasion of John Rice's 65th birthday. I describe some of the common themes
of research in this field and recall some significant events in its evolution.
Finally, I raise a number of issues that are of concern to future developments.
Comment: To appear in the Proceedings of the International Symposium on Computational Sciences, Purdue University, May 21-22, 1999. 20 pages
Application of Visual Clustering Properties of Self Organizing Map in Machine-part Cell Formation
Cellular manufacturing (CM) is an approach that includes both flexibility of
job shops and the high production rate of flow lines. Although CM provides many
benefits by reducing throughput times, setup times, and work-in-process
inventories, the design of a CM system is a complex, NP-complete problem. The cell formation
problem based on operation sequence (ordinal data) is rarely reported in the
literature. The objective of the present paper is to propose a visual
clustering approach for machine-part cell formation using the Self Organizing Map
(SOM) algorithm, an unsupervised neural network, to achieve a better group
technology efficiency measure for cell formation as well as a measure of SOM
quality. The work has also established criteria for choosing an optimum SOM
map size based on the quantization error, topographic error, and average
distortion measure during SOM training, which together yield the best clustering
and preservation of topology. To evaluate the performance of the proposed
algorithm, we tested several benchmark problems available in the literature.
The results show that the proposed approach not only matches the best and most
accurate solutions reported so far in the literature but, in some instances,
produces even better results than those reported previously. The effectiveness
of the proposed approach is also statistically verified.
Comment: 33 pages, 7 figures, 7 tables
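As an illustration of the quantities on which the map-size criterion relies, the sketch below trains a minimal SOM on a made-up machine-part incidence matrix and computes the quantization error and topographic error. The grid size, decay schedules and data are assumptions, not the paper's benchmark setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy machine-part incidence matrix (rows = parts, columns = machines); the
# benchmark instances used in the paper are not reproduced here.
data = rng.integers(0, 2, size=(40, 8)).astype(float)

# Minimal rectangular SOM trained with the classic online rule.
rows, cols, dim = 4, 4, data.shape[1]
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
weights = rng.random((rows * cols, dim))

n_iter, sigma0, lr0 = 2000, 1.5, 0.5
for it in range(n_iter):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best matching unit
    sigma = sigma0 * np.exp(-it / n_iter)                   # shrinking neighbourhood
    lr = lr0 * np.exp(-it / n_iter)                         # decaying learning rate
    h = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=1) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

# Quantization error: mean distance from each sample to its BMU.
d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
qe = d.min(axis=1).mean()

# Topographic error: fraction of samples whose two closest units are not
# adjacent on the grid (4-connectivity used here).
order = np.argsort(d, axis=1)
not_adjacent = np.linalg.norm(grid[order[:, 0]] - grid[order[:, 1]], axis=1) > 1.0
te = not_adjacent.mean()

print(f"quantization error {qe:.3f}, topographic error {te:.3f}")
```

Sweeping the grid size (rows, cols) and comparing these two errors is the kind of map-size selection the abstract describes.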
An Empirical Study on the Procedure to Derive Software Quality Estimation Models
Software quality assurance has been a heated topic for several decades. If
factors that influence software quality can be identified, they may provide
more insight for better software development management. More precise quality
assurance can be achieved by employing resources according to accurate quality
estimation at the early stages of a project. In this paper, a general procedure
is proposed to derive software quality estimation models and various techniques
are presented to accomplish the tasks in the respective steps. Several statistical
techniques, together with machine learning methods, are utilized to verify the
effectiveness of software metrics. Moreover, a neuro-fuzzy approach is adopted
to improve the accuracy of the estimation model. This procedure is carried out
based on data from the ISBSG repository to demonstrate its empirical value.
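The procedure itself is only described at a high level, so the following is a hedged sketch of the two steps it names: screening candidate metrics against the target and fitting a simple estimation model on the retained metrics. The synthetic "ISBSG-like" fields, the 0.3 correlation threshold and the linear model are placeholders, not the paper's techniques (the paper additionally adopts a neuro-fuzzy model).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic project records standing in for ISBSG-style data; the real repository
# fields and effort figures are not reproduced here.
n = 120
size_fp  = rng.gamma(4.0, 100.0, n)           # functional size (function points)
team     = rng.integers(2, 15, n).astype(float)
duration = rng.gamma(3.0, 2.0, n)             # months
effort   = 3.0 * size_fp + 40.0 * team + rng.normal(0.0, 100.0, n)

metrics = {"size_fp": size_fp, "team": team, "duration": duration}

def spearman(a, b):
    # Rank correlation via a simple rank transform (ties broken arbitrarily).
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Step 1: screen candidate metrics by rank correlation with the target variable.
kept = [name for name, v in metrics.items() if abs(spearman(v, effort)) > 0.3]
print("retained metrics:", kept)

# Step 2: fit a simple linear estimation model on the retained metrics.
X = np.column_stack([metrics[name] for name in kept] + [np.ones(n)])
coef, *_ = np.linalg.lstsq(X, effort, rcond=None)
pred = X @ coef
mmre = np.mean(np.abs(pred - effort) / effort)   # mean magnitude of relative error
print(f"MMRE of the screened linear model: {mmre:.2f}")
```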
Comparison of Flow Scheduling Policies for Mix of Regular and Deadline Traffic in Datacenter Environments
Datacenters are the main infrastructure on top of which cloud computing
services are offered. Such infrastructure may be shared by a large number of
tenants and applications generating a spectrum of datacenter traffic. Delay
sensitive applications and applications with specific Service Level Agreements
(SLAs), generate deadline constrained flows, while other applications initiate
flows that are desired to be delivered as early as possible. As a result,
datacenter traffic is a mix of two types of flows: deadline and regular. There
are several scheduling policies for either traffic type, with a focus on
minimizing completion times or deadline miss rate. In this report, we apply
several scheduling policies to the mixed-traffic scenario while varying the ratio of
regular to deadline traffic. We consider FCFS (First Come First Serve), SRPT
(Shortest Remaining Processing Time) and Fair Sharing as deadline agnostic
approaches and a combination of Earliest Deadline First (EDF) with either FCFS
or SRPT as deadline-aware schemes. In addition, for the latter, we consider
both cases of prioritizing deadline traffic (Deadline First) and prioritizing
regular traffic (Deadline Last). We study both light-tailed and heavy-tailed
flow size distributions and measure mean, median and tail flow completion times
(FCT) for regular flows along with Deadline Miss Rate (DMR) and average
lateness for deadline flows. We also consider two operation regimes of
lightly-loaded (low utilization) and heavily-loaded (high utilization). We find
that the performance of deadline-aware schemes is highly dependent on the fraction of
deadline traffic. With light-tailed flow sizes, we find that FCFS performs
better in terms of tail times and average lateness while SRPT performs better
in terms of average times and deadline miss rate. For heavy-tailed flow sizes,
SRPT performs better on all metrics except tail times.
Comment: Technical Report
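A minimal way to reproduce this kind of comparison is a time-stepped single-link simulation in which the policies differ only in how they pick the next flow to serve. The sketch below implements FCFS, SRPT, and a "Deadline First" EDF+SRPT combination and reports the mean regular-flow FCT and the deadline miss rate; the traffic model (Poisson-like arrivals, exponential sizes, uniform slack) is an assumption, not the report's workload.

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flow:
    arrival: float
    size: float                       # remaining service demand (link-time units)
    deadline: Optional[float] = None  # None marks a regular (no-deadline) flow
    finish: Optional[float] = None

def make_flows(n=200, deadline_fraction=0.5, seed=0):
    rng = random.Random(seed)
    flows, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(0.8)                 # arrivals at ~0.8 utilization
        size = rng.expovariate(1.0)               # light-tailed (exponential) sizes
        ddl = t + size + rng.uniform(0.5, 3.0) if rng.random() < deadline_fraction else None
        flows.append(Flow(t, size, ddl))
    return flows

def simulate(flows, pick, dt=0.01):
    flows = [Flow(f.arrival, f.size, f.deadline) for f in flows]   # private copy
    t, pending = 0.0, list(flows)
    while pending:
        active = [f for f in pending if f.arrival <= t]
        if not active:                            # link idle: jump to next arrival
            t = min(f.arrival for f in pending)
            continue
        f = pick(active)                          # the policy decides who gets the link
        f.size -= dt
        t += dt
        if f.size <= 0:
            f.finish = t
            pending.remove(f)
    return flows

fcfs = lambda active: min(active, key=lambda f: f.arrival)
srpt = lambda active: min(active, key=lambda f: f.size)
def deadline_first(active):                       # EDF for deadline flows, SRPT otherwise
    ddl = [f for f in active if f.deadline is not None]
    return min(ddl, key=lambda f: f.deadline) if ddl else srpt(active)

base = make_flows()
for name, pick in [("FCFS", fcfs), ("SRPT", srpt), ("EDF+SRPT", deadline_first)]:
    done = simulate(base, pick)
    reg = [f.finish - f.arrival for f in done if f.deadline is None]
    ddl = [f for f in done if f.deadline is not None]
    miss = sum(1 for f in ddl if f.finish > f.deadline)
    print(f"{name:9s} mean regular FCT {sum(reg) / len(reg):.2f}   "
          f"deadline miss rate {miss / len(ddl):.2f}")
```

The time-quantum loop approximates fully preemptive service; shrinking dt tightens the approximation at the cost of runtime.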
Survey of state-of-the-art mixed data clustering algorithms
Mixed data comprises both numeric and categorical features, and mixed
datasets occur frequently in many domains, such as health, finance, and
marketing. Clustering is often applied to mixed datasets to find structures and
to group similar objects for further analysis. However, clustering mixed data
is challenging because it is difficult to directly apply mathematical
operations, such as summation or averaging, to the feature values of these
datasets. In this paper, we present a taxonomy for the study of mixed data
clustering algorithms by identifying five major research themes. We then
present a state-of-the-art review of the research works within each research
theme. We analyze the strengths and weaknesses of these methods with pointers
for future research directions. Lastly, we present an in-depth analysis of the
overall challenges in this field, highlight open research questions and discuss
guidelines to make progress in the field.
Comment: 20 pages, 2 columns, 6 tables, 209 references
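For concreteness, the sketch below implements a k-prototypes-style partitional method, one of the classical families such surveys cover: squared Euclidean distance on numeric features plus a gamma-weighted mismatch count on categorical features, with mean/mode prototype updates. The toy data and the gamma value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny synthetic mixed dataset: two numeric features and one categorical feature.
num = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
cat = np.array(["a"] * 20 + ["b"] * 20)

def kprototypes(num, cat, k=2, gamma=1.0, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(num), k, replace=False)
    cnum, ccat = num[idx].copy(), cat[idx].copy()
    for _ in range(n_iter):
        # Mixed dissimilarity: squared Euclidean on numerics + gamma * categorical mismatch.
        d_num = ((num[:, None, :] - cnum[None, :, :]) ** 2).sum(axis=2)
        d_cat = (cat[:, None] != ccat[None, :]).astype(float)
        labels = np.argmin(d_num + gamma * d_cat, axis=1)
        for j in range(k):
            members = labels == j
            if members.any():
                cnum[j] = num[members].mean(axis=0)        # numeric prototype: mean
                vals, counts = np.unique(cat[members], return_counts=True)
                ccat[j] = vals[np.argmax(counts)]          # categorical prototype: mode
    return labels

print(kprototypes(num, cat))
```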
Towards Cytoskeleton Computers. A proposal
We propose a road-map to experimental implementation of cytoskeleton-based
computing devices. An overall concept is described in the following.
Collision-based cytoskeleton computers implement logical gates via interactions
between travelling localisations (voltage solitons on AF/MT chains and AF/MT
polymerisation wave fronts). Cytoskeleton networks are grown via programmable
polymerisation. Data are fed into the AF/MT computing networks via electrical
and optical means. Data signals are travelling localisations (solitons,
conformational defects) at the network terminals. The computation is
implemented via collisions between the localisations at structural gates
(branching sites) of the AF/MT network. The results of the computation are
recorded electrically and/or optically at the output terminals of the protein
networks. As additional options, optical I/O elements are envisaged via direct
excitation of the protein network and by coupling to fluorescent molecules.
Comment: To be published as a chapter in Adamatzky A., Akl S., Sirakoulis G. (Editors), From Parallel to Emergent Computing, CRC Press/Taylor & Francis, 201
A Multi-Dimensional approach towards Intrusion Detection System
In this paper, we suggest a multi-dimensional approach towards intrusion
detection. Network and system usage parameters such as source and destination IP
addresses, source and destination ports, incoming and outgoing network traffic
data rates, and the number of CPU cycles per request are divided into multiple
dimensions. Rather than analyzing raw bytes of data corresponding to the values
of the network parameters, a mature function is inferred during the training
phase for each dimension. This mature function takes a dimension value as an
input and returns a value that represents the level of abnormality in the
system usage with respect to that dimension. This mature function is referred
to as the Individual Anomaly Indicator. The Individual Anomaly Indicators recorded for
each of the dimensions are then used to generate a Global Anomaly Indicator, a
function with n variables (n is the number of dimensions) that provides the
Global Anomaly Factor, an indicator of anomaly in the system usage based on all
the dimensions considered together. The Global Anomaly Indicator inferred
during the training phase is then used to detect anomaly in the network traffic
during the detection phase. Network traffic data encountered during the
detection phase is fed back to the system to improve the maturity of the
Individual Anomaly Indicators and hence the Global Anomaly Indicator.
Comment: 8 pages, 3 figures, 4 tables
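The abstract does not define the indicators mathematically, so the sketch below only illustrates the structure it describes: one per-dimension score learned from training traffic (here a simple absolute z-score) combined by an n-variable function (here a weighted mean) into a Global Anomaly Factor. The dimensions, statistics and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical training traffic for three of the dimensions named in the abstract
# (traffic rate in/out, CPU cycles per request); the distributions are made up.
train = {
    "rate_in":  rng.normal(100.0, 10.0, 5000),
    "rate_out": rng.normal(80.0, 8.0, 5000),
    "cpu":      rng.normal(2.0e6, 2.0e5, 5000),
}

# Individual Anomaly Indicator (one per dimension): here simply the absolute
# z-score of an observation under per-dimension statistics from the training phase.
stats = {k: (v.mean(), v.std()) for k, v in train.items()}

def individual_indicator(dim, value):
    mu, sigma = stats[dim]
    return abs(value - mu) / sigma

# Global Anomaly Indicator: a function of the n per-dimension scores; a weighted
# mean is used here as the simplest plausible combination rule.
weights = {"rate_in": 0.4, "rate_out": 0.4, "cpu": 0.2}

def global_anomaly_factor(sample):
    return sum(weights[d] * individual_indicator(d, sample[d]) for d in sample)

normal_sample = {"rate_in": 105.0, "rate_out": 78.0, "cpu": 2.1e6}
attack_sample = {"rate_in": 400.0, "rate_out": 15.0, "cpu": 9.0e6}
for name, s in [("normal", normal_sample), ("suspicious", attack_sample)]:
    print(name, round(global_anomaly_factor(s), 2))
```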
Particle Swarm Optimization: A survey of historical and recent developments with hybridization perspectives
Particle Swarm Optimization (PSO) is a metaheuristic global optimization
paradigm that has gained prominence in the last two decades due to its ease of
application in unsupervised, complex multidimensional problems which cannot be
solved using traditional deterministic algorithms. The canonical particle swarm
optimizer is based on the flocking behavior and social co-operation of birds
and fish schools and draws heavily from the evolutionary behavior of these
organisms. This paper serves to provide a thorough survey of the PSO algorithm
with special emphasis on the development, deployment and improvements of its
most basic as well as some of the state-of-the-art implementations. Concepts
and directions on choosing the inertia weight, constriction factor, cognition
and social weights and perspectives on convergence, parallelization, elitism,
niching and discrete optimization as well as neighborhood topologies are
outlined. Hybridization attempts with other evolutionary and swarm paradigms in
selected applications are covered and an up-to-date review is put forward for
the interested reader.
Comment: 34 pages, 7 tables
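For reference, the canonical inertia-weight update the survey builds on is v_i ← w·v_i + c1·r1·(pbest_i − x_i) + c2·r2·(gbest − x_i) followed by x_i ← x_i + v_i. The sketch below runs it on the sphere function; the swarm size, iteration budget and the w, c1, c2 values are common textbook settings, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(5)

# Canonical inertia-weight PSO minimising the sphere function.
def sphere(x):
    return np.sum(x ** 2, axis=-1)

dim, n_particles, n_iter = 10, 30, 200
w, c1, c2 = 0.72, 1.49, 1.49            # inertia, cognitive and social weights

x = rng.uniform(-5.0, 5.0, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
    x = x + v                                                    # position update
    val = sphere(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", sphere(gbest))
```

Replacing the fixed inertia weight with a constriction factor, or the global best with a local neighbourhood best, gives the main topology and parameterisation variants the survey discusses.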