Fitting Prediction Rule Ensembles with R Package pre
Prediction rule ensembles (PREs) are sparse collections of rules, offering
highly interpretable regression and classification models. This paper presents
the R package pre, which derives PREs through the methodology of Friedman and
Popescu (2008). The implementation and functionality of package pre are
described and illustrated through an application to a dataset on the prediction of
depression. Furthermore, the accuracy and sparsity of PREs are compared with those of
single trees, random forests and lasso regression on four benchmark datasets.
Results indicate that pre derives ensembles with predictive accuracy comparable
to that of random forests, while using a smaller number of variables for
prediction.
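The core idea behind such ensembles can be illustrated without the package itself: a PRE is a sparse linear model whose features are simple binary rules. A minimal pure-Python sketch, with hypothetical rules, feature names, and coefficients chosen purely for illustration:

```python
# A prediction rule ensemble is a sparse linear model over binary rules:
# prediction = intercept + sum of coefficient * rule(x) for each rule.

def make_rule(feature, op, threshold):
    """Build a binary rule: returns 1.0 when the condition holds, else 0.0."""
    if op == "<=":
        return lambda x: 1.0 if x[feature] <= threshold else 0.0
    return lambda x: 1.0 if x[feature] > threshold else 0.0

# Hypothetical ensemble (coefficients and thresholds are illustrative only).
intercept = 2.0
ensemble = [
    (0.8, make_rule("age", ">", 60)),
    (-0.5, make_rule("bmi", "<=", 25)),
    (1.2, make_rule("bp", ">", 140)),
]

def predict(x):
    return intercept + sum(coef * rule(x) for coef, rule in ensemble)

patient = {"age": 65, "bmi": 23, "bp": 150}
print(predict(patient))  # 2.0 + 0.8 - 0.5 + 1.2 = 3.5
```

Sparsity here means most candidate rules receive a zero coefficient during fitting and drop out, leaving a short, readable list like the one above.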
Space biology initiative program definition review. Trade study 1: Automation costs versus crew utilization
A significant emphasis upon automation within the Space Biology Initiative hardware appears justified in order to conserve crew labor and crew training effort. Two generic forms of automation were identified: automation of data and information handling and decision making, and automation of material handling, transfer, and processing. The use of automatic data acquisition, expert systems, robots, and machine vision will increase the volume of experiments and the quality of results. The automation described may also influence efforts to miniaturize and modularize the large array of SBI hardware identified to date. The cost and benefit model developed appears to be a useful guideline for SBI equipment specifiers and designers; additional refinements would enhance the validity of the model. Two NASA automation pilot programs, 'The Principal Investigator in a Box' and 'Rack Mounted Robots', were investigated and found to be quite appropriate for adaptation to the SBI program. There are other in-house NASA efforts that provide technology that may be appropriate for the SBI program. Important data are believed to exist in advanced medical labs throughout the U.S., Japan, and Europe. The information and data processing in medical analysis equipment is highly automated, and future trends reveal continued progress in this area. However, automation of material handling and processing has progressed in a limited manner, because medical labs are not affected by the power and space constraints faced by Space Station medical equipment. Therefore, NASA's major emphasis in automation will require a lead effort in the automation of material handling to achieve optimal crew utilization.
Extension of Petri Nets by Aspects to Apply the Model Driven Architecture Approach
Within MDA, models are usually created in the UML. However, one may prefer to
use different notations such as Petri Nets, for example, for modelling the concurrency
and synchronization properties of systems. This paper claims that techniques
adopted within the context of MDA can also be beneficial in modelling systems
using notations other than the UML. Petri Nets are widely used for modelling
the business and application logic of information systems with web services. For
certain kinds of applications, therefore, Petri Nets can be more suitable for building
Computation Independent, Platform Independent and Platform Specific Models
(CIM, PIM and PSM). Unfortunately, the well-known problems with separation of
concerns in Petri Nets and with keeping track of changes may hinder achieving the aim of
MDA: building reusable, portable and interoperable models. In this paper we define
Aspect Petri Nets as a structure of several Petri Nets together with quantification rules for
weaving those Petri Nets. Aspect Petri Nets are suitable for the application of MDA;
they support traceability of changes and the reusability, portability and interoperability
of models. We illustrate the advantages of modelling in Aspect Petri Nets for MDA
application and describe the necessary tool support.
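The Petri-net semantics underlying the approach can be stated compactly: a transition fires by consuming one token from each of its input places and producing one token on each of its output places. A minimal sketch of that basic firing rule (not the Aspect Petri Net weaving machinery itself):

```python
def fire(marking, transition):
    """Fire a Petri-net transition: consume one token from every input
    place and produce one token on every output place. Returns the new
    marking, or None when the transition is not enabled."""
    inputs, outputs = transition
    if any(marking.get(p, 0) < 1 for p in inputs):
        return None  # some input place lacks a token: not enabled
    new = dict(marking)
    for p in inputs:
        new[p] -= 1
    for p in outputs:
        new[p] = new.get(p, 0) + 1
    return new

# A transition moving a token from place "ready" to place "busy":
t = (["ready"], ["busy"])
print(fire({"ready": 1}, t))  # {'ready': 0, 'busy': 1}
```

An Aspect Petri Net, as described above, would be several such nets plus quantification rules deciding where they are woven together; the firing rule itself is unchanged by the weaving.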
A fuzzy approach for the network congestion problem
In recent years, the unpredictable growth of the Internet has further highlighted the congestion problem, one of the problems that have historically affected the network. This paper deals with the design and evaluation of a congestion control algorithm that adopts a fuzzy controller. The analogy between Proportional Integral (PI) regulators and fuzzy controllers is discussed, and a method to determine the scaling factors of the fuzzy controller is presented. It is shown that the fuzzy controller outperforms the PI controller under traffic conditions that differ from those at the operating point considered in the design.
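The PI side of the analogy can be sketched in a few lines: a discrete-time PI regulator that drives router queue occupancy toward a target by adjusting a packet-drop probability. All parameters and the traffic model below are hypothetical, chosen only to illustrate the control loop; this is not the controller evaluated in the paper:

```python
def simulate_pi(kp=0.002, ki=0.0005, target=50.0, steps=2000):
    """Toy PI active-queue-management loop: the drop probability is the
    control signal, queue occupancy is the regulated variable."""
    queue, integral, drop_p = 0.0, 0.0, 0.0
    trace = []
    for _ in range(steps):
        arrivals = 12.0 * (1.0 - drop_p)  # offered load, thinned by drops
        departures = 10.0                 # link service rate (packets/tick)
        queue = max(0.0, queue + arrivals - departures)
        error = queue - target            # deviation from target occupancy
        integral += error                 # integral term accumulates error
        # PI law, clamped to a valid probability:
        drop_p = min(1.0, max(0.0, kp * error + ki * integral))
        trace.append(queue)
    return trace

trace = simulate_pi()
```

The abstract's point is that fixed gains like `kp` and `ki` are tuned for one operating point (here, the 12-vs-10 load), which is exactly where a fuzzy controller with well-chosen scaling factors can adapt better when traffic deviates.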
Edge- and Node-Disjoint Paths in P Systems
In this paper, we continue our development of algorithms used for topological
network discovery. We present native P system versions of two fundamental
problems in graph theory: finding the maximum number of edge- and node-disjoint
paths between a source node and target node. We start from the standard
depth-first-search maximum flow algorithms, but our approach is fully
distributed: initially, no structural information is available, and each P
system cell even has to learn its immediate neighbors. For the node-disjoint
version, our P system rules are designed to enforce node weight capacities (of
one), in addition to edge capacities (of one), which are not readily available
in the standard network flow algorithms.

Comment: In Proceedings MeCBIC 2010, arXiv:1011.005
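The flow-based formulation the authors start from can be sketched in its conventional, centralized form (i.e., with global knowledge of the graph, unlike the distributed P system version): DFS augmentation on a unit-capacity network counts edge-disjoint paths, and splitting each internal node into an in/out pair of unit capacity reduces the node-disjoint case to the same routine:

```python
from collections import defaultdict

def edge_disjoint_paths(edges, s, t):
    """Count edge-disjoint s->t paths: Ford-Fulkerson with depth-first
    augmentation on a unit-capacity flow network."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        adj[u].add(v)
        adj[v].add(u)              # residual arcs may point backwards

    def augment(u, visited):
        if u == t:
            return True
        visited.add(u)
        for v in adj[u]:
            if v not in visited and cap[(u, v)] > 0 and augment(v, visited):
                cap[(u, v)] -= 1   # push one unit forward...
                cap[(v, u)] += 1   # ...and open the residual arc
                return True
        return False

    flow = 0
    while augment(s, set()):
        flow += 1
    return flow

def node_disjoint_paths(edges, s, t):
    """Reduce to the edge-disjoint case: split each internal node v into
    (v, 'in') -> (v, 'out') of capacity one, enforcing unit node weight."""
    nodes = {s, t} | {u for u, _ in edges} | {v for _, v in edges}
    split = [((v, "in"), (v, "out")) for v in nodes if v not in (s, t)]
    for u, v in edges:
        uu = u if u in (s, t) else (u, "out")
        vv = v if v in (s, t) else (v, "in")
        split.append((uu, vv))
    return edge_disjoint_paths(split, s, t)
```

For example, in a graph where every source-to-target path crosses one shared node, the edge-disjoint count can be 2 while the node-disjoint count is 1; the P system version described above enforces these unit node capacities directly in its rules rather than by graph transformation.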
Massively parallel computing on an organic molecular layer
Current computers operate at enormous speeds of ~10^13 bits/s, but their
principle of sequential logic operation has remained unchanged since the 1950s.
Though our brain is much slower on a per-neuron basis (~10^3 firings/s), it is
capable of remarkable decision-making based on the collective operations of
millions of neurons at a time in ever-evolving neural circuitry. Here we use
molecular switches to build an assembly where each molecule communicates, like
neurons, with many neighbors simultaneously. The assembly's ability to
reconfigure itself spontaneously for a new problem allows us to realize
conventional computing constructs like logic gates and Voronoi decompositions,
as well as to reproduce two natural phenomena: heat diffusion and the mutation
of normal cells to cancer cells. This is a shift from the current static
computing paradigm of serial bit-processing to a regime in which a large number
of bits are processed in parallel in dynamically changing hardware.

Comment: 25 pages, 6 figures
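One of the constructs mentioned, a Voronoi decomposition, can be mimicked in software by the same local, parallel principle: every seed expands a wavefront to its neighbors simultaneously, and each cell keeps the first label to arrive. A minimal grid sketch of that principle (an illustration only, not the molecular implementation):

```python
from collections import deque

def grid_voronoi(width, height, seeds):
    """Discrete Voronoi diagram by simultaneous wavefront expansion:
    each seed floods outward one neighbor at a time (a multi-source BFS),
    and every cell adopts the label of the first wavefront to reach it."""
    label = {}
    frontier = deque()
    for i, cell in enumerate(seeds):
        label[cell] = i            # seed i claims its own cell first
        frontier.append(cell)
    while frontier:
        x, y = frontier.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in label:
                label[nxt] = label[(x, y)]   # first arrival wins the cell
                frontier.append(nxt)
    return label
```

Every cell ends up labeled by its nearest seed (in grid distance), yet no step ever uses more than neighbor-to-neighbor communication, which is the parallelism the molecular assembly exploits physically.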