SPECIAL ISSUE ON MEMBRANE COMPUTING, Seventh Brainstorming Week on Membrane Computing
The present volume contains a selection of papers resulting from the Seventh Brainstorming Week on Membrane Computing (BWMC7), held in Sevilla from February 2 to February 6, 2009. The meeting was organized by the Research Group on Natural Computing (RGNC) of the Department of Computer Science and Artificial Intelligence of Sevilla University. The previous editions of this series of meetings were organized in Tarragona (2003) and in Sevilla (2004 – 2008). After the first BWMC, a special issue of Natural Computing – volume 2, number 3, 2003 – and a special issue of New Generation Computing – volume 22, number 4, 2004 – were published; papers from the second BWMC appeared in a special issue of Journal of Universal Computer Science – volume 10, number 5, 2004 – as well as in a special issue of Soft Computing – volume 9, number 5, 2005; a selection of papers written during the third BWMC appeared in a special issue of International Journal of Foundations of Computer Science – volume 17, number 1, 2006; after the fourth BWMC, a special issue of Theoretical Computer Science was edited – volume 372, numbers 2-3, 2007; after the fifth edition, a special issue of International Journal of Unconventional Computing was edited – volume 5, number 5, 2009; finally, a selection of papers elaborated during the sixth BWMC appeared in a special issue of Fundamenta Informaticae.
Performance Evaluation of Apache Spark MLlib Algorithms on an Intrusion Detection Dataset
Growing use of the Internet and web services, the advent of fifth-generation cellular network technology (5G), and ever-increasing Internet of Things (IoT) data traffic will continue to drive global Internet usage. To ensure the security of future networks, machine learning-based intrusion detection and prevention systems (IDPS) must be implemented to detect new attacks, and big data parallel processing tools can be used to handle the huge collections of training data these systems require. In this paper Apache Spark, a fast, general-purpose cluster computing platform, is used to process and train on a large volume of network traffic feature data. The most important features of the CSE-CIC-IDS2018 dataset are used to construct machine learning models, and popular machine learning approaches, namely Logistic Regression, Support Vector Machine (SVM), three different decision tree classifiers, and Naive Bayes, are used to train the models on up to eight worker nodes. Our Spark cluster contains seven machines acting as worker nodes and one machine configured as both a master and a worker. We use the CSE-CIC-IDS2018 dataset to evaluate the overall performance of these algorithms on Botnet attacks, and distributed hyperparameter tuning is used to find the best parameters for a single decision tree. Using the features selected by the learning method, we achieved up to 100% accuracy in our experiments.
Comment: Journal of Computing and Security (Isfahan University, Iran), Vol. 9, No. 1, 202
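The abstract does not include code, but the described workflow maps directly onto the Spark MLlib pipeline API. Below is a minimal PySpark sketch of training a single decision tree with cross-validated, distributed hyperparameter tuning; the input path ids2018_selected.csv, the "label" column name, and the parameter grid values are illustrative assumptions, not details taken from the paper.

# A minimal sketch, assuming a preprocessed CSE-CIC-IDS2018 CSV whose last
# column is a numeric "label"; path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("ids2018-dt").getOrCreate()

df = spark.read.csv("ids2018_selected.csv", header=True, inferSchema=True)
feature_cols = [c for c in df.columns if c != "label"]

# Assemble the selected feature columns into a single vector column.
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
data = assembler.transform(df).select("features", "label")
train, test = data.randomSplit([0.8, 0.2], seed=42)

dt = DecisionTreeClassifier(labelCol="label", featuresCol="features")

# Grid over tree depth and impurity; an illustrative grid, not the paper's.
grid = (ParamGridBuilder()
        .addGrid(dt.maxDepth, [5, 10, 20])
        .addGrid(dt.impurity, ["gini", "entropy"])
        .build())
evaluator = MulticlassClassificationEvaluator(
    labelCol="label", predictionCol="prediction", metricName="accuracy")
cv = CrossValidator(estimator=dt, estimatorParamMaps=grid,
                    evaluator=evaluator, numFolds=3, parallelism=8)

model = cv.fit(train)
print("test accuracy:", evaluator.evaluate(model.bestModel.transform(test)))

The parallelism argument lets Spark fit several candidate models concurrently across the worker nodes, which is what makes the grid search distributed rather than sequential.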
Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond
In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones.
Comment: 44 pages. 1 of USQCD whitepapers
ASCR/HEP Exascale Requirements Review Report
This draft report summarizes and details the findings, results, and recommendations derived from the ASCR/HEP Exascale Requirements Review meeting held in June 2015. The main conclusions are as follows.
1) Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected demand on the 2025 timescale is at least two orders of magnitude greater than what is currently available, and in some cases more.
2) The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed.
3) Data rates and volumes from HEP experimental facilities are also straining the ability to store and analyze large and complex datasets. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets.
4) A close integration of HPC simulation and data analysis will aid greatly in interpreting results from HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows.
5) Long-range planning between HEP and ASCR will be required to meet HEP's research needs. To make the best use of ASCR HPC resources, the experimental HEP program needs a) an established long-term plan for access to ASCR computational and data resources, b) the ability to map workflows onto HPC resources, c) the ability for ASCR facilities to accommodate workflows run by collaborations that can have thousands of individual members, d) to transition codes to the next-generation HPC platforms that will be available at ASCR facilities, and e) to build up and train a workforce capable of developing and using simulations and analysis to support HEP scientific research on next-generation systems.
Comment: 77 pages, 13 figures; draft report, subject to further revision
ATLAS Data Challenge 1
In 2002 the ATLAS experiment started a series of Data Challenges (DC) whose goals are to validate the Computing Model, the complete software suite, and the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the production of those samples as a world-wide distributed activity. The first phase of DC1 was run during summer 2002 and involved 39 institutes in 18 countries. More than 10 million physics events and 30 million single-particle events were fully simulated. Over a period of about 40 calendar days, 71000 CPU-days were used, producing 30 Tbytes of data in about 35000 partitions. In the second phase the next processing step was performed with the participation of 56 institutes in 21 countries (~ 4000 processors used in parallel). The basic elements of the ATLAS Monte Carlo production system are described. We also present how the software suite was validated and how the participating sites were certified. These productions were already partly performed using different flavours of Grid middleware at ~ 20 sites.
Comment: 10 pages; 3 figures; CHEP03 Conference, San Diego; Reference MOCT00
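To put the quoted phase-1 figures in perspective, a back-of-envelope calculation (a sketch using only the numbers in the abstract; the variable names are ours) gives the average CPU concurrency and partition size:

# Back-of-envelope arithmetic using only the figures quoted above.
cpu_days = 71_000        # total CPU time consumed in DC1 phase 1
calendar_days = 40       # approximate duration of the production run
data_tb = 30             # total output volume, in terabytes
partitions = 35_000      # approximate number of output partitions

avg_cpus = cpu_days / calendar_days             # ~1775 CPUs busy on average
gb_per_partition = data_tb * 1024 / partitions  # ~0.88 GB per partition

print(f"average concurrent CPUs: {avg_cpus:.0f}")
print(f"average partition size:  {gb_per_partition:.2f} GB")

That is, the production kept roughly 1800 CPUs busy around the clock and wrote output files averaging a little under a gigabyte each.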