Malware in the Future? Forecasting of Analyst Detection of Cyber Events
There have been extensive efforts in government, academia, and industry to
anticipate, forecast, and mitigate cyber attacks. A common approach is
time-series forecasting of cyber attacks based on data from network telescopes,
honeypots, and automated intrusion detection/prevention systems. This research
has uncovered key insights such as systematicity in cyber attacks. Here, we
propose an alternate perspective of this problem by performing forecasting of
attacks that are analyst-detected and -verified occurrences of malware. We call
these instances of malware cyber event data. Specifically, our dataset was
analyst-detected incidents from a large operational Computer Security Service
Provider (CSSP) for the U.S. Department of Defense, which rarely relies only on
automated systems. Our data set consists of weekly counts of cyber events over
approximately seven years. Since all cyber events were validated by analysts,
our dataset is unlikely to contain the false positives that are often endemic in
other data sources. Further, this higher-quality data could be used for purposes
such as resource allocation, estimation of security resources, and the
development of effective risk-management strategies. We used a Bayesian State
Space Model for forecasting and found that events one week ahead could be
predicted. To quantify bursts, we used a Markov model. Our findings of
systematicity in analyst-detected cyber attacks are consistent with previous
work using other sources. The advanced information provided by a forecast may
help with threat awareness by providing a probable value and range for future
cyber events one week ahead. Other potential applications for cyber event
forecasting include proactive allocation of resources and capabilities for
cyber defense (e.g., analyst staffing and sensor configuration) in CSSPs.
Enhanced threat awareness may improve cybersecurity.
Comment: Revised version resubmitted to journal
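The one-week-ahead forecasting idea can be sketched with a minimal local-level (random-walk-plus-noise) state-space model. This is an illustrative stand-in, not the paper's actual Bayesian State Space Model; the variance parameters and the synthetic weekly counts are assumptions.

```python
import numpy as np

def kalman_one_step(y, obs_var=4.0, state_var=1.0):
    """One-step-ahead forecasts from a local-level model via the Kalman filter.

    obs_var and state_var are illustrative, not fitted values.
    """
    level, level_var = y[0], 1.0
    forecasts = []
    for obs in y[1:]:
        # Predict: next level equals current level, uncertainty grows.
        pred, pred_var = level, level_var + state_var
        forecasts.append(pred)
        # Update the level estimate with the new observation (Kalman gain).
        gain = pred_var / (pred_var + obs_var)
        level = pred + gain * (obs - pred)
        level_var = (1.0 - gain) * pred_var
    return forecasts

# Synthetic weekly event counts standing in for the CSSP data.
rng = np.random.default_rng(0)
weekly_counts = 20 + np.cumsum(rng.normal(0, 1, 52)) + rng.normal(0, 2, 52)
preds = kalman_one_step(list(weekly_counts))
print(f"next-week forecast: {preds[-1]:.1f}")
```

A full Bayesian treatment would place priors on the variances and report a predictive interval rather than a point forecast.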
Big-Data-Driven Materials Science and its FAIR Data Infrastructure
This chapter addresses the fourth paradigm of materials research -- big-data
driven materials science. Its concepts and state-of-the-art are described, and
its challenges and opportunities are discussed. For furthering the field, Open
Data and an all-embracing sharing culture, an efficient data infrastructure, and the rich
ecosystem of computer codes used in the community are of critical importance.
For shaping this fourth paradigm and contributing to the development or
discovery of improved and novel materials, data must be what is now called FAIR
-- Findable, Accessible, Interoperable and Re-purposable/Re-usable. This sets
the stage for advances of methods from artificial intelligence that operate on
large data sets to find trends and patterns that cannot be obtained from
individual calculations and not even directly from high-throughput studies.
Recent progress is reviewed and demonstrated, and the chapter is concluded by a
forward-looking perspective, addressing important challenges that are not yet solved.
Comment: submitted to the Handbook of Materials Modeling (eds. S. Yip and W.
Andreoni), Springer 2018/201
Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar
16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps
and performance engineering have been growing trends over the past one to two
years, in no small part due to the rise in importance of identifying
performance anomalies in the operations (Ops) of cloud and big data systems and
feeding these back to the development (Dev). However, so far, the research
community has treated software engineering, performance engineering, and cloud
computing mostly as individual research areas. We aimed to identify
cross-community collaboration, and to set the path for long-lasting
collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD
students in a later stage of their PhD, as well as PostDocs or Junior
Professors) in the areas of (i) software engineering, (ii) performance
engineering, and (iii) cloud computing and big data to present their current
research projects, to exchange experience and expertise, to discuss research
challenges, and to develop ideas for future collaborations.
Towards an aspect weaving BPEL engine
This position paper proposes the use of dynamic aspects and
the visitor design pattern to obtain a highly configurable and
extensible BPEL engine. Using these two techniques, the
core of this infrastructural software can be customised to
meet new requirements and add features such as debugging,
execution monitoring, or changing to another Web Service
selection policy. Additionally, it can easily be extended to
cope with customer-specific BPEL extensions. We propose
the use of dynamic aspects not only on the engine itself
but also on the workflow in order to tackle the problems of
Web Service hot deployment and hot fixes to long running
processes. In this way, composing a Web Service "on-the-fly"
means weaving its choreography interface into the workflow.
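The visitor-pattern half of the proposal can be illustrated with a tiny activity tree: a new concern such as execution monitoring is added as a visitor without modifying the activity classes. This is an illustrative sketch, not the paper's engine; the class and service names are hypothetical.

```python
# Toy BPEL-like activity tree with the visitor pattern. Adding the
# monitoring feature only requires a new visitor class.
class Invoke:
    def __init__(self, service):
        self.service = service

    def accept(self, visitor):
        return visitor.visit_invoke(self)

class Sequence:
    def __init__(self, *activities):
        self.activities = activities

    def accept(self, visitor):
        # Dispatch to each child activity in order.
        return [a.accept(visitor) for a in self.activities]

class MonitoringVisitor:
    """Cross-cutting monitoring concern, kept out of the activity classes."""
    def __init__(self):
        self.log = []

    def visit_invoke(self, node):
        self.log.append(f"invoking {node.service}")

flow = Sequence(Invoke("OrderService"), Invoke("PaymentService"))
mon = MonitoringVisitor()
flow.accept(mon)
print(mon.log)
```

Dynamic aspects would go one step further, letting such a visitor be woven in or removed while the engine is running.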
Great SCO2T! Rapid tool for carbon sequestration science, engineering, and economics
CO2 capture and storage (CCS) technology is likely to be widely deployed in
coming decades in response to major climate and economics drivers: CCS is part
of every clean energy pathway that limits global warming to 2C or less and
receives significant CO2 tax credits in the United States. These drivers are
likely to stimulate capture, transport, and storage of hundreds of millions or
billions of tonnes of CO2 annually. A key part of the CCS puzzle will be
identifying and characterizing suitable storage sites for vast amounts of CO2.
We introduce a new software tool called SCO2T (Sequestration of CO2 Tool,
pronounced "Scott") to rapidly characterize saline storage reservoirs. The
tool is designed to rapidly screen hundreds of thousands of reservoirs, perform
sensitivity and uncertainty analyses, and link sequestration engineering
(injection rates, reservoir capacities, plume dimensions) to sequestration
economics (costs constructed from around 70 separate economic inputs). We
describe the novel science developments supporting SCO2T including a new
approach to estimating CO2 injection rates and CO2 plume dimensions as well as
key advances linking sequestration engineering with economics. Next, we perform
a sensitivity and uncertainty analysis of geology combinations (including
formation depth, thickness, permeability, porosity, and temperature) to
understand the impact on carbon sequestration. Through the sensitivity analysis
we show that increasing depth and permeability both can lead to increased CO2
injection rates, increased storage potential, and reduced costs, while
increasing porosity increases reservoir capacity and thereby reduces costs
without affecting the injection rate (CO2 is injected at a constant pressure in
all cases).
Comment: CO2 capture and storage; carbon sequestration; reduced-order
modeling; climate change; economics
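The kind of one-at-a-time sensitivity screening the abstract describes can be sketched with a deliberately toy model: injection rate taken as proportional to permeability times thickness (a standard injectivity proxy), with unit cost falling as rate rises. This is not SCO2T; every coefficient here is an illustrative assumption.

```python
def injection_rate(perm_md, thickness_m, coeff=0.002):
    # Toy injectivity proxy: rate scales with permeability * thickness.
    # Units (Mt CO2/year) and coeff are illustrative only.
    return coeff * perm_md * thickness_m

def unit_cost(rate, fixed=5.0):
    # Toy cost model: fixed costs amortized over the injection rate,
    # plus a constant variable cost ($/tonne, illustrative).
    return fixed / rate + 2.0

base = {"perm_md": 100.0, "thickness_m": 50.0}
for param in base:
    # Double one parameter at a time and observe rate and cost.
    varied = dict(base)
    varied[param] *= 2.0
    r0, r1 = injection_rate(**base), injection_rate(**varied)
    print(f"{param}: rate {r0:.1f} -> {r1:.1f} Mt/yr, "
          f"cost {unit_cost(r0):.2f} -> {unit_cost(r1):.2f} $/t")
```

A real screening run would replace this proxy with the tool's reduced-order physics and sweep hundreds of thousands of reservoir parameter combinations.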
Towards Product Lining Model-Driven Development Code Generators
A code generator systematically transforms compact models to detailed code.
Today, code generation is regarded as an integral part of model-driven
development (MDD). Despite its relevance, the development of code generators is
an inherently complex task and common methodologies and architectures are
lacking. Additionally, reuse and extension of existing code generators exist
only for individual parts. A systematic development and reuse based on a code
generator product line is still in its infancy. Thus, the aim of this paper is
to identify the mechanisms necessary for a code generator product line by (a)
analyzing the common product line development approach and (b) mapping those to
a code generator specific infrastructure. As a first step towards realizing a
code generator product line infrastructure, we present a component-based
implementation approach based on ideas of variability-aware module systems and
point out further research challenges.
Comment: 6 pages, 1 figure, Proceedings of the 3rd International Conference on
Model-Driven Engineering and Software Development, pp. 539-545, Angers,
France, SciTePress, 201
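The component-based variability idea can be sketched in miniature: each feature of a product-line configuration contributes a generator component, and a product is assembled by selecting components. This is an illustrative sketch under assumed names, not the paper's variability-aware module system.

```python
# Toy code-generator product line: features map to generator components,
# and a configuration selects which components emit code for each field.
def getter(field):
    return f"    def get_{field}(self): return self._{field}"

def setter(field):
    return f"    def set_{field}(self, value): self._{field} = value"

# Feature model of this tiny product line (names are hypothetical).
FEATURES = {"getters": getter, "setters": setter}

def generate(class_name, fields, config):
    """Assemble a generator product from the selected feature components."""
    lines = [f"class {class_name}:"]
    for field in fields:
        for feature in config:
            lines.append(FEATURES[feature](field))
    return "\n".join(lines)

# Two products from the same generator core, differing only in config.
print(generate("Point", ["x", "y"], config=["getters"]))
print(generate("Point", ["x", "y"], config=["getters", "setters"]))
```

Real MDD generators transform models rather than field lists, but the mechanism is the same: variability is confined to which components are composed, not scattered through the generator core.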
Planning and Policymaking for Transit-Oriented Development, Transit, and Active Transport in California Cities
This report provides research findings from the first year of a two-year research project on patterns of local policymaking in California to support transit-oriented development (TOD), transit, and active transport. The project aims to assess motivations, perceived obstacles, and priorities for development near transit, in relation to patterns of local policy adoption, from the perspective of city planners in the state’s four largest regions: the San Francisco Bay, Los Angeles, San Diego, and Sacramento metropolitan areas. This first-stage report discusses research and policy context that informed the methodology, findings from the analysis of results from an online survey of city planning directors administered in the spring of 2019, and findings from two case studies of TOD policymaking in urban central cities, namely Los Angeles and Sacramento. A sampling methodology for conducting further case studies of TOD policymaking during the upcoming second phase of the project is also described, based on findings from the first year of the research.