Tiresias: Predicting Security Events Through Deep Learning
With the increased complexity of modern computer attacks, there is a need for
defenders not only to detect malicious activity as it happens, but also to
predict the specific steps that will be taken by an adversary when performing
an attack. However, this is still an open research problem, and previous
research in predicting malicious events only looked at binary outcomes (e.g.,
whether an attack would happen or not), but not at the specific steps that an
attacker would undertake. To fill this gap we present Tiresias, a system that
leverages Recurrent Neural Networks (RNNs) to predict future events on a
machine, based on previous observations. We test Tiresias on a dataset of 3.4
billion security events collected from a commercial intrusion prevention
system, and show that our approach is effective in predicting the next event
that will occur on a machine with a precision of up to 0.93. We also show that
the models learned by Tiresias are reasonably stable over time, and provide a
mechanism that can identify sudden drops in precision and trigger a retraining
of the system. Finally, we show that the long-term memory typical of RNNs is
key to performing event prediction, rendering simpler methods not up to the
task.
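The next-event prediction task described above can be sketched as a recurrent state update over the event history followed by a softmax over event types. The following is a minimal illustrative forward pass with random, untrained weights and hypothetical dimensions; it is not the actual Tiresias model:

```python
import numpy as np

# Illustrative forward pass of a simple (Elman) RNN that scores the next
# security event given a history of event IDs. Sizes and weights are
# hypothetical placeholders, not the published architecture.
rng = np.random.default_rng(0)
n_events, hidden = 5, 8          # vocabulary of event types, hidden units
W_xh = rng.normal(scale=0.1, size=(n_events, hidden))
W_hh = rng.normal(scale=0.1, size=(hidden, hidden))
W_hy = rng.normal(scale=0.1, size=(hidden, n_events))

def predict_next(history):
    """Return a probability distribution over the next event ID."""
    h = np.zeros(hidden)
    for e in history:                       # one-hot input per time step
        x = np.eye(n_events)[e]
        h = np.tanh(x @ W_xh + h @ W_hh)    # recurrent state update
    logits = h @ W_hy
    p = np.exp(logits - logits.max())
    return p / p.sum()                      # softmax over event types

probs = predict_next([0, 3, 1, 4])
pred = int(np.argmax(probs))               # predicted next event ID
```

In a trained system the weights would be fit on event sequences, and the long recurrent state is what lets the model exploit distant context that simpler frequency-based predictors cannot.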
A hierarchical Bayesian model for predicting ecological interactions using scaled evolutionary relationships
Identifying undocumented or potential future interactions among species is a
challenge facing modern ecologists. Recent link prediction methods rely on
trait data, however large species interaction databases are typically sparse
and covariates are limited to only a fraction of species. On the other hand,
evolutionary relationships, encoded as phylogenetic trees, can act as proxies
for underlying traits and historical patterns of parasite sharing among hosts.
We show that using a network-based conditional model, phylogenetic information
provides strong predictive power in a recently published global database of
host-parasite interactions. By scaling the phylogeny using an evolutionary
model, our method allows for biological interpretation often missing from
latent variable models. To further improve on the phylogeny-only model, we
combine a hierarchical Bayesian latent score framework for bipartite graphs
that accounts for the number of interactions per species with the host
dependence informed by phylogeny. Combining the two information sources yields
significant improvement in predictive accuracy over each of the submodels
alone. As many interaction networks are constructed from presence-only data, we
extend the model by integrating a correction mechanism for missing
interactions, which proves valuable in reducing uncertainty in unobserved
interactions.
Comment: To appear in the Annals of Applied Statistics
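As an illustration only (hypothetical affinity scores and similarity values, not the authors' fitted model), a latent-score link predictor of this kind combines per-species terms with phylogeny-informed sharing among hosts:

```python
import numpy as np

# Hypothetical sketch: probability that host i interacts with parasite j
# modeled as a logistic function of species-level affinities plus a term
# driven by phylogenetic similarity among hosts. All values are made up.
host_affinity = np.array([0.5, -1.0, 0.2])      # one score per host
parasite_affinity = np.array([0.8, -0.3])       # one score per parasite
phylo_sim = np.array([[1.0, 0.6, 0.1],          # host-host similarity
                      [0.6, 1.0, 0.2],          # from a scaled phylogeny
                      [0.1, 0.2, 1.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def link_prob(i, j, observed):
    """P(host i hosts parasite j), borrowing strength from hosts that
    are phylogenetically close to i and known to host parasite j."""
    shared = phylo_sim[i] @ observed[:, j]       # weighted parasite sharing
    return sigmoid(host_affinity[i] + parasite_affinity[j] + shared)

# observed[i, j] = 1 if the interaction is already documented
observed = np.array([[1, 0],
                     [1, 1],
                     [0, 0]])
p = link_prob(2, 0, observed)
```

The sharing term is what the phylogeny contributes: a host whose close relatives carry a parasite receives a higher predicted probability for that parasite, even with no trait covariates.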
Managerial practices that promote voice and taking charge among frontline workers
Process-improvement ideas often come from frontline workers who speak up by voicing concerns about problems and by taking charge to resolve them. We hypothesize that organization-wide process-improvement campaigns encourage both forms of speaking up, especially voicing concern. We also hypothesize that the effectiveness of such campaigns depends on the prior responsiveness of line managers. We test our hypotheses in the healthcare setting, in which problems are frequent. We use data on nearly 7,500 reported incidents extracted from an incident-reporting system that is similar to those used by many organizations to encourage employees to communicate about operational problems. We find that process-improvement campaigns prompt employees to speak up and that campaigns increase the frequency of voicing concern to a greater extent than they increase taking charge. We also find that campaigns are particularly effective in eliciting taking charge among employees whose managers have been relatively unresponsive to previous instances of speaking up. Our results therefore indicate that organization-wide campaigns can encourage voicing concerns and taking charge, two important forms of speaking up. These results can enable managers to solicit ideas from frontline workers that lead to performance improvement.
Narrative Generation in Entertainment: Using Artificial Intelligence Planning
From the field of artificial intelligence (AI) there is a growing stream of technology capable of being embedded in software that will reshape the way we interact with our environment in our everyday lives. This ‘AI software’ is often used to tackle mundane tasks that are otherwise dangerous or tedious for a human to accomplish. One particular area, explored in this paper, is the use of AI software to support the enjoyable aspects of human life. Entertainment is one of these aspects, and it often includes storytelling in some form regardless of the medium, including television, films, video games, etc. This paper aims to explore the ability of AI software to automate the story-creation and story-telling process. This work falls within the field of Automatic Narrative Generation (ANG), which aims to produce intuitive interfaces that allow people without any previous programming experience to generate stories based on their ideas of the kinds of characters, intentions, events, and spaces they want in the story. The paper includes details of such AI software, created by the author, that can be downloaded and used by the reader for this purpose. Applications of this kind of technology include the automatic generation of story lines for ‘soap operas’.
Hidden Markov Models and their Application for Predicting Failure Events
We show how Markov mixed membership models (MMMM) can be used to predict the
degradation of assets. We model the degradation path of individual assets, to
predict overall failure rates. Instead of a separate distribution for each
hidden state, we use hierarchical mixtures of distributions in the exponential
family. In our approach the observation distribution of the states is a finite
mixture distribution of a small set of (simpler) distributions shared across
all states. Using tied-mixture observation distributions offers several
advantages. The mixtures act as a regularization for typically very sparse
problems, and they reduce the computational effort for the learning algorithm
since there are fewer distributions to be found. Using shared mixtures enables
sharing of statistical strength between the Markov states and thus transfer
learning. We determine for individual assets the trade-off between the risk of
failure and extended operating hours by combining an MMMM with a partially
observable Markov decision process (POMDP) to dynamically optimize the policy
for when and how to maintain the asset.
Comment: Will be published in the proceedings of ICCS 2020

@Booklet{EasyChair:3183,
  author = {Paul Hofmann and Zaid Tashman},
  title = {Hidden Markov Models and their Application for Predicting Failure Events},
  howpublished = {EasyChair Preprint no. 3183},
  year = {EasyChair, 2020}
}
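A minimal sketch of the tied-mixture idea (illustrative component parameters and state weights, not the paper's fitted values): every hidden state draws its observations from the same shared pool of Gaussian components and differs only in its mixing weights.

```python
import numpy as np

# Tied-mixture emission: each hidden state mixes a SHARED pool of
# Gaussian components with state-specific weights. All numbers here
# are hypothetical, chosen only to illustrate the structure.
means = np.array([0.0, 2.0, 5.0])            # shared component means
sds   = np.array([1.0, 1.0, 1.5])            # shared component std devs
state_weights = np.array([[0.8, 0.2, 0.0],   # "healthy" state
                          [0.1, 0.7, 0.2],   # "degrading" state
                          [0.0, 0.2, 0.8]])  # "near failure" state

def emission_prob(x, state):
    """Density of observation x under the tied mixture for one state."""
    comp = np.exp(-0.5 * ((x - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    return float(state_weights[state] @ comp)
```

Because only the weight rows differ across states, the learner estimates far fewer free densities than a per-state mixture would require, which is the regularization and transfer-learning benefit the abstract describes.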
The Return of the Rogue
The “rogue trader”—a famed figure of the 1990s—recently has returned to prominence due largely to two phenomena. First, recent U.S. mortgage market volatility spilled over into stock, commodity, and derivative markets worldwide, causing large financial institution losses and revealing previously hidden unauthorized positions. Second, the rogue trader has gained importance as banks around the world have focused more attention on operational risk in response to regulatory changes prompted by the Basel II Capital Accord. This Article contends that, of the many regulatory options available to the Basel Committee for addressing operational risk, it arguably chose the worst: an enforced self-regulatory regime unlikely to substantially alter financial institutions’ ability to successfully manage operational risk. That regime also poses the danger of high costs, a false sense of security, and perverse incentives. Particularly with respect to the low-frequency, high-impact events—including rogue trading—that may be the greatest threat to bank stability and soundness, attempts at enforced self-regulation are unlikely to significantly reduce operational risk, because those financial institutions with the highest operational risk are the least likely to credibly assess that risk and set aside adequate capital under a regime of enforced self-regulation.