Task Agnostic Restoration of Natural Video Dynamics
In many video restoration/translation tasks, image processing operations are
naïvely extended to the video domain by processing each frame independently,
disregarding the temporal connection of the video frames. This disregard for
the temporal connection often leads to severe temporal inconsistencies.
State-Of-The-Art (SOTA) techniques that address these inconsistencies rely on
the availability of unprocessed videos to implicitly siphon and utilize
consistent video dynamics to restore the temporal consistency of frame-wise
processed videos, which often jeopardizes the translation effect. We propose a
general framework for this task that learns to infer and utilize consistent
motion dynamics from inconsistent videos to mitigate the temporal flicker while
preserving the perceptual quality for both the temporally neighboring and
relatively distant frames without requiring the raw videos at test time. The
proposed framework produces SOTA results on two benchmark datasets, DAVIS and
videvo.net, processed by numerous image processing applications. The code and
the trained models are available at
https://github.com/MKashifAli/TARONVD
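For context, the "temporal flicker" targeted above is commonly quantified as a warping error between consecutive frames, i.e. how much a frame still changes after motion compensation. The sketch below is a minimal, hypothetical illustration of such a metric using OpenCV dense optical flow; it is not the authors' evaluation code, and the function name is ours.

    # Hypothetical sketch: warping error between consecutive processed frames.
    import cv2
    import numpy as np

    def warping_error(prev_frame, next_frame):
        """Warp next_frame back onto prev_frame using dense optical flow and
        return the mean absolute difference; larger values mean more flicker."""
        prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
        next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = prev_gray.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        warped = cv2.remap(next_frame, map_x, map_y, cv2.INTER_LINEAR)
        return float(np.mean(np.abs(warped.astype(np.float32)
                                    - prev_frame.astype(np.float32))))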
Quantifying the behavioural relevance of hippocampal neurogenesis
Few studies that examine the neurogenesis–behaviour relationship formally
establish covariation between neurogenesis and behaviour or rule out competing
explanations. The behavioural relevance of neurogenesis might therefore be
overestimated if other mechanisms account for some, or even all, of the
experimental effects. A systematic review of the literature was conducted and
the data reanalysed using causal mediation analysis, which can estimate the
behavioural contribution of new hippocampal neurons separately from other
mechanisms that might be operating. Results from eleven eligible individual
studies were then combined in a meta-analysis to increase precision
(representing data from 215 animals) and showed that neurogenesis made a
negligible contribution to behaviour (standardised effect = 0.15; 95% CI = -0.04
to 0.34; p = 0.128); other mechanisms accounted for the majority of
experimental effects (standardised effect = 1.06; 95% CI = 0.74 to 1.38; p =
1.7). Comment: To be published in PLoS ONE.
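As a hedged illustration of the causal mediation analysis described above, the decomposition estimates the mediated effect of new neurons (ACME) separately from the direct effect of other mechanisms (ADE). The sketch below uses simulated data only, with coefficients loosely mirroring the reported magnitudes; it is not the reanalysed study data, and relies on the Mediation class from statsmodels.

    # Illustrative sketch of causal mediation analysis on simulated data.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.mediation import Mediation

    rng = np.random.default_rng(0)
    n = 215                                   # matches the pooled sample size
    treatment = rng.integers(0, 2, n)         # experimental manipulation
    neurogenesis = 0.2 * treatment + rng.normal(size=n)                 # mediator
    behaviour = 1.0 * treatment + 0.15 * neurogenesis + rng.normal(size=n)

    df = pd.DataFrame({"t": treatment, "m": neurogenesis, "y": behaviour})
    outcome_model = sm.OLS.from_formula("y ~ t + m", data=df)
    mediator_model = sm.OLS.from_formula("m ~ t", data=df)

    # ACME = effect mediated by neurogenesis, ADE = direct effect of other mechanisms.
    med = Mediation(outcome_model, mediator_model, "t", "m").fit(n_rep=500)
    print(med.summary())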
A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities
The relentless and often haphazard process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals. The key challenge they face is trying to identify a remediation scheme specific to in-house, organizational objectives. Without a strategy, the result is a patchwork of fixes applied to a tide of vulnerabilities, any one of which could be the single point of failure in an otherwise formidable defense. This means one of the biggest challenges in vulnerability management relates to prioritization. Given that so few vulnerabilities are a focus of real-world attacks, a practical remediation strategy is to identify vulnerabilities likely to be exploited and focus efforts towards remediating those vulnerabilities first. The goal of this research is to demonstrate that aggregating and synthesizing readily accessible, public data sources to provide personalized, automated recommendations that an organization can use to prioritize its vulnerability management strategy will offer significant improvements over what is currently realized using the Common Vulnerability Scoring System (CVSS). We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK. We identify the data mining steps needed to acquire, standardize, and integrate publicly available cyber intelligence data sets into a robust knowledge graph from which stakeholders can infer business logic related to known threats. We tested our approach by identifying vulnerabilities in academic and common software associated with six universities and four government facilities. Ranking policy performance was measured using the Normalized Discounted Cumulative Gain (nDCG). Our results show an average 71.5% to 91.3% improvement towards the identification of vulnerabilities likely to be targeted and exploited by cyber threat actors. The ROI of patching using our policies resulted in savings in the range of 23.3% to 25.5% in annualized unit costs. Our results demonstrate the efficiency of creating knowledge graphs to link large data sets to facilitate semantic queries and create data-driven, flexible ranking policies. Additionally, our framework uses only open standards, making implementation and improvement feasible for cyber practitioners and academia.
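For readers unfamiliar with the metric, nDCG compares a policy's ranking of vulnerabilities against the ideal ordering of the same relevance labels. The sketch below is a minimal illustration with made-up labels, not the study's data or code.

    # Minimal nDCG computation for a vulnerability ranking policy.
    import math

    def dcg(relevances):
        """Discounted cumulative gain for relevance scores in ranked order."""
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

    def ndcg(ranked_relevances):
        """DCG of the policy's ranking divided by the DCG of the ideal ranking."""
        ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
        return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

    # 1 = vulnerability later observed to be exploited, 0 = not observed.
    policy_ranking = [1, 0, 1, 1, 0, 0]   # order produced by a threat-centric policy
    print(round(ndcg(policy_ranking), 3))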
Automatic Prediction of Rejected Edits in Stack Overflow
The content quality of shared knowledge in Stack Overflow (SO) is crucial in
supporting software developers with their programming problems. Thus, SO allows
its users to suggest edits to improve the quality of a post (i.e., question and
answer). However, existing research shows that many suggested edits in SO are
rejected due to undesired content/formatting or violations of edit guidelines. Such a
scenario frustrates or demotivates users who would like to conduct good-quality
edits. Therefore, our research focuses on assisting SO users by offering them
suggestions on how to improve their editing of posts. First, we manually
investigate 764 rejected edits (382 questions + 382 answers) identified through
rollbacks and produce a catalog of 19 rejection reasons. Second, we extract 15
text- and user-based features to capture those rejection reasons. Third, we
develop four
machine learning models using those features. Our best-performing model can
predict rejected edits with 69.1% precision, 71.2% recall, 70.1% F1-score, and
69.8% overall accuracy. Fourth, we introduce an online tool named EditEx that
works with the SO edit system. EditEx can assist users while editing posts by
suggesting the potential causes of rejections. We recruit 20 participants to
assess the effectiveness of EditEx. Half of the participants (i.e., treatment
group) use EditEx and the other half (i.e., control group) use the SO standard
edit system to edit posts. According to our experiment, EditEx used together with
the SO standard edit system can prevent 49% of rejected edits, including the
commonly rejected ones. However, it can prevent 12% of rejections even in
free-form regular edits. The treatment group finds the potential rejection
reasons identified by EditEx influential. Furthermore, the median workload of
suggesting edits using EditEx is half that of the SO edit system. Comment:
Accepted for publication in Empirical Software Engineering (EMSE) journal.
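A minimal, hypothetical sketch of the kind of pipeline the abstract describes: a classifier trained on text- and user-based features of suggested edits, evaluated with the same metrics the study reports. The feature layout, model choice, and data below are illustrative assumptions, not the paper's implementation.

    # Illustrative sketch: predicting rejected edits from 15 text/user features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

    rng = np.random.default_rng(42)
    # Each row: e.g. edit length, #formatting changes, editor reputation, ...
    X = rng.normal(size=(1000, 15))          # 15 text- and user-based features
    y = rng.integers(0, 2, 1000)             # 1 = edit rejected (rolled back)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print("precision", precision_score(y_te, pred),
          "recall", recall_score(y_te, pred),
          "f1", f1_score(y_te, pred),
          "accuracy", accuracy_score(y_te, pred))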
Biomedical Image Splicing Detection using Uncertainty-Guided Refinement
Recently, a surge in biomedical academic publications suspected of image
manipulation has led to numerous retractions, turning biomedical image
forensics into a research hotspot. While manipulation detection has attracted
attention, the specific detection of splicing traces in biomedical images remains
underexplored. Disruptive factors within biomedical images, such as artifacts,
abnormal patterns, and noise, exhibit features that resemble splicing traces,
greatly increasing the difficulty of this task. Moreover, the
scarcity of high-quality spliced biomedical images also limits potential
advancements in this field. In this work, we propose an Uncertainty-guided
Refinement Network (URN) to mitigate the effects of these disruptive factors.
Our URN can explicitly suppress the propagation of unreliable information flow
caused by disruptive factors among regions, thereby obtaining robust features.
Moreover, URN concentrates on refining uncertainly predicted regions during the
decoding phase. In addition, we construct a dataset
for Biomedical image Splicing (BioSp) detection, which consists of 1,290
spliced images. Compared with existing datasets, BioSp comprises the largest
number of spliced images and the most diverse sources. Comprehensive
experiments on three benchmark datasets demonstrate the superiority of the
proposed method. Meanwhile, we verify the generalizability of URN under
cross-dataset domain shifts and its robustness against post-processing
operations. Our BioSp dataset will be released upon acceptance.
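A schematic, hypothetical sketch of the general uncertainty-guided idea, not the authors' URN architecture: features from regions where the current prediction is uncertain are down-weighted before propagation, and those regions are then refined with a residual. Module and variable names are ours.

    # Hedged sketch of uncertainty-gated feature refinement (illustrative only).
    import torch
    import torch.nn as nn

    class UncertaintyGate(nn.Module):
        """Scales features by (1 - uncertainty) so unreliable regions contribute less."""
        def __init__(self, channels):
            super().__init__()
            self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, features, logits):
            prob = torch.sigmoid(logits)                  # per-pixel splicing probability
            uncertainty = 1.0 - (2.0 * prob - 1.0).abs()  # 1 near p=0.5, 0 when confident
            gated = features * (1.0 - uncertainty)        # suppress unreliable information flow
            return features + self.refine(gated)          # refinement residual

    features = torch.randn(1, 32, 64, 64)
    logits = torch.randn(1, 1, 64, 64)
    print(UncertaintyGate(32)(features, logits).shape)    # torch.Size([1, 32, 64, 64])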
Translation memory and computer assisted translation tool for medieval texts
Translation memories (TMs), as part of Computer Assisted Translation (CAT) tools, support translators by letting them reuse portions of previously translated text. Fencing books are good candidates for using TMs due to the high number of repeated terms. Medieval texts suffer from a number of drawbacks that make even “simple” rewording into the modern version of the same language hard. The analyzed difficulties are: lack of systematic spelling, unusual word orders, and typos in the original. A hypothesis is made and verified that even simple modernization is feasible and increases legibility, and that it is worthwhile to apply translation memories due to the numerous, and often extremely long, repeated terms. Therefore, methods and algorithms are presented (1) for the automated transcription of medieval texts when only a limited training set is available, and (2) for the collection of repeated patterns. The efficiency of the algorithms is analyzed in terms of recall and precision.
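As a rough illustration of step (2) only, and under the assumption that repeated word n-grams are a reasonable proxy for the repeated terms mentioned above, a minimal sketch of collecting repeated patterns that could seed translation-memory segments (sample text and thresholds are invented):

    # Illustrative sketch: collect word n-grams that occur more than once.
    from collections import Counter

    def repeated_ngrams(text, n_min=2, n_max=6, min_count=2):
        words = text.lower().split()
        counts = Counter()
        for n in range(n_min, n_max + 1):
            for i in range(len(words) - n + 1):
                counts[" ".join(words[i:i + n])] += 1
        return {g: c for g, c in counts.items() if c >= min_count}

    sample = ("strike with the long edge strike with the long edge "
              "then wind the short edge against his blade")
    for gram, count in sorted(repeated_ngrams(sample).items(), key=lambda x: -len(x[0])):
        print(count, gram)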
Cloud transactions and caching for improved performance in clouds and DTNs
In distributed transactional systems deployed over massively decentralized cloud servers, access policies are typically replicated. Interdependencies and inconsistencies among policies need to be addressed, as they can affect performance, throughput, and accuracy. We propose several stringent levels of policy consistency constraints and enforcement approaches to guarantee the trustworthiness of transactions on cloud servers. We define a look-up table to store policy versions and a Tree-Based Consistency approach to maintain a tree structure of the servers. By integrating the look-up table with the tree-based consistency approach, we propose an enhanced version of the Two-Phase Validation Commit (2PVC) protocol integrated with the Paxos commit protocol, with reduced or nearly unchanged performance overhead and no loss of accuracy or precision.
A new caching scheme is also proposed that takes into consideration military/defense applications of Delay-Tolerant Networks (DTNs), where the data to be cached follow entirely different priority levels. In these applications, data popularity can be defined not only by request frequency but also by importance, such as who created and ranked the points of interest in the data and when and where it was created; higher-rank data belonging to a specific location may be more important even though it is requested less often than more popular, lower-priority data. Thus, our caching scheme is designed with these requirements of DTNs for defense applications in mind. The performance evaluation shows that our caching scheme reduces overall access latency, cache misses, and cache memory usage compared to existing caching schemes.
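A hedged, simplified sketch of the caching idea just described, not the thesis implementation: eviction scores combine an importance rank assigned to the data with its request frequency, so high-rank but rarely requested items can outlive popular low-priority ones. Class names, weights, and data are illustrative.

    # Illustrative rank-plus-frequency cache for priority-aware DTN caching.
    class PriorityCache:
        def __init__(self, capacity, rank_weight=0.7, freq_weight=0.3):
            self.capacity = capacity
            self.rank_weight = rank_weight
            self.freq_weight = freq_weight
            self.items = {}          # key -> {"data": ..., "rank": float, "freq": int}

        def _score(self, entry):
            return self.rank_weight * entry["rank"] + self.freq_weight * entry["freq"]

        def get(self, key):
            entry = self.items.get(key)
            if entry:
                entry["freq"] += 1   # request frequency still matters
                return entry["data"]
            return None              # cache miss

        def put(self, key, data, rank):
            if len(self.items) >= self.capacity and key not in self.items:
                victim = min(self.items, key=lambda k: self._score(self.items[k]))
                del self.items[victim]          # evict the lowest combined score
            self.items[key] = {"data": data, "rank": rank, "freq": 1}

    cache = PriorityCache(capacity=2)
    cache.put("poi_map", b"...", rank=0.9)   # high-importance, rarely requested
    cache.put("news", b"...", rank=0.2)      # popular but low-priority
    print(cache.get("news") is not None)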