Visual Security Analytics
Despite increasingly advanced methods and technologies for automating tasks in cyber security, human domain knowledge remains indispensable. In particular, monitoring a system’s security posture and detecting and analyzing cyber threats require the involvement of security experts. However, the large amount of data relevant for these tasks is a major impediment to any kind of manual analysis. It is therefore necessary to enable security experts to deal with large amounts of data efficiently. Visual Security Analytics (VSA) aims to achieve this by generating interactive visual representations of log data, or of any other data relevant for monitoring, ensuring, and preserving cyber security, and it covers different ways of analyzing security data with visual approaches (Marty 2009). It combines automated and visual analysis in a “best-of-both-worlds” approach. VSA is thus a highly interdisciplinary field spanning, among others, information security, security analytics, information visualization, and human-computer interaction.
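To make the idea concrete, the following Python sketch shows one possible VSA-style view: it aggregates failed-login events from a small synthetic log and renders them as a heatmap an analyst could scan for bursts of activity. The log fields and data are hypothetical and not taken from the paper.

```python
# Minimal, illustrative visual-security-analytics view: count failed logins
# per source IP and hour and show the counts as a heatmap.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical log records; a real deployment would parse syslog/SIEM exports.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-05-01 01:05", "2024-05-01 01:07", "2024-05-01 02:30",
        "2024-05-01 02:31", "2024-05-01 02:32", "2024-05-01 03:15",
    ]),
    "src_ip": ["10.0.0.5", "10.0.0.5", "198.51.100.7",
               "198.51.100.7", "198.51.100.7", "10.0.0.5"],
    "outcome": ["fail", "fail", "fail", "fail", "fail", "success"],
})

failed = events[events["outcome"] == "fail"].copy()
failed["hour"] = failed["timestamp"].dt.hour
pivot = failed.pivot_table(index="src_ip", columns="hour",
                           values="outcome", aggfunc="count", fill_value=0)

fig, ax = plt.subplots()
im = ax.imshow(pivot.values, cmap="Reds")
ax.set_xticks(range(len(pivot.columns)), pivot.columns)
ax.set_yticks(range(len(pivot.index)), pivot.index)
ax.set_xlabel("hour of day")
ax.set_ylabel("source IP")
ax.set_title("Failed logins per source and hour")
fig.colorbar(im, ax=ax, label="count")
plt.show()
```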
Digging for Quality Management in Production Systems: A Solution Space for Blockchain Collaborations
Quality management (QM) and efficient information sharing among value chain partners have been important IS research topics for decades. Today, IS researchers and practitioners hope to overcome various information inefficiencies in complex supply chains using blockchain approaches. Additionally, upcoming traceability regulations increase companies’ interest in innovative blockchain-based enterprise solutions. We identified several factors, including a lack of standards, that could hinder blockchain adoption. This paper sheds light on the organizational and technical aspects of blockchain enterprise applications to support future collaboration initiatives, and it develops a terminology that researchers and practitioners can reuse. A case study describes several quality-related objects and events that characterize multiple dimensions and traceability types. Based on these findings, we provide a set of design principles to guide future design decisions. Finally, the paper offers a holistic orientation and implications for researchers and practitioners moving forward.
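As a rough illustration of how quality-related traceability events could be represented for blockchain-based sharing, the Python sketch below models hash-chained records; the field names and the chaining scheme are assumptions made for demonstration, not the paper’s design.

```python
# Illustrative sketch: quality-related traceability events as hash-chained
# records, the kind of structure a blockchain collaboration might store.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class QualityEvent:
    batch_id: str          # traced production batch
    event_type: str        # e.g. "inspection", "rework", "shipment"
    measurement: float     # quality-relevant measurement for this event
    actor: str             # value-chain partner recording the event
    prev_hash: str         # hash of the preceding record (chain link)

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

genesis = QualityEvent("BATCH-001", "inspection", 0.98, "SupplierA", prev_hash="0" * 64)
follow_up = QualityEvent("BATCH-001", "shipment", 0.98, "LogisticsB", prev_hash=genesis.digest())
print(follow_up.digest())
```

Chaining each record to the digest of its predecessor is only one way to make the shared trace tamper-evident; the actual ledger design would depend on the chosen blockchain platform.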
Measuring and visualizing cyber threat intelligence quality
The very raison d’être of cyber threat intelligence (CTI) is to provide meaningful knowledge about cyber security threats. The exchange and collaborative generation of CTI by means of sharing platforms has proven to be an important aspect of its practical application. Inaccurate, incomplete, or outdated threat intelligence is a major problem, as only high-quality CTI can help detect and defend against cyber attacks. Moreover, while the amount of available CTI is increasing, there is no guarantee that its quality remains unaffected. It is thus in the best interest of every stakeholder to be aware of the quality of a CTI artifact, which allows for informed decisions and enables detailed analyses. Our work makes a twofold contribution to the challenge of assessing threat intelligence quality. We first propose a set of relevant quality dimensions and configure metrics to assess these dimensions in the context of CTI. In a second step, we show how an existing CTI analysis tool can be extended to make the quality assessment transparent to security analysts. Where necessary, analysts’ subjective perceptions are also included in the quality assessment concept.
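The Python sketch below illustrates how two commonly discussed quality dimensions, completeness and timeliness, might be scored for a CTI artifact; the dimension choice, field names, and formulas are illustrative assumptions rather than the metrics proposed in the paper.

```python
# Toy quality scoring for a CTI artifact (STIX-like dictionary).
from datetime import datetime, timezone

def completeness(artifact: dict, expected_fields: list[str]) -> float:
    """Fraction of expected fields that are present and non-empty."""
    filled = sum(1 for f in expected_fields if artifact.get(f))
    return filled / len(expected_fields)

def timeliness(artifact: dict, half_life_days: float = 30.0) -> float:
    """Score decaying linearly to 0 as the artifact ages past twice the half-life."""
    modified = datetime.fromisoformat(artifact["modified"])
    age_days = (datetime.now(timezone.utc) - modified).days
    return max(0.0, 1.0 - age_days / (2 * half_life_days))

indicator = {
    "type": "indicator",
    "pattern": "[ipv4-addr:value = '203.0.113.9']",
    "created": "2024-01-10T00:00:00+00:00",
    "modified": "2024-04-01T00:00:00+00:00",
    "description": "",          # empty description lowers completeness
}

fields = ["type", "pattern", "created", "modified", "description"]
print(f"completeness={completeness(indicator, fields):.2f}",
      f"timeliness={timeliness(indicator):.2f}")
```

In a sharing platform, scores of this kind could be attached to each artifact so analysts see at a glance how much trust to place in it.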
Order-of-magnitude differences in computational performance of analog Ising machines induced by the choice of nonlinearity
Ising machines based on nonlinear analog systems are a promising approach to accelerating the computation of NP-hard optimization problems. Yet their analog nature also causes amplitude inhomogeneity, which can deteriorate the ability to find optimal solutions. Here, we investigate how the system’s nonlinear transfer function can mitigate amplitude inhomogeneity and improve computational performance. By simulating Ising machines with polynomial, periodic, sigmoid, and clipped transfer functions and benchmarking them on MaxCut optimization problems, we find that the choice of transfer function has a significant influence on calculation time and solution quality. For periodic, sigmoid, and clipped transfer functions, we report order-of-magnitude improvements in time-to-solution compared to conventional polynomial models, which we link to the suppression of amplitude inhomogeneity induced by saturation of the transfer function. This provides insights into the suitability of systems for building Ising machines and presents an efficient way to overcome performance limitations.
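The following Python sketch mimics the experiment on a toy scale: it iterates a simple discrete-time analog Ising machine with interchangeable transfer functions on a small MaxCut instance. The update rule, gain values, and graph are assumptions made for illustration, not the authors’ exact model.

```python
# Toy analog Ising machine: amplitudes evolve under a linear gain plus
# coupling term, passed through a chosen nonlinear transfer function.
import numpy as np

def polynomial(x):    return x - x**3 / 3.0   # cubic (Kerr-like) nonlinearity
def periodic(x):      return np.sin(x)
def sigmoid_like(x):  return np.tanh(x)
def clipped(x):       return np.clip(x, -1.0, 1.0)

def run_ising_machine(J, transfer, steps=2000, gain=0.9, coupling=0.15, seed=0):
    rng = np.random.default_rng(seed)
    x = 0.01 * rng.standard_normal(J.shape[0])   # small random initial amplitudes
    for _ in range(steps):
        x = transfer(gain * x + coupling * J @ x)
    return np.sign(x)                            # read out spins from amplitudes

def maxcut_value(J, spins):
    # J encodes the negative adjacency matrix; the cut counts crossing edges.
    A = -J
    return sum(A[i, j] for i in range(len(spins)) for j in range(i + 1, len(spins))
               if spins[i] != spins[j])

# Small ring graph with antiferromagnetic coupling as a toy MaxCut instance.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
J = -A

for name, f in [("polynomial", polynomial), ("periodic", periodic),
                ("sigmoid", sigmoid_like), ("clipped", clipped)]:
    spins = run_ising_machine(J, f)
    print(f"{name:10s} cut = {maxcut_value(J, spins)}")
```

The saturating functions (periodic, sigmoid, clipped) bound the amplitudes, which is the mechanism the abstract links to the suppression of amplitude inhomogeneity.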
Formalizing and Integrating User Knowledge into Security Analytics
The Internet of Things and ubiquitous cyber-physical systems increase the attack surface for cyber-physical attacks. Such attacks exploit technical vulnerabilities and human weaknesses to wreak havoc on organizations’ information systems, physical machines, or even humans. Taking a stand against these multi-dimensional attacks requires automated measures to be combined with people, as their knowledge has proven critical for security analytics. However, there is no uniform understanding of information security knowledge and its integration into security analytics activities. With this work, we structure and formalize the notions of knowledge that we deem essential for holistic security analytics. A corresponding knowledge model is established based on the Incident Detection Lifecycle, which summarizes the security analytics activities. This idea of knowledge-based security analytics highlights a dichotomy: security experts can operate security mechanisms and thus contribute their knowledge, whereas security novices often cannot operate such mechanisms and therefore cannot make their highly specialized domain knowledge available for security analytics. This results in several severe knowledge gaps. We present a research prototype that shows how several of these gaps can be overcome by simplifying interaction with automated security analytics techniques.
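Purely as an illustration of what formalized, lifecycle-anchored knowledge could look like in code, the Python sketch below tags knowledge items with the lifecycle phase they support; the phase names and fields are assumptions, not the paper’s knowledge model.

```python
# Illustrative knowledge items tagged with a (hypothetical) lifecycle phase,
# so automated analytics can retrieve the knowledge relevant to each activity.
from dataclasses import dataclass
from enum import Enum, auto

class LifecyclePhase(Enum):      # assumed phase names for the sketch
    PREPARATION = auto()
    DETECTION = auto()
    ANALYSIS = auto()
    RESPONSE = auto()

@dataclass
class KnowledgeItem:
    source: str               # e.g. a security expert or a domain novice
    statement: str            # formalized piece of domain knowledge
    phase: LifecyclePhase     # lifecycle activity the knowledge supports

knowledge_base = [
    KnowledgeItem("plant operator", "Pump P-4 never runs between 02:00 and 04:00",
                  LifecyclePhase.DETECTION),
    KnowledgeItem("security analyst", "Alert bursts from VLAN 12 are usually benign scans",
                  LifecyclePhase.ANALYSIS),
]

detection_knowledge = [k for k in knowledge_base if k.phase is LifecyclePhase.DETECTION]
print(detection_knowledge[0].statement)
```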
Research Output and International Cooperation Among Countries During the COVID-19 Pandemic: Scientometric Analysis
Background: The COVID-19 pandemic, caused by the novel coronavirus SARS-CoV-2, has instigated immediate and massive worldwide research efforts. Rapid publication of research data may be desirable but also carries the risk of quality loss.
Objective: This analysis aimed to correlate the severity of the COVID-19 outbreak with its related scientific output per country.
Methods: All articles related to the COVID-19 pandemic were retrieved from Web of Science and analyzed using the web application SciPE (science performance evaluation), which allows large-scale scientometric analyses of the global geographical distribution of scientific output.
Results: A total of 7185 publications, including 2592 articles, 2091 editorial materials, 2528 early access papers, 1479 letters, 633 reviews, and other contributions, were extracted. The top 3 countries involved in COVID-19 research were the United States, China, and Italy. The number of confirmed COVID-19 cases or deaths per region correlated with scientific research output. The United States was most active in terms of collaborative efforts, sharing a significant number of manuscript authorships with the United Kingdom, China, and Italy. The United States was China’s most frequent collaborative partner, followed by the United Kingdom.
Conclusions: The COVID-19 research landscape is developing rapidly and is driven by countries with a generally strong prepandemic research output, but it is also significantly affected by countries with a high prevalence of COVID-19 cases. Our findings indicate that the United States is leading international collaborative efforts.
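To illustrate the core analysis step, the Python sketch below correlates per-country case counts with publication counts and reads a toy co-authorship matrix; all numbers are invented for demonstration and do not reproduce the study’s Web of Science / SciPE data.

```python
# Toy scientometric analysis: correlate outbreak burden with research output
# and summarize collaboration intensity between country pairs.
import numpy as np

countries = ["USA", "China", "Italy", "Germany", "UK"]
confirmed_cases = np.array([1_000_000, 84_000, 230_000, 180_000, 260_000])  # hypothetical
publications    = np.array([1900, 1600, 700, 350, 600])                     # hypothetical

# Pearson correlation on log-transformed counts to tame the heavy skew.
r = np.corrcoef(np.log10(confirmed_cases), np.log10(publications))[0, 1]
print(f"log-log Pearson r = {r:.2f}")

# Symmetric co-authorship matrix (rows/columns follow `countries`), also invented.
coauthorships = np.array([
    [0, 120, 60, 40, 90],
    [120, 0, 15, 10, 30],
    [60, 15, 0, 12, 25],
    [40, 10, 12, 0, 20],
    [90, 30, 25, 20, 0],
])
top_partner = countries[int(np.argmax(coauthorships[countries.index("China")]))]
print(f"China's most frequent partner in this toy matrix: {top_partner}")
```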