Geodetic Graphs: Experiments and New Constructions
In 1962 Ore initiated the study of geodetic graphs. A graph is called
geodetic if the shortest path between every pair of vertices is unique. In the
subsequent years a wide range of papers appeared investigating their peculiar
properties. Yet, a complete classification of geodetic graphs is out of reach.
In this work we present a program enumerating all geodetic graphs of a given
size. Using our program, we succeeded in finding all geodetic graphs with up to 25
vertices and all regular geodetic graphs with up to 32 vertices. This leads to
the discovery of two new infinite families of geodetic graphs.
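The defining property can be checked directly: a connected graph is geodetic if and only if a breadth-first search from every vertex counts exactly one shortest path to each other vertex. A minimal sketch of such a check (this is an illustration of the definition, not the paper's enumeration program, whose details the abstract does not give):

```python
from collections import deque

def is_geodetic(adj):
    """Return True iff the connected undirected graph is geodetic,
    i.e. the shortest path between every pair of vertices is unique.
    adj: dict mapping each vertex to an iterable of its neighbours."""
    for src in adj:
        # BFS from src, counting the number of shortest paths to each vertex.
        dist = {src: 0}
        count = {src: 1}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:             # v reached for the first time
                    dist[v] = dist[u] + 1
                    count[v] = count[u]
                    queue.append(v)
                elif dist[v] == dist[u] + 1:  # another shortest path to v
                    count[v] += count[u]
        if any(c > 1 for c in count.values()):
            return False
    return True

# Odd cycles are geodetic, even cycles are not (opposite vertices
# of C4 are joined by two shortest paths):
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
c4 = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
print(is_geodetic(c5), is_geodetic(c4))  # True False
```

This runs one BFS per vertex, so it is quadratic in the number of edges and practical only as a filter, not as an enumeration strategy.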
From Observing to Understanding: Empirical Insights on the Organizational Foundations of Security Chaos Engineering
Cloud computing has become an integral part of modern corporate IT infrastructures. However, conventional IT-security measures cannot cope with its specific technical needs, which result from complexity, virtualization, and multi-tenancy, nor with the need for holistic security approaches that incorporate both technological and organizational perspectives on security. Security Chaos Engineering (SCE) is a promising approach to overcoming these shortcomings. Unfortunately, existing literature focuses on the technical aspects of SCE and neglects the organizational perspective, i.e., which organizational success factors need to be addressed for a successful implementation. To close this gap, we conducted an interview study following the approach of Gioia et al. (2013) and identified seven success factors related to goals, social structure, participants, and technology within a company, following Scott (1981). Furthermore, we found that these organizational success factors are not only the basis for the introduction of SCE but also represent common requirements for holistic security approaches in general.
Towards Secure Cloud-Computing in FinTechs – An Artefact for Prioritizing Information Security Measures
The number of FinTechs has been proliferating over the last decades. While their innovative offerings hold disruptive potential, the security of their cloud services remains a fundamental issue. Tight budgets and the need for rapid product development force FinTechs to focus on the most necessary information security measures (ISMs) that ensure regulatory compliance and avoid customer losses due to security incidents. The question arises of how FinTechs should prioritize ISMs. To answer this question, we follow design science research to develop an artifact by which FinTechs that use or provide cloud services can obtain a prioritized list of ISMs. Our resulting artifact builds upon extant research on FinTechs and information security (IS), relevant regulatory frameworks, and the shared responsibility model for cloud services. Our research contributes to the conceptualization of integrated ISM prioritization for FinTechs and provides practitioners with a structured prioritization approach based on a standardized logic.
Rebound Effects: How Do They Hinder the Achievement of Environmental Protection Goals?
Introduction to the focus topic
On the average case of MergeInsertion
MergeInsertion, also known as the Ford-Johnson algorithm, is a sorting algorithm which, for many input sizes, still achieves the best known upper bound on the number of comparisons. Indeed, it gets extremely close to the information-theoretic lower bound. While the worst-case behavior is well understood, little is known about the average case. This work takes a closer look at the average-case behavior. In particular, we establish an upper bound of n log n − 1.4005n + o(n) comparisons. We also give an exact description of the probability distribution of the length of the chain a given element is inserted into and use it to approximate the average number of comparisons numerically. Moreover, we compute the exact average number of comparisons for n up to 148. Furthermore, we experimentally explore the impact of different decision trees for binary insertion. To conclude, we conduct experiments showing that a slightly different insertion order leads to a better average case, and we compare the algorithm to Manacher’s combination of merging and MergeInsertion as well as to the recent combined algorithm with (1,2)-Insertionsort by Iwama and Teruyama.
Deutsche Forschungsgemeinschaft, Projekt DEA
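The bounds mentioned in the abstract are easy to tabulate. A small sketch contrasting the information-theoretic lower bound ⌈log2 n!⌉ with MergeInsertion's known worst-case comparison count F(n) = Σ_{k=1}^{n} ⌈log2(3k/4)⌉ (Knuth's closed form; the paper's average-case bound omits the o(n) term, so it is not directly comparable at small n and is left out here):

```python
import math

def ford_johnson_worst_case(n):
    """Worst-case comparison count of MergeInsertion (Knuth, TAOCP vol. 3):
    F(n) = sum_{k=1}^{n} ceil(log2(3k/4))."""
    return sum(math.ceil(math.log2(3 * k / 4)) for k in range(1, n + 1))

def info_lower_bound(n):
    """Information-theoretic lower bound on comparison sorting: ceil(log2(n!))."""
    return math.ceil(math.log2(math.factorial(n)))

# How close MergeInsertion's worst case sits to the lower bound:
for n in (5, 12, 16):
    print(n, info_lower_bound(n), ford_johnson_worst_case(n))
# 5  7 7
# 12 29 30
# 16 45 46
```

For n = 5 the worst case meets the lower bound exactly, which is part of why the algorithm remained the comparison-count record holder for so long.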
Agile Research Data Management with Open Source: LinkAhead
Research data management (RDM) in academic scientific environments increasingly enters the focus as an important part of good scientific practice and as a topic with great potential for saving time and money. Nevertheless, there is a shortage of appropriate tools that fulfill the specific requirements of scientific research. We identified where the requirements in science deviate from those of other fields and proposed a list of requirements that RDM software should meet to become a viable option. We analyzed a number of currently available technologies and tool categories against these requirements and identified areas where no tools satisfy researchers' needs. Finally, we assessed the open-source RDMS (research data management system) LinkAhead for compatibility with the proposed features and found that it fulfills the requirements in the area of semantic, flexible data handling, in which other tools show weaknesses.
Ischemic biomarker heart-type fatty acid binding protein (hFABP) in acute heart failure - diagnostic and prognostic insights compared to NT-proBNP and troponin I
Background: To evaluate the diagnostic and long-term prognostic value of hFABP compared to NT-proBNP and troponin I (TnI) in patients presenting to the emergency department (ED) with suspected acute heart failure (AHF). Methods: 401 patients with acute dyspnea or peripheral edema, 122 suffering from AHF, were prospectively enrolled and followed up to 5 years. hFABP combined with NT-proBNP versus NT-proBNP alone was tested for AHF diagnosis. The prognostic value of hFABP versus TnI was evaluated in models predicting all-cause mortality (ACM) and AHF-related rehospitalization (AHF-RH) at 1 and 5 years, including 11 conventional risk factors plus NT-proBNP. Results: Additional hFABP measurements improved the diagnostic specificity and positive predictive value (PPV) of sole NT-proBNP testing at the cutoff <300 ng/l to “rule out” AHF. The highest hFABP levels (4th quartile) were associated with increased ACM (hazard ratios (HR): 2.1–2.5; p = 0.04) and AHF-RH risk at 5 years (HR 2.8–8.3, p = 0.001). ACM was better characterized in prognostic models including TnI, whereas AHF-RH was better characterized in prognostic models including hFABP. Cox analyses revealed a 2 % increase in ACM risk and a 3–7 % increase in AHF-RH risk at 5 years for each 10 ng/ml increase in hFABP. Conclusions: Combining hFABP with NT-proBNP (<300 ng/l) only improves diagnostic specificity and PPV to rule out AHF. hFABP may improve long-term prediction of AHF-RH, whereas TnI may improve prediction of ACM. Trial registration: ClinicalTrials.gov identifier: NCT00143793
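In a Cox model, per-unit hazard ratios compound multiplicatively across units of the covariate. A hedged arithmetic sketch using the reported ~2 % ACM risk increase per 10 ng/ml of hFABP (the 50 ng/ml increase below is a hypothetical input for illustration, not a value from the study):

```python
# Illustrative only: per-unit hazard ratio of 1.02 taken from the
# abstract's "2 % increase of ACM risk per 10 ng/ml" figure.
per_unit_hr = 1.02        # hazard ratio per 10 ng/ml of hFABP
delta_ng_ml = 50          # hypothetical increase in hFABP
units = delta_ng_ml / 10
combined_hr = per_unit_hr ** units  # hazard ratios multiply per unit
print(round(combined_hr, 3))  # 1.104, i.e. ~10 % higher hazard
```

The point of the sketch is that a modest per-unit figure accumulates over a clinically plausible biomarker range, which is why the quartile-level hazard ratios in the abstract are much larger than 1.02.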