816 research outputs found
Workload Schedulers - Genesis, Algorithms and Comparisons
In this article we provide brief descriptions of three classes of schedulers: operating system process schedulers, cluster system job schedulers, and big data schedulers. We describe their evolution from early adoptions to modern implementations, considering both the usage and the features of their algorithms. In summary, we discuss the differences between the presented classes of schedulers and trace their chronological development. In conclusion, we highlight similarities in the design focus of scheduling strategies that apply to both local and distributed systems.
Best matching processes in distributed systems
The growing complexity and dynamic behavior of modern manufacturing and service industries, along with competitive and globalized markets, have gradually transformed traditional centralized systems into distributed networks of e-Systems (electronic systems). Emerging examples include e-Factories, virtual enterprises, smart farms, automated warehouses, and intelligent transportation systems. These (and similar) distributed systems, regardless of context and application, have a property in common: they all involve certain types of interactions (collaborative, competitive, or both) among their distributed individuals, from clusters of passive sensors and machines to complex networks of computers, intelligent robots, humans, and enterprises. Having this common property, such systems may encounter common challenges in terms of suboptimal interactions, and thus poor performance, caused by potential mismatches between individuals. For example, mismatched subassembly parts, vehicle–route assignments, supplier–retailer pairings, employee–department placements, and product–automated guided vehicle–storage location allocations may lead to low-quality products, congested roads, unstable supply networks, conflicts, and low service levels, respectively. This research refers to this problem as best matching and investigates it as a major design principle of the Collaborative Control Theory (CCT).
The original contribution of this research is to elaborate on the fundamentals of best matching in distributed and collaborative systems by providing general frameworks for (1) systematic analysis, an inclusive taxonomy, and analogical and structural comparison of different matching processes; (2) specification and formulation of problems, and development of algorithms and protocols for best matching; (3) validation of the models, algorithms, and protocols through extensive numerical experiments and case studies. The first goal is addressed by investigating matching problems in distributed production, manufacturing, supply, and service systems based on a recently developed reference model, the PRISM Taxonomy of Best Matching. Following the second goal, the identified problems are then formulated as mixed-integer programs. Due to the computational complexity of matching problems, various optimization algorithms are developed for solving different problem instances, including modified genetic algorithms, tabu search, and neighbourhood search heuristics. The dynamic and collaborative/competitive behaviors of matching processes in distributed settings are also formulated and examined through various collaboration, best matching, and task administration protocols. In line with the third goal, four case studies are conducted on various manufacturing, supply, and service systems to highlight the impact of best matching on their operational performance, including service level, utilization, stability, and cost-effectiveness, and to validate the computational merits of the developed solution methodologies.
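
As a concrete illustration of the kind of one-to-one matching such mixed-integer programs capture, the sketch below solves a small assignment instance with SciPy's optimal-assignment solver; the supplier–retailer framing and cost matrix are hypothetical, not taken from the thesis's models or case studies.

    # Minimal best-matching sketch: assign suppliers to retailers so that
    # total mismatch cost is minimized. All numbers here are made up.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i][j]: hypothetical mismatch cost of pairing supplier i with retailer j
    cost = np.array([
        [4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0],
    ])

    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
    for i, j in zip(rows, cols):
        print(f"supplier {i} -> retailer {j} (cost {cost[i, j]})")
    print("total mismatch cost:", cost[rows, cols].sum())

The same instance can equivalently be written as a mixed-integer program with binary variables x_ij and one-to-one constraints, which is the form the abstract describes for the general problem classes.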
Scalable attack modelling in support of security information and event management
While assessing security on single devices can be performed using vulnerability assessment tools, modelling more intricate attacks, which incorporate multiple steps across different machines, requires more advanced techniques. Attack graphs are a promising technique; however, they face a number of challenges. An attack graph is an abstract description of the attacks that are possible against a specific network. Nodes in an attack graph represent the state of the network at a point in time, while arcs between nodes indicate the transformation of the network from one state to another via the exploitation of a vulnerability. Attack graphs allow system and network configuration information to be correlated and analysed to indicate imminent threats. This approach is limited by several serious issues, including the state-space explosion, due to the exponential nature of the problem, and the difficulty of visualising an exhaustive graph of all potential attacks. Furthermore, the lack of availability of exploit information in a standardised format makes it difficult to model atomic attacks in terms of exploit requirements and effects.
The objective of this thesis is to address these issues and to present a proof-of-concept solution. It describes a proof-of-concept implementation of an automated attack-graph-based tool to assist in the evaluation of network security, assessing whether a sequence of actions could lead to an attacker gaining access to critical network resources. Key objectives are the investigation of the attacks that can be modelled, the discovery of attack paths, the development of techniques to strengthen networks based on those paths, and the testing of scalability for larger networks. The proof-of-concept framework, Network Vulnerability Analyser (NVA), sources vulnerability information from the National Vulnerability Database (NVD), a comprehensive, publicly available vulnerability database, and transforms it into atomic exploit actions. NVA combines these with a topological network model, using an automated planner to identify potential attacks on network devices. Automated planning is an area of Artificial Intelligence (AI) which focuses on the computational deliberation over action sequences by measuring their expected outcomes; this technique is applied to discover the best possible solution within the attack graph that is created. Through the use of heuristics developed for this study, unpromising regions of the attack graph are avoided. Effectively, this mitigates the state-space explosion problem associated with modelling large-scale networks, enumerating only critical paths rather than an exhaustive graph. SGPlan5 was selected as the most suitable automated planner for this study and was integrated into the system, employing the network and exploit models to construct critical attack paths. A critical attack path indicates the most likely attack vector to be used in compromising a targeted device. Critical attack paths are identified by SGPlan5 by using a heuristic to search the state space for the attack which yields the highest aggregated severity score. CVSS severity scores were selected as a means of guiding state-space exploration since they are currently the only publicly available metric that can measure the impact of an exploited vulnerability. Two analysis techniques have been implemented to further support the user in making an informed decision on how to prevent identified attacks. Evaluation of NVA comprised a demonstration of its effectiveness in two case studies and an analysis of its scalability potential. Results demonstrate that NVA can successfully enumerate the expected critical attack paths and use this information to establish a solution to the identified attacks. Additionally, performance and scalability testing illustrate NVA's success when applied to realistically sized networks.
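
To make the severity-guided search concrete, the toy sketch below performs a best-first enumeration of attack paths over a hypothetical three-host exploit graph, preferring paths with the highest aggregated CVSS score. The hosts, edges, and scores are invented for illustration; the thesis itself delegates this search to the SGPlan5 planner over its network and exploit models.

    # Illustrative best-first search for a high-severity attack path.
    # Hosts, edges, and CVSS scores below are hypothetical.
    import heapq

    # edges: host -> [(next_host, cvss_score_of_exploit)]
    edges = {
        "web": [("app", 7.5), ("db", 4.3)],
        "app": [("db", 9.8)],
        "db": [],
    }

    def best_attack_path(start, target):
        # Max-heap on aggregated severity (negated for heapq's min-heap),
        # so the most severe partial path is always expanded first.
        heap = [(0.0, start, [start])]
        best = None
        while heap:
            neg_sev, host, path = heapq.heappop(heap)
            if host == target:
                sev = -neg_sev
                if best is None or sev > best[0]:
                    best = (sev, path)
                continue
            for nxt, score in edges[host]:
                if nxt not in path:  # never revisit a host on a path
                    heapq.heappush(heap, (neg_sev - score, nxt, path + [nxt]))
        return best

    print(best_attack_path("web", "db"))  # -> (17.3, ['web', 'app', 'db'])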
The first ICASE/LARC industry roundtable: Session proceedings
The first 'ICASE/LaRC Industry Roundtable' was held on October 3-4, 1994, in Williamsburg, Virginia. The main purpose of the roundtable was to draw the attention of ICASE/LaRC scientists to industrial research agendas. The roundtable was attended by about 200 scientists: 30% from NASA Langley, 20% from universities, 17% from NASA Langley contractors (including ICASE personnel), and the remainder from federal agencies other than NASA Langley. The technical areas covered reflected the major research programs in ICASE and closely associated NASA branches. About 80% of the speakers were from industry. This report is a compilation of the session summaries prepared by the session chairmen.
A Comparison of wide area network performance using virtualized and non-virtualized client architectures
The goal of this thesis is to determine whether there is a significant performance difference between two network computer architecture models. The study measures latency and throughput for both client-server and virtualized client architectures. In the client-server environment, the client computer performs a significant portion of the work and frequently requires downloading and uploading files to and from a remote location. Virtual client architecture turns the client machine into a terminal, sending only keystrokes and mouse clicks and receiving only display pixel or sound changes. I accomplished the goal of comparing these architectures by comparing completion times for a ping reply, a file download, a small set of common work tasks, and a moderately large SQL database query. I compared these tasks using simulated wide area network, local area network, and virtual client network architectures. The study limits the architecture to one where the virtual client and server are in the same data center.
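
The completion-time measurements described above can be approximated with very little code; as a minimal sketch, the snippet below times TCP connection setup as a crude latency proxy. The host, port, and trial count are placeholders, and the thesis's actual instrumentation is not specified here.

    # Rough latency measurement sketch: average TCP connect time to a host.
    import socket
    import time

    def tcp_round_trip_ms(host: str, port: int, trials: int = 10) -> float:
        """Average time to open a TCP connection, in milliseconds."""
        total = 0.0
        for _ in range(trials):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                pass  # connection established; close immediately
            total += time.perf_counter() - start
        return 1000 * total / trials

    print(f"avg connect latency: {tcp_round_trip_ms('example.com', 80):.1f} ms")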
Testing SOAR Tools in Use
Modern security operation centers (SOCs) rely on operators and a tapestry of logging and alerting tools with large-scale collection and query abilities. SOC investigations are tedious, as they rely on manual efforts to query diverse data sources, overlay related logs, correlate the data into information, and then document the results in a ticketing system. Security orchestration, automation, and response (SOAR) tools are a new technology that promises to collect, filter, and display needed data; automate common tasks that require SOC analysts' time; facilitate SOC collaboration; and improve both the efficiency and the consistency of SOCs. SOAR tools have never been tested in practice to evaluate their effect and understand them in use. In this paper, we design and administer the first hands-on user study of SOAR tools, involving 24 participants and 6 commercial SOAR tools. Our contributions include the experimental design, itemizing six characteristics of SOAR tools, and a methodology for testing them. We describe the configuration of the test environment in a cyber range, including network, user, and threat emulation; a full SOC tool suite; and the creation of artifacts allowing multiple representative investigation scenarios to permit testing. We present the first research results on SOAR tools. We found that SOAR configuration is critical, as it involves creative design for data display and automation. We found that SOAR tools increased efficiency and reduced context switching during investigations, although ticket accuracy and completeness (indicating investigation quality) decreased with SOAR use. Our findings indicate that user preferences are slightly negatively correlated with their performance with the tool; overautomation was a concern of senior analysts, and SOAR tools that balanced automation with assisting a user to make decisions were preferred.
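
The preference-versus-performance finding is a rank-correlation claim; a minimal sketch of how such a check could be run is below. The ratings and scores are fabricated placeholders, not the study's data.

    # Hedged sketch: Spearman rank correlation between analysts' tool
    # preference ratings and their investigation performance scores.
    # All values below are invented for illustration.
    from scipy.stats import spearmanr

    preference = [5, 4, 4, 3, 2, 5, 1, 3]         # hypothetical ratings (1-5)
    performance = [0.61, 0.70, 0.55, 0.74, 0.80,  # hypothetical scores (0-1)
                   0.58, 0.77, 0.69]

    rho, p = spearmanr(preference, performance)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # a negative rho would echo the finding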
- …