
    Data Quality Challenges in Network Automation Systems: Case Study of a Multinational Financial Services Corporation

    With the emerging trends of IPv6 rollout, Bring Your Own Device, virtualization, cloud computing and the Internet of Things, corporations continuously face challenges in collecting and analyzing data for multiple purposes. These challenges also apply to network monitoring practices: available data is used not only to assess network capacity and latency, but also to identify possible security breaches and bottlenecks in network performance. This study assesses the quality of network data collected at a multinational financial services corporation and attempts to link the concept of network data quality with process automation of network management and monitoring. Information Technology (IT) can be perceived as the lifeblood of the financial services industry, yet in the discussed case study the corporation strives to cut operational expenditure on IT by 2.5 to 5 percent. The study combines theoretical and practical approaches by conducting a literature review followed by a case study of the aforementioned financial organization. The literature review focuses on (a) the importance of data quality, (b) IP Address Management (IPAM), and (c) network monitoring practices. The case study discusses the implementation of a network automation solution powered by Infoblox hardware and software, which should be capable of scanning all devices in the network along with DHCP lease history while offering convenient IP address management mapping. The corporation's own defined monitoring maturity levels are also taken into consideration. Twelve data quality issues were identified using the network data management platform during the timeline of the research, issues that potentially hinder the network management lifecycle of monitoring, configuration, and deployment. While network management systems are not designed to identify, document, and repair data quality issues, representing the network's performance in terms of capability, latency and behavior depends on data quality along the dimensions of completeness, timeliness and accuracy. The research concludes that the newly implemented network automation system has the potential to support better decision-making for relevant stakeholders and to eliminate business silos by centralizing network data on one platform, supporting business strategy at the operational, tactical, and strategic levels; however, data quality is one of the biggest hurdles to overcome in achieving process automation and, ultimately, a passive network appliance monitoring system.
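    To make the data quality dimensions named above concrete, the following is a minimal sketch of how IPAM/DHCP records could be flagged on completeness, timeliness and accuracy. The record fields, thresholds and helper function are illustrative assumptions and are not taken from the Infoblox platform described in the study.

        from datetime import datetime, timedelta, timezone
        import ipaddress

        # Hypothetical IPAM/DHCP lease record; field names are illustrative only.
        record = {
            "ip": "10.20.30.40",
            "mac": "aa:bb:cc:dd:ee:ff",
            "hostname": "",                      # missing value -> completeness issue
            "last_seen": datetime.now(timezone.utc) - timedelta(days=45),
            "subnet": "10.20.30.0/24",
        }

        def quality_flags(rec, max_age_days=30):
            """Flag simple completeness, timeliness and accuracy problems."""
            flags = []
            # Completeness: every expected field should carry a value.
            for field in ("ip", "mac", "hostname", "subnet"):
                if not rec.get(field):
                    flags.append(f"incomplete: {field} is empty")
            # Timeliness: the record should have been refreshed recently.
            age = datetime.now(timezone.utc) - rec["last_seen"]
            if age > timedelta(days=max_age_days):
                flags.append(f"stale: last seen {age.days} days ago")
            # Accuracy: the address should actually belong to its recorded subnet.
            if ipaddress.ip_address(rec["ip"]) not in ipaddress.ip_network(rec["subnet"]):
                flags.append("inaccurate: ip not in recorded subnet")
            return flags

        print(quality_flags(record))

    Records that raise no flags could feed automation directly; flagged records are the kind of issue the study counts as hindering the monitoring, configuration, and deployment lifecycle.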

    Auditor Inability to Use Professional Skepticism

    This qualitative study sought to understand the challenges facing Louisville, Kentucky Certified Public Accounting firms in developing auditor professional skepticism. The research considered the problem of the lack of professional skepticism and the importance of that skill in the auditor's role. The study focused on three levels of auditing roles, their experience in auditing, education levels, and overall experience in their current role. The research methodology was a single case study design; participants were interviewed to gain insight into their experiences in the auditing field and the challenges they faced. The research deepened the understanding of the problems and contributing factors in professional skepticism development and identified potential solutions.

    A Novel Approach to Determining Real-Time Risk Probabilities in Critical Infrastructure Industrial Control Systems

    Critical Infrastructure Industrial Control Systems are substantially different from their more common and ubiquitous information technology system counterparts. Industrial control systems, such as the distributed control systems and supervisory control and data acquisition systems used to control the power grid, were not originally designed with security in mind. Geographically dispersed distribution, an unfortunate reliance on legacy systems, and stringent availability requirements raise significant cybersecurity concerns regarding electric reliability while constraining the feasibility of many security controls. Recent North American Electric Reliability Corporation Critical Infrastructure Protection standards heavily emphasize cybersecurity concerns and specifically require entities to categorize and identify their Bulk Electric System cyber systems and to have periodic vulnerability assessments performed on those systems. These concerns have increased the need for cybersecurity research specific to Critical Infrastructure Industrial Control Systems. Industry stakeholders have embraced the development of a large-scale test environment through the Department of Energy's National Supervisory Control and Data Acquisition Test-bed program; however, few individuals have access to this program. This research developed a smaller-scale physical industrial control system test-bed that provided an environment for modeling a simulated critical infrastructure sector performing a set of automated processes, for the purpose of exploring solutions and studying concepts related to compromising control systems by way of process tampering through code exploitation, as well as the ability to passively and subsequently identify any risks resulting from such an event. Relative to the specific step being performed within a production cycle, at the moment in time when sensory data samples were captured and analyzed, it was possible to determine the probability of a real-time risk to a mock Critical Infrastructure Industrial Control System by comparing the sample values to those derived from a previously established baseline. This research achieved this goal by implementing a passive, spatial and task-based segregated sensor network, running in parallel to the active control system process for monitoring and detecting risk, and effectively identified a real-time risk probability within a Critical Infrastructure Industrial Control System test-bed. The practicality of this research ranges from determining on-demand real-time risk probabilities during an automated process to employing baseline monitoring techniques for discovering systems, or components thereof, exploited along the supply chain.
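    The baseline-comparison idea can be illustrated with a small sketch: a sensor sample is scored by how far it deviates from readings captured during a known-good run of the same production step. The z-score scaling, the saturation constant and the sample values below are assumptions for illustration, not the thesis's actual method.

        import statistics

        # Hypothetical baseline sensor samples captured during a known-good production step.
        baseline = [4.98, 5.02, 5.01, 4.97, 5.03, 5.00, 4.99, 5.02]

        def risk_probability(sample, baseline, k=3.0):
            """Map a sensor reading's deviation from the baseline to a 0..1 risk score."""
            mean = statistics.fmean(baseline)
            stdev = statistics.stdev(baseline) or 1e-9
            z = abs(sample - mean) / stdev          # how many standard deviations away
            return min(z / k, 1.0)                  # saturate at k standard deviations

        print(risk_probability(5.01, baseline))     # near the baseline -> low risk
        print(risk_probability(6.40, baseline))     # tampered value -> risk score of 1.0

    In this toy model the score is computed per production step, mirroring the idea that risk is judged relative to the specific step under way when the sample was captured.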

    Fuzzing for software vulnerability discovery

    Background: Fuzz testing can be used to detect software programming flaws present in an application by submitting malformed input to the application as it executes. Some programming flaws impact the security of an application by undermining the performance of controls, rendering the application vulnerable to attack. Hence, the discovery of programming flaws can lead to the discovery of security vulnerabilities. Fuzz testing (like almost all run-time testing) does not require access to the source code, which makes it attractive to those who wish to assess the security of an application but are unable to obtain access to the source code, such as end users, corporate clients, security researchers and cyber criminals.
    Motivation: The author wanted to explore the value of fuzz testing from the point of view of a corporate client that intends to release software including a component developed by a third party, where the component source code is not available for review. Three case studies were conducted: two practical fuzz testing methodologies ('blind' data mutation and protocol analysis-based fuzzing) were employed to discover vulnerabilities in a commercial operating system and a purposefully vulnerable web server, respectively. A third case study involved the exploitation of a vulnerability discovered using fuzz testing, including the production of 'proof of concept' code.
    Conclusions: It was found that fuzzing is a valid method for identifying programming flaws in software applications, but additional analysis is required to determine whether a discovered flaw represents a security vulnerability. To better understand the analysis and ranking of errors discovered using fuzz testing, exploit code was developed based on a flaw discovered using fuzz testing. It was found that the level of skill required to create such an exploit depends largely upon the nature of the specific programming flaw. In the worst case (where user-controlled input values are passed to the instruction pointer register), the level of skill required to develop an exploit that permitted arbitrary code execution was minimal. Due to the scale and range of input data accepted by all but the simplest of applications, fuzzing is not a practical method for detecting all flaws present in an application. However, fuzzing should not be discounted, since no current software security testing methodology is capable of discovering all present flaws, and fuzzing offers benefits such as automation, scalability, and a low ratio of false positives.
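    As a rough illustration of the 'blind' data mutation approach mentioned above, the sketch below flips random bytes in a valid seed input and watches the target process for signal-induced crashes. The target path, seed file and mutation rate are placeholders, not artefacts from the case studies.

        import random
        import subprocess
        import tempfile

        # Placeholder target binary; in practice this would be the component under test.
        TARGET = "./parser_under_test"

        def mutate(data, flips=8):
            """Return a copy of the seed with a handful of randomly corrupted bytes."""
            buf = bytearray(data)
            for _ in range(flips):
                pos = random.randrange(len(buf))
                buf[pos] ^= random.randrange(1, 256)   # corrupt one byte
            return bytes(buf)

        def fuzz_once(seed_bytes):
            mutated = mutate(seed_bytes)
            with tempfile.NamedTemporaryFile(delete=False) as f:
                f.write(mutated)
                path = f.name
            # On POSIX a negative return code means the process died on a signal
            # (e.g. SIGSEGV), which is a candidate crash worth triaging.
            result = subprocess.run([TARGET, path], capture_output=True)
            return result.returncode < 0, path

        seed = open("valid_sample.bin", "rb").read()   # hypothetical well-formed input
        for _ in range(1000):
            crashed, case = fuzz_once(seed)
            if crashed:
                print(f"crash reproduced by {case}")

    Crashing inputs found this way would then undergo the additional analysis the thesis describes to decide whether the underlying flaw is actually exploitable.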

    Information Assurance: Small Business and the Basics

    Business is increasingly dependent on information systems that allow decision makers to gather, process, and disseminate information. As the information landscape becomes more interconnected, the threats to computing resources also increase. While the Internet has allowed information to flow, it has also exposed businesses to vulnerabilities. Whereas large businesses have information technology (IT) departments to support their security, small businesses are at risk because they lack personnel dedicated to addressing, controlling and evaluating their information security efforts. Further complicating this situation, most small businesses' IT capabilities have evolved in an ad hoc fashion, where few employees understand the scope of the network and fewer, if any, sat down and envisioned a secure architecture as capabilities were added. This paper examines the problem from the perspective that IT professionals struggle to bring adequate Information Assurance (IA) to smaller organizations where the tools are well known, but the organizational intent of the information security stance lacks a cohesive structure for system development and enforcement. The paper focuses on a process that allows IT professionals to rapidly improve their organizations' security stance with few changes, using tools already in place or available at little or no cost. Starting with an initial risk assessment, the research provides the groundwork for the introduction of a secure system development life cycle (SSDLC) in which continual evaluation improves the security stance and operation of a networked computer system.
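    The initial risk assessment that anchors the proposed process can be illustrated with a simple likelihood-times-impact register, a technique available at no cost. The assets, threats and scores below are invented for the example and are not drawn from the paper.

        # Illustrative initial risk assessment: score each asset as likelihood x impact
        # (1-5 each) and rank the results so the highest risks are addressed first.
        risks = [
            {"asset": "customer database", "threat": "unpatched server", "likelihood": 4, "impact": 5},
            {"asset": "office Wi-Fi",      "threat": "weak passphrase",  "likelihood": 3, "impact": 3},
            {"asset": "email",             "threat": "phishing",         "likelihood": 5, "impact": 4},
        ]

        for r in risks:
            r["score"] = r["likelihood"] * r["impact"]

        for r in sorted(risks, key=lambda r: r["score"], reverse=True):
            print(f'{r["score"]:>2}  {r["asset"]:<18} {r["threat"]}')

    A register like this gives the SSDLC a starting baseline that can be re-scored at each evaluation cycle.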

    Smart Regulation: Lessons from the Artificial Intelligence Act

    The European Union (EU) has recently announced that it will consider a proposal to systematically regulate artificial intelligence (AI) systems. This regulation will add to the legacy of other data regulation acts adopted in the EU and move the EU closer to a comprehensive framework through which it can address rapidly evolving technologies like AI. The United States has yet to implement data regulation or AI regulation legislation at the federal level. This inaction could negatively impact global cooperation with the EU and China, as well as innovation within the United States. The United States is currently the global leader in AI technology; however, if it wants to maintain that position, it should consider the negative repercussions of dragging its feet on regulation. This Comment will demonstrate why systematic regulation of the type being developed in the EU should be adopted in the United States.

    An analysis of cybersecurity culture in an organisation managing Critical Infrastructure

    The 4th industrial revolution (4IR) is transforming the way businesses operate, making them more efficient and data-driven while also enlarging the threat landscape brought on by the convergence of technologies, increasingly so for organisations managing critical infrastructure. Environments that traditionally operated entirely independently of networks and the internet are now connecting in ways that expose critical infrastructure to a new level of cyber-risk that must now be managed. Due to the stable nature of technologies and knowledge in traditional industrial environments, there is a misalignment of skills to emerging technology trends. Globally, cyber-crime attacks are on the rise, with Cisco reporting in 2018 that 31% of all respondents had seen a cyber-attack in their operational environment [1]. With up to 67% of breaches in the Willis Towers report attributed to employee negligence [2], the importance of cybersecurity culture is no longer in question in organisations managing critical infrastructure. Developing an understanding of the drivers for behaviours, attitudes and beliefs related to cybersecurity, and aligning these to an organisation's risk appetite and tolerance, is crucial to managing cyber-risk. There is a very divergent understanding of cyber-risk in the engineering environment. This study investigates employee perceptions, attitudes and values associated with cybersecurity and how these potentially affect their behaviour and ultimately the risk to the plant or organisation. Most traditional culture questionnaires focus on information security, with observations focussing more on social engineering, email hygiene and physical controls. This cybersecurity culture study was conducted to gain insight into people's beliefs, attitudes and behaviours related to cybersecurity, encompassing people, process and technology, and focussing on the operational technology environment in Eskom. Both technical (engineering and IT) and non-technical (business support) staff completed the questionnaire. The questionnaire was categorised into four sections dealing with cybersecurity culture as it relates to individuals, processes and technology, leadership, and the organisation at large. The results of the analysis revealed that collaboration, information sharing, reporting of vulnerabilities, high dependence on and trust in technology, leadership commitment, vigilance, compliance, unclear processes and a lack of understanding of cybersecurity all contribute to the current level of cybersecurity culture. Insights from this study will generate recommendations that will form part of a cybersecurity culture transformation journey.
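    As an illustration of how the four questionnaire categories could be summarised, the sketch below averages Likert-scale responses per category; the category labels follow the abstract, but the scores and respondent counts are invented.

        from statistics import fmean

        # Hypothetical Likert-scale (1-5) responses grouped by the four questionnaire
        # categories named in the abstract; values are invented for illustration.
        responses = {
            "individual":               [4, 3, 5, 4, 2, 4],
            "processes and technology": [3, 2, 3, 4, 3, 2],
            "leadership":               [4, 4, 5, 3, 4, 5],
            "organisation at large":    [3, 3, 2, 4, 3, 3],
        }

        for category, scores in responses.items():
            print(f"{category:<26} mean={fmean(scores):.2f}  n={len(scores)}")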

    Nature-inspired survivability: Prey-inspired survivability countermeasures for cloud computing security challenges

    As cloud computing environments become more complex, adversaries have become highly sophisticated and unpredictable. Moreover, they can easily increase attack power and persist longer before detection. Uncertain malicious actions, latent risks, and unobserved or unobservable risks (UUURs) characterise this new threat domain. This thesis proposes prey-inspired survivability to address unpredictable security challenges borne out of UUURs. While survivability is a well-addressed phenomenon in non-extinct prey animals, applying prey survivability directly to cloud computing is challenging due to contradicting end goals. How to manage evolving survivability goals and requirements under contradicting environmental conditions adds to the challenge. To address these challenges, this thesis proposes a holistic taxonomy which integrates multiple and disparate perspectives of cloud security challenges. In addition, it applies TRIZ (Teoriya Resheniya Izobretatelskikh Zadach) to derive prey-inspired solutions by resolving contradictions. First, it develops a three-step process to facilitate the inter-domain transfer of concepts from nature to the cloud. Moreover, TRIZ's generic approach suggests specific solutions for cloud computing survivability. The thesis then presents the conceptual prey-inspired cloud computing survivability framework (Pi-CCSF), built upon the TRIZ-derived solutions. The framework run-time is pushed to the user space to support evolving survivability design goals. Furthermore, a target-based decision-making technique (TBDM) is proposed to manage survivability decisions. To evaluate the prey-inspired survivability concept, a Pi-CCSF simulator is developed and implemented. Evaluation results show that escalating survivability actions improve the vitality of vulnerable and compromised virtual machines (VMs) by 5% and dramatically improve their overall survivability. Hypothesis testing conclusively supports the hypothesis that the escalation mechanisms can be applied to enhance the survivability of cloud computing systems. Numeric analysis of TBDM shows that by considering survivability preferences and attitudes (which directly impact survivability actions), the TBDM method brings unpredictable survivability information closer to decision processes. This enables efficient execution of variable escalating survivability actions, which in turn enables Pi-CCSF's decision system (DS) to focus on decisions that achieve survivability outcomes under the unpredictability imposed by UUURs.
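    The notion of escalating survivability actions raising a VM's vitality can be sketched as follows; the action list, vitality gains and threshold are assumptions for illustration and do not reproduce the Pi-CCSF decision system or the TBDM technique.

        # Sketch of escalating survivability actions raising a VM "vitality" score
        # (0..1); actions, gains and the survivability threshold are made up.
        ACTIONS = [
            ("isolate network segment", 0.05),
            ("snapshot and migrate VM", 0.10),
            ("restore from clean image", 0.25),
        ]

        def escalate(vitality, threshold=0.8):
            """Apply increasingly strong actions until the VM is considered survivable."""
            applied = []
            for action, gain in ACTIONS:
                if vitality >= threshold:
                    break
                vitality = min(vitality + gain, 1.0)
                applied.append(action)
            return vitality, applied

        print(escalate(0.55))   # a compromised VM escalates through several actions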

    Best software test & quality assurance practices in the project life-cycle. An approach to the creation of a process for improved test & quality assurance practices in the project life-cycle of an SME

    The cost of software problems or errors is a significant problem for global industry, not only for the producers of the software but also for their customers and end users. There is a cost associated with the lack of software quality both for companies who purchase a software product and for the companies who produce it. The task of improving quality on a limited cost base is a difficult one. The foundation of this thesis lies in the difficult task of evaluating software from its inception through its development until its testing and subsequent release. The focus of this thesis is on the improvement of the testing and quality assurance task in an Irish SME with software quality problems but a limited budget. The testing practices and quality assurance methods used during the software quality improvement process in the company are outlined in the thesis. Projects conducted in the company are used as the research basis for the thesis. Following the quality improvement process in the company, a framework for improving software quality was produced and subsequently used and evaluated in another company.