
    Usage of control charts for time series analysis in financial management

    We will deal with corporate financial processes using statistical process control, specifically time series control charts. The article outlines the intersection of two disciplines, namely econometrics and statistical process control. The theoretical part discusses the methodology of time series control charts; in the research part, the methodology is demonstrated on two case studies. The first focuses on an analysis of the Slovak currency from the perspective of its usefulness for generating profits through time series control charts. The second involves regulation of financial flows for a heteroskedastic financial process by EWMA and ARIMA control charts. We use the Box-Jenkins methodology to find models of the time series of annual Argentinian Gross Domestic Product, available as a basic index for 1951-1998. We demonstrate the versatility of control charts not only in manufacturing but also in managing the financial stability of cash flows. Specifically, we show their sensitivity in detecting even small shifts in the mean, which may indicate financial instability. This analytical approach is widely applicable and therefore of theoretical and practical interest.
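
    To make the EWMA mechanism concrete, the sketch below shows how an EWMA chart flags small mean shifts. The series, smoothing constant, and limit width are illustrative; the case studies' actual data and parameters are not reproduced here.

        import numpy as np

        def ewma_chart(x, lam=0.2, L=3.0):
            """EWMA statistics and control limits for a series x.

            lam is the smoothing constant and L the limit width in
            standard deviations; both are illustrative defaults.
            """
            x = np.asarray(x, dtype=float)
            mu, sigma = x.mean(), x.std(ddof=1)        # in-control estimates
            z = np.empty_like(x)
            z[0] = lam * x[0] + (1 - lam) * mu
            for i in range(1, len(x)):
                z[i] = lam * x[i] + (1 - lam) * z[i - 1]
            t = np.arange(1, len(x) + 1)
            # time-varying limits that widen toward the asymptotic value
            hw = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
            return z, mu + hw, mu - hw

        series = np.array([10.1, 10.0, 9.9, 10.2, 10.4, 10.6, 10.7, 10.9])
        z, ucl, lcl = ewma_chart(series)
        print(np.where((z > ucl) | (z < lcl))[0])      # out-of-control indices

    Because each EWMA point carries a weighted memory of past observations, a sustained small drift accumulates and eventually crosses the limits, which is the sensitivity property the abstract refers to.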

    Process capability indices for non-normal data

    When the probability distribution of a process characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. Various surrogate process capability indices (PCIs) have been proposed for non-normality, but few literature sources offer their comprehensive evaluation and comparison, in particular whether they adequately capture process capability under mild and severe departures from normality, and what the best method is to compute true capability under each of these circumstances. We review nine methods with respect to their performance in handling non-normality. The comparison is carried out by simulating log-normal and other non-normal data, and the results are presented using box plots. We show the performance to depend on a method's ability to capture the tail behavior of the underlying distribution.
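
    As a point of reference for the comparison, here is a minimal sketch of the conventional indices next to one common percentile-based surrogate (a Clements-style construction). The specification limits and distribution parameters are illustrative only, and the abstract does not identify which nine methods are compared.

        import numpy as np

        def cp_cpk_conventional(x, lsl, usl):
            # classical indices: valid only under (approximate) normality
            mu, sigma = x.mean(), x.std(ddof=1)
            cp = (usl - lsl) / (6 * sigma)
            cpk = min(usl - mu, mu - lsl) / (3 * sigma)
            return cp, cpk

        def cp_cpk_percentile(x, lsl, usl):
            # percentile-based surrogate (Clements-style): replaces the
            # 6-sigma spread with the empirical 0.135%-99.865% range
            p_lo, med, p_hi = np.percentile(x, [0.135, 50, 99.865])
            cp = (usl - lsl) / (p_hi - p_lo)
            cpk = min((usl - med) / (p_hi - med), (med - lsl) / (med - p_lo))
            return cp, cpk

        rng = np.random.default_rng(1)
        x = rng.lognormal(mean=0.0, sigma=0.5, size=5000)   # skewed process data
        print(cp_cpk_conventional(x, lsl=0.2, usl=4.0))
        print(cp_cpk_percentile(x, lsl=0.2, usl=4.0))       # diverges under skew

    The gap between the two outputs on the same skewed sample illustrates the erroneous interpretation the abstract warns about.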

    Implementing control charts to corporate financial management

    In the paper, corporate financial management using statistical process control (SPC) will be introduced, in particular Shewhart control charts operating with a constant mean, control charts with a non-constant mean, and process capability indices. The center line, the upper control limit (UCL), and the lower control limit (LCL) for the control charts will be defined, with the regulated process not allowed to cross the UCL and LCL boundaries. Altman's model (the so-called Z-score), the most popular corporate financial stability index, will be used. We will demonstrate the benefits of SPC on two case studies: the first will focus on corporate financial flow control, the second will include six companies. Special types of control charts, i.e., CUSUM and EWMA, will be discussed due to their sensitivity to mean shifts, and practical applications will be demonstrated on two additional case studies. The results prove that control charts can be successfully implemented not only in manufacturing processes but in corporate financial management as well.
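
    A minimal sketch of the two ingredients combined here: the original 1968 Altman Z-score for publicly traded manufacturers, charted with three-sigma Shewhart limits from an individuals chart. The financial ratios below are hypothetical numbers, not the paper's case-study data.

        import numpy as np

        def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
            # original 1968 Z-score for publicly traded manufacturers
            return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
                    + 0.6 * mve_tl + 1.0 * sales_ta)

        def shewhart_limits(series, k=3.0):
            # individuals chart: sigma estimated from the average moving
            # range divided by d2 (d2 = 1.128 for subgroups of size 2)
            x = np.asarray(series, dtype=float)
            sigma_hat = np.abs(np.diff(x)).mean() / 1.128
            cl = x.mean()
            return cl, cl + k * sigma_hat, cl - k * sigma_hat

        z_scores = [altman_z(0.25, 0.30, 0.12, 1.4, 1.1),
                    altman_z(0.22, 0.28, 0.10, 1.3, 1.0),
                    altman_z(0.10, 0.15, 0.02, 0.7, 0.8)]   # deteriorating firm
        cl, ucl, lcl = shewhart_limits(z_scores)
        print([f"{z:.2f}" for z in z_scores],
              f"CL={cl:.2f} UCL={ucl:.2f} LCL={lcl:.2f}")

    Treating the periodic Z-score as the charted characteristic is one way to operationalize "regulating" financial stability: a score drifting below the LCL becomes an actionable signal.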

    Guide TEX it: Uneasy beginnings of typesetters from the perspective of non-typesetters

    The article describes the process of typesetting a proceedings volume in TEX from the perspective of prospective typesetters, along with the challenges and obstacles encountered and solved during the work. Focused on the problems of generating the desired Table of Contents and captions of graphic objects, it further lists minor annoyances and the tricks used to solve them. Also described is a field-proven system for electronic content management and synchronization of different file versions, utilized while working on the project in a decentralized fashion.

    Distributed Denial of Service Attacks as Threat Vectors to Economic Infrastructure: Motives, Estimated Losses and Defense Against the HTTP/1.1 GET and SYN Floods Nightmares

    With the number of nodes in the Internet's backbone networks rising exponentially, the possibility of emergence of entities exhibiting outwardly hostile intents has been steadily increasing. Cyberspace is fittingly termed "the no man's land" because of its unprecedented growth pattern and lackluster control mechanisms. Distributed Denial of Service (DDoS) attacks take advantage of this situation and primarily aim at destabilizing or severely limiting the usability of infrastructure to end-users, in part or in whole. A typical DDoS incursion exploiting a heterogeneous base of personal computers consists of two phases: insertion of a predefined set of instructions into the host systems via either self-propagating or non-reproducing malware, and simultaneous execution of repeating queries to a destination unit. Generally targeted and deployed to impede the functionality of a single server or multiple servers with similar properties, and utilizing substantial resources with little to no discernible selection criteria, DDoS attacks pose a significant threat. Moreover, effective and efficient countermeasures require experience, precision, speed, operational awareness, appropriate security protocols summarizing and alleviating potential consequences in case of failure to contain the attack, as well as proactive detection algorithms in place. Global response instruments (batch filtering, temporary IP address blacklisting) are only suitable for SYN floods, whereas during a GET DDoS the same tools cannot be used due to the presence of legitimate incoming requests. The article scrutinizes methodology and policies currently in effect as part of Critical Infrastructure Protection initiatives. The examination allows us to outline procedural decision-making trees for the event of a DDoS violation while maintaining a predefined and consistent quality-of-service level. Furthermore, the rationale of perpetrators' motives to instigate the attacks is hypothesized, with preferential focus on economic infrastructure components. These hubs of the virtualized economy are detailed, and target selection probabilities in tactical and strategic perspectives are identified based on known facts. Financial losses, worst-case scenarios, and social repercussions following a successful intrusion are also investigated by means of inference from successful DDoS incursions.
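
    To illustrate why per-source blacklisting is a blunt tool, here is a minimal defensive sketch of a sliding-window rate filter with hypothetical thresholds. Against a naive flood it works; during a GET flood, legitimate clients whose request rates overlap the threshold get dropped too, which is the limitation the abstract notes.

        import time
        from collections import defaultdict, deque

        class RateFilter:
            """Per-source sliding-window rate limiter (illustrative thresholds)."""

            def __init__(self, max_requests=100, window_s=10.0):
                self.max_requests = max_requests
                self.window_s = window_s
                self.seen = defaultdict(deque)

            def allow(self, src_ip, now=None):
                now = time.monotonic() if now is None else now
                q = self.seen[src_ip]
                while q and now - q[0] > self.window_s:
                    q.popleft()              # forget events outside the window
                if len(q) >= self.max_requests:
                    return False             # candidate for temporary blacklisting
                q.append(now)
                return True

        f = RateFilter(max_requests=3, window_s=1.0)
        print([f.allow("203.0.113.7", now=t) for t in (0.0, 0.1, 0.2, 0.3, 1.5)])
        # -> [True, True, True, False, True]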

    User-side password authentication: A study

    Researchers have for some time been struggling to change the inert mindset of users regarding passwords in response to advances in processing power, the emergence of highly scalable computing models, and attackers prioritizing the human element. Security recommendations are ignored, as documented by recent corporate database breaches and releases of unencrypted password caches which corroborated lacking security awareness in the vast majority of Internet users. In order to educate users about computer security, terms such as hashing, cipher systems and their weaknesses, brute-force attacks, social engineering, multi-factor authentication, and the balance between usability and ease of use must be clearly explained. However, academia tends to focus on areas requiring a deep mathematical or programmatic background; clearly communicating these security elements while minimizing scientific rigor thus remains challenging. The article aims to provide a concise, comprehensive research overview and outline of authentication, including information entropy, hashing algorithms, reverse password engineering, the importance of complexity and length in passwords, general-purpose attacks such as brute force and social engineering, as well as specialized ones, namely side-channel interception. Novel ways of increasing security by utilizing two- and multi-factor authentication, visual passwords, pass phrases, and mnemonic-based strings will be considered as well, along with their advantages over the traditional textual password model and the pitfalls for their widespread propagation. In particular, we hypothesize that technological developments allow vendors to offer solutions which limit unauthorized third parties from gaining windows of opportunity to exploit weaknesses in authentication schemes. However, as infrastructure becomes more resilient, attackers shift their focus towards human-based attacks (social engineering, social networking). Due to largely unchanging short-term behavior patterns, institutions need to lecture employees over extended periods about being vigilant to leaks of procedural and organizational information which may help attackers bypass perimeter-level security measures. We conclude the article by listing emerging threats in the field, specifically malware distributed via social networks and the targeting of mobile devices.
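
    Two of the listed concepts are easy to make concrete with the Python standard library: the brute-force search space implied by alphabet size and length, and salted, iterated hashing that slows offline attacks. Parameters below are illustrative, not recommendations from the article.

        import hashlib
        import math
        import os

        def search_space_bits(alphabet_size, length):
            # upper bound on password entropy: log2(alphabet_size ** length)
            return length * math.log2(alphabet_size)

        # length beats alphabet growth: an 8-char full-printable-ASCII
        # password versus a 16-char lowercase passphrase
        print(search_space_bits(95, 8))    # ~52.6 bits
        print(search_space_bits(26, 16))   # ~75.2 bits

        def hash_password(password, salt=None, iterations=600_000):
            # a unique salt defeats precomputed tables; iteration count
            # (an illustrative modern value) raises per-guess cost
            salt = os.urandom(16) if salt is None else salt
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt,
                                         iterations)
            return salt, digest

        salt, digest = hash_password("correct horse battery staple")
        print(digest.hex()[:16])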

    Mobile cyberwarfare threats and mitigations: An overview

    Mobile technologies have transformed rapidly, with their rate of adoption increasing for several years. Smartphones, tablets, and other small form-factor devices are integrated in educational institutions and medical and commercial facilities, with further military, governmental, and industrial deployment expected in the future. However, the complexity arising from interconnected hardware and software layers opens up multiple attack vectors for adversaries, allowing exfiltration of personally identifiable data, malicious modification of the device's intended functionality, pushing unauthorized code without user consent, or incorporating the device into a botnet. The mobile threat landscape has become the next stage of cyberwarfare. Here, users unable or unwilling to adequately protect themselves make decisions based on information originating from untrusted third parties with potentially harmful intents. Recognizing the situation, vendors have implemented a comprehensive array of tools and concepts such as ASLR, DEP, closing the source code, sandboxing, and code validation. In this asymmetric security model, developers invalidate novel attack vectors while adversaries employ sophisticated techniques to thwart detection and achieve large-scale penetration. The former are further penalized by a heterogeneous base of software versions, some entirely defenseless against recent exploits. The paper presents an overview of techniques in current mobile operating systems and the best practices vendors have incorporated to minimize unauthorized third-party modifications. It also aims to provide a high-level description of the exploits malware creators use to target users who, as we further postulate, underestimate the capabilities of their devices. Best practices for safer use are briefly outlined, too.
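
    Of the listed mitigations, code validation is the simplest to illustrate. The sketch below checks a package digest against a trusted record before accepting it; a real platform verifies cryptographic signatures over signed metadata, so the manifest here is a hypothetical stand-in.

        import hashlib
        import hmac

        # hypothetical trusted manifest: package name -> expected SHA-256
        # digest, standing in for a platform's signed metadata
        TRUSTED_MANIFEST = {
            "com.example.app": hashlib.sha256(b"original package bytes").digest(),
        }

        def validate_package(name, package_bytes):
            expected = TRUSTED_MANIFEST.get(name)
            if expected is None:
                return False                   # unknown package: reject
            actual = hashlib.sha256(package_bytes).digest()
            # constant-time comparison avoids timing side channels
            return hmac.compare_digest(actual, expected)

        print(validate_package("com.example.app", b"original package bytes"))  # True
        print(validate_package("com.example.app", b"tampered package bytes"))  # False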

    Human factor: The weakest link of security?

    The human element plays a critical role in cyberwarfare scenarios: a malicious adversary can launch targeted social engineering campaigns to gain unfettered access to sensitive electronic resources, establish unauthorized system persistence, and use the compromised host as a stepping stone for further exploitation, incorporating it into a botnet of controlled nodes. As hardware and software infrastructure protection efforts result in increasingly resilient systems, focus on end-users, who constitute a security vulnerability, can be expected to increase in the future. However, password database leaks, the effectiveness of social engineering, and bring your own device (BYOD) trends in organizations all raise concerns about the security competencies the general population possesses. In the article, we present results of a large-scale questionnaire study pertaining to the security habits and BYOD practices of more than 700 participants, conducted in the Czech Republic during the period of September-December 2013. Ranging from preferred operating systems to password selection rationale, the answers should form a representative cross-section of how an "average" user maintains their electronic identity online. The snapshot provides valuable insights and actionable intelligence based on which information and communication technology policies in organizations can be modified to better accommodate the patterns discovered. The article maps the current state of selected aspects of security in increasingly interconnected, technology-driven global structures where electronic identities supplement real-world ones and their compromise results in significant negative consequences.

    Launching distributed denial of service attacks by network protocol exploitation

    The article aims to provide a concise introduction to network protocols and methods of their exploitation, which may lead to DDoS (Distributed Denial of Service) attacks that focus on flooding and saturating elements of a victim's physical network infrastructure. In the first part, the layers of the RFC (Request for Comments) and OSI (Open Systems Interconnection) models are delimited, along with their differences and associated protocols. The HTTP (Hypertext Transfer Protocol) standard is treated preferentially as it forms the communication framework of the Internet. The second part focuses on the most prolific type of network incursion currently in use, DDoS, its structure, and a freely available tool, LOIC (Low Orbit Ion Cannon), utilized to launch it. The third part evaluates specific forms of DDoS, including known countermeasures. The conclusion summarizes the text and briefly hypothesizes about the future of the field.
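
    To ground the layering discussion, here is a minimal sketch of a single well-formed HTTP/1.1 GET over a TCP socket, the request type a GET flood repeats at scale. The host is chosen for illustration; the point is that each such request is indistinguishable in isolation from legitimate traffic.

        import socket

        HOST = "example.com"    # illustrative target
        request = ("GET / HTTP/1.1\r\n"
                   f"Host: {HOST}\r\n"
                   "Connection: close\r\n\r\n").encode()

        # create_connection performs the TCP handshake (SYN, SYN-ACK, ACK);
        # a SYN flood abuses exactly this step by never completing it
        with socket.create_connection((HOST, 80), timeout=5) as sock:
            sock.sendall(request)
            response = b""
            while chunk := sock.recv(4096):
                response += chunk

        print(response.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"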