
    Cell therapy-processing economics: small-scale microfactories as a stepping stone toward large-scale macrofactories

    Aim: Manufacturing methods for cell-based therapies differ markedly from those established for noncellular pharmaceuticals and biologics. Attempts to ‘shoehorn’ these into existing frameworks have yielded poor outcomes. Some excellent clinical results have been realized, yet the emergence of a ‘blockbuster’ cell-based therapy has so far proved elusive. Materials & Methods: The pressure to provide these innovative therapies, even at a smaller scale, remains. In this process-economics research paper, we combine cell expansion research data with operational cost modeling in a case study to demonstrate alternative ways in which a novel mesenchymal stem cell-based therapy could be provided at small scale. Results & Conclusions: This research outlines the feasibility of cell microfactories but highlights a strong pressure to automate processes and to spread the quality-control cost burden over larger production batches. The study explores one potential paradigm of cell-based therapy provision as an exemplar on which to base manufacturing strategy.
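
    A minimal sketch of the cost trade-off described above, assuming a fixed per-batch quality-control cost that is amortised over the doses in a batch; the batch sizes, QC cost, and per-dose variable cost below are hypothetical placeholders, not figures from the study.

    # Illustrative cost-per-dose model: a fixed quality-control (QC) release cost
    # is shared across the batch, so larger batches dilute the QC burden.
    # All figures are hypothetical placeholders, not values from the paper.
    def cost_per_dose(batch_size, qc_cost_per_batch, variable_cost_per_dose):
        """Cost per dose when one QC release test covers the whole batch."""
        return variable_cost_per_dose + qc_cost_per_batch / batch_size

    for batch in (1, 10, 100):
        print(batch, cost_per_dose(batch, qc_cost_per_batch=20_000,
                                   variable_cost_per_dose=1_500))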

    NetGlance NMS - An integrated network monitoring system

    Dual-degree Master's programme with Kuban State Agrarian University. This work concerns IT infrastructure and, in particular, the computer networks of KubSAU and IPB, and it describes a network monitoring system, “NetGlance NMS”, developed for the KubSAU System Administration Department. The objective of the work is to develop a tool to optimize the management of the communication infrastructure of the two institutions. During the work, the following tasks were completed: surveying the existing IPB information structure, comparing the information structures of KubSAU and IPB, modelling the IPB computer network (topology, services), investigating bottlenecks and potential pitfalls in the data center and in the IPB computer network, studying information security mechanisms in the IPB computer network, and organizing the monitoring process for the KubSAU computer network. The most important contribution of the work is increased network productivity and improved user experience resulting from the creation and deployment of the monitoring software.
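
    NetGlance NMS itself is not publicly documented in this abstract; the sketch below only illustrates the kind of periodic reachability check a network monitoring system performs, using the Python standard library. The host names, ports, and timeout are hypothetical.

    import socket

    # Hypothetical hosts and service ports to watch.
    HOSTS = [("gateway.example.local", 22), ("dns.example.local", 53)]

    def is_reachable(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def poll_once():
        """One monitoring pass over all watched endpoints."""
        return {f"{host}:{port}": is_reachable(host, port) for host, port in HOSTS}

    if __name__ == "__main__":
        print(poll_once())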

    Towards Optimal IT Availability Planning: Methods and Tools

    The availability of an organisation's IT infrastructure is of vital importance for supporting business activities. IT outages are a source of competitive liability, chipping away at a company's financial performance and reputation. To achieve the maximum possible IT availability within the available budget, organisations need to carry out a set of analysis activities to prioritise efforts and take decisions based on the business needs. This set of analysis activities is called IT availability planning. Most (large) organisations address IT availability planning from one or more of three main angles: information risk management, business continuity and service level management. Information risk management consists of identifying, analysing, evaluating and mitigating the risks that can affect the information processed by an organisation and the information-processing (IT) systems. Business continuity consists of creating a logistic plan, called a business continuity plan, which contains the procedures and all the information needed to recover an organisation's critical processes after a major disruption. Service level management mainly consists of organising, documenting and ensuring a certain quality level (e.g. the availability level) for the services offered by IT systems to the business units of an organisation. Several standard documents provide guidelines for setting up risk, business continuity and service level management processes. However, to be as generally applicable as possible, these standards do not include implementation details. Consequently, to do IT availability planning each organisation needs to develop concrete techniques that suit its needs. To be of practical use, these techniques must be accurate enough to deal with the increasing complexity of IT infrastructures, yet remain feasible within the budget available to organisations. As we argue in this dissertation, basic approaches currently adopted by organisations are feasible but often lack accuracy. In this thesis we propose a graph-based framework for modelling the availability dependencies of the components of an IT infrastructure, and we develop techniques based on this framework to support availability planning. In more detail we present: 1. the Time Dependency model, which is meant to support IT managers in the selection of a cost-optimal set of countermeasures to mitigate availability-related IT risks; 2. the Qualitative Time Dependency model, which is meant to be used to systematically assess availability-related IT risks in combination with existing risk assessment methods; 3. the Time Dependency and Recovery model, which provides a tool for IT managers to set or validate the recovery time objectives of the components of an IT architecture, which are then used to create the IT-related part of a business continuity plan; 4. A2THOS, which verifies whether availability SLAs regulating the provisioning of IT services between business units of the same organisation can be respected when the implementation of these services is partially outsourced to external companies, and which helps choose outsourcing offers accordingly. We run case studies with the data of a primary insurance company and a large multinational company to test the proposed techniques. The results indicate that organisations such as insurance or manufacturing companies, which use IT to support their business, can benefit from the optimisation of the availability of their IT infrastructure: it is possible to develop techniques that support IT availability planning while guaranteeing feasibility within budget. The framework we propose shows that the structure of the IT architecture can be practically employed by such techniques to increase their accuracy over current practice.
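
    The Time Dependency models are formalised in the thesis itself; as a rough illustration of the underlying idea of graph-based availability dependencies, the sketch below multiplies component availabilities over the set of components a service transitively relies on (AND-dependencies only). The topology and availability figures are hypothetical.

    # Rough illustration: a service is available only if every component it
    # transitively depends on is available; with independent AND-dependencies,
    # its availability is the product over that set. Figures are hypothetical.
    deps = {                         # component -> components it depends on
        "web_shop": ["app_server", "database"],
        "app_server": ["vm_host"],
        "database": ["vm_host", "san"],
        "vm_host": [],
        "san": [],
    }
    base_avail = {"web_shop": 0.999, "app_server": 0.998, "database": 0.997,
                  "vm_host": 0.999, "san": 0.9995}

    def reachable(node, seen=None):
        """All components 'node' relies on, itself included (shared deps counted once)."""
        seen = set() if seen is None else seen
        if node not in seen:
            seen.add(node)
            for child in deps[node]:
                reachable(child, seen)
        return seen

    def availability(node):
        """Product of base availabilities over everything the service relies on."""
        result = 1.0
        for comp in reachable(node):
            result *= base_avail[comp]
        return result

    print(round(availability("web_shop"), 5))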

    A distributed middleware for IT/OT convergence in modern industrial environments

    The modern industrial environment is populated by a myriad of intelligent devices that collaborate to accomplish the numerous business processes in place at production sites. The close collaboration between humans and work machines poses interesting new challenges that industry must overcome in order to implement the new digital policies demanded by the industrial transition. The Industry 5.0 movement is a companion revolution to the earlier Industry 4.0, and it relies on three characteristics that any industrial sector should pursue: human centrality, resilience, and sustainability. The fifth industrial revolution cannot be realized without building on the implementation of Industry 4.0-enabled platforms. The common feature in the development of this kind of platform is the need to integrate the information (IT) and operational (OT) layers. Our thesis work focuses on the implementation of a platform addressing all the digitization features foreseen by the fourth industrial revolution, making IT/OT convergence inside production plants an improvement rather than a risk. Furthermore, we added modular features to our platform that enable the Industry 5.0 vision: we support human centrality through mobile crowdsensing techniques, and resilience and sustainability through pluggable cloud computing services combined with data contributed by the crowd. We achieved important and encouraging results in all the domains in which we conducted our experiments. Our IT/OT convergence-enabled platform exhibits the performance needed to satisfy the strict requirements of production sites. The multi-layer capability of the framework enables the exploitation of data beyond that coming strictly from work machines, allowing closer interaction between the company, its employees, and its customers.
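
    The platform's actual interfaces are not given in the abstract; as a loose illustration of the IT/OT convergence idea, the sketch below normalises an OT telemetry sample and a mobile-crowdsensing report into one common event schema before hand-off to IT-side services. All field and function names are hypothetical.

    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class Event:
        source: str      # "ot" (work machine) or "crowd" (human report)
        kind: str        # e.g. "temperature", "fault_report"
        payload: dict
        timestamp: str

    def _now():
        return datetime.now(timezone.utc).isoformat()

    def from_ot_sample(machine_id, sensor, reading):
        """Wrap a machine telemetry reading in the common schema."""
        return Event("ot", sensor, {"machine": machine_id, "reading": reading}, _now())

    def from_crowd_report(worker_id, text):
        """Wrap a crowdsensed operator report in the common schema."""
        return Event("crowd", "fault_report", {"worker": worker_id, "text": text}, _now())

    print(asdict(from_ot_sample("press-07", "temperature", 81.4)))
    print(asdict(from_crowd_report("op-12", "unusual vibration near press-07")))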

    Not-for-profit entities industry developments - 2018; Audit risk alerts


    Modern computing: Vision and challenges

    Over the past six decades, the computing systems field has experienced significant transformations, profoundly impacting society with landmark developments such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have been continuously evolving and adapting to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog and edge computing, and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments such as serverless computing, quantum computing, and on-device AI on edge devices. Trends emerge when one traces the technological trajectory: the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.

    Interdependent Security and Compliance in Service Selection

    Application development today is characterized by ever shorter release cycles and more frequent change requests. Hence, development methods such as service composition are increasingly arousing interest as viable alternative approaches. While employing web services as building blocks rapidly reduces development times, it raises new challenges regarding security and compliance, since their implementation remains a black box which usually cannot be controlled. Security in particular becomes even more challenging since some applications require domain-specific security objectives such as location privacy. Another important aspect is that security objectives are in general not isolated concerns but subject to interdependence. Hence, this thesis addresses the question of how to consider interdependent security and compliance in service composition. Current approaches for service composition consider neither interdependent security nor compliance. Selecting suitable services for a composition is a combinatorial problem which is known to be NP-hard. Often this problem is solved using genetic algorithms in order to obtain near-optimal solutions in reasonable time, particularly when multiple objectives have to be optimized simultaneously, such as price, runtime and data encryption strength. Security properties of compositions are usually verified using formal methods. However, none of the available methods supports interdependence effects or the definition of arbitrary security objectives. Similarly, no current approach ensures compliance of service compositions during service selection. Instead, compliance is verified afterwards, which might necessitate repeating the selection process in case of a non-compliant solution. In this thesis, novel approaches for considering interdependent security and compliance in service composition are presented and discussed. Since no formal methods exist that cover interdependence effects for security, this aspect is addressed by means of a security assessment. An assessment method is developed which builds upon the notion of structural decomposition in order to assess the fulfillment of arbitrary security objectives in terms of a utility function. Interdependence effects are modeled as dependencies between utility functions. To enable compliance awareness, an approach is presented which checks the compliance of compositions during service selection and marks non-compliant parts. This makes it possible to repair the corresponding parts during the selection process by replacing the current services, and hence avoids the need to repeat the selection process. It is demonstrated how to embed the presented approaches into a genetic algorithm in order to ease integration with existing approaches for service composition. The developed approaches are compared to state-of-the-art genetic algorithms in simulations.
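
    The thesis models interdependence as dependencies between utility functions; the sketch below only illustrates that idea in the simplest possible form, where the utility contributed by location privacy is discounted by the fulfilment of the encryption objective. The services, scores, weights, and the dependency rule are hypothetical.

    # Toy interdependent-utility scoring of candidate services. Hypothetical data.
    services = {
        "svc_a": {"encryption": 0.9, "location_privacy": 0.8, "price": 0.30},
        "svc_b": {"encryption": 0.4, "location_privacy": 0.9, "price": 0.10},
    }

    def utility(props):
        u_enc = props["encryption"]
        # Interdependence: location privacy only counts as far as encryption allows.
        u_loc = props["location_privacy"] * u_enc
        u_price = 1.0 - props["price"]            # cheaper is better
        return 0.4 * u_enc + 0.4 * u_loc + 0.2 * u_price

    best = max(services, key=lambda name: utility(services[name]))
    print(best, round(utility(services[best]), 3))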

    Revenue maximization problems in commercial data centers

    PhD Thesis. As IT systems become more important every day, one of the main concerns is that users may face major problems, and eventually incur major costs, if computing systems do not meet the expected performance requirements: customers expect reliability and performance guarantees, while underperforming systems lose revenue. Even with the adoption of data centers as the hub of IT organizations and a provider of business efficiencies, the problems are not over, because it is extremely difficult for service providers to meet the promised performance guarantees in the face of unpredictable demand. One possible approach is the adoption of Service Level Agreements (SLAs), contracts that specify a level of performance that must be met and compensations in case of failure. In this thesis I will address some of the performance problems that arise when IT companies sell the service of running ‘jobs’ subject to Quality of Service (QoS) constraints. In particular, the aim is to improve the efficiency of service provisioning systems by allowing them to adapt to changing demand conditions. First, I will define the problem in terms of a utility function to maximize. Two different models are analyzed, one for single jobs and the other for session-based traffic. Then, I will introduce an autonomic model for service provision. The architecture consists of a set of hosted applications that share a certain number of servers. The system collects demand and performance statistics and estimates traffic parameters. These estimates are used by management policies which implement dynamic resource allocation and admission algorithms. Results from a number of experiments show that the performance of these heuristics is close to optimal. QoSP (Quality of Service Provisioning); British Teleco
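
    The thesis defines the objective formally; the sketch below is only a toy version of an SLA-style revenue objective, where each completed job earns a fee and each deadline miss incurs a penalty. The fees, penalties, deadline, and response times are hypothetical.

    # Toy SLA revenue objective: fee per completed job minus a penalty per
    # deadline miss. All numbers are hypothetical.
    def revenue(response_times, deadline, fee, penalty):
        earned = fee * len(response_times)
        missed = sum(1 for t in response_times if t > deadline)
        return earned - penalty * missed

    print(revenue(response_times=[0.8, 1.2, 2.5, 0.9], deadline=2.0,
                  fee=10.0, penalty=25.0))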

    Cyber Security of Critical Infrastructures

    Critical infrastructures are vital assets for public safety, economic welfare, and the national security of countries. The vulnerabilities of critical infrastructures have increased with the widespread use of information technologies. As Critical National Infrastructures are becoming more vulnerable to cyber-attacks, their protection becomes a significant issue for organizations as well as nations. The risks to continued operations, from failing to upgrade aging infrastructure or not meeting mandated regulatory regimes, are considered highly significant, given the demonstrable impact of such circumstances. Due to the rapid increase of sophisticated cyber threats targeting critical infrastructures with significant destructive effects, the cybersecurity of critical infrastructures has become an agenda item for academics, practitioners, and policy makers. A holistic view which covers technical, policy, human, and behavioural aspects is essential to handle the cyber security of critical infrastructures effectively. Moreover, the ability to attribute crimes to criminals is a vital element of avoiding impunity in cyberspace. In this book, both research and practical aspects of cyber security considerations in critical infrastructures are presented. Aligned with the interdisciplinary nature of cyber security, authors from academia, government, and industry have contributed 13 chapters. The issues that are discussed and analysed include cybersecurity training, maturity assessment frameworks, malware analysis techniques, ransomware attacks, security solutions for industrial control systems, and privacy preservation methods.

    Advanced Threat Intelligence: Interpretation of Anomalous Behavior in Ubiquitous Kernel Processes

    Targeted attacks on digital infrastructures are a rising threat against the confidentiality, integrity, and availability of both IT systems and sensitive data. With the emergence of advanced persistent threats (APTs), identifying and understanding such attacks has become an increasingly difficult task. Current signature-based systems are heavily reliant on fixed patterns that struggle with unknown or evasive applications, while behavior-based solutions usually leave most of the interpretative work to a human analyst. This thesis presents a multi-stage system able to detect and classify anomalous behavior within a user session by observing and analyzing ubiquitous kernel processes. Application candidates suitable for monitoring are initially selected through an adapted sentiment mining process using a score based on the log likelihood ratio (LLR). For transparent anomaly detection within a corpus of associated events, the author utilizes star structures, a bipartite representation designed to approximate the edit distance between graphs. Templates describing nominal behavior are generated automatically and are used for the computation of both an anomaly score and a report containing all deviating events. The extracted anomalies are classified using the Random Forest (RF) and Support Vector Machine (SVM) algorithms. Ultimately, the newly labeled patterns are mapped to a dedicated APT attacker–defender model that considers objectives, actions, actors, as well as assets, thereby bridging the gap between attack indicators and detailed threat semantics. This enables both risk assessment and decision support for mitigating targeted attacks. Results show that the prototype system is capable of identifying 99.8% of all star structure anomalies as benign or malicious. In multi-class scenarios that seek to associate each anomaly with a distinct attack pattern belonging to a particular APT stage, we achieve a solid accuracy of 95.7%. Furthermore, we demonstrate that 88.3% of observed attacks could be identified by analyzing and classifying a single ubiquitous Windows process for a mere 10 seconds, thereby eliminating the necessity to monitor each and every (unknown) application running on a system. With its semantic take on threat detection and classification, the proposed system offers a formal as well as technical solution to an information security challenge of great significance. The financial support by the Christian Doppler Research Association, the Austrian Federal Ministry for Digital and Economic Affairs, and the National Foundation for Research, Technology and Development is gratefully acknowledged.
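
    The abstract mentions an adapted log-likelihood-ratio (LLR) score for selecting candidate processes; the exact score is defined in the thesis. As a generic illustration, the sketch below computes a Dunning-style G² log-likelihood ratio over a 2x2 contingency table of co-occurrence counts, one common form such a score takes. The counts are hypothetical.

    import math

    def _h(*counts):
        """Unnormalised entropy term: sum*log(sum) minus the sum of x*log(x)."""
        total = sum(counts)
        s = sum(c * math.log(c) for c in counts if c)
        return (total * math.log(total) if total else 0.0) - s

    def llr(k11, k12, k21, k22):
        """Dunning-style G^2 log-likelihood ratio over a 2x2 contingency table."""
        row = _h(k11 + k12, k21 + k22)
        col = _h(k11 + k21, k12 + k22)
        mat = _h(k11, k12, k21, k22)
        return max(0.0, 2.0 * (row + col - mat))

    # Hypothetical counts: how often a kernel event occurs with / without a
    # candidate process in the observed sessions.
    print(round(llr(k11=110, k12=2442, k21=111, k22=29114), 2))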