
    An Epistemological Inquiry into the Incorporation of Emergency Management Concept in the Homeland Security with a Post-Disaster Security Centric Focus

    The historical roots of the Emergency Management concept in the U.S. date back to the 19th century. As disasters occurred, policies relating to disaster response were developed, and many statutory provisions, including several Federal Disaster Relief Acts, conceptually established the framework of Emergency Management. In 1979, with the foundation of the Federal Emergency Management Agency (FEMA), disaster relief efforts were finally institutionalized, and the federal government acknowledged that Emergency Management included mitigation, preparedness, response, and recovery activities, abbreviated 'MPRR.' However, after 2000, the U.S. experienced two milestone events: the September 11 terrorist attacks in 2001 and Hurricane Katrina in 2005. Following the foundation of the Department of Homeland Security (DHS) in 2002, the definitional context of Emergency Management and its phases/components, in short its essence, evolved and was incorporated into many official documents in different ways, creating contextual inconsistencies. Recent key official documents embody epistemological problems that have the potential to undermine the coherence of the Homeland Security contextual framework and to pose theoretical challenges to the education and training of Homeland Security/Emergency Management stakeholders. Furthermore, the conceptual design of the Emergency Support Functions (ESF), which are defined within the context of the National Response Framework (NRF), displays similar problematic symptoms, and existing urban-area Public Safety and Security planning processes have not been supported by methodologies aligned with post-disaster security requirements. To that end, the conceptual framework of Emergency Management and its incorporation in the Homeland Security global architecture should be revised and redefined to enhance coherence and reliability. Coherence in the contextual structure links directly to the system's organizational structure and its viability functions. In addition, holistic multi-dimensional system representations/abstractions, which would support an appreciation of the system's complex context, should be incorporated in policy documents and used to educate the relevant stakeholders (individuals, teams, etc.) during training/orientation programs. Finally, the NRF and its ESFs should be reviewed through a post-disaster security-centric focus, since the post-disaster environment has unique characteristics that must be addressed by different approaches. In that sense, this dissertation develops a Post-Disaster Security Index (PDSI) Model that provides valuable insights for security agents and other Emergency Management and Homeland Security stakeholders.
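    The abstract does not describe how the PDSI is computed; composite indices of this kind are often built as weighted aggregations of normalized indicators. The following is a minimal sketch under that assumption, with indicator names and weights invented for illustration rather than taken from the dissertation's actual model:

        # Hypothetical sketch of a composite Post-Disaster Security Index (PDSI).
        # Indicator names, weights, and the aggregation rule are illustrative
        # assumptions, not the dissertation's actual model.

        def pdsi(indicators: dict[str, float], weights: dict[str, float]) -> float:
            """Weighted average of indicators already normalized to [0, 1]."""
            total_weight = sum(weights.values())
            return sum(weights[k] * indicators[k] for k in weights) / total_weight

        # Example: three normalized (and invented) post-disaster security indicators.
        indicators = {
            "looting_incident_rate": 0.2,      # already inverted so higher = safer
            "security_force_coverage": 0.7,
            "critical_site_protection": 0.6,
        }
        weights = {
            "looting_incident_rate": 0.3,
            "security_force_coverage": 0.4,
            "critical_site_protection": 0.3,
        }
        print(f"PDSI = {pdsi(indicators, weights):.2f}")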

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed in every step. The discussion also covers language and tool support and the challenges arising from the transformation.
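    The case study's tooling is not detailed in the abstract; as a rough illustration of what one cross-system process step over WSDL-described services involves, the sketch below uses the Python zeep SOAP client in place of a BPEL engine. The endpoints, operations, and fields are hypothetical placeholders, not artifacts from the case study:

        # Hedged sketch: invoking two WSDL-described services in sequence,
        # mimicking one path that a BPEL engine would orchestrate.
        # Endpoints, operations, and fields are invented for illustration.
        from zeep import Client

        orders = Client("http://erp.example.com/OrderService?wsdl")
        billing = Client("http://billing.example.com/InvoiceService?wsdl")

        def process_order(order_id: str) -> str:
            # Step 1: fetch the order from the ERP system.
            order = orders.service.GetOrder(orderId=order_id)
            # Step 2: hand the result to the billing system (the cross-system hop).
            invoice = billing.service.CreateInvoice(customerId=order.customerId,
                                                    amount=order.total)
            return invoice.invoiceNumber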

    Toward a Bio-Inspired System Architecting Framework: Simulation of the Integration of Autonomous Bus Fleets & Alternative Fuel Infrastructures in Closed Sociotechnical Environments

    Cities are set to become highly interconnected and coordinated environments composed of emerging technologies meant to alleviate or resolve some of the daunting issues of the 21st century, such as rapid urbanization, resource scarcity, and excessive population demand in urban centers. These cybernetically enabled built environments are expected to solve these complex problems through technologies that incorporate sensors and other data collection means to fuse and understand the large volumes of data/information generated by other technologies and by the human population. Many of these technologies will be pivotal assets in supporting and managing capabilities in various city sectors ranging from energy to healthcare. Among these sectors, however, a significant amount of attention within the recent decade has been directed at the transportation sector, owing to a flood of new technological growth: extensive research, development, and even implementation of emerging technologies such as autonomous vehicles (AVs), the Internet of Things (IoT), alternative fueling sources, clean propulsion technologies, cloud/edge computing, and many others. Within the current body of knowledge, it is fairly well known how many of these emerging technologies perform in isolation as stand-alone entities, but little is known about their performance when integrated into a transportation system with other emerging technologies and humans within the system organization. This merging of new-age technologies and humans can make next-generation transportation systems extremely complex to analyze. Additionally, with new and alternative forms of technology expected in the near future, the quantity of technologies, especially in the smart city context, will form a continuously expanding array whose capabilities grow with technological advancement, which can change the performance of a given system architecture. Therefore, the objective of this research is to understand the system architecture implications of integrating different alternative fueling infrastructures with autonomous bus (AB) fleets in the transportation system within a closed sociotechnical environment. Understanding these implications could provide performance-based input into a more sophisticated approach or framework, which is proposed as future work of this research.
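    The dissertation's simulation machinery is not spelled out in the abstract; a discrete-event sketch of the core scenario (an AB fleet competing for a shared fueling/charging infrastructure) can be written with the SimPy library. Fleet size, route time, and charger count below are invented parameters:

        # Hedged sketch: autonomous buses (ABs) sharing a charging infrastructure.
        # All parameters are illustrative, not values from the research.
        import simpy

        ROUTE_MIN, CHARGE_MIN, FLEET_SIZE, CHARGERS = 45, 30, 6, 2

        def bus(env, name, chargers):
            while True:
                yield env.timeout(ROUTE_MIN)          # drive one route loop
                with chargers.request() as slot:      # queue for a free charger
                    yield slot
                    print(f"{env.now:5.0f} min: {name} charging")
                    yield env.timeout(CHARGE_MIN)     # recharge, then resume service

        env = simpy.Environment()
        chargers = simpy.Resource(env, capacity=CHARGERS)
        for i in range(FLEET_SIZE):
            env.process(bus(env, f"AB-{i}", chargers))
        env.run(until=8 * 60)                         # one 8-hour service day

    Charger queue lengths and bus idle times from such a model are the kind of performance-based input into a broader architecting framework that the abstract describes.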

    Industry 4.0 for SME

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics.

    Industry 4.0 has been growing within companies and impacting the economy and society, but it has proved a more complex challenge for some types of companies. Due to the costs and complexity associated with Industry 4.0 technologies, small and medium enterprises (SMEs) face difficulties in adopting them. This thesis proposes a model that gives guidance on, and simplifies, how to implement Industry 4.0 in SMEs from a low-cost perspective. The model is intended to serve as a blueprint for designing and implementing an Industry 4.0 project within a manufacturing SME. To create the model, a literature review of the different fields of Industry 4.0 was conducted to identify the technologies best suited to the manufacturing industry and the use cases where they would be applicable. After the model was built, expert interviews were conducted, and based on the feedback received, the model was tweaked, improved, and validated.
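    The abstract does not name concrete technologies; a common low-cost entry point for a manufacturing SME is publishing machine telemetry over MQTT from commodity hardware. A minimal sketch with the paho-mqtt library (1.x constructor style), where the broker address, topic, and sensor values are assumptions:

        # Hedged sketch: low-cost shop-floor data collection over MQTT.
        # Broker host, topic, and the simulated sensor are illustrative.
        import json, random, time
        import paho.mqtt.client as mqtt     # paho-mqtt 1.x style API

        client = mqtt.Client()
        client.connect("broker.local", 1883)   # e.g., Mosquitto on a Raspberry Pi
        client.loop_start()

        while True:
            reading = {"machine": "press-01",
                       "temperature_c": 20 + random.random() * 5,  # stand-in sensor
                       "ts": time.time()}
            client.publish("factory/press-01/telemetry", json.dumps(reading))
            time.sleep(5)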

    Big Data Analytics for Complex Systems

    The evolution of technology in all fields has led to the generation of vast amounts of data by modern systems. Using data to extract information, make predictions, and make decisions is the current trend in artificial intelligence. Advances in big data analytics tools have made accessing and storing data easier and faster than ever, and machine learning algorithms help to identify patterns in, and extract information from, data. Current tools and machines in health, computer technologies, and manufacturing can generate massive amounts of raw data about their products or samples. The author of this work proposes a modern integrative system that utilizes big data analytics, machine learning, supercomputer resources, and measurements from industrial and health machines to build a smart system that can mimic the human intelligence skills of observation, detection, prediction, and decision-making. The applications of the proposed smart systems are included as case studies to highlight the contributions of each system.

    The first contribution is the ability to utilize big data and deep learning technologies on production lines to diagnose incidents and take proper action. In the current era of digital industrial transformation, Industry 4.0 has been receiving researchers' attention because it can be used to automate production-line decisions. Reconfigurable manufacturing systems (RMS) have been widely used to reduce the setup cost of restructuring production lines. However, current RMS modules are not linked to the cloud for online decision-making; these modules must connect to an online server (supercomputer) with big data analytics and machine learning capabilities. Online here means that data is centralized in the cloud (on the supercomputer) and accessible in real time. In this study, deep neural networks are utilized to detect the decisive features of a product and to build a prediction model with which the iFactory makes the necessary decision for defective products. The Spark ecosystem is used to manage access to, processing of, and storage of the streaming big data. This contribution is implemented as a closed cycle which, to the best of our knowledge, is the first in the literature to apply big data analysis with deep learning to a real-time manufacturing application. The code shows a high accuracy of 97% for classifying normal versus defective items.

    The second contribution, in bioinformatics, is the ability to build supervised machine learning approaches based on patients' gene expression to predict the proper treatment for breast cancer. In the trial, to personalize treatment, the machine learns the genes that are active in the patient cohort with a five-year survival period. The initial condition is that each group must undergo only one specific treatment. After learning about each group (or class), the machine can personalize the treatment of a new patient by diagnosing the patient's gene expression. The proposed model will help in the diagnosis and treatment of the patient. Future work in this area involves building a protein-protein interaction network with the selected genes for each treatment, first to analyze the motifs of the genes and then to target them with the proper drug molecules. In the learning phase, several feature-selection techniques and standard supervised classifiers are used to build the prediction model.
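    The specific selectors and classifiers are not named in the abstract; a representative sketch of such a feature-selection-plus-classification pipeline in scikit-learn follows, with synthetic data standing in for the cohort's gene expressions (the estimator choices are assumptions; k=47 echoes the gene count reported below):

        # Hedged sketch: treatment-class prediction from gene expression with
        # univariate feature selection. Estimators and data are illustrative.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import Pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 2000))    # 120 patients x 2000 gene expressions
        y = rng.integers(0, 4, size=120)    # 4 hypothetical treatment classes

        model = Pipeline([
            ("select", SelectKBest(f_classif, k=47)),  # keep top discriminative genes
            ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ])
        print(cross_val_score(model, X, y, cv=5).mean())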
    In the dissertation's experiments, most of the nodes show high performance, with accuracy, sensitivity, specificity, and F-measure around 100%.

    The third contribution is the ability to build semi-supervised learning for breast cancer survival treatment, advancing the second contribution. By understanding the relations between the classes, the machine learning phase can be designed around the similarities between classes. In the proposed research, the Euclidean distance matrix among the survival treatment classes is used to build a hierarchical learning model. The distance information, learned through an unsupervised approach, helps the prediction model select classes that are far from each other, maximizing the between-class distance and yielding wider class groups. The performance measurements of this approach show a slight improvement over the second model; moreover, this model reduced the number of discriminative genes from 47 to 37. The model in the second contribution studies each class individually, while this model focuses on the relationships between the classes and uses this information in the learning phase. Hierarchical clustering is performed to draw the borders between groups of classes before building the classification models, and several distance measures are tested to identify the best linkages between classes. Most of the nodes show high performance, with accuracy, sensitivity, specificity, and F-measure ranging from 90% to 100%.

    All the case study models showed high performance in the prediction phase. These models can be replicated for different problems within different domains. The comprehensive models of the newer technologies are reconfigurable and modular; a new learning phase can be plugged in at either end of the existing one. Therefore, the output of the system can be an input for another learning system, and new features can be added to the input to be considered in the learning phase.
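    To make the third contribution's grouping step concrete: class-to-class Euclidean distances can drive a hierarchical grouping as sketched below with SciPy, where the synthetic data, centroid construction, and average linkage are illustrative assumptions rather than the dissertation's exact procedure:

        # Hedged sketch: hierarchically grouping treatment classes by the
        # Euclidean distances between their mean expression profiles.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 37))      # 120 patients x 37 selected genes
        y = rng.integers(0, 4, size=120)    # 4 hypothetical treatment classes

        # One centroid (mean expression profile) per class.
        centroids = np.vstack([X[y == c].mean(axis=0) for c in np.unique(y)])

        # Pairwise Euclidean distances between classes, then hierarchical linkage.
        Z = linkage(pdist(centroids, metric="euclidean"), method="average")

        # Cut the tree into two super-groups; a classifier is then trained per group.
        groups = fcluster(Z, t=2, criterion="maxclust")
        print(dict(zip(np.unique(y).tolist(), groups.tolist())))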

    Production planning process optimization

    Production automation systems are complex systems. They typically have many entities, such as robots and transport systems, that interact in complex ways to provide production automation functions like the assembly of products. The increasing complexity of these systems makes central control more and more difficult; systems with distributed control, such as multi-agent systems, are therefore an area of intense research. Moreover, changing requirements for production automation systems demand better system and model flexibility, e.g., easy-to-change workshop layouts or coordination of transportation elements. Meeting all these tasks makes the design of a production automation system a hard challenge for designers and system engineers. For safety-critical systems like production automation systems, verification is required at all steps of the development process. Testing aims at measuring the quality of executable system elements, especially the validity of a configuration and the correctness of calculated results. A particular challenge is the measurement of non-functional quality requirements, such as system performance, before the actual hardware system is built. Software simulation of the workshop system allows both performance measurement of a configuration and a faster, cheaper reaction to changing requirements; however, the validity of the simulation has to be assured. On top of this, software simulation of production automation systems is increasingly becoming a practical part of the production planning and optimization process.
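    The abstract argues for simulating distributed-control workshops before hardware is built; as a toy illustration of the multi-agent idea (not the thesis's actual simulation), the sketch below lets transport agents claim and deliver jobs without a central controller:

        # Hedged sketch: a toy multi-agent workshop. Each transport agent makes
        # purely local decisions; layout, timings, and job counts are invented.
        import random

        jobs = [{"id": i, "steps_left": random.randint(2, 5)} for i in range(8)]

        class TransportAgent:
            def __init__(self, name):
                self.name, self.job = name, None

            def step(self):
                if self.job is None and jobs:
                    self.job = jobs.pop(0)        # local decision: claim next job
                elif self.job is not None:
                    self.job["steps_left"] -= 1   # move the job one cell forward
                    if self.job["steps_left"] == 0:
                        print(f"{self.name} delivered job {self.job['id']}")
                        self.job = None

        agents = [TransportAgent(f"agent-{i}") for i in range(3)]
        tick = 0
        while jobs or any(a.job for a in agents):
            for a in agents:
                a.step()
            tick += 1
        print(f"all jobs delivered after {tick} ticks")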

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, though their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting the reusability of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.
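    The paper does not give a matching algorithm; one simple way to picture semantics-driven discovery is scoring a process fragment against a pattern catalogue through a shared concept vocabulary. In the sketch below, the vocabulary, catalogue, and Jaccard scoring are all invented for illustration:

        # Hedged sketch: matching a process fragment to catalogued patterns by
        # overlap of the domain concepts its activity labels map to.
        ONTOLOGY = {                      # activity label -> domain concept
            "receive order": "order_intake", "check stock": "availability_check",
            "reserve items": "reservation", "send invoice": "billing",
        }
        PATTERNS = {
            "order-to-cash": {"order_intake", "availability_check", "billing"},
            "procurement":   {"supplier_selection", "purchase", "billing"},
        }

        def discover(fragment):
            """Rank catalogue patterns by Jaccard overlap with the fragment."""
            concepts = {ONTOLOGY[a] for a in fragment if a in ONTOLOGY}
            scores = [(name, len(concepts & c) / len(concepts | c))
                      for name, c in PATTERNS.items()]
            return sorted(scores, key=lambda s: -s[1])

        print(discover(["receive order", "check stock", "send invoice"]))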