9 research outputs found

    Decision Making Support Systems with VIKOR Method For Supply Chain Management Problems

    In this work, a scientific and simple calculation method is provided for manufacturers' decision-makers to choose the most suitable supplier in the supply chain management problem. As a fundamental decision for manufacturers, the quality of supplier performance not only affects the downstream business but also determines the success of the whole supply chain. Choosing suitable suppliers in the supply chain is therefore a key strategic step that directly impacts the manufacturer's benefit. This paper deals with the supplier selection problem based on the VIKOR algorithm (Vlse Kriterijumska Optimizacija Kompromisno Resenje), a compromise multiple-criteria decision-making approach, combined with the entropy method, which assigns weights to the indicators. The VIKOR algorithm resolves conflicts between indicators by ranking the alternatives in a defined way and selecting the best scheme. A numerical example is presented to illustrate the effectiveness of the algorithm. In addition, a sensitivity analysis of the weighting vectors is performed to make the evaluation results more objective and accurate.
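
    The entropy-weighted VIKOR procedure described above can be illustrated with a short Python sketch; the decision matrix, criteria directions, and compromise parameter v below are invented placeholders, not data from the paper.

        # Sketch of entropy-weighted VIKOR supplier ranking (illustrative data only).
        import numpy as np

        def entropy_weights(X):
            """Objective criterion weights from the Shannon entropy of the decision matrix."""
            m = X.shape[0]
            P = X / X.sum(axis=0)                                 # column-wise normalisation
            E = -(1.0 / np.log(m)) * np.sum(P * np.log(P + 1e-12), axis=0)
            d = 1.0 - E                                           # degree of divergence
            return d / d.sum()

        def vikor(X, w, benefit, v=0.5):
            """Return group utility S, individual regret R, compromise index Q, and the ranking."""
            f_best = np.where(benefit, X.max(axis=0), X.min(axis=0))
            f_worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
            norm = (f_best - X) / (f_best - f_worst + 1e-12)
            S = (w * norm).sum(axis=1)
            R = (w * norm).max(axis=1)
            Q = v * (S - S.min()) / (S.max() - S.min() + 1e-12) \
                + (1 - v) * (R - R.min()) / (R.max() - R.min() + 1e-12)
            return S, R, Q, np.argsort(Q)                         # lower Q = better compromise

        # Four hypothetical suppliers scored on cost, quality and delivery time.
        X = np.array([[3.2, 0.85, 5.0],
                      [2.9, 0.80, 7.0],
                      [3.5, 0.92, 4.0],
                      [3.0, 0.88, 6.0]])
        benefit = np.array([False, True, False])                  # cost and delivery time: lower is better
        w = entropy_weights(X)
        S, R, Q, ranking = vikor(X, w, benefit)
        print("entropy weights:", w.round(3), "ranking (best first):", ranking)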

    Intelligent geospatial decision support system for Malaysian marine geospatial data infrastructure

    Marine resources for different uses and activities are characterised by multi-dimensional concepts, criteria, multiple participants, and multiple-use conflicts. In addition, the fuzzy nature of the marine environment has attendant features that increase its complexity, necessitating the quest for multiple alternative solutions and adequate evaluation, particularly within the context of Marine Geospatial Data Infrastructure (MGDI). In the MGDI literature, however, there has yet to be a concerted research effort and framework towards holistic consideration of decision-making prospects using multi-criteria evaluation (MCE) and intelligent algorithms for effective and informed decisions beyond the classical methods. This research therefore aims to develop and validate an intelligent decision support system for Malaysian MGDI. An integrated framework built on a mixed-method research design serves as the mode of inquiry. Initially, the quantitative methodology, comprising a Dynamic Analytic Network Process (DANP) model, a comprehensive evaluation index system (CEIS), MCE extensions, geographic information system spatial interaction modelling (SIM), and a hydrographic data acquisition sub-system, was implemented. Within this framework, a case study validation was employed for the qualitative aspect to predict the most viable geospatial extents within Malaysian waters for exploitation of deep sea marine fishery. Quantitative findings showed that the model has an elucidated CEIS with a DANP network model of 7 criteria, 28 sub-criteria, and 145 performance indicators, with 5 alternatives. In the MCE, the computed priority values for the Analytic Hierarchy Process (AHP) and Fuzzy AHP differ, although their rankings are the same. In addition, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) and Fuzzy TOPSIS results from the MCE extensions were similarly ranked for the Exclusive Economic Zone (EEZ) (200 nm) area, as predicted by the DANP model. Furthermore, re-arrangement of the priorities in the sensitivity analysis enhanced the final judgment for the criteria being evaluated and for the SIM. Qualitatively, the validation of the DANP model through this prediction yielded a computed value of 76.39 nm (141.47 km) as the most viable and economical deep sea fishery exploitation location in Malaysian waters and within the EEZ. In this study, MGDI decision and MgdiEureka are newly coined terms that depict decisions in the realms of MGDI initiatives and the developed applications. The framework would serve as an improved basis for marine geospatial planning for various stakeholders prior to decision making.
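
    One of the MCE extensions mentioned above, TOPSIS, can be sketched in a few lines of Python; the alternatives, weights and criterion directions below are hypothetical placeholders rather than the study's evaluation data.

        # Minimal TOPSIS sketch (illustrative data only).
        import numpy as np

        def topsis(X, w, benefit):
            """Rank alternatives by relative closeness to the ideal solution (higher = better)."""
            R = X / np.linalg.norm(X, axis=0)                 # vector normalisation
            V = R * w                                         # weighted normalised matrix
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - anti, axis=1)
            closeness = d_neg / (d_pos + d_neg)
            return closeness, np.argsort(-closeness)

        # Five hypothetical zone alternatives scored on three criteria.
        X = np.array([[0.6, 120, 0.30],
                      [0.8,  90, 0.45],
                      [0.7, 150, 0.35],
                      [0.5, 110, 0.50],
                      [0.9, 100, 0.40]])
        w = np.array([0.5, 0.3, 0.2])                         # assumed criterion weights
        benefit = np.array([True, False, True])               # second criterion: lower is better
        closeness, ranking = topsis(X, w, benefit)
        print("closeness:", closeness.round(3), "ranking (best first):", ranking)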

    Distributed integrated product teams

    Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2000. Includes bibliographical references (p. 134-135). Two major organizational tools, Integrated Process and Product Development (IPPD) and co-location, have been key initiatives in many corporate knowledge management and information flow strategies. The benefits of IPPD and co-location are well documented, and central to the success of these tools is the increased information flow and knowledge transfer across organizational boundaries. The fundamental knowledge management philosophy of IPPD is person-to-person tacit knowledge sharing and capture through the establishment of multi-disciplined Integrated Product Teams (IPTs). Co-location of the integrated product team members has facilitated frequent informal face-to-face information flow outside of the structured meetings typical of IPPD processes. In today's global environment, the development and manufacture of large complex systems can involve hundreds, if not thousands, of geographically dispersed engineers, often from different companies, working on IPTs. In such an environment, the implementation of IPPD is challenging, and co-location is not feasible across the entire enterprise. The development of a comprehensive knowledge capture and information flow strategy, aligned with the organizational architecture and processes and involving proper utilization of available information technologies, is critical in facilitating information flow and knowledge transfer between dispersed IPTs. In this thesis we provide a case study of the knowledge capture and information flow issues that have arisen with the recent transition to the Module Center organization at Pratt & Whitney. We identify several critical enablers for efficient information flow and knowledge capture in a dispersed IPT environment by analyzing qualitative and quantitative survey data obtained at Pratt & Whitney, existing research in this area, and our own observations as participants in this environment. From this analysis, we identify key information flow and knowledge capture issues and provide recommendations for potential improvement. The Design Structure Matrix (DSM) methodology is used to understand the complex, tightly coupled information flow between the IPTs that exist at Pratt & Whitney, building upon previous Pratt & Whitney DSM work. The proposed DSM is not only a valuable tool for identifying the information flow paths that exist between part-level and system-level attributes, but can also be utilized as an information technology tool to capture the content or knowledge contained in the identified information flow paths. By Stephen V. Glynn and Thomas G. Pelland. S.M.
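
    A Design Structure Matrix of the kind applied in the thesis can be represented as a square binary matrix; the sketch below uses invented team names and dependencies purely to show how feedback marks (information flowing back to an earlier-sequenced team) are read off under the rows-as-inputs convention.

        # Toy Design Structure Matrix (DSM); teams and dependencies are invented examples.
        import numpy as np

        teams = ["Fan", "Compressor", "Combustor", "Turbine"]   # hypothetical IPTs in sequence
        # dsm[i, j] = 1 means team i needs information produced by team j (rows as inputs).
        dsm = np.array([[0, 1, 0, 0],
                        [0, 0, 0, 1],
                        [0, 1, 0, 0],
                        [1, 0, 1, 0]])

        # Marks above the diagonal are feedback: the providing team is sequenced later,
        # so the dependency forces iteration/rework between the coupled IPTs.
        feedback = [(teams[i], teams[j])
                    for i in range(len(teams))
                    for j in range(i + 1, len(teams)) if dsm[i, j]]
        print("feedback (iterative) dependencies:", feedback)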

    The Role of Computers in Research and Development at Langley Research Center

    This document is a compilation of presentations given at a workshop on the role of computers in research and development at the Langley Research Center. The objective of the workshop was to inform the Langley Research Center community of the current software systems and software practices in use at Langley. The workshop was organized in 10 sessions: Software Engineering; Software Engineering Standards, Methods, and CASE Tools; Solutions of Equations; Automatic Differentiation; Mosaic and the World Wide Web; Graphics and Image Processing; System Design Integration; CAE Tools; Languages; and Advanced Topics.

    Using systems engineering for the development of decision making support systems (DMSS) : an analysis of system development methodologies (SDM)

    Decision Making Support Systems (DMSS) can mitigate the risks involved in highly uncertain processes where novelty is high, such as NPD resource management. However, such systems manipulate complex organisational information and must be embedded within the business in which they operate. There is a risk of poor acceptance in the business if the DMSS does not take into account a number of business-related considerations. Utilising a systems approach, the literature was analysed and a set of considerations pertinent to the development of DMSS was elicited. Through the assessment of a number of System Development Methodologies (SDM), it was found that no single SDM took into account all of the considerations identified. There is therefore a clear gap in current research and a real need for a methodology that addresses the identified considerations.

    Air Traffic Management Abbreviation Compendium

    As in all fields of work, an unmanageable number of abbreviations are used today in aviation for terms, definitions, commands, standards and technical descriptions. This applies in general to the areas of aeronautical communication, navigation and surveillance, cockpit and air traffic control working positions, passenger and cargo transport, and all other areas of flight planning, organization and guidance. In addition, many abbreviations are used more than once or have different meanings in different languages. In order to provide an overview of the most common abbreviations used in air traffic management, organizations such as EUROCONTROL, FAA, DWD and DLR have published lists of abbreviations in the past, which are also enclosed in this document. In addition, abbreviations from some larger international aviation-related projects have been included to provide users with a directory that is as complete as possible. The second edition of the Air Traffic Management Abbreviation Compendium therefore now includes around 16,500 abbreviations and acronyms from the field of aviation.

    Fleas of fleas: The potential role of bacteriophages in Salmonella diversity and pathogenicity.

    Non-typhoidal salmonellosis is an important foodborne and zoonotic infection that causes significant global public health concern. Diverse serovars are multidrug-resistant and encode several virulence indicators; however, little is known about the role prophages play in driving these characteristics. Here, we extracted prophages from 75 Salmonella genomes, representing the 15 most important serovars in the United Kingdom. We analysed the genomes of the intact prophages for the presence of virulence factors associated with the diversity, evolution and pathogenicity of Salmonella, and to establish their genomic relationships. We identified 615 prophage elements in the Salmonella genomes, of which 195 are intact, 332 incomplete and 88 questionable. Average prophage carriage was higher in S. Heidelberg, S. Inverness and S. Newport (10.2-11.6 prophages/strain) than in S. Infantis, S. Stanley, S. Typhimurium and S. Virchow (8.2-9 prophages/strain), S. Agona, S. Braenderup, S. Bovismorbificans, S. Choleraesuis, S. Dublin and S. Java (6-7.8 prophages/strain), and S. Javiana and S. Enteritidis (5.8 prophages/strain). Cumulatively, 2760 virulence factors were detected in the intact prophages, with cellular functions linked to effector delivery/secretion systems (73%), adherence (22%), magnesium uptake (2.7%), resistance to antimicrobial peptides (0.94%), stress/survival (0.4%), exotoxins (0.32%) and antivirulence (0.18%). Close and distant clusters formed among the prophage genomes, suggesting different lineages and associations with bacteriophages of other Enterobacteriaceae. We show that a diverse repertoire of Salmonella prophages is associated with numerous virulence factors and may contribute to the diversity, pathogenicity and success of specific serovars.
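
    The per-serovar carriage figures quoted above amount to averaging intact-prophage counts across the strains of each serovar; a minimal sketch of that summary step is shown below, with invented counts standing in for the study's data.

        # Average intact-prophage carriage per serovar (invented example records).
        from collections import defaultdict
        from statistics import mean

        # (serovar, number of intact prophages in one genome) - hypothetical values
        records = [("S. Heidelberg", 11), ("S. Heidelberg", 10),
                   ("S. Typhimurium", 9), ("S. Typhimurium", 8),
                   ("S. Enteritidis", 6), ("S. Enteritidis", 5)]

        by_serovar = defaultdict(list)
        for serovar, n_intact in records:
            by_serovar[serovar].append(n_intact)

        for serovar, counts in by_serovar.items():
            print(f"{serovar}: {mean(counts):.1f} prophages/strain on average")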

    Efficient feature reduction and classification methods

    The sheer volume of data today and its expected growth over the next years are some of the key challenges in data mining and knowledge discovery applications. Besides the huge number of data samples that are collected and processed, the high-dimensional nature of data arising in many applications creates the need to develop effective and efficient techniques that are able to deal with this massive amount of data. In addition to the significant increase in the demand for computational resources, such large datasets might also affect the quality of several data mining applications (especially if the number of features is very high compared to the number of samples). As the dimensionality of data increases, many types of data analysis and classification problems become significantly harder. This can lead to problems for both supervised and unsupervised learning. Dimensionality reduction and feature (subset) selection methods are two types of techniques for reducing the attribute space. While in feature selection a subset of the original attributes is extracted, dimensionality reduction in general produces linear combinations of the original attribute set. In both approaches, the goal is to select a low-dimensional subset of the attribute space that covers most of the information of the original data. In recent years, feature selection and dimensionality reduction techniques have become a real prerequisite for data mining applications. There are several open questions in this research field, and due to the steadily increasing number of candidate features in various application areas (e.g., email filtering or drug classification/molecular modeling), new questions arise. In this thesis, we focus on some open research questions in this context, such as the relationship between feature reduction techniques and the resulting classification accuracy, and the relationship between the variability captured in the linear combinations of dimensionality reduction techniques (e.g., PCA, SVD) and the accuracy of machine learning algorithms operating on them.
Another important goal is to better understand new techniques for dimensionality reduction, such as nonnegative matrix factorization (NMF), which can be applied for finding parts-based, linear representations of nonnegative data. This "sum-of-parts" representation is especially useful if the interpretability of the original data is to be retained. Moreover, performance aspects of feature reduction algorithms are investigated. As data grow, implementations of feature selection and dimensionality reduction techniques for high-performance parallel and distributed computing environments become more and more important. In this thesis, we focus on two types of open research questions: methodological advances without any specific application context, and application-driven advances for a specific application context. In summary, the new methodological contributions are the following: The utilization of nonnegative matrix factorization in the context of classification methods is investigated. In particular, it is of interest how the improved interpretability of NMF factors due to the non-negativity constraints (which is of central importance in various problem settings) can be exploited. Motivated by this problem context, two new fast initialization techniques for NMF based on feature selection are introduced. It is shown how approximation accuracy can be increased and/or how computational effort can be reduced compared to standard randomized seeding of the NMF and to state-of-the-art initialization strategies suggested earlier. For example, for a given number of iterations and a required approximation error, a speedup of 3.6 over standard initialization and a speedup of 3.4 over state-of-the-art initialization strategies could be achieved. Beyond that, novel classification methods based on NMF are proposed and investigated. We can show that they are not only competitive in terms of classification accuracy with state-of-the-art classifiers, but also provide important advantages in terms of computational effort (especially for low-rank approximations). Moreover, parallelization and distributed execution of NMF are investigated. Several algorithmic variants for efficiently computing NMF on multi-core systems are studied and compared to each other. In particular, several approaches for exploiting task and/or data-parallelism in NMF are studied. We show that for some scenarios new algorithmic variants clearly outperform existing implementations. Last but not least, a computationally very efficient adaptation of the implementation of the ALS algorithm in Matlab 2009a is investigated. This variant reduces the runtime significantly (in some settings by a factor of 8) and also offers several possibilities for concurrent execution. In addition to purely methodological questions, we also address questions arising in the adaptation of feature selection and classification methods to two specific application problems: email classification and in silico screening for drug discovery. Different research challenges arise in the contexts of these different application areas, such as the dynamic nature of data for email classification problems, or the imbalance in the number of available samples of different classes for drug discovery problems. Application-driven advances of this thesis comprise the adaptation and application of latent semantic indexing (LSI) to the task of email filtering.
Experimental results show that LSI achieves significantly better classification results than the widespread de facto standard method for this particular application context. In the context of drug discovery problems, several groups of well-discriminating descriptors could be identified by utilizing the "sum-of-parts" representation of NMF. The number of important descriptors could be further increased by applying sparseness constraints to the NMF factors.
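
    The influence of NMF seeding on approximation error, which the thesis studies with its own feature-selection-based initializations, can be illustrated with off-the-shelf tools; the sketch below contrasts random seeding with the generic NNDSVD initialization available in scikit-learn (not the initializations proposed in the thesis) on synthetic nonnegative data.

        # Compare NMF approximation error for two built-in scikit-learn initializations
        # on synthetic data; this is a generic illustration, not the thesis's methods.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        X = np.abs(rng.normal(size=(200, 50)))        # synthetic nonnegative data matrix

        for init in ("random", "nndsvd"):
            # A modest iteration budget keeps the effect of the seeding visible.
            model = NMF(n_components=10, init=init, max_iter=100, random_state=0)
            W = model.fit_transform(X)                # X is approximated by W @ model.components_
            print(f"init={init:7s} reconstruction error: {model.reconstruction_err_:.3f}")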