3,547 research outputs found

    1991 Anaquest sales force automation project

    Get PDF
    The purpose of this paper was to evaluate the feasibility of undertaking an upgrade and expansion of the Anaquest Professional Services Program, designated the 1991 Sales Force Automation (SFA) Project. The Anaquest Sales Representative has been using a laptop computer as a selling aid to help promote the effectiveness of Anaquest's anesthetic products since 1985. Considering the 69% failure rate of the current computers and the technology changes over the past six years, now may be the most opportune time to upgrade the field computers.

    Automating the Certificate Verification Process

    Get PDF
    Automation has grown rapidly over the past decade and has transformed almost every industry, as it allows processes to become more reliable and efficient. One of the main objectives of automation is to reduce or eliminate time-consuming, repetitive tasks so that time can be allocated to more important work. In that spirit, this thesis explored how the manual certificate verification process at a case company specializing in calibration equipment manufacturing and services could be improved with modern tools: machine learning to verify that the measurement results in the certificates were correct, and simple rule-based checks for the other parts of the certificate where faults usually occurred, together forming an assistant to aid technicians during verification. The thesis was structured around a simplified version of the CRISP-DM framework, consisting of four phases. First, a focus group interview with the chief of the laboratory and the technicians was held to map out how the current process worked, what the data in the certificates meant, where faults usually occurred, and what kind of solution was desired; these answers served as the requirements for a potential solution. Second, methods to prepare the certificate data were developed, both to produce data sufficient for training a model and to extract the data from the certificates that had to be verified. The third phase consisted of developing the models: seven different models were compared and evaluated, of which four were selected for further evaluation. In the last phase, the performance of the selected models was evaluated on unseen input data. The results indicated that the selected machine learning models all performed exceptionally well and made accurate predictions; the Extra Trees algorithm in particular showed promising results on the two datasets used in the thesis. Based on these results, a solution was proposed that includes a small modification to the current certificate printing tool and a web service that would handle the certificate verification and return the verified certificate to the technician for further analysis. Because defining the requirements and experimenting with the machine learning models and data extraction methods took more time than expected, the solution could only be proposed, but a small proof of concept was developed to evaluate its feasibility, from which four managerial implications were identified: establishing consistency in the process, improved efficiency, cost reduction, and continuous improvement. Considering the findings and conclusions, the project can be considered a success, as the research objectives were met and the research questions answered, but more development and testing would be required before the proposed solution could be deployed to a production environment.
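    To make the modelling step concrete, here is a minimal sketch of the kind of classifier the thesis compares: scikit-learn's Extra Trees algorithm trained to flag whether a certificate's measurement rows fall within tolerance. The features, tolerance, and data below are invented stand-ins; the thesis does not publish its dataset or feature set.

```python
# Hypothetical sketch: flag certificate measurement rows as pass/fail with
# an Extra Trees classifier (one of the models the thesis compares).
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: reference value, measured value, and their deviation.
# In the thesis these would come from parsed calibration certificates.
reference = rng.uniform(0.0, 100.0, size=2000)
measured = reference + rng.normal(0.0, 0.8, size=2000)
deviation = measured - reference
X = np.column_stack([reference, measured, deviation])
y = (np.abs(deviation) < 1.0).astype(int)  # 1 = within (assumed) tolerance

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

    In the proposed setup, a model along these lines would score the measurement table extracted from each printed certificate before the web service returns it to the technician for review.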

    Ground Robotic Hand Applications for the Space Program study (GRASP)

    Get PDF
    This document reports on a NASA-STDP effort to address research interests of the NASA Kennedy Space Center (KSC) through a study entitled Ground Robotic-Hand Applications for the Space Program (GRASP). The primary objective of the GRASP study was to identify beneficial applications of specialized end-effectors and robotic hand devices for automating ground operations performed at the Kennedy Space Center. Thus, operations for expendable vehicles, the Space Shuttle and its components, and all payloads were included in the study. Typical benefits of automating operations, or of augmenting human operators performing physical tasks, include reduced costs, enhanced safety and reliability, and reduced processing turnaround time.

    Evaluation and improvement of the workflow of digital imaging of fine art reproduction in museums

    Get PDF
    Fine arts span a broad spectrum of formats, i.e., painting, calligraphy, photography, architecture, and so forth. Fine art reproduction creates surrogates of the original artwork that faithfully deliver the aesthetics and feeling of the original. Traditionally, reproductions of fine art are made in the form of catalogs, postcards, or books by museums, libraries, archives, and so on (hereafter called museums for simplicity). With the widespread adoption of digital archiving in museums, more and more artwork is reproduced to be viewed on a display. For example, artwork collections are made available through museum websites and the Google Art Project for art lovers to view on their own displays. In this thesis, we study the fine art reproduction of paintings as soft copy viewed on displays by answering four questions: (1) what is the impact of the viewing condition and the original on image quality evaluation? (2) can image quality be improved by avoiding visual editing in current workflows of fine art reproduction? (3) can lightweight spectral imaging be used for fine art reproduction? and (4) how do spectral reproductions perform compared with reproductions from current workflows? We started by evaluating the perceived image quality of fine art reproductions created by representative museums in the United States under controlled and uncontrolled environments, with and without the presence of the original artwork. The experimental results suggest that image quality is highly correlated with the color accuracy of the reproduction only when the original is present and the reproduction is evaluated on a characterized display. We then examined the workflows used to create these reproductions and found that current workflows rely heavily on visual editing and retouching (global and local color adjustments on the digital reproduction) to improve the color accuracy of the reproduction. Visual editing and retouching can be both time-consuming and subjective (depending on the experts' own experience and understanding of the artwork), lowering the efficiency of artwork digitization considerably. We therefore propose to improve the workflow of fine art reproduction by (1) automating the process of visual editing and retouching in current RGB-based workflows and (2) recovering the spectral reflectance of the painting with off-the-shelf equipment under commonly available lighting conditions. Finally, we compared the perceived image quality of reproductions created by current three-channel (RGB) workflows with those produced by spectral imaging and by an exemplar-based method.
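    As a sketch of what lightweight spectral recovery can look like, the snippet below estimates per-wavelength reflectance from three-channel camera responses with a linear least-squares (pseudo-inverse) mapping learned from a color chart. The dimensions, sensitivities, and data are synthetic stand-ins, not the workflow actually evaluated in the thesis.

```python
# Hypothetical sketch: recover spectral reflectance (31 bands, 400-700 nm)
# from 3-channel RGB responses via a pseudo-inverse mapping learned from
# a training chart, a common lightweight alternative to spectral cameras.
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_patches = 31, 120

# Stand-in training data: chart reflectances and the camera's RGB
# responses to them (in practice both would be measured).
R_train = rng.uniform(0.0, 1.0, size=(n_bands, n_patches))
sensitivities = rng.uniform(0.0, 1.0, size=(3, n_bands))  # RGB curves
C_train = sensitivities @ R_train                         # 3 x n_patches

# Learn a 31x3 matrix M minimizing ||R - M C||_F via the pseudo-inverse.
M = R_train @ np.linalg.pinv(C_train)

# Estimate the reflectance of a new patch from its RGB response.
r_true = rng.uniform(0.0, 1.0, size=(n_bands, 1))
r_est = M @ (sensitivities @ r_true)
print("RMS error:", float(np.sqrt(np.mean((r_est - r_true) ** 2))))
```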

    Explaining and Refining Decision-Theoretic Choices

    Get PDF
    As the need to make complex choices among competing alternative actions is ubiquitous, the reasoning machinery of many intelligent systems will include an explicit model for making choices. Decision analysis is particularly useful for modelling such choices, and its potential use in intelligent systems motivates the construction of facilities for automatically explaining decision-theoretic choices and for helping users incrementally refine the knowledge underlying them. The proposed thesis addresses the problem of providing such facilities. Specifically, we propose the construction of a domain-independent facility called UTIL for explaining and refining a restricted but widely applicable decision-theoretic model: the additive multi-attribute value model. In this proposal we motivate the task, address the related issues, and present preliminary solutions in the context of examples from the domain of intelligent process control.
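    For reference, the additive multi-attribute value model that UTIL targets scores each alternative as V(x) = sum_i w_i * v_i(x_i), a weighted sum of single-attribute value functions. The sketch below shows this computation; the attributes, weights, and value functions are invented for illustration.

```python
# Sketch of the additive multi-attribute value model:
# V(x) = sum_i w_i * v_i(x_i), with weights summing to 1 and each
# v_i mapping an attribute level to a value in [0, 1].
from typing import Callable, Dict

def additive_value(alternative: Dict[str, float],
                   weights: Dict[str, float],
                   value_fns: Dict[str, Callable[[float], float]]) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(w * value_fns[attr](alternative[attr])
               for attr, w in weights.items())

# Invented example: rating a process-control action by cost and safety.
weights = {"cost": 0.4, "safety": 0.6}
value_fns = {
    "cost": lambda c: 1.0 - c / 100.0,  # cheaper is better (0-100 scale)
    "safety": lambda s: s / 10.0,       # higher rating is better (0-10)
}
action = {"cost": 30.0, "safety": 8.0}
print(additive_value(action, weights, value_fns))  # 0.4*0.7 + 0.6*0.8 = 0.76
```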

    Definition of Descriptive and Diagnostic Measurements for Model Fragment Retrieval

    Full text link
    Nowadays, software exists in almost everything. Companies often develop and maintain collections of custom-tailored software systems that share some common features but also support customer-specific ones. As the number of features and the number of product variants grows, software maintenance becomes more and more complex. To keep pace with this situation, the Model-Based Software Engineering community is addressing a key activity: Model Fragment Location (MFL), which aims at identifying the model elements that are relevant to a requirement, feature, or bug. Many MFL approaches have been introduced in the last few years to identify the model elements that correspond to a specific functionality. However, there is a lack of detail in how the measurements of the search space (the models) and of the solution to be found (the model fragment) are reported. The goal of this thesis is to provide the MFL research community with five measurements (size, volume, density, multiplicity, and dispersion) for reporting model fragment location problems. Using these novel measurements supports researchers in creating new MFL approaches and in improving existing ones. Through two real, industrial case studies, this thesis demonstrates the importance of these measurements for comparing the results of different approaches precisely. The results of this work have been written up and published in forums, conferences, and journals specialized in the topics and context of the research. This thesis is presented as a compendium of articles in accordance with the regulations of the Universitat Politècnica de València: this document introduces the topics, context, and objectives of the research, presents the academic publications that resulted from the work, and then discusses the outcomes of the investigation.
    Ballarin Naya, M. (2021). Definition of Descriptive and Diagnostic Measurements for Model Fragment Retrieval [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/171604
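    To illustrate what the five measurements might capture, the sketch below computes one plausible reading of size, volume, density, multiplicity, and dispersion for a model fragment treated as a subset of a model graph. These definitions are illustrative guesses, not the formal ones from the thesis.

```python
# Hypothetical sketch: compute size, volume, density, multiplicity, and
# dispersion for a model fragment, treating the model as an undirected
# graph and the fragment as a subset of its nodes.
import networkx as nx

def fragment_measurements(model: nx.Graph, fragment: set) -> dict:
    sub = model.subgraph(fragment)
    components = nx.number_connected_components(sub) if fragment else 0
    return {
        "size": len(fragment),              # elements in the fragment
        "volume": model.number_of_nodes(),  # elements in the whole model
        "density": len(fragment) / model.number_of_nodes(),
        "multiplicity": components,         # disjoint pieces of the fragment
        "dispersion": components / len(fragment) if fragment else 0.0,
    }

model = nx.path_graph(10)    # stand-in model with 10 elements
fragment = {0, 1, 5, 6, 7}   # stand-in fragment in two disjoint pieces
print(fragment_measurements(model, fragment))
```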