3,547 research outputs found
1991 Anaquest sales force automation project
The purpose of this paper was to evaluate the feasibility of upgrading and expanding the Anaquest Professional Services Program. This study has been designated the 1991 Sales Force Automation (SFA) Project. Anaquest sales representatives have used laptop computers as a selling aid to promote the effectiveness of Anaquest's anesthetic products since 1985. Considering the 69% failure rate of the current computers and the technology changes over the past six years, now may be the most opportune time to upgrade the field computers.
Automating the Certificate Verification Process
Automation has grown rapidly over the past decade and has transformed almost every industry by making processes more reliable and efficient. One of its main objectives is to reduce or eliminate time-consuming, tedious, and repetitive tasks so that time can be allocated to more important work. In that spirit, this thesis explored how the manual certificate verification process at a case company specializing in calibration equipment manufacturing and services could be improved with modern tools. Machine learning was used to verify that the measurement results in the certificates were correct, while simple rule-based approaches were applied to the other parts of the certificate where faults usually occurred, together forming an assistant to aid technicians during the verification process.
To structure the thesis, a simplified version of the CRISP-DM framework consisting of four phases was used. First, a focus group interview with the chief of the laboratory and the technicians was held to map out how the current process worked, what the data in the certificates implied, where faults usually occurred, and what kind of solution was desired. These answers served as the requirements during the development of a potential solution. Second, methods to prepare the certificate data were developed, both to produce data sufficient to train a model and to extract the data that had to be verified from the certificates. The third phase consisted of developing the models: seven different models were compared and evaluated, of which four were selected for further evaluation. In the last phase, the performance of the selected models was evaluated using unseen data as the input and the models' predictions as the output.
The results indicated that all of the selected machine learning models performed exceptionally well and made accurate predictions; the Extra Trees algorithm in particular showed promising results on the two datasets used during the thesis. Based on these results, a solution was proposed that includes a small modification to the current certificate printing tool as well as a web service that would handle the certificate verification and return the verified certificate to the technician for further analysis. Because defining the requirements and experimenting with the machine learning models and data extraction methods took longer than expected, the solution could only be proposed, but a small proof of concept was developed to evaluate its feasibility, from which four managerial implications were identified: establishing consistency in the process, improved efficiency, cost reduction, and continuous improvement. Considering the findings and the conclusions made, the project can be considered a success, as the research objectives were met and the research questions answered, but more development and testing would still be required before the proposed solution could be deployed to the production environment.
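The model-comparison and evaluation phases described above can be sketched with scikit-learn's `ExtraTreesClassifier`. This is a minimal illustration only: the certificate data is not public, so a synthetic dataset stands in for the extracted certificate features, and all parameter values are assumptions.

```python
# Hedged sketch: training an Extra Trees model and evaluating it on
# held-out ("unseen") data, in the spirit of the evaluation phase above.
# The synthetic dataset is a stand-in for the non-public certificate data.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for features extracted from certificates
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=6, random_state=42)

# Hold out a test set, mirroring the final evaluation on unseen data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = ExtraTreesClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

acc = accuracy_score(y_test, model.predict(X_test))
print(f"Extra Trees accuracy on unseen data: {acc:.3f}")
```

The same train/evaluate loop would be repeated for each of the seven candidate models to produce the comparison the thesis describes.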
Ground Robotic Hand Applications for the Space Program study (GRASP)
This document reports on a NASA-STDP effort to address research interests of the NASA Kennedy Space Center (KSC) through a study entitled Ground Robotic-Hand Applications for the Space Program (GRASP). The primary objective of the GRASP study was to identify beneficial applications of specialized end-effectors and robotic hand devices for automating ground operations performed at the Kennedy Space Center. Thus, operations for expendable vehicles, the Space Shuttle and its components, and all payloads were included in the study. Typical benefits of automating operations, or of augmenting human operators performing physical tasks, include reduced costs, enhanced safety and reliability, and reduced processing turnaround time.
Evaluation and improvement of the workflow of digital imaging of fine art reproduction in museums
Fine arts refer to a broad spectrum of art formats, i.e., painting, calligraphy, photography, architecture, and so forth. Fine art reproduction creates surrogates of the original artwork that faithfully deliver the aesthetics and feel of the original. Traditionally, reproductions of fine art are made in the form of catalogs, postcards, or books by museums, libraries, archives, and so on (hereafter called museums for simplicity). With the widespread adoption of digital archiving in museums, more and more artwork is reproduced to be viewed on a display. For example, artwork collections are made available through museum websites and the Google Art Project for art lovers to view on their own displays. In this thesis, we study the fine art reproduction of paintings in the form of soft copies viewed on displays by answering four questions: (1) what is the impact of the viewing condition and the original on image quality evaluation? (2) can image quality be improved by avoiding visual editing in current workflows of fine art reproduction? (3) can lightweight spectral imaging be used for fine art reproduction? and (4) how do spectral reproductions perform compared with reproductions from current workflows? We started by evaluating the perceived image quality of fine art reproductions created by representative museums in the United States under controlled and uncontrolled environments, with and without the presence of the original artwork. The experimental results suggest that image quality is highly correlated with the color accuracy of the reproduction only when the original is present and the reproduction is evaluated on a characterized display. We then examined the workflows used to create these reproductions and found that current workflows rely heavily on visual editing and retouching (global and local color adjustments on the digital reproduction) to improve the color accuracy of the reproduction.
Visual editing and retouching can be both time-consuming and subjective in nature (depending on the experts' own experience and understanding of the artwork), lowering the efficiency of artwork digitization considerably. We therefore propose to improve the workflow of fine art reproduction by (1) automating the process of visual editing and retouching in current workflows based on RGB acquisition systems and by (2) recovering the spectral reflectance of the painting with off-the-shelf equipment under commonly available lighting conditions. Finally, we compared the perceived image quality of reproductions created by current three-channel (RGB) workflows with those created by spectral imaging and those based on an exemplar-based method.
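One common lightweight route to the spectral recovery mentioned above is a linear (pseudo-inverse) regression from camera RGB to reflectance, learned from training samples. The sketch below simulates that idea end to end; all data is synthetic, the spectra are generated from a three-dimensional basis (natural reflectances are approximately low-dimensional, which is what makes three-channel recovery workable at all), and this is not the thesis's actual method or equipment.

```python
# Hedged sketch of linear spectral-reflectance recovery from RGB via
# least-squares (pseudo-inverse) regression. All data is simulated;
# a real workflow would use measured spectra and a characterized camera.
import numpy as np

rng = np.random.default_rng(0)
n_bands = 31  # e.g. 400-700 nm at 10 nm steps

# Simulated training reflectances lying in a low-dimensional subspace
basis = rng.random((3, n_bands))
train_spectra = rng.random((200, 3)) @ basis        # (200, 31)

# Simulated camera: RGB responses are linear in the spectra
cam = rng.random((3, n_bands))
train_rgb = train_spectra @ cam.T                   # (200, 3)

# Learn the 3 -> 31 linear map by least squares
M, *_ = np.linalg.lstsq(train_rgb, train_spectra, rcond=None)

# Recover the spectrum of a new sample from its RGB triplet
test_spectrum = rng.random(3) @ basis
recovered = (test_spectrum @ cam.T) @ M
err = np.abs(recovered - test_spectrum).max()
print(f"max reconstruction error: {err:.2e}")
```

Because the simulated spectra are exactly rank three, the recovery here is essentially perfect; on real paintings the residual error is what the perceptual studies above would have to judge.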
Explaining and Refining Decision-Theoretic Choices
As the need to make complex choices among competing alternative actions is ubiquitous, the reasoning machinery of many intelligent systems will include an explicit model for making choices. Decision analysis is particularly useful for modelling such choices, and its potential use in intelligent systems motivates the construction of facilities for automatically explaining decision-theoretic choices and for helping users incrementally refine the knowledge underlying them. The proposed thesis addresses the problem of providing such facilities. Specifically, we propose the construction of a domain-independent facility called UTIL for explaining and refining a restricted but widely applicable decision-theoretic model: the additive multi-attribute value model. In this proposal we motivate the task, address the related issues, and present preliminary solutions in the context of examples from the domain of intelligent process control.
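The additive multi-attribute value model named above scores each alternative as a weighted sum of single-attribute value functions, V(a) = Σᵢ wᵢ·vᵢ(xᵢ(a)), with the weights summing to one. The sketch below illustrates that structure; the attributes, weights, value functions, and alternatives are all hypothetical, chosen only to show the arithmetic, and have no connection to the UTIL facility itself.

```python
# Hedged sketch of an additive multi-attribute value model:
# V(a) = sum_i w_i * v_i(x_i(a)), weights summing to 1.
# Attributes, weights, and value functions are hypothetical.

# Single-attribute value functions, each mapping a raw level to [0, 1]
value_fns = {
    "cost":   lambda x: 1 - x / 100.0,   # lower cost is better
    "safety": lambda x: x / 10.0,        # higher rating is better
}
weights = {"cost": 0.4, "safety": 0.6}   # must sum to 1

def additive_value(alternative):
    """Overall value as the weighted sum of per-attribute values."""
    return sum(weights[a] * value_fns[a](alternative[a])
               for a in weights)

alternatives = {
    "plan_A": {"cost": 40, "safety": 8},
    "plan_B": {"cost": 20, "safety": 5},
}
best = max(alternatives, key=lambda k: additive_value(alternatives[k]))
print(best, {k: round(additive_value(v), 3)
             for k, v in alternatives.items()})
```

An explanation facility in this style can decompose a choice attribute by attribute: each term wᵢ·vᵢ(xᵢ) shows how much a single attribute contributed to one alternative beating another, which is what makes the additive form attractive for automatic explanation.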
Definition of Descriptive and Diagnostic Measurements for Model Fragment Retrieval
Thesis by compendium. Nowadays, software exists in almost everything. Companies often develop and maintain collections of custom-tailored software systems that share some common features but also support customer-specific ones. As the number of features and the number of product variants grows, software maintenance becomes more and more complex. To keep pace with this situation, the Model-Based Software Engineering community is addressing a key activity: Model Fragment Location (MFL). MFL aims at identifying model elements that are relevant to a requirement, a feature, or a bug. Many MFL approaches have been introduced in the last few years to address the identification of the model elements that correspond to a specific functionality. However, there is a lack of detail when the measurements of the search space (the models) and of the solution to be found (the model fragment) are reported. The goal of this thesis is to provide the MFL research community with five measurements (size, volume, density, multiplicity, and dispersion) for reporting location problems. The use of these novel measurements supports researchers in creating new MFL approaches and in improving existing ones. Using two real, industrial case studies, this thesis emphasizes the importance of these measurements for comparing the results of different approaches precisely.
The results of the research have been written up and published in forums, conferences, and journals specialized in the topics and context of the research.
This thesis is presented as a compendium of articles according to the regulations of the Universitat Politècnica de València. This thesis document introduces the topics, context, and objectives of the research, presents the academic publications that have been published as a result of the work, and then discusses the outcomes of the investigation.
Ballarin Naya, M. (2021). Definition of Descriptive and Diagnostic Measurements for Model Fragment Retrieval [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/171604
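To make the five measurements concrete, the sketch below shows one possible way to compute several of them for a model fragment, treating the model as a set of elements with undirected links. These formulas are illustrative assumptions, not the definitions used in the thesis; multiplicity (roughly, how many separate candidate fragments realize a feature) depends on the set of candidate solutions and is omitted here.

```python
# Hedged sketch: possible operationalizations of some of the
# measurements (size, volume, density, dispersion) for a model
# fragment. Illustrative assumptions, not the thesis's definitions.

def fragment_measurements(model_elements, fragment, links):
    """model_elements: all element ids in the model (the search space);
    fragment: subset of elements forming the fragment (the solution);
    links: set of undirected (a, b) pairs between model elements."""
    size = len(fragment)              # elements in the fragment
    volume = len(model_elements)      # elements in the whole model
    density = size / volume           # fragment's share of the model
    # dispersion: number of connected pieces the fragment splits into
    remaining, pieces = set(fragment), 0
    while remaining:
        stack = [remaining.pop()]
        pieces += 1
        while stack:
            node = stack.pop()
            for a, b in links:
                nbr = b if a == node else a if b == node else None
                if nbr in remaining:
                    remaining.remove(nbr)
                    stack.append(nbr)
    return {"size": size, "volume": volume,
            "density": density, "dispersion": pieces}

model = {"e1", "e2", "e3", "e4", "e5"}
links = {("e1", "e2"), ("e2", "e3"), ("e4", "e5")}
print(fragment_measurements(model, {"e1", "e2", "e4"}, links))
```

A highly dispersed fragment (many disconnected pieces) is generally harder for a location approach to find than a compact one, which is why reporting such measurements alongside accuracy makes results from different approaches comparable.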