
    Information Technology Sophistication in Hospitals: A Field Study in Quebec

    The Quebec health sector has been experiencing a period of great turmoil over the last five years. Among other institutions, hospitals face enormous pressure from government funding cuts. Several empirical studies in the information systems field have shown that the use of computer-based information systems can have positive impacts on organizational performance, and many agree that health care institutions are no exception. But if one wishes to identify the effects of IT on the delivery of care, one must be able to characterize IT for operationalization purposes. The objective of this research project is twofold. Our first aim is to develop and validate a measurement instrument of IT sophistication in hospitals. Such an instrument should provide hospital managers with a diagnostic tool capable of indicating the profile of their respective institutions with regard to IT use and of comparing this profile to those of other, similar health institutions. In this line of thought, our second objective is to present the IT sophistication profile of Quebec hospitals.
    The Quebec health sector is undergoing major upheaval. Many agree that hospitals have no alternative but to turn to advanced technologies in order to ensure an adequate level of quality of care while minimizing the costs associated with that care. Yet if we want to identify the effects of IT on hospital performance, we must be able to define IT as a construct and characterize it for operationalization as an independent, dependent, or moderating variable within a conceptual research framework. This study pursues two specific objectives. The first is to develop a questionnaire measuring the degree of IT sophistication in hospitals and to validate it with the population of Quebec hospitals. Our second objective is to present, in summary form, the IT sophistication profile of Quebec hospitals.
    Keywords: IT sophistication, measurement instrument, hospital information systems

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by correlating individual, temporally distributed events within a multiple data stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation and adaptation are more readily facilitated
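    To make the contrast concrete, the following is a minimal Python sketch of the two strategies and a naive hybrid of them. The event fields, the login-failure rule and the thresholds are invented for illustration and are not taken from the report.

```python
# Minimal sketch of the two detection strategies discussed above and a
# naive hybrid of them. Event fields, thresholds, and the toy "misuse"
# rule are illustrative assumptions, not taken from the report.
from statistics import mean, stdev

# Rule-based (signature) detector: encodes known misuses, so unseen
# misuses slip through (false negatives).
def rule_based_alerts(events):
    alerts = []
    for e in events:
        if e["type"] == "login_failure" and e["count"] >= 5:
            alerts.append(("known-misuse", e))
    return alerts

# Anomaly detector: learns what "normal" looks like from a training
# window, then flags deviations (prone to false positives).
def anomaly_alerts(training, events, k=3.0):
    counts = [e["count"] for e in training]
    mu, sigma = mean(counts), stdev(counts)
    return [("anomaly", e) for e in events if abs(e["count"] - mu) > k * sigma]

def hybrid_alerts(training, events):
    # Union of both mechanisms: signatures catch known cases,
    # the anomaly model catches unknown ones.
    return rule_based_alerts(events) + anomaly_alerts(training, events)

if __name__ == "__main__":
    normal = [{"type": "login_failure", "count": c} for c in (1, 2, 1, 0, 2, 1, 1)]
    live = [{"type": "login_failure", "count": 6},   # matches the known-misuse rule
            {"type": "data_export", "count": 40}]    # only the anomaly model sees this
    for tag, event in hybrid_alerts(normal, live):
        print(tag, event)
```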

    Automated Testing For Software Process Automation

    Robotic Process Automation is a way of automating business processes within short timespans. At the case company the initial automation step is implemented by business experts rather than software developers. Using developers with limited software engineering experience allows for high speed but raises concerns about automation quality. One way to address these concerns is extensive testing, which takes up much of the integration developers' time. The aim of this thesis is to increase the quality of the development process while minimizing the impact on development time through test automation. The research is carried out as part of the Robotic Process Automation project at the case company. The artifact produced by this thesis is a process for automatically testing software automation products. Automated testing of software automation solutions was found to be technically feasible but difficult. Robotic Process Automation poses several novel challenges for test automation, but certain uses, such as regression and integration testing, are still possible. Benefits of the chosen approach are traceability of quality, developer confidence and potentially increased development speed. In addition, test automation facilitates the adoption of agile software development methods, such as continuous integration and deployment. The use of continuous integration in relation to Robotic Process Automation was demonstrated via a newly developed workflow.
    Software automation is a fast way to automate routine business processes. At the case company, the automation is created by business experts rather than software developers. Using initial developers with little software engineering experience yields quick solutions, but at the same time raises quality concerns. Quality can be assessed by testing the automation solutions extensively, but this takes considerable time. The purpose of this thesis is to use test automation to raise the quality of the development process without significantly increasing the workload. The research was carried out as part of the case company's robotic process automation project. The thesis developed a process for automatically testing software automation processes. Testing was found to be possible but challenging in practice. Several problems emerged during testing, but some solutions, such as regression and integration testing, proved useful. The benefits of the approach were found to be improved traceability of quality, developer confidence and development speed. In addition, test automation enables the use of modern agile methods such as continuous integration. The possibility of using continuous integration was demonstrated with a revised workflow.
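    As an illustration of the regression-testing idea described above, the sketch below checks a toy automation step against a stored golden result, so an unintended change in the bot's logic fails the build. The invoice-normalisation function, its input and the expected output are hypothetical stand-ins for a real RPA workflow, not the case company's process.

```python
# Minimal regression-test sketch for an RPA-style step, using only the
# standard library. The bot logic and golden output are hypothetical.
import unittest

def normalise_invoice(record: dict) -> dict:
    """Toy stand-in for an RPA step that cleans a business record."""
    return {
        "customer": record["customer"].strip().title(),
        "amount": round(float(record["amount"]), 2),
        "currency": record.get("currency", "EUR").upper(),
    }

# Known-good output captured when the automation was last verified.
GOLDEN = {"customer": "Acme Oy", "amount": 1250.5, "currency": "EUR"}

class InvoiceRegressionTest(unittest.TestCase):
    def test_known_input_still_produces_golden_output(self):
        raw = {"customer": "  acme oy ", "amount": "1250.50"}
        self.assertEqual(normalise_invoice(raw), GOLDEN)

if __name__ == "__main__":
    # In a continuous-integration pipeline this module would run on every
    # commit, giving the traceability and confidence mentioned above.
    unittest.main()
```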

    Marine Forces Reserve: accelerating knowledge flow through asynchronous learning technologies

    "Further distribution of all or part of this report is authorized."Most scholars agree that knowledge is key to competitive advantage. Organizations able to move dynamic knowledge quickly can outperform their rivals, peers and counterparts. The US Marine Corps is clearly a knowledge organization, and Marine Forces Reserve (MFR) is an organization exemplifying the need for rapid knowledge movement. Indeed, a key component to MFR success is the knowledge of Active Duty Inspector Instructors (I-Is), but a great number of them are required to take charge quickly—although most lack prior training and experience working with the unique and dynamic challenges of the Reserves—and their extant knowledge flows are relegated principally to questionably effective presentation slideshows and error-prone on the job training. Leveraging deftly the power of information technology—in conjunction with knowledge management principles, methods and techniques—we employ a class of systems used principally for distributed and remote learning, and we engage key subject matter experts at MFR Headquarters to accelerate the knowledge flows required for effective I-I performance. Preliminary results point to huge return on investment in terms of cost, and early indications suggest that training efficacy can be just as effective as—if not better than—accomplished through previous methods. This sets the stage for even more effective use of I-I personnel time and energy when they gather for their annual conference in New Orleans, and it highlights enhanced opportunities for continuing our acceleration of knowledge flows through online training and support—both for I-I personnel and across other MFR training populations. Further research, implementation and assessment are required, but results to date are impressive and encouraging.Marine Forces ReserveMarine Forces ReserveApproved for public release; distribution is unlimited

    Identifying Common Patterns and Unusual Dependencies in Faults, Failures and Fixes for Large-scale Safety-critical Software

    As software evolves, becoming a more integral part of complex systems, modern society becomes more reliant on the proper functioning of such systems. However, the field of software quality assurance lacks detailed empirical studies from which best practices can be determined. The fundamental factors that contribute to software quality are faults, failures and fixes, and although some studies have considered specific aspects of each, comprehensive studies have been quite rare. Thus, establishing the cause-effect relationship between the fault(s) that caused individual failures, as well as the link to the fixes made to prevent those failures from (re)occurring, appears to be a unique characteristic of our work. In particular, we analyze fault types, verification activities, severity levels, investigation effort, artifacts fixed, components fixed, and the effort required to implement fixes for a large industrial case study. The analysis includes descriptive statistics, statistical inference through formal hypothesis testing, and data mining. Some of the most interesting empirical results include: (1) Contrary to popular belief, later life-cycle faults dominate as causes of failures. Furthermore, over 50% of high-priority failures (e.g., post-release failures and safety-critical failures) were caused by coding faults. (2) 15% of failures led to fixes spread across multiple components, and the spread was largely affected by the software architecture. (3) The amount of effort spent fixing faults associated with each failure was not uniformly distributed across failures; fixes with a greater spread across components and artifacts required more effort. Overall, the work indicates that fault prevention and elimination efforts focused on later life-cycle faults are essential, as coding faults were the dominant cause of safety-critical failures and post-release failures. Further, statistical correlation and/or traditional data mining techniques show potential for assessment and prediction of the locations of fixes and the associated effort. By providing quantitative results and including statistical hypothesis testing, which is not yet a standard practice in software engineering, our work enriches the empirical knowledge needed to improve the state-of-the-art and practice in software quality assurance
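    For readers unfamiliar with the kind of statistical check involved, the short sketch below computes a correlation between fix spread and fix effort on invented toy numbers; it only illustrates the technique and uses none of the study's data.

```python
# Illustrative only: a pure-Python check of the kind of relationship the
# study reports, namely that fixes spread over more components tend to
# need more effort. The numbers below are toy values, not the industrial
# data set analysed in the thesis.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# components touched by each fix, and person-hours spent on it (toy data)
spread = [1, 1, 2, 2, 3, 4, 5]
effort = [2.0, 3.5, 4.0, 6.0, 9.0, 12.5, 15.0]

r = pearson(spread, effort)
print(f"Pearson r between fix spread and effort: {r:.2f}")
```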

    Intelligent Radio Spectrum Monitoring

    Spectrum monitoring is an important part of the radio spectrum management process, providing feedback on the workflow that allows for our current wirelessly interconnected lifestyle. The constantly increasing number of users and uses of wireless technologies is pushing the limits and capabilities of the existing infrastructure, demanding new alternatives to manage and analyse the extremely large volume of data produced by existing spectrum monitoring networks. This study addresses this problem by proposing an information management system architecture able to increase the analytical level of a spectrum monitoring measurement network. The proposal includes an alternative to manage the data produced by such a network, methods to analyse the spectrum data, and a way to automate the data gathering process. The study was conducted using system requirements from the Brazilian National Telecommunications Agency, and related functional concepts were aggregated from the reviewed scientific literature and publications from the International Telecommunication Union. The proposed solution employs a microservice architecture to manage the data, including tasks such as format conversion, analysis, optimization and automation. To enable efficient data exchange between services, we propose the use of a hierarchical structure created using the HDF5 format. The suggested architecture was partially implemented as a pilot project, which allowed us to demonstrate the viability of the presented ideas and to perform an initial refinement of the proposed data format and analytical algorithms. The results pointed to the potential of the solution to solve some of the limitations of the existing spectrum monitoring workflow. The proposed system may play a crucial role in the integration of spectrum monitoring activities into open data initiatives, promoting transparency and data reusability for this important public service.
    Spectrum use monitoring and analysis, a service known as spectrum monitoring, is an important part of the radio spectrum management process, since it provides the feedback needed by the workflow that enables our current interconnected, wireless lifestyle. The ever-growing number of users and the increasing use of wireless technologies extend the demands on the existing infrastructure, requiring new alternatives to manage and analyse the large volume of data produced by spectrum measurement stations. This study addresses this problem by proposing an information management system architecture capable of increasing the analytical capability of a network of measurement equipment dedicated to spectrum monitoring. The proposal includes an alternative for managing the data produced by such a network, methods for analysing the collected data, and a proposal to automate the data gathering process. The study was conducted with reference to the requirements of the Brazilian National Telecommunications Agency, additionally considering related functional requirements described in the scientific literature and in publications of the International Telecommunication Union. The proposed solution employs a microservice architecture for data management, including tasks such as format conversion, analysis, optimization and automation. To enable efficient data exchange between services, we suggest the use of a hierarchical structure created using the HDF5 format. The architecture was partially implemented as a pilot project, which made it possible to demonstrate the viability of the presented ideas and to refine the proposed data format and analytical algorithms. The results pointed to the potential of the solution to overcome some of the limitations of the traditional spectrum monitoring workflow. Use of the proposed system can improve the integration of these activities and drive open data initiatives, promoting transparency and the reuse of the data generated by this important public service.
    Santos Lobão, F. (2019). Intelligent Radio Spectrum Monitoring. http://hdl.handle.net/10251/128850 (TFG)
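    As a rough illustration of the hierarchical HDF5 exchange format mentioned above, the following sketch writes a synthetic spectrum sweep with the h5py library. The group layout, attribute names and station identifier are assumptions made for this example, not the project's actual schema.

```python
# A small sketch of a hierarchical HDF5 container for exchanging spectrum
# measurements between services. The layout below is an assumption for
# illustration only. Requires the h5py and numpy packages.
import h5py
import numpy as np

with h5py.File("spectrum_sample.h5", "w") as f:
    station = f.create_group("stations/BR-0001")          # hypothetical station id
    station.attrs["latitude"] = -15.79
    station.attrs["longitude"] = -47.88

    sweep = station.create_group("sweeps/2019-06-01T12:00:00Z")
    sweep.attrs["start_mhz"] = 88.0
    sweep.attrs["stop_mhz"] = 108.0

    # one power level (dBm) per frequency bin of the synthetic sweep
    levels = -90.0 + 5.0 * np.random.rand(2048)
    sweep.create_dataset("level_dbm", data=levels, compression="gzip")

# Any service in the pipeline can then open the file and navigate the same
# hierarchy, e.g. the "stations/BR-0001/sweeps" group, for analysis tasks.
```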

    Assessment of socio-techno-economic factors affecting the market adoption and evolution of 5G networks: Evidence from the 5G-PPP CHARISMA project

    5G networks are rapidly becoming the means to accommodate the complex demands of vertical sectors. The European project CHARISMA aims to develop a hierarchical, distributed-intelligence 5G architecture offering low latency, security, and open access as features intrinsic to its design. Finding its place in such a complex landscape of heterogeneous technologies and devices requires the designers of CHARISMA and other similar 5G architectures, as well as other related market actors, to take into account the multiple technical, economic and social aspects that will affect the deployment and the rate of adoption of 5G networks by the general public. In this paper, a roadmapping activity identifying the key technological and socio-economic issues is performed, so as to help ensure a smooth transition from legacy networks to future 5G networks. Based on the fuzzy Analytical Hierarchy Process (AHP) method, a survey of pairwise comparisons was conducted within the CHARISMA project among 5G technology and deployment experts, and several critical aspects were identified and prioritized. The conclusions drawn are expected to be a valuable tool for decision and policy makers as well as for stakeholders
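    To show how priorities emerge from pairwise comparisons, the sketch below applies the classical AHP geometric-mean approximation rather than the fuzzy AHP variant used in the paper; the three criteria and the judgement values are invented for illustration.

```python
# Worked sketch of deriving priority weights from a pairwise-comparison
# matrix using the classical AHP geometric-mean approximation. Criteria
# and judgement values are illustrative assumptions.
import math

criteria = ["latency", "security", "deployment cost"]

# comparison[i][j] = how much more important criterion i is than j (1-9 scale)
comparison = [
    [1.0, 3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 1 / 2.0, 1.0],
]

# geometric mean of each row, then normalise to obtain priority weights
geo_means = [math.prod(row) ** (1 / len(row)) for row in comparison]
total = sum(geo_means)
weights = [g / total for g in geo_means]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```

    A fuzzy AHP study replaces each crisp judgement with a fuzzy number to capture expert uncertainty, but the aggregation-and-normalisation step illustrated here is the same in spirit.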

    Empirical analysis of software reliability

    This thesis presents an empirical study of architecture-based software reliability based on large real case studies. It undoubtedly demonstrates the value of using open source software to empirically study software reliability. The major goal is to empirically analyze the applicability, adequacy and accuracy of architecture-based software reliability models. In both our studies we found evidence that the number of failures due to faults in more than one component is not insignificant. Consequently, existing models that make such simplifying assumptions must be improved to account for this phenomenon. This thesis' contributions include developing automatic methods for efficient extraction of necessary data from the available repositories, and using this data to test how and when architecture-based software reliability models work. We study their limitations and ways to improve them. Our results show the importance of knowledge gained from the interaction between theoretical and empirical research
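    As a minimal illustration of what an architecture-based reliability model composes, the sketch below combines assumed per-component reliabilities with a usage profile; the components, numbers and the simple path-based form are illustrative assumptions rather than the specific models evaluated in the thesis.

```python
# A very small path-based sketch of the idea behind architecture-based
# reliability models: system reliability is composed from per-component
# reliabilities and how often each usage scenario exercises them.
# All names and numbers are illustrative assumptions.

component_reliability = {"ui": 0.999, "core": 0.995, "storage": 0.990}

# each scenario: (probability of occurrence, components executed along its path)
scenarios = [
    (0.7, ["ui", "core"]),
    (0.3, ["ui", "core", "storage"]),
]

def system_reliability(components, usage):
    total = 0.0
    for prob, path in usage:
        path_rel = 1.0
        for c in path:
            path_rel *= components[c]   # independence assumption of such models
        total += prob * path_rel
    return total

print(f"estimated system reliability: {system_reliability(component_reliability, scenarios):.4f}")
```

    The per-path independence assumption in this sketch is exactly the kind of simplification that the thesis' empirical results call into question when a single failure involves faults in several components.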