
    An Autoethnographic Account of Innovation at the US Department of Veterans Affairs

    The history of health information technology (HIT) at the U.S. Department of Veterans Affairs (VA) has been characterized by both enormous successes and catastrophic failures. While the VA was once hailed as the way to the future of twenty-first-century health care, many of its programs have been mismanaged, delayed, or flawed, wasting hundreds of millions of taxpayer dollars. Since 2015 the U.S. Government Accountability Office (GAO) has designated VA HIT as susceptible to waste, fraud, and mismanagement. The timely central research question I ask in this study is: can healthcare IT at the VA be healed? To address this question, I investigate a HIT case study at the VA Center of Innovation (VACI), originally designed to be the flagship initiative of the open government transformation at the VA. The Open Source Electronic Health Record Alliance (OSEHRA) was designed to promote an open-innovation ecosystem built on a public-private-academic partnership. Drawing on my fifteen years of experience at the VA, I use an autoethnographic methodology to make a significant value-added contribution to understanding and modeling the VA's approach to innovation. I apply several information systems frameworks, including People, Process, and Technology (PPT); Technology, Organization, and Environment (TOE); and the Technology Acceptance Model (TAM), and propose a new adaptive theory to explain the inability of VA HIT to innovate. From the perspective of people and culture, I study retaliation against whistleblowers, organizational behavioral integrity, and lack of transparency in communications. I examine VA processes, including the different software development methodologies used and the development and operations (DevOps) process of an open-source application developed at VACI, the Radiology Protocol Tool Recorder (RAPTOR), a Veterans Health Information Systems and Technology Architecture (VistA) radiology workflow module. I find that the VA has chosen to migrate away from in-house application software and buy commercial software. These People, Process, and Technology findings are representative of larger systemic failings and illustrate the systemic issues associated with IT innovation at the VA. This autoethnographic account builds on first-hand project experience and literature-based insights.

    DACA: an architecture for implementing dynamic access control mechanisms in business tiers

    Doctoral thesis in Computer Science. Access control is a software engineering challenge in database applications. Currently, there is no satisfactory solution for dynamically implementing evolving fine-grained access control mechanisms (FGACM) on the business tiers of relational database applications. To close this gap, we propose an architecture, herein referred to as the Dynamic Access Control Architecture (DACA). DACA allows FGACM to be built and updated dynamically at runtime in accordance with the established fine-grained access control policies (FGACP). DACA exploits features of Call Level Interfaces (CLI) to implement FGACM on business tiers; among these features, we emphasize their performance and their multiple modes of access to data residing in relational databases. The different CLI access modes are wrapped by typed objects driven by FGACM, which are built and updated at runtime. Programmers dispense with the traditional CLI access modes and instead use the ones that are dynamically implemented and updated. DACA comprises three main components: the Policy Server (a repository of metadata for FGACM), the Dynamic Access Control Component (DACC, the business-tier component responsible for implementing FGACM), and the Policy Manager (a broker between the DACC and the Policy Server). Unlike current approaches, DACA does not depend on any particular access control model or policy, which promotes its applicability to a wide range of situations. To validate DACA, a solution based on Java, Java Database Connectivity (JDBC), and SQL Server was devised and implemented. Two evaluations were carried out: the first assesses DACA's capability to implement and update FGACM dynamically at runtime, and the second compares DACA's performance against standard use of JDBC without any FGACM. The collected results show that DACA is an effective approach for implementing evolving FGACM on business tiers based on Call Level Interfaces, in this case JDBC.
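
    The abstract describes the architecture only at a high level; the sketch below is a minimal illustration, not the DACA implementation, of the core idea that business-tier code reaches the database only through typed objects whose permitted JDBC access modes are dictated by a policy consulted at runtime. The class and interface names are hypothetical.

    // Minimal sketch (not the DACA implementation): a business-tier wrapper that only
    // exposes the JDBC access modes permitted by a fine-grained policy fetched at runtime.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.Set;

    public class PolicyDrivenTableAccess {

        /** Stand-in for a policy repository: table -> operations currently allowed. */
        public interface AccessPolicy {
            Set<String> allowedOperations(String table); // e.g. {"SELECT", "UPDATE"}
        }

        private final Connection connection;
        private final AccessPolicy policy; // refreshed at runtime by a policy broker (not shown)
        private final String table;

        public PolicyDrivenTableAccess(Connection connection, AccessPolicy policy, String table) {
            this.connection = connection;
            this.policy = policy;
            this.table = table;
        }

        /** Read access is available only while the current policy grants SELECT on the table. */
        public ResultSet select(String whereColumn, Object value) throws SQLException {
            require("SELECT");
            PreparedStatement ps = connection.prepareStatement(
                    "SELECT * FROM " + table + " WHERE " + whereColumn + " = ?");
            ps.setObject(1, value);
            return ps.executeQuery();
        }

        /** Write access is available only while the current policy grants UPDATE on the table. */
        public int update(String setColumn, Object newValue, String whereColumn, Object key) throws SQLException {
            require("UPDATE");
            PreparedStatement ps = connection.prepareStatement(
                    "UPDATE " + table + " SET " + setColumn + " = ? WHERE " + whereColumn + " = ?");
            ps.setObject(1, newValue);
            ps.setObject(2, key);
            return ps.executeUpdate();
        }

        private void require(String operation) {
            if (!policy.allowedOperations(table).contains(operation)) {
                throw new SecurityException(operation + " on " + table + " is not permitted by the current policy");
            }
        }
    }

    In a DACA-like deployment, a broker component would refresh the AccessPolicy implementation whenever the policy repository changes, so the operations exposed to business code evolve at runtime without redeployment.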

    On the genesis of computer forensis

    This thesis presents a coherent set of research contributions to the new discipline of computer forensis. It analyses the emergence of computer forensis and defines the challenges facing this discipline, carries forward research advances in conventional methodology, introduces a novel approach to using virtual environments in forensis, and systemises the computer forensis body of knowledge, leading to the establishment of a tertiary curriculum. The emergence of computer forensis as a separate discipline of science was triggered by the evolution and growth of computer crime. Computer technology has reached a stage at which a conventional, mechanistic approach to collecting and analysing data is insufficient: the existing methodology must be formalised and must embrace technologies and methods that enable the inclusion of transient data and live-system analysis. Further work is crucial to incorporate advances in related disciplines such as computer security and information systems audit, as well as developments in operating systems that make computer forensics concerns inherent in their design; for example, it is proposed that some of the features offered by persistent systems could be built into conventional operating systems to make illicit activities easier to identify and analyse. The analysis of permanent data storage is fundamental to computer forensics practice. Little in conventional computer forensics methodology is finalised, and much remains to be discovered. This thesis contributes to the formalisation and improved integrity of the forensic handling of data storage by: formalising methods for data collection and analysis in the NTFS (Microsoft file system) environment; presenting a safe methodology for handling data backups in order to avoid information loss where Alternate Data Streams (ADS) are present; and formalising methods of hiding and extracting hidden and encrypted data. A significant contribution of this thesis lies in applying virtualisation, the simulation of a computer in a virtual environment created by the underlying hardware and software, to computer forensics practice. Computer systems are not easily analysed for forensic purposes, and it is demonstrated that virtualisation applied in computer forensics allows for more efficient and accurate identification and analysis of the evidence. A new method is proposed in which two environments used in parallel can bring faster and verifiable results that do not depend on proprietary, closed-source tools, and may lead to a gradual shift from commercial Windows software to open source software (OSS). The final contribution of this thesis is systemising the body of knowledge in computer forensics, a necessary condition for it to become an established discipline of science. This systemisation led to the design and development of a tertiary curriculum in computer forensics, illustrated here with a case study of the computer forensics major for the Bachelor of Computer Science at the University of Western Sydney. All genesis starts as an idea. A natural part of the scientific research process is replacing previous assumptions, concepts, and practices with new ones that better approximate the truth. This thesis advances the computer forensis body of knowledge in areas that are crucial to the further development of this discipline. Please note that the appendices to this thesis consist of separately published items which cannot be made available due to copyright restrictions; these items are listed in the PDF attachment for reference purposes.
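
    As an aside on why ADS handling matters for backup integrity (an illustration, not the methodology formalised in the thesis): on Windows/NTFS the Java runtime exposes alternate data streams through UserDefinedFileAttributeView, so a short scan can flag files carrying extra streams that a naive copy or backup would silently drop. The directory path below is a placeholder.

    // Illustrative only: flag files that carry NTFS alternate data streams.
    // On Windows, Java's user-defined file attributes are backed by ADS.
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.UserDefinedFileAttributeView;
    import java.util.stream.Stream;

    public class AdsScanner {
        public static void main(String[] args) throws IOException {
            Path root = Paths.get(args.length > 0 ? args[0] : "C:\\evidence"); // placeholder path
            try (Stream<Path> files = Files.walk(root)) {
                files.filter(Files::isRegularFile).forEach(AdsScanner::reportStreams);
            }
        }

        private static void reportStreams(Path file) {
            UserDefinedFileAttributeView view =
                    Files.getFileAttributeView(file, UserDefinedFileAttributeView.class);
            if (view == null) return; // file system does not support named streams
            try {
                for (String name : view.list()) {             // each entry is an alternate stream
                    ByteBuffer buf = ByteBuffer.allocate(view.size(name));
                    view.read(name, buf);                     // read the hidden stream's content
                    System.out.printf("%s has stream '%s' (%d bytes)%n", file, name, buf.position());
                }
            } catch (IOException e) {
                System.err.println("Could not inspect " + file + ": " + e.getMessage());
            }
        }
    }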

    Approximation Opportunities in Edge Computing Hardware : A Systematic Literature Review

    With the increasing popularity of the Internet of Things and massive Machine Type Communication technologies, the number of connected devices is rising. While these devices bring valuable benefits to our lives, bandwidth and latency constraints make it challenging to process the data volumes they generate in the Cloud. A promising response to these challenges is the combination of Edge and approximate computing techniques, which allows data to be processed nearer to the user. This paper surveys the potential benefits at the intersection of these two paradigms. We provide a state-of-the-art review of circuit-level and architecture-level hardware techniques and popular applications, and we outline essential future research directions.
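
    To make the circuit-level side of the survey concrete, the sketch below models one well-known approximation technique in software: a lower-part-OR adder, which adds the upper bits exactly and replaces the carry chain of the lowest k bits with a bitwise OR. The bit widths, the choice of k, and the error measurement are illustrative, not results from the paper.

    // Software model of a lower-part-OR approximate adder and its mean absolute error.
    public class ApproximateAdder {

        /** Approximate add: exact on the upper bits, bitwise OR on the lowest k bits. */
        static int loaAdd(int a, int b, int k) {
            int lowMask = (1 << k) - 1;
            int high = (a & ~lowMask) + (b & ~lowMask); // exact add, no carry from the low part
            int low = (a | b) & lowMask;                // cheap OR approximates the low bits
            return high | low;
        }

        public static void main(String[] args) {
            int k = 8;
            int samples = 1_000_000;
            long totalError = 0;
            java.util.Random rng = new java.util.Random(42);
            for (int i = 0; i < samples; i++) {
                int a = rng.nextInt(1 << 20), b = rng.nextInt(1 << 20);
                totalError += Math.abs((a + b) - loaAdd(a, b, k));
            }
            System.out.printf("mean absolute error with k=%d low bits approximated: %.1f%n",
                    k, (double) totalError / samples);
        }
    }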

    Anthropology 2018 APR Self-Study & Documents

    UNM Anthropology APR self-study and review team report for Fall 2018, fulfilling the requirements of the Higher Learning Commission.

    Towards formalisation of situation-specific computations in pervasive computing environments

    We have categorised the characteristics and content of pervasive computing environments (PCEs) and demonstrated why a non-dynamic approach to knowledge conceptualisation in PCEs does not fulfil the expectations we may have of them. Consequently, we have proposed a formalised computational model, the FCM, for knowledge representation and reasoning in PCEs, which secures the delivery of situation- and domain-specific services to their users. The FCM is a user-centric model, materialised as a software engineering solution, that stores the computations generated from the FCM within software architectural components, which in turn can be deployed using modern software technologies. The model is also inspired by the Semantic Web (SW) vision and the provision of SW technologies. The FCM therefore creates a semantically rich, situation-specific PCE based on SWRL-enabled OWL ontologies, which allows reasoning about the situation in a PCE and the delivery of situation-specific services. The model is illustrated through the example of remote patient monitoring in the healthcare domain. Numerous software applications generated from the FCM have been deployed using Integrated Development Environments and the OWL API.
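
    As a minimal sketch of how such SWRL-enabled OWL ontologies could be queried from Java, the example below loads an ontology with the OWL API and asks a DL reasoner (HermiT here, which evaluates DL-safe SWRL rules) for the individuals a situation class currently covers. The ontology file, IRI, and class name are placeholders rather than the FCM's actual artefacts.

    // Illustrative query over a SWRL-enabled OWL ontology using the OWL API and HermiT.
    import java.io.File;
    import org.semanticweb.HermiT.ReasonerFactory;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.IRI;
    import org.semanticweb.owlapi.model.OWLClass;
    import org.semanticweb.owlapi.model.OWLNamedIndividual;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyManager;
    import org.semanticweb.owlapi.reasoner.OWLReasoner;

    public class SituationQuery {
        public static void main(String[] args) throws Exception {
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
            // Ontology with classes, individuals, and SWRL rules describing the monitored situation.
            OWLOntology ontology = manager.loadOntologyFromOntologyDocument(new File("patient-monitoring.owl"));

            // HermiT evaluates DL-safe SWRL rules as part of reasoning.
            OWLReasoner reasoner = new ReasonerFactory().createReasoner(ontology);
            reasoner.precomputeInferences();

            // Ask which individuals the rules currently classify into the situation class.
            OWLClass critical = manager.getOWLDataFactory()
                    .getOWLClass(IRI.create("http://example.org/fcm#CriticalSituation"));
            for (OWLNamedIndividual patient : reasoner.getInstances(critical, false).getFlattened()) {
                System.out.println("Situation-specific service should be triggered for: " + patient);
            }
        }
    }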

    The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an information exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics included the necessary use of computers in the solution of today's infinitely complex problems, the need for greatly increased storage densities in both optical and magnetic recording media, currently popular storage media and the risk factors of magnetic media storage, and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics addressed system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.

    A survey of the application of soft computing to investment and financial trading


    ISCR Annual Report: Fiscal Year 2004
