
    Impact of Mobile and Wireless Technology on Healthcare Delivery Services

    Modern healthcare delivery services embrace leading-edge technologies and new scientific discoveries to enable better treatment of disease and earlier detection of most life-threatening conditions. The healthcare industry finds itself in a state of turbulence and flux. The major innovations lie in the use of information technologies and, particularly, the adoption of mobile and wireless applications in healthcare delivery [1]. Wireless devices are becoming increasingly popular across the healthcare field, enabling caregivers to review patient records and test results, enter diagnosis information during patient visits, and consult drug formularies, all without the need for a wired network connection [2]. A pioneering medical-grade wireless infrastructure supports complete mobility throughout the full continuum of healthcare delivery. It facilitates the accurate collection and immediate dissemination of patient information to physicians and other healthcare professionals at the time of clinical decision-making, thereby ensuring timely, safe, and effective patient care. This paper investigates the wireless technologies that can be used for medical applications and the effectiveness of such wireless solutions in a healthcare environment. It discusses the challenges encountered and concludes with recommendations on policies and standards for the use of such technologies within hospitals.

    The future of Cybersecurity in Italy: Strategic focus area

    This volume has been created as a continuation of the previous one, with the aim of outlining a set of focus areas and actions that the Italian national research community considers essential. The book touches on many aspects of cyber security, ranging from the definition of the infrastructure and controls needed to organize cyber defence, to the actions and technologies to be developed for better protection, and from the identification of the main technologies to be defended, to the proposal of a set of horizontal actions for training, awareness raising, and risk management.

    Application of Incident Command Structure to clinical trial management in the academic setting: principles and lessons learned

    Background: Clinical trial success depends on appropriate management, but practical guidance on trial organisation and planning is lacking. The Incident Command System (ICS) is the ‘gold standard’ management system developed for managing diverse operations in major incident and public health arenas. It enables effective and flexible management through the integration of personnel, procedures, resources, and communications within a common hierarchical organisational structure. Conventional ICS organisation consists of five function modules: Command, Planning, Operations, Logistics, and Finance/Administration. Large clinical trials additionally require a separate Regulatory Administrative arm and an Information arm consisting of dedicated data management and information technology staff. We applied ICS principles to the organisation and management of the Prehospital Use of Plasma in Traumatic Haemorrhage (PUPTH) trial, a multidepartmental, multiagency, randomised clinical trial investigating the effect of prehospital administration of thawed plasma on mortality and coagulation response in severely injured trauma patients. We describe the ICS system as it would apply to large clinical trials in general, and the benefits, barriers, and lessons learned in using ICS principles to reorganise and coordinate the PUPTH trial. Results: Without a formal trial management structure, the early stages of the trial were characterised by inertia and organisational confusion. Implementing ICS improved organisation, coordination, and communication between multiple agencies and service groups, and greatly streamlined regulatory compliance administration. However, clinicians’ unfamiliarity with ICS culture, conflicting resource allocation priorities, and communication bottlenecks were significant barriers. Conclusions: ICS is a flexible and powerful organisational tool for managing large, complex clinical trials. However, for successful implementation, the cultural, psychological, and social environment of trial participants must be accounted for, and personnel need to be educated in the basics of ICS.
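    The module layout described in the abstract lends itself to a simple hierarchical representation. The following minimal Python sketch is not taken from the paper; the node names merely echo the function modules and extra arms listed above, and the root label is invented for illustration.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Section:
        """One node in an ICS-style hierarchy."""
        name: str
        subsections: List["Section"] = field(default_factory=list)

    def outline(section: Section, depth: int = 0) -> None:
        """Print the hierarchy, one level of indentation per tier."""
        print("  " * depth + section.name)
        for sub in section.subsections:
            outline(sub, depth + 1)

    # Conventional five function modules, plus the two extra arms suggested for large trials.
    trial_management = Section("Trial management (ICS)", [
        Section("Command"),
        Section("Planning"),
        Section("Operations"),
        Section("Logistics"),
        Section("Finance/Administration"),
        Section("Regulatory Administration"),
        Section("Information", [Section("Data management"), Section("Information technology")]),
    ])

    outline(trial_management)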

    Effectiveness of implementing blockchain technology in the logistics supply chain

    Technologies currently have a tremendous impact on all spheres of the economy, business, and the state. They fundamentally change people's conception of trade, property, and the interaction of market entities. The development and implementation of artificial intelligence, additive technologies, information and communication technologies, green technologies, biotechnologies, and blockchain confirm their leading importance and their inevitability relative to traditional approaches to these activities. In the modern world, only companies with flexible vision, equipment, and technologies, able to reorganise instantly and adapt to new conditions and challenges, will benefit. The point at issue is the emergence of Industry 4.0 as a new technological mode.

    CRC for Construction Innovation: annual report 2008-2009


    Value-driven Security Agreements in Extended Enterprises

    Today organizations are highly interconnected in business networks called extended enterprises. This is mostly facilitated by outsourcing and by new economic models based on pay-as-you-go billing, all supported by IT-as-a-service. Although outsourcing has been around for some time, what is new is that organizations are increasingly outsourcing critical business processes, engaging in complex service bundles, and moving infrastructure and its management into the custody of third parties. Although this gives competitive advantage by reducing cost and increasing flexibility, it increases security risks by eroding the security perimeters that used to separate insiders with security privileges from outsiders without them. The classical security distinction between insiders and outsiders is supplemented with a third category of threat agents, namely external insiders, who are not subject to the internal control of an organization yet have some access privileges to its resources that normal outsiders do not have. Protection against external insiders requires security agreements between the organizations in an extended enterprise. Currently, there is no practical method that allows security officers to specify such requirements. In this paper we provide a method for modeling an extended enterprise architecture, identifying external insider roles, and specifying security requirements that mitigate the security threats posed by these roles. We illustrate our method with a realistic example.
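    As a rough illustration of the threefold distinction drawn in the abstract, the following Python sketch classifies agents by whether they hold access privileges and whether they are subject to an organization's internal controls. It is hypothetical and is not the paper's method; the privilege and control sets are invented for illustration.

    from dataclasses import dataclass, field

    # Controls an organization applies to its own personnel (illustrative only).
    INTERNAL_CONTROLS = {"security policy", "background check"}

    @dataclass
    class Agent:
        name: str
        privileges: set = field(default_factory=set)  # resources the agent may access
        controls: set = field(default_factory=set)    # internal controls the agent is subject to

    def classify(agent: Agent) -> str:
        """Insider: privileged and under internal control.
        External insider: privileged but not under internal control.
        Outsider: no access privileges at all."""
        if not agent.privileges:
            return "outsider"
        if INTERNAL_CONTROLS <= agent.controls:
            return "insider"
        return "external insider"

    employee = Agent("employee", {"CRM", "billing"}, INTERNAL_CONTROLS)
    provider = Agent("outsourced IT provider", {"billing"})  # access without internal control
    visitor = Agent("casual visitor")

    for agent in (employee, provider, visitor):
        print(f"{agent.name}: {classify(agent)}")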

    What’s behind the ag-data logo? An examination of voluntary agricultural-data codes of practice

    In this article, we analyse agricultural data (ag-data) codes of practice. After the introduction, Part II examines the emergence of ag-data codes of practice and provides two case studies (the American Farm Bureau's Privacy and Security Principles for Farm Data and New Zealand's Farm Data Code of Practice) that illustrate that the ultimate aims of ag-data codes of practice are inextricably linked to consent, disclosure, transparency and, ultimately, the building of trust. Part III highlights the commonalities and challenges of ag-data codes of practice. In Part IV several concluding observations are made. Most notably, while ag-data codes of practice may help change practices and convert complex details about ag-data contracts into something tangible, understandable and useable, it is important for agricultural industries not to hastily or uncritically accept or adopt ag-data codes of practice. There need to be clear objectives and a clear direction in which stakeholders want to take ag-data practices. In other words, stakeholders need to be sure about what they are trying, and able, to achieve with ag-data codes of practice. Ag-data codes of practice need credible administration, accreditation and monitoring. There also needs to be a way of reviewing and evaluating the codes that is more meaningful than simple metrics such as the number of members: for example, we need to know whether the codes raise awareness and education around data practices and, perhaps most importantly, whether they encourage changes in attitudes and behaviours around the access to and use of ag-data.

    Y2K Interruption: Can the Doomsday Scenario Be Averted?

    The management philosophy until recent years has been to replace workers with computers, which are available 24 hours a day, need no benefits or insurance, and never complain. But as the year 2000 approached, along with it came the fear of the millennium bug, generally known as Y2K, and the computers threatened to strike. Y2K, though an abbreviation of year 2000, generally refers to the computer glitches associated with that year. To save memory and money, computer companies adopted a voluntary standard at the beginning of the computer era whereby computers automatically convert any year designated by two digits, such as 99, into 1999 by prepending the digits 19. This saved an enormous amount of memory, and thus money, because large databases containing birth dates or other dates only needed to store the last two digits, such as 65 or 86. But it also created a built-in flaw that could make computers inoperable from January 2000. The problem is that most of these old computers are programmed to convert 00 (for the year 2000) into 1900, not 2000. Trouble could therefore arise when systems had to deal with dates outside the 1900s. In 2000, for example, a programme that calculates the age of a person born in 1965 will subtract 65 from 00 and get -65. The problem is most acute in mainframe systems, but that does not mean PCs, UNIX, and other computing environments are trouble-free. Any computer system that relies on date calculations must be tested, because the Y2K or millennium bug arises from a potential for “date discontinuity”, which occurs when the time expressed by a system, or any of its components, does not move in consonance with real time. Though attention has been focused on the potential problems linked with the change from 1999 to 2000, date discontinuity may occur at other times in and around this period.
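    The failure mode described above is easy to reproduce. The short Python sketch below is illustrative only (the function names are invented); it contrasts the age calculation done with two-digit years, as in the legacy systems discussed, with the same calculation done on full four-digit years.

    def age_two_digit(birth_yy: int, current_yy: int) -> int:
        """Age as computed by a legacy system that stores only two-digit years."""
        return current_yy - birth_yy        # 00 - 65 = -65 for someone born in 1965

    def age_four_digit(birth_year: int, current_year: int) -> int:
        """Age as computed with full four-digit years."""
        return current_year - birth_year    # 2000 - 1965 = 35

    print(age_two_digit(65, 0))        # -65: the "date discontinuity" the abstract describes
    print(age_four_digit(1965, 2000))  # 35: correct once years carry the century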