
    TCitySmartF: A comprehensive systematic framework for transforming cities into smart cities

    There is no shared, agreed-upon definition of a "smart city" (SC) and no single "best formula" for transforming every city into one. In a broad, inclusive sense, an SC can be described as an opportunistic concept that perpetually enhances harmony between people's lives and their urban environment by harnessing smart technology to enable a comfortable and convenient living ecosystem, paving the way towards smarter countries and a smarter planet. SCs are being implemented to bring governing bodies, organisations, institutions, citizens, the environment, and emerging technologies together in a highly synergistic, synchronised ecosystem, in order to increase quality of life (QoL) and enable a more sustainable urban future under growing natural resource constraints. In this study, we analyse how to develop citizen- and resource-centric smarter cities based on recent SC development initiatives and their successful use cases, future SC development plans, and a range of specific SC solutions. The main features of an SC are presented in a framework driven by recent technological advances and by particular city requirements and dynamics. This framework, TCitySmartF, 1) aims to establish a platform that seamlessly combines engineering and technology solutions with social dynamics in a new philosophical city automation concept, socio-technical transitions; 2) incorporates many evolving smart components, best practices, and contemporary solutions into a coherent, synergistic SC topology; 3) unfolds current and future opportunities for adopting smarter, safer and more sustainable urban environments; and 4) offers insights and orchestration directions to local governments and the private sector on how to transform cities into smarter cities from technological, social, economic and environmental points of view, in particular by putting residents and urban dynamics at the forefront of development through participatory planning and interaction for robust community- and citizen-tailored services. The framework developed in this paper is intended to be incorporated into real-world SC development projects in Lancashire, UK.

    Analyze business context data in developing economies using quantum computing

    Quantum computing is an advancing area of the computing sciences and provides a new foundation for many futuristic technologies. Discussing how it can help developing economies will also help developed economies with technology transfer and with economic development initiatives related to research and development in developing countries, thus providing a new channel for foreign direct investment (FDI) and business innovation for the majority of the globe that lacks the infrastructure, economic resources, and cyberinfrastructure required for growth in the technology landscape and in computing applications. Discussion of the areas in which quantum computing can provide support will further assist developing economies in implementing it to create growth opportunities for local systems and businesses.

    Alignment of Big Data Perceptions Across Levels in Healthcare: The case of New Zealand

    Big data and related technologies have the potential to transform healthcare sectors by facilitating improvements to healthcare planning and delivery. Big data research highlights the importance of aligning big data implementations with business needs to achieve success. In one of the first studies to examine the influence of big data on business-IT alignment in the healthcare sector, this paper addresses the question: how do stakeholders' perceptions of big data influence alignment between big data technologies and healthcare sector needs across macro, meso, and micro levels in the New Zealand (NZ) healthcare sector? A qualitative inquiry was conducted using semi-structured interviews to understand perceptions of big data across the NZ healthcare sector. An application of a novel theory, the Theory of Sociotechnical Representations (TSR), is used to examine people's perceptions of big data technologies and their applicability in their day-to-day work. These representations are analysed at each level and then across levels to evaluate the degree of alignment. A social-dimension lens on alignment was used to explore mutual understanding of big data across the sector. The findings show alignment across the sector through the shared understanding of the importance of data quality, the increasing challenges of privacy and security, and the importance of utilising modern and new data in measuring health outcomes. Areas of misalignment include the differing definitions of big data, as well as perceptions around data ownership, data sharing, use of patient-generated data and interoperability. Both practical and theoretical contributions of the study are discussed.

    Handbook of Learning Analytics (Second Edition)

    The utilization of learning analytics within K12 education has expanded over the last ten years. The availability of new software, both for administering school systems and for teaching, has increased, and with it the amount of data collected and analyzed. However, each country is on a different path with respect to how it navigates the availability of these new technologies and the data they produce. Given the heterogeneity of systems, practices and cultures, rather than attempting to comprehensively document learning analytics, the following chapter asked six scholars involved in K12 learning analytics research to document what they see as the key benefits and concerns associated with learning analytics within their country. The countries represented are China, Finland, South Africa, Uruguay and the United States of America. Although each is clearly different, a common theme emerges around the difficulties and dangers of moving education systems from data collection to data utilization.

    Adaptive Financial Regulation and RegTech: A Concept Article on Realistic Protection for Victims of Bank Failures

    Frustrated by the seeming inability of regulators and prosecutors to hold bank executives to account for losses inflicted by their companies before, during, and since the financial crisis of 2008, some scholars have suggested that private-attorney-general suits such as class action and shareholder derivative suits might achieve better results. While a few isolated suits might be successful in cases where there is provable fraud, such remedies are no general panacea for preventing large-scale bank-inflicted losses. Large losses are nearly always the result of unforeseeable or suddenly changing economic conditions, poor business judgment, or inadequate regulatory supervision—usually a combination of all three. Yet regulators face an increasingly complex task in supervising modern financial institutions. This Article explains how the challenge has become so difficult. It argues for preserving regulatory discretion rather than reducing it through formal congressional direction. The Article also asserts that regulators have to develop their own sophisticated methods of automated supervision. Although also not a panacea, the development of “RegTech” solutions will help clear away volumes of work that understaffed and underfunded regulators cannot keep up with. RegTech will not eliminate policy considerations, nor will it render regulatory decisions noncontroversial. Nevertheless, a sophisticated deployment of RegTech should help focus regulatory discretion and public-policy debate on the elements of regulation where choices really matter.

    The third generation of Pan-Canadian wetland map at 10 m resolution using multisource Earth observation data on cloud computing platform

    Development of the Canadian Wetland Inventory Map (CWIM) has thus far proceeded over two generations, reporting the extent and location of bog, fen, swamp, marsh, and water wetlands across the country with increasing accuracy. Each generation of the inventory has improved on the previous results by including additional reference wetland data and by focusing processing at the scale of ecozones, which represent ecologically distinct regions of Canada. The first and second generations attained relatively high accuracy, with an average approaching 86%, though some wetland extents were overestimated, particularly for the swamp class. The current research represents a third refinement of the inventory map. It was designed to improve the overall accuracy (OA) and reduce wetland overestimation by modifying the training and test data and by integrating additional environmental and remote sensing datasets, including countrywide coverage of L-band ALOS PALSAR-2, the SRTM and Arctic digital elevation models, and nighttime light, temperature, and precipitation data. Using a random forest classification within Google Earth Engine, the average OA obtained for the CWIM3 is 90.53%, an improvement of 4.77% over previous results. All ecozones experienced an OA increase of 2% or greater, and individual ecozone OA results range from 94% at the highest to 84% at the lowest. Visual inspection of the classification products demonstrates a reduction of wetland area overestimation compared to previous inventory generations. In this study, several classification scenarios were defined to assess the effect of preprocessing and the benefits of incorporating multisource data for large-scale wetland mapping. In addition, a confidence map was developed to help visualize where the current results are most and least reliable, given the amount of wetland training and test data and the extent of recent landscape disturbance (e.g., fire). The resulting OAs and wetland areal extents reveal the importance of multisource data and of adequate training and test data for wetland classification at a countrywide scale.
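
    As a rough illustration of the workflow described above, the sketch below shows a multisource random forest classification in the Google Earth Engine Python API. The dataset IDs, band selections, date ranges, region, and the training-point asset path are illustrative assumptions, not the CWIM3 configuration.

```python
# A minimal sketch (not the CWIM3 setup) of a multisource random forest
# wetland classification in Google Earth Engine's Python API. Dataset IDs,
# bands, dates, the region, and the asset path 'users/example/wetland_training'
# are illustrative assumptions.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([-80.0, 50.0, -75.0, 53.0])  # hypothetical subset of one ecozone

# Optical composite at 10 m (Sentinel-2 surface reflectance, summer median).
optical = (ee.ImageCollection('COPERNICUS/S2_SR')
           .filterBounds(region)
           .filterDate('2019-06-01', '2019-09-30')
           .median()
           .select(['B2', 'B3', 'B4', 'B8']))

# L-band SAR backscatter (ALOS PALSAR-2 yearly mosaic, HH/HV).
sar = (ee.Image(ee.ImageCollection('JAXA/ALOS/PALSAR/YEARLY/SAR')
                .filterDate('2019-01-01', '2019-12-31')
                .first())
       .select(['HH', 'HV']))

# Terrain predictor from SRTM elevation.
dem = ee.Image('USGS/SRTMGL1_003').select('elevation')

# Stack all predictors into a single multiband image.
predictors = optical.addBands(sar).addBands(dem).clip(region)

# Labelled reference points with an integer 'class' property
# (e.g., bog, fen, swamp, marsh, water, upland).
training_points = ee.FeatureCollection('users/example/wetland_training')

samples = predictors.sampleRegions(
    collection=training_points, properties=['class'], scale=10)

# Train a random forest and classify the predictor stack.
classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
              .train(features=samples, classProperty='class',
                     inputProperties=predictors.bandNames()))

wetland_map = predictors.classify(classifier)
```

    In the inventory itself, classification was run ecozone by ecozone with a much larger predictor stack (including nighttime light, temperature, and precipitation layers), which the sketch omits for brevity.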

    Predicting the Risk of Falling with Artificial Intelligence

    Background: Fall prevention is a major patient safety concern for all healthcare organizations. The high prevalence of patient falls has grave consequences, including the cost of care, longer hospital stays, unintentional injuries, and decreased patient and staff satisfaction. Preventing a patient from falling is critical to maintaining the patient's quality of life and averting high healthcare expenses. Local Problem: A two-hospital healthcare system saw a significant increase in inpatient falls. The fall rate is one of the nursing quality indicators, and fall reduction is a key performance indicator of high-quality patient care. Methods: This evidence-based quality improvement observational project compared the rate of falls (ROF) between the experimental and control units. Pearson's chi-square and Fisher's exact tests were used to analyze and compare the results. Qualtrics surveys evaluated the nurses' perception of AI, and the survey results were analyzed using the Mann-Whitney rank-sum test. Intervention: Implementing an artificial intelligence-assisted fall predictive analytics model that can timely and accurately predict fall risk can mitigate the increase in inpatient falls. Results: Results for the pilot unit were statistically significant (Pearson's chi-square, p < 0.001). Conclusions: AI-assisted automatic fall predictive risk assessment produced a significant reduction in the number of falls, the ROF, and the use of fall countermeasures. Further, nurses' perception of AI improved after the introduction of FPAT and presentation.
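
    For readers unfamiliar with the tests named in the Methods section, the sketch below shows how such comparisons are typically run in Python with SciPy. All counts and survey scores are made-up placeholders, not data from the project.

```python
# Hypothetical numbers only: a 2x2 table of fall vs. no-fall outcomes for the
# pilot and control units, plus made-up Likert-scale scores for nurses'
# perception of AI before and after the intervention.
from scipy import stats

#                  falls  no falls
contingency = [[    4,     496],   # pilot unit (AI-assisted fall prediction)
               [   18,     482]]   # control unit (standard fall risk assessment)

chi2, p_chi2, dof, expected = stats.chi2_contingency(contingency)
odds_ratio, p_fisher = stats.fisher_exact(contingency)  # preferred when cell counts are small

# Nurses' perception-of-AI survey scores (1-5 Likert), compared with Mann-Whitney.
pre_scores = [2, 3, 3, 4, 2, 3, 4, 3]
post_scores = [4, 4, 5, 4, 3, 5, 4, 4]
u_stat, p_mw = stats.mannwhitneyu(pre_scores, post_scores, alternative='two-sided')

print(f"chi-square p={p_chi2:.4f}, Fisher p={p_fisher:.4f}, Mann-Whitney p={p_mw:.4f}")
```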

    Reining in the Big Promise of Big Data: Transparency, Inequality, and New Regulatory Frontiers

    The growing differentiation of services based on Big Data harbors the potential for both greater societal inequality and for greater equality. Anti-discrimination law and transparency alone, however, cannot do the job of curbing Big Data’s negative externalities while fostering its positive effects. To rein in Big Data’s potential, we adapt regulatory strategies from behavioral economics, contracts and criminal law theory. Four instruments stand out: First, active choice may be mandated between data-collecting services (paid by data) and data-free services (paid by money). Our suggestion provides concrete estimates for the price range of a data-free option, sheds new light on the monetization of data-collecting services, and proposes an “inverse predatory pricing” instrument to limit excessive pricing of the data-free option. Second, we propose using the doctrine of unconscionability to prevent contracts that unreasonably favor data-collecting companies. Third, we suggest democratizing data collection by regular user surveys and data compliance officers partially elected by users. Finally, we trace back new Big Data personalization techniques to the old Hartian precept of treating like cases alike and different cases – differently. If it is true that a speeding ticket over $50 is less of a disutility for a millionaire than for a welfare recipient, the income and wealth-responsive fines powered by Big Data that we suggest offer a glimpse into the future of the mitigation of economic and legal inequality by personalized law. Throughout these different strategies, we show how salience of data collection can be coupled with attempts to prevent discrimination and exploitation of users. Finally, we discuss all four proposals in the context of different test cases: social media, student education software, and credit and cell phone markets. Many more examples could and should be discussed. In the face of increasing unease about the asymmetry of power between Big Data collectors and dispersed users, about differential legal treatment, and about the unprecedented dimensions of economic inequality, this paper proposes a new regulatory framework and research agenda to put the powerful engine of Big Data to the benefit of both the individual and societies adhering to basic notions of equality and non-discrimination.
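
    The speeding-ticket example points to day-fine style, income-responsive penalties. The short sketch below, with hypothetical figures and an assumed scaling rule, only illustrates the arithmetic of expressing a fine in units of daily income; it is not the fining model the article proposes.

```python
# A toy illustration of the day-fine idea behind income-responsive sanctions:
# the penalty is expressed in units of daily income rather than as a flat amount.
# The five-unit scale and the $50 floor are arbitrary assumptions.
def personalized_fine(annual_income: float, day_units: float = 5.0,
                      floor: float = 50.0) -> float:
    """Scale a penalty to the offender's daily income, with a fixed minimum."""
    daily_income = annual_income / 365.0
    return max(floor, day_units * daily_income)

# The same offence costs roughly $250 at an $18,000 income but about $13,700
# at a $1,000,000 income, so the disutility is closer to equal across incomes.
print(round(personalized_fine(18_000), 2))     # 246.58
print(round(personalized_fine(1_000_000), 2))  # 13698.63
```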