
    Reinforcement machine learning for predictive analytics in smart cities

    The digitization of our lives causes a shift in data production as well as in the required data management. Numerous nodes are capable of producing huge volumes of data in our everyday activities. Sensors, personal smart devices and the Internet of Things (IoT) paradigm lead to a vast infrastructure that covers all aspects of activity in modern societies. In most cases, the critical issue for public authorities (usually local ones, such as municipalities) is the efficient management of data towards the support of novel services. The reason is that analytics provided on top of the collected data could help in the delivery of new applications that facilitate citizens’ lives. However, the provision of analytics demands intelligent techniques for the underlying data management. The best-known technique is to separate huge volumes of data into a number of parts and manage them in parallel, limiting the time required for the delivery of analytics. Analytics requests, in the form of queries, can then be realized to derive the knowledge necessary for supporting intelligent applications. In this paper, we define the concept of a Query Controller (QC) that receives queries for analytics and assigns each of them to a processor placed in front of each data partition. We discuss an intelligent process for query assignments that adopts Machine Learning (ML). We adopt two learning schemes, i.e., Reinforcement Learning (RL) and clustering. We report on the comparison of the two schemes and elaborate on their combination. Our aim is to provide an efficient framework to support the decision making of the QC, which should swiftly select the appropriate processor for each query. We provide mathematical formulations for the discussed problem and present simulation results. Through a comprehensive experimental evaluation, we reveal the advantages of the proposed models and describe the outcomes while comparing them with a deterministic framework.
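
    A minimal sketch of the assignment idea follows, assuming a simple epsilon-greedy reinforcement-learning rule over observed response times; the class and parameter names are illustrative and not taken from the paper, and only the RL side (not the clustering scheme) is shown.

```python
# Illustrative sketch (not the paper's implementation): a Query Controller that
# assigns incoming analytics queries to per-partition processors using an
# epsilon-greedy reinforcement-learning rule over observed response times.
import random


class QueryController:
    def __init__(self, n_processors, epsilon=0.1):
        self.epsilon = epsilon
        # Running estimate of each processor's average response time (the "cost").
        self.estimates = [0.0] * n_processors
        self.counts = [0] * n_processors

    def assign(self, query):
        """Pick a processor for `query`: explore at random with probability epsilon,
        otherwise exploit the processor with the lowest estimated response time."""
        if random.random() < self.epsilon:
            return random.randrange(len(self.estimates))
        return min(range(len(self.estimates)), key=lambda i: self.estimates[i])

    def update(self, processor, observed_time):
        """Incrementally update the chosen processor's response-time estimate."""
        self.counts[processor] += 1
        step = 1.0 / self.counts[processor]
        self.estimates[processor] += step * (observed_time - self.estimates[processor])


if __name__ == "__main__":
    # Hypothetical environment: 4 partitions whose processors have different latencies.
    true_latency = [0.8, 0.5, 1.2, 0.6]
    qc = QueryController(n_processors=4)
    for q in range(1000):
        p = qc.assign(q)
        observed = random.gauss(true_latency[p], 0.1)
        qc.update(p, observed)
    print("Learned response-time estimates:", [round(e, 2) for e in qc.estimates])
```

    Untried processors start with an optimistic estimate of zero, so they are exploited at least once before the controller settles on the fastest partition.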

    Big data for monitoring educational systems

    This report considers “how advances in big data are likely to transform the context and methodology of monitoring educational systems within a long-term perspective (10-30 years) and impact the evidence based policy development in the sector”. Here, big data are “large amounts of different types of data produced with high velocity from a high number of various types of sources.” Five independent experts were commissioned by Ecorys, responding to the themes of students’ privacy, educational equity and efficiency, student tracking, assessment and skills. The experts were asked to consider the “macro perspective on governance on educational systems at all levels from primary, secondary education and tertiary – the latter covering all aspects of tertiary from further, to higher, and to VET”, prioritising the primary and secondary levels of education.

    Smart Cities: Towards a New Citizenship Regime? A Discourse Analysis of the British Smart City Standard

    Growing practice interest in smart cities has led to calls for a less technology-oriented and more citizen-centric approach. In response, this article investigates the mode of citizenship promulgated by the smart city standard of the British Standards Institution. The analysis uses the concept of a citizenship regime and a mixture of quantitative and qualitative methods to discern the key discursive frames defining the smart city and the particular citizenship dimensions brought into play. The results confirm an explicit citizenship rationale guiding the smart city (standard), although it displays some substantive shortcomings and contradictions. The article concludes with recommendations for further theory and practice development.

    Smart Asset Management for Electric Utilities: Big Data and Future

    This paper discusses future challenges in terms of big data and new technologies. Utilities have been collecting data in large amounts, but it is hardly utilized because of its sheer volume and the uncertainty associated with it. Condition monitoring of assets collects large amounts of data during daily operations. The question arises: "How can information be extracted from large chunks of data?" The concept of "rich data and poor information" is being challenged by big data analytics with the advent of machine learning techniques. Along with technological advancements such as the Internet of Things (IoT), big data analytics will play an important role for electric utilities. In this paper, these challenges are addressed through pathways and guidelines to make current asset management practices smarter for the future.
    Comment: 13 pages, 3 figures, Proceedings of 12th World Congress on Engineering Asset Management (WCEAM) 201
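
    As a hedged illustration (not the paper's method) of extracting information from condition-monitoring data, the sketch below flags sensor readings that deviate strongly from an asset's normal operating baseline; all names and values are hypothetical.

```python
# Minimal sketch: turn raw condition-monitoring readings into information by
# flagging measurements whose z-score exceeds a chosen threshold.
from statistics import mean, stdev


def flag_anomalies(readings, z_threshold=3.0):
    """Return indices of readings whose z-score exceeds the threshold."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(readings) if abs(x - mu) / sigma > z_threshold]


if __name__ == "__main__":
    # Hypothetical transformer-temperature trace with one suspicious spike.
    temperatures = [61.2, 60.8, 61.5, 60.9, 61.1, 79.4, 61.0, 61.3]
    print("Anomalous sample indices:", flag_anomalies(temperatures, z_threshold=2.0))
```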

    Adding Value to Statistics in the Data Revolution Age

    Like many statistical offices, and in accordance with the European Statistical System commitment to Vision 2020, Istat has since the second half of 2014 implemented an internal standardisation and industrialisation process within the framework of a common Business Architecture. Istat’s modernisation programme aims at building services and infrastructures within a plug-and-play framework to foster innovation, promote reuse and move towards full integration and interoperability of statistical processes, consistent with a service-oriented architecture. This is expected to lead to higher effectiveness and productivity by improving the quality of statistical information and reducing the response burden. This paper addresses the strategy adopted by Istat, which is focused on exploiting administrative data and new data sources in order to achieve its key goals and enhance value to users. The strategy is based on priorities that consider services centred on users and stakeholders, as well as Linked Open Data, to allow machine-to-machine data and metadata integration through the definition of common statistical ontologies and semantics.
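
    The Linked Open Data direction can be illustrated with a small, hypothetical sketch: publishing a dataset's metadata with shared vocabularies (DCAT, Dublin Core) so that other systems can integrate it machine-to-machine. The URIs and titles below are invented for illustration, and the example assumes the third-party rdflib package.

```python
# Hedged sketch of machine-to-machine metadata integration via shared
# vocabularies; the dataset URI and literals are illustrative only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS

DCAT = Namespace("http://www.w3.org/ns/dcat#")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

dataset = URIRef("http://example.org/statistics/population-2014")  # hypothetical URI
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("Resident population, 2014")))
g.add((dataset, DCTERMS.publisher, Literal("Istat")))
g.add((dataset, DCAT.keyword, Literal("demography")))

# Serialising as Turtle yields metadata that any Linked-Data-aware client can consume.
print(g.serialize(format="turtle"))
```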

    A Proposal for Supply Chain Management Research That Matters: Sixteen High Priority Research Projects for the Future

    On May 4th, 2016 in Milton, Ontario, the World Class Supply Chain 2016 Summit was held in partnership between CN Rail and Wilfrid Laurier University’s Lazaridis School of Business & Economics to realize an ambitious goal: raise knowledge of contemporary supply chain management (SCM) issues through genuine peer-to-peer dialogue among practitioners and scholars. A principal element of that knowledge is an answer to the question: to gain valid and reliable insights for attaining SCM excellence, what issues must be researched further? This White Paper, the second of the summit’s two White Papers, addresses the question by proposing a research agenda comprising 16 research projects. The agenda covers: (1) the current state of research knowledge on issues that are of the highest priority to today’s SCM professionals; (2) important gaps in current research knowledge and, consequently, the major questions that should be answered in sixteen future research projects aimed at addressing those gaps; and (3) ways in which the research projects can be incorporated into student training and be supported by Canada’s major research funding agencies. That content comes from using the summit’s deliberations to guide systematic reviews of both the SCM research literature and the Canadian institutional mechanisms that are geared towards building knowledge through research. The major conclusions from those reviews can be summarized as follows: (1) while the research literature to date has yielded useful insights to inform the pursuit of SCM excellence, several research questions of immense practical importance remain unanswered or, at best, inadequately answered; (2) the body of research required to answer those questions will have to focus on what the summit’s first White Paper presented as four highly impactful levers that SCM executives must expertly handle to attain excellence: collaboration, information, technology, and talent; and (3) the proposed research agenda can be pursued in ways that achieve the two inter-related goals of creating new actionable knowledge and building the capacity of today’s students to become tomorrow’s practitioners and contributors to ongoing knowledge growth in the SCM field. This White Paper’s details underlying these conclusions build on the information presented in the summit’s first White Paper. That is, while the first White Paper (White Paper 1) identified general SCM themes for which the research needs are most urgent, this White Paper goes further along the path of industry-academia knowledge co-creation. It does so by examining and articulating those needs against the backdrop of available research findings, translating the needs into specific research projects that should be pursued, and providing guidelines for how those projects can be carried out.

    Regulating Data as Property: A New Construct for Moving Forward

    The global community urgently needs precise, clear rules that define ownership of data and express the attendant rights to license, transfer, use, modify, and destroy digital information assets. In response, this article proposes a new approach for regulating data as an entirely new class of property. Recently, European and Asian public officials and industries have called for data ownership principles to be developed, above and beyond current privacy and data protection laws. In addition, official policy guidances and legal proposals have been published that offer to accelerate realization of a property rights structure for digital information. But how can ownership of digital information be achieved? How can those rights be transferred and enforced? Those calls for data ownership emphasize the impact of ownership on the automotive industry and the vast quantities of operational data which smart automobiles and self-driving vehicles will produce. We looked at how, if at all, the issue is being considered in automakers’ consumer-facing statements addressing the data collected by their vehicles. To formulate our proposal, we also considered continued advances in scientific research, quantum mechanics, and quantum computing which confirm that information in any digital or electronic medium is, and always has been, physical, tangible matter. Yet, to date, data regulation has sought to adapt legal constructs for “intangible” intellectual property or to express a series of permissions and constraints tied to specific classifications of data (such as personally identifiable information). We examined legal reforms that were recently approved by the United Nations Commission on International Trade Law to enable transactions involving electronic transferable records, as well as prior reforms adopted in the United States Uniform Commercial Code and Federal law to enable similar transactions involving digital records that were, historically, physical assets (such as promissory notes or chattel paper). Finally, we surveyed prior academic scholarship in the U.S. and Europe to determine whether the physical attributes of digital data had been previously considered in the vigorous debates on how to regulate personal information, or the extent, if any, to which the solutions developed for transferable records had been considered for larger classes of digital assets. Based on the preceding, we propose that regulation of digital information assets, and clear concepts of ownership, can be built on existing legal constructs that have enabled electronic commercial practices. We propose a property rules construct under which a right to own digital information clearly arises upon creation (whether by keystroke or machine), and suggest when and how that right attaches to specific data through the exercise of technological controls. This construct will enable faster, better adaptations of new rules for the ever-evolving portfolio of data assets being created around the world. This approach will also create more predictable, scalable, and extensible mechanisms for regulating data and is consistent with, and may improve the exercise and enforcement of, rights regarding personal information. We conclude by highlighting existing technologies and their potential to support this construct, and by beginning an inventory of the steps necessary to carry this process forward.

    Governance in the age of social machines: the web observatory

    The World Wide Web has provided unprecedented access to information; as humans and machines increasingly interact with it, they provide more and more data. The challenge is how to analyse and interpret this data within the context that it was created, and to present it in a way that both researchers and practitioners can more easily make sense of. The first step is to have access to open and interoperable data sets, which governments around the world are increasingly subscribing to. But having ‘open’ data is just the beginning and does not necessarily lead to better decision making or policy development. This is because data do not provide the answers – they need to be analysed, interpreted and understood within the context of their creation, and the business imperative of the organisation using them. The major corporate entities, such as Google, Amazon, Microsoft, Apple and Facebook, have the capabilities to do this, but are driven by their own commercial imperatives, and their data are largely siloed and held within ‘walled gardens’ of information. All too often governments and non-profit groups lack these capabilities, and are driven by very different mandates. In addition, they have far more complex community relationships, and must abide by regulatory constraints which dictate how they can use the data they hold. As such they struggle to maximise the value of this emerging ‘digital currency’ and are therefore largely beholden to commercial vendors. What has emerged is a public-private data ecosystem that has huge policy implications (including the twin challenges of privacy and security). Many within the public sector lack the skills to address these challenges because they lack the literacy required within the digital context. This project seeks to address some of these problems by bringing together a safe and secure Australian-based data platform (facilitating the sharing of data, analytics and visualisation) with policy analysis and governance expertise in order to create a collaborative working model of a ‘Government Web Observatory’. This neutral space, hosted by an Australian university, can serve as a powerful complement to existing Open Data initiatives in Australia, and enable research and education to combine to support the development of a more digitally literate public service. The project aims to explore where, and in which contexts, people, things, data and the Internet meet and result in evolving observable phenomena which can inform better government policy development and service delivery.

    Ethical Reflections of Human Brain Research and Smart Information Systems

    This case study explores ethical issues that relate to the use of Smart Information Systems (SIS) in human brain research. The case study is based on the Human Brain Project (HBP), a European Union funded project that uses SIS to build a research infrastructure aimed at the advancement of neuroscience, medicine and computing. The case study was conducted to assess how the HBP recognises and deals with ethical concerns relating to the use of SIS in human brain research. To understand some of the ethical implications of using SIS in human brain research, data was collected through a document review and three semi-structured interviews with participants from the HBP. Results from the case study indicate that the main ethical concerns with the use of SIS in human brain research include privacy and confidentiality, the security of personal data, discrimination arising from bias, and access to the SIS and their outcomes. Furthermore, there is an issue with the transparency of the processes involved in human brain research. In response to these issues, the HBP has put in place different mechanisms to ensure responsible research and innovation through a dedicated program. The paper provides lessons for the responsible implementation of SIS in research, including human brain research, and extends some of the mechanisms that could be employed by researchers and developers of SIS for research in addressing such issues.