111 research outputs found

    Estimation of 5G Core and RAN End-to-End Delay through Gaussian Mixture Models

    Funding: Fundação para a Ciência e Tecnologia (FCT), projects 2022.08786.PTDC and UIDB/50008/2020. © 2022 by the authors.
    Network analytics provide a comprehensive picture of the network's Quality of Service (QoS), including the End-to-End (E2E) delay. In this paper, we characterize the Core and Radio Access Network (RAN) E2E delay of 5G networks with Standalone (SA) and Non-Standalone (NSA) topologies when no single known Probability Density Function (PDF) is suitable to model its distribution. To this end, multiple PDFs, denominated components, are combined in a Gaussian Mixture Model (GMM) to represent the distribution of the E2E delay. The accuracy and computation time of the GMM are evaluated for different numbers of components and samples. The results presented in the paper are based on a dataset of E2E delay values sampled from both SA and NSA 5G networks. Finally, we show that the GMM can be adopted to estimate the high diversity of E2E delay patterns found in 5G networks, and that its computation time is adequate for a large range of applications.
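    The abstract describes fitting a GMM to E2E delay samples and trading off the number of components against fit quality and computation time. A minimal sketch of that idea, assuming scikit-learn and synthetic bimodal delay data in place of the paper's (non-public) 5G dataset:

```python
# Minimal sketch, not the paper's implementation: fit GMMs with an
# increasing number of components to synthetic E2E delay samples and
# compare fit quality via the Bayesian Information Criterion (BIC).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical E2E delay samples (ms): a bimodal mixture, e.g. a fast
# Core-network mode around 2 ms and a slower RAN mode around 8 ms.
delays = np.concatenate([
    rng.normal(2.0, 0.3, 5000),
    rng.normal(8.0, 1.5, 5000),
]).reshape(-1, 1)

# One common model-selection rule: pick the component count with lowest BIC.
for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(delays)
    print(f"components={k}  BIC={gmm.bic(delays):.1f}")

# The fitted mixture is a PDF estimate: a weighted sum of Gaussian components.
best = GaussianMixture(n_components=2, random_state=0).fit(delays)
print("weights:", best.weights_.round(3))
print("means (ms):", best.means_.ravel().round(3))
```

    In the paper's setting, the number of components and the number of samples would also be varied to measure the accuracy/computation-time trade-off the abstract mentions.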

    The Federal Big Data Research and Development Strategic Plan

    This document was developed through the contributions of the NITRD Big Data SSG members and staff. A special thanks and appreciation to the core team of editors, writers, and reviewers: Lida Beninson (NSF), Quincy Brown (NSF), Elizabeth Burrows (NSF), Dana Hunter (NSF), Craig Jolley (USAID), Meredith Lee (DHS), Nishal Mohan (NSF), Chloe Poston (NSF), Renata Rawlings-Goss (NSF), Carly Robinson (DOE Science), Alejandro Suarez (NSF), Martin Wiener (NSF), and Fen Zhao (NSF).

    A national Big Data innovation ecosystem is essential to enabling knowledge discovery from, and confident action informed by, the vast resource of new and diverse datasets that are rapidly becoming available in nearly every aspect of life. Big Data has the potential to radically improve the lives of all Americans. It is now possible to combine disparate, dynamic, and distributed datasets and enable everything from predicting the future behavior of complex systems to precise medical treatments, smart energy usage, and focused educational curricula. Government agency research and public-private partnerships, together with the education and training of future data scientists, will enable applications that directly benefit society and the economy of the Nation.

    To derive the greatest benefits from the many rich sources of Big Data, the Administration announced a "Big Data Research and Development Initiative" on March 29, 2012. Dr. John P. Holdren, Assistant to the President for Science and Technology and Director of the Office of Science and Technology Policy, stated that the initiative "promises to transform our ability to use Big Data for scientific discovery, environmental and biomedical research, education, and national security."

    The Federal Big Data Research and Development Strategic Plan (Plan) builds upon the promise and excitement of the myriad applications enabled by Big Data, with the objective of guiding Federal agencies as they develop and expand their individual mission-driven programs and investments related to Big Data. The Plan is based on inputs from a series of Federal agency and public activities, and a shared vision: We envision a Big Data innovation ecosystem in which the ability to analyze, extract information from, and make decisions and discoveries based upon large, diverse, and real-time datasets enables new capabilities for Federal agencies and the Nation at large; accelerates the process of scientific discovery and innovation; leads to new fields of research and new areas of inquiry that would otherwise be impossible; educates the next generation of 21st-century scientists and engineers; and promotes new economic growth.

    The Plan is built around seven strategies that represent key areas of importance for Big Data research and development (R&D). Priorities listed within each strategy highlight the intended outcomes that can be addressed by the missions and research funding of NITRD agencies. These include advancing human understanding in all branches of science, medicine, and security; ensuring the Nation's continued leadership in research and development; and enhancing the Nation's ability to address pressing societal and environmental issues facing the Nation and the world through research and development.

    IoT technologies for livestock management: A review of present status, opportunities, and future trends

    The world population currently stands at about 7 billion, amid an expected increase to 9.4 billion in 2030 and around 10 billion in 2050. This burgeoning population continues to drive the rising demand for animal-derived food. Moreover, the management of finite resources such as land, the need to reduce livestock's contribution to greenhouse gases, and the need to manage the inherently complex, highly contextual, and repetitive day-to-day livestock management (LsM) routines are some examples of challenges to overcome in livestock production. The usefulness of the Internet of Things (IoT) in other vertical industries (OVI) suggests that its role in LsM will be significant. This work uses the systematic review methodology of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) to guide a review of the existing literature on IoT in OVI. The goal is to identify the IoT ecosystem, architecture, and technicalities (present status, opportunities, and expected future trends) regarding its role in LsM. Among the identified IoT roles in LsM, the authors found that data will be its main contributor. The traditional approach of reactive data processing will give way to the proactive approach of augmented analytics, providing insights about animal processes. This will undoubtedly free LsM from the drudgery of repetitive tasks, with opportunities for improved productivity.
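    The review argues for proactive analytics on sensor data rather than reactive processing. As an illustration only (the review prescribes no algorithm), here is a rolling z-score alert on simulated body-temperature readings from a hypothetical collar sensor:

```python
# Illustrative sketch: flag an anomaly in a livestock sensor stream before
# a routine manual check would catch it. Data and thresholds are invented.
import numpy as np

rng = np.random.default_rng(1)
temps = 38.6 + rng.normal(0, 0.15, 288)   # 24 h of 5-minute readings (deg C)
temps[250:] += 1.2                         # simulated fever onset

window = 48                                # 4 h baseline window
for t in range(window, len(temps)):
    baseline = temps[t - window:t]
    z = (temps[t] - baseline.mean()) / baseline.std()
    if z > 3.0:                            # reading far outside recent baseline
        print(f"alert at sample {t}: temp={temps[t]:.2f} C, z={z:.1f}")
        break
```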

    A study of learner experience design and learning efficacy of mobile microlearning in journalism education

    With the increasing number of mobile technologies, people rely on smartphones to connect with the world and obtain news and information. The emergent use of mobile technologies changes the way journalists produce and disseminate news. It is important for journalism educators to know how to support journalists' development of digital skills, particularly skills with mobile technologies, and to understand which new forms of learning are suitable and feasible for learners in the journalism sector. Previous research has shown that mobile microlearning (MML) can be a promising approach for specific learning needs: in essence, lessons of no more than five minutes delivered on a smartphone. However, there is little evidence on the design and effects of MML in the context of journalism education research. Hence, this dissertation examines whether MML can be a useful approach to facilitate mobile journalists' learning of digital skills with smartphones. Adapting a sociotechnical-pedagogical learner experience framework with a user-centered design process, a four-phase formative research cycle was conducted: Phase 1, a systematic literature review of mobile microlearning (Study 1); Phase 2, a needs assessment of mobile journalists' learning needs and requirements (Study 2); Phase 3, an iterative design and development of a mobile microcourse and a study of its usability and user experience (Study 3); and Phase 4, an examination of the learning efficacy (i.e., effectiveness, efficiency, and appeal) and learner experience of the developed mobile microcourse (Study 4). A mixed-methods data collection and analysis approach was applied throughout. The results provide evidence-based findings and indicate that MML is a feasible and effective approach to support mobile journalists' just-in-time learning when the MML design follows four sequential design principles, sketched below: (a) an aha moment that helps learners connect their previous experiences to the importance of the current learning topic, (b) interactive content, (c) short exercises, and (d) instant automated feedback. Lastly, the dissertation discusses the results and addresses insights and implications of MML design for improving learner experience and learning efficacy.
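    A hypothetical encoding of those four sequential design principles as a data structure; the field names and example content are illustrative, not taken from the dissertation:

```python
# Illustrative only: one way to structure a <=5-minute smartphone lesson
# around the four MML design principles the dissertation reports.
from dataclasses import dataclass, field

@dataclass
class MicroLesson:
    aha_moment: str            # (a) hook linking prior experience to the topic
    interactive_content: str   # (b) e.g. a tappable demo or short video
    exercises: list[str] = field(default_factory=list)  # (c) short practice items
    feedback_rule: str = "auto-grade on submit"         # (d) instant feedback

lesson = MicroLesson(
    aha_moment="Your last story was shot horizontally. Why did it crop badly on phones?",
    interactive_content="Swipe demo: 16:9 vs 9:16 framing",
    exercises=["Reframe the sample clip for mobile viewing"],
)
print(lesson.aha_moment)
```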

    Prediction and preemptive control of network congestion in distributed real-time environment

    Due to the ever-increasing demand for network capacity, the congestion problem is growing. Congestion results in queuing within the network, packet loss, and increased delays. It should be controlled to increase system throughput and quality of service. Existing congestion control approaches, such as source throttling and re-routing, focus on controlling congestion after it has already happened. However, it is much more desirable to predict future congestion from the current state and historical data, so that efficient control techniques can be applied to prevent congestion from happening. We have proposed a Neural Network Prediction-based Routing (NNPR) protocol to predict as well as control network traffic in a distributed real-time environment. A distributed real-time transaction processing simulator (DRTTPS) has been used as the test-bed. For predictions, a multi-step neural network model was developed in SPSS Modeler, which predicts future congestion. The ADAPA (Adaptive Decision and Predictive Analytics) scoring engine was used for real-time scoring: an ADAPA wrapper calls the prediction model through web services and predicts congestion in real time. Once predicted results are obtained, messages are re-routed to prevent congestion. To compare the proposed work with existing techniques, two routing protocols were also implemented: Dijkstra's Shortest Path (DSP) and the Routing Information Protocol (RIP). The main metric used to analyze the performance of the protocol is the percentage of transactions that complete before their deadline. The NNPR protocol was analyzed with various simulation runs having parameters both inside and outside the neural network's training range. Various parameters that can cause congestion were studied, including bandwidth, worksize, latency, maximum active transactions, mean arrival time, and update percentage. Through experimentation, it is observed that NNPR consistently outperforms DSP and RIP for all congestion loads.
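    A minimal sketch of the predict-then-reroute loop, standing in for the SPSS Modeler/ADAPA pipeline described above (which cannot be reproduced here); the synthetic data, the scikit-learn MLP, and the threshold are all assumptions:

```python
# Sketch only: a small neural network predicts near-future load from the six
# parameters the thesis lists; a threshold triggers preemptive re-routing.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic training data: columns are bandwidth, worksize, latency,
# max active transactions, mean arrival time, update percentage.
X = rng.uniform(0, 1, (2000, 6))
# Hypothetical ground truth: load grows with worksize and active
# transactions, shrinks with bandwidth and arrival spacing.
y = (0.5 * X[:, 1] + 0.4 * X[:, 3] - 0.3 * X[:, 0] - 0.2 * X[:, 4]
     + rng.normal(0, 0.05, 2000))

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                     random_state=0).fit(X, y)

def route(sample, threshold=0.4):
    """Re-route preemptively when predicted load exceeds the threshold."""
    predicted = model.predict(sample.reshape(1, -1))[0]
    return "re-route" if predicted > threshold else "shortest path"

# High worksize and many active transactions -> likely "re-route".
print(route(np.array([0.2, 0.9, 0.5, 0.9, 0.1, 0.5])))
```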

    Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions

    Machine learning is expected to fuel significant improvements in medical care. To ensure that fundamental principles such as beneficence, respect for human autonomy, prevention of harm, justice, privacy, and transparency are respected, medical machine learning systems must be developed responsibly. Many high-level declarations of ethical principles have been put forth for this purpose, but there is a severe lack of technical guidelines explicating the practical consequences for medical machine learning. Similarly, there is currently considerable uncertainty regarding the exact regulatory requirements placed upon medical machine learning systems. This survey provides an overview of the technical and procedural challenges involved in creating medical machine learning systems responsibly and in conformity with existing regulations, as well as possible solutions to address these challenges. First, a brief review of existing regulations affecting medical machine learning is provided, showing that properties such as safety, robustness, reliability, privacy, security, transparency, explainability, and nondiscrimination are all demanded already by existing law and regulations - albeit, in many cases, to an uncertain degree. Next, the key technical obstacles to achieving these desirable properties are discussed, as well as important techniques to overcome these obstacles in the medical context. We notice that distribution shift, spurious correlations, model underspecification, uncertainty quantification, and data scarcity represent severe challenges in the medical context. Promising solution approaches include the use of large and representative datasets and federated learning as a means to that end, the careful exploitation of domain knowledge, the use of inherently transparent models, comprehensive out-of-distribution model testing and verification, as well as algorithmic impact assessments
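    Among the solution families the survey names is out-of-distribution (OOD) model testing. A minimal sketch of one standard OOD flag, Mahalanobis distance to the training distribution (a common technique, not one the survey mandates), with synthetic stand-in features:

```python
# Sketch only: refuse to trust a prediction when the input lies far from the
# training distribution. Features and threshold are synthetic stand-ins,
# not a validated medical procedure.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical in-distribution feature vectors (e.g. from a feature extractor).
X_train = rng.normal(0, 1, (1000, 4))

mu = X_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Calibrate the threshold on in-distribution data (99th percentile).
threshold = np.percentile([mahalanobis(x) for x in X_train], 99)

def flag_ood(x):
    """Defer to a human expert when the input looks out-of-distribution."""
    return mahalanobis(x) > threshold

print(flag_ood(rng.normal(0, 1, 4)))  # typical input -> usually False
print(flag_ood(np.full(4, 8.0)))      # far from training data -> True
```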

    Analytics and Intelligence for Smart Manufacturing

    Digital transformation is one of the main aspects that has emerged from the current Industry 4.0 revolution. It embraces the integration of the digital and physical environments, including the application of modelling and simulation techniques, visualization, and data analytics to manage the overall product life cycle.