12,336 research outputs found
Challenges in the Design and Implementation of IoT Testbeds in Smart-Cities: A Systematic Review
Advancements in wireless communication and the increased accessibility to low-cost sensing and data processing IoT technologies have increased the research and development of urban monitoring systems. Most smart city research projects rely on deploying proprietary IoT testbeds for indoor and outdoor data collection. Such testbeds typically rely on a three-tier architecture composed of the Endpoint, the Edge, and the Cloud. Managing the system's operation whilst considering the security and privacy challenges that emerge, such as data privacy controls, network security, and security updates on the devices, is challenging. This work presents a systematic study of the challenges of developing, deploying, and managing urban monitoring testbeds, as experienced in a series of urban monitoring research projects, followed by an analysis of the relevant literature. By identifying the challenges in the various projects and organising them under the V-model development lifecycle levels, we provide a reference guide for future projects. Understanding these challenges early on will help current and future smart-city IoT research projects reduce implementation time and deliver secure and resilient testbeds.
Birth of dairy 4.0: opportunities and challenges in adoption of fourth industrial revolution technologies in the production of milk and its derivatives
Embracing innovation and emerging technologies is becoming increasingly important to address the current global challenges facing many food industry sectors, including the dairy industry. Growing literature shows that the adoption of technologies of the fourth industrial revolution (named Industry 4.0) has promising potential to bring about breakthroughs and new insights and unlock advancement opportunities in many areas of the food manufacturing sector. This article discusses the current knowledge and recent trends and progress on the application of Industry 4.0 innovations in the dairy industry. First, the “Dairy 4.0” concept, inspired by Industry 4.0, is introduced and its enabling technologies are determined. Second, relevant examples of the use of Dairy 4.0 technologies in milk and its derived products are presented. Finally, conclusions and future perspectives are given. The results revealed that robotics, 3D printing, Artificial Intelligence, the Internet of Things, Big Data, and blockchain are the main enabling technologies of Dairy 4.0. These advanced technologies are being progressively adopted in the dairy sector, from farm to table, making significant and profound changes in the production of milk, cheese, and other dairy products. It is expected that, in the near future, new digital innovations will emerge and greater implementation of Dairy 4.0 technologies will be achieved, leading to more automation and optimization of this dynamic food sector.
Guest editorial: special issue on recent advances in security and privacy for 6G networks
No abstract available
The State of the Art in Deep Learning Applications, Challenges, and Future Prospects: A Comprehensive Review of Flood Forecasting and Management
Floods are a devastating natural calamity that may seriously harm both infrastructure and people. Accurate flood forecasts and control are essential to lessen these effects and safeguard populations. By utilizing its capacity to handle massive amounts of data and provide accurate forecasts, deep learning has emerged as a potent tool for improving flood prediction and control. The current state of deep learning applications in flood forecasting and management is thoroughly reviewed in this work. The review discusses a variety of subjects, such as the data sources utilized, the deep learning models used, and the assessment measures adopted to judge their efficacy. It assesses current approaches critically and points out their advantages and disadvantages. The article also examines challenges with data accessibility, the interpretability of deep learning models, and ethical considerations in flood prediction. The report also describes potential directions for deep learning research to enhance flood predictions and control. Incorporating uncertainty estimates into forecasts, integrating many data sources, developing hybrid models that mix deep learning with other methodologies, and enhancing the interpretability of deep learning models are a few of these. These research goals can help deep learning models become more precise and effective, which will result in better flood control plans and forecasts. Overall, this review is a useful resource for academics and professionals working on the topic of flood forecasting and management. By reviewing the current state of the art, emphasizing difficulties, and outlining potential areas for future study, it lays a solid basis. Communities may better prepare for and lessen the destructive effects of floods by implementing cutting-edge deep learning algorithms, thereby protecting people and infrastructure.
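The supervised framing that deep-learning flood forecasters typically share is to slice a sensor time series into (past window, future target) pairs. A minimal sketch of that framing; the readings, window length, and horizon below are illustrative, not taken from the review:

```python
def make_windows(series, window=4, horizon=2):
    """Frame a univariate time series (e.g. hourly river level) as
    supervised (input window, future target) pairs for a forecaster."""
    pairs = []
    for i in range(len(series) - window - horizon + 1):
        x = series[i:i + window]              # past `window` readings
        y = series[i + window + horizon - 1]  # level `horizon` steps ahead
        pairs.append((x, y))
    return pairs

# Hypothetical hourly river-level readings (metres)
levels = [2.1, 2.3, 2.2, 2.6, 3.0, 3.4, 3.9, 4.1, 4.0, 3.8]
pairs = make_windows(levels, window=4, horizon=2)
```

Any sequence model (LSTM, temporal CNN, transformer) would then be trained on such pairs; longer windows and multiple input series (rainfall, upstream gauges) follow the same pattern.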
Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field in the last 15 years has made power a first-class design concern. As a result, the computing-systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing; it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
Comment: Under review at ACM Computing Surveys
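Loop perforation, a classic software-level approximation technique of the kind such surveys classify, trades accuracy for work by visiting only a fraction of a loop's iterations. A minimal sketch (the function and data are illustrative):

```python
def mean_exact(xs):
    """Exact mean: visits every element."""
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=2):
    """Approximate mean via loop perforation: visit only every
    `skip`-th element, doing roughly 1/skip of the work."""
    sampled = xs[::skip]
    return sum(sampled) / len(sampled)

data = list(range(1000))              # exact mean is 499.5
approx = mean_perforated(data, skip=4)  # ~4x fewer iterations
error = abs(approx - mean_exact(data))
```

The same skip-iterations transformation applies wherever the loop body accumulates a reduction that tolerates sampling (filters, Monte Carlo kernels, gradient sums), which is why compilers and runtimes can apply it automatically.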
The Globalization of Artificial Intelligence: African Imaginaries of Technoscientific Futures
Imaginaries of artificial intelligence (AI) have transcended geographies of the Global North and become increasingly entangled with narratives of economic growth, progress, and modernity in Africa. This raises several issues such as the entanglement of AI with global technoscientific capitalism and its impact on the dissemination of AI in Africa. The lack of African perspectives on the development of AI exacerbates concerns of raciality and inclusion in the scientific research, circulation, and adoption of AI. My argument in this dissertation is that innovation in AI, in both its sociotechnical imaginaries and political economies, excludes marginalized countries, nations and communities in ways that not only bar their participation in the reception of AI, but also as being part and parcel of its creation.
Underpinned by decolonial thinking, and perspectives from science and technology studies and African studies, this dissertation looks at how AI is reconfiguring the debate about development and modernization in Africa and the implications for local sociotechnical practices of AI innovation and governance. I examined AI in international development and industry across Kenya, Ghana, and Nigeria, by tracing Canada’s AI4D Africa program and following AI start-ups at AfriLabs. I used multi-sited case studies and discourse analysis to examine the data collected from interviews, participant observations, and documents.
In the empirical chapters, I first examine how local actors understand the notion of decolonizing AI and show that it has become a sociotechnical imaginary. I then investigate the political economy of AI in Africa and argue that despite Western efforts to integrate the African AI ecosystem globally, the AI epistemic communities in the continent continue to be excluded from dominant AI innovation spaces. Finally, I examine the emergence of a Pan-African AI imaginary and argue that AI governance can be understood as a state-building experiment in post-colonial Africa. The main issue at stake is that the lack of African perspectives in AI leads to negative impacts on innovation and limits the fair distribution of the benefits of AI across nations, countries, and communities, while at the same time excluding globally marginalized epistemic communities from the imagination and creation of AI.
Estimating the innovation benefits of first-mover and second-mover strategies when micro-businesses adopt artificial intelligence and machine learning
Digital technologies have the potential to transform all aspects of firms’ operations. The emergence of advanced digital technologies such as Artificial Intelligence and Machine Learning raises questions about whether and when micro-businesses should adopt these technologies. In this paper we focus on how firms’ adoption decisions on Artificial Intelligence and Machine Learning influence their innovation capabilities. Using survey data for over 6,000 micro-businesses in the UK, we identify two groups of adopters based on the timing of their adoption of Artificial Intelligence and Machine Learning: ‘first movers’, early adopters of the new technologies, and ‘second movers’, later adopters. Probit models are used to investigate the innovation benefits of first and second mover adoption strategies. Our results suggest strong and positive impacts of adopting Artificial Intelligence and Machine Learning on micro-businesses’ innovation outcomes and innovation processes. We highlight the differential benefits of first mover and second mover strategies and underline the role of technology characteristics as the differentiating factor. Our results emphasize both the innovation-enabling role of digital technologies and the importance of an appropriate strategic approach to adopting advanced digital technologies.
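A probit model of the kind the authors use maps a linear index of covariates through the standard normal CDF to a probability of the binary outcome (here, innovating). A sketch of the link function with made-up coefficients, purely for illustration of the specification:

```python
import math

def probit(z):
    """Standard normal CDF Phi(z), the probit link."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical fitted coefficients: intercept, first-mover dummy, log firm size
beta = [-0.8, 0.9, 0.05]

def p_innovate(first_mover, log_size):
    """Predicted probability of an innovation outcome, Phi(X*beta)."""
    z = beta[0] + beta[1] * first_mover + beta[2] * log_size
    return probit(z)

p_first = p_innovate(1, 2.0)   # index z = 0.2
p_second = p_innovate(0, 2.0)  # index z = -0.7
```

In practice the coefficients would be estimated by maximum likelihood (e.g. with a statistics package) on the survey data; the sketch only shows how a positive first-mover coefficient translates into a higher predicted adoption benefit.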
Demonstration of a Response Time Based Remaining Useful Life (RUL) Prediction for Software Systems
Prognostic and Health Management (PHM) has been widely applied to hardware systems in the electronics and non-electronics domains but has not been explored for software. While software does not decay over time, it can degrade over release cycles. Software health management is confined to diagnostic assessments that identify problems, whereas prognostic assessment potentially indicates when in the future a problem will become detrimental. Relevant research areas such as software defect prediction, software reliability prediction, predictive maintenance of software, software degradation, and software performance prediction exist, but all of these represent diagnostic models built upon historical data, none of which can predict an RUL for software. This paper addresses the application of PHM concepts to software systems for fault prediction and RUL estimation. Specifically, it addresses how PHM can be used to make decisions for software systems such as version updates and upgrades, module changes, system reengineering, rejuvenation, maintenance scheduling, budgeting, and total abandonment. The paper presents a method to prognostically and continuously predict the RUL of a software system based on usage parameters (e.g., the numbers and categories of releases) and performance parameters (e.g., response time). The model developed has been validated by comparing actual data with the results generated by the predictive models, and statistical validation (regression validation and k-fold cross-validation) has also been carried out. A case study based on publicly available data for the Bugzilla application is presented. This case study demonstrates that PHM concepts can be applied to software systems and that an RUL can be calculated to support system management decisions.
Comment: This research methodology has opened up new and practical applications in the software domain. In the coming decades, we can expect a significant amount of attention and practical implementation in this area worldwide.
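The core idea, extrapolating a degrading performance parameter to a failure threshold, can be sketched with an ordinary least-squares trend on mean response times per release. The data, threshold, and linear-degradation assumption below are invented for illustration and are not the paper's actual model:

```python
def rul_from_response_times(releases, times, threshold):
    """Fit time = a + b * release by least squares, then return how many
    releases remain before the response time crosses `threshold`."""
    n = len(releases)
    mx = sum(releases) / n
    my = sum(times) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(releases, times))
         / sum((x - mx) ** 2 for x in releases))
    a = my - b * mx
    crossing = (threshold - a) / b   # release index where threshold is hit
    return crossing - releases[-1]   # remaining useful life, in releases

# Hypothetical mean response times (ms) over successive releases
releases = [1, 2, 3, 4, 5]
times = [100, 104, 109, 113, 118]
rul = rul_from_response_times(releases, times, threshold=150)
```

A continuous predictor would refit this trend as each new release's telemetry arrives, shrinking or extending the RUL estimate accordingly.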
The role of artificial intelligence-driven soft sensors in advanced sustainable process industries: a critical review
With the predicted depletion of natural resources and alarming environmental issues, sustainable development has become a popular as well as a much-needed concept in modern process industries. Hence, manufacturers are quite keen on adopting novel process monitoring techniques to enhance product quality and process efficiency while minimizing possible adverse environmental impacts. Hardware sensors are employed in process industries to aid process monitoring and control, but they are associated with many limitations such as disturbances to the process flow, measurement delays, frequent need for maintenance, and high capital costs. As a result, soft sensors have become an attractive alternative for predicting quality-related parameters that are ‘hard to measure’ using hardware sensors. Due to their promising features over hardware counterparts, they have been employed across different process industries. This article attempts to explore the state-of-the-art artificial intelligence (AI)-driven soft sensors designed for process industries and their role in achieving the goal of sustainable development. First, a general introduction is given to soft sensors, their applications in different process industries, and their significance in achieving sustainable development goals. AI-based soft sensing algorithms are then introduced. Next, a discussion on how AI-driven soft sensors contribute toward different sustainable manufacturing strategies of process industries is provided. This is followed by a critical review of the most recent state-of-the-art AI-based soft sensors reported in the literature. Here, the use of powerful AI-based algorithms to address the limitations of traditional algorithms that restrict soft-sensor performance is discussed.
Finally, the challenges and limitations associated with the current soft sensor design, application, and maintenance aspects are discussed, with possible future directions for designing more intelligent and smart soft sensing technologies to cater to future industrial needs.
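In its simplest data-driven form, a soft sensor is a function from cheap, easy-to-measure process variables to an estimate of a hard-to-measure quality variable; AI-based designs replace the linear map with a learned nonlinear one. A minimal linear sketch, with weights, variables, and readings invented for illustration:

```python
# Hypothetical weights of a trained soft-sensor model; in practice these
# would be learned from paired process readings and (delayed) lab analyses.
W = [0.04, -0.6]   # weights on temperature and flow rate
BIAS = 12.0

def soft_sensor(temperature_c, flow_lpm):
    """Infer a hard-to-measure quality variable (e.g. product
    concentration) from two easy-to-measure hardware-sensor readings."""
    return BIAS + W[0] * temperature_c + W[1] * flow_lpm

# Online estimate standing in for a slow laboratory measurement
estimate = soft_sensor(80.0, 5.0)
```

Because the inference is instantaneous, such a model sidesteps the measurement delays and maintenance burden of a dedicated hardware analyser, which is the sustainability argument the review develops.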
Digital endpoints in clinical trials of Alzheimer’s disease and other neurodegenerative diseases: challenges and opportunities
Alzheimer’s disease (AD) and other neurodegenerative diseases such as Parkinson’s disease (PD) and Huntington’s disease (HD) are associated with progressive cognitive, motor, affective and consequently functional decline considerably affecting Activities of Daily Living (ADL) and quality of life. Standard assessments, such as questionnaires and interviews, cognitive testing, and mobility assessments, lack sensitivity, especially in the early stages of neurodegenerative diseases and in tracking disease progression, and therefore have limited utility as outcome measurements in clinical trials. Major advances in the last decade in digital technologies have opened a window of opportunity to introduce digital endpoints into clinical trials that can reform the assessment and tracking of neurodegenerative symptoms. The Innovative Medicines Initiative (IMI)-funded projects RADAR-AD (Remote assessment of disease and relapse—Alzheimer’s disease), IDEA-FAST (Identifying digital endpoints to assess fatigue, sleep and ADL in neurodegenerative disorders and immune-mediated inflammatory diseases) and Mobilise-D (Connecting digital mobility assessment to clinical outcomes for regulatory and clinical endorsement) aim to identify digital endpoints relevant for neurodegenerative diseases that provide reliable, objective, and sensitive evaluation of disability and health-related quality of life. In this article, we will draw from the findings and experiences of the different IMI projects in discussing (1) the value of remote technologies to assess neurodegenerative diseases; (2) feasibility, acceptability and usability of digital assessments; (3) challenges related to the use of digital tools; (4) public involvement and the implementation of patient advisory boards; (5) regulatory learnings; and (6) the significance of inter-project exchange and data- and algorithm-sharing.