How the Common Component Architecture Advances Computational Science
Computational chemists are using Common Component Architecture (CCA) technology to increase the parallel scalability of their applications ten-fold. Combustion researchers are publishing science faster because the CCA manages software complexity for them. Both the solver and meshing communities in SciDAC are converging on community interface standards as a direct response to the novel level of interoperability that the CCA provides. Yet there is much more to do before component technology becomes mainstream in computational science. This paper highlights the impact that the CCA has made on scientific applications, conveys some lessons learned from five years of the SciDAC program, and previews where applications could go with the additional capabilities that the CCA has planned for SciDAC 2.
Real-Time Sensor Networks and Systems for the Industrial IoT
The Industrial Internet of Things (Industrial IoT, or IIoT) has emerged as the core construct behind the various cyber-physical systems constituting a principal dimension of the fourth Industrial Revolution. While initially born as the concept behind specific industrial applications of generic IoT technologies, aimed at optimizing operational efficiency in automation and control, it quickly enabled the total convergence of Operational Technology (OT) and Information Technology (IT). The IIoT has now surpassed the traditional borders of automation and control functions in the process and manufacturing industries, shifting towards a wider domain of functions and industries embraced under the dominant global initiatives and architectural frameworks of Industry 4.0 (or Industrie 4.0) in Germany, the Industrial Internet in the US, Society 5.0 in Japan, and Made-in-China 2025 in China. As real-time embedded systems quickly achieve ubiquity in everyday life and in industrial environments, and many processes already depend on real-time cyber-physical systems and embedded sensors, the integration of the IoT with cognitive computing and real-time data exchange is essential for real-time analytics and for realizing digital twins in smart environments and services under the various frameworks’ provisions. In this context, real-time sensor networks and systems for the Industrial IoT encompass multiple technologies and raise significant design, optimization, integration and exploitation challenges. The ten articles in this Special Issue describe advances in real-time sensor networks and systems that are significant enablers of the Industrial IoT paradigm. In the relevant landscape, the domain of wireless networking technologies is, as expected, centrally positioned.
A supporting infrastructure for Wireless Sensor Networks in Critical Industrial Environments
Doctoral thesis in the Doctoral Programme in Information Sciences and Technologies, presented to the Faculty of Sciences and Technology of the University of Coimbra.
Wireless Sensor Networks (WSNs) have countless applications across almost every field, including military, industrial, healthcare, and smart-home environments. However, several problems prevent the widespread adoption of sensor networks in real deployments. Among them, reliable communication, especially in noisy industrial environments, is difficult to guarantee. In addition, interoperability between sensor networks and external applications is also a challenge. Moreover, determining the position of nodes, particularly mobile nodes, is a critical requirement in many types of applications. The original contributions of this thesis comprise reliable-communication, integration, and localization solutions for WSNs operating in industrial and critical environments.
Because sensor nodes are usually deployed and left unattended, without human intervention, for long durations, e.g. months or even years, providing reliable communication is a crucial requirement for WSNs. However, many problems arise during packet transmission that are related to the transmission medium (e.g. signal path loss, noise and interference). Interference occurs when more than one network shares the medium, or through spectral spreading at certain frequencies. The problem is more severe in dynamic environments, where noise sources can be introduced at any time and new networks or devices that interfere with the existing one may be added. Consequently, WSNs need the ability to cope with such communication failures. The Dynamic MAC (DynMAC) protocol proposed in this thesis employs Cognitive Radio (CR) techniques to let a WSN adapt to dynamic, noisy environments by automatically selecting the best channel during its operation.
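The automatic channel selection at the heart of this idea can be sketched roughly as follows. The function names and the simulated noise probe are illustrative assumptions, not the thesis's implementation, which would couple this logic to actual radio hardware:

```python
import random

def scan_channel_noise(channel, samples=16):
    """Hypothetical noise probe: return the average noise level observed
    on a channel. Here it is simulated with random values; a real node
    would read RSSI measurements from its radio transceiver."""
    return sum(random.uniform(0, 100) for _ in range(samples)) / samples

def select_best_channel(channels):
    """Scan every candidate channel and return the one with the least
    observed noise, mimicking the automatic channel-selection idea."""
    return min(channels, key=scan_channel_noise)

# IEEE 802.15.4 defines channels 11-26 in the 2.4 GHz band.
best = select_best_channel(range(11, 27))
print(best)
```

In a deployed network the scan would be repeated periodically, so that the network can migrate away from a channel when a new interference source appears.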
A WSN usually cannot operate in complete isolation; it needs to be monitored, controlled and visualized by external applications. Although it is possible to add an IP protocol stack to sensor nodes, this approach is not appropriate for many types of WSNs, so the proxy and gateway approach remains the preferred method for integrating sensor networks with external networks and applications. The problem with current integration solutions is adaptability, i.e., the ability of a gateway or proxy developed for one sensor network to be reused, unchanged, by others with different applications and data frames. One reason is that it is difficult or even impossible to standardize the structure of data inside a frame, given the huge number of possible formats. Consequently, an adaptable solution is needed for easily and transparently integrating WSNs and application environments. In this thesis, the Sensor Traffic Description Language (STDL) is proposed for describing the structure of sensor networks’ data frames, allowing the framework to be adapted to a diversity of protocols and applications without reprogramming.
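The idea of a description-driven gateway can be illustrated with a toy frame specification. The field names, format codes, and `parse_frame` helper below are invented for illustration and are not the actual STDL syntax; they only show how a declarative description lets one generic parser handle any frame layout without reprogramming:

```python
import struct

# Hypothetical STDL-like frame description: each entry names a field and
# gives its struct format code. The real STDL is richer; this only
# illustrates the description-driven idea.
FRAME_SPEC = [
    ("node_id", "H"),      # 16-bit unsigned node identifier
    ("temperature", "h"),  # signed 16-bit, tenths of a degree
    ("battery_mv", "H"),   # battery voltage in millivolts
]

def parse_frame(spec, payload):
    """Generic parser: interpret a raw payload according to a declarative
    field description instead of hard-coded application logic."""
    fmt = ">" + "".join(code for _, code in spec)
    values = struct.unpack(fmt, payload)
    return {name: value for (name, _), value in zip(spec, values)}

raw = struct.pack(">HhH", 42, -15, 2970)
print(parse_frame(FRAME_SPEC, raw))
# {'node_id': 42, 'temperature': -15, 'battery_mv': 2970}
```

Supporting a new sensor network then means supplying a new description rather than rewriting the gateway.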
The positions of sensor nodes are critical in many industrial applications, such as object tracking, location-aware services, and worker or patient tracking. In addition, sensed data is often meaningless without knowledge of where it was obtained. Perhaps the best-known location-sensing system is the Global Positioning System (GPS). However, equipping every sensor node with a GPS receiver is inefficient or unfeasible in most cases because of its energy consumption and cost. In addition, GPS is not appropriate in some environments, e.g., indoors. In line with the original concept of WSNs, a localization solution should also be cheap and power-efficient.
This thesis addresses the above problems. In particular, to add reliability to WSNs, the DynMAC protocol was proposed, implemented and evaluated; it adds a mechanism to cope automatically with noisy and changing environments. For the second problem, STDL and its engine make the integration framework adaptable for interoperation between sensor networks and external applications; the framework requires no reprogramming when deployed for new applications and WSN protocols. Moreover, the framework also supports localization services for positioning sensor nodes whose locations are unknown, employing different localization methods to estimate the location of mobile nodes. With the proposed framework, WSNs can be used as plug-and-play components for integration with the Future Internet. All the proposed solutions were implemented and validated using simulation and real testbeds in both laboratory and industrial environments.
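One standard way to estimate a node's position from range measurements to fixed anchors, not necessarily the exact method evaluated in the thesis, is linearized trilateration, sketched here in 2D:

```python
def trilaterate(anchors, distances):
    """Estimate a 2D position from three anchor positions and measured
    distances by linearizing the circle equations: subtracting the first
    circle equation from the other two yields a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    # Coefficients of the linear system A * (x, y) = b.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    # Solve by Cramer's rule.
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Node at (1, 1) with anchors at (0,0), (4,0), (0,4):
print(trilaterate([(0, 0), (4, 0), (0, 4)], [2**0.5, 10**0.5, 10**0.5]))
# ≈ (1.0, 1.0)
```

In practice the distances would come from noisy RSSI or time-of-flight measurements, so more than three anchors and a least-squares fit are typically used.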
Survey of FPGA applications in the period 2000 – 2015 (Technical Report)
Romoth J, Porrmann M, Rückert U. Survey of FPGA applications in the period 2000 – 2015 (Technical Report); 2017. Since their introduction, FPGAs have appeared in more and more fields of application. Their key advantage is the combination of software-like flexibility with performance otherwise typical of dedicated hardware. Nevertheless, every application field imposes special requirements on the computational architecture used. This paper provides an overview of the different topics FPGAs have been used for in the last 15 years of research and why they have been chosen over other processing units such as CPUs.
On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator
Deployed image classification pipelines typically depend on images captured in real-world environments, which means the images may be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks, and it has therefore attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. Such an operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classifier without CORF delineation maps, but consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
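The push-pull principle can be sketched in a much-simplified form: a rectified "push" response to the image is reduced by a weighted, rectified "pull" response to the inverted image, which suppresses responses caused by noise. This sketch assumes a difference-of-Gaussians filter as the receptive-field model; the actual CORF operator uses oriented, contour-sensitive receptive fields and is considerably more elaborate:

```python
import numpy as np

def dog_kernel(size=7, sigma=1.0, k=1.6):
    """Difference-of-Gaussians kernel, a common center-surround
    receptive-field model."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g1 = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    g2 = np.exp(-(xx**2 + yy**2) / (2 * (k * sigma) ** 2))
    return g1 / g1.sum() - g2 / g2.sum()

def filter2d(image, kernel):
    """Plain 'valid'-mode 2D filtering using NumPy only (the DoG kernel
    is symmetric, so correlation equals convolution here)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def push_pull(image, alpha=0.8):
    """Simplified push-pull response: rectified 'push' response minus a
    weighted, rectified response to the inverted stimulus ('pull')."""
    k = dog_kernel()
    push = np.maximum(filter2d(image, k), 0)
    pull = np.maximum(filter2d(-image, k), 0)
    return np.maximum(push - alpha * pull, 0)
```

The transformed map, rather than the raw pixels, would then be fed to the CNN, which is the preprocessing role the delineation maps play in the pipeline described above.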
Laboratory Directed Research and Development Program FY 2004 Annual Report
The Oak Ridge National Laboratory (ORNL) Laboratory Directed Research and Development (LDRD) Program reports its status to the U.S. Department of Energy (DOE) in March of each year. The program operates under the authority of DOE Order 413.2A, 'Laboratory Directed Research and Development' (January 8, 2001), which establishes DOE's requirements for the program while providing the Laboratory Director broad flexibility for program implementation. LDRD funds are obtained through a charge to all Laboratory programs. This report describes all ORNL LDRD research activities supported during FY 2004 and includes final reports for completed projects and shorter progress reports for projects that were active, but not completed, during this period. The FY 2004 ORNL LDRD Self-Assessment (ORNL/PPA-2005/2) provides financial data about the FY 2004 projects and an internal evaluation of the program's management process. ORNL is a DOE multiprogram science, technology, and energy laboratory with distinctive capabilities in materials science and engineering, neutron science and technology, energy production and end-use technologies, biological and environmental science, and scientific computing. With these capabilities ORNL conducts basic and applied research and development (R&D) to support DOE's overarching national security mission, which encompasses science, energy resources, environmental quality, and national nuclear security. As a national resource, the Laboratory also applies its capabilities and skills to the specific needs of other federal agencies and customers through the DOE Work For Others (WFO) program. Information about the Laboratory and its programs is available on the Internet at <http://www.ornl.gov/>. 
LDRD is a relatively small but vital DOE program that allows ORNL, as well as other multiprogram DOE laboratories, to select a limited number of R&D projects for the purpose of: (1) maintaining the scientific and technical vitality of the Laboratory; (2) enhancing the Laboratory's ability to address future DOE missions; (3) fostering creativity and stimulating exploration of forefront science and technology; (4) serving as a proving ground for new research; and (5) supporting high-risk, potentially high-value R&D. Through LDRD the Laboratory is able to improve its distinctive capabilities and enhance its ability to conduct cutting-edge R&D for its DOE and WFO sponsors. To meet the LDRD objectives and fulfill the particular needs of the Laboratory, ORNL has established a program with two components: the Director's R&D Fund and the Seed Money Fund. As outlined in Table 1, these two funds are complementary. The Director's R&D Fund develops new capabilities in support of the Laboratory initiatives, while the Seed Money Fund is open to all innovative ideas that have the potential for enhancing the Laboratory's core scientific and technical competencies. Provision for multiple routes of access to ORNL LDRD funds maximizes the likelihood that novel and seminal ideas with scientific and technological merit will be recognized and supported
Towards the genomic sequence code of DNA fragility
Genomic sequences can be prone to breakages, where particularly fragile DNA sequence spans can cause genomic instabilities and contribute to diseases such as cancer. Unlike research on point mutations, the relationship between DNA sequence context and the propensity for strand breaks remains elusive. By analysing the differences and commonalities across various DNA breakage datasets, this thesis identifies strong sequence-driven patterns influencing DNA fragility. We deconvolved the sequence influences into short-, medium-, and long-range effects. The short-range k-meric fragility scores of all processed DNA breakage datasets were quantified and summarised as a feature library (DNAfrAIlib), designed for seamless integration during feature generation for any sequence-based machine learning task where accounting for DNA fragility could be useful. We employed these features to develop a generalised machine learning model of DNA fragility trained on cancer-associated breaks. Applying our model to the entire human genome, we found that structural variants, especially pathogenic ones, tend to stabilise the regions once they emerge, while chromothripsis events favour less fragile genomic regions. We found that viral integration into the human host, especially by cancer-associated viruses, could increase genomic fragility. We showed that absent sequences were more fragile than the human genome average. As a proof of concept, we found that incorporating our understanding of the sequence basis of DNA fragility can improve de novo genome assembly algorithms by aiding the selection of higher-quality sequences out of all assembled variants. Overall, this work offers novel insights into the sequence basis of DNA fragility and presents a powerful machine learning resource to further enhance our understanding of genome instability and evolution.
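The k-meric feature idea can be illustrated with a toy lookup. The score table and helper below are invented stand-ins for the DNAfrAIlib scores described above, showing only how per-k-mer fragility values could be aggregated into a feature for a sequence-based model:

```python
def kmer_fragility_feature(sequence, fragility_scores, k=3):
    """Average the per-k-mer fragility scores over a sliding window of a
    DNA sequence. 'fragility_scores' stands in for a library lookup such
    as DNAfrAIlib; unknown k-mers default to 0.0 here."""
    scores = [fragility_scores.get(sequence[i:i + k], 0.0)
              for i in range(len(sequence) - k + 1)]
    return sum(scores) / len(scores)

# Toy score table: in reality every k-mer would carry a measured score.
toy_scores = {"ATG": 0.9, "TGC": 0.4, "GCA": 0.1}
print(kmer_fragility_feature("ATGCA", toy_scores))  # mean of the 3 window scores
```

A real pipeline would compute one such feature per dataset-derived score set and per window, then hand the feature matrix to the downstream model.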
Safety and Reliability - Safe Societies in a Changing World
The contributions cover a wide range of methodologies and application areas for safety and reliability that contribute to safe societies in a changing world. These methodologies and applications include:
- foundations of risk and reliability assessment and management
- mathematical methods in reliability and safety
- risk assessment
- risk management
- system reliability
- uncertainty analysis
- digitalization and big data
- prognostics and system health management
- occupational safety
- accident and incident modeling
- maintenance modeling and applications
- simulation for safety and reliability analysis
- dynamic risk and barrier management
- organizational factors and safety culture
- human factors and human reliability
- resilience engineering
- structural reliability
- natural hazards
- security
- economic analysis in risk management
DTT - Divertor Tokamak Test facility - Interim Design Report
The “Divertor Tokamak Test facility, DTT” is a milestone along the international program aimed at demonstrating – in the second half of this century – the feasibility of obtaining commercial electricity from controlled thermonuclear fusion. DTT is a Tokamak conceived and designed in Italy with a broad international vision. The construction will be carried out at the ENEA Frascati site, mainly supported by national funds, complemented by EUROfusion and European incentive schemes for innovative investments. The project team includes more than 180 high-standard researchers from ENEA, CREATE, CNR, INFN, RFX and various universities.
The volume, entitled DTT Interim Design Report (the “Green Book”, from the colour of its cover), briefly describes the status of the project, the planning of future design activities, and its organizational structure. The publication of the Green Book also provides an occasion for thorough discussion in the fusion community and for broad international collaboration on the DTT challenge.