131 research outputs found
Feasibility study for a numerical aerodynamic simulation facility. Volume 1
A Numerical Aerodynamic Simulation Facility (NASF) was designed for the simulation of fluid flow around three-dimensional bodies, both in wind tunnel environments and in free space. The application of numerical simulation to this field promised to yield economies in aerodynamic and aircraft body design. A model for a NASF/FMP (Flow Model Processor) ensemble using a possible approach to meeting NASF goals is presented. The computer hardware and software are described, along with the complete design and a performance analysis and evaluation.
The design and use of a digital radio telemetry system for measuring internal combustion engine piston parameters.
During the course of this project, a digital radio telemetry system has been
designed and shown to be capable of measuring parameters from the piston of
an internal combustion engine, under load. The impetus for the work stems
from the need to sample the appropriate data required for oil degradation
analysis and the unavailability of a system to perform such sampling.
The prototype system was designed for installation within a small Norton
Villiers C-30 industrial engine. This choice of engine presented significant
design challenges due to the small size of the engine (components and
construction) and the crankcase environment. These challenges were manifest
in the choice of carrier frequency, antenna size and location, modulation
scheme, data encoding scheme, signal attenuation, error checking and
correction, choice of components, manufacturing techniques and physical
mounting to reciprocating parts. To overcome these challenges, a detailed
analysis of the radio frequency spectrum was undertaken to minimise
attenuation from mechanisms such as absorption, reflection, motion, spatial
arrangement and noise.
Another aspect of the project concerned the development of a flexible modus
operandi in order to facilitate a number of sampling regimes. In order to
achieve such flexibility a two-way communication protocol was implemented
enabling the sampling system to be programmed into a particular mode of
operation while in use. Additionally, the system was designed to accommodate
the range of signals output from most transducer devices.
The sampling capabilities of the prototype system were extended by enabling
the system to support multiple transducers providing a mixture of output
signals; for example both analogue and digital signals have been sampled.
Additionally, a facility to sample data in response to triggering stimuli has been
tested; specifically a sampling trigger may be derived from the motion of the
piston via an accelerometer.
Ancillary components, such as interface hardware and software, have been
developed which are suitable for the recording of data accessed by the system.
This work has demonstrated that multi-transducer, mixed-signal monitoring of
piston parameters (such as temperature and acceleration) using a two-way,
programmable, digital radio frequency telemetry system is not only possible
but provides a means for more advanced instrumentation.
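The error-checking and two-way framing described above can be sketched in miniature. This is an illustrative sketch only: the thesis does not publish its actual frame layout, so the field widths, channel/mode fields, and use of CRC-32 here are assumptions standing in for whatever encoding and error-checking scheme the prototype used.

```python
import struct, zlib

# Hypothetical frame layout for a piston-telemetry link (illustrative only):
# 1-byte channel id, 1-byte sampling mode, 2-byte sample value,
# followed by a CRC-32 over the payload for error checking.
def encode_frame(channel: int, mode: int, sample: int) -> bytes:
    payload = struct.pack(">BBH", channel, mode, sample)
    return payload + struct.pack(">I", zlib.crc32(payload))

def decode_frame(frame: bytes):
    payload, crc = frame[:-4], struct.unpack(">I", frame[-4:])[0]
    if zlib.crc32(payload) != crc:
        return None  # corrupted in transit; caller may request retransmission
    return struct.unpack(">BBH", payload)

frame = encode_frame(channel=2, mode=1, sample=0x1F40)
assert decode_frame(frame) == (2, 1, 0x1F40)
# A single flipped bit is caught by the CRC check:
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert decode_frame(corrupted) is None
```

In a two-way protocol of this kind, the same framing can carry mode-change commands downlink and sensor samples uplink, which is what lets the sampling system be reprogrammed while the engine is running.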
Mission-Critical Communications from LMR to 5G: a Technology Assessment approach for Smart City scenarios
Radiocommunication networks are one of the main support tools of agencies that carry out
actions in Public Protection & Disaster Relief (PPDR), and it is necessary to update these
communication technologies from narrowband to broadband and integrate them with
information technologies so that these agencies can act effectively on behalf of society.
Understanding that this problem includes, besides the technical aspects, issues related to
the social context in which these systems are embedded, this study aims to construct
scenarios, using several sources of information, that help the managers of the PPDR
agencies in the technological decision-making process of the Digital Transformation of
Mission-Critical Communication, considering Smart City scenarios, guided by the methods
and approaches of Technology Assessment (TA).
A novel semantic IoT middleware for secure data management: blockchain and AI-driven context awareness
In the modern digital landscape of the Internet of Things (IoT), data interoperability and heterogeneity present critical challenges, particularly with the increasing complexity of IoT systems and networks. Addressing these challenges, while ensuring data security and user trust, is pivotal. This paper proposes a novel Semantic IoT Middleware (SIM) for healthcare. The architecture of this middleware comprises the following main processes: data generation, semantic annotation, security encryption, and semantic operations. The data generation module facilitates seamless data and event sourcing, while the Semantic Annotation Component assigns structured vocabulary for uniformity. SIM adopts blockchain technology to provide enhanced data security, and its layered approach ensures robust interoperability and intuitive user-centric operations for IoT systems. The security encryption module offers data protection, and the semantic operations module underpins data processing and integration. A distinctive feature of this middleware is its proficiency in service integration, leveraging semantic descriptions augmented by user feedback. Additionally, SIM integrates artificial intelligence (AI) feedback mechanisms to continuously refine and optimise the middleware's operational efficiency.
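Two of the stages named above, semantic annotation and blockchain-backed integrity, can be illustrated with a minimal sketch. Everything here is an assumption for illustration (the vocabulary terms, the record shape, and the use of a simple SHA-256 hash chain in place of a full blockchain); SIM's actual design is not reproduced in this abstract.

```python
import hashlib, json

# Hypothetical controlled vocabulary: semantic annotation maps raw reading
# kinds onto structured terms, giving heterogeneous devices a uniform schema.
VOCAB = {"hr": "vital-sign/heart-rate", "spo2": "vital-sign/oxygen-saturation"}

def annotate(reading: dict) -> dict:
    return {"term": VOCAB[reading["kind"]], "value": reading["value"]}

class HashChain:
    """Blockchain-style integrity in miniature: each block hashes its
    record together with the previous block's hash, so any tampering
    with a stored record invalidates the rest of the chain."""
    def __init__(self):
        self.blocks = []

    def append(self, record: dict) -> None:
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = json.dumps(record, sort_keys=True)
        h = hashlib.sha256((prev + body).encode()).hexdigest()
        self.blocks.append({"prev": prev, "record": record, "hash": h})

    def verify(self) -> bool:
        prev = "0" * 64
        for b in self.blocks:
            body = json.dumps(b["record"], sort_keys=True)
            if b["prev"] != prev:
                return False
            if hashlib.sha256((prev + body).encode()).hexdigest() != b["hash"]:
                return False
            prev = b["hash"]
        return True

chain = HashChain()
chain.append(annotate({"kind": "hr", "value": 72}))
chain.append(annotate({"kind": "spo2", "value": 98}))
assert chain.verify()
chain.blocks[0]["record"]["value"] = 180  # tampering breaks the chain
assert not chain.verify()
```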
COBE's search for structure in the Big Bang
The launch of the Cosmic Background Explorer (COBE) and the definition of the Earth Observing System (EOS) are two of the major events at NASA-Goddard. The three experiments contained in COBE (the Differential Microwave Radiometer (DMR), the Far Infrared Absolute Spectrophotometer (FIRAS), and the Diffuse Infrared Background Experiment (DIRBE)) are very important in measuring the big bang. DMR measures the isotropy of the cosmic background (the direction of the radiation). FIRAS looks at the spectrum over the whole sky, searching for deviations, and DIRBE operates in the infrared part of the spectrum, gathering evidence of the earliest galaxy formation. By special techniques, the radiation coming from the solar system will be distinguished from that of extragalactic origin. Unique graphics will be used to represent the temperature of the emitting material. A cosmic event of such importance will be modeled that it will affect cosmological theory for generations to come. EOS will monitor changes in the Earth's geophysics during a whole solar cycle.
Novel Computing Paradigms using Oscillators
This dissertation is concerned with new ways of using oscillators to perform computational tasks. Specifically, it introduces methods for building finite state machines (for general-purpose Boolean computation) as well as Ising machines (for solving combinatorial optimization problems) using coupled oscillator networks.
But firstly, why oscillators? Why use them for computation? An important reason is simply that oscillators are fascinating. Coupled oscillator systems often display intriguing synchronization phenomena where spontaneous patterns arise. From the synchronous flashing of fireflies to Huygens' clocks ticking in unison, from the molecular mechanism of circadian rhythms to the phase patterns in oscillatory neural circuits, the observation and study of synchronization in coupled oscillators has a long and rich history. Engineers across many disciplines have also taken inspiration from these phenomena, e.g., to design high-performance radio frequency communication circuits and optical lasers. To be able to contribute to the study of coupled oscillators and leverage them in novel paradigms of computing is without question an interesting and fulfilling quest in and of itself.
Moreover, as Moore's Law nears its limits, new computing paradigms that differ from mere conventional complementary metal–oxide–semiconductor (CMOS) scaling have become an important area of exploration. One broad direction aims to improve CMOS performance using device technology such as fin field-effect transistors (FinFET) and gate-all-around (GAA) FETs. Other new computing schemes are based on non-CMOS material and device technology, e.g., graphene, carbon nanotubes, memristive devices, optical devices, etc. Another growing trend in both academia and industry is to build digital application-specific integrated circuits (ASIC) suitable for speeding up certain computational tasks, often leveraging the parallel nature of unconventional non-von Neumann architectures.
These schemes seek to circumvent the limitations posed at the device level through innovations at the system/architecture level.
Our work on oscillator-based computation represents a direction that is different from the above and features several points of novelty and attractiveness. Firstly, it makes meaningful use of nonlinear dynamical phenomena to tackle well-defined computational tasks that span analog and digital domains. It also differs from conventional computational systems at the fundamental logic encoding level, using timing/phase of oscillation as opposed to voltage levels to represent logic values. These differences bring about several advantages. The change of logic encoding scheme has several device- and system-level benefits related to noise immunity and interference resistance. The use of nonlinear oscillator dynamics allows our systems to address problems difficult for conventional digital computation. Furthermore, our schemes are amenable to realizations using almost all types of oscillators, allowing a wide variety of devices from multiple physical domains to serve as the substrate for computing. This ability to leverage emerging multiphysics devices need not put off the realization of our ideas far into the future. Instead, implementations using well-established circuit technology are already both practical and attractive.
This work also differs from all past work on oscillator-based computing, which mostly focuses on specialized image preprocessing tasks, such as edge detection, image segmentation and pattern recognition. Perhaps its most unique feature is that our systems use transitions between analog and digital modes of operation: unlike other existing schemes that simply couple oscillators and let their phases settle to a continuum of values, we use a special type of injection locking to make each oscillator settle to one of several well-defined multistable phase-locked states, which we use to encode logic values for computation.
Our schemes of oscillator-based Boolean and Ising computation are built upon this digitization of phase; they expand the scope of oscillator-based computing significantly.
Our ideas are built on years of past research in the modelling, simulation and analysis of oscillators. While there is a considerable amount of literature (arguably since Christiaan Huygens wrote about his observation of synchronized pendulum clocks in the 17th century) analyzing the synchronization phenomenon from different perspectives at different levels, we have been able to further develop the theory of injection locking, connecting the dots to find a path of analysis that starts from the low-level differential equations of individual oscillators and arrives at phase-based models and energy landscapes of coupled oscillator systems. This theoretical scaffolding is able not only to explain the operation of oscillator-based systems, but also to serve as the basis for simulation and design tools. Building on this, we explore the practical design of our proposed systems, demonstrate working prototypes, and develop the techniques, tools and methodologies essential for the process.
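The phase-digitization idea described above can be demonstrated with a toy numerical sketch. This is not the dissertation's model or code: the equations, gains, and readout below are our own minimal stand-ins, namely Kuramoto-style phase coupling plus a second-harmonic injection term that binarizes each phase to 0 or pi so the phases encode Ising spins.

```python
import math, random

def settle(J, Ks=2.0, Kc=1.0, dt=0.01, steps=5000, seed=0):
    """Toy coupled-oscillator Ising machine (illustrative assumptions only).
    Phase dynamics: dphi_i/dt = Kc * sum_j J[i][j]*sin(phi_j - phi_i)
                               - Ks * sin(2*phi_i)
    The -Ks*sin(2*phi) injection term makes phi = 0 and phi = pi the only
    stable phases, digitizing each oscillator's phase into an Ising spin."""
    rng = random.Random(seed)
    n = len(J)
    phi = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        dphi = []
        for i in range(n):
            d = -Ks * math.sin(2 * phi[i])  # injection locking: pull to {0, pi}
            for j in range(n):
                if j != i:
                    d += Kc * J[i][j] * math.sin(phi[j] - phi[i])
            dphi.append(d)
        phi = [(p + dt * d) % (2 * math.pi) for p, d in zip(phi, dphi)]
    # Read out spins: phase near 0 -> +1, phase near pi -> -1.
    return [1 if math.cos(p) > 0 else -1 for p in phi]

# An antiferromagnetic bond (J = -1) drives the two oscillators anti-phase,
# i.e. to opposite spins, minimizing the Ising energy H = -J*s1*s2.
spins = settle([[0, -1], [-1, 0]])
assert spins[0] != spins[1]
```

Positive coupling (J = +1) instead synchronizes the pair to a common phase, giving aligned spins; larger coupled networks settle toward low-energy spin configurations of the corresponding Ising problem.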
NASA Tech Briefs, October 2002
Topics include: a technology focus on sensors, electronic components and systems, software, materials, mechanics, manufacturing, physical sciences, information sciences, books and reports, motion control, and a special section of Photonics Tech Briefs.
High-level automation of custom hardware design for high-performance computing
This dissertation focuses on efficient generation of custom processors from high-level language descriptions. Our work exploits compiler-based optimizations and transformations in tandem with high-level synthesis (HLS) to build high-performance custom processors. The goal is to offer a common multiplatform high-abstraction programming interface for heterogeneous compute systems where the benefits of custom reconfigurable (or fixed) processors can be exploited by the application developers.
The research presented in this dissertation supports the following thesis: In an increasingly heterogeneous compute environment it is important to leverage the compute capabilities of each heterogeneous processor efficiently. In the case of FPGA and ASIC accelerators this can be achieved through HLS-based flows that (i) extract parallelism at coarser than basic block granularities, (ii) leverage common high-level parallel programming languages, and (iii) employ high-level source-to-source transformations to generate high-throughput custom processors.
First, we propose a novel HLS flow that extracts instruction-level parallelism beyond the boundary of basic blocks from C code. Subsequently, we describe FCUDA, an HLS-based framework for mapping fine-grained and coarse-grained parallelism from parallel CUDA kernels onto spatial parallelism. FCUDA provides a common programming model for acceleration on heterogeneous devices (i.e. GPUs and FPGAs). Moreover, the FCUDA framework balances multilevel granularity parallelism synthesis using efficient techniques that leverage fast and accurate estimation models (i.e. do not rely on lengthy physical implementation tools). Finally, we describe an advanced source-to-source transformation framework for throughput-driven parallelism synthesis (TDPS), which appropriately restructures CUDA kernel code to maximize throughput on FPGA devices. We have integrated the TDPS framework into the FCUDA flow to enable automatic performance porting of CUDA kernels designed for the GPU architecture onto the FPGA architecture.
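The core source-to-source idea, turning a CUDA kernel's implicit per-thread execution into explicit loops that an HLS tool can pipeline or unroll into spatial parallelism, can be sketched as follows. The kernel name, the SAXPY example, and the Python rendering are our own illustrative assumptions, not FCUDA's actual output.

```python
# A per-thread kernel body, written as one CUDA thread would execute it
# (tid plays the role of the CUDA thread index).
def saxpy_kernel(tid, a, x, y, out):
    out[tid] = a * x[tid] + y[tid]

# FCUDA-style thread-loop transformation (sketch): the implicit SIMT thread
# index becomes an explicit loop over the block dimension, yielding
# sequential code that an HLS tool can then unroll or pipeline into
# parallel hardware on the FPGA fabric.
def saxpy_threadloop(block_dim, a, x, y, out):
    for tid in range(block_dim):
        saxpy_kernel(tid, a, x, y, out)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
saxpy_threadloop(4, 2.0, x, y, out)
assert out == [12.0, 24.0, 36.0, 48.0]
```

Because loop iterations here correspond to independent CUDA threads, the HLS tool is free to replicate the loop body, which is how thread-level parallelism becomes spatial parallelism.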