Piezoresistive microcantilever optimization for uncooled infrared detection technology
Uncooled infrared sensors are significant in a number of scientific and technological applications. A new approach to uncooled infrared detectors has been developed using piezoresistive microcantilevers coated with thermal energy absorbing materials. Infrared radiation absorbed by the microcantilever detector can be sensitively detected as changes in electrical resistance as a function of microcantilever bending. The dynamic range of these devices is extremely large because a measurable resistance change is obtained with only nanometer-level cantilever displacement. Optimization of geometrical properties for selected commercially available cantilevers is presented. We also present results obtained from a modeling analysis of the thermal properties of several different microcantilever detector architectures.
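As a rough illustration of the kind of thermal modelling referred to above (a minimal lumped-parameter sketch of our own, not the analysis in the paper), the steady-state temperature rise of a cantilever scales as absorbed power over thermal conductance to the support, and the thermal time constant as heat capacity over conductance; all numbers below are assumed, order-of-magnitude values.

```python
# Minimal lumped-parameter thermal sketch for a microcantilever IR detector.
# All parameter values are illustrative assumptions, not figures from the paper.

def cantilever_thermal_response(absorbed_power_w, g_thermal_w_per_k, c_thermal_j_per_k):
    """Treat the cantilever as a single thermal mass: return the steady-state
    temperature rise (K) and the thermal time constant (s)."""
    delta_t = absorbed_power_w / g_thermal_w_per_k   # steady-state rise, K
    tau = c_thermal_j_per_k / g_thermal_w_per_k      # thermal time constant, s
    return delta_t, tau

dT, tau = cantilever_thermal_response(
    absorbed_power_w=1e-7,    # ~100 nW of absorbed IR power (assumed)
    g_thermal_w_per_k=1e-6,   # thermal conductance to the support (assumed)
    c_thermal_j_per_k=1e-8,   # lumped heat capacity (assumed)
)
print(f"temperature rise ~ {dT * 1e3:.0f} mK, time constant ~ {tau * 1e3:.0f} ms")
```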
Optical and infrared detection using microcantilevers
The feasibility of micromechanical optical and infrared (IR) detection using microcantilevers is demonstrated. Microcantilevers provide a simple means for developing single- and multi-element sensors for visible and infrared radiation that are smaller, more sensitive and lower in cost than quantum or thermal detectors. Microcantilevers coated with a heat-absorbing layer undergo bending due to the differential stress originating from the bimetallic effect. Bending is proportional to the amount of heat absorbed and can be detected using optical or electrical methods such as resistance changes in piezoresistive cantilevers. The microcantilever sensors exhibit two distinct thermal responses: a fast one (τ1 < 1 ms) and a slower one (τ2 ≈ 10 ms). A noise equivalent temperature difference, NEDT = 90 mK, was measured. When uncoated microcantilevers were irradiated by a low-power diode laser (λ = 786 nm), the noise equivalent power, NEP, was found to be 3.5 nW/√Hz, which corresponds to a specific detectivity, D*, of 3.6 × 10⁷ cm·√Hz/W at a modulation frequency of 20 Hz.
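For reference, the quoted D* follows from the standard definition D* = √A_d / NEP when the NEP is expressed per √Hz. The detector area is not given above, so the value used below is simply the effective area (about 1.6 × 10⁻² cm²) that makes the two quoted figures mutually consistent under this definition; it is an assumption on our part, not a figure from the paper.

```python
import math

def specific_detectivity(nep_w_per_rthz, area_cm2):
    """D* = sqrt(A_d) / NEP, with NEP in W/sqrt(Hz) and A_d in cm^2.
    Result is in cm*sqrt(Hz)/W (Jones)."""
    return math.sqrt(area_cm2) / nep_w_per_rthz

nep = 3.5e-9     # W/sqrt(Hz), from the abstract
area = 1.6e-2    # cm^2, assumed effective area (see note above)
print(f"D* ~ {specific_detectivity(nep, area):.1e} cm*sqrt(Hz)/W")
```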
Letter processing and font information during reading: beyond distinctiveness, where vision meets design
Letter identification is a critical front end of the reading process. In general, conceptualizations of the identification process have emphasized arbitrary sets of distinctive features. However, a richer view of letter processing incorporates principles from the field of type design, including an emphasis on uniformities across letters within a font. The importance of uniformities is supported by a small body of research indicating that consistency of font increases letter identification efficiency. We review design concepts and the relevant literature, with the goal of stimulating further thinking about letter processing during reading.
hp-DGFEM for Partial Differential Equations with Nonnegative Characteristic Form
Presented as an invited lecture at the International Symposium on Discontinuous Galerkin Methods: Theory, Computation and Applications, Newport, RI, USA.
We develop the error analysis for the hp-version of a discontinuous finite element approximation to second-order partial differential equations with nonnegative characteristic form. This class of equations includes classical examples of second-order elliptic and parabolic equations, first-order hyperbolic equations, as well as equations of mixed type. We establish an a priori error bound for the method which is of optimal order in the mesh size h and one order less than optimal in the polynomial degree p. In the particular case of a first-order hyperbolic equation the error bound is optimal in h and half an order less than optimal in p.
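For context, the benchmark against which these rates are stated is the standard hp-approximation estimate (a textbook bound, not a result of this paper): for u ∈ H^s(Ω) approximated by piecewise polynomials of degree p on a mesh of size h,

```latex
\[
  \inf_{v \in S^{h,p}} \| u - v \|_{L^{2}(\Omega)}
  \;\le\; C \, \frac{h^{\min(p+1,\,s)}}{p^{\,s}} \, \| u \|_{H^{s}(\Omega)} .
\]
```

Read against this benchmark, the bound established in the paper keeps the optimal h-exponent but has a p-exponent one power below the optimal one, and half a power below it in the first-order hyperbolic case.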
A chronology of global air quality
Air pollution has been recognized as a threat to human health since the time of Hippocrates, ca 400 BC. Successive written accounts of air pollution occur in different countries through the following two millennia until measurements, from the eighteenth century onwards, show the growing scale of poor air quality in urban centres and close to industry, and the chemical characteristics of the gases and particulate matter. The industrial revolution accelerated both the magnitude of emissions of the primary pollutants and the geographical spread of contributing countries as highly polluted cities became the defining issue, culminating in the great smog of London in 1952. Europe and North America dominated emissions and suffered the majority of adverse effects until the latter decades of the twentieth century, by which time the transboundary issues of acid rain, forest decline and ground-level ozone had become the main environmental and political air quality issues. As controls on emissions of sulfur and nitrogen oxides (SO2 and NOx) began to take effect in Europe and North America, emissions in East and South Asia grew strongly and dominated global emissions by the early years of the twenty-first century. The effects of air quality on human health had also returned to the top of the priorities by 2000 as new epidemiological evidence emerged. By this time, extensive networks of surface measurements and satellite remote sensing provided global measurements of both primary and secondary pollutants. Global emissions of SO2 and NOx peaked in ca 1990 and 2018, respectively, and have since declined through 2020 as a result of widespread emission controls. By contrast, in the absence of actions to abate ammonia, its global emissions have continued to grow.
Bayesian calibration, validation and uncertainty quantification for predictive modelling of tumour growth: a tutorial
In this work we present a pedagogical tumour growth example, in which we apply calibration and validation techniques to an uncertain, Gompertzian model of tumour spheroid growth. The key contribution of this article is the discussion and application of these methods (which are not commonly employed in the field of cancer modelling) in the context of a simple model whose deterministic analogue is widely known within the community. In the course of the example we calibrate the model against experimental data that are subject to measurement errors, and then validate the resulting uncertain model predictions. We then analyse the sensitivity of the model predictions to the underlying measurement model. Finally, we propose an elementary learning approach for tuning a threshold parameter in the validation procedure in order to maximize the predictive accuracy of our validated model.
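As a concrete illustration of the kind of calibration described (a minimal sketch under assumed parameter names, priors and noise level, not the tutorial's own code), one can fit the closed-form Gompertz curve V(t) = K (V0/K)^exp(−a t) to noisy volume measurements with a simple random-walk Metropolis sampler:

```python
# Sketch: Bayesian calibration of a Gompertz tumour-growth model against
# noisy synthetic volume data using random-walk Metropolis.
# Parameter values, priors and the noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def gompertz_volume(t, v0, a, k):
    """Closed-form Gompertz growth: V(t) = K * (V0/K)**exp(-a t)."""
    return k * (v0 / k) ** np.exp(-a * t)

# Synthetic "experimental" data with additive Gaussian measurement error.
t_obs = np.linspace(0.0, 20.0, 15)            # observation times (assumed)
true = dict(v0=0.05, a=0.3, k=2.0)            # ground-truth parameters (assumed)
sigma = 0.05                                  # measurement s.d. (assumed)
y_obs = gompertz_volume(t_obs, **true) + rng.normal(0.0, sigma, t_obs.size)

def log_posterior(theta):
    """Flat prior on a bounded box, Gaussian likelihood; v0 treated as known."""
    a, k = theta
    if a <= 0 or k <= 0 or k > 10:
        return -np.inf
    resid = y_obs - gompertz_volume(t_obs, true["v0"], a, k)
    return -0.5 * np.sum((resid / sigma) ** 2)

theta = np.array([0.5, 1.0])                  # initial guess for (a, K)
lp = log_posterior(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.02, 0.05])   # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples[5000:])               # discard burn-in
print("posterior mean (a, K):", post.mean(axis=0))
```

Validation in the spirit of the article would then compare held-out measurements against the posterior predictive distribution built from these samples.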
On the contact detection for contact-impact analysis in multibody systems
One of the most important and complex parts of the simulation of multibody systems with contact-impact involves the detection of the precise instant of impact. In general, the periods of contact are very small and, therefore, the selection of the time step for the integration of the time derivatives of the state variables plays a crucial role in the dynamics of multibody systems. The conservative approach is to use very small time steps throughout the analysis. However, this solution is not efficient from the computational viewpoint. When variable time step integration algorithms are used and the pre-impact dynamics does not involve high frequencies, the integration algorithms may use larger time steps, and the contact between two surfaces may start with initial penetrations that are artificially high. This fact leads either to a stall of the integration algorithm or to contact forces that are physically impossible, which in turn lead to post-impact dynamics that is unrelated to the physical problem. The main purpose of this work is to present a general and comprehensive approach to automatically adjust the time step, in variable time step integration algorithms, in the vicinity of contact of multibody systems. The proposed methodology ensures that for any impact in a multibody system the time step of the integration is such that any initial penetration is below any prescribed threshold. In the case of the start of contact, and after a time step is complete, the numerical error control of the selected integration algorithm is forced to handle the physical criteria to accept/reject time steps in equal terms with the numerical error control that it normally uses. The main features of this approach are the simplicity of its computational implementation, its good computational efficiency and its ability to deal with the transitions between non-contact and contact situations in multibody dynamics. A demonstration case provides the results that support the discussion and show the validity of the proposed methodology. Funding: Fundação para a Ciência e a Tecnologia (FCT).
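The core idea, keeping any newly detected penetration below a prescribed threshold by shrinking the step at contact onset, can be sketched as follows (a generic illustration, not the paper's implementation; `advance` and `penetration_depth` are hypothetical problem-specific callbacks):

```python
# Sketch of time-step control at contact onset: if a completed step produces
# an initial penetration above a prescribed threshold, the step is rejected
# and retried with a smaller dt until the penetration is acceptable.

def step_with_contact_control(state, t, dt, advance, penetration_depth,
                              tol=1e-6, dt_min=1e-10):
    """Advance one integration step, shrinking dt so that any newly
    detected penetration stays below `tol`."""
    while dt > dt_min:
        trial = advance(state, t, dt)        # candidate state at t + dt
        pen = penetration_depth(trial)       # > 0 once the surfaces overlap
        if pen <= tol:
            return trial, t + dt, dt         # accept the step
        dt *= 0.5                            # reject: retry with a smaller step
    raise RuntimeError("time step underflow near contact onset")
```

In the approach described above, this physical criterion is folded into the integrator's own accept/reject logic so that it competes on equal terms with the numerical error control; the halving loop here is only the simplest stand-in for that mechanism.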
Models, measurement and inference in epithelial tissue dynamics
The majority of solid tumours arise in epithelia, and therefore much research effort has gone into investigating the growth, renewal and regulation of these tissues. Here we review different mathematical and computational approaches that have been used to model epithelia. We compare different models and describe future challenges that need to be overcome in order to fully exploit new data, which present, for the first time, the real possibility for detailed model validation and comparison.
FRAX™ and the assessment of fracture probability in men and women from the UK
SUMMARY: A fracture risk assessment tool (FRAX) is developed based on the use of clinical risk factors with or without bone mineral density tests applied to the UK. INTRODUCTION: The aim of this study was to apply an assessment tool for the prediction of fracture in men and women with the use of clinical risk factors (CRFs) for fracture with and without the use of femoral neck bone mineral density (BMD). The clinical risk factors, identified from previous meta-analyses, comprised body mass index (BMI, as a continuous variable), a prior history of fracture, a parental history of hip fracture, use of oral glucocorticoids, rheumatoid arthritis and other secondary causes of osteoporosis, current smoking, and alcohol intake of 3 or more units daily. METHODS: Four models were constructed to compute fracture probabilities based on the epidemiology of fracture in the UK. The models comprised the ten-year probability of hip fracture, with and without femoral neck BMD, and the ten-year probability of a major osteoporotic fracture, with and without BMD. For each model, fracture and death hazards were computed as continuous functions. RESULTS: Each clinical risk factor contributed to fracture probability. In the absence of BMD, hip fracture probability in women with a fixed BMI (25 kg/m²) ranged from 0.2% at the age of 50 years for women without CRFs to 22% at the age of 80 years with a parental history of hip fracture (approximately 100-fold range). In men, the probabilities were lower, as was the range (0.1 to 11% in the examples above). For a major osteoporotic fracture the probabilities ranged from 3.5% to 31% in women, and from 2.8% to 15% in men in the examples above. The presence of one or more risk factors increased probabilities in an incremental manner. The differences in probabilities between men and women were comparable at any given T-score and age, except in the elderly, where probabilities were higher in women than in men due to the higher mortality of the latter. CONCLUSION: The models provide a framework which enhances the assessment of fracture risk in both men and women by the integration of clinical risk factors alone and/or in combination with BMD.
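The construction of a ten-year probability from continuous fracture and death hazards amounts to a competing-risks integral, P(10) = ∫₀¹⁰ h_f(t) exp(−∫₀ᵗ [h_f(u) + h_d(u)] du) dt. The sketch below evaluates this integral numerically with purely illustrative constant hazards; it is not the calibrated FRAX hazard model, which makes both hazards functions of age, sex, BMI, the CRFs and, optionally, BMD.

```python
# Sketch of a ten-year fracture probability with death as a competing risk.
# The hazard functions passed in below are illustrative placeholders only.
import numpy as np

def ten_year_probability(hazard_fracture, hazard_death, years=10.0, n=2000):
    """P = int_0^T h_f(t) * S(t) dt, with S(t) = exp(-int_0^t (h_f + h_d) du),
    evaluated with trapezoidal rules on a uniform time grid."""
    t = np.linspace(0.0, years, n)
    h_f = hazard_fracture(t)
    h_d = hazard_death(t)
    h_total = h_f + h_d
    dt = t[1] - t[0]
    # cumulative hazard of leaving the "alive and fracture-free" state
    cum_hazard = np.concatenate(
        ([0.0], np.cumsum(0.5 * (h_total[1:] + h_total[:-1])) * dt))
    survival = np.exp(-cum_hazard)
    return np.trapz(h_f * survival, t)

# Example with assumed constant hazards (events per person-year):
p10 = ten_year_probability(lambda t: np.full_like(t, 0.01),   # fracture hazard (assumed)
                           lambda t: np.full_like(t, 0.03))   # death hazard (assumed)
print(f"ten-year fracture probability ~ {100 * p10:.1f}%")
```

The competing-risk structure is what makes the death hazard matter: a higher death hazard shortens the expected time at risk and therefore lowers the ten-year fracture probability, consistent with the sex differences noted above.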