
    Assessing Spiritual Development: Reflections on Building a Community Measure

    Measuring a complex and theologically challenging concept like spiritual formation can be daunting. This article describes the multiyear process and methodology used to construct a measure that demonstrated reasonably high levels of reliability, validity, and usability. The article also describes the many challenges in developing this type of measure and strategies for overcoming them. Reflections on how such a measure might be helpful, as well as potentially challenging, to churches and pastors are provided in the closing section.

    Aerodynamic Analysis of Lattice Grid Fins in Transonic Flow

    Lattice grid fins have been studied for missile tail control for several years. A lattice grid fin is an unconventional missile control surface comprising an outer frame supported by an inner lattice grid of lifting surfaces. This fin design offers favorable lift characteristics at high angles of attack as well as nearly zero hinge moments, allowing the use of small, light actuators. In addition, grid fins promise good storability for potential tube-launched and internal-carriage dispenser-launched applications. Their drawbacks are high drag and potentially poor radar cross section performance. Current research at the United States Air Force's Aeroballistic Research Facility (ARF) at Eglin Air Force Base in Florida has indicated that there is a critical transonic Mach number at which normal shock waves are believed to be present within some of the grid cells. At this Mach number, there is a dynamic instability with severe variations in the pitch moment coefficient. A computational fluid dynamics (CFD) study was conducted to investigate these findings and elucidate the flowfield in the grid fin region. The missile was modeled numerically in Gridgen and computational tests were run in Fluent. Finally, another fin configuration was developed that produced less drag and similar dynamic stability to the other lattice grid fin configurations tested.
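
    As a rough illustration of why a normal shock appearing inside a grid cell changes the loading so abruptly, the minimal sketch below evaluates the standard normal-shock jump relations for air (gamma = 1.4) at a few near-sonic in-cell Mach numbers. The Mach numbers are assumed values chosen for illustration only; they are not taken from the ARF data or the CFD runs described above.

```python
# Sketch: normal-shock jump relations (gamma = 1.4), illustrating the abrupt
# pressure rise a grid-fin cell would see once its local flow exceeds Mach 1.
# The upstream Mach numbers below are hypothetical, not ARF measurements.

GAMMA = 1.4

def normal_shock(m1: float) -> tuple[float, float]:
    """Return (M2, p2/p1) across a normal shock with upstream Mach number m1 > 1."""
    m2_sq = (1 + 0.5 * (GAMMA - 1) * m1**2) / (GAMMA * m1**2 - 0.5 * (GAMMA - 1))
    p_ratio = 1 + 2 * GAMMA / (GAMMA + 1) * (m1**2 - 1)
    return m2_sq**0.5, p_ratio

for m1 in (1.05, 1.10, 1.20):  # plausible in-cell Mach numbers near the critical freestream Mach
    m2, pr = normal_shock(m1)
    print(f"M1 = {m1:.2f} -> M2 = {m2:.3f}, p2/p1 = {pr:.3f}")
```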

    Incarnational evangelism: developing a culturally sensitive model to engage people with the gospel of Jesus Christ


    Doctor of Philosophy

    New hydrogel-based micropressure sensor arrays for use in chemical sensing, physiological monitoring, and medical diagnostics are developed and demonstrated. This sensor technology provides reliable, linear, and accurate measurements of hydrogel swelling pressures, which are a function of ambient chemical concentrations. For the first time, perforations were implemented in the pressure sensors' piezoresistive diaphragms, used to simultaneously increase sensor sensitivity and permit diffusion of analytes into the hydrogel cavity. It was shown through analytical and numerical (finite element) methods that pore shape, location, and size can be used to modify the diaphragm mechanics and concentrate stress within the piezoresistors, thus improving electrical output (sensitivity). An optimized pore pattern was chosen based on these numerical calculations. Fabrication used a 14-step semiconductor process combining potassium hydroxide (KOH) etching and deep reactive ion etching (DRIE) to create the perforations. The 2×2 sensor arrays measure approximately 3 × 5 mm² and are used to measure full-scale pressures of 50, 25, and 5 kPa, specifications defined by the swelling pressures of the ionic-strength-, pH-, and glucose-specific hydrogels targeted in this work. Initial characterization of the sensor arrays was performed using a custom-built bulge testing apparatus that simultaneously measured deflection (optical profilometry), pressure, and electrical output. The new perforated-diaphragm sensors were found to be fully functional, with sensitivities ranging from 23 to 252 μV/V-kPa and full-scale output (FSO) ranging from 5 to 80 mV. To demonstrate proof of concept, hydrogels sensitive to changes in ionic strength were synthesized using hydroxypropyl-methacrylate (HPMA), N,N-dimethylaminoethyl-methacrylate (DMA) and a tetra-ethyleneglycol-dimethacrylate (TEGDMA) crosslinker. This hydrogel quickly and reversibly swells when placed in physiological buffer solutions (PBS) with ionic strengths ranging from 0.025 to 0.15 M. Chemical testing showed that sensors with perforated diaphragms have higher sensitivity than those with solid diaphragms, with sensitivities ranging from 53.3±6.5 to 271.47±27.53 mV/V-M, depending on diaphragm size. Additionally, recent experiments show that sensors utilizing ultraviolet (UV) polymerized glucose-sensitive hydrogels respond reversibly to physiologically relevant glucose concentrations from 0 to 20 mM.
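
    To make the quoted sensitivity figures concrete, the minimal sketch below converts a piezoresistive bridge reading into a swelling pressure, assuming a linear response as reported for these sensors. The sensitivity and excitation voltage are assumed placeholder values within the quoted 23-252 μV/V-kPa range, not a calibration of any actual device.

```python
# Sketch: back out hydrogel swelling pressure from a piezoresistive bridge reading.
# Sensitivity and excitation are illustrative placeholders, not a device calibration.

SENSITIVITY_UV_PER_V_KPA = 120.0   # assumed sensitivity, uV per volt of excitation per kPa
EXCITATION_V = 3.3                 # assumed bridge excitation voltage

def pressure_kpa(bridge_output_mv: float) -> float:
    """Convert differential bridge output (mV) to swelling pressure (kPa),
    assuming the linear response reported for these sensors."""
    output_uv = bridge_output_mv * 1000.0
    return output_uv / (SENSITIVITY_UV_PER_V_KPA * EXCITATION_V)

print(pressure_kpa(10.0))  # a 10 mV reading -> ~25.3 kPa under these assumptions
```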

    Wireless local area network in a prehospital environment

    BACKGROUND: Wireless local area networks (WLANs) are considered the next generation of clinical data network. They open the possibility of capturing clinical data in a prehospital setting (e.g., a patient's home) using various devices, such as personal digital assistants, laptops, digital electrocardiogram (EKG) machines, and even cellular phones, and transmitting the captured data to a physician or hospital. The transmission rate is crucial to the applicability of the technology in the prehospital setting. METHODS: We created two separate WLANs to simulate a virtual local area network environment such as a patient's home or an emergency room (ER). The effects of different methods of data transmission, the number of clients, and roaming among different access points on the file transfer rate were determined. RESULTS: The results suggest that it is feasible to transfer small files, such as patient demographics and EKG data, from the patient's home to the ER at a reasonable speed. Encryption, user control, and access control were implemented, and the results are discussed. CONCLUSIONS: Implementing a WLAN with a centrally managed, multiple-layer access control server is the key to ensuring its security and accessibility. Future studies should focus on product capacity, speed, compatibility, interoperability, and security management.
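
    As a back-of-the-envelope illustration of why small files such as demographics and EKG traces transfer quickly, the sketch below estimates transfer time from file size and effective throughput. The file size and link rate are assumed figures, not the measurements reported in the study.

```python
# Sketch: estimate transfer time for a small prehospital record (demographics + EKG)
# over a WLAN link. File size and effective throughput are assumed, not measured values.

def transfer_time_s(file_size_kb: float, throughput_mbps: float) -> float:
    """Seconds to move file_size_kb kilobytes at an effective throughput in Mbit/s."""
    bits = file_size_kb * 1024 * 8
    return bits / (throughput_mbps * 1_000_000)

# A 12-lead EKG export of ~100 kB over an effective 2 Mbit/s wireless link:
print(f"{transfer_time_s(100, 2):.2f} s")   # ~0.41 s under these assumptions
```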

    High speed wafer scale bulge testing for the determination of thin film mechanical properties

    A wafer-scale bulge testing system has been constructed to study the mechanical properties of thin films and microstructures. The custom-built test stage is coupled with a pressure regulation system and an optical profilometer, which provides high-accuracy three-dimensional topographic images collected on the time scale of seconds. Membrane deflection measurements can be made on the wafer scale (50-150 mm) with up to nanometer-scale vertical resolution. Gauge pressures up to 689 kPa (100 psi) are controlled using an electronic regulator with an accuracy of approximately 0.344 kPa (0.05 psi). Initial testing was performed on square diaphragms 350, 550, and 1200 µm in width, comprised of 720 ± 10 nm thick low-pressure chemical vapor deposited silicon nitride with ~20 nm of e-beam evaporated aluminum. These initial experiments were focused on measuring the system limitations and determining the range of deflections and pressures that can be accurately measured and controlled. Gauge pressures from 0 to ~8.3 kPa (1.2 psi) were initially applied to the bottom side of the diaphragms and their deflection was subsequently measured. The overall pressure resolution of the system is good (~350 Pa), but small fluctuations existed at pressures below 5 kPa, leading to a larger standard deviation between deflection measurements. Analytical calculations and finite element analysis deflections closely matched those measured empirically. Using an analytical solution that relates pressure to deflection for square diaphragms, the Young's modulus of the films was estimated assuming a Poisson's ratio of ν = 0.25. Calculations to determine Young's modulus for the smaller diaphragms proved difficult because the pressure-deflection relationship remained in the linear regime over the tested pressure range; hence, the calculations carry large error when used to estimate the Young's modulus of the smaller membranes. The deflections of three 1200 × 1200 µm² Si3N4−x membranes were therefore measured at increased pressures (>25 kPa) to increase nonlinearity and better determine Young's modulus. These pressure-deflection data were fit to an analytical solution and Young's modulus was estimated to be 257 ± 3 GPa, close to values previously reported in the literature.
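
    The fit described above can be illustrated with a minimal sketch. It assumes a commonly cited square-membrane load-deflection form, P = c1*t*sigma0*h/a^2 + c2*t*E*h^3/((1-nu)*a^4), with approximate coefficients (c1 ~ 3.39, c2 ~ 1.98*(1 - 0.585*nu)); the exact solution and coefficients used in the article may differ, and the data points are synthetic values in a plausible range, not the reported measurements.

```python
# Sketch: extract residual stress and Young's modulus from square-membrane
# pressure-deflection (bulge) data via linear least squares in the basis {h, h^3}.
# The load-deflection form and coefficients are a commonly cited approximation;
# the data below are synthetic and illustrative, not the reported measurements.
import numpy as np

t = 720e-9          # film thickness, m (from the abstract)
a = 600e-6          # half side length of a 1200 um membrane, m
nu = 0.25           # assumed Poisson's ratio, as in the abstract
c1, c2 = 3.39, 1.98 * (1 - 0.585 * nu)

h = np.array([5.0, 8.0, 10.0, 12.0, 15.0]) * 1e-6      # deflection, m (illustrative)
P = np.array([3.8, 7.1, 10.0, 13.7, 21.0]) * 1e3       # pressure, Pa (illustrative)

# P = A*h + B*h^3, solved by least squares
X = np.column_stack([h, h**3])
(A, B), *_ = np.linalg.lstsq(X, P, rcond=None)

sigma0 = A * a**2 / (c1 * t)            # residual stress from the linear term
E = B * (1 - nu) * a**4 / (c2 * t)      # Young's modulus from the cubic term
print(f"residual stress ~ {sigma0/1e6:.0f} MPa, Young's modulus ~ {E/1e9:.0f} GPa")
```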

    Artificial intelligence methods for security and cyber security systems

    This research concerns threat analysis and countermeasures employing Artificial Intelligence (AI) methods within the civilian domain, where safety and mission-critical aspects are essential. AI faces challenges of repeatable determinism and decision explanation. This research proposed methods for dense and convolutional networks that provide repeatable determinism. In dense networks, the proposed alternative method matched the performance of the existing scheme while producing more structured learnt weights. In convolutional networks, the proposed method also gave earlier learning and higher accuracy; when demonstrated on colour image classification, first-epoch accuracy improved to 67%, from 29% with the existing scheme. Examined in transfer learning with the Fast Gradient Sign Method (FGSM) as an analytical way to control the amount of distortion, a key finding was that the proposed method retained significantly more of the learnt model, with 31% accuracy instead of 9%. The research also proposed a threat analysis method with set mappings and first-principles analytical steps applied to a symbolic AI method using an algebraic expert system with virtualized neurons. The neural expert system method demonstrated the infilling of parameters by calculating beamwidths under varying uncertainty about the antenna type. When combined with a proposed formula extraction method, it provides the potential for machine learning of new rules as a neuro-symbolic AI method. The proposed method uses extra weights allocated to neuron input value ranges as activation strengths; this simplifies the learnt representation and reduces model depth, lessening the need for significant dropout. Finally, an image classification method for emitter identification is proposed, together with a synthetic dataset generation method, and achieves 99.8% accuracy in distinguishing fourteen radar emission modes with high ambiguity between them. That method could serve as a mechanism to recognize non-threat civilian radars, raising a threat alert when deviations from those civilian emitters are detected.
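
    For readers unfamiliar with FGSM, the minimal sketch below shows the core step: perturbing an input by a chosen budget epsilon in the direction of the loss gradient's sign. A toy linear model and random toy data stand in for the networks examined in the thesis; nothing here reproduces its architectures, datasets, or results.

```python
# Sketch: the Fast Gradient Sign Method as a controlled way to distort an input
# by a chosen amount epsilon. Toy linear model and data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)          # fixed seed, in keeping with the determinism theme
W = rng.normal(size=(10, 3))            # toy weights: 10 features -> 3 classes
x = rng.normal(size=10)                 # toy input
y = 1                                   # assumed true class label

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Cross-entropy gradient w.r.t. the input for a linear model: dL/dx = W @ (p - onehot(y))
p = softmax(W.T @ x)
onehot = np.eye(3)[y]
grad_x = W @ (p - onehot)

epsilon = 0.1                            # distortion budget
x_adv = x + epsilon * np.sign(grad_x)    # FGSM step: shift each feature by +/- epsilon

print("clean prediction:", softmax(W.T @ x).argmax())
print("perturbed prediction:", softmax(W.T @ x_adv).argmax())
```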

    Repeatable determinism using non-random weight initialisations in smart city applications of deep learning

    Modern smart city applications must be safe, reliable and sustainable, and so they need to use machine-learning mechanisms in a way that is consistent with public liability. Machine and deep learning networks are therefore required to be safe and deterministic both in their development and in their deployment. Non-random weight initialisation schemes make a network more deterministic across learning sessions, a desirable property in safety-critical systems where deep learning is applied to smart city applications and where public liability is a concern. The paper uses a variety of schemes over number ranges and gradients and achieved a 98.09% accuracy figure, 0.126% higher than the original random number scheme at 97.964%. The paper highlights that in this case it is the number range, and not the gradient, that most strongly affects the achieved accuracy, although the number range can couple with the activation functions used. Unexpectedly, an effect of numerical instability was discovered from run to run when training on a multi-core CPU. The paper also shows that consistent deterministic results can be enforced on a multi-core CPU by defining atomic critical code regions, which aids repeatable information assurance in model fitting (or learning sessions). That enforcement of consistent, repeatable determinism also benefits accuracy, even for the random schemes, with a highest score of 98.29%, 0.326% higher than the baseline. In addition, the non-random initialisation scheme causes weight arrangements after learning to be more structured, which has benefits for validation in safety-critical applications.
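
    A minimal sketch of the idea of non-random initialisation is given below: a layer's weights are filled with linearly spaced values over a fixed number range, so every run starts from identical weights. The linear-range scheme and the range used here are illustrative assumptions; the specific schemes, ranges and gradients evaluated in the paper may differ.

```python
# Sketch: one possible non-random (deterministic) weight initialisation scheme,
# illustrating the removal of run-to-run randomness. Not the paper's exact scheme.
import numpy as np

def linear_range_init(fan_in: int, fan_out: int,
                      low: float = -0.5, high: float = 0.5) -> np.ndarray:
    """Deterministic initialiser: identical weights on every run for the same shape and range."""
    values = np.linspace(low, high, num=fan_in * fan_out)
    return values.reshape(fan_in, fan_out)

w1 = linear_range_init(784, 128)
w2 = linear_range_init(784, 128)
assert np.array_equal(w1, w2)   # repeatable by construction, unlike a fresh random draw
print(w1.shape, w1.min(), w1.max())
```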

    An algebraic expert system with neural network concepts for cyber, big data and data migration

    This paper describes a machine-assistance approach to grading decisions for values that might be missing or need validation, using a mathematical, algebraic form of an expert system instead of the traditional textual or logic forms, and builds it as a neural-network-style computational graph. The expert system is structured into a neural-network-like format of input, hidden and output layers, which provides a structured approach to knowledge-base organization and a useful abstraction for reuse in data migration applications in big data, cyber and relational databases. The approach is further enhanced with a Bayesian probability tree that grades the confidences of value probabilities, rather than the traditional grading of rule probabilities, and estimates the most probable value in light of all the evidence presented. This is groundwork for a machine learning (ML) expert system approach in a form that is closer to a neural network node structure.
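
    To make the layered structure concrete, the toy sketch below arranges facts as an input layer, rules as a hidden layer, and candidate output values graded by the probabilities of the evidence supporting them. The facts, rules and numbers are invented for illustration, and the simple noisy-OR combination stands in for (and is not necessarily) the Bayesian probability tree grading used in the paper.

```python
# Sketch: a toy "expert system as layered graph". Input facts feed rule nodes
# (the hidden layer), and candidate output values are graded by the combined
# probability of their supporting evidence. All names and numbers are invented.

# Input layer: observed facts with confidences (probability the fact is correct).
facts = {"field_format_ok": 0.95, "source_system_is_legacy": 0.80}

# Hidden layer: rules mapping facts to candidate values, each with a rule weight.
rules = [
    {"needs": ["field_format_ok"], "value": "VALID", "weight": 0.9},
    {"needs": ["source_system_is_legacy"], "value": "NEEDS_REVIEW", "weight": 0.6},
]

# Output layer: grade each candidate value, combining independent supports
# as 1 - prod(1 - p), i.e. a simple noisy-OR.
support: dict[str, float] = {}
for rule in rules:
    p = rule["weight"]
    for fact in rule["needs"]:
        p *= facts[fact]
    support[rule["value"]] = 1 - (1 - support.get(rule["value"], 0.0)) * (1 - p)

best = max(support, key=support.get)
print(support, "-> most probable value:", best)
```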

    Non-random weight initialisation in deep learning networks for repeatable determinism

    This research examines the change in the weight values of deep learning networks after learning. The experiments require measurements and comparisons against a stable set of known weights and biases before and after learning, so that comparisons after learning are repeatable and the experiment is controlled. As such, the currently accepted schemes of random weight initialisation may need to be replaced by deterministic rather than stochastic schemes, so that initialisation is not itself a source of run-to-run variation. This paper examines the viability of non-random weight initialisation schemes, used in place of the random weight initialisations of an established, well-understood test case. Non-random weight initialisation schemes may make a network more deterministic across learning sessions, which is a desirable property in mission- and safety-critical systems. The paper uses a variety of schemes over number ranges and gradients and achieves a 97.97% accuracy figure, just 0.18% less than the original random number scheme at 98.05%. The paper highlights that in this case it may be the number range, and not the gradient, that most strongly affects the achieved accuracy, although there may be a coupling of number range with the activation functions used. Unexpectedly, an effect of numerical instability was discovered from run to run when training on a multi-core CPU. The paper also shows the enforcement of consistent deterministic results on a multi-core CPU by defining atomic critical code regions, aiding repeatable Information Assurance (IA) in model fitting (or learning sessions).
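
    The controlled before/after comparison described above can be illustrated with a minimal sketch: record a deterministic starting weight matrix, run a stand-in learning step on fixed data, and measure the per-weight change. The tiny model and single update are illustrative assumptions, not the paper's test case; only the comparison workflow is shown.

```python
# Sketch: controlled before/after weight comparison from a deterministic start.
# With identical starting weights and fixed data, the measured change in weights
# is the same on every run, so runs are directly comparable. Illustrative only.
import numpy as np

def deterministic_init(shape, low=-0.5, high=0.5):
    return np.linspace(low, high, num=int(np.prod(shape))).reshape(shape)

w_before = deterministic_init((4, 3))

# Stand-in for a learning session: one gradient-style update on fixed data.
x = np.array([0.2, -0.1, 0.4, 0.3])
target = np.array([1.0, 0.0, 0.0])
pred = x @ w_before
grad = np.outer(x, pred - target)
w_after = w_before - 0.1 * grad

delta = w_after - w_before
print("mean |delta|:", np.abs(delta).mean())   # identical on every run by construction
```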