
Scholarly Materials And Research @ Georgia Tech
    71,091 research outputs found

    Developing a Recommendation-Based Application to Help Endocrinologists Treat Type II Diabetes Mellitus

    Diabetes mellitus type II is a disease characterized by abnormally high levels of glucose in the bloodstream (hyperglycemia) due to decreased insulin secretion, insulin resistance, or both. It affects approximately 425 million adults worldwide and is the 7th most common chronic condition according to the CDC (Figure 1).[1] Patients with this disease typically experience increased urination, increased thirst, and fatigue, and can be vulnerable to many types of infections. Patients with type II diabetes see diabetes specialists and endocrinologists to treat their disease effectively. Currently, however, there is a massive shortage of endocrinologists in the United States, driven by the growing burden of chronic diseases such as diabetes and osteoporosis.[2] In one study, the majority of endocrinologists surveyed believed the process of treating diabetes is difficult for four reasons: the shortage of physicians, constantly evolving diabetes research, rapidly changing medication guidelines, and the rate at which medications are being added to the market.[3] Another major problem in the diabetes community is the risk of potentially inappropriate medications (PIMs), defined as prescribed medications that carry a greater risk of potentially severe adverse effects; 74% of elderly patients with type II diabetes are prescribed at least one PIM when hospitalized.[4] The studies conducted by Healy et al. and Sharma et al. reveal that the process of treating type II diabetes is difficult for three main reasons: the shortage of endocrinologists, rapidly evolving medication recommendations by diabetes associations, and the health risk to elderly diabetic patients due to PIMs. There is a growing need for technology that assists endocrinologists in prescribing medication, adjusts to the evolving recommendations of the American Diabetes Association, and uses patient biomarkers along with other factors to recommend appropriate medications for patients.

    Undergraduate
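    To make the intended architecture concrete, here is a minimal, entirely hypothetical sketch of the rule-driven recommendation step such an application might use: guideline thresholds are kept as data so they can be updated as recommendations evolve, and PIM risks are flagged for elderly patients. Every drug name, threshold, and flag below is an illustrative placeholder, not medical guidance and not the authors' actual system.

    ```python
    # Hypothetical sketch only: rules live in a data table so they can be
    # swapped out as guidelines evolve; all values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Patient:
        age: int
        a1c: float      # hemoglobin A1c, %
        egfr: float     # kidney function, mL/min/1.73 m^2

    # Each rule: (predicate on biomarkers, candidate medication class, PIM-in-elderly flag).
    RULES = [
        (lambda p: p.a1c >= 6.5 and p.egfr >= 30, "metformin", False),
        (lambda p: p.a1c >= 7.5, "sulfonylurea", True),    # example PIM flag only
        (lambda p: p.a1c >= 7.5, "SGLT2 inhibitor", False),
    ]

    def recommend(patient: Patient, elderly_cutoff: int = 65) -> list[str]:
        """Return candidate classes, annotating potential PIMs for elderly patients."""
        out = []
        for applies, drug, pim in RULES:
            if applies(patient):
                flag = (" (review: PIM risk in elderly)"
                        if pim and patient.age >= elderly_cutoff else "")
                out.append(drug + flag)
        return out

    print(recommend(Patient(age=72, a1c=8.1, egfr=55)))
    ```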

    Data Tiling for Sparse Computation

    Many real-world datasets contain internal relationships. Efficient analysis of this relationship data is crucial for important problems including genome alignment, network vulnerability analysis, and ranking web pages, among others. Such relationship data is frequently sparse, and analysis of it is called sparse computation. We demonstrate that the important technique of data tiling is more powerful than previously known by broadening its application space. We focus on three important sparse computation areas: graph analysis, linear algebra, and bioinformatics. We demonstrate data tiling's power by addressing key issues and providing significant improvements, to both runtime and solution quality, in each area. For graph analysis, we focus on fast data tiling techniques that can produce well-structured tiles, and we demonstrate theoretical hardness results. These tiles are suitable for graph problems as they reduce data movement and ultimately improve end-to-end runtime performance. For linear algebra, we introduce a new cache-aware tiling technique and apply it to the key kernel of sparse matrix by sparse matrix multiplication. This technique tiles the second input matrix and then uses a small summary matrix to guide access to the tiles during computation. Our approach results in the fastest known implementation across three distinct CPU architectures. In bioinformatics, we develop a tiling-based de novo genome assembly pipeline. We start with reads and build either a graph or hypergraph that captures the internal relationships between reads. This is then tiled to minimize connections while maintaining balance. We then treat each resulting tile independently as the input to an existing shared-memory assembler. Our pipeline improves existing state-of-the-art de novo genome assemblers, bringing both runtime and quality improvements on both real-world and simulated datasets.

    Ph.D.
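    As a rough illustration of the linear-algebra idea, the sketch below (an assumption on our part, not the thesis's CPU kernels) tiles the second operand of a sparse matrix by sparse matrix product into column blocks and builds a small boolean summary matrix recording which row blocks each tile touches, so a kernel could skip tiles it cannot use.

    ```python
    # Minimal sketch of tiling the second operand in sparse C = A @ B,
    # with a boolean summary matrix over (row block, tile) occupancy.
    import numpy as np
    import scipy.sparse as sp

    def tiled_spgemm(A, B, tile_cols=256, row_block=256):
        A, B = A.tocsr(), B.tocsc()
        tiles = [B[:, j:j + tile_cols].tocsr() for j in range(0, B.shape[1], tile_cols)]
        # summary[i, t] is True if row block i of B has nonzeros in tile t.
        nblocks = (B.shape[0] + row_block - 1) // row_block
        summary = np.zeros((nblocks, len(tiles)), dtype=bool)
        for t, tile in enumerate(tiles):
            rows = tile.nonzero()[0]
            summary[np.unique(rows // row_block), t] = True
        # Multiply tile by tile; a production kernel would consult `summary`
        # per row of A to skip tiles that cannot contribute.
        parts = [A @ tile for tile in tiles]
        return sp.hstack(parts).tocsr(), summary

    A = sp.random(500, 500, density=0.01, format="csr", random_state=0)
    B = sp.random(500, 500, density=0.01, format="csc", random_state=1)
    C, summary = tiled_spgemm(A, B)
    ```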

    Rotor Fatigue Life Prediction and Design for Revolutionary Vertical Lift Concepts

    Despite recent technological advancements, rotorcraft still lag behind their fixed-wing counterparts in the areas of flight safety and operating cost. Competition with fixed-wing aircraft is difficult for applications where vertical takeoff and landing (VTOL) capabilities are not required. Both shortcomings must be addressed to ensure the continued competitiveness of vertical lift aircraft, especially in the context of new military and civilian rotorcraft programs such as Future Vertical Lift and urban air mobility, which will require orders-of-magnitude improvements in reliability, availability, maintainability, and cost (RAM-C) metrics. Lifecycle costs and accident rates are strongly driven by the scheduled replacement or failure of flight-critical components. Rotor blades are life-limited to ensure that they are replaced before fatigue damage exceeds critical levels, but purchasing new blades is extremely costly. Despite aggressive component replacement times, fatigue failure of rotor blades continues to account for a significant proportion of in-flight accidents. Fatigue damage in rotorcraft is unavoidable due to the physics of rotary-wing flight, but new engineering solutions to improve fatigue life in the rotor system could reduce rotorcraft operating costs and improve flight safety simultaneously. Existing rotorcraft design methods treat fatigue life as a consequence, rather than a driver, of design. A literature review of rotorcraft design and fatigue design methods is conducted to identify the relevant strengths and weaknesses of traditional processes. In rotorcraft design, physics-based rotor design frameworks are focused primarily on fundamental performance analysis and do not consider secondary characteristics such as reliability or fatigue life. There is a missing link between comprehensive rotor design frameworks and conceptual design tools that prevents physics-based assessment of RAM-C metrics in the early design stages. Traditional fatigue design methods, such as the safe life methodology, which applies the Miner's rule fatigue life prediction model to rotorcraft components, are hindered by a lack of physics-based capabilities in the early design stages. An accurate fatigue life quantification may not be available until the design is frozen and prototypes are flying. These methods are strongly dependent on extrapolations built on historical fatigue data, and they make use of deterministic safety factors based on organizational experience to ensure fatigue reliability, which can lead to over-engineering or unreliable predictions when applied to revolutionary vertical lift aircraft. A new preliminary fatigue design methodology is designed to address these concerns. This methodology is based on the traditional safe life methodology, but replaces several key elements with modern tools, techniques, and models. Three research questions are proposed to investigate, refine, and validate different elements of the methodology. The first research question addresses the need to derive physics-based fatigue load spectra more rapidly than modern comprehensive analysis tools allow. The second investigates the application of different probabilistic reliability solution methods to the fatigue life substantiation problem. The third tests the ability of the preliminary fatigue design methodology to evaluate the relative impact of common preliminary fatigue design variables on the probability of fatigue failure of a conceptual helicopter's rotor blade.
Hypotheses are formulated in response to each research question, and a series of experiments are designed to test those hypotheses. In the first experiment, a multi-disciplinary analysis (MDA) environment combining the rotorcraft performance code NDARC, the comprehensive code RCAS, and the beam analysis program VABS is developed to provide accurate physics-based predictions of rotor blade stress in arbitrary flight conditions. A conceptual single main rotor transport helicopter based on the UH-60A Black Hawk is implemented within the MDA to serve as a test case. To account for the computational expense of the MDA, surrogate modeling techniques, such as response surface equations, artificial neural networks, and Gaussian process models, are used to approximate the stress response across the flight envelope of the transport helicopter. The predictive power and learning rates of the various surrogate modeling techniques are compared to determine which is the most suitable for predicting fatigue stress. Ultimately, shallow artificial neural networks are found to provide the best compromise between accuracy, training expense, and uncertainty quantification capabilities. Next, structural reliability solution methods are investigated as a means to produce high-reliability fatigue life estimates without requiring deterministic safety factors. The Miner's sum fatigue life prediction model is reformulated as a structural reliability problem. Analytical solutions (FORM and SORM), sampling solutions (Monte Carlo, quasi-Monte Carlo, Latin hypercube sampling, and directional simulation), and hybrid solutions (importance sampling) are compared using a notional fatigue life problem. These results are validated using a realistic helicopter fatigue life problem which incorporates the fatigue stress surrogate model and is based on a probabilistic definition of the mission spectrum to account for fleet-wide usage variations. Monte Carlo simulation is found to provide the best performance and accuracy when compared to the exact solution. Finally, the capabilities of the preliminary fatigue design methodology are demonstrated using a series of hypothetical fatigue design exercises. First, the methodology is used to predict the impact of rotor blade box spar web thickness on the probability of fatigue failure. Modest increases in web thickness are found to reduce the probability of failure, but larger increases cause structural instability of the rotor blade in certain flight regimes, which increases the fatigue damage rate. Next, a similar study tests the impact of tail rotor cant angle. Positive tail rotor cant is found to improve fatigue life in cases where the center of gravity (CG) of the vehicle is strongly biased towards the tail, but is detrimental if the CG is closer to the main rotor hub station line. Last, the effect of design mission requirements such as rate of climb and cruising airspeed is studied. The methodology is not sensitive enough to predict the subtle impact of changes to rate of climb, but does prove that a slower cruising airspeed will decrease the probability of fatigue failure of the main rotor blade. The methodology is proven to be capable of quantifying the influence of rotor blade design variables, vehicle layout and configuration, and certain design mission requirements, paving the way for implementation in a rotorcraft design framework.
This thesis ends with suggestions for future work to address the most significant limitations of this research, as well as descriptions of the tasks required to apply the methodology to conventional rotorcraft or conceptual revolutionary vertical lift aircraft.

Ph.D.
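For a concrete sense of the reliability reformulation, here is a minimal Monte Carlo sketch of Miner's rule under uncertainty: sample material scatter and fleet-wide usage variation, accumulate the Miner's sum, and count exceedances. The S-N parameters, mission spectrum, and distributions are illustrative assumptions, not the thesis values or its NDARC/RCAS/VABS pipeline.

```python
# Miner's rule recast as a structural reliability problem, solved by
# plain Monte Carlo. All numbers are notional placeholders.
import numpy as np

rng = np.random.default_rng(0)
M = 100_000                                   # Monte Carlo samples

# Notional mission spectrum: cycles per hour at each stress level (MPa),
# with lognormal scatter to mimic fleet-wide usage variation.
levels = np.array([200.0, 300.0, 400.0])
cycles_per_hr = np.array([3000.0, 600.0, 50.0]) * rng.lognormal(0.0, 0.2, (M, 3))

# Basquin-form S-N curve N = C * S**(-m), with random material scatter on C.
m = 4.0
C = 1e17 * rng.lognormal(0.0, 0.3, (M, 1))
N_allow = C * levels**(-m)                    # allowable cycles at each level

hours = 5000.0                                # candidate retirement life
damage = (hours * cycles_per_hr / N_allow).sum(axis=1)   # Miner's sum
p_fail = (damage >= 1.0).mean()               # probability of fatigue failure
print(f"P(failure) at {hours:.0f} h: {p_fail:.4f}")
```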

    Oracle Guided Image Synthesis with Relative Queries

    Isolating and controlling specific features in the outputs of generative models in a user-friendly way is a difficult and open-ended problem. We develop techniques that allow a user to generate an image they are envisioning in their head by answering a sequence of relative queries of the form "do you prefer image a or image b?" Our framework consists of a conditional VAE that uses the collected relative queries to partition the latent space into preference-relevant and non-preference-relevant features. We then use the user's responses to relative queries to determine the preference-relevant features that correspond to their envisioned output image. Additionally, we develop techniques for modeling the uncertainty in images' predicted preference-relevant features, allowing our framework to generalize to scenarios in which the relative query training set contains noise.

    Undergraduate
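    To illustrate the kind of query loop this enables, here is a hedged sketch that maintains a particle belief over the user's preference-relevant latent features and reweights it after each "a or b" answer using a Bradley-Terry-style choice model. The inference scheme, dimensions, and parameters are our own assumptions for illustration, not the paper's exact method.

    ```python
    # Particle reweighting from relative preference answers (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    D = 8                                    # preference-relevant latent dims (assumed)
    particles = rng.normal(size=(5000, D))   # samples from the current belief
    weights = np.full(len(particles), 1.0 / len(particles))

    def answer_update(weights, za, zb, chose_a, beta=2.0):
        """Reweight belief particles after the user prefers image a (or b)."""
        da = np.linalg.norm(particles - za, axis=1)   # distance to image a's features
        db = np.linalg.norm(particles - zb, axis=1)
        p_a = 1.0 / (1.0 + np.exp(beta * (da - db)))  # closer candidate more likely chosen
        w = weights * (p_a if chose_a else 1.0 - p_a)
        return w / w.sum()

    # One simulated query: the envisioned image sits near the origin, image a is closer.
    weights = answer_update(weights, za=np.full(D, 0.1), zb=np.full(D, 1.0), chose_a=True)
    estimate = weights @ particles            # posterior-mean feature estimate
    print("estimated preference-relevant features:", np.round(estimate[:3], 3))
    ```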

    The Development of Energy-Efficient Wastewater Treatment: Electrochemical Oxidation and PFACs Liquid-Liquid Extraction

    Electrochemical advanced oxidation processes (EAOPs) are new, environmentally friendly techniques that can oxidize organic compounds by direct oxidation and indirect oxidation (via the hydroxyl radical HO•) at the anode. The benefits of EAOPs are: 1) they are driven by electric power, and the stoichiometric connection between power consumption and pollutant removal in EAOPs is almost linear, so they are easy to control; 2) they can generate the hydroxyl radical HO• without any chemical additives. However, EAOPs also face several problems for industrial applications: 1) since the reaction mostly happens on the electrode surface, mass transfer is the limiting factor in electric efficiency; 2) a high operating potential is required for hydroxyl radical generation, leading to high energy costs; 3) EAOPs lack selectivity among organic compounds. For the third problem, we developed a liquid-liquid extraction to extract and separate a specific class of refractory degradable compounds: PFACs. To improve the EAOPs' energy efficiency, the EAOP system was optimized at three levels: 1) system level: energy recovery system and electric power mode; 2) reactor level: flow-through wire mesh anode; 3) electrode level: electrode material modification. Specifically, a novel EAOP-fuel cell energy recovery system is proposed, and the system performance under varied conditions is summarized. A flow-through multiple-layer wire mesh anode reactor is developed for improved mass transfer and PFACs treatment. A Mn2O3-TiO2 NTAs porous anode is developed for optimized electrode conductivity; the anode is tested by both electrochemical oxidation experiments and advanced characterization methods. A pulsed potential, instead of DC power, is used to drive the EAOP reaction to investigate the effect of frequency and potential amplitude on the oxidation. For the liquid-liquid extraction, ionic liquids were used as extractants for removing PFOA from the aqueous phase, and the optimal extraction conditions are investigated. COSMO-RS, a quantum chemistry-based equilibrium thermodynamics method, is used to screen the ILs for high extraction efficiency. In conclusion, mass transfer is improved by using the porous anode and the multiple-layer wire mesh anode structure, and the impact of mass transfer on the overall oxidation is evaluated through limiting current density analysis. Using the cathodic hydrogen gas for energy recovery offsets the high operating power caused by the high operating voltage. The liquid-liquid extraction is capable of extracting PFACs and, in the future, could be combined with PFACs degradation methods.

    Ph.D.
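    As a back-of-envelope illustration of the limiting current density analysis mentioned above, the sketch below evaluates the standard mass-transfer-limited current i_lim = n F k_m C_bulk; the mass-transfer coefficient, electron count, and concentration are assumed placeholder values, not the measured ones.

    ```python
    # Mass-transfer-limited current density: i_lim = n * F * k_m * C_bulk.
    F = 96485.0      # Faraday constant, C/mol
    n = 2            # electrons transferred per molecule (assumed)
    k_m = 2e-5       # mass-transfer coefficient, m/s (assumed for a flow-through mesh)
    C_bulk = 1.0     # bulk pollutant concentration, mol/m^3 (~1 mM, assumed)

    i_lim = n * F * k_m * C_bulk          # A/m^2
    print(f"limiting current density: {i_lim:.2f} A/m^2")
    ```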

    The impacts of climate variability and change on the physical and social-ecological dynamics of the Kuroshio and North Pacific Transition Zone

    There is growing recognition that climate change is impacting the ocean's western boundary current systems. In the Pacific, the Kuroshio and its offshore Kuroshio-Oyashio Extension (KOE) play a central role in the North Pacific climate and impact the social-ecological dynamics of countries that rely on marine ecosystem services (e.g., fisheries). In this thesis, we have used a combination of observational and modeling approaches to understand how past and projected changes in the physical environment of the KOE impact social-ecological dynamics linked to the fishing industry of Japan and the North Pacific more widely. The thesis is articulated in three chapters. In Chapter 1, we introduce the problem and the main motivation that led us to perform this study. In Chapter 2, we analyze the climate variability and change of the KOE over the historical and future projection period 1920-2100. We perform this task using Coupled Model Intercomparison Project 5 (CMIP5) models and a large ensemble of output runs from the Community Earth System Model (CESM-LE). The reason for also considering the CESM-LE runs is that they make it possible to explore how the variance of the KOE in one model (i.e., a fixed set of dynamics) responds to anthropogenic forcing when compared to the range of natural variability of the CESM-LE model. In this way, we can examine scenarios that extend beyond the period of the observational data. In Chapter 3, we use an Empirical Dynamical Model approach to characterize the joint statistics of the physical and social-ecological environmental system (SEES) that is relevant to climate and fisheries. To define the states of the SEES we use three international fish databases: (1) the Large Marine Ecosystem database (LME, 9,000 fish stocks), (2) the NOAA fishery database referred to as Restricted Access Management (RAM, 300 fish stocks), and (3) the Food and Agriculture Organization database (FAO, 1,400 fish stocks). Among the approaches used to explore the relationship between the KOE's climate and the SEES response, we have developed a Linear Inverse Model (LIM) approach that has been very successful at simulating and predicting the KOE physical climate and its relation to large-scale Pacific dynamics such as the El Niño Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), and others.

    Ph.D.
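    For readers unfamiliar with LIMs, the sketch below shows the standard estimation step: assuming anomalies obey dx/dt = Lx + noise, the dynamics operator L is recovered from the lag-0 and lag-tau covariances as L = log(C(tau) C(0)^-1) / tau. The state vector, lag, and synthetic data are placeholders, not the thesis's KOE fields.

    ```python
    # Standard Linear Inverse Model estimation from lagged covariances.
    import numpy as np
    from scipy.linalg import expm, logm

    def fit_lim(X, tau=1):
        """Estimate the LIM operator L from an anomaly matrix X of shape (time, state)."""
        X = X - X.mean(axis=0)                          # work with anomalies
        C0 = X.T @ X / len(X)                           # lag-0 covariance C(0)
        Ctau = X[tau:].T @ X[:-tau] / (len(X) - tau)    # lag-tau covariance C(tau)
        G = Ctau @ np.linalg.inv(C0)                    # Green's function G(tau)
        return logm(G).real / tau                       # L = log(G) / tau

    # Toy usage on synthetic data; a real application would use KOE anomaly fields.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 4)).cumsum(axis=0) * 0.01
    L = fit_lim(X, tau=1)
    x_forecast = expm(L * 5) @ X[-1]                    # 5-step forecast from the last state
    ```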

    Can Equating Perception Also Equate Working Memory Performance in Young Adults? A Stage Report for an Attempt to Resolve Individual Differences in Working Memory

    Working memory performance differs across individuals; here, we examined the role that initial perceptual processing plays in working memory functioning. We sought to examine whether equating subjects on initial perceptual processing would also equate their working memory performance. We did this using a standard 2-back test with slanted bars as stimuli. Perception was equated by obtaining each subject's 79% threshold for discriminating two slanted bars of different orientations using a 3-down-1-up staircase. At this stage, we found some evidence that subjects equated on perceptual performance performed similarly on the working memory task. A large-scale experiment is needed to generalize the findings.

    Undergraduate
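    As a sketch of the thresholding procedure, the 3-down-1-up staircase below lowers the orientation difference after three consecutive correct responses and raises it after any error, converging near 79.4% correct; the step sizes, trial count, and simulated observer are illustrative assumptions, not the study's parameters.

    ```python
    # 3-down-1-up adaptive staircase with a placeholder simulated observer.
    import random

    def simulated_response(delta):
        """Placeholder observer: more likely correct at larger orientation differences."""
        return random.random() < min(0.99, 0.5 + delta / 30.0)

    def staircase(trials=200, start_delta=20.0, step=0.5, floor=0.5):
        delta, run, reversals, direction = start_delta, 0, [], None
        for _ in range(trials):
            if simulated_response(delta):
                run += 1
                if run == 3:                        # three correct: make it harder
                    run = 0
                    delta = max(floor, delta - step)
                    if direction == "up":
                        reversals.append(delta)
                    direction = "down"
            else:                                   # one error: make it easier
                run = 0
                delta += step
                if direction == "down":
                    reversals.append(delta)
                direction = "up"
        last = reversals[-8:] or [delta]            # average the final reversals
        return sum(last) / len(last)

    print("estimated ~79% orientation threshold (deg):", round(staircase(), 2))
    ```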

    LEEFT with Nano for Water Disinfection

    Presented online via YouTube and in person on February 7, 2023 at 12:00 p.m. in the Marcus Nanotechnology Building, room 1116.

    Xing Xie is an assistant professor and the Carlton S. Wilder Junior Professor in the School of Civil and Environmental Engineering. Xie’s research interests center around the applications of environmental biotechnology and materials science at the nexus of water and energy. He has worked on many projects related to water treatment and reuse, pollutant detection and quantification, and energy and resource recovery.

    Runtime: 54:29 minutes

    Water is a basic human need. Nevertheless, more than 10% of the world’s population lacks access to safe drinking water, and an effective water disinfection method is still not readily accessible to these people. In developed urban areas, water is typically disinfected in a centralized facility through chlorine-based methods that inevitably generate harmful disinfection byproducts. In addition, current water disinfection systems are vulnerable to natural disasters. Next-generation water disinfection should minimize the use of chemicals, the consumption of energy, and the impact on the environment, while having high resilience across different application scenarios. The recently developed water disinfection approach based on locally enhanced electric field treatment (LEEFT) has great potential to transform current water disinfection strategies and systems. LEEFT is a physical treatment process that uses a strong electric field to disrupt cell membranes and thus inactivate pathogens. The electrodes installed in a LEEFT device are typically modified with one-dimensional nanostructures, such that the electric field is greatly enhanced locally near the tips of the nanostructures. LEEFT can potentially be applied at all scales, from portable devices to point-of-use household units and from distributed community-scale treatment clusters to centralized treatment plants. This talk will cover recent progress on the development of the LEEFT technology.

    Surface Reactions of Biomass Derived Oxygenates on Lewis Acidic Metal Oxides

    Lignocellulosic biomass is currently the only renewable source of organic carbon, making it a sustainable feedstock for the production of liquid hydrocarbon fuels. One main challenge in the valorization of biomass is reducing the oxygen content of the starting feedstock while producing high-value chemicals. Using heterogeneous catalysts to convert biomass feedstocks to commodity chemicals is one strategy for this valorization process. Specifically, Lewis acidic metal oxides have shown promise for this upgrading process due to their ability to catalyze relevant reactions such as isomerization and (retro-) aldol condensation. This work seeks to elucidate the surface interactions of biomass-derived oxygenates with solid Lewis acid sites. This is done using in-situ spectroscopic techniques such as Fourier transform infrared, nuclear magnetic resonance, and ultraviolet spectroscopies. These techniques were applied to study the following reactions: (i) aldol condensation of ethanol and acetaldehyde over reduced molybdenum oxide; (ii) aldol condensation of acetaldehyde over supported molybdenum oxides; (iii) dehydration and retro-aldol condensation of C4 polyoxygenates using various Lewis acidic metal oxides; and (iv) ring opening and esterification of erythrose using various Lewis acidic metal oxides. Surface properties such as Lewis and Brønsted acid sites and the reducibility of the metal center are essential to rationalizing the reaction pathways of the above reactions. The aforementioned studies provide fundamental knowledge regarding how different oxygenates can interact with solid Lewis acid sites.

    Ph.D.

    Machine learning for point-to-point transmission loss estimates in ocean acoustic waveguides

    Underwater acoustic system performance depends on several complex and dynamic environmental parameters, and simulating such performance is vital to the successful development and implementation of these systems. Because of the complexity of the environment and the governing physical equations, realistic simulations can become computationally prohibitive. This is especially true for large environments with many active systems being assessed. By utilizing fully connected neural networks (FNNs) trained on data generated by well-established physics-based models (such as BELLHOP’s ray tracing algorithm), network predictions can be used in lieu of physics-based models to significantly reduce the in-the-loop computational burden of system performance simulations. In this paper, the usefulness and limitations of using FNNs to estimate transmission loss (TL), a key element in determining system performance, are explored. Using BELLHOP’s ray tracing algorithm as a baseline, FNNs were able to produce TL results with significantly lower errors than estimates made using other estimation methods such as spherical spreading and K-nearest neighbors. This indicates that the computational cost of large underwater acoustic simulations may be shifted from inside the simulation to network training, allowing for more efficient traditional and Monte Carlo style simulations.

    M.S.
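    A hedged sketch of the surrogate setup: train a small fully connected network on (range, source depth, receiver depth, frequency) -> TL pairs and compare it against spherical spreading and a K-nearest-neighbors baseline. Real targets would come from BELLHOP; the synthetic TL values below are placeholders so the example runs standalone, and the evaluation is in-sample for brevity.

    ```python
    # FNN surrogate for transmission loss vs. simple baselines (illustrative data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Inputs: range (m), source depth (m), receiver depth (m), frequency (Hz).
    X = rng.uniform([100, 5, 5, 50], [20_000, 200, 200, 2000], size=(5000, 4))
    tl = 20 * np.log10(X[:, 0]) + rng.normal(0, 2, len(X))  # stand-in for BELLHOP TL (dB)

    fnn = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                                     random_state=0)).fit(X, tl)
    knn = KNeighborsRegressor(n_neighbors=5).fit(X, tl)
    spherical = 20 * np.log10(X[:, 0])                      # 20*log10(r) spreading baseline

    for name, pred in [("FNN", fnn.predict(X)),
                       ("KNN", knn.predict(X)),
                       ("spherical", spherical)]:
        rmse = float(np.sqrt(np.mean((pred - tl) ** 2)))
        print(f"{name:9s} RMSE: {rmse:.2f} dB")
    ```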

    35,528 full texts
    71,563 metadata records
    Updated in the last 30 days.
    Scholarly Materials And Research @ Georgia Tech is based in the United States.