
    Evaluating Relationships Between Rock Strength And Longitudinal Stream Profile Morphometry In The Southern Guadalupe Mountains, Texas

    Landscapes record information about the tectonic, climatic, and lithologic environments in which they form (Yang et al., 2015). When one or more of these environmental conditions changes spatially or temporally, the landscape responds through erosion and thus develops representative geomorphic features (Ritter et al., 2011). Since the nineteenth century, it has been clear that bedrock strength and erodibility play an important role in landscape evolution and geomorphology (Lifton et al., 2009). However, the nuances of variable erodibility remain poorly understood. The implications of this limited understanding lie within landscape evolution models: while these models show strong qualitative relationships between longitudinal river profile morphometry and tectonic or climatic processes, major discrepancies remain over the relationship between bedrock strength and river incision. As these models strive to become more accurate, they are limited by our understanding of discrete characteristics of substrate erodibility. For this reason, the Southern Guadalupe Mountains are an excellent location in which to address these issues: minor variations in carbonate lithology in this region provide focused insight into the relationships between discrete changes in bedrock strength, erodibility, and longitudinal stream profile morphometry. Additionally, this study is among the first to use longitudinal stream profiles in the Southern Guadalupe Mountains, Texas, to explore the landscape for tectonic and lithologic influences on landscape evolution. Here, the relationships between rock strength and vertical river incision are explored using classic type-N Schmidt hammer analysis and longitudinal stream profiles obtained from digital elevation models. Qualitative examination of longitudinal stream profiles in the Southern Guadalupe Mountains has revealed high-elevation, low-relief equilibrium profiles in the upstream segments of rivers crossing steep normal faults. It is likely that upstream, downthrown hanging walls have produced mid-profile pseudo-base levels in the upper reaches of rivers by acting as dam-like structures. Downstream of these structures, profiles are convex and show evidence of possible increased localized uplift rates or significantly decreased erosional efficiency. Statistical results show that mean rebound values from type-N Schmidt hammer analysis can be used to predict stream gradient, knickpoint development, and the residual errors inherent in Flint's law (the slope-area relation used in river incision models) only under relatively simple tectonic and hydrologic regimes. These relationships do not hold where large confluences and/or faulting disrupt major stream channel networks, or in areas of topographic disequilibrium. Finally, geologic units with different, yet statistically similar, rebound values were found to influence stream gradients differently. This suggests that lumping lithologies together based on similar rebound values is an overgeneralization and should be avoided.
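    As context for the residual analysis mentioned above, Flint's law relates local channel slope S to upstream drainage area A as S = k_s * A^(-theta). The short Python sketch below shows one common way to fit the law in log-log space and extract per-point residuals of the kind that could be compared against Schmidt hammer rebound values; the drainage-area and slope values are hypothetical placeholders, not data from this study.

```python
import numpy as np

def fit_flints_law(area, slope):
    """Least-squares fit of log10(S) = log10(k_s) - theta * log10(A).
    Returns the steepness k_s, the concavity theta, and the log-space
    residuals of the fitted slope-area relation."""
    logA, logS = np.log10(area), np.log10(slope)
    m, b = np.polyfit(logA, logS, 1)      # logS = m * logA + b
    theta, k_s = -m, 10.0 ** b
    residuals = logS - (m * logA + b)
    return k_s, theta, residuals

# hypothetical drainage areas [m^2] and channel slopes [m/m]
area  = np.array([1e5, 5e5, 1e6, 5e6, 1e7, 5e7])
slope = np.array([0.20, 0.11, 0.08, 0.045, 0.033, 0.018])
k_s, theta, res = fit_flints_law(area, slope)
print(f"k_s = {k_s:.3g}, concavity theta = {theta:.2f}")
```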

    Third-order iterative methods with applications to Hammerstein equations: A unified approach

    The geometrical interpretation of a family of higher order iterative methods for solving nonlinear scalar equations was presented in [S. Amat, S. Busquier, J.M. Gutiérrez, Geometric constructions of iterative functions to solve nonlinear equations, J. Comput. Appl. Math. 157(1) (2003) 197–205]. This family includes, as particular cases, some of the most famous third-order iterative methods: Chebyshev methods, Halley methods, super-Halley methods, C-methods and Newton-type two-step methods. The aim of the present paper is to analyze the convergence of this family for equations defined between two Banach spaces by using a technique developed in [J.A. Ezquerro, M.A. Hernández, Halley's method for operators with unbounded second derivative, Appl. Numer. Math. 57(3) (2007) 354–360]. This technique allows us to obtain a general semilocal convergence result for these methods, where the usual conditions on the second derivative are relaxed. On the other hand, the main practical difficulty related to the classical third-order iterative methods is the evaluation of bilinear operators, typically second-order Fréchet derivatives. However, in some cases, the second derivative is easy to evaluate. A clear example is provided by the approximation of Hammerstein equations, where it is diagonal by blocks. We finish the paper by applying our methods to some nonlinear integral equations of this type.
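    In the scalar case, this family is often written in terms of the degree of logarithmic convexity L(x) = f(x) f''(x) / f'(x)^2 with a single parameter alpha, where alpha = 0, 1/2, and 1 recover the Chebyshev, Halley, and super-Halley methods, respectively. The Python sketch below illustrates that common scalar parameterization only; it is not the Banach-space analysis of the paper, and the example equation and starting point are arbitrary.

```python
def third_order_family(f, df, d2f, x0, alpha=0.5, tol=1e-12, max_iter=30):
    """One common scalar parameterization of the third-order family:
        L(x)  = f(x) * f''(x) / f'(x)**2
        x_new = x - (1 + 0.5 * L / (1 - alpha * L)) * f(x) / f'(x)
    alpha = 0 gives Chebyshev's method, 1/2 Halley's, 1 super-Halley."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        if abs(fx) < tol:
            break
        L = fx * d2fx / dfx**2
        x = x - (1.0 + 0.5 * L / (1.0 - alpha * L)) * fx / dfx
    return x

# example: cube root of 2 via f(x) = x**3 - 2, solved with Halley's method
root = third_order_family(lambda x: x**3 - 2,
                          lambda x: 3 * x**2,
                          lambda x: 6 * x,
                          x0=1.5, alpha=0.5)
print(root, 2 ** (1 / 3))
```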

    Second order strategies for complementarity problems

    Advisors: Sandra Augusta Santos, Roberto Andreani. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica. In this work we reformulate the generalized nonlinear complementarity problem (GNCP) in polyhedral cones as a nonlinear system with nonnegativity constraints on some variables and propose solving this reformulation through interior-point methods. In particular, we define two algorithms and prove their local convergence under standard assumptions. The first algorithm is based on Newton's method and the second on Chebyshev's tensorial method. The algorithm based on Chebyshev's method may be viewed as a predictor-corrector method; when applied to problems in which the functions are affine, and with suitable parameter choices, it reduces to the well-known Mehrotra predictor-corrector algorithm. We also present numerical results that illustrate the competitiveness of both proposals. Degree: Doctor of Applied Mathematics (Optimization).
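    As a rough illustration of the interior-point idea described above, and not of the thesis algorithms themselves, the sketch below applies a damped Newton step to the perturbed complementarity conditions of a linear complementarity problem. The matrix M, vector q, starting point, and parameter values are assumptions chosen only to make the example self-contained.

```python
import numpy as np

def lcp_interior_point(M, q, tol=1e-8, max_iter=50, sigma=0.1):
    """Minimal damped-Newton interior-point sketch for the LCP:
    find x >= 0, s >= 0 with s = M x + q and x_i * s_i = 0."""
    n = len(q)
    x, s = np.ones(n), np.ones(n)
    for _ in range(max_iter):
        mu = x @ s / n
        r_lin = s - M @ x - q              # residual of the linear relation
        r_cmp = x * s - sigma * mu         # perturbed complementarity residual
        if max(np.linalg.norm(r_lin), x @ s) < tol:
            break
        # Newton system for the perturbed equations
        J = np.block([[-M, np.eye(n)],
                      [np.diag(s), np.diag(x)]])
        d = np.linalg.solve(J, -np.concatenate([r_lin, r_cmp]))
        dx, ds = d[:n], d[n:]
        # damping: stay strictly inside the positive orthant
        step, var = np.concatenate([dx, ds]), np.concatenate([x, s])
        bad = step < 0
        alpha = min(1.0, 0.9 * np.min(-var[bad] / step[bad])) if bad.any() else 1.0
        x, s = x + alpha * dx, s + alpha * ds
    return x, s

# tiny illustrative problem (M positive definite)
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
x, s = lcp_interior_point(M, q)
print(x, s)   # x and s should be nonnegative with x * s ~ 0
```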

    Observation and modeling of the sublimation of water ice and of surface changes of the dust cover on comet 67P/Churyumov-Gerasimenko

    The excitement of the Rosetta mission culminated in 2014 when the spacecraft arrived at comet 67P/Churyumov-Gerasimenko (67P) after a ten-year chase to escort the comet through its following perihelion passage. Characterizing the distribution of activity and of surface changes over the nucleus was among the main scientific objectives of OSIRIS, the camera system onboard Rosetta, as it would shed light on the physical and compositional properties of the nucleus and its evolution. Some fundamental geometric and photometric methods of image analysis, instrumental to locating the sources of dust activity and quantifying surface changes on 67P observed by OSIRIS, are reviewed and refined in this work. Deriving nucleus properties from the observed dust activity and surface changes relies on modeling the thermo-physical conditions of the nucleus at the epochs of observation. The general formulation and numerical recipes of two cometary thermo-physical models, as well as strategies for model parameterization, are laid out in order to facilitate the determination of nucleus subsurface properties. Dust jets from the night side were recurrently observed on 67P, often near the dusk terminator. The thermo-physical models are parameterized and applied to simulate the thermal and mechanical conditions of the nucleus subsurface over the source areas of the jets, under which the observed dust activity could have been sustained after sunset. These jets are found to have probably originated from a depth of a few millimeters below the surface, where water ice was present and where residual warmth could sustain strong water outgassing even one hour after dark. The source areas of the sunset jets had undergone significant changes by the time 67P reached 2 AU inbound from the Sun. It is shown that these changes, as well as numerous others found at roughly the same latitudes, were erosive in nature and induced by sublimation of water ice accumulated over months. The quantification of the changes based on OSIRIS observations, compared with the accumulated water-ice loss estimated via thermo-physical modeling, revealed a low ice abundance on the order of 1% in the dust cover on average. These results point to a fundamental but unresolved question regarding the physics of cometary activity and evolution, namely the detailed mechanism by which dust is ejected through sublimation of such minor amounts of water ice underneath.
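    The thermo-physical models referred to above essentially solve one-dimensional heat conduction in the subsurface with a solar energy-balance boundary condition at the surface. The Python sketch below illustrates this class of model in its simplest explicit finite-difference form; every material parameter, the grid, and the crude day/night insolation cycle are assumptions for illustration, not the parameterizations used in the thesis.

```python
import numpy as np

# 1-D heat-conduction sketch for a comet subsurface column (illustrative only)
rho, c_p = 500.0, 500.0            # bulk density [kg/m^3], heat capacity [J/kg/K]
k_cond   = 0.01                    # thermal conductivity [W/m/K]
kappa    = k_cond / (rho * c_p)    # thermal diffusivity [m^2/s]
epsilon  = 0.95                    # thermal emissivity
albedo   = 0.05                    # Bond albedo
sigma_sb = 5.670e-8                # Stefan-Boltzmann constant [W/m^2/K^4]
S_sun    = 1361.0                  # solar constant at 1 AU [W/m^2]
r_helio  = 2.0                     # heliocentric distance [AU]
P_rot    = 12.4 * 3600.0           # rotation period of 67P [s]

dz, nz = 0.002, 100                # 2 mm layers, ~20 cm column
dt = 0.4 * dz**2 / kappa           # explicit stability limit
T = np.full(nz, 130.0)             # initial temperature [K]

for step in range(int(2 * P_rot / dt)):               # integrate two rotations
    t = step * dt
    mu0 = max(np.cos(2.0 * np.pi * t / P_rot), 0.0)   # crude day/night cycle
    insolation = (1.0 - albedo) * S_sun / r_helio**2 * mu0
    # surface energy balance: insolation - thermal emission + conduction from below,
    # applied as an explicit update of the topmost layer
    conduction = k_cond * (T[1] - T[0]) / dz
    T[0] += dt / (rho * c_p * dz) * (insolation
                                     - epsilon * sigma_sb * T[0]**4
                                     + conduction)
    # interior layers: explicit finite-difference heat diffusion
    # (the bottom layer is held fixed as a simple deep boundary condition)
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(f"surface temperature after two rotations: {T[0]:.1f} K")
```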

    Insect phenology: a geographical perspective


    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in the last few decades for disease diagnosis and monitoring as well as for assessing treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians alone. Therefore, the design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA: in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissue may be affected and suffer a decrease in functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases yield elasticity, ventilation, and texture features that provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies lead to novel indices for distinguishing normal/healthy from injured lung tissue in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed around three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for potential lung injuries stemming from the radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heart beats, and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately.
    The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion-estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissue by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are computed from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissue using the Jacobian of the deformation field, and the tissue elasticity using strain components calculated from the gradient of the deformation field. Finally, these features are combined in a classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
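    To make that last step concrete, the sketch below shows one way to derive the two functionality features named above from a voxel-wise displacement field: ventilation from the Jacobian determinant of the deformation, and elasticity-related strain from the gradient of the displacement. It is an illustration under assumed array conventions, not the dissertation's implementation.

```python
import numpy as np

def ventilation_and_strain(disp, spacing=(1.0, 1.0, 1.0)):
    """Given a displacement field `disp` of shape (3, Z, Y, X) mapping each
    voxel between two respiratory phases, estimate
      - ventilation as the Jacobian determinant of the deformation, and
      - the small-deformation strain tensor from the displacement gradient."""
    # spatial gradients of each displacement component: grad[i, j] = d u_i / d x_j
    grad = np.stack([np.stack(np.gradient(disp[i], *spacing), axis=0)
                     for i in range(3)], axis=0)          # shape (3, 3, Z, Y, X)
    # deformation gradient F = I + grad(u)
    F = np.eye(3).reshape(3, 3, 1, 1, 1) + grad
    # Jacobian determinant per voxel: local volume change (ventilation surrogate)
    jac = np.linalg.det(np.moveaxis(F, (0, 1), (-2, -1)))  # shape (Z, Y, X)
    # infinitesimal strain tensor: 0.5 * (grad u + grad u^T)
    strain = 0.5 * (grad + np.swapaxes(grad, 0, 1))
    return jac, strain

# tiny synthetic example: a small random displacement field on a 16^3 grid
disp = 0.1 * np.random.rand(3, 16, 16, 16)
jac, strain = ventilation_and_strain(disp, spacing=(2.0, 2.0, 2.0))
print(jac.shape, strain.shape)   # (16, 16, 16) and (3, 3, 16, 16, 16)
```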