100 research outputs found

    Development of an Object Removal Application Using Exemplar-Based Inpainting

    Technology has developed rapidly, making it easy to capture images. However, captured images often contain unwanted objects. A problem arises when such an object is removed, because removal leaves an empty space in the image. To address this, the empty space (target region) is filled using an exemplar-based inpainting method. The advantage of this method is a fill order governed by the isophote value and the number of source regions. Test results show that the size and shape of the object selection, colour gradation, and colour refraction strongly influence the inpainting result. The priority value strongly affects which source region is selected. Clear gradients, unaffected by refraction, colour gradation, or blur, produce natural inpainting results
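The isophote-driven fill order described above can be made concrete with a small sketch of a Criminisi-style priority term, where a pixel's priority is the product of a confidence term and an isophote-based data term. This is an illustrative sketch, not the application's actual code; the patch size, the normalisation constant `alpha`, and the function names are assumptions:

```python
import numpy as np

def patch_priority(confidence, isophote, normal, p, half=4, alpha=255.0):
    """Priority P(p) = C(p) * D(p) for a fill-front pixel p, in the
    spirit of exemplar-based inpainting.

    confidence : 2-D array, 1.0 in the source region, 0.0 in the target region
    isophote   : (iy, ix) image gradient rotated 90 degrees at p
    normal     : unit normal (ny, nx) of the fill front at p
    """
    y, x = p
    patch = confidence[y - half:y + half + 1, x - half:x + half + 1]
    C = patch.sum() / patch.size                      # confidence term
    D = abs(isophote[0] * normal[0] +
            isophote[1] * normal[1]) / alpha          # data (isophote) term
    return C * D

# toy example: a pixel on the left border of a square hole
conf = np.ones((32, 32))
conf[12:20, 12:20] = 0.0                              # target (hole) region
P = patch_priority(conf, isophote=(0.0, 128.0), normal=(0.0, 1.0), p=(12, 11))
```

Pixels whose patches contain more known data and whose fill-front normal aligns with a strong isophote get filled first, which is what lets the method continue linear structures into the hole.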

    Nonlocal smoothing and adaptive morphology for scalar- and matrix-valued images

    In this work we deal with two classic degradation processes in image analysis, namely noise contamination and incomplete data. Standard greyscale and colour photographs as well as matrix-valued images, e.g. diffusion-tensor magnetic resonance imaging, may be corrupted by Gaussian or impulse noise, and may suffer from missing data. In this thesis we develop novel reconstruction approaches to image smoothing and image completion that are applicable to both scalar- and matrix-valued images. For the image smoothing problem, we propose discrete variational methods consisting of nonlocal data and smoothness constraints that penalise general dissimilarity measures. We obtain edge-preserving filters by the joint use of such measures rich in texture content together with robust non-convex penalisers. For the image completion problem, we introduce adaptive, anisotropic morphological partial differential equations modelling the dilation and erosion processes. They adjust themselves to the local geometry to adaptively fill in missing data, complete broken directional structures and even enhance flow-like patterns in an anisotropic manner. The excellent reconstruction capabilities of the proposed techniques are tested on various synthetic and real-world data sets.
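A minimal sketch of the nonlocal idea behind such smoothness constraints: a pixel estimate is a weighted average over candidate pixels, with weights decaying in a patch-based dissimilarity measure. This is a plain NL-means-style illustration, not the thesis's variational method with non-convex penalisers; the function names, the squared-L2 dissimilarity, and the Gaussian weight are assumptions:

```python
import numpy as np

def nonlocal_pixel(img, p, q_list, half=1, h=0.1):
    """Estimate the value at pixel p as a nonlocal weighted average:
    compare the patch around p with patches around candidate pixels q
    and weight each candidate by a function of the patch dissimilarity."""
    def patch(c):
        y, x = c
        return img[y - half:y + half + 1, x - half:x + half + 1]

    P = patch(p)
    ws, vals = [], []
    for q in q_list:
        d2 = np.mean((P - patch(q)) ** 2)    # patch dissimilarity measure
        ws.append(np.exp(-d2 / h ** 2))      # nonlocal weight
        vals.append(img[q])
    ws = np.asarray(ws)
    return float(np.dot(ws, vals) / ws.sum())

# on a constant image every candidate patch matches, so the estimate
# reproduces the constant value
img = np.full((5, 5), 3.0)
est = nonlocal_pixel(img, (2, 2), [(1, 1), (2, 3), (3, 2)])
```

Because the weights depend on whole patches rather than single pixels, similar textured regions reinforce each other even when they are spatially far apart, which is what makes the filters edge- and texture-preserving.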

    Framework-Specific Modeling Languages

    Framework-specific modeling languages (FSMLs) help developers build applications based on object-oriented frameworks. FSMLs formalize abstractions and rules of the framework's application programming interfaces (APIs) and can express models of how applications use an API. Such models, referred to as framework-specific models, aid developers in understanding, creating, and evolving application code. We present the concept of FSMLs, propose a way of specifying their abstract syntax and semantics, and show how such language specifications can be interpreted to provide reverse, forward, and round-trip engineering of framework-specific models and framework-based application code. We present a method for engineering FSMLs that was extracted post-mortem from the experience of building four such languages. The method is driven by the use cases that the FSMLs under development are to support. We present the use cases, the overall process, and its instantiation for each language. The presentation focuses on providing concrete examples for engineering steps, outcomes, and challenges. It also provides strategies for making engineering decisions. The presented method and experience are aimed at framework developers and tool builders who are interested in engineering new FSMLs. Furthermore, the method represents a necessary step in the maturation of the FSML concept. Finally, the presented work offers a concrete example of software language engineering. FSML engineering formalizes existing domain knowledge that is not present in language form and makes a strong case for the benefits of such formalization. We evaluated the method and the exemplar languages. The evaluation is both empirical and analytical. The empirical evaluation involved measuring the precision and recall of reverse engineering and verifying the correctness of forward and round-trip engineering. The analytical evaluation focused on the generality of the method

    The Role of c-Myc in Regulating Cardiac Intermediary Metabolism

    Background – The heart adapts to stress by inducing adaptive remodelling pathways that lead to cardiac hypertrophy; however, this is not sustained, and with continued stress maladaptive remodelling pathways ensue, impairing cell viability and contractile function and leading to heart failure. An emerging concept is that cardiac hypertrophy is paralleled by changes in cardiomyocyte metabolism, which may themselves drive cardiac hypertrophy. The proto-oncogene c-Myc is a key regulator of cancer metabolism that promotes anabolic pathways driving tumorigenesis. Interestingly, c-Myc overexpression in the heart causes hypertrophy, but how this is accomplished remains unclear.
    Objective – Define the functional role of c-Myc during remodelling of the adult heart, with a focus on its relationship to the intermediary glucose metabolism pathways that support anabolic growth in response to different types of hypertrophic stress stimuli.
    Methods – Cardiac-specific c-Myc knock-out (c-Myc KO) mice were generated and subjected to different types of stress stimuli. Pathological stress was induced by abdominal aortic banding (AAB) or transverse aortic constriction (TAC) and compared to sham surgery. Physiological stress was induced by voluntary wheel running (VWR) exercise and compared with sedentary controls. Metabolic pathway activity was comprehensively assessed by optimising a stable isotope resolved metabolomics method, using stable isotope labelled glucose (13C6 glucose) in ex-vivo Langendorff-perfused hearts of floxed control and c-Myc KO mice.
    Results – The c-Myc KO mice appear phenotypically normal and do not exhibit any changes in cardiac structure or function at baseline. When subjected to chronic pressure overload, c-Myc KO mice show a similar decline in cardiac function and a similar extent of cardiac hypertrophy as their floxed littermates. After chronic pressure overload, there is a significant decrease in 13C label incorporation into TCA metabolites and related amino acids in the c-Myc KO mice. In parallel, metabolites related to the hexosamine biosynthesis pathway show increased 13C label incorporation after pressure overload in the floxed mice, which is attenuated in the c-Myc KO mice. When subjected to regular exercise, c-Myc KO mice develop cardiac hypertrophy to the same extent as the floxed mice. Exercise causes an increase in pentose phosphate pathway activity in the floxed mice that is mitigated in the c-Myc KO mice. Exercised floxed mice showed increased lactate uptake and utilisation into the TCA cycle, whereas c-Myc KO mice had decreased reliance on lactate.
    Conclusion – Knock-out of c-Myc alters the glucose contribution to the TCA cycle and affects the diversion of glycolytic intermediates into pathways of intermediary metabolism important for anabolic growth and adaptation to stress. These results provide new insight into the rewiring of glucose carbon metabolism in the hypertrophied heart, which is in part driven by c-Myc. Understanding the adaptive remodelling pathways that drive cardiac hypertrophy may help lead to better treatments for preventing heart failure

    Developing A Physics-informed Deep Learning Paradigm for Traffic State Estimation

    The traffic delay due to congestion cost the U.S. economy $81 billion in 2022, and on average each worker lost 97 hours per year commuting because of increased wait times. Traffic management and control strategies, which serve as a potent solution to the congestion problem, require accurate information on prevailing traffic conditions. However, due to the cost of sensor installation and maintenance, sensor noise, and outages, the key traffic metrics are often only partially observed, making the task of traffic state estimation (TSE) critical. The challenge of TSE lies in the sparsity of observed traffic data and the noise present in the measurements. The central research premise of this dissertation is whether and how the fundamental principles of traffic flow theory can be harnessed to augment machine learning in estimating traffic conditions. This dissertation develops a physics-informed deep learning (PIDL) paradigm for traffic state estimation. The developed PIDL framework equips a deep neural network with the governing physical laws of traffic flow to better estimate traffic conditions from partial and limited sensing measurements. First, this research develops a PIDL framework for TSE with the Lighthill-Whitham-Richards (LWR) conservation law, a partial differential equation (PDE). The developed PIDL framework is illustrated with multiple fundamental diagrams capturing the relationship between traffic state variables. The framework is then expanded to incorporate a more practical, discretized traffic flow model, the cell transmission model (CTM). Case studies are performed to validate the proposed PIDL paradigm by reconstructing the velocity and density fields using both synthetic and realistic traffic datasets, such as the next-generation simulation (NGSIM).
The case studies mimic a multitude of application scenarios with pragmatic considerations such as sensor placement, coverage area, data loss, and the penetration rate of connected autonomous vehicles (CAVs). The study results indicate that the proposed PIDL approach achieves markedly superior performance in state estimation tasks with a lower training data requirement compared to the benchmark deep learning (DL) method. Next, the dissertation investigates empirical evidence that points to the limitations of PIDL architectures with certain types of PDEs. It presents the challenges in training a PIDL architecture by contrasting PIDL performance in learning the first-order scalar hyperbolic LWR conservation law and its second-order parabolic counterpart. The outcome indicates that PIDL struggles to incorporate the hyperbolic LWR equation due to the non-smoothness of its solution. On the other hand, the PIDL architecture with the parabolic version of the PDE, augmented with a diffusion term, successfully reconstructs the density field even when shockwaves are present. Thereafter, the implications of these PIDL limitations for traffic state estimation and prediction are discussed, and readers' attention is directed to potential mitigation strategies. Lastly, a PIDL framework with nonlocal traffic flow physics, capturing driver reaction to downstream traffic conditions, is proposed. In summary, this dissertation showcases the capability of the developed physics-informed deep learning paradigm for traffic state estimation in terms of efficiently utilizing meager observations for precise reconstruction of the data field. Moreover, it examines the practical ramifications of PIDL for TSE with the hyperbolic flow conservation law and explores remedies based on sampling strategies for training instances and the addition of a diffusion term.
Ultimately, it outlines potent PIDL applications in TSE with nonlocal physics and suggests future research directions in PIDL for traffic state prediction
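The core PIDL idea, penalising the residual of the LWR conservation law at collocation points, can be sketched with finite differences. A real PIDL implementation would differentiate the network output automatically rather than a grid; the Greenshields fundamental diagram and the constants below are assumptions for illustration:

```python
import numpy as np

V_MAX, RHO_MAX = 1.0, 1.0   # assumed free-flow speed and jam density

def flux(rho):
    """Greenshields fundamental diagram: f(rho) = rho * v_max * (1 - rho/rho_max)."""
    return rho * V_MAX * (1.0 - rho / RHO_MAX)

def lwr_residual(rho, dt, dx):
    """Central-difference residual of the LWR conservation law
    rho_t + f(rho)_x = 0 on the interior of a (time x space) grid.
    A PIDL training loss penalises the mean square of this residual,
    evaluated at collocation points via automatic differentiation."""
    f = flux(rho)
    rho_t = (rho[2:, 1:-1] - rho[:-2, 1:-1]) / (2 * dt)   # d(rho)/dt
    f_x = (f[1:-1, 2:] - f[1:-1, :-2]) / (2 * dx)         # d(f)/dx
    return rho_t + f_x

# a spatially and temporally constant density field satisfies the PDE
# exactly, so its physics loss vanishes
rho = np.full((8, 8), 0.3)
physics_loss = float(np.mean(lwr_residual(rho, dt=0.1, dx=0.1) ** 2))
```

In training, this physics loss is added to the usual data-fitting loss on the sparse sensor measurements, which is how the conservation law constrains the network in regions with no observations.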

    Discontinuity-Aware Base-Mesh Modeling of Depth for Scalable Multiview Image Synthesis and Compression

    This thesis is concerned with the challenge of deriving disparity from sparsely communicated depth for performing disparity-compensated view synthesis for compression and rendering of multiview images. The modeling of depth is essential for deducing disparity at view locations where depth is not available and is also critical for visibility reasoning and occlusion handling. This thesis first explores disparity derivation methods and disparity-compensated view synthesis approaches. Investigations reveal the merits of adopting a piece-wise continuous mesh description of depth for deriving disparity at target view locations to enable disparity-compensated backward warping of texture. Visibility information can be reasoned due to the correspondence relationship between views that a mesh model provides, while the connectivity of a mesh model assists in resolving depth occlusion. The recent JPEG 2000 Part-17 extension defines tools for scalable coding of discontinuous media using breakpoint-dependent DWT, where breakpoints describe discontinuity boundary geometry. This thesis proposes a method to efficiently reconstruct depth coded using JPEG 2000 Part-17 as a piece-wise continuous mesh, where discontinuities are driven by the encoded breakpoints. Results show that the proposed mesh can accurately represent decoded depth while its complexity scales along with decoded depth quality. The piece-wise continuous mesh model anchored at a single viewpoint or base-view can be augmented to form a multi-layered structure where the underlying layers carry depth information of regions that are occluded at the base-view. Such a consolidated mesh representation is termed a base-mesh model and can be projected to many viewpoints, to deduce complete disparity fields between any pair of views that are inherently consistent. 
Experimental results demonstrate the superior performance of the base-mesh model in multiview synthesis and compression compared to other state-of-the-art methods, including the JPEG Pleno light field codec. The proposed base-mesh model departs greatly from conventional pixel-wise or block-wise depth models and their forward depth mapping for deriving disparity ingrained in existing multiview processing systems. When performing disparity-compensated view synthesis, there can be regions for which reference texture is unavailable, and inpainting is required. A new depth-guided texture inpainting algorithm is proposed to restore occluded texture in regions where depth information is either available or can be inferred using the base-mesh model
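The disparity-compensated backward warping that the base-mesh model enables can be illustrated on a single scan line: each target pixel fetches reference texture from a disparity-shifted location. This is a nearest-neighbour sketch of the general principle only, not the thesis's mesh-based projection or its occlusion handling; the names are assumptions:

```python
import numpy as np

def backward_warp_row(ref, disparity):
    """Backward warping of one scan line: target pixel x samples the
    reference texture at x + disparity(x), clamped at image borders
    (nearest-neighbour sampling for brevity)."""
    n = len(ref)
    out = np.empty(n)
    for x in range(n):
        src = int(round(x + disparity[x]))
        src = min(max(src, 0), n - 1)   # clamp at the image border
        out[x] = ref[src]
    return out

# a constant disparity of 2 shifts the reference texture left by 2,
# repeating the border sample where no reference texture exists
ref = np.arange(8, dtype=float)
warped = backward_warp_row(ref, np.full(8, 2.0))
```

Because the warp is driven from the target view, every target pixel receives a value; the regions clamped at the border correspond to the disoccluded areas that the depth-guided inpainting step must restore.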

    Generalized averaged Gaussian quadrature and applications

    A simple numerical method for constructing the optimal generalized averaged Gaussian quadrature formulas will be presented. These formulas exist in many cases in which real positive Gauss-Kronrod formulas do not exist, and can be used as an adequate alternative in order to estimate the error of a Gaussian rule. We also investigate the conditions under which the optimal averaged Gaussian quadrature formulas and their truncated variants are internal
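The role such formulas play, pairing a Gauss rule with a companion rule so that their difference estimates the quadrature error, can be sketched as follows. This sketch uses a higher-order Gauss-Legendre rule as the companion, not the paper's optimal averaged construction:

```python
import numpy as np

def gauss_legendre(f, n):
    """n-point Gauss-Legendre approximation of the integral of f over [-1, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return float(np.dot(w, f(x)))

def gauss_with_error_estimate(f, n):
    """Pair an n-point Gauss rule with a higher-order companion rule and
    use their difference as an error estimate -- the role played by
    Gauss-Kronrod extensions, and by averaged Gaussian formulas when real
    positive Kronrod rules do not exist."""
    coarse = gauss_legendre(f, n)
    fine = gauss_legendre(f, 2 * n + 1)
    return fine, abs(fine - coarse)

# integral of exp over [-1, 1] is e - 1/e
val, err = gauss_with_error_estimate(np.exp, 5)
```

The averaged formulas are attractive precisely because this companion-rule strategy needs the companion to have positive weights and real nodes, which Kronrod extensions cannot always provide.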

    MS FT-2-2 7 Orthogonal polynomials and quadrature: Theory, computation, and applications

    Quadrature rules find many applications in science and engineering. Their analysis is a classical area of applied mathematics and continues to attract considerable attention. This seminar brings together speakers with expertise in a large variety of quadrature rules. The aim of the seminar is to provide an overview of recent developments in the analysis of quadrature rules. The computation of error estimates and novel applications are also described

    Historical appraisal analysis: evaluation of the book in sixteenth-century England

    This dissertation is a study of the evaluation of the book in the English Renaissance. The purpose of the study is to find out what a good book was like in sixteenth-century England, what personal and societal attitudes were held towards books and literature, and how these attitudes were expressed linguistically. While some of these attitudes have been studied previously, the focus has been limited according to genre. The anxieties related to translating ancient classics and the necessity of vernacularizing medical texts have received some attention. Yet, no previous linguistic analyses of these attitudes have been conducted, and linguistic analyses of evaluative language in general have been rare in historical materials. The material for this study consists of a self-built 70,000-word corpus of English Renaissance translator’s paratexts. The corpus consists of 30 dedications and 41 prefaces, collected from the full range of available topics and genres. I analyze the evaluative language within the corpus texts using the Appraisal Framework, a discourse semantic tool for the categorization and analysis of evaluative language. This study shows that the early modern English book was appraised largely for its internal and external value: the distinction it has among others of its type and its usefulness to its reader. The original author of the work is subjected to succinct positive appraisals of their character, while the translator is appraised with more complex structures expressing both positive and negative attitudes related to their capacity and tenacity. The topic of the main text has a heavy influence on the appraisals. While the paratexts to classical translations focus on negative appraisals following textual conventions, the paratexts to more utilitarian texts opt for more positively toned appraisal profiles. 
Medical texts are presented more positively, and geographical and navigational works circumvent the traditional positive author appraisal to benefit other targets. In addition to advancing the understanding of early modern English book culture, this study contributes to the knowledge of evaluative language as a discourse semantic phenomenon, and expands its study to earlier historical periods