2,494 research outputs found

    A Landsat-based analysis of tropical forest dynamics in the Central Ecuadorian Amazon: Patterns and causes of deforestation and reforestation

    Tropical deforestation constitutes a major threat to the Amazon rainforest. Monitoring forest dynamics is therefore necessary for sustainable management of forest resources in this region. However, persistent cloudiness leaves few good-quality satellite observations and is therefore a major challenge for monitoring deforestation and for detecting subtle processes such as reforestation. Furthermore, varying human pressure highlights the importance of understanding the underlying forces behind these processes at multiple scales, and from an inter- and transdisciplinary perspective. Against this background, this study analyzes and recommends different methodologies for accomplishing these goals, exemplifying their use with Landsat time series and socioeconomic data. The case studies were located in the Central Ecuadorian Amazon (CEA), an area characterized by different deforestation and reforestation processes and by diverse socioeconomic and landscape settings. Three objectives guided this research. First, processing and time-series analysis algorithms for forest dynamics monitoring in areas with limited Landsat data were evaluated, using an innovative approach based on genetic algorithms. Second, a methodology based on image compositing, multi-sensor data fusion and post-classification change detection is proposed to address the limitations observed when monitoring forest dynamics with time-series analysis algorithms. Third, the underlying driving forces of deforestation and reforestation in the CEA are evaluated using a novel modelling technique, geographically weighted ridge regression, which improves the processing and analysis of socioeconomic data. The methodology for forest dynamics monitoring demonstrates that, despite abundant data gaps in the Landsat archive for the CEA, historical patterns of deforestation and reforestation can still be reported biennially with overall accuracies above 70%. Furthermore, the improved methodology for analyzing the underlying driving forces of forest dynamics identified local drivers and specific socioeconomic settings that better explain the high deforestation and reforestation rates in the CEA. The results indicate that the proposed methodologies are a viable alternative for monitoring and analyzing forest dynamics, particularly in areas where data scarcity and landscape complexity require more specialized approaches.
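    The driving-force analysis above names geographically weighted ridge regression as its modelling technique. The sketch below is only a minimal illustration of that general idea, not the author's implementation: it fits one ridge-penalised, distance-weighted least-squares model per observation, and the kernel bandwidth, penalty value and variable names are illustrative assumptions.

        import numpy as np

        def gw_ridge(X, y, coords, bandwidth=10.0, lam=1.0):
            """Fit one locally weighted, ridge-penalised regression per location.

            X      : (n, p) matrix of socioeconomic predictors (hypothetical)
            y      : (n,)   response, e.g. a deforestation rate per census unit
            coords : (n, 2) spatial coordinates of the observations
            Returns an (n, p) array of local coefficient vectors.
            """
            n, p = X.shape
            betas = np.empty((n, p))
            for i in range(n):
                # Gaussian kernel: nearby observations get larger weights
                d = np.linalg.norm(coords - coords[i], axis=1)
                w = np.exp(-0.5 * (d / bandwidth) ** 2)
                W = np.diag(w)
                # Closed-form ridge-regularised weighted least squares
                A = X.T @ W @ X + lam * np.eye(p)
                b = X.T @ W @ y
                betas[i] = np.linalg.solve(A, b)
            return betas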

    Urban Public Transportation Planning with Endogenous Passenger Demand

    An effective and efficient public transportation system is crucial to people's mobility, economic production, and social activities. The Operations Research community has studied transit system optimization for decades. With disruptions from the private sector, especially parking operators, ride-sharing platforms, and micro-mobility services, new challenges and opportunities have emerged. This thesis investigates the interaction of public transportation systems with significant private-sector players while accounting for endogenous passenger choice. More specifically, it aims to optimize public transportation systems considering their interaction with parking operators and the competition and collaboration from ride-sharing and micro-mobility platforms. Optimization models, algorithms and heuristic solution approaches are developed to design these transportation systems. Parking operators play an important role in determining passengers' travel modes. The capacity and pricing decisions of parking and transit operators are investigated under a game-theoretic framework. A mixed-integer non-linear programming (MINLP) model is formulated to capture each player's profit-maximizing strategy under endogenous passenger mode choice, and a three-step heuristic is developed to solve the large-scale MINLP problem. With emerging transportation modes such as ride-sharing services and micro-mobility platforms, this thesis also co-optimizes the integrated transportation system. To improve mobility for residents of transit-desert regions, we co-optimize public transit and ride-sharing services to provide a more environmentally friendly and equitable system. Similarly, we design an integrated system of public transit and micro-mobility services to provide a more sustainable transportation system in the post-pandemic world.
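    Endogenous passenger choice is central to the game-theoretic model described above. As a simplified sketch, not the thesis' MINLP formulation, the snippet below splits demand between transit and driving-plus-parking with a binary logit model; all prices, travel times and the value-of-time coefficient are made-up illustrative numbers.

        import numpy as np

        def logit_shares(utilities):
            """Logit model: choice probability for each mode."""
            e = np.exp(utilities - np.max(utilities))
            return e / e.sum()

        def mode_split(transit_fare, parking_price, demand=10_000,
                       transit_time=40.0, drive_time=25.0, vot=0.3):
            """Split total demand between transit and car-plus-parking.

            Utilities are linear in out-of-pocket price and in-vehicle time
            (value of time vot in $/min); every number here is illustrative.
            """
            u_transit = -(transit_fare + vot * transit_time)
            u_drive = -(parking_price + vot * drive_time)
            shares = logit_shares(np.array([u_transit, u_drive]))
            return demand * shares  # passengers choosing [transit, drive]

        # Ridership response when the parking operator raises its price
        print(mode_split(transit_fare=2.5, parking_price=8.0))
        print(mode_split(transit_fare=2.5, parking_price=15.0))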

    Memetic algorithms for ontology alignment

    Semantic interoperability is the capability of two or more systems to meaningfully and accurately interpret exchanged data so as to produce useful results. It is an essential feature of all distributed and open knowledge-based systems designed for both e-government and private business, since it enables machine interpretation, inferencing and computable logic. Unfortunately, achieving semantic interoperability is very difficult because it requires that the meaning of any data be specified in appropriate detail in order to resolve any potential ambiguity. Currently, the technology best recognized for achieving this level of precision in the specification of meaning is ontologies. According to the most frequently referenced definition [1], an ontology is an explicit specification of a conceptualization, i.e., the formal specification of the objects, concepts, and other entities that are presumed to exist in some area of interest and the relationships that hold among them [2]. However, different tasks or different points of view lead ontology designers to produce different conceptualizations of the same domain of interest. This means that the subjectivity of ontology modeling results in heterogeneous ontologies characterized by terminological and conceptual discrepancies. Examples of these discrepancies are the use of different words to name the same concept, the use of the same word to name different concepts, and the creation of hierarchies for a specific domain region with different levels of detail. The resulting semantic heterogeneity problem is, in turn, an obstacle to achieving semantic interoperability... [edited by author]
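    A memetic algorithm combines an evolutionary search over candidate alignments with a local-search refinement of each offspring. The toy sketch below illustrates that structure on string-similarity matching of concept names; the similarity measure, operators and example ontologies are illustrative assumptions, not the approach evaluated in the thesis.

        import random
        from difflib import SequenceMatcher

        def similarity(a, b):
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def fitness(alignment, src, tgt):
            """Total lexical similarity of the matched concept pairs."""
            return sum(similarity(src[i], tgt[j]) for i, j in alignment.items())

        def mutate(alignment, n_tgt):
            """Reassign one source concept to a random free target concept."""
            child = dict(alignment)
            i = random.choice(list(child))
            used = set(child.values())
            child[i] = random.choice([j for j in range(n_tgt)
                                      if j not in used or j == child[i]])
            return child

        def local_search(alignment, src, tgt):
            """Memetic refinement: swap two assignments whenever it helps."""
            keys = list(alignment)
            for a in keys:
                for b in keys:
                    if a < b:
                        cand = dict(alignment)
                        cand[a], cand[b] = cand[b], cand[a]
                        if fitness(cand, src, tgt) > fitness(alignment, src, tgt):
                            alignment = cand
            return alignment

        def memetic_align(src, tgt, pop_size=20, generations=30):
            pop = [dict(zip(range(len(src)),
                            random.sample(range(len(tgt)), len(src))))
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda a: fitness(a, src, tgt), reverse=True)
                parents = pop[: pop_size // 2]
                children = [local_search(mutate(random.choice(parents), len(tgt)),
                                         src, tgt)
                            for _ in range(pop_size - len(parents))]
                pop = parents + children
            return max(pop, key=lambda a: fitness(a, src, tgt))

        # Toy source and target ontologies (concept names only)
        src = ["Author", "Paper", "Conference"]
        tgt = ["Writer", "Article", "Meeting", "Venue"]
        print(memetic_align(src, tgt))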

    Dense Vision in Image-guided Surgery

    Image-guided surgery needs an efficient and effective camera tracking system in order to perform augmented reality, overlaying preoperative models or labelling cancerous tissue on the 2D video images of the surgical scene. Tracking in endoscopic/laparoscopic scenes, however, is extremely difficult, primarily because of tissue deformation, instrument intrusion into the surgical scene and the presence of specular highlights. State-of-the-art feature-based SLAM systems such as PTAM fail in such scenes because the number of good features to track is very limited, and smoke or instrument motion causes feature-based tracking to fail immediately. This thesis provides a systematic approach to the problem using dense vision. We initially attempted to register a 3D preoperative model with multiple 2D endoscopic/laparoscopic images using a dense method, but this approach did not perform well. We subsequently proposed stereo reconstruction to directly obtain the 3D structure of the scene. Using the dense reconstructed model together with robust estimation, we demonstrate that dense stereo tracking can be remarkably robust even in extremely challenging endoscopic/laparoscopic scenes. Several validation experiments were conducted. The proposed stereo reconstruction algorithm achieves state-of-the-art results on several publicly available ground-truth datasets. Furthermore, the proposed robust dense stereo tracking algorithm proved highly accurate in a synthetic environment (< 0.1 mm RMSE) and qualitatively extremely robust when applied to real scenes from robot-assisted laparoscopic prostatectomy (RALP). This is an important step toward accurate image-guided laparoscopic surgery.
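    The pipeline above relies on dense stereo reconstruction followed by robust tracking. As a rough illustration of the reconstruction step only, the sketch below computes a disparity map with OpenCV's semi-global block matcher and back-projects it to depth; this generic matcher is a stand-in for the thesis' own algorithm, and the file names, focal length and baseline are placeholder assumptions.

        import cv2
        import numpy as np

        # Rectified stereo pair from the laparoscope (file names are placeholders)
        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        # Generic semi-global matcher as a stand-in for the thesis' dense method
        block = 5
        sgbm = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=64,            # must be a multiple of 16
            blockSize=block,
            P1=8 * block * block,         # smoothness penalty, small jumps
            P2=32 * block * block,        # smoothness penalty, large jumps
            uniquenessRatio=10,
            speckleWindowSize=100,
            speckleRange=2,
        )
        # compute() returns fixed-point disparities with 4 fractional bits
        disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

        # Back-project to depth given focal length f (px) and baseline b (mm)
        f, b = 700.0, 5.0                 # placeholder calibration values
        valid = disparity > 0
        depth = np.zeros_like(disparity)
        depth[valid] = f * b / disparity[valid]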

    Query-Time Data Integration

    Today, data is collected at ever-increasing scale and variety, opening up enormous potential for new insights and data-centric products. However, in many cases the volume and heterogeneity of new data sources preclude up-front integration using traditional ETL processes and data warehouses. In some cases, it is even unclear if and in what context the collected data will be utilized. Therefore, there is a need for agile methods that defer the effort of integration until the usage context is established. This thesis introduces Query-Time Data Integration as an alternative to traditional up-front integration. It aims at enabling users to issue ad-hoc queries on their own data as if all potential other data sources were already integrated, without declaring specific sources and mappings to use. Automated data search and integration methods are then coupled directly with query processing on the available data. The ambiguity and uncertainty introduced by fully automated retrieval and mapping methods is compensated for by answering such queries with ranked lists of alternative results. Each result is based on different data sources or query interpretations, allowing users to pick the result most suitable to their information need. To this end, this thesis makes three main contributions. First, we introduce a novel method for Top-k Entity Augmentation, which constructs a top-k list of consistent integration results from a large corpus of heterogeneous data sources. It improves on the state of the art by producing a set of individually consistent but mutually diverse alternative solutions while minimizing the number of data sources used. Second, based on this augmentation method, we introduce the DrillBeyond system, which processes Open World SQL queries, i.e., queries referencing arbitrary attributes not defined in the queried database; the original database is then augmented at query time with Web data sources providing those attributes. Its hybrid augmentation/relational query processing enables the use of ad-hoc data search and integration in data analysis queries, and improves both performance and quality compared to using separate systems for the two tasks. Finally, we study the management of large-scale dataset corpora such as data lakes or Open Data platforms, which serve as data sources for our augmentation methods. We introduce Publish-time Data Integration as a new technique for data curation systems managing such corpora, which aims at improving the individual reusability of datasets without requiring up-front global integration. This is achieved by automatically generating metadata and format recommendations, allowing publishers to enhance their datasets with minimal effort. Collectively, these three contributions form the foundation of a Query-time Data Integration architecture that enables ad-hoc data search and integration queries over large heterogeneous dataset collections.
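    Top-k Entity Augmentation, as described above, has to balance consistency, diversity across alternatives and the number of sources used. The sketch below is only a crude greedy approximation of that idea under simplified assumptions (each source is a dictionary from entity to value, and diversity is enforced by banning already-used sources); it is not the thesis' algorithm.

        def greedy_cover(entities, sources, banned=frozenset()):
            """Pick a small set of sources that together cover all entities.

            sources: {source_name: {entity: value}}  (hypothetical Web tables)
            banned : sources that must not be reused (a crude diversity proxy)
            Returns (chosen_sources, entity_values) or None if coverage fails.
            """
            remaining, chosen, values = set(entities), [], {}
            while remaining:
                best = max((s for s in sources if s not in banned and s not in chosen),
                           key=lambda s: len(remaining & sources[s].keys()),
                           default=None)
                if best is None or not remaining & sources[best].keys():
                    return None
                chosen.append(best)
                for e in remaining & sources[best].keys():
                    values[e] = sources[best][e]
                remaining -= sources[best].keys()
            return chosen, values

        def top_k_augmentations(entities, sources, k=3):
            """Up to k alternative augmentations, each avoiding sources that
            earlier answers already used, so the alternatives stay diverse."""
            results, used = [], set()
            for _ in range(k):
                res = greedy_cover(entities, sources, banned=frozenset(used))
                if res is None:
                    break
                results.append(res)
                used.update(res[0])
            return results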

    Strategic districting for the mitigation of educational segregation: a pilot model for school district optimization in Helsinki

    The social urban structure of Helsinki has become markedly more differentiated over the last two decades. This development is reflected in schools as growing differences between schools' student compositions and learning outcomes, and several studies have also found signs of independent school effects. The differentiation of student compositions is feared to exacerbate residential segregation through reputation effects and to differentiate schools' operating environments further. It is possible, however, to intervene in this development by drawing the school attendance districts so that the social differences between schools' student compositions are minimized as effectively as possible. For this purpose, new machine-learning-based optimization tools are needed. The main objective of this master's thesis is to examine the possibility of optimizing Helsinki's school districts toward compositions that are more heterogeneous internally and more homogeneous across districts. To this end, I have developed an automated optimization model that minimizes the variance of social variables between school districts by iteratively redrawing the districts' borders. In a pilot application of the model, I optimize the school districts of Helsinki using the share of the population with an immigrant background as the optimization variable, while the existing school infrastructure (school locations and student capacities), the spatial contiguity of the districts, and school-specific maximum travel distances are used as constraints restricting the shapes that the districts can take. The core finding of this study is that in Helsinki, the social compositions of school districts can be evened out significantly by redrawing the school district borders. However, for the model to be suitable for district planning in practice it needs further development; at this stage, its main limitations concern the shapes of the optimized districts, the model's time complexity, and the lack of a constraint or optimization parameter that accounts for the safety of children's school trips.
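    The optimization described above minimizes the between-district variance of a social variable subject to school capacities and contiguity. The sketch below is a heavily simplified, hypothetical local search in that spirit: zones are greedily moved into a neighbouring zone's district when this lowers the variance of the target-group share and respects capacity, while full contiguity checking of the donor district is omitted. All data structures and parameters are illustrative assumptions, not the thesis' model.

        import numpy as np

        def district_shares(assign, pop, minority, n_districts):
            """Share of the target group in each district under an assignment."""
            tot = np.bincount(assign, weights=pop, minlength=n_districts)
            grp = np.bincount(assign, weights=minority, minlength=n_districts)
            return grp / np.maximum(tot, 1)

        def rebalance(assign, pop, minority, neighbours, capacity,
                      n_districts, max_rounds=100):
            """Greedy local search: move one zone to an adjacent zone's district
            whenever that lowers the variance of the target-group share across
            districts and respects school capacity."""
            assign = assign.copy()
            best = district_shares(assign, pop, minority, n_districts).var()
            for _ in range(max_rounds):
                improved = False
                for z in range(len(assign)):
                    for nb in neighbours[z]:
                        d_new, d_old = assign[nb], assign[z]
                        if d_new == d_old:
                            continue
                        load = np.bincount(assign, weights=pop,
                                           minlength=n_districts)
                        if load[d_new] + pop[z] > capacity[d_new]:
                            continue
                        assign[z] = d_new
                        var = district_shares(assign, pop, minority,
                                              n_districts).var()
                        if var < best:
                            best, improved = var, True
                        else:
                            assign[z] = d_old
                if not improved:
                    break
            return assign, best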