    Commentary on Patrick Bondy, “Bias in Legitimate Ad Hominem Arguments”

    Super-resolution-based snake model—an unsupervised method for large-scale building extraction using airborne LiDAR Data and optical image

    Automatic extraction of buildings in urban and residential scenes has been a subject of growing interest in photogrammetry and remote sensing since the mid-1990s. The active contour model, colloquially known as the snake model, has been studied for extracting buildings from aerial and satellite imagery. The task remains challenging, however, because buildings vary widely in size and shape and sit in complex surroundings; the prior information and assumptions about building shape, size, and color that such methods rely on cannot be generalized over large areas, which is a major obstacle to reliable large-scale building extraction. This paper presents an efficient snake model to overcome this challenge, called the Super-Resolution-based Snake Model (SRSM). The SRSM operates on high-resolution Light Detection and Ranging (LiDAR)-based elevation images, called z-images, generated by a super-resolution process applied to LiDAR data. The balloon force model is also improved to shrink or inflate adaptively instead of inflating continuously. The method scales to city level and beyond, is highly automated, and requires no prior knowledge or training data about the urban scene (hence unsupervised). It achieves high overall accuracy on various datasets: the proposed SRSM yields an average area-based Quality of 86.57% and an object-based Quality of 81.60% on the ISPRS Vaihingen benchmark, a level of accuracy that would be desirable even for a supervised method, and an area-based Quality of 62.37% and an object-based Quality of 63.21% when applied to the whole City of Quebec (total area of 656 km²).
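
    The following minimal sketch, which is not the authors' implementation, illustrates the adaptive balloon force idea mentioned in the abstract: a closed snake is pushed outward where the z-image elevation suggests a roof and pulled inward elsewhere. The function name, the parameters `alpha`, `kappa`, and `ground_level`, and the simple thresholding rule are assumptions made for illustration only.

```python
import numpy as np

def snake_step(contour, z_image, ground_level, alpha=0.1, kappa=0.3):
    """One explicit update of a closed snake (an N x 2 array of (row, col) points).

    Illustrative sketch only: the balloon term inflates the contour where the
    LiDAR-derived elevation (z-image) is above `ground_level` (likely a roof)
    and shrinks it elsewhere, rather than inflating everywhere.
    """
    prev_pts = np.roll(contour, 1, axis=0)
    next_pts = np.roll(contour, -1, axis=0)

    # Internal (elasticity) force keeps the contour smooth and evenly spaced.
    elasticity = prev_pts + next_pts - 2.0 * contour

    # Polygon normals (assumed to point outward for the given winding).
    tangent = next_pts - prev_pts
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    normal /= np.linalg.norm(normal, axis=1, keepdims=True) + 1e-9

    # Adaptive balloon sign: +1 (inflate) over elevated pixels, -1 (shrink) otherwise.
    r = np.clip(contour[:, 0].round().astype(int), 0, z_image.shape[0] - 1)
    c = np.clip(contour[:, 1].round().astype(int), 0, z_image.shape[1] - 1)
    sign = np.where(z_image[r, c] > ground_level, 1.0, -1.0)

    return contour + alpha * elasticity + kappa * sign[:, None] * normal
```

    In practice such a step would be iterated until the contour stops moving, typically together with image-gradient forces that this sketch omits.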

    Silviculture of Mixed-Species and Structurally Complex Boreal Stands

    Understanding structurally complex boreal stands is crucial for designing ecosystem management strategies that promote forest resilience under global change. However, current management practices lead to the homogenization and simplification of forest structures in the boreal biome. In this chapter, we illustrate two options for managing productive and resilient forests: (1) managing two-aged, mixed-species forests; and (2) managing multi-aged, structurally complex stands. The results demonstrate that multi-aged and mixed-stand management are powerful silvicultural tools for promoting the resilience of boreal forests under global change.

    Enhancing explainability and scrutability of recommender systems

    Our increasing reliance on complex algorithms for recommendations calls for models and methods for explainable, scrutable, and trustworthy AI. While explainability is required for understanding the relationships between model inputs and outputs, a scrutable system allows us to modify its behavior as desired. These properties help bridge the gap between our expectations and the algorithm's behavior and accordingly boost our trust in AI. Aiming to cope with information overload, recommender systems play a crucial role in filtering content (such as products, news, songs, and movies) and shaping a personalized experience for their users. Consequently, there has been a growing demand from information consumers to receive proper explanations for their personalized recommendations. These explanations aim at helping users understand why certain items are recommended to them and how their previous inputs to the system relate to the generation of such recommendations. Moreover, in the event of receiving undesirable content, explanations may contain valuable information on how the system's behavior can be modified accordingly. In this thesis, we present our contributions towards explainability and scrutability of recommender systems:
    • We introduce a user-centric framework, FAIRY, for discovering and ranking post-hoc explanations for the social feeds generated by black-box platforms. These explanations reveal relationships between users' profiles and their feed items and are extracted from the local interaction graphs of users. FAIRY employs a learning-to-rank (LTR) method to score candidate explanations based on their relevance and surprisal.
    • We propose a method, PRINCE, to facilitate provider-side explainability in graph-based recommender systems that use personalized PageRank at their core. PRINCE explanations are comprehensible to users because they present the subsets of a user's prior actions responsible for the received recommendations. PRINCE operates in a counterfactual setup and builds on a polynomial-time algorithm for finding the smallest counterfactual explanations.
    • We propose a human-in-the-loop framework, ELIXIR, for enhancing scrutability, and subsequently the recommendation models themselves, by leveraging user feedback on explanations. ELIXIR enables recommender systems to collect user feedback on pairs of recommendations and explanations. The feedback is incorporated into the model by imposing a soft constraint for learning user-specific item representations.
    We evaluate all proposed models and methods with real user studies and demonstrate their benefits in achieving explainability and scrutability in recommender systems.
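
    To make the counterfactual notion behind PRINCE concrete, here is a minimal brute-force sketch, not the polynomial-time algorithm from the thesis: given a black-box scoring function over a user's prior actions, it searches for a smallest subset of actions whose removal changes the top-ranked item. The scorer, item names, and data below are hypothetical placeholders.

```python
from itertools import combinations

def smallest_counterfactual(actions, score_items, top_item):
    """Brute-force search (illustration only) for a smallest subset of prior
    actions whose removal changes the top recommendation."""
    for k in range(1, len(actions) + 1):
        for removed in combinations(actions, k):
            remaining = [a for a in actions if a not in removed]
            scores = score_items(remaining)
            new_top = max(scores, key=scores.get)
            if new_top != top_item:
                return list(removed), new_top   # counterfactual explanation found
    return None, top_item                       # no subset flips the recommendation

# Toy black-box scorer: each past action votes for related items (hypothetical data).
RELATED = {"liked:movie_a": {"movie_b": 1.0}, "liked:movie_c": {"movie_d": 0.8}}

def score_items(actions):
    scores = {"movie_b": 0.0, "movie_d": 0.0}
    for action in actions:
        for item, weight in RELATED.get(action, {}).items():
            scores[item] += weight
    return scores

explanation, new_top = smallest_counterfactual(
    ["liked:movie_a", "liked:movie_c"], score_items, top_item="movie_b")
print(explanation, new_top)   # ['liked:movie_a'] movie_d
```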

    Scalable integration of uncertainty reasoning and semantic web technologies

    In recent years, formal logical standards for knowledge representation, which model real-world knowledge and domains and make them accessible to computers, have gained a lot of traction. They provide an expressive logical framework for modeling, consistency checking, reasoning, and query answering, and have proven to be versatile methods for capturing knowledge in various fields. These formalisms and methods focus on specifying knowledge as precisely as possible. At the same time, many applications, in particular on the Semantic Web, have to deal with uncertainty in their data, and handling uncertain knowledge is crucial in many real-world domains. Regular logic cannot capture the real world properly because of its inherent complexity and uncertainty, while handling uncertain or incomplete information is becoming ever more important in applications such as expert systems, data integration, and information extraction. The overall objective of this dissertation is to identify scenarios and datasets where methods that incorporate their inherent uncertainty improve results, and to investigate approaches and tools that are suitable for the respective tasks. In summary, this work sets out to tackle the following objectives: (1) debugging uncertain knowledge bases in order to generate consistent knowledge graphs and make them accessible for logical reasoning; (2) combining probabilistic query answering and logical reasoning, which in turn uses these consistent knowledge graphs to answer user queries; and (3) applying these techniques to the problem of risk management in IT infrastructures as a concrete real-world application. We show that in all these scenarios users benefit from incorporating uncertainty in the knowledge base. Furthermore, we conduct experiments that demonstrate the real-world scalability of the presented approaches. Overall, we argue that integrating uncertainty and logical reasoning, despite being theoretically intractable, is feasible in real-world applications and warrants further research.
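
    As a much simplified illustration of the idea of probabilistic query answering over an uncertain knowledge base (not the dissertation's actual framework), the sketch below attaches confidence values to triples and answers a two-hop query under an assumption that facts are independent; the facts, predicate names, and the independence assumption are all illustrative.

```python
# Confidence-weighted triples of an uncertain knowledge graph (hypothetical data).
facts = {
    ("berlin", "capital_of", "germany"): 0.98,
    ("germany", "member_of", "eu"): 0.95,
    ("berlin", "capital_of", "france"): 0.02,   # noisy extraction
}

def prob_join(subject, pred1, pred2, obj):
    """P(subject -pred1-> x -pred2-> obj) assuming independent facts.

    The probability that at least one connecting path exists is
    1 - prod(1 - p_path) over all intermediate entities x.
    """
    p_no_path = 1.0
    for (s, p, o), p1 in facts.items():
        if s == subject and p == pred1:
            p2 = facts.get((o, pred2, obj), 0.0)
            p_no_path *= 1.0 - p1 * p2
    return 1.0 - p_no_path

# Probability that Berlin is, via its country, in the EU under the toy data above.
print(prob_join("berlin", "capital_of", "member_of", "eu"))  # ~0.931
```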

    Études des systèmes de communications sans-fil dans un environnement rural difficile

    Wireless communication systems have many advantages for rural areas: they can help people settle comfortably in these regions instead of relocating to urban centers, which aggravates overcrowding, housing, and pollution problems. For effective planning and deployment of these technologies, the attenuation of the radio signal and the success of radio links must be predicted precisely. This work examines the provision of wireless Internet access in the Canadian rural context, characterized by dense vegetation and extreme climatic variations, since existing solutions focus mostly on urban areas. We first study several cases of difficult environments that affect the performance of communication systems and compare the best-known wireless communication systems; a fixed wireless network using long-range Wi-Fi is chosen to provide access to rural areas. We then evaluate the attenuation of the radio signal, since existing path loss models are generally designed for mobile technologies in urban areas, and design a new empirical path loss model. Machine learning approaches are then proposed to predict the success of wireless links, optimize the choice of access points, and establish validity limits for the parameters of reliable wireless links. The proposed solutions are accurate (up to 94% and 8 dB RMSE) and simple, while considering a wide range of parameters that are difficult to handle together with existing conventional solutions. These approaches require reliable data, which are generally difficult to acquire; in our case, data from DIGICOM, a Canadian rural wireless Internet service provider, are used.
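
    For readers unfamiliar with empirical path loss modelling, the sketch below fits the standard log-distance form PL(d) = PL(d0) + 10·n·log10(d/d0) to measurements by least squares; this is a generic textbook model, not the one developed in the thesis, and the sample data are hypothetical.

```python
import numpy as np

def fit_log_distance(distances_m, path_loss_db, d0=1.0):
    """Fit PL(d) = PL(d0) + 10*n*log10(d/d0) by least squares.

    Returns the reference loss PL(d0) in dB and the path loss exponent n.
    Generic log-distance form, not the empirical model of the thesis.
    """
    x = 10.0 * np.log10(np.asarray(distances_m, dtype=float) / d0)
    A = np.column_stack([np.ones_like(x), x])
    (pl0, n), *_ = np.linalg.lstsq(A, np.asarray(path_loss_db, dtype=float), rcond=None)
    return pl0, n

# Hypothetical measurements: distance in metres, measured path loss in dB.
d = [50, 100, 200, 400, 800]
pl = [74.0, 83.5, 92.0, 101.5, 110.0]
pl0, n = fit_log_distance(d, pl)
print(f"PL(d0) = {pl0:.1f} dB, exponent n = {n:.2f}")
```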

    Computation in Complex Networks

    Complex networks are one of the most challenging research topics across disciplines, including physics, mathematics, biology, medicine, engineering, and computer science. Interest in complex networks keeps growing because of their ability to model many real-world systems, such as technological networks, the Internet, and communication, chemical, neural, social, political, and financial networks. The Special Issue "Computation in Complex Networks" of Entropy offers a multidisciplinary view of how some complex systems behave, providing a collection of original, high-quality papers in the following research fields:
    • Community detection
    • Complex network modelling
    • Complex network analysis
    • Node classification
    • Information spreading and control
    • Network robustness
    • Social networks
    • Network medicine

    The 9th International Conference on Sustainable Development

    The International Conference on Sustainable Development (ICSD) was held virtually on September 20-21, 2021, under the theme "Research for Impact: A Sustainable and Inclusive Planet." ICSD provides a forum for academia, government, civil society, UN agencies, and the private sector to come together and share practical solutions for achieving the Sustainable Development Goals (SDGs). The two-day conference hosted 49 sessions across multiple time zones to accommodate a global audience, with 204 oral presenters, 239 poster presenters, and 977 total authors.