190 research outputs found

    GIS-Based Geospatial Data Analysis: the Security of Cycle Paths in Modena

    The use of fossil fuels is contributing to the global climate crisis and threatening the sustainability of the planet. Bicycles are a vital component of the solution, as they can help mitigate the effects of climate change and improve quality of life. However, cities need to be equipped with the infrastructure necessary to support their use while guaranteeing cyclists' safety. Moreover, cyclists should plan their routes considering the level of security associated with the different options available to reach their destination. This paper presents and tests a method that integrates geographical data from various sources, with different geometries and formats, into a single view of the cycle paths in the province of Modena, Italy. Geographic Information System (GIS) functionalities were exploited to classify paths into 5 categories, from protected bike lanes to streets with no bike infrastructure. The type of traffic that co-exists with each cycle path was also analysed. The main outcome of this research is a visualization of the cycle paths in the province of Modena that highlights the security of paths, the discontinuity of routes, and the less covered areas. Moreover, a cycle-path graph data model was generated to perform routing based on security level.
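
    As a rough illustration of the kind of security-aware routing such a graph data model enables (the abstract does not describe the actual model, so the edge attributes, security classes, and weighting below are assumptions), a shortest-path query can penalize low-security segments:

    # Hypothetical sketch: routing on a cycle-path graph where each edge carries
    # a length and a security class (1 = protected lane ... 5 = no infrastructure).
    # Attribute names and the weighting scheme are illustrative assumptions.
    import networkx as nx

    G = nx.Graph()
    G.add_edge("A", "B", length=300, security_class=1)   # protected bike lane
    G.add_edge("B", "C", length=200, security_class=4)   # mixed traffic, no lane
    G.add_edge("A", "C", length=650, security_class=2)   # painted lane

    def security_weight(u, v, data, penalty=2.0):
        """Longer effective length for less secure segments."""
        return data["length"] * (1 + penalty * (data["security_class"] - 1))

    route = nx.shortest_path(G, "A", "C", weight=security_weight)
    print(route)  # prefers the longer but more protected A-B-C detour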

    Making Software Architecture a Continuous Practice

    DevOps is an ever-growing trend in software development, and it conveys a mindset that all things should be continuous. Interestingly, one of the common challenges with DevOps adoption relates to software architecture, in large part because architecture is not part of DevOps. This thesis looks at making software architecture a continuous practice and thus bringing it into the DevOps space. A prototype solution, Architector, was implemented to this end, and the results indicate that it offers a viable approach to making software architecture a continuous practice. However, further work is necessary to expand the scope of continuous architecture and to fully validate this claim by applying Architector to a real-world software development workflow.

    Essays on the organizational consequences of on-line behavior of audiences

    Over the past two decades, internet use has become an increasingly large part of our everyday lives. We communicate with our friends and colleagues using the internet, we work using the internet, and we shop using the internet. We learn and increase our knowledge from information available on the internet. On the one hand, we benefit individually from instant access to online content; on the other hand, we participate as members of a community, for example when we share our experiences online. The ever-growing use of the internet and its spread into new segments of our daily lives brings significant changes not only to us as individuals but also to organizations. In the past decade, there has been a shift in the field of organizational theory concerning the environment of organizations. Current approaches extend the horizon of the classical view, holding that the organizational environment is constituted not only by rival organizations but also by their audience members. Several studies have found evidence that audience members' perceptions and behavior influence organizational success. For example, category-spanning organizations on average suffer social and economic disadvantages in markets because they cannot meet the expectations of their audiences. This shift towards understanding the effect of audience responses on organizational outcomes motivates my dissertation. More specifically, I study how individuals' on-line behavior affects organizations. I analyze three aspects of internet-mediated communication and their consequences for the organization. First, I address the need to compare traditional face-to-face communication with modern email communication (Chapter 2). Studies tend to take it for granted that on-line information exchange mirrors its off-line counterpart in the workplace. Although there are great advantages in the availability of email data, as it retains communication in its completeness, it does not fully correspond to previously studied relations, such as friendship or advice seeking. The characteristics of on-line communication also differ from off-line information exchange: employees respect divisional and hierarchical boundaries in face-to-face conversations, while these boundaries are blurred in email exchange. Second, I analyze a special type of on-line behavior, on-line word-of-mouth communication among audience members (Chapter 3). Online reviews play an increasingly important role in shaping organizational performance. Drawing conclusions about how customers perceive the quality and typicality of a producer, and how these perceptions manifest in on-line ratings, increases the predictability of producer success. Third, I approach audience behavior from a collective-behavior perspective (Chapter 4). Specifically, I analyze audience dynamics with threshold models. In doing so, I address the micro-level mechanisms by which audience behavior creates certain macro-level patterns of producer success, rather than assuming that they are simple aggregates of individual characteristics.
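
    The abstract does not specify the threshold models used in Chapter 4; as a hedged illustration of the general idea (a Granovetter-style cascade with assumed parameters, not the dissertation's actual model), a minimal simulation of audience adoption dynamics might look like this:

    # Minimal threshold-model sketch: each audience member adopts (e.g., buys or
    # reviews) once the share of prior adopters exceeds their personal threshold.
    # Seed share and threshold distribution are assumptions.
    import random

    def simulate(thresholds, seed_share=0.05, max_steps=100):
        """Fraction of adopters once the cascade settles."""
        n = len(thresholds)
        adopted = [i < int(seed_share * n) for i in range(n)]  # a few early adopters
        for _ in range(max_steps):
            share = sum(adopted) / n
            new = [a or (t <= share) for a, t in zip(adopted, thresholds)]
            if new == adopted:            # macro-level pattern has stabilized
                break
            adopted = new
        return sum(adopted) / n

    random.seed(1)
    thresholds = [random.gauss(0.3, 0.2) for _ in range(1000)]  # heterogeneous micro-level thresholds
    print(f"final adoption share: {simulate(thresholds):.2f}")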

    Architecture-based Uncertainty Impact Analysis for Confidentiality

    In times of interconnected systems, confidentiality is a crucial security quality attribute. Since confidentiality violations become more costly to fix the later they are discovered, software architects should consider them as early as the design phase. During the architectural design process, architects make design decisions in order to reduce uncertainty. However, decisions are often subject to assumptions and to unknown or imprecise information. Assumptions may turn out to be wrong and have to be revised, which again causes uncertainty. Design-time uncertainties therefore make precise reasoning about a system's confidentiality impossible. It is thus necessary to assess their impact at the architectural level before making a statement about confidentiality. So far, this assessment has been manual and tedious and requires a large amount of knowledge. Current approaches do not consider uncertainties at design time, i.e., in software architectures, but in other areas such as self-adaptive systems. We aim to close this gap as follows: First, we present a new approach for categorizing uncertainties. Building on this, we provide an uncertainty template that enables architects to structurally derive types of uncertainties and their impact on architectural element types for a given domain. Second, we present an uncertainty impact analysis that enables architects to specify which elements are directly affected by uncertainties. Based on structural propagation rules, the analysis automatically derives further elements that are potentially affected. We evaluate the structural quality, applicability, and purpose of the template. We show that the categories satisfy principles such as orthogonality, completeness, and distinguishability. We also show that the template helps to derive uncertainty types and their impacts, and that it fosters reusability and awareness. Finally, we illustrate the relevance of the template by showing that, compared to existing taxonomies, it can classify uncertainties in software architectures more accurately and thus support more precise statements about their impact. The analysis is evaluated with respect to usability, functionality, and accuracy. We demonstrate that the analysis increases usability by reducing the amount of expert knowledge required to deal with uncertainties compared to a manual analysis. Using the contact-tracing application Corona-Warn-App, we show that the analysis reduces the number of elements to be examined by 85% compared to a manual analysis. Furthermore, we illustrate how it enables architects to explicitly manage uncertainties at design time. Based on the case study, we show that the analysis achieves a recall of 100% with a precision of 44%-91%.
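
    A minimal sketch of what rule-based impact propagation over an architecture model might look like (the element names, the single data-flow rule, and the graph representation here are assumptions, not the thesis's actual metamodel or rule set):

    # Hedged sketch: propagate the impact of an uncertainty through an architecture
    # graph using a simple structural rule. Elements and edges are illustrative.
    from collections import deque

    # architecture as adjacency: element -> elements it passes data to (assumed example)
    ARCH = {
        "UserInterface": ["AuthService"],
        "AuthService": ["UserDatabase", "AuditLog"],
        "UserDatabase": [],
        "AuditLog": [],
    }

    def propagate(directly_affected):
        """Return all elements potentially affected, following data-flow edges."""
        affected = set(directly_affected)
        queue = deque(directly_affected)
        while queue:
            element = queue.popleft()
            for successor in ARCH.get(element, []):
                if successor not in affected:   # structural rule: impact follows data flow
                    affected.add(successor)
                    queue.append(successor)
        return affected

    # e.g., uncertainty about the deployment location of AuthService
    print(sorted(propagate({"AuthService"})))   # ['AuditLog', 'AuthService', 'UserDatabase']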

    A Distributed, Architecture-Centric Approach to Computing Accurate Recommendations from Very Large and Sparse Datasets

    The use of recommender systems is an emerging trend today, when user behavior information is abundant. There are many large datasets available for analysis because many businesses are interested in future user opinions. Sophisticated algorithms that predict such opinions can simplify decision-making, improve customer satisfaction, and increase sales. However, modern datasets contain millions of records, which represent only a small fraction of all possible data. Furthermore, much of the information in such sparse datasets may be considered irrelevant for making individual recommendations. As a result, there is a demand for a way to make personalized suggestions from large amounts of noisy data. Current recommender systems are usually all-in-one applications that provide one type of recommendation. Their inflexible architectures prevent detailed examination of recommendation accuracy and its causes. We introduce a novel architecture model that supports scalable, distributed suggestions from multiple independent nodes. Our model consists of two components: an input matrix generation algorithm and multiple platform-independent combination algorithms. A dedicated input generation component provides the necessary data for the combination algorithms, reduces their size, and eliminates redundant data processing. Likewise, simple combination algorithms can produce recommendations from the same input, so we can more easily distinguish between the benefits of a particular combination algorithm and the quality of the data it receives. Such a flexible architecture is more conducive to a comprehensive examination of our system. We believe that a user's future opinion may be inferred from a small amount of data, provided that this data is the most relevant. We propose a novel algorithm that generates a better recommender input. Unlike existing approaches, our method sorts the relevant data twice. Doing this is slower, but the quality of the resulting input is considerably better. Furthermore, the modular nature of our approach may improve its performance, especially in the cloud computing context. We implement and validate our proposed model via mathematical modeling, by appealing to statistical theories, and through extensive experiments, data analysis, and empirical studies. Our empirical study examines the effectiveness of accuracy-improvement techniques for collaborative filtering recommender systems. We evaluate our proposed architecture model on the Netflix dataset, a popular (over 130,000 solutions), large (over 100,000,000 records), and extremely sparse (1.1%) collection of movie ratings. The results show that combination algorithm tuning has little effect on recommendation accuracy. However, all algorithms produce better results when supplied with a more relevant input. Our input generation algorithm is the reason for a considerable accuracy improvement.
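
    The abstract does not give the actual algorithms, but the split between an input-generation step (selecting the most relevant ratings) and simple combination algorithms that consume the same shared input could be sketched roughly as follows; the similarity measure, the relevance sort, and the weighting are all assumptions for illustration only:

    # Rough sketch of the two-component idea: select the most relevant neighbour
    # ratings for a (user, item) pair, then let a simple "combination" algorithm
    # predict from that shared input. All heuristics here are assumptions.
    import math

    def cosine_sim(a, b):
        """Cosine similarity between two users' rating dicts over common items."""
        common = set(a) & set(b)
        if not common:
            return 0.0
        num = sum(a[i] * b[i] for i in common)
        den = math.sqrt(sum(a[i] ** 2 for i in common)) * math.sqrt(sum(b[i] ** 2 for i in common))
        return num / den if den else 0.0

    def generate_input(ratings, user, item, k=2):
        """Input generation: keep only the k most relevant neighbours who rated the item."""
        candidates = [(cosine_sim(ratings[user], r), r[item])
                      for u, r in ratings.items() if u != user and item in r]
        candidates.sort(reverse=True)           # sort by relevance to the target user
        return candidates[:k]

    def combine_weighted(neighbours):
        """One simple combination algorithm: similarity-weighted average."""
        total = sum(sim for sim, _ in neighbours)
        return sum(sim * rating for sim, rating in neighbours) / total if total else None

    ratings = {
        "u1": {"m1": 5, "m2": 3},
        "u2": {"m1": 4, "m2": 2, "m3": 4},
        "u3": {"m1": 1, "m3": 2},
    }
    neighbours = generate_input(ratings, "u1", "m3")
    print(combine_weighted(neighbours))         # predicted rating of m3 for u1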

    Happiness and environmental quality

    Subjective wellbeing — happiness — is of increasing interest to economists, including environmental economists. There are several reasons for thinking that environmental quality (EQ), defined as high levels of environmental goods and low levels of environmental ‘bads’, will be positively related to happiness. Quantitative evidence on this remains limited, however. Some papers use cross-sectional data aggregated at country level, but it is open to doubt whether these aggregated measures reflect individuals' real EQ exposures. Other papers use individual-level data, but in general have spatial data at very coarse resolution and consider a limited range of EQ variables, exclusively around individuals' homes. This thesis reports two related strands of work. The first designs, implements and analyses data from two new cross-sectional surveys. It builds on earlier work by using spatial data at very high resolution and advanced Geographical Information Systems (GIS) techniques; by simultaneously considering multiple EQ characteristics, around both homes and workplaces; and by investigating the sensitivity of results to the choice of happiness indicator. The second strand develops and implements a new methodology focused on individuals' momentary experiences of the environment. It extends a protocol known to psychologists as the Experience Sampling Method (ESM) to incorporate satellite (GPS) location data. Using an app for participants' own smartphones, called Mappiness, it collects a panel data set comprising millions of geo-located responses from thousands of volunteers. EQ indicators are again joined to this data set using GIS. Results of the first strand of work are mixed, but support some links between happiness and the accessibility of natural environments, providing quantitative (including monetary) estimates of their strength. The second strand demonstrates that individuals are significantly and substantially happier outdoors in natural environments than in continuous urban ones. It introduces a valuable new line of evidence on this question, which has great potential for future development.
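
    Joining geo-located ESM responses to EQ indicators with GIS could, at its simplest, be a point-in-polygon spatial join; the sketch below uses geopandas, and the file names and column names are invented (the thesis's actual data layers and attributes are not given in the abstract):

    # Hedged sketch: attach a land-cover class to each geo-located happiness response
    # via a spatial join. File names and columns are illustrative assumptions.
    import geopandas as gpd

    # ESM responses with GPS coordinates and a happiness score
    responses = gpd.read_file("responses.geojson")      # columns: happiness, geometry
    # polygons describing environmental quality, e.g. land-cover classes
    land_cover = gpd.read_file("land_cover.geojson")    # columns: cover_class, geometry

    # ensure both layers share a coordinate reference system before joining
    responses = responses.to_crs(land_cover.crs)

    # point-in-polygon join: each response inherits the class of the polygon it falls in
    joined = gpd.sjoin(responses, land_cover, how="left", predicate="within")

    # average reported happiness by land-cover class (natural vs. continuous urban, etc.)
    print(joined.groupby("cover_class")["happiness"].mean())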

    SID 04, Social Intelligence Design: Proceedings Third Workshop on Social Intelligence Design


    Determination of the Fundamental Image Categories for Typical Consumer Imagery

    Many tasks in imaging science are image-dependent. While a particular dependency might simply be a function of certain physical attributes of an image, often it is closely related to the perceived semantic category. Therefore, a thorough understanding of image semantics would be of substantial practical value. The primary goal of this research was to determine the fundamental semantic categories for typical consumer imagery. Two psychophysical experiments were performed. Experiment I was a Free Sorting Experiment in which observers were asked to sort 321 images into piles of similar images. Experiment II was a Distributed Experiment, conducted over the internet, which used the method of triads to collect similarity and dissimilarity data from 321 images. Due to the large number of images included in the experiment, the method of non-repeating random paths was employed to reduce the number of required responses. Both experiments were analyzed using multidimensional scaling and hierarchical cluster analysis. The Free Sorting Experiment was also analyzed using dual scaling. The results from all three methods were compiled, and a set of 34 categories that proved to be stable across multiple methods of analysis was formed. A multidimensional perceptual image semantic space has been suggested, and the advantages of utilizing such a structure have been outlined. The 34 fundamental categories were represented by 10 perceptual dimensions that described the underlying perceptions leading to categorical assignments. The 10 perceptual dimensions were humanness, artificialness, perceived proximity, candidness, wetness, architecture, terrain, activeness, lightness, and relative age.
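
    As a rough, hedged illustration of the analysis pipeline described (multidimensional scaling plus hierarchical clustering of a dissimilarity matrix; the random data, the number of dimensions, and the number of clusters below are made up):

    # Hedged sketch: embed a dissimilarity matrix with MDS and cluster it hierarchically.
    # The random dissimilarities and the dimension/cluster counts are assumptions.
    import numpy as np
    from sklearn.manifold import MDS
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    n_images = 20
    d = rng.random((n_images, n_images))
    dissim = (d + d.T) / 2                     # symmetric dissimilarity matrix
    np.fill_diagonal(dissim, 0.0)

    # multidimensional scaling into a low-dimensional perceptual space
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)

    # hierarchical cluster analysis on the same dissimilarities
    Z = linkage(squareform(dissim), method="average")
    labels = fcluster(Z, t=4, criterion="maxclust")   # e.g. ask for 4 clusters

    print(coords[:3])
    print(labels)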