190 research outputs found
GIS-Based Geospatial Data Analysis: the Security of Cycle Paths in Modena
The use of fossil fuels is contributing to the global climate crisis and threatening the sustainability of the planet. Bicycles are a vital component of the solution, as they can help mitigate the effects of climate change and improve the quality of life for all. However, cities need to be equipped with the necessary infrastructure to support their use while guaranteeing cyclists' safety.
Moreover, cyclists should plan their route considering the level of security associated with the different available options to reach their destination.
The paper presents and tests a method that integrates geographical data from various sources, with different geometries and formats, into a single view of the cycle paths in the province of Modena, Italy.
Geographic Information System (GIS) software functionalities were exploited to classify paths into five categories, from protected bike lanes to streets with no bike infrastructure. The type of traffic that co-exists with each cycle path was also analysed.
The main outcome of this research is a visualization of the cycle paths in the province of Modena highlighting the security of paths, the discontinuity of the routes, and the least covered areas.
Moreover, a cycle-path graph data model was generated to perform routing based on the security level.
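Routing based on security level, as described above, amounts to a shortest-path search over a graph whose edge weights penalise less protected segments. The penalty factors, node names, and toy graph below are invented for illustration; they are not the paper's actual data model:

```python
import heapq

# Hypothetical mapping from the five path categories to a cost multiplier
# (1 = protected bike lane ... 5 = street with no bike infrastructure).
PENALTY = {1: 1.0, 2: 1.3, 3: 1.8, 4: 2.5, 5: 4.0}

# Toy cycle-path graph: (node, node, distance_km, security_category).
EDGES = [
    ("A", "B", 2.0, 1),
    ("B", "C", 1.5, 1),
    ("A", "C", 2.5, 5),
    ("C", "D", 1.0, 2),
]

def build_graph(edges):
    """Undirected adjacency list with security-weighted edge costs."""
    graph = {}
    for u, v, dist, level in edges:
        w = dist * PENALTY[level]
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    return graph

def safest_route(graph, start, goal):
    """Dijkstra's algorithm on the security-weighted distances."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

graph = build_graph(EDGES)
cost, path = safest_route(graph, "A", "C")
# The direct A-C street has no bike infrastructure, so the routing
# prefers the longer but protected detour via B.
```

With the penalties above, the weighted detour A→B→C (cost 3.5) beats the direct unprotected edge (cost 10.0), which is exactly the trade-off a security-aware router must express.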
Making Software Architecture a Continuous Practice
DevOps is an ever-growing trend in software development, conveying a mindset that all things should be continuous. Interestingly, one of the common challenges with DevOps adoption relates to software architecture, in large part because architecture is not part of DevOps.
This thesis looks at making software architecture a continuous practice and thus bringing it into the DevOps space. A prototype solution, Architector, was implemented to this end, and the results indicate that it is a viable approach to making software architecture a continuous practice. However, further work is necessary to expand the scope of continuous architecture and to fully validate this claim by applying Architector to a real-world software development workflow.
Essays on the organizational consequences of on-line behavior of audiences
Over the past two decades, internet use has become an ever larger part of our everyday lives. We communicate with our friends and colleagues using the internet, we work using the internet, and we shop using the internet. We learn and increase our knowledge from information available online. While on the one hand we benefit individually from instant access to online content, on the other hand we participate as members of a community, for example when we share our experiences online. The ever-growing use of the internet and its spread into new segments of our daily lives brings significant changes not only to us as individuals but also to organizations. In the past decade, there has been a shift in the field of organizational theory concerning the environment of organizations. Current approaches extend the horizon of the classical view, holding that the organizational environment is constituted not only by rival organizations but also by their audience members. Several studies have found evidence that audience members' perceptions and behavior influence organizational success. For example, category-spanning organizations on average suffer social and economic disadvantages in markets because they cannot meet the expectations of their audiences. This shift towards understanding the effect of audience responses on organizational outcomes motivates my dissertation. More specifically, I study how individuals' on-line behavior affects organizations. I analyze three aspects of internet-mediated communication and their consequences for organizations. Firstly, I address the need to compare traditional face-to-face communication with modern email communication (Chapter 2). Studies tend to take it for granted that on-line information exchange mirrors its off-line counterpart at the workplace.
Although there are great advantages in the availability of email data, as it retains communication in its completeness, it does not fully correspond to previously studied relations such as friendship or advice seeking. The characteristics of on-line communication also differ from off-line information exchange: employees respect divisional and hierarchical boundaries in face-to-face conversations, while these boundaries are blurred in email exchange. Secondly, I analyze a special type of on-line behavior, the on-line word-of-mouth communication among audience members (Chapter 3). Online reviews play an increasingly important role in shaping organizational performance. Drawing conclusions on how customers perceive the quality and typicality of a producer, and how this manifests in on-line ratings, increases the predictability of producer success. Thirdly, I approach audience behavior from a collective behavior perspective (Chapter 4). Specifically, I analyze audience dynamics with threshold models. In doing so, I address the micro-level mechanism by which audience behavior creates certain macro-level patterns of producer success, rather than assuming that these are simple aggregates of individual characteristics.
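Threshold models of collective behavior (in the spirit of Granovetter) can be illustrated with a minimal simulation. Each agent adopts a behavior once the fraction of prior adopters reaches its personal threshold; the process iterates to a fixed point. The threshold values below are made up for illustration, and the dissertation's actual models are surely richer:

```python
def threshold_cascade(thresholds):
    """Iterate a Granovetter-style cascade to its fixed point and
    return the number of agents that end up adopting."""
    n = len(thresholds)
    # Agents with threshold 0 adopt unconditionally and seed the cascade.
    adopted = [t == 0 for t in thresholds]
    changed = True
    while changed:
        changed = False
        frac = sum(adopted) / n  # current fraction of adopters
        for i, t in enumerate(thresholds):
            if not adopted[i] and frac >= t:
                adopted[i] = True
                changed = True
    return sum(adopted)

# An evenly spread threshold distribution cascades to full adoption...
full = threshold_cascade([0, 0.1, 0.2, 0.3, 0.4])
# ...while a tiny shift in individual thresholds stalls the cascade at
# the single seed, even though the populations look almost identical.
stalled = threshold_cascade([0, 0.3, 0.3, 0.3])
```

The contrast between the two runs is the model's key point: macro-level outcomes are not simple aggregates of individual characteristics but depend on the precise distribution of thresholds.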
Architecture-based Uncertainty Impact Analysis for Confidentiality
In an age of interconnected systems, confidentiality is a crucial security quality attribute. Since fixing confidentiality violations becomes more costly the later they are discovered, software architects should consider them as early as the design phase. During the architectural design process, architects make design decisions to reduce uncertainty. However, decisions are often subject to assumptions and to unknown or imprecise information. Assumptions can turn out to be wrong and must be revised, which in turn causes new uncertainty. Design-time uncertainties therefore make precise conclusions about the system's confidentiality impossible. It is thus necessary to assess their impact at the architectural level before making any statement about confidentiality. So far, this assessment has been manual and tedious, and it requires a great deal of knowledge. Current approaches do not consider uncertainties at design time, i.e. in software architectures, but in other areas such as self-adaptive systems.
We aim to close this gap as follows. First, we present a new approach to categorizing uncertainties. Building on it, we provide an uncertainty template that enables architects to structurally derive types of uncertainties and their impact on architectural element types for a given domain. Second, we present an uncertainty impact analysis that enables architects to specify which elements are directly affected by uncertainties. Based on structural propagation rules, the analysis automatically derives further elements that are potentially affected.
We evaluate the structural quality, applicability, and purpose of the template. We show that the categories satisfy principles such as orthogonality, completeness, and distinguishability. We also show that the template helps to derive uncertainty types and their impacts, and that it creates reusability and awareness. Finally, we illustrate the template's relevance by showing that, compared to existing taxonomies, it classifies uncertainties in software architectures more accurately and thus enables more precise statements about their impact. The analysis is evaluated with respect to usability, functionality, and accuracy. We demonstrate that the analysis increases usability by reducing the amount of expertise required to deal with uncertainties compared to a manual analysis. Using the contact-tracing application Corona-Warn-App, we show that the analysis reduces the number of elements to be examined by 85% compared to a manual analysis. Furthermore, we illustrate how it enables architects to explicitly manage uncertainties at design time. Using the case study, we show that the analysis achieves a recall of 100% at a precision of 44%-91%.
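The structural propagation at the heart of such an impact analysis can be sketched as a reachability search over the architecture's dependency structure: elements directly annotated with an uncertainty seed the search, and everything transitively dependent on them is flagged as potentially affected. The element names and dependency map below are hypothetical and not taken from the thesis or the Corona-Warn-App model:

```python
from collections import deque

# Hypothetical architecture model: each element maps to the elements
# that structurally depend on it (e.g., callers of a component).
DEPENDENTS = {
    "DataStore": ["AuthService", "ReportService"],
    "AuthService": ["WebFrontend"],
    "ReportService": ["WebFrontend"],
    "WebFrontend": [],
}

def propagate_impact(dependents, directly_affected):
    """Derive all potentially affected elements by transitively
    following structural dependencies (breadth-first) from the
    directly annotated ones."""
    affected = set(directly_affected)
    queue = deque(directly_affected)
    while queue:
        elem = queue.popleft()
        for dep in dependents.get(elem, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# An uncertainty annotated on the data store propagates to every
# element downstream of it.
impacted = propagate_impact(DEPENDENTS, {"DataStore"})
```

Automating this derivation is what spares the architect from manually inspecting every element: only the derived set needs expert attention, which is where the reported reduction in examined elements comes from.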
A Distributed, Architecture-Centric Approach to Computing Accurate Recommendations from Very Large and Sparse Datasets
The use of recommender systems is an emerging trend today, when user behavior information is abundant. There are many large datasets available for analysis because many businesses are interested in future user opinions. Sophisticated algorithms that predict such opinions can simplify decision-making, improve customer satisfaction, and increase sales. However, modern datasets contain millions of records, which represent only a small fraction of all possible data. Furthermore, much of the information in such sparse datasets may be considered irrelevant for making individual recommendations. As a result, there is a demand for a way to make personalized suggestions from large amounts of noisy data. Current recommender systems are usually all-in-one applications that provide one type of recommendation. Their inflexible architectures prevent detailed examination of recommendation accuracy and its causes. We introduce a novel architecture model that supports scalable, distributed suggestions from multiple independent nodes. Our model consists of two components: the input matrix generation algorithm and multiple platform-independent combination algorithms. A dedicated input generation component provides the necessary data for combination algorithms, reduces their size, and eliminates redundant data processing. Likewise, simple combination algorithms can produce recommendations from the same input, so we can more easily distinguish between the benefits of a particular combination algorithm and the quality of the data it receives. Such a flexible architecture is more conducive to a comprehensive examination of our system. We believe that a user's future opinion may be inferred from a small amount of data, provided that this data is most relevant. We propose a novel algorithm that generates a better recommender input. Unlike existing approaches, our method sorts the relevant data twice. Doing this is slower, but the quality of the resulting input is considerably better.
Furthermore, the modular nature of our approach may improve its performance, especially in the cloud computing context. We implement and validate our proposed model via mathematical modeling, by appealing to statistical theories, and through extensive experiments, data analysis, and empirical studies. Our empirical study examines the effectiveness of accuracy improvement techniques for collaborative filtering recommender systems. We evaluate our proposed architecture model on the Netflix dataset, a popular (over 130,000 solutions), large (over 100,000,000 records), and extremely sparse (1.1%) collection of movie ratings. The results show that combination algorithm tuning has little effect on recommendation accuracy. However, all algorithms produce better results when supplied with a more relevant input. Our input generation algorithm is the reason for a considerable accuracy improvement.
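The separation between input generation (selecting a small, relevant slice of the data) and a combination algorithm (turning that slice into a prediction) can be illustrated with a minimal neighbourhood-based collaborative filter. The ratings, cosine similarity measure, and neighbour count below are illustrative assumptions, not the thesis's actual algorithms:

```python
from math import sqrt

# Toy user-item ratings (user -> {item: rating}); a tiny stand-in for
# a sparse dataset such as the Netflix collection.
RATINGS = {
    "u1": {"A": 5, "B": 3, "C": 4},
    "u2": {"A": 4, "B": 3, "C": 5},
    "u3": {"A": 1, "B": 5},
    "u4": {"A": 5, "C": 4},
}

def cosine_sim(r1, r2):
    """Cosine similarity over the items both users rated."""
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    num = sum(r1[i] * r2[i] for i in common)
    den = (sqrt(sum(r1[i] ** 2 for i in common))
           * sqrt(sum(r2[i] ** 2 for i in common)))
    return num / den

def predict(ratings, user, item, k=2):
    """Input generation: keep only the k most similar users who rated
    the item. Combination: similarity-weighted average of their ratings."""
    neighbours = sorted(
        ((cosine_sim(ratings[user], r), r[item])
         for u, r in ratings.items() if u != user and item in r),
        reverse=True,
    )[:k]
    total = sum(s for s, _ in neighbours)
    if total == 0:
        return None
    return sum(s * r for s, r in neighbours) / total

pred = predict(RATINGS, "u4", "B")
```

Because the neighbour-selection step is isolated from the weighted-average step, either can be swapped out independently, which is the kind of examination the flexible architecture above is meant to enable.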
Factors that influence the adoption of e-learning: an empirical study in Kuwait
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
E-learning has emerged as a necessity to meet the challenges posed by the development of information technology and its potential for greater access to knowledge. The general hypothesis of this research is that e-learning as an organizational activity started in the developed countries, and as such, the implementation models developed there are taken as a benchmark. The implementation barriers and the influential factors for adopting e-learning in different regions and societies may or may not be the same as those found in the developed countries (with varying degrees of intensity or importance). Hence, those available implementation models may not necessarily be followed in all stages and steps when used by different countries and societies. Accordingly, the implementation barriers and the influential factors may differ from one case to another. Since e-learning is relatively new in Kuwait and there are no comprehensive studies about the adoption of e-learning or the important factors that would influence its adoption in Kuwait (ref), the aim of this research is to investigate and identify the main factors that would influence the acceptance and adoption of e-learning in Kuwait as an example of a developing country. In order to realize the aim of this research, the e-learning literature was reviewed and an exploratory study was conducted in Kuwait. The exploratory study explored the state of e-learning in Kuwait and investigated the issues influencing e-learning adoption. Then, a conceptual model was proposed based on the Unified Theory of Acceptance and Use of Technology (UTAUT) model, amended with the outcomes of the exploratory study to suit the context of the study.
The proposed conceptual model was developed to study e-learning adoption in Kuwait and to offer a further explanation of the adoption of e-learning in the Kuwaiti context. Triangulation in data collection was used to examine and validate the conceptual model, with both quantitative and qualitative methods. A questionnaire-based survey was conducted first, followed by an interview-based field study. The thesis concludes by highlighting the main findings of the research and presenting its main contributions.
Happiness and environmental quality
Subjective wellbeing, or happiness, is of increasing interest to economists, including environmental economists. There are several reasons for thinking that environmental quality (EQ), defined as high levels of environmental goods and low levels of environmental 'bads', will be positively related to happiness.
Quantitative evidence on this remains limited, however. Some papers use cross-sectional data aggregated at country level, but it is open to doubt whether these aggregated measures reflect individuals' real EQ exposures. Other papers use individual-level data, but in general have spatial data at very coarse resolution, and consider a limited range of EQ variables, exclusively around individuals' homes.
This thesis reports two related strands of work. The first designs, implements and analyses data from two new cross-sectional surveys. It builds on earlier work by using spatial data at very high resolution, and advanced Geographical Information Systems (GIS) techniques; by simultaneously considering multiple EQ characteristics, around both homes and workplaces; and by investigating the sensitivity of results to the choice of happiness indicator.
The second strand develops and implements a new methodology focused on individuals' momentary experiences of the environment. It extends a protocol known to psychologists as the Experience Sampling Method (ESM) to incorporate satellite (GPS) location data. Using an app for participants' own smartphones, called Mappiness, it collects a panel data set comprising millions of geo-located responses from thousands of volunteers. EQ indicators are again joined to this data set using GIS.
Results of the first strand of work are mixed, but support some links between happiness and the accessibility of natural environments, providing quantitative (including monetary) estimates of their strength. The second strand demonstrates that individuals are significantly and substantially happier outdoors in natural environments than in continuous urban ones. It introduces a valuable new line of evidence on this question, which has great potential for future development.
Determination of the Fundamental Image Categories for Typical Consumer Imagery
Many tasks in imaging science are image-dependent. While a particular dependency might simply be a function of certain physical attributes of an image, often it is closely related to the perceived semantic category. Therefore, a thorough understanding of image semantics would be of substantial practical value. The primary goal of this research was to determine the fundamental semantic categories for typical consumer imagery. Two psychophysical experiments were performed. Experiment I was a Free Sorting Experiment where observers were asked to sort 321 images into piles of similar images. Experiment II was a Distributed Experiment conducted over the internet which used the method of triads to collect similarity and dissimilarity data from 321 images. Due to the large number of images included in the experiment, the method of non-repeating random paths was employed to reduce the number of required responses. Both experiments were analyzed using multidimensional scaling and hierarchical cluster analysis. The Free Sorting Experiment was also analyzed using dual scaling. The results from all three methods were compiled, and a set of 34 categories that proved to be stable across multiple methods of analysis was formed. A multidimensional perceptual image semantic space has been suggested and the advantages of utilizing such a structure have been outlined. The 34 fundamental categories were represented by 10 perceptual dimensions that described the underlying perceptions leading to categorical assignments. The 10 perceptual dimensions were humanness, artificialness, perceived proximity, candidness, wetness, architecture, terrain, activeness, lightness, and relative age.
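The method of triads can be sketched as follows: each response marks one of three images as the odd one out, the remaining pair is counted as similar, and the accumulated counts feed a clustering step. The image labels, toy responses, and greedy single-linkage clustering below are illustrative stand-ins for the study's actual multidimensional scaling and hierarchical cluster analysis:

```python
from itertools import combinations

# Hypothetical triad responses: (a, b, odd) means the observer judged
# `odd` most different, so `a` and `b` count as similar.
TRIADS = [
    ("beach", "lake", "office"),
    ("beach", "lake", "portrait"),
    ("office", "portrait", "lake"),
    ("office", "portrait", "beach"),
]

def similarity_counts(triads):
    """Aggregate triad responses into pairwise similarity counts."""
    sims = {}
    for a, b, _odd in triads:
        pair = tuple(sorted((a, b)))
        sims[pair] = sims.get(pair, 0) + 1
    return sims

def cluster(items, sims, n_clusters=2):
    """Greedy single-linkage agglomeration on the similarity counts:
    repeatedly merge the two clusters with the strongest link."""
    clusters = [{i} for i in items]
    while len(clusters) > n_clusters:
        best, best_pair = -1, None
        for x, y in combinations(range(len(clusters)), 2):
            link = max(
                sims.get(tuple(sorted((i, j))), 0)
                for i in clusters[x] for j in clusters[y]
            )
            if link > best:
                best, best_pair = link, (x, y)
        x, y = best_pair
        clusters[x] |= clusters[y]
        del clusters[y]
    return clusters

groups = cluster(["beach", "lake", "office", "portrait"],
                 similarity_counts(TRIADS))
```

Even this crude aggregation recovers a natural/man-made split from four responses; the study's combination of multidimensional scaling, hierarchical clustering, and dual scaling extracts far richer structure from 321 images.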
- …