Real-Time Traffic Management in Smart Cities: Insights from the Traffic Management Simulation and Impact Analysis
Using simulation and empirical data analysis, this research examines the efficacy of real-time traffic control in smart cities. Real-time traffic data collected from strategically placed sensors shows that traffic volume was reduced by 8.33% on Main Street after a traffic light timing change was implemented. Traffic volume at Highway Junction was also significantly reduced, by 5.56%, as a result of traffic sign updates. In the City Center, by contrast, interventions produced a relatively small decrease in traffic volume (2.78%). The influence of these actions is shown by the traffic simulation models, in which average vehicle speeds rise from 25 to 28 mph on Main Street, 45 to 50 mph at Highway Junction, and 30 to 32 mph in the Residential Area. This research highlights the crucial function of data-driven decision-making in traffic management, guaranteeing effective distribution of resources and quantifiable enhancements in urban mobility. Urban planners and policymakers may use these findings to build smart cities that are more accessible, sustainable, and efficient.
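The percentage reductions reported in this abstract can be sanity-checked with a short percent-change calculation. The vehicle counts below are hypothetical (the abstract does not publish the underlying volumes); they are chosen only to reproduce the stated percentages.

```python
def pct_reduction(before: int, after: int) -> float:
    """Percent reduction in traffic volume between two counts."""
    return (before - after) / before * 100

# Hypothetical counts consistent with the reported reductions:
print(round(pct_reduction(1200, 1100), 2))  # Main Street      -> 8.33
print(round(pct_reduction(1800, 1700), 2))  # Highway Junction -> 5.56
print(round(pct_reduction(3600, 3500), 2))  # City Center      -> 2.78
```

Note that 8.33%, 5.56%, and 2.78% correspond to reductions of 1/12, 1/18, and 1/36 respectively, so any counts in those ratios reproduce the figures.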
Navigating the system vs. changing the system: a comparative analysis of the influence of asset-based and rights-based approaches on the well-being of socio-economic disadvantaged communities in Scotland
Asset-based and rights-based approaches have become leading strategies in Scottish community development. The asset-based approach seeks to help communities develop skills to provide self-help solutions. The rights-based approach seeks to help communities claim rights and make governments more accountable. These two approaches are based on contrasting conceptions of empowerment, employ opposing methods and lead to different outcomes. However, there is no empirical research that has comparatively assessed the two. This thesis represents the first in-depth exploration of the comparative effects of asset-based and rights-based approaches on the well-being of communities experiencing socio-economic disadvantage in Scotland.
The study follows a qualitative design that includes a comparative case study of two projects: the AB project (representing the asset-based approach), and the RB project (representing the rights-based approach). The study also includes the perspectives of a wider pool of practitioners working in a range of community development organisations in Scotland. In total, forty-five participants across seventeen organisations have participated in this study.
To assess the influence of asset-based and rights-based approaches upon well-being, this thesis employs a pluralistic account that combines objective and subjective indicators across three dimensions: material, social and personal. The specific well-being framework employed is the result of combining White’s (2010) well-being framework for the development practice and Oxfam Scotland’s (2013) Humankind Index.
The results of this study indicate that asset-based and rights-based approaches have important contrasting effects on well-being. The asset-based approach seems to have a more positive effect on project participants and across a higher number of well-being indicators. The rights-based approach has more observable effects on material well-being and a higher impact on the wider community, but across fewer indicators.
My findings also suggest that employing these approaches in community development settings brings different advantages and disadvantages. The asset-based approach seems easier to apply and to demonstrate positive outcomes for those involved. This approach, however, risks sustaining the status quo and, by doing so, misses the opportunity to achieve more transformational outcomes. The rights-based approach seems able to address structural disadvantages more effectively. Yet, it is more difficult to apply and to prove a positive impact. Organisations, practitioners, and communities applying it also face higher costs.
These findings have significant implications at the practice level. Asset-based and rights-based approaches are rarely combined in UK community development settings. As a result, practitioners are often left in the position of having to make a trade-off between helping improve the well-being of project participants and helping improve the well-being of the wider community. In theory, practitioners could avoid this trade-off by combining these approaches. In practice, this is not always possible. Asset-based and rights-based approaches represent opposing theories of change. There are also legal and funding requirements that prevent organisations from following a combination of both. Given this, understanding the comparative impact of applying asset-based and rights-based approaches in community development is critical.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
LIPIcs, Volume 251, ITCS 2023, Complete Volume
UNPUBLISHING THE NEWS: AN ASSESSMENT OF U.S. PUBLIC OPINION, NEWSROOM ACCOUNTABILITY, AND JOURNALISTS’ AUTHORITY AS “THE FIRST DRAFT OF HISTORY”
Unpublishing, or the manipulation, deindexing, or removal of published content on a news organization’s website, is a hotly debated issue in the news industry that disrupts fundamental beliefs about the nature of news and the roles of journalists. This dissertation’s premise is that unpublishing as a phenomenon challenges the authority of journalism as “the first draft of history,” questions the assumed relevance of traditional norms, and creates an opportunity to reconsider how news organizations demonstrate their accountability to the public. The study identifies public opinions related to unpublishing practices and approval of related journalistic norms through a public opinion survey of 1,350 U.S. adults. In tandem, a qualitative analysis of 62 editorial policies related to unpublishing offers the first inventory and assessment of emerging journalistic practices and the normative values journalists demonstrate through them. These contributions are valuable to both the academy and the news industry, as they identify a path forward for future research and provide desired guidance to U.S. news organizations. Findings suggest that in response to the unpublishing phenomenon, American journalists defend their professionalism primarily through the traditional professional paradigm of accuracy, invoking it to legitimize new guidelines whether those policies permitted or denounced unpublishing as a newsroom practice. Findings also show newsrooms are pledging increased levels of accountability to their communities and society at large, but how they might demonstrate that accountability more tactically was absent from policy discourse. In addition, both American adults and news organizations place a high value on the accuracy of previously published news content, yet the groups’ temporal conceptions of accuracy must be reconciled. 
Ultimately, the unpublishing phenomenon presents an opportunity for journalists to redefine their notions of accountability to their communities. Based on these findings, the study concludes with a call for American news organizations to abandon claims as the “first draft of history” in the digital era and assume the role of information custodians, proactively establishing and managing the lifecycle of content.
Differences in well-being: the biological and environmental causes, related phenotypes, and real-time assessment
Well-being is a complex and multifaceted construct that includes feeling good and functioning well. There is growing global recognition of well-being as an important research topic and public policy goal. Well-being is related to fewer behavioral and emotional problems, and is associated with many positive aspects of daily life, including longevity, higher educational achievement, happier marriages, and greater productivity at work. People differ in their levels of well-being, i.e., some people are in general happier or more satisfied with their lives than others. These individual differences in well-being can arise from many different factors, including biological (genetic) influences and environmental influences. To enhance the development of future mental health prevention and intervention strategies to increase well-being, more knowledge about these determinants and factors underlying well-being is needed. In this dissertation, I aimed to increase the understanding of the etiology of well-being in a series of studies using different methods, including systematic reviews, meta-analyses, twin designs, and molecular genetic designs. In part I, we brought together all published studies on the neural and physiological factors underlying well-being. This overview allowed us to critically investigate the claims made about the biology involved in well-being. The number of studies on the neural and physiological factors underlying well-being is increasing and the results point towards potential correlates of well-being. However, samples are often still small, and studies focus mostly on a single biomarker. Therefore, more well-powered, data-driven, and integrative studies across biological categories are needed to better understand the neural and physiological pathways that play a role in well-being. In part II, we investigated the overlap between well-being and a range of other phenotypes to learn more about the etiology of well-being.
We report a large overlap with phenotypes including optimism, resilience, and depressive symptoms. Furthermore, when removing the genetic overlap between well-being and depressive symptoms, we showed that well-being has unique genetic associations with a range of phenotypes, independently from depressive symptoms. These results can be helpful in designing more effective interventions to increase well-being, taking into account the overlap and possible causality with other phenotypes. In part III, we used the extreme environmental change during the COVID-19 pandemic to investigate individual differences in the effects of such environmental changes on well-being. On average, we found a negative effect of the pandemic on different aspects of well-being, especially further into the pandemic. Whereas most previous studies only looked at this average negative effect of the pandemic on well-being, we focused on the individual differences as well. We reported large individual differences in the effects of the pandemic on well-being in both chapters. This indicates that one-size-fits-all preventions or interventions to maintain or increase well-being during the pandemic or lockdowns will not be successful for the whole population. Further research is needed for the identification of protective factors and resilience mechanisms to prevent further inequality during extreme environmental situations. In part IV, we looked at the real-time assessment of well-being, investigating the feasibility and results of previous studies. The real-time assessment of well-being, related variables, and the environment can lead to new insights about well-being, i.e., results that we cannot capture with traditional survey research. The real-time assessment of well-being is therefore a promising area for future research to unravel the dynamic nature of well-being fluctuations and the interaction with the environment in daily life. 
Integrating all results in this dissertation confirmed that well-being is a complex human trait influenced by many interrelated and interacting factors. A promising future direction for understanding individual differences in well-being is a data-driven approach that investigates the complex interplay of neural, physiological, genetic, and environmental factors in well-being.
AI Usage in Development, Security, and Operations
Artificial intelligence (AI) has become a growing field in information technology (IT). Cybersecurity managers are concerned that the lack of strategies to incorporate AI technologies in developing secure software for IT operations may inhibit the effectiveness of security risk mitigation. Grounded in the technology acceptance model, the purpose of this qualitative exploratory multiple case study was to explore strategies cybersecurity professionals use to incorporate AI technologies in developing secure software for IT operations. The participants were 10 IT professionals in the United States with at least 5 years of professional experience working in DevSecOps and managing teams of at least three DevSecOps professionals. Data were collected using semi-structured interviews, and three themes were identified through thematic analysis: (a) implementation obstacles, (b) AI cloud implementation strategy, and (c) AI local implementation strategy. A specific recommendation for IT professionals is to identify knowledge gaps and security challenges in the DevSecOps pipeline to facilitate the necessary training. The implications for positive social change include the potential to improve organizations' security postures and, by extension, the societies and individuals they serve.
"False negative -- that one is going to kill you": Understanding Industry Perspectives of Static Analysis based Security Testing
The demand for automated security analysis techniques, such as static analysis based security testing (SAST) tools, continues to increase. To develop SASTs that are effectively leveraged by developers for finding vulnerabilities, researchers and tool designers must understand how developers perceive, select, and use SASTs, what they expect from the tools, whether they know of the limitations of the tools, and how they address those limitations. This paper describes a qualitative study that explores the assumptions, expectations, beliefs, and challenges experienced by developers who use SASTs. We perform in-depth, semi-structured interviews with 20 practitioners who possess a diverse range of software development expertise, as well as a variety of unique security, product, and organizational backgrounds. We identify key findings that shed light on developer perceptions and desires related to SASTs, and also expose gaps in the status quo, challenging long-held beliefs in SAST design priorities. Finally, we provide concrete future directions for researchers and practitioners rooted in an analysis of our findings.
Comment: To be published in IEEE Symposium on Security and Privacy 202
Business Functions Capabilities and Small and Medium Enterprises’ Internationalization
Ineffective global expansion can adversely affect small and medium enterprises’ (SMEs) business outcomes. Business leaders are concerned with developing effective global expansion strategies to penetrate potential international markets, thus enhancing sustainability. Grounded in business management systems theory, the purpose of this qualitative multi-case study was to explore strategies that leaders of Sub-Saharan Africa manufacturing SMEs use for global expansion. The participants were five manufacturing value-adding SME leaders participating in export markets. Using Yin’s five-step data analysis process, six themes emerged: (a) enterprise characterization, (b) understanding the enterprise’s product, (c) intra-enterprise factor-based strategies for export participation, (d) the enterprise’s external factor-based strategies for successful export ventures, (e) global expansion strategies, and (f) serendipitous findings. A key recommendation for SME leaders is to analyze the critical components of their products and prepare to adjust them to the demand dimensions of the target market. The implications for positive social change include the potential to increase the enterprise’s wealth, increase employment, reduce poverty for all value chain participants, and grow gross domestic product.
Security and Authenticity of AI-generated code
The intersection of security and plagiarism in the context of AI-generated code is a critical theme throughout this study. While our research primarily focuses on evaluating the security aspects of AI-generated code, it is imperative to recognize the interconnectedness of security and plagiarism concerns. On the one hand, we conduct an extensive analysis of the security flaws that might be present in AI-generated code, with a focus on code produced by ChatGPT and Bard. This analysis emphasizes the dangers that might occur if such code is incorporated into software programs, especially if it has security weaknesses. This directly affects developers, advising them to use caution when considering integrating AI-generated code, in order to protect the security of their applications. On the other hand, our research also covers code plagiarism. In the context of AI-generated code, plagiarism, defined as the reuse of code without proper attribution or in violation of license and copyright restrictions, becomes a significant concern. As open-source software and AI language models proliferate, the risk of plagiarism in AI-generated code increases. Our research combines code attribution techniques to identify the authors of AI-generated insecure code and to identify where the code originated. By addressing security and plagiarism issues at the same time, our research emphasizes the multidimensional nature of AI-generated code and its wide-ranging repercussions. This comprehensive approach contributes to a more profound understanding of the problems and ethical implications associated with the use of AI in code generation, embracing both security and authorship-related concerns.
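The abstract does not specify which code attribution techniques the study uses. As a minimal illustration of the general idea, one simple signal for tracing where a generated snippet may have originated is lexical token-set similarity against candidate sources; the repository names and snippets below are hypothetical.

```python
import re

def tokens(code: str) -> set[str]:
    """Crude lexical tokenization: identifiers and integer literals."""
    return set(re.findall(r"[A-Za-z_]\w*|\d+", code))

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two code snippets (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Hypothetical example: rank candidate sources by similarity to a generated snippet.
snippet = "def add(a, b): return a + b"
candidate_sources = {
    "repo_x": "def add(a, b): return a + b",
    "repo_y": "class Node: pass",
}
best = max(candidate_sources, key=lambda k: jaccard(snippet, candidate_sources[k]))
print(best)  # -> repo_x
```

Real attribution systems use far richer features (ASTs, stylometry, learned embeddings); this sketch only shows the matching-by-similarity framing the abstract alludes to.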