An investigation into the cultural and legal factors influencing the differential prosecution rate for female genital mutilation in England and France
Female Genital Mutilation (FGM) is a problem that both England and France face. Both countries agree that FGM is a criminal offence and that it constitutes child abuse. Accordingly, each nation has taken its own distinct measures in law and policy against the practice. These approaches have produced significantly divergent outcomes, particularly in the prosecution rates of offenders, with France leading in that regard.
This thesis seeks to understand why criminal justice outcomes differ so significantly between the two nations, despite many parallels between the historical and contemporary contexts of these two Western European neighbours. To do this, it explores the overarching, systemic forces at play within both paradigms, which the author has termed “the Medium”. Furthermore, given that FGM within both France and England is a product of migrant communities having transported cultural practices into their new context, particular attention is paid to approaches to multiculturalism as a key aspect of the Medium for the purposes of this study. Alongside this examination of the Medium, the study also explores the role of individual activism and the agency of particular campaigners, termed “the Human Catalyst”. It addresses the complex interplay between the Medium and the Human Catalyst as a means of understanding their combined influence on the divergent pictures in respect of prosecuting FGM.
Evaluation of different segmentation-based approaches for skin disorders from dermoscopic images
Treballs Finals de Grau d'Enginyeria Biomèdica. Facultat de Medicina i Ciències de la Salut. Universitat de Barcelona. Curs: 2022-2023. Tutor/Director: Sala Llonch, Roser; Mata Miquel, Christian; Munuera, Josep.
Skin cancer is the most common type of cancer in the world, and its incidence has been increasing over the past decades. Even with the most complex and advanced technologies, current image acquisition systems do not permit reliable identification of skin lesions by visual examination due to the challenging structure of the malignancy. This motivates the implementation of automatic skin lesion segmentation methods, both to assist physicians' diagnosis when determining the lesion's region and to serve as a preliminary step for the classification of the skin lesion. Accurate and precise segmentation is crucial for rigorous screening and monitoring of the disease's progression.
Against this background, the present project aims to provide a state-of-the-art review of the most predominant conventional segmentation models for skin lesion segmentation, alongside a market analysis. With the rise of automatic segmentation tools, a wide number of algorithms are currently in use, but many drawbacks arise when employing them for dermatological disorders due to the high presence of artefacts in the acquired images.
In light of the above, three segmentation techniques have been selected for this work: the level set method, an algorithm combining the GrabCut and k-means methods, and an automatic intensity-based algorithm developed by the Hospital Sant Joan de Déu de Barcelona research group. In addition, their performance is validated with a view to further implementation in clinical training. The proposals, together with the obtained results, have been produced using a publicly available skin lesion image database.
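As a rough illustration of the intensity-clustering idea behind the k-means component, the sketch below segments a synthetic dermoscopic-style image by clustering pixel intensities; it is a simplified stand-in, not the hospital group's actual algorithm, and it assumes the lesion is darker than the surrounding skin.

```python
import numpy as np

def kmeans_intensity_segment(image, k=2, iters=20):
    """Cluster pixel intensities with 1-D k-means; return a binary lesion mask.

    Illustrative only: the darkest cluster is assumed to be the lesion,
    which holds only for lesions darker than the surrounding skin.
    """
    pixels = image.reshape(-1).astype(float)
    # Deterministic initialisation: spread centres across the intensity range
    centers = np.percentile(pixels, np.linspace(0, 100, k)).astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # Recompute centres as cluster means (skip empty clusters)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
    lesion_cluster = int(np.argmin(centers))  # darkest cluster
    return (labels == lesion_cluster).reshape(image.shape)

# Synthetic example: a dark circular "lesion" on a brighter background
h, w = 64, 64
yy, xx = np.mgrid[:h, :w]
img = np.full((h, w), 200.0)
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 60.0
mask = kmeans_intensity_segment(img)
```

Real pipelines would add artefact removal (hair, rulers) and post-processing such as morphological closing before handing the mask to a classifier.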
FairGen: Towards Fair Graph Generation
There have been tremendous efforts over the past decades dedicated to the
generation of realistic graphs in a variety of domains, ranging from social
networks to computer networks, from gene regulatory networks to online
transaction networks. Despite the remarkable success, the vast majority of
these works are unsupervised in nature and are typically trained to minimize
the expected graph reconstruction loss, which would result in the
representation disparity issue in the generated graphs, i.e., the protected
groups (often minorities) contribute less to the objective and thus suffer from
systematically higher errors. In this paper, we aim to tailor graph generation
to downstream mining tasks by leveraging label information and user-preferred
parity constraint. In particular, we start from the investigation of
representation disparity in the context of graph generative models. To mitigate
the disparity, we propose a fairness-aware graph generative model named
FairGen. Our model jointly trains a label-informed graph generation module and
a fair representation learning module by progressively learning the behaviors
of the protected and unprotected groups, from the `easy' concepts to the `hard'
ones. In addition, we propose a generic context sampling strategy for graph
generative models, which is proven to be capable of fairly capturing the
contextual information of each group with a high probability. Experimental
results on seven real-world data sets, including web-based graphs, demonstrate
that FairGen (1) obtains performance on par with state-of-the-art graph
generative models across six network properties, (2) mitigates the
representation disparity issues in the generated graphs, and (3) substantially
boosts the model performance by up to 17% in downstream tasks via data
augmentation.
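The representation disparity the paper targets can be illustrated with a toy metric: the gap between per-group mean reconstruction errors. The function and numbers below are illustrative only, not FairGen's actual objective.

```python
import numpy as np

def representation_disparity(errors, groups):
    """Gap between the mean reconstruction error of the worst- and
    best-served groups. Larger values mean the generative model
    systematically under-serves some (often minority) group."""
    means = [errors[groups == g].mean() for g in np.unique(groups)]
    return max(means) - min(means)

# Toy example: the minority group (label 1) suffers higher error
errors = np.array([0.10, 0.12, 0.09, 0.50, 0.55])
groups = np.array([0, 0, 0, 1, 1])
gap = representation_disparity(errors, groups)
```

A fairness-aware generator would be trained to shrink this gap while keeping overall reconstruction quality on par with unconstrained baselines.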
The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, allowing
users, scholars, and entrepreneurs to gain an in-depth understanding of the
Metaverse ecosystem and identify their opportunities and potential for contribution.
Ensuring Access to Safe and Nutritious Food for All Through the Transformation of Food Systems
Harmonising electronic health records for reproducible research: challenges, solutions and recommendations from a UK-wide COVID-19 research collaboration
Background: The CVD-COVID-UK consortium was formed to understand the relationship between COVID-19 and cardiovascular diseases through analyses of harmonised electronic health records (EHRs) across the four UK nations. Beyond COVID-19, data harmonisation and common approaches enable analysis within and across independent Trusted Research Environments. Here we describe the reproducible harmonisation method developed using large-scale EHRs in Wales to accommodate the fast and efficient implementation of cross-nation analysis in England and Wales as part of the CVD-COVID-UK programme. We characterise current challenges and share lessons learnt.
Methods: Serving the scope and scalability of multiple study protocols, we used linked, anonymised individual-level EHR, demographic and administrative data held within the SAIL Databank for the population of Wales. The harmonisation method was implemented as a four-layer reproducible process, starting from raw data in the first layer. Each of layers two to four is framed by, but not limited to, the characterised challenges and lessons learnt. We achieved curated data in the second layer, followed by extracting phenotyped data in the third layer. We captured any project-specific requirements in the fourth layer.
Results: Using the implemented four-layer harmonisation method, we retrieved approximately 100 health-related variables for the 3.2 million individuals in Wales, which are harmonised with corresponding variables for >56 million individuals in England. We processed 13 data sources into the first layer of our harmonisation method: five of these are updated daily or weekly, and the rest at various frequencies, providing sufficient data flow for frequent capture of up-to-date demographic, administrative and clinical information.
Conclusions: We implemented an efficient, transparent, scalable, and reproducible harmonisation method that enables multi-nation collaborative research.
With a current focus on COVID-19 and its relationship with cardiovascular outcomes, the harmonised data have supported a wide range of research activities across the UK.
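The four-layer flow (raw → curated → phenotyped → project-specific) can be sketched as a chain of small transformations. All function names, codes and fields below are hypothetical illustrations, not taken from the SAIL/CVD-COVID-UK codebase.

```python
# Hypothetical sketch of a four-layer harmonisation flow.
def curate(raw):
    # Layer 2: standardise clinical codes (e.g. normalise case)
    return [dict(r, code=r["code"].upper()) for r in raw]

def phenotype(curated):
    # Layer 3: derive phenotypes from standardised codes
    # (illustrative rule: ICD-10 E11* denotes type 2 diabetes)
    return [dict(r, diabetic=r["code"].startswith("E11")) for r in curated]

def project_view(phenotyped, keep):
    # Layer 4: project-specific extract exposing only agreed fields
    return [{k: r[k] for k in keep} for r in phenotyped]

# Layer 1: raw records as landed from a data source
raw = [{"id": 1, "code": "e11.9"}, {"id": 2, "code": "i10"}]
final = project_view(phenotype(curate(raw)), keep=("id", "diabetic"))
```

Keeping each layer a pure, re-runnable transformation is what makes the pipeline reproducible when upstream sources refresh daily or weekly.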
Countermeasures for the majority attack in blockchain distributed systems
Blockchain technology is considered one of the most important computing paradigms since the Internet, owing to unique characteristics that make it ideal for recording, verifying and managing information about different transactions. Despite this, Blockchain faces various security problems, the 51% or majority attack being one of the most important. It consists of one or more miners taking control of at least 51% of the hash power or computation in a network, so that a miner can arbitrarily manipulate and modify the information recorded in this technology. This work focused on designing and implementing strategies for detecting and mitigating majority (51%) attacks in a distributed Blockchain system, based on characterising miner behaviour. To achieve this, the hash rate/share of Bitcoin and Ethereum miners was analysed and evaluated, followed by the design and implementation of a consensus protocol to control miners' computing power. Subsequently, Machine Learning models were explored and evaluated to detect cryptojacking-type malicious software. (Doctorado: Doctor en Ingeniería de Sistemas y Computación)
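The first detection step described above, characterising each miner's share of the total hash rate, can be sketched as a simple monitor; the pool names and flagging rule below are illustrative only, not the thesis's actual protocol.

```python
def majority_risk(hash_rates, threshold=0.51):
    """Flag miners whose share of total network hash rate reaches the
    51%-attack threshold. Simplified sketch: real monitors would also
    consider coalitions of pools and track shares over time."""
    total = sum(hash_rates.values())
    return {miner: rate / total
            for miner, rate in hash_rates.items()
            if rate / total >= threshold}

# Toy snapshot of network hash rate (arbitrary units) per mining pool
pools = {"poolA": 60.0, "poolB": 25.0, "poolC": 15.0}
flagged = majority_risk(pools)  # poolA alone controls 60% of the hash rate
```

A mitigating consensus protocol could then react to flagged miners, for example by discounting their blocks or rate-limiting their accepted work.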
Lift EVERY Voice and Sing: An Intersectional Qualitative Study Examining the Experiences of Lesbian, Gay, Bisexual, and Queer Faculty and Administrators at Historically Black Colleges and Universities
While there is minimal literature addressing the experiences of lesbian, gay, bisexual, and trans* identified students at Historically Black Colleges and Universities (HBCUs), the experiences of Black, queer faculty and administrators at HBCUs have not been studied. This intersectional qualitative research study focused on the experiences of lesbian, gay, bisexual, and queer identified faculty and administrators who work at HBCUs. By investigating the intersections of religion, race, gender, and sexuality within a predominantly Black institution, this study aims to enhance diversity, equity, and inclusion efforts at HBCUs by sharing the experiences of LGBQ faculty and administrators who previously worked or currently work at an HBCU as full-time employees. The research questions that guided this study were: 1) How have LGBQ faculty and staff negotiated/navigated their careers at HBCUs? and 2) How do LGBQ faculty and staff at HBCUs influence cultural change (relating to LGBQ inclusion) at the organizational level? The main theoretical framework used was intersectionality, and it shaped the chosen methodology and methods. The Politics of Respectability was the second theoretical framework, used to describe the intra-racial tensions within the Black/African American community. The study included 60- to 120-minute interviews with 12 participants. Using intersectionality as a guide, the data were coded and used for thematic analysis; an ethnodramatic performance then engages readers. The goals of this study were to encourage policy changes, promote inclusivity for LGBQ employees at HBCUs, and expand the body of literature pertaining to the experiences of LGBQ faculty and administrators in higher education.
Learning disentangled speech representations
A variety of informational factors are contained within the speech signal and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, sometimes methods will capture more than one informational factor at the same time such as speaker identity, spoken content, and speaker prosody.
The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counter-factual questions.
In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed. And in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single-best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks.
This thesis explores a variety of use cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech); sometimes it is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
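The voice-conversion recombination described above (retain spoken content, swap speaker identity) can be illustrated with a deliberately non-neural toy; real systems learn these factors with encoders, and the record fields here are purely hypothetical stand-ins for learned embeddings.

```python
def disentangle(utterance):
    """Toy split of an 'utterance' into content and speaker factors.
    Real disentanglement learns these with neural encoders; here the
    factors are just labelled fields standing in for embeddings."""
    return {"content": utterance["text"], "speaker": utterance["voice"]}

def voice_convert(source, target):
    """Recombine factors: source content with target speaker identity."""
    return {"text": disentangle(source)["content"],
            "voice": disentangle(target)["speaker"]}

a = {"text": "hello world", "voice": "speaker_A"}
b = {"text": "goodbye", "voice": "speaker_B"}
converted = voice_convert(a, b)
```

The content-privacy task mentioned above is the dual operation: alter a targeted content factor while leaving the speaker factor, and surrounding content, untouched.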
Wildlife trade in Latin America: people, economy and conservation
Wildlife trade is among the main threats to biodiversity conservation and may pose a risk to human health because of the spread of zoonotic diseases. To avoid the social, economic and environmental consequences of illegal trade, it is crucial to understand the factors influencing the wildlife market and the effectiveness of policies already in place. I aim to unveil the biological and socioeconomic factors driving wildlife trade, the health risks imposed by the activity, and the effectiveness of certified captive-breeding as a strategy to curb the illegal market in Latin America through a multidisciplinary approach. I assess socioeconomic correlates of the emerging international trade in wild cat species from Latin America using a dataset of >1,000 seized cats, showing that high levels of corruption and Chinese private investment and low income per capita were related to higher numbers of jaguar seizures. I assess the effectiveness of primate captive-breeding programmes as an intervention to curb wildlife trafficking. Illegal sources held >70% of the primate market share. Legal primates are more expensive, and production is not sufficiently high to fulfil the demand. I assess the scale of the illegal trade and ownership of venomous snakes in Brazil. Venomous snake taxa responsible for higher numbers of snakebites were those most often kept as pets. I uncover how online wildlife pet traders and consumers responded to campaigns linking the origin of the COVID-19 pandemic to the wildlife trade. Of 20,000 posts on Facebook groups, only 0.44% mentioned COVID-19, and several stimulated the trade in wild species during lockdown. Despite the existence of international and national wildlife trade regulations, I conclude that illegal wildlife trade is still an issue that needs further addressing in Latin America. I identify knowledge gaps and candidate interventions to amend the current loopholes to reduce wildlife trafficking.
My aspiration with this thesis is to provide useful information that can inform better strategies to tackle illegal wildlife trade in Latin America.