14,730 research outputs found

    Development of a semantic memory battery for patients with temporal lobe epilepsy

    The most frequent focal epilepsy is that whose epileptogenic focus is located in the medial temporal lobe; it is secondary to sclerosis with atrophy of the amygdalo-hippocampal region, with an epileptogenic network encompassing the anterior portion of the temporal lobe. Patients sometimes require surgical treatment that includes unilateral resection of both regions, the anterior pole as well as the amygdala-hippocampus complex. These structures have been shown to be highly important for the processing of semantic memory (anterotemporal region) and episodic memory (amygdalo-hippocampal region), so patients who undergo this intervention often present cognitive complaints related to both types of memory. However, the neuropsychological assessments routinely performed in the various Epilepsy Units appear unable to detect all the cognitive problems these patients experience: despite the difficulties they report, the assessments show no impairment. The main hypothesis of the present work is that these complaints involve types of memory not covered by current neuropsychological tests, and we are therefore unable to identify their problems properly. First, we propose that semantic memory is impaired, but only for words of low frequency of use in daily life, which are not analysed in current conventional assessments. Second, other problems that have not been objectified are due to a deficit of memory consolidation, measured as accelerated long-term forgetting that is detected when the recall assessment interval is extended. Moreover, these impairments are expected to be more pronounced in patients whose epileptogenic focus is located in the left temporal lobe. The main objectives of this work are to assess, in patients with medial temporal lobe epilepsy surgically treated with anterior temporal lobectomy plus amygdalohippocampectomy, the presence of verbal memory impairments, both semantic and episodic, and to establish their lateralising value according to the affected hemisphere. The study was based on comparing patients with temporal lobe epilepsy (TLE) treated with anterior temporal lobectomy plus amygdalohippocampectomy against a control group of healthy individuals matched for age, educational level, and intelligence quotient (IQ). The semantic memory tests showed that only patients with left TLE were impaired, especially for low-frequency items and in both verbal expression and verbal comprehension tasks. Likewise, reaction times were longer in the left TLE group for all items, and only for low-frequency words or concepts in the right TLE group. In addition, a standard episodic memory test (RAVLT) was included in which, rather than restricting assessment to 30 minutes, recall was assessed at 7 days to measure long-term forgetting. The results showed that both patient groups, those with left TLE and those with right TLE, developed long-term forgetting. Finally, the results showed that the presence of epileptic seizures did not affect the presence of accelerated long-term forgetting.
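
    The key methodological twist here, extending the RAVLT recall interval from the standard 30 minutes to 7 days to expose accelerated long-term forgetting, amounts to a simple retention calculation. Below is a minimal Python sketch; the function and the example scores are hypothetical illustrations, not data from the study.

    ```python
    def retention_pct(delayed_recall: int, learning_score: int) -> float:
        """Percentage of initially learned material retained at a delayed test."""
        if learning_score == 0:
            raise ValueError("learning score must be positive")
        return 100.0 * delayed_recall / learning_score

    # Hypothetical RAVLT-style scores (words recalled from a 15-word list).
    best_learning = 13   # best learning-trial score
    recall_30min = 12    # standard 30-minute delayed recall
    recall_7day = 5      # extended 7-day delayed recall

    # The conventional 30-minute assessment looks normal ...
    print(f"30-min retention: {retention_pct(recall_30min, best_learning):.0f}%")  # ~92%
    # ... while the extended interval reveals accelerated long-term forgetting.
    print(f"7-day retention:  {retention_pct(recall_7day, best_learning):.0f}%")   # ~38%
    ```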

    Implementing Health Impact Assessment as a Required Component of Government Policymaking: A Multi-Level Exploration of the Determinants of Healthy Public Policy

    It is widely understood that the public policies of ‘non-health’ government sectors have greater impacts on population health than those of the traditional healthcare realm. Health Impact Assessment (HIA) is a decision support tool that identifies and promotes the health benefits of policies while also mitigating their unintended negative consequences. Despite numerous calls to do so, the Ontario government has yet to implement HIA as a required component of policy development. This dissertation therefore sought to identify the contexts and factors that may both enable and impede HIA use at the sub-national (i.e., provincial, territorial, or state) government level. The three integrated articles of this dissertation provide insights into specific aspects of the policy process as they relate to HIA. Chapter one details a case study of purposive information-seeking among public servants within Ontario’s Ministry of Education (MOE). Situated within Ontario’s Ministry of Health (MOH), chapter two presents a case study of policy collaboration between health and ‘non-health’ ministries. Finally, chapter three details a framework analysis of the political factors supporting health impact tool use in two sub-national jurisdictions – namely, Québec and South Australia. MOE respondents (N=9) identified four components of policymaking ‘due diligence’, including evidence retrieval, consultation and collaboration, referencing, and risk analysis. As prospective HIA users, they also confirmed that information is not routinely sought to mitigate the potential negative health impacts of education-based policies. MOH respondents (N=8) identified the bureaucratic hierarchy as the brokering mechanism for inter-ministerial policy development. As prospective HIA stewards, they also confirmed that the ministry does not proactively flag the potential negative health impacts of non-health sector policies. Finally, ‘lessons learned’ from case articles specific to Québec (n=12) and South Australia (n=17) identified the political factors supporting tool use at different stages of the policy cycle, including agenda setting (‘policy elites’ and ‘political culture’), implementation (‘jurisdiction’), and sustained implementation (‘institutional power’). This work provides important insights into ‘real life’ policymaking. By highlighting existing facilitators of and barriers to HIA use, the findings offer a useful starting point from which proponents may tailor context-specific strategies to sustainably implement HIA at the sub-national government level.

    Visualisation of Fundamental Movement Skills (FMS): An Iterative Process Using an Overarm Throw

    Fundamental Movement Skills (FMS) are precursor gross motor skills to more complex or specialised skills and are recognised as important indicators of physical competence, a key component of physical literacy. FMS are predominantly assessed using pre-defined manual methodologies, most commonly the various iterations of the Test of Gross Motor Development. However, such assessments are time-consuming and often require a minimum basic level of training to conduct. Therefore, the overall aim of this thesis was to utilise accelerometry to develop a visualisation concept as part of a feasibility study to support the learning and assessment of FMS, by reducing subjectivity and the overall time taken to conduct a gross motor skill assessment. The overarm throw, an important fundamental movement skill, was specifically selected for the visualisation development as it is an acyclic movement with a distinct initiation and conclusion. Thirteen children (14.8 ± 0.3 years; 9 boys) wore an ActiGraph GT9X Link Inertial Measurement Unit device on the dominant wrist whilst performing a series of overarm throws. This thesis illustrates how the visualisation concept was developed using raw accelerometer data, which was processed and manipulated using MATLAB 2019b software to obtain and depict key throw performance data, including the trajectory and velocity of the wrist during the throw. Overall, this thesis found that the developed visualisation concept can provide strong indicators of throw competency based on the shape of the throw trajectory. Future research should seek to utilise a larger, more diverse population, and incorporate machine learning. Finally, further work is required to translate this concept to other gross motor skills.
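
    The processing pipeline described, deriving wrist velocity and trajectory from raw accelerometer samples, boils down to numerical integration. The thesis used MATLAB; below is a hedged NumPy equivalent, assuming gravity has already been removed and skipping the sensor-orientation (IMU fusion) step a real pipeline would need.

    ```python
    import numpy as np

    FS = 100.0  # assumed sampling rate in Hz (the GT9X records at up to 100 Hz)
    DT = 1.0 / FS

    def integrate(samples, dt=DT):
        """Cumulative trapezoidal integration along the time axis."""
        steps = 0.5 * (samples[1:] + samples[:-1]) * dt
        return np.vstack([np.zeros(samples.shape[1]), np.cumsum(steps, axis=0)])

    # accel: (n_samples, 3) linear acceleration in m/s^2, gravity already
    # removed and rotated into a fixed frame (a real pipeline needs IMU fusion).
    accel = np.zeros((200, 3))
    accel[50:60, 0] = 30.0            # toy burst approximating a throw

    velocity = integrate(accel)       # (n, 3) wrist velocity, m/s
    trajectory = integrate(velocity)  # (n, 3) wrist displacement, m
    peak_speed = np.linalg.norm(velocity, axis=1).max()
    print(f"peak wrist speed: {peak_speed:.1f} m/s")
    ```

    Plotting the rows of `trajectory` against each other would reproduce the kind of throw-shape visualisation the thesis evaluates.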

    Breast mass segmentation from mammograms with deep transfer learning

    Abstract. Mammography is an x-ray imaging method used in breast cancer screening, which is a time-consuming process. Many different computer-assisted diagnosis systems have been created to hasten the image analysis. Deep learning is the use of multilayered neural networks for solving different tasks. Deep learning methods are becoming more advanced and popular for segmenting images. One deep transfer learning method is to use these neural networks with pretrained weights, which typically improves the neural network's performance. In this thesis, deep transfer learning was used to segment cancerous masses from mammography images. The convolutional neural networks used were pretrained and fine-tuned, and they had an encoder-decoder architecture. The ResNet22 encoder was pretrained with mammography images, while the ResNet34 encoder was pretrained with various color images. These encoders were paired with either a U-Net or a Feature Pyramid Network decoder. Additionally, a U-Net model with random initialization was also tested. The five different models were trained and tested on the Oulu Dataset of Screening Mammography (9204 images) and on the Portuguese INbreast dataset (410 images) with two different loss functions: binary cross-entropy loss with soft Jaccard loss, and a loss function based on the focal Tversky index. The best models were trained on the Oulu Dataset of Screening Mammography with the focal Tversky loss. The best segmentation result achieved was a Dice similarity coefficient of 0.816 on correctly segmented masses and a classification accuracy of 88.7% on the INbreast dataset. On the Oulu Dataset of Screening Mammography, the best results were a Dice score of 0.763 and a classification accuracy of 83.3%. The results between the pretrained models were similar, and the pretrained models had better results than the non-pretrained models. In conclusion, deep transfer learning is very suitable for mammography mass segmentation, and the choice of loss function had a large impact on the results.
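
    Since the choice of loss function proved decisive here, a compact sketch of the focal Tversky loss may help. This is a minimal NumPy version of the standard formulation from the literature; the parameter values shown are common defaults, not necessarily the thesis settings.

    ```python
    import numpy as np

    def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
        """Focal Tversky loss for binary segmentation masks.

        alpha weights false negatives, beta weights false positives; gamma < 1
        focuses training on hard examples. Values are literature defaults.
        """
        y_true = np.asarray(y_true, dtype=float).ravel()
        y_pred = np.asarray(y_pred, dtype=float).ravel()  # probabilities in [0, 1]
        tp = np.sum(y_true * y_pred)
        fn = np.sum(y_true * (1.0 - y_pred))
        fp = np.sum((1.0 - y_true) * y_pred)
        tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
        return (1.0 - tversky) ** gamma

    # Toy usage with a 2x2 ground-truth mask and predicted probabilities:
    yt = np.array([[1, 0], [1, 0]])
    yp = np.array([[0.9, 0.2], [0.6, 0.1]])
    print(f"{focal_tversky_loss(yt, yp):.3f}")
    ```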

    Incentivising research data sharing : a scoping review

    Background: Numerous mechanisms exist to incentivise researchers to share their data. This scoping review aims to identify and summarise evidence of the efficacy of different interventions to promote open data practices and provide an overview of current research. Methods: This scoping review is based on data identified from Web of Science and LISTA, limited from 2016 to 2021. A total of 1128 papers were screened, with 38 items being included. Items were selected if they focused on designing or evaluating an intervention or presenting an initiative to incentivise sharing. Items comprised a mixture of research papers, opinion pieces and descriptive articles. Results: Seven major themes in the literature were identified: publisher/journal data sharing policies, metrics, software solutions, research data sharing agreements in general, open science ‘badges’, funder mandates, and initiatives. Conclusions: Key messages for data sharing include: the need to build on existing cultures and practices, meeting people where they are and tailoring interventions to support them; the importance of publicising and explaining the policy/service widely; the need to have disciplinary data champions to model good practice and drive cultural change; the requirement to resource interventions properly; and the imperative to provide robust technical infrastructure and protocols, such as labelling of data sets, use of DOIs, data standards and use of data repositories.

    Coloniality and the Courtroom: Understanding Pre-trial Judicial Decision Making in Brazil

    This thesis focuses on judicial decision making during custody hearings in Rio de Janeiro, Brazil. The impetus for the study is that while national and international protocols mandate the use of pre-trial detention only as a last resort, judges continue to detain people pre-trial in large numbers. Custody hearings were introduced in 2015, but the initiative has not produced the reduction in pre-trial detention that was hoped for. This study aims to understand what informs judicial decision making at this stage. The research is approached through a decolonial lens to foreground legacies of colonialism, overlooked in mainstream criminological scholarship. This is an interview-based study, where key court actors (judges, prosecutors, and public defenders) and subject matter specialists were asked about influences on judicial decision making. Interview data is complemented by non-participatory observation of custody hearings. The research responds directly to Aliverti et al.'s (2021) call to ‘decolonize the criminal question’ by exposing and explaining how colonialism informs criminal justice practices. Answering the call in relation to judicial decision making, the findings provide evidence that colonial-era assumptions, dynamics, and hierarchies were evident in the practice of custody hearings and continue to inform judges’ decisions, thus demonstrating the coloniality of justice. This study is significant for the new empirical data presented, and theoretical innovation is also offered via the introduction of the ‘anticitizen’. The concept builds on Souza’s (2007) ‘subcitizen’ to account for the active pursuit of dangerous Others by judges casting themselves as crime fighters in a modern moral crusade. The findings point to the limited utility of human rights discourse – the normative approach to influencing judicial decision making around pre-trial detention – as a plurality of conceptualisations compete for dominance. This study has important implications for all actors aiming to reduce pre-trial detention in Brazil because, unless underpinning colonial logics are addressed, every innovation risks becoming the next lei para inglês ver (law [just] for the English to see).

    Data-to-text generation with neural planning

    In this thesis, we consider the task of data-to-text generation, which takes non-linguistic structures as input and produces textual output. The inputs can take the form of database tables, spreadsheets, charts, and so on. The main application of data-to-text generation is to present information in a textual format, making it accessible to a layperson who may otherwise find it problematic to understand numerical figures. The task can also automate routine document generation jobs, thus improving human efficiency. We focus on generating long-form text, i.e., documents with multiple paragraphs. Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or its variants. These models generate fluent (but often imprecise) text and perform quite poorly at selecting appropriate content and ordering it coherently. This thesis focuses on overcoming these issues by integrating content planning with neural models. We hypothesize that data-to-text generation will benefit from explicit planning, which manifests itself in (a) micro planning, (b) latent entity planning, and (c) macro planning. Throughout this thesis, we assume the inputs to our generator are tables (with records) in the sports domain, and the outputs are summaries describing what happened in the game (e.g., who won/lost, ..., scored, etc.). We first describe our work on integrating fine-grained or micro plans with data-to-text generation. As part of this, we generate a micro plan highlighting which records should be mentioned and in which order, and then generate the document while taking the micro plan into account. We then show how data-to-text generation can benefit from higher-level latent entity planning. Here, we make use of entity-specific representations which are dynamically updated. The text is generated conditioned on entity representations and the records corresponding to the entities by using hierarchical attention at each time step. We then combine planning with the high-level organization of entities, events, and their interactions. Such coarse-grained macro plans are learnt from data and given as input to the generator. Finally, we present work on making macro plans latent while incrementally generating a document paragraph by paragraph. We infer latent plans sequentially with a structured variational model while interleaving the steps of planning and generation. Text is generated by conditioning on previous variational decisions and previously generated text. Overall, our results show that planning makes data-to-text generation more interpretable, improves the factuality and coherence of the generated documents, and reduces redundancy in the output document.
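
    The hierarchical attention step described above, attending first over each entity's records and then over the entities themselves, can be sketched compactly. This is a simplified NumPy illustration with dot-product scoring; the shapes, scoring function, and variable names are illustrative assumptions, not the thesis architecture.

    ```python
    import numpy as np

    def softmax(x):
        x = x - x.max()
        e = np.exp(x)
        return e / e.sum()

    def hierarchical_attention(decoder_state, entity_reps, records_per_entity):
        """One decoding step of two-level attention (simplified sketch).

        First attend over each entity's records, then over the resulting
        entity-level summaries; the output is a single context vector that
        conditions the next generated word.
        """
        entity_summaries = []
        for records in records_per_entity:       # (num_records, dim) per entity
            weights = softmax(records @ decoder_state)  # record-level attention
            entity_summaries.append(weights @ records)
        entity_summaries = np.stack(entity_summaries)

        ent_weights = softmax(entity_reps @ decoder_state)  # entity-level attention
        return ent_weights @ entity_summaries

    # Toy example: 2 entities in a 4-dimensional hidden space (random values).
    rng = np.random.default_rng(0)
    dec = rng.normal(size=4)
    entities = rng.normal(size=(2, 4))
    records = [rng.normal(size=(3, 4)), rng.normal(size=(5, 4))]
    print(hierarchical_attention(dec, entities, records).shape)  # (4,)
    ```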

    Detection of Hyperpartisan news articles using natural language processing techniques

    Yellow journalism has increased the spread of hyperpartisan news on the internet. It is very difficult for online readers to distinguish hyperpartisan news articles from mainstream news articles. There is a need for an automated model that can detect hyperpartisan news on the internet and tag it as such, so that readers can easily avoid it. A hyperpartisan news detection model was developed using three different natural language processing techniques: BERT, ELMo, and Word2vec. This research used the by-article dataset published at SemEval-2019. The ELMo word embeddings, used to train a Random Forest classifier, achieved an accuracy of 0.88, which is much better than other state-of-the-art models. The BERT and Word2vec models achieved the same accuracy of 0.83. This research tried different sentence input lengths for BERT and showed that BERT can extract context from local words. As evidenced by the described ML models, this study will assist governments, news readers, and other political stakeholders in detecting hyperpartisan news, and will also help policymakers track and regulate misinformation about political parties and their leaders.
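
    A minimal sketch of the embeddings-plus-Random-Forest setup may clarify the pipeline. It assumes precomputed per-article embeddings (the ELMo encoding step is omitted) and uses scikit-learn; the array sizes and labels below are hypothetical placeholders, not the SemEval data.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hypothetical stand-ins: one 1024-dimensional embedding per article
    # (1024 is ELMo's default output size) with binary hyperpartisan labels.
    # In the real pipeline these rows would come from an ELMo encoder.
    rng = np.random.default_rng(42)
    n_articles = 600                          # placeholder article count
    X = rng.normal(size=(n_articles, 1024))
    y = rng.integers(0, 2, size=n_articles)   # 1 = hyperpartisan, 0 = mainstream

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)

    clf = RandomForestClassifier(n_estimators=300, random_state=42)
    clf.fit(X_train, y_train)
    print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
    ```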