
    Large-scale brain networks and MEG: a code library and its application to measurement data

    Research has shown that functional connectivity is a powerful tool in the study of the complex processes of the human brain. Functional connectivity is generally defined as the synchronisation of anatomically distant brain areas, and it can be inspected, for example, through coherent oscillations. Magnetoencephalography (MEG) is well suited for functional connectivity studies, as its high temporal resolution allows changes in the magnetic field to be observed in real time. Dynamic Imaging of Coherent Sources (DICS) uses spatial filters to estimate oscillatory activity in the human brain. In my master's thesis, I introduce a Python-based pipeline and code library that estimates functional connectivity from MEG data using DICS, and I demonstrate the application with a real MEG dataset. The pipeline also implements canonical coherence, which provides a fast and stable way of calculating coherence between a large number of signal sources. The pipeline presented here consists of seven steps: first, the data are preprocessed and the cross-spectral density (CSD) matrices are computed; then the source space is computed and used with the CSD matrices to compute both oscillatory power and connectivity; finally, the results are analysed at the group level and visualised. The results show that the pipeline is easy to apply to a real-world dataset. The parameters at each step should be chosen based on the dataset at hand, and the results should be interpreted carefully. Further research on the stability of the pipeline is suggested.
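    The preprocessing, CSD and source-power steps map naturally onto MNE-Python primitives. The following is a minimal sketch under stated assumptions: the file names (sub-01_raw.fif, sub-01-fwd.fif), the epoch length and the alpha-band settings are illustrative, this is not the thesis library itself, and the canonical-coherence connectivity step is only indicated in a comment.

    # Minimal sketch of the preprocessing -> CSD -> DICS power steps using
    # MNE-Python; file names and frequency band are illustrative assumptions.
    import mne
    from mne.time_frequency import csd_multitaper
    from mne.beamformer import make_dics, apply_dics_csd

    # Step 1: preprocess the raw MEG recording (band-pass filter, epoching).
    raw = mne.io.read_raw_fif("sub-01_raw.fif", preload=True)
    raw.filter(l_freq=1.0, h_freq=40.0)
    epochs = mne.make_fixed_length_epochs(raw, duration=2.0, preload=True)

    # Step 2: cross-spectral density (CSD) matrices in the band of interest
    # (here the alpha band, 8-13 Hz).
    csd = csd_multitaper(epochs, fmin=8.0, fmax=13.0)

    # Step 3: combine a precomputed forward model with the CSD to build
    # DICS spatial filters and estimate oscillatory source power.
    fwd = mne.read_forward_solution("sub-01-fwd.fif")
    filters = make_dics(epochs.info, fwd, csd.mean(), reg=0.05)
    power, freqs = apply_dics_csd(csd.mean(), filters)

    # Steps 4-7 (canonical-coherence connectivity, group-level analysis and
    # visualisation) would build on these objects.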

    Geospatial crowdsourced data fitness analysis for spatial data infrastructure based disaster management actions

    The reporting of disasters has changed from official media reports to citizen reporters who are at the disaster scene. This kind of crowd-based reporting, related to disasters or any other events, is often identified as 'Crowdsourced Data' (CSD). CSD is freely and widely available thanks to current technological advancements. The quality of CSD is often problematic, as it is typically created by citizens of varying skills and backgrounds. CSD is generally considered unstructured, and its quality remains poorly defined. Moreover, location information in CSD may be incomplete, and the quality of any available locations uncertain. Traditional data quality assessment methods and parameters are also often incompatible with CSD because of its unstructured, undocumented nature and missing metadata. Although other research has identified credibility and relevance as possible CSD quality assessment indicators, the available assessment methods for these indicators are still immature. In the 2011 Australian floods, citizens and disaster management administrators used the Ushahidi Crowd-mapping platform and the Twitter social media platform to extensively communicate flood-related information, including hazards, evacuations, help services, road closures and property damage. This research designed a CSD quality assessment framework and tested the quality of the 2011 Australian floods' Ushahidi Crowdmap and Twitter data. In particular, it explored location availability and location quality assessment, semantic extraction of hidden location toponyms, and analysis of the credibility and relevance of reports. The research was conducted using a Design Science (DS) research method, which is often utilised in Information Science (IS) research. The location availability assessment evaluated the quality of the locations in the Ushahidi Crowdmap and Twitter data by comparing them against three reference datasets: Google Maps, OpenStreetMap (OSM) and the Queensland Department of Natural Resources and Mines' (QDNRM) road data. Missing locations were semantically extracted using Natural Language Processing (NLP) and gazetteer lookup techniques. The credibility of the Ushahidi Crowdmap dataset was assessed using a naive Bayesian Network (BN) model commonly utilised in spam email detection. CSD relevance was assessed by adapting Geographic Information Retrieval (GIR) relevance assessment techniques, which are also utilised in the IT sector. Thematic and geographic relevance were assessed using a Term Frequency-Inverse Document Frequency Vector Space Model (TF-IDF VSM) and NLP based on semantic gazetteers. Results of the CSD location comparison showed that the combined use of non-authoritative and authoritative data improved location determination. The semantic location analysis indicated some improvement in the location availability of the tweets and Crowdmap data; however, the quality of the newly extracted locations was still uncertain. The credibility analysis revealed that spam email detection approaches are feasible for CSD credibility detection, although it was critical to train the model in a controlled environment using structured training, including modified training samples. The use of GIR techniques for CSD relevance analysis provided promising results. A separate relevance-ranked list of the same CSD data was prepared through manual analysis, and the two lists generally agreed, which indicated the system's potential to analyse relevance in a similar way to humans. This research showed that CSD fitness analysis can potentially improve the accuracy, reliability and currency of CSD and may be utilised to fill information gaps in authoritative sources. The integrated and autonomous CSD qualification framework presented here provides a guide for flood disaster first responders and could be adapted to support other forms of emergencies.
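    As a rough illustration of the TF-IDF VSM component of the thematic relevance assessment, the sketch below ranks a few hypothetical crowd reports against a flood-related query with scikit-learn; the framework's actual term weighting, gazetteer-based NLP and geographic relevance scoring are not reproduced here.

    # Illustrative TF-IDF vector-space ranking of crowd reports against a
    # flood-related query. Reports and query are hypothetical examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    reports = [
        "Bruce Highway closed at Gympie due to flood water",
        "Evacuation centre open at the showgrounds",
        "Great coffee at the corner cafe this morning",
    ]
    query = ["flood water road closed evacuation"]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(reports)
    query_vector = vectorizer.transform(query)

    # Cosine similarity in TF-IDF space yields a thematic relevance ranking.
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    for score, report in sorted(zip(scores, reports), reverse=True):
        print(f"{score:.3f}  {report}")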

    PREM: Prestige Network Enhanced Developer-Task Matching for Crowdsourced Software Development

    Many software organizations are turning to crowdsourcing to augment their software production. In current practice, it is common to see a massive number of tasks posted on software crowdsourcing platforms with little guidance for task selection. Considering that crowd developers may vary greatly in expertise, inappropriate developer-task matching will harm the quality of the deliverables. It is also not time-efficient for developers to discover their most appropriate tasks among vast numbers of open call requests. We propose an approach called PREM that aims to appropriately match developers and tasks. PREM automatically learns from developers' historical task data. In addition to task preference, PREM accounts for the competitive nature of crowdsourcing by constructing a prestige network of developers. This distinguishes our approach from previous developer recommendation methods, which are based on task and/or individual features. Experiments are conducted on 3 TopCoder datasets with 9,191 tasks in total. Our experimental results show that reasonable accuracies are achievable (63%, 46% and 36% for the 3 datasets, respectively, when matching 5 developers to each task) and that the constructed prestige network helps improve the matching results.
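    PREM's prestige network is built from developers' competition history; as a loose illustration of the general idea (not PREM's actual formulation), the sketch below scores developers with PageRank over a hypothetical "lost-to" graph using networkx, so that score flows toward developers who consistently win.

    # Loose illustration of a prestige network: an edge u -> v records that
    # developer u lost to developer v in a task competition, so PageRank
    # mass accumulates at consistent winners. Data and edge semantics are
    # hypothetical assumptions, not PREM's actual construction.
    import networkx as nx

    outcomes = [
        ("dev_a", "dev_b"),  # dev_a lost to dev_b
        ("dev_c", "dev_b"),
        ("dev_a", "dev_c"),
        ("dev_d", "dev_c"),
    ]

    graph = nx.DiGraph()
    graph.add_edges_from(outcomes)

    # Higher PageRank ~ higher prestige among the competing developers.
    prestige = nx.pagerank(graph, alpha=0.85)
    for dev, score in sorted(prestige.items(), key=lambda kv: -kv[1]):
        print(f"{dev}: {score:.3f}")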

    Joint Optimization of Low-power DCT Architecture and Efficient Quantization Technique for Embedded Image Compression

    The Discrete Cosine Transform (DCT)-based image compression is widely used in today's communication systems. Significant research devoted to this domain has demonstrated that optical compression methods can offer a higher speed but suffer from bad image quality and a growing complexity. To meet the challenges of higher image quality and high-speed processing, in this chapter, we present a joint system for DCT-based image compression by combining a VLSI architecture of the DCT algorithm and an efficient quantization technique. Our approach is, firstly, based on a new granularity method in order to take advantage of the adjacent pixel correlation of the input blocks and to improve the visual quality of the reconstructed image. Second, a new architecture based on the Canonical Signed Digit and a novel Common Subexpression Elimination technique is proposed to replace the constant multipliers. Finally, a reconfigurable quantization method is presented to effectively save computational complexity. Experimental results obtained with a prototype based on an FPGA implementation and comparisons with existing works corroborate the validity of the proposed optimizations in terms of power reduction, speed increase, silicon area saving and PSNR improvement.
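    The Canonical Signed Digit recoding underlying the multiplierless constant multipliers is easy to demonstrate in software: a constant is rewritten with digits in {-1, 0, 1} such that no two nonzero digits are adjacent, minimising the number of adders required. The sketch below is a generic CSD recoder in Python, not the chapter's VLSI design.

    # Generic Canonical Signed Digit (CSD) recoding of a positive integer:
    # digits are in {-1, 0, 1} and no two adjacent digits are nonzero, so
    # the nonzero count (= adder count in a constant multiplier) is minimal.
    def to_csd(n: int) -> list[int]:
        digits = []  # least-significant digit first
        while n != 0:
            if n % 2:
                d = 2 - (n % 4)  # 1 if n % 4 == 1, otherwise -1
                n -= d
            else:
                d = 0
            digits.append(d)
            n //= 2
        return digits

    # Example: 7 = 8 - 1 recodes to [-1, 0, 0, 1]: two nonzero digits,
    # hence one add/subtract, versus two adders for the binary form 111.
    print(to_csd(7))  # [-1, 0, 0, 1]
    assert sum(d << i for i, d in enumerate(to_csd(7))) == 7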

    Crystal structure prediction in the context of pharmaceutical polymorph screening and putative polymorphs of ciprofloxacin

    Molecular simulation is increasingly used by medicinal chemists in process and product development. Reliable computational predictions are of great value not only for the design of an active pharmaceutical ingredient with novel properties but also for avoiding an undesirable change of form in the late stages of development of an industrially important molecule. In the pharmaceutical industry, drug polymorphism can be a critical problem and is the subject of various regulatory considerations. This contribution reviews the fuzzy frontier between the chemical structure of a molecule and its crystal energy landscape, with a particular focus on crystal structure prediction (CSP) methodology as a complement to polymorph screening. A detailed application of CSP in the pharmaceutical industry is illustrated with ciprofloxacin and its putative polymorphs. The approach successfully identifies the known crystal form within this class, as well as a large number of other low-energy structures. The performance of the approach is discussed in terms of both the quality of the results and computational aspects. CSP methods are now being used as part of the interdisciplinary range of studies undertaken to establish the range of solid forms of a molecule, and further methodological improvements aim at increasing the accuracy of the predictions and at broadening the range of molecules that can be treated, i.e. cocrystals, salts and solvates.

    How accountants of Kenyan listed companies perceive and construe the intention to disclose social responsibility information

    This study examined the Corporate Social Disclosure (CSD) practices of listed companies in Kenya, and how accountants perceive, construe and form the intention to disclose social responsibility information. The study was exploratory in nature. Current CSD practices of listed companies in Kenya were captured using disclosure indices, and the repertory grid technique was used to determine how accountants perceive and construe CSD. To calculate the disclosure index, data were obtained from the annual reports of the respective companies. The indices were then regressed against different company characteristics and corporate governance variables that affect CSD, to determine which variables influence disclosures and which do not. For the repertory grid, interviews were conducted with accountants from both low-disclosure and high-disclosure companies. The repertory grid data were analysed in two stages: individual-case analysis and cross-case analysis. Individual cases were analysed using principal component analysis; for the cross-case analysis, content analysis was used to categorise constructs based on their expressed meaning. The findings indicate that CSD has increased over the years for all the companies studied. The themes disclosed varied according to size, profitability, liquidity, ownership of the company and the industry in which the company operates. It was also found that the reputation of the company is the main motivation for high-disclosure companies to disclose social responsibility information, while low-disclosure companies are mainly motivated by institutional factors. It is recommended that regulation and standardisation of CSD could make it more useful for decision-making by various stakeholders.
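    A disclosure index of this kind is typically the fraction of checklist items a company actually reports. As a hypothetical illustration (the checklist size, companies and explanatory variables below are invented, not the study's data), the sketch computes such an index and regresses it on company characteristics with statsmodels.

    # Illustrative disclosure-index computation and OLS regression on
    # company characteristics. All figures are hypothetical.
    import numpy as np
    import statsmodels.api as sm

    # Disclosure index = items disclosed / items on the checklist.
    disclosed_items = np.array([12, 30, 7, 22, 18, 26, 15, 33])
    checklist_size = 40
    disclosure_index = disclosed_items / checklist_size

    # Hypothetical explanatory variables: log(total assets) and ROA.
    log_size = np.array([8.1, 10.2, 7.5, 9.6, 9.0, 9.9, 8.4, 10.5])
    roa = np.array([0.04, 0.11, -0.02, 0.08, 0.05, 0.09, 0.01, 0.12])

    # Regress the index on the characteristics to see which ones matter.
    X = sm.add_constant(np.column_stack([log_size, roa]))
    model = sm.OLS(disclosure_index, X).fit()
    print(model.summary())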

    Temporal precision and the capacity of auditory-verbal short-term memory

    The capacity of serially ordered auditory-verbal short-term memory (AVSTM) is sensitive to the timing of the material to be stored, and both temporal processing and AVSTM capacity are implicated in the development of language. We developed a novel “rehearsal-probe” task to investigate the relationship between temporal precision and the capacity to remember serial order. Participants listened to a sub-span sequence of spoken digits and silently rehearsed the items and their timing during an unfilled retention interval. After an unpredictable delay, a tone prompted report of the item being rehearsed at that moment. An initial experiment showed cyclic distributions of item responses over time, with peaks preserving serial order and broad, overlapping tails. The spread of the response distributions increased with additional memory load and correlated negatively with participants’ auditory digit spans. A second study replicated the negative correlation and demonstrated its specificity to AVSTM by controlling for differences in visuo-spatial STM and nonverbal IQ. The results are consistent with the idea that a common resource underpins both the temporal precision and the capacity of AVSTM, and the rehearsal-probe task may provide a valuable tool for investigating links between temporal processing and AVSTM capacity in the context of speech and language abilities.

    Image-Based Query by Example Using MPEG-7 Visual Descriptors

    This project presents the design and implementation of a Content-Based Image Retrieval (CBIR) system in which queries are formulated by visual example through a graphical interface. The visual descriptors and similarity measures implemented in this work mainly follow those defined in the MPEG-7 standard, although extensions are proposed where necessary. Although this is an image-based system, all the proposed descriptors have been implemented for both image and region queries, allowing a future upgrade of the system to support region-based queries. For this reason, even a contour-shape descriptor has been developed, despite being meaningless for a whole image. The system has been assessed on different benchmark databases, namely the MPEG-7 Common Color Dataset and the Corel Dataset. The evaluation has been performed for isolated descriptors as well as for combinations of them. The strategy studied in this work for combining the information obtained from the whole set of computed descriptors is to weight the rank list produced by each isolated descriptor.
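    The rank-weighting strategy can be made concrete with a small sketch: each descriptor yields a ranked list of database images by similarity to the query, and an image's final score is the weighted sum of its per-descriptor ranks. The descriptor names, weights and distances below are hypothetical stand-ins for the MPEG-7 descriptors used in the project.

    # Illustrative weighted fusion of per-descriptor rank lists in a
    # query-by-example search; names, weights and distances are hypothetical.

    # Per-descriptor distances from the query image to each database image.
    distances = {
        "color_layout":   {"img1": 0.2, "img2": 0.5, "img3": 0.1},
        "edge_histogram": {"img1": 0.4, "img2": 0.1, "img3": 0.3},
    }
    weights = {"color_layout": 0.6, "edge_histogram": 0.4}

    def rank_list(dist: dict[str, float]) -> dict[str, int]:
        # Map each image to its rank (1 = most similar) for one descriptor.
        ordered = sorted(dist, key=dist.get)
        return {img: r + 1 for r, img in enumerate(ordered)}

    # Final score per image: weighted sum of its per-descriptor ranks
    # (lower is better).
    fused = {}
    for name, dist in distances.items():
        for img, rank in rank_list(dist).items():
            fused[img] = fused.get(img, 0.0) + weights[name] * rank

    for img, score in sorted(fused.items(), key=lambda kv: kv[1]):
        print(f"{img}: {score:.2f}")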