281 research outputs found

    Construire les Digital humanities en France. Des Cyber-infrastructures pour les Sciences humaines et sociales (rapport)

    The construction of cyber-infrastructures in the humanities and social sciences is a pressing necessity. It answers major imperatives concerning research on humans and on society. At stake is not only the international standing of French research, but also the long-term accessibility of research results and the emergence of new paradigms articulating the scientific text with the administration of proof. For more than ten years now, all research data in the humanities and social sciences have been digital. For the moment, this material lies largely fallow, left to the vagaries of structuring, dissemination, and preservation by each researcher or laboratory. Research programmes are funded for a fixed term, with no preservation or access policy for the results and the data collected. The fragility of such a non-system is obvious. There is no strong alternative to establishing cyber-infrastructures to manage these data, which encompass primary data as well as research results, secondary data as well as elements of proof, and researchers' digital identities as well as the software they develop.

    Deaf STEM Community Alliance: Establishing a model virtual academic community

    Abstract - This presentation describes the incremental and iterative development of the Deaf STEM Community Alliance’s virtual academic community, the Deaf and Hard of Hearing Virtual Academic Community (DHHVAC). The DHHVAC components address three critical barriers to the success of students who are deaf or hard of hearing: student preparation, socialization, and access to media.

    Large-scale Data Analysis and Deep Learning Using Distributed Cyberinfrastructures and High Performance Computing

    Data in many research fields continue to grow in both size and complexity. For instance, recent technological advances have increased data throughput in various biology-related endeavors, such as DNA sequencing, molecular simulations, and medical imaging. In addition, the variety of data types (textual, signal, image, etc.) adds complexity to analysis. As such, there is a need for applications developed specifically for each type of data. Several considerations must be made when attempting to create a tool for a particular dataset. First, we must consider the type of algorithm required for analyzing the data. Next, since the size and complexity of the data impose high computation and memory requirements, it is important to select a proper hardware environment on which to build the application. By carefully developing the algorithm and selecting the hardware, we can provide an effective environment in which to analyze huge amounts of highly complex data in a large-scale manner. In this dissertation, I describe in detail my applications of big data and deep learning techniques to the analysis of complex and large data. I investigate how big data frameworks, such as Hadoop, can be applied to problems such as large-scale molecular dynamics simulations. Following this, many popular deep learning frameworks are evaluated and compared to find those that suit certain hardware setups and deep learning models. Then, I explore an application of deep learning to a biomedical problem, namely ADHD diagnosis from fMRI data. Lastly, I demonstrate a framework for real-time and fine-grained vehicle detection and classification. In each of these works, a unique large-scale analysis algorithm or deep learning model is implemented that caters to the problem and leverages specialized computing resources.
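The MapReduce pattern that frameworks like Hadoop implement can be sketched in a few lines of plain Python. The records below are hypothetical per-frame measurements from a molecular dynamics trajectory (the field names and values are illustrative, not taken from the dissertation); the map, shuffle, and reduce steps mirror what Hadoop distributes across a cluster.

```python
from collections import defaultdict

# Hypothetical trajectory records: (frame_id, atom_id, displacement).
records = [
    (0, "A", 0.10), (0, "B", 0.30),
    (1, "A", 0.20), (1, "B", 0.50),
    (2, "A", 0.30), (2, "B", 0.40),
]

def map_phase(rec):
    """Mapper: emit (atom_id, displacement) key-value pairs."""
    _frame, atom, disp = rec
    yield atom, disp

def reduce_phase(atom, values):
    """Reducer: aggregate all displacements for one atom into a mean."""
    vals = list(values)
    return atom, sum(vals) / len(vals)

# Shuffle/sort step: group mapper output by key.
grouped = defaultdict(list)
for rec in records:
    for key, val in map_phase(rec):
        grouped[key].append(val)

averages = dict(reduce_phase(k, v) for k, v in grouped.items())
print(averages)  # mean displacement per atom
```

On a real cluster, Hadoop performs the grouping across machines and the mapper/reducer run on partitions of the data; the single-process version above only shows the shape of the computation.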

    A Research-led Practice-driven Digital Forensic Curriculum to Train Next Generation of Cyber Firefighters

    The lack of skilled digital forensics professionals seriously affects everyday life for everyone, as businesses and law enforcement struggle to fill even the bare minimum number of digital investigator positions. This skills shortage can hinder incident response, with organizations failing to put effective measures in place following a cyberattack or to gather the digital evidence that could lead to the successful prosecution of malicious insiders and cybercriminals. It therefore makes the connected world less secure and digital economies less reliable, affecting everyone in their ecosystems. The commercial and public sectors are looking to higher education institutions to produce quality graduates equipped to enter the digital forensics profession. This paper presents our proposed research-led, practice-driven digital forensics curriculum. The curriculum is designed to respond to employers’ needs and is built on the experience of running a successful Cyber Security programme at Birmingham City University in the industrial heartland of the UK. All students will take a common set of modules in the first semester, but will be given the opportunity to specialise in digital forensics in the second semester and in their summer project, enabling them to graduate with the degree of MSc Digital Forensics.

    Operationalizing Personalized Medicine: Data Translation Practices in Bioinformatics Laboratories

    This paper presents findings from an ethnographic study of two genomics and bioinformatics labs. The focus of this research is on the day-to-day practices of using multiple technologies to integrate data across different platforms. We argue that sociotechnical challenges (including technical, contextual, and political challenges) emerge when data integration practices are carried out, due to the embedded nature of the important, yet unrecorded and implicit, historical information that each dataset carries. We observed that sociotechnical sensemaking was commonplace in lab work, and was the only method for working out the complexity of the challenges that arose during data integration activities. We suggest that due attention be given to this matter, as challenges related to assessing data are likely to arise once more when such data travel back to the bedside, where they are poised to directly impact human health.

    Cyberinfrastructure and the future of the World Research University

    http://deepblue.lib.umich.edu/bitstream/2027.42/88581/1/2004_Draft_CLEAR_prospectus.pd

    Web 2.0 Broker: A standards-based service for spatio-temporal search of crowd-sourced information

    Recent trends in information technology show that citizens are increasingly willing to share information using tools provided by Web 2.0 and crowdsourcing platforms to describe events that may have social impact. This is fuelled by the proliferation of location-aware devices such as smartphones and tablets; users are able to share information on these crowdsourcing platforms directly from the field in real time, augmenting it with their location. Afterwards, to retrieve this information, users must deal with the different search mechanisms provided by each Web 2.0 service. This paper explores how to improve the interoperability of Web 2.0 services by providing a single service as a unique entry point for searching over several Web 2.0 services in a single step. This paper demonstrates the usefulness of the Open Geospatial Consortium's OpenSearch Geospatial and Time specification as an interface for a service that searches and retrieves information available in crowdsourcing services. We present how this information is valuable in complementing other authoritative information by providing an alternative, contemporary source. We demonstrate the intrinsic interoperability of the system by showing the integration of crowd-sourced data in different scenarios.
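The OpenSearch Geospatial and Time extensions mentioned above define templated query parameters such as {geo:box}, {time:start}, and {time:end}. The sketch below builds one such spatio-temporal query; the endpoint URL and the concrete parameter names it binds to are hypothetical, since each service's OpenSearch description document declares its own bindings.

```python
from urllib.parse import urlencode

# Hypothetical broker endpoint; a real deployment would advertise its
# parameter bindings in an OpenSearch description document.
BASE_URL = "https://example.org/broker/search"

def build_query(keywords, bbox, start, end):
    """Build a single spatio-temporal query that a broker could fan out
    to several Web 2.0 services in one step."""
    params = {
        "q": keywords,                     # {searchTerms}
        "bbox": ",".join(map(str, bbox)),  # {geo:box}: west,south,east,north
        "start": start,                    # {time:start}, ISO 8601
        "end": end,                        # {time:end}, ISO 8601
    }
    return BASE_URL + "?" + urlencode(params)

url = build_query("flood", (-0.5, 39.2, -0.2, 39.6),
                  "2023-10-01T00:00:00Z", "2023-10-02T00:00:00Z")
print(url)
```

The point of the broker design is that a client issues this one request and the broker translates it into each underlying service's native search API.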

    Research Data: Who will share what, with whom, when, and why?

    The deluge of scientific research data has excited the general public, as well as the scientific community, with the possibilities for better understanding of scientific problems, from climate to culture. For data to be available, researchers must be willing and able to share them. The policies of governments, funding agencies, journals, and university tenure and promotion committees also influence how, when, and whether research data are shared. Data are complex objects. Their purposes and the methods by which they are produced vary widely across scientific fields, as do the criteria for sharing them. To address these challenges, it is necessary to examine the arguments for sharing data and how those arguments match the motivations and interests of the scientific community and the public. Four arguments are examined: to make the results of publicly funded data available to the public, to enable others to ask new questions of extant data, to advance the state of science, and to reproduce research. Libraries need to consider their role in the face of each of these arguments, and what expertise and systems they require for data curation.