1,588 research outputs found

    Point-of-care diagnostics for niche applications

    Get PDF
    Point-of-care or point-of-use diagnostics are analytical devices that provide clinically relevant information without the need for a core clinical laboratory. In this review we define point-of-care diagnostics as portable versions of assays performed in a traditional clinical chemistry laboratory. This review discusses five areas relevant to human and animal health where increased attention could produce significant impact: veterinary medicine, space travel, sports medicine, emergency medicine, and operating room efficiency. For each of these areas, clinical need, available commercial products, and ongoing research into new devices are highlighted.

    The Empirical Foundations of Teleradiology and Related Applications: A Review of the Evidence

    Full text link
    Introduction: Radiology was founded on a technological discovery by Wilhelm Roentgen in 1895. Teleradiology also had its roots in technology, dating back to 1947 with the successful transmission of radiographic images through telephone lines. Diagnostic radiology has become the eye of medicine in terms of diagnosing and treating injury and disease. This article documents the empirical foundations of teleradiology. Methods: A selective review of the credible literature during the past decade (2005–2015) was conducted, using robust research design and adequate sample size as criteria for inclusion. Findings: The evidence regarding feasibility of teleradiology and related information technology applications has been well documented for several decades. The majority of studies focused on intermediate outcomes, as indicated by comparability between teleradiology and conventional radiology. A consistent trend of concordance between the two modalities was observed in terms of diagnostic accuracy and reliability. Additional benefits include reductions in patient transfer, rehospitalization, and length of stay. Peer Reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/140295/1/tmj.2016.0149.pd

    The Handheld Image: Art, History and Embodiment

    Get PDF
    This thesis investigates how images become present through movement and bodily performance. Inspiring this investigation are the contemporary practices of viewers engaging with still and moving images of people on their handheld screen devices. These practices are not only central to contemporary visuality, they also provide a focus for two wider themes relating to images of people: first, the dynamic tension between image control and circulation; and second, the mutual contestation of the physical and the virtual. To explore the struggle between image control and circulation, this thesis compares the dissemination of the twenty-first-century digital image with two historical instances of the handheld image: the sixteenth-century portrait miniature and the nineteenth-century carte de visite photographic portrait. While the physical control of the portrait miniature was paramount, the carte de visite, as the first form of mass-produced photograph, betrays the social benefits and perils of the shift from control to circulation. These historical forms are augmented through a consideration of contemporary moving-image portraiture that reveals the portrait as an interface for the interrelated demands and desires of artists, portrait subjects, and viewers. Having tracked handheld images through the sixteenth-century bedchamber and the nineteenth-century parlour, this thesis then follows handheld devices into the twenty-first-century bed to witness the contest between the somatic and the virtual: between the vulnerable, fatigued body and the seductions of online screen engagement. This thesis challenges the view that an image becomes more powerful through unfettered circulation. Rather, it proposes that the potency of an image is powered by the contestation of meaning and memory, through the struggle between circulation and control. It is through these moments of struggle, and the unstable fluctuations between the actual and the virtual, that the image becomes present.

    The Boston University Photonics Center annual report 2016-2017

    Full text link
    This repository item contains an annual report that summarizes activities of the Boston University Photonics Center in the 2016-2017 academic year. The report provides quantitative and descriptive information regarding photonics programs in education, interdisciplinary research, business innovation, and technology development. The Boston University Photonics Center (BUPC) is an interdisciplinary hub for education, research, scholarship, innovation, and technology development associated with practical uses of light. This has undoubtedly been the Photonics Center’s best year since I became Director 10 years ago. In the following pages, you will see highlights of the Center’s activities in the past year, including more than 100 notable scholarly publications in the leading journals in our field, and the attraction of more than 22 million dollars in new research grants/contracts. Last year I had the honor to lead an international search for the first recipient of the Moustakas Endowed Professorship in Optics and Photonics, in collaboration with ECE Department Chair Clem Karl. This professorship honors the Center’s most impactful scholar and one of the Center’s founding visionaries, Professor Theodore Moustakas. We are delighted to have awarded this professorship to Professor Ji-Xin Cheng, who joined our faculty this year. The past year also marked the launch of Boston University’s Neurophotonics Center, which will be allied closely with the Photonics Center. Leading that Center will be a distinguished new faculty member, Professor David Boas. David and I are together leading a new Neurophotonics NSF Research Traineeship Program that will provide $3M to promote graduate traineeships in this emerging new field. We had a busy summer hosting NSF Sites for Research Experiences for Undergraduates, Research Experiences for Teachers, and the BU Student Satellite Program.
As a community, we emphasized the theme of “Optics of Cancer Imaging” at our annual symposium, hosted by Darren Roblyer. We entered a five-year second phase of NSF funding in our Industry/University Collaborative Research Center on Biophotonic Sensors and Systems, which has become the centerpiece of our translational biophotonics program. That I/UCRC continues to focus on advancing the health care and medical device industries.

    Pseudo-haptics survey: Human-computer interaction in extended reality & teleoperation

    Get PDF
    Pseudo-haptic techniques are becoming increasingly popular in human-computer interaction. They replicate haptic sensations by leveraging primarily visual feedback rather than mechanical actuators. These techniques bridge the gap between the real and virtual worlds by exploiting the brain’s ability to integrate visual and haptic information. One of the many advantages of pseudo-haptic techniques is that they are cost-effective, portable, and flexible. They eliminate the need for direct attachment of haptic devices to the body, which can be heavy and large and require a lot of power and maintenance. Recent research has focused on applying these techniques to extended reality and mid-air interactions. To better understand the potential of pseudo-haptic techniques, the authors developed a novel taxonomy encompassing tactile feedback, kinesthetic feedback, and combined categories in multimodal approaches, ground not covered by previous surveys. This survey highlights multimodal strategies and potential avenues for future studies, particularly regarding integrating these techniques into extended reality and collaborative virtual environments.

    An Optofluidic Lens Biochip and an x-ray Readable Blood Pressure Microsensor: Versatile Tools for in vitro and in vivo Diagnostics.

    Full text link
    Three different microfabricated devices were presented for use in in vivo and in vitro diagnostic biomedical applications: an optofluidic-lens biochip, a handheld digital imaging system, and an x-ray readable blood pressure sensor for monitoring restenosis. An optofluidic biochip, termed the ‘Microfluidic-based Oil-Immersion Lens’ (mOIL) biochip, was designed, fabricated, and tested for high-resolution imaging of various biological samples. The biochip consists of an array of high-refractive-index (n = 1.77) sapphire ball lenses sitting on top of an oil-filled microfluidic network of microchambers. The combination of the high optical quality lenses with the immersion oil results in a numerical aperture (NA) of 1.2, which is comparable to the high NA of oil-immersion microscope objectives. The biochip can be used as an add-on module to a stereoscope to improve the resolution from 10 microns down to 0.7 microns. It also has a scalable field of view (FOV), as the total FOV increases linearly with the number of lenses in the biochip (each lens has a ~200 micron FOV). By combining the mOIL biochip with a CMOS sensor and an LED light source in a 3D-printed housing, a compact (40 grams, 4 cm × 4 cm × 4 cm), high-resolution (~0.4 microns) handheld imaging system was developed. The applicability of this system was demonstrated by counting red and white blood cells and imaging fluorescently labelled cells. In blood smear samples, blood cells, sickle cells, and malaria-infected cells were easily identified. To monitor restenosis, an x-ray readable implantable blood pressure sensor was developed. The sensor is based on the use of an x-ray absorbing liquid contained in a microchamber. The microchamber has a flexible membrane that is exposed to blood pressure. When the membrane deflects, the liquid moves into the microfluidic gauge. The length of the microfluidic gauge can be measured, and consequently the applied pressure exerted on the diaphragm can be calculated.
The prototype sensor has dimensions of 1 × 0.6 × 10 mm and adequate resolution (19 mmHg) to detect restenosis in coronary artery stents from a standard chest x-ray. Further improvements of our prototype will open up the possibility of measuring the pressure drop in a coronary artery stent non-invasively. PhD, Macromolecular Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111384/1/toning_1.pd
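    The pressure readout described in this abstract (membrane deflection displaces liquid into a gauge channel whose column length encodes pressure) can be sketched as a simple calibration calculation. The constants below, a gauge cross-section area and a membrane compliance, are hypothetical illustrations chosen for the sketch, not values from the dissertation.

```python
def gauge_length_to_pressure(length_mm: float,
                             gauge_area_mm2: float = 0.01,
                             compliance_mm3_per_mmHg: float = 0.002) -> float:
    """Infer the applied pressure (mmHg) from the liquid column length in the gauge.

    Assumes the x-ray absorbing liquid is incompressible, so the volume displaced
    by the deflecting membrane (compliance * pressure) equals the volume filling
    the gauge channel (cross-section area * column length). Both calibration
    constants are illustrative placeholders.
    """
    displaced_volume_mm3 = gauge_area_mm2 * length_mm
    return displaced_volume_mm3 / compliance_mm3_per_mmHg

# With these illustrative constants, a 3.8 mm column corresponds to ~19 mmHg,
# i.e. one resolution step of the reported prototype.
pressure = gauge_length_to_pressure(3.8)
```

In a real device the length-to-pressure mapping would be measured empirically rather than derived from nominal geometry; the linear model above only illustrates why a longer column implies a higher applied pressure.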

    Musiquence – Design, Implementation and Validation of a Customizable Music and Reminiscence Cognitive Stimulation Platform for People with Dementia

    Get PDF
    Dementia is a neurodegenerative disease that affects millions of individuals worldwide and is challenging to diagnose, as symptoms may only become perceivable decades later. The disease leads to a gradual loss of memory, learning, orientation, language, and comprehension skills, which compromises activities of daily living. Health-related costs caused by dementia will continue to increase over the next few years; between the years 2005 and 2009, an increase of 34% (from $315 to $422 billion worldwide) was observed in treating dementia-related issues. Pharmaceutical approaches have been developed to treat dementia symptoms; unfortunately, the risk of side effects is high. For this reason, nonpharmaceutical methods such as music and reminiscence therapies have gained acceptance, as patients with dementia respond positively to such approaches even at later stages of the disease. Nevertheless, further research is needed to understand how music and reminiscence therapy should be used and to quantify their impact on individuals with dementia. The development of serious games has gained attention as an alternative approach to stimulate patients. However, the clinical impact that serious games have on individuals with dementia is still unclear. In this dissertation, we contribute new knowledge regarding the usage of music and reminiscence approaches in people with dementia through a theoretical model. Based on Baddeley’s working memory model, our model aims to explain how the therapeutic properties of music and reminiscence can have a beneficial effect. To test our model, we developed a novel interactive platform called Musiquence, in which healthcare professionals can create music- and reminiscence-based cognitive activities to stimulate people with dementia. In this dissertation, we present the results from several studies about the usage and effects that music and reminiscence have on such a population.
We performed two studies using Musiquence to study the feasibility of a novel learning method based on musical feedback to aid people with dementia during task performance in virtual reality settings. Results show that participants relied more on music-based feedback during the task performance of virtual reality activities than on other forms of feedback. The data also suggest that the music-based feedback system can improve task performance, compensating for some dementia-related deficits. We also used Musiquence in a one-month longitudinal pilot study to assess its efficacy when used for a cognitive stimulation intervention in dementia patients. The results of the study are promising: the three participants showed improvements in terms of general cognition, quality of life, mood, and verbal fluency.

    Office of Research and Economic Development -- Annual Report 2008-2009

    Get PDF
    Contents: New Perspectives; Self-aligning Nanotubes; Harnessing Nanotechnology’s Potential; Grad Program Wades into Water Issues; Water for Food: A Global Challenge; Sensor System Detects Track Troubles; Better Packing Peanuts; Precast Pole System Eases Installation; Investigating Blasts’ Impact on Brain; Partnering on Math Achievement; Improving Child Welfare Services; Exploring Complex Social Dynamics; Focusing on Rural Schools’ Unique Needs; A Gut Feeling; Of Mice and Health; Deciphering Nutrigenomics Puzzle; Shear Heads NU Press; Anderson Leads Industry Relations; Supercomputing Power Expands; Shaping Climate Change Research; Debugging Complex Software; Laying Innovation Campus Groundwork; Enhancing International Partnerships; Tackling Human Trafficking; Opera’s Winning Ways; Determined to Make a Difference; Stimulus Bolsters Research; More Research Highlights; Financial

    Esquemas de transferência para aprendizado profundo em classificação de imagens (Transfer Schemes for Deep Learning in Image Classification)

    Get PDF
    Advisors: Eduardo Alves do Valle Junior, Sandra Eliza Fontes de Avila. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Abstract: In Computer Vision, the task of classification is complex, as it aims to identify the presence of high-level categories in images, depending critically upon learning general models from a set of training samples. Deep Learning (DL) for visual tasks usually involves seamlessly learning every step of this process, from feature extraction to label assignment. This pervasive learning improves DL generalization abilities, but brings its own challenges: a DL model will have a huge number of parameters to estimate, thus requiring large amounts of annotated data and computational resources. In this context, transfer learning emerges as a promising solution, allowing one to recycle parameters learned among different models. Motivated by the growing amount of evidence for the potential of such techniques, we study transfer learning for deep architectures applied to image recognition. Our experiments are designed to explore the internal representations of DL architectures, testing their robustness, redundancy, and precision, with applications to the problems of automated melanoma screening, scene recognition (MIT Indoors), and object detection (Pascal VOC). We also take transfer learning to extremes, introducing Complete Transfer Learning, which preserves most of the original model, showing that aggressive transfer schemes can reach competitive results.
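    The core transfer idea in this abstract, reusing a learned representation and training only a new classifier on top, can be illustrated with a toy sketch. The frozen random projection below stands in for real pretrained convolutional layers; all names, shapes, and constants are illustrative, not from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" feature extractor: a fixed random projection standing in
# for the early layers of a deep network. Its weights are never updated.
W_frozen = rng.normal(size=(2, 16))

def extract_features(x):
    # Frozen representation; only the head trained below sits on top of it.
    return np.tanh(x @ W_frozen)

# Toy labelled data: a linearly separable binary problem.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Transfer step: fit only a new logistic-regression head on the frozen features.
F = extract_features(X)
w, b, lr = np.zeros(16), 0.0, 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid prediction
    w -= lr * F.T @ (p - y) / len(y)        # gradient of the log-loss (head only)
    b -= lr * (p - y).mean()

accuracy = (((F @ w + b) > 0) == (y == 1)).mean()
```

Because only the 17 head parameters are estimated, the procedure needs far less data and compute than training the whole model, which is the practical appeal of the transfer schemes the dissertation studies; "Complete Transfer Learning" pushes this further by preserving most of the original model.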
