297 research outputs found

    Bayesian decoding of tactile afferents responsible for sensorimotor control

    In daily activities, humans manipulate objects and do so with great precision. Empirical studies have demonstrated that signals encoded by mechanoreceptors facilitate precise object manipulation in humans; however, little is known about the underlying mechanisms. Models used in the literature to analyze tactile afferent data range from advanced—for example, models that account for skin tissue properties—to simple regression fits. These models, however, do not systematically account for factors that influence tactile afferent activity. For instance, it is not yet clear whether the first derivative of force influences the observed tactile afferent spike train patterns. In this study, I use the technique of microneurography—with the help of Dr. Birznieks—to record tactile afferent data from humans. I then implement spike sorting algorithms to identify spike occurrences that pertain to a single cell. For further analyses of the resulting spike trains, I use a Bayesian decoding framework to investigate tactile afferent mechanisms that are responsible for sensorimotor control in humans. The Bayesian decoding framework I implement is a two-stage process: in the first stage (the encoding model), the relationship between the administered stimuli and the recorded tactile afferent signals is established; the second stage uses results from the first stage to make predictions. The goal of the encoding model is to increase our understanding of the mechanisms that underlie dexterous object manipulation and, from an engineering perspective, to guide the design of algorithms for inferring the stimulus from previously unseen tactile afferent data, a process referred to as decoding. Specifically, the objective of the study was to devise quantitative methods that would provide insight into some of the mechanisms that underlie touch, as well as provide strategies through which real-time biomedical devices can be realized. Tactile afferent data from eight subjects (18–30 years) with no known neurological disorders were recorded by inserting a needle electrode into the median nerve at the wrist. I was involved in designing experimental protocols, designing the safety mechanisms that were put in place, designing and building electronic components as needed, experimental setup, subject recruitment, and data acquisition. Dr. Ingvars Birznieks (who performed the actual microneurography procedure by inserting a needle electrode into the nerve and identifying afferent types) and Dr. Heba Khamis provided assistance with the data acquisition and experimental design. The study took place at Neuroscience Research Australia (NeuRA). Once the data were acquired, I analyzed the data recorded from slowly adapting type I tactile afferents (SA-I). The initial stages of data analysis involved writing software routines to spike sort the data (identify action potential waveforms that pertain to individual cells). I analyzed SA-I tactile afferents because they were more numerous (it was difficult to target other types of afferents during experiments). In addition, SA-I tactile afferents respond during both the dynamic and the static phase of a force stimulus. Since they respond during both phases of the force stimulus, it seemed reasonable to hypothesize that SA-Is alone could provide sufficient information for predicting the force profile, given spike data.
    In the first stage, I used an inhomogeneous Poisson process encoding model through which I assessed the relative importance of aspects of the stimuli to the observed spike data. In addition, I estimated the likelihood for the SA-I data given the inhomogeneous Poisson model, which was used during the second stage. The likelihood is formulated by deriving the joint distribution of the data as a function of the model parameters, with the data fixed. In the second stage, I used a recursive nonlinear Bayesian filter to reconstruct the force profile, given the SA-I spike patterns. Moreover, the decoding method implemented in this thesis is feasible for real-time applications such as interfacing with prostheses because it can be realized with readily available electronic components. I also implemented a renewal point process encoding model—as a generalization of the Poisson process encoding model—which can account for some history-dependence properties of neural data. I discovered that under my encoding model, the relative contributions of the force and its derivative are 1.26 and 1.02, respectively. This suggests that the force derivative contributes significantly to the spiking behavior of SA-I tactile afferents. This is a novel contribution because it provides a quantitative answer to the long-standing question of whether the force derivative contributes to SA-I tactile afferent spiking behavior. As a result, I incorporated the first derivative of force, along with the force, in the encoding models implemented in this thesis. The decoding model shows that SA-I fibers provide sufficient information for an approximation of the force profile. Furthermore, including fast adapting tactile afferents would provide better information about the first and last moments of contact, and thus improved decoding results. Finally, I show that a renewal point process encoding model captures interspike-time and stimulus features better than an inhomogeneous Poisson point process encoding model. This is useful because it is now possible to generate synthetic data with statistical structure similar to that of real SA-I data, which would enable further investigation of the mechanisms that underlie SA-I tactile afferents.
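    As a minimal illustration of the encoding stage described above (not the thesis code), the sketch below computes the log-likelihood of a binned spike train under an inhomogeneous Poisson model whose intensity is driven by the force and its first derivative. The exponential link, the coefficient names, and the toy stimulus are assumptions made for the example.

    # Minimal sketch: inhomogeneous Poisson encoding model driven by force and dF/dt.
    import numpy as np

    def log_intensity(beta, force, dforce):
        # log lambda(t) = b0 + b1*force(t) + b2*dforce(t) (assumed exponential link)
        b0, b1, b2 = beta
        return b0 + b1 * force + b2 * dforce

    def poisson_log_likelihood(beta, spikes, force, dforce, dt):
        # Discrete-time approximation: sum over bins of spikes*log(lambda*dt) - lambda*dt
        log_lam = log_intensity(beta, force, dforce)
        return np.sum(spikes * (log_lam + np.log(dt)) - np.exp(log_lam) * dt)

    # Toy usage with synthetic data (1 ms bins, ramp-and-hold-like force).
    rng = np.random.default_rng(0)
    dt = 0.001
    t = np.arange(0, 2.0, dt)
    force = np.clip(np.sin(2 * np.pi * 0.5 * t), 0, None)
    dforce = np.gradient(force, dt)
    true_beta = (1.0, 2.0, 0.5)
    spikes = rng.poisson(np.exp(log_intensity(true_beta, force, dforce)) * dt)
    print(poisson_log_likelihood(true_beta, spikes, force, dforce, dt))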

    Heuristic edge server placement in Industrial Internet of Things and cellular networks

    Rapid developments in Industry 4.0, machine learning, and digital twins have introduced new latency, reliability, and processing restrictions in Industrial Internet of Things (IIoT) and mobile devices. However, using current Information and Communications Technology (ICT), it is difficult to optimally provide services that require high computing power and low latency. To meet these requirements, mobile edge computing is emerging as a ubiquitous computing paradigm that enables the use of network infrastructure components such as cluster heads/sink nodes in IIoT and cellular network base stations to provide local data storage and computation servers at the edge of the network. However, selecting optimal locations for edge servers within a network from a very large number of possibilities, such that workload is balanced and access delay is minimized, is a challenging problem. In this paper, the edge server placement problem is addressed within an existing network infrastructure obtained from the Shanghai Telecom base station dataset, which includes a significant amount of call data records and the locations of actual base stations. The problem of edge server placement is formulated as a multi-objective constraint optimization problem that places edge servers strategically to balance the workloads of edge servers and reduce the access delay between the industrial control center/cellular base stations and the edge servers. Searching randomly through a large number of possible solutions and selecting those closest to the optimum can be very time-consuming; therefore, we apply the genetic algorithm and local search algorithms (hill climbing and simulated annealing) to find the best solution in the fewest solution-space explorations. Experimental results are obtained to compare the performance of the genetic algorithm against the above-mentioned local search algorithms. The results show that the genetic algorithm can search through the large solution space more quickly than the local search optimization algorithms to find an edge placement strategy that minimizes the cost function.
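    As an illustrative sketch only (not the paper's implementation), the snippet below uses a small genetic algorithm to choose k base stations to host edge servers, scoring each candidate placement by a weighted sum of workload imbalance and mean access delay. The cost weights, genetic operators, and toy data are all assumptions.

    # Genetic algorithm sketch for edge server placement (illustrative assumptions only).
    import random
    import math

    def cost(placement, stations, workloads, w_balance=1.0, w_delay=1.0):
        # placement: indices of base stations chosen to host edge servers
        loads = {p: 0.0 for p in placement}
        total_delay = 0.0
        for i, pos in enumerate(stations):
            nearest = min(placement, key=lambda p: math.dist(pos, stations[p]))
            loads[nearest] += workloads[i]                # workload routed to nearest server
            total_delay += math.dist(pos, stations[nearest])
        imbalance = max(loads.values()) - min(loads.values())
        return w_balance * imbalance + w_delay * total_delay / len(stations)

    def crossover(a, b, k, n):
        # Merge two parents, then repair to exactly k distinct station indices.
        child = list(dict.fromkeys(a[: k // 2] + b))[:k]
        while len(child) < k:
            g = random.randrange(n)
            if g not in child:
                child.append(g)
        return child

    def mutate(ind, n, rate=0.2):
        # Occasionally swap one chosen station for one not currently in the placement.
        if random.random() < rate:
            ind[random.randrange(len(ind))] = random.choice([g for g in range(n) if g not in ind])
        return ind

    def genetic_placement(stations, workloads, k, pop_size=40, generations=200):
        n = len(stations)
        pop = [random.sample(range(n), k) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda ind: cost(ind, stations, workloads))
            survivors = pop[: pop_size // 2]              # elitist selection
            while len(survivors) < pop_size:
                a, b = random.sample(survivors[: pop_size // 4], 2)
                survivors.append(mutate(crossover(a, b, k, n), n))
            pop = survivors
        return min(pop, key=lambda ind: cost(ind, stations, workloads))

    # Toy usage: 50 random base stations, 5 edge servers.
    random.seed(1)
    stations = [(random.random(), random.random()) for _ in range(50)]
    workloads = [random.random() for _ in range(50)]
    best = genetic_placement(stations, workloads, k=5)
    print(sorted(best), round(cost(best, stations, workloads), 3))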

    Mapping women’s role in small scale fisheries value chain in India for fisheries sustainability

    Sustainability in small scale fisheries is receiving wider acceptance worldwide as the system faces different kinds of exploitation. Gender can play a significant role in achieving sustainability, as women and men are the primary beneficiaries in small scale fisheries. Exploring their level of participation in resource use can provide a database that serves as a key determinant of sustainability. This article looks for empirical evidence on the role of men and women in small scale fisheries through gender structure analysis. The indigenous communities (n = 154) in the Vazhachal Forest Division, Kerala, a southern state in India, are considered for the study. Methods adopted include a household survey using a semi-structured questionnaire, transect walks, focus groups, and direct observations. Results reveal that although men account for the higher percentage of participation (66.20%), women's role is substantial (33.80%) in the fisheries value chain, including the pre-harvest, harvest, and post-harvest sectors. Their presence had a significant relation to supporting men in fisheries activities such as collection of baits (χ² = 6.189, p = 0.013), accompanying men in fishing (χ² = 4.153, p = 0.042), sorting of fishes (χ² = 3.566, p = 0.059), processing of fishes (χ² = 9.776, p = 0.002), and mending of nets (χ² = 4.40, p = 0.042). Results further reveal that men and women have both unique and overlapping roles in small scale fisheries. The key findings of the study provide quantitative evidence to develop strategies for small scale fisheries sustainability.
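    For readers unfamiliar with the statistics reported above, the snippet below shows how a chi-square test of independence could be computed for a 2x2 gender-by-activity contingency table. The counts are hypothetical; the study's underlying tables are not reproduced here.

    # Hypothetical chi-square test of independence (gender vs. participation in an activity).
    from scipy.stats import chi2_contingency

    table = [[42, 35],   # men: participates / does not participate (made-up counts)
             [28, 49]]   # women: participates / does not participate (made-up counts)
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")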

    Evaluation of Efficacy of Sonosalphingogram for Assessing Tubal Patency in Infertile Patients with Hysterosalphingogram as Gold Standard.

    Infertility has emerged as a significant psychosocial problem today, with approximately 15% of married couples being infertile (1). Definition of infertility: it has been defined as one year of unprotected coitus without conception. It affects approximately 10–15% of couples in the reproductive age group. Etiology: an assessment of causative factors has shown that 30% of cases are due to male factors, 30% to female factors, 30% to combined male and female factors, and 10% to unexplained causes. Among the female infertility factors, 30–40% of cases are due to ovulatory dysfunction, 30–40% to tubal pathology, 10–15% to unexplained causes, and 10–15% to miscellaneous causes such as endocrine factors. Tubal pathology is responsible for 30–40% of cases of infertility (1). Determining whether the fallopian tube is patent is part of the initial evaluation in seeking the cause of infertility. Currently available procedures, each with its drawbacks, include the Rubin test, which is highly subjective; laparoscopy, which is invasive; and hysterosalphingography, which exposes the patient to ionizing radiation and contrast medium. Of the three techniques, hysterosalphingography has been the most commonly used (2). In recent years, major technologic advances in diagnostic ultrasound have led to improved image quality, particularly with the use of vaginal probes. Negative contrast such as saline can be used to visualize the endometrial cavity (3). The presence of fluid in the periovarian region and pouch of Douglas indicates patency of the tube. Further, the use of color Doppler to assess flow through the cornua can help characterize the direction of flow. This study, undertaken at the Barnard Institute of Radiology, presents our experience with sonosalphingography in 35 cases of infertility.

    The role of ultra-low-dose computed tomography in the detection of pulmonary pathologies: a prospective observational study

    Purpose: The aim of the study was to compare the image noise, radiation dose, and image quality of ultra-low-dose computed tomography (CT) and standard CT in the imaging of pulmonary pathologies. Material and methods: This observational study was performed between July 2020 and August 2021. All enrolled patients underwent both ultra-low-dose and standard CT. The image noise, image quality for normal pulmonary structures, presence or absence of various pulmonary lesions, and radiation dose were recorded for each scan. The findings of standard-dose CT were regarded as the gold standard and compared with those of ultra-low-dose CT. Results: A total of 124 patients were included in the study. The image noise was higher in ultra-low-dose CT than in standard-dose CT. The overall image quality was determined to be diagnostic in 100% of standard CT images and in 96.77% of ultra-low-dose CT images, with proportional worsening of image quality as the body mass index (BMI) range increased. Ultra-low-dose CT offered high (> 90%) sensitivity for lesions such as consolidation (97%), pleural effusion (95%), fibrosis (92%), and solid pulmonary nodules (91%). The effective radiation dose (mSv) was many times lower in ultra-low-dose CT than in standard-dose CT (mean ± SD: 0.50 ± 0.005 vs. 3.99 ± 1.57). Conclusions: The radiation dose of ultra-low-dose chest CT was almost equal to that of a chest X-ray. It could be used for the screening and/or follow-up of patients with solid pulmonary nodules (> 3 mm) and consolidation.
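    As a small worked example of how the per-lesion sensitivities above could be derived when standard-dose CT is treated as the gold standard, the sketch below computes sensitivity from made-up binary lesion reads; the arrays are hypothetical, not study data.

    # Sensitivity of ultra-low-dose CT against standard-dose CT as gold standard (toy data).
    import numpy as np

    standard = np.array([1, 1, 1, 0, 1, 0, 1, 1, 0, 1])    # 1 = lesion seen on standard CT
    ultra_low = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1])   # 1 = lesion seen on ultra-low-dose CT

    tp = np.sum((ultra_low == 1) & (standard == 1))         # true positives
    fn = np.sum((ultra_low == 0) & (standard == 1))         # false negatives
    print(f"sensitivity = {tp / (tp + fn):.0%}")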

    Correlates of morbidity and mortality in severe necrotizing pancreatitis

    Acute severe pancreatitis is associated with high morbidity and mortality and is frequently accompanied by underlying pancreatic parenchymal necrosis. Patients with pancreatic necrosis must be identified, because the morbidity and mortality rates in this subgroup are much higher. Our objective was to compare the clinical outcomes of these patients based on the degree of pancreatic necrosis. A total of 35 patients were noted to have pancreatic necrosis. These were divided into two groups based on the extent of necrosis: group A had less than 50% necrosis and group B had more than 50% necrosis. The mortality rate (5% versus 40%) was significantly higher in group B. The rate of organ dysfunction also rose, along with the rates of other morbidities and variables related to a patient's hospital stay. Only the APACHE II score significantly correlated with the degree of necrosis, wherein the chances of substantial necrosis rose by 20% with each unit increase in APACHE II score. The APACHE II score could be employed and studied further prospectively to help identify patients with pancreatic necrosis.

    Diagnosis and Management of Mandibular Condyle Fractures

    In the maxillofacial region, mandibular condyle fractures account for about 10–40% of the trauma spectrum. This chapter deals with the etiology, classification, clinical features, diagnosis, and contemporary management of mandibular condyle fractures. Along with the regular management strategies, treatment protocols for geriatric and pediatric patients are also discussed. The indications and contraindications of closed as well as open reduction and fixation of condyle fractures are analyzed in detail.

    Soil Classification and Crop Prediction Using Machine Learning

    Soil classification is a major problem and a heated topic in many countries. The world's population is increasing at an alarming rate, which in turn increases the demand for food crops. Farmers are forced to rethink soil cultivation, since their conventional methods are insufficient to fulfil escalating needs. To optimize agricultural output, farmers must understand which soil type best suits a certain crop, which in turn bears on meeting the growing demand for food. There are several methods for categorizing soil in a scientific way, but each has its own set of disadvantages, such as the time and effort required. Computer-based soil classification approaches are essential since they will aid farmers in the field and will be quick. Advanced machine-learning-based soil classification methodologies can be used to classify soil and extract various features from it.
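    The abstract does not specify a particular model or feature set; as an illustrative sketch, the snippet below trains a random forest on hypothetical soil measurements and maps the predicted soil class to a toy crop suggestion. The features, labels, and crop table are assumptions.

    # Illustrative soil classification and crop suggestion sketch (toy data, assumed features).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 300
    # Hypothetical features: pH, nitrogen (mg/kg), moisture (%), mean R/G/B of a soil photo.
    X = np.column_stack([
        rng.uniform(4.5, 8.5, n),
        rng.uniform(50, 400, n),
        rng.uniform(5, 45, n),
        rng.uniform(0, 255, (n, 3)),
    ])
    y = rng.integers(0, 3, n)          # toy labels: 0 = clay, 1 = loam, 2 = sandy

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

    crop_for_soil = {0: "rice", 1: "wheat", 2: "groundnut"}   # toy crop lookup per soil class
    pred = int(clf.predict(X_test[:1])[0])
    print("predicted soil class:", pred, "-> suggested crop:", crop_for_soil[pred])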