
    Spatial Point Pattern Analysis of Neurons Using Ripley's K-Function in 3D

    The aim of this paper is to apply a non-parametric statistical tool, Ripley's K-function, to analyze the 3-dimensional distribution of pyramidal neurons. Ripley's K-function is a widely used tool in spatial point pattern analysis, and several approaches exist for computing and analyzing it in 2D domains. Drawing consistent inferences on underlying 3D point pattern distributions is of great importance in various applications, as technological progress has made the acquisition of 3D biological data far less of a challenge. To date, however, most applications of Ripley's K-function in 3D domains do not address edge correction, which is discussed thoroughly in this paper. The main goal is to extend the theoretical and practical use of Ripley's K-function, and the corresponding tests based on bootstrap resampling, from 2D to 3D domains.
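As a minimal illustration of the statistic the abstract discusses, the sketch below (in Python, with hypothetical data) computes a naive, uncorrected 3D Ripley's K estimate and compares it with the theoretical value K(r) = (4/3)πr³ under complete spatial randomness (CSR); a real analysis of bounded neuron data would add the edge correction the paper focuses on.

```python
import numpy as np

def ripley_k_3d(points: np.ndarray, radii: np.ndarray, volume: float) -> np.ndarray:
    """Naive (edge-effect-ignoring) 3D Ripley's K estimate.

    K_hat(r) = V / (n * (n - 1)) * #{ordered pairs i != j with ||x_i - x_j|| <= r};
    under CSR, K(r) = (4/3) * pi * r**3. An edge-corrected estimator would
    additionally weight each pair by the fraction of the sphere of radius
    ||x_i - x_j|| centered at x_i that lies inside the observation window.
    """
    n = len(points)
    # All pairwise Euclidean distances; self-distances set to inf so they never count.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    counts = (d[:, :, None] <= radii).sum(axis=(0, 1))  # pair count per radius
    return volume * counts / (n * (n - 1))

# Hypothetical usage: 200 uniform points in the unit cube (CSR), so the
# estimate should track the theoretical curve away from the boundary.
rng = np.random.default_rng(0)
pts = rng.random((200, 3))
r = np.linspace(0.01, 0.25, 25)
k_hat = ripley_k_3d(pts, r, volume=1.0)
k_csr = 4.0 / 3.0 * np.pi * r**3
print(np.round(np.abs(k_hat - k_csr).max(), 4))
```

A bootstrap test of the kind the abstract mentions would then compare the empirical curve with envelopes obtained from repeated CSR simulations in the same window.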

    The Analysis of Open Source Software and Data for the Establishment of GIS Services Throughout the Network in a Mapping Organization at the National or International Level

    Federal agencies and their partners collect and manage large amounts of geospatial data, but those data are often not easily found when needed, and are sometimes collected or purchased multiple times. In short, the best government data is not always organized and managed efficiently enough to support decision making in a timely and cost-effective manner. National mapping agencies, and the various departments responsible for collecting different types of geospatial data, cannot continue for long to operate as they did a few years ago, as if living on an island. Leaders need to look at what is now possible that was not possible before, considering capabilities such as cloud computing, crowd-sourced data collection, openly available remotely sensed data, and multi-source information vital to decision making, as well as new web-accessible services that are sometimes provided at no cost. Many of these services could previously be obtained only from local GIS experts. These authorities need to consider the available solutions, gather information about new capabilities, reconsider agency missions and goals, review and revise policies, make budget and human resource decisions, and evaluate new products, cloud services, and cloud service providers. Doing so requires choosing the right tools to reach the goals mentioned above.

    Data collection is the most costly part of mapping and of establishing a geographic information system, not only because of the expense of the data collection task itself, but also because of the damage caused by delay: the time it takes to bring the information needed for decision making from the field to the user's hands. In fact, the time a project spends collecting, processing, and presenting geospatial information has an even greater effect on the cost of larger projects such as disaster management, construction, city planning, and environmental monitoring, assuming that all the necessary information from existing sources is delivered directly to the user's computer. The best description of a good GIS project optimization or improvement is a methodology that reduces time and cost while increasing data and service quality, where quality covers accuracy, currency, completeness, consistency, suitability, information content, integrity, and integration capability, as well as fitness for use and the user's specific needs and conditions, all of which must be addressed with special attention. Each of these issues must be addressed individually, and at the same time the overall solution must be designed globally, considering all the criteria together.

    This thesis first discusses the problem we face and what needs to be done to establish a National Spatial Data Infrastructure (NSDI), including its definition and related components. It then surveys the available open source software solutions covering the whole process: data collection, database management, data processing, and finally data services and presentation. The first distinction among software packages is whether they are open source and free, or commercial and proprietary. From a legal point of view it can be quite difficult to determine which class a given package belongs to, so the various terms must be defined clearly before this categorization is made. Within these two global groups, a further classification is drawn according to the functionalities and the GIScience applications the packages are built for.

    Chapter 2 presents the technical process for selecting suitable and reliable software according to the characteristics of the users' needs and the required components. Building on its outcome, Chapter 3 elaborates on the details of the GeoNode software as the best candidate tool to take on the responsibilities stated above. Chapter 4 reviews the globally available open source data against predefined data quality criteria (such as theme, data content, scale, licensing, and coverage), based on the metadata statements inside the datasets and gathered through bibliographic review, technical documentation, and web search engines. Chapter 5 develops further data quality concepts and defines a set of protocols for evaluating all the datasets against the tasks for which a mapping organization is, in general, responsible to its prospective users in disciplines such as reconnaissance, city planning, topographic mapping, transportation, environmental monitoring, and disaster management. In Chapter 6, all the data quality assessments and protocols are applied to the pre-filtered, proposed datasets: in the final scores and ranking, each dataset receives a value corresponding to its quality under the rules defined in the previous chapter. In the last step, the user's data quality preferences are captured as a weight vector, derived from questions the user answers about the project at hand; this weight vector is applied to the quality matrix to obtain the final quality scores and ranking and so finalize the most appropriate selection of free and open source data, as sketched below. The chapter ends with a section presenting the use of these datasets in projects such as "Early Impact Analysis" and "Extreme Rainfall Detection System (ERDS), version 2" carried out by ITHACA. Finally, the conclusion discusses the important criteria and future trends in GIS software, and closes with recommendations.
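To make the ranking step concrete, here is a minimal sketch of a weight vector applied to a dataset-by-criterion quality matrix. The criteria, dataset names, scores, and weights are hypothetical placeholders for illustration, not values or datasets evaluated in the thesis.

```python
import numpy as np

criteria = ["accuracy", "currency", "completeness", "consistency", "coverage"]
datasets = ["OpenStreetMap", "SRTM", "Sentinel-2"]  # hypothetical candidates

# Quality matrix: one row per dataset, one column per criterion (scores 0-10),
# as produced by the evaluation protocols of the previous chapter.
quality = np.array([
    [7, 9, 6, 7, 8],
    [8, 4, 9, 9, 9],
    [9, 8, 8, 8, 7],
], dtype=float)

# Weight vector derived from the user's answers, normalized to sum to 1.
weights = np.array([3, 1, 2, 1, 3], dtype=float)
weights /= weights.sum()

scores = quality @ weights            # weighted quality score per dataset
ranking = np.argsort(scores)[::-1]    # best dataset first
for i in ranking:
    print(f"{datasets[i]:>13}: {scores[i]:.2f}")
```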

    A Comparison of Selective Classification Methods in DNA Microarray Data of Cancer: Some Recommendations for Application in Health Promotion

    Background: The aim of this study was to apply a new method for selecting a few genes, out of thousands, as plausible markers of a disease.
    Methods: A hierarchical clustering technique was used along with Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers to select marker genes of three types of breast cancer. In this method, at each step one subject is left out, and the algorithm iteratively selects clusters of genes from the remaining subjects and picks a representative gene from each cluster. Classifiers are then constructed on these genes, and the accuracy of each classifier in predicting the class of the left-out subject is recorded. The classifier with the higher precision is considered superior.
    Results: Combining classification techniques with the clustering method resulted in fewer genes with a high degree of statistical precision. Although all classifiers selected only a few genes from the pre-determined highly ranked genes, precision did not decrease. SVM precision was 100% with 22 genes instead of 50, while the NB classifier reached a precision of 97.95% in this case. When the 20 most highly ranked genes were fed to the algorithm, the same precision was obtained with 6 and 5 genes using the SVM and NB classifiers, respectively.
    Conclusion: The hybrid method can be effective in choosing a smaller number of plausible marker genes while increasing the classification precision of these markers. In addition, the method enables the detection of new plausible markers whose association with the disease under study has not yet been biologically proven.
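The Methods paragraph translates naturally into code. The following is a minimal sketch, assuming a subjects-by-genes expression matrix X and class labels y; the number of clusters and the choice of the gene closest to each cluster centroid as its representative are illustrative assumptions, not necessarily the study's exact settings.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def select_representatives(X: np.ndarray, n_clusters: int) -> np.ndarray:
    """Hierarchically cluster genes (columns of X) and keep one gene per cluster."""
    Z = linkage(X.T, method="average", metric="correlation")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    reps = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = X[:, idx].mean(axis=1)
        # Assumed rule: the representative is the member closest to the centroid.
        reps.append(idx[np.argmin(((X[:, idx] - centroid[:, None]) ** 2).sum(axis=0))])
    return np.array(reps)

def loo_accuracy(X, y, clf, n_clusters=6):
    """Leave-one-subject-out accuracy of clf on cluster-representative genes."""
    hits = 0
    for i in range(len(y)):
        train = np.delete(np.arange(len(y)), i)       # leave subject i out
        genes = select_representatives(X[train], n_clusters)
        clf.fit(X[train][:, genes], y[train])
        hits += int(clf.predict(X[i : i + 1, genes])[0] == y[i])
    return hits / len(y)

# Hypothetical data: 30 subjects, 500 genes, 3 cancer subtypes.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 500))
y = rng.integers(0, 3, size=30)
for name, clf in [("SVM", SVC(kernel="linear")), ("NB", GaussianNB())]:
    print(name, loo_accuracy(X, y, clf))
```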

    Right-to-Left Shunt in Agitated Saline Test

    An 11-year-old girl presented to the cardiology clinic with a history of atypical chest pain and nonspecific shortness of breath. The patient underwent echocardiography, which revealed redundancy of the interatrial septum with signs of shunt flow on the color Doppler study (Video 1). The differential diagnosis included atrial septal aneurysm (ASA) associated with patent foramen ovale (PFO), and atrial septal defect (ASD). An agitated saline test was performed via the left cubital vein (Video 3), and significant leakage of contrast through the interatrial septum was confirmed (Video 2). For further evaluation, the patient underwent transesophageal echocardiography (TEE) to rule out an ASD. No left-to-right shunt was seen on TEE, and the color Doppler study showed a right-to-left shunt, so the diagnosis of ASD was ruled out (Video 4). The final diagnosis was PFO. PFO is a congenital cardiac abnormality in which the fetal blood communication tunnel between the two atria (right to left) remains open after one year of age. It has a high prevalence, about 25%, in the general population. No intervention or follow-up is needed, except for patients who have a large PFO with a significant right-to-left shunt and a history of a neurological event, especially at younger ages and without other risk factors for atherosclerosis.