MITK-ModelFit: A generic open-source framework for model fits and their exploration in medical imaging -- design, implementation and application on the example of DCE-MRI
Many medical imaging techniques utilize fitting approaches for quantitative
parameter estimation and analysis. Common examples are pharmacokinetic modeling
in DCE MRI/CT, ADC calculations and IVIM modeling in diffusion-weighted MRI and
Z-spectra analysis in chemical exchange saturation transfer MRI. Most available
software tools are limited to a special purpose and do not allow for own
developments and extensions. Furthermore, they are mostly designed as
stand-alone solutions using external frameworks and thus cannot be easily
incorporated natively in the analysis workflow. We present a framework for
medical image fitting tasks that is included in MITK, following a rigorous
open-source, well-integrated and operating system independent policy. Software
engineering-wise, the local models, the fitting infrastructure and the results
representation are abstracted and thus can be easily adapted to any model
fitting task on image data, independent of image modality or model. Several
ready-to-use libraries for model fitting and use-cases, including fit
evaluation and visualization, were implemented. Their embedding into MITK
allows for easy data loading, pre- and post-processing and thus a natural
inclusion of model fitting into an overarching workflow. As an example, we
present a comprehensive set of plug-ins for the analysis of DCE MRI data, which
we validated on existing and novel digital phantoms, yielding competitive
deviations between fit and ground truth. Providing a very flexible environment,
our software mainly addresses developers of medical imaging software that
includes model fitting algorithms and tools. Additionally, the framework is of
high interest to users in the domain of perfusion MRI, as it offers
feature-rich, freely available, validated tools to perform pharmacokinetic
analysis on DCE MRI data, with both interactive and automatized batch
processing workflows.
Comment: 31 pages, 11 figures. URL: http://mitk.org/wiki/MITK-ModelFi
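The separation the abstract describes (local models, fitting infrastructure, and result representation abstracted from each other) can be sketched as follows. This is an illustrative Python sketch only, not MITK's actual C++ API; the names FitModel, LinearModel, and grid_search_fit are invented for demonstration.

```python
# Hypothetical sketch of the abstraction described above: the model, the
# fitting infrastructure, and the fit evaluation are decoupled, so any
# voxel-wise model can be plugged into the same generic fitter.
# These class/function names are illustrative, not MITK's API.

class FitModel:
    """A parametric signal model evaluated on a sample grid (e.g. time points)."""
    def evaluate(self, params, grid):
        raise NotImplementedError

class LinearModel(FitModel):
    # signal = a * t + b; stands in for e.g. an ADC or pharmacokinetic model
    def evaluate(self, params, grid):
        a, b = params
        return [a * t + b for t in grid]

def sum_sq_error(model, params, grid, signal):
    pred = model.evaluate(params, grid)
    return sum((p - s) ** 2 for p, s in zip(pred, signal))

def grid_search_fit(model, grid, signal, candidates):
    """Generic fitter: works for any FitModel, independent of modality."""
    return min(candidates, key=lambda p: sum_sq_error(model, p, grid, signal))

times = [0.0, 1.0, 2.0, 3.0]
signal = [1.0, 3.0, 5.0, 7.0]          # generated by a = 2, b = 1
candidates = [(a, b) for a in range(5) for b in range(5)]
best = grid_search_fit(LinearModel(), times, signal, candidates)
```

A real framework would substitute a nonlinear optimizer for the toy grid search, but the interface boundary between model and fitter is the point being illustrated.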
Using an analytical CRM system to reduce churn in the telecom sector: A machine learning approach
Applied project submitted to the Department of Computer Science and Information Systems, Ashesi University, in partial fulfillment of the Bachelor of Science degree in Computer Science, April 2019.
Customers are considered to be the most valuable assets of any business, and thus their loyalty
is key to profitability as they indulge in repeat purchases and attract their colleagues through
word-of-mouth. In competitive markets such as telecommunications, customers have a lot of
flexibility due to the variety of service providers available and the introduction of mobile
number portability (MNP), so they can easily switch services and service providers. Customer
churn is therefore a major problem for telecommunication companies, hence their drive to
reduce the churn rate and retain existing customers. Customer relationship management
systems have been used over the years to track patterns within customer data, but this could
be improved notably with today's rapid technological advances. We
have moved past the ages of innovation around steam engines, electricity, computers, mobile, and the
internet to the current technology trends in artificial intelligence and big data. We are at the
cusp of a new wave in which enterprises have embraced machine learning to
streamline different business processes. Telecom companies have the advantage of mining
large customer datasets that can be leveraged for predictive analysis using data science.
This project explores the use of an analytical CRM system to reduce customer churn in the
telecom industry, using machine learning algorithms to predict customer behavior in order to
retain customers. Its goal is to analyze all relevant customer data and develop focused customer
retention programs. The premise is that if you could somehow predict in advance which
customers are at risk of leaving, you could develop focused retention programs to
reduce customer churn.
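The churn-prediction idea described above can be sketched with a minimal classifier. This is not the project's actual pipeline; the logistic-regression-from-scratch approach, the two usage features, and all data values below are invented for illustration.

```python
# Illustrative sketch (not the project's actual model): a minimal
# logistic-regression churn predictor trained on toy usage features.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Plain stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# invented features: [normalized monthly charge, normalized support calls]
# label: 1 = customer churned, 0 = customer stayed
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.2]]
y = [1, 1, 0, 0]
w, b = train(X, y)

def churn_prob(x):
    """Predicted probability that a customer with features x will churn."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

In practice a telecom dataset would have many more features (tenure, contract type, usage patterns) and a library model would replace the hand-rolled gradient descent; the sketch only shows the predict-then-target-retention loop the abstract describes.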
EksPy: a new Python framework for developing graphical user interface based PyQt5
This study introduces the EksPy Python framework, a novel framework designed for developing graphical user interface (GUI) applications in Python. The EksPy framework is built on PyQt5, which is a collection of Python bindings for the Qt libraries, and it provides a user-friendly and intuitive interface. A comparative analysis of the EksPy framework with existing frameworks such as Tkinter and PyQt highlights its notable features, including ease of use, rapid development, enhanced performance, effective database management, and the model-view-controller (MVC) concept. The experimental results illustrate that the EksPy framework requires less code and enhances code readability, thereby facilitating better understanding and efficient development. Additionally, the EksPy framework offers a modern and customizable appearance, surpassing Tkinter's capabilities. Furthermore, it incorporates a built-in object-relational mapping (ORM) feature to simplify database interactions and adheres to the MVC architectural pattern. In conclusion, the EksPy Python framework emerges as a powerful, user-friendly, and efficient framework for GUI application development in Python.
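The MVC separation the abstract emphasizes can be sketched framework-agnostically. EksPy's actual API is not documented here, so the sketch below uses plain Python with invented class names; in a real GUI toolkit the view would draw a widget instead of storing a string.

```python
# Framework-agnostic sketch of the model-view-controller pattern mentioned
# above. All names here are illustrative, not EksPy's or PyQt5's API.

class Model:
    """Holds application state and notifies observers when it changes."""
    def __init__(self):
        self._observers = []
    def subscribe(self, fn):
        self._observers.append(fn)
    def set_value(self, v):
        for fn in self._observers:
            fn(v)

class View:
    """Renders state; a GUI toolkit would update a widget instead."""
    def __init__(self):
        self.rendered = None
    def render(self, value):
        self.rendered = f"count = {value}"

class Controller:
    """Translates user actions (e.g. button clicks) into model updates."""
    def __init__(self, model):
        self.model = model
        self.count = 0
    def on_click(self):
        self.count += 1
        self.model.set_value(self.count)

model, view = Model(), View()
model.subscribe(view.render)     # view observes the model
ctrl = Controller(model)
ctrl.on_click()
ctrl.on_click()
```

The design point is that the view never mutates state directly and the model knows nothing about widgets, which is what makes the pattern testable without a running GUI.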
Internship Portfolio
My one-year internship program was with Mayo Clinic, Rochester. I was involved in software development as part of a work term, all of which will be outlined in this report. The report will cover some background information on the projects I was involved in, as well as details on how the projects were developed. The report also describes how academic courses and projects helped me in my overall internship experience so far.
At the beginning of the internship, I formulated several learning goals that I wanted to achieve: To understand the functioning and working conditions of the organization; To explore working in a professional environment; To explore the work environment for the possibility of a future career; To utilize my gained skills and knowledge; To find skills and knowledge I still need to work in a professional environment; To learn about the software development life cycle; To learn about development methodologies; To obtain fieldwork experience/collect data in an environment unknown to me; To obtain experience working in a multicultural and diverse environment; To enhance my interpersonal and technical skills; To network with professionals in the industry.
There are five major projects that I had a significant role in.
The first project, Space Tools, involved gaining a good understanding of a JavaScript framework called Angular. My tasks were to understand working with the Angular framework, understand working with Git, and develop wireframes from the viewpoint of building an application with that technology.
As this was my first team project at Mayo Clinic, particularly at Development Shared Services (DSS), I also gained considerable exposure to Agile methodology, the Scrum process in particular.
The second project was BAMS, a rewrite of an existing application in Windows Presentation Foundation (WPF) with a .NET backend. In this project my tasks were to understand WinForms and WPF and to develop pages using the WPF MVVM pattern.
The third project was DSA, where I acquired knowledge of working with Angular 4 and of front-end unit testing in Karma using the Mocha and Chai frameworks.
The fourth project is MML Notification and Delivery, which started with an analysis phase in which we were asked to analyze the data flow and system integrations that the current Mayo Access and Mayo Link (MML Internal Operations) applications depend upon. We are to provide new Notification and Delivery functionality for test results to Mayo Access users.
The current project that I'm working on is "MML Database Analysis". This project is in the analysis phase. We were given the task of analyzing the MML databases in order to write an API to replace front-end calls to the database.
I acquired many new technical skills throughout my work. I gained new knowledge in front-end development using various versions of the Angular framework and in unit testing in Karma using the Mocha and Chai frameworks. I also brushed up my HTML/HTML5, CSS/CSS3, JavaScript, Java, and C# skills while working on various projects. Then I was introduced to the area of research and analysis and how to approach it. Most importantly, the work included good fellowship, cooperative teamwork and accepting responsibilities.
Although I spent much time on the learning curve, I found that I was well trained in certain areas that helped me substantially in my projects. Many of the programming skills and much of the understanding of the Software Development Life Cycle that I used in my internship, such as programming style and design, were skills I had acquired during my studies in Computer Science.
This report also discusses the advantages of using the Angular framework over other JavaScript frameworks. The report concludes with my overall impressions of my work experience, as well as my opinion of the Industrial Internship Program in general.
Multimedia delivery in the future internet
The term "Networked Media" implies that all kinds of media, including text, images, 3D graphics, audio
and video, are produced, distributed, shared, managed and consumed on-line through various networks,
like the Internet, fiber, WiFi, WiMAX, GPRS, 3G and so on, in a convergent manner [1]. This white
paper is the contribution of the Media Delivery Platform (MDP) cluster and aims to cover the
challenges of Networked Media in the transition to the Future Internet.
The Internet has evolved and changed the way we work and live. End users of the Internet have been confronted
with a bewildering range of media, services and applications, and of technological innovations concerning
media formats, wireless networks, and terminal types and capabilities. And there is little evidence that the pace
of this innovation is slowing. Today, over one billion users access the Internet on a regular basis, more
than 100 million users have downloaded at least one (multi)media file and over 47 million of them do so
regularly, searching in more than 160 Exabytes of content. In the near future these numbers are expected
to rise exponentially. It is expected that Internet content will increase by at least a factor of 6, rising
to more than 990 Exabytes before 2012, fuelled mainly by the users themselves. Moreover, it is envisaged
that in the near- to mid-term future, the Internet will provide the means to share and distribute (new)
multimedia content and services with superior quality and striking flexibility, in a trusted and personalized
way, improving citizens' quality of life, working conditions, edutainment and safety.
In this evolving environment, new transport protocols, new multimedia encoding schemes, cross-layer
in-network adaptation, machine-to-machine communication (including RFIDs), rich 3D content as well as
community networks and the use of peer-to-peer (P2P) overlays are expected to generate new models of
interaction and cooperation, and to support enhanced perceived quality of experience (PQoE) and
innovative applications "on the move", like virtual collaboration environments, personalised services/
media, virtual sport groups, on-line gaming and edutainment. In this context, interaction with content
combined with interactive multimedia search capabilities across distributed repositories, opportunistic P2P
networks and the dynamic adaptation to the characteristics of diverse mobile terminals are expected to
contribute towards such a vision.
Based on work that has taken place in a number of EC co-funded projects in Framework Programme 6 (FP6)
and Framework Programme 7 (FP7), a group of experts and technology visionaries have voluntarily
contributed to this white paper, aiming to describe the status, the state of the art, the challenges and the way
ahead in the area of content-aware media delivery platforms.
THE RANGE AND ROLE OF THEORY IN INFORMATION SYSTEMS DESIGN RESEARCH: FROM CONCEPTS TO CONSTRUCTION
This paper reports results from a field study of cross-disciplinary design researchers in information systems, software engineering, human-computer interaction, and computer-supported cooperative work. The purpose of the study was to explore how these different disciplines conceptualize and conduct design-as-research. The focus in this paper is on how theories are used in a design research project to motivate and inform the particulars of designed artifacts and design methods. Our objective was to better understand how elements of a theory are translated into design action, and how theoretical propositions are translated and then realized in designed artifacts. The results reveal a broad diversity in the processes through which theories are translated into working artifacts. The paper contributes to our understanding of design research in information systems by providing empirical support for existing constructs and frameworks, identifying some new approaches to translating theoretical concepts into research designs, and suggesting ways in which action and artifact-oriented research can more effectively contribute to a cumulative and progressive science of design
Web and knowledge-based decision support system for measurement uncertainty evaluation
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
In metrology, measurement uncertainty is understood as a range in which the true value of the measurement is likely to fall. Recent years have seen rapid development in the evaluation of measurement uncertainty. The ISO Guide to the Expression of Uncertainty in Measurement (GUM 1995) is the primary guiding document for measurement uncertainty. More recently, Supplement 1 to the "Guide to the expression of uncertainty in measurement" – Propagation of distributions using a Monte Carlo method (GUM SP1) was published in November 2008. A number of software tools for measurement uncertainty have been developed and made available based on these two documents. The current software tools are mainly desktop applications utilising numeric computation with limited mathematical model handling capacity. A novel and generic web-based application, a web-based Knowledge-Based Decision Support System (KB-DSS), has been proposed and developed in this research for measurement uncertainty evaluation. A Model-View-Controller architecture pattern is used for the proposed system. Under this general architecture, a web-based KB-DSS is developed based on an integration of the Expert System and Decision Support System approaches. In the proposed uncertainty evaluation system, three knowledge bases as sub-systems are developed to implement the evaluation of measurement uncertainty. The first sub-system, the Measurement Modelling Knowledge Base (MMKB), assists the user in establishing the appropriate mathematical model for the measurand, a critical process for uncertainty evaluation.
The second sub-system, the GUM Framework Knowledge Base, carries out the uncertainty evaluation process based on the GUM Uncertainty Framework using symbolic computation, whilst the third sub-system, the GUM SP1 MCM Framework Knowledge Base, conducts the uncertainty calculation according to the GUM SP1 Framework numerically, based on the Monte Carlo Method. The design and implementation of the proposed system and sub-systems are discussed in the thesis, supported by elaboration of the implementation steps and examples. Discussions and justifications of the technologies and approaches used for the sub-systems and their components are also presented. These include Drools, the Oracle database, Java, JSP, Java Transfer Objects, AJAX and Matlab. The proposed web-based KB-DSS has been evaluated through case studies, and the performance of the system has been validated by the example results. As an established methodology and practical tool, the research will make valuable contributions to the field of measurement uncertainty evaluation.
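The GUM SP1 approach mentioned above, propagation of distributions by Monte Carlo, can be sketched in a few lines. The measurement model Y = X1 · X2 and the two input distributions below are invented for illustration; a real evaluation would use the measurand's actual model and the number of trials prescribed by the guide.

```python
# Minimal sketch of the GUM Supplement 1 idea: sample the input quantities,
# propagate each draw through the measurement model, and summarize the
# resulting output distribution. Model and inputs are illustrative only.
import random
import statistics

random.seed(0)  # reproducible demonstration

def measurement_model(x1, x2):
    # hypothetical measurand: Y = X1 * X2
    return x1 * x2

N = 100_000
samples = []
for _ in range(N):
    x1 = random.gauss(10.0, 0.1)    # input 1: best estimate 10.0, u = 0.1
    x2 = random.gauss(2.0, 0.05)    # input 2: best estimate 2.0,  u = 0.05
    samples.append(measurement_model(x1, x2))

y_mean = statistics.fmean(samples)  # estimate of the measurand
y_std = statistics.stdev(samples)   # standard uncertainty of Y
samples.sort()
# 95 % coverage interval from the empirical 2.5 % and 97.5 % quantiles
coverage_95 = (samples[int(0.025 * N)], samples[int(0.975 * N)])
```

For this model the first-order GUM propagation gives u(Y) = sqrt((2.0·0.1)² + (10.0·0.05)²) ≈ 0.54, so the Monte Carlo result can be cross-checked against the analytic value.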
Development of a decision support system through modelling of critical infrastructure interdependencies : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Emergency Management at Massey University, Wellington, New Zealand
Critical Infrastructure (CI) networks provide functional services to support the wellbeing of a community. Although it is possible to obtain detailed information about individual CI and their components, the interdependencies between different CI networks are often implicit, hidden or not well understood by experts. In the event of a hazard, failures of one or more CI networks and their components can disrupt the functionality and consequently affect the supply of services. Understanding the extent of disruption and quantification of the resulting consequences is important to assist various stakeholders' decision-making processes to complete their tasks successfully. A comprehensive review of the literature shows that a Decision Support System (DSS) integrated with appropriate modelling and simulation techniques is a useful tool for CI network providers and relevant emergency management personnel to understand the network recovery process of a region following a hazard event. However, the majority of existing DSSs focus on risk assessment or stakeholders' involvement without addressing the overall CI interdependency modelling process. Furthermore, these DSSs are primarily developed for data visualization or CI representation but not specifically to help decision-makers by providing them with a variety of customizable decision options that are practically viable. To address these limitations, a Knowledge-centred Decision Support System (KCDSS) has been developed in this study with the following aims: 1) To develop a computer-based DSS using efficient CI network recovery modelling algorithms, 2) To create a knowledge-base of various recovery options relevant to specific CI damage scenarios so that the decision-makers can test and verify several "what-if" scenarios using a variety of control variables, and 3) To bridge the gap between hazard and socio-economic modelling tools through a multidisciplinary and integrated natural hazard impact assessment.
Driven by the design science research strategy, this study proposes an integrated impact assessment framework using an iterative design process as its first research outcome. This framework has been developed as a conceptual artefact using a topology network-based approach by adopting the shortest path tree method. The second research outcome, a computer-based KCDSS, provides a convenient and efficient platform for enhanced decision making through a knowledge-base consisting of real-life recovery strategies. These strategies have been identified from the respective decision-makers of the CI network providers through the Critical Decision Method (CDM), a Cognitive Task Analysis (CTA) method for requirement elicitation. The capabilities of the KCDSS are demonstrated through electricity, potable water, and road networks in the Wellington region of Aotearoa New Zealand. The network performance has been analysed independently and with interdependencies to generate outage of services spatially and temporally.
The outcomes of this study provide a range of theoretical and practical contributions. Firstly, the topology network-based analysis of CI interdependencies will allow a group of users to build different models, make and test assumptions, and try out different damage scenarios for CI network components. Secondly, the step-by-step process of knowledge elicitation, knowledge representation and knowledge modelling of CI network recovery tasks will provide a guideline for improved interactions between researchers and decision-makers in this field. Thirdly, the KCDSS can be used to test the variations in outage and restoration time estimates of CI networks due to the potential uncertainty related to the damage modelling of CI network components. The outcomes of this study also have significant practical implications by utilizing the KCDSS as an interface to integrate and add additional capabilities to the hazard and socio-economic modelling tools. Finally, the variety of "what-if" scenarios embedded in the KCDSS would allow the CI network providers to identify vulnerabilities in their networks and to examine various post-disaster recovery options for CI reinstatement projects.
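The shortest-path-tree method the abstract mentions for topology-network analysis can be sketched with Dijkstra's algorithm. The toy network below (a substation feeding pump stations through road links) and its edge costs are invented for illustration, not taken from the Wellington case study.

```python
# Sketch of the shortest-path-tree idea behind the topology network-based
# analysis: minimal "cost to reach" each CI node from a supply source.
# Nodes, edges, and costs are invented for demonstration.
import heapq

def dijkstra(graph, source):
    """Return the minimal cost from source to every reachable node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# toy interdependent network: electricity substation -> water pump stations,
# edge weights as repair/access cost in hours
grid = {
    "substation": [("pump_A", 2.0), ("junction", 1.0)],
    "junction":   [("pump_A", 0.5), ("pump_B", 3.0)],
    "pump_A":     [("pump_B", 1.0)],
}
restore_times = dijkstra(grid, "substation")
```

In a DSS of the kind described, re-running this computation with edges removed (a damage scenario) shows which downstream services lose supply and how restoration ordering changes the outage duration.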
Real-Time GPS-Alternative Navigation Using Commodity Hardware
Modern navigation systems can use the Global Positioning System (GPS) to accurately determine position with precision in some cases bordering on millimeters. Unfortunately, GPS technology is susceptible to jamming, interception, and unavailability indoors or underground. There are several navigation techniques that can be used to navigate during times of GPS unavailability, but there are very few that result in GPS-level precision. One method of achieving high precision navigation without GPS is to fuse data obtained from multiple sensors. This thesis explores the fusion of imaging and inertial sensors and implements them in a real-time system that mimics human navigation. In addition, programmable graphics processing unit technology is leveraged to perform stream-based image processing using a computer's video card. The resulting system can perform complex mathematical computations in a fraction of the time those same operations would take on a CPU-based platform. The resulting system is an adaptable, portable, inexpensive and self-contained software and hardware platform, which paves the way for advances in autonomous navigation, mobile cartography, and artificial intelligence.
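The sensor-fusion principle behind combining imaging and inertial measurements can be illustrated with the simplest case: inverse-variance weighting of two independent estimates of the same quantity. This is a toy sketch, not the thesis's actual image/inertial filter; the sensor values and variances are invented.

```python
# Toy illustration of sensor fusion: two independent position estimates
# (e.g. one from imaging, one from inertial integration) are combined by
# inverse-variance weighting. The fused estimate has lower variance than
# either sensor alone. All numbers below are invented for demonstration.

def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two scalar estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # always below min(var_a, var_b)
    return fused, fused_var

# imaging sensor: position 10.2 m, variance 0.04 m^2
# inertial sensor: position  9.8 m, variance 0.16 m^2
pos, var = fuse(10.2, 0.04, 9.8, 0.16)
```

A Kalman filter generalizes this same weighting to vector states evolving over time, which is the usual machinery for real-time image/inertial fusion.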