
    A model for predicting court decisions on child custody

    Awarding joint or sole custody is of crucial importance for the lives of both the child and the parents. This paper first models the factors explaining a court's decision to grant child custody and then tests the predictive capacity of the proposed model. We conducted an empirical study using data from 1,884 court rulings, identifying and labeling factual elements, legal principles, and other relevant information. We developed a neural network model that includes eight factual findings, such as the relationship between the parents, their economic resources, the child's opinion, and the psychological report on the type of custody. We performed a temporal validation, predicting cases later in time than those in the training sample. Our system predicted the court's decisions with an accuracy exceeding 85%. Using the decision tree technique, we obtained easy-to-apply decision rules. The paper contributes by identifying the factors that best predict joint custody, which is useful for parents, lawyers, and prosecutors. Parents would do well to know these findings before venturing into a courtroom.
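    Purely as an illustration of the two modelling steps described (a neural network validated on later cases, and a decision tree for easy-to-apply rules), the following Python/scikit-learn sketch uses hypothetical file, column, and cut-off names; the paper's actual features, encoding, and hyperparameters are not reproduced here.

    import pandas as pd
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import accuracy_score

    rulings = pd.read_csv("rulings.csv")              # hypothetical file: one row per ruling
    features = ["parent_relationship", "economic_resources", "child_opinion",
                "psych_report", "parent_agreement", "child_age",
                "previous_care", "work_schedule"]     # illustrative stand-ins for the 8 findings
    X, y = rulings[features], rulings["joint_custody"]

    # Temporal validation: fit on earlier rulings, predict the later ones.
    train = rulings["year"] < 2018                    # hypothetical cut-off year
    nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    nn.fit(X[train], y[train])
    print("hold-out accuracy:", accuracy_score(y[~train], nn.predict(X[~train])))

    # Shallow decision tree to obtain easy-to-apply decision rules.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[train], y[train])
    print(export_text(tree, feature_names=features))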

    A multivariate study of Internet use and the Digital Divide

    Method: The article is based on survey data (N = 2,304) collected in Spain, which are analyzed using multiple regression, principal component analysis, and cluster analysis. Results: Two dimensions are identified: the first is the breadth of Internet use and the second is the nature of this use, differentiating between professional use and recreational and social use. The article verifies that the factors explaining the digital divide are age, education level, and income. Conclusions: The article identifies digitally excluded segments; efforts and actions for digital training to eradicate the digital divide should be directed at these groups. The most serious problem is found among homeworkers, who are mainly women. NEETs (not in education, employment, or training) are frequent Internet users, but they use the Internet only for entertainment and are, to a certain extent, digitally excluded.
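    As a rough illustration of the analysis pipeline described above (principal components, regression on age, education, and income, and cluster analysis), here is a Python sketch with hypothetical survey columns; it is not the authors' actual code or variable set.

    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import StandardScaler

    survey = pd.read_csv("survey.csv")                       # hypothetical file with usage items
    usage_items = [c for c in survey.columns if c.startswith("use_")]
    Z = StandardScaler().fit_transform(survey[usage_items])

    # Two principal components: breadth of Internet use and type of use.
    pca = PCA(n_components=2)
    scores = pca.fit_transform(Z)

    # Regression of overall use on the socio-demographic factors behind the divide.
    reg = LinearRegression().fit(survey[["age", "education", "income"]], scores[:, 0])
    print(dict(zip(["age", "education", "income"], reg.coef_)))

    # Cluster analysis to identify digitally excluded segments.
    survey["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
    print(survey.groupby("segment")[["age", "education", "income"]].mean())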

    A social and environmental approach to microfinance credit scoring

    Microfinance institutions provide loans to low-income individuals. Their credit scoring systems, if they exist, are strictly financial. Although many institutions consider the social and environmental impact of their loans, they do not incorporate formal systems to estimate these impacts. This paper proposes that their creditworthiness evaluations should be coherent with their social mission and, accordingly, should estimate the social and environmental impact of microcredit. Thus, a decision support system to facilitate microcredit granting is proposed using a multicriteria evaluation. The assessment of social impact is performed by calculating the Social Net Present Value. The system captures credit officers' experience and addresses incomplete and intangible information. The model has been tested in a microfinance institution. The paper shows how a small institution can include social and environmental issues in its decision-making systems for evaluating credit applications. A gap in preferences was found between members of the board, who are socially driven, and managers and credit officers, who are financially driven. This mission drift was corrected. The approach followed contributed to creating a culture of social and environmental assessment within the institution, especially among credit officers, thereby translating microfinance institutions' social mission into numbers.
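    The following is a minimal sketch, with made-up criteria and weights, of how a Social Net Present Value term could be combined with financial and environmental criteria in a weighted multicriteria score; the paper's actual criteria, weights, and elicitation of credit officers' preferences are not shown.

    def npv(cash_flows, rate):
        """Discounted sum of period cash flows; the flow at t = 0 is undiscounted."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def credit_score(financial, social_flows, environmental, weights, social_rate=0.05):
        """Weighted multicriteria score mixing financial, social and environmental views."""
        social = npv(social_flows, social_rate)        # Social Net Present Value of the loan
        # In practice the SNPV would be normalised to the same scale as the other criteria.
        parts = {"financial": financial, "social": social, "environmental": environmental}
        return sum(weights[k] * parts[k] for k in parts)

    # Example call with made-up numbers (criteria are assumed to be pre-scaled).
    score = credit_score(financial=0.7,
                         social_flows=[-1000, 400, 450, 500],
                         environmental=0.6,
                         weights={"financial": 0.5, "social": 0.3, "environmental": 0.2})
    print(round(score, 3))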

    Determinants of default in P2P lending

    This paper studies P2P lending and the factors explaining loan default. This is an important issue because in P2P lending individual investors bear the credit risk instead of financial institutions, which are experts in dealing with this risk. P2P lenders suffer a severe problem of information asymmetry, because they are at a disadvantage relative to the borrower. For this reason, P2P lending sites provide potential lenders with information about borrowers and their loan purpose, and they also assign a grade to each loan. The empirical study is based on loan data collected from Lending Club (N = 24,449) from 2008 to 2014, which are first analyzed using univariate means tests and survival analysis. The factors explaining default are loan purpose, annual income, current housing situation, credit history, and indebtedness. Second, a logistic regression model is developed to predict defaults. The grade assigned by the P2P lending site is the most predictive factor of default, but the accuracy of the model is improved by adding other information, especially the borrower's debt level.
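    To illustrate the second step, and the claim that borrower information improves on the platform grade, here is a Python sketch comparing a grade-only logistic model with an enriched one; the file, column names, feature set, and split are hypothetical, not the paper's exact specification.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    loans = pd.read_csv("lending_club.csv")            # hypothetical extract of Lending Club data
    loans = pd.get_dummies(loans, columns=["grade", "purpose", "home_ownership"])

    y = loans["default"]
    grade_cols = [c for c in loans.columns if c.startswith("grade_")]
    extra_cols = grade_cols \
        + [c for c in loans.columns if c.startswith(("purpose_", "home_"))] \
        + ["annual_income", "debt_to_income", "credit_history_years"]

    X_tr, X_te, y_tr, y_te = train_test_split(loans, y, test_size=0.3,
                                              random_state=0, stratify=y)
    for name, cols in [("grade only", grade_cols), ("grade + borrower info", extra_cols)]:
        model = LogisticRegression(max_iter=1000).fit(X_tr[cols], y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te[cols])[:, 1])
        print(f"{name}: AUC = {auc:.3f}")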

    Integrating clinicians, knowledge and data: expert-based cooperative analysis in healthcare decision support

    Background: Decision support in health systems is a highly difficult task, due to the inherent complexity of the processes and structures involved. Method: This paper introduces a new hybrid methodology, Expert-based Cooperative Analysis (EbCA), which incorporates explicit prior expert knowledge in data analysis methods and elicits implicit or tacit expert knowledge (IK) to improve decision support in healthcare systems. EbCA has been applied to two different case studies, showing its usability and versatility: 1) benchmarking of small mental health areas based on technical efficiency estimated by EbCA-Data Envelopment Analysis (EbCA-DEA), and 2) case-mix of schizophrenia based on functional dependency using Clustering Based on Rules (ClBR). In both cases, comparisons were made with classical procedures using qualitative explicit prior knowledge. Bayesian predictive validity measures were used for comparison with expert panel results. Overall agreement was tested by the intraclass correlation coefficient in case 1 and by kappa in both cases. Results: EbCA is a new methodology composed of six steps: 1) data collection and preparation; 2) acquisition of prior expert knowledge (PEK) and design of the prior knowledge base (PKB); 3) PKB-guided analysis; 4) support-interpretation tools to evaluate results and detect inconsistencies (here implicit knowledge, IK, may be elicited); 5) incorporation of the elicited IK into the PKB, repeating until a satisfactory solution is reached; and 6) post-processing of results for decision support. EbCA proved useful for incorporating PEK into two different analysis methods (DEA and clustering), applied respectively to assess the technical efficiency of small mental health areas and the case-mix of schizophrenia based on functional dependency. Differences from the results obtained with classical approaches were mainly related to the IK that could be elicited by using EbCA and had major implications for decision making in both cases. Discussion: This paper presents EbCA and shows the value of complementing classical data analysis with PEK as a means to extract relevant knowledge in complex health domains. One of the major benefits of EbCA is the iterative elicitation of IK. Both explicit and tacit or implicit expert knowledge are critical to guide the scientific analysis of the very complex decisional problems found in health system research.
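    As a small illustration of the agreement checks mentioned above, the sketch below computes Cohen's kappa between a PKB-guided grouping and an expert panel's grouping; the labels are made up, and a low kappa would simply motivate another EbCA iteration (steps 4 and 5).

    from sklearn.metrics import cohen_kappa_score

    model_groups  = [1, 1, 2, 3, 2, 1, 3, 3, 2, 1]   # groups from the PKB-guided analysis (toy data)
    expert_groups = [1, 1, 2, 3, 3, 1, 3, 3, 2, 1]   # groups assigned by the expert panel (toy data)

    kappa = cohen_kappa_score(model_groups, expert_groups)
    print(f"kappa = {kappa:.2f}")   # low agreement would trigger another EbCA iteration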

    Addressing information asymmetries in online peer-to-peer lending

    Digital technologies are transforming how small businesses access finance and from whom. This chapter explores online peer-to-peer (P2P) lending, a form of crowdfunding that connects borrowers and lenders. Information asymmetry is a key issue in online peer-to-peer lending marketplaces that can result in moral hazard or adverse selection, and ultimately impact the viability and success of individual platforms. Both online P2P lending platforms and lenders seek to minimise the impact of information asymmetries through a variety of mechanisms. This chapter discusses the structure of online P2P lending platforms and reviews how the disclosure of hard and soft information, and herding, can reduce information asymmetries. The chapter concludes with a discussion of further avenues for research.

    Differential clinical characteristics and prognosis of intraventricular conduction defects in patients with chronic heart failure

    Intraventricular conduction defects (IVCDs) can impair the prognosis of heart failure (HF), but their specific impact is not well established. This study aimed to analyse the clinical profile and outcomes of HF patients with left bundle branch block (LBBB), right bundle branch block (RBBB), left anterior fascicular block (LAFB), and no IVCDs. Clinical variables and outcomes after a median follow-up of 21 months were analysed in 1762 patients with chronic HF and LBBB (n = 532), RBBB (n = 134), LAFB (n = 154), and no IVCDs (n = 942). LBBB was associated with more marked left ventricular dilation, depressed left ventricular ejection fraction (LVEF), and mitral valve regurgitation. Patients with RBBB presented overt signs of congestive HF and depressed right ventricular motion. The LAFB group presented intermediate clinical characteristics, and patients with no IVCDs were more often women, with less enlarged left ventricles and less depressed LVEF. Death occurred in 332 patients (interannual mortality = 10.8%): cardiovascular in 257, extravascular in 61, and of unknown origin in 14. Cardiac death occurred in 230 (pump failure in 171 and sudden death in 59). An adjusted Cox model showed a higher risk of cardiac death and pump-failure death in the LBBB and RBBB groups than in the LAFB and no-IVCD groups. LBBB and RBBB are associated with different clinical profiles, and both are independent predictors of an increased risk of cardiac death in patients with HF. A more favourable prognosis was observed in patients with LAFB and in those free of IVCDs. Further research in HF patients with RBBB is warranted.
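    As an illustration of the kind of adjusted Cox model the study reports, here is a sketch using the lifelines package; the file, column names, covariates, and coding are hypothetical, not the study's actual model.

    import pandas as pd
    from lifelines import CoxPHFitter

    hf = pd.read_csv("hf_cohort.csv")                    # hypothetical file: one row per patient
    for grp in ["LBBB", "RBBB", "LAFB"]:                 # "no IVCD" is the reference group
        hf[f"ivcd_{grp}"] = (hf["ivcd"] == grp).astype(int)

    cols = ["followup_months", "cardiac_death",
            "ivcd_LBBB", "ivcd_RBBB", "ivcd_LAFB", "age", "lvef", "nyha_class"]
    cph = CoxPHFitter()
    cph.fit(hf[cols], duration_col="followup_months", event_col="cardiac_death")
    cph.print_summary()       # adjusted hazard ratios for LBBB/RBBB/LAFB vs no IVCD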

    Selecting DEA specifications and ranking units via PCA

    DEA model selection is problematic. The estimated efficiency of any DMU depends on which inputs and outputs are included in the model, and also on the total number of inputs and outputs. It is clearly important to select parsimonious specifications and to avoid, as far as possible, models that assign high efficiency ratings to DMUs that operate in unusual ways (mavericks). A new method for model selection is proposed in this paper. Efficiencies are calculated for all possible DEA model specifications, and the results are analysed using principal component analysis. It is shown that model equivalence or dissimilarity can be easily assessed using this approach, and the reasons why particular DMUs achieve a certain level of efficiency with a given model specification become clear. The methodology has the additional advantage of producing DMU rankings.
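    A minimal sketch of this idea, on illustrative random data rather than the paper's dataset or exact DEA formulation: input-oriented CCR efficiencies are computed for every non-empty input/output subset with a linear program, and the DMU-by-specification efficiency matrix is then summarised with principal component analysis.

    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog
    from sklearn.decomposition import PCA

    def ccr_efficiency(X, Y, k):
        """Input-oriented CCR efficiency of DMU k (X: n x m inputs, Y: n x s outputs)."""
        n = X.shape[0]
        c = np.r_[1.0, np.zeros(n)]                      # minimise theta
        A_in = np.c_[-X[k], X.T]                         # sum_j lam_j * x_ij <= theta * x_ik
        A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]        # sum_j lam_j * y_rj >= y_rk
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[k]],
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        return res.fun

    def subsets(n):
        return [list(c) for r in range(1, n + 1) for c in combinations(range(n), r)]

    rng = np.random.default_rng(0)
    inputs = rng.uniform(1, 10, size=(20, 3))            # 20 DMUs, 3 candidate inputs (toy data)
    outputs = rng.uniform(1, 10, size=(20, 2))           # 2 candidate outputs (toy data)

    # One efficiency vector per model specification (each non-empty input/output subset).
    eff = []
    for ic in subsets(inputs.shape[1]):
        for oc in subsets(outputs.shape[1]):
            X, Y = inputs[:, ic], outputs[:, oc]
            eff.append([ccr_efficiency(X, Y, k) for k in range(X.shape[0])])

    E = np.array(eff).T                                  # DMUs x specifications
    pca = PCA(n_components=2).fit(E)
    print("variance explained:", pca.explained_variance_ratio_)
    # Rank DMUs along the first component (sign and interpretation would need checking).
    print("DMU ranking:", np.argsort(-pca.transform(E)[:, 0]))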