10 research outputs found

    A phase II two stage clinical trial design to handle latent heterogeneity for a binary response.

    Phase II clinical trials are generally single-arm trials in which a homogeneity assumption is placed on the response. In practice, this assumption may be violated, resulting in a heterogeneous response. This heterogeneous or overdispersed response can be decomposed into distinct subgroups based on the etiology of the heterogeneity. A general classification model is developed to quantify the heterogeneity. The most common Phase II trial design used in practice is the Simon 2-stage design, which relies on the assumption of response homogeneity. This design is shown to be flawed under heterogeneity, with errors exceeding the target trial errors. To correct for the error inflation, a modification is made to the Simon design if heterogeneity is detected after the first stage of the trial. The trial sample size is increased using an empirical estimate of the variance inflation factor, and the trial is then completed with design parameters constructed through the posterior predictive Beta-binomial distribution given the first-stage results. The new design, denoted the 2-stage Heterogeneity Adaptive (2HA) design, is applied to a two-subgroup problem under latent heterogeneity. Latent heterogeneity represents the most general form of heterogeneity, in which no information is known prior to trial conduct. The results, through simulation, show that the target errors can be maintained with this modification to the Simon design under a wide range of heterogeneity.
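    A minimal, hedged sketch of the mechanics described above: the snippet computes a posterior predictive Beta-binomial tail probability after a variance-inflation-adjusted second stage. The flat prior, stage sizes, first-stage results and VIF estimate are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: posterior predictive Beta-binomial for a two-stage binary-response
# trial, plus a simple variance-inflation-factor (VIF) sample-size adjustment.
import math

def beta_binomial_pmf(k, n, a, b):
    """P(K = k) for K ~ BetaBinomial(n, a, b), via log-gamma for numerical stability."""
    log_beta = lambda x, y: math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.exp(math.log(math.comb(n, k))
                    + log_beta(k + a, n - k + b) - log_beta(a, b))

# Stage 1: n1 patients, x1 responses; Beta(a0, b0) prior on the response rate (assumed flat).
a0, b0 = 1.0, 1.0
n1, x1 = 15, 4                       # illustrative first-stage results
a_post, b_post = a0 + x1, b0 + n1 - x1

# If heterogeneity is detected, inflate the planned second-stage size by an empirical VIF.
n2_planned = 18
vif_hat = 1.4                        # assumed empirical variance inflation factor
n2 = math.ceil(vif_hat * n2_planned)

# Posterior predictive probability of exceeding an illustrative second-stage cutoff r2.
r2 = 7
p_exceed = sum(beta_binomial_pmf(k, n2, a_post, b_post) for k in range(r2 + 1, n2 + 1))
print(f"Adjusted stage-2 size: {n2}, P(stage-2 responses > {r2}) = {p_exceed:.3f}")
```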

    Isolated, full-thickness proximal rectus femoris injury in competitive athletes: A systematic review of injury characteristics and return to play

    BACKGROUND: Characteristics regarding mechanism of injury, management, and return-to-play (RTP) rate and timing are important when treating and counseling athletes with rectus femoris tears. PURPOSE: To systematically review the literature to better understand the prevalence, sporting activity, injury mechanisms, and treatment of patients with rectus femoris injury and to provide prognostic information regarding the rate and timing of RTP. STUDY DESIGN: Systematic review; Level of evidence, 4. METHODS: Following the 2020 PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, we queried PubMed/MEDLINE, Cochrane, OVID, EMBASE, and Google Scholar in March 2022 for studies reporting on athletes who sustained isolated full-thickness tears or bony avulsion injuries of the proximal rectus femoris during sporting activity. Excluded were studies without evidence of full-thickness tearing or avulsion, with athletes sustaining concomitant injuries, or with injuries occurring from nonsporting activities. The percentage of athletes sustaining injuries was calculated by sport, injury mechanism, and management (nonoperative versus operative). RESULTS: Of 132 studies initially identified, 18 were included, comprising 132 athletes (mean age, 24.0 ± 5.4 years; range, 12-43 years). The most common sporting activities were soccer (70.5%) and rugby (15.2%). The most commonly reported mechanisms of injury were kicking (47.6%) and excessive knee flexion/forced hip extension (42.9%). Avulsion injuries were reported in 86% (n = 114) of athletes. Nonoperative management was reported in 19.7% of athletes, with operative management performed in 80.3%. The mean follow-up time was 21.4 ± 11.4 months (range, 1.5-48 months). The RTP rate was 93.3% (n = 14) in nonoperatively treated and 100% (n = 106) in operatively treated athletes, and the mean RTP time was 11.7 weeks (range, 5.5-15.2 weeks) in nonoperatively treated and 22.1 weeks (range, 14.0-37.6 weeks) in operatively treated athletes. Complications were reported in 7.7% (2/26) of nonoperatively treated and 18% (19/106) of operatively treated athletes. CONCLUSION: Full-thickness proximal rectus femoris injuries occurred most frequently in athletes participating in soccer and rugby, secondary to the explosive, eccentric contractions involved in kicking and sprinting. Operative management was performed in the majority of cases. Athletes who underwent operative repair had a 100% RTP rate versus 93.3% in athletes treated nonoperatively.

    Combining loan requests and investment offers

    Integrated master's thesis. Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto. 200

    SME oriented information security level measurement indicators

    Information and cyber security activities have become a major part of companies' processes because of the increasing number of devices and systems that are online and connected to the internet. However, information systems should be under cyclic monitoring, and implementing such monitoring can be a demanding task for a small or medium-sized company, for example because of a lack of available resources. This study aims to develop a security evaluation method that can be easily integrated into small or medium-sized companies' processes because of its straightforward and simple approach. Additionally, this study aims to develop a script that can be used to test the new security evaluation method. The study was conducted by first gathering the requirements for the evaluation criteria, which were then used to design the new security evaluation method. The inputs for the evaluation process were based on vulnerability severity and count. These input attributes were also used to develop proposals for the calculation process. As an output, the security evaluation method provided a single integer that described the target system's information security level. At the end of the study, the created security evaluation method was tested against three test cases, which were used to evaluate the effect of the different proposals and audit results on the output. A script was also developed for testing purposes. The results and analyses indicated that the developed security evaluation method was easier to integrate into small businesses' processes because of its simplicity and straightforward approach. In addition, the issue frequency-based proposal, which considered business criticality and a weighted average, proved to be the most suitable approach for evaluating the security state. However, the results also indicated that the weights used in the calculations should be raised in order to achieve a more descriptive picture of the state. The further development suggestions proposed that the created security evaluation method could be developed towards a more system-based approach instead of the case-based approach. In addition, the evaluation method could be integrated as part of a service that calculates and evaluates the state of information security in the target system based on the input.
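    A minimal sketch of an issue-frequency, weighted-average scoring scheme in the spirit of the method described above; the severity weights, criticality scale and 0-100 scoring are assumptions for illustration, not the thesis' actual calculation.

```python
# Hedged sketch: map audit findings per system to a score, then take a
# business-criticality-weighted average across systems to get one integer.
SEVERITY_WEIGHTS = {"low": 1, "medium": 3, "high": 7, "critical": 10}  # assumed weights

def system_score(findings):
    """Map one system's audit findings (list of severity labels) to a 0-100 score."""
    penalty = sum(SEVERITY_WEIGHTS[s] for s in findings)
    return max(0, 100 - penalty)

def environment_score(systems):
    """Weighted average over systems, weighted by business criticality (1-5, assumed)."""
    total_weight = sum(crit for _, crit, _ in systems)
    weighted = sum(crit * system_score(findings) for _, crit, findings in systems)
    return round(weighted / total_weight)

# Example input: (system name, business criticality, audit findings).
systems = [
    ("webshop", 5, ["high", "medium", "medium"]),
    ("intranet", 2, ["low"]),
]
print(environment_score(systems))  # single integer describing the security level
```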

    Identifying exceptional data points in materials science using machine learning


    Intelligent flexibility management for prosumers

    Emission of greenhouse gases and their effects on climate change have become a matter of serious concern all over the world. In addition, electricity demand is expected to increase in the upcoming years. This growth entails the construction of new power plants, resulting in additional costs on the price of electricity. For these reasons, a new way to manage and generate electricity is needed. Recent research has provided the tools for modernizing the traditional electricity grid into a smart one, whose main objective is to coordinate an ever-growing number of intelligent devices, each with their own objectives and value perspectives, into a resilient, secure, and efficient system. This is where the flexibility concept plays an important role in the upcoming energy transition, understanding flexibility as the ability to change certain previously defined parameters in order to fit new requirements. This Master's thesis focuses on the prosumer flexibility concept, quantifying the prosumer's flexibility potential. This flexibility is used to minimize the total expected costs of each prosumer individually, thus reducing their electricity bills. The methodology developed consists of a Home Energy Management System (HEMS) that automatically manages that flexibility in order to benefit the end user by minimizing their electricity bill, while also taking comfort into account. The results show that it is possible to reduce the electricity bill by optimally managing the flexibility from loads, batteries, photovoltaic generation, and electric vehicle charging points.
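    A minimal sketch of the flexibility idea behind a HEMS, assuming a day-ahead price profile and a single shiftable load; the prices, load size and allowed window are illustrative, and the thesis' optimization additionally covers batteries, photovoltaic generation and EV charging.

```python
# Hedged sketch: shift a flexible load (1 kWh per selected hour) into the cheapest
# hours of its allowed window, reducing the electricity bill.
def schedule_flexible_load(prices, energy_kwh, allowed_hours):
    """Greedily place energy_kwh into the cheapest allowed hourly slots."""
    slots = sorted(allowed_hours, key=lambda h: prices[h])[:int(energy_kwh)]
    cost = sum(prices[h] for h in slots)
    return sorted(slots), cost

prices = [0.30, 0.28, 0.25, 0.22, 0.20, 0.21, 0.35, 0.40,   # EUR/kWh, hours 0-7 (assumed)
          0.38, 0.32, 0.30, 0.29, 0.27, 0.26, 0.28, 0.31,   # hours 8-15
          0.36, 0.42, 0.45, 0.41, 0.37, 0.33, 0.31, 0.29]   # hours 16-23

hours, cost = schedule_flexible_load(prices, energy_kwh=4, allowed_hours=range(0, 8))
print(hours, round(cost, 2))   # the load ends up in the cheapest night hours
```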

    Signature-based videos’ visual similarity detection and measurement

    The quantity of digital videos is huge, due to technological advances in video capture, storage and compression. However, the usefulness of these enormous volumes is limited by the effectiveness of content-based video retrieval (CBVR) systems, which still require time-consuming annotating/tagging to feed text-based search. Visual similarity is the core of these CBVR systems, where videos are matched based on their respective visual features and their evolution across video frames. It also acts as an essential foundational layer for inferring semantic similarity at a later stage, in collaboration with metadata. Furthermore, handling such amounts of video data, especially in the compressed domain, poses certain challenges for CBVR systems: speed, scalability and genericness. The situation is even more challenging with the availability of non-pixelated features due to compression, e.g. DC/AC coefficients and motion vectors, which require sophisticated processing. Thus, careful feature selection is important to realize visual-similarity-based matching within the boundaries of the aforementioned challenges. Matching speed is crucial, because most of the current research is biased towards accuracy and leaves speed lagging behind, which in many cases affects practical use. Scalability is the key to benefiting from the enormous amounts of available videos. Genericness is essential to developing systems that are applicable to both compressed and uncompressed videos. This thesis presents a signature-based framework for efficient visual-similarity-based video matching. The proposed framework represents a vital component for search and retrieval systems, where it could be used in three different ways: (1) Directly in CBVR systems, where a user submits a query video and the system retrieves a ranked list of visually similar ones. (2) In text-based video retrieval systems, e.g. YouTube, where a user submits a textual description and the system retrieves a ranked list of relevant videos. Retrieval in this case works by finding videos that were manually assigned similar textual descriptions (annotations). For this scenario, the framework could be used to enhance the annotation process by suggesting an annotation set for newly uploaded videos, derived from other visually similar videos retrieved by the proposed framework. In this way, the framework could make annotations more relevant to video contents (compared to the manual way), which improves overall CBVR system performance as well. (3) The top-N matched list obtained by the framework could be used as an input to higher layers, e.g. semantic analysis, where it is easier to perform complex processing on this limited set of videos. The proposed framework contributes to and addresses the aforementioned problems, i.e. speed, scalability and genericness, by encoding a given video shot into a single compact fixed-length signature. This signature is able to robustly encode the shot contents for later speedy matching and retrieval tasks. This is in contrast with the current research trend of using exhaustive, complex features/descriptors, e.g. dense trajectories. Moreover, towards a higher matching speed, the framework operates over a sequence of tiny images (DC-images) rather than full-size frames. This limits the need to fully decompress compressed videos, as the DC-images are extracted directly from the compressed stream.
The DC-image is highly useful for complex processing, due to its small size compared to the full-size frame. In addition, it can be generated from uncompressed videos as well, and the proposed framework remains applicable in the same manner (genericness aspect). Furthermore, for robust capturing of visual similarity, scene and motion information are extracted independently, to better address their different characteristics. Scene information is captured using a statistical representation of the profiles of the scene's key colours, while motion information is captured using a graph-based structure. Both kinds of information are then fused to generate an overall video signature. The signature's compact fixed-length form contributes to scalability, because compact fixed-length signatures are highly indexable entities, which facilitates retrieval over large-scale video data. The proposed framework is adaptive and provides two different fixed-length video signatures. Both work in a speedy and accurate manner, but with different degrees of matching speed and retrieval accuracy. Such granularity of the signatures is useful to accommodate different applications' trade-offs between speed and accuracy. The proposed framework was extensively evaluated using black-box tests for the overall fused signatures and white-box tests for its individual components. The evaluation was done on multiple challenging large-scale datasets against a diverse set of state-of-the-art baselines. The results, supported by the quantitative evaluation, demonstrated the promise of the proposed framework to support real-time applications.
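    A minimal sketch of building a compact fixed-length shot signature from DC-images, assuming a global colour histogram for the scene part and mean frame-to-frame differences as a crude stand-in for the motion part; the thesis' actual statistical and graph-based representations are more elaborate.

```python
# Hedged sketch: encode a shot (a sequence of tiny DC-images) into one fixed-length
# vector, then compare shots by cosine similarity.
import numpy as np

def shot_signature(dc_images, colour_bins=8):
    """dc_images: list of HxWx3 uint8 arrays (two or more DC-images of equal size)."""
    frames = np.stack(dc_images).astype(np.float32)             # (T, H, W, 3)
    # Scene component: normalised 3-D colour histogram over the whole shot.
    hist, _ = np.histogramdd(frames.reshape(-1, 3),
                             bins=(colour_bins,) * 3, range=[(0, 256)] * 3)
    scene = (hist / hist.sum()).ravel()
    # Motion component: mean absolute change between consecutive DC-images.
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=(0, 3)).ravel() / 255.0
    return np.concatenate([scene, motion])                      # fixed length per shot

def similarity(sig_a, sig_b):
    """Cosine similarity between two shot signatures."""
    return float(np.dot(sig_a, sig_b) /
                 (np.linalg.norm(sig_a) * np.linalg.norm(sig_b) + 1e-12))
```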

    Automated Characterisation and Classification of Liver Lesions From CT Scans

    Get PDF
    Cancer is a general term for a wide range of diseases that can affect any part of the body due to the rapid creation of abnormal cells that grow outside their normal boundaries. Liver cancer is one of the common diseases that cause the death of more than 600,000 people each year. Early detection is important to diagnose the disease and reduce the incidence of death. Examination of liver lesions is performed with various medical imaging modalities such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI). Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. Moreover, CAD systems can help physicians, as a second opinion, in characterising lesions and making the diagnostic decision. Thus, CAD systems have become an important research area. In particular, these systems can provide diagnostic assistance to doctors to improve overall diagnostic accuracy. Traditional methods to characterise liver lesions and differentiate normal liver tissues from abnormal ones depend largely on the radiologist's experience. Thus, CAD systems based on image processing and artificial intelligence techniques have gained a lot of attention, since they can provide constructive diagnosis suggestions to clinicians for decision making. The liver lesions are characterised in two ways: (1) using a content-based image retrieval (CBIR) approach to assist the radiologist in liver lesion characterisation; (2) calculating high-level features that describe/characterise the liver lesion in a way that can be interpreted by humans, particularly radiologists/clinicians, based on hand-crafted/engineered computational features (low-level features) and a learning process. However, the research gap lies in the high-level understanding and interpretation of medical image contents from low-level pixel analysis, based on mathematical processing and artificial intelligence methods. In our work, the research gap is bridged if a relation of image contents to medical meaning, in analogy to radiologist understanding, is established. This thesis explores an automated system for the classification and characterisation of liver lesions in CT scans. Firstly, the liver is segmented automatically using anatomical medical knowledge, histogram-based adaptive thresholding and morphological operations. The lesions and vessels are then extracted from the segmented liver by applying AFCM and a Gaussian mixture model through a region-growing process, respectively. Secondly, the proposed framework categorises the high-level features into two groups: the first group comprises high-level features that are extracted from the image contents (lesion location, lesion focality, calcification, scar, ...); the second group comprises high-level features that are inferred from the low-level features through a machine learning process to characterise the lesion (lesion density, lesion rim, lesion composition, lesion shape, ...). A novel multiple-ROI selection approach is proposed, in which regions are derived by generating an abnormality-level map based on the intensity difference and the proximity distance of each voxel with respect to normal liver tissue.
Then, the associations between low-level features, high-level features and the appropriate ROI are derived by assessing the ability of each ROI to represent a set of lesion characteristics. Finally, a novel feature vector is built, based on the high-level features, and fed into an SVM for lesion classification. In contrast with most existing research, which uses low-level features only, the use of high-level features and characterisation helps in interpreting and explaining the diagnostic decision. The methods are evaluated on a dataset containing 174 CT scans. The experimental results demonstrated the efficacy of the proposed framework in the successful characterisation and classification of liver lesions in CT scans. The achieved average accuracy was 95.56% for liver lesion characterisation, while the lesion classification accuracy was 97.1% for the entire dataset. The proposed framework was developed to provide a more robust and efficient lesion characterisation framework through comprehension of the low-level features to generate semantic features. The use of high-level features (characterisation) helps in better interpretation of CT liver images. In addition, difference-of-features using multiple ROIs was developed for robust capturing of lesion characteristics in a reliable way. This is in contrast to the current research trend of extracting features from the lesion only, without paying much attention to the relation between the lesion and the surrounding area. The design of the liver lesion characterisation framework is based on prior knowledge of the medical background to obtain a better and clearer understanding of liver lesion characteristics in medical CT images.
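    A minimal sketch of the abnormality-level-map idea described above, assuming a 2-D CT slice, a liver mask and a mask of normal liver tissue; the equal weighting of the intensity and proximity terms and the 0-1 scaling are illustrative assumptions, not the thesis' actual formulation.

```python
# Hedged sketch: combine, per voxel, the intensity difference from normal liver tissue
# with the distance to the nearest normal-tissue voxel, restricted to the liver mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

def abnormality_map(ct_slice, liver_mask, normal_mask, alpha=0.5):
    """ct_slice: 2-D HU array; liver_mask/normal_mask: boolean arrays of the same shape."""
    mean_normal = ct_slice[normal_mask].mean()
    # Intensity term: absolute difference from mean normal liver tissue, scaled to 0-1.
    intensity = np.abs(ct_slice - mean_normal)
    intensity = intensity / (intensity[liver_mask].max() + 1e-6)
    # Proximity term: distance (in voxels) to the nearest normal-tissue voxel, scaled to 0-1.
    distance = distance_transform_edt(~normal_mask)
    distance = distance / (distance[liver_mask].max() + 1e-6)
    level = alpha * intensity + (1 - alpha) * distance
    return np.where(liver_mask, level, 0.0)   # high-valued regions are candidate ROIs
```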

    The Weighted Average Constraint

    Weighted average expressions frequently appear in the context of allocation problems with balancing-based constraints. In combinatorial optimization they are typically avoided by exploiting problem specificities or by operating on the search process. This approach fails to apply when the weights are decision variables and when the average value is part of a more complex expression. In this paper, we introduce a novel average constraint to provide a convenient model and efficient propagation for weighted average expressions appearing in a combinatorial model. This result is especially useful for Empirical Models extracted via Machine Learning (see [2]), which frequently count average expressions among their inputs. We provide basic and incremental filtering algorithms. The approach is tested on classical benchmarks from the OR literature and on a workload dispatching problem featuring an Empirical Model. In our experiments the novel constraint, in particular with incremental filtering, proved to be even more efficient than traditional techniques at tackling weighted average expressions.
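    A minimal sketch, not the paper's filtering algorithm: it computes exact bounds of a weighted average over small interval domains by brute-force vertex enumeration, which is valid because the expression is monotone in each variable once the others are fixed.

```python
# Hedged sketch: bounds of avg = sum(w_i * x_i) / sum(w_i) when both the weights and
# the values are interval-domain decision variables (weights assumed strictly positive).
from itertools import product

def weighted_average_bounds(w_doms, x_doms):
    """w_doms, x_doms: lists of (lo, hi) interval domains; returns (min_avg, max_avg)."""
    best_lo, best_hi = float("inf"), float("-inf")
    for ws in product(*w_doms):          # enumerate weight endpoints (box vertices)
        for xs in product(*x_doms):      # enumerate value endpoints
            avg = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
            best_lo, best_hi = min(best_lo, avg), max(best_hi, avg)
    return best_lo, best_hi

# Example: two terms with uncertain weights and values.
print(weighted_average_bounds([(1, 2), (1, 3)], [(10, 20), (0, 5)]))
```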