    G-CASCADE: Efficient Cascaded Graph Convolutional Decoding for 2D Medical Image Segmentation

    In recent years, medical image segmentation has become an important application in the field of computer-aided diagnosis. In this paper, we are the first to propose a new graph convolution-based decoder, namely the Cascaded Graph Convolutional Attention Decoder (G-CASCADE), for 2D medical image segmentation. G-CASCADE progressively refines multi-stage feature maps generated by hierarchical transformer encoders with an efficient graph convolution block. The encoder utilizes the self-attention mechanism to capture long-range dependencies, while the decoder refines the feature maps, preserving long-range information thanks to the global receptive fields of the graph convolution block. Rigorous evaluations of our decoder with multiple transformer encoders on five medical image segmentation tasks (i.e., abdominal organs, cardiac organs, polyp lesions, skin lesions, and retinal vessels) show that our model outperforms other state-of-the-art (SOTA) methods. We also demonstrate that our decoder achieves better DICE scores than the SOTA CASCADE decoder with 80.8% fewer parameters and 82.3% fewer FLOPs. Our decoder can easily be used with other hierarchical encoders for general-purpose semantic and medical image segmentation tasks. Comment: 13 pages, IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024)
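
    To make the decoder idea concrete, the following PyTorch sketch (an illustration under our own assumptions, not the authors' implementation) refines an upsampled feature map with a simple graph convolution in which every spatial location is a node and edges connect the k most similar locations in feature space; the channel counts, fusion scheme, and mean aggregation are illustrative choices.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SimpleGraphConvBlock(nn.Module):
            # Treat each spatial position as a graph node; aggregate features
            # from its k most similar positions (cosine similarity) and add a
            # residual projection.
            def __init__(self, channels, k=8):
                super().__init__()
                self.k = k
                self.proj = nn.Conv2d(channels, channels, kernel_size=1)

            def forward(self, x):                      # x: (B, C, H, W)
                B, C, H, W = x.shape
                N = H * W
                nodes = x.flatten(2).transpose(1, 2)   # (B, N, C)
                normed = F.normalize(nodes, dim=-1)
                sim = normed @ normed.transpose(1, 2)  # (B, N, N) similarities
                idx = sim.topk(self.k, dim=-1).indices # (B, N, k) neighbours
                neigh = torch.gather(
                    nodes.unsqueeze(1).expand(B, N, N, C), 2,
                    idx.unsqueeze(-1).expand(B, N, self.k, C))
                agg = (nodes + neigh.mean(dim=2)).transpose(1, 2)
                return x + self.proj(agg.reshape(B, C, H, W))

        class DecoderStage(nn.Module):
            # One cascade step: upsample the deeper map, fuse the skip, refine.
            def __init__(self, channels):
                super().__init__()
                self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
                self.gcb = SimpleGraphConvBlock(channels)

            def forward(self, deep, skip):
                deep = F.interpolate(deep, size=skip.shape[-2:],
                                     mode="bilinear", align_corners=False)
                return self.gcb(self.fuse(torch.cat([deep, skip], dim=1)))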

    Multi-scale Hierarchical Vision Transformer with Cascaded Attention Decoding for Medical Image Segmentation

    Transformers have shown great success in medical image segmentation. However, transformers may exhibit a limited generalization ability due to the underlying single-scale self-attention (SA) mechanism. In this paper, we address this issue by introducing a Multi-scale hiERarchical vIsion Transformer (MERIT) backbone network, which improves the generalizability of the model by computing SA at multiple scales. We also incorporate an attention-based decoder, namely Cascaded Attention Decoding (CASCADE), for further refinement of the multi-stage features generated by MERIT. Finally, we introduce an effective multi-stage feature-mixing loss aggregation (MUTATION) method for better model training via implicit ensembling. Our experiments on two widely used medical image segmentation benchmarks (i.e., Synapse Multi-organ and ACDC) demonstrate the superior performance of MERIT over state-of-the-art methods. Our MERIT architecture and MUTATION loss aggregation can be used in downstream medical image and semantic segmentation tasks. Comment: 19 pages, 4 figures, MIDL 2023
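
    As one concrete reading of the loss aggregation (our assumption of the idea, not the released code), the sketch below mixes every non-empty subset of the decoder stages' prediction maps by element-wise summation and accumulates a loss per mixture, which yields the implicit ensembling described above.

        from itertools import combinations
        import torch
        import torch.nn.functional as F

        def mutation_style_loss(stage_logits, target):
            # stage_logits: list of (B, K, H, W) prediction maps, one per
            # decoder stage; target: (B, H, W) integer class labels.
            total = torch.zeros((), device=target.device)
            for r in range(1, len(stage_logits) + 1):
                for subset in combinations(stage_logits, r):
                    mixed = torch.stack(subset).sum(dim=0)  # element-wise mix
                    total = total + F.cross_entropy(mixed, target)
            return total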

    Quantitative assessment on remote code execution vulnerability in web apps

    With the exponentially increasing use of online tools and applications built for day-to-day purposes by small and large industries, the threat of exploitation is also increasing. Remote Code Execution (RCE) is one of the most critical and serious web application vulnerabilities of this era and one of the major concerns among cyber threats, as it can exploit web servers through their functionalities and their scripts/files. RCE is an application-layer vulnerability caused by careless coding practices, and it leads to severe security breaches that may bring unwanted resource loss or damage. With this vulnerability, an attacker may execute malicious code and take complete control of the targeted system with the privileges of an authentic user, and can then attempt to escalate those privileges. Remote Code Execution can lead to a full compromise of the vulnerable web application as well as the web server. This chapter highlights the concerns and risks that the RCE vulnerability poses to a system. Moreover, this study and its findings will help application developers and their stakeholders understand the risk of data compromise and unauthorized access to the system. Around 1,011 web applications were considered, and the experiment was conducted following a manual, double-blinded penetration testing strategy. The experiments show that more than 12% of the web applications were found vulnerable to RCE. This study also explicitly lists the critical factors behind Remote Code Execution vulnerabilities and improper input handling. The experimental results are promising and should motivate developers to focus on security enhancement through proper and safe input handling.
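
    As a toy illustration of the improper input handling discussed here (our own example, not taken from the experiments), the first Python function below is exploitable because attacker-controlled input reaches a shell, while the second validates the input and avoids the shell entirely.

        import subprocess

        def ping_host_unsafe(host: str) -> str:
            # VULNERABLE: input such as "8.8.8.8; cat /etc/passwd" is
            # interpreted by the shell, letting an attacker run commands.
            return subprocess.run("ping -c 1 " + host, shell=True,
                                  capture_output=True, text=True).stdout

        def ping_host_safe(host: str) -> str:
            # Safer: whitelist-validate the value and pass it as a single
            # argument with no shell involved.
            if not host or not all(c.isalnum() or c in ".-" for c in host):
                raise ValueError("invalid host")
            return subprocess.run(["ping", "-c", "1", host],
                                  capture_output=True, text=True).stdout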

    An improved optimization algorithm-based prediction approach for the weekly trend of COVID-19 considering the total vaccination in Malaysia: A novel hybrid machine learning approach

    SARS-CoV-2 causes a multi-organ disease characterized by a wide range of symptoms, including severe acute respiratory syndrome. When it initially emerged, it rapidly spread from its origin to adjacent nations, infecting millions of people around the globe. In order to take appropriate preventative and precautionary actions, it is necessary to anticipate positive Covid19 cases so as to better comprehend future risk. It is therefore vital to build mathematical models that are resilient and have as few prediction errors as feasible. This research recommends an optimization-based Least Squares Support Vector Machines (LSSVM) approach for forecasting confirmed Covid19 cases along with the daily total vaccination frequency. In this work, a novel hybrid Barnacle Mating Optimizer (BMO) via Gauss distribution is combined with the Least Squares Support Vector Machines algorithm for time series forecasting. The data consist of the daily occurrences of cases and the frequency of total vaccinations in Malaysia from 24 February 2021 to 27 July 2022. LSSVM then conducts the prediction with the hyper-parameter values optimized by BMO via Gauss distribution. Based on its experimental findings, this study concludes that the hybrid IBMO-LSSVM outperforms cross-validation, the original BMO, ANN, and a few other hybrid approaches with optimally tuned parameters.
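
    A minimal NumPy sketch of the forecasting core is given below, assuming the standard LSSVM dual formulation with an RBF kernel and lagged daily counts as inputs; a plain random search stands in for the BMO-with-Gauss-distribution tuner, which is not reproduced here.

        import numpy as np

        def rbf_kernel(A, B, sigma):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))

        def lssvm_fit(X, y, gamma, sigma):
            # LSSVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]
            n = len(y)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
            return sol[0], sol[1:]            # bias b, dual weights alpha

        def lssvm_predict(X_train, alpha, b, X_new, sigma):
            return rbf_kernel(X_new, X_train, sigma) @ alpha + b

        def make_lagged(series, lag=7):
            # Predict today's count from the previous `lag` days.
            X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
            return X, np.asarray(series[lag:], dtype=float)

        def tune(X, y, n_trials=50, seed=0):
            # Random-search stand-in for the BMO hyper-parameter optimizer.
            rng = np.random.default_rng(seed)
            split, best = int(0.8 * len(y)), (None, np.inf)
            for _ in range(n_trials):
                gamma, sigma = 10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-1, 2)
                b, alpha = lssvm_fit(X[:split], y[:split], gamma, sigma)
                pred = lssvm_predict(X[:split], alpha, b, X[split:], sigma)
                err = np.mean((pred - y[split:]) ** 2)
                if err < best[1]:
                    best = ((gamma, sigma), err)
            return best[0]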

    Business opportunities in waste heat utilization in Norway : a new business research for the cloud data center pilot project at the Norwegian center for energy transition studies (NTRANS)

    This master's thesis is a contribution to Statkraft's data center pilot project, supported by User Case 2 of the Norwegian Center for Energy Transition Studies (NTRANS). NTRANS researches the role of the energy system in the transition to a zero-emission society, studying the development of environmentally friendly energy from a social science perspective and in the interaction between technology and society. The research in NTRANS builds a knowledge base for the road to, and the consequences of, the energy and climate transition in Norway, and works to understand how restructuring can give the business community opportunities for innovation and value creation. NTRANS includes various User Cases to address current issues in close dialogue with user partners (experts and stakeholders). User Case 2 (UC2) is about green power and industry, and it is chaired by SINTEF, one of Europe's largest independent research organizations. UC2 deals with the potential of new renewable power production (with a focus on offshore wind power), the attraction of new power-intensive industries, and the possible interaction between them in Norway. There are two pilot projects in UC2, one of which is led by Statkraft. Statkraft is Europe's largest producer of renewable energy and is proactive in the electrification of the economy. This master's thesis was developed to provide a business model innovation perspective on Statkraft's pilot data center project. The objective of this research is to feed the funnel of new business opportunities to create more value out of the utilization of waste heat from cloud data centers in Norway.

    Effect of Honey and Sugar Solution on the Shelf Life and Quality of Dried Banana (Musa paradisiaca) Slices

    The main purpose of the study was to investigate the effect of osmotic solutions on the shelf life and quality of banana slices and to develop high-quality dehydrated banana products. Ripe bananas collected from a local market were cut into 3, 5 and 7 mm slices. The solution effect was assessed using honey, sugar and mixed (honey plus sugar, 1:1) solutions. The osmosed samples were dried in a mechanical drier at 65°C for 24 hours, down to a moisture content of 14.6%. Fresh and dehydrated bananas were analyzed for their chemical composition, and the effects of pre-treatment (4 min steam blanching plus 20 min sulphiting) on nutrient content were also evaluated. Percent solid gain was taken as the indicator of the solution effect. Percent solid gain slightly decreased (per unit weight) with increasing thickness of the banana slices at constant immersion time (3 hours) and concentration (72% TSS); this gain was higher in the honey solution, followed by the sugar solution. The taste panel responses revealed that banana slices prepared by 4 min steaming plus 20 min sulphiting (0.5% KMS) and subsequently dipped in honey solution gave better colour and flavour; pre-treatment with 0.3% KMS followed by dipping in honey likewise gave better colour and flavour. These pre-treated dehydrated products therefore showed the highest degree of acceptability. Studies on the effect of various packaging materials showed that single-layer polythene kept in a tin can gave the best results for storing the dried banana slices. Among the storage conditions tested (75% RH, 80% RH and 90% RH), 75% RH was found most effective for storing the processed slices. However, in all packaging systems and storage conditions, the slices absorbed moisture over the storage period and lost quality. To maintain better slice quality over a longer shelf life, low relative humidity and a moisture-proof packaging system (polythene kept in a tin can, or laminated aluminium foil) may be required.
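
    For reference, the percent-solid-gain indicator is computed here under the common osmotic-dehydration definition, i.e. the gain in dry solids relative to the initial sample mass; the study's exact definition may differ slightly, so treat this as an illustrative assumption.

        def percent_solid_gain(m0, x0, mt, xt):
            # m0, mt: sample mass before/after osmosis (g)
            # x0, xt: dry-solids mass fraction before/after (0-1)
            return 100.0 * (mt * xt - m0 * x0) / m0

        # Example: a 100 g slice at 25% solids that reaches 80 g at 50%
        # solids gains (40 - 25) / 100 * 100 = 15% solids.
        print(percent_solid_gain(100, 0.25, 80, 0.50))   # -> 15.0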

    Artificial Intelligence in 5G Technology: Overview of System Models

    The formation of 5G systems revolves around the interlinking of various devices to provide advanced connectivity throughout the system, and Artificial Intelligence plays a fundamental role in 5G networks. The popularity and integration of 5G have emerged through advanced cellular networks and many other technologies, and this innovative, high-speed network has built strong connections in recent years in business, personal work, and daily life. Artificial Intelligence and edge computing devices have optimized internet usage in everyday life. The growth of 5G networks benefits AI/ML algorithms thanks to low latency and high bandwidth, enabling real-time analysis, reasoning, and optimization. The 5G era is characterized by revolutionary techniques commonly used in cellular networks, such as radio resource management, mobility management, and service management. This work also integrates spectrum selection and spectrum access through an AI-based interface to meet the demands of 5G. The strategies introduced, a Fractional Knapsack Greedy-based strategy and a Language Hyperplane approach, form the basis subsequently utilized by Artificial Intelligence strategies for spectrum selection and the right allocation of spectrum for IoT-enabled sensor networks.
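
    One plausible reading of the Fractional Knapsack Greedy-based strategy (a hedged sketch under our own assumptions; the paper's exact formulation may differ) ranks spectrum requests by value per MHz and fills the available band greedily, splitting the last request fractionally:

        def allocate_spectrum(requests, capacity_mhz):
            # requests: list of (name, bandwidth_mhz, value) tuples.
            # Grant spectrum in decreasing value-per-MHz order; the final
            # request may receive only a fraction of its demand.
            allocations = []
            for name, bw, value in sorted(requests, key=lambda r: r[2] / r[1],
                                          reverse=True):
                if capacity_mhz <= 0:
                    break
                granted = min(bw, capacity_mhz)
                allocations.append((name, granted, value * granted / bw))
                capacity_mhz -= granted
            return allocations

        print(allocate_spectrum([("sensorA", 20, 60), ("sensorB", 30, 120),
                                 ("sensorC", 10, 50)], capacity_mhz=45))
        # sensorC (5 per MHz) and sensorB (4 per MHz) are served fully;
        # sensorA (3 per MHz) receives only the remaining 5 MHz.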

    Coupling topic modelling in opinion mining for social media analysis

    Many social media platforms, such as Facebook and Twitter, make it easy for everyone to share their thoughts on literally anything. Topic and opinion detection in social media facilitates the identification of emerging societal trends and the analysis of public reactions to policies and business products. In this paper, we propose a new method that combines opinion mining and context-based topic modelling to analyse public opinions in social media data. Context-based topic modelling is used to categorise the data into groups and discover hidden communities within them; the unwanted data groups discovered by the topic model are then discarded. A lexicon-based opinion mining method is applied to the remaining data groups to identify public sentiment about the entities. A set of tweets on the 2010 Australian Federal Election was used in our experiments. Our experimental results demonstrate that, with the help of topic modelling, our social media analysis model is accurate and effective.
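
    The shape of the proposed pipeline can be sketched as follows (an illustration with scikit-learn and a toy lexicon, not the paper's implementation): LDA assigns each tweet a dominant topic group, unwanted groups are discarded, and a lexicon scores sentiment in the remaining groups.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        POSITIVE = {"good", "great", "support"}     # toy lexicon entries
        NEGATIVE = {"bad", "fail", "against"}

        def lexicon_score(text):
            words = text.lower().split()
            return (sum(w in POSITIVE for w in words)
                    - sum(w in NEGATIVE for w in words))

        def analyse(tweets, n_topics=5, keep_topics=None):
            vec = CountVectorizer(stop_words="english", max_features=5000)
            X = vec.fit_transform(tweets)
            lda = LatentDirichletAllocation(n_components=n_topics,
                                            random_state=0)
            topic_of = lda.fit_transform(X).argmax(axis=1)  # dominant topic
            results = {}
            for tweet, topic in zip(tweets, topic_of):
                if keep_topics is not None and topic not in keep_topics:
                    continue                        # discard unwanted groups
                results.setdefault(topic, []).append(lexicon_score(tweet))
            # Average sentiment per retained topic group.
            return {t: sum(s) / len(s) for t, s in results.items()}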