5 research outputs found

    A New Statistical Reconstruction Method for the Computed Tomography Using an X-Ray Tube with Flying Focal Spot

    Abstract: This paper presents a new image reconstruction method for spiral cone-beam tomography scanners in which an X-ray tube with a flying focal spot is used. The method is based on principles related to the statistical model-based iterative reconstruction (MBIR) methodology. The proposed approach uses a continuous-to-continuous data model, and the forward model is formulated as a shift-invariant system. This avoids nutating reconstruction approaches, e.g. the advanced single slice rebinning (ASSR) methodology that is usually applied in computed tomography (CT) scanners with flying-focal-spot X-ray tubes. In turn, the proposed approach significantly accelerates reconstruction processing and greatly simplifies the entire reconstruction procedure. Additionally, it improves the quality of the reconstructed images compared to traditional algorithms, as confirmed by extensive simulations. It is worth noting that the main purpose of introducing statistical reconstruction methods to medical CT scanners is to reduce the impact of measurement noise on the quality of tomography images and, consequently, the dose of X-ray radiation absorbed by the patient. A series of computer simulations followed by doctors' assessments has been performed, indicating how great a reduction of the absorbed dose can be achieved using the reconstruction approach presented here.
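    As a rough illustration of the statistical principle behind MBIR-style methods (not the paper's shift-invariant, continuous-to-continuous formulation), the sketch below minimizes a penalized weighted least-squares objective with a toy forward model; the matrix sizes, statistical weights, and simple ridge penalty are illustrative assumptions.

```python
# Minimal sketch of the penalized weighted least-squares objective at the
# heart of MBIR-style statistical reconstruction. A is a toy system matrix,
# y the noisy measurements, and the weights model per-ray noise statistics.
import numpy as np

def mbir_gradient_descent(A, y, weights, beta=0.1, steps=200, lr=1e-3):
    """Minimize ||A x - y||_W^2 + beta * ||x||^2 by gradient descent."""
    x = np.zeros(A.shape[1])
    W = np.diag(weights)
    for _ in range(steps):
        grad = 2 * A.T @ W @ (A @ x - y) + 2 * beta * x
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 32))                      # toy forward model
x_true = rng.normal(size=32)
y = A @ x_true + rng.normal(scale=0.1, size=64)    # noisy projections
weights = np.full(64, 1.0)    # inverse noise variances in a real scanner
x_rec = mbir_gradient_descent(A, y, weights)
print(np.linalg.norm(x_rec - x_true))
```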

    Metody przechowywania danych w systemie rozpoznawania wzorców projektowych w oprogramowaniu (Data storage methods in a software design pattern recognition system)

    No full text
    Title from the header. Bibliography pp. 831-832. Also available in printed form. ABSTRACT: Quality evaluation is one of the key elements of software project accomplishment. Among many evaluation methods there is static analysis of the source code during its generation and development. This analysis yields key factors that serve as indicators during the evaluation phase. These factors describe software complexity, which has its source in the structure of software modules and in implementation details. Design pattern instance recognition is one of the major methods of evaluating software structure and its complexity. Building an effective design pattern instance recognition automaton requires an efficient data management and access layer. The present paper describes requirements for such a solution and presents analytical results on processing large datasets with several Open Source database management systems.
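    As a hedged illustration of the kind of data management layer the paper benchmarks, the sketch below persists detected design-pattern instances in SQLite and computes a simple occurrence indicator; the schema, table, and column names are hypothetical, not taken from the paper.

```python
# Minimal sketch of a storage layer for design-pattern instance recognition:
# recognized instances are persisted and later aggregated into quality
# indicators. Table and column names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pattern_instance (
    id      INTEGER PRIMARY KEY,
    pattern TEXT NOT NULL,      -- e.g. 'Singleton', 'Observer'
    module  TEXT NOT NULL       -- source module where it was found
);
CREATE INDEX idx_pattern ON pattern_instance(pattern);
""")
conn.executemany(
    "INSERT INTO pattern_instance (pattern, module) VALUES (?, ?)",
    [("Singleton", "config.py"), ("Observer", "events.py"),
     ("Observer", "ui.py")],
)
# Simple complexity indicator: occurrence count per pattern.
for row in conn.execute(
        "SELECT pattern, COUNT(*) FROM pattern_instance GROUP BY pattern"):
    print(row)
```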

    Browser fingerprint coding methods increasing the effectiveness of user identification in the web traffic

    No full text
    A web-based browser fingerprint (or device fingerprint) is a tool used to identify and track user activity in web traffic. It is also used to identify computers that abuse online advertising and to prevent credit card fraud. A device fingerprint is created by extracting multiple parameter values from a browser API (e.g. operating system type or browser version). The acquired parameter values are then combined into a hash using a hash function. The disadvantage of this method is its high susceptibility to small, normally occurring changes (e.g. a change in the browser version number or screen resolution). Minor changes in the input values generate a completely different fingerprint hash, making it impossible to find similar ones in the database. On the other hand, omitting these unstable values when creating the hash significantly limits the fingerprint's ability to distinguish between devices. This weak point is commonly exploited by fraudsters who knowingly evade this form of protection by deliberately changing the values of device parameters. The paper presents methods that significantly limit this type of activity. New algorithms for coding and comparing fingerprints are presented, which specifically take into account parameters with low stability and low entropy. The fingerprint generation methods are based on the popular MinHash, LSH, and autoencoder methods. The coding and comparison effectiveness of each presented method was also examined against the currently used hash generation method. Authentic data from the devices and browsers of users visiting 186 different websites were collected for the research.
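    As a minimal sketch of the MinHash family named above (not the paper's exact algorithms), the code below builds per-seed min-hash signatures over a fingerprint's parameter set, so a single changed parameter lowers similarity only slightly instead of producing an entirely different hash; the parameter names and signature length are illustrative assumptions.

```python
# Minimal MinHash sketch for fingerprint comparison: similar parameter sets
# keep similar signatures, so a browser-version bump no longer breaks
# matching the way a single monolithic hash would.
import hashlib

def minhash_signature(params, num_hashes=64):
    """One min-hash per seeded hash function over the parameter set."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int(hashlib.sha1(f"{seed}:{p}".encode()).hexdigest(), 16)
            for p in params))
    return sig

def similarity(sig_a, sig_b):
    """Fraction of matching slots estimates Jaccard similarity of the sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

fp_old = {"os:Windows 10", "browser:Firefox 119", "screen:1920x1080"}
fp_new = {"os:Windows 10", "browser:Firefox 120", "screen:1920x1080"}
print(similarity(minhash_signature(fp_old), minhash_signature(fp_new)))
```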

    Rough support vector machine for classification with interval and incomplete data

    No full text
    The paper presents the idea of connecting the concepts of Vapnik's support vector machine with Pawlak's rough sets in one classification scheme. The hybrid system is applied to classifying data in the form of intervals and with missing values [1]. Both situations are treated as a cause of dividing the input space into equivalence classes. The SVM procedure then classifies input data into rough sets of the desired classes, i.e. into their positive, boundary, or negative regions. Such a form of answer is also called a three-way decision. The proposed solution is tested on several popular benchmarks.
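    A minimal sketch of the three-way decision idea using a standard SVM: examples whose decision-function score falls inside a margin band go to the boundary region, the rest to the positive or negative region. The threshold tau and the synthetic data are assumptions for illustration; the paper's rough-set construction over equivalence classes is not reproduced here.

```python
# Three-way decision on top of an SVM: points far from the hyperplane are
# assigned to the positive or negative region, points inside a margin band
# fall into the boundary region (decision deferred).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear").fit(X, y)

def three_way(clf, x, tau=0.5):
    score = clf.decision_function(x.reshape(1, -1))[0]
    if score > tau:
        return "positive region"
    if score < -tau:
        return "negative region"
    return "boundary region"   # abstain / gather more information

print(three_way(clf, np.array([2.0, 2.0])))   # clearly class 1
print(three_way(clf, np.array([0.0, 0.0])))   # near the hyperplane
```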

    Fast computational approach to the Levenberg-Marquardt algorithm for training feedforward neural networks

    No full text
    This paper presents a parallel approach to the Levenberg-Marquardt (LM) algorithm. Training neural networks with the Levenberg-Marquardt algorithm involves significant computational complexity, and thus computation time. As a result, when a neural network has a large number of weights, the algorithm becomes practically ineffective. This article presents a new parallel approach to the computations in the Levenberg-Marquardt neural network learning algorithm. The proposed solution is based on vector instructions to effectively reduce the algorithm's high computation time. The new approach was tested on several examples involving classification and function approximation problems, and then compared with the classical computational method. The article presents the idea of parallel neural network computations in detail and shows the obtained acceleration for different problems.
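    As a hedged sketch of why vectorization pays off here, the code below performs one Levenberg-Marquardt update in vectorized NumPy, where BLAS-backed matrix products stand in for the paper's explicit SIMD vector instructions; the Jacobian, residuals, and damping value are toy assumptions.

```python
# One Levenberg-Marquardt step in vectorized form: solve
# (J^T J + damping * I) delta = J^T r, where J is the Jacobian of the
# residuals with respect to the network weights. The J^T J product is the
# dominant cost and the natural target for vectorized/parallel execution.
import numpy as np

def lm_step(J, residuals, weights, damping=1e-2):
    """Return the weights after one damped Gauss-Newton (LM) update."""
    JtJ = J.T @ J                  # vectorized matmul; dominant cost
    g = J.T @ residuals
    delta = np.linalg.solve(JtJ + damping * np.eye(J.shape[1]), g)
    return weights - delta

rng = np.random.default_rng(2)
n_samples, n_weights = 100, 10
J = rng.normal(size=(n_samples, n_weights))   # toy Jacobian
residuals = rng.normal(size=n_samples)        # per-sample network error
weights = rng.normal(size=n_weights)
weights = lm_step(J, residuals, weights)
print(weights[:3])
```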