
    Validation of Tool Mark Comparisons Obtained Using a Quantitative, Comparative, Statistical Algorithm

    Get PDF
    A statistical analysis and computational algorithm for comparing pairs of tool marks via profilometry data is described. Empirical validation of the method is established through experiments based on tool marks made at selected fixed angles from 50 sequentially manufactured screwdriver tips. Results obtained from three different comparison scenarios are presented and are in agreement with experiential knowledge possessed by practicing examiners. Further comparisons between scores produced by the algorithm and visual assessments of the same tool mark pairs by professional tool mark examiners in a blind study in general show good agreement between the algorithm and human experts. In specific instances where the algorithm had difficulty in assessing a particular comparison pair, results obtained during the collaborative study with professional examiners suggest ways in which algorithm performance may be improved. It is concluded that the addition of contextual information when inputting data into the algorithm should result in better performance.

    Second asymptomatic carotid surgery trial (ACST-2): a randomised comparison of carotid artery stenting versus carotid endarterectomy

    Get PDF
    Background: Among asymptomatic patients with severe carotid artery stenosis but no recent stroke or transient cerebral ischaemia, either carotid artery stenting (CAS) or carotid endarterectomy (CEA) can restore patency and reduce long-term stroke risks. However, from recent national registry data, each option causes about 1% procedural risk of disabling stroke or death. Comparison of their long-term protective effects requires large-scale randomised evidence. Methods: ACST-2 is an international multicentre randomised trial of CAS versus CEA among asymptomatic patients with severe stenosis thought to require intervention, interpreted with all other relevant trials. Patients were eligible if they had severe unilateral or bilateral carotid artery stenosis and both doctor and patient agreed that a carotid procedure should be undertaken, but they were substantially uncertain which one to choose. Patients were randomly allocated to CAS or CEA and followed up at 1 month and then annually, for a mean 5 years. Procedural events were those within 30 days of the intervention. Intention-to-treat analyses are provided. Analyses including procedural hazards use tabular methods. Analyses and meta-analyses of non-procedural strokes use Kaplan-Meier and log-rank methods. The trial is registered with the ISRCTN registry, ISRCTN21144362. Findings: Between Jan 15, 2008, and Dec 31, 2020, 3625 patients in 130 centres were randomly allocated, 1811 to CAS and 1814 to CEA, with good compliance, good medical therapy and a mean 5 years of follow-up. Overall, 1% had disabling stroke or death procedurally (15 allocated to CAS and 18 to CEA) and 2% had non-disabling procedural stroke (48 allocated to CAS and 29 to CEA). Kaplan-Meier estimates of 5-year non-procedural stroke were 2·5% in each group for fatal or disabling stroke, and 5·3% with CAS versus 4·5% with CEA for any stroke (rate ratio [RR] 1·16, 95% CI 0·86–1·57; p=0·33). 
Combining RRs for any non-procedural stroke in all CAS versus CEA trials, the RR was similar in symptomatic and asymptomatic patients (overall RR 1·11, 95% CI 0·91–1·32; p=0·21). Interpretation: Serious complications are similarly uncommon after competent CAS and CEA, and the long-term effects of these two carotid artery procedures on fatal or disabling stroke are comparable. Funding: UK Medical Research Council and Health Technology Assessment Programme.

    § 612a BGB: Prohibition of Retaliatory Measures; Commentary

    Get PDF
    Innovation has become a major strategic component of corporate entrepreneurship. Managerial decisions regarding innovative activity are complex and can be affected by numerous factors. In this study, we draw upon the tenets of stakeholder theory to examine how stakeholder salience (consisting of stockholders, employees, and customers) is integral to the decisions made by senior-level managers related to social proactiveness within a corporate innovation strategy. In doing so, we introduce a social proactiveness scale that examines a manager’s priorities toward internal and external social issues. Examining 200 senior-level managers, we find that companies that place salience on employees are more proactive on both internal and external social issues, while those placing salience on stockholders are more proactive on internal social issues but not external social issues. Surprisingly, placing salience on customers is associated with neither internal nor external social issues. Finally, the data suggest that proactiveness related to internal social issues leads to greater internal innovation, with external innovation mediating the relationship, whereas proactiveness on external social issues is not related to innovation.

    Optimization of a Statistical Algorithm for Objective Comparison of Toolmarks

    No full text
    ABSTRACT: Due to historical legal challenges, there is a driving force for the development of objective methods of forensic toolmark identification. This study utilizes an algorithm to separate matching and nonmatching shear cut toolmarks created using fifty sequentially manufactured pliers. Unlike previously analyzed striated screwdriver marks, shear cut marks contain discontinuous groups of striations, posing a more difficult test of algorithm applicability. The algorithm compares the correlation between optical 3D toolmark topography data sets, producing a Wilcoxon rank sum test statistic. The relative magnitude of this metric separates the matching and nonmatching toolmarks. Results show a high degree of statistical separation between the matching and nonmatching distributions. Further separation is achieved with optimized input parameters and the implementation of a "leash" that prevents a previous source of outliers; however, complete statistical separation was not achieved. This paper represents further development of objective methods of toolmark identification and further validation of the assumption that toolmarks are identifiably unique. KEYWORDS: forensic science, toolmark, algorithm, statistical comparison, pliers, striae, quasi-striated, shear cutter

In recent history, the legitimacy of scientific testimony has been questioned in several court cases, most notably Daubert v. Merrell Dow Pharmaceuticals, Inc. This challenge has had profound implications for the field of firearm and toolmark examination and has resulted in many studies conducted to validate the practice of comparative forensic examination. The primary validation needed is of the core assumption of toolmark examination: every tool has its own unique surface that will leave a unique mark. Screwdriver marks are among the most studied due to their uniform and continuous striae.
They have been previously characterized using stylus profilometry and confocal microscopy in various attempts designed for the identification of matching and/or nonmatching toolmarks (1-4). The results from these types of studies typically show that striae may be objectively compared using a computer algorithm with relatively high accuracy. For example, in a previous study by the authors using a statistical algorithm, marks from fifty sequentially manufactured screwdriver tips were successfully separated into matching and nonmatching pairs to a reasonable degree of accuracy (2). Studies of other tools that produce striations have also been conducted. Pliers are another type of tool that can create a variety of marks. Cassidy, one of the first to study sequentially manufactured pliers (5), found toolmarks produced by plier teeth, such as those left when a burglar twists off a door knob to enter a building, to be unique because the broaching process used to manufacture the plier teeth was performed in a direction perpendicular to the striae it would create. This study established the uniqueness of marks created by plier teeth; however, the analysis was based on logical reasoning and not backed by mathematical analysis. More recently, Bachrach et al. (4) studied marks created by tongue-and-groove pliers on brass pipe, galvanized steel pipe, and lead rope. Test marks were made using a single tooth from the pliers to create a striated mark. This study found that marks could be compared when made on different materials, but with less accuracy than marks made on the same material. Petraco et al. (3) studied striated chisel marks. The test marks created by the chisels were striated but discontinuous, resulting in patches of striations. Unfortunately, the nature of the created marks was too difficult for the employed software to analyze.
While regularly striated marks have received the majority of research attention, the extension of mathematically based studies to other forms of toolmarks is also highly desirable. The results discussed in this paper investigate the applicability of the algorithm employed in (2) to quasi-striated marks created by slip-joint pliers. This type of plier mark was chosen for two reasons. First, the type of mark produced, termed a shear cut, presents a more difficult pattern for identification than a fully striated mark. Second, pliers such as these and other tools that produce shear cut marks are routinely used by criminals to steal copper from construction sites; a July 30, 2013 report on CNBC stated that copper theft in the U.S.A. has become a $1 billion industry (http://www.cnbc.com/id/100917758). An initial investigation of slip-joint pliers was conducted by Grieve (6). As those results showed promise, this paper presents results on shear cut marks made by 50 sequentially manufactured slip-joint pliers. While the initial results revealed that the algorithm could correctly separate a large majority of matching and nonmatching pairs, some algorithm parameter values and options that work well for regularly striated marks are not optimal in the present setting. Two distinct deficiencies hindering algorithm operation were noted. The goal of this study was to investigate optimization of the parameters best suited for the analysis of marks described as quasi-striated. The first deficiency addressed involved parameters that affect the degree of statistical separation in the results. While separation was seen using the parameters employed for fully striated markings, better results could be obtained by changing the operational parameters of the algorithm. The second deficiency concerned what the authors have termed the "Opposite End Problem".
This problem manifests itself when, in a small number of cases, the algorithm declares a "match" from two data sets that are known to be nonmatching. Observation of the raw data files shows that the opposite ends of the two toolmarks being compared are identified as the matching region. Such a match is physically impossible and results from the inability of the algorithm to successfully complete the validation procedure, which is integral to its operation, when confronted with similar topography at opposite ends of the data sets. This possibility was first noted during research on regularly striated screwdriver toolmarks (1,2). This study involves complete analysis of fifty pliers using various parameter values and an option that accounts for the "Opposite End" problem. The results of the study, including a brief description of the statistical algorithm, are discussed below.

Experimental Methodology

Fifty sequentially manufactured slip-joint pliers were obtained from Wilde Tool Co., Inc. It is common knowledge within the field that the manufacturing process significantly affects the toolmarks that are created. The pliers start as pieces of steel that are hot forged into half blanks. Each half blank was then cold forged once again, using the same die for every piece. After forging, the first difference between half-pairs was introduced: fifty halves were punched to create a small hole, while the other fifty halves received a double-hole punch, allowing future users to better hold a wider range of object dimensions. The gripping teeth and shear surface were next created with a broaching process. Two broaching machines were used in the production of the pliers. Plier halves with the double-hole punch went to one machine, while plier halves with a single-hole punch went to the other. During this separation, the manufacturer stamped the numbers 1-50 on the plier halves so the correct sequence could be ensured.
The broaching process on the shear surface created the characteristic nature that is of interest for this study. After broaching, both halves of each plier were given the same heat treatment and shot peened to strengthen the material and increase the surface hardness. The flat side regions were next polished, and the double-hole punch half was branded with the company logo. The plier half with the double-hole punch and company logo was labeled as the "B" side for every plier pair, and the other side as "A". An overview of pliers from unfinished to finished states is shown in the accompanying figure. Wire test samples were created using bolt cutters to cut 2" samples from wire spools. The bolt cut ends were marked with a permanent marker so they could not be confused with the plier shear cut surfaces. The diameters of the wire used were 0.1620" for the copper and 0.1875" for the lead. Test marks were made by shear cutting the copper and lead wire. Shear cutters are defined by the Association of Firearm and Toolmark Examiners as "opposed jawed cutters whose cutting blades are offset to pass by each other in the cutting process" (9). As the shear face of the pliers was used to make the samples, by definition the created marks are shear cutting marks. Each shear cut was made by placing the sample between the shear surfaces, with the "B" surface always facing downward, and marking the sides "A" and "B" corresponding to which plier shear surface acted on that section of the wire. Thus, two samples, one "A" and one "B", were created with each shear cut. The samples were shear cut, alternating between copper and lead, until ten of each sample type were created. This resulted in 2000 total samples: 1000 each of copper and lead, with half of each coming from each side of the pliers. For consistency, every sample was made by the author who is a retired forensic examiner.
When the wire is mechanically separated, the two surfaces of the shear edges move past each other. The resultant action is therefore a combination of cutting the surfaces and a shearing action of the edges as they move through the material. The result is two surfaces created on each half of the separated wire sample, comprising both shear cut and impression markings, roughly at 90° to each other and both at approximately 45° to the long axis of the wire. Only the shear cut surfaces on the "A" and "B" sides of the sample were scanned and analyzed. A schematic of the process is shown in the accompanying figure. The scope of this study included only the copper samples, leading to a total sample size of 1000. To obtain the surface data, each piece was scanned using an Infinite Focus Microscope G3 (Alicona). Scans were completed at 10× magnification with a two-micron vertical resolution. An example image obtained using the IFM is shown in the accompanying figure, as is an example of the scanned data prior to and after noise reduction. To remove any remaining spikes and the sides of the cut wire, the authors used a visual painting program to paint over noisy regions. The computer algorithm then interpreted the painted areas as data points to exclude from the analysis. As it is impossible to scan every sample at precisely the same angle relative to the equipment, it is necessary to correct for this sample angle using a process called detrending. To detrend the data, linear least squares were used to fit a plane to the data. To make this process faster and less sensitive to noise, only 80 points were used in the plane fitting. These points were selected in an "X" pattern that evenly covered the majority of the sample surface. Once the plane fit was obtained, the plane was subtracted from the surface data to remove the global surface angle.
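The detrending step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code: the function name, the diagonal "X" sampling scheme, and the 80-point default are assumptions based only on the description in the text.

```python
import numpy as np

def detrend_surface(z, n_points=80):
    """Remove the global tilt of a scanned surface via a least-squares plane fit.

    z : 2-D array of surface heights. Following the text, only a small number
    of points (n_points, sampled along an "X" pattern across the surface) is
    used to fit the plane, keeping the fit fast and less sensitive to noise.
    """
    rows, cols = z.shape
    # Sample points along the two diagonals (the "X" pattern).
    k = n_points // 2
    t = np.linspace(0.0, 1.0, k)
    r1 = (t * (rows - 1)).astype(int)
    c1 = (t * (cols - 1)).astype(int)
    r = np.concatenate([r1, r1])
    c = np.concatenate([c1, (cols - 1) - c1])
    # Least-squares fit of z = a*row + b*col + d over the sampled points.
    A = np.column_stack([r, c, np.ones(len(r))])
    coef, *_ = np.linalg.lstsq(A, z[r, c], rcond=None)
    # Subtract the fitted plane from the full surface.
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    plane = coef[0] * rr + coef[1] * cc + coef[2]
    return z - plane
```

For a surface that is exactly a tilted plane, the function returns heights that are uniformly zero, confirming that the global surface angle has been removed while leaving local topography untouched.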
When employed in the initial study (6), the comparative algorithm was found to have the same limitation that had prevented it from operating effectively in certain instances in (2). For a more complete discussion of the algorithm, the reader is referred to (2). Briefly, the algorithm works in two major steps: an optimization step and a validation step. An iterative "search" window of user-determined size (in pixels) is held stationary on Trace 1, while the correlation with a same-size window is calculated over the entirety of Trace 2. The window is then shifted one pixel along Trace 1 and the process is repeated. This is performed until the two regions of best correlation are found. Once the region of highest correlation is found during the optimization step, two kinds of shifts are applied and compared, rigid shifts and random shifts; this is the validation step. The size of the "validation" windows that are shifted is user determined. During the rigid shift step, a user-defined window is moved a set distance from the best correlation window on each trace and the correlation at that point is calculated. For the random shift step, the same-size window is moved randomly calculated distances from the best correlation window on the two comparison scans and the correlation is again calculated. An example of rigid and random shifts is shown in the accompanying figure. In the initial investigation (6), outlier data points were observed to stem from the algorithm misidentifying the opposite ends of marks as a positive match; one example is shown in the accompanying figure. To address this problem, a "leash" was applied to the search window of the original algorithm (2) during the optimization step, the purpose being to limit the comparison distance between profiles. In this case, the comparative correlation is no longer calculated over the entirety of Trace 2 for each iteration of the search window, but only up to a certain percentage of the entire distance.
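The optimization step with the leash restriction can be sketched as follows, assuming the traces are 1-D height profiles. The function name, the brute-force search, and the interpretation of the leash as a maximum allowed offset between window positions are illustrative assumptions; the published algorithm's internals may differ.

```python
import numpy as np

def best_correlation(trace1, trace2, window=200, leash=0.8):
    """Slide a fixed-size window along trace1 and, for each position, find
    the best-correlated same-size window on trace2.

    The "leash" limits how far apart the two window positions may be,
    expressed as a fraction of the trace length; this suppresses the
    physically impossible opposite-end matches described in the text.
    Returns (best correlation, position on trace1, position on trace2).
    """
    n1 = len(trace1) - window + 1
    n2 = len(trace2) - window + 1
    max_offset = int(leash * len(trace2))
    best = (-np.inf, 0, 0)
    for i in range(n1):
        w1 = trace1[i:i + window]
        for j in range(n2):
            if abs(i - j) > max_offset:  # leash: skip far-apart windows
                continue
            w2 = trace2[j:j + window]
            r = np.corrcoef(w1, w2)[0, 1]
            if r > best[0]:
                best = (r, i, j)
    return best
```

Comparing a trace against itself should locate an exactly aligned pair of windows with correlation near 1. The brute-force double loop is O(n² · window) and is only meant to make the search transparent; a production implementation would use FFT-based correlation.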
The current version of the leash is set as a percentage of the total length of the trace; it was set at 80% for this analysis. A Wilcoxon rank sum test statistic (centered and scaled to have a nominal SD of 1) is calculated during the validation step and is what the algorithm returns. The T1 statistic is determined by comparing the results of the rigid and random shifts. Matching marks should have relatively high correlation after a rigid shift, if they are truly similar, and lower correlations during random shifts. The magnitude of the T1 statistic is affected by how much the rigid and random shift correlations differ. High rigid shift correlation and low random shift correlation result in a high T1 value, indicating a matching pair, while the opposite scenario results in a low or negative T1 value, indicating a nonmatching pair. Many shifts are applied because random chance may allow a few random shift windows to have a high correlation. As more shifts are applied to a matching pair, the probability of observing a small T1 statistic decreases. As more shifts are applied to a nonmatching pair, the expected trend is for the average rigid and random shift correlations to become closer in value, resulting in a T1 statistic near zero. Currently, there is no definitive T1 value that perfectly defines when a match or nonmatch has been confirmed. This is due to the nature of the data: the comparisons are not independent events. Even if a definitive value were derived for this study, it would likely not be applicable to other toolmark comparisons due to inherent differences in toolmark variability between different tools. Statements of correct or incorrect "identification" in this paper are therefore qualitative: when the matching pairs consistently have significantly larger T1 values than nonmatching pairs, it is fair to state that the algorithm is correctly separating ("identifying") the majority of the pairs.
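A centered and scaled Wilcoxon rank sum statistic of the kind described can be sketched as below. The exact centering and scaling used in the paper are not given in this excerpt, so this sketch uses the standard normal approximation of the rank-sum statistic and ignores ties; the function name is illustrative.

```python
import numpy as np

def t1_statistic(rigid_corrs, random_corrs):
    """Centered and scaled Wilcoxon rank sum statistic (a sketch of T1).

    Rigid-shift correlations that rank consistently above the random-shift
    correlations give a large positive value (suggesting a match), while
    thoroughly interleaved samples give a value near zero.
    """
    rigid = np.asarray(rigid_corrs, dtype=float)
    rand = np.asarray(random_corrs, dtype=float)
    n1, n2 = len(rigid), len(rand)
    pooled = np.concatenate([rigid, rand])
    # 1-based ranks of every correlation within the pooled sample
    # (no tie correction in this sketch).
    ranks = pooled.argsort().argsort() + 1
    w = ranks[:n1].sum()                       # rank sum of the rigid shifts
    mean = n1 * (n1 + n2 + 1) / 2.0            # null mean of the rank sum
    sd = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # null standard deviation
    return (w - mean) / sd
```

When every rigid-shift correlation exceeds every random-shift correlation, the statistic takes its maximum positive value for the given sample sizes; interleaved correlations keep it near zero, mirroring the matching and nonmatching behavior described above.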
A more advanced statistical argument would be necessary to truly state whether an individual comparison was correctly identified.

Results

The data from the fifty sequentially manufactured pliers were compared using three different types of comparisons, resulting in three sets of data. All three comparison types were performed using data from both the long and short edges, as defined in the accompanying figure. Set 1: Comparing known matching pairs. Data for Set 1 were created by comparing marks made by the same side of the same pliers; comparisons were made between marks 2 and 4, as well as marks 6 and 8, for both sides of each plier. The methodology for comparisons in Set 1 is best described in tabular format; an example of the comparison order through two pliers is shown in the accompanying table. Set 3: Comparing known nonmatching pairs. Data for Set 3 were created by comparing marks from the same side of different pliers; marks 16, 18, and 20 were compared between different pliers for both sides. An example of the methodology for comparisons in Set 3 is likewise shown in the accompanying table. Search and validation window sizes of 200 and 100 pixels, respectively, were used for the initial analysis; these window sizes had previously been used for successful matching of screwdriver toolmarks (2). The results for all three data sets, along with results from the initial investigation that used the original algorithm, are shown in the accompanying figures. As the quasi-striated marks produced by the plier shear cuts are far less regular than the previously studied screwdriver marks, experiments were conducted to determine the effect that window size (i.e., search and validation) may have on the results. The 2:1 window size ratio was maintained for this second round of analysis, with the search windows set to 1000, 500, 200, and 100 pixels and corresponding validation window sizes of 500, 250, 100, and 50 pixels.
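The Set 1 pairing scheme above can be made concrete with a few lines of bookkeeping code. This is purely illustrative: the function name and the (plier, side, mark) tuple layout are hypothetical, and Set 3 would be enumerated analogously by pairing marks 16, 18, and 20 across different pliers.

```python
def set1_pairs(n_pliers=50):
    """Enumerate the Set 1 (known matching) comparisons: marks 2 vs 4 and
    marks 6 vs 8 from the same side of the same plier, for both sides of
    each of the fifty pliers."""
    pairs = []
    for plier in range(1, n_pliers + 1):
        for side in ("A", "B"):
            for m1, m2 in ((2, 4), (6, 8)):
                # Each entry pairs two marks from the same plier and side.
                pairs.append(((plier, side, m1), (plier, side, m2)))
    return pairs
```

With 50 pliers, two sides, and two mark pairings per side, this yields 200 known-matching comparisons per edge, every one of which pairs marks from the same plier and side.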
The results of these experiments for the short and long edges are shown in Figs 12, 13, and 14. In some cases during the analysis, the algorithm would not return a result for every comparison. This is because the algorithm does not allow validation windows to overlap; thus, as larger and larger window sizes are used, it becomes more likely that the algorithm will run out of profile length, especially on short edges with large window sizes, and not return a T1 value. For the 2:1 ratio, the algorithm did not return 6 values for the Set 1 short edge, 9 values for the Set 2 long edge, 13 values for the Set 2 short edge, and 19 values for the Set 3 short edge. These numbers should be compared with the total of 3965 comparisons that did return a result for the 2:1 ratio analysis; the algorithm returned a result more than 98% of the time. With a clear trend in the results due to window size, the effect of the size ratio was next analyzed using both 4:1 and 6:1 search-to-validation window size ratios. Search windows were set to 800, 600, 400, and 200 pixels with corresponding validation window sizes of 200, 150, 100, and 50 pixels for the 4:1 ratio experiment. Search windows were set to 750, 600, 450, and 300 pixels with corresponding validation window sizes of 125, 100, 75, and 50 pixels for the 6:1 ratio experiment. The results from these analyses are shown in Figs 15-20 for the short and long edges. Set 2 and 3 comparisons, known nonmatches, are shown in Figs 17-20; observations for these comparisons showed a median value near zero regardless of window size, increasing data spread with increasing window sizes, and no clear trend in the number of outliers. In general, long edge comparisons had better results, as evidenced by the general decrease in data spread. The algorithm did not fail to return any results for the 4:1 and 6:1 ratios; this analysis contained 4012 comparisons for each ratio.
Discussion

The results presented add further credence to the basic assumption involved in toolmark identification, namely, that all manufactured tools are unique due to the machining processes used in their manufacture. This uniqueness is transferred to toolmarks as the tool is employed. Use of advanced characterization methods and computer algorithms can, to a large degree, allow objective comparison and identification of a series of toolmarks. When the research transitioned from regularly striated to quasi-striated marks, it became apparent that parameter optimization of the employed algorithm is necessary for different tools. This optimization led to improved results. The algorithm used in this research was optimized to provide better results for the current set of toolmarks by experimentally changing window sizes and utilizing an option that limits errors due to the opposite end problem. While the leash restriction is effective, it should be realized that its effectiveness is only made possible by the introduction of contextual knowledge into the analysis. For the plier marks, the nonsymmetric shape of the shear cut makes it easy to determine in which direction the scans should be analyzed. A more symmetric plier mark might be more difficult to orient properly for use of the leash, requiring a trained examiner to ensure the data were obtained correctly. In the most ideal scenario, there would be complete data separation between known matching and nonmatching pairs, giving a clear indication of correlation, with no outliers in the data. Although an ideal degree of separation has not been achieved, a large majority of toolmarks are clearly correctly identified. Close examination of the outlying data points from both edges reveals that, for these specific comparisons, the algorithm produces a correct result for the vast majority of window combinations used.
For example, consider the outlying comparisons shown in the accompanying figures. If the underlying hypothesis behind the application of the T1 statistic is that matching pairs will have more correlation than nonmatching pairs, one might assume that this also holds true if one uses more search and validation window combinations. A simple experiment was performed to see the effect of using multiple search and validation window combinations simultaneously.

    Validation of Tool Mark Comparisons Obtained Using a Quantitative, Comparative, Statistical Algorithm

    Get PDF
    This is the accepted version of the following article: Journal of Forensic Sciences, vol. 55, iss. 4, pp. 953-961, 2010, which has been published in final form at http://dx.doi.org/10.1111/j.1556-4029.2010.01424.x

    Metacognition, entrepreneurial orientation, and firm performance : An upper echelons view

    No full text
    Upper echelons theory suggests that cognitive diversity in top management teams (TMTs) affects firms’ operation and performance. Prior research in this stream has focused primarily on lower-order cognitive factors, such as beliefs, perceptions, and preferences, rather than higher-order ones, known as metacognitive abilities. This study is an early, perhaps the first, attempt to begin this line of enquiry. Adopting a multidimensional view of entrepreneurial orientation, we propose that diversity in the metacognitive ability of top teams has a different impact on each dimension of the team’s entrepreneurial behavior and, through this, on firm performance. Our empirical analysis, based on data from 105 TMTs of Australian small- and medium-sized enterprises (SMEs), partially supports our theorization. We found that while metacognitive diversity is positively associated with the innovative endeavors of TMTs, it has no significant effect on their risk-taking and proactive behaviors. We found additional evidence that each aspect of the TMT’s entrepreneurial orientation has a different implication for firm performance. Overall, our research offers novel and more nuanced insights into how and when diversity in the metacognitive ability of TMTs matters for the performance of the firm.

    Effects of informal institutions on the performance of microenterprises in the Philippines : the mediating role of entrepreneurial orientation

    Full text link
    The study investigates the effects of informal institutions and entrepreneurial orientation on the performance of microenterprises at the subnational level within a developing country context. Using structural equation modeling based on a large-scale survey of 735 microenterprises in the Philippines, it is found that informal institutional factors and entrepreneurial orientation are associated with firm performance. However, further analysis reveals a strong mediating role of entrepreneurial orientation in the informal institutions-firm performance relationship. This finding is novel and adds to our understanding of the mechanism through which informal institutions affect firm performance, particularly for microenterprises in developing countries.