Single leg mini squat: an inter-tester reproducibility study of children in the age of 9–10 and 12–14 years presented by various methods of kappa calculation
Inter-observer variation can be measured in any situation in which two or more independent observers are evaluating the same thing; kappa is intended to quantify this agreement.
Diabetic retinopathy detection through artificial intelligent techniques: a review and open issues
New taxonomy for assessing manual material handlers' footstep patterns
Q-Coh: A tool to screen the methodological quality of cohort studies in systematic reviews and meta-analyses. International Journal of Clinical and Health Psychology
Why Cohen's Kappa should be avoided as performance measure in classification
Kappa statistic and 95% CI, and the prevalence-adjusted bias-adjusted...
The disagreeable behaviour of the kappa statistic. Flight (2015), Pharmaceutical Statistics
Interexaminer agreement dental caries epidemiological surveys: the importance of disease prevalence in the sample. SciELO Brasil
The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements. Sim & Wright (2005)
Interobserver and Intertest Agreement in Telemedicine Glaucoma Screening with Optic Disk Photos and Optical Coherence Tomography. Journal of Clinical Medicine
Cohen's kappa, 95% CI and prevalence-adjusted and bias-adjusted Cohen's...
Guidelines for analysis on measuring interrater reliability of nursing outcome classification