Crowd-based Multi-Predicate Screening of Papers in Literature Reviews / E. Krivosheev, F. Casati, B. Benatallah
Language: English.
Summary or abstract: Systematic literature reviews (SLRs) are one of the most common and useful forms of scientific research and publication. Tens of thousands of SLRs are published each year, and this rate is growing across all fields of science. Performing an accurate, complete, and unbiased SLR is, however, a difficult and expensive endeavor. This is true in general for all phases of a literature review, and in particular for the paper screening phase, where authors filter a set of potentially in-scope papers based on a number of exclusion criteria. To address the problem, in recent years the research community has begun to explore the use of the crowd to allow for a faster, more accurate, cheaper, and unbiased screening of papers. Initial results show that crowdsourcing can be effective, even for relatively complex reviews. In this paper we derive and analyze a set of strategies for crowd-based screening, and show that an adaptive strategy, which continuously re-assesses the statistical properties of the problem to minimize the number of votes needed to make a decision for each paper, significantly outperforms a number of non-adaptive approaches in terms of cost and accuracy. We validate both the applicability and the results of the approach through a set of crowdsourcing experiments, and discuss properties of the problem and algorithms that we believe to be of general interest for classification problems where items are classified via a series of successive tests (as often happens in medicine).
Subjects: works of TPU scientists | electronic resource | human computation | classification | literature reviews
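The abstract describes the adaptive idea only at a high level: votes on each exclusion criterion are collected one at a time, and polling stops as soon as the accumulated evidence is strong enough to decide. The sketch below illustrates one way such a sequential decision rule could look. It is not the authors' algorithm: the symmetric worker-accuracy model (`acc`), the prior (`prior`), the confidence threshold (`thr`), and the per-criterion vote budget (`max_votes`) are all illustrative assumptions.

```python
def posterior_out(n_out, n_in, prior=0.3, acc=0.8):
    """Posterior probability that a paper violates one exclusion
    criterion, given n_out "exclude" votes and n_in "include" votes.
    Assumes symmetric worker accuracy `acc` (an illustrative model,
    not the paper's exact one)."""
    like_out = (acc ** n_out) * ((1 - acc) ** n_in)  # P(votes | paper is out)
    like_in = (acc ** n_in) * ((1 - acc) ** n_out)   # P(votes | paper is in)
    return like_out * prior / (like_out * prior + like_in * (1 - prior))


def screen_paper(get_vote, criteria, prior=0.3, acc=0.8,
                 thr=0.99, max_votes=10):
    """Adaptively screen one paper against a list of exclusion criteria.
    `get_vote(criterion)` asks one crowd worker whether the criterion
    applies to the paper (True = exclude). Polling on a criterion stops
    as soon as the posterior is confident either way; a single
    confidently violated criterion excludes the paper. A fuller version
    would also re-estimate `prior` and `acc` from the votes collected so
    far, which is the continuous re-assessment the abstract refers to."""
    for criterion in criteria:
        n_out = n_in = 0
        while n_out + n_in < max_votes:
            if get_vote(criterion):
                n_out += 1
            else:
                n_in += 1
            p_out = posterior_out(n_out, n_in, prior, acc)
            if p_out >= thr:
                return "EXCLUDE"  # one violated criterion suffices
            if p_out <= 1 - thr:
                break  # confident this criterion passes; try the next
        # budget exhausted without confidence: fall through to the
        # next criterion (a simplification of the real stopping rule)
    return "INCLUDE"
```

Under these illustrative parameters, four unanimous "exclude" votes (posterior about 0.991) or three unanimous "include" votes (posterior for exclusion about 0.007) already cross the 0.99 threshold, so easy papers are decided with very few votes while contentious ones draw more. This per-paper, per-criterion adaptivity is the cost-saving behavior the abstract attributes to the adaptive strategy.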