On the Impact of Predicate Complexity in Crowdsourced Classification Tasks / J. Ramirez, M. Baez, F. Casati [et al.]

Alternative authors: Ramirez, Jorge; Baez, Marcos; Casati, Fabio (Italian economist and Professor at the University of Trento, Italy; Professor of Tomsk Polytechnic University, Candidate of Technical Sciences, b. 1971); Cernuzzi, Luca; Benatallah, Boualem; Taran, Ekaterina Aleksandrovna (economist, Senior Lecturer of Tomsk Polytechnic University, b. 1981); Malanina, Veronika Anatolievna (economist, Associate Professor of Tomsk Polytechnic University, Candidate of Economic Sciences, b. 1977).
Corporate authors (secondary): National Research Tomsk Polytechnic University, Institute of Social and Humanitarian Technologies, Department of Economics, International Research and Educational Laboratory for the Technologies of Improving the Wellbeing of Older Adults; National Research Tomsk Polytechnic University, School of Core Engineering Education, Division for Social Sciences and Humanities.
Language: English.
Bibliography note: [References: 53 tit.].
Subjects: electronic resource | works of TPU scientists | crowdsourcing | task design | predicate complexity

Title from title screen.


This paper explores and offers guidance on a specific and relevant problem in task design for crowdsourcing: how to formulate a complex question used to classify a set of items. In micro-task markets, classification is still among the most popular tasks. We situate our work in the context of information retrieval and multi-predicate classification, i.e., classifying a set of items based on a set of conditions. Our experiments cover a wide range of tasks and domains, and also consider crowd workers alone and in tandem with machine learning classifiers. We provide empirical evidence on how the resulting classification performance is affected by different predicate formulation strategies, emphasizing the importance of predicate formulation as a task design dimension in crowdsourcing.
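The multi-predicate setting described in the abstract can be sketched in code. This is a minimal illustration, not the authors' method: it assumes each (item, predicate) pair collects boolean votes from several crowd workers, aggregates them by majority, and keeps an item only if every predicate passes. The predicate names and votes below are hypothetical.

```python
from collections import Counter

def majority(votes):
    """Majority vote over worker answers for one (item, predicate) pair."""
    return Counter(votes).most_common(1)[0][0]

def classify(item_votes):
    """Multi-predicate classification: an item is relevant only if the
    aggregated answer for every predicate is True.
    `item_votes` maps predicate -> list of boolean worker votes."""
    return all(majority(votes) for votes in item_votes.values())

# Hypothetical example: screening one paper against two predicates.
votes = {
    "describes a crowdsourcing task": [True, True, False],
    "reports an empirical study":     [True, True, True],
}
print(classify(votes))  # True: both predicates pass by majority vote
```

Note that a single failed predicate excludes the item, which is why predicate formulation (how each condition is phrased and split) directly shapes the resulting classification performance.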
