. . . . "For the first part of our study, the Find stage was implemented using a contest-based format to engage with a community of LD experts in discovering and classifying quality issues of DBpedia resources. Contributions obtained through the contest (referring to flawed object values, incorrect datatypes and missing links) and were submitted to Amazon Mechanical Turk (MTurk), where we asked workers to Verify them. For the second part, only microtask crowdsourcing was used to perform the Find and Verify stages on the same set of DBpedia resources used in the first part." . . . . "2019-11-10T18:05:11+01:00"^^ . .