. . . . "The observed results suggest that LD experts and crowd workers offer complementary strengths that can be exploited not only in different assessment iterations or stages (RQ3) but also in particular subspaces of quality issues. LD experts exhibited good performance when finding incorrect object values and datatypes (in particular, numerical datatypes). In turn, microtask crowdsourcing can be effectively applied to: i) verify whether object values are incorrect, ii) verify literals annotated with language tags, and iii) find and verify incorrect links from RDF resources to web pages." . . . . "2019-11-10T18:05:11+01:00"^^ . .