@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix orcid: <https://orcid.org/> .
@prefix this: .
@prefix sub: .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav: <http://purl.org/pav/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix doco: <http://purl.org/spar/doco/> .
@prefix c4o: <http://purl.org/spar/c4o/> .

# Head graph: declares the nanopublication and links its three parts.
sub:Head {
  this: np:hasAssertion sub:assertion ;
    np:hasProvenance sub:provenance ;
    np:hasPublicationInfo sub:pubinfo ;
    a np:Nanopublication .
}

# Assertion graph: the quoted paragraph and its text content.
sub:assertion {
  sub:paragraph c4o:hasContent "In the implementation of the Find and Verify stages in our approach, we explore two different crowdsourcing workflows combining different types of crowds. The first workflow combines LD experts and microtask workers: it leverages the expertise of Linked Data experts in a contest to find and classify erroneous triples according to a predefined quality taxonomy, while the workers verify the outcome of the contest. The second workflow relies entirely on microtask crowdsourcing to perform the Find and Verify stages. As discussed in Section 3, these crowdsourcing approaches exhibit different characteristics in terms of the types of tasks they can be applied to, the way the results are consolidated and exploited, and the audiences they target. Therefore, in this work we study the impact of involving different types of crowds to detect quality issues in RDF triples: LD experts in the contest and workers in the microtasks. Table 1 presents a summary of the two approaches as they have been used in this work for LD quality assessment purposes." ;
    a doco:Paragraph .
}

# Provenance graph: where the assertion comes from and who it is attributed to.
sub:provenance {
  sub:assertion prov:hadPrimarySource ;
    prov:wasAttributedTo orcid:0000-0003-0530-4305 .
}

# Publication-info graph: creation metadata for the nanopublication itself.
sub:pubinfo {
  this: dcterms:created "2019-11-08T18:05:11+01:00"^^xsd:dateTime ;
    pav:createdBy orcid:0000-0002-7114-6459 .
}
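For orientation, the head graph above wires the nanopublication to its assertion, provenance, and publication-info graphs, and the assertion graph carries the paragraph text via c4o:hasContent. Below is a minimal sketch of reading that text back out with rdflib; the local file name "paragraph.trig" is an assumption for illustration, not part of the original.

# Minimal sketch (assumes rdflib and a local copy of the nanopublication
# saved as "paragraph.trig", a hypothetical file name).
from rdflib import Dataset, Namespace

NP = Namespace("http://www.nanopub.org/nschema#")
C4O = Namespace("http://purl.org/spar/c4o/")

ds = Dataset()
ds.parse("paragraph.trig", format="trig")

# The head graph names the assertion graph; inside that graph, the paragraph
# node carries its text as the object of c4o:hasContent.
for _, _, assertion, _ in ds.quads((None, NP.hasAssertion, None, None)):
    for _, _, content, _ in ds.quads((None, C4O.hasContent, None, assertion)):
        print(content)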