@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix orcid: <https://orcid.org/> .
@prefix this: .
@prefix sub: .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav: <http://purl.org/pav/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix doco: <http://purl.org/spar/doco/> .
@prefix c4o: <http://purl.org/spar/c4o/> .

sub:Head {
  this: np:hasAssertion sub:assertion;
    np:hasProvenance sub:provenance;
    np:hasPublicationInfo sub:pubinfo;
    a np:Nanopublication .
}

sub:assertion {
  sub:paragraph c4o:hasContent "Similar to the previous experiment, we measured the inter-rater agreement achieved by the crowd in both stages using the Fleiss’ kappa metric. In the Find stage, the inter-rater agreement of workers was 0.2695, while in the Verify stage, the crowd achieved substantial agreement for all the types of tasks: 0.6300 for object values, 0.7957 for data types or language tags, and 0.7156 for interlinks. In comparison to the first workflow, the crowd in the Verify stage achieved higher agreement. This suggests that the triples identified as erroneous in the Find stage were easier to interpret or process by the crowd. Table 6 reports on the precision achieved by the crowd in each stage. It is important to notice that in this workflow we crowdsourced all the triples that could have been explored by the LD experts in the contest. In this way, we evaluate the performance of lay users and experts under similar conditions. During the Find stage, the crowd achieved low values of precision for the three types of tasks, which suggests that this stage is still very challenging for lay users. In the following we present further details on the results for each type of task.";
    a doco:Paragraph .
}

sub:provenance {
  sub:assertion prov:hadPrimarySource ;
    prov:wasAttributedTo orcid:0000-0003-0530-4305 .
}

sub:pubinfo {
  this: dcterms:created "2019-09-20T18:05:11+01:00"^^xsd:dateTime;
    pav:createdBy orcid:0000-0002-7114-6459 .
}
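As a minimal sketch of how the Head graph links the assertion, provenance, and publication-info graphs, the following Python snippet (using rdflib) parses a nanopublication serialized as TriG and prints the paragraph text asserted via c4o:hasContent. The file name nanopub.trig is a placeholder, and the snippet assumes a complete record in which the this:/sub: prefix IRIs and the prov:hadPrimarySource object (elided in the record above) are declared.

# Minimal sketch: read the asserted paragraph of a nanopublication.
# Assumes a complete TriG file at the hypothetical path "nanopub.trig".
from rdflib import ConjunctiveGraph, Namespace

NP = Namespace("http://www.nanopub.org/nschema#")
C4O = Namespace("http://purl.org/spar/c4o/")

g = ConjunctiveGraph()
g.parse("nanopub.trig", format="trig")  # placeholder path

# Follow np:hasAssertion from the nanopublication resource (in the Head graph)
# to the named graph that holds the assertion.
for nanopub in g.subjects(predicate=NP.hasAssertion):
    assertion_graph = g.get_context(g.value(nanopub, NP.hasAssertion))
    # Print every quoted paragraph (c4o:hasContent) found in the assertion graph.
    for paragraph, content in assertion_graph.subject_objects(C4O.hasContent):
        print(paragraph, content)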