# Standard namespace URIs restored; <http://example.org/...> IRIs below are
# placeholders for URIs lost in extraction, not the originals.
@prefix this: <http://example.org/np1> .    # placeholder: nanopub URI lost
@prefix sub: <http://example.org/np1#> .    # placeholder: sub-graph namespace lost
@prefix po: <http://www.essepuntato.it/2008/12/pattern#> .
@prefix t4: <http://example.org/table4> .   # placeholder: table document URI lost
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix dc: <http://purl.org/dc/terms/> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav: <http://purl.org/pav/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix npx: <http://purl.org/nanopub/x/> .
@prefix doco: <http://purl.org/spar/doco/> .
@prefix c4o: <http://purl.org/spar/c4o/> .

sub:Head {
  this: np:hasAssertion sub:assertion ;
    np:hasProvenance sub:provenance ;
    np:hasPublicationInfo sub:pubinfo ;
    a np:Nanopublication .
}

sub:assertion {
  sub:paragraph c4o:hasContent "As reported in Table 4, our crowdsourcing experiments reached a precision of 0.90 for MTurk workers (majority voting) and 0.72 for LD experts. Most of the missing or incomplete values that are extracted from Wikipedia occur with the predicates related to dates, for example: (2005 Six Nations Championship, Date, 12). In these cases, the experts and workers presented a similar behavior, classifying 110 and 107 triples correctly, respectively, out of the 117 assessed triples for this class. The difference in precision between the two approaches can be explained as follows. There were 52 DBpedia triples whose values might seem erroneous, although they were correctly extracted from Wikipedia. One example of these triples is: (English (programming language), Influenced by, ?). We found out that the LD experts classified all these triples as incorrect. In contrast, the workers successfully answered that 50 out of these 52 were correct, since they could easily compare the DBpedia and Wikipedia values in the HITs." ;
    po:contains t4:\#table ;
    a doco:Paragraph .
  t4: npx:introduces t4:\#table .
}

sub:provenance {
  sub:assertion prov:hadPrimarySource <http://example.org/source-article> ;  # placeholder: source URI lost
    prov:wasAttributedTo <http://example.org/agent> .                        # placeholder: agent URI lost
}

sub:pubinfo {
  this: dc:created "2019-09-20T18:05:11+01:00"^^xsd:dateTime ;
    pav:createdBy <http://example.org/creator> .                             # placeholder: creator URI lost
}
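As a minimal sketch of how the head graph ties the three content graphs together, the following Python snippet parses the nanopublication with rdflib and lists its named graphs. The filename np.trig is an assumption, and the script only produces meaningful output once the placeholder URIs above are replaced with the originals.

# Sketch: parse the TriG above (saved as the hypothetical file "np.trig")
# and walk the nanopublication structure declared in its head graph.
from rdflib import ConjunctiveGraph, Namespace, RDF

NP = Namespace("http://www.nanopub.org/nschema#")

g = ConjunctiveGraph()
g.parse("np.trig", format="trig")

# The head graph types the nanopub and links its assertion,
# provenance, and publication-info graphs.
for nanopub in g.subjects(RDF.type, NP.Nanopublication):
    print("nanopub:   ", nanopub)
    print("  assertion: ", g.value(nanopub, NP.hasAssertion))
    print("  provenance:", g.value(nanopub, NP.hasProvenance))
    print("  pubinfo:   ", g.value(nanopub, NP.hasPublicationInfo))

# Enumerate all named graphs and their sizes.
for ctx in g.contexts():
    print(ctx.identifier, "->", len(ctx), "triples")

Since every nanopublication carries exactly one assertion, provenance, and publication-info graph, a check like this is a quick way to confirm that nothing was dropped when the document was reconstructed.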