@prefix this: .
@prefix sub: .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix dc: <http://purl.org/dc/terms/> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav: <http://purl.org/pav/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix doco: <http://purl.org/spar/doco/> .
@prefix c4o: <http://purl.org/spar/c4o/> .

sub:Head {
  this: np:hasAssertion sub:assertion ;
    np:hasProvenance sub:provenance ;
    np:hasPublicationInfo sub:pubinfo ;
    a np:Nanopublication .
}

sub:assertion {
  sub:paragraph c4o:hasContent "As a baseline approach, a complete direct comparison is not possible except for 85 wrong datatypes and 13 failed regular expressions (cf. Table 8). However, even in this case it is not possible to provide a precision, since RDFUnit runs through the whole set of resources and possibly catches errors the LD experts didn’t catch, since it considers the ontological schema. This is because the LD experts performed triple-based evaluation using the TripleCheckMate tool, which does not provide schema information directly. Thus, only those experts who are conversant with the schema might be able to identify those errors. Examples of such inconsistencies are datatype detection that is not defined in the ontology, e.g. dates vs. numbers (“1935”^^xsd:integer), or erroneous language tags. Also, rdfs:domain violations were not reported by the LD experts, since for every triple they had to cross-check the ontology definitions for the evaluated property and the rdf:type statements of the resource. Similar combinations apply for all the other pattern types described in Table 8. RDFUnit ran beyond the isolated triple level that the LD experts and crowd were evaluating and checked various combinations of triples." ;
    a doco:Paragraph .
}

sub:provenance {
  sub:assertion prov:hadPrimarySource ;
    prov:wasAttributedTo .
}

sub:pubinfo {
  this: dc:created "2019-09-20T18:05:11+01:00"^^xsd:dateTime ;
    pav:createdBy .
}