@prefix dc: <http://purl.org/dc/terms/> .
@prefix this: .
@prefix sub: .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav: <http://purl.org/pav/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix linkflows: <https://github.com/LaraHack/linkflows_model/blob/master/Linkflows.ttl#> .

sub:Head {
  this: np:hasAssertion sub:assertion;
    np:hasProvenance sub:provenance;
    np:hasPublicationInfo sub:pubinfo;
    a np:Nanopublication .
}

sub:assertion {
  sub:comment-5 a linkflows:ActionNeededComment, linkflows:ContentComment,
      linkflows:NegativeComment, linkflows:ReviewComment;
    linkflows:hasCommentText "*I also do not completely understand the \"anatomy\" (weird term) of the crowdsourcing task: The description in Section 7.2.1 and Figure 3 suggest that the sentence to be annotated is presented to the workers together with the frame label. How can this be determined in advance? I suspect that this is done by assuming a fixed mapping between lexical units and a frame, which obviously neglects potential lexical ambiguity at the level of lexical units. This aspect needs clarification, and it should be quantified to what extent such ambiguities really occur and pose a problem to the system.";
    linkflows:hasImpact "3"^^xsd:positiveInteger;
    linkflows:refersTo , .
}

sub:provenance {
  sub:assertion prov:hadPrimarySource ;
    prov:wasAttributedTo .
}

sub:pubinfo {
  this: dc:created "2019-11-26T09:05:11+01:00"^^xsd:dateTime;
    pav:createdBy .
}