@prefix dc: <http://purl.org/dc/terms/> .
@prefix this: <http://purl.org/np/RANT75p9EbuySUWpX_8nxgBq_JRVCtyfcODFkOcYe7g5Y> .
@prefix sub: <http://purl.org/np/RANT75p9EbuySUWpX_8nxgBq_JRVCtyfcODFkOcYe7g5Y#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav: <http://purl.org/pav/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix linkflows: <https://github.com/LaraHack/linkflows_model/blob/master/Linkflows.ttl#> .
sub:Head {
  this: np:hasAssertion sub:assertion ;
    np:hasProvenance sub:provenance ;
    np:hasPublicationInfo sub:pubinfo ;
    a np:Nanopublication .
}
sub:assertion {
  sub:comment-5 a linkflows:ActionNeededComment , linkflows:ContentComment ,
      linkflows:NegativeComment , linkflows:ReviewComment ;
    linkflows:hasCommentText "*I also do not completely understand the \"anatomy\" (weird term) of the crowdsourcing task: The description in Section 7.2.1 and Figure 3 suggest that the sentence to be annotated is presented to the workers together with the frame label. How can this be determined in advance? I suspect that this is done by assuming a fixed mapping between lexical units and a frame, which obviously neglects potential lexical ambiguity at the level of lexical units. This aspect needs clarification, and it should be quantified to what extent such ambiguities really occur and pose a problem to the system." ;
    linkflows:hasImpact "3"^^xsd:positiveInteger ;
    linkflows:refersTo <http://purl.org/nanopub/temp/linkflows/sample-paper-2/v1/f3#figure> ,
      <http://purl.org/np/RAVvOHUZoDh5eHszog4zZdOAlxErkVbZ1Hif2-taGQOFM#section> .
}
sub:provenance {
  sub:assertion prov:hadPrimarySource <http://dx.doi.org/10.3233/SW-170269> ;
    prov:wasAttributedTo <https://orcid.org/0000-0001-6549-066X> .
}
sub:pubinfo {
  this: dc:created "2019-11-26T09:05:11+01:00"^^xsd:dateTime ;
    pav:createdBy <https://orcid.org/0000-0002-7114-6459> .
}