@prefix dct: <http://purl.org/dc/terms/> .
@prefix orcid: <https://orcid.org/> .
@prefix this: .
@prefix sub: .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav: <http://purl.org/pav/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix doco: <http://purl.org/spar/doco/> .
@prefix c4o: <http://purl.org/spar/c4o/> .

sub:Head {
  this: np:hasAssertion sub:assertion ;
    np:hasProvenance sub:provenance ;
    np:hasPublicationInfo sub:pubinfo ;
    a np:Nanopublication .
}

sub:assertion {
  sub:paragraph c4o:hasContent "Given the use case corpus, we first extract the complete set of verbs through a standard NLP pipeline: tokenization, lemmatization and POS tagging. POS information is required to identify verbs, while lemmas are needed to build the ranking. TreeTagger is exploited to fulfill these tasks. Although our input has a relatively low dimension (i.e., 7.25 million tokens circa), we observe that the tool is not able to handle it as a whole, since it crashes with a segmentation fault even on a powerful machine (i.e., 24 cores CPU at 2.53 GHz, 64 GB RAM). Consequently, we had to run it over each document, thus impacting on the processing time. However, we believe that further investigation will lead to the optimization of such issue." ;
    a doco:Paragraph .
}

sub:provenance {
  sub:assertion prov:hadPrimarySource ;
    prov:wasAttributedTo orcid:0000-0002-5456-7964 .
}

sub:pubinfo {
  this: dct:created "2019-11-10T18:05:11+01:00"^^xsd:dateTime ;
    pav:createdBy orcid:0000-0002-7114-6459 .
}
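
# The quoted paragraph describes a concrete workflow: per-document
# tokenization, lemmatization and POS tagging with TreeTagger, followed by
# a frequency ranking of verb lemmas. The commented Python below is a
# minimal sketch of that workflow under stated assumptions, not the
# authors' implementation: the `treetaggerwrapper` package, the
# `corpus/*.txt` layout and the startswith('V') verb test are assumptions,
# and the TreeTagger binary must be installed separately. It is kept in
# comments so this file remains valid TriG.
#
#   from collections import Counter
#   from pathlib import Path
#   import treetaggerwrapper  # third-party wrapper around the TreeTagger binary
#
#   tagger = treetaggerwrapper.TreeTagger(TAGLANG='en')
#
#   verb_counts = Counter()
#   # Tag one document at a time, mirroring the workaround the paragraph
#   # reports for the segmentation fault on the full 7.25M-token corpus.
#   for doc in sorted(Path('corpus').glob('*.txt')):  # hypothetical layout
#       text = doc.read_text(encoding='utf-8')
#       # tag_text() yields "token<TAB>POS<TAB>lemma" lines; make_tags()
#       # parses them into Tag tuples with .word, .pos and .lemma fields
#       for tag in treetaggerwrapper.make_tags(tagger.tag_text(text)):
#           # In TreeTagger's English tagset, verb tags start with 'V';
#           # getattr() skips unparseable NotTag entries, which lack .pos
#           if getattr(tag, 'pos', '').startswith('V'):
#               verb_counts[tag.lemma] += 1
#
#   # The ranking mentioned in the paragraph: verb lemmas by frequency
#   for lemma, freq in verb_counts.most_common():
#       print(lemma, freq)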