Uses spaCy to extract part-of-speech (POS) tags from tokenized text. Returns a data frame with token-level POS annotations.
Usage
extract_pos_tags(
tokens,
include_lemma = TRUE,
include_entity = FALSE,
include_dependency = FALSE,
model = "en_core_web_sm"
)
Arguments
- tokens
A quanteda tokens object or character vector of texts.
- include_lemma
Logical; include lemmatized forms (default: TRUE).
- include_entity
Logical; include named entity recognition (default: FALSE).
- include_dependency
Logical; include dependency parsing (default: FALSE).
- model
Character; spaCy model to use (default: "en_core_web_sm").
Value
A data frame with columns:
- doc_id: Document identifier
- sentence_id: Sentence number within document
- token_id: Token position within sentence
- token: Original token
- pos: Universal POS tag (e.g., NOUN, VERB, ADJ)
- tag: Detailed POS tag (e.g., NN, VBD, JJ)
- lemma: Lemmatized form (if include_lemma = TRUE)
- entity: Named entity type (if include_entity = TRUE)
- head_token_id: Head token in dependency tree (if include_dependency = TRUE)
- dep_rel: Dependency relation type, e.g., nsubj, dobj (if include_dependency = TRUE)
Details
This function requires the spacyr package and a working Python environment with spaCy installed. If spaCy is not initialized, this function will attempt to initialize it with the specified model.
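A minimal usage sketch, assuming spacyr and the "en_core_web_sm" spaCy model are installed in the active Python environment (the example texts and the pos_df variable name are illustrative):

```r
library(spacyr)

# Initialize spaCy explicitly; extract_pos_tags() would otherwise
# attempt this itself with the specified model.
spacy_initialize(model = "en_core_web_sm")

texts <- c(doc1 = "The cat sat on the mat.",
           doc2 = "She quickly read two books.")

# Token-level POS annotations with lemmas; entity and dependency
# columns are omitted because their flags default to FALSE.
pos_df <- extract_pos_tags(texts, include_lemma = TRUE)
head(pos_df)

# Release the background Python process when finished.
spacy_finalize()
```

Passing a quanteda tokens object instead of a character vector works the same way; the returned data frame indexes rows by doc_id, sentence_id, and token_id.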
See also
Other lexical:
calculate_text_readability(),
clear_lexdiv_cache(),
detect_multi_words(),
extract_keywords_keyness(),
extract_keywords_tfidf(),
extract_morphology(),
extract_named_entities(),
lexical_analysis,
lexical_diversity_analysis(),
lexical_frequency_analysis(),
plot_keyness_keywords(),
plot_keyword_comparison(),
plot_lexical_diversity_distribution(),
plot_morphology_feature(),
plot_readability_by_group(),
plot_readability_distribution(),
plot_tfidf_keywords(),
plot_top_readability_documents(),
render_displacy_dep(),
render_displacy_ent(),
summarize_morphology()
