Title: | Do Things with Words |
---|---|
Description: | Doing things with words currently means scaling documents on a presumed underlying dimension on the basis of word frequencies and heroic assumptions about language generation. |
Authors: | Will Lowe [aut, cre] |
Maintainer: | Will Lowe <[email protected]> |
License: | file LICENSE |
Version: | 0.5.0 |
Built: | 2024-11-07 04:27:22 UTC |
Source: | https://github.com/conjugateprior/austin |
Extract a word count matrix with documents as rows and words as columns
as.docword(wfm)
wfm |
an object of class wfm |
This is a helper function for wfm objects. Use it instead of manipulating wfm objects directly.
a document by word count matrix
Will Lowe
Constructs a wfm object from various other kinds of objects
as.wfm(mat, word.margin = 1)
mat |
a matrix of counts |
word.margin |
which margin of mat represents the words |
an object of class wfm
Will Lowe
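A minimal sketch of converting a plain count matrix (assuming the austin package is installed and loaded):

```r
library(austin)

## a small count matrix with words as rows
mat <- matrix(c(10, 2, 3, 1, 8, 5), nrow = 3,
              dimnames = list(c('w1', 'w2', 'w3'), c('D1', 'D2')))
m <- as.wfm(mat, word.margin = 1)  # rows hold the words
is.wfm(m)                          # check the interface contract
```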
Extract a matrix of word counts with words as rows and documents as columns
as.worddoc(wfm)
wfm |
an object of class wfm |
This is a helper function for wfm objects. Use it instead of manipulating wfm objects directly.
a word by document count matrix
Will Lowe
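The two accessors are transposes of each other; a short sketch using the lbg example data shipped with the package:

```r
library(austin)

data(lbg)
WD <- as.worddoc(lbg)   # words as rows, documents as columns
DW <- as.docword(lbg)   # documents as rows, words as columns
all(WD == t(DW))        # same counts, opposite orientation
```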
Austin helps you see what people, usually politicians, do with words. Currently that means how positions on a presumed underlying policy scale are expressed through word occurrence counts. The models implemented here try to recover those positions using only this information, plus some heroic assumptions about language generation, e.g. unidimensionality, conditional independence of words given ideal point, and Poisson-distributed word counts.
The package currently implements Wordfish (Slapin and Proksch, 2008) and Wordscores (Laver, Benoit and Garry, 2003). See references for details.
Computes bootstrap standard errors for document positions from a fitted Wordfish model
bootstrap.se(object, L = 50, verbose = FALSE, ...)
object |
a fitted Wordfish model |
L |
how many replications |
verbose |
Give progress updates |
... |
Unused |
This function computes a parametric bootstrap by resampling counts from the fitted word counts, refitting the model, and storing the document positions. The standard deviations for each resampled document position are returned.
Standard errors for document positions
Will Lowe
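A quick sketch on simulated data; L is kept small here purely to keep the run short, and the default of 50 (or more) is preferable in practice:

```r
library(austin)

dd <- sim.wordfish(docs = 10, vocab = 20)
wf <- wordfish(dd$Y)
se <- bootstrap.se(wf, L = 10)  # bootstrap standard errors for document positions
```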
Construct a Wordscores model from reference document scores
classic.wordscores(wfm, scores)
wfm |
object of class wfm |
scores |
reference document positions/scores |
This version of Wordscores is exactly as described in Laver et al. 2003 and is provided for historical interest and continued replicability of older analyses.
scores is a vector of document scores corresponding to the documents in the word frequency matrix wfm. The function computes wordscores and returns a model from which virgin text scores can be predicted.
An old-style Wordscores analysis.
Will Lowe
Laver, M., Benoit, K. and Garry, J. (2003) 'Extracting policy positions from political texts using words as data' American Political Science Review 97(2) 311-331.
data(lbg)
ref <- getdocs(lbg, 1:5)
ws <- classic.wordscores(ref, scores = seq(-1.5, 1.5, by = 0.75))
summary(ws)
vir <- getdocs(lbg, 'V1')
predict(ws, newdata = vir)
Lists wordscores from a fitted Wordscores model.
## S3 method for class 'classic.wordscores' coef(object, ...)
object |
a fitted Wordscores model |
... |
extra arguments, currently unused |
The wordscores
Will Lowe
Extract word parameters beta and psi in an appropriate model parameterization
## S3 method for class 'wordfish' coef(object, form = c("poisson", "multinomial"), ...)
object |
an object of class wordfish |
form |
which parameterization of the model to return parameters for |
... |
extra arguments |
Slope parameters and intercepts are labelled beta and psi respectively. In multinomial form the coefficient names reflect the fact that the first-listed word is taken as the reference category. In poisson form, the coefficients are labelled by the words they correspond to.
Note that in both forms there will be beta and psi parameters, so make sure they are the ones you want.
A data.frame of word parameters from a wordfish model in one or other parameterization.
Will Lowe
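A sketch showing both parameterizations on simulated data:

```r
library(austin)

dd <- sim.wordfish()
wf <- wordfish(dd$Y)
head(coef(wf, form = "poisson"))      # beta and psi labelled by word
head(coef(wf, form = "multinomial"))  # first word is the reference category
```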
Irish Confidence Debate
These are word counts from the no-confidence motion debated in the Irish Dáil from 16-18 October 1991 over the future of the Fianna Fáil-Progressive Democrat coalition.
daildata
is a word frequency object.
Laver, M. & Benoit, K.R. (2002). Locating TDs in Policy Spaces: Wordscoring Dáil Speeches. Irish Political Studies, 17(1), 59–73.
A random sample of words and their frequency in German political party manifestos from 1990-2005.
demanif is a word frequency matrix.
Wordfish website (http://www.wordfish.org)
J. Slapin and S.-O. Proksch (2008) 'A scaling model for estimating time-series party positions from texts' American Journal of Political Science 52(3), 705-722.
A word frequency matrix from the economic sections of German political party manifestos from 1990-2005.
demanif.econ is a word frequency matrix.
These data are courtesy of S.-O. Proksch.
J. Slapin and S.-O. Proksch (2008) 'A scaling model for estimating time-series party positions from texts' American Journal of Political Science 52(3), 705-722.
A word frequency matrix from the foreign policy sections of German political party manifestos from 1990-2005.
demanif.foreign is a word frequency matrix.
These data are courtesy of S.-O. Proksch.
J. Slapin and S.-O. Proksch (2008) 'A scaling model for estimating time-series party positions from texts' American Journal of Political Science 52(3), 705-722.
A word frequency matrix from the societal sections of German political party manifestos from 1990-2005.
demanif.soc is a word frequency matrix.
These data are courtesy of S.-O. Proksch.
J. Slapin and S.-O. Proksch (2008) 'A scaling model for estimating time-series party positions from texts' American Journal of Political Science 52(3), 705-722.
Extracts the document names from a wfm object.
docs(wfm) docs(wfm) <- value
wfm |
an object of type wfm |
value |
replacement if assignment |
A list of document names.
Will Lowe
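A short sketch with the lbg example data, whose documents are the reference texts R1 to R5 and the virgin text V1:

```r
library(austin)

data(lbg)
docs(lbg)   # document names: R1 ... R5 and V1
```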
Extract a list of matching words from another list of words
extractwords(words, patternfile, pattern.type = c("glob", "re"))
words |
the words against which patterns are matched |
patternfile |
file containing the patterns to match, one per line |
pattern.type |
marks whether the patterns are 'globs' or full regular expressions |
A list of matching words.
Will Lowe
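A sketch with a throwaway pattern file; the file contents and word list here are purely illustrative:

```r
library(austin)

pf <- tempfile()
writeLines(c("tax*", "spend*"), pf)   # glob patterns, one per line
extractwords(c("tax", "taxation", "spending", "health"),
             patternfile = pf, pattern.type = "glob")
```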
Extracts the estimated word rates from a fitted Wordfish model
## S3 method for class 'wordfish' fitted(object, ...)
object |
a fitted Wordfish model |
... |
Unused |
Expected counts in the word frequency matrix
Will Lowe
Gets particular documents from a wfm by name or index
getdocs(wfm, which)
wfm |
a wfm object |
which |
names or indexes of documents |
getdocs is essentially a subset command that picks the correct margin for you.
A smaller wfm object containing only the desired documents with the same word margin setting as the original matrix.
Will Lowe
as.wfm, as.docword, as.worddoc, docs, words, is.wfm, wordmargin
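A sketch of both addressing modes, using the lbg example data:

```r
library(austin)

data(lbg)
ref <- getdocs(lbg, 1:5)    # by index: the five reference documents
vir <- getdocs(lbg, 'V1')   # by name: the virgin text
```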
Irish budget debate 2009
These are word counts from the 2009 Budget debate in Ireland.
This is a word frequency matrix. Loading this data also makes available iebudget2009cov, which contains covariates for the speakers.
Get cheap starting values for a Wordfish model
initialize.urfish(tY)
tY |
a document by word matrix of counts |
This function is only called by model fitting routines and therefore does not take wfm objects. tY is assumed to be in document by term form.
In the poisson form of the model, incidental parameters (alpha) are set to log(rowmeans/rowmeans[1]) and intercept (psi) values are set to log(colmeans). These are subtracted from the logged data matrix, which is then decomposed by SVD. Word slopes (beta) and document positions (theta) are estimated by rescaling the SVD output.
List with elements:
alpha |
starting values of alpha parameters |
psi |
starting values of psi parameters |
beta |
starting values of beta parameters |
theta |
starting values for document positions |
Will Lowe
This is substantially the method used by Slapin and Proksch's original code.
Interest Groups and the European Commission
Word counts from interest groups and a European Commission proposal to reduce CO2 emissions in 2007.
comm1 and comm2 are the Commission's proposal before and after the proposals of the interest groups.
H. Kluever (2009) 'Measuring interest group influence using quantitative text analysis' European Union Politics 10(4) 535-549.
Checks whether an object is a Word Frequency Matrix
is.wfm(x)
x |
a matrix of counts |
Whether the object can be used as a Word Frequency Matrix
Will Lowe
Interest Groups and the European Commission
Word counts from interest groups and a European Commission proposal to reduce CO2 emissions in 2007.
K2009 is a jl_df object.
H. Kluever (2009) 'Measuring interest group influence using quantitative text analysis' European Union Politics 10(4) 535-549.
Irish Confidence Debate (jl format)
These are word counts from the no-confidence motion debated in the Irish Dáil from 16-18 October 1991 over the future of the Fianna Fáil-Progressive Democrat coalition.
LB2003 is a jl_df object.
Laver, M. & Benoit, K.R. (2002). Locating TDs in Policy Spaces: Wordscoring Dáil Speeches. Irish Political Studies, 17(1), 59–73.
Irish budget debate 2009
These are word counts from the 2009 Budget debate in Ireland.
LB2013 is a jl_df object.
W. Lowe and K. Benoit (2013) 'Validating estimates of latent traits from textual data using human judgment as a benchmark' Political Analysis 21(3) 298-313.
Example data from Laver Benoit and Garry (2003)
This is the example word count data from Laver, Benoit and Garry's (2003) article on Wordscores. Documents R1 to R5 are assumed to have known positions: -1.5, -0.75, 0, 0.75, 1.5. Document V1 is assumed unknown. The ‘correct’ position for V1 is presumed to be -0.45. classic.wordscores generates approximately -0.45.
To replicate the analysis in the paper, use the wordscores function with identification fixing the first five document positions and leaving the position of V1 to be predicted.
Laver, Benoit and Garry (2003) 'Extracting policy positions from political texts using words as data' American Political Science Review 97(2) 311-331.
Example data from Laver Benoit and Garry (2003)
This is the example word count data from Laver, Benoit and Garry's (2003) article on Wordscores. Documents R1 to R5 are assumed to have known positions: -1.5, -0.75, 0, 0.75, 1.5. Document V1 is assumed unknown. The ‘correct’ position for V1 is presumed to be -0.45. classic.wordscores generates approximately -0.45.
To replicate the analysis in the paper, use the wordscores function with identification fixing the first five document positions and leaving the position of V1 to be predicted.
LBG2003 is a jl_df object.
M. Laver, K. Benoit and J. Garry (2003) 'Extracting policy positions from political texts using words as data' American Political Science Review 97(2) 311-331.
UK manifesto data from Laver et al.
These are word counts from the manifestos of the three main UK parties for the 1992 and 1997 elections.
LG2000 is a jl_df object.
M. Laver, K. Benoit and J. Garry (2003) 'Extracting policy positions from political texts using words as data' American Political Science Review 97(2) 311-331.
Plots Wordscores from a fitted Wordscores model
## S3 method for class 'classic.wordscores' plot(x, ...)
x |
a fitted Wordscores model |
... |
other arguments, passed to the dotchart command |
A plot of the wordscores in increasing order.
Will Lowe
Plots sorted beta and optionally also psi parameters from a Wordfish model
## S3 method for class 'coef.wordfish' plot(x, pch = 20, psi = TRUE, ...)
x |
a fitted Wordfish model |
pch |
Default is to use small dots to plot positions |
psi |
whether to plot word fixed effects |
... |
Any extra graphics parameters to pass in |
A plot of sorted beta and optionally psi parameters.
Will Lowe
Plots a fitted Wordfish model with confidence intervals
## S3 method for class 'wordfish' plot(x, truevals = NULL, level = 0.95, pch = 20, ...)
x |
a fitted Wordfish model |
truevals |
True document positions if known |
level |
Intended coverage of confidence intervals |
pch |
Default is to use small dots to plot positions |
... |
Any extra graphics parameters to pass in |
A plot of sorted estimated document positions, with confidence intervals and true document positions, if these are available.
Will Lowe
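A sketch on simulated data, where the true positions are known and can be overlaid:

```r
library(austin)

dd <- sim.wordfish()
wf <- wordfish(dd$Y)
plot(wf, truevals = dd$theta, level = 0.95)
```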
Predicts positions of new documents from a fitted Wordscores model
## S3 method for class 'classic.wordscores' predict(object, newdata = NULL, rescale = c("lbg", "none"), z = 0.95, ...)
object |
Fitted wordscores model |
newdata |
An object of class wfm in which to look for word counts to predict document ideal points. If omitted, the reference documents are used. |
rescale |
Rescale method for estimated positions. |
z |
Notional confidence interval coverage |
... |
further arguments (quietly ignored) |
This is the method described in Laver et al. 2003, including rescaling for more than one virgin text. Confidence intervals are not provided if rescale is 'none'.
predict.wordscores produces a vector of predicted document positions with standard errors and confidence intervals.
Will Lowe
Predicts positions of new documents using a fitted Wordfish model
## S3 method for class 'wordfish' predict( object, newdata = NULL, se.fit = FALSE, interval = c("none", "confidence"), level = 0.95, ... )
object |
A fitted wordfish model |
newdata |
An optional data frame or object of class wfm in which to look for word counts from which to predict document ideal points. If omitted, the fitted values are used. |
se.fit |
A switch indicating if standard errors are required. |
interval |
Type of interval calculation |
level |
Tolerance/confidence level |
... |
further arguments passed to or from other methods. |
Standard errors for document positions are generated by numerically inverting the relevant Hessians from the profile likelihood of the multinomial form of the model.
predict.wordfish produces a vector of predictions or a matrix of predictions and bounds with column names ‘fit’ and ‘se.fit’, and with ‘lwr’ and ‘upr’ if ‘interval’ is also set.
Will Lowe
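A sketch requesting standard errors and confidence intervals on simulated data:

```r
library(austin)

dd <- sim.wordfish()
wf <- wordfish(dd$Y)
predict(wf, se.fit = TRUE, interval = "confidence", level = 0.95)
```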
Linearly rescales estimated document positions on the basis of two control points.
rescale(object, ident = c(1, -1, 10, 1))
object |
fitted wordfish or wordscores object |
ident |
two document indexes and their desired new positions |
The rescaled positions set the document with index ident[1] to position ident[2] and the document with index ident[3] to position ident[4]. The fitted model passed as the first argument is not affected.
A data frame containing the rescaled document positions with standard errors if available.
Will Lowe
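A sketch pinning two documents to chosen positions:

```r
library(austin)

dd <- sim.wordfish()
wf <- wordfish(dd$Y)
## fix document 1 at -1 and document 10 at 1; wf itself is unchanged
rescale(wf, ident = c(1, -1, 10, 1))
```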
Simulates data and returns parameter values using Wordfish model assumptions: Counts are sampled under the assumption of independent Poisson draws with log expected means linearly related to a lattice of document positions.
sim.wordfish( docs = 10, vocab = 20, doclen = 500, dist = c("spaced", "normal"), scaled = TRUE )
docs |
How many ‘documents’ should be generated |
vocab |
How many ‘word’ types should be generated |
doclen |
A scalar ‘document’ length or vector of lengths |
dist |
the distribution of ‘document’ positions |
scaled |
whether the document positions should be mean 0, unit sd |
This function draws ‘docs’ document positions from a Normal distribution, or regularly spaced between 1/‘docs’ and 1.
‘vocab’/2 word slopes are 1, the rest -1. All word intercepts are 0. ‘doclen’ words are then sampled from a multinomial with these parameters.
Document position (theta) is sorted in increasing size across the documents. If ‘scaled’ is true it is normalized to mean zero, unit standard deviation. This is most helpful when dist=normal.
Y |
A sample word-document matrix |
theta |
The ‘document’ positions |
doclen |
The ‘document’ lengths |
beta |
‘Word’ slopes |
psi |
‘Word’ intercepts |
Will Lowe
A random sample of words and their frequency in German political party manifestos from 1990-2005.
SP2008 is a jl_df object.
Wordfish website (http://www.wordfish.org)
J. Slapin and S.-O. Proksch (2008) 'A scaling model for estimating time-series party positions from texts' American Journal of Political Science 52(3), 705-722.
A word frequency matrix from the economic sections of German political party manifestos from 1990-2005.
SP2008_econ is a jl_df object.
These data are courtesy of S.-O. Proksch.
J. Slapin and S.-O. Proksch (2008) 'A scaling model for estimating time-series party positions from texts' American Journal of Political Science 52(3), 705-722.
A word frequency matrix from the foreign policy sections of German political party manifestos from 1990-2005.
SP2008_for is a jl_df object.
These data are courtesy of S.-O. Proksch.
J. Slapin and S.-O. Proksch (2008) 'A scaling model for estimating time-series party positions from texts' American Journal of Political Science 52(3), 705-722.
A word frequency matrix from the societal sections of German political party manifestos from 1990-2005.
SP2008_soc is a jl_df object.
These data are courtesy of S.-O. Proksch.
J. Slapin and S.-O. Proksch (2008) 'A scaling model for estimating time-series party positions from texts' American Journal of Political Science 52(3), 705-722.
Summarises a Wordscores model
## S3 method for class 'classic.wordscores' summary(object, ...)
object |
a fitted wordscores model |
... |
extra arguments (currently ignored) |
To see the wordscores, use coef.
A summary of information about the reference documents used to fit the model.
Will Lowe
Summarises estimated document positions from a fitted Wordfish model
## S3 method for class 'wordfish' summary(object, level = 0.95, ...)
object |
fitted wordfish model |
level |
confidence interval coverage |
... |
extra arguments, e.g. level |
If ‘level’ is passed to the function, e.g. 0.95 for 95 percent confidence, intervals of the appropriate width are generated.
A data.frame containing estimated document position with standard errors and confidence intervals.
Will Lowe
Removes low-frequency words and optionally subsamples those that remain
trim(wfm, min.count = 5, min.doc = 5, sample = NULL, verbose = TRUE)
wfm |
an object of class wfm, or a data matrix |
min.count |
the smallest permissible word count |
min.doc |
the fewest permissible documents a word can appear in |
sample |
how many words to randomly retain |
verbose |
whether to say what we did |
If sample is a number then this many words will be retained after the min.count and min.doc filters have been applied.
Will Lowe
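A sketch on the lbg example data; the thresholds here are illustrative:

```r
library(austin)

data(lbg)
small <- trim(lbg, min.count = 5, min.doc = 2)
```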
UK manifesto data from Laver et al.
These are word counts from the manifestos of the three main UK parties for the 1992 and 1997 elections.
ukmanif is a word frequency object.
Laver, Benoit and Garry (2003) 'Extracting policy positions from political texts using words as data' American Political Science Review 97(2) 311-331.
A word count matrix that knows which margin holds the words.
wfm(mat, word.margin = 1)
mat |
matrix of word counts or the name of a csv file of word counts |
word.margin |
which margin holds the words |
If mat is a filename it should name a comma-separated value file with row labels in the first column and column labels in the first row. Which margin represents words and which documents is specified by word.margin, which defaults to words as rows.
A word frequency matrix is defined as any two-dimensional matrix with non-empty row and column names and dimnames 'words' and 'docs' (in either order). The actual class of such an object is not important for the operation of the functions in this package, so wfm is essentially an interface. The function is.wfm is a (currently rather loose) check of whether an object fulfils the interface contract.
For such objects the convenience accessors as.docword and as.worddoc can be used to get counts whichever way up you need them.
words returns the words and docs returns the document titles. wordmargin reminds you which margin contains the words. Assigning to wordmargin flips the dimension names.
To extract particular documents by name or index, use getdocs.
as.wfm attempts to convert things to word frequency matrices. This functionality is currently limited to objects on which as.matrix already works, and to TermDocumentMatrix and DocumentTermMatrix objects from the tm package.
A word frequency matrix from a suitable object, or read from a file if mat is character. Which margin is treated as representing words is set by word.margin.
Will Lowe
as.wfm, as.docword, as.worddoc, docs, words, is.wfm, wordmargin
mat <- matrix(1:6, ncol = 2)
rownames(mat) <- c('W1', 'W2', 'W3')
colnames(mat) <- c('D1', 'D2')
m <- wfm(mat, word.margin = 1)
getdocs(as.docword(m), 'D2')
Transforms a wfm to the format used by BMR/BLR
wfm2bmr(y, wfm, filename)
y |
integer dependent variable, may be NULL |
wfm |
a word frequency matrix |
filename |
Name of the file to save data to |
BMR uses a sparse matrix format similar to that used by SVMlight.
Each line contains an optional dependent variable index and a sequence of indexes and feature value pairs divided by colons. Indexes refer to the words with non-zero counts in the original matrix, and the feature values are the counts.
A file containing the variables in sparse matrix format.
Will Lowe
Transforms a wfm to the format used by the lda package
wfm2lda(wfm, dir = NULL, names = c("mult.dat", "vocab.dat"))
wfm |
a word frequency matrix |
dir |
a directory in which to dump the converted data |
names |
Names of the data and vocabulary file respectively |
See the documentation of the lda package for the relevant object structures and file formats.
A list containing
data |
zero-indexed word frequency information about a set of documents |
vocab |
a vocabulary list |
unless dir is specified. If dir is specified then the same information is dumped to 'vocab.dat' and 'mult.dat' in the dir folder.
Will Lowe
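A sketch of the in-memory conversion (no dir is given, so nothing is written to disk):

```r
library(austin)

data(lbg)
ld <- wfm2lda(lbg)  # a list with elements 'data' and 'vocab'
str(ld$vocab)
```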
Estimates a Wordfish model using Conditional Maximum Likelihood.
wordfish( wfm, dir = c(1, length(docs(wfm))), control = list(tol = 1e-06, sigma = 3, startparams = NULL, conv.check = c("ll", "cor")), verbose = FALSE )
wfm |
a word frequency matrix |
dir |
set global identification by forcing |
control |
list of estimation options |
verbose |
produce a running commentary |
Fits a Wordfish model with document ideal points constrained to mean zero and unit standard deviation.
The control list specifies options for the estimation process. conv.check is either 'll', which stops when the difference in log likelihood between iterations is less than tol, or 'cor', which stops when one minus the correlation between the thetas from the current and the previous iterations is less than tol. sigma is the standard deviation for the beta prior in poisson form. startparams is a list of starting values (theta, beta, psi and alpha) or a previously fitted Wordfish model for the same data.
verbose generates a running commentary during estimation.
The model has two equivalent forms: a poisson model with two sets of document and two sets of word parameters, and a multinomial with two sets of word parameters and document ideal points. The first form is used for estimation, the second is available for alternative summaries, prediction, and profile standard error calculations.
The model is regularized by assuming a prior on beta with mean zero and standard deviation sigma (in poisson form). If you don't want to regularize, set sigma to a large number.
An object of class wordfish. This is a list containing:
dir |
global identification of the dimension |
theta |
document positions |
alpha |
document fixed effects |
beta |
word slope parameters |
psi |
word fixed effects |
docs |
names of the documents |
words |
names of words |
sigma |
regularization parameter for betas in poisson form |
ll |
final log likelihood |
se.theta |
standard errors for document position |
data |
the original data |
Will Lowe
Slapin and Proksch (2008) 'A Scaling Model for Estimating Time-Series Party Positions from Texts' American Journal of Political Science 52(3) 705-722.
plot.wordfish, summary.wordfish, coef.wordfish, fitted.wordfish, predict.wordfish, sim.wordfish
dd <- sim.wordfish()
wf <- wordfish(dd$Y)
summary(wf)
Checks which margin (rows or columns) of a Word Frequency Matrix holds the words
wordmargin(x)
x |
a word frequency matrix |
Changing the wordmargin by assignment just swaps the dimnames.
1 if words are rows and 2 if words are columns.
Will Lowe
Extracts the words from a wfm object
words(wfm) words(wfm) <- value
wfm |
an object of type wfm |
value |
replacement if assignment |
A list of words.
Will Lowe