On Monday evening, April 13, in Washington, DC, the Helen Hayes (HH) Awards were announced with great fanfare as a celebration of the best acting and theater of the preceding calendar year. But were they representative of the best, and what were they really honoring? Inevitably, the bulk of the nominations and awards went to a select group of theaters with the biggest budgets and the most clout with HH judges. Big budgets, afforded by well-heeled donors and well-timed grants, mean shows with high production values - lavish set designs, sumptuous costumes, and high-tech sound and lighting. They also mean a theater can bring in outside talent: actors with name recognition and directors (and their teams) with national reputations.
It invariably follows that the judges are wowed by the high production values big budgets afford, by those who make those budgets available, or by the reputations of those involved in the theatrical process.
To counter the annual complaints that come in privately - fear of blacklisting keeps them out of public view - the Awards Committee keeps tweaking the process, seemingly ad hoc: categories are expanded, nominations increased, and the selection criteria revised and further clarified. The changes have outrun the committee's own written rules, which still call for five nominations per category (see Awards Rules 2e, http://www.helenhayes.org/ ). This amounts to putting band-aids on a process that has sprung a leak, and the leak keeps getting worse. There were twelve nominations for best ensemble; eight for best lead actor and director; seven for best new play or musical, choreography, set design, and sound design; and six in eleven other categories. As you might have expected, this bumper crop of nominations produced ties in five categories - a further increase in the number of winners. Maybe next year we'll be treated to three winners in a category. The awards are starting to resemble the Grammys!
Where are the problems?
For starters, the judges are selected in a closed process, drawing on candidates from the theater community - professional or not (they may serve on boards, volunteer, or donate) - who are fairly well known or come recommended by someone who is. These people aren't walking in off the street, and they certainly aren't contrarians with their own views of what constitutes great theater. They can be counted on to deliver the same kind of results and maintain the status quo. To put it plainly, the deck is stacked.
The rules for evaluating plays are themselves perplexing. The process allows judges to apply different criteria depending on their experience (2d), yet obliges them to score shows on a 0-10 scale in 25 categories without specifying what those scale points mean or what training or experience qualifies anyone to evaluate those categories (2f). What does a 9 or a 10 (or an 8, 7, or 6) mean, and is my 9 the same as that of the next judge with more or less DC theater experience? Presumably, the committee believes the numbers will all average out in the end, particularly because the results are tabulated by an outside firm (2g). We hope the outside firm is not involved in accounting, because you know where that got us.
The selection process favors some companies over others. There is no real competition for Signature in the musical awards categories: it received six of six nominations for supporting actress, five of six for supporting actor, and the same person received three of the six for director. Co-productions and associations - I counted six - have an inherent advantage in funding and rehearsal time. Does a theater need to form a partnership to break through? And new plays get special consideration: as with musicals, one panel of judges evaluates new plays exclusively (3a).
The small number of judges evaluating each show - eight - also points to a problem with the methodology: so small a panel increases the potential error within a category (2b). The Tony Awards, an organization you would think the HH Awards would emulate, determines nominees from not eight but 30 theater professionals, each of whom is asked to see every new show. From the nominations in 27 categories, 750 eligible theater professionals then select the winners from no more than five nominees per category.
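The small-panel objection is just the arithmetic of averaging: the spread of a panel's mean score shrinks in proportion to the square root of the panel size. A minimal simulation sketches the point - the scores here are made up (a hypothetical "true" score of 7.0 with judge-to-judge scatter of 1.5 points), not actual HH data:

```python
# Purely illustrative: simulated 0-10 judge scores for one hypothetical show.
# Assumptions (not HH data): a "true" score of 7.0, judge scatter of 1.5.
# Shows how the spread of a panel's average score shrinks as the panel
# grows from 8 judges to 30.
import random
import statistics

random.seed(1)

def panel_average_spread(panel_size, trials=10_000):
    """Std dev of the panel-average score across many simulated panels."""
    averages = [
        statistics.mean(random.gauss(7.0, 1.5) for _ in range(panel_size))
        for _ in range(trials)
    ]
    return statistics.stdev(averages)

for n in (8, 30):
    print(f"{n} judges: spread of panel average = {panel_average_spread(n):.2f}")
```

Under these assumptions the spread falls roughly as 1.5 divided by the square root of the panel size - noticeably tighter for a 30-judge panel than for one of eight.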
The methodology looks suspect - flawed and biased: instead of reflecting theater excellence, it yields results that may be precise but not accurate. Anyone who eyeballs the data can tell there's a problem.
I invite the HH Awards Committee to post the complete methodology and a validation of the selection process on its website - if such a validation exists. If a methodology cannot be validated, it may be many things, but it is not science. You might as well flip a coin.
This is not to say that the shows selected are not good ones, only that they are not the only good ones out there. You do not need a big budget to mount a good show or performance. I have seen many, many noteworthy (award-winning) performances with modest designs, built on actors who rise to the level of an outstanding script, focused direction, or a true ensemble or collaborative effort.
What's the solution? Why not establish an alternative or independent awards process, such as those used in New York - the NY Drama Critics Circle, Innovative Theater, Drama Desk, and Obie Awards - to honor artists and theaters elbowed out of the current process? Theater fans might also vote on their favorites: patrons' choice awards are given out at certain juried art shows. That might boost interest in the local theater scene, with a salutary effect at the box office. But whatever is done, let's do it independently, so that the same sorry process does not leave the many fine performances and productions unrecognized this time next year.