CCCC 2014 Reviews

A.18 “The WPA Census: What Do the Numbers Tell Us?”

Reviewed by Joel Wingard (wingardj@moravian.edu)

Chair: Jill Gladstein, Swarthmore College, PA

Speakers:
Jill Gladstein, Swarthmore College, PA, “From Apples to Oranges: Deciding which Variables to Consider when Crafting National and Local Arguments about Writing Program Design”
Dara Regaignon, Pomona College, Claremont, CA, “Local Context, Best Practices, and Big-Picture Empirical Data”
Jennifer Wells, Florida State University, Tallahassee, “Hidden in Plain Sight: What the WPA Census Reveals about Writing Centers and Writing Center Directors”
Brandon Fralix, Bloomfield College, NJ, “MSIs and Basic Writing: An Empirical Examination of Basic Writing Practices at Minority-Serving Institutions”

This session featured the four members of the WPA Census team, who reported on their work on the project so far. They sought to contextualize the project in terms of historical surveys of writing programs, to share some of their preliminary data, to suggest how messy the data are to interpret, and to particularize some of their findings regarding writing centers and minority-serving institutions. The audience numbered about 70 people.

The first speaker, Dara Regaignon of Pomona College, provided an overview of the Census project and put it in historical perspective. She noted tensions between pairs of competing concerns: obsolescence versus responsibility to the historical record, and diversity versus best practices. Looking forward, she said that once the researchers have analyzed their data, the data will be available at an open-access website, and institutions participating in the Census will be able to update their information at five-year intervals. Thus, recognizing that data collection of this type runs the risk of obsolescence, the project team hopes to keep the data from becoming useless as they age. In terms of the history of this kind of research, she said that empirical research on writing programs in the U.S. has been going on for more than 100 years, but those studies were one-off efforts, never in conversation with one another. This group is trying both to make data from diverse professional situations visible and to relate and interpret the data, not just collect them. The project grew from the Council of Writing Program Administrators Executive Board’s desire to know about and promote diversity among the people who do that kind of work, and the initial leaders of the Census had experience surveying one segment of American higher education for their book Writing Program Administration at Small Liberal Arts Colleges (2012). Certainly the Census data show diversity, but not simply in terms of race, gender, or sexual orientation. She also interrogated the notion of best practices in the field, saying that it implies a top-down structure and encourages imitation. The Census will reveal diverse local practices that emerge from local conditions, such as an institution’s size, mission, and the population it serves; it will thus be descriptive of the field at a particular moment rather than prescriptive in the way best practices tend to be.

The second speaker, Jill Gladstein of Swarthmore College, reminded the audience that the data collected so far are preliminary and cautioned against premature citation. She then outlined the team’s data-collection methods and some problems they encountered before sharing selected bits of preliminary data. She named the general areas the more than 200 survey questions covered and referred the audience to the project website (www.wpacensus.swarthmore.edu). The survey was sent to 1,621 public and private, not-for-profit American colleges and universities offering four-year undergraduate degrees. The team tried to identify the appropriate person or persons on each campus but found the task difficult because institutional websites differ so much from one another. Seven hundred eighteen replies came back, for a 44% response rate. (After CCCC, the team sent out another invitation, hoping to push the response rate above 50%.) Of these 718 schools, 612 (85%) said they had a writing program and 106 (15%) said they did not; at the same time, 687 (96%) said they had a writing requirement, while 31 (4%) did not. The gap between these two figures, she said, indicates more of the messiness of the data: schools evidently use different terms, such as “program” and “requirement,” for the same thing. The team is writing a grant proposal to allow, among other things, the collected data to be run through SPSS for analysis. One item the researchers want to clarify is how to capture schools where writing leadership is embedded or decentralized rather than explicit, so that such institutions do not elude attention. When this is accomplished, she said, the field should have a clearer sense of the diversity of programs and of trends in this area of American higher education.
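For readers who want to check the arithmetic, the percentages above follow directly from the reported counts. The short Python sketch below simply recomputes them (the counts are those reported in the session, not drawn independently from the Census dataset):

    # Recompute the response-rate figures reported by the Census team.
    invited = 1621        # four-year, not-for-profit U.S. institutions surveyed
    responded = 718       # replies received before CCCC 2014

    print(f"Response rate: {responded / invited:.0%}")   # -> 44%

    # Of the 718 responding schools:
    with_program = 612
    with_requirement = 687
    print(f"Report a writing program:     {with_program / responded:.0%}")      # -> 85%
    print(f"Report a writing requirement: {with_requirement / responded:.0%}")  # -> 96%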

The third speaker was Jennifer Wells of Florida State University. She focused on Census data pertaining to writing centers and writing center directors. She said the team found a wide range in whether writing center directors’ positions are tenure-track (TT): of 409 stand-alone writing center directors, for example, 31% said they were TT. She speculated that not being TT might suppress research by such directors, because research expectations might not be part of their jobs. She also discussed data comparing the number of sessions a writing center holds with the number of sessions it has the capacity to hold and with the percentage of the student population those sessions serve. Reported annual session counts, for instance, look quite different once they are set against a center’s capacity or against the share of the student body actually served. This is yet another instance where the data are more complicated than they may at first appear, so while data collection is mostly finished, data interpretation will be an ongoing project.
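Her point about context can be made concrete with a small sketch. The Python snippet below uses invented, purely illustrative numbers (not Census data) to show how identical session counts read very differently once capacity and student-body size are factored in:

    # Hypothetical writing centers: identical session counts, different contexts.
    centers = [
        # (name, annual sessions held, annual session capacity, student body size)
        ("Center A", 2000, 2500, 1500),    # small college, near capacity
        ("Center B", 2000, 8000, 20000),   # large university, mostly idle
    ]

    for name, sessions, capacity, students in centers:
        utilization = sessions / capacity  # share of available slots used
        coverage = sessions / students     # sessions per enrolled student
        # Note: the percent of *unique* students served would require
        # per-student visit data, which raw session counts do not provide.
        print(f"{name}: {sessions} sessions, {utilization:.0%} of capacity, "
              f"{coverage:.2f} sessions per student")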

The final speaker, Brandon Fralix of Bloomfield College, discussed Census data about minority-serving institutions (MSIs), which the group breaks down, following U.S. Department of Education definitions, into Historically Black Colleges and Universities (HBCUs), Predominantly Black Institutions (not necessarily identical with HBCUs), Hispanic-Serving Institutions, and Tribal Colleges. One area of interest here is support for ELL and other under-prepared students at such schools. One survey question asked whether a school routinely identifies students needing additional academic support; the data show that such identification happens less frequently at MSIs than at non-MSIs. Also, more MSIs use standardized test scores to place students than use directed self-placement or other means. What this means in a larger sense is as yet unclear: the researchers cannot conclude that such practices stem from writing program administrators at these institutions being uninvolved in the professional organizations that promote best practices in these areas, so the question remains open. Rather than treat this uncertainty as a problem with the data collection, the speaker offered it as an example of how the data can drive further research.

References:
Gladstein, Jill M., & Rossman Regaignon, Dara. (2012). Writing program administration at small liberal arts colleges. Anderson, SC: Parlor Press.
