A post about giving academic writing proper disciplinary status
Bar a few exceptions, it is rare (in the UK, where I am based) to find job adverts for Lecturers in Academic Writing; when you do, they tend to be posts created to help people become ‘better writers’ (e.g. in Writing or Graduate Centres, or Libraries) rather than to educate in matters of writing by foregrounding the nature and the disciplinary status of Academic Writing.
Academic Writing continues to lack disciplinary status despite: a) the recent publication of Linda Adler-Kassner and Elizabeth Wardle’s edited book Naming What We Know: Threshold Concepts of Writing Studies, which features contributions from US writing experts including Charles Bazerman and David Russell
This book review is written by guest reviewer Susan Mowbray of Western Sydney University. The book seems highly pertinent to our community, so we thank Susan for alerting us to it with her detailed critique.
Supporting graduate student writers: Research, curriculum and program design. (2016). Edited by Steve Simpson, Nigel A. Caplan, Michelle Cox & Talinn Phillips. Ann Arbor: University of Michigan Press.
Supporting graduate student writers captured my attention as I have recently taken on a literacy support role with our Graduate Research School. The idea for the book was conceived at an invited colloquium on graduate writing support in 2014 and the result of the editors’ labours arrived via the University of Michigan Press in March this year. The book is organised in three parts. Part 1: What do we know/need to know? broadly covers supporting graduate research. Curriculum is dealt with in Part 2: Issues in graduate…
We encourage doctoral students to take deep dives into very narrow questions, implicitly encouraging them to think small. The narrowness of their research topics focuses them laser like into silos …
Source: Rethinking the Engineering PhD
Getting three examiners in a room to put a candidate through some kind of wringer does not a valid and reliable assessment make. Who or what are they examining? The candidate, the candidate’s work, or some combination thereof? On what basis would each determine a fail? Is it the same benchmark for all three? Does the university give them guidelines and check for their common understanding? Does the university check that all defenses uphold the standards set by the program?
What if examiners get it wrong? The examiners for Jason Richwine’s Harvard PhD got it wrong. What process is in place to check the oral defense committee’s work? No checks on Richwine’s committee and supervisor occurred even after a journalist did his own check and broke the story. Journalists often unmask wrongdoing in a doctorate, but why don’t universities put checks in place? Laxity in assessment in doctoral programs exposes them to ridicule and undermines their legitimacy.
Did Harvard or other universities ask the questions that the epic failure of Richwine’s committee raises? Richwine got a standard doctoral education. He applied to, was accepted into, and enrolled in a doctoral program at the Kennedy School of Government, wrote a dissertation, sat a defense and, presto, Harvard awarded him its esteemed PhD. Richwine’s story shows that even the best universities fail to fix problems in their doctoral programs.
Harvard experienced protests and national notoriety but merely placated protesters and let the unrest die out. If Richwine didn’t believe in the inferiority of Hispanic immigrants, he could sue Harvard to place blame on his program, his committee, and his university. As a student at Harvard, he had every right to expect the best doctoral education and the highest standards of assessment.
What argument would his lawyers make? Harvard gave Richwine a PhD in 2009 for research based on the assumption that ‘illegal Hispanic immigrants to America had lower IQs than non-Hispanic whites’. After a reporter blogged about Richwine’s dissertation in 2013, many academics read it. The complainant then learned that the research questions, methodology and literature review suffered grievous and invalidating errors. The complainant asks Harvard to refund all expenses incurred in his five years of study and to award compensation for its failure to assess whether his doctoral work could withstand the scrutiny of academics outside of his committee and supervisor. He asks Harvard to make a public apology placing blame on the PhD program for failing to properly assess his doctoral work.
Without a rigorous process to assess whatever it is oral defense committees assess, shouldn’t all doctoral students demand transparency about checks of their work (besides typos), about checks on the work of the examining committees, and about the committee’s scores on validity markers in the oral defense?
The next time a journalist unmasks a questionable dissertation to shame an esteemed official with a PhD, it’d be wonderful to also blame the granting institution. Without checks for cheating and soundness within doctoral programs, deficient dissertations will continue to come to light. Research on the quality of dissertations is needed.
Right now programs place too much onus on the integrity of the candidate. At the very least, shouldn’t all dissertations go through a thorough plagiarism check, even a digital check like Turnitin, before any oral defense takes place? Plagiarism shows up more frequently than a robust, thorough quality control process should let through. Plagiarism aside, the research questions, design, methods, and conclusions may fail the thin-red-thread-of-coherence test taught in the Cohen, Manion and Morrison text for social science research.
The rarely failed oral defense gives false confidence. How could Richwine be trusted to set up and conduct research, save by a right-wing foundation? Why did he need a Harvard PhD to work for a right-wing foundation, save to confer a sheen of legitimacy on him?
The sample of one research project also casts doubt on the soundness of a doctoral program’s structure. Research done after the degree shows whether the committee recognized the, as yet, undefined confidence level conferred by a doctoral degree. Yes, of course, the record shows that holders of the degree, and those early career researchers going through its dubious assessment process, produce influential works even with the problems of doctoral programs.
Greater confidence in the doctoral program could come from multiple micro-assessments by multiple examiners across multiple research investigations. Assessment data tied into blockchains linked to discrete aspects of ‘doctoralness’ could overcome validity and reliability problems in doctoral assessment. Let’s say students submitted their questions, methodology and literature review to random, bona fide external reviewers. Remember, a journalist and the web provided reviewers for every aspect of Richwine’s notorious screed. Associating such reviews with a blockchain would give more confidence in the doctoral program.
The bogus doctoral degrees now destroying Russian doctoral education come in here too. Russians seeking a PhD or two to pump up their prestige took advantage of the laxity in standards in doctoral education. They turned Russian universities into doctorate diploma mills, which now sadly brings all Russian doctorates into disrepute.
Outside of Russia, doctoral education suffers from the same lack of rigor in assessment that allowed for mass corruption to take hold in Russia. Unless doctoral programs declare exactly what that achievement mark is and devise a robust means to test for it, the same fate could befall doctorates outside of Russia. The hocus pocus and hokey pokey that passes for assessment in a doctoral program disrespects and makes vulnerable the work of all doctoral students.
As a solution, an entrepreneur or international not-for-profit could set up a Bitcoin-like blockchain system to outsource assessment for doctoral programs and students. This could be called the ‘journalist test’.
A blockchain would secure pass/fail assessments of characteristic aspects of doctoral study.
Originality test: How does the research make an original contribution to the literature?
Research instruments: Does the research show fidelity to principles inherent in the methodologies?
Questions: How does the research plan execute on the questions asked? Is the quality of the questions aligned to the literature and problems in research?
Literature Review: To what degree is the research based on a comprehensive interrogation of the literature? In the case of doctorates in the sciences, demonstrating literature review skill may require programming a literature search and critiquing the results.
Plagiarism checks: The work should undergo a check for plagiarism.
Multiple draft changes: This sort of assessment checks that the writing came from the candidate. Recorded evidence of discussions about changes to the manuscript through multiple iterations lends authenticity to the authorship.
Assessment would secure aspects of doctoral know-how against habits of mind derived from a doctoral education. The above list is not, of course, exhaustive. If doctoral programs decided what the oral exam tested or recognized, the checklist would follow from it.
Evaluations of discrete aspects of doctoral education against specific criteria and exemplars could be sliced into Bitcoin-style security. Unless granting institutions and disciplinary bodies devise a robust assessment system, capped off by an oral defense if necessary, to assess the lessons of a doctoral program, glaring omissions may still occur.
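The ledger idea above can be made concrete. The following is a minimal illustrative sketch, not a real assessment system: a hash-chained record of pass/fail evaluations, where each record commits to the one before it, so tampering with any earlier assessment invalidates the rest of the chain. All names here (`AssessmentChain`, the aspect labels, the reviewer IDs) are hypothetical, invented for illustration.

```python
import hashlib
import json
import time


class AssessmentChain:
    """A tamper-evident chain of assessment records for one candidate.

    Illustrative sketch only: each block hashes its payload together
    with the previous block's hash, so altering any earlier record
    breaks verification of every later one.
    """

    def __init__(self):
        self.blocks = []  # each entry: {"payload": dict, "hash": str}

    def add_assessment(self, aspect, examiner, passed):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {
            "aspect": aspect,        # e.g. 'originality', 'literature review'
            "examiner": examiner,    # anonymised external reviewer ID
            "passed": passed,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Deterministic serialisation so the hash is reproducible.
        block_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append({"payload": payload, "hash": block_hash})

    def verify(self):
        """Recompute every hash; return False if any record was altered."""
        prev_hash = "0" * 64
        for block in self.blocks:
            payload = block["payload"]
            if payload["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != block["hash"]:
                return False
            prev_hash = block["hash"]
        return True


chain = AssessmentChain()
chain.add_assessment("originality", "reviewer-17", True)
chain.add_assessment("literature review", "reviewer-42", True)
print(chain.verify())  # True: chain is intact

# Tampering with an earlier record breaks verification:
chain.blocks[0]["payload"]["passed"] = False
print(chain.verify())  # False: the altered record no longer matches its hash
```

A production system would, of course, need distributed consensus and identity management rather than a single in-memory list; the sketch only shows the tamper-evidence property that motivates the blockchain proposal.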
So back to the fitness of the oral defense. Presently, the oral defense makes candidates defend work they may since have thought better of, from which they learned what not to do the next time round. Post PhD, a newly minted PhD will likely not repeat the solitary toil of graduate school. The vast majority of academics work with others. Besides which, how much experience do students get in a doctoral program with defending anything before an audience, save in the oral? Re-framing the defense as a candidate’s public critique of a first contribution to scholarship, before curious and supportive fellow scholars, makes more sense. Re-framing the defense as a lively discussion between collaborators on research, watched by an examining committee, more closely mirrors research practice. Ironically, concern over assessing individual understanding has kept doctoral programs from offering collaborative research pathways to students.
The problem of a sample of one undermines doctoral assessment. Demanding less than book-size output from a novice researcher, in favor of more samplings of research acumen, would change the scholarly output but not the purpose of doctoral study. Doctoral programs structured around three discrete research collaborations would get a richer sampling of the skills of scholarship. It’s easy enough to improve on the standard doctoral program.
For all that toil, all that time taken, all the treachery students take on, don’t doctoral students deserve better assessments and more thoughtful doctoral programming?
By Claire Aitchison
In a world of spiralling credentialism where employers require ever higher qualifications, and institutions compete to recruit and keep doctoral candidates, it’s easy to see how students and supervisors can feel pressured to keep students enrolled. But what if you decide that doctoral study isn’t for you?
Recently I met up with a former student and, in conversation, she reminisced about her time as a doctoral student. Despite the many challenges she had experienced, she said how much she had enjoyed herself, especially the intellectual stimulation and sense of purpose she had as a doctoral scholar. She told me how she still loved her topic and wished she could have completed it. I understood all of that – but then she went on to say she deeply regretted dropping out of her PhD.
This took me by surprise, because, all those years ago, when she contacted me about…
Graduate schools need to equip their students with this knowledge and should be supporting CKF.
Kristen Ratan, known to many in the scholarly communications world, has been embarking on a new adventure of late. Together with Adam Hyde, she has launched the Collaborative Knowledge Foundation (CKF), a nonprofit organization whose mission is to evolve how scholarship is created, produced and reported. I recently had the chance to talk to Kristen about the Foundation – what it is, what problem(s) it aims to address, how it is funded, and more.
Please describe the Collaborative Knowledge Foundation and your role there
CKF, or Coko for friends, is a nonprofit open source technology organization offering alternatives to the ways that we currently produce and share knowledge. We’ve kept that directive deliberately broad so that we can focus on many different types of knowledge and processes. We’re starting with research communication because it is such a critical form of knowledge that is currently broken in ways that are…
In France, in June 2016, the government stepped in to limit the time to degree, among other changes. La Confédération des Jeunes Chercheurs (CJC), the association of young researchers, complained about the legislation although it may have contributed to the legislation’s development. The CJC wanted the PhD recognized in the legislation as a course in professional development, whereas the legislation seems to consider the degree more of an academic achievement.
Do we need legislation like this?
If grad studies departments had some performance measures to go by, society and grad students would be better served. Certainly time limits and less attrition would result in enormous savings for students and governments. Moreover, such legislation would likely result in doctoral programs taking a more hands-on approach. Suddenly the program might have to confront the sloppiness in its pedagogy. Do we really need comprehensive exams? How can we get more students through faster? Do we send clear messages to help students understand our expectations? What are our expectations beyond producing a dissertation? Could we possibly allow students to collaborate on research to aid speed? What would our former students suggest?
What factors went into the decision to legislate time to degree in France? Saving the money spent supporting students for longer than is necessary? Will ‘quality’ now suffer or improve in France because grad school administrators need to comply with the law? If a student of doctoral studies is reading this blog, that’s a good research topic. I wonder how much the directors of doctoral programs contributed to the legislation. What would Canadian deans and VPs Academic say if legislation made them more accountable for times to completion?
How about attrition? Does the legislation touch on attrition? What would happen if doctoral programs had to produce doctorates in five years with 10% attrition rates?
We’d save so much time and money. Some of the pomposity of graduate studies would be punctured. Programs would improve. Students would feel supported and the pressure to perform would be greater. But first, major pushback from universities could be expected.
What would happen to that pool of cheap labor to teach those undergraduate courses? Maybe they’d just move into the adjunct underbelly in every university sooner.