Look through the Canadian Association for Graduate Studies (CAGS) website with a few questions in mind about leadership in graduate education in Canada, and a public relations emphasis emerges more than a graduate education leadership focus. If I were a CAGS member or a graduate student executive looking for best-practice guidance on a basic, long-standing problem in graduate education in Canada, namely attrition and times to completion, the best practices section of the CAGS website wouldn’t help at all. The CAGS website offers not a word of advice in this area. The three-minute thesis contest, however, which is not a mandatory requirement of any Canadian graduate program, gets a side panel box and is flashed through the rotating banner feature. Even so, the three-minute contest does not warrant a mention as a best practice.
The Ontario chapter of the National Graduate Caucus of the Canadian Federation of Students listed times to completion as an issue in 2010. By 2010, CAGS had been around for decades. In November 2015, at a CAGS convention, a graduate student executive raised times to completion. A little homework on the CAGS website beforehand might have saved the questioner the trouble: CAGS, as represented by its website, is mute on policy for times to completion and attrition. A better question for the graduate student executive to have asked is, “Why doesn’t CAGS address times to completion and attrition in graduate programs?”
The previous Fabrication Nation post concerned the lack of action on times to completion and attrition by Canadian graduate student associations. CAGS interests and those of graduate students diverge here: CAGS inaction and silence accord with its membership’s interests, while graduate student inaction runs counter to their memberships’ interests. CAGS’s silence on times to completion and attrition becomes more curious still because graduate students assume CAGS shares their concerns. What sort of leadership and guidance in graduate education can Canadian graduate students expect from CAGS, then?
The CAGS site offers one thing after another without addressing fundamental aspects of graduate education practice. One conference on the Future of the PhD does not an influential leadership organization make.
According to Tamburri (2013), very few CAGS recommendations were implemented by CAGS members over a ten-year period. So now, perhaps, instead of recommendations, CAGS promotes contests and its convention, as a PR firm might.
Still, CAGS identifies itself as a leader in graduate education in Canada. Membership in CAGS feels and looks good, but the evidence shows that CAGS membership supports the bottom line and the status quo. Why is there no best-practice advice around times to completion and attrition on the CAGS website? Maybe CAGS recommended transparency practices thirteen years ago to no avail. CAGS members don’t take to recommendations very well. So the solution for this industry association is to sexy up graduate education with the challenging but non-essential three-minute thesis contest and other contests. Surely CAGS will soon promote the Dance Your PhD contest.
No, CAGS isn’t setting out the publication of times to completion and attrition statistics as a best practice, likely because its membership doesn’t want to hear of it and wouldn’t do it anyway. Almost no good public data is available on times to completion and attrition in Canada. DeClou (2014) studied attrition and times to completion in her dissertation. Aside from the work of DeClou, the trouble Mazur took to try to answer “How long does a PhD take at UBC?”, and an unpublished, inaccessible and unobtainable leaked study by the U15 group of universities, data on times to completion and attrition sits like an elephant in the room, stuffed into a dark closet.
Why, with all the great rhetoric coming out of CAGS officials over the years, have none of these luminaries ever, of their own accord and conviction, started a project to publish times to completion and attrition within their own universities? Where is the walk to match the talk? How would CAGS respond to a campaign by graduate students to track and publish times to completion and attrition for every doctoral program? Why did graduate students, and not CAGS, commission a much-cited, influential and seminal article in Canadian graduate education, Elgar’s 2003 paper on PhD completion rates in Canada?
The 3MT contest originated in Australia at the University of Queensland. Australia punches above its weight in its influence on graduate education in North America. Australia’s culture of graduate training features a robust graduate education community and robust time-to-completion tracking too. As the history of CAGS and its website shows, Canada has yet to develop a keen community for learning about the practice of graduate education.
Ergo, leadership rests with the people with actual skin in the game in Canada: graduate students. Canadian graduate students, for all their problems in forming a stable, long-lasting national interest group, show the most promise in providing leadership, as their commissioning of the Elgar study demonstrates.
The one short-lived national interest group Canadian graduate students formed produced arguably one of the most influential papers in Canadian graduate education. The ones with time and money at stake, graduate students, asked the question that still stymies CAGS.
CAGS should partner with graduate students and seek graduate student representation in order to better influence graduate school leadership. CAGS has yet to reach out to graduate students to work with them in an ongoing fashion, even though CAGS ought to know that graduate student input into program change proved invaluable during the five-year Carnegie Initiative on the Doctorate, and, of course, here at home, the Elgar paper commissioned by the Graduate Students Association of Canada lives on. CAGS should seek graduate student representation and actively work to strengthen and support graduate student associations. CAGS could ask graduate student executives to choose a number of graduate student representatives to provide student input, and set up partnership programs with CAGS.
At the 2016 CAGS conference, CAGS should set out to actively organize graduate student representation within CAGS. Graduate students should lobby CAGS to join forces for projects and representation.
CAGS should drop the marketing aspect from its award criteria for new programs. Such appeals to marketing reveal the truer purpose of CAGS: to help its members promote graduate education rather than develop intelligence about it, study it, solve chronic problems within it, and advance graduate education practice.
After all this time, CAGS has evolved into the kind of organization its members want. Marketing puts those graduate student bums in seats. Graduate education in Canada has grown apace. If it ain’t broke, don’t fix it, don’t study it, don’t keep stats on it, and don’t show any enduring empathy or connection to those bums in seats. CAGS, get your mojo working.
Graduate students in research training need advocacy different from that of undergraduate students; otherwise, why distinguish the two?
Graduate students attend cash-cow programs in Canadian universities. Prestigious MBA programs can cost $100 grand in Canada, not that they are any better than cheaper programs. Students pay for the prestige and the opportunity to network with others who have the means to pay the exorbitant tuition. These cash-cow MBA students would be at odds with CFS campaigns designed to bring down the cost of education. Do they want their student union fighting for lower tuition? Or do they want it fighting for higher, more exclusive tuition?
At a national level, Canadian graduate student associations and undergraduate associations can join the CFS or CASA. Neither focuses on graduate student concerns, although the CFS does have a graduate student caucus.
Graduate students have real needs, needs that go unaddressed. Advocacy around times to completion and attrition has yet to make the agenda of a national advocacy organization for Canadian students. If GSAs get action on undergraduate student issues from national industry groups and no representation on graduate student issues, then why even join CFS or CASA or the provincial industry group in Quebec, the UEQ? Canadian graduate students could save the money. CFS membership is costly for a group that has missed the mark on graduate student needs for so many years.
Graduate students, whose issues lack importance in the agendas of CFS or CASA, might be better served as part of a larger student association consisting of graduate and undergraduate students. FACEUM combines representation for graduate students as part of the overall student union. Would this be a better model than the grad and undergrad division that presently exists? Unless agendas specific to graduate education come from graduate student associations, what is the point of a separation between grad and undergrad representation?
Graduate student associations in Canada notoriously defederate from CFS and tear down their own national or provincial industry groups, so as of 2016 no stable, long-lasting national or provincial group manages continuity and stability of campaigns across the yearly changes in graduate student union leadership.
Why hasn’t CFS or CASA campaigned for grad students on times to completion and attrition in all these years? The campaigns and issues they address centre on the undergrad: fees, minority groups and tuition. While CFS urges Justin Trudeau to keep his promises to Indigenous students, CFS needs to find the problems in grad schools buried under pluralistic ignorance and silent exits. CFS campaigns need to better address issues unique to grad students, or grad students need to find better industry group representation. Without industry group coordination and continuity, GSAs have little hope of mounting a campaign to bring down times to completion and attrition.
A post about giving academic writing proper disciplinary status
Bar a few exceptions, it is rare (in the UK, where I am based) to find job adverts for Lecturers in Academic Writing; when you do, they tend to be posts created to help people become ‘better writers’ (e.g. in Writing or Graduate Centres, or Libraries) rather than to educate in matters of writing by foregrounding the nature and the disciplinary status of Academic Writing.
Academic Writing continues to lack disciplinary status despite: a) the recent publication of Linda Adler-Kassner and Elizabeth Wardle’s edited book Naming What We Know: Threshold Concepts of Writing Studies, which features contributions from US writing experts including Charles Bazerman and David Russell
This book review is written by guest Susan Mowbray, Western Sydney University. The book seems highly pertinent to our community, so we thank Susan for alerting us to it with her detailed critique.
Supporting graduate student writers: Research, curriculum and program design. (2016). Edited by Steve Simpson, Nigel A. Caplan, Michelle Cox & Talinn Phillips. Ann Arbor: University of Michigan Press.
Supporting graduate student writers captured my attention as I have recently taken on a literacy support role with our Graduate Research School. The idea for the book was conceived at an invited colloquium on graduate writing support in 2014 and the result of the editors’ labours arrived via the University of Michigan Press in March this year. The book is organised in three parts. Part 1: What do we know/need to know? broadly covers supporting graduate research. Curriculum is dealt with in Part 2: Issues in graduate…
We encourage doctoral students to take deep dives into very narrow questions, implicitly encouraging them to think small. The narrowness of their research topics focuses them, laser-like, into silos …
Source: Rethinking the Engineering PhD
Getting three examiners in a room to put a candidate through some kind of wringer does not a valid and reliable assessment make. Who or what are they examining? The candidate, the candidate’s work, or some combination thereof? On what basis would each determine a fail? Is it the same benchmark for all three? Does the university give them guidelines and check for their common understanding? Does the university check that all defenses apply the standards set by the program?
What if examiners get it wrong? The examiners for Jason Richwine’s Harvard PhD got it wrong. What process is in place to check the oral defense committee’s work? No checks on Richwine’s committee and supervisor occurred even after a journalist did his own check and broke the story. Journalists often unmask wrongdoing in a doctorate, but why don’t universities put checks in place? Laxity in assessment in doctoral programs exposes them to ridicule and undermines their legitimacy.
Did Harvard or other universities ask the questions raised by the epic fail of Richwine’s committee? Richwine got a standard doctoral education. He applied to, was accepted by, and enrolled in a doctoral program at the Kennedy School of Government, wrote a dissertation, sat a defense and, presto, Harvard awarded him its esteemed PhD. Richwine’s story shows that even the best universities fail to fix problems in doctoral programs.
Harvard experienced protests and national notoriety but merely placated protesters and let the unrest die out. If Richwine didn’t believe in the inferiority of Hispanic immigrants, he could sue Harvard to place the blame on his program, his committee, and his university. As a student at Harvard he had every right to expect the best doctoral education and the highest standards of assessment.
What argument would his lawyers make? Harvard gave Richwine a PhD in 2009 for research based on the assumption that ‘illegal Hispanic immigrants to America had lower IQs than non-Hispanic whites’. After a reporter blogged about Richwine’s dissertation in 2013, many academics read it. The complainant then learned that the research questions, methodology and literature review suffered grievous and invalidating errors. The complainant asks Harvard to refund all expenses incurred in his five years of study and to award recompense for its failure to assess whether his doctoral work could withstand the scrutiny of academics outside his committee and supervisor. He asks Harvard to make a public apology placing blame on his PhD program for failing to properly assess his doctoral work.
Without a rigorous process to assess whatever it is oral defense committees assess, shouldn’t all doctoral students demand transparency about checks of their work (besides typos), checks on the work of the examining committees, and the committee’s scores on validity markers in the oral defense?
The next time a journalist unmasks a questionable dissertation to shame an esteemed official with a PhD, it’d be wonderful to also blame the granting institution. Without checks for cheating and soundness within doctoral programs, deficient dissertations will continue to come to light. Research on the quality of dissertations is needed.
Right now programs place too much onus on the integrity of the candidate. At the very least, shouldn’t all dissertations go through a thorough plagiarism check, even a digital check like Turnitin, before any oral defense takes place? Plagiarism shows up more frequently than a robust, thorough quality-control process would allow. Plagiarism aside, the research questions, design, methods, and conclusions may fail the thin-red-line-thread-of-coherence test taught in the Cohen, Manion and Morrison text for social science research.
The rarely failed oral defense gives false confidence. How could Richwine be trusted with setting up and conducting research, save for a right-wing foundation? Why did he need a Harvard PhD to work for a right-wing foundation, save to confer a sheen of legitimacy on him?
The sample of one research project also casts doubt on the soundness of the doctoral program structure. Research done after the degree shows whether the committee recognized the as-yet-undefined confidence level conferred by a doctoral degree. Yes, of course, the record shows that holders of the degree, and early career researchers going through its dubious assessment process, produce influential works despite the problems of doctoral programs.
Greater confidence in the doctoral program could come from multiple micro-assessments by multiple examiners across multiple research investigations. Assessment data tied into blockchains linked to discrete aspects of ‘doctoralness’ could overcome validity and reliability problems within doctoral programs. Let’s say students submitted their questions, methodology and literature review to random, bona fide external reviewers. Remember, a journalist and the web provided reviewers for every aspect of Richwine’s notorious screed. Associating reviews with a blockchain would give more confidence in the doctoral program.
The bogus doctoral degrees now destroying Russian doctoral education come in here too. Russians seeking a PhD or two to pump up their prestige took advantage of the laxity in standards in doctoral education. They turned Russian universities into doctorate diploma mills, which now sadly brings all Russian doctorates into disrepute.
Outside of Russia, doctoral education suffers from the same lack of rigor in assessment that allowed for mass corruption to take hold in Russia. Unless doctoral programs declare exactly what that achievement mark is and devise a robust means to test for it, the same fate could befall doctorates outside of Russia. The hocus pocus and hokey pokey that passes for assessment in a doctoral program disrespects and makes vulnerable the work of all doctoral students.
As a solution, an entrepreneur or international not-for-profit could set up a bitcoin-like blockchain system to outsource assessment for doctoral programs and students. Call it the ‘journalist test’.
A blockchain would secure pass/fail assessments of characteristic aspects of doctoral study.
Originality test: How does the research make an original contribution to the literature?
Research instruments: Does the research show fidelity to principles inherent in the methodologies?
Questions: How does the research plan execute on the questions asked? Are the questions aligned with the literature and with problems in the field?
Literature Review: To what degree is the research based on a comprehensive interrogation of the literature? In the case of doctorates in the sciences, literature review skill may require programming a literature search and critiquing the results.
Plagiarism checks: The work should undergo a check for plagiarism.
Multiple draft changes: This assessment checks that the writing came from the candidate. Recorded discussions of changes to the manuscript through multiple iterations lend authenticity as to authorship.
Assessment would secure aspects of doctoral know-how against habits of mind derived from a doctoral education. The above list is not, of course, exhaustive. If doctoral programs decided what the oral exam tested or recognized, the checklist would follow from it.
Evaluations of discrete aspects of doctoral education against specific criteria and exemplars could be sliced into a bitcoin kind of security. Unless granting institutions and disciplinary bodies make up a robust assessment system, capped off by an oral defense if necessary, to assess the lessons of a doctoral program, glaring omissions may still occur.
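The chaining idea can be sketched in miniature. What follows is a hypothetical illustration, not a proposal for a real system: the field names (aspect, verdict, reviewer) are my own invention, and a single in-memory list stands in for a distributed ledger. The point it shows is simply that when each assessment record carries the hash of its predecessor, quietly altering an earlier pass/fail entry becomes detectable.

```python
# Minimal sketch of a tamper-evident assessment ledger. Hypothetical
# fields (aspect, verdict, reviewer); a real blockchain would also
# distribute copies so no single office could rewrite the chain.
import hashlib
import json


def record_hash(record: dict) -> str:
    """Hash a record's canonical (sorted-key) JSON form."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


def append_assessment(chain: list, aspect: str, verdict: str, reviewer: str) -> None:
    """Append a pass/fail assessment, linking it to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"aspect": aspect, "verdict": verdict, "reviewer": reviewer, "prev": prev}
    chain.append({**body, "hash": record_hash(body)})


def verify(chain: list) -> bool:
    """Recompute every hash and link; editing any earlier record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev or record_hash(body) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True


ledger = []
append_assessment(ledger, "originality", "pass", "external-reviewer-1")
append_assessment(ledger, "literature review", "pass", "external-reviewer-2")
assert verify(ledger)
ledger[0]["verdict"] = "fail"   # tampering with an earlier record...
assert not verify(ledger)       # ...is caught on verification
```

The hash-linking is all the sketch demonstrates; questions of who runs the reviewers, how they are randomized, and where the ledger lives are exactly the governance problems an entrepreneur or not-for-profit would have to solve.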
So back to the fitness of the oral defense. Presently, the oral defense makes candidates defend something they may have thought better of, and from which they learned what not to do the next time round. Post-PhD, a newly minted PhD will likely not repeat the solitary toil of graduate school; the vast majority of academics work with others. Besides, how much experience do students get in a doctoral program with defending anything before an audience, save in the oral? Re-framing the defense as a candidate’s public critique of a first contribution to scholarship, before curious and supportive fellow scholars, makes more sense. Re-framing the defense as a lively discussion between collaborators on research, watched by an examining committee, more closely mirrors research practice. Ironically, concern over assessing individual understanding has kept doctoral programs from offering collaborative research pathways to students.
The problem of a sample of one undermines doctoral assessment. Demanding less than book-size output from a novice researcher, in favor of more samplings of research acumen, would change the scholarly output but not the purpose of doctoral study. Doctoral programs structured around three discrete research collaborations would get a richer sampling of the skills of scholarship. It’s easy enough to improve on the standard doctoral program.
For all that toil, all that time taken, all the treachery students take on, don’t doctoral students deserve better assessments and more thoughtful doctoral programming?
By Claire Aitchison
In a world of spiralling credentialism, where employers require ever higher qualifications and institutions compete to recruit and keep doctoral candidates, it’s easy to see how students and supervisors can feel pressured to keep students enrolled. But what if you decide that doctoral study isn’t for you?
Recently I met up with a former student and in conversation she reminisced about her time as a doctoral student. Despite the many challenges she had experienced, she said how much she had enjoyed herself, especially the intellectual stimulation and sense of purpose she had as a doctoral scholar. She told me how she still loved her topic and wished she could have completed. I understood all of that – but then she went on to say she deeply regretted dropping out of her PhD.
This took me by surprise, because, all those years ago, when she contacted me about…