
Why Canadian GSAs have yet to speak up on times to completion and attrition: Blind spots in CFS and CASA’s agendas


Graduate students in research training need advocacy different from that for undergraduate students; otherwise, why distinguish the two?

Graduate students attend cash cow programs in Canadian universities. Prestigious MBA programs can cost $100,000 in Canada, though they are not necessarily better than cheaper programs. Students pay for the prestige and the opportunity to network with others who can afford the exorbitant tuition. These cash cow MBA students would be at odds with CFS campaigns designed to bring down the cost of education. Do they want their student union fighting for lower tuition, or for higher, more exclusive tuition?

At a national level, Canadian graduate student associations and undergraduate associations can join the CFS or CASA.  Neither focuses on graduate student concerns, although the CFS does have a graduate student caucus.

Graduate students have real needs, and those needs go unaddressed. Advocacy around times to completion and attrition has yet to make the agenda of a national advocacy organization for Canadian students. If GSAs get action on undergraduate issues from national industry groups but no representation on graduate student issues, why join CFS, CASA, or Quebec's provincial industry group, the UEQ, at all? Canadian graduate students could save their money. CFS membership is a costly fee for an organization that has missed the mark on graduate student needs for so many years.

Graduate students, whose issues rank low on the agendas of CFS and CASA, might be better served as part of a larger student association combining graduate and undergraduate students. FACEUM includes representatives for graduate students as part of the overall student union. Would this be a better model than the grad and undergrad division that presently exists? Unless agendas specific to graduate education come from graduate student associations, what is the point of separating grad and undergrad representation?

Graduate student associations in Canada notoriously defederate from CFS and tear down their own national or provincial industry groups, so as of 2016 no stable, long-lasting national or provincial group maintains continuity of campaigns across the yearly changes in graduate student union leadership.

Why hasn’t CFS or CASA campaigned for grad students on times to completion and attrition in all these years? The campaigns and issues addressed center on undergraduates: fees, tuition, and minority groups. While CFS urges Justin Trudeau to keep his promises to Indigenous students, CFS also needs to find the problems in grad schools buried under pluralistic ignorance and silent exits. CFS campaigns need to better address issues unique to grad students, or grad students need to find better industry group representation. Without industry group coordination and continuity, GSAs have little hope of mounting a campaign to bring down times to completion and attrition.

What makes writing ‘academic’? Part I

Academic Emergence

Threshold Concepts

A post about giving academic writing proper disciplinary status

Since writing this in Patter and this in The Guardian, and since this series of posts was published, I still don’t understand why Academic Writing does not have disciplinary status.

Bar a few exceptions, it is rare (in the UK, where I am based) to find job adverts for Lecturers in Academic Writing; when you do, they tend to be posts created to help people become ‘better writers’ (e.g. in Writing or Graduate Centres, or Libraries) rather than to educate in matters of writing by foregrounding the nature and the disciplinary status of Academic Writing.

Academic Writing continues to lack disciplinary status despite: a) the recent publication of Linda Adler-Kassner and Elizabeth Wardle’s edited book Naming What We Know: Threshold Concepts of Writing Studies, which features contributions from US writing experts including Charles Bazerman and David Russell

View original post 1,127 more words

Supporting graduate student writers: Research, curriculum and program design (2016).

DoctoralWriting SIG

This book review is written by guest Susan Mowbray, Western Sydney University. The book seems highly pertinent to our community, so we thank Susan for alerting us to it with her detailed critique.

Supporting graduate student writers: Research, curriculum and program design (2016). Edited by Steve Simpson, Nigel A. Caplan, Michelle Cox & Talinn Phillips. Ann Arbor: University of Michigan Press.

Supporting graduate student writers captured my attention as I have recently taken on a literacy support role with our Graduate Research School. The idea for the book was conceived at an invited colloquium on graduate writing support in 2014 and the result of the editors’ labours arrived via the University of Michigan Press in March this year. The book is organised in three parts. Part 1: What do we know/need to know? broadly covers supporting graduate research. Curriculum is dealt with in Part 2: Issues in graduate…

View original post 945 more words

Rethinking the Engineering PhD

We encourage doctoral students to take deep dives into very narrow questions, implicitly encouraging them to think small. The narrowness of their research topics focuses them laser-like into silos …

Source: Rethinking the Engineering PhD

Do the failings of the oral defense vacate the doctorate degree?

Exam Torture

The oral defense at the end of doctoral education lacks legitimacy. A reputable test would be both reliable and valid.

Getting three examiners in a room to put a candidate through some kind of wringer does not a valid and reliable assessment make. Whom or what are they examining? The candidate, the candidate’s work, or some combination thereof? On what basis would each determine a fail? Is it the same benchmark for all three? Does the university give them guidelines and check for a common understanding? Does the university check that all defenses apply the standards set by the program?

What if examiners get it wrong? The examiners for Jason Richwine’s Harvard PhD got it wrong. What process is in place to check the oral defense committee’s work? No checks on Richwine’s committee and supervisor occurred even after a journalist did his own check and broke the story. Journalists often unmask wrongdoing in a doctorate, but why don’t universities put checks in place? Lax assessment in doctoral programs exposes them to ridicule and undermines their legitimacy.

In Latin, ‘nullius in verba’: take nobody’s word for it.

Did Harvard or other universities ask the questions Richwine’s committee’s epic fail raises? Richwine got a standard doctoral education. He applied to, was accepted by and enrolled in a doctoral program at the Kennedy School of Government, wrote a dissertation, sat a defense and, presto, Harvard awarded him its esteemed PhD. Richwine’s story shows that even the best universities fail to fix problems in doctoral programs.

Harvard experienced protests and national notoriety but merely placated protesters and let the unrest die out.   If Richwine didn’t believe in the inferiority of Hispanic immigrants, he could sue Harvard to place blame on his program, his committee, and his university.  As a student at Harvard he had every right to expect the best doctoral education and highest standards of assessment.

What argument would his lawyers make? Harvard gave Richwine a PhD in 2009 for research based on an assumption that ‘illegal Hispanic immigrants to America had lower IQs than non-Hispanic whites’. After a reporter blogged about Richwine’s dissertation in 2013, many academics read it. The complainant then learned that the research questions, methodology and literature review suffered grievous and invalidating errors. The complainant asks Harvard to refund all expenses incurred in his five years of study and to award recompense for failing to assess whether his doctoral work could withstand the scrutiny of academics outside his committee and supervisor. He asks Harvard to make a public apology placing blame on the PhD program for failing to properly assess his doctoral work.

Without a rigorous process to assess whatever it is oral defense committees assess, shouldn’t all doctoral students demand transparency about checks of their work (besides typos), checks on the work of the examining committees, and the committee’s scores on validity markers in the oral defense?

The next time a journalist unmasks a questionable dissertation to shame an esteemed official with a PhD, it’d be wonderful to also blame the granting institution.  Without checks for cheating and soundness within doctoral programs, deficient dissertations will continue to come to light.  Research on the quality of dissertations is needed.

Right now programs place too much onus on the integrity of the candidate. At the very least, shouldn’t all dissertations go through a thorough plagiarism check, even a digital check like Turnitin, before any oral defense takes place? Plagiarism shows up more frequently than a robust, thorough quality control process should allow. Plagiarism aside, the research questions, design, methods, and conclusions may fail the red-thread-of-coherence test taught in the Cohen, Manion and Morrison text for social science research.

The rarely failed oral defense gives false confidence. How could Richwine be trusted to set up and conduct research, save for a right-wing foundation? Why did he need a Harvard PhD to work for a right-wing foundation, save to confer a sheen of legitimacy on him?

A sample of one research project also casts doubt on the soundness of the doctoral program structure. Research done after the degree shows whether the committee recognized the, as yet, undefined confidence level conferred by a doctoral degree. Yes, of course, the record shows that holders of the degree, and the early career researchers going through its dubious assessment process, produce influential work despite the problems of doctoral programs.

Greater confidence in the doctoral program could come from multiple micro-assessments by multiple examiners across multiple research investigations. Assessment data tied into blockchains linked to discrete aspects of ‘doctoralness’ could overcome validity and reliability problems within doctoral programs. Say students submitted their questions, methodology and literature review to random, bona fide external reviewers. Remember, a journalist and the web provided reviewers for every aspect of Richwine’s notorious screed. Anchoring reviews to a blockchain would give more confidence in the doctoral program.

The bogus doctoral degrees now destroying Russian doctoral education come in here too. Russians seeking a PhD or two to pump up their prestige took advantage of lax standards in doctoral education. They turned Russian universities into doctoral diploma mills, which now sadly brings all Russian doctorates into disrepute.

Outside of Russia, doctoral education suffers from the same lack of rigor in assessment that allowed mass corruption to take hold in Russia. Unless doctoral programs declare exactly what the achievement mark is and devise a robust means to test for it, the same fate could befall doctorates elsewhere. The hocus pocus and hokey pokey that passes for assessment in a doctoral program disrespects, and makes vulnerable, the work of all doctoral students.

As a solution, an entrepreneur or international not-for-profit could set up a Bitcoin-like blockchain system to outsource assessment for doctoral programs and students. Call it the ‘journalist test’.

A blockchain would secure pass/fail assessments of characteristic aspects of doctoral study:

Originality test: How does the research make an original contribution to the literature?

Research instruments: Does the research show fidelity to principles inherent in the methodologies?

Questions: How does the research plan execute on the questions asked? Are the questions aligned with the literature and with open problems in research?

Literature Review: To what degree is the research based on a comprehensive interrogation of the literature? In the case of doctorates in the sciences, literature review skill may require programming a literature search and critiquing the results.

Plagiarism checks: The work should undergo a check for plagiarism.

Multiple draft changes: This assessment checks that the writing came from the candidate. Recorded discussions of changes to the manuscript through multiple iterations attest to authorship.

Assessment would secure aspects of doctoral know-how against habits of mind derived from a doctoral education. The above list is not, of course, exhaustive. If doctoral programs decided exactly what the oral exam tested or recognized, the checklist would follow from it.

Evaluations of discrete aspects of doctoral education against specific criteria and exemplars could be secured with Bitcoin-style cryptography. Unless granting institutions and disciplinary bodies devise a robust assessment system, capped off by an oral defense if necessary, to assess the lessons of a doctoral program, glaring omissions will continue to occur.
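To make the idea concrete, here is a minimal sketch of how pass/fail micro-assessments could be chained together so that no single party can quietly alter a verdict after the fact. This is purely illustrative: the record fields (aspect, verdict, examiner) and the single-chain design are assumptions for the sake of the example, not a description of any real credentialing system.

```python
import hashlib
import json

def make_record(aspect, verdict, examiner, prev_hash):
    """Create one assessment record chained to the previous record's hash."""
    record = {
        "aspect": aspect,        # e.g. "originality", "literature review"
        "verdict": verdict,      # "pass" or "fail"
        "examiner": examiner,    # hypothetical reviewer identifier
        "prev_hash": prev_hash,  # hash of the preceding record
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(chain):
    """Recompute each hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

# Build a toy chain of three micro-assessments by one external reviewer.
chain = []
prev = "0" * 64
for aspect, verdict in [("originality", "pass"),
                        ("plagiarism check", "pass"),
                        ("literature review", "fail")]:
    rec = make_record(aspect, verdict, "external-reviewer-1", prev)
    chain.append(rec)
    prev = rec["hash"]

print(verify_chain(chain))    # True: chain is intact
chain[2]["verdict"] = "pass"  # tamper with a stored verdict
print(verify_chain(chain))    # False: tampering detected
```

The point of the design is that each verdict is bound to everything recorded before it, so retroactively upgrading a "fail" (as in the tampering line above) is detectable by anyone who replays the chain.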

So back to the fitness of the oral defense. Presently, the oral defense makes candidates defend work they may have thought better of, work from which they learned what not to do next time round. Post PhD, a newly minted PhD will likely never repeat the solitary toil of graduate school; the vast majority of academics work with others. Besides, how much experience do students get in a doctoral program with defending anything before an audience, save in the oral itself? Re-framing the defense as a candidate’s public critique of a first contribution to scholarship, before curious and supportive fellow scholars, makes more sense. Re-framing it as a lively discussion between research collaborators, watched by an examining committee, more closely mirrors research practice. Ironically, concern over assessing individual understanding has kept doctoral programs from offering collaborative research pathways to students.

The problem of a sample of one undermines doctoral assessment. Demanding less-than-book-size output from a novice researcher, in favor of more samplings of research acumen, would change the scholarly output but not the purpose of doctoral study. Doctoral programs structured around three discrete research collaborations would get a richer sampling of the skills of scholarship. It’s easy enough to improve on the standard doctoral program.

For all that toil, all that time taken, all the risk students take on, don’t doctoral students deserve better assessments and more thoughtful doctoral programming?


Is dropping out failure? Je ne regrette rien

DoctoralWriting SIG

By Claire Aitchison

In a world of spiralling credentialism where employers require ever higher qualifications, and institutions compete to recruit and keep doctoral candidates, it’s easy to see how students and supervisors can feel pressured to keep students enrolled. But what if you decide that doctoral study isn’t for you?

Recently I met up with a former student and in conversation she reminisced about her time as a doctoral student. Despite the many challenges she had experienced, she said how much she had enjoyed herself, especially the intellectual stimulation and sense of purpose she had as a doctoral scholar. She told me how she still loved her topic and wished she could have completed. I understood all of that – but then she went on to say she deeply regretted dropping out of her PhD.

This took me by surprise, because, all those years ago, when she contacted me about…

View original post 1,001 more words

Can You Coko? An Interview with Kristen Ratan of the Collaborative Knowledge Foundation

Graduate schools need to equip their students with this knowledge and should be supporting CKF.

The Scholarly Kitchen

Kristen Ratan, known to many in the scholarly communications world, has been embarking on a new adventure of late. Together with Adam Hyde, she has launched the Collaborative Knowledge Foundation (CKF), a nonprofit organization whose mission is to evolve how scholarship is created, produced and reported. I recently had the chance to talk to Kristen about the Foundation – what it is, what problem(s) it aims to address, how it is funded, and more.

Please describe the Collaborative Knowledge Foundation and your role there

CKF, or Coko for friends, is a nonprofit open source technology organization offering alternatives to the ways that we currently produce and share knowledge. We’ve kept that directive deliberately broad so that we can focus on many different types of knowledge and processes. We’re starting with research communication because it is such a critical form of knowledge that is currently broken in ways that are…

View original post 1,263 more words
