Sunday, April 6, 2025

ICYMI: Columbus Edition (4/6)

Greetings from the NPE conference in Columbus, Ohio. It's also the first time in quite a while that the CMO and I have been out without the board of directors. So it's a great weekend, even if Columbus reminds me very much of a big concrete hamster tunnel run. I've still got some reading for you.

Naval Academy removes nearly 400 books from library in new DEI purge ordered by Hegseth’s office

Book banning for the military, because if there's anyone we want to have a limited view of the world...

If You Thought Mike DeWine Hated Public School Kids, Wait'll You Meet Matt Huffman

Stephen Dyer traces the source of Ohio's newest school budgeting failures.

Connecting the Dots

Why is Trumpworld so obsessed with education? Jennifer Berkshire has an answer for that question, and a suggestion for edu-journalists.

No Future in Our Dreaming

Audrey Watters spins off the Berkshire piece, plus other bonuses.

Teachers warn AI is impacting students' critical thinking

Ivana Saric at Axios with some of the least surprising news ever.

Will Religion’s Remarkable Winning Streak at the Supreme Court Continue?

Adam Liptak at the New York Times reminds us what's at stake with the upcoming SCOTUS take on a Catholic charter school.


Adam Laats at the New Republic makes the case that this Supreme Court case is really a lose-lose moment for the charter industry.

Oklahoma Democrats file joint resolutions to disapprove social studies standards

Not that Oklahoma Democrats have a lot of say, but there's a fight continuing in that legislature over the proposed christianist nationalist social studies standards.

Columbus parents, leaders express frustration over student name changes

Columbus schools surprised students and parents with a little comply-in-advance rollback of name use.

West Virginia teachers unions vote to combine and form ‘Education WV’

AFT and NEA merge for the first time in West Virginia, where any little bit of gain in teacher power is a big deal.

Protect Funding for College & Career Readiness Programs—Take Action Now!

Florida is, as always, in the forefront of terrible education choices. How about slashing the heck out of CTE, AP, and a host of other programs? Sue Kingery Woltanski has the details.


In Arizona, a failing charter is being shut down, and whining about it. Laurie Roberts offers a blistering op-ed.

Boys

Nancy Flanagan looks at the question of what has happened to boys.

With Trump’s Education Department, Public Schools Can’t Count on Previous Federal Funding Commitments

Trump's old personal policy of stiffing people for their work is now federal policy. Jan Resseger looks at the new normal of the feds reneging on contracts.

Mortal Thinking

Well, this is pretty damn awesome. Audrey Watters and Benjamin Riley together for a podcast.

The Tech Fantasy That Powers A.I. Is Running on Fumes

Tressie McMillan Cottom for the New York Times with a take that I hope turns out to be the right one (and which pissed off all sorts of techbros online), which is that AI is just mid.

This week at Forbes.com I took a look at Derek Black's new book, and you should, too. 

Have some George Harrison.



And as always, I invite you to subscribe to my newsletter, a slightly more reliable way to keep updated in this wonky webby world.


Saturday, April 5, 2025

Maybe It's The Racism

I want to return to West Ada because I think there's more to learn here, and not just for folks in Idaho.

Quick recap. Sarah Inama is a 6th grade world civilizations teacher in West Ada School District (the largest district in the state). She had two posters in her classroom. Here they are.










She was told to take them down. She did. Then she went home, thought about it, and put the second one, the one with hands of many skin tones, back up. She's been told to get rid of it by year's end. She took her story to a local reporter, and then all hell broke loose.

We know a lot more now thanks to some stellar reporting by Carly Flandro and the folks at Idaho Ed News, who FOIAed 1200 emails surrounding this. You should read the resulting stories (here and here). I'm going to pick out just a few points.

The district had Inama dead to rights when she disobeyed the order; the word "insubordination" was used. In my local union president days, the standard advice in these situations was "comply, then grieve," because once you refuse to comply, you are insubordinate. Inama's high profile made disciplining her a PR nightmare for the district, but it also seems the district admins and board couldn't really decide where they wanted to go with this.

Inama was told the poster was divisive, that it was "not neutral," that the problem was not the message, but the hands of various skin tones. Teachers shouldn't have political stuff in the classroom. Inama nails the issue here:

“I really still don’t understand how it’s a political statement,” she said. “I don’t think the classroom is a place for anyone to push a personal agenda or political agenda of any kind, but we are responsible for first making sure that our students are able to learn in our classroom.”

And yet many folks within and outside the district saw this as a political issue. How could anyone do that? Meet district parent Brittany Bieghler, who was dropping her kids off the day that parents were chalking the "Everyone is welcome here" message on the sidewalks.

“The ‘Everyone is Welcome’ slogan is one filled with marxism and DEI, there is no need for those statements because anyone with a brain knows that everyone is welcome to attend school, so there is no need to have it posted, written or worn on school grounds,” she wrote. “My family and I relocated here from a state that did not align with our beliefs and we expected it to be different here, but it seems as time goes by, its becoming more like our former state, which is extremely disheartening.”

"Anyone with a brain" might begin to suspect that everyone is not welcome here under these circumstances. And the school board itself couldn't decide what to respond, drafting an assortment of emails that tried to show conciliation to those that were defiant and defensive, including one complaining in MAGA-esque tones that Inama was naughty for going to "new media."

But I want you to look at the offending poster again. The current Trumpian argument is that all this Marxist DEI naughtiness is bad because it unfairly elevates people of color above white folks, that white folks are being discriminated against and denied what they deserve. The new Ed Department civil rights office is dedicated to rooting out discrimination--against white folks. But look at those hands, the ones that make this poster controversial. The hands are all the same size, all have the same prominence and weight in the poster. It's not as if the Black and Brown hands are dominating the frame. Is it political to suggest that they are somehow equal? What could explain that?

Maybe it's the racism.

What would be the acceptable alternative? White hands given greater prominence and weight in the image? No hands at all, so that folks can imagine whatever relationship between the skin tones they prefer, even if what they imagine contradicts the message of the poster?

Inama has also been the target of district concern trolling, the whole "Of course we agree with the message, but we don't want to see our teachers embroiled in controversy like this" thing. But that's an admission that given the choice between making children feel welcome in your district and maintaining the comfort of racists, your district chooses the comfort of racists. That is not a great district policy, no better than folks who suggested that Black students should not try to show up at newly-integrated schools because there would just be trouble.

The district also says that it took this action because of Idaho's anti-diversity bill, which parallels the anti-diversity edicts coming out of DC. While the Trump edict on DEI in education has been vague as hell, if this is how it's going to be interpreted, things are going to get extremely ugly. If it's discrimination against white people to admit that people of color exist and have just as much value as white folks--well, what would explain such a viewpoint?

Maybe it's the racism.

There's one more layer here, and the district seems to be missing this entirely. There's a world of difference between never putting that poster up in the first place and taking it down after it was already up. The latter is a pretty explicit rejection of the message, and it makes matters far worse.

West Ada is a bad harbinger of what's to come. If a public school system can't bring itself to say unequivocally, "All students are welcome here, and that means students of every race, religion, and creed," then we are in a bad place. If school leaders can't identify racism when they see it and call it wrong, they have really lost their way.

Friday, April 4, 2025

Trump, McMahon, and Gollum's Lie

They couldn't resist. Faced with a choice between either sending education back to the states in the form of unrestricted block grants or using the power of that big pile of money to force states to bend the knee, the administration just could not throw the Ring of Power away. Especially when they can use The Precious to force their most favorite thing in the world-- making someone bow to them and kiss the ring, acknowledging that Dear Leader is their master, and they will do as Dear Leader tells them to.

So the Department of Education will require every school and state to sign a statement certifying that they will absolutely comply with the administration's demand that they never, ever touch that nasty DEI stuff. Otherwise, the administration will withhold the money. Dance, puppets! Dance!

This is yet another probably-illegal Trump move; the federal government is expressly forbidden to dictate to local schools how they are going to do business. But Trump wouldn't be the first President to look at that obstacle and say, "I'll bet we can work around this." No Child Left Behind and Race To The Top wore that obstacle down to barely a speed bump.

So rather than wait for the courts to weigh in and then Trump to ignore them and then for them to weigh in again, I have an idea about how districts can deal with this. 

Lie.

Pinky promise that you will never ever touch the dirty DEI. Make the pledge. Sign whatever piece of paper they concoct. And then go back to doing what you know is right.

I mean, lying is the Trump way. Say whatever the hell you want, make whatever claims suit you, and then go back to doing whatever you intended to do. Breaking agreements and welching on contracts is the Trump business way, and given the amount of government contractual obligation being cut off in mid progress, it's apparently the Trump government way as well. 

And Trump and McMahon are lying right now with this demand. The administration continues to be coy and vague about what, exactly, about DEI they want stopped. One reason is because having clear rules reduces the dependence on Dear Leader. It's not just that the chilling effect will lead to people over-complying in advance. It's that having a clear rule would mean that people wouldn't have to constantly turn back to Dear Leader for approval. "There are no rules," says the authoritarian ruler. "Not even rules I make. There is only me. Don't ever take your attention away from me."

The DEI rules are also vague because even these guys know better than to say out loud, "The nice things must always be only for the white people. You must never give attention, privilege, or support to non-white people that is more than what white people get."

See, they are lying about what this edict requires. 

If you are a long-time regular reader, you know that I am not a fan of lying. I hate lies. Lying is a toxic activity, and it always comes with a cost.

They are lying about what they want, about what they are demanding schools do. What they appear to want is A) for every school and state in the country to acknowledge that Dear Leader is the boss of them and B) for schools to stop trying to give nice things to people who aren't white.

I hate lies. But schools are now in a lose-lose, lie-lie situation. Either they accept the lies implicit in the edict, or they lie about what they are going to do. One of those lies allows for mistreatment of students and erosion of the independence and local control of schools. The other lets educators do the work they are supposed to be doing. 

Gollum could not willingly give up the ring of power, and he used it for terrible purposes. Would it have been wrong to lie to him? These are the kinds of moral dilemmas we face these days.

I was about halfway through my career when I concluded that teaching is a sort of guerilla battle in which one pursues the work and does whatever one must to circumvent obstacles, even if those obstacles are things (and people) that are supposed to be supporting you. How many teachers dealt with requirements to tag every bit of every lesson plan with the specific standards it would address by simply adding whatever tags filled up the space and then going back to work, paperwork requirements met? Schools could do that again.

Difficult times call for difficult choices. I'm just saying.



Wednesday, April 2, 2025

OK: Another First Amendment Lawsuit

Oklahoma's Education Dudebro-in-Chief just loves him some lawsuits, so he's decided to launch another one, this time going after the Freedom From Religion Foundation in a federal lawsuit that pushes back against a challenge to his efforts to inject Christianity into Oklahoma classrooms.

The triggering event for Walters appears to have been a cease and desist letter sent to Achille Public School on behalf of a parent who objected to beginning the day with a mandatory prayer and to teachers reading Bible verses to students. Walters says this is about more than a single school, but does not name other schools in the suit. FFRF surmises that these may be references to other complaints against Oklahoma schools that were peacefully settled in previous years.

Walters' statement about the suit boils down to "We won't let these out-of-state atheists try to erase faith from public life." FFRF is based in Wisconsin.

The sequence of events laid out by the complaint puts the letter in the context of his drive to address the “dismantling of faith and family values in public schools.” It notes that he made his Bibles-in-classrooms directive, then opened the Office of Religious Liberty and Patriotism, and so, in line with that, an APS teacher started using Bible verses in lessons, and the school started including prayers in morning announcements. Shortly after that, the superintendent received the letter regarding “unconstitutional school-sponsored prayer and bible readings.” FFRF requested that the school knock it off.

The actual argument cites the "trendy disdain for deep religious convictions" line from Espinoza. It argues that Oklahoma is super-religious (therefore, I guess, they want religion injected in schools). OSDE and Walters are doing their job of determining what Oklahoma students should learn, and FFRF

has interfered with and continues to interfere with Superintendent Walters’s and OSDE’s statutory duty to oversee Oklahoma’s public schools and their duty to implement curricular standards, investigate any complaints levied against an Oklahoma school, and advocate for its students and parents.

 There is the usual dismissal of the wall between church and state:

FFRF claims as its basis for such interference as its desire to “promote the constitutional principle of separation of church and state.” Curiously, neither the word “separation” nor the word “church” appears anywhere in the text of the United States Constitution. By contrast, the Declaration of Independence makes reference to God, a “Creator,” a “Supreme Judge,” and “Divine Providence,” thereby solidifying the notion that a complete “separation of church and state” was never the intention of the Nation’s founders.

The complaint also paints FFRF as just annoying busybodies, going all the way back to their response to the 1995 Oklahoma City bombing. The audacity.

In reality, their actions are nothing more than the very prejudice, hatred, and bigotry they pretend to despise hidden behind a thinly woven cloak of constitutional championship.

Finally, Achille is a small town and FFRF has 40,000 members. So FFRF, argues the complaint in an analogy sure to draw FFRF's ire, is a Goliath picking on a David.

And while the plaintiffs face "irreparable injury," not so the FFRF:

as the Defendant has no interest in how the State of Oklahoma chooses to govern its citizens, how the duly elected Superintendent of Public Instruction performs the duties of his office, or how Oklahoma’s public schools implement curriculum and standards set forth by the OSDE and the State Board of Education. Granting an injunction weighs in favor of public interest. If the citizens of Oklahoma are unhappy with their elected officials, the solution is at the ballot box, and not in the hands of an out-of-state organization with little else to do but issue non-stop cease and desist letters to rural and independent school districts in states that are half a country away from them.

I include all these quotes just to give a sense of how angry the lawsuit is. Walters, like many MAGA christianists, just seems so angry and unhappy. 

The lawsuit can't quite make up its mind about what's going on here. This Bible reading shouldn't be a big deal because the Supreme Court has long recognized "the secular value of religious texts, including the Bible, in school settings" but also the court should enjoin FFRF from interfering with the school faculty, staff or students "exercising their rights under the Free Exercise clause of the First Amendment." So, there are no religious practices going on here, and also, how dare you interfere with these religious practices. But they're correct in mentioning the First Amendment, because if Walters' various Religion (But Only My Religion) In The Classroom policies aren't a violation of the Establishment Clause, I don't know what is. 

So here we go-- one more case to pry apart the First Amendment and batter the separation of church and state. Who knows how this will turn out, other than resulting in one more Ryan Walters media blitz. But in the meantime, if you'd like to join or contribute to the Freedom From Religion Foundation, you can do that here. 



Where Does AI Fit In The Writing Process?

Pitches and articles keep crossing my desk that argue for including AI somewhere in the student writing process. My immediate gut-level reaction is similar to my reaction upon finding glass shards in my cheeseburger, but, you know, maybe my reaction is just a bit too visceral and I need to step back and think this through.

So let's do that. Let's consider the different steps in a student essay, both for teachers and students, and consider what AI could contribute.

The Prompt

The teacher will have to start the ball rolling with the actual assignment. This could be broad ("Write about a major theme in Hamlet") or very specific ("How does religious imagery enhance the development of ideas related to the role of women in early 20th century New Orleans in Kate Chopin's The Awakening?"). 

If you're teaching certain content, I am hoping that you know the material well enough to concoct questions about it that are A) worth answering and B) connected to your teaching goals for the unit. I have a hard time imagining a competent teacher who says, "Yeah, I've been teaching about the Industrial Revolution for six weeks, but damned if I know what anyone could write about it." 

I suppose you could try to use ChatGPT to bust some cobwebs loose or propose prompts that are beyond what you would ordinarily set. But evaluating responses to a prompt that you haven't thought through yourself? Also, will use of AI at this stage save a teacher any real amount of time?

Choosing the Response

Once the student has the prompt, they need to do their thinking and pre-writing to develop an idea about which to write. 

Lord knows that plenty of students get stuck right here, so maybe an AI-generated list of possible topics could break the logjam. But the very best way to get ready to write about an idea starts when you start developing the idea. 

The basic building block of an essay is an idea, and the right question to ask is "What do I have to say about this prompt?" Asking ChatGPT means you're starting with the question, "What could I write an essay about?" Which is a fine question if your goal is to create an artifact, a piece of writing performance. 

I'm not ruling out the possibility that a student might see a topic on a list and have a light bulb go off-- "OOoo! That sounds interesting to me!" But mostly I think asking LLMs to pick your topic is the first step down the wrong road, particularly when you consider the possibility that the AI will spit out an idea that is simply incorrect.

Research and Thinking

So the student has picked a topic and is now trying to gather materials and formulate ideas. Can AI help now?

Some folks think that AI is a great way to summarize sources and research. Maybe combine that with having AI serve as a search engine. "ChatGPT, find me sources about symbiosis in water-dwelling creatures." The problem is that AI is bad at all those things. Its summarizing abilities are absolutely unreliable and it is not a good search engine, both because it tends to make shit up and because its training data is probably not up to date.

But here's the thing about the thinking part of preparing to write. If you are writing for real, and not just filling in some version of a five paragraph template, you have to think about the ideas and their component parts and how they relate, because that is where the form and organization of your essay comes from.

Form follows function. If you start with five blank paragraphs and then proceed to ask "What can I put in this paragraph?" you get a mediocre-at-best artifact that can be used for generating a grade. But if you want to communicate ideas to other actual humans, you have to figure out what you want to say first, and that will lead you straight to How To Say It.

So letting AI do the thinking part is a terrible idea. Not just because it produces a pointless artifact, but because the whole thinking and organizing part is a critical element of the assignment. It exercises exactly the mental muscles that a writing assignment is supposed to build. In the very best assignments, this stage is where the synthesis of learning occurs, where the student really grasps understanding and locks it in place. 

So many writing problems are really thinking problems-- you're not sure how to say it because you're not sure what to say. And every problem encountered is an opportunity. Every point of friction is the place where learning occurs.

Organization

See above. If you have really done the thinking part, you can organize the elements of the paper faster and better than the AI anyway. 

Drafting

You've got a head full of ideas, sorted and organized and placed in a structure that makes sense. Now you just have to put them into words and sentences and paragraphs. Well, maybe not "just." This composing stage is the other major point of the whole assignment-- how do we take the thoughts into our heads and turn them into sequences of words that communicate across the gulf between separate human beings? That's a hell of a different challenge than "how does one string together words to fill up a page in a way that will collect grade tokens?" 

And if you've done all the thinking part, what does tagging in AI do for you anyway? You know better than the AI what exactly you have in mind, and by the time you've explained all that in your ChatGPT prompt box, you might as well have just written the essay yourself.

I have seen the argument--from actual teachers-- that having students use AI to create a rough draft is a swell idea. Then the student can just "edit" the AI product-- just fix the mistakes, organize things more in line with what you were thinking, maybe add a little voice here and there. 

But if you haven't done the thinking part, how can you edit? If you don't know what the essay is intended to say--or if, in fact, it came from a device that cannot form intent-- how can you judge how well it is working?

Proof and edit

The AI can't tell you how well you communicated what you intended to communicate because, of course, it has no grasp of your intent. That said, this is a step where I can imagine some useful computerized analysis, though whether any of it rises to the level of AI is debatable.

I used to have my students do some analysis of their own writing to illuminate and become more conscious of their own writing patterns. Some classics like counting the forms of "be" in the essay (shows if you have a love for passive or weak verbs). Count the number of words per sentence. Do a grammatical analysis of the first four words of every sentence. All data points that can help a writer see and then try to break certain unconscious habits. Students can do this by hand; computers could do it faster, and that would be okay.
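Those by-hand tallies are the kind of mechanical counting a few lines of code genuinely can do faster. Here's a minimal sketch of that idea (a hypothetical helper I'm inventing for illustration, not any tool from my classroom), which counts forms of "be," averages words per sentence, and lists each sentence's opening word:

```python
import re

# Forms of "be" -- a high count hints at a love for passive or weak verbs.
BE_FORMS = {"be", "am", "is", "are", "was", "were", "been", "being"}

def analyze(text):
    """Return simple self-editing metrics for a draft of writing."""
    # Split into sentences on end punctuation; drop empty trailing pieces.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    be_count = sum(1 for w in words if w in BE_FORMS)
    avg_len = len(words) / len(sentences) if sentences else 0
    # The first word of each sentence, to surface repeated opening habits.
    openers = [s.split()[0].lower() for s in sentences if s.split()]
    return {"be_count": be_count,
            "avg_sentence_length": avg_len,
            "openers": openers}
```

A student could run their own draft through this and see the patterns for themselves, which is the point: the data helps the writer notice a habit, but the writer still has to decide what to do about it.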

The AI could be played with for some other uses. Ask the AI to summarize your draft, to see if you seem to have said what you meant to say. I suppose students could ask AI for editing suggestions, but only if we all clearly understand that many of those suggestions are going to be crappy. I've seen suggestions like having students take the human copy and the edited-by-AI copy and perform a critical comparison, and that's not a terrible assignment, though I would hope that the outcome would be realization that human editing is better. 

I'm also willing to let my AI guard down here because decades of classroom experience taught me that students would, generally speaking, rather listen to their grandparents declaim loudly about the deficiencies of Kids These Days than do meaningful proofreading of their own writing. So if playing editing games with AI can break down that barrier at all, I can live with it. But so many pitfalls; for instance, the students who comply by writing the most half-assed rough draft ever and just letting ChatGPT finish the job. 

Final Draft

Another point at which, if you've done all the work so far, AI won't save you any time or effort. On the other hand, if this is the main "human in the loop" moment in your process, you probably lack the tools to make any meaningful final draft decisions.

Assessing the Essay

As we have noted here at the institute many, many times over the years, computer scoring of essays is the self-driving car of the academic world. It is always just around the corner, and it never, ever arrives. Nor are there any signs that it is about to.

No responsible school system (or state testing system) should use computers to assess human writing. Computers, including AI programs, can't do it well for a variety of reasons, but let's leave it at "They do not read in any meaningful sense of the word." They can judge whether the string of words is a probable one. They can check for some grammar and usage errors (but they will get much of that wrong). They can determine if the student has wandered too far from the sort of boring mid sludge that AI dumps every second onto the internet. And they can raise the philosophical question, "Why should students make a good faith attempt to write something that no human is going to make a good faith attempt to read?"

Yes, a ton of marketing copy is being written (probably by AI) about how this will streamline teacher work and make it quicker and more efficient and even more fair (based on the imaginary notion that computers are impartial and objective). The folks peddling these lies are salivating at the dreams of speed and efficiency and especially all the teachers that can be fired and replaced with servers that don't demand raises and don't join unions and don't get all uppity with their bosses. 

But all the wishing in the world will not bring us effective computer assessment of student writing. It will just bring us closer to the magical moment when AI teachers generate an AI assignment which student AIs then generate responses to, to be fed into AI assessment programs. The AI curriculum is thereby completed in roughly eight and a half minutes, and no actual humans even have to get out of bed. What that gets us, other than wealthy, self-satisfied tech overlords, is not clear.

Bottom Line

All of the above is doubly true if you are in classroom where writing is used as an assessment of content knowledge. 

This is all going to seem like quibbling to people for whom having an artifact to exchange for grade tokens is the whole point of writing. But if we want to foster writing as a real, meaningful means of expression and communication, AI doesn't have much to offer the process. Call me an old fart, but I still haven't seen much of a use case for AI in the classroom when it comes to any sort of writing.

What AI mostly promises is the classroom equivalent of having someone come to the weight room and do the exercises for you. Yeah, it's certainly easier than doing it yourself, but you can't be surprised that you aren't any stronger when your substitute is done. 






Sunday, March 30, 2025

Ready For An AI Dean?

From the very first sentence, it's clear that this recent Inside Higher Ed post suffers from one more bad case of AI fabulism. 

In the era of artificial intelligence, one in which algorithms are rapidly guiding decisions from stock trading to medical diagnoses, it is time to entertain the possibility that one of the last bastions of human leadership—academic deanship—could be next for a digital overhaul.

AI fabulism and some precious notions about the place of deans in the universe of human leadership.

The author is Birce Tanriguden, a music education professor at the Hartt School at the University of Hartford, and this inquiry into what "AI could bring to the table that a human dean can't" is not her only foray into this topic. This month she also published in Women in Higher Education a piece entitled "The Artificially Intelligent Dean: Empowering Women and Dismantling Academic Sexism-- One Byte at a Time."

The WHE piece is academic-ish, complete with footnotes (though mostly about the sexism part). In that piece, Tanriguden sets out her possible solution:

AI holds the potential to be a transformative ally in promoting women into academic leadership roles. By analyzing career trajectories and institutional biases, our AI dean could become the ultimate career counselor, spotting those invisible banana peels of bias that often trip up women's progress, effectively countering the "accumulation of advantage" that so generously favors men.

Tanriguden notes the need to balance efficiency with empathy:

Despite the promise of AI, it's crucial to remember that an AI dean might excel in compiling tenure-track spreadsheets but could hardly inspire a faculty member with a heartfelt, "I believe in you." Academic leadership demands more than algorithmic precision; it requires a human touch that AI, with all its efficiency, simply cannot emulate.

I commend the author's turns of phrase, but I'm not sure about her grasp of AI. In fact, I'm not sure that current Large Language Models aren't actually better at faking a human touch than they are at arriving at efficient, trustworthy, data-based decisions.  

Back to the IHE piece, in which she lays out what she thinks AI brings to the deanship. Deaning, she argues, involves balancing all sorts of competing priorities while "mediating, apologizing and navigating red tape and political minefields."

The problem is that human deans are, well, human. As much as they may strive for balance, the delicate act of satisfying all parties often results in missteps. So why not replace them with an entity capable of making precise decisions, an entity unfazed by the endless barrage of emails, faculty complaints and budget crises?

The promise of AI lies in its ability to process vast amounts of data and reach quick conclusions based on evidence. 

Well, no. First, nothing being described here sounds like AI; this is just plain old programming, a "Dean In A Box" app. Which means it will process vast amounts of data and reach conclusions based on whatever the program tells it to do with that data, and that will be based on whatever the programmer wrote. Suppose the programmer writes the program so that complaints from male faculty members are weighted twice as much as those from female faculty. So much for AI dean's "lack of personal bias." 
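To see how that plays out in practice, here's a minimal sketch of a hypothetical "Dean In A Box" rule (all names and numbers are invented for illustration, not from any real system). The point is that the weighting is just a choice the programmer made, dressed up as objectivity:

```python
# A hypothetical "Dean In A Box" complaint-ranking rule. The gender
# weighting is an (obviously bad) choice baked in by the programmer;
# the software simply does what it was told.

def complaint_priority(severity, faculty_gender, weight_by_gender=True):
    """Return a priority score for a faculty complaint (illustrative only)."""
    weight = 2.0 if (weight_by_gender and faculty_gender == "male") else 1.0
    return severity * weight

# Two identical complaints, different faculty members:
print(complaint_priority(5, "male"))    # 10.0
print(complaint_priority(5, "female"))  # 5.0
```

Nothing in the output announces that a bias is present; it just looks like a number the computer produced.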

But suppose she really means AI in the sense of software that uses a form of machine learning to analyze and pull out patterns in its training data. AI "learns" to trade stocks by being trained on a gazillion previous stock trades and situations, thereby allowing it to suss out patterns for when to buy or sell. Medical diagnostic AI is trained on a gazillion examples of patient medical histories, allowing it to recognize how a new entry from a new patient fits into all those patterns. Chatbots like ChatGPT do words by "learning" from vast (stolen) samples of word use, building a mountain of word pattern "rules" that allow them to determine which words are likely to come next.
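The word-prediction version of this can be sketched with a toy model. Real LLMs are vastly more complicated, but the underlying move, counting patterns in past text and emitting the most likely continuation, looks like this (the corpus here is made up for illustration):

```python
# A toy version of "learning what word comes next" from a training corpus.
from collections import Counter, defaultdict

corpus = "the dean denied the request and the dean approved the budget".split()

# Count which word follows each word in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "dean" -- it followed "the" most often
```

The model knows nothing about deans or budgets; it only knows which words tended to follow which in the data it was fed.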

All of these AI are trained on huge data sets of examples from the past.

What would you use to train AI Dean? What giant database would you use to train it, what collection of info about the behavior of various faculty and students and administrators and colleges and universities in the past? More importantly, who would label the data sets as "successful" or "failed"? Medical data sets come with simple metrics like "patient died from this" or "the patient lived fifty more years with no issues." Stock markets come with their own built-in measure of success. Who is going to determine which parts of the Dean Training Dataset are successful or not?
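A minimal sketch makes the labeling problem concrete. Here the same invented "past dean decisions" get labeled by two different people, and the resulting models disagree about a new case (the data, labels, and one-feature nearest-neighbor setup are all hypothetical simplifications):

```python
# The same training data, labeled two ways, yields two opposite "AI Deans."

def train_and_predict(examples, labels, new_case):
    """1-nearest-neighbor on a single numeric feature: find the most
    similar past case and return its label."""
    nearest = min(range(len(examples)), key=lambda i: abs(examples[i] - new_case))
    return labels[nearest]

decisions = [10, 40, 90]  # e.g. percent of a department's budget request cut

labels_by_provost = ["success", "success", "failure"]  # provost liked the cuts
labels_by_faculty = ["success", "failure", "failure"]  # faculty mostly did not

print(train_and_predict(decisions, labels_by_provost, 35))  # success
print(train_and_predict(decisions, labels_by_faculty, 35))  # failure
```

Same algorithm, same history; whoever wrote the labels decided the answer.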

This is one of the problems with chatbots. They have a whole lot of data about how language has been used, but no meta-data to cover things like "This is horrifying racist nazi stuff and is not a desirable use of language," and so we get the multiple examples of chatbots going off the rails.

Tanriguden tries to address some of this, under the heading of how AI Dean would evaluate faculty:

With the ability to assess everything from research output to student evaluations in real time, AI could determine promotions, tenure decisions and budget allocations with a cold, calculated rationality. AI could evaluate a faculty member’s publication record by considering the quantity of peer-reviewed articles and the impact factor of the journals in which they are published.

Followed by some more details about those measures. Which raises another question. A human could do this-- if they wanted to. But if they don't want to, why would they want a computer program to do it?

The other point here is that once again, the person deciding what the algorithm is going to measure is the person whose biases are embedded in the system. 

Tanriguden also presents "constant availability, zero fatigue" as a selling point. She says deans have to do a lot of meetings, but (her real example) when, at 2 AM, the department chair needs a decision on a new course offering, AI Dean can provide an answer "devoid of any influence of sleep deprivation or emotional exhaustion." 

First, is that really a thing that happens? I'm just a K-12 guy, so maybe I just don't know. But that seems to me like something that would happen in an organization with way bigger problems than any AI can solve. Second, once again, who decided what AI Dean's answer will be based upon? And if the criterion is so clear that it can be codified in software, why can't even a sleepy human dean apply it?

Finally, she goes with "fairness and impartiality," dreaming of how AI Dean would apply rules "without regard to the political dynamics of a faculty meeting." Impartial? Sure (though we could argue about how desirable that is, really). Fair? Only as fair as it was written to be, which starts with the programmer's definition of "fair."

Tanriguden wraps up the IHE piece by once again acknowledging that leadership needs more than data, raising "the issue of the academic heart."

It is about understanding faculty’s nuanced human experiences, recognizing the emotional labor involved in teaching and responding to the unspoken concerns that shape institutional culture. Can an AI ever understand the deep-seated anxieties of a faculty member facing the pressure of publishing or perishing? Can it recognize when a colleague is silently struggling with mental health challenges that data points will never reveal?

In her conclusion she arrives at Hybrid Dean as an answer:

While the advantages of AI—efficiency, impartiality and data-driven decision-making—are tantalizing, they cannot fully replace the empathy, strategic insight and mentorship that human deans provide. The true challenge may lie not in replacing human deans but in reimagining their roles so that they can coexist with AI systems. Perhaps the future of academia involves a hybrid approach: an AI dean that handles (or at least guides) the operational decisions, leaving human deans to focus on the art of leadership and faculty development.

We're seeing this sort of resigned knuckling under from lots of education folks who accept the predicted inevitability of AI (as always in ed tech, predicted by people who have a stake in the biz). But the important part here is that I don't believe AI can hold up its half of the bargain. In a job that involves managing humans and education and interpersonal stuff in an ever-changing environment, I don't believe AI can bring any of the contributions she expects from it.

ICYMI: One Week To Go Edition (3/30)

Next weekend the CMO and I will be off to the gathering of the Network for Public Education. It will be a nice road trip for us (the CMO is an excellent travel partner), and it is always invigorating to be around a whole lot of people who believe that public education is important and worth defending. If you're there, be sure to say hi!

In the meantime, keep sharing and amplifying and contacting your Congressperson regularly. These are not the days to sit quietly and hope for the best.

Here's this week's list.

Trump Says He’ll Fully Return Education to the States: Why That’s a Dangerous Idea

Jan Resseger points to some of what reporters have uncovered about the potential pitfalls of Trusk's "back to the states" plans.

Coming to Life: Woodchippers and Community Builders

Nancy Flanagan on the moment in Michigan, and some encouragement to keep swinging.

Texas lawmakers advance bill that makes it a crime for teachers to assign "Catcher in the Rye"

Rebecca Crosby and Noel Sims at Popular Information cover the latest censorship bill in Texas.

Trump and his allies are selling a story of dismal student performance dating back decades. Don't buy it

The regime is pushing its bad education ideas on the back of false claims about education failures. Jennifer Berkshire talks to Karin Chenoweth about the actual truth.

Embattled Primavera Online owner, who made millions while his charter school students failed, lays off staff but is poised for another major payout
 
In Arizona, the news reports on one more charter scamster filling his own pockets while shafting actual workers.

Are taxpayers footing the bill for out-of-state cyber school students? CASD investigating

In Pennsylvania, one school district discovers it is paying cyber tuition for students who don't even live there anymore.

Tallahassee: Closing Title I Schools and opening Private Schools for the Privileged.

Profiteers at Charter Schools USA have decided there's more money to be made serving the elite, so goodbye Renaissance Academy and hello to a private school for "advanced and gifted learners." This story is important because it shows the shift from charter schools to private schools under universal vouchers. Sue Kingery Woltanski explains in this picture of some of the most naked money-grubbing to be seen--but not for the last time.


Research might suggest it could become addictive for some folks.

Banned Books, School Walkouts, Child Care Shortages: Military Families Confront Pentagon's Shifting Rules

At Military.com, a look at how the takeover of DOD schools by the regime is going, and how students are fighting back.

The Plagiarism Machine

Have you subscribed to the Audrey Watters newsletter yet? You should do that. And get the paid subscription for extra stuff. She looks this week at how AI is stealing content on an impossible scale.

Dismantling Public Education: No Laughing Matter!

Nancy Bailey on Trusk's dismantling of the education department.

EXCLUSIVE: AI Insider reveals secrets about artificial general intelligence

Ben Riley passes along some AI-skeptic wisdom from Yann LeCun (no, AI will not replace teachers).


John Warner contemplates being an author whose work has been thieved by AI developers. What is the future of writing?

I Teach Memoir Writing. Don’t Outsource Your Life Story to A.I.

Tom McAllister at the New York Times with an exceptional argument for writing by humans, not by bots.


Carlos Greaves at McSweeney's, reminding us that satire isn't always entirely funny.

I've got some Shirley Temple for you this week. Bert Lahr is fine, but when Bill Robinson comes down those steps...!



Also, join me at my newsletter. Free now and always.