aXai

AI-Powered Advocacy & Assistive Technology

A Tale of Two Degrees—and One Kafkaesque Nightmare

It all started when my eldest son Dylan, an outstanding Psychology student, was offered a fully funded Philosophy degree by his university—an acknowledgement of his exceptional academic record. You’d think that’s a dream scenario: two fascinating fields, both of which Dylan excels at. So he did what any ambitious student would do and enrolled in two Psych subjects and two Philosophy subjects each term. Perfectly logical, right?

Enter the Machine: that sprawling system of bureaucracy, rules, and contradictory directives. After two years of successful study, the Machine decided Dylan was only “part-time.” Why? Because Psychology and Philosophy apparently aren’t “related enough” to count as one cohesive load. Never mind that he was taking four subjects each term (which any rational person would call “full-time”). The Machine slapped him with a $30K debt out of nowhere.

From Scholarship to System-Created Debt

In the space of a few confusing letters and phone calls, Dylan went from celebrated scholarship recipient to reluctant “debtor,” all because a clumsy algorithm regurgitated archaic rules. The emotional toll was immense: sleepless nights, stress-induced episodes, and the sinking feeling that the system could derail his future with a shrug. And it wasn’t just Dylan’s life this fiasco threatened to upend; plenty of families out there have faced similar government absurdity, whether through Services Australia, the NDIS, or another arm of the bureaucratic beast.

Why People Dread AI: Robodebt’s Ugly Shadow

It’s no wonder AI spooks people. Think “Robodebt,” where blunt-force “automation” hammered vulnerable Aussies with false debt notices, leaving real humans to cope with the wreckage. Robodebt looked more like a 1980s calculator with a mean streak than any form of “intelligence,” yet if that’s your benchmark for AI, of course it’s terrifying. The fear is understandable: many see AI as just another tool for politicians to bolt on “quick-fix” regulations that only deepen the labyrinth.

Real AI: An Antidote to Absurdity

But Dylan’s story could have ended differently with truly people-centric AI—tech that is:

  1. Guided by Actual Human Logic
    • Not a knee-jerk algorithm coded to chase hypothetical “fraud.”
    • A system that sees “four subjects” and recognises full-time study, rather than deferring to a rulebook that lumps Philosophy and Psychology into separate orbits (a rough sketch of this contrast follows the list).
  2. Owned by the People It Serves
    • Instead of politicians piling on new regulations each election cycle, the AI is overseen by a community that includes students, teachers, disability advocates—basically, the real-life “end-users.”
    • Their feedback and experiences continually refine the system, so mistakes aren’t repeated ad infinitum.
  3. Able to Learn and Self-Correct
    • If a glitch in the logic says Dylan is “part-time,” the moment it’s flagged, the AI updates itself, ensuring no other student ever faces the same heartbreak.
    • Contrast that with the standard approach: bureaucrats quietly settle individual disputes, never fixing the core logic, and the cycle of absurdity marches on.
  4. Transparent and Empathetic
    • No more cryptic letters or calls from under-trained staff who can’t explain the system’s bizarre decisions.
    • Instead, the AI spells out the rationale in plain language—and it factors in personal circumstances (like mental health vulnerabilities or a scholarship’s special criteria).
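
To make point 1 concrete, here’s a deliberately simple sketch in Python. Everything in it is invented for illustration: the field names, the four-subject threshold, and both rules are assumptions for the example, not aXai’s (or any agency’s) actual logic. It contrasts a Machine-style rule that only counts subjects within a single “related” discipline against a people-centric rule that counts the whole load and explains its decision in plain language, as point 4 asks.

# Hypothetical sketch only: the field names, the four-subject threshold, and both
# rules below are invented for illustration; none of this is real agency (or aXai) code.
from dataclasses import dataclass

FULL_TIME_SUBJECTS = 4  # assumed threshold for this example


@dataclass
class Enrolment:
    student: str
    disciplines: list[str]  # one discipline code per enrolled subject, e.g. ["PSYC", "PSYC", "PHIL", "PHIL"]


def machine_rule(e: Enrolment) -> tuple[str, str]:
    """Machine-style logic: only the largest single 'related' discipline counts."""
    biggest = max(e.disciplines.count(d) for d in set(e.disciplines))
    status = "full-time" if biggest >= FULL_TIME_SUBJECTS else "part-time"
    return status, "Determined under rulebook clause 7(b)."  # cryptic and unexplained


def people_centric_rule(e: Enrolment) -> tuple[str, str]:
    """Counts the whole load and explains the decision in plain language."""
    total = len(e.disciplines)
    status = "full-time" if total >= FULL_TIME_SUBJECTS else "part-time"
    reason = (
        f"{e.student} is enrolled in {total} subjects this term across "
        f"{len(set(e.disciplines))} disciplines; {FULL_TIME_SUBJECTS} or more subjects "
        f"counts as full-time, regardless of how those disciplines are grouped."
    )
    return status, reason


dylan = Enrolment(student="Dylan", disciplines=["PSYC", "PSYC", "PHIL", "PHIL"])
print(machine_rule(dylan))         # ('part-time', 'Determined under rulebook clause 7(b).')
print(people_centric_rule(dylan))  # ('full-time', 'Dylan is enrolled in 4 subjects ...')

The point isn’t the code itself; it’s that the second rule’s reasoning can be printed, audited, and corrected in one place the moment someone like Dylan flags a bad outcome, which is exactly what points 2 and 3 are asking for.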

Overcoming the Fear of “Another Robodebt”

The big question: Can we really trust AI in public administration? If it’s the kind of AI that politicians tweak on a whim to prop up election promises, absolutely not. But if it’s community-driven, with open audits and genuine oversight, it can become a powerful check against the very kind of bureaucratic absurdity Dylan encountered.

  • No More Surprises: Instead of letting archaic rules blindside students with retroactive debts, a well-designed AI flags potential issues early—ideally before a debt gets raised at all.
  • Less Bureaucratic Runaround: A single platform providing consistent decisions, as opposed to a snake pit of contradictory phone lines and office visits.
  • A Real Safety Net: People’s unique contexts (like mental health or disability needs) aren’t lost in paperwork. The system knows them, recognises them, and adjusts automatically.

The Path to a Saner System

Dylan’s $30K saga underscores why we need a smarter approach. The Machine, as it stands, is a carnival of inconsistent rules, automated letters, and politicians slapping on extra layers whenever they think it might sound good in a campaign speech. That’s what leads to tragedies like Robodebt and countless other heartbreaks.

But there is a way forward: AI that’s actually intelligent, shaped by the collective input of those at the receiving end of bureaucracy. One that ditches the “guilty until proven innocent” mindset, focuses on genuine understanding, and never stops evolving toward fairness. For Dylan, that kind of AI would’ve instantly recognised the absurdity of labelling a four-subject load as “part-time.” It would have spared him the emotional and financial nightmare. And it would have kept an essential truth front and centre: that public services are meant to support people, not sabotage them.

If we want to avoid another Dylan story—or worse—we need to reclaim AI from political whims and put it where it belongs: in the hands of the communities it’s supposed to serve. That’s not something to fear. In fact, it might just be our best hope at cutting through the tangle of red tape and finally restoring some common sense to a system that’s become dangerously absurd.

#NDIS #Disability #AIForGood #KafkaWouldHaveQuit #SystemSucksButWeFixIt #AIIsTheOnlyRealLogic #BureaucracyBrokeMySon #SorryNotSorryBureaucracy #FightingTheMachineWithCode