Your AI Needs a Degree - Or Perhaps You Do
I have a friend who described working with AI this way: "You have to explain your PhD-level question to the AI like it is a three-year-old." My question is: what makes the AI qualified to answer the PhD-level question in the first place?
Businesses are buying into AI as a way to either enable each employee to do more or to replace people outright. This has always been the case with automation - which is what AI really is at the end of the day - all the way back to robotics, the steam engine, and even the wheel.
But this is the first time we're expecting automation to "think" for us. Even the ever-mighty calculator only performs very well-defined calculations, following extremely rigid and precise rules. Up until this point computers have, comparatively, been overgrown calculators: so long as you gave them the appropriate input you got the appropriate output - or an error was thrown.
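To make the "overgrown calculator" point concrete, here's a minimal Python sketch (the loan-payment function is my own illustration, not taken from any particular product): valid input always yields the same verifiable output, invalid input throws an error, and there is no in-between.

```python
# A "calculator" computation: deterministic output, or an explicit error.
# (The loan-payment example is illustrative only.)
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Standard amortized loan payment: the same inputs always give the same output."""
    if principal <= 0 or months <= 0:
        raise ValueError("principal and months must be positive")  # the error gets thrown
    monthly_rate = annual_rate / 12
    if monthly_rate == 0:
        return principal / months
    return principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)

print(round(monthly_payment(250_000, 0.06, 360), 2))  # 1498.88, every single time
```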
I know that whenever I've been hired to "think" on behalf of a company, or I've hired someone else to "think" for my organization, I've either had to show or have required credentials that suggest the ability to think well. But what credentials are we demanding of AI systems? How do we know they can "think" well enough for the job at hand?
The Point of a Degree
If I've heard or said this once, I've heard or said it a thousand times: a degree really means that you've learned how to learn. Secondarily, it indicates a domain in which you've learned a great deal and are, hopefully, prepared to put what you've learned to good use.
Oversimplified? Probably. But not wrong. We've invested so much faith in the value of a degree that we have accreditation boards responsible for validating that the institutions granting our degrees meet the standards and criteria required to confer a degree at all. We even have multiple levels of degrees, each with more and more stringent requirements: Associate's, Bachelor's, Master's, Doctorate, and the like. Clearly we've found value in this concept, or we wouldn't be investing as much into it as we do. (Some would argue we've invested too much in this system, but I digress...)
What certifications are we demanding of artificial intelligence?
Entry Level Positions
Yet when someone graduates with a degree they don't usually get a job as a CEO, CISO, CFO, etc. No, they get an "entry level" position, where they're generally given tasks to do under the supervision of someone else, someone with at least partial responsibility for their productivity and comportment. We sometimes have these new workers go through some sort of training program over the first weeks and months of their tenure to make sure they understand the organization, the industry, and even the technical details of what the business does.
Some organizations even put new workers through a program generally referred to as an apprenticeship, ensuring they prove their competency before they're given responsibility for decision-making and direction. Until they've passed through this program they're generally limited to task work - important in any environment, but not the "thinking" work they're being trained and tested for.
What apprenticeship or "entry level" jobs do we give AI as it joins our organization? How are you ensuring AI is learning how to make decisions that will work well for your organization?
The Definition of Expert Is NOT "Someone From Out of Town."
In my presales career I've often made the joke that an expert is somebody that our prospect hasn't met yet. There's something reassuring for prospects in getting to talk to someone who isn't part of their local sales team - somehow just being from some other part of the company makes your sales pitch more credible, as if salespeople are ignorant liars and anybody from any other part of the company is automatically both brilliant and truthful.
Yet the salesperson and their engineering partner are the ones who have spent weeks if not months learning about your organization's needs and situation to ensure they're presenting you with the best possible solution their company can offer. On the other hand, the "pro" they brought in for this important call has probably spent less than an hour getting to know what's going on.
I've also seen this phenomenon happen with consultants and consulting engagements. As often as not, a consultant is brought in not to "discover the best path forward" for an organization, but to justify the path someone has already chosen, even as it is sold to other stakeholders as an independent, external opinion.
(My apologies if I've shattered any illusions here, or if I've torpedoed any sales or consulting engagements, but let's remember that relationships built on trust and mutual benefit are always much stronger than those built on deceit and chicanery.)
All of that was to make this point: we seem to be treating AI as if it is an expert just because it is AI. AI is assumed to be "smarter" than the people we've been employing for years who have domain experience and knowledge. But again, I ask, why? What rational basis do we have for thinking this way?
It's an Entirely Different Way of Computing, All Together.

Airplane! reference aside (and if you don't get it, spend a couple of hours watching the movie; it's worth it), AI is an entirely different way of computing. As I wrote earlier, prior to AI, computing was essentially algebra being done by a calculator: the output you got was factual, or an error condition occurred. AI output is very different: it provides guesses.
I know, that's not how it is being promoted. That's not what the marketecture you just read says - something about expert systems doing expert things, and we don't think of experts as guessing. But AI doesn't calculate; it estimates. And that estimating isn't based on math anymore; it's based on language. Large Language Models do what I'll refer to as "word frequency math": the system generates output based on what words were input and what words are statistically most likely to follow the prompt. If that's math, it's statistics, and it certainly isn't "intelligence" or knowledge. However, it is good at providing a "good enough" answer in many cases.
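To make "word frequency math" tangible, here's a deliberately crude Python sketch that predicts the next word purely from how often it followed the previous word in a tiny sample text. To be clear, real LLMs use neural networks over tokens, not raw bigram counts like this toy does - but the spirit is the same: the output is a statistically likely continuation, not a verified fact.

```python
# A toy "word frequency math" model: pick the next word by how often it
# followed the previous word in the training text. This is NOT how real
# LLMs work internally - it just illustrates prediction-by-frequency.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:  # dead end: the word only ever appeared last
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a "plausible-sounding" sequence: statistically likely, never verified.
words = ["the"]
for _ in range(6):
    words.append(next_word(words[-1]))
print(" ".join(words))  # e.g. "the cat sat on the mat the" - fluent, but it knows nothing
```

Run it a few times and you'll get a different, equally "plausible" sentence each time. Scale that basic idea up by billions of parameters and you get something that sounds expert without anything we'd normally call knowledge behind it.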
This isn't a bad thing - many problems aren't solvable with math but are solvable with language-based reasoning. However, it takes computing out of the old paradigm, where the output was immediately "black-and-white" trustworthy, into a whole spectrum that isn't just gray but the full Technicolor spread: "extremely correct" on one end, "sounds vaguely plausible" somewhere toward the other, culminating in "that's 150% bullshit and you should be embarrassed for having expressed it at all." That's a paradigm shift without a clutch for oh so many of us, and we're still treating LLMs like algebra.
So One of Us Needs a Degree
Yes, one of us - either "us" or the AI - needs a degree. Either the AI needs credentials that demonstrate it has the ability to learn and has become knowledgeable in a particular subject matter, or you and I need a degree in making AI give us correct answers. If AI really has "intelligence," it needs to be able to learn our unique environments and become a meaningful contributor to our organizations through "on-the-job" training or some form of apprenticeship. If it can't, we need to be far better educated in how to make good use of it as a tool of limited capabilities.
I can guess which of these is more likely in the short term, but I know which one I think will be more helpful to us all in the long run.
Epilogue
Now I know what you're thinking: "Bill, why are you so negative about AI?" Let me see if I can clarify. I believe AI is an important tool in our toolbox. I think it will eventually be extremely valuable to most of us in our daily lives at work and at home, just as the "Web" became after some time. But we're going to break a great many eggs before it gets to that point. We're so eager to give AI credit - and responsibility - that we're rushing right past every oops, gosh, golly, and oh crap! that happens along the way. Perhaps we need to slow our roll a little bit and make sure we do this right.