Where Does AI Fall Within Your RACI?
Responsible. Accountable. Consulted. Informed. Where does AI fit?
We're rushing towards a new paradigm where computers, and AI in general, are being given more and more credence and latitude in both our personal and corporate worlds. Liability, ownership, and responsibility are becoming less clear-cut as these changes happen.
For instance, when a self-driving vehicle with someone sitting in the driver's seat gets into a crash, who is at fault? Who would be held liable by the police, by the courts, and by the insurance companies? Is it the driver? The AI code maintainer? The car manufacturer? How does that change in a self-driving car with nobody in the driver's seat, but instead a remote operator who is supposed to intervene in an emergency? Does that remote individual now become part of the liability chain?
What happens with business decisions? Did you let AI decide if the risks imposed by a new product were below your risk threshold? Did you let AI decide that you don't need those 30 people in department <insert your department here>? Did you let AI write your sales contract? Did you let AI decide which companies you'd invest your funds in? Did you let your AI write your code in violation of the licenses on the code it "borrowed" from? Who is liable for those decisions? The company you bought the AI platform from? Your own AI prompt writers? The software engineer you told to use AI for coding? Your C-Suite? Your Board?
Are you ready for where the buck is going to stop when the poop and the air blower attempt to occupy the same space and time?
I didn't really think so. And spoiler alert: I'm not going to have a fast and easy answer for you on that either, other than what those folks at IBM said almost 50 years ago.
The RACI TLDR
If you want the deep dive on RACI, well, you'll need another resource for that - I'm not your huckleberry. But if you need the quick and dirty, I'm your man. The following definitions are taken directly from the Wikipedia page on the subject:
RACI is a model that describes the participation of various roles in completing tasks or deliverables for a project or business process.[1] The four key responsibilities most typically used are: responsible, accountable, consulted, and informed. It is often used to define roles and responsibilities in cross-departmental projects.[2]
R: Responsible — Those who complete the task.[3] There is at least one role with a participation type of responsible, although others can be delegated to assist in the work required.
A: Accountable (also approver or approving authority) — The one ultimately answerable for the correct completion of the deliverable or task, ensuring the prerequisites of the task are met, and delegating the work to those responsible.[3] Accountable stakeholders sign off and approve work that responsible stakeholders provide. According to some theories of project management, there must be only one accountable stakeholder specified for each task or deliverable.[4]
C: Consulted — Those whose opinions are sought, such as subject-matter experts, and with whom there is two-way communication.[3]
I: Informed — Those who are kept up-to-date on progress, often only on completion of the task or deliverable, and with whom there is just one-way communication.[3]
(No AI was responsible, accountable, consulted, or informed during the retrieval of these definitions from Wikipedia)
In my view, RACI really breaks down into doers (Responsible), approvers (Accountable), and anyone else (Consulted or Informed).
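If it helps to see those rules in something more concrete than definitions, here's a minimal sketch in Python of one way you might represent a RACI matrix and sanity-check it. The task names, party names, and the structure itself are made up for illustration - this isn't how any particular PMO tool does it.

```python
# A minimal sketch of how a RACI matrix might be represented and sanity-checked.
# Task names, party names, and the structure are illustrative assumptions, not a
# standard tool. The rules encoded follow the definitions above: at least one
# Responsible and exactly one Accountable per task.

RACI = {
    # task: {party: letters}, where letters is some combination of R, A, C, I
    "write quarterly report": {"analyst": "R", "gen_ai_tool": "R", "director": "A", "legal": "C"},
    "approve product launch": {"product_owner": "R", "vp_product": "A", "security": "C", "sales": "I"},
    "tune the model": {"data_science": "R", "gen_ai_tool": "R"},  # oops: nobody Accountable
}

def check_raci(matrix):
    """Return warnings for tasks that break the basic RACI rules."""
    warnings = []
    for task, parties in matrix.items():
        responsible = [p for p, letters in parties.items() if "R" in letters]
        accountable = [p for p, letters in parties.items() if "A" in letters]
        if not responsible:
            warnings.append(f"'{task}': nobody is Responsible")
        if len(accountable) != 1:
            warnings.append(f"'{task}': expected exactly one Accountable, found {len(accountable)}")
    return warnings

if __name__ == "__main__":
    for warning in check_raci(RACI):
        print(warning)
```

The point isn't the code, it's the two rules it enforces: somebody has to do the work, and exactly one somebody has to answer for it.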
Doers
Somebody does the work/task/thinking/designing/developing/coding/... you get the idea. But this group also includes teams like quality control, acceptance testing, and legal review. Yes, this group covers a lot of ground and a lot of roles. This is the space where many Gen AI vendors promise their solution can be used: it can "write the report/code/summary/research/analysis/etc."
Approvers
Here is where the buck stops, when implemented properly. This isn't just the "project manager"; this is the executive who has final responsibility for the project. Decisions are made in this group, and these are the resources who are accountable to the organization for getting the job done properly.
It may not seem like it, but AI solutions have put themselves in this space without you realizing it. The most obvious example here is a self-driving vehicle that is making literal life-and-death decisions, choosing which risks are acceptable and which are not. Yes, they sometimes defer to human intervention for the "really tricky stuff," but they're approving things like whether it is safe to proceed through that intersection at full speed even though a pedestrian seems to be stepping out onto the roadway. Is it safer to proceed, or should it slam on the brakes? Will that cause an accident because the vehicle behind is following too closely for conditions? Should the car instead swerve? Honk the horn?
Anyone Else
These are roles where valuable input may be gleaned, or where others may need the information to coordinate progress on projects of their own, but they have no actual responsibility for or ownership of the project in question. This is the lowest-risk group in which to let AI participate - assuming you trust AI not to share your project's details with third parties, which we know is a problem with many of the Gen AI systems today.
So You're Saying AI Is Already Everywhere In My RACI?
Yes.
Hey, I told you the bigger question wouldn't have an easy answer, and that's in part because this question did. So now that that's settled:
What Am I Going To Do About It?

That is the $64,000 question, isn't it? Thank you so much for asking. Here are my suggestions:
Revisit the concept of separation of duties. It is one of the most basic concepts in security and audit, but we tend to forget about it in the pursuit of efficiency. The long and the short of it is this: don't let AI be both doer and approver. Don't let it be coder and quality assurance. Don't let it be researcher and decision maker. (There's a rough sketch of this idea in code after these suggestions.)
Add focus to your approval and quality assurance processes. We all see where the trend is going: we want Gen AI to "do" lots of the stuff because we believe it will be more efficient. That may well be, but at the moment Gen AI is about as good at many of the jobs we ask of it as a fresh-out-of-school intern: the raw materials may be there, but there's no "life experience" to guide that raw talent. So don't just ask your coders to start being QA - train them on QA, or hire QA professionals, as an example. And fact-check your AI - these tools do tend to make up sources, as lawyers keep finding out the hard way.
Control the training data your Gen AI has access to. The age-old adage of "garbage in equals garbage out" is alive and well in the era of Gen AI. LLMs trained on social media content are going to tell you to put glue on your pizza to keep the cheese from sliding off - and most (if not all) of them seem to consider social media a key part of their training data. So when you have any level of control over the training materials, make good choices. When you don't, well, that's why fact-checking and QA are so important.
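To make the separation-of-duties suggestion a little more concrete, here's the rough sketch I promised, building on the earlier RACI snippet. The AI agent names and the policy function are assumptions about how you might encode "don't let AI be both doer and approver" in your own tooling - not an established standard or product.

```python
# A rough sketch of the separation-of-duties check suggested above, reusing the
# same illustrative RACI structure as the earlier snippet. The agent names and
# the policy itself are assumptions about how you might encode the rule, not an
# established standard.

AI_AGENTS = {"gen_ai_tool", "coding_assistant"}  # hypothetical AI parties in your matrix

def separation_of_duties_issues(matrix, ai_agents=AI_AGENTS):
    """Flag tasks where one party is both doer and approver, or where AI approves."""
    issues = []
    for task, parties in matrix.items():
        for party, letters in parties.items():
            # No single party should both do (R) and approve (A) the same task.
            if "R" in letters and "A" in letters:
                issues.append(f"'{task}': {party} is both Responsible and Accountable")
            # Keep AI out of the Accountable seat entirely.
            if party in ai_agents and "A" in letters:
                issues.append(f"'{task}': AI agent '{party}' holds the Accountable role")
    return issues

if __name__ == "__main__":
    flagged = {
        "merge code change": {"coding_assistant": "RA", "engineer": "C"},  # AI as coder and QA
    }
    for issue in separation_of_duties_issues(flagged):
        print(issue)
```

Run something like this over your project's RACI assignments before the work starts, and you'll at least know where AI has quietly slipped into the approver's seat.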
Wait, These Suggestions Fly In the Face of Efficiency!
I know. The promise of Gen AI has been efficiency. But the industry isn't mature enough to deliver on that promise at scale. Whether we're talking about the computing power required to ask AI a question vs. just going to a search engine or even Wikipedia for an answer, or we're talking about the "AI slop" problem, Gen AI is just not living up to the efficiency hype...yet. Sorry, but that's what the evidence tells me.
So like I said, no easy answers.
Perhaps the best advice I can give you is this: be judicious in your use of AI tools. Use them for the work they're proven to be good at, and focus on those benefits and optimizations. Be skeptical about the hype before you buy into these unproven (and over-promised) solutions. But if that horse has left the barn, well, reread the previous section and do what you can. Perhaps one day Gen AI will be as good as people at many jobs, but until then we have our work cut out for us.