As we all experiment with AI, it’s likely that each of our journeys is different. This is one such journey, driven by a need to make expertise portable. It starts with personal YouTube experiments and moves into merging the concept of communities of practice with AI.
This post uses the word AI as shorthand for large language models, which is slightly reductionist, but it’s what we seem to do these days.
Where I’m at
Let’s be honest, we’re all at very different places with AI due to our philosophies, time, and abilities. I’d probably describe myself as an inquisitive, rigorous contrarian, which I should probably explain. I’m less focused on efficiency gains in isolation, and I’m still not sold on replacing the process with the product, because the human going through the process can be more valuable than the end product itself.
I orient my AI thinking toward getting actual people to a point where they can make good decisions with sound judgment. This encapsulates efficiency, but in a way my mind accepts as more valuable.
While many at the frontier are focused on crazy levels of agentic automation (or so YouTube videos tell me) or a shotgun approach to questionable efficiency gains, I’m sitting at a more thoughtful frontier, using the technology to enhance and validate human expertise. I’ll admit the idea that one day we may have AI colleagues on the organisation chart, even as your boss, feels like a soulless nightmare and won’t end well.
I think that’s fine. We all have different ways of thinking, and we can’t all be the Reed Richards of AI. What’s important is that we’re engaging with it and navigating the journey.
The YouTube experiment
I’ve discussed my journey and failure with YouTube before, but I continue to experiment with it as a learning opportunity. One experiment was in how to focus my channel on a specific brand of content targeted at a specific audience.
Problem number one was defining the content strategy and audience. Problem number two was finding a way to assess content against that definition without a YouTube advisor to keep me on track: the lack of any opinion other than my own. To address these problems, I decided to set out my brand and audience as a practice document against which written video scripts, as well as ideas, titles, and so on, could be assessed.
It made perfect sense as an approach, as I essentially wanted to make portable the expertise of a resource I didn’t have access to. The result is a short document in Google Docs that Gemini can access (it can also access my YouTube channel and actual videos). The document outlines my audience and content in three layers:
- The identity (who)
- The undercurrent emotions (why)
- The tangible actions (how)
The beauty of it is that I can tell Gemini to assess my YouTube script as a YouTube advisor, taking into account the practice document. I can also have AI review videos on my channel and give an opinion on how well they align with the content and audience strategy and, looking back, what might have resulted in lower views.
It does a really good job of telling me where my script delivers well on the practice and suggests improvements. It keeps me in my lane with my content and audience while allowing me to consider and act on the returned advice. I can prompt it for title advice, and it does a far better job than I do.
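For the technically curious, the same assessment can be sketched against the Gemini API rather than the Google Docs integration I actually use. A minimal sketch follows, assuming the google-generativeai Python package; the model name, file names, and prompt wording are all illustrative rather than my exact setup:

```python
# A minimal sketch of the assessment, assuming the google-generativeai
# package. I actually do this through Gemini's Google Docs access; the
# model name, file paths, and prompt wording are all illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

practice = open("youtube_practice.md").read()  # the who/why/how practice doc
script = open("draft_script.md").read()        # the video script to assess

prompt = f"""Act as a YouTube advisor. Assess the script below against the
practice document, covering identity (who), undercurrent emotions (why), and
tangible actions (how). Say where the script delivers on the practice,
suggest improvements, and offer title options.

PRACTICE DOCUMENT:
{practice}

SCRIPT:
{script}"""

response = model.generate_content(prompt)
print(response.text)
```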
Now, how this will pan out in the metrics that matter, like increasing my views, watch time, and the number of people who become regular, committed viewers, is another matter and probably not the subject of this post. It does, however, exemplify why I continue with YouTube, both creatively and as a sort of personal marketing experiment.
What does matter is how this informed my next experiment.
Making expertise portable
A real-world problem that presented itself was how to transfer expertise from a highly constrained, experienced pool of resources to a less constrained but less experienced set of resources. You can’t do this with process alone, as that only tells you what to do and when. You can’t solve it with templates either, because while they’re explicit about what should be in a document, they don’t convey what good looks like or help someone complete the document well. It’s entirely possible someone may look at a template and find it doesn’t help them complete it well at all.
Is it possible to make that expertise more portable, as I did with my YouTube channel experiment? The answer is yes, as a community of practice is an established model, and practices are a great way to guide AI on what good looks like.
A practice-based approach
In the first case, I experimented with writing a practice for what a good problem investigation looks like, as this was the immediate and present problem. When different resources undertook investigations, responses varied because the expertise for what a good investigation looked like wasn’t sufficiently portable.
In fairness, the improvement came from a number of approaches: coaching calls, the practice as a document to read, and the practice as a tool for the AI to assess investigations. The final step was probably the least impactful, but this was in a very small team; the distribution of those elements may well have been different in a much larger team, where personal investments of time don’t transport as efficiently.
Emergent approaches
Once these things are out in the wild, people find different ways to use them. The intention of the investigation practice was to guide human beings and help them uplift investigations based on what good looks like.
It was not to generate documents, and I still maintain this is the case. I believe the process can be sped up, but there is as much value in a human going through the process as in the document itself.
Yet someone took snapshot images of a rambling JIRA ticket’s comments and prompted the AI to construct a document, using the snapshots as input and the investigation practice as direction on what good looks like for an investigation document. The output wasn’t used directly; it was edited into a better form, but it provided a very good rough draft and shape.
I still hold to the idea that the practice isn’t meant to replace writing documents. The good thinking was already embedded in the image snapshots from the JIRA ticket; it just wasn’t well structured. Even so, it’s a productive use of the practice via the AI.
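Here is a sketch of that emergent workflow, under the assumption that the snapshots exist as local image files; in reality this was done interactively, and every name below is hypothetical:

```python
# Sketch of the emergent use: JIRA comment snapshots in, rough draft out.
# Done interactively in reality; model name and file names are hypothetical.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

practice = open("investigation_practice.md").read()
snapshots = [PIL.Image.open(p) for p in ("jira_1.png", "jira_2.png")]

prompt = (
    "The images are snapshots of comments from a JIRA ticket describing a "
    "problem investigation. Using the practice document below as direction "
    "on what a good investigation document looks like, draft a structured "
    "investigation document from the content in the images.\n\n"
    f"PRACTICE DOCUMENT:\n{practice}"
)

# generate_content accepts a mixed list of text and image parts
response = model.generate_content([prompt, *snapshots])
print(response.text)  # a rough draft to edit, not a finished document
```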
Extending the concept
Since the idea of creating practice documents worked with investigations, via personal experiments and some actual use, including an emergent one, it made sense to extend the idea to other documents we may be called on to write:
- Solution Approaches
- Requirements
- Designs
The first lesson was that writing these things is valuable in and of itself, which is the ‘process is as valuable as the product’ argument again. They all exist in coordination with a template, though the practices don’t strictly need one; it just gets people to a good structural start faster. It also helps human readers understand these things if they can see the template reflected in the practice, or vice versa, depending on which one they approach the problem through first.
Experiments with using these practice documents to assess these outputs show the AI provides a good assessment of the documents themselves, as it reflects feedback informed by the good thinking a more experienced resource would bring to the process. It has made that experience more portable. Because the AI knows what good looks like, it can actively provide feedback on more esoteric concerns, such as whether the document tells the reader the overall story of the solution rather than just presenting a bunch of isolated parts. This, again, is exactly what a more experienced resource would bring to a completed document.
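As a sketch of how the pattern extends, the assessment becomes a small reusable function keyed by document type; again, every file name and prompt here is an assumption, not my exact setup:

```python
# Extending the assessment pattern across document types. A sketch only:
# the practice file names and prompt wording are hypothetical.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

PRACTICES = {
    "solution approach": "solution_approach_practice.md",
    "requirements": "requirements_practice.md",
    "design": "design_practice.md",
}

def assess(doc_type: str, document: str) -> str:
    """Assess a document against the practice for its type."""
    practice = open(PRACTICES[doc_type]).read()
    prompt = (
        f"Using the practice below as the definition of what good looks "
        f"like for a {doc_type} document, assess the document that follows. "
        f"Comment in particular on whether it tells the reader the overall "
        f"story of the solution rather than a bunch of isolated parts.\n\n"
        f"PRACTICE:\n{practice}\n\nDOCUMENT:\n{document}"
    )
    return model.generate_content(prompt).text

print(assess("design", open("my_design.md").read()))
```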
The conclusion
The purpose of a practice is to impart what good thinking looks like. It’s essentially what sits in an experienced person’s head when they produce very good artefacts. As a result, a practice exists to fill the gap between when someone needs to do something (process) and what they need to produce (template) when they don’t really know how (practice).
As you’d imagine, since that’s what they were originally for, practice documents make the expertise around what good looks like more portable to others. The good thing is that those same documents can be written in ways that still serve human readers while also serving as excellent instruction for AI on what good looks like.
While the approach doesn’t completely replace the good judgment of experienced resources when it comes to these artefacts, it does make assessing what good looks like more portable and available to different resources, while also serving as a coaching tool.