The former is mainly a burden on me, so at a certain point it makes more sense for me to just do it, since I've already put in the effort to create the training, re-teach it privately here, and chase it down.
What actions should we take to remedy the pain of unqualified people making it through our hiring process?
Each time this happens, they generate massive vandalism for us to fix. This also pulls my time into teaching, fixing, and chasing, so I'm distracted from doing the things that create value for our company.
Those high-value (level 8-9) activities are more valuable than the level-1 QA work I'm currently spending 90% of my time doing.
Hence, anyone joining our company must be able to do MAA, which means being able to understand problems (especially ones they generate) and fix them. You can see the results when people make mistakes and then continue to make excuses and rationalize instead of actually owning up.
Look at this excuse here, where the VA probably thinks it’s a good one to use:
On a scale of 1 to 10, with 10 being excellent, how well did they own up to their mistake? Based on that, how might we easily test whether they are practicing active listening?
In this video (watch it now, even if you’ve seen it before), look at how I explain how to instantly tell when someone is NOT doing active listening, even if they insist that they are:
Screening for quality people shouldn't require a lot of effort on their part or ours if we ourselves understand this concept.
For example, few people understand why we want candidates to submit an article and also rate their performance via our guidelines using active listening.
Note that ChatGPT is better at coaching these candidates (explaining where they went wrong) and our own team than at auto-generating content (which is how most people use it).
If they continue to make the #1 mistake in AI (which is closely related to the #1 VA mistake and the #1 way to tell they're not doing MAA), it would be better for them to have ChatGPT explain it than to rely on me personally each time.
When you see a reliability issue (ignoring messages or losing track of them), it's usually an underlying competency issue (not understanding what we're talking about).
That usually means they skipped the L in LDT, jumping straight to D or even T (publishing articles).
I’m a reliable emergency rescue as the last line of defense.
But we don’t want people to get used to me being the support VA.
Of course, Ops should catch this before I have to intervene.
But if we’re reliably and competently iterating, nobody needs to step in.