Global AI adoption continues to grow steadily, with 83% of companies saying that using AI in their business strategies is a top priority, and nearly half of all businesses using some form of machine learning, data analysis, or AI today.
Companies implement AI for a number of reasons, from fueling IT automation to delivering superior customer service to addressing sustainability goals. The wish lists are clear, the desire is there and yet one core barrier remains in the way of AI’s continued momentum.
Limited in-house skills, expertise, or knowledge about how best to apply AI.
In fact, this is the No. 1 obstacle to more widespread AI adoption, according to the index report. And where skills and expertise are limited, a lot of AI ends up gone wrong, misused, or misaimed.
As businesses think about the most impactful places to apply AI, they need to remember that AI is best used when serving a human world, not a machine world. As such, it’s important to integrate AI at the points where humans and machines come together.
When leveraged powerfully, AI makes humans superhuman. But when leveraged incorrectly, it replaces people and often loses effectiveness or degrades end-user experience in the process.
Consider a quick example: an emergency services company develops an AI “coach” for the human responders who work its help hotlines. The AI is trained to decipher vocal intonation, emotion, and language in real time. This allows it to judge, as a human coach would, whether to offer help immediately or to escalate to a human coach based on the caller’s state of mind. In this case, the AI makes the human superhuman.
So where should AI be applied? And more importantly, where should it not? Let’s dive into a few core tips…
Go Here
Think semi-automatic versus fully automatic.
Organizations should be wary of automating anything 100%. In shifting to fully autonomous systems, businesses often give up the benefit of the things humans can do better than machines.
Instead, leaders ought to think about semi-autonomous applications where the task is uniquely human but could benefit from immediate access to the additional information AI brings (remember: make humans superhuman). This is especially important where people have to make judgments or take responsibility, i.e., where they literally cannot be taken out of the loop.
In this case, the point of AI is to help these people make those decisions while better informed, with more complete and up-to-date information. We have heard too many cases of decision-makers saying, “If I had known that at the time, I would have decided differently.” Data fusion and rapid presentation powered by ML-informed AI can drastically reduce the need to fill in the blanks.
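As a rough sketch of what this semi-automatic pattern can look like in code (all names and the threshold here are purely illustrative, not a prescribed design): the model assembles evidence and a recommendation, but low-confidence cases, and ultimately every final call, stay with a person.

```python
# Illustrative human-in-the-loop routing: the AI informs the decision,
# it never replaces the decision-maker. Names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Assessment:
    recommendation: str          # what the model suggests
    confidence: float            # model's confidence, 0.0 to 1.0
    evidence: list = field(default_factory=list)  # data behind the suggestion


def route_decision(assessment: Assessment, threshold: float = 0.85) -> str:
    """Low-confidence calls always go to a human, together with the
    evidence the model gathered, so the person decides while better
    informed rather than being taken out of the loop."""
    if assessment.confidence >= threshold:
        # Even here the human confirms; the AI only pre-fills the answer.
        return f"SUGGEST: {assessment.recommendation} (human confirms)"
    return "ESCALATE: route to human expert with supporting evidence"
```

The design choice worth noticing is that both branches end with a person: the AI changes how well informed the decision is, not who makes it.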
AI/ML loves big data, so give it a worthy target.
AI/ML has a unique capability to handle enormous data sets and extract correlations and conclusions that humans could never reach, or could reach only far too slowly. Now that big data is a reality, it is the place where applying an AI/ML solution should have the highest priority.
Examples of a worthy target might include analyzing factors across enormous populations of relevant individuals, modeling a scenario with hundreds or thousands of factors affecting the outcome, or ensuring data quality in a data set of millions of records, where even a modest 1% error rate means tens of thousands of bad records.
When we aim too low with our target, e.g. by using incomplete or too-small data sets, organizations not only waste effort but can also end up drawing the wrong conclusions, which is in fact the greater danger.
Allow the village to “raise” ML.
The parallels between ML and human pedagogy are striking. We tend to think of AI as a miniature adult, but it’s not. It’s still growing, and it takes a village to educate and “civilize” the AI.
Neglected AI/ML subjects are doomed from the start. That’s because ML requires expert teachers and mentors, from the Data Scientist, who can develop the right ML approach and guide the AI to “enlightenment,” to the end user, who applies practical knowledge to watch for unintended ill effects and recommend adjustments to the ML regimen to correct them.
AI/ML lore is filled with cases of projects going awry (one of the most common diseases being overfitting, where a model memorizes its training data instead of learning patterns that generalize), so organizations need to make sure a mentor is present, attentive, and qualified. Give the AI room to make mistakes, grow, and learn.
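Overfitting is easy to demonstrate on a toy scale. The sketch below (plain NumPy; the curve, noise level, and seed are illustrative) fits a polynomial to ten noisy samples of a sine wave. A degree-9 polynomial can pass through all ten training points essentially exactly, yet it has memorized the noise rather than learned the signal, so it does worse on held-out points than on the ones it “studied.” Catching that widening train/test gap is precisely the watchful mentor’s job.

```python
# Toy overfitting demo: a high-degree polynomial "aces the exam it wrote".
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, 10)
# Held-out points the model never saw, drawn from the same process.
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0.0, 0.2, 10)


def fit_and_score(degree: int):
    """Fit a polynomial of the given degree; return (train_mse, test_mse)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train_mse, test_mse


# Degree 9 interpolates all 10 noisy points: training error is ~0,
# but the held-out error is much larger -- classic overfitting.
train_err, test_err = fit_and_score(9)
```

A more modest degree typically trades a little training error for far better behavior between the points; the model that looks perfect on its own data is the one to distrust.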
Related Reading: AI Naïve or AI Aware?
Not Here
Don’t expect AI/ML to be a panacea.
AI/ML does not magically fix dirty or incomplete data, or misunderstood or poorly defined workflows. In fact, good Data Governance is a necessary prerequisite to an effective ML program. Organizations can make costly AI mistakes when they begin before they’ve tackled core data issues, or when the organization has unrealistic expectations of what AI is and isn’t.
What’s more, while AI can help identify the workflow set and its variations, it is conspicuously poor at recognizing the difference between a necessary branching event and an aberrant deviation from the workflow — something that a human might recognize immediately and intuitively.
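As a hedged illustration of what “tackle core data issues first” can mean in practice, here is a minimal audit pass (field names and record structure are hypothetical) that surfaces exactly the kinds of problems ML will not fix on its own:

```python
# Minimal pre-ML data audit: count missing required fields and duplicate
# ids before any model ever sees the data. Field names are illustrative.
def audit_records(records, required_fields=("id", "timestamp", "value")):
    """Return simple quality counts; garbage in still means garbage out."""
    seen_ids = set()
    missing = 0
    duplicates = 0
    for rec in records:
        if any(rec.get(f) is None for f in required_fields):
            missing += 1              # incomplete record: fix upstream
        if rec.get("id") in seen_ids:
            duplicates += 1           # duplicate id: fix upstream
        seen_ids.add(rec.get("id"))
    return {
        "total": len(records),
        "missing_fields": missing,
        "duplicate_ids": duplicates,
    }
```

Checks this simple belong to governance, not to the model; a training pipeline fed records that fail them will happily learn the defects.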
Don’t use AI as a solution for straightforward problems.
If the options at each decision branching point for a problem set are finite and clearly defined, and the data required at each step in a workflow are specific and discrete, there is no advantage to using AI. In fact, an AI solution may be less reliable and require more maintenance, and will almost certainly execute more slowly.
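To make the contrast concrete, here is a sketch of such a problem solved with a plain lookup table (the ticket categories and team names are hypothetical): deterministic, auditable, trivially fast, and with no training data or model drift to manage.

```python
# A finite, clearly defined decision table needs no AI: a dict lookup is
# faster and more reliable than a learned model. Categories are illustrative.
ROUTING_RULES = {
    ("billing", "refund"): "refunds_team",
    ("billing", "invoice"): "accounts_team",
    ("technical", "outage"): "oncall_engineer",
    ("technical", "how_to"): "support_docs",
}


def route_ticket(category: str, subcategory: str) -> str:
    """Deterministic O(1) routing; anything unrecognized goes to a person."""
    return ROUTING_RULES.get((category, subcategory), "human_triage")
```

Every behavior of this function can be read directly off the table, which is exactly the property a learned model gives up.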
For example, when businesses turn AI loose on business process automation, it often comes up with inefficient, messy solutions that almost always have to be corrected by humans and save neither time nor money. Similarly, consider businesses that use robots in call centers so that every call begins with “Your call is very important to us.” The very fact that a robot, and not a human, is answering undercuts the claim that the call is important.
If over-relied upon, AI can drive results that are not human-friendly, as well as create new problems that weren’t there previously.
Don’t approach AI casually.
AI/ML is a discipline that requires skilled practitioners. Organizations can’t fake their way through AI. While there may come a point when “shrink-wrapped” AI/ML solutions can be applied without anyone in an enterprise really understanding how they work, treating AI/ML as an “appliance” today is unreliable, probably ineffective, and potentially dangerous.
For businesses using AI to drive critical business processes, it is imperative to have the expertise in-house, or on call, to understand the historical development of the system in their specific instance: its data sources, and how it applies what it has learned to the specific problem statements. Without that expertise, organizations risk surrendering critical functions and decision-making to an unknown entity, and neglecting the value that a company’s human resources can bring to the AI solution itself.
When organizations combine the things people do best with the things machines do best, they create a synergy that is far more powerful than each on its own. In approaching AI, businesses ought to look for cases where there are intersections and where the machine can give the human powers they wouldn’t have otherwise.
That’s how we continue to build things for a human world powered by technology, versus building things for a technology world.
Our six-week AI Accelerator Program helps companies uncover new ways to use AI to gain next-level impact. Interested in learning more? Click here.