Last month, I wrote about what artificial intelligence is, and what it is not. In the very short time since then, I heard a much more concise definition of the term, in a presentation by Tableau’s Richard Tibbetts, that I tend to favor:

Artificial Intelligence [is] computers doing things we’re surprised that computers can do.

Tibbetts goes on to point out that people tend to be “sloppy” with this term. I agree, and it’s why I love the simplicity of his makeshift definition. It allows us to shift focus from the what of AI – the specific technical tools involved – toward the why: the reasons you would adopt any sort of “artificial intelligence” solution to drive positive outcomes.

Unfortunately, the mystique of AI carries with it some myths that surface frequently when leaders discuss their hesitation around adoption.

  • It’s costly. Artificial intelligence and machine learning solutions are highly customized and will cost me an arm and a leg, and I just can’t justify that investment right now.
  • I can’t afford to wait. AI projects require months or even years to implement. I need help now!
  • I’m late to the party. My competitors have been “doing AI” for years now, so I’m already a laggard.

While these statements may have been accurate years ago, the new reality is that technological advances have reduced cost, shortened implementation cycles, and leveled the playing field for new adopters. This has paved the way for organizations of all sizes, and at various levels of technological maturity, to enjoy a high return on investment for initiatives that we might fairly classify as AI implementations.

Related Reading: AI: Beyond the Mystique 

Consider three alternatives to wholly custom, greenfield projects:

1. Off-the-Shelf Solutions

When advising companies on how to tackle their biggest challenges, the first question I consider is: does something exist already that solves this problem? If so, why reinvent that particular wheel? I am a firm believer in a modified version of the Pareto principle when it comes to solution selection: that 80% of the solution can be deployed for 20% of the cost. This is a great place to start, generating early returns while sparking thought and conversation about what else might be possible.

As a practical example, there are dozens – if not hundreds – of conversational AI solutions for web-based chatbots, email handling, text messaging, and other forms of communication – even voice! The best tools are highly trainable and customizable, enough so that implementation and support may require some specialized expertise. Nonetheless, the architecture is already built, allowing you to focus on your data and your customer experience.

2. Extension of Existing Technologies

Do you already use a business intelligence and visualization platform, like Tableau or Power BI? Did you know that there are some pretty nifty AI capabilities embedded within each, many of which are included in the base license?

Tableau’s “Ask Data” and “Explain Data” functions do exactly what you might imagine. Ask Data lets you type plain-language questions and get ad-hoc visualizations in response, while Explain Data zeroes in on a selected data point to surface outliers and likely root causes – moving you beyond “what” and into “why”. Meanwhile, Power BI’s “Insights” feature identifies interesting correlations with a single mouse click, and its AutoML feature helps you turn dataflows into machine learning pipelines – which is like AI for your AI! This is just a small sample of what you can do in each tool. Try to get the most out of what you’re using now before adding new, costly, and in some cases redundant systems.

3. Low-Code Platforms

Inevitably, there will come a time when you need to build something that doesn’t exist. Or something does exist, but it’s missing some critical piece you can’t do without. As an alternative to a lengthy and costly custom project, you might want to consider a low-code option.

Low-code platforms let you build solutions – applications, data integrations, automated workflows, you name it – that can be visualized as process maps. You can usually drag and drop functions, actions, and decisions, and the platform does the “work” of writing the actual code. Unlike its cousin no-code, low-code also lets you get under the hood: you can modify the source code if you know what you’re doing, or add new functions that can be pulled into your processes. This is perfect for a user with some technical expertise and a solid grasp of business processes, letting them build something relatively quickly without needing a large team of seasoned software engineers.

The First Step

When thinking about innovation strategies – whether for data, or artificial intelligence, or any technology really – I’m reminded of the Arthur Ashe quote, “Start where you are. Use what you have. Do what you can.” The first step can be the most difficult, especially when starting from zero. So don’t try to boil the ocean. Search for quick wins that will deliver early returns. Make sure you’re getting the most out of the technology you’ve already invested in. Then, iterate and build on what you’ve learned.

Most importantly, make sure you are focused squarely on the why rather than the what. For your initiatives to be successful, the technology needs to serve the business case rather than the other way around.