
An Executive Guide to Artificial Intelligence

Let’s start with a question. Does your company see artificial intelligence as a threat, an opportunity or a bit of both? To be honest, it’s not really a fair question, akin to asking a manufacturer at the end of the 19th century what impact they expect electricity to have on their business. Just like any foundational technology, be it electricity, the internet or blockchain, AI can be applied in many different ways and it impacts every business differently.

So first, let’s break down what we mean when we talk about AI. Artificial Intelligence and its various subfields (more on that in a minute) are still in their technological infancy within most business contexts. Yet you don’t need to look hard to find examples where AI is already driving transformative change within enterprise organizations. In almost every vertical — health, finance, media, manufacturing, legal — you’ll find companies using AI to automate routine tasks, facilitate customer interactions and discover valuable data that would be impractical, if not impossible, for humans to find on their own.

It would be a mistake to think this technology is still a few years away. If AI isn’t a meaningful part of your technology roadmap today, how do you plan to compete with AI-enabled competitors tomorrow? The good news is that there are many powerful AI technologies available today that can enhance current products and drive your future product roadmap.

Of course, the same technology is available to startups, who are just as likely to be your new competitor as your new vendor. They have the ability to focus on a specific niche, adapt quickly to changing technology and attract unprecedented levels of venture capital. Typically they are looking to serve enterprise companies through an AI-as-a-service (AIaaS) model, although outright acquisitions are not unheard of. In January of this year, Layer6, a Toronto-based AI company with fewer than 20 employees, was acquired by TD Bank for a rumored $100M.

Then there are the tech giants delivering AIaaS, notably Google, Microsoft, Amazon and IBM. These companies would like to sell you AI services the same way they sell cloud storage and hosting. The challenge is that planning, integrating and deploying an effective AI solution is much more complicated than running applications or storing files in the cloud.

So where do you start? How do you figure out what areas of your business would benefit most from AI? Let’s first clarify what we mean by ‘Artificial Intelligence’.

What exactly is — and is not — Artificial Intelligence?

This is a subtle but important point. When we talk about artificial intelligence, we tend to talk about its applications: computer vision systems, natural language interfaces that understand what we say, or predictive analytics applications that discover valuable insights from mountains of data.

We’ll get to machine learning later on, but to understand AI it really helps to start with the use cases and real-world applications it is typically put to. So let’s do that.

Natural Language Processing, Conversational UIs and Sentiment Analysis

Natural language processing (NLP) is a method for understanding the meaning of the words you speak or type. Everything from a Google search to Siri, Alexa and possibly your thermostat uses some form of NLP, likely with varying degrees of success. Within narrow use cases, NLP can work very well. If 80% of your call centre requests can be resolved by providing the customer with a relevant URL or online service, you can free up your CSRs to spend more time with customers who truly need a human to resolve their issues. This in turn can significantly improve customer satisfaction — all while giving you the ability to gather better data on why customers are calling in the first place.

A Conversational User Interface (or conversational UI) is the method through which users engage with an NLP platform. With Siri and Alexa, the interface is voice; with chatbots it’s text messages. When you’re thinking about developing an NLP application, figure out the conversational UI first. Maybe you want to build a Slack integration that helps you find subject matter experts within your company. For example, “Hey Slackbot, who do we have in the New York office that knows about GDPR?”

Sentiment analysis is another useful component of NLP. Incoming support emails, messages, or mentions on social media are analyzed for tone and strong language to help determine an appropriate response. A use case might be running a sentiment analysis algorithm on all social feeds mentioning your company, forwarding all the positive ones to the social marketing team and all the negative ones to customer support.
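
To make that concrete, here is a minimal sketch of that kind of routing in Python, using NLTK’s off-the-shelf VADER sentiment scorer. The sample mentions, score thresholds and team names are illustrative assumptions, not a production policy.

```python
# A rough sketch of sentiment-based routing for social mentions.
# Assumes NLTK is installed; the mentions, thresholds and team names
# below are made up for illustration.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time download of the VADER lexicon
sia = SentimentIntensityAnalyzer()

mentions = [
    "Love the new app update, so much faster!",
    "Third outage this week. I'm done with this service.",
]

for text in mentions:
    score = sia.polarity_scores(text)["compound"]  # -1 (very negative) to +1 (very positive)
    if score >= 0.3:
        destination = "social marketing team"
    elif score <= -0.3:
        destination = "customer support queue"
    else:
        destination = "manual review"
    print(f"{destination}: {text}")
```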

While it might not initially be obvious, there’s a good use case here for including a defined set of emojis in your text-based customer feedback interfaces. Emojis are surprisingly effective at representing concepts that pure text-based sentiment analysis struggles with, like sarcasm 😉

Predictive Analytics

Predictive analytics is not so much a technology as a use case for a number of AI technologies. The concept itself is not new — customer credit scores have been used for decades to ‘predict’ how likely someone is to pay back a loan — but technologies like machine learning and digital twin simulations have significantly enhanced the accuracy and range of applications.

Using vast amounts of customer data, a phone company might create a predictive model to identify customers who are likely to cancel their service and implement a pre-emptive winback* strategy.

Where AI has a huge advantage in this scenario is that it doesn’t rely on the judgement of humans to decide which data points or linear regression models are the most effective at predicting an outcome. It can digest years of customer data and try thousands of models to check its ability to accurately predict customers’ likelihood to churn based on the prior behavior of customers who did actually leave the service.
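
As a rough sketch of what that looks like in practice, the example below trains a simple churn model with scikit-learn on historical customers whose outcomes are already known. The file name and feature columns are hypothetical; a real telecom dataset would be far larger and messier.

```python
# A rough sketch of a churn-prediction model in scikit-learn.
# The CSV file and feature columns are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

history = pd.read_csv("customer_history.csv")  # hypothetical table of past customers
features = ["tenure_months", "monthly_bill", "support_calls", "data_overage_gb"]

# Train on customers whose outcome (churned or not) is already known
X_train, X_test, y_train, y_test = train_test_split(
    history[features], history["churned"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Rank held-out customers by likelihood to churn so a winback team knows who to contact
churn_risk = model.predict_proba(X_test)[:, 1]
print("Hold-out AUC:", roc_auc_score(y_test, churn_risk))
```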

In addition to predictive models, descriptive models use similar techniques to categorize users into behaviourally similar groups, and decision models can be used to figure out which algorithms are most effective for each group.

To illustrate this, let’s say you are a large investment firm with hundreds of thousands of customers and thousands of advisors. You have reams of data on your customers — how long they have been a customer, their age, what they have under investment, how often they meet with their advisor.

Let’s say the goal is finding the right advisor for a new customer. Properly implemented, predictive analytics should be able to accurately apply a descriptive model to classify the new customer, then use a decision model to know what set of algorithms is best for matching this customer to an advisor.

You would want to run the same process with your advisors too. An advisor may perform extremely well with some accounts and struggle with others. Humans tend to suffer from bias and a lack of analytical horsepower to effectively process all of the variables involved. But a decision model set up to test and self-adjust will find patterns, correlations and indicators hidden inside mountains of data — assuming you provide it with a clear definition of success.
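
Here is a minimal sketch of that ‘descriptive model’ step using k-means clustering in scikit-learn. The customer file, column names and choice of four segments are assumptions for illustration only.

```python
# A rough sketch of a descriptive model: grouping customers into
# behaviourally similar segments with k-means.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

customers = pd.read_csv("customers.csv")  # hypothetical customer table
features = ["tenure_years", "age", "assets_invested", "advisor_meetings_per_year"]

# Put all features on a comparable scale before clustering
scaled = StandardScaler().fit_transform(customers[features])
customers["segment"] = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(scaled)

# A decision model could then be trained per segment to choose the best
# advisor-matching approach for each group
print(customers.groupby("segment")[features].mean())
```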

Putting all this a bit more simply, predictive analytics is what happens when you turn algorithms loose on your analytics data. To make it ‘intelligent’ you’ll need to apply machine learning processes that test various models and algorithms, and ‘train’ themselves on a set of data where the answers or correct outcomes are known. The beauty of intelligent predictive analytics systems is that they improve over time as they are exposed to more and more data. They learn what input weights and algorithms lead to desired outcomes without the guiding hand of data scientists. Provide a predictive analytics platform with a clear definition of the proverbial needle and it will search through a huge haystack of data to find what you’re looking for.

* Winbacks are strategies to literally “win back” customers who have cancelled an ongoing service or left for a competing product.


Computer Vision

Often abbreviated as simply ‘CV’, computer vision is one of the most broadly useful technologies under the AI umbrella. One of the earliest uses of CV, optical character recognition, or OCR, has been around for decades (and remains one of Google Docs’ best hidden features).

An important distinction about CV is that it’s not about the hardware. Many autonomous vehicle systems use LiDAR sensors (which use pulses of laser light) to ‘see’, while Tesla uses a combination of radar, ultrasonic sensors and cameras. These sensors have no inherent intelligence. Their purpose is to collect data that computer vision software uses to construct a view of the world and identify the objects in it.

A less complex example might be a CV application that identifies the species of bird or type of automobile in a still image. Computer vision applications typically share a common set of techniques: edge detection to identify shapes, then feature detection to classify a shape as a face, an eye, or a beak, using our aforementioned predictive models to identify objects based on similar profiles. Machine learning is a core component as well, which is how CV systems ‘learn’ and improve accuracy over time.

Current CV use cases work best when the applications have a relatively narrow focus. Instead of attempting to identify every known species of animal, a CV classifier will perform best if it’s only asked to identify different breeds of dog, and is fed a strict diet of dog images or video.

Even for humans, distinguishing between dogs and baked goods can be tricky.
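
To give a feel for this narrow train-then-classify workflow without a pile of labelled dog photos, here is a minimal sketch using scikit-learn’s built-in 8x8 handwritten-digit images, a tiny OCR-style stand-in. A real dog-breed classifier would use a deep convolutional network and far more data, but the overall shape of the task is the same.

```python
# A rough sketch of a narrowly focused image classifier, using
# scikit-learn's built-in handwritten-digit images as a small,
# self-contained example of the train-then-classify workflow.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 small labelled images, ten classes (0-9)
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42
)

classifier = SVC(gamma=0.001)  # a classic choice for this dataset
classifier.fit(X_train, y_train)
print("Accuracy on images it has never seen:", classifier.score(X_test, y_test))
```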

Machine Learning and Deep Learning

You’ve probably noticed that machine learning has been referenced a few times already, and that’s because ML (and its more sophisticated offspring, deep learning) is the key technology of artificial intelligence. Machine learning as a theory has been around for as long as computers have existed, but only recently have scientists figured out how to get broadly useful results from it.

Historically, what computers have been very good at is math. Statisticians and computer scientists have long been able to program computers to analyze big sets of data using complex probability models, inductive logic, linear regression and related statistical prediction formulas. But these systems had no inherent intelligence. Any trial and error learnings were the learnings of the scientists, who tweaked formulas and tried new approaches to see if they could improve the results of the computer’s calculations.

Early experiments in machine learning produced systems capable of beating a world-champion chess player or recognizing handwritten text. While these use cases were narrow (Deep Blue didn’t know how to play checkers), they proved that machine learning was capable of human-level skills.

Up until the mid-2000s the neural networks used in these applications typically consisted of an input layer (what is the current state of the chessboard?), a middle ‘dense’ layer (of all the possible moves, which one is most advantageous?) and an output layer (this is my next move). The success or failure of the output could be used to train the middle layer to improve its decision methods.

Within narrow use cases, machine learning can quite easily outperform humans. Consider the elevators in the building where you work. They probably have a default setting, chosen by a human, that ‘parks’ some of the elevators in the lobby and some on the upper floors. But what if the behaviour of the elevators was determined directly by how people used them? What if it could ‘learn’ the patterns of each floor at different times of the day, optimizing for the differences between 9:15 on a Monday vs 5:30 on a Friday? Add an occupancy sensor to each car and the system could further optimize around moving the most people. This may sound like a trivial example, but the businesses of ecobee and Nest thermostats are built around this exact premise.

Though it may seem obvious in hindsight, the breakthrough in deep learning came out of the study of how the human brain learns. As University of Toronto’s Geoffrey Hinton is fond of saying, we aren’t born with a brain full of rules and algorithms to make sense of the world. Instead, networks of interconnected sensory and motor neurons form and reform as we learn to walk, talk or ride a bike, reinforcing the neural pathways that produce positive results.

A 2006 paper co-authored by Hinton introduced the concept of deep learning. Instead of a single dense layer connected to the input and output layers, Hinton’s team showed how multiple ‘hidden’ layers (each with specific functions and behaviors) could be trained far more effectively than ‘flat’ neural networks.
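
As a rough illustration of the difference, here is a minimal sketch in scikit-learn comparing a ‘flat’ network with a single hidden layer to a deeper one with several hidden layers, trained on the same small digit-image dataset. The layer sizes are arbitrary choices for illustration, not a recipe.

```python
# A rough sketch comparing a 'flat' network (one hidden layer) with a deeper
# one (three hidden layers) using scikit-learn's multi-layer perceptron.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

networks = {
    "flat": MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=42),
    "deep": MLPClassifier(hidden_layer_sizes=(128, 64, 32), max_iter=2000, random_state=42),
}

for name, net in networks.items():
    net.fit(X_train, y_train)  # training adjusts the weights between layers
    print(name, "test accuracy:", round(net.score(X_test, y_test), 3))
```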

Empowered by faster computers and access to large pools of data, this research led to a series of impressive feats. Some, like picking out cat videos on YouTube, were technically impressive but didn’t capture the public’s imagination. But when IBM’s Watson AI won at Jeopardy and Facebook announced a deep learning-powered facial recognition system, it was clear that the technology had arrived.

(For more on machine learning and deep learning I have a separate article here.)

Conclusion

Something I hope you take away from this article is an understanding that data is the lifeblood of an intelligent AI system, and it’s here that enterprise organizations need to start. Large companies often have significant amounts of customer and operational data, frequently unstructured and isolated in discrete silos. Startups and cloud AIaaS providers have powerful, proprietary systems that need large amounts of data to train algorithms to find patterns and insights. Start by thinking about how you can work together.

A metaphor I’ve found useful for helping enterprises think about AI systems is to imagine what they would do with a thousand interns who could self-organize around a data analysis problem. A thousand photo editors looking at hundreds of thousands of images, tagging, sorting and interpreting the contents. A thousand data scientists sifting through all of your customer data, creating and testing algorithms to connect customer satisfaction metrics to churn and subscriber revenue. A thousand CSRs monitoring every support call and social media post, alerting IT as soon as patterns indicative of a service issue are detected.

Finally — don’t think you’ve missed the boat! For some perspective, here’s how venture capitalist Marc Andreessen described how he felt when he arrived in Silicon Valley in 1994: “My big feeling was ‘I just missed it, I missed the whole thing. It had happened in the ’80s, and I got here too late’” (he didn’t*).

Despite the hype about AI and its impact on business, we’re still in the early days. Humans are great at coming up with things to measure, and we’re essential when it comes to defining successful outcomes. With AI we’ve created an incredibly powerful new set of tools that can augment our thinking, much like eyeglasses improve our vision or a calculator helps us with math problems. Understanding what these tools are capable of is only the first step.

  • February 12, 2018

*Shortly after arriving in Silicon Valley, Andreessen co-founded Netscape, which he sold to AOL in 1999 for $4.2B, then Opsware, which sold to HP in 2007 for $1.7B. He and his former partner Ben Horowitz subsequently founded the VC firm Andreessen Horowitz, whose early investments in Twitter, Facebook and Pinterest have all turned out OK.


Want to read more? Access our exclusive guide to digital transformation and AI today at: http://info.twg.io/ai-product-innovation.
