Datamine’s GM of Strategy and Innovation, Bob Stone, is well-versed in the world of artificial intelligence.  In his advisory positions at companies including Google, Bosch and CenturyLink, Bob has worked with and been pitched hundreds of AI use cases - some good, some bad and others that fall somewhere in the middle.  Here’s his opinion on what makes an AI application good or bad, and what questions we need to be asking to capitalise on this powerful technology while also mitigating the risks it entails.

 

AI has become a buzzword and a mainstream concept that’s no longer limited to movie-screen robots and smart computers.  There are countless applications across all manner of industries and verticals that have the potential to change the world.  So it’s no wonder everyone wants to use it in their business!

In the last few years I’ve had the privilege of working with various incubators, and I’ve been pitched many different AI ideas.  While all of these pitches were creative, many of them weren’t what I consider to be a good business use of AI - let me explain.

Good AI in business should do at least one of two things (preferably both):

1.  Streamline and automate time-consuming tasks - such as scheduling meetings, proofing documents, updating spreadsheets etc.
2.  Open up doors that would still be closed to humanity if it weren’t for machines - like cancer detection, self-driving cars, locating new planets etc.
 
 

Good AI

A good AI application should solve real-world problems that individuals, businesses or humanity as a whole are facing.  In business, what we need most are optimised processes and more time to spend on high-value tasks - and in the real world, what we need most are things that will make life significantly easier or better.  So any AI tool that does either of these things well is good AI in my opinion.  Some examples are:

  • Machine learning chatbots that learn more about users over time and can therefore give better recommendations
  • Fraud detection AI applications that learn from criminal activity to stay one step ahead
  • Robotic document digitisation, where paper documents can be converted to a digital form, categorised and even tagged with metadata - all without human intervention
  • Government phone bots designed to streamline call centres and reduce long wait times for customers needing to do simple things, like renewing licences or filing complaints
 

Bad AI

At the other end of the spectrum, we have bad AI - this doesn’t necessarily mean the idea itself is bad, it just means the application isn’t solving a common real-world problem.  This often happens when startups try to ‘ride the AI wave’ and put AI into something gimmicky that really doesn’t need it.  Here are some examples of what I consider to be ‘bad’ AI:

  • Cat food preference AI tools that teach you what kind of food your cat likes best
  • Machine learning robot pets that act like real animals and have facial recognition
  • AI used by bad actors to damage or steal IP and other trade and personal information

 
Questionable AI


Then of course there’s the middle ground: AI use cases that have the potential to be ground-breaking in both positive and negative ways.  There are many examples of situations where the excitement around an AI idea overshadows what could happen if it fell into the wrong hands, or if the laws around its use weren’t tight enough.  Here are a few examples of what I interpret as questionable AI:

  • Analysing US government drone flights and success rates – this was a contract Google took on when I worked there, but one the company ultimately decided against renewing because of the moral ambiguity
  • Gene splicing in medicine – this has the potential to eradicate diseased DNA in sick people, which is, of course, an incredible advance.  But it could also be used to genetically modify children (giving them blue eyes or blonde hair, for example), which comes with a whole host of ethical issues
  • Artificial General Intelligence – systems that can perform any task an intelligent human could, and outperform us in both speed and accuracy

 


 

Datamine and AI

At Datamine, before we take on any new AI project for a client, we ask ourselves, “Will this be great for society?”  This question has two components: will this AI project solve an actual common problem that needs to be solved?  And will it do so in a way that is ethical and can’t be misused?  If the answer to either isn’t ‘yes’, we look for other ways to help our client through non-AI applications and methodologies, such as predictive modelling or automation.

In the words of Google CEO Sundar Pichai, “[AI] can’t solve every problem, but its potential to improve our lives is profound.  We recognize that such powerful technology raises equally powerful questions about its use.  How AI is developed and used will have a significant impact on society for many years to come.”

I believe this, and Datamine believes this - we’re excited to be involved in innovation and AI development in New Zealand.  Get in touch with me today if you want to connect and discuss collaboration opportunities.

 

ABOUT THE AUTHOR: BOB STONE

Bob boasts an impressive history of BD and Strategy positions and is also a founding member of the New Zealand Innovation Nation, a group dedicated to making NZ the Southern Hemisphere’s innovation hub.  Bob’s greatest passion is connecting with people, and he’s excited about sharing the transformative power of technology and analytics in his role at Datamine.

 

 
