
AI, Ethics, and Data Privacy

As AI technology hits the mainstream, we examine risks, opportunities, and ethics

 

Just as Windows changed the game in personal computing, new AI technology is transforming how we work and create content.  While artificial intelligence has been used behind the scenes for decades – helping drive search technology and business management software – new iterations are making it easier and easier to use.

AI comes with risks and ethical challenges. Free public AI tools (like ChatGPT) run on massive amounts of data scraped from the internet and other sources, which poses privacy concerns. There are also ethical issues around IP ownership and the accuracy of AI-generated content.

However, the existence of risk does not mean you need to avoid AI altogether. Here’s how you can solidify your understanding of the challenges, define your company’s ethics concerning AI, and establish guardrails to ensure the safe use of the technology.

Establishing AI guardrails

 

Business controls define the boundaries of acceptable AI use by setting out guiding principles and expected behaviours. Before we dive into ways to establish policies, procedures, and mechanisms as AI guardrails, here are some definitions and actions you can take now:

 

Privacy

 

Privacy is the ability of an individual or group to seclude themselves or information about themselves and thereby express themselves selectively. In business and privacy legislation, privacy concerns two aspects: access to personal information and control of personal information. Privacy is considered a fundamental right of every person, and upholding it is non-negotiable.

You may find it useful to check your company's privacy policy and know what personally identifiable information you keep about your customers and prospects, where it is stored, and which business functions have access to it.

 

Ethics

Ethics is the branch of philosophy concerned with what we ought to do.  Unlike privacy, ethical decisions are not always clear-cut.

As Marijn Sax puts it in The Handbook of Privacy Studies¹:

“The formulation ‘the ethics of privacy’ might suggest that there is one ethics of privacy. Nothing could be further from the truth. Precisely because ethics is concerned with normative questions, there are no fixed answers to any of these questions. The answer to normative questions admits of different degrees of plausibility, relative to the arguments provided.”

It is essential to formulate a code of ethics for dealing with data privacy and AI.  This is not as simple as establishing your customers’ rights to privacy.

The marketing and IT teams can lead this work, but they may need board approval to ratify the code of ethics.


AI Policies

 

New Zealand’s Privacy Act 2020 doesn’t yet have AI-specific wording, but that will probably change. For now, the government has released a set of guidelines around the use of AI in business. These focus on risks and emphasise that companies are accountable for their use of AI, particularly regarding customer data. Australia has created a set of AI Ethics Principles, offering a valuable framework for businesses looking to ensure the safe use of AI.

Creating AI policies for your business will help you build trust with your customers.  Your policies can include the areas not yet covered by existing privacy laws and can be updated periodically as this field evolves.  Part of this involves establishing an ethics committee and maintaining human oversight.

Ethics Committee

You should establish an AI ethics committee and, if you do not already have one, a data privacy committee. Their brief is to incorporate AI-related policy into your business.

The role of an AI committee is to define how your organisation’s ethical standards align with AI usage and what upholding those standards looks like in action. Everyone involved needs to be able to recognise risks in AI projects and mitigate them before they cause harm. Harvard Business Review recommends that an AI ethics committee include ethics experts, lawyers, business strategists, bias scouts, and technologists.

 

Human oversight

Human oversight should be a critical component of your AI policies if you aim to use AI in decision-making. AI handles repetitive tasks more quickly and accurately than a person could, but it cannot replace human intuition and judgement. Start by training your team to work with AI, being explicit about what role AI plays and how your employees should use it to supplement their work.


AI Procedures

 

You need to create well-defined steps outlining how you accomplish your AI goals.  Procedures in your business should include strategies for preventing discrimination and data bias that can emerge from AI analysis.

AI auditing

Setting up AI auditing is much like establishing a QA process – you need to ensure your systems maintain their quality, accuracy, and efficacy.  Continual auditing ensures your AI models uphold the standards set by your ethics committee and helps eliminate errors, risks, and biases.  Components you should audit include what data, frameworks, parameters, and workflows you use, as well as the outputs and results.

 

Assessing software for bias

One clear-cut example of AI bias comes from an audit of a resume-screening tool, which found that its algorithm favoured two data points above all others:  the name Jared and whether an applicant had played lacrosse!  Naturally, the tool was scrapped – but it still demonstrates an important lesson:  the data that trains the model influences its outputs.

To ensure your company upholds data science best practices and the responsible use of customer data, you must manage how AI is used.  Be proactive about mitigating bias in your datasets and models, get third-party experts to analyse your work if possible, and keep checking your results.  This approach prioritises fairness, accurate results, and robust models.
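One simple check of the kind such an audit might run is to compare selection rates across groups – the “four-fifths rule” heuristic often used in hiring audits. The sketch below is illustrative only: the dataset and helper names are hypothetical, and a real audit would use your own data and a proper fairness toolkit.

```python
# Minimal sketch of one bias check: comparing selection rates across groups
# and computing the disparate-impact ratio. All data here is hypothetical.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, picked in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest; < 0.8 is a red flag."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two groups of ten applicants each.
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 4 + [("B", False)] * 6)
rates = selection_rates(decisions)       # {'A': 0.8, 'B': 0.4}
ratio = disparate_impact_ratio(rates)    # 0.5 -> well below 0.8, flag for review
```

A check like this won’t tell you *why* a model favours one group, but it gives your auditors a concrete, repeatable number to track between reviews.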


AI Mechanisms

 

AI runs on huge volumes of data. To maintain privacy and data confidentiality, platforms that use publicly available data need mechanisms for detecting situations where AI could be misused.

 

Sensitive data

Privacy breaches can lead to severe consequences. Regardless of how you set up your code of ethics, you need to ensure your AI mechanisms sift out sensitive information. Samsung learned this the hard way last year when internal source code was leaked after a staffer uploaded it to ChatGPT. As a result, the company banned generative AI tools to prevent further leaks.

Any company using a new AI tool needs to read the fine print and, if possible, get expert advice on the legal and ethical implications.

 

Differential privacy

 

Differential privacy is a privacy-preserving framework that adds controlled statistical noise so that queries on a dataset do not reveal sensitive information about any individual data point.

By obscuring individual records – whether the noise is added to query results or to the training data itself – differential privacy helps prevent a system from inadvertently learning and propagating sensitive information, while preserving the dataset’s overall statistical patterns.
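The noise-adding idea can be sketched with the Laplace mechanism, the classic differential-privacy building block for numeric queries. This is a toy illustration under stated assumptions – the dataset and the `private_count` helper are hypothetical, and production systems should use a vetted differential-privacy library rather than hand-rolled noise.

```python
import random

def private_count(records, predicate, epsilon):
    """Answer "how many records satisfy predicate?" with Laplace noise.
    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    # Laplace(0, 1/epsilon) noise: exponentially distributed magnitude
    # with a random sign.
    noise = random.choice((-1, 1)) * random.expovariate(epsilon)
    return true_count + noise

# Hypothetical dataset: ages of ten customers.
ages = [23, 35, 41, 29, 52, 38, 27, 44, 31, 60]
answer = private_count(ages, lambda a: a >= 40, epsilon=1.0)
# "answer" is the true count (4) plus random noise; a smaller epsilon
# means stronger privacy but noisier answers.
```

The privacy budget `epsilon` controls the trade-off: each repeated query spends more of the budget, which is why real deployments track and cap total budget across queries.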

 

Reliability of information

“AI, or any kind of analytics, is only as good as the information it’s given. AI technology like Bard and ChatGPT can scour the internet, but no further.” – Paul O’Connor, Datamine Founder

We’re used to relying on computers to give us answers – no one questions the numbers that come out of your accounting software – but AI is a different beast. Because content is created using public information, it’s not always accurate or up-to-date.

If you are considering using AI to generate content, make sure a human is involved in the process. AI can be a fantastic tool for drafts and outlines, giving you a valuable starting point and cutting out much of the drudge work. Plagiarism-detection software can help, and your content creators should double-check any claims made by AI.


Balancing AI risk and reward in your business

 

AI is one of the biggest topics in the business and data space now, and we’ve seen a lot of concern about missing the boat.  But as Datamine Founder Paul O’Connor puts it:

“Most businesses I talk to simply aren’t ready to implement even the most basic kinds of AI in their business – the data isn’t there, and the ethical parameters haven’t been set.  Here’s what needs to happen:  Get a single source of truth… and make sure your data is accurate.”

From a business perspective, it is a waste to ignore all the potential benefits and opportunities that come with using AI. From a privacy and consumer protection standpoint, however, it’s crucial to be cautious and stay updated with the shifting AI landscape.

Instead of rushing in or throwing out the concept of AI altogether, the best result comes from putting guardrails in place to protect your business – ensuring that everything you do complies with privacy and data laws, as well as your code of ethics.

We are happy to have a conversation if you want to chat about your organisation and AI.


References

1  Sax, M. (2018). Privacy from an Ethical Perspective. In B. Van der Sloot & A. De Groot (Eds.), The Handbook of Privacy Studies: An Interdisciplinary Introduction (pp. 143-173). Amsterdam: Amsterdam University Press.


Get in touch

 

Want to protect your customer data and take advantage of AI?  Talk to the Datamine team now.