Diving headfirst into a scary unknown isn’t for everyone.

I’ve actually been pleasantly surprised, though, as I’ve started my journey learning about Generative AI (known as GenAI for the rest of this article, because I’m lazy).

Just under a week ago, I enrolled on Google’s free course. I think I’ve learned a thing or two since then too.

So far, I’d say my key learnings have been:

  • GenAI will be a common phrase for the next decade
  • GenAI will replace people more and more frequently in certain sectors
  • GenAI is actually really useful
  • GenAI is only going to output the same quality as the user puts in

As it stands, we’ve defined terms and looked at how the technology functions.

In this article, I want us to start considering the ethical questions around using it and explore ways to sensibly use GenAI.

I also want to start exploring use cases when it comes to marketing (the reason most of you find your way to The Internet of Now).

So, grab a cuppa and follow along as I explore the ethics of AI and some uses within our industry.

AI isn’t my God

I think the first helpful thing I learned on the “Introduction to Responsible AI” module of Google’s course is that AI is not infallible.

AI can, will and does get things wrong from time to time. In fact, ask GenAI about something it doesn’t really understand and it will not only come back with misinformation but present it as if it were accurate. It’s the digital equivalent of a five-year-old confidently trying to convince you they know everything about a subject they clearly know nothing about.

On a more technical note, this means issues and biases built into GenAI systems are likely to be replicated many times over. It also means those issues and biases could be amplified.

AI is subservient

Something I found extremely helpful to bear in mind is that AI requires human input at every stage of its development.

It’s people who collect the initial data the GenAI model is trained on. It’s people who control the deployment of that AI. It’s also people who control how AI is applied in any given context.

Discernment is key

Because of the inbuilt fallibility and the human values that affect the output of GenAI at every step of the way, our role in interacting with it needs to be one of discernment.

I genuinely know someone who believes the earth is flat (cue the hate emails from flat earthers now). When challenged with data, they often present a barrage of articles, quotes and videos from people providing zero evidence for their wild assertions. This is what a lack of discernment can look like.

AI might produce really compelling results but, before you depend on those results, apply a bit of discernment. In other words, don’t take the flat earth approach to GenAI.

Responsibility

A huge reassurance I got from taking this course was just how seriously Google takes responsibility when it comes to developing its AI.

The course heavily emphasised a maxim: The more responsibility that’s baked into the process of building and deploying GenAI and other AI models, the more successful those models will be.

Principles of sensible AI

Google have published a set of principles that underpins their approach to AI. I think these principles are worth replicating here, as they’re generally just really good ideas:

  1. AI should be socially beneficial to humans
  2. AI should avoid creating and reinforcing unfair bias: particular emphasis here is given to what UK equality law would call “protected characteristics”
  3. AI should be built and tested for safety
  4. AI should be accountable to people
  5. AI should incorporate privacy design features
  6. AI should uphold high standards of scientific excellence
  7. AI should be made for uses that accord with these principles

AI applications Google won’t touch with a bargepole based on these principles:

  1. Technologies that cause harm: phew!
  2. Weapons: again, phew!
  3. Surveillance tech that violates internationally accepted norms: AKA Google won’t spy
  4. Tech whose purpose contravenes principles of human rights

Discussing application

We’ve talked a lot about applications of GenAI internally at The Internet of Now.

Our general conclusion is that most of the apps and sites plugging into popular GenAI models (such as OpenAI’s GPT-3 and GPT-4) are involved in a race to the bottom where only a very few will come out on top.

Here are a few predictions on what I think will resonate with the market:

  1. GenAI in customer services: Imagine a complaints department training an LLM on historic complaints data, brand guidelines and tone of voice, and then giving it live access to inbound complaints (see the sketch after this list). I think this could be a huge time and resource saver for companies.
  2. GenAI as a tool to interpret large datasets: The efficiency savings here could be tangible!
  3. GenAI as a means of refining strategy: Asking the AI to help make strategic decisions could become commonplace.
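
To make that first prediction a bit more concrete, here’s a minimal sketch of what a first-pass complaints assistant might look like. It assumes the OpenAI Python library (the pre-v1 `openai.ChatCompletion` interface), and the brand guidelines, API key and complaint text are all hypothetical placeholders; a real deployment would keep a human in the loop before any reply goes out.

```python
import openai  # assumes the (pre-v1) OpenAI Python library: pip install openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Hypothetical brand guidance -- in practice this would come from your
# actual tone-of-voice documents and historic complaints data.
BRAND_GUIDELINES = (
    "You are a customer service assistant for Acme Ltd. "
    "Be warm, apologetic where appropriate, and always offer a next step."
)

def draft_complaint_reply(complaint_text: str) -> str:
    """Draft a first-pass reply to an inbound complaint, for human review."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": BRAND_GUIDELINES},
            {"role": "user", "content": complaint_text},
        ],
        temperature=0.3,  # keep replies consistent rather than creative
    )
    return response["choices"][0]["message"]["content"]

# Example usage with a made-up complaint:
print(draft_complaint_reply("My order arrived two weeks late and the box was damaged."))
```

The point isn’t the code; it’s that the quality of the guidelines and data you feed in sets a ceiling on the quality of what comes back, which is exactly the “quality in, quality out” point from the start of this article.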

As I’ve said in a previous post, GenAI really could function similarly to a paid intern in your business. Provided it’s trained on the right data, it could act in exactly the same way as a junior member of your team.

That officially concludes the course aspect of this series of articles.

I’m sure we’ll continue to write on this topic as the tech continues to evolve.

What do you think the major use cases could be in your work?

Have you played with AI yet?

We’re always keen to hear from our readers, so do reach out.