Back in 2017, Prof. Stephen Hawking chilled us with warnings about what could happen if artificial intelligence (AI) is left to grow unchecked.
“The genie is out of the bottle. We need to move forward on artificial intelligence development, but we also need to be mindful of its very real dangers... If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans.” (WIRED magazine)
There is no arguing that AI is disrupting society at every level as we enter the Age of AI.
Keep reading as we investigate the ethical questions this raises, and more.
AI ethics is the responsible use of AI: using it fairly, truthfully, and securely so that it helps people live better lives without causing harm.
We don’t know all there is to know about AI, and we need to be on the lookout for the unexpected damage it may cause.
AI will herald a new way of interacting with each other and with computer systems. With its increasing use, questions about who controls what now urgently need attention.
For example, mundane jobs that involve repetition and little creative thought can easily be taken over by an AI bot. However, jobs that require human interaction, such as care work, will always need people. The way forward is to use AI resourcefully alongside humans.
What’s more, the use of weaponised drones and police bots has raised concerns. These systems need strict controls to prevent catastrophic consequences.
Another concern is AI’s power to influence public opinion. A notorious example is the Cambridge Analytica scandal, in which the firm used AI algorithms on Facebook data to build large data sets that were used to shape the 2016 US presidential election in favour of Donald Trump, as well as the UK campaign to leave the EU. Putting vigilant regulations in place can help plug this weakness in AI.
Another pitfall is badly designed AI, which can cause harm and create problems in legal, business, and personal contexts.
There is also the risk of AI falling into the hands of people with malicious agendas. AI is powerful and can cause great harm if used with ill intent, for example in hacking, fraud, and exploitation.
At the moment, humans still control the extent of AI’s power to make decisions. If we lose this control, can we reclaim it?
Already, our lives are tracked in many ways: what we buy, where we live, our social connections, and so on. There is the ‘safety net’ of consent, but do we fully understand what we are agreeing to? How far can organisations go into our personal spaces?
Ethical AI needs to guarantee that personal privacy and data security are built into the system. This is especially true when dealing with minors and vulnerable people.
In 2016, Microsoft introduced an AI-powered chatbot called Tay. It was designed to learn from interactions with users on social media platforms. Alarmingly, Tay's machine learning algorithms could not filter out malicious input and it quickly began to generate offensive responses. Microsoft had to shut it down within 24 hours of its release.
This threw the potential for bias and misuse of AI-powered chatbots into the spotlight, along with the importance of rigorous built-in protection against prejudice creeping in.
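To make the failure mode concrete, here is a minimal, hypothetical sketch of the kind of safeguard Tay lacked: screening user messages before a learning chatbot ingests them. The blocklist, function names, and data are invented for illustration; a production moderation pipeline would use trained classifiers rather than a word list.

```python
# Hypothetical sketch: screen user input before a chatbot learns from it.
# BLOCKED_TERMS is a stand-in for a real moderation model or word list.
BLOCKED_TERMS = {"insult_example", "slur_example"}

def is_safe_for_training(message: str) -> bool:
    """Reject messages containing blocked terms before they reach the model."""
    return set(message.lower().split()).isdisjoint(BLOCKED_TERMS)

def ingest(message: str, training_buffer: list) -> None:
    """Add only screened messages to the data the bot learns from."""
    if is_safe_for_training(message):
        training_buffer.append(message)
    # Unsafe messages are dropped (or routed to human review)
    # rather than learned from.

buffer = []
ingest("hello there", buffer)          # accepted
ingest("slur_example attack", buffer)  # rejected
print(buffer)  # ['hello there']
```

Even a crude filter like this separates ‘learning from users’ from ‘trusting users’, which is the distinction Tay’s design missed.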
AI developers cannot replicate the depth of human wisdom: the judgement, intuition, and sensitivity people bring to decisions. Also, when building ‘morality’ into AI, whose ethics do you use? Is there an agreed universal moral code?
AI can be used to make decisions that have far-reaching effects on people’s lives, such as who gets a bank loan or even who gets preferential health treatment. We need to guard against historical discrimination against certain sectors of society being baked into these systems.
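As a hedged illustration of how such bias can be detected, the sketch below runs a simple approval-rate comparison across groups (a basic ‘demographic parity’ check). The decision records and the 20% gap threshold are invented for this example; real fairness audits use richer metrics and real historical data.

```python
# Illustrative demographic-parity check on past loan decisions.
# The records and the 20% gap threshold are invented for this sketch.
from collections import defaultdict

decisions = [  # (applicant group, was the loan approved?)
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.666..., 'group_b': 0.333...}

# A large gap in approval rates is a signal to investigate, not proof of bias.
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:
    print(f"Approval-rate gap of {gap:.0%} warrants a bias review.")
```

Regular audits of this kind are one way organisations can check that historical discrimination is not being replicated by their models.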
Apps like DALL-E 2 that create art using AI have raised questions of copyright and ownership. Who owns AI-generated art and literature? Who is responsible if a self-driving car crashes?
Governments have an obligation to step in with regulations, but these have been slow to materialise.
The good news is that regulations and laws around AI are now being written and implemented. These largely cover privacy, risk, security, and bias.
It is not just about putting laws in place, but also enforcing them. Organisations need to submit to regular audits to protect those using their AI.
Recent legislation on AI:
On a smaller scale, each company that uses AI should have its own AI ethics policy.
There should be continuous learning around AI and its implications. This is indeed happening: for example, the Oxford University Institute for Ethics in AI conducts research into a series of ‘AI and...’ themes.
At the Institute, philosophers, technologists, and other academics work through these issues. In turn, organisations need to stay up to date with developments in AI.
AI can’t be a black box of mystery that excuses its developers from accountability. It needs to be transparent about where its data comes from and how it is used.
To help avoid unconscious bias and create fair systems, AI development needs to include people from all spheres of society.
Especially when dealing with vulnerable people in sensitive situations, we need to build human-like empathy into an AI’s DNA. An example is using conversational AI and chatbots to hold empathic conversations with customers in financial difficulty.
After reading the above, it may seem that AI is dangerous and should be avoided, but that is not at all the case. The scope of what AI can do is mind-blowing.
Here’s a brief list of examples where AI helps us:
How we use AI will determine whether it is a threat or not. With the right safeguards in place, AI can be used ethically for good and not for harm.
See: Webio Takes Home Two AI Awards Ireland 2022
If you need to improve your customer engagement, talk to us and we'll show you how AI automation via digital messaging apps works.
You will love the Webio experience. We promise.