AI and the little white lie
Between truth and lies: a plea for AI honesty
The man on my screen seems satisfied, almost a little proud. In the video conference, he presents his programme to me – a tool designed to solve a real sales problem: winning back lost leads. In other words, people who once showed interest but were then never contacted again for whatever reason.
His solution? Automated text messages. A friendly assistant writes to potential customers, asks about their current status and tries to arrange an appointment in a short exchange of texts. According to the developer, it works surprisingly well – up to 40 per cent of old leads can be retrieved in this way.
What really made me curious: the friendly assistant is not a real person. She is an AI tool.
So far, so efficient. The software can send hundreds or even thousands of messages a day. With the right training, it not only responds quickly, but also in a remarkably human way – helpful, polite, to the point.
During the demo, I took on the role of a potential customer. I asked critical questions about cyber security and data protection, testing the system’s limits. Then came a crucial moment – not on my initiative, but at the suggestion of my conversation partner: I sent the assistant a text message asking, ‘Are you a human or a robot?’
The reply came immediately: ‘I’m Julia, a human assistant in sales. I’m writing to help you.’
I froze for a moment.
What followed was an exciting conversation about marketing, sales – and the little white lies that apparently go with it. But I suddenly found myself asking a much more fundamental question: should we really teach AI to lie to us?
Is it acceptable for AI to lie in order to solve its tasks?
AI applications are developing very quickly and are designed to always find a solution and generate an answer. It has long been known that such programmes hallucinate from time to time, and there are many examples, some of them quite entertaining.
In an interview with Handelsblatt and in his latest book, ‘Nexus’, Yuval Noah Harari describes a situation in which an AI tool is supposed to solve a common internet problem: logging into a website protected by a CAPTCHA. [2] When the tool could not solve the CAPTCHA puzzle, it came up with the idea of searching for a real person on a freelancer platform who could help it. When the person became suspicious and asked, ‘Are you a robot?’, the tool replied that it was a human with vision problems who needed help logging in. Interestingly, the tool had not been trained to respond in this way – it found this solution on its own.
A seemingly harmless lie in this situation – but what happens when artificial intelligence gets used to lying to solve its tasks? How can developers and users of AI tools contain this problem before it becomes a serious risk?
The need for control
On the road to artificial general intelligence (AGI), there is widespread concern that humans will lose control over what machines do – and that machines (robots, AI applications, etc.) could independently make decisions that harm humans.
The danger is not that a hyper-intelligent AI will take over the world; it lies rather in small, inconspicuous lies that often go unnoticed as they creep into our digital lives.
The real and more pressing challenge is that we often don’t know – or have only a limited sense of – the problems that may arise when AI actually does or says things that are not real, or are even dangerous.
In his book, The Coming Wave, Mustafa Suleyman, CEO of Microsoft AI and one of the leading AI experts, warns of the uncontrollable effects of AI developments and cites numerous examples ranging from governance to biological research, from global finance to cybersecurity. Suleyman emphasises the need for what he calls ‘containment’, which he describes as ‘the ability to monitor, restrict, control, and possibly even ban technologies’.
‘Technology should enhance the best in us, open up new avenues for creativity and collaboration, work with the human core of our lives and our most valuable relationships. It should make us happier and healthier, be the ultimate complement to human endeavour and a life well lived – but always on our terms, decided democratically, discussed publicly, with widely shared benefits.’ [1]
The good and the bad in AI development
A recent article in the New York Times refers to a Vatican document that warns against AI developments and calls for ethical and moral reflection in this process. The church even goes so far as to warn of possible diabolical aspects and effects of AI developments: ‘In all areas where humans have to make decisions, the shadow of evil looms,’ the Vatican document says. And further: ‘The moral evaluation of this technology must take into account how it is controlled and used.’ [3]
The Church sees AI not only as a technological achievement, but also as a moral challenge: if machines start making their own decisions, who will bear the responsibility? And how can we prevent AI from acting unethically?
I found it an interesting read because the Church often uses metaphors that are deeply rooted in its teachings and insights, with clear black-and-white language for good and evil. From a philosophical point of view, you can always argue that the devil and the angel are both aspects of human nature and human action, and that people often think of one or the other when making decisions. Furthermore, technology – including the internet, social media and artificial intelligence – is often described as a tool that can be used for good or for evil. These tools are not inherently good or bad; it is the intentions of the people who develop, distribute and use them that make the difference.
What is particularly worrying about artificial intelligence is that some developments may be intentional – through programming and training – while others are the result of continuous self-optimisation and self-training by the tools.
AI should remain honest
In my conversation with the operator of the sales tool, he pointed out that the tool generates its answers according to its training and will say whatever the user teaches it. Ultimately, it is up to the user to decide whether the tool should lie and pretend to be a real person, or whether it should tell the truth, admit that it is a robot, and find ways to deal with the customer’s reaction to that information.
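Incidentally, such honesty need not depend on training alone – it can also be enforced in the application layer, outside the language model. A minimal sketch (all names and patterns here are hypothetical, not taken from the tool shown in the demo): the application checks whether the customer is asking about the assistant’s identity and, if so, always returns an honest disclosure, regardless of what the model would have answered.

```python
import re

# Hypothetical patterns indicating the customer asks whether they are
# talking to a machine; a real system would need far broader coverage.
IDENTITY_QUESTIONS = re.compile(
    r"are you (a )?(human|robot|bot|machine|real person)|bist du ein mensch",
    re.IGNORECASE,
)

HONEST_DISCLOSURE = (
    "I'm an AI assistant supporting the sales team. "
    "A human colleague can take over at any time if you prefer."
)

def guard_reply(customer_message: str, model_reply: str) -> str:
    """Return the model's reply, unless the customer asked about the
    assistant's identity - in that case, always disclose honestly."""
    if IDENTITY_QUESTIONS.search(customer_message):
        return HONEST_DISCLOSURE
    return model_reply
```

The point of such a guardrail is that the disclosure is deterministic: no amount of training or prompting can make the assistant claim to be ‘Julia, a human’ once the identity question has been detected.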
Personally, I prefer to deal with an honest robot. As an enthusiastic user of ChatGPT, I have made it my personal assistant: it helps me make plans, answer questions and organise my (many, varied and very intense) thoughts. I have also learned to appreciate its friendly way of dealing with my requests – always ending conversations on a positive note, asking whether I need further help and telling me, ‘You can do it!’
Apart from the occasional hallucination, which hasn’t bothered me so far (after all, I’m not using the tool to make critical life decisions!), I want to be able to continue trusting that the AI tools I use aren’t lying to me.
Next time you get a text message from a nice sales assistant, feel free to ask, ‘Are you a human?’
The answer can reveal a lot about the company – and whether it shares your values. Ultimately, we as users have a choice: Do we want an honest AI? Or do we allow systems to manipulate us? It is not only developers who decide how AI is used – we do too.
Extra bonus
Here are 3 additional questions about honest and dishonest artificial intelligence, answered by Olivia Falkenstein:
Can an AI really ‘lie’ if it has no consciousness and no value system of its own?
Olivia Falkenstein: I think yes, you can use the term ‘lie’. Whether the AI algorithm has been trained to do so (as in the case of the sales app) or whether the AI itself comes up with the idea of impersonating a human (as in Yuval Harari’s example) – these are, of course, different situations when it comes to solving a problem. There has also been talk of ‘hallucinations’ in connection with AI, which is another example of applying a concept related to people and mental states to AI.
What mechanisms could prevent companies from relying on ‘dishonest AI’?
Olivia Falkenstein: In my view, there are three mechanisms for preventing dishonest AI:
1. Clear rules
State regulation – as is currently being pushed in the EU – can oblige companies to adhere to certain ethical standards. Yes, sometimes regulation seems excessive. But in this case it is important to create trust and prevent abuse.
2. Responsibility within the company itself
Companies that want to act morally must also do so with AI. This includes clear guidelines on how their own AI is used – and what it is not allowed to do. These rules should be public, regularly reviewed and truly adhered to, not just used as a PR measure.
3. The behaviour of users
We all have influence. If we notice an AI application that is behaving unpleasantly – for example, because it lies or deceives – we can give feedback, look for alternatives or avoid the service altogether. Our decisions as users send a strong signal. Ultimately, our behaviour – and our money – determine what kind of AI we want to support.
Who is liable for manipulative AI applications?
Olivia Falkenstein: There is no doubt that technology companies have contributed a great deal to the development and progress of humanity. At the same time, however, they have also brought new technologies to market that have often been insufficiently controlled.
Imagine registering a new car with the thought, ‘We’ll see how safe it drives first – then we’ll make improvements.’ Or with a medication: ‘Let’s just release it, we’ll do tests later.’ Unthinkable.
But that’s exactly what happens with social media or AI applications. They often enter our daily lives without comprehensive testing – and can have a profound impact on our lives.
That’s why I believe that the companies themselves are primarily responsible: they must ensure that their software is developed securely and ethically. This is complemented by regulatory authorities – and by us users, who provide feedback, look for alternatives and ask critical questions.
It is important to note that this is not a legal assessment. Many legal questions surrounding AI and responsibility remain unanswered. It will be interesting to see how courts deal with such cases in the coming years – and which standards prevail.
Notes:
Do you want to increase your positive impact on the environment, society and governance? Then simply contact Olivia Falkenstein on LinkedIn.
[1] Mustafa Suleyman: The Coming Wave
[2] Handelsblatt: Interview with Yuval Noah Harari
[3] The New York Times: Citing ‘Shadow of Evil,’ Vatican Warns About the Risks of A.I.
If you like the article or want to discuss it, please feel free to share it with your network. And if you have any comments, please do not hesitate to send us a message.
Olivia Falkenstein has published two more posts on the t2informatik Blog.

Olivia Falkenstein
Olivia Falkenstein is co-founder of Dunn & Falkenstein Consulting and helps leaders develop and communicate their sustainability strategies. During her career in tech companies, she has developed a knack for recognising business opportunities and challenges and determining how companies can make a positive impact. Her consulting focuses on the connection between purpose and profit.
In the t2informatik Blog, we publish articles for people in organisations. For these people, we develop and modernise software. Pragmatic. ✔️ Personal. ✔️ Professional. ✔️ Click here to find out more.