Dutchism #3: You have sawdust in your head

10 min read
This is the literal translation of the Dutch saying "Je hebt zaagsel in je hoofd," which means that you are stupid and that your brains have been replaced with sawdust. Which brings me to a question I've been asking myself more often lately: does AI make us dumber or smarter? First of all, I am neither an ethicist nor a philosopher. However, I have some doubts about how eagerly the marketing world is embracing technology and AI. I'm not alone; some people are concerned that half of us will lose our jobs, and others believe that AI will become smarter than us and eventually destroy humanity. Recently, I watched a movie called The Creator, recommended to me by a Disney+ algorithm. The film explores this fear in a dramatized, romanticized way. It was entertaining and gave me an interesting perspective on the future of technological tools. But I don't want to dwell on movies, so I'll move on. I hope that AI will serve me and my colleagues in our pursuit of quality and originality. To keep this dream from turning into a nightmare, we need to address some issues related to the increased use of AI.

Are we going too far in our desire for convenience and efficiency?

Various tech giants are currently promoting their latest AI tools, vying for our attention, and offering them for a subscription fee. As a creative within a marketing agency, I marvel at the impressive capabilities of these tools. With just one well-chosen prompt, they can produce a Spielberg-worthy video clip for an eyewear brand, or a razor-sharp fairy-tale image of a woman with fluffy cat ears on a cloud in a purple sky with a futuristic laptop. The list of AI-powered tools is almost endless, ranging from text and image editing tools to video, music, face, and avatar generators, painting and drawing tools, audio and voice generators, design tools, business tools, and data and research tools.

I regularly use AI text tools as a source of information and inspiration. However, I want to make it clear that I never blindly copy texts or ideas. Firstly, this goes against my desire to challenge myself creatively. Secondly, I have observed that the tools are neither entirely original nor flawless. Nonetheless, it’s crucial to acknowledge that AI has the potential to revolutionize various aspects of our lives, including healthcare, transportation, education, and entertainment. AI systems can analyze and combine vast amounts of data much faster and more accurately than humans, leading to breakthroughs in medical diagnosis, predictive analytics, and other areas. Furthermore, AI-powered technologies like virtual assistants and chatbots can automate daily tasks, freeing up people’s time to concentrate on more creative and meaningful pursuits. Recently, there was a discussion about an AI assistant at a doctor’s practice in the Netherlands that can quickly and reliably decide which patient should be helped first. In my view, this is an excellent application of today’s technological possibilities. Wrong decisions will inevitably be made at times, regardless of whether they are made by humans or machines. Given the current situation, where general practitioners are under immense pressure and have an overwhelming workload, any form of assistance is greatly appreciated.

Impact on humans

As AI becomes more prevalent, it brings with it opportunities and concerns. One concern is that relying too heavily on AI technologies could lead to a decline in certain cognitive skills, such as critical thinking, problem-solving, and decision-making. As more tasks become automated, there is a risk that individuals will become overly dependent on these systems. This can result in the ‘de-skilling’ effect, where people lose the ability to perform certain tasks independently.

Are children who can barely count or write and rely on Google for all kinds of knowledge, American presidential candidates who would rather launch new shoe brands than think about how to help their people, and politicians who put their clothes on backward and inside out (true story) already noticeable consequences of our dependence on modern resources? Can we still think for ourselves? Will I keep training my brain with pub quizzes and Sudokus while an AI assistant can solve them in a millionth of the time? I do, but for how long? When I wake up, I grab my iPhone to see how long I can stay in bed. When I want to know how tall Tom Cruise is, I grab my iPhone. If I want to know what the weather will be like in the coming days, I grab my iPhone. When I want to know what my favorite athlete is doing, I grab my iPhone. These days I often no longer have to pick up my iPhone, because I already have it in my hand. And if I don’t, I can always call Siri and ask it to get the answers for me and turn the thermostat down a bit. Lately, my iPhone has also been giving me useful reminders so that I don’t forget things I do regularly. It knows the way to my work and tells me whether delays on my commute are to be expected. Of course, there are several causes for people’s ‘mental laziness’. Political policy, old-fashioned teaching methods, a changing social and moral consciousness, and a desire for convenience and luxury certainly play a role. I don’t want to blame AI for everything.

Socially desirable

It’s important to consider the social aspect of technology, as human beings naturally seek contact and validation. However, we must be aware of the potential danger of falling into “echo chambers”, where algorithms and personal recommendations reinforce our existing opinions and biases. This phenomenon has been observed on social media platforms like Facebook and YouTube, where all the information we receive caters to our interests and views, including advertising, political preferences, opinions, and beliefs. This raises the question of how difficult it will be to remain objective and open to diversity when we are no longer exposed to information that challenges our views. Ultimately, it is up to individuals to filter information for themselves.

Artificial intelligence (AI) also has the potential to amplify existing social inequalities. Unequal access to AI education and technologies may widen the gap between privileged and non-privileged individuals, leaving only the privileged to fully leverage its advancements. Additionally, there are ethical concerns about AI, such as privacy breaches, bias, and discrimination, that can undermine trust in these technologies and impede their adoption.

Is Artificial Intelligence a good term?

The question at hand is whether Artificial Intelligence (AI) is genuinely intelligent or just a cleverly programmed tool that shuffles and recombines existing information to arrive at a so-called new solution. The AI community itself considers this a complex question and defines intelligence as follows: the ability to learn, solve problems, reason, understand, and adapt to new situations – previously characteristics attributed solely to humans. AI can process vast amounts of data rapidly, recognize patterns, and make decisions based on them. Speech and image recognition, translations, and even playing complex strategic games such as chess and Go are some examples of its practical applications. However, there are fundamental differences between human and artificial intelligence. Humans also use more abstract skills such as consciousness, emotion, and artistic and moral awareness, and make decisions based on experience and context. These are abilities that AI has not yet mastered. You could say that AI is just a simulation of human intelligence based on data and algorithms. Of course, AI can ‘learn’ based on user feedback and improve its answers over time. But it is only when AI starts using emotion and becomes aware of context that it becomes truly interesting. The question is whether this will ever happen. Many smart minds are working on it right now, and it’s a good thing that they are doing so while they still can.

AI tools such as ChatGPT search digital sources, including the internet, for relevant information. It is an almost inexhaustible knowledge base from which AI draws. An incredible amount of information has been collected there, but therein lies the danger: much of it was added by human hands and has never been checked for untruths. AI cannot always distinguish the truth from the ‘nonsense’ that is on the web. A simple test about the number of British footballers who have ever played for the Ajax football club produced different answers from two AI assistants. That is strange, of course, since this is a fact that should be easy to check. Yet there is information on the web that makes one assistant think Jordan Henderson is the first British Ajax player, while other English and Scottish footballers have worn the Ajax shirt in the past.

In AI technical terms, this phenomenon is called a ‘hallucination’. Seriously, that’s what it’s called. This is the term we use for humans to indicate that they experience something that is not there. Making mistakes is human, someone must have thought. AI is therefore starting to look more and more like us: it can make mistakes and sometimes draw incorrect conclusions. Could it be because we invented and developed AI ourselves? Is it because we have trained AI in such a way that it always wants to give a satisfying answer? Yes or no? Left or right? True or false? For now, it is mainly an exact science. People are more nuanced in their solutions to problems. We do not think linearly, and we understand that even choosing not to decide is still a choice. To be fair, if AI really can’t find relevant data, it is certainly capable of saying it can’t formulate an answer due to a lack of useful input.

The legal aspect…

This brings us to another phenomenon that concerns us: who owns newly generated images, texts, music, data sets, and so on? Who has the intellectual property? Who is the creator? Whose creative idea is it? Can we draw up rules for this and later enshrine them in law? There are initiatives worldwide that try to accelerate this. Upcoming regulations regarding AI will also include guidelines for ethical use, algorithm transparency, and liability. They can also include privacy protection, preventing discrimination, and ensuring human control over decision-making processes. These rules are expected to guide and regulate the use of AI in various sectors such as healthcare, finance, and justice. The aim is to strike a balance between innovation and the protection of individuals and societal values while considering the potential impact of AI on our work and lives.

A worrying development is that AI can, of course, also be used for purposes that cannot bear the light of day. Deep-fake videos of famous people who suddenly say strange and/or untrue things, or of government leaders announcing extreme actions, are perhaps the most disturbing examples. Other potential misuses of AI include manipulating algorithms for propaganda, deception, and spreading disinformation. AI systems can also be used for surveillance, invasion of privacy, and profiling of individuals without consent. In addition, there is a risk of discrimination and bias in AI systems, which can lead to unequal treatment based on race, gender, or other characteristics, although in my opinion this may well stem from human intervention. AI-enabled cyber-attacks are also becoming increasingly relevant, with hackers using AI to find vulnerabilities and infiltrate systems. It is critical to address these risks as quickly as possible and implement regulations to prevent AI misuse.

AI have a dream

AI will only further influence our lives in the future. It is important to recognize that humans have a remarkable capacity for adaptation and resilience. Throughout history, we have faced countless technological milestones, from the Industrial Revolution to the introduction of the internet, and have found ways to adapt and evolve in the face of these changes. While AI can indeed reshape the nature of our work, lives, and education, it also has the potential to create new opportunities for human creativity, collaboration, and innovation. Ultimately, the use of AI is irreversible; it is up to us to ensure a healthy balance. We must continue to dream. Dreams are human, and AI can help us achieve them. My dream is of a world in which I can continue to do my job with my colleagues: using our common sense, working hard, and applying our creativity for brands that deserve it. Knowledge of the market and the people in it is and remains important. The smart use of data and AI can help us achieve growth for our clients. And if my creative brain lets me down now and then, I won’t hesitate to ask an AI assistant for support.

Edwin Wolters
Creative mind Referro

Curious about other Dutchisms? Have a look at this one. Or this one.

#Note. What is a Dutchism?
The Dutch are known for their ability to master foreign languages. Not surprising, considering the size of the country and its entrepreneurial nature. Despite this ability, things sometimes get lost in translation. Especially in literal translations from Dutch to English, strange, funny, or even offensive mistakes are sometimes made. These unintended ‘mistakes’ are called Dutchisms. In the coming months, we will regularly give you an example and pair it with an interesting perspective on sales and marketing.

Make the smart move!

There’s a lot of knowledge within our BBN AI Taskforce. Want to know more? Have a chat with Monique.


+31 (0)85 07 06 936

Send a message
