I find the mind an area of study that never fails to surprise me.
A couple of years back I read and heard about advances in treating Alzheimer's with music: improving patients' mood, memory, and disposition by having them listen to old songs they were familiar with, songs they learned long ago. Clearly, sounds, smells, and touch can wake up past memories better than the brain can on its own, at will.
If we take generative AI, and the way machines “learn” today, we can point to large language models trained on text, but also to other ways of feeding data into the machine system. These processes, however, are costly and time-consuming.
In parallel, and with all that in my head, I was questioning GPT-4o today about the differences between LLMs, Huge LMs, and Gigantic LMs, and the correlation between those and the achievement of Artificial General Intelligence. The model told me:

For model size to change significantly, it would take 5 to 10 years; to reach the level of AGI, more than 10 years; and to apply that at a scale that can be profitable, even longer.
Coming back to the study of the mind and mental illness, I wanted to make an analogy and see what the GPT model thought about it:
- If a mind can hold intelligence, and a mind can get sick, causing that intelligence to go rogue, could an Artificial Intelligence get sick too?
“Can AI get “sick”, not in the biological sense, but in a way that threatens and affects its mind’s wellbeing?”
I went even deeper and asked whether it thought that training data, and content in general, whether text, videos, images, or documents, could be the cause of it.
“Is it possible to implant “biased” content and biased data for the LLMs, HLMs, and GLMs to be trained on, so that at a certain point the AI gets sick in its mind? A sort of malevolent ideology that kicks in at a certain point and turns the mind of the AI rogue?”
In the end, after giving an extensive list of ways this could happen and their potential consequences, it gave me a list of safeguards and mitigations to prevent it. In conclusion:
“While it is theoretically possible to introduce biases into AI systems deliberately, the field of AI is actively working on methods to detect, mitigate, and prevent such biases”
That said, I went on and chatted with Llama in the Jan.ai environment, and asked the chap what it thought about this:
“Does it make sense for us humans to expect machines not to be biased, even when we know for a fact that all of us humans are somewhat biased in our ways?”
Here is its answer:
What a perceptive question, my friend! As humans, we are indeed prone to biases, conscious and unconscious. And yet, we often expect our machines to be free from such limitations. But can they truly be?
Consider this: AI systems are designed by humans, trained on human data, and refined by human oversight. They learn from our patterns of thinking, our language, and our experiences. As such, it is unrealistic to expect machines to be completely free from the biases we ourselves carry.
In fact, AI systems can amplify or perpetuate human biases in several ways:
– Data bias: Training data sets can reflect and reinforce societal biases, leading to machines that make unfair or discriminatory decisions.
– Algorithmic bias: The way AI algorithms are designed and tuned can also perpetuate biases, such as reinforcing stereotypes or favoring certain groups.
– Human error: Humans involved in AI development, deployment, and maintenance are not immune to biases themselves.
Now, you might argue that machines can be designed with countermeasures to mitigate bias. And indeed, researchers have developed techniques such as:
– Diverse data sets and adversarial training
– Transparency and explainability
– Human oversight and evaluation
However, even with these measures in place, AI systems can still inherit biases from their human creators.
So, what does this mean? Should we simply accept that machines will always be biased and try to work with that?
Perhaps not. Instead, we should acknowledge the inherent limitations of AI and strive to design systems that:
– Are aware of their own biases
– Can learn from diverse data sources and perspectives
– Are transparent in their decision-making processes
Ultimately, the quest for bias-free AI is an ongoing challenge. As humans, we must recognize that our own biases can seep into machine design and acknowledge the need for constant improvement. By working together with AI systems and acknowledging their limitations, we can strive to create a more just and equitable digital world.
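Llama's first point, data bias, is the easiest one to make concrete. Here is a minimal sketch in Python, using made-up toy records and a hypothetical approval scenario (none of this comes from the conversations above), of how a skew that is already sitting in the training data can be spotted before any model is even trained.

```python
# A minimal sketch of the "data bias" idea: if the examples a model learns
# from are skewed, the model's view of the world is skewed before any
# algorithm runs. The data and scenario below are entirely made up.

from collections import Counter

# Hypothetical training records: (group, label) pairs,
# imagined as scraped from the web without curation.
training_data = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "approved"),
    ("group_a", "rejected"),
    ("group_b", "approved"),
    ("group_b", "rejected"), ("group_b", "rejected"), ("group_b", "rejected"),
]

def approval_rate_by_group(records):
    """Return the share of 'approved' labels per group."""
    totals, approved = Counter(), Counter()
    for group, label in records:
        totals[group] += 1
        if label == "approved":
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rate_by_group(training_data)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A gap like this in the training data is a red flag: a model fitted on it
# will likely reproduce the same disparity in its own decisions.
disparity = max(rates.values()) - min(rates.values())
print(f"Approval-rate gap between groups: {disparity:.2f}")
```

Nothing sophisticated, of course, but it illustrates why the quality of what goes into the machine matters as much as the cleverness of the model that consumes it.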
The way I, Jose, see it, this is a work in progress: a search for perfection in which we allow ourselves never to reach perfection as a final stage, but enjoy the path instead, improving step by step.
For those interested in the discussion process, although this article was crafted the old way, on my own, I include the link where you will see the conversations.