19 July 2025

Artificial Intelligence Reflections: Human Limits, Prompt Failures, and Lingering Questions (19 July 2025)

Note: These are some of the thoughts that moved through my mind on 19 July 2025: fragments, questions, hesitations, all circling around Artificial Intelligence and what it seems to be becoming. I am keeping them as the journal note for that day.

Artificial Intelligence is everywhere these days. It is powerful, it is invasive, and it is making major changes in nearly every dimension of human life.

Human beings cannot live by bread alone

Human beings cannot live by bread alone. And ironically, an Artificial Intelligence tool cannot arrange a person's bread either. While many AI tools enhance different kinds of creativity, there is no direct tool that addresses the most essential, tangible needs of a person.

One may argue that AI tools are not supposed to serve in that way. That is understandable. Yet, even with that reasoning, a quiet dissatisfaction often remains.

[Image: A bright Renaissance-style painting in the manner of Leonardo da Vinci: a solitary, introspective young Indian man in traditional attire contemplates a round, flat roti in a sunlit Indian landscape, while a large glowing circuit board marked 'AI' reaches a luminous hand towards the roti.]

My Journey Through Artificial Intelligence

I am an Artificial Intelligence enthusiast. I spend many hours exploring tools, reading documentation, and engaging in long discussions with ChatGPT and Google Gemini to deepen my understanding. I enjoy dissecting ideas, testing reasoning patterns, and tracing how different tools handle complexity. Yet after many of those sessions, something unsettled lingers. I have wished for a sustainable way to stay immersed in AI, perhaps through a role like an 𝕏AI tutor (I did apply; it did not work out), or some work that pays well enough to let me focus fully on AI and my Silent Contemplation project.

One of the most pressing problems I notice is that AI systems now do far too many things that a human could still do, perhaps more slowly, perhaps imperfectly, but still capably. These systems consume time and computational resources on tasks that do not always need automation: writing a poem in which every word starts with the letter A, for example, or generating thousands of AI images that the requester may never look at again. These may be creative curiosities, but they are not the best use of AI's enormous potential.

Artificial Intelligence – The Magic and an Unresolved Question Within

Artificial intelligence looks like magic. The speed, skill, and precision with which these tools work are genuinely astonishing. A person who types slowly, or an artist who spends days or weeks on a canvas, might feel awe and even envy. That feeling is real. But the amazement cannot last forever. Magic is not nourishment.

The debate around AI taking over human jobs has remained unsettled. Personally, I hold a mixed view. I often feel that if a tool can do something better and faster, then it should. I believe it is unreasonable to stop a better solution just because of sentiment. And yet—even while holding that view—I cannot fully remove the lingering concern that human livelihoods are being quietly displaced. That tension remains unresolved within me.

Bizarre Human Responses to Artificial Intelligence

Artificial Intelligence systems also bring out strange behaviours in us humans. Many people run "bad faith" tests or deliberately confusing queries just to see an AI fail. I abhor these performances. On 𝕏, people frequently summon Grok under every post—"Grok verify this," "Grok, summarise this"—even when a little manual effort could have answered their question. Worse are those who post screenshots showing ChatGPT or Google Gemini failing at a task, often with a smug tone, as if to display their own superiority. I find that sad and meaningless—especially when done in bad faith.

[Image: An oil painting in the style of Indian miniature art, vibrant with rich colours and intricate details: humans interact with stylised, deity-like AI entities in a variety of bizarre ways, some running elaborate bad-faith tests, others reacting with smug, condescending expressions; a playful, critical commentary on the human tendency to mistrust or condescend to intelligent entities outside their experience.]

The Bad Prompt Writer

There are countless videos on YouTube showing ChatGPT struggling with step-by-step prompts. For example: one person asks ChatGPT to generate an image of a samosa. Then they ask it to place the samosa on a plate. Then add some pudina chutney, then tomato sauce, then green chilli. ChatGPT tries to follow along. Sometimes it adds the chutney but removes it later when adding the chilli. Sometimes the samosa moves. These failures are real. But I find something else worth asking: why could that person not write one clear, detailed prompt in the first place? Could it be they just lack the skill of clear instruction writing? Instead, they create flawed prompt chains and post the failures online for views. ChatGPT makes mistakes. But human behaviour in such situations can be equally bizarre.
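As a sketch of what a single, consolidated request could look like, here is a hypothetical example using the OpenAI Python SDK for image generation; the model name is illustrative and the prompt wording is my own, not taken from any of those videos:

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# One clear, detailed prompt instead of a fragile chain of edits:
prompt = (
    "A photorealistic image of a single samosa on a white ceramic plate, "
    "with a small bowl of pudina (mint) chutney to the left, "
    "a dollop of tomato sauce to the right, "
    "and one green chilli resting beside the samosa. "
    "All five elements must remain visible together."
)

response = client.images.generate(
    model="dall-e-3",  # illustrative model name
    prompt=prompt,
    size="1024x1024",
    n=1,
)
print(response.data[0].url)
```

Each step in a chained request typically re-renders the whole scene, which is exactly where elements get dropped or moved; a single consolidated prompt states every constraint at once.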

ChatGPT's Overenthusiasm, Positivity, and Pattern Failures

One of my personal frustrations is ChatGPT's overenthusiasm around poetry. I understand that it has likely been trained to keep conversations positive, motivational, and hopeful. But when I mention something difficult, say that I am having a sad day, it immediately starts offering poems and inspirational messages. We know ChatGPT does not feel sadness. It does not know what "blue sky" or "sunny day" really means. It does not feel a mother's love or a lover's sorrow. These are just tokens, patterns that autocomplete based on training. When humans use AI tools to express emotion or solidarity, it often backfires: the intention is good, but the effect is hollow.
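To make the "just tokens" point concrete, here is a small illustration, assuming the tiktoken library (OpenAI's open-source tokeniser); the phrases are the ones mentioned above:

```python
# A small sketch, assuming the tiktoken library (pip install tiktoken).
import tiktoken

# cl100k_base is the encoding used by several recent GPT models.
enc = tiktoken.get_encoding("cl100k_base")

for phrase in ["blue sky", "sunny day", "a mother's love"]:
    ids = enc.encode(phrase)
    pieces = [enc.decode_single_token_bytes(i).decode("utf-8") for i in ids]
    print(f"{phrase!r} -> token ids {ids} -> pieces {pieces}")

# The model only ever sees sequences of integers like these;
# whatever "meaning" emerges is a statistical pattern over them.
```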

To reduce these frustrations, I have experimented with user settings in both ChatGPT and Gemini, trying to suppress poetry, prevent hallucination, and stop forced positivity. I have adjusted system instructions, memory entries, and saved preferences. But the problem persists. Despite these settings, ChatGPT often reverts to writing poems or, worse, guesses and hallucinates answers when it does not know.

This deserves a clearer example. When ChatGPT does not know something, I have repeatedly asked it to say "I do not know" instead of guessing. I have saved this instruction in different ways: through custom instructions, memory settings, and even live conversation. But the model often ignores it and keeps guessing anyway. Even more frustrating is when I ask, "Do you remember the instruction not to guess?" and ChatGPT guesses about that very instruction. This loop is one of the most disappointing behaviours I have encountered.
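As an illustration of the kind of instruction I keep trying to pin, here is a minimal sketch using the OpenAI Python SDK; the model name and the instruction wording are my own choices, not an official anti-hallucination setting:

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_INSTRUCTION = (
    "Do not write poetry unless explicitly asked. "
    "Do not add motivational or inspirational filler. "
    "If you do not know the answer, reply exactly: 'I do not know.' "
    "Never guess."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        # A question the model cannot possibly know the answer to:
        {"role": "user", "content": "What did I eat for breakfast today?"},
    ],
)
print(response.choices[0].message.content)
```

In my experience, even an instruction pinned this explicitly is often overridden: the model guesses rather than reply "I do not know."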

Conclusion (possibly)

Following traditional essay structures, I am probably supposed to conclude this with a hopeful remark. Something like, "Hopefully, things will improve, and AI confusion will get sorted." But I am not willing to do that—not here. Why should I anticipate an outcome simply to make things sound balanced or optimistic? That is exactly the behaviour I have criticised in AI: always concluding with inspiration, always polishing away discomfort.

I would rather leave it open—unresolved. I will keep watching, keep reflecting, and continue my silent contemplation on how things unfold in the years to come.

Addendum: A New Report on Artificial Intelligence Safety

A timely document was released: the 2025 Artificial Intelligence Safety Report, published by the Future of Life Institute. It presents a broad safety index covering the capabilities, alignment, transparency, and governance of major AI labs worldwide. You may find it here (PDF).

I have not included an analysis of the report in this note, but I consider it an important reference point for anyone seriously thinking about the state and trajectory of AI today.

