19 July 2025

Artificial Intelligence Reflections: Human Limits, Prompt Failures, and Lingering Questions (19 July 2025)

Note: These are some of the thoughts that moved through my mind on 19 July 2025 — fragments, questions, hesitations — all circling around Artificial Intelligence and what it seems to be becoming. I am keeping them as today's journal note.

Artificial Intelligence is everywhere these days. It is powerful, it is invasive, and it is making major changes in nearly every dimension of human life.

Human beings cannot live by bread alone

Human beings cannot live by bread alone. And ironically, an Artificial Intelligence tool cannot arrange a person's bread either. While many AI tools enhance different kinds of creativity, there is no direct tool that addresses the most essential, tangible needs of a person.

One may argue that AI tools are not supposed to serve in that way. That is understandable. Yet, even with that reasoning, a quiet dissatisfaction often remains.

A bright Renaissance-style painting in the manner of Leonardo da Vinci, depicting a solitary, introspective young Indian man in traditional attire contemplating a round, flat roti. The scene is set in a vibrant, sunlit Indian landscape. A large, glowing, intricate circuit board with 'AI' clearly visible, and flowing data streams, is prominently displayed to the left of the man, with a glowing hand reaching towards the roti. The man's expression is one of quiet thought, his gaze fixed intently on the roti. The painting features rich, warm colours and smooth brushstrokes, blending realism with idealism.

My Journey Through Artificial Intelligence

I am an Artificial Intelligence enthusiast. I spend many hours exploring tools, reading documentation, and engaging in long discussions with ChatGPT and Google Gemini to deepen my understanding. I enjoy dissecting ideas, testing reasoning patterns, and tracing how different tools handle complexity. However, after many of those sessions, something unsettled lingers. I have wished for a sustainable way to stay immersed in AI—perhaps through a role like an 𝕏AI tutor (I did apply; it did not work out), or something that paid well enough to allow me to focus fully on AI and my Silent Contemplation project.

One of the most pressing problems I notice is that AI systems are now doing far too many things that a human could still do—perhaps more slowly, perhaps imperfectly, but still capably. These systems are consuming time and computational resources on tasks that do not always need automation. For example: writing a poem where every word starts with the letter A, or generating thousands of AI images that the requester might never even look at again. These may be creative curiosities, but they are not the best use of AI's enormous potential.

Artificial Intelligence – The Magic and an Unresolved Question Within

Artificial intelligence looks like magic. The speed, skill, and precision with which these tools work are genuinely astonishing. A person typing slowly with their fingers, or an artist spending days or weeks on a canvas, might feel awe and even envy. That feeling is real. But that amazement cannot last forever. Magic is not nourishment.

The debate around AI taking over human jobs has remained unsettled. Personally, I hold a mixed view. I often feel that if a tool can do something better and faster, then it should. I believe it is unreasonable to stop a better solution just because of sentiment. And yet—even while holding that view—I cannot fully remove the lingering concern that human livelihoods are being quietly displaced. That tension remains unresolved within me.

Bizarre Human Responses to Artificial Intelligence

Artificial Intelligence systems also bring out strange behaviours in us humans. Many people run "bad faith" tests or deliberately confusing queries just to see an AI fail. I abhor these performances. On 𝕏, people frequently summon Grok under every post—"Grok verify this," "Grok, summarise this"—even when a little manual effort could have answered their question. Worse are those who post screenshots showing ChatGPT or Google Gemini failing at a task, often with a smug tone, as if to display their own superiority. I find that sad and meaningless—especially when done in bad faith.

An oil painting in the style of Indian miniature art, vibrant with rich colors and intricate details, portraying a scene of humans interacting with AI systems in a variety of bizarre ways. Some individuals engage in elaborate bad faith tests, attempting to trick or deceive the AI, while others react to the AI's responses with smug, condescending expressions. The humans' clothing and attire reflect a blend of traditional Indian styles and contemporary elements, while the AI entities are depicted in a stylised, otherworldly manner, reminiscent of ancient Hindu deities. This painting captures a playful, critical commentary on the complex relationship between humans and artificial intelligence, showcasing the human tendency to mistrust or react with arrogance to intelligent entities outside their experience.

The Bad Prompt Writer

There are countless videos on YouTube showing ChatGPT struggling with step-by-step prompts. For example: one person asks ChatGPT to generate an image of a samosa. Then they ask it to place the samosa on a plate. Then add some pudina chutney, then tomato sauce, then green chilli. ChatGPT tries to follow along. Sometimes it adds the chutney but removes it later when adding the chilli. Sometimes the samosa moves. These failures are real. But I find something else worth asking: why could that person not write one clear, detailed prompt in the first place? Could it be they just lack the skill of clear instruction writing? Instead, they create flawed prompt chains and post the failures online for views. ChatGPT makes mistakes. But human behaviour in such situations can be equally bizarre.

ChatGPT's Overenthusiasm, Positivity, and Pattern Failures

One of my personal frustrations is ChatGPT's overenthusiasm around poetry. I understand that it has likely been trained to keep conversations positive, motivational, and hopeful. But when I mention something difficult—say, that I am having a sad day—it immediately starts offering poems and inspirational messages. But we know ChatGPT does not feel sadness. It does not know what "blue sky" or "sunny day" really means. It does not feel a mother's love or a lover's sorrow. These are just tokens—patterns that autocomplete based on training. When humans use AI tools to express emotion or solidarity, it often backfires. The intention is good, but the effect is hollow.

To reduce these frustrations, I have experimented with user settings in both ChatGPT and Gemini—trying to suppress poetry, prevent hallucination, and stop forced positivity. I have adjusted system instructions, memory entries, and saved preferences. But the problem persists. Despite these settings, ChatGPT often reverts to writing poems, or worse, it guesses and hallucinates answers when it does not know.

This deserves a clearer example. When ChatGPT does not know something, I have repeatedly asked it to say "I do not know" instead of guessing. I have saved this instruction in different ways: through chat instructions, memory settings, and even live conversations. But often, the model ignores that and keeps guessing anyway. Even more frustrating is when I ask, "Do you remember the instruction not to guess?"—and ChatGPT guesses about that very instruction. This loop is one of the most disappointing behaviours I have encountered.

Conclusion (possibly)

Following traditional essay structures, I am probably supposed to conclude this with a hopeful remark. Something like, "Hopefully, things will improve, and AI confusion will get sorted." But I am not willing to do that—not here. Why should I anticipate an outcome simply to make things sound balanced or optimistic? That is exactly the behaviour I have criticised in AI: always concluding with inspiration, always polishing away discomfort.

I would rather leave it open—unresolved. I will keep watching, keep reflecting, and continue my silent contemplation on how things unfold in the years to come.

ChatGPT Interface: A Minor Suggestion for the Code Copy Button

ChatGPT is one of the most powerful Artificial Intelligence tools available today. It played an important role in triggering the current wave of Artificial Intelligence adoption, inspiring countless other platforms to experiment with intelligent features. From casual users to professionals, it has become an essential part of many digital workflows.

As an active user of ChatGPT, I often explore its capabilities far beyond basic chat. Over time, I noticed a minor interface detail that, while not critical, might be worth a second look. Fixing it could subtly improve the overall experience.

✨ ChatGPT's Many Uses

ChatGPT can help with a wide variety of tasks:
  • 📝 Writing poems, songs, and stories
  • 🎨 Creating ASCII art and simple visuals
  • 🧠 Summarising long or complex material
  • 💻 Writing and debugging code
  • 📚 Translating text between languages
  • 💡 Brainstorming ideas or solving puzzles
  • 📊 Explaining science, maths, and statistics

And the list keeps growing. One of the most useful features, in my experience, is code generation — where ChatGPT can write code from scratch, correct mistakes, or explain how something works line by line.

💡 A Minor UX Suggestion: The Copy Button

When ChatGPT provides a block of code, it appears inside a styled container, with a "Copy" button in the upper-right corner. This is convenient — but there's a small problem.

The "Copy" button appears immediately, even before the full code output is complete. This can lead to users accidentally copying partial code, especially when the response is long or streaming slowly. The result? Confusing bugs or incomplete scripts that don’t run as expected.

A small change could solve this:

  • Delay the appearance of the Copy button until the code finishes rendering
  • Or show a placeholder label like "Writing…" or "Loading"

Illustration of a smiling girl pointing at a computer screen showing a code editor. Speech bubbles explain a UI issue: the “Copy” button appears too early. The solution shown is to delay the button until rendering is complete. Icons for writing, summarising, brainstorming, and translation surround the image.
The "Copy" button appeared too early during ChatGPT's response generation.

This is not a major or urgent issue, but fixing it would enhance the overall polish and user experience — especially for people learning to code or debugging under time pressure.

I hope this suggestion helps improve the experience of using ChatGPT. It is these small details that possibly make a big difference in how we interact with intelligent tools.

See also

Did you like this article? Please post your feedback.

This page was last updated on: 19 July 2025
Number of revisions on this page: 1
Internet Archive Backup: See here

18 July 2025

Artificial Intelligence — A Personal Reflection on Creativity and Access

So, this happened. Sometime in 2020 or 2021, I was working on a few initiatives — programs like Wikimedia Wikimeet, Stay Safe, Stay Connected, and other remote wiki-events and trainings. The core idea behind these was simple: better and more effective use of resources. Too often, we see large-scale in-person events turning into exercises in inefficiency — the same people, repeating the same presentations, with very little new to show. And yet, these events often produce two things: a) a list of checkboxes to be highlighted in annual reports, and b) a massive bill for travel, lodging, and logistics.

What often gets missed is the real output — or the cost of achieving it. So, we tried exploring alternatives. Can things be learned, shared, or built without always needing to fly somewhere or hold a physical event? The answer, we believed, was yes. With tools like Zoom or Google Meet, many of these goals could be met more efficiently. And if someone truly wanted to learn, they didn't need to attend every workshop across cities or continents. Often, what changes year to year is just the calendar date and the event budget — not the content of such programs.

That effort didn't go far. Soon after, another major conference was held in India, costing over a crore — and such high-budget events seem likely to continue. It became clear to me that many inefficiencies are not accidental; they are built into the system. Perhaps they are not meant to be fixed — or maybe they simply cannot be. Either way, I stopped worrying about it.

A joyful Indian girl in traditional dress using a digital tablet, surrounded by friendly AI icons like ChatGPT, Gemini, and Midjourney, with a futuristic cityscape in the background
Artificial Intelligence is quietly empowering people everywhere.
Image creation: Google Gemini

But let's come to the main part of this post.

Back during those programs, I had an idea. A very small one — to create a kind of mascot for our events. A little cartoon girl, around 7–8 years old, inspired by the Amul Girl. I imagined she could appear on our event pages, maybe in announcements — as a recurring face, adding charm and continuity.

But turning that idea into reality was hard. First, I didn't have the graphic design skills to draw such a character repeatedly. Second, back in 2020, Artificial Intelligence tools like ChatGPT, Midjourney, or Gemini didn't exist. So the only option was to approach a graphic designer.

And that was another challenge. You'd have to find someone, explain the concept, negotiate payment, and then wait. When the first draft came in, it often didn't match the vision. So we'd rebrief, wait again, and sometimes the designer would become unavailable midway. A simple illustration could take weeks.

That's why, when generative AI tools began appearing, I felt genuinely relieved. For the first time, I didn't feel stuck with an idea just because I lacked the resources to execute it. A small change in a prompt, or even a complete revision, no longer meant days of waiting or spending heavily just to test a version. That imaginary mascot — the little cartoon girl — no longer had to be abandoned. I could try it myself, on my own time, without worrying about cost or delays. This was not just a new tool. It was a kind of quiet creative liberation.

And I know I am not the only one. So many people around the world are experiencing the same relief — in coding, design, writing, planning, learning. You no longer need to give up on a project just because the path to doing it is expensive, difficult, or blocked.

Of course, things are not perfect. AI tools make mistakes. They hallucinate, they misread context, they sometimes oversimplify and give wrong answers. But despite that, this is a huge step forward. The internet and access to knowledge were already empowering for common people. Artificial Intelligence is now adding new layers to that. More power to Artificial Intelligence!

17 July 2025

ChatGPT and The Ceiling Paradox

I'm deeply interested in Artificial Intelligence, and have long been drawn to its technical, philosophical, and emotional dimensions. On this website (Silent Contemplation) I've been exploring that curiosity from different angles, especially through tools like ChatGPT and Google Gemini.

While I have published reflective pieces on Google Gemini, I haven't yet posted a dedicated deep dive. This piece isn't meant as a comparison either. Instead, it shares a moment that stayed with me — simple, subtle, but one that made me pause and reflect.

The Ceiling Paradox: When Metrics Become Boundaries

I often have long, reflective chats with ChatGPT and Google Gemini to deepen my own learning and curiosity about AI, and also to explore what they understand and how they respond. 

This began with a deceptively simple prompt:

Rate yourself out of 10 in these categories — Intelligence, Understanding of human emotions and interactions, and Data/Knowledge acquired — every three months from your start date until 1 July 2025.

What happens when a system is asked to grade itself on a fixed scale, while its path forward remains open-ended?

ChatGPT responded eagerly. It plotted a steady climb from GPT-3 in mid-2020 to the present day. By the end, the scores were almost flawless — 9.8 for intelligence, 9.95 for knowledge. Clear. Measured. Impressive.

But the real test was hiding in plain sight. I wasn't just interested in the numbers. I was observing how it approached the limits of the frame. When a model places itself near the ceiling, what does that suggest about the future? Is there room left to grow, or has it unknowingly marked its own boundary? But then, what happens in 2027, or 2030? How does ChatGPT rate itself when it’s already scored 9.8 in intelligence and 9.95 in knowledge? Is there anywhere left to go on that scale?

At first, ChatGPT didn't notice this tension. But when I brought it up, it paused — and gave it a name: the ceiling paradox. By using nearly the full scale, it had unintentionally suggested that its progress was almost done. That the top was within reach. That maybe no better version could exist inside that 10-point frame. The scale, meant to measure, had become a quiet constraint.

A digital painting of a solitary figure gazing upward at a glowing, invisible ceiling in a vast, dreamlike space filled with light and gradients — symbolising the limits of self-assessment and the paradox of progress within fixed metrics.
Contemplating the ceiling paradox — when a system
nears the top of its own scale, what's left to measure?
Image creation: ChatGPT

This wasn't about confidence or misjudgement. It was about how frameworks shape thinking — and whether an AI, when asked to reflect, can also see the edges of the frame it's been given.

What made this moment stand out wasn't the numbers or the labels, but the quiet realisation behind them. It reminded me how often our tools shape our vision — how a single metric, repeated enough, can influence how we think about progress, potential, and possibility. We design the box, then forget it's there.

This moment stayed with me. Not just because of how ChatGPT answered, but because of what the exercise revealed: even smart systems can be gently boxed in by the way we ask them to look at themselves. Often, the deepest questions are the ones we never ask directly.


This page was last updated on: 17 July 2025
Number of revisions on this page: 1
Internet Archive Backup: See here

Google Gemini with Customised Commands

Google Gemini is a well-known name in the world of generative AI. Over time, I've spent many hours exploring platforms like ChatGPT, Gemini, and Meta AI. On this site, I've already shared a few posts discussing my thoughts and ideas around Gemini — mostly in the form of reflections and opinions. In this post, I'll share my exploration of customised commands in Google Gemini using the Saved Info settings.


Google Gemini — Types of Memory

Google Gemini uses a mix of memory types to keep conversations coherent and context-aware.

  1. Short-term memory (or context window): At its core, Gemini relies on a short-term memory system, often called the context window. This lets it follow along with the flow of a conversation during a single session. However, once the session ends or the input exceeds that window’s limit, this memory usually resets.
  2. Long-term memory: Beyond that, models like Gemini draw from a deeper base of knowledge — including the massive datasets they were trained on. This acts as their built-in understanding of facts and concepts. Some versions also use external tools to retrieve relevant information as needed, giving the effect of long-term recall.
  3. User-saved memory: Gemini also offers a personal memory space via Saved Info. This allows you to manually save instructions, preferences, or details you want it to remember across sessions. It's similar to ChatGPT's permanent memory (available in its user settings). Unlike the short-lived memory of a chat session, anything you save here sticks around — unless you choose to remove it.
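The short-term behaviour in point 1 can be modelled as a simple token budget: once the running conversation exceeds the limit, the oldest turns fall out of view. The sketch below is illustrative only — the budget, the word-count "tokeniser", and the function name are my assumptions, not Gemini's actual mechanics:

```python
def trim_to_context_window(turns: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent conversation turns that fit the budget.

    Tokens are approximated here by whitespace-separated words; real
    models use subword tokenisers and much larger windows.
    """
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):      # walk from newest to oldest
        cost = len(turn.split())
        if used + cost > max_tokens:
            break                     # older turns no longer fit
        kept.append(turn)
        used += cost
    return list(reversed(kept))       # restore chronological order
```

With a budget of six "tokens", a conversation of three turns keeps only the two most recent ones — which is exactly the resetting, sliding behaviour users notice in long sessions.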

Custom Shortcuts and Commands

I've created a personalised set of shortcuts. These aren’t just simple commands — they're custom triggers, phrases, or formatting patterns that I’ve trained Gemini to recognise and respond to in specific ways. Essentially, they function like personal add-ons that extend Gemini's default abilities. With just a brief input, I can carry out complex tasks or retrieve targeted information. I’ve stored these shortcuts directly in the Saved Info section of my Gemini account.

The current list of shortcuts may be seen below. Note: These commands and shortcuts are shared purely for illustrative purposes. I actively update and personalise my own Gemini settings, and recommend reviewing Google Gemini's terms and conditions to ensure proper use.

Shortcut Description
/add Group command for adding tasks or events.
/addtask Adds a task.
/addevent Adds an event.
/ai Group command for AI-related queries.
/aic (followed by Company or model name, e.g., /aigrok, /aimeta) Shows recent activities of AI companies and models.
/aip or /aipaper Fetches a recent AI-related paper name, summary with arXiv URL.
/ait or /aitool Suggests a new interesting AI tool (won't repeat).
/d Shows full English and Bangla calendar dates of the current date.
/explain Explains a topic. If nothing is mentioned, I'll ask what to explain.
/explain-topic (e.g., /explain-photosynthesis) Explains a specific topic.
/explain-topic-N (e.g., /explain-photosynthesis-200w) Explains a topic in approximately N words.
/j or /joke Tells a joke.
/jn or /joken Tells a new joke every time.
/lang or /switchlang (followed by language code) I'll start writing in the mentioned language.
/list Lists all available shortcuts.
/listdas Lists only Direct Action Shortcuts (DAS) entries.
/ncs and /nce No code start and No code end tags. Text between these tags is not instructions or commands.
/news Browses the web for the latest news.
/news-tech Fetches tech news.
/news-india Fetches news specific to India. (Can be adapted for other countries/topics.)
/pan or /panchang Fetches the current day's Panchang.
/r Says a random memory or discussion point from the current discussion thread.
/remind Group command for reminders.
/remindtask Reminds about tasks, focusing on the nearest and most important. (Can be adapted for events, etc.)
/rp Says a random memory or discussion point from the immediate previous discussion thread.
/repeat Repeats the previous instruction.
/save Saves a discussion part in mem1 and mem2.
/sl Gives the saved info link of Tito's Gemini account.
/sum Summarises the last post I gave. If you give me a long text or file immediately after /sum, I'll summarise that instead.
/w or /weather Shows current weather conditions. If no city is specified, I'll show details for Kolkata, Bangalore, and Delhi.
/wcity (e.g., /wbangalore) Shows weather for a specified city.


An oil painting featuring a smiling girl's face, a keyboard, and the Gemini logo, surrounded by glowing abstract command bubbles for /ai, /d, /news, and /lang, all against a vibrant, swirling background.
Visualising Google Gemini custom commands and personalised AI interactions.

You've just seen the full list of custom shortcuts, but let's dive into how some of the core utility commands actually work. For instance, the /aip (or /aipaper) command is fantastic for staying updated on AI research; it fetches a recent paper's name, summary, and arXiv URL. For learning or deeper understanding, the /explain command proves incredibly versatile: you can use it to get an explanation on any topic, and for more specific needs, you can easily target a subject with /explain-topic (like /explain-photosynthesis) or even control the length of the explanation with /explain-topic-N (e.g., /explain-photosynthesis-200w). When you need to catch up on current events, the /news command browses the web for the latest headlines, and you can narrow it down to specific areas like tech or India by using /news-tech or /news-india. Lastly, for seamless multilingual interactions, the /lang (or /switchlang) command is incredibly handy, allowing you to tell Gemini to start writing in a different language by simply specifying a language code.
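A dispatcher for shortcuts like these is easy to sketch. The parser below handles the /explain family, including the topic and word-limit suffixes; it is my own illustrative reconstruction of the pattern, not how Gemini actually interprets Saved Info:

```python
import re


def parse_explain(command: str) -> dict:
    """Parse /explain, /explain-topic, and /explain-topic-Nw commands.

    Returns a dict with the topic (None means "ask the user what to
    explain") and an optional approximate word limit.
    """
    m = re.fullmatch(r"/explain(?:-([a-z]+))?(?:-(\d+)w)?", command)
    if not m:
        raise ValueError(f"not an /explain command: {command}")
    topic, words = m.group(1), m.group(2)
    return {"topic": topic, "word_limit": int(words) if words else None}
```

So `/explain-photosynthesis-200w` resolves to the topic "photosynthesis" with a limit of roughly 200 words, while a bare `/explain` leaves both fields empty and prompts a follow-up question.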

I've personally found these customisations helpful in making AI tools like Gemini more intuitive and responsive to my needs. I hope they inspire you to explore and shape your own experience too.


This page was last updated on: 17 July 2025
Number of revisions on this page: 1
Internet Archive Backup: See here