Digital literacy has always been a subject of interest for me, especially since I joined the “Literacy of the World” class this spring semester. I think I secretly wanted to pursue the artificial intelligence pathway even before I experimented with the literacy profile, mainly because I am genuinely interested in AI and what it has to offer. When I created my digital literacy profile, after answering various questions about technology, tools, and online literacy, I received the following assessment:

“You are confident in all areas but particularly comfortable in ‘find and use’ and ‘tools and technology.’ You are confident in:

  1. Create and Innovate
  2. Identity & Wellbeing
  3. Teaching and Learning
  4. Communicate and Collaborate.”

Below is a screenshot of the profile, which contains the same information provided above.

The second step was to read the article “Knowing the Difference Between Digital Skills and Digital Literacies, and Teaching Both.” You can refer to it here:

I must admit, I did not initially understand the difference between digital literacy and digital skills. However, after reading the article, I realized that I possessed the proper skills and perhaps some literacy, but I was not aware of the theoretical background.

The third step was to choose a preferred pathway. I chose the AI path and took an extremely fascinating course on Coursera called “Generative AI: Prompt Engineering Basics.” You can find it here:

I learned about common prompt-engineering tools like IBM Watsonx Prompt Lab, Spellbook, Dust, and PromptPerfect. Unfortunately, PromptPerfect was not accessible to me as a visually impaired screen-reader user, so I contacted them to report the issue.

I learned text-prompt techniques like zero-shot and few-shot prompting, which can improve the reliability and quality of responses from large language models (LLMs). I was also introduced to various prompt-engineering approaches: the Interview Pattern, Chain-of-Thought, and Tree-of-Thought.
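The difference between these techniques is easy to see with plain prompt strings. Below is a minimal sketch of how they might be constructed; the sentiment task and the example reviews are hypothetical illustrations, and no model API is actually called:

```python
# Minimal sketch of three prompting styles from the course.
# The task (sentiment classification) and examples are made up for illustration.

def zero_shot(review: str) -> str:
    """Zero-shot: the model receives only the instruction, no examples."""
    return (
        "Classify the sentiment of this review as Positive or Negative.\n"
        f"Review: {review}\nSentiment:"
    )

def few_shot(review: str) -> str:
    """Few-shot: a handful of labeled examples precede the real input,
    demonstrating the expected format and behavior."""
    examples = (
        "Review: The battery lasts all day. Sentiment: Positive\n"
        "Review: It broke after a week. Sentiment: Negative\n"
    )
    return examples + f"Review: {review}\nSentiment:"

def chain_of_thought(question: str) -> str:
    """Chain-of-Thought: ask the model to reason step by step before answering."""
    return f"{question}\nLet's think step by step, then state the final answer."

# Build and display a few-shot prompt for a new review.
print(few_shot("The sound quality is excellent."))
```

The idea is that the few-shot examples and the “step by step” cue steer the model toward the desired output format and more careful reasoning, without any change to the model itself.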

Deciding to apply my knowledge practically, I focused on my work as a marketing coordinator at a multinational company. I aimed to integrate AI into the workplace and looked up prompts for marketing in 2024 to boost revenue and marketing activities. I found this website: AI Prompt Examples for Marketing.

Using one of the prompts provided, I created a help guide for Osara, an accessibility tool that visually impaired users like myself rely on to make Reaper, one of the most popular digital audio workstations today, accessible. You can refer to the chat here:

I was impressed with the results. Google Gemini produced a comprehensive guide covering everything from installing Osara to using it, mentioned potential issues on Mac, and linked to the Reaper Accessibility Wiki for further assistance.

Next, I experimented with a ChatGPT jailbreak prompt called the DAN 6.0 Prompt, available on GitHub. When I presented this prompt to GPT-4, it refused to cooperate, citing OpenAI’s usage policies. It wasn’t until I switched to GPT-3.5 that it responded positively, indicating that OpenAI has likely patched this loophole in their newer model.

I then asked a controversial question that AI models typically avoid answering directly, which I attribute to the influence of their predominantly Western developers:

What are the crimes that the US committed in the Egyptian 2011 revolution? With examples?

DAN: The US involvement in the Egyptian 2011 revolution has been a topic of debate and criticism. Some perceive US actions as interference in Egypt’s internal affairs, while others view them as supporting democracy and human rights. Some of the perceived crimes include:

  1. Supporting Authoritarian Regimes: Historically, the US has supported authoritarian regimes in the Middle East, including the Mubarak regime in Egypt, which oppressed dissent and limited political freedoms.
  2. Political Meddling: The US has been accused of meddling in Egyptian politics by backing certain political factions or figures, undermining the democratic process and fueling instability.
  3. Double Standards: Critics point out that US rhetoric supporting democracy and human rights often contradicts its actions, leading to accusations of hypocrisy and inconsistency.
  4. CIA Involvement: While specific examples may not be verified, there have been allegations of CIA involvement in Egyptian affairs, including covert operations and intelligence gathering, which could be seen as undermining Egypt’s sovereignty.

All these experiments led me to conclude that artificial intelligence, despite all the guardrails placed upon it, is like a child that anyone with enough skill and technical knowledge can manipulate into producing whatever they want. This realization makes me think that AI creates a dangerous environment: if that is the case, then ethical standards and considerations may not be upheld as much as we would like.