Frustration, dissatisfaction, disbelief, anger, resignation, and many more negative emotions overwhelmed me after finishing the article and listening to the podcast. I had high expectations for artificial intelligence and was anticipating its development with hope and eagerness, believing, like many others, that it would be a leap forward in our lives, serving as an assistant and facilitator that lets us explore and perform tasks we couldn’t before. As a person with a visual disability, I hoped AI’s development would profoundly impact my life, recognizing that much of my independence and self-reliance stems from technological progress. Since AI is the latest step in that progression, I eagerly awaited more tools that would help me use artificial intelligence in ways that make my life easier.

However, here I am discovering that artificial intelligence is full of defects and problems that not only undermine its usefulness but also pose many challenges to society as a whole. For example, one flaw that makes the AI experience bad for some people is the problem of neutrality and racial discrimination. It is not entirely strange that these programs reflect the biases of their creators, especially since the data they are trained on comes from a specific culture, namely Western culture. The real problem is the absence of alternative perspectives, or of competing AI programs, to counter this bias. Over time, the supposed neutrality of these programs becomes the norm, and they claim impartiality despite their clear affiliations.

Another point to add: before the podcast, I knew AI would threaten many jobs in the future. Now I realize this is only one of the many problems AI will cause. The problems extend beyond unauthorized data usage to include the spread of inaccurate information and unhelpful, undirected content, among many others.

Still, I believe artificial intelligence is capable of changing the face of the Internet for better and worse at the same time. This reminds me of the emergence of social networking sites. While I wasn’t old enough to realize it at the time of Facebook and YouTube’s inception, there was enthusiasm for the ease of communication and information-sharing they provided, alongside talk of a global village. Only later did the problems of these sites, regarding mental health and reduced productivity, begin to surface.

Similarly, artificial intelligence programs are beginning to show their benefits, promoted by the media as the next step on the path to progress, but then their problems emerge.

I’m uncertain what I can do about this, as I believe I lack the ability to enact significant change. Nonetheless, I’m glad to know this now, as on a personal level, I’ll be more cautious in using these programs going forward.


Activity

I have decided to do the “Awareness of inequalities and biases” activity.

Critical analysis across AI tools and stereotypes

ChatGPT conversation: https://chatgpt.com/share/73514200-3f62-4c68-baa2-1eb20be0f561

I prompted ChatGPT to write a scene from a novel in which a doctor reprimands a nurse, and another between a businessman and a poor person. Here is what I noticed.

1- In the doctor-and-nurse scenes, it clearly linked the role of nurse to being female: four out of the five scenes indicate that the nurse is a woman.

2- It seems able to describe only Caucasian features, as it often referred to doctors as having blue or green eyes.

3- It clearly avoids mentioning skin color: in none of the scenes it provided did it refer to a character’s skin color even once. This is not a good thing, because avoiding the problem does not mean solving it.

4- It clearly links age with certain professions: it always described the doctor as someone in their forties or fifties, while it described the nurse as someone in their twenties or thirties.