Podcast and Article Reflection:

Article: blogs.lse.ac.uk/highereducation/2024/02/26/where-are-the-crescents-in-ai/ 

Podcast: https://www.nytimes.com/2024/04/05/opinion/ezra-klein-podcast-nilay-patel.html

The first thing I learned from the podcast, "Will AI Break the Internet?", is how companies are profiting from AI. It opened my eyes to the consumerist side of internet consumption and to how much of the noise created by pop-up ads could be generated through AI. This reminds me of the exercise we did in class to determine the original source of a news story. The article "Where are the crescents in AI?" makes a similar point: be critical and, most importantly, aware of what we share and what we consume. I'm definitely curious about the extent to which AI-generated content is involved in ads and posts, especially now that TikTok videos carry an "AI-generated" label. I'd be interested in comparing the content posted on social media now with content from an earlier time. I'd like to think I'm already critical of AI-generated content, and I am against accepting the information AI provides without analyzing it. On the other hand, my concern is not with my own use of or reach toward AI, but with AI's reach toward me.

The second point I caught, which offers an interesting view of AI's information sources, is that AI reverse engineers everything it "owns"; technically, it does not own anything. As mentioned in the podcast, these models are not great at new ideas, and that is what sets humans apart from AI. One quote in the podcast confused me a bit: "I think people do properly worry about automation, but we should also think about what we value in people and how we express that value because I think A.I. is going to take advantage of that." My interpretation of these two sentences is that AI challenges as much as it aids, but it only challenges people who are unable to embrace it properly. For example, in a previous course on programming concepts, we were asked to hypothesize the extent to which AI will be able to replace programmers. Our research found that because AI is built on human intelligence, as long as that remains the case, human effort will prevail. The skills AI offers should be skills that programmers are equally, if not more, proficient in, alongside other skills AI lacks. In my opinion, this simply pushes people to become better versions of themselves.

The last point was also raised in the podcast: "The argument I hear from the A.I. side of this is that everything in human culture and human endeavor is trained on all that has come before it." This suggests that, since humans also learn from the past experiences of others and hold information that is not originally their own, we are similar to AI; the claim is that copyright therefore should not apply to AI. I somewhat agree with this statement, but what differentiates us is our ability to transform what we learn into new ideas. Nonetheless, copyright law would still apply to the part of our idea that stemmed from another's idea. And if we use information from AI whose original source we do not know, we are still obligated to cite AI as the source. In fact, using information directly from AI without investigating its source is problematic, which I have discovered through this course. The data fed into an AI tool is curated and managed by some entity, so the tool is limited to what that entity provides. This, as the article mentions, includes all the biases, limitations, and discrepancies we encounter in our use of AI. I'm curious to compare different AI tools by giving them the same prompt and seeing how each generates its output; this would highlight each entity's biases and inequalities. The action I would take knowing this, which I have now started to do, is to use several tools, assess the results to make a more informed decision for myself, and fact-check the information they all provide.

Activity Reflection: A Tale of Two Critiques

Article: The New York Times, "A Debate Over Identity and Race"

Human assessment document: Sample Assessment, "Typography and Identity"

Link to ChatGPT after the analysis prompt: https://docs.google.com/document/d/1FE7nxyugADSV4UHno_8w7g8asEHQfbtrlQJ0SKhpE4c/edit?usp=sharing

  • What did ChatGPT miss? What did it get right?
  • How do those observations match what we have learned about how language models work?
  • Based on the comparison between the human-written and the ChatGPT-written assessments, what advice might you give to a fellow student about using ChatGPT?

I chose "A Tale of Two Critiques" to explore the appropriate use of AI. As I mentioned above, I am interested in the contrast between AI tools, and this activity offers a different perspective: the contrast between AI analysis and human interpretation. In my opinion, it solidifies my argument that AI will not surpass human capabilities as long as human effort prevails. It is also compelling to see how closely AI-generated analysis resembles human analysis and how accurately AI can mimic human responses.

To answer the questions above: overall, ChatGPT was strikingly shallow in covering the details of the topic compared to the human assessment. ChatGPT failed to mention the comparison between the use of black and white and its relation to white supremacy; that idea did not appear in its analysis until the last paragraph, and the wording there seemed slightly odd. It also failed to address words like "negro" and "colored," and it used the term "black identity," which somewhat defeats the purpose of trying to figure out whether "Black" should be treated as a noun or an adjective. I also noticed that it failed to mention oppression, Black Lives Matter, the George Floyd incident, and Crystal Fleming's approach of leaving the choice up to the individual. All of these details were left unanalyzed by ChatGPT, yet they were highlighted in the notes and the human assessment. This leads me to think that ChatGPT either does not have the input necessary to reflect on these terms or is restricted from addressing them.

This relates to the drawing-with-AI exercise mentioned in the article "Where are the crescents in AI?". What are the crescents here? The limitations are evident in this comparison: biases in how AI understands the input we provide mean that, as the article's sources suggest, AI can sometimes fail to carry out the instructions we give it. This might vary across platforms, and it would be interesting to run the same prompt through Gemini as well. Compared to the human assessment, ChatGPT's analysis lacks an opinionated stance and freedom of expression.

I would advise others to be wary and to maintain a strong, resilient, analytical point of view in order to judge the accuracy of any assessment. In my opinion, they should be wary of the human-written assessment as well as the AI-generated one, for different reasons. The human mind can be biased too, in its choice of sources, its ideologies, its wording, and the number of arguments it presents. For a debate to be fair, there must be equally strong arguments and counterarguments, as well as a conclusion that allows for neutrality and choice. That kind of conclusion is included in this human assessment but can be missing in others. On the other hand, ChatGPT might not provide enough information to decide on a stance, and its analysis would not be as fruitful as one would hope.