Sunday, January 26, 2025

Artificial Intelligence and Biased Behaviors

Recently, a friend of mine sent me a video. In it, two people were complaining that they could not get an AI tool to draw a "full wine glass." My friend asked me, "Is this true?" Curious, I decided to try it myself, and unfortunately the results weren't much different from what the video showed.

When you ask an AI model to "draw a wine glass filled to the brim," it generally can't do it correctly. It keeps drawing a glass that is only half full. I tested this on different platforms, including Gemini, ChatGPT, and Copilot, and the results were nearly the same.



So, why is this happening? 

AI models are heavily influenced by the datasets they are trained on. A model learns from the examples it encounters most frequently in the training data, which can lead to biased behavior. A "full wine glass" is likely a rare example in the training data, so the model tends to reproduce the half-full glasses it has seen far more often.
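To make this concrete, here is a minimal Python sketch of the idea: a generator that simply samples from the empirical distribution of its training examples will almost always reproduce the majority case. The caption strings and the 97/3 split are my own assumptions for illustration, not real training data:

import random

# Hypothetical caption counts in a toy training corpus (illustrative only)
training_examples = (["half-full wine glass"] * 97
                     + ["wine glass filled to the brim"] * 3)

random.seed(42)
samples = [random.choice(training_examples) for _ in range(10)]
print(samples)
# Because 97% of the examples show a half-full glass, nearly every sample
# reproduces the majority case, much like the image models described above.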

Another example came up when I tried to create a professor avatar without a beard, mustache, or glasses. Here too, the model kept drawing professors with beards, mustaches, and glasses. This bias in the AI's drawings is a result of the distribution of the training data: the model tends to repeat items it encounters with high probability and ignores those it has rarely seen.



Bias, in this context, refers to the model's tendency to repeat examples that appear with high probability in the statistical distribution of its training data. As a result, elements that are rarely encountered tend to be overlooked. In other words, the AI's ability to generate accurate or creative outputs is limited by the data it has been exposed to.
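For a rough sense of scale, assume (purely for illustration) that only 2% of wine-glass images in the training data show a glass filled to the brim. A model sampling from that distribution will rarely produce one, even over repeated attempts:

# Probability of getting at least one "filled to the brim" result in n attempts,
# assuming each attempt independently has a 2% chance (an assumed figure).
p_full = 0.02
for n in (1, 5, 20):
    print(n, round(1 - (1 - p_full) ** n, 3))
# 1 -> 0.02, 5 -> 0.096, 20 -> 0.332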

Such biases serve as an important reminder for AI developers and users: the diversity and balance of training data play a crucial role in achieving more accurate and inclusive outcomes. It is clear that more effort is needed to make AI more objective and unbiased.

#AI #ArtificialIntelligence #Bias #MachineLearning #Technology #DataScience #AIethics #Innovation #MuratKarakayaAcademy