GenAI makes up sources and facts. It is designed to provide you with an answer and will pull information from anywhere to do so. At times this means fabricating information outright, often through a Frankenstein method of stitching bits of information together. This is most evident in an academic setting in how it provides references. If you ask a GenAI tool like ChatGPT to create a reference list for you on a given topic, it will likely include several fake (hallucinated) sources.

Look at the reference list below. You'll notice four entries in green and eight in red. The ones in green are real sources; the other eight are made up. That is only 33% real and 67% entirely fabricated. This reference list was based on the prompt "provide me a reference list on the topic of the use of GenAI in an academic setting." So the topic is a meta one for ChatGPT: it is looking for sources about GenAI itself. It's possible that other topics have an even worse proportion of real to made-up sources. Also note that even the real sources are not formatted correctly, even though I had asked for the references in the Harvard Cite Them Right referencing style.
Bias is baked into our everyday lives, and online platforms like ChatGPT are no different. It is our responsibility in the work we do to be as equitable and socially conscious as possible. This is another reason why we cannot rely too heavily on GenAI and need to put in the work to find good, credible sources of information from which we can learn and thus produce knowledge ourselves.