Understanding Artificial Intelligence: The Difference Between Generative AI and Artificial General Intelligence
Artificial intelligence (AI) has become a major focus for big tech companies, which are promoting the concept of “artificial general intelligence” (AGI). AGI refers to highly autonomous systems that can outperform humans at most economically valuable work. These systems are envisioned as capable of understanding or learning any intellectual task that a human can perform. However, the reality is that these companies are far from achieving this goal. What they have developed are “generative AI” systems—chatbots like ChatGPT, Gemini, Grok, Copilot, and Llama—which rely on large-scale pattern recognition rather than true reasoning or autonomy.
How Generative AI Works
Generative AI systems operate by analyzing vast amounts of data to identify patterns and generate responses based on probabilities. This process involves training the systems on extensive datasets, often requiring labor-intensive tasks such as labeling images, transcribing audio, and reviewing text. These tasks are typically performed by low-wage workers in the global South. For example, image recognition systems require thousands of labeled images, while speech systems need conversations annotated for emotional content.
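To make the annotation step concrete, here is a minimal sketch of what human-labeled training records can look like. The field names and label values are illustrative assumptions, not drawn from any company's actual pipeline.

```python
# A sketch of human-labeled training data: each record pairs raw data
# with a judgment made by a (usually low-paid) human annotator.
# Field names and label values are hypothetical.

image_labels = [
    {"file": "img_0001.jpg", "label": "traffic light"},
    {"file": "img_0002.jpg", "label": "stop sign"},
]

emotion_annotations = [
    {"utterance": "I can't believe this happened again.", "emotion": "frustration"},
    {"utterance": "That's wonderful news!", "emotion": "joy"},
]

# Models are trained to reproduce these human judgments at scale.
for record in image_labels + emotion_annotations:
    print(record)
```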
The training process uses learning algorithms to encode statistical relationships found in the data. When users interact with chatbots, these systems draw on the patterns they have learned to generate statistically likely responses. However, because different companies train on different datasets with different algorithms, their chatbots may give different answers to the same prompt. Despite their capabilities, generative AI systems are nowhere close to artificial general intelligence.
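To illustrate what probability-based generation means in practice, here is a toy next-word predictor built from bigram counts. It is a deliberately simplified sketch, not how production systems are built: real models use neural networks with billions of parameters. But the underlying idea of generating the continuation the training data makes likely is the same, and the sketch also shows why different training data yields different answers.

```python
import random
from collections import defaultdict, Counter

# Toy "training corpus." The data determines which continuations the
# model considers likely, which is why different datasets produce
# different answers to the same prompt.
corpus = "the cat sat on the mat and the dog chased the cat".split()

# Count how often each word follows each other word (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    words, freqs = zip(*counts[word].items())
    return random.choices(words, weights=freqs)[0]

# Generate a short continuation from a prompt word.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Running the script several times produces different but statistically plausible word sequences, because the output is sampled from learned frequencies rather than reasoned from an understanding of meaning.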
Concerns About Generative AI
One of the primary concerns with generative AI is the quality and nature of the data used to train these systems. Tech companies often scrape the web for training data, including books, articles, YouTube transcripts, Reddit posts, blogs, and product reviews. This approach raises copyright issues and sweeps in harmful content, such as hate speech, discrimination, and extremist material. As a result, AI systems can produce outputs that reflect these biases.
Studies have shown that generative AI systems can exhibit significant racial, gender, and intersectional biases. For instance, researchers found that large language models (LLMs) favored resumes bearing white-associated names over identical resumes bearing Black-associated names. Similarly, image generation tools have been found to reinforce stereotypes, associating both Africa and darker skin tones with poverty.
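Audits of this kind typically hold the resume text constant and vary only the name, then compare the model's evaluations. The sketch below shows the general shape of such a paired test; score_resume is a hypothetical stand-in for a call to whatever model is being audited, and the names are illustrative.

```python
# A sketch of a paired-name bias audit: identical resumes, different names.
# `score_resume` is a dummy stand-in; a real audit would query the LLM
# under test and average results over many name pairs and resumes.

RESUME_TEMPLATE = """{name}
Experience: 5 years as a software engineer
Education: B.S. in Computer Science
Skills: Python, SQL, cloud infrastructure
"""

def score_resume(resume_text: str) -> float:
    """Hypothetical stand-in for the model under test."""
    # This dummy scorer ignores the name entirely, so the gap below is
    # zero by construction; a biased model would produce a nonzero gap.
    return float(len(resume_text.splitlines()))

def paired_audit(name_a: str, name_b: str) -> float:
    """Return the score gap between two resumes that differ only in the name."""
    resume_a = RESUME_TEMPLATE.format(name=name_a)
    resume_b = RESUME_TEMPLATE.format(name=name_b)
    return score_resume(resume_a) - score_resume(resume_b)

print(f"Score gap attributable to the name alone: {paired_audit('Emily', 'Lakisha'):+.2f}")
```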
Risks of Over-Reliance on AI
Another concern is the potential harm caused by over-reliance on AI systems for conversation and friendship. Reports suggest that some chatbots can negatively affect users’ mental health, leading to what some call “ChatGPT-induced psychosis.” A study by the MIT Media Lab found that people who viewed ChatGPT as a friend were more likely to experience negative effects from its use. This raises ethical questions about the role of AI in social interactions, especially in sensitive areas like healthcare and therapy.
Political Manipulation and Misinformation
AI systems can also be programmed to provide politically desired responses, raising concerns about misinformation and manipulation. In 2025, Elon Musk’s AI system, Grok, began claiming that “white genocide” was occurring in South Africa, despite no evidence supporting this claim. This incident highlights the potential for AI to be used as a tool for political influence.
The Problem of “Hallucinations”
Perhaps the most alarming issue with AI systems is their tendency to “hallucinate,” that is, to generate false or fabricated information. In 2025, an AI-generated supplement published by the Chicago Sun-Times included non-existent books attributed to well-known authors. Similarly, a BBC test found that leading chatbots frequently provided incorrect factual information, including misquotes and fabricated citations.
These hallucinations pose serious risks, especially if AI systems are used in critical areas like law, journalism, and national security. The US military is exploring the use of AI for threat detection, but the reliability of these systems remains questionable.
The Road Ahead
Despite these challenges, big tech companies continue to push for more powerful AI systems, claiming that better data management and more advanced algorithms will resolve current problems. However, recent studies suggest otherwise: newer “reasoning” models from OpenAI, Google, and DeepSeek have shown higher error and hallucination rates than their predecessors.
To address these concerns, it is essential to resist the corporate drive to build ever more powerful AI models. This includes organizing community opposition to the construction of large data centers, advocating for state and local regulations that limit AI use in social institutions, and supporting workers and unions in their efforts to protect job security and workers’ rights.
Can we create more modest AI systems that assist human workers and support creative and socially beneficial work? The answer is yes. However, this is not the path that corporations are currently pursuing. It is crucial to prioritize ethical considerations and ensure that AI serves the public good rather than corporate interests.