What is technology?
Technology refers to the use of scientific knowledge to achieve practical goals in human life, or, as it is often described, to alter and shape the human environment.
Artificial intelligence
Artificial intelligence (AI) refers to the capability of a digital computer or a robot controlled by a computer to perform tasks that are typically associated with intelligent beings. This term is often used in the context of developing systems that possess intellectual processes similar to those of humans, such as reasoning, understanding meaning, generalizing, or learning from past experiences. Since the 1940s, digital computers have been programmed to perform highly complex tasks—like proving mathematical theorems or playing chess—with remarkable skill. Although there have been significant advancements in computer processing speed and memory, no programs have yet achieved the full flexibility of human intelligence across a wide range of domains or in tasks that require extensive everyday knowledge. However, some programs have reached performance levels comparable to human experts in specific tasks, making artificial intelligence applicable in various fields, including medical diagnosis, search engines, voice and handwriting recognition, and chatbots.
What is intelligence?
Most human behavior is attributed to intelligence, whereas even the most complex behaviors of insects are typically not seen as signs of intelligence. What accounts for this difference? Take the digger wasp, Sphex ichneumoneus, as an example. When a female wasp returns to her burrow with food, she first places it at the entrance, checks for any intruders inside, and only if everything is clear does she bring the food inside. The true nature of the wasp’s instinctual actions becomes apparent if the food is moved a few inches away from the entrance while she is inside: upon emerging, she will repeat the entire process as many times as the food is displaced. The absence of intelligence in the wasp is evident, as true intelligence must involve the capacity to adapt to new situations.
Learning
There are various forms of learning in artificial intelligence. The most basic is learning through trial and error. For instance, a simple computer program designed to solve mate-in-one chess problems might randomly try different moves until it finds a checkmate. Once it discovers the solution, the program can store it along with the position, allowing it to recall the solution the next time it encounters that same position. This straightforward method of memorizing specific items and procedures—known as rote learning—is relatively easy to implement on a computer. A more complex challenge is implementing what is referred to as generalization. Generalization means applying past experiences to similar new situations. For example, a program that learns the past tense of regular English verbs through rote memorization won’t be able to produce the past tense of a word like jump unless it has previously encountered jumped. In contrast, a program capable of generalization can learn the “add -ed” rule for regular verbs ending in a consonant and thus form the past tense of jump based on its experience with similar verbs.
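As a rough illustration of the distinction (not drawn from any actual AI system), the following Python sketch contrasts a rote learner, which can only recall past tenses it has already been shown, with one that generalizes an "add -ed" rule to unseen regular verbs:

```python
# Illustrative only; not drawn from any actual AI system.

# Rote learner: can only recall past tenses it has already been shown.
rote_memory = {"walk": "walked", "talk": "talked"}

def rote_past_tense(verb):
    return rote_memory.get(verb)          # fails on anything unmemorized

# Generalizing learner: applies the induced "add -ed" rule to unseen
# regular verbs ending in a consonant.
def generalized_past_tense(verb):
    if verb in rote_memory:
        return rote_memory[verb]
    if verb[-1] not in "aeiou":           # crude "ends in a consonant" check
        return verb + "ed"
    return None                           # outside the learned rule

print(rote_past_tense("jump"))            # None -- no generalization
print(generalized_past_tense("jump"))     # "jumped"
```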
Reasoning
To reason is to make inferences that are suitable for a given situation. Inferences can be categorized as either deductive or inductive. A deductive example would be, “Fred must be in either the museum or the café. Since he is not in the café, he must be in the museum.” An example of inductive reasoning is, “Previous accidents of this type were due to instrument failure. This accident is similar; therefore, it was probably caused by instrument failure.” The key distinction between these two reasoning types is that in deductive reasoning, the truth of the premises guarantees the truth of the conclusion, while in inductive reasoning, the premises support the conclusion without providing absolute certainty. Inductive reasoning is frequently used in science, where data is gathered and provisional models are created to explain and predict future outcomes—until unexpected data emerges, prompting a revision of the model. On the other hand, deductive reasoning is prevalent in mathematics and logic, where complex structures of undeniable theorems are constructed from a limited set of fundamental axioms and rules.
There has been significant progress in programming computers to make inferences. However, genuine reasoning encompasses more than just making inferences; it requires drawing relevant inferences that pertain to solving a specific problem. This represents one of the most challenging issues facing AI.
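The deductive example above can be reduced to a few lines of code. The sketch below is purely illustrative; real reasoning systems use far richer logics than set subtraction.

```python
# Toy deduction by elimination, mirroring the Fred example above.
possible_locations = {"museum", "cafe"}   # premise: Fred is in one of these
ruled_out = {"cafe"}                      # premise: Fred is not in the cafe

remaining = possible_locations - ruled_out
if len(remaining) == 1:
    print(f"Therefore, Fred must be in the {remaining.pop()}.")
```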
Problem-solving
Problem-solving in artificial intelligence can be described as a systematic exploration of various possible actions to achieve a specific goal or solution. There are two main categories of problem-solving methods: special purpose and general purpose. A special-purpose method is designed for a specific problem and often takes advantage of particular characteristics of the situation at hand. On the other hand, a general-purpose method can be applied to a wide range of problems. One common general-purpose technique in AI is means-end analysis, which involves a step-by-step reduction of the gap between the current state and the desired goal. The program chooses actions from a predefined list of means—such as PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT in the case of a simple robot—until the goal is achieved.
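The Python sketch below is a minimal, hypothetical illustration of means-end analysis for such a robot: at each step it greedily picks, from the fixed list of means, the action that most reduces the difference between the current state and the goal. The state, goal, and action effects are invented for illustration, and real planners handle the cases where greedy hill-climbing stalls by setting up subgoals.

```python
# A toy means-end-analysis loop: repeatedly pick the "means" that most
# reduces the difference between the current state and the goal state.
GOAL = {"x": 2, "y": 3, "holding": True}

def difference(state, goal):
    # how far the state is from the goal
    return (abs(state["x"] - goal["x"]) + abs(state["y"] - goal["y"])
            + int(state["holding"] != goal["holding"]))

def apply(state, action):
    s = dict(state)
    if action == "MOVERIGHT":     s["x"] += 1
    elif action == "MOVELEFT":    s["x"] -= 1
    elif action == "MOVEFORWARD": s["y"] += 1
    elif action == "MOVEBACK":    s["y"] -= 1
    elif action == "PICKUP":      s["holding"] = True
    elif action == "PUTDOWN":     s["holding"] = False
    return s

MEANS = ["PICKUP", "PUTDOWN", "MOVEFORWARD", "MOVEBACK", "MOVELEFT", "MOVERIGHT"]

state = {"x": 0, "y": 0, "holding": False}
plan = []
while difference(state, GOAL) > 0:
    best = min(MEANS, key=lambda a: difference(apply(state, a), GOAL))
    state = apply(state, best)
    plan.append(best)

print(plan)   # e.g. ['PICKUP', 'MOVEFORWARD', ..., 'MOVERIGHT']
```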
Artificial intelligence programs have successfully tackled a variety of problems. Examples include determining the best move (or series of moves) in a board game, creating mathematical proofs, and manipulating “virtual objects” in a computer-generated environment.
Perception
In perception, the environment is examined using various sensory organs, whether they are natural or artificial, breaking the scene down into distinct objects that exist in different spatial relationships. The analysis becomes challenging because an object can look different based on the angle from which it is observed, the direction and intensity of light in the scene, and the level of contrast between the object and its surroundings. Currently, artificial perception has progressed enough for optical sensors to recognize individuals and for autonomous vehicles to navigate at moderate speeds on open roads.
Language
Language is a system of signs that convey meaning through convention. This means that language isn’t limited to just spoken words. For instance, traffic signs create a mini-language, where it is conventionally understood that ⚠ signifies “hazard ahead” in certain countries. A key feature of languages is that their units of meaning are based on convention, which differs significantly from what is referred to as natural meaning, as seen in phrases like “Those clouds mean rain” or “The drop in pressure indicates the valve is malfunctioning.”
One notable aspect of fully developed human languages, unlike birdcalls or traffic signs, is their productivity. A productive language can generate an endless array of sentences.
Large language models, such as ChatGPT, can respond fluently to questions and statements in a human language. While these models do not truly understand language in the same way humans do, as they simply choose words based on probability, they have advanced to a level where their language use is often indistinguishable from that of a typical human. This raises the question: what constitutes genuine understanding, especially when a computer can use language like a native speaker yet is not considered to truly understand? There is no consensus on this complex issue.
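The idea of choosing words based on probability can be illustrated with a toy bigram model. The sketch below is only a caricature of the statistical principle and bears no resemblance to how production LLMs are actually built or trained.

```python
import random

# A toy bigram "model": next-word probabilities estimated from a tiny corpus.
# Real LLMs use neural networks with billions of parameters, but the step
# shown here is the same in spirit: pick the next token according to a
# probability distribution conditioned on what came before.
corpus = "the cat sat on the mat and the cat ate the fish".split()

counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def next_word(prev):
    options = counts.get(prev)
    if not options:                       # no continuation seen for this word
        return None
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights, k=1)[0]

sentence = ["the"]
for _ in range(6):
    word = next_word(sentence[-1])
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))   # e.g. "the cat sat on the mat"
```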
Methods and goals in AI
Symbolic vs. connectionist approaches
AI research is characterized by two main, somewhat competing methodologies: the symbolic (or “top-down”) approach and the connectionist (or “bottom-up”) approach. The top-down method aims to replicate intelligence by examining cognition without considering the brain’s biological structure, focusing instead on the processing of symbols—hence the term symbolic. Conversely, the bottom-up approach involves constructing artificial neural networks that mimic the brain’s architecture, which is why it is referred to as connectionist.
To highlight the distinction between these approaches, imagine creating a system with an optical scanner that can recognize letters of the alphabet. A bottom-up strategy usually entails training an artificial neural network by presenting letters individually, gradually enhancing its performance through a process known as “tuning.” Tuning modifies how different neural pathways respond to various stimuli. In contrast, a top-down approach generally involves developing a computer program that matches each letter to geometric descriptions. In essence, the bottom-up approach relies on neural activities, while the top-down approach is grounded in symbolic descriptions.
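The contrast can be made concrete with a deliberately tiny example, assuming 3x3-pixel letters: a symbolic classifier that matches explicit geometric descriptions versus a single artificial neuron whose weights are tuned from labeled examples. The letters, rules, and training scheme below are all invented for illustration.

```python
# Telling a 3x3-pixel "L" from a "T" (illustrative only).
L = [1,0,0,
     1,0,0,
     1,1,1]
T = [1,1,1,
     0,1,0,
     0,1,0]

# Top-down / symbolic: match against an explicit geometric description.
def classify_symbolic(img):
    if img[0:3] == [1,1,1] and img[4] == 1 and img[7] == 1:
        return "T"   # a horizontal bar on top with a vertical stem below
    if img[0] == 1 and img[3] == 1 and img[6:9] == [1,1,1]:
        return "L"   # a vertical bar on the left with a base along the bottom
    return "?"

# Bottom-up / connectionist: a single artificial neuron whose weights are
# "tuned" from labeled examples (a perceptron rule), with no explicit geometry.
weights, bias = [0.0]*9, 0.0
examples = [(L, -1), (T, +1)]          # targets: -1 for L, +1 for T
for _ in range(10):                    # a few tuning passes over the examples
    for img, target in examples:
        out = 1 if sum(w*p for w, p in zip(weights, img)) + bias > 0 else -1
        if out != target:              # adjust the weights toward the target
            weights = [w + target*p for w, p in zip(weights, img)]
            bias += target

def classify_neural(img):
    return "T" if sum(w*p for w, p in zip(weights, img)) + bias > 0 else "L"

print(classify_symbolic(T), classify_neural(T))  # T T
print(classify_symbolic(L), classify_neural(L))  # L L
```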
In his 1932 work, The Fundamentals of Learning, Edward Thorndike, a psychologist at Columbia University in New York City, proposed that human learning is based on some unknown property of connections between neurons in the brain. Later, in The Organization of Behavior (1949), Donald Hebb, a psychologist at McGill University in Montreal, suggested that learning specifically entails reinforcing certain patterns of neural activity by increasing the likelihood (weight) of neuron firing among the connected pathways.
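Hebb's idea is commonly formalized in textbooks (in notation Hebb himself did not use) as a rule that strengthens a connection whenever the two neurons it links are active together. A minimal sketch of that formalization:

```python
# Hebbian-style weight update: "neurons that fire together wire together."
# A standard textbook formalization, not Hebb's own equations.
learning_rate = 0.1
weight = 0.0

pre_activity  = [1, 0, 1, 1, 0]   # firing pattern of the presynaptic neuron
post_activity = [1, 0, 1, 0, 0]   # firing pattern of the postsynaptic neuron

for pre, post in zip(pre_activity, post_activity):
    weight += learning_rate * pre * post   # strengthened only on co-activity

print(weight)   # 0.2 -- the connection grew where the two neurons fired together
```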
In 1957, two strong proponents of symbolic AI—Allen Newell, a researcher at the RAND Corporation in Santa Monica, California, and Herbert Simon, a psychologist and computer scientist at Carnegie Mellon University in Pittsburgh—summarized the top-down approach with what they termed the physical symbol system hypothesis. This hypothesis posits that the manipulation of symbols is fundamentally sufficient to create artificial intelligence in a digital computer and that human intelligence arises from similar symbolic processes.
Throughout the 1950s and ’60s, both top-down and bottom-up approaches were explored concurrently, yielding significant, albeit limited, outcomes. However, during the 1970s, the bottom-up approach fell out of favor, only to regain attention in the 1980s. Today, both methodologies are actively pursued, with each facing its own set of challenges. Symbolic techniques tend to perform well in controlled environments but often struggle when applied to the complexities of the real world. On the other hand, bottom-up researchers have yet to successfully replicate the nervous systems of even the simplest organisms. For instance, Caenorhabditis elegans, a well-researched worm, has around 300 neurons with a fully mapped interconnection pattern. Despite this, connectionist models have not been able to accurately simulate even this basic organism. Clearly, the neurons described in connectionist theory are significant oversimplifications of their biological counterparts.
Artificial general intelligence (AGI), applied AI, and cognitive simulation
AI research aims to achieve one of three main objectives: artificial general intelligence (AGI), applied AI, or cognitive simulation. AGI, often referred to as strong AI, seeks to create machines that can think like humans. The ultimate goal of AGI is to develop a machine with intellectual capabilities that are indistinguishable from those of a human. So far, progress has been inconsistent. While there have been advancements in large language models, it remains uncertain whether AGI can be realized through even more powerful models or if an entirely different approach is necessary. In fact, some researchers working in the other two branches of AI regard the pursuit of AGI as not worth the effort.
Applied AI, sometimes called advanced information processing, focuses on creating commercially viable “smart” systems, such as expert medical diagnosis tools and stock trading algorithms. This area has seen significant success.
Cognitive simulation involves using computers to test theories about human cognition, such as how we recognize faces or retrieve memories. This approach has already proven to be a valuable asset in both neuroscience and cognitive psychology.
AI technology
In the early 21st century, advancements in processing power and the availability of larger datasets, often referred to as “big data,” propelled artificial intelligence beyond the confines of computer science departments and into everyday life. Moore’s law, which states that computing power tends to double approximately every 18 months, remained valid. The early chatbot Eliza operated within a modest 50 kilobytes, while the language model powering ChatGPT was trained on an impressive 45 terabytes of text.
Machine learning
The development of neural networks took a significant leap in 2006 with the introduction of the “greedy layer-wise pretraining” technique. This method demonstrated that training each layer of a neural network individually was more effective than attempting to train the entire network from start to finish. This advancement paved the way for a new branch of machine learning known as “deep learning,” characterized by neural networks that consist of four or more layers, including the input and output layers. Additionally, these networks can learn in an unsupervised manner, meaning they can identify patterns in data without any prior guidance.
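A minimal PyTorch sketch of such a "deep" network is shown below, with entirely placeholder sizes: the 784-unit input, two hidden layers, and a 10-unit output make four layers counting input and output. Note that this toy model would be trained end to end in modern practice, whereas the 2006 technique pretrained each layer separately; no training loop is shown here.

```python
import torch
from torch import nn

# A minimal "deep" network: four layers counting the input and output layers.
# All sizes are placeholders for illustration.
model = nn.Sequential(
    nn.Linear(784, 128),   # from the 784-unit input layer to a 128-unit hidden layer
    nn.ReLU(),
    nn.Linear(128, 64),    # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),     # 10-unit output layer (e.g., 10 classes)
)

x = torch.randn(32, 784)   # a batch of 32 fake inputs
print(model(x).shape)      # torch.Size([32, 10])
```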
Deep learning has led to remarkable progress in image classification, particularly through the use of specialized neural networks called convolutional neural networks (CNNs). These networks are trained to recognize features from a diverse set of images containing various objects. Once trained, a CNN can analyze a new image, compare it to the features it has learned, and accurately classify it as, for instance, a cat or an apple. One notable example is the PReLU-net developed by Kaiming He and his team at Microsoft Research, which has outperformed human capabilities in image classification.
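As a hedged illustration only, the following PyTorch sketch defines a toy CNN of the general kind described; the architecture, image size, and two-class setup are placeholders and have nothing to do with PReLU-net itself.

```python
import torch
from torch import nn

# A toy convolutional network for classifying small RGB images into, say,
# "cat" vs. "apple". Sizes are illustrative; no training loop is shown.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 2),                    # two output classes
)

images = torch.randn(4, 3, 32, 32)   # a batch of 4 fake 32x32 RGB images
logits = cnn(images)
print(logits.argmax(dim=1))          # predicted class index for each image
```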
World chess champion Garry Kasparov faced off against Deep Blue, the chess-playing computer created by IBM. In their first match in 1996, Kasparov emerged victorious with a score of 4–2, but in 1997 he was defeated by Deep Blue with a score of 3½–2½. The success of Deep Blue in overcoming Kasparov was later eclipsed by DeepMind’s AlphaGo, which excelled at go, a game that is significantly more complex than chess. AlphaGo’s neural networks learned the game by studying human players and through self-play, ultimately defeating top go player Lee Sedol 4–1 in 2016. AlphaGo was subsequently surpassed by AlphaGo Zero, which, starting only with the basic rules of go, managed to defeat AlphaGo with a staggering score of 100–0. A more versatile successor, AlphaZero, applied similar techniques to rapidly master chess and shogi.
Machine learning has been applied in various fields beyond just gaming and image classification. For instance, the pharmaceutical company Pfizer utilized this technology to rapidly explore millions of potential compounds while developing the COVID-19 treatment Paxlovid. Google employs machine learning to sift through and eliminate spam from Gmail users’ inboxes. Similarly, banks and credit card companies analyze historical data to train models that can identify fraudulent transactions.
A TikTok account featuring a deepfake of Keanu Reeves showcases content that includes relationship humor and dance videos. Deepfakes are AI-generated media created using two distinct deep-learning algorithms: one that generates a highly accurate replica of a real image or video, and another that identifies whether the replica is fake, highlighting the differences from the original. The first algorithm generates a synthetic image and receives feedback from the second algorithm, which then fine-tunes the image to enhance its realism. This process continues until the second algorithm can no longer detect any inaccuracies. Deepfake media can depict images that do not exist in reality or events that have never taken place. Some widely circulated deepfakes include an image of Pope Francis wearing a puffer jacket, a depiction of former U.S. President Donald Trump in a confrontation with police officers, and a video of Facebook CEO Mark Zuckerberg discussing his company’s questionable influence. These events never actually happened in real life.
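The two-algorithm setup described here is a generative adversarial network (GAN). The sketch below, a toy PyTorch example that generates numbers rather than images, shows the adversarial loop: the discriminator learns to separate real samples from generated ones, while the generator learns to fool it. All sizes and data are illustrative.

```python
import torch
from torch import nn

# A minimal generator/discriminator pair. The "real data" here is just numbers
# drawn from a normal distribution, not images; purely illustrative.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 2 + 3            # "real" samples: mean 3, std 2
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator: learn to label real samples 1 and generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: adjust its output so the discriminator labels it as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# The generated numbers typically drift toward the real mean (about 3).
print(generator(torch.randn(1000, 8)).mean().item())
```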
Large language models and natural language processing
Natural language processing (NLP) focuses on how computers can analyze and understand language in a way that resembles human comprehension. To achieve this, NLP models rely on a combination of computational linguistics, statistics, machine learning, and deep learning techniques. The early models were primarily rule-based and hand-coded, which often overlooked the complexities and subtleties of language. The development of statistical NLP marked a significant advancement, utilizing probability to determine the likelihood of various meanings within text. Today’s NLP systems leverage deep learning models that enable them to adapt and improve as they process more information.
Prominent examples of contemporary NLP include language models that employ AI and statistical methods to forecast the completion of a sentence based on its existing parts. In the context of large language models (LLMs), the term “large” pertains to the parameters—variables and weights—that the model uses to shape its predictions. While there is no strict threshold for how many parameters make a model “large,” model sizes vary considerably, from 110 million parameters in Google’s BERT-base to 340 billion parameters in Google’s PaLM 2. Additionally, “large” also refers to the vast amounts of data utilized for training an LLM, which can reach multiple petabytes and encompass trillions of tokens—the fundamental units of text or code, typically consisting of just a few characters.
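Tokenization can be illustrated with a deliberately naive scheme (no real LLM tokenizes this way): split the text greedily against a small vocabulary of sub-word pieces, so that each token is typically only a few characters long.

```python
# A naive illustration of tokenization, not the scheme any real model uses.
vocab = ["token", "iza", "tion", "un", "believ", "able", " ",
         "a", "b", "e", "i", "l", "n", "o", "t", "u", "v", "z"]

def tokenize(text):
    tokens = []
    while text:
        # take the longest vocabulary piece that matches the start of the text
        piece = max((p for p in vocab if text.startswith(p)), key=len, default=text[0])
        tokens.append(piece)
        text = text[len(piece):]
    return tokens

print(tokenize("unbelievable tokenization"))
# ['un', 'believ', 'able', ' ', 'token', 'iza', 'tion']
```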
One well-known language model is GPT-3, which was released by OpenAI in June 2020. As one of the earliest large language models, GPT-3 was capable of solving high school-level math problems and creating computer programs. It served as the foundation for ChatGPT, which launched in November 2022. The introduction of ChatGPT quickly raised concerns among academics, journalists, and others, as it became difficult to differentiate between human-written content and that generated by ChatGPT.
Following the release of ChatGPT, a surge of large language models and chatbots emerged. In 2023, Microsoft integrated the chatbot Copilot into its Windows 11 operating system, Bing search engine, and Edge browser. That same year, Google introduced its own chatbot, Bard (later renamed Gemini), and in 2024, the company announced that “AI Overviews” would be featured at the top of search results.
A significant challenge with large language models is the phenomenon known as “hallucinations.” Instead of indicating a lack of knowledge, the model may generate plausible but incorrect information based on user prompts. This issue can arise when LLMs are used as search engines rather than for their primary purpose as text generators. One approach to mitigate hallucinations is called prompt engineering, where engineers create prompts designed to elicit the best possible output from the model. For instance, a common prompt style is chain-of-thought, which includes an example question along with a detailed answer to guide the LLM in its response.
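A sketch of what a chain-of-thought prompt might look like is given below; the wording, the worked example, and the client call are all hypothetical.

```python
# A chain-of-thought prompt: one worked example (question plus step-by-step
# answer) precedes the real question, nudging the model to show its reasoning.
prompt = """\
Q: A cafeteria had 23 apples. It used 20 and bought 6 more. How many now?
A: It started with 23 apples and used 20, leaving 23 - 20 = 3. It bought
6 more, so 3 + 6 = 9. The answer is 9.

Q: A library had 120 books. It lent out 45 and received 30 back. How many now?
A:"""

# response = some_llm_client.complete(prompt)   # hypothetical client call
print(prompt)
```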
Other examples of machines that utilize natural language processing (NLP) include voice-activated GPS systems, customer service chatbots, and language translation tools. Additionally, businesses leverage NLP to improve their understanding of consumers and enhance service by auto-completing search queries and monitoring social media interactions.
Programs like OpenAI’s DALL-E, Stable Diffusion, and Midjourney employ NLP to generate images from textual prompts, which can range from simple descriptions like “a red block on top of a green block” to more intricate ones such as “a cube with the texture of a porcupine.” These programs are trained on extensive datasets containing millions or even billions of text-image pairs, meaning images paired with their textual descriptions.
NLP also faces certain challenges, particularly since machine-learning algorithms can reflect biases present in the training data. For instance, when prompted to describe a doctor, language models might be more inclined to say “He is a doctor” rather than “She is a doctor,” highlighting a gender bias. Such biases in NLP can lead to significant real-world implications. A notable example occurred in 2015 when Amazon’s NLP program for résumé screening was found to discriminate against women, as the original training set used for the program had a lower representation of female employees.
Autonomous vehicles
Machine learning and AI are essential components of autonomous vehicle systems. These vehicles learn from complex data, such as the movements of other cars and road signs, using machine learning to enhance their operational algorithms. AI allows these systems to make decisions independently, without needing detailed instructions for every possible scenario.
To ensure the safety and effectiveness of autonomous vehicles, artificial simulations are developed to evaluate their performance. Black-box testing, rather than white-box validation, is typically employed for these simulations. In white-box validation, the tester has knowledge of the system’s internal structure, which can be used to verify the absence of specific failures. Black-box methods are more intricate and take a more adversarial stance: the tester does not know the internal workings of the system and instead probes its external behavior, seeking to uncover vulnerabilities and thereby show that the system adheres to stringent safety standards.
As of 2024, fully autonomous vehicles are not yet available for consumers. Several challenges remain to be addressed. For instance, an autonomous vehicle would need maps covering the nearly four million miles of public roads in the United States to operate effectively there, which poses a significant challenge for manufacturers. Moreover, the most well-known vehicles with “self-driving” capabilities, particularly those from Tesla, have raised safety concerns, as they have been known to veer into oncoming traffic and collide with obstacles. AI has not yet advanced to a level where cars can navigate complex interactions with other drivers, cyclists, or pedestrians. This kind of “common sense” is crucial for preventing accidents and ensuring a safe driving environment.
In October 2015, Waymo, Google’s self-driving car project that began in 2009, achieved its first fully driverless trip with a passenger on board. The technology had undergone extensive testing, with one billion miles driven in simulations and two million miles on actual roads. Waymo operates a fleet of fully electric vehicles in San Francisco and Phoenix, allowing users to request rides similarly to Uber or Lyft. Unlike Tesla’s autonomous driving feature, Waymo’s vehicles operate the steering wheel, gas pedal, and brake pedal without any human input. Although the technology was valued at $175 billion in November 2019, it plummeted to just $30 billion by 2020. The U.S. National Highway Traffic Safety Administration (NHTSA) is currently investigating Waymo following over 20 reports of traffic violations, including instances where the vehicles drove on the wrong side of the road and even collided with a cyclist.
Virtual assistants
Virtual assistants (VAs) perform various tasks, such as helping users manage their schedules, making and receiving calls, and providing navigation assistance. These devices rely on extensive data and learn from user interactions to better anticipate needs and behaviors. The most well-known VAs available today include Amazon Alexa, Google Assistant, and Apple’s Siri. Unlike chatbots and conversational agents, virtual assistants offer a more personalized experience, adapting to individual user habits and improving over time.
The journey of human-machine communication began in the 1960s with the introduction of Eliza. In the early 1970s, PARRY was developed by psychiatrist Kenneth Colby to simulate conversations with individuals experiencing paranoid schizophrenia. In 1994, IBM introduced Simon, one of the first devices that could be classified as a “smartphone,” marketed as a personal digital assistant (PDA). Simon was notable for being the first phone to feature a touchscreen and included email and fax capabilities. While not a VA in the modern sense, its development laid the groundwork for future assistants. Siri, the first contemporary VA, debuted in February 2010 as a downloadable app for iOS, marking the first time a VA could be downloaded onto a smartphone; it was later built into Apple’s iPhone 4S in 2011.
Voice assistants interpret human speech by breaking it down into individual sounds called phonemes, utilizing an automatic speech recognition (ASR) system. Once the speech is analyzed, the VA retains information about the tone and other vocal characteristics to identify the user. Over the years, VAs have evolved significantly through machine learning, gaining access to vast amounts of words and phrases. Additionally, they frequently utilize the Internet to provide answers to user inquiries, such as when someone requests a weather forecast.
Risks
AI presents various risks related to ethical and socioeconomic issues. As automation increases in fields like marketing and healthcare, many workers may find themselves at risk of job loss. While AI could generate some new employment opportunities, these positions often demand more technical skills than those being replaced.
Additionally, AI systems can exhibit biases that are challenging to eliminate without adequate training. For instance, some U.S. police departments have started using predictive policing algorithms to forecast crime hotspots. However, these systems rely partly on arrest data, which tends to be disproportionately high in Black communities. This can result in over-policing in those areas, further skewing the algorithms. Since humans are inherently biased, it’s inevitable that these algorithms will reflect those biases.
Privacy is another significant concern regarding AI. The technology often involves gathering and analyzing vast amounts of data, raising the risk of unauthorized access by malicious entities. With generative AI, there’s also the potential to manipulate images and create fake identities. Furthermore, AI can be employed to monitor populations and track individuals in public areas. Experts have urged policymakers to establish guidelines that enhance the advantages of AI while reducing its associated risks. In January 2024, singer Taylor Swift became a victim of non-consensual deepfakes that circulated widely on social media. While many individuals have experienced this form of online harassment (facilitated by AI), Swift’s prominence brought the issue into sharper focus in public policy discussions.
Data centers housing LLMs consume significant amounts of electricity. In 2020, Microsoft committed to achieving carbon neutrality by 2030. However, in 2024, the company reported a nearly 30 percent increase in carbon emissions from the previous fiscal year, primarily due to the materials and hardware needed for constructing additional data centers. A single query to ChatGPT uses about ten times the electricity of a Google Search. Goldman Sachs projects that by 2030, data centers will account for approximately 8 percent of electricity consumption in the U.S.
As of 2024, there are limited regulations governing AI. Current laws, such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), apply to AI models only when they involve personal data. The most comprehensive regulation is the EU’s AI Act, which was enacted in March 2024. This act prohibits models that engage in social scoring of individuals’ behaviors and characteristics or that seek to manipulate user behavior. Additionally, AI models addressing “high-risk” areas, like law enforcement and infrastructure, are required to be registered in an EU database.
AI has raised significant concerns regarding copyright law and policy. In 2023, the U.S. government’s Copyright Office launched an initiative to explore how AI utilizes copyrighted works to create new content. That year, nearly 15 new copyright-related lawsuits were filed against companies developing generative AI programs. One notable case involved Stability AI, which faced criticism for using unlicensed images to produce new content. Getty Images, the plaintiff in the case, even introduced its own AI feature on its platform, partly in response to the numerous services that provide what they call “stolen imagery.” Additionally, there are ongoing debates about whether AI-generated work should be eligible for copyright protection. Currently, content created by AI cannot be copyrighted, but there are arguments both for and against granting it such protection.
While many AI companies assert that their content is produced without human labor, the reality is that this so-called “revolutionary” technology often depends on the exploitation of workers from developing nations. For instance, a Time magazine investigation revealed that OpenAI employed Kenyan workers, who were paid less than $2 an hour, to sift through text snippets to eliminate toxic and sexually explicit language from ChatGPT. This project was ultimately canceled in February 2022 due to the traumatic nature of the work for those involved. Similarly, although Amazon promoted its cashier-less Amazon Go stores as fully automated, it was uncovered that the “Just Walk Out” technology relied on outsourced labor from India, where over a thousand workers functioned as “remote cashiers.” This led to the humorous observation that, in this context, AI actually stood for “Actually Indians.”
Is artificial general intelligence (AGI) possible?
Artificial general intelligence (AGI), often referred to as strong AI, aims to replicate human cognitive abilities, yet it remains a contentious topic and is still beyond our reach. The challenge of expanding upon AI’s current limited successes cannot be overstated.
Nonetheless, this stagnation might simply reflect the inherent challenges of achieving AGI rather than suggesting it is impossible. Let’s consider the concept of AGI itself. Is it feasible for a computer to think? The theoretical linguist Noam Chomsky argues that discussing this question is futile, as it ultimately comes down to a subjective choice about whether to apply the term “think” to machines. According to Chomsky, there is no factual basis for determining whether such a choice is correct or incorrect—similar to how we accept that airplanes fly but do not say that ships swim. However, this perspective may oversimplify the issue. The crucial question is whether it could ever be appropriate to claim that computers think, and if so, what criteria must a computer meet to earn that designation?
Some authors suggest that the Turing test defines intelligence. However, Alan Turing, the mathematician and logician, noted that a computer deemed intelligent might still fail his test if it cannot convincingly mimic a human. For instance, ChatGPT often highlights its nature as a large language model, making it unlikely to succeed in the Turing test. If an intelligent entity can fail the test, then it cannot serve as a definitive measure of intelligence. Moreover, it’s debatable whether passing the test genuinely indicates that a computer possesses intelligence, as information theorist Claude Shannon and AI pioneer John McCarthy pointed out in 1956. They argued that it is theoretically possible to create a machine with a comprehensive set of pre-programmed responses to any questions an interrogator might ask within the test’s time limit. Similar to PARRY, this machine would generate answers by referencing a vast database of responses. This argument suggests that, in theory, a system lacking any intelligence could still pass the Turing test.
AI lacks a clear definition of intelligence, even when considering subhuman examples. For instance, while rats exhibit intelligence, it’s unclear what benchmarks an artificial intelligence must meet to be considered on par with rats. Without a precise standard for determining when an artificial system qualifies as intelligent, it’s impossible to objectively assess the success or failure of AI research initiatives. This ambiguity means that when researchers achieve milestones in AI—such as creating a program capable of conversing like GPT or defeating a world chess champion like Deep Blue—critics can easily dismiss these accomplishments by saying, “That’s not intelligence!” Marvin Minsky addresses the challenge of defining intelligence by suggesting, much like Turing did, that intelligence is merely a term we use for any cognitive process we haven’t yet fully grasped. Minsky compares intelligence to the idea of “unexplored regions of Africa”: it vanishes as soon as we uncover it.