
What is Artificial Intelligence? How Does AI Work? - RK store



Frequently Asked Questions About Artificial Intelligence (AI)


WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial intelligence (AI) is a broad field of computer science focused on building intelligent machines capable of performing tasks that typically require human intelligence.


WHAT ARE THE FOUR TYPES OF ARTIFICIAL INTELLIGENCE?

  • Reactive machines.
  • Limited memory.
  • Theory of Mind.
  • Self-awareness.


WHAT ARE EXAMPLES OF ARTIFICIAL INTELLIGENCE?

  • Smart assistants, such as Siri and Alexa.
  • Self-driving cars.
  • Robo-advisors.
  • Conversational bots (chatbots).
  • Email spam filters.
  • Netflix's recommendations.


How Does Artificial Intelligence Work?

AI Approaches and Concepts

Less than a decade after breaking the Nazi encryption machine Enigma and helping the Allies win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?"


Turing's paper "Computing Machinery and Intelligence" (1950) and the Turing Test it proposed established the fundamental goal and vision of artificial intelligence.


At its core, AI is the branch of computer science that aims to answer Turing's question in the affirmative: researchers seek to replicate or simulate human intelligence in machines.


Artificial intelligence's broad purpose has sparked a slew of questions and arguments. So much so that there is no commonly acknowledged definition of the field.


"Can machines think?" — Alan Turing, 1950


The main limitation of defining AI simply as "building intelligent machines" is that it doesn't actually explain what artificial intelligence is.


What distinguishes a machine as intelligent? Although AI is a multidisciplinary discipline with many methodologies, advances in machine learning and deep learning are causing a paradigm shift in nearly every sector of the IT industry.


Stuart Russell and Peter Norvig approach the subject in their pioneering textbook Artificial Intelligence: A Modern Approach by uniting their work around the theme of intelligent agents in machines.


According to this view, "AI is the study of agents that receive percepts from the environment and perform actions" (Russell and Norvig viii).
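Russell and Norvig's agent definition maps naturally onto code. Below is a minimal sketch of a percept-to-action agent; the thermostat example and all names in it are hypothetical illustrations, not from any real library:

```python
# A minimal sketch of the Russell/Norvig agent definition: an agent
# receives percepts from an environment and performs actions.
# The thermostat agent and run() helper are illustrative inventions.

def thermostat_agent(percept):
    """Map a percept (the current temperature) directly to an action."""
    if percept < 18:
        return "heat"
    if percept > 24:
        return "cool"
    return "idle"

def run(agent, percepts):
    """Feed a stream of percepts to the agent, collecting its actions."""
    return [agent(p) for p in percepts]

print(run(thermostat_agent, [15, 20, 30]))  # ['heat', 'idle', 'cool']
```

Any system fitting this percept-in, action-out shape counts as an agent under the definition, however simple its internal logic.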


Norvig and Russell then go on to look at four main approaches to AI that have shaped the discipline in the past:


  1. Human-like thinking
  2. Reasonable thinking
  3. Humane behavior
  4. Rational decision-making


The first two concepts concern thought processes and reasoning, while the others concern behavior. Norvig and Russell focus on rational agents that act to achieve the best outcome, noting that "all the skills needed for the Turing Test also allow an agent to act rationally" (Russell and Norvig 4).


Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as "algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together."


While these definitions may seem abstract to the average person, they help situate the field as a branch of computer science and provide a blueprint for infusing machines and programs with machine learning and other subsets of artificial intelligence.


The Four Types of Artificial Intelligence

Machines that react

A reactive machine is guided by the most fundamental AI principles and, as the name suggests, is solely capable of perceiving and reacting to the world around it.


Because a reactive machine lacks memory, it cannot depend on previous experiences to guide real-time decision-making.


Reactive machines are designed to do only a restricted number of specialized tasks since they perceive the world directly.


However, intentionally reducing a reactive machine's worldview is not a cost-cutting technique; instead, it means that this type of AI will be more trustworthy and reliable — it will respond consistently to the same stimuli.


A famous example of a reactive machine is Deep Blue, the IBM chess-playing supercomputer that defeated world champion Garry Kasparov in the 1990s.


Deep Blue could only recognize the pieces on a chessboard and know how they move according to the rules of the game, as well as recognize each piece's current position and choose the best logical move at the time.


The machine was not looking for future potential plays from its opponent or attempting to better place its own pieces. Every turn was treated as though it were its own world, distinct from any previous action.


Google's AlphaGo is another example of a game-playing reactive machine. AlphaGo is likewise incapable of evaluating future moves, instead relying on its own neural network to assess the current state of the game, an approach that gives it an edge over Deep Blue in a more complex game.


AlphaGo has defeated world-class Go players, most notably Lee Sedol in 2016.


Despite its narrow scope and inability to be easily modified, reactive machine AI can achieve a level of complexity and reliability when built to perform repeatable tasks.


Limited Memory

When gathering information and assessing prospective options, artificial intelligence with limited memory can store previous data and predictions, essentially peering into the past for indications of what might happen tomorrow.


Artificial intelligence with limited memory is more complicated and has more possibilities than reactive machines.


Limited memory AI is created when a team continuously trains a model to analyze and use new data, or when an AI environment is built so that models can be automatically trained and renewed.


Six steps must be followed when using limited memory AI in machine learning: training data must be created, the machine learning model must be built, the model must be able to make predictions, the model must be able to receive human or environmental feedback, that feedback must be stored as data, and these steps must be repeated as a cycle.
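The cycle described above can be sketched in outline. The skeleton below is a deliberately trivial illustration (the "model" just averages stored feedback), not a real machine learning pipeline:

```python
# Hypothetical skeleton of the limited-memory cycle: build a model,
# make predictions, receive feedback, store it as data, and repeat.

def train(data):
    """Build a trivial 'model': predict the mean of all stored feedback."""
    return (sum(data) / len(data)) if data else 0.0

def limited_memory_cycle(feedback_stream):
    data = []                       # stored feedback: the model's "memory"
    predictions = []
    model = train(data)             # construct the initial model
    for feedback in feedback_stream:
        predictions.append(model)   # the model makes a prediction
        data.append(feedback)       # feedback is received and saved as data
        model = train(data)         # retrain, then repeat the cycle
    return predictions

print(limited_memory_cycle([2.0, 4.0, 6.0]))  # [0.0, 2.0, 3.0]
```

Each pass through the loop folds new feedback into the stored data, so every prediction reflects everything seen so far, which is exactly what distinguishes limited memory AI from a reactive machine.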


There are three major machine learning methods that use artificial intelligence with limited memory:


Reinforcement learning is a type of machine learning that learns to produce better predictions through trial and error.
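A minimal sketch of this trial-and-error idea, assuming a toy two-armed bandit environment in which arm 1 always pays better; the agent's value estimates come purely from experience:

```python
import random

# Trial-and-error (reinforcement) learning in miniature: an epsilon-greedy
# agent discovers which of two slot-machine arms pays better. The
# environment is a hypothetical toy: arm 0 pays 0.0, arm 1 pays 1.0.

def pull(arm):
    return 1.0 if arm == 1 else 0.0

def learn(steps=200, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [pull(0), pull(1)]    # value estimate per arm (one trial pull each)
    n = [1, 1]                # how many times each arm has been pulled
    for _ in range(steps):
        # Occasionally explore a random arm; otherwise exploit the best one.
        arm = rng.randrange(2) if rng.random() < epsilon else q.index(max(q))
        reward = pull(arm)
        n[arm] += 1
        q[arm] += (reward - q[arm]) / n[arm]   # running-average update
    return q

print(learn())  # [0.0, 1.0] -- the agent has learned arm 1 is better
```

No one tells the agent which arm is better; its estimates converge from rewards alone, which is the essence of learning "through trial and error."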


Long short-term memory (LSTM) networks use past data to help predict the next item in a sequence. When making predictions, LSTMs prioritize recent information and discount older data, though they still use it to draw conclusions.


With each new decision, Evolutionary Generative Adversarial Networks (E-GAN) grow over time, evolving to explore slightly different paths depending on prior experiences.


Throughout its evolutionary mutation cycle, this model is always looking for a better path and uses simulations and statistics, or chance, to anticipate outcomes.


Theory of Mind

Theory of Mind is exactly that: a theory. We haven't yet developed the technology and scientific capabilities required to advance artificial intelligence to the next level.


The concept is founded on the psychological premise that other living beings have thoughts and feelings that influence one's own actions.


This would mean AI systems could understand how humans, animals and other machines feel and make decisions through self-reflection and determination, and then use that information to make decisions of their own.


In order to create a two-way relationship between people and artificial intelligence, computers would need to be able to grasp and interpret the idea of "mind", the fluctuations of emotions in decision making, and a slew of other psychological concepts in real-time.


Self-awareness

Once Theory of Mind can be established in artificial intelligence, sometime well into the future, the final step will be for AI to become self-aware.


This type of artificial intelligence is conscious on a human level and is aware of its own presence in the world as well as the presence and emotional condition of others.


It would be able to infer what others may need based not only on what they communicate, but on how they communicate it.


Self-awareness in artificial intelligence requires that human researchers first understand the premise of consciousness and then learn how to replicate it so it can be built into machines.


How is AI Used?

Jeremy Achin, CEO of DataRobot, opened his keynote at the Japan AI Experience in 2017 by giving the following characterization of how AI is used today:


"An artificial intelligence (AI) system is a computer that can execute tasks that would normally need human intelligence... Many of these artificial intelligence systems are based on machine learning, while others are based on deep learning, and yet others are based on mundane things like rules."


Artificial intelligence can be divided into two categories:


  • Narrow AI, often known as "Weak AI," is a type of artificial intelligence that functions in a constrained setting and simulates human intellect. While narrow AI is frequently focused on executing a specific task very well, these machines operate under many more constraints and limits than even the most basic human intelligence.


  • Artificial General Intelligence (AGI): AGI, often known as "Strong AI," is the kind of artificial intelligence we see in fiction, like the hosts of Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence that, much like a human being, can apply that intelligence to solve any problem.


Narrow Artificial Intelligence

Narrow AI is all around us, and it is by far the most successful implementation of AI so far. According to "Preparing for the Future of Artificial Intelligence," a 2016 report released by the Obama Administration, Narrow AI has had major advancements in the recent decade that have had "substantial societal advantages and have contributed to the economic vitality of the nation."


Here are a few instances of Narrow AI:

  • Google search
  • Image recognition software
  • Personal assistants such as Siri and Alexa
  • Self-driving cars
  • IBM's Watson


Machine Learning & Deep Learning

Machine learning and deep learning advancements are at the heart of Narrow AI. It might be difficult to tell the difference between artificial intelligence, machine learning, and deep learning. Frank Chen, a venture capitalist, gives a fair explanation of how to tell them apart, noting:


"Artificial intelligence is a collection of algorithms and intelligence designed to imitate human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques."


Simply put, machine learning feeds a computer data and uses statistical techniques to help it "learn" how to get progressively better at a task without being specifically programmed for it, eliminating the need for millions of lines of written code.
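As a concrete sketch of that idea, the from-scratch snippet below fits a line to a few examples by gradient descent; the program is never told the rule y = 2x, it recovers it statistically from the data:

```python
# "Learning" without task-specific rules: fit y = w*x by gradient descent
# on a handful of (x, y) examples. A from-scratch illustration, not a
# real machine learning library.

def fit(examples, lr=0.01, epochs=500):
    w = 0.0
    for _ in range(epochs):
        # Nudge w to reduce the mean squared error over the examples.
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad
    return w

data = [(1, 2.0), (2, 4.0), (3, 6.0)]   # hidden rule: y = 2x
w = fit(data)
print(round(w, 2))  # 2.0 -- learned from the data, not hand-programmed
```

The same statistical pattern, iteratively adjusting parameters to reduce error on data, underlies far larger models; only the scale and the model family change.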


Supervised learning (using labeled data sets) and unsupervised learning (using unlabeled data sets) are both types of machine learning.
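The distinction can be shown in miniature. In the hypothetical sketch below, the supervised model learns from labeled examples, while the unsupervised routine finds structure in unlabeled points on its own:

```python
# Supervised vs. unsupervised learning in miniature. Both "models" are
# deliberately tiny, invented illustrations of the two settings.

def supervised_predict(labeled, x):
    """Supervised: predict the label of the nearest labeled example."""
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

def unsupervised_split(points):
    """Unsupervised: group unlabeled 1-D points into two clusters."""
    mid = (min(points) + max(points)) / 2
    low = sorted(p for p in points if p < mid)
    high = sorted(p for p in points if p >= mid)
    return low, high

labeled = [(1.0, "small"), (1.5, "small"), (9.0, "large")]
print(supervised_predict(labeled, 2.0))          # 'small'
print(unsupervised_split([1.0, 1.2, 9.0, 9.5]))  # ([1.0, 1.2], [9.0, 9.5])
```

The supervised model needs someone to have labeled the data "small" or "large"; the unsupervised one discovers the two groups without any labels at all.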

Deep learning is a type of machine learning that runs inputs through a biologically inspired neural network architecture.


The neural networks contain a number of hidden layers through which the data is processed, allowing the machine to go "deep" in its learning, making connections and weighting input for the best results.
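A tiny illustration of why hidden layers matter: the hand-weighted network below computes XOR, a function no single layer of weights can represent. The weights are chosen by hand for clarity rather than learned:

```python
# A fixed-weight two-layer network computing XOR. The hidden layer builds
# intermediate features (OR, AND) that the output layer then weighs,
# which a single layer of weights cannot do on its own.

def step(v):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if v > 0 else 0

def forward(x1, x2):
    # Hidden layer: one unit detects "either input on", another "both on".
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output layer weighs the hidden units: "either, but not both" = XOR.
    return step(h_or - h_and - 0.5)

print([forward(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```

In deep learning these weights are learned from data rather than set by hand, and many such layers are stacked, but the layered flow of weighted inputs is the same.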


Artificial General Intelligence

For many AI researchers, the creation of a machine with human-level intellect that can be applied to any activity is the Holy Grail, yet the road to AGI has proven difficult.


The search for a "universal algorithm for learning and acting in any environment" (Russell and Norvig 27) isn't new, but the difficulty of building a machine with a full set of cognitive abilities hasn't eased over time.


AGI has long been the subject of dystopian science fiction, in which super-intelligent robots take over humans, but scientists believe that it isn't something we should be concerned about anytime soon.


A Brief History of Artificial Intelligence

Intelligent robots and artificial beings first appeared in ancient Greek myths. Aristotle's development of syllogism and its use of deductive reasoning was a watershed moment in humanity's quest to understand its own intelligence.


Despite its long and deep roots, artificial intelligence as we know it today has only been around for a century. The following is a quick rundown of some of the most significant AI events.


1940s

  • (1943) Warren McCulloch and Walter Pitts publish "A Logical Calculus of Ideas Immanent in Nervous Activity," which proposes the first mathematical model for building a neural network.


  • (1949) Donald Hebb offers the hypothesis that brain pathways are formed by experiences and that connections between neurons become stronger the more they are used in his book The Organization of Behavior: A Neuropsychological Theory. In AI, Hebbian learning remains an essential model.


1950s

  • (1950) Alan Turing publishes "Computing Machinery and Intelligence," proposing the Turing Test, a method for determining whether a machine is intelligent.
  • (1950) Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.
  • (1950) Claude Shannon publishes the paper "Programming a Computer for Playing Chess."
  • (1950) Isaac Asimov publishes the "Three Laws of Robotics."
  • (1952) Arthur Samuel develops a self-learning program to play checkers.
  • (1954) In the Georgetown-IBM machine translation experiment, 60 carefully selected Russian sentences are automatically translated into English.
  • (1956) The term "artificial intelligence" is coined at the "Dartmouth Summer Research Project on Artificial Intelligence." Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered the birth of artificial intelligence as we know it today.
  • (1956) Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.
  • (1958) John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense," which proposes the Advice Taker, a complete AI system able to learn from experience as effectively as humans do.
  • (1959) Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.
  • (1959) Herbert Gelernter develops the Geometry Theorem Prover program.
  • (1959) Arthur Samuel coins the term "machine learning" while at IBM.
  • (1959) John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.


1960s

  • (1963) John McCarthy establishes the Stanford AI Lab.
  • (1966) The United States government's Automatic Language Processing Advisory Committee (ALPAC) study highlights the lack of progress in machine translations research, a significant Cold War effort that promised automatic and instantaneous Russian translation. All government-funded MT projects have been canceled as a result of the ALPAC study.
  • (1969) The first successful expert systems are developed at Stanford: DENDRAL, a program for identifying organic molecules, and MYCIN, a program for diagnosing blood diseases.


1970s

  • (1972) The logic programming language PROLOG is created.
  • (1973) The British government releases the "Lighthill Report," which details the failures of AI research and leads to significant cuts in financing for AI programs.
  • (1974-1980) Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's "Lighthill Report," AI funding dries up and research stalls. This period is known as the "First AI Winter."


1980s

  • (1980) Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that lasts for much of the decade, effectively ending the first "AI Winter."
  • (1982) Japan's Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.
  • (1983) In response to Japan's FGCS, the US government establishes the Strategic Computing Initiative to fund advanced computing and artificial intelligence research through DARPA.
  • (1985) Companies invest over a billion dollars per year on expert systems, and an entire sector called the Lisp machine market emerges to support them. Symbolics and Lisp Machines Inc., for example, create specialized computers that run the AI programming language Lisp.
  • (1987-1993) As computing technology improves and cheaper alternatives emerge, the Lisp machine market collapses in 1987, ushering in the "Second AI Winter." During this period, expert systems prove too expensive to maintain and update, and they eventually fall out of favor.


1990s

  • (1991) U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.
  • (1992) Japan cancels the FGCS project, citing failure to meet the ambitious goals outlined a decade earlier.
  • (1993) DARPA ends the Strategic Computing Initiative after spending nearly $1 billion and falling far short of expectations.
  • (1997) IBM's Deep Blue defeats world chess champion Garry Kasparov.


2000s

  • (2005) The DARPA Grand Challenge is won by STANLEY, a self-driving automobile.
  • (2005) The United States military begins to invest in self-driving robots such as Boston Dynamics' "Big Dog" and iRobot's "PackBot."
  • (2008) Google develops advancements in speech recognition and makes the feature available via its iPhone app.

2010-2014

  • (2011) IBM's Watson defeats the competition on Jeopardy!
  • (2011) Apple's iOS operating system introduces Siri, an AI-powered virtual assistant.
  • (2012) Andrew Ng, the founder of the Google Brain Deep Learning project, feeds 10 million YouTube videos as a training set to a neural network using deep learning methods. The neural network learns to recognize a cat without being informed what a cat is, ushering in a new era of deep learning funding and neural networks.
  • (2014) Google's self-driving car passes a state driving test for the first time.
  • (2014) Amazon releases Alexa, its virtual assistant.


2015-2021

  • (2016) World champion Go player Lee Sedol is defeated by Google DeepMind's AlphaGo. The ancient Chinese game's complexity was seen as a significant barrier to overcome in AI.
  • (2016) Hanson Robotics creates the first "robot citizen," Sophia, a humanoid robot capable of facial recognition, verbal dialogue, and facial expression.
  • (2018) Google releases BERT, a natural language processing engine that helps machine learning applications reduce barriers to translation and understanding.
  • (2018) Waymo introduces the Waymo One service, which allows consumers in the Phoenix metro region to order a pick-up from one of the company's self-driving vehicles.
  • (2020) Baidu makes their LinearFold AI algorithm available to scientific and medical teams working on a vaccine during the SARS-CoV-2 pandemic's early phases. The system can anticipate the virus's RNA sequence in just 27 seconds, which is 120 times faster than prior methods.
Rayen Kessabi