Understanding Artificial Intelligence Basics

Artificial intelligence (AI) enables machines to simulate human-like learning and decision-making, with a significant focus on generative AI in 2024, which creates original content. The document outlines the history of AI from its inception in the 1950s through key milestones, including the development of machine learning and deep learning techniques. It also discusses the benefits, challenges, and ethical considerations associated with AI technologies.


Topics covered

  • Supervised Learning
  • Fairness in AI
  • AI and Robotics
  • AI Performance
  • AI and Big Data
  • Operational Risks
  • AI and Decision Making
  • Generative AI
  • AI Models
  • Large Language Models

What is AI?

Artificial intelligence (AI) is technology that enables computers and machines to simulate human
learning, comprehension, problem solving, decision making, creativity and autonomy.
Applications and devices equipped with AI can see and identify objects. They can understand and
respond to human language. They can learn from new information and experience. They can make
detailed recommendations to users and experts. They can act independently, replacing the need for
human intelligence or intervention (a classic example being a self-driving car).

But in 2024, most AI researchers and practitioners—and most AI-related headlines—are focused on
breakthroughs in generative AI (gen AI), a technology that can create original text, images, video
and other content. To fully understand generative AI, it’s important to first understand the
technologies on which generative AI tools are built: machine learning (ML) and deep learning.

History of AI 1950-1997
The idea of "a machine that thinks" dates back to ancient Greece. But since the advent of electronic
computing (and relative to some of the topics discussed in this article) important events and
milestones in the evolution of AI include the following:
1950
Alan Turing publishes Computing Machinery and Intelligence. In this paper, Turing—famous
for breaking the German ENIGMA code during WWII and often referred to as the "father of
computer science"—asks the following question: "Can machines think?"

From there, he offers a test, now famously known as the "Turing Test," in which a human
interrogator tries to distinguish between a computer's and a human's text responses.
While this test has undergone much scrutiny since it was published, it remains an
important part of the history of AI and an ongoing concept within philosophy, as it
draws on ideas from linguistics.

1956

John McCarthy coins the term "artificial intelligence" at the first-ever AI conference at
Dartmouth College. (McCarthy went on to invent the Lisp language.) Later that year,
Allen Newell, J.C. Shaw and Herbert Simon create the Logic Theorist, the first-ever
running AI computer program.
1958

Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural
network that "learned" through trial and error. In 1969, Marvin Minsky and
Seymour Papert publish a book titled Perceptrons, which becomes both the landmark
work on neural networks and, at least for a while, an argument against future neural
network research initiatives.

1980

Neural networks, which use a backpropagation algorithm to train themselves, become widely
used in AI applications.

1995

Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern Approach,
which becomes one of the leading textbooks in the study of AI. In it, they delve into
four potential goals or definitions of AI, which differentiate computer systems based
on rationality and on thinking versus acting.

1997

IBM's Deep Blue beats then-world chess champion Garry Kasparov in a chess match
(and rematch).

History of AI 2004-2024
2004

John McCarthy writes a paper, What Is Artificial Intelligence?, and proposes an
often-cited definition of AI.

By this time, the era of big data and cloud computing is underway, enabling
organizations to manage ever-larger data estates, which will one day be
used to train AI models.

2011

IBM Watson® beats champions Ken Jennings and Brad Rutter at Jeopardy! Also, around
this time, data science begins to emerge as a popular discipline.

2015

Baidu's Minwa supercomputer uses a special deep neural network called a convolutional
neural network to identify and categorize images with a higher rate of accuracy than
the average human.
2016

DeepMind's AlphaGo program, powered by a deep neural network, beats Lee Sedol, the
world champion Go player, in a five-game match. The victory is significant given the
huge number of possible moves as the game progresses (over 14.5 trillion after just
four moves). Google had purchased DeepMind in 2014 for a reported USD 400 million.

2022

A rise in large language models or LLMs, such as OpenAI's ChatGPT, creates an
enormous change in the performance of AI and its potential to drive enterprise value. With
these new generative AI practices, deep-learning models can be pretrained on large
amounts of data.

2024

The latest AI trends point to a continuing AI renaissance. Multimodal models that can
take multiple types of data as input are providing richer, more robust experiences.
These models bring together computer vision image recognition and NLP speech
recognition capabilities. Smaller models are also making strides in an age of diminishing
returns from massive models with large parameter counts.

Machine learning
A simple way to think about AI is as a series of nested or derivative concepts that have emerged
over more than 70 years:

Directly underneath AI, we have machine learning, which involves creating models by training an
algorithm to make predictions or decisions based on data. It encompasses a broad range of
techniques that enable computers to learn from and make inferences based on data without being
explicitly programmed for specific tasks.
There are many types of machine learning techniques or algorithms, including linear regression,
logistic regression, decision trees, random forest, support vector machines (SVMs), k-nearest
neighbor (KNN), clustering and more. Each of these approaches is suited to different kinds of
problems and data.
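As an illustration of one of the techniques listed above, here is a minimal k-nearest neighbor classifier sketched in plain Python; the data points and labels are invented for the example:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest labeled points.

    `train` is a list of ((x, y), label) pairs; distance is Euclidean.
    """
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy labeled data set: two well-separated clusters.
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]

print(knn_predict(train, (2, 2)))  # a point near the first cluster -> "A"
```

The same voting idea scales to real feature vectors; only the distance function and the data change.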

But one of the most popular types of machine learning algorithm is called a neural network (or
artificial neural network). Neural networks are modeled after the human brain's structure and
function. A neural network consists of interconnected layers of nodes (analogous to neurons) that
work together to process and analyze complex data. Neural networks are well suited to tasks that
involve identifying complex patterns and relationships in large amounts of data.
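A rough sketch of how such interconnected layers of nodes process data: a single fully connected layer where each node sums its weighted inputs and applies a sigmoid activation. The weights here are arbitrary placeholders, not trained values.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each node takes a weighted sum of all
    inputs, adds a bias, and squashes the result through a sigmoid."""
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# A 2-input network with a hidden layer of 3 nodes and 1 output node.
hidden = layer([0.5, -1.2],
               weights=[[0.1, 0.4], [-0.3, 0.8], [0.7, -0.2]],
               biases=[0.0, 0.1, -0.1])
output = layer(hidden, weights=[[0.6, -0.4, 0.9]], biases=[0.2])
print(output)  # a single value between 0 and 1
```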

The simplest form of machine learning is called supervised learning, which involves the use of
labeled data sets to train algorithms to classify data or predict outcomes accurately. In supervised
learning, humans pair each training example with an output label. The goal is for the model to learn
the mapping between inputs and outputs in the training data, so it can predict the labels of new,
unseen data.
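That input-to-output mapping can be made concrete with the simplest possible supervised model: a line fit by ordinary least squares to a tiny hypothetical labeled data set, then used to predict the label of an unseen input.

```python
def fit_line(xs, ys):
    """Learn the mapping y ~ a*x + b from labeled examples
    using ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Labeled training data: each input is paired with a known output (y = 2x + 1).
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(a * 10 + b)  # predicted label for the unseen input x = 10 -> 21.0
```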

Deep learning

Deep learning is a subset of machine learning that uses multilayered neural networks,
called deep neural networks, that more closely simulate the complex decision-making
power of the human brain.

Deep neural networks include an input layer, at least three but usually hundreds of
hidden layers, and an output layer, unlike neural networks used in classic machine
learning models, which usually have only one or two hidden layers.
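A minimal sketch of that layering, assuming random placeholder weights: an input layer of four features, five hidden layers, and an output layer, in contrast to the one or two hidden layers of a classic model.

```python
import random

def forward(x, layers):
    """Pass an input vector through a stack of fully connected
    layers with ReLU activations."""
    for weights, biases in layers:
        x = [max(0.0, sum(w * v for w, v in zip(ws, x)) + b)
             for ws, b in zip(weights, biases)]
    return x

def make_layer(n_in, n_out, rng):
    """Random placeholder weights for one layer (untrained)."""
    return ([[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

rng = random.Random(0)
# 4 inputs, five hidden layers of 8 nodes each, 1 output node.
sizes = [4, 8, 8, 8, 8, 8, 1]
layers = [make_layer(a, b, rng) for a, b in zip(sizes, sizes[1:])]
print(len(layers) - 1, "hidden layers")
print(forward([0.1, 0.2, 0.3, 0.4], layers))
```

Real deep networks learn their weights through backpropagation; the structural point here is only the depth of the stack.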

Generative AI
Generative AI, sometimes called "gen AI", refers to deep learning models that can
create complex original content—such as long-form text, high-quality images, realistic
video or audio and more—in response to a user’s prompt or request.

At a high level, generative models encode a simplified representation of their training
data, and then draw from that representation to create new work that's similar, but not
identical, to the original data.
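That encode-then-draw idea can be sketched with a deliberately tiny generative model: a word-level bigram table built from a made-up corpus. Real generative models learn far richer representations, but the shape of the process is the same.

```python
import random
from collections import defaultdict

# "Training": encode a simplified representation of the data, namely
# which word tends to follow which, from a tiny hypothetical corpus.
corpus = "the cat sat on the mat and the cat ran".split()
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# "Generation": draw from that representation to produce new text that
# is similar, but not identical, to the original.
rng = random.Random(42)
word, out = "the", ["the"]
for _ in range(5):
    word = rng.choice(model.get(word, corpus))
    out.append(word)
print(" ".join(out))
```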

How Generative AI Works


In general, generative AI operates in three phases:
1. Training
Generative AI begins with a "foundation model": a deep learning model that serves as
the basis for many different types of generative AI applications.

The most common foundation models today are large language models (LLMs), created
for text generation applications. But there are also foundation models for image, video,
sound or music generation, and multimodal foundation models that support several
kinds of content.

To create a foundation model, practitioners train a deep learning algorithm on huge
volumes of relevant raw, unstructured, unlabeled data, such as terabytes or petabytes
of text, images or video from the internet.
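One way such raw, unlabeled data becomes trainable is self-supervision: the next token in the text serves as its own label, so no human annotation is required. A minimal sketch:

```python
def next_token_examples(text, context_len=3):
    """Turn raw, unlabeled text into (context, next-token) training pairs.
    The "label" is simply the word that comes next in the data itself."""
    tokens = text.split()
    return [
        (tokens[i:i + context_len], tokens[i + context_len])
        for i in range(len(tokens) - context_len)
    ]

examples = next_token_examples("to be or not to be that is the question")
print(examples[0])  # (['to', 'be', 'or'], 'not')
```

At foundation-model scale the same transformation is applied to billions of documents, which is what makes unlabeled internet data usable for training.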

The training yields a neural network of billions of parameters—encoded representations
of the entities, patterns and relationships in the data—that can generate content
autonomously in response to prompts. This is the foundation model.
This training process is compute-intensive, time-consuming and expensive. It requires
thousands of clustered graphics processing units (GPUs) and weeks of processing, all of
which typically costs millions of dollars. Open source foundation model projects, such as
Meta's Llama-2, enable gen AI developers to avoid this step and its costs.

2. Tuning

Next, the model must be tuned to a specific content generation task. This can be done
in various ways, including:
Fine-tuning, which involves feeding the model application-specific labeled data—
questions or prompts the application is likely to receive, and corresponding correct
answers in the wanted format.

Reinforcement learning with human feedback (RLHF), in which human users evaluate
the accuracy or relevance of model outputs so that the model can improve itself. This
can be as simple as having people type or talk back corrections to a chatbot or virtual
assistant.
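A sketch of what the two kinds of tuning data might look like; the prompts, answers and ratings here are invented for illustration:

```python
# Fine-tuning: application-specific labeled data, i.e. likely prompts
# paired with correct answers in the wanted format (all hypothetical).
fine_tuning_data = [
    {"prompt": "Reset my password",
     "answer": "Go to Settings > Security > Reset password."},
    {"prompt": "Cancel my order",
     "answer": "Open Orders, select the order, choose Cancel."},
]

# RLHF: human users rate model outputs; those ratings become a signal
# the model is further trained against.
feedback = [
    {"prompt": "Cancel my order",
     "output": "Sorry, I can't help.", "rating": -1},
    {"prompt": "Cancel my order",
     "output": "Open Orders, select the order, choose Cancel.", "rating": 1},
]

# A tuning step would steer the model toward the better-rated output:
best = max(feedback, key=lambda f: f["rating"])
print(best["output"])
```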

3. Generation, Evaluation and More Tuning

Developers and users regularly assess the outputs of their generative AI apps, and
further tune the model—even as often as once a week—for greater accuracy or
relevance. In contrast, the foundation model itself is updated much less frequently,
perhaps every year or 18 months.

Another option for improving a gen AI app's performance is retrieval augmented
generation (RAG), a technique that extends the foundation model to draw on relevant
sources outside of its training data, supplementing prompts with retrieved information
for greater accuracy or relevance.
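A toy sketch of the RAG pattern, using naive word overlap as a stand-in for a real vector search; the documents and query are hypothetical:

```python
def retrieve(query, documents, top_k=1):
    """Rank documents by naive word overlap with the query (a stand-in
    for real embedding-based retrieval) and return the best matches."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

# Hypothetical external knowledge the model was never trained on.
docs = [
    "Support hours are 9am to 5pm on weekdays.",
    "Refunds are processed within 5 business days.",
]

query = "When are support hours?"
context = retrieve(query, docs)[0]
# The retrieved text is prepended to the prompt, grounding the answer.
prompt = f"Answer using this context: {context}\nQuestion: {query}"
print(prompt)
```

The model's parameters are untouched; only the prompt it sees is enriched, which is what makes RAG cheaper than retraining.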

Benefits of AI
AI offers numerous benefits across various industries and applications. Some
of the most commonly cited benefits include:
  • Automation of Repetitive Tasks
  • Enhanced Decision-Making
  • Fewer Human Errors
  • 24x7 Availability
  • Reduced Physical Risk

AI can automate routine, repetitive and often tedious tasks—including digital tasks such
as data collection, entry and preprocessing, and physical tasks such as warehouse
stock-picking and manufacturing processes. This automation frees people to work on
higher-value, more creative work.

AI Challenges and Risks


Organizations are scrambling to take advantage of the latest AI technologies and capitalize on AI's
many benefits. This rapid adoption is necessary, but adopting and maintaining AI workflows comes
with challenges and risks.
  • Data Risks
  • Model Risks
  • Operational Risks
  • Ethics and Legal Risks

AI systems rely on data sets that might be vulnerable to data poisoning, data
tampering, data bias or cyberattacks that can lead to data breaches.

Organizations can mitigate these risks by protecting data integrity and implementing
security and availability measures throughout the entire AI lifecycle, from development
to training, deployment and post-deployment.

AI ethics and Governance


AI ethics is a multidisciplinary field that studies how to optimize AI's beneficial impact while
reducing risks and adverse outcomes. Principles of AI ethics are applied through a system of AI
governance consisting of guardrails that help ensure that AI tools and systems remain safe and
ethical. AI governance encompasses oversight mechanisms that address risks. An ethical
approach to AI governance requires the involvement of a wide range of stakeholders, including
developers, users, policymakers and ethicists, helping to ensure that AI-related systems are
developed and used in ways that align with society's values. Here are common values associated
with AI ethics and responsible AI:
Explainability and Interpretability

As AI becomes more advanced, humans are challenged to comprehend and retrace how
the algorithm came to a result. Explainable AI is a set of processes and methods that
enables human users to interpret, comprehend and trust the results and output created
by algorithms.

Fairness and Inclusion

Although machine learning, by its very nature, is a form of statistical discrimination, the
discrimination becomes objectionable when it places privileged groups at systematic
advantage and certain unprivileged groups at systematic disadvantage, potentially
causing varied harms. To encourage fairness, practitioners can try to minimize
algorithmic bias across data collection and model design, and to build more diverse and
inclusive teams.
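One simple way practitioners quantify such systematic advantage is the demographic parity difference: the gap in favorable-outcome rates between groups. The decisions below are invented for illustration.

```python
def selection_rate(decisions, group):
    """Fraction of a group that received the favorable outcome (1 = approved)."""
    outcomes = [d["approved"] for d in decisions if d["group"] == group]
    return sum(outcomes) / len(outcomes)

# Hypothetical model decisions, tagged with a protected attribute.
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

# A large gap flags potential systematic advantage for one group.
gap = selection_rate(decisions, "A") - selection_rate(decisions, "B")
print(gap)  # 0.75 - 0.25 = 0.5
```

This is only one of several fairness metrics; which one applies depends on the application and on what counts as a harm in context.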

Robustness and Security


Robust AI effectively handles exceptional conditions, such as abnormalities in input or
malicious attacks, without causing unintentional harm. It is also built to withstand
intentional and unintentional interference by protecting against exposed vulnerabilities.

Accountability and Transparency

Organizations should implement clear responsibilities and governance structures for the
development, deployment and outcomes of AI systems. In addition, users should be
able to see how an AI service works, evaluate its functionality, and comprehend its
strengths and limitations. Increased transparency provides information for AI consumers
to better understand how the AI model or service was created.

Privacy and Compliance

Many regulatory frameworks, including GDPR, mandate that organizations abide by
certain privacy principles when processing personal information. It is crucial to be able
to protect AI models that might contain personal information, to control what data goes
into the model in the first place, and to build adaptable systems that can adjust to
changes in regulation and attitudes around AI ethics.
