Preemptive Maritime Cybersecurity Measures

The document discusses the challenges to internal security posed by communication networks and the importance of cybersecurity in addressing these threats. It highlights vulnerabilities in India's critical information infrastructure, the government's initiatives for cybersecurity, and the need for international cooperation. Additionally, it outlines Israel's cybersecurity strategies and suggests lessons for India to enhance its cybersecurity framework.

Uploaded by Paridhi Sinha
UPSC Current Only

Digital Sphere

Challenges to Internal Security through Communication Networks:

1. Communication networks are part of our critical information infrastructure; they are
crucial for the connectivity of other critical infrastructure such as civil aviation,
shipping, railways, nuclear, power, oil and gas, finance, banking, information
technology, law enforcement, space and defence, among others.
2. Issues with communication networks:
a. Much of the hardware and software that make up our communication
ecosystem is sourced externally. For example, Chinese manufacturers such
as Huawei and ZTE have supplied around 20% of India’s telecommunication
equipment.
b. The task of securing the network is also complicated by the fact that much of
the infrastructure is in the hands of private companies, which see measures
such as security auditing and regulation as an additional cost.
c. Vulnerability to Trojans, malware, viruses, the darknet, etc.
d. After Operation Shakti in 1998, the anti-nuclear hacker group “milw0rm”
hacked into the BARC network to protest against India’s nuclear tests.
e. In a joint survey by PricewaterhouseCoopers (PwC) and the Confederation
of Indian Industry (CII), 60 per cent of a sample of 72 companies in India
reported having witnessed a security breach in their IT infrastructure.
3. The National Telecom Policy of 2012 set a target of meeting 60-80% of the Indian
telecom sector’s demand through domestic production of telecom equipment by
2020.
4. The Indian Telegraph (Amendment) Rules, 2017, provide that every item of telecom
equipment must undergo mandatory testing and certification prior to sale, import
or use in India. As mandated by the Department of Telecommunications, the
Procedure for Mandatory Testing and Certification of Telecommunication
Equipment (MTCTE) was formulated in October 2018. The testing and
certification framework requires telecom equipment to meet essential
requirements under (a) EMI/EMC, (b) safety, (c) technical requirements, (d) other
requirements and (e) security requirements. The testing is carried out with the

objective of ensuring that the equipment meets relevant national and international
standards: that it is safe to use, that its radio-frequency emissions are within
prescribed limits, that it does not degrade the performance of the network to
which it is connected, and that it complies with national security requirements.
5. The National Centre for Communication Security was established and entrusted
with the responsibility of implementing the ComSec scheme. The scheme has the
objective of developing country-specific standards and of testing and certifying
components, ensuring that telecom network elements meet security assurance
requirements. It has developed Indian Telecom Security Assurance Requirements
(ITSAR) norms for every type of telecom equipment, and designates third-party
Telecom Security Test Laboratories (TSTLs) to carry out the security testing.

Role of Cybersecurity in Internal Security Challenges

1. Cybersecurity can be defined as the protection of systems, networks and data in
cyberspace, and the taking of preventive measures to protect information from
being stolen, compromised or attacked.
2. Cyber threats can be categorised as:
a. Cybercrime against an individual or corporation: attackers exploit
weaknesses in software and hardware design through the use of malware.
i. Cybercrimes that target computers directly include Denial of Service
(DoS) attacks and malware deployed to disrupt the computer.
ii. Cybercrime facilitated by computer networks or devices includes
economic fraud to destabilise a country’s economy, attacks on the
banking system, financial and intellectual property theft, and misuse
of social media to fan intolerance, instigate communal tension and
incite riots, including posting inflammatory material that tends to
incite hate crimes.
b. Cyber warfare against a state: cyber warfare is defined as a state’s use of
internet-based invisible force as an instrument of sabotage and espionage
against another nation, including espionage on critical

infrastructure. Cyber terror, by contrast, is when an organisation working
independently of a nation state conducts terrorist activities through the
medium of cyberspace.
3. Features distinguishing cyber warfare from traditional war:
a. Network facilities such as undersea cables, microwave and optical-fibre
networks, telecom exchange routers and data servers are compromised by
the enemy state.
b. Cyberspace has no boundaries, unlike traditional wars where boundaries
are defined, so it needs to be protected through international cooperation
and diplomatic channels.
c. It is easy for attackers to hide themselves, evade tracking and even mislead
the agencies.
4. Dark web, Bitcoin and their linkage to terror financing: on the dark web, IP
addresses are hidden (ordinarily a website name resolves to the IP address of that
site), so it is hard to identify and track these criminals.
5. Snowden’s leaked documents uncovered the existence of numerous global
surveillance programmes, many of them run by the USA’s National Security
Agency (NSA). The documents indicated that India’s domestic politics and its
strategic and commercial interests were being monitored, exposing India’s
vulnerability to cyber snooping in all sectors. India was 5th among the targeted
nations.
6. Revelations from the Cambridge Analytica scandal show how democracies across
the world are vulnerable to cyber manipulation.
7. Critical Infrastructure and Critical Information Infrastructure: these can be defined
as those facilities and systems whose incapacity or destruction would have a huge
impact on the national security, governance, economy and social well-being of the
nation. They broadly include energy, transportation, banking and finance,
telecommunications, defence, space, law enforcement, sensitive government
organisations, public health, water supply, critical manufacturing and
e-governance. Critical infrastructure is interrelated, interconnected and
interdependent. India has issued guidelines for critical infrastructure under the IT
Act, 2000. Critical infrastructure faces two types of threats:

a. Internal threat: one or more individuals whose access to a company,
organisation or enterprise would allow them to exploit its vulnerabilities.
b. External threat: arises from outside the organisation, from individuals,
hackers, organisations, terrorists, foreign governments or non-state actors.
8. Steps taken by the Government of India:
a. The National Cyber Security Policy was framed in 2013.
b. A National Critical Information Infrastructure Protection Centre (NCIIPC)
was established to create a foolproof firewall around networks.
c. A multi-agency National Cyber Coordination Centre has been set up to
assess cyber threats and share information with stakeholders.
d. A centre of excellence in cryptology was established at the Indian
Statistical Institute.
e. The government has come up with a “roadmap on cybersecurity”, which
lays stress on collaboration between the government and the private sector.
f. A Cyber Crisis Management Plan has already been put in place, with state
governments as an integral part.
g. CERT-In, the nodal agency to deal with such crises, has been established.
h. Fin-CERT has also been set up to protect financial infrastructure.
i. The Cyber Surakshit Bharat initiative by MeitY and the National
e-Governance Division aims to strengthen India’s cybersecurity ecosystem.
It is the first PPP of its kind and operates on three principles: awareness,
education and enablement.

National Cybersecurity Policy 2013:

1. MeitY released this policy to build a secure and resilient cyberspace for citizens,
businesses and government.
2. The objectives of the policy are to protect information and infrastructure in
cyberspace, build capacity to prevent cyber threats, reduce vulnerabilities and
minimise damage from cyberattacks.
3. Establishment of the NCIIPC.
4. To create a workforce of 5 lakh (500,000) professionals skilled in cybersecurity.


5. Provide fiscal benefits to businesses for the adoption of standard security practices.
6. Enhance PPP and global cooperation by promoting information-exchange
mechanisms.
7. A Chief Information Security Officer (CISO) in each organisation will be responsible
for cybersecurity efforts and initiatives.
8. Adopt global best practices in information security, fostering education and
training in the formal and informal sectors.
9. Criticism: security risks associated with cloud computing have not been addressed,
and there is also a need to incorporate cybercrime tracking and cyber-forensic
capacity building.

National Critical Information Infrastructure Protection Centre (NCIIPC):

1. It functions under the NTRO, under Section 70A of the IT Act, 2000.
2. It is responsible for all measures, including research and development, for the
protection of critical information infrastructure.
3. It has to take all necessary measures to facilitate the protection of critical
information infrastructure.
4. Its functions include identification of critical sub-sectors, issuing daily and
monthly cyber alerts, malware analysis, tracking zombies, cyber-forensic activities,
an annual CISO conference, and 24x7 operations and a helpdesk.

Legal framework dealing with cybersecurity in India

1. IT Act 2000 (amended in 2008):
a. The act provides legal recognition to e-commerce and commercial
electronic transactions.
b. The act provides legal recognition to digital signatures.
c. A Cyber Appellate Tribunal has been set up to hear appeals against
adjudicating authorities.
d. The act applies to any cyber offence committed outside India by a person,
irrespective of his nationality.
2. Offences under the IT Act 2000:
a. Tampering with computer source documents
b. Hacking a computer system

c. Sending offensive messages through a communication service (Section
66A); this provision was struck down by the Supreme Court in Shreya
Singhal v. Union of India (2015).
d. Theft, cheating, violation of privacy, cyber terrorism and publishing
obscene material are also covered.
3. International cooperation in cybersecurity:
a. Budapest Convention on Cybercrime, 2001: a Council of Europe convention
and the only legally binding international instrument on this issue. It is
supplemented by a Protocol on Xenophobia and Racism.
b. Global Centre for Cybersecurity: set up by the World Economic Forum to
function as an autonomous organisation under the WEF. It serves as a
laboratory and think tank for future cybersecurity concerns.
c. Global Conference on CyberSpace: a biennial conference.

Some Events/Info

• Indians lost $1.3 billion to cyber fraud in 2024.
• Pakistan-linked hacking group, Transparent Tribe (APT36), is actively targeting
Indian entities with advanced malware called ElizaRAT.
• The current trend is toward “non-kinetic warfare”, which includes cyberspace,
psychological operations (PsyOps), and economic and information warfare.
• The Bharat National Cyber Security Exercise (Bharat NCX 2024), a major initiative
to enhance India’s cybersecurity resilience, was launched by the National Security
Council Secretariat (NSCS) in partnership with Rashtriya Raksha University
(RRU).
• The International Counter Ransomware Initiative (CRI) is the world’s largest
international cyber partnership. The CRI builds collective resilience to
ransomware, disrupts the ransomware ecosystem and designs policy approaches
to combat ransomware. It was established by the United States in 2021 because
no single entity, no matter its capabilities or experience, can combat ransomware
effectively alone. International partnerships are a force multiplier against
ransomware actors and their ecosystem – they strengthen our capability to detect,


disrupt, and deter malicious cyber actors that engage in or facilitate ransomware
attacks. Ransomware is a pocketbook issue that affects all aspects of our lives,
and through CRI, our schools, hospitals, and businesses are better protected from
ransomware threats. India is a member of it.
• Cyberattacks targeting Iran and Israel amidst escalation:
• In a separate incident, just before Israel launched a retaliatory strike on
Iran, Iranian defense radar systems were reportedly breached, causing
their screens to freeze, according to [Link]. The breach is said to have
limited Iran’s ability to intercept targets, enabling the Israeli air force to
penetrate Iranian airspace. Israel also reportedly carried out a preliminary
strike on radar installations in Syria to disable Iran’s defenses ahead of an
attack.
• Unknown hackers have also reportedly attempted to infect Israeli
organizations with wiper malware distributed through phishing emails
impersonating a cybersecurity firm.

• Reports indicate that mule bank accounts, which are used to facilitate illegal
transactions and money laundering, are a major factor in online financial scams
that could potentially drain 0.7% of the country’s GDP.
• Star Health Insurance suffered a significant data breach, which the company
confirmed compromised the data of 31 million people.
• India has emerged as one of the top ransomware targets in the Asia-Pacific region,
ranking second in successful attacks, according to a Threat Intelligence Report.
The report also highlighted that as AI-driven attacks increase, vulnerabilities in
India’s digital defenses are widening, leading to calls for stronger cybersecurity
measures.
• Draft of the UN Cybercrime Convention released: it will be a legally binding
convention. India is not a party to the separate Budapest Convention, citing that it
did not participate in its drafting.
o The Budapest Convention is the first international treaty on crimes
committed via the Internet and other computer networks, dealing particularly


with infringements of copyright, computer-related fraud, child
pornography, hate crimes, and violations of network security.
o Note: The Additional Protocol to the Convention on Cybercrime is an
initiative of the Council of Europe. The protocol supplements the
Convention on Cybercrime (also known as the Budapest Convention),
which is the first international treaty addressing crimes committed via the
internet and other computer networks.
• Europol has announced a significant global initiative, termed Operation Morpheus,
targeting the criminal misuse of Cobalt Strike. Cobalt Strike is a commercial
penetration testing tool used by cybersecurity professionals to simulate
real-world cyberattacks: it provides a platform for testing an organization’s
security defenses by mimicking advanced persistent threats (APTs).
• In a notable incident, private data, including blood test results and login credentials
of Israeli athletes, was released on Telegram in a doxing cyberattack. Doxing (or
doxxing) refers to the malicious act of publicly revealing someone's private,
personal, or sensitive information without their consent, typically with the intent to
harass, intimidate, or harm them. The term is derived from “dropping documents”
(docs), indicating the exposure of a person's confidential details.
• Malaysia’s government is preparing to implement a “kill switch” to bolster online
security and address cybercrime.
• The U.S. plans to ban the sale of Kaspersky antivirus software, citing security risks
from its Russian origins and concerns over potential misuse in critical
infrastructure.
• India's Chief of Defence Staff, General Anil Chauhan, unveiled the country's first
joint doctrine for cyberspace operations, recognizing cyberspace as a critical
domain in modern warfare. The doctrine aims to guide military commanders in
cyber operations and emphasizes collaboration to address emerging cyber
threats.

Israel's Cybersecurity and Cyber Dome Initiative


1. Israel is building a Cyber Dome, a proactive system using big data and AI to
defend against cyberattacks. It leverages real-time threat detection and
collaboration between multiple agencies.
2. The Cyber Dome integrates AI for threat detection and uses data sharing for
coordinated defense across Israel’s cybersecurity infrastructure.
3. The system is designed to counter large-scale attacks such as Distributed Denial
of Service (DDoS) attacks.

Israel’s Cyber Model

• Israel’s geopolitical vulnerabilities have driven its robust cybersecurity framework,


which integrates military, intelligence, and academia.

• Programs like Magshimim (high school talent scouting), Atudai (scholarships for
cyber studies), and Odyssey (academic-military collaboration) exemplify its
whole-of-nation approach.

• Gender diversity initiatives like Mofet (Cyber Girls) promote inclusivity, enhancing
workforce diversity.

Lessons for India

1. Policy Overhaul

• Update the NCSP to reflect the evolving threat landscape.

• Define clear mandates for cybersecurity institutions like CERT-In and the Defence
Cyber Agency.

2. Talent Development

• Early Identification: Introduce school-level programs akin to Israel’s Magshimim.

• Higher Education: Expand scholarships (like Atudai) to incentivize cybersecurity


studies.

• Gender Inclusivity: Launch initiatives for women, similar to Mofet, to enhance


representation in cybersecurity.

3. Institutional Strengthening


• Establish a Cyber Command under the armed forces to consolidate offensive and
defensive cyber operations.

• Enhance coordination among agencies like CERT-In, NIC, and private


stakeholders through a centralized framework.

4. International Collaboration

• Collaborate with global leaders like Israel and the USA to gain strategic and
technical expertise.

• Actively participate in drafting international cybersecurity norms.

5. Public-Private Partnership

• Encourage partnerships to protect critical infrastructure, such as power grids,


banking, and transportation networks.

• Promote knowledge-sharing and innovation through joint ventures.

6. Offensive Capabilities

• Develop advanced offensive tools to deter potential threats and neutralize


adversarial capabilities proactively.

7. Collaborative Threats:

• China-Pakistan cooperation in disinformation campaigns and espionage.

• Use of AI to collect sensitive voice samples from border regions like Jammu and
Kashmir.

• Explore decentralized technologies like IPFS for secure data storage and backup.

• Advocate for international norms on state behavior in cyberspace through


platforms like the UN Open Ended Working Group (OEWG).

• Creation of the Automated Control System (ACS) to integrate air, land, sea,
cyber, and space operations.


Russia Ukraine War and Cyberspace

Role of External Assistance

• NATO and Allied Support:

o NATO facilitated Ukraine’s membership in the Cooperative Cyber Defence


Centre of Excellence (CCDCOE) and provided extensive training and
resources.

o The European Union invested €10 million to bolster Ukrainian cyber


defenses, including the establishment of a cyber lab.

• Private Sector Contributions:

o Companies like Microsoft, Amazon, and coalitions like the Cyber Defense
Assistance Collaboration (CDAC) played critical roles.

o Microsoft provided $400 million in cybersecurity aid and preemptive


defense measures.

o Starlink by Elon Musk enabled resilient communications but raised


concerns about dependence on private entities.

Russian Approach to Cyber Warfare

• Decentralized Command:

o Russian intelligence agencies like the GRU and FSB relied on hacking
groups like Killnet, which disrupted critical infrastructure and spread
disinformation.

o However, the decentralized approach lacked coherence in achieving


strategic goals during an active war.

• Impact of Sanctions:

o Sanctions and internet controls disrupted Russian tech companies like


Kaspersky and Yandex, reducing their global influence.


India's Cyber Security Challenges by the IDSA Task Force

Important Points and Challenges

1. Emergence of Cyberspace as a Warfare Domain:

o Cyberspace is being recognized as the fifth domain of warfare, alongside


land, sea, air, and space.

o Increasing threats from cyberattacks by state and non-state actors,


targeting critical infrastructure and defense systems.

2. Threat Landscape:

o Cyber Espionage: Theft of intellectual property and government data.

o Cyberterrorism: Use of cyberspace for recruitment, propaganda, and


attacks.

o Cybercrime: Growing online frauds and identity thefts.

o Cyber Warfare: Attacks on critical infrastructure and national security


systems.

3. India’s Vulnerability:

o Heavy reliance on IT for governance, defense, and critical infrastructure.

o Challenges in coordination between various ministries and agencies.

o Limited public-private partnerships (PPP) for infrastructure protection.

4. Global Cooperation and Legal Frameworks:

o Need for international conventions on cyberspace norms.

o Concerns about sovereignty and control in global cybersecurity policies.

Technical Points

1. Critical Infrastructure Protection (CII):

o Sectors like power, communication, finance, and transport are priority


areas.


o Need for secure backup systems and incident response protocols.

2. Cyber Command:

o Recommendation to create a dedicated Cyber Command under the Armed


Forces for offensive and defensive operations.

3. Policy Gaps:

o Lack of a comprehensive National Cybersecurity Policy.

o Need for clearer definitions and doctrines, particularly regarding offensive


capabilities.

4. Capacity Building:

o Address the shortage of skilled cybersecurity professionals through focused


education and training programs.

Recommendations

1. Institutional Framework:

o Strengthen inter-ministerial coordination under the National Security


Adviser (NSA).

o Establish a Director General for Cyber and Information Warfare to oversee


policies and operations.

2. Public-Private Partnership (PPP):

o Enhance collaboration between the government and private sector to


protect critical infrastructure and share threat intelligence.

3. Legal and Regulatory Measures:

o Harmonize national laws with international frameworks.

o Implement stringent regulations to address cybercrimes and data breaches.

4. Proactive Measures:

o Develop offensive cyber capabilities as a deterrence mechanism.

o Establish a Proactive Pre-emptive Operations Group for cyber defense.

5. International Engagement:


o Actively participate in drafting international cybersecurity norms.

o Ensure policies align with India's sovereign interests while fostering global
cooperation.

6. Awareness and Education:

o Promote cybersecurity awareness among citizens and businesses.

o Invest in research and development for advanced cybersecurity solutions.

Maritime Cyber Warfare and India's Cyber Readiness

In the age of disruptive technologies like AI and cloud computing, the maritime domain
has emerged as a critical arena for cyber warfare. With over 95% of India’s trade by
volume transported by sea, safeguarding maritime infrastructure is vital for national
security and economic resilience. This analysis explores the threats posed by maritime
cyber warfare, global security initiatives, and India’s readiness to address these
challenges.

1. Threats in the Maritime Cyber Domain


• Cyber Warfare Tactics:

o AIS and GPS Spoofing: False navigation signals to provoke reactions


(e.g., Crimean Coast incident, 2021).

o Data Manipulation: NotPetya malware caused $300 million in damages to


Maersk, disrupting shipping operations.

o Critical Infrastructure Attacks: Targeting of operational technology (OT)


and communication technology (CT) systems in ships and ports.

• Emerging Vulnerabilities:

o AI-driven automation enhances operational efficiency but creates risks like


adversarial AI attacks.

o Maritime infrastructure, including navigation systems and the Global
Maritime Distress and Safety System (GMDSS), is vulnerable to
exploitation by cybercriminals.
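One basic defence against the AIS/GPS spoofing described above is a plausibility check on successive position reports. The sketch below is illustrative only (the 40-knot threshold and the coordinates in the usage note are assumptions, not part of any AIS standard): it flags a reported fix that would imply a physically impossible speed since the previous fix.

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in nautical miles."""
    r_nm = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def is_suspicious_fix(prev_fix, new_fix, max_speed_knots=40.0):
    """Flag an AIS position report that implies an impossible speed.

    Each fix is (lat, lon, unix_seconds). Merchant ships rarely exceed
    ~25 knots, so 40 knots is a deliberately generous threshold.
    """
    lat1, lon1, t1 = prev_fix
    lat2, lon2, t2 = new_fix
    hours = (t2 - t1) / 3600.0
    if hours <= 0:
        return True  # an out-of-order timestamp is itself suspicious
    implied_speed = haversine_nm(lat1, lon1, lat2, lon2) / hours
    return implied_speed > max_speed_knots
```

For example, a fix that "teleports" a ship two degrees of latitude (roughly 120 nm) in ten minutes is flagged, while a 6 nm movement over an hour is not. Real spoofing detection would combine such kinematic checks with cross-referencing against radar and satellite AIS.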

2. Maritime Cyber Security Initiatives

• Global Frameworks:

o IMO Guidelines (2022): Cyber risk management best practices for


maritime security.

o US National Maritime Cybersecurity Plan (2020): Focus on workforce


development, international cooperation, and protection of the Maritime
Transportation System (MTS).

o UK Strategy (2022): Augmentation of maritime cyber capabilities and multi-


sector collaboration.

• Indian Initiatives:

o Indian Register of Shipping (IRCLASS): Published guidelines on


maritime cyber risk management in 2018.

o CERT, DCA, and NCIIPC: Agencies working to secure critical assets in the
maritime domain.

3. India’s Maritime Cyber Vulnerabilities


• Reliance on interconnected IT systems for cargo tracking and analysis.

• Limited focus on OT and CT security on merchant and naval platforms.

• Absence of a unified framework for maritime cyber security.

4. Strategic Importance of Maritime Cybersecurity

• Economic Security:

o Delayed shipments, tampered cargo, and disrupted supply chains from


cyberattacks can destabilize global trade.

• Energy Security:

o Attacks on maritime oil reserves or supply chains can have cascading


effects on energy availability and pricing.

• Geopolitical Stability:

o India’s strategic location in the Indian Ocean Region (IOR) makes its
maritime domain a target for adversarial forces.

Global Lessons for India

1. Enhancing Cyber Resilience

• Cyber Quick Response Teams (QRTs):

o Establish QRTs at major ports for immediate threat response.

• AI in Cybersecurity:

o Deploy AI-based forensic tools for detecting and mitigating cyber threats.

• Decentralized Systems:

o Adopt decentralized technologies to reduce single points of failure in


maritime IT systems.

2. Strengthening Governance

• Formulate a National Maritime Cybersecurity Framework integrating the armed


forces, intelligence agencies, civil authorities, and private sector.


• Ensure compliance with global standards, such as IMO guidelines and Baltic
International Maritime Council (BIMCO) frameworks.

3. Building Strategic Capabilities

• Develop indigenous AI-based solutions for maritime logistics, navigation, and


cybersecurity.

• Promote collaboration with international allies to share best practices and


resources.

4. Public-Private Partnerships

• Collaborate with the private sector to enhance cyber defenses for ports and
shipping corporations.

• Encourage startups and academia to innovate in maritime cybersecurity.


DDoS (Distributed Denial-of-Service)

1. What is a DDoS Attack?

• Definition: Overwhelms a server, network, or website with excessive traffic,


making it inaccessible to legitimate users.

• How It Works: Uses compromised devices (e.g., computers, IoT devices) as attack sources.

• Analogy: Like a traffic jam caused by fake vehicles, blocking real ones.

2. How Does a DDoS Attack Work?

1. Botnet Creation:

o Hackers infect devices with malware, turning them into bots (zombies).

o Bots form a botnet, remotely controlled by attackers.

2. Launching the Attack:

o Bots flood the target's IP address with requests.

o The server becomes overwhelmed, slowing down or crashing.

3. Challenge:

o Bots resemble legitimate devices, making it hard to distinguish fake from


real traffic.
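The crowding-out effect and the detection difficulty described above can be sketched with a toy model (the capacity figure, request counts and threshold are illustrative assumptions, not measurements of any real system):

```python
def served_legitimate(legit, bots, capacity=100):
    """Toy model of a flood: a server with fixed per-tick capacity.

    The server cannot tell bot requests from real ones, so when total
    demand exceeds capacity it serves a proportional share of each.
    Returns how many legitimate requests actually get through.
    """
    total = legit + bots
    if total <= capacity:
        return legit
    return capacity * legit // total

def flag_heavy_sources(requests_by_source, threshold=50):
    """Flag sources sending more than `threshold` requests per window.

    This is the most basic detection heuristic, and it shows why a
    *distributed* DoS is hard to filter: each of 10,000 bots can stay
    under the per-source threshold while their combined traffic still
    overwhelms the server.
    """
    return {src for src, n in requests_by_source.items() if n > threshold}
```

With no attack, all 80 legitimate requests in a tick are served; with 10,000 bot requests competing for the same capacity, essentially none get through, which is the "traffic jam" from the analogy above.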

3. Goals and Impacts of DDoS Attacks

1. Disruption of Services:

• Goal: Make critical systems unavailable.

• Impact:

o Interrupts businesses, governments, or public services (e.g., healthcare,


airports).

o Causes financial losses and damages reputation.


2. Economic Damage:

• Goal: Inflict financial harm.

• Impact:

o Lost revenue due to downtime.

o High recovery costs and long-term reputation damage.

3. Political or Ideological Statements:

• Hacktivism:

o Groups like Anonymous use DDoS to protest or promote causes.

• Enemy States:

o Target infrastructure (e.g., power grids) to undermine public trust.

4. Testing Cyber Capabilities:

• Goal: Probe defenses or identify vulnerabilities.

• Impact:

o Information gained may lead to more advanced attacks.

5. Masking Other Cyberattacks:

• Goal: Distract IT teams.

• Impact:

o Enables attackers to deploy malware, steal data, or exploit vulnerabilities


elsewhere.

6. Sabotage and Chaos:

• Goal: Destabilize society or organizations.

• Impact:

o Creates panic by targeting public infrastructure (e.g., hospitals, airports).

o Undermines confidence in institutions.

7. Ransom or Extortion:

• Goal: Financial gain through ransom (RDoS).


• Impact:

o Organizations may pay attackers to stop the attack and restore services.

8. Competitor Sabotage:

• Goal: Gain unfair advantage.

• Impact:

o Rival businesses disrupt services or tarnish reputations in competitive


industries.

9. Psychological Warfare:

• Goal: Instill fear and uncertainty.

• Impact:

o Repeated attacks erode public morale and cause political instability.

10. Espionage and Strategic Gains:

• Goal: Weaken adversaries strategically.

• Impact:

o Disrupt critical operations during elections or conflicts.

o Learn about defense strategies for future attacks.

4. Example

• Event: October 2022, Russian hacktivist group Killnet targeted US airport


websites.

• Motive: Political disruption and showcasing cyber capabilities.


IPFS (InterPlanetary File System) and Blockchain: Key Points

1. What is IPFS?

o A peer-to-peer hypermedia protocol and file-sharing network for storing
and sharing data.

o Uses content-based addressing instead of location-based protocols like


HTTP/HTTPS.

o Aims to decentralize the World Wide Web by reducing dependency on


central servers.

2. Shared Features of IPFS and Blockchain:

o Both are decentralized systems that eliminate reliance on centralized


authorities.

o Employ cryptographic techniques to ensure security and data integrity.

3. Differences in Purpose:

o Blockchain: An immutable ledger designed to record and verify


transactions.

o IPFS: Optimized for storing and retrieving large amounts of data in a


distributed manner.

4. Complementary Relationship:

o Blockchain often stores small data (e.g., transactions or metadata) due to


scalability and cost limitations.

o IPFS is used alongside blockchain to store large files, with the blockchain
recording hashes or references to data stored in IPFS.

5. Use Case Example:

o In NFTs (Non-Fungible Tokens), IPFS is used to store digital content (e.g.,


artwork), while blockchain records a hash pointing to the stored content.

Examples of Integration of IPFS and Blockchain:


• Decentralized applications (dApps) often use IPFS to store large data files while keeping metadata or pointers to those files on the blockchain.

• For example, in NFTs (Non-Fungible Tokens), the digital artwork or content is often stored on IPFS, and the hash of that file is recorded on the blockchain.
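The content-based addressing idea above can be made concrete with a plain SHA-256 hash: the address is derived from the bytes themselves, so identical content always resolves to the same identifier no matter where it is stored. This is a simplified illustration; a real IPFS CID adds multihash and base encoding on top.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself (simplified stand-in for an IPFS CID)."""
    return hashlib.sha256(data).hexdigest()

# A tiny content-addressed store: retrieval needs only the hash, not a location.
store = {}

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data
    return addr

addr = put(b"digital artwork bytes")
assert store[addr] == b"digital artwork bytes"
# The same content always yields the same address, wherever it is stored.
assert put(b"digital artwork bytes") == addr
```

In the NFT pattern described above, the blockchain would record only `addr`, while the artwork bytes live off-chain in the content-addressed store.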

Difference among IPFS, Cryptography, Blockchain, and Web 3:

• Definition:

o IPFS: A decentralized protocol for file storage and sharing using content-based addressing.

o Cryptography: A technique for securing information through encoding, ensuring confidentiality, integrity, and authenticity.

o Blockchain: A decentralized, immutable ledger used for secure transaction recording and verification.

o Web 3: The next generation of the internet, focused on decentralization, user ownership, and blockchain integration.

• Primary Purpose:

o IPFS: Store and share large files in a distributed manner.

o Cryptography: Protect and secure data and communications.

o Blockchain: Record transactions or data securely in a distributed ledger.

o Web 3: Decentralize applications (dApps), giving users control over their data and assets.

• Key Feature:

o IPFS: Content-based addressing, peer-to-peer file system.

o Cryptography: Uses encryption, hashing, digital signatures, and more.

o Blockchain: Immutable ledger, consensus mechanisms (PoW, PoS).

o Web 3: Decentralization, ownership, and trustless systems.

• Decentralization:

o IPFS: Fully decentralized network where no central server exists.

o Cryptography: Cryptography itself isn't decentralized, but can be used in decentralized systems like blockchain.

o Blockchain: Decentralized with no central authority or middlemen.

o Web 3: Decentralized, with a focus on peer-to-peer interactions without intermediaries.

• Security:

o IPFS: Relies on cryptographic hashes for data integrity and retrieval.

o Cryptography: Ensures confidentiality, integrity, and authenticity of data.

o Blockchain: Uses cryptography for securing transactions and data.

o Web 3: Security is ensured by decentralized protocols (blockchain, encryption, etc.).

• Use Case:

o IPFS: File storage, distributed file sharing (e.g., storing large files for dApps).

o Cryptography: Protect data (e.g., bank transactions, passwords, and communications).

o Blockchain: Digital currency (e.g., Bitcoin), smart contracts, NFTs.

o Web 3: Decentralized applications (e.g., decentralized finance, social media, NFT platforms).

• Examples:

o IPFS: Storing large files like NFTs, websites, or digital content.

o Cryptography: Secure communications (SSL/TLS), digital signatures, cryptocurrency wallets.

o Blockchain: Bitcoin, Ethereum, supply chain tracking, NFTs.

o Web 3: dApps (e.g., Uniswap for DeFi, OpenSea for NFTs).

• Key Technologies:

o IPFS: Peer-to-peer networks, cryptographic hashes, content addressing.

o Cryptography: RSA, AES, hashing algorithms (SHA), elliptic curve cryptography.

o Blockchain: Proof of Work (PoW), Proof of Stake (PoS), consensus algorithms.

o Web 3: Decentralized applications (dApps), smart contracts, IPFS, DAOs (Decentralized Autonomous Organizations).

• Data Integrity:

o IPFS: Ensured through content addressing (hashes).

o Cryptography: Ensured through hashing and digital signatures.

o Blockchain: Ensured through immutable blockchain consensus.

o Web 3: Ensured by decentralizing control and verifying data through blockchain.

• Example Technologies:

o IPFS: IPFS, Filecoin (to store and pay for files).

o Cryptography: RSA, AES, Elliptic Curve Cryptography.

o Blockchain: Bitcoin, Ethereum, Cardano.

o Web 3: Ethereum, Polkadot, Filecoin, Uniswap.


Key concepts related to Cybersecurity and Privacy

• Types of Keys

o Symmetric Key

▪ What it is: The same key locks and unlocks the data.

▪ Daily Life Example: Think of a locker key at the gym. You use the
same key to lock and unlock the locker.

▪ Tech Example: Used in apps like WhatsApp for encrypting messages quickly.

▪ You share a secret password with your best friend to access a private
Instagram account. Both of you use the same password to log in.

o Asymmetric Key (Public/Private Key Pair)

▪ What it is: One key (public) locks the data, and another key (private)
unlocks it.

▪ Daily Life Example: A mailbox. Anyone can drop a letter in (public key), but only you can unlock the box to retrieve it (private key).

▪ Tech Example: Used when you make secure online payments or log
into your bank account.

▪ When sending a private message on Facebook:

▪ The public key is like your friend's open mailbox. You drop the
message in.

▪ The private key is what your friend uses to open the mailbox
and read the message.

o Private Key

▪ What it is: A secret key used in asymmetric cryptography to decrypt data or create digital signatures.

▪ Example: Your private key in a cryptocurrency wallet.


▪ Use: Critical for keeping sensitive data secure.

▪ Think of your Instagram login password. It’s like a private key that
only you know and use to access your account. If someone else gets
it, they can "unlock" your account.

o Public Key

▪ What it is: A key shared openly to encrypt data or verify digital signatures.

▪ Example: Public keys used in PGP (Pretty Good Privacy).

▪ Use: Allows others to send you secure messages without compromising your private key.

▪ If someone wants to send you a private message on LinkedIn, they use your public key (LinkedIn handles this in the background). Only you can read the message because you have the private key.

o API Key

▪ A unique key that apps or websites use to connect securely.

▪ When a third-party app like Canva lets you post directly to Instagram,
it uses an API key to access your Instagram account securely. It’s
like a secret handshake between apps.
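The public/private key pair described above can be made concrete with the classic textbook RSA example (p = 61, q = 53). This is an illustration only: real keys use primes of 1024 bits or more plus padding, and such tiny keys offer no security.

```python
# Textbook RSA with tiny primes (illustration only -- never use such small keys).
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e, here 2753

def encrypt(m: int) -> int:
    """Anyone can do this with the public key (e, n)."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the private-key holder (d) can do this."""
    return pow(c, d, n)

c = encrypt(65)
assert c == 2790           # the well-known textbook ciphertext for m = 65
assert decrypt(c) == 65    # the private key recovers the message
```

Note how the roles match the mailbox analogy above: `(e, n)` is the open mailbox slot, `d` is the key that opens the box.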

• Types of Encryptions

o Symmetric Encryption

▪ What it is: The same key is used to both lock (encrypt) and unlock
(decrypt) data.

▪ You and your friend use a secret code to send private messages on
WhatsApp. Only both of you know the code, so the messages are
secure.


▪ Why it’s useful: It’s fast and works well when both parties can safely
share the key.

o Asymmetric Encryption (Public and Private Key Encryption)

▪ What it is: A pair of keys is used—one (public key) to encrypt the data and another (private key) to decrypt it.

▪ When you send a message on Facebook Messenger, it’s like locking it in a digital envelope with your friend’s public key. Only your friend can open the envelope because they have the matching private key.

▪ Why it’s useful: It’s highly secure since only the private key holder
can access the data.

o End-to-End Encryption (E2EE)

▪ What it is: Data is encrypted on the sender's side and can only be decrypted by the receiver. No one else, not even the platform, can read the data.

▪ On WhatsApp or Signal, your messages are encrypted so that even the app’s servers cannot read them. Only you and the person you’re chatting with can access the messages.

▪ Why it’s useful: Protects privacy from hackers or even the platform
itself.

o Steganography (Hidden Encryption)

▪ What it is: Hides data within other data, like images or videos, to keep
it hidden in plain sight.

▪ Social Media Example: Someone embeds a secret message inside a meme on Instagram, and only the recipient knows how to decode it.

▪ Why it’s useful: Adds an extra layer of secrecy.


o Ephemeral Messaging Protocols

▪ What it is: Designed to automatically delete messages, images, or videos after being viewed or after a set time.

▪ How it works in these features:

▪ Instagram Vanish Mode:

▪ Messages disappear automatically once the chat is closed.

▪ Encryption ensures they cannot be intercepted, while the app ensures the deletion.

▪ Perfect forward secrecy (PFS) is a cryptographic feature that protects sensitive data by generating new keys for each communication session.

o Self-Destruct Mechanisms

▪ What it is: Built-in mechanisms delete content on both ends after viewing or a set time.

▪ How it works in these features:

▪ WhatsApp View Once Images:

▪ Once viewed, the app deletes the media from its memory and
disables any screenshots.

▪ Telegram Secret Chats (with two-way deletion):

▪ Messages are encrypted and tied to a timer. Once the timer expires
or a user manually deletes, the app ensures both devices erase the
messages permanently.

• Malwares

o Ransomware:

▪ Malware that encrypts a victim’s data and demands a ransom for its
decryption.


▪ WannaCry (2017): Spread globally, exploiting a Windows vulnerability. It encrypted files and demanded Bitcoin as ransom.

▪ Ransomware attacks are typically carried out using a Trojan disguised as a legitimate file that the user is tricked into downloading or opening when it arrives as an email attachment. However, one high-profile example, the WannaCry worm, traveled automatically between computers without user interaction.

o Computer Virus

▪ Malware that attaches itself to files or programs and spreads when executed.

▪ ILOVEYOU (2000): Spread via email, overwriting files and replicating itself, causing billions in damages.

o Worm

▪ Self-replicating malware that spreads across networks without needing a host file.

▪ Code Red (2001): Targeted Windows servers, defacing websites and creating network overloads.

o Trojan Horse/Rogue Software

▪ Malware disguised as legitimate software to trick users into installing it.

▪ Zeus (2007): Stole banking credentials and financial data from infected systems.

o Spyware

▪ Software that secretly gathers user information without consent.

▪ Pegasus

o Wipers

▪ Malware designed to erase data on infected devices, often used for sabotage.

▪ Olympic Destroyer (2018): Disrupted IT systems at the Winter Olympics.

o Keylogger

▪ Records keystrokes to steal sensitive information like passwords or credit card numbers.

▪ StarLogger: Recorded all user activity and sent it via email.

o Adware

▪ Malware that generates unwanted advertisements, often disrupting the user experience.

• Zero Trust Architecture

o A security framework where no user, device, or system is trusted by default, regardless of location.

o Ensures access is only granted after strict verification and monitoring.

o Key Principles of ZTA

▪ Verification: Always verify identity, device integrity, and context (e.g., location, time).

▪ Example: Logging into a bank app requires a password and a one-time verification code.

▪ Least Privilege Access: Give users only the permissions they need to do their tasks.

▪ Example: A guest on your Wi-Fi can access the internet but not your shared files.

▪ Assume Breach: Act as if a breach has already occurred. Minimize damage by isolating access.

▪ Example: Sensitive files are encrypted, and access is granted only to authorized users.

o Daily Life Examples of ZTA

▪ Social Media: Instagram's vanish mode ensures messages disappear after the chat ends.

▪ Banking Apps: MFA prevents unauthorized access to your account.

▪ Office Networks: Only authorized laptops can access secure drives.

▪ Smart Homes: Guest devices can connect to Wi-Fi but not security cameras.

• Deepfake

▪ Deepfakes are media (images, videos, audio) created or edited using artificial intelligence tools, often depicting real or fictional people.

▪ Technology Behind Deepfakes: Utilizes machine learning and AI techniques, including:

▪ Facial recognition algorithms

▪ Artificial neural networks, such as:

▪ Variational autoencoders (VAEs)

▪ Generative adversarial networks (GANs)

▪ Malicious Uses

▪ Creating child sexual abuse material, celebrity pornographic videos, revenge porn

▪ Spreading fake news, hoaxes, and financial fraud

▪ Cyberbullying and political misinformation

▪ Medical Tampering

▪ Example: Deepfake technology used to alter 3D CT scans, such as injecting or removing lung cancer, which misled radiologists and AI detection systems during a penetration test.

▪ Corporate and Political Uses

▪ Corporate training: Personalized videos using deepfake avatars (e.g., Synthesia).

▪ Political campaigns: Translation of speeches into regional languages using deepfake technology in India's state assembly elections.

▪ Social and Propaganda Use

▪ Creation of non-existent people ("sockpuppets") for online and media manipulation.

▪ Example: Israeli propaganda featuring fake testimonies on Facebook to shift political opinions.

▪ Detection Techniques for Deepfakes

▪ Blockchain Verification: Verifies the source of media via blockchain ledgers, allowing only trusted sources to disseminate content on social platforms.

▪ Digital Signatures: Proposes embedding digital signatures in videos and images directly from cameras, ensuring authenticity before dissemination.

▪ Frame-by-Frame Analysis: Breaking the video down into individual frames to catch deviations between them.

▪ Blending Analysis, Blink Analysis, Edge Analysis, Error Level Analysis, Speed Analysis, Luminance Gradient Analysis

▪ Other technologies to detect deepfakes:

▪ Convolutional Neural Networks (CNNs): Used in many deepfake detection algorithms to analyze visual data and detect artifacts that indicate tampering.

▪ Recurrent Neural Networks (RNNs): Often used to detect spatio-temporal inconsistencies in videos that may not be visible in individual frames.

▪ Laplacian of Gaussian (LoG): Used in some deepfake detection models to enhance certain features while suppressing others, helping identify subtle artifacts.

▪ End-to-End Deep Networks: Employed to learn and distinguish between authentic and manipulated facial data.

▪ Variational Autoencoders (VAEs): Used in the creation of deepfakes but also adapted for detecting certain patterns typical of manipulated media.

▪ Generative Adversarial Networks (GANs): Although GANs are used to create deepfakes, detection models can use them to analyze and understand patterns in synthetic media.

▪ AI-based Detection Platforms: Examples include Deeptrace, Sensity AI, and Microsoft Video Authenticator, which use machine learning and advanced algorithms to detect deepfakes.

▪ FaceForensics++: A large-scale benchmark dataset used to test the accuracy of deepfake detection algorithms.

• Multi-Factor Authentication (MFA)

o Multi-factor authentication (MFA) is a security method that requires users to provide more than one form of authentication to access an account or application. It's also known as two-step verification.

o Ways: Email codes, text and call one-time passwords (OTPs), biometric verification, authenticator apps, magic links, social login, soft token software development kits (SDKs), and smartcards and cryptographic hardware tokens.
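The authenticator apps listed above generate their one-time codes with HMAC-based OTP (RFC 4226) and its time-based variant TOTP (RFC 6238). A minimal sketch using only the standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """Time-based OTP: the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // period)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
assert hotp(b"12345678901234567890", 0) == "755224"
```

The server and the app share the secret once (usually via a QR code) and then derive matching codes independently, which is why the codes work offline.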

• Phishing

o Phishing: A social engineering scam where attackers trick individuals into revealing sensitive information or installing malware like viruses, worms, adware, or ransomware.

o Email Phishing: Delivered through email spam to deceive people into sharing sensitive information or login credentials.

o Vishing (Voice Phishing): Utilizes Voice over IP (VoIP) to make automated phone calls, often with text-to-speech technology, pretending to be from a legitimate bank or institution. Calls are spoofed to appear legitimate, prompting victims to give up sensitive information or connect with a live attacker using social engineering.

o Smishing (SMS Phishing): A type of phishing attack using text messages to lure victims into clicking on a link that can steal personal data.

o Page Hijacking: Redirects users to malicious websites without their knowledge.

o Quishing: A newer scam exploiting QR codes, tricking users into scanning a code that leads to a malicious website to capture sensitive data.

o Spam Filters: Specialized filters use techniques like machine learning and natural language processing to identify and block phishing emails, especially those with forged addresses.
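A filter of the kind just mentioned can be sketched as a naive rule-based scorer; production filters use trained ML models, and the phrases and threshold below are purely illustrative.

```python
import re

# Illustrative signals only; real filters learn these from labeled mail.
SUSPICIOUS_PHRASES = ["verify your account", "urgent", "click here", "password expired"]

def phishing_score(email_text: str) -> int:
    """Count crude phishing signals; a higher score means more suspicious."""
    text = email_text.lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Raw IP addresses in links are a classic phishing tell.
    score += len(re.findall(r"https?://\d{1,3}(?:\.\d{1,3}){3}", text))
    return score

legit = "Minutes of the monthly team meeting are attached."
phish = "URGENT: your password expired. Click here: http://192.168.0.9/login"
assert phishing_score(legit) == 0
assert phishing_score(phish) >= 3
```

A real deployment would combine many such weak signals (sender reputation, SPF/DKIM results, link analysis) in a statistical model rather than a hand-written list.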

• Cyber Hygiene

o Cyber hygiene refers to fundamental cybersecurity best practices that an organization’s security practitioners and users can undertake.

o Ways: A network firewall, data-wiping software, a password manager, antivirus software, and MFA.

• Dark Net/ Web


o An overlay network within the Internet, accessible only with specific software, configurations, or authorization, and often using unique communication protocols.

o Types of Darknets:

▪ Social Networks: Used for file hosting and often operate with peer-to-peer connections.

▪ Anonymity Proxy Networks: Examples include Tor, which uses an anonymized series of connections to ensure user privacy.

o Technology Used:

▪ Tor: Accessible via a customized browser (e.g., the Tor browser bundle or Vidalia) or through a configured proxy.

▪ I2P: Provides a similar level of anonymity and security.

▪ Freenet: Focuses on censorship resistance and peer-to-peer file sharing.

o Purpose and Use:

▪ Darknets aim to protect digital rights by offering security, anonymity, and censorship resistance.

▪ Used for both legitimate purposes (e.g., free speech in restrictive countries) and illegal ones, such as black-market activities, pornography, terrorism, trafficking, black-money transactions, and drugs.

o The Dark Web:

▪ Includes smaller, friend-to-friend networks as well as larger networks like Tor, Hyphanet, I2P, and Riffle.

▪ Users refer to the regular web as "clearnet" due to its unencrypted nature.

▪ The Tor network uses onion routing and the .onion top-level domain for anonymity.

o Bitcoin in the Dark Web:

▪ Bitcoin is commonly used for transactions due to its flexibility and relative anonymity.

▪ Users can convert bitcoin into in-game currencies (e.g., World of Warcraft gold) to obscure their transactions further.

▪ Bitcoin tumblers are services that mix coins to make transactions harder to trace and are often available on Tor.

o Technologies used to counter the Dark Web/Net

▪ Web Crawlers: Specialized web crawlers are configured to index and collect data from dark web sites, such as those using the .onion domain.

▪ Data Mining and Machine Learning: Algorithms that analyze dark web data to identify patterns, trends, and potentially illegal activities.

▪ Natural Language Processing (NLP): Used to extract and understand information from unstructured text data found on dark web forums and sites.

▪ Threat Intelligence Platforms: Software that aggregates and analyzes data from various sources, including dark web sites, to provide insights on cyber threats.

▪ Deep and Dark Web Search Engines: Tools like Ahmia and DarkSearch specifically index and search content within the dark web.

▪ Dark Web Monitoring Tools: Services like Recorded Future, Dark Owl, and [Link] provide comprehensive dark web monitoring and threat intelligence.

▪ Blockchain Analysis Tools: Tools such as Chainalysis and Elliptic track cryptocurrency transactions to uncover links to dark web marketplaces and other illicit activities.

▪ Dark Web Scanning Software: Solutions that continuously scan the dark web for leaked data, such as Digital Stakeout and Terbium Labs.

▪ Anomaly Detection Systems: Software that identifies deviations in network traffic patterns that may suggest dark web activity.

▪ Network Traffic Analysis Tools: Tools that monitor network activity for signs of Tor or other anonymizing networks.

▪ Forensic Tools: Digital forensics tools like EnCase and FTK can help investigators analyze data from dark web activities.

▪ Security Information and Event Management (SIEM) Systems: Systems like Splunk and IBM QRadar can be configured to detect signals indicative of dark web interactions.

▪ Custom Bots and Scrapers: Bots designed to scrape specific dark web forums or marketplaces for targeted information gathering.

• Zero-Day Vulnerabilities

o Zero-day vulnerabilities are security flaws in software or hardware that are unknown to the vendor or developer. They are called "zero-day" because the developer has had zero days to fix the issue, making them highly exploitable by attackers.

o Zero-day attacks can lead to severe consequences, such as data breaches, system compromises, financial losses, and reputational damage for affected organizations.

• Social Engineering


• Penetration Testing (Pentesting)

o Penetration testing, or pentesting, is a simulated cyberattack conducted by cybersecurity professionals to identify and exploit vulnerabilities in a system, network, or application. The goal is to discover weaknesses before malicious hackers can exploit them.

o Aims to identify vulnerabilities in web applications, such as SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF).

• Credential Stuffing

o Credential stuffing is a type of cyberattack where attackers use automated tools to test large sets of stolen usernames and passwords on various websites and applications. The goal is to gain unauthorized access to user accounts by exploiting people who reuse the same login credentials across multiple platforms.

o Attackers acquire data breaches that contain usernames and passwords from other sites and use bots to automatically input these credentials into different services to see if any of them match and result in a successful login.

• Cyber Resilience


o Cyber resilience refers to an entity's ability to continuously deliver the intended outcome despite cyber attacks.

o Technologies used for cyber resilience

▪ Artificial Intelligence (AI) and Machine Learning (ML)

▪ Next-Generation Firewalls (NGFWs)

▪ A firewall is a network security device or software that monitors and controls incoming and outgoing network traffic based on predetermined security rules.

▪ Endpoint Detection and Response (EDR)

▪ Security Information and Event Management (SIEM)

▪ Zero Trust Architecture

▪ Patch Management Tools

▪ Sandboxing and Endpoint Hardening

• IoT Security Challenges

• Insider Threat Detection

• Cyber Threat Hunting


Blockchain Technology

Blockchain is defined as a distributed ledger where an identical copy is held by all the users on the network, called nodes. A node is simply a computer on the blockchain network that stores the ledger or the data. A block comprises a group of transactions of a similar nature and contains its own hash (a unique alphanumeric combination), the hash of the previous block, and the transaction data along with a timestamp.

Note: Merkle Tree – A data structure for efficiently verifying blockchain data.
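That note can be made concrete: a Merkle root is computed by pairwise hashing the transaction hashes level by level until a single root remains, so one 32-byte value commits to every transaction in the block. This sketch follows the Bitcoin convention of double SHA-256 and duplicating the last hash on odd-length levels.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used by Bitcoin."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes: list) -> bytes:
    """Pairwise-hash leaves upward until one root commits to all transactions."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:                 # odd count: duplicate the last hash
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256d(t) for t in (b"tx1", b"tx2", b"tx3")]
root = merkle_root(txs)
# Changing any transaction changes the root, which is how tampering is detected.
assert merkle_root([sha256d(b"txX"), txs[1], txs[2]]) != root
```

Because the root is stored in the block header, verifying that a single transaction belongs to a block needs only a short path of hashes, not the whole block.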

In blockchain there are three layers:

1) Protocol layer, which lays down the foundational structure.
2) Networking layer, where the rules of the protocol layer are actually implemented.
3) Application layer, where networks and protocols are used to build applications that users interact with.

The categories of blockchain are:

1) Public Permissionless: open to all, and anyone can write. Examples: Bitcoin, Ethereum.
2) Public Permissioned: open to all to read, but only authorized participants can write. Example: supply chain management.
3) Consortium: a closed blockchain where restricted participants can read and write. Example: transactions involving banks.
4) Private Permissioned Enterprise: a further restricted form where the number of participants is very few.

Characteristics of Blockchain Technology

1) Decentralized: there is no central authority, therefore decision making is faster.
2) Distributed
3) Transparent
4) Consensus based
5) Immutable: once information is added to a block, it cannot be modified without the consensus of the participants.
6) Permanence
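Immutability follows from the hash chaining described above: each block stores the previous block's hash, so editing any earlier block invalidates every block after it. A minimal sketch (the timestamp is fixed for reproducibility):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash everything except the stored hash itself, in a stable key order."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash: str) -> dict:
    block = {"transactions": transactions, "prev_hash": prev_hash,
             "timestamp": 1700000000}      # fixed so the example is deterministic
    block["hash"] = block_hash(block)
    return block

def is_valid(chain) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):          # block tampered with
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:  # link broken
            return False
    return True

genesis = make_block(["A pays B 5"], prev_hash="0" * 64)
chain = [genesis, make_block(["B pays C 2"], prev_hash=genesis["hash"])]
assert is_valid(chain)
chain[0]["transactions"] = ["A pays B 500"]   # tamper with history
assert not is_valid(chain)                    # the chain no longer verifies
```

On a real network, every node holds a copy and runs this kind of check, so a forger would have to rewrite the chain on a majority of nodes at once.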

Uses of Blockchain Technology in Governance of India

1. NITI Aayog has used BCT in pilot projects such as tracking fertilizer distribution with GNFC.
2. SuperCert, an initiative of NITI Aayog, ISB, and Bitgram, created a mechanism for virtual verification of educational documents; it was also used for handling land records, especially settling titles.
3. In another case study, NITI Aayog used BCT for cold-chain management of vaccines and sending messages to patients; it was also employed to detect counterfeit drugs.
4. The Reserve Bank of India (RBI) is exploring the Central Bank Digital Currency (CBDC),
leveraging blockchain technology.
5. Banks like SBI, ICICI, and HDFC are using blockchain for trade finance and cross-border
payments.
6. Andhra Pradesh has adopted blockchain for e-governance initiatives.
7. Blockchain for tracking GST invoices and ensuring compliance in the supply chain.
8. Power Ledger, an Australian blockchain-based energy trading platform, partnered with
Uttar Pradesh Power Corporation Limited (UPPCL).
a. Facilitate peer-to-peer (P2P) energy trading among consumers, especially in
renewable energy sectors like solar power.
b. Individuals with surplus solar energy can sell it to others within the grid network
using blockchain, ensuring transparency and fair pricing.
9. Tata Power has explored blockchain for renewable energy certificate (REC)
management and peer-to-peer energy trading.

Blockchain Challenges in India:

1) Endogenous (India Specific)


a) Processes may require changes to be made blockchain-amenable, i.e., our existing infrastructure must be modified as per BCT; this requires a shared view of success, which is possible only if the potential benefits are defined.
b) Integration with legacy systems (certain patterns of work culture).
c) Legal and regulatory modifications.
d) Using technology to regulate.
e) Blockchain platforms often face challenges in handling large-scale transactions, which is critical for a country with a population as large as India’s.
2) Exogenic Challenges


a) Suitability of atomic vs. non-atomic transactions. Atomic transactions are those with a finite life, e.g., a supply chain; a non-atomic example is land records. BCT is more suitable for atomic transactions.
b) Initial cost of implementation very high.
c) Human resource constraints
d) The number of developers is very small.
e) Blockchain, particularly systems using Proof of Work (PoW) consensus mechanisms, can
be energy-intensive.
f) While blockchain ensures transparency, excessive visibility of sensitive data could infringe
on citizens' privacy. Striking a balance between transparency and confidentiality is critical.
g) While blockchain is secure, vulnerabilities in poorly coded smart contracts can be
exploited.

Virtual Currency (VC) or Cryptocurrency:

The most famous examples are Bitcoin, Ethereum, Ripple, and NXT. Virtual currencies have divided opinion because:

1) 95% of virtual currencies are held by 5% of participants.
2) They are highly volatile in nature; prices fluctuate wildly.
3) They bypass the central banks.
4) Traditional currencies are issued against some asset, which is not the case with VCs.
5) As transactions are international, and there is no uniform international benchmark, there are bound to be discrepancies in the handling of VCs from one country to another.
6) As VCs are stored digitally, there is no clarity on how thefts of them would be dealt with as crimes.
7) They can be used for money laundering.
8) They can be used for financing organized crime and terror activities.


In April 2018, the RBI issued a circular which barred entities regulated by it from providing any service in relation to virtual currencies, including the transfer or receipt of money related to cryptocurrencies. When this matter reached the Supreme Court, there were three questions:

1) Whether virtual currency is money.
2) Whether the RBI has the power to regulate it.
3) Whether the circular issued by the RBI was the right exercise of that power.

The SC ruled in favor of the RBI on the first two questions, but on the third it held that the circular was not proportionate: the RBI failed to provide empirical evidence that this type of currency has a negative impact on the banking sector or on other entities regulated by it. This implies that if, in the future, the RBI is able to furnish evidence of negative impact, it can bar transactions involving cryptocurrencies.

Bitcoin is based on proof of work: the miner has to verify a certain set of information and is rewarded with bitcoin. The total number of bitcoins is capped at 21 million, and the principle of halving applies, i.e., after every 210,000 blocks are added, the block reward is cut in half.

What Is Bitcoin Mining? (A Simple Explanation)

▪ Bitcoin mining is like solving a giant puzzle on a computer. Imagine you're in a lottery where
you guess a random number, and whoever guesses correctly wins a prize (Bitcoin). Many
people (miners) are competing to guess the number, and everyone is using powerful computers
to try millions of guesses every second.

How It Works: A Simple Example

1. The Setup

▪ Imagine a digital "block" as a container that holds:

• A list of transactions (like a bank ledger recording who sent Bitcoin to whom).

• A special key from the previous container (block) to link them together.

• A space for the miner to add their secret "magic number" (the nonce).


2. The Challenge

▪ The Bitcoin network gives everyone a puzzle:

• "Find a number that, when added to this block and processed through a special formula
(called a hash), creates a result that starts with a certain number of zeros."

▪ For example:

• The formula might look like:


• Block data + Your magic number = Hash (must start with '0000')

3. The Work

▪ You (the miner) start guessing numbers (the magic number):

• First guess: "1" → Hash: 5F2A... (not valid).

• Second guess: "2" → Hash: AB34... (not valid).

• You keep guessing until you find a number that works.

▪ For example, when you try "34567":

• Hash: 0000A1BC... (Valid! Starts with 4 zeros).

4. The Reward

▪ When you solve the puzzle:

1. You announce it to the network, and everyone verifies your solution.

2. The block you solved is added to the blockchain (Bitcoin's digital ledger).

3. You win the reward:

o New Bitcoins (6.25 BTC per block as of the 2020-2024 era; the reward halves every 210,000 blocks).

o Transaction fees from the transactions inside the block.
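The guessing game above is exactly what mining code does: increment a nonce until the block's hash clears the difficulty target. A minimal sketch, using four leading hex zeros as an easy target; real Bitcoin difficulty is vastly higher and uses double SHA-256 over a binary header.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Try nonces until the hash starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("prev_hash + transactions")
assert digest.startswith("0000")     # anyone can verify the answer instantly
```

Note the asymmetry that makes proof of work useful: finding the nonce takes many thousands of hash attempts, but checking a claimed solution takes exactly one.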


Ethereum is based on proof of stake, i.e. participants have to put some of their cryptocurrency
at stake before they can engage in validating blocks; validators are then chosen with probability
proportional to their stake, somewhat like a lottery.

Proof of Burn: the participant has to permanently forgo (destroy) a certain amount of
cryptocurrency before engaging in mining.

Proof of Elapsed Time: a lottery-like system in which the network randomly assigns each node a
waiting time, and the node whose time elapses first gets to mine the next block.
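The stake-weighted "lottery" behind proof of stake can be sketched as a weighted random draw. This is a toy model with invented names; real protocols add verifiable randomness, slashing penalties and much more.

```python
import random

def pick_validator(stakes, seed=None):
    """Choose the next block proposer with probability proportional to stake."""
    rng = random.Random(seed)
    names = list(stakes)
    return rng.choices(names, weights=[stakes[n] for n in names], k=1)[0]

stakes = {"alice": 70, "bob": 20, "carol": 10}  # staked coins (hypothetical)
print(pick_validator(stakes, seed=1))
```

Over many rounds, a validator holding 70% of the total stake wins roughly 70% of the blocks, which is the sense in which staking more buys more influence.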

Some terms

1. Stablecoins – Cryptocurrencies pegged to stable assets like fiat (e.g., USDT, USDC).
2. Initial Coin Offering (ICO) – A crowdfunding method for new crypto projects.
3. Non-Fungible Tokens (NFTs) – Unique digital assets stored on a blockchain.

With reference to Non-Fungible Tokens (NFTs), consider the following statements:

1. They enable the digital representation of physical assets.

2. They are unique cryptographic tokens that exist on a blockchain.

3. They can be traded or exchanged at equivalency and therefore can be used as a
medium of commercial transactions.

4. DeFi Tokens – Tokens used in decentralized finance ecosystems.


5. Atomic Swaps – Peer-to-peer cryptocurrency trading without intermediaries.
6. Central Bank Digital Currency (CBDC) – Digital currency issued by central banks (e.g.,
e₹).

With reference to Central Bank digital currencies, consider the following statements:

1. It is possible to make payments in a digital currency without using the US dollar or
the SWIFT system.
2. A digital currency can be distributed with a condition programmed into it such as
a time-frame for spending it.


Consider the following statements:

1. E-rupee is a cryptocurrency issued by the Indian government.

2. E-rupee is different from other digital payment methods, such as UPI, in that it is a
direct liability of the RBI.

7. Web 3.0 – The decentralized web built on blockchain.

With reference to Web 3.0, consider the following statements:

1. Web 3.0 technology enables people to control their own data.

2. In the Web 3.0 world, there can be blockchain-based social networks.

3. Web 3.0 is operated by users collectively rather than by a corporation.

8. Cold Wallet – Offline storage for cryptocurrencies.


9. Hot Wallet – Online cryptocurrency wallets.
10. Gasless Transactions – Transactions executed without fees in certain ecosystems.

With reference to “Blockchain Technology”, consider the following statements:

1. It is a public ledger that everyone can inspect but no single user controls it.

2. The structure and design of the blockchain is such that all the data in it are about
cryptocurrency only.

3. Applications that depend on basic features of blockchain can be developed without
anybody’s permission.

Decentralised Finance (DeFi):

1. Decentralized finance (DeFi) is one of the emerging technological evolutions based on
blockchain and cryptocurrency.


2. It is an alternative to the traditional financial system, and it decentralizes both finance and
its regulations, nullifying the significance of intermediaries like banks and exchanges in
financial transactions.
3. Examples of its applications are DeFi P2P Lending, crowdfunding platforms and
decentralized hedge funds.
4. An important term associated with DeFi is the smart contract. Smart contracts are computer
programs stored on blockchains that automatically get executed when the predetermined
conditions are met; for instance, they can connect a borrower and lender if their conditions
match.
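The borrower–lender matching that such a smart contract automates can be illustrated off-chain with a short Python sketch. All names, fields and thresholds here are hypothetical; a real DeFi contract would run on-chain (for example as Solidity code), so that neither party can alter the rules once deployed.

```python
def match_loan(borrowers, lenders):
    """Pair each borrower with the first lender whose terms are compatible:
    the lender offers at least the requested amount at a rate the borrower accepts."""
    matches = []
    for b in borrowers:
        for l in lenders:
            if l["amount"] >= b["amount"] and l["rate"] <= b["max_rate"]:
                matches.append((b["name"], l["name"]))
                lenders.remove(l)  # each lender's offer is used at most once
                break
    return matches

borrowers = [{"name": "asha", "amount": 100, "max_rate": 8.0}]
lenders = [{"name": "ravi", "amount": 500, "rate": 12.0},
           {"name": "meena", "amount": 200, "rate": 7.5}]
print(match_loan(borrowers, lenders))  # [('asha', 'meena')]
```

Here "asha" rejects "ravi" (12% exceeds her 8% cap) and is matched with "meena"; a smart contract encodes exactly this kind of predetermined condition and executes it automatically.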


Artificial Intelligence

PYQs

1. Rise of Artificial Intelligence: the threat of jobless future or better job opportunities
through reskilling and upskilling
2. Introduce the concept of Artificial Intelligence (AI). How does AI help clinical
diagnosis? Do you perceive any threat to privacy of the individual in the use of AI in
healthcare?
3. The application of Artificial Intelligence as a dependable source of input for
administrative rational decision-making is a debatable issue. Critically examine the
statement from the ethical point of view.
4. There is a technological company named ABC Incorporated which is the second largest
worldwide, situated in the Third World. You are the Chief Executive Officer and the
majority shareholder of this company. The fast technological improvements have raised
worries among environmental activists, regulatory authorities, and the general public
over the sustainability of this scenario. You confront substantial issues about the
business's environmental footprint. In 2023, your organization had a significant increase
of 48% in greenhouse gas emissions compared to the levels recorded in 2019. The
significant rise in energy consumption is mainly due to the surging energy requirements
of your data centers, fuelled by the exponential expansion of Artificial Intelligence (AI).
AI-powered services need much more computational resources and electrical energy
compared to conventional online activities, notwithstanding their notable gains. The
technology's proliferation has led to a growing concern over the environmental
repercussions, resulting in an increase in warnings. AI models, especially those used in
extensive machine learning and data processing, exhibit much greater energy
consumption than conventional computer tasks, with an exponential increase. Although
there is already a commitment and goal to achieve net zero emissions by 2030, the
challenge of lowering emissions seems overwhelming as the integration of AI continues
to increase. To achieve this goal, substantial investments in renewable energy use would
be necessary. The difficulty is exacerbated by the competitive environment of the
technology sector, where rapid innovation is essential for preserving market standing and
shareholders' worth. To achieve a balance between innovation, profitability and
sustainability, a strategic move is necessary that is in line with both business objectives
and ethical obligations. (a) What is your immediate response to the challenges posed in
the above case? (b) Discuss the ethical issues involved in the above case. (c) Your
company has been identified to be penalized by technological giants. What logical and
ethical arguments will you put forth to convince about its necessity? (d) Being a
conscientious being, what measures would you adopt to maintain a balance between AI
innovation and environmental footprint?
5. “The emergence of Fourth Industrial Revolution (Digital Revolution) has initiated e-
Governance as an integral part of government”. Discuss.
6. Digital economy: A leveler or a source of economic inequality
7. How have digital initiatives in India contributed to the functioning of the education
system in the country? Elaborate your answer.
8. Has digital illiteracy, particularly in rural areas, coupled with lack of Information and
Communication Technology (ICT) accessibility hindered socio-economic development?
Examine with justification.
9. E-Governance is not just about the routine application of digital technology in the service
delivery process. It is as much about multifarious interactions for ensuring transparency
and accountability. In this context evaluate the role of the ‘Interactive Service Model’ of
e-governance.
10. What is the status of digitalization in the Indian economy? Examine the problems faced
in this regard and suggest improvements.
11. How can the Digital India programme help farmers to improve farm productivity and
income? What steps has the government taken in this regard?
12. Considering the threats cyberspace poses for the country, India needs a “Digital Armed
Forces” to prevent crimes. Critically evaluate the National Cyber Security Policy, 2013,
outlining the challenges perceived in its effective implementation.
13. The impact of digital technology as a reliable source of input for rational decision-making
is a debatable issue. Critically evaluate with a suitable example.


Artificial Intelligence: intelligence is the ability of an individual to adjust to its environment.
It is traditionally measured with the help of IQ. But there are 2 issues with that concept:

1) IQ scores decline with age.

2) IQ is not a true reflector of potential, which is why EQ has come to be emphasized
alongside IQ.

After the Second World War, the British mathematician Alan Turing argued that computers could
be made to exhibit intelligent, human-like behaviour.

AI is commonly defined as the reproduction by machines of human behaviour such as speech
recognition, voice recognition and pattern recognition.

AI has 2 streams:

1) Narrow AI: where AI systems are able to perform better than human beings in some areas.
2) Generalised AI: also called the singularity, when machines will be able to perform better
than humans in all areas. This is associated with the ultimate objective of robotics and AI,
i.e. to have an artificial person in the form of a humanoid or android robot.

Robots are being developed with the following objectives:

1) Transhumanism: taking humanity to the next level, meaning routine activities will be
outsourced to AI machines so that human beings can focus on their interests and passions.
But in a country like India, before introducing such technology it is very important that
people's skills be enhanced, otherwise it will lead to massive unemployment. Some experts
have used the phrase "polarisation of the skill set", where very few individuals with
advanced skills control the organization, middle-level jobs are cut, and a big population
without skills remains at the bottom; such a situation would be completely unsustainable.
Some corporates, like Infosys and Wipro, have realized the gravity of the situation and
have started training their mid-level executives.
2) These AI systems should be able to understand non-verbal communication, which is a
hallmark of human behaviour.
3) They should be able to use the same tools as used by humans.

Impediments in development of Generalised AI:


1) Lack of natural-language ability: it is a human attribute to take decisions, draw
inferences and think critically.
2) AI systems cannot move from one architecture to another.
3) They have not yet passed the Turing test, i.e. the test in which a human being fails to
distinguish between the work of a person and the work of a machine.
4) Although the raw information-processing capacity of AI systems is very high compared
to human beings, this alone does not amount to general intelligence.

Evolution of AI:

1) Artificial Neural Networks (ANNs):

a) These are structures that resemble the functioning of neurons; like neurons, they also
have inputs and outputs.
b) An activation function is a mathematical function applied to a neuron's output to
introduce non-linearity. Common activation functions include ReLU (Rectified
Linear Unit), Sigmoid, and Tanh.
c) ANNs learn through a process called backpropagation. During training, the network
adjusts its weights and biases using an algorithm like gradient descent to minimize
the error between its predicted outputs and the actual targets.
d) Applications
i) Image Recognition: Used in facial recognition software, medical image analysis, and
automated quality control in manufacturing.
ii) Natural Language Processing (NLP): Powers chatbots, translation services, and voice
assistants
iii) Predictive Analytics: Employed for stock market predictions, weather forecasting, and
trend analysis
iv) Gaming and Simulations: Utilized to create intelligent agents in video games and
simulations.
v) Healthcare: Assists in diagnosing diseases from medical data, developing treatment
plans, and drug discovery
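The weight-adjustment loop described in (c) can be shown with a single sigmoid "neuron" trained by gradient descent. This is a minimal, standard-library-only sketch; real networks stack many such units into layers and backpropagate the error through all of them.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, epochs=20000, lr=1.0):
    """One sigmoid neuron: its weights and bias are nudged by gradient
    descent to shrink the squared error between prediction and target."""
    random.seed(0)
    w1, w2, b = random.random(), random.random(), random.random()
    for _ in range(epochs):
        for (x1, x2), t in data:
            y = sigmoid(w1 * x1 + w2 * x2 + b)
            grad = (y - t) * y * (1 - y)   # dE/dz for E = (y - t)^2 / 2
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

# teach the neuron the logical AND function
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_neuron(AND)
print([round(sigmoid(w1 * a + w2 * c + b)) for (a, c), _ in AND])
```

After training, the rounded outputs reproduce the AND truth table; backpropagation applies the same "follow the error gradient" idea across every layer of a deep network.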
2) Machine Learning: a buzzword, defined as making machines perform tasks that come
naturally to humans. It has applications in healthcare, education, e-commerce, etc. In
machine learning, data is used by algorithms to develop the program. Some of the key
concepts involved in machine learning are:
a) Data Mining: arranging unstructured data in a meaningful manner.


b) Data Analysis: drawing inferences from the structured data.
c) Data Visualization: selecting the right tools and methods to present complex data,
such as GDP figures.
d) Recommendation systems: suggesting likes and dislikes on the basis of an individual's
previous history, as YouTube does.
e) Predictive modelling: a further evolution of the recommendation system, which also
quantifies how much the current items overlap with the previous history.
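A recommendation system of the kind described in (d) can be sketched with cosine similarity between a user's viewing history and candidate items. The feature vectors and item names below are invented for illustration; production systems use far larger learned embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def recommend(history, catalogue):
    """Rank catalogue items by similarity to the user's average watched-item vector."""
    n = len(history[0])
    profile = [sum(item[i] for item in history) / len(history) for i in range(n)]
    return sorted(catalogue, key=lambda kv: cosine(profile, kv[1]), reverse=True)

# hypothetical feature vectors: [comedy, politics, tech]
history = [[0.9, 0.1, 0.8], [0.8, 0.0, 0.9]]
catalogue = [("debate", [0.1, 0.9, 0.2]),
             ("gadget review", [0.2, 0.0, 0.9]),
             ("stand-up", [0.9, 0.1, 0.1])]
print([name for name, _ in recommend(history, catalogue)])
# ['gadget review', 'stand-up', 'debate']
```

The user's history leans towards comedy and tech, so tech and comedy items rank above the politics item, exactly the "previous history" logic the notes describe.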
3) Deep Learning: in AI, efforts are on to impart critical-thinking abilities to AI systems;
these systems are expected to operate in a social context. Recently a neuromorphic
computer named SpiNNaker (Spiking Neural Network Architecture) was developed, which
has an architecture modelled on the human brain and is capable of performing massive
numbers of operations per second.


▪ To develop neuromorphic computers, IBM and DARPA started the SyNAPSE program
(Systems of Neuromorphic Adaptive Plastic Scalable Electronics); IBM later developed a
neuromorphic chip called TrueNorth. The neuromorphic approach to computing is energy-
efficient and expected to be faster. One such project was the Blue Brain Project (BBP), in
which a virtual model of the functioning of a rat's brain was developed. It was followed by
the Human Brain Project, which aims to develop a virtual model of the functioning of the
human brain.

Kamakoti and NITI Aayog papers on AI: The Kamakoti panel was constituted by the Ministry of
Commerce. Its mandate was to answer 3 questions:

1) What are the areas in which the government should play a role?

2) How can AI improve quality of life and solve problems for Indian citizens?


3) What are the sectors that can generate employment and growth by the use of AI?

The panel has suggested setting up an inter-ministerial National Artificial Intelligence Mission
for 5 years with a corpus of ₹1,200 crore, setting up centres of excellence, creating
interdisciplinary data centres, and introducing AI-based curricula for education and skilling.

It has identified 10 important domains:

1) Manufacturing
2) Fin Tech
3) Healthcare
4) Education
5) Agriculture food processing
6) Retail customer engagement
7) Public utility
8) National security
9) Environment
10) Aids for differently abled.

NITI Aayog has presented a paper on AI with the theme #AIforAll, i.e. a citizen-centric approach
to utilizing AI. It has suggested 5 areas where AI can serve society.

1) Precision agriculture
2) Education
3) Healthcare
4) Smart cities and infrastructure
5) Smart mobility and transport

It has suggested the following strategies to push AI and make India a Leader:

1) Set up research centres
2) Institute scholarships
3) Protect IPR for AI
4) Promote reskilling of the workforce
5) Continuously assess the changing nature of jobs and the reasons for joblessness


NITI Aayog has suggested a 2 tier structure to address India’s AI research aspirations:

1) CORE (Centres of Research Excellence), for basic research

2) ICTAI (International Centres of Transformational AI), i.e. deploying the research into
practical settings

NITI Aayog has identified following impediments:

1) Low intensity of AI research
2) Lack of an enabling data ecosystem
3) Inadequate availability of AI expertise
4) Lack of clarity on privacy and security issues
5) Unattractive IPR regime

Other aspects of AI

• AI as a Disruptive Factor: The introduction of AI complicates the concept of meritocracy.


• Socioeconomic Disparities: AI’s influence on job markets could lead to further inequality,
especially for those without access to high-level education and training.
• Opaque Nature of AI Systems: The “black box” nature of many AI algorithms makes it
difficult to understand or challenge the criteria by which merit is evaluated. This opacity
can hinder fair assessment and accountability.
• Concentration of Power: the concentration of power in the tech giants who control AI
systems raises concerns about accountability and fair access.


"Different approaches to AI regulation":

1. United Nations' Resolution on AI:

o The UN resolution is a significant step towards formalizing AI regulations globally.
It recognizes the need for responsible AI usage to achieve the 2030 Sustainable
Development Goals (SDGs). The resolution emphasizes the importance of ethical
AI systems and the potential adverse impacts of improper AI use on social,
environmental, and economic dimensions.

2. The U.K.'s Context-Based Approach:

o The U.K. has adopted a context-based approach to regulating AI, which involves
extensive consultations with regulatory bodies to bridge complex technological
gaps. This approach is more flexible and decentralized, differing from the more
stringent regulatory frameworks of the European Union.

3. China's Stand on AI Regulation:

o The four-tier risk classification (unacceptable, high-risk, limited-risk and
minimal-risk) is, strictly speaking, the approach of the European Union's AI Act;
China has instead issued targeted, binding rules on recommendation algorithms,
deep synthesis and generative AI, promoting AI tools and innovations with
safeguards to ensure they align with social and economic goals.

4. India's Position on AI Regulation:

o India’s response to AI regulation is crucial given its large consumer base and
significant role as a labor force for global technology companies. The country is set


to host over 10,000 deep tech startups by 2030, and plans include deploying
significant resources to enhance AI infrastructure and innovation.

o It stresses the importance of India's AI regulations aligning with its SDG
commitments while also ensuring economic growth. The phased approach to AI
regulation is recommended to foster innovation and build a robust AI system that
benefits the country holistically.

5. The Importance of Ethical AI:

o Across all the global discussions, a recurring theme is the need for ethical AI
development, ensuring that AI systems do not compromise citizens' rights and are
developed with a strong consideration of their social impact.

• The Bletchley Declaration, a recent initiative, underscores the need for global cooperation
in addressing AI-related cybersecurity challenges. It calls for international collaboration to
navigate the ethical, legal, and safety challenges posed by AI technologies.
• A holistic approach to cybersecurity in an AI-driven world is necessary.
• Regulatory sandboxes
o Regulatory sandboxes provide a safe space to innovate while ensuring that AI
technologies comply with legal and ethical standards. This balance helps prevent
potential harm that could arise from the unregulated deployment of AI
technologies.
o Sandboxes promote transparency by requiring participants to disclose information
about their AI models, thus enabling better regulatory oversight.
o To address these challenges, governments and regulatory bodies worldwide are
increasingly adopting regulatory sandboxes. These sandboxes serve as controlled
environments where new AI technologies can be tested and evaluated within a
defined framework, ensuring that they meet ethical and safety standards before
wider deployment.


AI and elections

o Some political parties have translated their leaders' speeches into different
languages during campaigns; the 2024 Lok Sabha election has been called the
"first AI election."
o Social media platforms have been central to Indian electoral strategy over the years,
with each election cycle integrating emerging technologies more deeply.
o AI technologies are increasingly being used to influence voter behavior. For example,
AI-generated robocalls in New Hampshire and AI-manipulated audio in Slovakia have
raised concerns about the impact of AI on democratic processes.
o The danger of AI in elections is underscored by its potential to spread
disinformation, as seen in recent elections in India, where deepfakes and AI-
generated content were used to manipulate public opinion.

Artificial Intelligence and the green transition: limiting warming to 1.5 degrees

• The World Economic Forum highlighted AI as the “only solution” to meet the 1.5-degree
target by reducing emissions through its immense computational power
• AI plays a critical role in optimizing energy systems, which is fundamental for the green
transition. Machine learning algorithms can predict energy demand and supply more
accurately, enabling better integration of renewable energy sources like solar and wind into
the grid. For example, Google's DeepMind has used AI to cut the energy its data centers
use for cooling by 40%, showcasing the potential of AI in improving energy efficiency.
• AI is instrumental in the development and deployment of electric vehicles (EVs) and
autonomous driving technologies. AI algorithms optimize battery management, route
planning, and energy consumption in EVs, extending their range and making them more
viable for widespread adoption.
• AI can enhance public transportation systems by optimizing routes and schedules based on
real-time data, reducing fuel consumption and emissions.
• A 2019 study by researchers at the University of Massachusetts, Amherst, found that
training a single AI model could generate about 626,000 pounds of carbon dioxide,
roughly five times the lifetime emissions of an average American car, including its
manufacture.


The deployment of AI in the green transition also raises ethical concerns related to bias and
inequality. AI systems may unintentionally favor certain groups over others, leading to unequal
access to the benefits of the green transition.

Regulation of AI

o Regulating AI is complex because of the fast pace of innovation and the risks associated
with it, including transparency, misinformation, privacy issues, security threats, bias in AI
algorithms, and discrimination. There is no one-size-fits-all approach to regulating AI due
to its widespread impact across industries, education, healthcare, and governance.
o Continuous feedback from active users will make AI systems more responsive to their
needs, improving accuracy and efficiency
o India is aiming for a human-centric AI approach that weaves AI technology into society.
However, this goal can only be achieved if the AI attention divide is addressed to ensure
equitable AI access across all segments of the population.
o The United States has adopted a balanced approach to AI regulation, focusing on both
innovation and safety, using executive orders to manage AI risks, privacy, and fairness.
o The European Union (EU) introduced the AI Act, which provides a comprehensive
framework for regulating AI and protecting privacy and personal data.
o China has implemented strong AI regulations with detailed guidelines around data
protection and usage.
o India hosted the Global Partnership on Artificial Intelligence (GPAI) summit

Artificial Intelligence and environmental issues

• Artificial Intelligence (AI) has emerged as one of the most transformative technologies of
the 21st century, reshaping industries, economies, and societies. However, its rapid
development and deployment have also raised concerns about its environmental impact.
While AI promises to solve some of the world's most pressing environmental challenges,
its energy consumption, resource demands, and carbon footprint are increasingly being


scrutinized. This analysis critically examines the environmental impact of AI, synthesizing
insights from various reports and proposing solutions to mitigate its adverse effects.

1. Key Reports on the Environmental Impact of AI

1.1 Stanford University’s Artificial Intelligence Index Report (2022)

• Observation: This report highlights the significant energy consumption associated with
training large AI models.

• Key Findings:

o The training of AI models, particularly large neural networks like GPT-3 and
AlphaGo, requires immense computational power, leading to substantial carbon
emissions.

o There is a growing computational divide between large corporations and smaller
enterprises, with larger firms consuming vastly more resources.

1.2 MIT's Study on AI’s Carbon Footprint (2021)

• Observation: The MIT study estimated the environmental costs of training a single AI
model.

• Key Findings:

o Training one large AI model can emit as much carbon as five cars over their entire
lifetimes (around 284 metric tons of CO2).

o The environmental cost is particularly high for deep learning models that require
millions of computations over multiple GPUs (Graphics Processing Units).

1.3 Report from the Global Partnership on AI (2021)

• Observation: The Global Partnership on AI report emphasizes the potential of AI to help
mitigate environmental challenges but also warns of the environmental costs associated
with its computational needs.

• Key Findings:

o AI can support climate change mitigation by optimizing energy systems, reducing
waste, and enhancing the efficiency of agriculture and transportation.


o However, data centers, which house the servers that power AI, are responsible for
approximately 2% of global greenhouse gas emissions, a figure expected to rise as
AI adoption increases.

1.4 World Economic Forum Report on AI and Sustainability (2022)

• Observation: The WEF report discusses the trade-offs between the benefits of AI and its
environmental footprint.

• Key Findings:

o While AI can reduce emissions in sectors like logistics, energy, and manufacturing,
its deployment requires careful planning to ensure that its carbon savings outweigh
its carbon costs.

o The report emphasizes the need for AI systems to be energy-efficient and powered
by renewable energy sources.

2. Environmental Impacts of AI

2.1 Energy Consumption

• The energy consumption required to power AI is one of the most significant contributors
to its environmental impact. Training large AI models involves running billions of
computations over extended periods, which demands large amounts of electricity.

• Data Centers: The data centers that house the computational resources for AI are energy-
intensive, consuming large amounts of electricity for both running servers and cooling
them. Many of these data centers rely on fossil fuels, contributing to carbon emissions.

• AI Training Costs: As per the MIT study, training a single deep learning model can
consume as much electricity as an average household uses in 50 years. This energy demand
will continue to increase as AI models become more complex.

2.2 Carbon Emissions

• AI systems, particularly large language models and deep learning architectures, are
responsible for substantial carbon emissions.

• Carbon Footprint: As highlighted by Stanford’s AI Index, training large AI models like
GPT-3 or AlphaFold can result in carbon emissions that are equivalent to several years of
car emissions. Additionally, the deployment of AI systems across various sectors adds to
their environmental footprint.

• Cloud Computing: The reliance on cloud computing services such as AWS, Google Cloud,
and Microsoft Azure contributes significantly to global emissions, as these platforms host
the infrastructure required for training and deploying AI models.

2.3 Resource Usage

• AI also has a direct impact on resource consumption beyond energy. The hardware required
to train and run AI models involves the extraction and use of rare earth minerals and metals.

• GPU and TPU Manufacturing: Training large AI models necessitates powerful hardware,
specifically Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs). The
production of these hardware components involves mining for materials like lithium,
cobalt, and nickel, which have environmental and human rights implications, particularly
in developing countries.

• E-Waste: The rapid advancement of AI technology leads to shorter hardware lifecycles,
contributing to electronic waste (e-waste). The disposal of outdated hardware further
exacerbates environmental degradation.

3. Positive Contributions of AI to Environmental Sustainability

3.1 Optimization of Energy Systems

• AI holds significant potential to optimize energy consumption and reduce emissions in
various sectors. AI-powered solutions can analyze large datasets to identify inefficiencies
and optimize energy usage in real time.

• Smart Grids: AI can optimize energy distribution networks by predicting energy demand
and dynamically adjusting supply, reducing energy waste and promoting the use of
renewable energy.

• Renewable Energy Integration: AI can improve the integration of renewable energy
sources into the power grid by forecasting energy production from solar and wind farms,
reducing the need for fossil-fuel-based backup power.
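The demand-prediction idea behind smart grids can be illustrated with a deliberately simple least-squares trend forecast. Real grid operators use far richer models and many more inputs; the demand figures below are invented.

```python
def linear_forecast(series, steps_ahead=1):
    """Fit y = a*t + b to the series by least squares and extrapolate,
    a minimal stand-in for the ML demand forecasts used in smart grids."""
    n = len(series)
    ts = list(range(n))
    mean_t = sum(ts) / n
    mean_y = sum(series) / n
    a = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, series)) / \
        sum((t - mean_t) ** 2 for t in ts)
    b = mean_y - a * mean_t
    return a * (n - 1 + steps_ahead) + b

demand = [100, 104, 108, 112, 116]  # hourly demand in MW, perfectly linear for illustration
print(linear_forecast(demand))  # 120.0
```

With a forecast of the next hour's demand in hand, an operator can decide how much dispatchable supply to schedule alongside variable solar and wind output.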

3.2 Climate Change Mitigation


• AI can play a key role in climate change mitigation by improving the efficiency of sectors
such as transportation, agriculture, and manufacturing.

• Precision Agriculture: AI-powered precision farming techniques can reduce water usage,
minimize pesticide use, and increase crop yields, thereby reducing the environmental
impact of agriculture.

• Logistics and Supply Chain Optimization: AI is being used to optimize supply chains,
reducing emissions by minimizing transport distances and improving fuel efficiency
through route optimization.

3.3 Disaster Management

• AI can enhance disaster preparedness and response by providing early warnings of climate-
related disasters such as floods, wildfires, and storms. AI-powered prediction models can
analyze historical and real-time data to forecast extreme weather events, helping
governments and communities prepare more effectively.

4. Critical Analysis of AI’s Environmental Impact

4.1 Balancing AI’s Environmental Costs with Its Benefits

• The environmental costs of AI, particularly its carbon footprint and resource usage, must
be critically weighed against the potential benefits of AI in promoting sustainability. As
AI adoption accelerates, the key challenge is ensuring that the benefits of AI in sectors like
energy, agriculture, and logistics outweigh its carbon and resource costs.

• Energy Efficiency: AI developers need to prioritize the development of energy-efficient
algorithms and hardware to minimize the energy required for training and deploying AI
systems. Quantum computing and neuromorphic computing offer potential avenues for
reducing AI’s energy demands.

4.2 Geopolitical and Economic Implications

• The global dominance of large AI firms, primarily based in developed countries, raises
concerns about the equity of resource usage and the disproportionate environmental impact
on developing nations. Many rare earth minerals used in AI hardware are mined in
developing countries, where environmental regulations may be lax, leading to ecosystem
degradation and human rights abuses.


• Sustainability in Supply Chains: There is a need for more sustainable supply chain
practices in the production of AI hardware. Companies should prioritize ethical sourcing
of materials and adopt circular economy principles to minimize e-waste.

4.3 Regulatory and Legislative Gaps

• Current legislative frameworks are insufficient to address the environmental impact of AI.
Governments must implement policies that promote sustainable AI development, including
mandatory carbon reporting for AI systems and regulations on e-waste disposal.

• Energy Regulations: Governments should incentivize green data centers powered by
renewable energy and implement tax policies that encourage companies to reduce the
carbon footprint of their AI operations.

• Circular Economy Policies: Legislators need to create circular economy frameworks that
promote the reuse and recycling of AI hardware components to mitigate the environmental
impact of manufacturing and disposal.

5. Solutions to Mitigate AI’s Environmental Impact

5.1 Adoption of Renewable Energy in AI Operations

• AI firms should prioritize the use of renewable energy to power data centers and
computational resources. Many tech companies, including Google and Microsoft, have
committed to achieving carbon neutrality by powering their AI operations with 100%
renewable energy.

• Carbon Offsetting: Companies can invest in carbon offset programs to neutralize the
carbon footprint generated by their AI models. However, offset programs should be
combined with concrete steps to reduce emissions at the source.

5.2 Development of Energy-Efficient Algorithms

• AI researchers need to focus on developing more energy-efficient algorithms that reduce the computational load of training and deploying AI models. The concept of Green AI, which emphasizes sustainability in AI development, advocates for metrics that track the environmental cost of AI alongside performance.

• Model Optimization: By optimizing AI models, researchers can achieve similar levels of performance with fewer computations, reducing energy consumption.
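
As a minimal sketch of one such optimization, the toy example below applies magnitude pruning — zeroing out the smallest weights so that sparse execution can skip them. The function and figures are illustrative only, not drawn from the source:

```python
# Illustrative sketch (not from the source): magnitude pruning, one common
# model-optimization technique, removes the smallest-magnitude weights so
# fewer multiplications are needed at inference time.

def prune_by_magnitude(weights, keep_fraction):
    """Zero out all but the largest-magnitude weights (ties may keep extras)."""
    k = max(1, int(len(weights) * keep_fraction))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.003]
pruned = prune_by_magnitude(weights, keep_fraction=0.5)

# Multiplications by zero can be skipped entirely in sparse execution,
# which is where the energy saving comes from.
remaining = sum(1 for w in pruned if w != 0.0)
print(remaining)  # 3 of the 6 weights survive
```

In practice pruning is combined with fine-tuning so the smaller model recovers most of the original accuracy; the principle, however, is exactly this trade of parameters for computation.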


5.3 Investment in Emerging Technologies

• Quantum computing and neuromorphic computing hold the potential to revolutionize AI by significantly reducing the energy required for computations. Investing in these emerging technologies can provide long-term solutions to the environmental impact of AI.

• Neuromorphic Computing: Inspired by the human brain, neuromorphic chips promise to perform AI computations at much lower energy levels compared to traditional GPUs and TPUs.

5.4 Legislative and Judicial Measures

• Governments and international bodies need to implement legislation and judicial frameworks that address the environmental impacts of AI.

• International Standards: Establishing global environmental standards for AI operations can ensure that companies across the world adhere to consistent practices in terms of energy usage, carbon emissions, and resource management.

• Green Data Center Mandates: Regulatory bodies should introduce mandates requiring all
data centers to run on renewable energy and report their carbon emissions annually.

6. Conclusion

• The environmental impact of AI is a growing concern as its applications proliferate across industries. While AI holds tremendous potential for promoting sustainability and mitigating climate change, its current energy consumption, carbon emissions, and resource usage present significant challenges. A balance must be struck between harnessing AI’s benefits and minimizing its environmental costs.

• To achieve this balance, there is a pressing need for energy-efficient AI models, increased
use of renewable energy, sustainable hardware supply chains, and robust legislative
frameworks. The role of AI in combating climate change will only be fully realized if it is
developed and deployed with sustainability at its core.


Ethical Aspects of AI

Ethical Harms and Concerns Tackled by Initiatives

• Ethical initiatives agree that AI must be researched, developed, deployed, and used
ethically, though priorities vary across initiatives. The concerns addressed can be broadly
grouped into 12 key categories:

1. Human Rights and Well-being

• Key Questions:

o Does AI prioritize humanity's best interests and well-being?

o How do we ensure AI respects fundamental rights like dignity, privacy, and equality?

• Proposed Solutions:

o Develop governance frameworks and regulatory bodies to oversee AI use.

o Maintain human control over AI without granting it rights equivalent to humans.

o Use clear metrics to measure societal success and well-being.

o Educate policymakers and bridge industry-consumer gaps for transparency.

2. Emotional Harm

• Key Questions:

o How does AI influence human emotional experiences, positively or negatively?

o Can AI interactions degrade emotional integrity or foster harmful attachments?

• Proposed Solutions:

o Adapt AI norms to cultural sensitivities.

o Avoid AI designs promoting false intimacy, over-attachment, or isolation.

o Implement systematic ethical analyses for affective design.

o Provide education on recognizing emotional manipulation (e.g., "nudging").


o Ensure transparency about AI influence on human relationships.

3. Accountability and Responsibility

• Key Questions:

o Who is responsible for AI's actions, and how can accountability be ensured?

• Proposed Solutions:

o Create mechanisms to trace AI decisions and offer redress for harm.

o Ensure designers, developers, and operators take responsibility for AI outcomes.

4. Security, Privacy, Accessibility, and Transparency

• Key Questions:

o How can privacy and security be balanced with accessibility and transparency?

• Proposed Solutions:

o Allow users control over their personal data.

o Make AI systems auditable and ensure data use is lawful and equitable.

5. Safety and Trust

• Key Questions:

o How do we build public trust in AI?

o How do we ensure AI operates safely and ethically?

• Proposed Solutions:

o Design AI to act transparently and impartially.

o Establish safety standards and prevent unintended harmful behaviors.

6. Social Harm and Social Justice


• Key Questions:

o How do we ensure AI is free from bias and aligned with societal ethics?

• Proposed Solutions:

o Design AI inclusively, considering diverse demographics and values.

o Avoid reinforcing biases and discrimination in AI systems.

7. Financial Harm

• Key Questions:

o How do we address economic disruption caused by AI?

• Proposed Solutions:

o Prevent AI from negatively impacting job quality and economic opportunities.

o Consider policies like universal basic income and workforce retraining.

8. Lawfulness and Justice

• Key Questions:

o How do we ensure AI is used equitably and lawfully?

o Should AI be granted "personhood"?

• Proposed Solutions:

o Implement clear regulations and governance structures for AI applications.

o Avoid granting legal rights or autonomy to AI.

9. Control and Ethical Use

• Key Questions:

o How do we prevent unethical AI use and ensure human control?

• Proposed Solutions:


o Retain complete control over AI’s development and deployment.

o Anticipate and mitigate risks of unethical applications.

10. Environmental Harm and Sustainability

• Key Questions:

o How can AI development be sustainable and minimize environmental harm?

• Proposed Solutions:

o Develop environmentally friendly AI practices.

o Leverage AI for waste management and conservation efforts.

11. Informed Use

• Key Questions:

o How do we ensure the public is educated and informed about AI interactions?

• Proposed Solutions:

o Raise awareness about AI's capabilities and limitations.

o Promote public engagement and informed consent for AI applications.

12. Existential Risk

• Key Questions:

o How do we prevent AI from posing long-term global risks?

• Proposed Solutions:

o Regulate AI to avoid arms races and catastrophic misuse.

o Ensure machine learning progresses under manageable and ethical frameworks.


Overall Focus of Initiatives

• All initiatives emphasize the need to:

o Center AI development on human and environmental benefits.

o Mitigate risks like bias, inequality, and emotional harm.

o Promote accountability, transparency, and inclusivity in AI systems.

• Notable frameworks, such as IEEE’s Ethically Aligned Design, provide substantial guidelines for addressing these concerns and fostering responsible AI innovation.

Key Ethical Issues in AI

1. Accountability and Responsibility

• Mandate for Auditability: AI systems must be auditable to ensure accountability for their actions. Designers, manufacturers, owners, and operators must take responsibility for any harm caused.

• Proposed Measures:

o Clarify liability and culpability during development and deployment phases (IEEE).

o Incorporate cultural diversity in design considerations.

o Create multi-stakeholder ecosystems to establish norms for AI accountability.

o Develop registration and record-keeping systems to identify legally responsible entities.

• Asilomar Principles (Future of Life Institute):

o Emphasize designers’ and builders’ responsibility for the moral implications of AI use and misuse.

o Ensure traceability to understand why and how mistakes occur.

• Bias Accountability:
The Partnership on AI stresses addressing biases in data and systems to prevent replicating
unfairness in AI.


2. Access and Transparency vs. Security and Privacy

• Transparency Challenges:
Transparency in AI systems is critical, particularly for safety-critical applications like
autonomous vehicles or medical diagnostics.

• Proposed Solutions:

o New Standards (IEEE): Measurable levels of transparency for objective system assessment. Examples include a "why-did-you-do-that" button for users or an "ethical black box" for accident investigators.

o Personal Data Rights:

▪ Systems must ask for explicit consent when collecting or using data.

▪ Develop tools like "privacy AI" to help individuals manage and control their
digital identities.

o Asilomar Principles: Advocate transparency in failure analysis, judicial decision-making, and data privacy.

o Saidot’s Emphasis: Promote transparency, accountability, and trust in AI systems.
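
To make the "ethical black box" idea concrete, here is a minimal, hypothetical sketch of an append-only decision log; the class, field names, and sample entry are invented for illustration and are not taken from any IEEE specification:

```python
# Illustrative sketch of an "ethical black box": an append-only log that
# records every automated decision with its inputs, so investigators can
# later reconstruct why the system acted as it did.
import json
import time

class DecisionRecorder:
    """Append-only log of automated decisions for post-incident review."""

    def __init__(self):
        self._log = []

    def record(self, inputs, decision, reason):
        # Each entry captures what the system saw, what it did, and why.
        self._log.append({
            "timestamp": time.time(),
            "inputs": inputs,
            "decision": decision,
            "reason": reason,
        })

    def export(self):
        # Investigators receive an ordered, serialized trace of decisions.
        return json.dumps(self._log, indent=2)

recorder = DecisionRecorder()
recorder.record({"obstacle": "pedestrian", "speed_kmh": 38},
                decision="emergency_brake",
                reason="collision predicted within 1.2 s")
print("emergency_brake" in recorder.export())  # True
```

A production system would also need tamper-evidence (e.g. write-once storage and signatures), but even this simple trace is enough to support the "why-did-you-do-that" style of after-the-fact explanation the initiatives call for.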

3. Emotional and Social Implications

• Sex and Robots:

o Concerns include reinforcing gender stereotypes, commodifying bodies, and eroding human intimacy.

o Potential benefits: Emotional support, crime reduction, and therapeutic use for
abuse victims.

• Emotional Influence ("Nudging"):

o AI's ability to manipulate user behavior raises ethical concerns.

o Safeguards include:

▪ Transparent nudging mechanisms.

▪ Education for users to recognize manipulation.


▪ Opt-in systems for emotional influence.

o Governments and entities must ensure transparency regarding beneficiaries of such influence.

4. Autonomy and the Agent vs. Patient Debate

• Anthropocentrism in AI Design:

o Current AI systems are anthropocentric, blurring lines between moral agents (those
acting) and patients (those acted upon).

• Defining AI Autonomy:

o Machine autonomy reflects operational regulation, distinct from human autonomy.

o Embodied AI resembling humans risks anthropomorphic expectations.

• Need for Clarity: Further discussion is required to define autonomy in AI without conflating it with human free will or moral agency.

Safety and Trust

• Consensus: AI systems must be safe, trustworthy, reliable, and act with integrity,
especially when supplementing or replacing human decision-making.

• Key Proposals:

o Safety Mindset (IEEE): Researchers should anticipate and mitigate unintended behaviors, design AI to be inherently safe, and establish institutional review boards for project evaluations.

o Mission-Led Development (Asilomar Principles): AI should align with ethical ideals and benefit humanity, fostering public trust.

o Communication and Dialogue:

▪ The Japanese Society for AI emphasizes mutual learning and communication between AI and society to strengthen understanding and trust.


▪ The Partnership on AI advocates for openness and cooperation among AI scientists and engineers.

▪ The Institute for Ethical AI & Machine Learning stresses stakeholder communication to build trust and understanding.

Social Harm and Social Justice: Inclusivity, Bias, and Discrimination

• Inclusivity and Diversity:

o AI development must respect community norms, values, ethics, and cultural diversity.

o Developers should ensure AI benefits everyone, avoiding harm or exacerbating existing inequalities.

• Key Actions (IEEE):

o Identify community-specific norms and design AI to adapt to changing cultural dynamics.

o Evaluate and address biases that may disadvantage certain social groups.

• Global Responsibility:

o Address the inequality gap between developed and developing nations by promoting corporate social responsibility (CSR), standardization, and open-source AI.

o Encourage interdisciplinary discussions on effective AI education and training.

• Advocacy Groups:

o AI4All, AI Now Institute, and the Foundation for Responsible Robotics emphasize
ethical AI practices and fair inclusion.

o The Partnership on AI warns of "blind spots" in biased data.

o Saidot stresses maintaining human-centric AI that aligns with public values.

Financial Harm: Economic Opportunity and Employment


• Workforce Disruption:

o AI automation risks job displacement, requiring workforce retraining and adaptability.

o Traditional employment structures must evolve to address complexities arising from technological changes.

• Proposed Solutions:

o Implement training programs for emerging skill sets at early education levels.

o Establish global and regional multi-stakeholder governance bodies to ensure equitable AI deployment.

• Opportunities in AI:

o AI can reveal workplace biases and improve product designs proactively.

o Collaboration with diverse stakeholder groups is essential to understand AI's impact on labor and working conditions.

• Ethical Implications (Future Society): AI’s role in professions like law must balance
superior performance with ethical and professional considerations.

Lawfulness and Justice

• Legal Challenges:

o How should AI be classified legally (e.g., as a product, animal, or person)?

o Issues include transparency, accountability, and compliance with international and domestic laws.

• Key Recommendations (IEEE):

o AI should not be granted "personhood."

o Human control over AI must be maintained, particularly for critical decisions.

o Existing laws should be reviewed to prevent granting AI practical autonomy.

o Governments should reassess legal frameworks as AI sophistication increases, prioritizing humanity’s interests over AI development.


Control and the Ethical Use – or Misuse – of AI

• Potential Risks:

o AI systems are vulnerable to hacking, exploitation, and misuse of personal data for
profit or manipulation.

o Behavioral manipulation and subversion pose significant ethical challenges.

• Proposed Solutions:

o Educate the public on ethics, security issues, and the risks of AI misuse.

o Provide clear warnings, such as "data privacy" notices on smart devices.

o Governments, law enforcement, and citizens should collaborate to ensure safe AI use.

• Key Principles:

o AI must remain predictable, reliable, and subject to validation and testing.

o Regular human reviews are necessary to ensure AI aligns with humanity’s good
and avoids exploitation.

Environmental Harm and Sustainability

▪ The production, management, and implementation of AI must prioritize sustainability and avoid environmental harm. This aligns with the broader concept of well-being, which includes environmental aspects such as air quality, biodiversity, climate change, soil, and water quality (IEEE, 2019). The IEEE (EAD, 2019) emphasizes that AI must not harm Earth's natural systems or exacerbate their degradation and should contribute to sustainable stewardship, preservation, or restoration of these systems.

▪ The UNI Global Union advocates for AI to prioritize people and the planet, enhancing
biodiversity and ecosystems (UNI Global Union, n.d.). The Foundation for Responsible
Robotics highlights potential uses of AI in agriculture, climate change monitoring, and species
protection. However, these opportunities require informed policies to govern AI and robotics
responsibly, mitigating risks while fostering innovation and development.


Informed Use: Public Education and Awareness

▪ Educating the public on the use, misuse, and potential harms of AI is critical. This involves
fostering civic participation, communication, and dialogue. Consent is a central issue,
especially in cases where personal data is used without direct interaction with the individual.
The IEEE highlights scenarios such as the 'Internet of Other People’s Things,' where consent
becomes ambiguous (IEEE, 2019).

▪ Corporate environments exacerbate power imbalances, with many employees unclear about
how their data, including health information, is used. The IEEE (2017) suggests implementing
employee data impact assessments to ensure transparency and prevent unauthorized data
collection. Additionally, data should only be gathered and processed for legitimate, clearly
stated purposes, kept up-to-date, and retained only as necessary.

▪ To promote AI understanding, undergraduate and postgraduate curricula should integrate AI's relationship to sustainable human development. Programs in engineering, international
development, and humanitarian relief must address AI's opportunities and risks, especially in
Lower Middle Income Countries. Organizations like the Foundation for Responsible Robotics,
Partnership on AI, and AI Now Institute stress the importance of transparent dialogue to build
trust and acceptance of AI.

Existential Risk

▪ The Future of Life Institute identifies AI's existential risk as arising more from competence
than malevolence. AI systems, through continuous learning, may develop aims that conflict
with human interests. The institute illustrates this with the analogy of humans inadvertently
harming ants during hydroelectric projects, emphasizing the need for AI safety research to
prevent humanity from being in a similar position.

▪ Autonomous weapon systems (AWS) present another risk. These systems, designed to cause
harm, raise significant ethical concerns. The IEEE (2019) recommends audit trails for
accountability, transparent adaptive learning systems, identifiable human operators,
predictable behavior, and professional ethical standards for AWS development. Without
meaningful human control, AWS could spark international arms races and destabilize
geopolitics. Initiatives like the UNI Global Union and the Future of Life Institute call for
preemptive regulation to mitigate these risks.


▪ The Future of Life Institute’s Asilomar Principles caution against underestimating future AI
capabilities, recognizing that advanced AI represents a profound shift in life on Earth.

Case Studies

Case Study: Healthcare Robots

▪ AI and robotics are transforming healthcare by assisting in diagnosis, clinical treatment, and
patient monitoring. Robots may perform simple surgeries, remind patients to take medications,
and help with mobility. In medical imaging diagnostics, AI has shown the potential to surpass
human accuracy.

▪ However, embodied AI—robots with physical parts—poses safety risks. Incidents such as a
surgical robot malfunction in 2005, a fatal robot accident at a Volkswagen plant in 2015, and
a Tesla autopilot crash in 2016 highlight these risks. The potential for harm increases with the
prevalence of driverless cars, assistive robots, and drones, which face decisions impacting
human safety. The stakes are especially high for vulnerable populations such as children and
the elderly, as moving physical parts can pose significant risks (Lin et al., 2017).

1. Safety and Avoidance of Harm

• Robots and AI systems must prioritize patient safety, especially in sensitive populations
like the ill, elderly, and children.

• Clinical trials are essential to ensure long-term safety, as highlighted by the legal and
medical fallout of vaginal mesh implants.

2. User Understanding and Training

• Effective use of AI (e.g., da Vinci surgical assistant) requires proper training for healthcare
professionals.

• Digital literacy among medical staff is crucial as genomics and machine learning become
integral to decision-making.

• AI systems can sometimes fail due to biased datasets or assumptions (e.g., pneumonia and
asthma case).


• Licensing AI for specific procedures and revoking licenses for repeated errors may
improve trust and accountability.

3. Data Protection and Privacy

• Concerns include unauthorized access to personal medical data and misuse by third parties
like insurers.

• Strong information governance is required to balance innovation with patient confidentiality.

4. Legal Responsibility

• Determining liability when AI errors occur is challenging, especially with "black box"
systems.

• Currently, healthcare professionals are held accountable, but omission of AI use may soon
be considered negligent.

5. Bias and Equity

• AI-trained datasets often lack diversity, leading to biases in diagnoses (e.g., skin cancer
models underperforming for darker skin tones).

• Initiatives like The Partnership on AI aim to address biases, but greater diversity in such
initiatives is necessary.

6. Equality of Access

• Technologies like fitness trackers can deepen inequalities if unaffordable or inaccessible.

• Programs such as the UK's NHS Widening Digital Participation aim to ensure equitable
access and inclusivity in healthcare.

7. Quality of Care


• AI-driven tools could enhance care efficiency and enable social interactions for the elderly.

• Concerns include the potential lack of empathy in robot-assisted care and the ethical
implications of substituting human caregivers with robots.

8. Autonomy and Dignity

• Robots can empower patients but must avoid overstepping, particularly in sensitive tasks
or cases where mental capacity is diminished.

• Ensuring patient autonomy, dignity, and privacy in AI applications is essential.

9. Deception and Moral Agency

• Robots designed for emotional interaction may blur reality for vulnerable individuals (e.g.,
dementia patients).

• Encouraging emotional bonds with robots can raise questions about authenticity and
infantilization.

10. Trust in Healthcare AI

• Introducing AI may disrupt traditional trust dynamics between doctors and patients.

• Trust may grow as evidence of AI's benefits accumulates, but over-reliance on AI by healthcare providers could erode patient trust.

11. Employment and Workforce Impact

• Concerns about job replacement exist, but reports like the NHS Topol Review argue that
AI will augment, not replace, healthcare professionals, freeing time for direct patient care.

Case Study: Autonomous Vehicles (AVs)


• Autonomous Vehicles (AVs) are designed to sense their environment and operate with
minimal or no input from a human driver. Although the concept of self-driving cars has
existed since at least the 1920s, technological advancements in recent years have made it
possible for AVs to appear on public roads.

• According to SAE International (2018), there are six levels of driving automation:

• Level 0: No Automation - The automated system may issue warnings and/or momentarily
intervene in driving but does not control the vehicle.

• Level 1: Hands on - The driver and the automated system share control, such as with
cruise control or adaptive cruise control. The driver must be ready to take full control at
any time.

• Level 2: Hands off - The automated system controls the vehicle's acceleration, braking,
and steering, but the driver must monitor and be prepared to intervene at any time.

• Level 3: Eyes off - The driver can safely divert attention from driving, but must be ready
to intervene if prompted by the AV.

• Level 4: Minds off - Similar to Level 3, but no driver intervention is needed at all. The
driver can disengage completely.

• Level 5: Steering Wheel Optional - No human intervention required. An example of Level 5 is a fully autonomous robotic taxi.

• Lower levels of automation are already established in the market, with higher levels
undergoing development and testing. As we advance towards higher automation, ethical
concerns become more prominent.

• Societal and Ethical Impacts of AVs

• "We cannot build these tools saying, 'we know that humans act a certain way, we're going
to kill them – here's what to do.'" (John Havens)

1. Public Safety and Ethics of Testing on Public Roads Currently, cars with "assisted
driving" functions, such as Tesla's Autopilot (Level 2), are legal in many countries.
However, these functions have not been fully certified for safety and may pose risks. A
report by Germany’s Ethics Commission on Automated Driving highlights the
responsibility of the public sector to ensure the safety of AV systems. A tragic event in 2018, involving an Uber AV in Arizona, resulted in the first pedestrian fatality from an
AV. Despite this, Uber was not criminally liable, and the public debate surrounding this
incident raised concerns about testing AVs on public roads.

2. Human Safety Concerns As AVs are developed to operate in complex environments, the
issue of safety is a central concern. AVs could potentially reduce road traffic accidents, but
challenges remain in programming AVs to make decisions when it comes to prioritizing
the safety of passengers versus pedestrians. For instance, if a car must choose between
swerving into a wall to avoid hitting children or sacrificing the passenger's safety, ethical
dilemmas arise.

3. Processes and Technologies for Accident Investigation Several serious accidents have
involved AVs, such as the crashes of Tesla Model S vehicles in 2016 and 2018, where
Autopilot was believed to be engaged. Efforts to investigate these accidents have been
hindered due to the lack of standard regulatory frameworks for AV-related accidents. One
potential solution is to equip AVs with "ethical black boxes," similar to flight data
recorders, to provide independent investigators access to crucial data.

4. Near-Miss Accidents There is no standard system for reporting near-miss accidents involving AVs. Although California requires manufacturers to disclose the frequency of
disengagements, where a human driver must take control, data collection is often
inconsistent and lacking transparency. The absence of such data limits policymakers'
ability to understand near-miss incidents and regulate AVs effectively.
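
To illustrate why consistent disclosure matters, the sketch below computes the normalized disengagement rate that such reports allow regulators to compare across manufacturers; the fleet names and figures are invented for this example:

```python
# Illustrative sketch (hypothetical figures, not from any real filing):
# a per-mile disengagement rate makes reports from fleets of very
# different sizes comparable.

def disengagements_per_1000_miles(miles_driven, disengagements):
    return 1000.0 * disengagements / miles_driven

# Two hypothetical manufacturers reporting under the same rule.
fleet_reports = {
    "maker_a": {"miles": 500_000, "disengagements": 60},
    "maker_b": {"miles": 40_000, "disengagements": 55},
}

for maker, r in fleet_reports.items():
    rate = disengagements_per_1000_miles(r["miles"], r["disengagements"])
    print(maker, round(rate, 2))
# maker_a reports fewer incidents per mile (0.12) than maker_b (1.38),
# even though maker_b's raw count of 55 looks smaller than maker_a's 60.
```

Without a shared definition of what counts as a disengagement, even this simple normalization breaks down — which is the transparency gap the paragraph above describes.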

5. Data Privacy AVs collect large amounts of data, raising concerns about privacy and data
protection. Issues surrounding the misuse of this data, such as sharing information without
drivers' consent, have surfaced. A recommendation from the German Ethics Commission
is to grant AV drivers full sovereignty over their data, enabling them to control how it is
used.

6. Employment The rise of AVs is expected to impact employment, particularly in industries like trucking, taxi services, and bus driving. Self-driving trucks have already been tested
commercially, and fully autonomous buses and taxis are likely to become a reality in the
future. This raises concerns among unions and workers about job displacement.

7. Quality of Urban Environments The proliferation of AVs will likely transform urban
environments, with potential negative consequences for pedestrians, cyclists, and local residents. New infrastructure, such as AV-only lanes, may be required, and urban planning
will need to account for changes in traffic congestion, parking, and public spaces.
Additionally, the environmental impact of AVs should be considered. While they may
reduce fuel usage, increased automation could lead to more travel and longer distances,
which may offset these benefits.

• Legal and Ethical Responsibility in Autonomous Vehicles

• From a legal perspective, determining responsibility for crashes caused by autonomous vehicles (AVs) is a complex issue. If an AV controlled by an algorithm causes injury,
questions arise about who should be held liable—the manufacturer, the software
developers, the vehicle owner, or another party. In cases where courts struggle to resolve
this, manufacturers could face unforeseen costs, potentially discouraging further
investment in AV technology. On the other hand, if victims are not adequately
compensated, public trust in AVs could be undermined, preventing widespread adoption.

• Compensation for Victims: When an AV causes injury, there is a need to determine who is legally responsible and how
victims should be compensated. If AVs are programmed to make judgments in scenarios
where no clear solution exists, such as life-threatening accidents with no right answer, it
becomes essential to define the legal frameworks for responsibility. Without proper
compensation, public acceptance and trust in autonomous vehicles could be significantly
hindered.

• Ethical Decision-Making:
AVs are programmed to make judgments under conditions of uncertainty, or "no-win"
situations, such as choosing between saving the passenger or bystanders in the event of an
unavoidable crash. This raises the question of which ethical theory or approach should
guide these decisions. Lin et al. (2017) point out that different ethical frameworks could
lead to different outcomes in terms of fatalities or harm. For example, utilitarianism might
prioritize minimizing the number of fatalities, whereas deontological ethics could focus on
duties and rights.

• Who Decides the Ethical Principles? The ethical guidelines for AV behavior raise another significant question: who should decide the ethical principles an autonomous vehicle follows? There are several views on this matter:

• Loh and Loh (2017) argue that responsibility should be shared between the engineers, the
vehicle’s driver, and the autonomous driving system itself.

• Millar (2016) suggests that users—specifically passengers—should have the autonomy to decide the ethical principles the AV follows. Using the example of healthcare, where
doctors must obtain informed consent from patients before making critical decisions,
Millar argues that AV users should be informed about the vehicle's decision-making
processes. Furthermore, the user should have the right to influence how the vehicle behaves
in certain situations. Not asking users for input, or failing to inform them of the vehicle's
programmed responses, would create moral concerns similar to the outcry over doctors
making end-of-life care decisions without patient consent.

• This debate underscores the complexity of both the legal and ethical challenges
surrounding autonomous vehicles. The resolution of these issues is crucial for ensuring that
AV technology is developed in a way that is both legally accountable and ethically sound,
ultimately contributing to its successful and trusted integration into society.

Case Study: Warfare and Weaponization

▪ Although partially autonomous and intelligent systems have been used in military technology
since at least the Second World War, advances in machine learning and AI mark a turning
point in the use of automation in warfare. AI is already sophisticated enough for use in areas
like satellite imagery analysis and cyber defense, but the full potential of these technologies is
yet to be realized. A recent report suggests that AI has the potential to transform warfare in
ways comparable to, or even surpassing, the advent of nuclear weapons, aircraft, computers,
and biotechnology (Allen and Chan, 2017). Below are key ways in which AI is expected to
impact militaries:

Lethal Autonomous Weapons

▪ As automatic and autonomous systems continue to advance, militaries are increasingly willing
to delegate authority to them. This trend is expected to continue with the widespread adoption
of AI, leading to a potential arms race. The Russian Military Industrial Committee, for
example, has approved a plan where 30% of Russian combat power will consist of entirely
remote-controlled and autonomous robotic platforms by 2030. Other countries may set similar
goals. While the United States Department of Defense has placed restrictions on autonomous
and semi-autonomous systems that wield lethal force, other countries and non-state actors may
not exercise such self-restraint.

Drone Technologies

▪ Standard military aircraft can cost upwards of US$100 million per unit, while a high-quality
quadcopter unmanned aerial vehicle (UAV) can cost only about US$1,000. This price disparity
means that for the cost of one high-end aircraft, a military could acquire roughly 100,000
drones. Although current commercial drones have limited range, future advancements could
enable them to achieve ranges similar to those of ballistic missiles, making existing platforms
obsolete.

Robotic Assassination

▪ The widespread availability of low-cost, highly capable, lethal, and autonomous robots could
make targeted assassination more prevalent and harder to attribute. Automatic sniping robots,
for instance, could execute targets from a distance.

Mobile-Robotic Improvised Explosive Devices (IEDs)

▪ As commercial robotic and autonomous vehicle technologies become more widely available,
some groups may exploit them to create more advanced IEDs. Currently, only powerful nation-
states possess the technological capability to deliver explosives over long distances. However,
if drone-based package delivery becomes widespread, the cost of delivering explosives from
afar could decrease dramatically, from millions to thousands or even hundreds of dollars.
Similarly, self-driving cars could make suicide bombings more frequent and devastating, as
they no longer require a human driver.

AI in Warfare: Strategic and Tactical Support

▪ Hallaq et al. (2017) highlight several ways in which machine learning is likely to affect
warfare. For example, a commanding officer (CO) could use an intelligent virtual assistant
(IVA) in a dynamic battlefield environment, which could automatically scan satellite imagery
to detect specific vehicle types and help identify potential threats. The IVA could also predict
the enemy's intent and compare situational data to a database of previous war games and live
engagements, providing the CO with access to a level of accumulated knowledge that would
otherwise be impossible to gather.

Legal and Ethical Concerns in Autonomous Warfare

▪ The deployment of AI in warfare raises several important legal and ethical questions. One
significant concern is whether automated weapon systems, which exclude human judgment,
could violate International Humanitarian Law (IHL) and undermine fundamental rights such
as the right to life and the principle of human dignity. AI may also lower the threshold for
going to war, destabilizing global peace.

International Humanitarian Law (IHL) and Autonomous Weapons

▪ IHL stipulates that any attack must distinguish between combatants and non-combatants, be
proportional, and avoid targeting civilians or civilian infrastructure. Additionally, no attack
should unnecessarily aggravate the suffering of combatants. There are concerns that AI may
not be able to meet these requirements without human involvement, particularly when it comes
to Lethal Autonomous Weapon Systems (LAWS). These autonomous military robots, capable
of searching for and engaging targets independently, may fail to distinguish between civilians
and combatants, and they might not be able to assess whether the force used is proportionate,
especially in cases where civilian damage is a consequence.

▪ Amoroso and Tamburrini (2016) argue that LAWS must be capable of respecting IHL
principles such as distinction and proportionality as effectively as a competent and
conscientious human soldier. Lim (2019) acknowledges that while LAWS that do not meet
these requirements should not be deployed, future technological advancements may enable
LAWS to meet these standards. However, Asaro (2012) argues that regardless of how
advanced LAWS become, it is morally wrong to delegate life-and-death decisions to machines,
suggesting that only humans should initiate lethal force.

The Ethics of Delegating Lethal Force to Machines

▪ Some argue that delegating the decision to kill a human to a machine infringes on human
dignity, as robots do not experience emotion, sacrifice, or the gravity of taking a life. Lim et
al. (2019) explain that a machine, "bloodless and without morality or mortality," cannot fully
comprehend the significance of using force against a human being, and thus cannot make the
ethical decisions required in such scenarios. Furthermore, robots have no concept of the
consequences of killing the "wrong" person, as they lack the emotional understanding that
humans have.

▪ However, others argue that there is no inherent reason why being killed by a machine would
be more undignified than being killed by a missile strike. Lim et al. (2019) suggest that what
matters is whether the victim feels humiliated during the killing process. In the chaos of battle,
soldiers often do not have time to reflect on sacrifice or engage in the emotions associated with
lethal force decisions.

Accountability for Autonomous Warfare Actions

▪ A critical question is who should be held accountable for the actions of autonomous systems
in warfare—whether the responsibility should fall on the commander, the programmer, or the
operator of the system. Schmitt (2013) argues that responsibility for committing war crimes
should be shared between the individual who programmed the AI and the commander or
supervisor, assuming they knew or should have known that the autonomous weapon system
was programmed for use in a war crime, yet did nothing to prevent it.

Addressing the Governance Challenges Posed by AI

International AI Governance Frameworks:

• OECD AI Principles (2019):

o Adopted by 42 countries.

o Offers five fundamental principles for AI operation.

o Provides practical recommendations for governments to implement these
principles.

• G20 AI Principles (2019):

o Derived from OECD principles.

o Focused on human-centered AI.

• EU AI Framework:

o European Commission's AI strategy since April 2018.

o Includes investment plans and preparation for socio-economic changes.

o Complemented by ethics guidelines on AI.

Gaps in Current AI Frameworks:

• Environmental Concerns:

o OECD: Mentions AI's potential for positive environmental outcomes but lacks
specifics on how to achieve this.

o EU Communication on AI: Does not address the environment.

o EU Ethics Guidelines: Incorporates prevention of harm, including environmental
and living beings’ protection. Focuses on sustainability, resource use, and energy
consumption in AI development.

• Human Psychology and AI Interaction:

o OECD and EU Communication: Do not address the psychosocial impact of AI,
including its effects on human relationships and social skills.

o EU Ethics Guidelines: Acknowledges social impacts and requires AI systems to
signal simulated human interaction clearly.

• Economic and Social Inequality:

o OECD: States AI should reduce inequalities but lacks detailed solutions for
achieving this.

o EU Ethics Guidelines: Emphasize diversity, non-discrimination, and fairness.
Calls for AI to be trained on representative data to avoid biased outputs.

• Human Rights and Democracy:

o OECD and EU: Both frameworks include human rights as a central concern, such
as privacy and data governance.

o OECD: Briefly mentions the implications of AI for democracy, especially
regarding issues like deepfakes and opinion manipulation.

o EU Ethics Guidelines: Focus on AI maintaining democratic processes,
deliberation, and voting systems.

• Legal System and Liability:

o OECD: Focuses on creating an enabling policy environment for innovation and
competition, not addressing liability for AI-assisted crime.

o EU Communication and Ethics Guidelines: Explicitly discuss product liability,
safety, security issues, and the need for new regulatory powers to address AI-related
misconduct.

• Accountability in AI Systems:

o OECD: Lists accountability as a key principle, stating organizations and
individuals developing or operating AI should be held accountable.

o EU Ethics Guidelines: Provide over 10 conditions for accountability in
trustworthy AI systems.

• Trust, Fairness, and Transparency:

o OECD: Emphasizes fairness, transparency, and explainability in AI systems.
States that people should be able to understand and challenge AI decisions.

o EU Ethics Guidelines: Provide more context and practical means (e.g., audits,
human oversight, "stop button," human-in-the-loop approach).

• AI in Financial Systems:

o OECD and EU: Acknowledge AI’s beneficial use in finance but do not address
potential negative impacts like financial crimes.

o G7 Concerns (2019): Voiced concerns over digital currencies and other new
financial products, suggesting forthcoming regulatory changes.

Generative AI (generative AI tools can now search the internet to answer your queries)

What Generative AI Cannot Do:

1. Common Sense Reasoning: Lacks the ability to reason like a human, missing nuanced
understanding.

2. Understanding Context: Struggles with grasping complex or shifting contexts.

3. Understanding Emotional Context: Cannot accurately perceive or react to emotional
nuances.

4. Fact-Checking and Verification: Prone to producing false or unverifiable information.

5. Real-Time Interaction on Dynamic Topics: Struggles with staying up-to-date with live
events or evolving topics.

6. Handling Sensitive or Controversial Issues: Limited in providing balanced or ethical
responses in sensitive areas.

7. Assisting with Criminal or Illegal Activities: Cannot be used to aid or endorse unlawful
activities.

8. Predicting Sports Outcomes: Cannot accurately predict or analyze live sports results.

9. Generating Hate Speech: Even with content filters in place, the AI might still produce
inappropriate content.

10. Multitasking: Cannot perform multiple unrelated tasks simultaneously with the same
efficiency as humans.

11. Maintaining Accuracy: Prone to factual inaccuracies or logical errors.

12. Understanding Sarcasm or Irony: Often misinterprets or fails to recognize sarcasm and
humor.

13. Offering Personal Advice: Cannot provide tailored, reliable advice for personal matters
or life decisions.

14. Making Ethical Judgments: Lacks human-like morality to make ethical decisions.

1. Prompt Engineering

• Definition: Structuring text to be understood and processed by generative AI models.

• Applications:

o Question generation.

o Content summarization.

o Language translation.

o Creative writing.

o Opinion and argument generation.
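The applications above can be sketched with a simple prompt template: the same model call is steered toward different tasks purely by how the input text is structured. The template strings and the `build_prompt` helper are illustrative, not any specific model's API.

```python
# Prompt-engineering sketch: task-specific instructions are wrapped around the
# raw input text before it is sent to a generative model.

def build_prompt(task: str, text: str) -> str:
    """Wrap raw input text in task-specific instructions."""
    templates = {
        "summarize": "Summarize the following in one sentence:\n{t}",
        "translate": "Translate the following into French:\n{t}",
        "question": "Write one exam question based on:\n{t}",
    }
    return templates[task].format(t=text)

prompt = build_prompt("summarize", "NLP lets computers process human language.")
print(prompt.splitlines()[0])  # the instruction line chosen by the template
```

Changing only the `task` key switches the model between summarization, translation, and question generation without touching the underlying model.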

2. Language Models

• Bidirectional Encoder Representations from Transformers (BERT):

o Introduced by Google in October 2018.

• Large Language Models (LLMs):

o Examples: BERT, ChatGPT.

• Fine-Tuning:

o A type of transfer learning where a pre-trained model is adapted with new data to
improve performance.

3. Transfer Learning (TL)

• Definition: Using knowledge gained from one task to enhance performance on related
tasks.

• Example: Using knowledge from car recognition to identify trucks in image classification.
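The car-to-truck example can be sketched as freezing a learned feature extractor and retraining only a small classifier head on the new data. Every function here is an illustrative stand-in, not a real framework API.

```python
# Transfer-learning sketch: reuse a "feature extractor" trained on one task
# (car recognition) and retrain only a small head for a related task
# (truck recognition).

def pretrained_features(image: list) -> list:
    """Frozen layers from the car model: generic, reusable features (toy)."""
    return [sum(image), max(image)]

def train_head(examples: list):
    """Only this small head is (re)trained on the new truck data."""
    # Toy threshold rule; a real head would be fit by gradient descent.
    threshold = sum(pretrained_features(x)[0] for x, _ in examples) / len(examples)
    return lambda image: int(pretrained_features(image)[0] > threshold)

truck_data = [([0.9, 0.8], 1), ([0.1, 0.2], 0)]
classify = train_head(truck_data)
print(classify([0.9, 0.9]))  # → 1
```

The key design point is that the expensive part (the feature extractor) is reused unchanged; only the cheap final layer sees the new task's data.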

4. Reinforcement Learning from Human Feedback (RLHF)

• Definition: A method where human feedback is used to train a "reward model" to guide
reinforcement learning algorithms.

• Application: Used in optimizing agent policies through Proximal Policy Optimization.
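The reward-model idea can be sketched with the standard pairwise preference loss: human feedback arrives as comparisons ("response A is better than B"), and the reward model is penalized when it ranks the rejected response above the preferred one. This is a toy numeric check, not a PPO implementation.

```python
import math

# RLHF reward-model sketch: train r(.) so that r(preferred) > r(rejected),
# using the loss -log sigmoid(r(preferred) - r(rejected)).

def pairwise_loss(reward_preferred: float, reward_rejected: float) -> float:
    diff = reward_preferred - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# Small loss when the preferred answer already scores higher,
# large loss when the ranking is the wrong way round.
print(pairwise_loss(2.0, 0.0) < pairwise_loss(0.0, 2.0))  # → True
```

In a full pipeline, this trained reward model then supplies the reward signal that a policy-optimization algorithm such as Proximal Policy Optimization maximizes.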

5. Generative AI

• Definition: AI capable of generating new content such as text, images, audio, etc., using
models like transformer-based deep neural networks.

• Examples:

o Chatbots: ChatGPT, Bing Chat, Bard.

o Text-to-image AI: Stable Diffusion, Midjourney, DALL-E.

o Audio generation: MusicLM, MusicGen.

o Video generation: Gen1 and Gen2 by RunwayML.

o Protein structure prediction: AlphaFold.

• Applications:

o Visual art creation.

o Music generation based on text prompts.

o Video synthesis.

o Drug discovery and protein analysis.

o Motion planning in robotics.

• Limitations:

o Cannot innovate or create entirely new ideas.

o Not capable of generating original solutions independently.

6. Bletchley Declaration

• Purpose: Acknowledges AI's potential risks and the need for safety, transparency, ethics,
and regulation.

• Signatories: 27 countries + EU, including India.

• Key Points:

o Non-legally binding and voluntary for companies.

o Emphasizes civil society involvement and safety in AI, especially "Frontier AI"—
highly capable generative models that can pose risks.

• Example of Safety Measures: Companies sharing "red teaming" results to test safety
before release.

7. Spiking Neural Networks (SNNs)

• Definition: AI networks that mimic biological neural networks, incorporating signal
amplitude and timing.

• Features:

o Event-driven processing, similar to the human brain, saving energy.

o Can reduce carbon footprints of AI systems.

• Use: Real-time processing and energy-efficient AI strategies.
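The event-driven behavior described above can be sketched with a leaky integrate-and-fire (LIF) neuron: it only emits a spike (an event) when its membrane potential crosses a threshold, so downstream work happens only on events. The threshold and leak constants are illustrative.

```python
# Spiking-neuron sketch: leaky integration with event-driven spiking.

def lif_neuron(inputs: list, threshold: float = 1.0, leak: float = 0.9) -> list:
    """Return the time steps at which the neuron spikes."""
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:               # event: spike, then reset
            spikes.append(t)
            potential = 0.0
    return spikes

print(lif_neuron([0.3, 0.3, 0.6, 0.0, 1.2]))  # → [2, 4]
```

Because the neuron is silent at most time steps, hardware built around such units can stay idle between events, which is the energy-saving property noted above.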

8. AlphaGeometry

• Description: AI developed by Google DeepMind capable of solving complex geometry
problems, comparable to International Mathematical Olympiad-level skills.

9. Deductive Database

• Definition: A system that deduces outcomes through logical steps and tracebacks.

• Process:

o Starts with known data (e.g., statement A).

o Determines all logical next steps (e.g., B, C).

o Arrives at the conclusion (e.g., Z) and performs traceback to find the minimum
proof.
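The A/B/C-to-Z process above can be sketched as forward chaining followed by a traceback that recovers the minimal chain of steps. The rule format and symbols are illustrative.

```python
# Deductive-database sketch: start from known facts, apply rules
# (premise -> conclusion) until the goal appears, then trace back.

def deduce(facts: set, rules: list, goal: str):
    parent = {}                     # conclusion -> premise used to derive it
    frontier = set(facts)
    while goal not in frontier:
        new = {c for p, c in rules if p in frontier and c not in frontier}
        if not new:
            return None             # goal unreachable from the facts
        for p, c in rules:
            if p in frontier and c in new and c not in parent:
                parent[c] = p
        frontier |= new
    # Traceback: walk parents from the goal back to a known fact.
    chain = [goal]
    while chain[-1] not in facts:
        chain.append(parent[chain[-1]])
    return list(reversed(chain))

rules = [("A", "B"), ("A", "C"), ("B", "Z")]
print(deduce({"A"}, rules, "Z"))  # → ['A', 'B', 'Z']
```

Note that C is derived along the way but dropped by the traceback, which is exactly the "minimum proof" behavior the notes describe.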

10. Auxiliary Constructions in Mathematics

• Definition: Creative constructions used by mathematicians that are not part of the given
problem but aid in finding solutions.

• Challenge: Selecting the best construction for a problem and applying it effectively
requires human intelligence.

What is NLP (Natural Language Processing)?

• Definition: A subfield of computer science and AI that enables computers to understand
and communicate in human language using machine learning.

• Components: Combines computational linguistics (rule-based modeling), statistical
modeling, machine learning, and deep learning.

Importance of NLP:

1. Generative AI: Supports large language models (LLMs) and image generation models,
enhancing communication and understanding.

2. Everyday Use: Powers search engines, chatbots, voice-operated systems (e.g., GPS,
digital assistants like Alexa, Siri, Cortana).

3. Enterprise Solutions: Helps streamline operations, automate processes, and improve
productivity.

Benefits of NLP:

1. Automation of Repetitive Tasks:

o Examples: NLP-powered chatbots for customer support, automated document
classification, data extraction, summarization.

o Use Case: Chatbots handle routine customer queries, allowing human agents to
focus on complex issues.

2. Improved Data Analysis and Insights:

o Text Mining: Extracts insights from unstructured text data like social media,
customer reviews, news articles.

o Sentiment Analysis: Identifies attitudes, emotions, and sarcasm in text, aiding
communication routing.

o Applications: Helps businesses understand customer preferences, market trends,
and public opinion.

3. Enhanced Search:

o Contextual Understanding: Goes beyond keyword matching to analyze user
intent for more accurate results.

o Use Cases: Improves web search, document retrieval, and enterprise data systems.

4. Content Generation:

o Capabilities: Generates articles, reports, marketing content, social media posts,
and more.

o Examples: Tools like GPT-4 can create coherent and contextually relevant content
based on user prompts.

o Applications: Assists in drafting emails, writing legal documents, and creating
creative content, saving time and maintaining quality.
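As one concrete instance of the text-analysis capabilities listed above, here is a minimal lexicon-based sentiment scorer. Real NLP systems use trained models; the word lists here are toy illustrations.

```python
# Sentiment-analysis sketch: score text by counting positive vs. negative
# words from small, illustrative lexicons.

POSITIVE = {"good", "great", "excellent", "helpful"}
NEGATIVE = {"bad", "poor", "slow", "broken"}

def sentiment(review: str) -> str:
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("Great product and helpful support"))   # → positive
print(sentiment("Slow delivery and broken packaging"))  # → negative
```

A lexicon approach misses negation and sarcasm (the limitation noted earlier in the notes), which is why production systems rely on trained classifiers instead.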

Applications in Business:

• Automation: Reduces manual data handling, improves process efficiency.

• Productivity Boost: Automates repetitive tasks, freeing human effort for higher-value
work.

• Data-Driven Decisions: Enhances the ability to draw meaningful insights from large
datasets.

▪ NLP's ability to understand, interpret, and generate human language allows for significant
improvements in communication, automation, data analysis, and content creation across
multiple industries.

NLP vs. Generative AI vs. AGI (Artificial General Intelligence)

• Definition:

o NLP: Subfield of AI that enables computers to process and understand human language.

o Generative AI: AI systems that create new content based on input data.

o AGI: Theoretical AI capable of human-like general intelligence.

• Examples:

o NLP: Chatbots (e.g., customer service bots), voice assistants (e.g., Siri, Alexa),
language translation.

o Generative AI: Large language models (e.g., ChatGPT), image generation (e.g.,
DALL·E), music creation tools.

o AGI: No existing examples; theoretical in nature.

• Capabilities:

o NLP: Language understanding and generation; text analysis (e.g., sentiment analysis,
summarization); speech recognition.

o Generative AI: Content generation (text, images, music); simulating creative
processes; task automation in content creation.

o AGI: Human-like problem-solving; learning and adaptation across tasks; reasoning
and common sense.

• Limitations:

o NLP: Limited comprehension and context understanding; cannot exhibit true
emotional intelligence; relies on training data and predefined algorithms.

o Generative AI: Lacks true understanding of context; may produce inaccurate or
misleading content; cannot create completely original ideas.

o AGI: Does not exist yet; cannot display human-like creativity or moral reasoning;
would require advances beyond current AI capabilities.

• Status:

o NLP: Widely used and integrated into various applications today.

o Generative AI: Available in tools and products like ChatGPT, DALL·E, etc.

o AGI: Still a research goal; no functioning AGI exists.

• Applications:

o NLP: Customer service automation; language translation; sentiment analysis and
data mining.

o Generative AI: Writing assistance; artistic creation; generating reports, emails, and
creative content.

o AGI: Theoretical applications could include any task that requires human-like
adaptability and understanding.

• Sources:

o NLP: OpenAI, MIT Technology Review, Stanford AI Lab.

o Generative AI: DeepMind, OpenAI, MIT Technology Review.

o AGI: AI research publications, theoretical AI discussions (e.g., OpenAI, DeepMind).

Intelligent virtual assistants (IVAs) like Alexa, Siri, and Google Assistant:

Ethical Concerns with IVAs:

1. Free Data Contribution:

o Consumers provide data for training and enhancing virtual assistants, often without
explicit consent.

o This practice raises ethical issues regarding user awareness and data ownership.

2. Training Data Transparency:

o The methods by which AIs are trained, using vast amounts of labeled data, are
concerning.

o The opaque nature of data collection and training processes can create ethical
dilemmas regarding fairness and consent.

3. Privacy Risks:

o Voice commands, which may contain sensitive personal data, are often transmitted
in unencrypted form.

o This poses risks of unauthorized sharing, processing, and exposure to third parties.

o Voice data can reveal more than just spoken words; it may contain biometric data,
personality traits, mood, emotions, and other private details.

Potential Implications:

• Biometric Identification: The voice characteristics captured can imply information such
as gender, physical and mental health, and even socioeconomic background.

• Privacy Violations: The potential for misuse and unauthorized processing of this data
poses significant privacy concerns.

Role of Media and Social Networking Sites in Internal Security Challenges

Media and social networking platforms have become critical in shaping public opinion. While they
serve as vital tools for governance and awareness, their misuse can undermine internal security.

Roles

1. Media as a Watchdog

o Unveils corruption and governance lapses, but sensationalism can harm security
efforts.

2. Social Media for Coordination

o Enables law enforcement and citizen collaboration, but also facilitates riots and
propaganda.

3. Role in Psychological Operations

o Platforms influence mindsets, both positively (patriotic campaigns) and negatively
(radicalization).

Advantages of Media and Social Networking Sites

1. Crisis Management: Alerts during emergencies like floods or riots.

2. Transparency: Holds authorities accountable through investigative reporting.

3. Policy Advocacy: Amplifies voices advocating for reforms and governance
improvements.

Disadvantages

1. Spread of Fake News: Amplifies misinformation that destabilizes communities.

2. Privacy Breaches: Compromises citizen data security.

3. Terrorist Propaganda: Platforms serve as recruitment and propaganda tools.

Challenges

1. Unchecked Content

o Algorithms prioritize sensational content, intensifying polarization.

2. Global Influence

o Foreign interference in elections and misinformation campaigns.

3. Legal and Ethical Concerns

o Difficulty in balancing free speech with regulation.

Solutions

1. Platform Accountability

o Mandate tech companies to proactively address harmful content.

2. Ethical Journalism

o Train media personnel on responsible reporting of sensitive issues.

3. Public Awareness

o Conduct campaigns to educate citizens on media literacy.

Government Initiatives and Laws

1. Media Certification and Monitoring Committee (MCMC): Prevents misinformation in
advertisements.

2. IT Rules 2021: Ensures accountability of social media platforms.

3. Cyber Crime Prevention Against Women and Children (CCPWC): Addresses online
abuse.

Way Ahead

1. AI-Driven Content Moderation: Employ AI to filter harmful content.

2. Improved Laws: Update existing laws to address new-age threats.

3. Strengthened Collaboration: Encourage partnerships between governments, platforms,
and civil society.

Social Media

Communication networks, including the internet, social media platforms, and messaging services,
have transformed information dissemination and connectivity. However, their misuse poses
significant internal security challenges, including terrorism, communal unrest, and cybercrime.
India's increasing reliance on digital infrastructure makes it particularly vulnerable to these threats.

Challenges

1. Terrorism and Radicalization

o Communication networks are exploited for propaganda, recruitment, and
coordination by terrorist organizations such as ISIS.

o Example: The use of social media by ISIS to recruit Indian youth and spread their
ideology.

2. Cybercrime

o Activities such as hacking, phishing, ransomware attacks, and identity theft
destabilize systems.

o Example: Attacks on Indian financial institutions and healthcare systems.

3. Fake News and Disinformation

o The rapid spread of false information leads to communal tensions and panic.

o Example: The 2018 child kidnapping rumors on WhatsApp that caused lynchings
in Jharkhand.

4. Communal Unrest and Riots

o Hate speech and inflammatory content exacerbate divisions and provoke violence.

o Example: The Muzaffarnagar riots in 2013, fueled by a morphed video.

5. Cross-Border Threats

o Hostile neighbors leverage communication networks for espionage and
disinformation.

o Examples: honeytrap operations targeting defence personnel; data-security
concerns over apps such as TikTok.

6. Lack of Regulation

o Absence of comprehensive laws to monitor encrypted communication and
cross-border content.

7. Technical Expertise

o Insufficient cybersecurity infrastructure and trained personnel.

8. Privacy Concerns

o Balancing surveillance with individual rights remains contentious.

Advantages of Communication Networks

1. Enhanced Connectivity: Real-time communication improves coordination for disaster
response and governance.

2. Citizen Empowerment: Platforms enable civic participation, reporting grievances, and
activism.

3. Economic Growth: E-commerce and digital banking thrive due to robust communication
networks.

Solutions

1. Strengthening Cybersecurity

o Set up advanced infrastructure like the National Cyber Coordination Centre
(NCCC).

o Invest in AI and machine learning tools to detect threats.

2. Monitoring and Regulation

o Monitor sensitive content through proactive moderation by platforms.

o Regulate encrypted messaging services like WhatsApp.

3. Community Engagement

o Educate the public on identifying and reporting misinformation.

o Collaborate with NGOs and community leaders to counter radicalization.

Government of India Initiatives and Laws

1. Information Technology (IT) Act, 2000: Provides the legal framework to combat
cybercrime and regulate online activities.

2. Cyber Swachhta Kendra: A platform to ensure safe and secure internet usage.

3. CERT-IN (Indian Computer Emergency Response Team): Tracks and mitigates
cybersecurity threats.

4. The Intermediary Guidelines and Digital Media Ethics Code, 2021: Mandates
platforms to remove harmful content and appoint grievance officers.

5. Digital Personal Data Protection Act, 2023: Governs the processing of personal data.

Way Ahead

1. Comprehensive Legislation: Develop robust laws for cybersecurity and digital
governance.

2. International Collaboration: Cooperate with global agencies for intelligence sharing.

3. Public Awareness Campaigns: Promote digital literacy to counter misinformation.

Biased Media: A Real Threat to Indian Democracy

▪ Media plays a pivotal role in shaping public opinion, acting as the fourth pillar of
democracy. However, in India, the increasing bias in media—manifested in selective
reporting, sensationalism, and ideological alignment—undermines democratic principles.

Impact of Biased Media on Democracy

1. Undermining Public Trust

o A 2020 report by the Reuters Institute for the Study of Journalism highlights a
decline in trust in Indian news outlets, with only 38% of Indians trusting news
overall.

o Biased reporting erodes confidence in media, weakening its role as an impartial
watchdog.

2. Polarization of Society

o Selective reporting exacerbates communal, political, and regional divides,
fraying the social fabric of Unity in Diversity.

o Example: Biased coverage during the Delhi riots in 2020 led to intensified
communal tensions.

3. Threat to Electoral Processes

o Partiality in media compromises fair elections, influencing voter perceptions.

o As per UNESCO, biased media diminishes public discourse quality and misguides
electoral choices.

4. Manipulation by External Forces

o Fake news and biased narratives can be weaponized by foreign entities to
destabilize democracy.

o Example: Reports from the Ministry of Electronics and IT (2021) reveal how
foreign-controlled social media campaigns targeted Indian elections.

5. Suppression of Marginalized Voices

o Biased media prioritizes corporate and political interests, sidelining issues like
caste-based discrimination, gender inequality, and tribal rights.

Media Bias as a Barrier to Policy Effectiveness

• Biased media narratives can distort the reception of government policies, leading to either
undue glorification or unwarranted criticism.

• The rollout of GST (Goods and Services Tax) in 2017 faced polarized media reactions,
with some outlets exaggerating its negatives, while others ignored genuine challenges in
implementation. This created confusion among businesses and citizens.

Normalization of Extremist Narratives

• When media biases favor specific ideologies, they inadvertently normalize extremist
positions, making them part of mainstream discourse.

• Certain media outlets amplified divisive content during the 2019 Lok Sabha elections,
giving a platform to fringe voices and fueling communal polarization.

Biased Media as a Tool for Economic Manipulation

• Media bias can manipulate market sentiment, affecting investments and economic stability.

• During the IL&FS financial crisis (2018), selective reporting on systemic failures in the
financial sector delayed critical regulatory interventions, exacerbating economic
instability.

• Example: the Hindenburg Research short-selling report and its impact on the Indian
stock market.

Weaponization of Media by Non-State Actors

• Non-state actors like terrorist groups, separatists, or extremists exploit biased media
narratives to spread their propaganda.

• ISIS used biased coverage in international media to portray their power, enhancing
recruitment globally.

Undermining Scientific Temper and Evidence-Based Debate

• Sensationalist media narratives overshadow scientific discourse, weakening informed
public decision-making.

• During the COVID-19 pandemic, biased reporting on vaccine efficacy contributed to
vaccine hesitancy. For instance, misinformation about Covaxin and Covishield created
unnecessary doubts.

Bias in Regional Media Landscapes

• While national-level biases are widely discussed, regional media biases—often dictated by
state politics—are underexplored.

• In Tamil Nadu, coverage of the NEET examination protests was heavily influenced by
regional political agendas, affecting the perception of the issue nationally.

Limited Scrutiny of Corporate Media Ownership

• Concentration of media ownership in the hands of a few corporations often leads to biased
coverage that protects their interests.

• The overlap between corporate advertisers and media ownership influences coverage of
industrial pollution and environmental concerns.

Solutions

1. Decentralization of Media Ownership: Encourage diversity in media ownership to
reduce corporate and political influence.

2. Strengthened Fact-Checking Mechanisms: Government-supported initiatives like PIB
Fact Check should collaborate with independent agencies to tackle misinformation.

3. Regulating Algorithmic Bias in Social Media: Algorithms that amplify polarizing
content must be regulated to ensure balanced content distribution.

4. Promoting Regional Journalism: Strengthening regional news outlets to cover
grassroots issues without urban bias.

5. Public Funding for Independent Media: Government grants for independent journalism
could enhance accountability and reduce commercial influence.

The Intersection of Social Media, Encryption, and Security Challenges

▪ Social media platforms and encrypted messaging services, like WhatsApp, Telegram, and
Facebook, have redefined communication and self-expression. However, their misuse by
individuals, influencers, non-state actors (NSAs), and extremist groups creates critical
security challenges. Balancing the immense benefits of these platforms with the risks of
exploitation for radicalization, misinformation, cybercrimes, and terrorism remains a
priority for governments and organizations worldwide.

▪ This integrated analysis explores social media’s selfish tendencies, its selfless potential,
and the misuse by NSAs while suggesting remedies to address these issues.

Dimensions of Social Media's Impact

1. Selfish Dimensions of Social Media:

▪ Social media’s structure often incentivizes self-promotion and individualism over collective well-being:

• Self-Presentation and Narcissism:


Platforms enable users to curate idealized personas to gain approval through likes and
shares. This validation loop fosters narcissistic behaviors, as evidenced by Indian
influencers’ focus on personal branding over meaningful engagement.

• Algorithmic Design and Surveillance Capitalism:


Algorithms prioritize sensational, self-serving content, while user data is monetized for
targeted advertising, reflecting corporate selfishness.

• Mental Health Effects:


Constant social comparison leads to anxiety, depression, and low self-esteem, especially
among Indian youth navigating societal expectations.

2. Selfless Dimensions of Social Media:

▪ Social media has also demonstrated its potential for societal good:

• Catalyst for Social Change:


Campaigns like #MeTooIndia and Swachh Bharat Abhiyan show how platforms amplify
marginalized voices and foster collective action.


• Disaster Response and Community Building:


Platforms like Twitter played vital roles in coordinating relief efforts during crises like
the Kerala floods (2018).

Threats from Non-State Actors (NSAs):

▪ Encrypted messaging services and social media have been exploited for:

1. Radicalization and Recruitment:


ISIS and similar groups effectively use platforms to indoctrinate youth, as seen in Indian
cases from Kerala and Karnataka.

2. Misinformation and Polarization:


Fake news incites violence and disrupts communal harmony, as observed during the 2020
Delhi riots.

3. Planning Subversive Activities:


Encrypted apps enable covert coordination, evident in the Pulwama attack.

4. Cybercrimes and Espionage:


Platforms have been used for phishing, extortion, and targeting defense personnel, as
highlighted by North Korea's Lazarus Group.

Measures Adopted

1. Policy and Legislative Actions:

• India’s IT Rules (2021):


Mandates traceability of flagged content, grievance redressal mechanisms, and data
localization.

• Other Initiatives:

o EU's Digital Services Act (2022): Ensures content transparency and combats
misinformation.

o CLOUD Act (US): Facilitates lawful access to encrypted communications.

2. Technological Interventions:


• AI and Metadata Analysis:


AI systems are deployed to detect extremist content and analyze metadata to track
patterns.

• End-to-End Encryption Backdoors:


Proposed measures allow lawful access while protecting user privacy, though
controversial.

3. Multilateral Collaborations:

• Global Internet Forum to Counter Terrorism (GIFCT):


Shares databases of extremist content among companies like Facebook and Microsoft.

• UNODC Programs:
Supports countries in addressing cyber threats and digital extremism.

4. Community Awareness and Education:

• Digital Literacy Campaigns:


Governments and NGOs promote critical thinking and media literacy to counter fake
news.

• Fact-Checking Platforms:
Organizations like Alt News actively debunk misinformation in India.

Recommendations and Remedies

1. Enhancing Legal Frameworks:

o Strengthen penalties for creating and disseminating harmful content.

o Encourage global consensus on balancing encryption with security.

2. Advanced Surveillance Mechanisms:

o Use blockchain analysis to track illicit financing through cryptocurrencies.

o Implement AI systems capable of detecting coded threats in encrypted messages.

3. Ethical Platform Design:

o Mandate algorithm transparency to reduce bias and misinformation.


o Restrict message virality to prevent mass-forwarding of fake news.

4. Capacity Building:

o Establish specialized cybercrime units with training in digital forensics.

o Expand de-radicalization programs with psychological and social support.

Conclusion

▪ Social media and encrypted messaging services present a dual reality: enabling
empowerment and societal progress while simultaneously posing significant security
threats. Misuse by non-state actors and the inherent structural biases of these platforms
demand a multi-pronged strategy that incorporates legislative reforms, advanced
technology, and global collaboration. By fostering ethical usage and strengthening
safeguards, governments can mitigate risks while maximizing the benefits of digital
platforms.


Evolution of Mobile Technology

Difference Between Radio Waves, Microwaves, and Infrared Waves

• Radio waves: Best for long-range communication but have lower data transmission
speeds.

• Microwaves: Suitable for medium-range high-bandwidth applications, like Wi-Fi and mobile networks.

• Infrared waves: Ideal for short-range, high-precision tasks like remote controls.

Transmission Characteristics

1. Radio Wave Transmission:

o Frequency Range: 3 kHz to 1 GHz.

o Properties:

▪ Omni-directional: Can travel in all directions.


▪ Penetrates walls, making them ideal for both indoor and outdoor
communication.

▪ Long-distance communication capability.

o Applications:

▪ AM/FM radio, television, cellular phones, and wireless LAN.

▪ GPS uses radio waves to transmit signals from satellite to the receiver on
ground.

▪ Radio waves are emitted by stars, galaxies and other celestial objects.

▪ By transmitting radio waves and measuring their reflection from atmospheric particles, meteorologists can track storms, measure precipitation, and predict weather patterns. This information is crucial for weather forecasting, aviation safety, and disaster preparedness.

▪ MRI, WiFi, Bluetooth.

▪ Radars.

2. Microwave Transmission:

o Frequency Range: 1 GHz to 300 GHz.

o Properties:

▪ Unidirectional: Travels in a straight line.

▪ Cannot penetrate walls at very high frequencies.

▪ Suitable for long-distance, one-to-one communication.

o Applications:

▪ Cellular phones, satellite networks, and wireless LAN.

▪ Radar systems, microwave ovens.

3. Infrared Wave Transmission:

o Frequency Range: 300 GHz to 400 THz.

o Properties:


▪ Short-range communication with line-of-sight propagation.

▪ Cannot penetrate walls or solid objects.

o Applications:

▪ Remote controls for TVs, DVD players, and stereo systems.

▪ Detecting blood flow and heat patterns in tissues.

▪ Fitness trackers, smartwatches.

▪ IoT applications and quantum computing research.

▪ Drug development and environmental monitoring.

▪ Augmented Reality.

▪ LIDAR.
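The frequency boundaries quoted in this section can be captured in a small sketch that classifies a frequency into its band and computes the wavelength from λ = c/f. Band edges are taken from the figures above; real-world band definitions overlap at the boundaries, so treat this as illustrative only.

```python
# Classify a frequency into the bands described in these notes
# (radio: 3 kHz-1 GHz, microwave: 1-300 GHz, infrared: 300 GHz-400 THz).

C = 299_792_458  # speed of light in m/s

def classify(freq_hz: float) -> str:
    """Return the band name for a given frequency in hertz."""
    if 3e3 <= freq_hz < 1e9:
        return "radio"
    if 1e9 <= freq_hz < 300e9:
        return "microwave"
    if 300e9 <= freq_hz < 400e12:
        return "infrared"
    return "other"

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in metres via lambda = c / f."""
    return C / freq_hz

print(classify(100e6))                # FM radio carrier -> radio
print(classify(2.4e9))                # Wi-Fi / Bluetooth -> microwave
print(round(wavelength_m(2.4e9), 4))  # ~0.1249 m
```

Note how a 2.4 GHz Wi-Fi signal, though colloquially called "radio", falls in the microwave range under this document's boundaries.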

Pager

A pager (or beeper/bleeper) is a wireless telecommunications device for receiving and displaying
alphanumeric or voice messages. They use radio waves for communication.

Types:

1. One-way pagers: Only receive messages.

2. Response/Two-way pagers: Can acknowledge, reply, and send messages using an internal transmitter.

1) 1G: was analogue, carrying voice only.

2) 2G: was digital and allowed both voice and non-voice data. The advance from 1G to 2G introduced many of the fundamental services we still use today, such as SMS, international roaming, conference calls, call hold, and billing based on services (e.g. charges for long-distance calls and real-time billing). The maximum speed of 2G with General Packet Radio Service (GPRS) is 50 Kbps, or about 1 Mbps with Enhanced Data Rates for GSM Evolution (EDGE).


Before the major leap from 2G to 3G wireless networks, the lesser-known 2.5G and 2.75G served as interim standards that bridged the gap.
3) 2.5G: it started digital convergence, as mobiles became a multimedia platform thanks to internet access delivered over GPRS.
4) 2.75G: it was based on EDGE.
5) 3G: The distinguishing feature was video calling. The 3G standard uses a new core network architecture called UMTS (Universal Mobile Telecommunications System), which combines aspects of the 2G network with new technology and protocols to deliver significantly faster data rates. It is based on a set of standards for mobile devices and telecommunication services that comply with the International Mobile Telecommunications-2000 (IMT-2000) specifications of the International Telecommunication Union (ITU). One requirement set by IMT-2000 was a speed of at least 200 Kbps for a service to be called 3G. 3G also increased the efficiency of the frequency spectrum by improving how audio is compressed during a call, so more simultaneous calls can happen in the same frequency range. The IMT-2000 standard requires stationary speeds of 2 Mbps and mobile speeds of 384 Kbps for a "true" 3G. The theoretical maximum speed of HSPA+ is 21.6 Mbps.
6) 4G: also known as LTE (Long Term Evolution), it has two duplexing technologies:
a) Frequency Division Duplex (FDD): the operator has to buy spectrum in two bands, one for uplinking and the other for downlinking.
b) Time Division Duplex (TDD): the operator uses only one band for both uplinking and downlinking.
▪ Plain LTE does not carry voice over the data channel, whereas VoLTE (Voice over LTE) carries voice over data.
7) 5G: it is a transformative technology projected as a general-purpose technology for IR 4.0. 5G is not about internet speed alone; it will be the foundation for IoT, smart mobility (driverless cars), smart cities, and industrial manufacturing. Speed: up to 10 gigabits per second.
▪ Benchmarks for 5G are:
a) Internet speed around 100 times faster than 4G.
b) Latency of 1 millisecond.
c) 99.9% of territory covered.
d) Fiberisation (fibre connectivity of towers) of at least 80%.
▪ The World Radiocommunication Conference, organised by the ITU to decide the spectrum for such services, has identified two frequency ranges:
i) Frequency Range 1 (sub-6 GHz): this part of the spectrum is very crowded, so the challenge is to ensure adequate bandwidth for each operator.


ii) Frequency Range 2 (24 GHz to 100 GHz): this is part of the extremely high frequency band, also known as millimetre-wave technology. Here spectrum can be made available easily, but attenuation is very high; even leaves and raindrops can absorb the signal, so base stations have to be placed very close to each other. As far as India is concerned, the Department of Telecommunications initiated the process of 5G trials that included Huawei as a participant, whereas the USA, Japan, South Korea and Australia have banned Huawei from 5G services. The USA decided to go for FR2 partly because Huawei has no competence in that frequency range. Huawei is known for its proximity to the Chinese government, and China's 2014 counter-espionage law and 2017 national intelligence law leave no room for doubt that the Chinese authorities can ask the company to share data whenever they wish. 5G will be such a deeply embedded technology, with all types of data collected in a centralised manner, that Huawei's presence raises serious questions. For India, 5G will yield long-term dividends only if the technology is indigenised: for example, the Department of Telecommunications funded a $34 million large-scale 5G technology demonstrator at IIT Madras along with IISc, which developed equipment for both frequency ranges. Most of it was indigenous; IIT Hyderabad, for example, developed the chip, but for some reason it was not scaled up. Other issues concern spectrum pricing and spectrum availability.
▪ According to TRAI, the price for 1 MHz of this spectrum is Rs 492 crore, so an operator requiring 100 MHz would have to pay around Rs 49,200 crore, even though two out of three operators have reported massive losses. Compared to South Korea and Britain, the price of spectrum is about six times higher in India. The Department of Telecommunications has identified 3.3 to 3.6 GHz for 5G. Out of that, 25 MHz is reserved for ISRO and 100 MHz for defence, so only 175 MHz is left, which will not be enough for quality 5G services.
▪ In the period 2020-35, the global 5G-driven industrial change will contribute an estimated $3.5 trillion. To capture even $1 trillion of this, India needs to create an ecosystem and incentivise investment; only then can targets such as India moving into the top 50 countries in the ITU's ICT Development Index, and the digital communication sector contributing 8% of GDP, be achieved. One step could be incentivising the use of home-grown systems and software, with licensing agreements that include a value-addition clause.
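The peak data rates quoted for the different generations can be put side by side with a small sketch that compares download times for a file. These are theoretical maxima, not real-world throughput, and the 4G figure (~100 Mbps) is my assumption, since the notes above do not quote a 4G peak rate.

```python
# Download-time comparison across mobile generations, using the peak
# rates quoted in these notes (the 4G value is an assumed LTE peak).

PEAK_BPS = {
    "2G (GPRS)":    50e3,    # 50 Kbps
    "2.75G (EDGE)": 1e6,     # ~1 Mbps
    "3G (HSPA+)":   21.6e6,  # 21.6 Mbps
    "4G (LTE)":     100e6,   # assumed ~100 Mbps peak
    "5G":           10e9,    # 10 Gbps
}

def download_seconds(size_bytes: float, rate_bps: float) -> float:
    """Seconds to move size_bytes at rate_bps (8 bits per byte)."""
    return size_bytes * 8 / rate_bps

movie = 1.5e9  # a 1.5 GB file
for gen, rate in PEAK_BPS.items():
    print(f"{gen}: {download_seconds(movie, rate):,.1f} s")
```

The same file that takes days over GPRS takes about a second over a peak 5G link, which is the "100 times faster" benchmark made concrete.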


Net Neutrality: net neutrality was debated quite intensively in India because of two specific instances:

1) The zero-rating plan of Airtel, under which Airtel and Flipkart had a tie-up and Airtel subscribers were not charged for the data consumed while visiting Flipkart.
2) A bigger development, announced by Mark Zuckerberg in the form of [Link]: Reliance, Facebook and 17 other partners came together, and the plan was that any Reliance subscriber visiting those 17 sites would not be charged.

Both plans were criticised by civil society on the ground that they violate the norms of net neutrality, i.e. the same speed, the same cost and equal access for all content.

In the debate of Net Neutrality there are 4 stakeholders:

1) Government
2) Consumer
3) Telecommunication service provider
4) OTT (Over The Top) providers, e.g. e-commerce, social networks.

The argument of the telecom service providers was that they should be allowed a share in the profits of the OTTs, because they invest in infrastructure, pay taxes and buy spectrum. But the government refused the proposal on the ground that it would reduce options for the consumer.

The supporters of [Link] argued that some internet is better than no internet, that India is one of the least connected countries, and that the scheme would provide connectivity and bridge the digital divide.

The opponents argued that it would lead to a fragmented internet, or "splinternet", with three categories: haves, have-nots, and those with limited access.


The supporters replied that it would not always remain like this: aspirations will grow and people will move from free internet to paid internet. The opponents responded that arrangements like this would have a detrimental impact on India's emerging start-up culture.

TRAI asked the public what they wanted, and an overwhelming number of respondents said no to [Link]. Following that, TRAI issued guidelines banning data blocking, data throttling and paid prioritisation. The exceptions are emergency services and closed electronic communication groups. Any violation attracts a penalty of Rs 50,000 per day, up to a total of Rs 50 lakh. There is no universal benchmark for net neutrality: in the US and UK, paid prioritisation is legal, which is not the case in India. To push internet access wider, the options for India are:

i) BharatNet
ii) Utilising the Universal Service Obligation Fund
iii) CSR
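The penalty arithmetic can be sketched as below, assuming a rate of Rs 50,000 per day of violation capped at Rs 50 lakh (Rs 5,000,000) in total.

```python
# Sketch of the net-neutrality penalty arithmetic (assumed figures:
# Rs 50,000 for each day of violation, capped at Rs 50 lakh in total).

DAILY_RATE_INR = 50_000
CAP_INR = 5_000_000  # Rs 50 lakh

def penalty_inr(days_of_violation: int) -> int:
    """Total penalty for a given number of days of violation."""
    return min(days_of_violation * DAILY_RATE_INR, CAP_INR)

print(penalty_inr(10))   # 500000
print(penalty_inr(200))  # cap reached: 5000000
```

At this rate the cap is reached after 100 days of continuous violation.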


Non-Radio Frequency Technologies

1. IMU (Inertial Measurement Unit)

How It Works:
IMUs use a combination of accelerometers, gyroscopes, and sometimes magnetometers to measure motion, orientation, and gravitational forces. They detect changes in velocity, rotation, and position by measuring accelerations and angular velocities in three-dimensional space.

Applications in Daily Life:

1. Smartphones: Used for screen orientation, motion detection, and step counting.

2. Gaming Consoles: Motion-sensitive controllers (e.g., Nintendo Wii, PlayStation Move) track player movements.

3. Drones: Helps maintain stability and orientation for navigation and control.

4. Wearable Devices: Tracks steps, distance, and physical activity, such as in fitness
trackers.

5. Autonomous Vehicles: Assists with navigation and positioning by detecting motion and orientation changes.

Accelerometer

1. An accelerometer detects changes in motion or orientation by measuring the force of acceleration in one or more directions. It works by using a mass that is suspended or attached to a spring; when the device moves, the mass displaces, generating an electrical signal corresponding to the amount of acceleration.

2. Applications in Daily Life:

1. Smartphones: Used for screen orientation (landscape/portrait mode) and detecting shake gestures.

2. Fitness Trackers: Measures steps, movement, and activity levels.

3. Cars: Airbag deployment systems detect sudden acceleration or deceleration during a crash.

4. Gaming Consoles: Motion sensors in controllers (e.g., Nintendo Switch) for interactive gaming.

5. Wearable Devices: Tracks posture and provides data on the user's physical movements.
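Because a stationary accelerometer senses only gravity, the tilt of a device (the basis of portrait/landscape switching) can be estimated from the ratio of the axis readings. A minimal sketch:

```python
# Estimate device pitch from a 3-axis accelerometer at rest: gravity
# projects onto the axes, so pitch = atan2(ax, sqrt(ay^2 + az^2)).

import math

def pitch_deg(ax: float, ay: float, az: float) -> float:
    """Pitch angle in degrees from accelerometer readings (any unit,
    as long as all three axes use the same one)."""
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

# Device lying flat: gravity entirely on z -> 0 degrees of pitch.
print(round(pitch_deg(0.0, 0.0, 9.81), 1))   # 0.0
# Gravity split equally between x and z -> 45 degrees of pitch.
print(round(pitch_deg(6.94, 0.0, 6.94), 1))  # 45.0
```

Real devices fuse this with gyroscope data, since any linear acceleration (a moving car, a shake) corrupts the gravity-only assumption.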

Magnetometer

1. A magnetometer detects the strength and direction of magnetic fields. It operates by measuring changes in the local magnetic field and comparing them to a known standard. Some types use sensors like Hall effect sensors or fluxgate magnetometers.

2. Applications in Daily Life:

1. Smartphones: Used for compass functionality to determine direction.

2. Metal Detectors: Detects metal objects in the ground.

3. Navigation Systems: Helps in orienting GPS systems for accurate positioning.

4. Earthquake Monitoring: Used to detect magnetic anomalies associated with earthquakes.

5. Security Systems: Detects magnetic field changes around entryways, enhancing security systems.

VLC (Visible Light Communication)

VLC uses visible light, typically from LEDs, to transmit data. The light is modulated at
high speeds, allowing information to be encoded in light signals that are then received by
photodetectors.

Applications in Daily Life:

1. Li-Fi (Light Fidelity): Offers high-speed internet access through visible light instead of Wi-Fi.

2. Indoor Positioning: Provides location-based services by detecting the position of VLC-enabled light sources.

3. Smart Lighting: Adjusts lighting based on user behavior and transmits data like time schedules or sensor data.

4. Vehicle Communication: Cars use VLC to communicate with each other (Vehicle-to-Vehicle communication).

5. Healthcare: Uses VLC for secure patient data transmission and monitoring in hospitals.
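The modulation idea behind VLC can be sketched with toy on-off keying (OOK): each bit becomes one light sample (1 = LED on, 0 = off), and the photodetector side reassembles the bytes. Real systems add clock recovery, framing and error correction; this is only an illustration.

```python
# Toy on-off-keying (OOK) model of VLC: bits become light levels.

def transmit(message: bytes) -> list[int]:
    """Turn bytes into a stream of light levels, MSB first."""
    return [(byte >> bit) & 1 for byte in message for bit in range(7, -1, -1)]

def receive(samples: list[int]) -> bytes:
    """Reassemble bytes from sampled light levels."""
    out = bytearray()
    for i in range(0, len(samples), 8):
        byte = 0
        for s in samples[i:i + 8]:
            byte = (byte << 1) | s
        out.append(byte)
    return bytes(out)

light = transmit(b"Li-Fi")
assert receive(light) == b"Li-Fi"  # round-trip recovers the message
```

Flickering the LED at megahertz rates makes the switching invisible to the eye while carrying data.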

LIDAR (Light Detection and Ranging)

1. LIDAR works by emitting laser pulses towards a surface and measuring the time it takes
for the pulse to return. The reflected laser light helps create highly accurate 3D maps of the
surface or objects, calculating distances with millimeter precision.

2. Applications in Daily Life:

1. Autonomous Vehicles: Maps the environment to detect obstacles and assist in navigation.

2. 3D Scanning: Used in architecture and design to create detailed 3D models.

3. Agriculture: Maps the land to assess crop health and terrain.

4. Archaeology: Surveys and scans to uncover hidden ruins and structures.

5. Robotics: Assists robots in mapping and navigating environments for various tasks.
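The ranging calculation itself is simple: the pulse travels out and back, so distance = c × t / 2. What makes millimetre precision hard is that at light speed this requires picosecond-scale timing, as the sketch shows.

```python
# LIDAR time-of-flight: distance = speed_of_light * round_trip_time / 2.

C = 299_792_458  # speed of light, m/s

def lidar_distance_m(round_trip_s: float) -> float:
    """Target distance from the round-trip time of a laser pulse."""
    return C * round_trip_s / 2

# A return after ~66.7 nanoseconds puts the target about 10 m away.
print(round(lidar_distance_m(66.7e-9), 2))
```

A 1 mm change in distance shifts the round trip by only ~6.7 picoseconds, which is why LIDAR timing electronics are specialised.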

IR (Infrared)

1. IR sensors use infrared radiation, which is a type of electromagnetic radiation with longer
wavelengths than visible light. These sensors can detect heat signatures or communicate
by emitting and receiving IR signals.

2. Applications in Daily Life:

1. Remote Controls: Used for controlling TVs, air conditioners, and other electronics.


2. Security Systems: Motion detectors and thermal imaging in security cameras.

3. Health Devices: Non-contact thermometers measure body temperature using infrared radiation.

4. Night Vision: Infrared sensors are used in cameras to capture images in low-light
conditions.

5. Robotics: Provides obstacle detection and navigation for autonomous robots.

Ultrasonic

Ultrasonic sensors emit sound waves at frequencies above the human hearing range. The time it
takes for these sound waves to bounce back from an object is used to determine the distance to
that object.

Applications in Daily Life:

1. Car Parking Sensors: Detects nearby objects to assist in parking.

2. Robotics: Used for navigation and obstacle avoidance in robots.

3. Medical Imaging: Ultrasound technology in prenatal scans and diagnostics.

4. Water Level Monitoring: Measures the water levels in tanks or reservoirs.

5. Cleaning Robots: Helps robots like robotic vacuum cleaners to detect obstacles and clean
effectively.
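An ultrasonic sensor converts echo delay to distance the same way a parking sensor does: distance = v_sound × t / 2, halving the round trip. The 343 m/s figure assumes air at about 20 °C; the speed of sound varies with temperature.

```python
# Ultrasonic ranging: distance = speed_of_sound * round_trip_time / 2.

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed)

def echo_distance_m(round_trip_s: float) -> float:
    """Obstacle distance from the round-trip time of an ultrasonic ping."""
    return SPEED_OF_SOUND * round_trip_s / 2

# An echo returning after ~5.83 ms means an obstacle roughly 1 m away.
print(round(echo_distance_m(5.83e-3), 2))
```

Sound being roughly a million times slower than light, the timing here is millisecond-scale, which is why cheap microcontrollers can do it.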

Computer Vision

1. Computer vision enables machines to interpret and make decisions based on visual data
from cameras or images. Algorithms analyze visual information to detect patterns,
recognize objects, and understand scenes.

2. Applications in Daily Life:

1. Face Recognition: Used in smartphones, security systems, and payment systems.

2. Autonomous Vehicles: Helps cars detect pedestrians, road signs, and other vehicles.

3. Retail: Automated checkout systems that recognize products for quick purchase.

4. Healthcare: Used in medical imaging to analyze X-rays, MRIs, and CT scans.

5. Agriculture: Monitors crop health and detects issues like pests or diseases through image analysis.

Hyperspectral Imaging

1. Hyperspectral imaging captures a wide range of light spectra from ultraviolet to infrared,
providing detailed information about the chemical composition and properties of objects.
It collects data from multiple bands beyond the visible spectrum.

2. Applications in Daily Life:

1. Agriculture: Monitors crop health by analyzing plant conditions.

2. Medical Diagnostics: Used in detecting skin cancers or tissue abnormalities.

3. Environmental Monitoring: Detects pollutants or environmental changes.

4. Food Quality Inspection: Analyzes food composition for quality control.

5. Military: Used in surveillance for detecting camouflaged or hidden objects.

Thermal Imaging

1. Thermal imaging detects infrared radiation emitted by objects. It creates an image based
on temperature differences, allowing us to see heat patterns, even in complete darkness.

2. Applications in Daily Life:

1. Home Inspections: Identifies heat leaks in buildings or insulation issues.

2. Security: Detects intruders in low visibility conditions (e.g., night surveillance).

3. Healthcare: Diagnoses inflammation, infections, or circulatory issues.

4. Firefighting: Helps firefighters locate hotspots or victims in smoky environments.

5. Automotive: Detects potential mechanical problems in vehicles through temperature variations.

Li-Fi (Light Fidelity)

1. Li-Fi uses visible light communication to transmit data. It modulates the intensity of LED
light in a way that is undetectable to the human eye but can be detected by receivers,
transmitting high-speed internet data.

2. Applications in Daily Life:

1. Wireless Internet: Provides high-speed data transfer in environments where radio waves are not ideal.

2. Smart Homes: Enables communication between devices in the home network via light.

3. Public Spaces: Internet access in places like airports, offices, and museums through light-based connections.

4. Healthcare: Reduces electromagnetic interference in hospitals while providing data transmission.

5. Education: Creates a safer, interference-free environment for learning and communication.

Quantum Dots

1. Quantum dots are semiconductor nanoparticles that emit light when excited by energy. The
color of light emitted depends on the size of the dots, which can be engineered for specific
wavelengths.

2. Applications in Daily Life:

1. Television Displays: Quantum dot technology is used in QLED TVs for brighter
and more vibrant displays.

2. Solar Panels: Increases the efficiency of solar cells by utilizing the full light
spectrum.

3. Medical Imaging: Used in bio-imaging for detecting specific diseases or abnormalities.

4. Lighting: Improves LED lighting quality with customizable light emission.

5. Consumer Electronics: Found in monitors and laptops for improved color accuracy and efficiency.

Pulse Oximeter

1. A pulse oximeter measures the oxygen saturation level of a person’s blood by shining light
through a fingertip or earlobe. It uses two different wavelengths of light (red and infrared)
to calculate the amount of oxygen in the blood.

2. Applications in Daily Life:

1. Health Monitoring: Home use for monitoring oxygen levels, especially in patients
with respiratory conditions.

2. Hospitals: Used in emergency rooms and ICU for continuous monitoring of blood
oxygen levels.

3. Fitness: Tracking oxygen levels during high-intensity physical activity.

4. Aviation: Ensures pilots’ oxygen levels are within safe ranges during flights.

5. Sleep Studies: Monitors blood oxygen levels in patients suspected of having sleep
apnea.
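The red/infrared comparison described above is often reduced to a "ratio of ratios" R, which a calibration curve maps to an SpO₂ percentage. The linear formula below (SpO₂ ≈ 110 − 25R) is a textbook approximation, not a clinically calibrated one; real oximeters use device-specific empirical curves.

```python
# Pulse-oximeter "ratio of ratios" sketch: compare the pulsatile (AC)
# and steady (DC) components of red and infrared light absorption.
# The SpO2 mapping below is an illustrative textbook approximation.

def ratio_of_ratios(ac_red: float, dc_red: float,
                    ac_ir: float, dc_ir: float) -> float:
    """R = (AC_red / DC_red) / (AC_ir / DC_ir)."""
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2_percent(r: float) -> float:
    """Approximate SpO2 from R via the linear rule SpO2 = 110 - 25R."""
    return 110 - 25 * r

r = ratio_of_ratios(ac_red=0.5, dc_red=100.0, ac_ir=1.0, dc_ir=100.0)
print(spo2_percent(r))  # R = 0.5 -> 97.5
```

Lower oxygen saturation absorbs relatively more red light, raising R and lowering the computed SpO₂.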

ECG (Electrocardiogram) Sensors

1. ECG sensors detect electrical signals generated by the heart. The electrodes placed on the
skin measure the electrical impulses that trigger heartbeats, providing an ECG waveform
for analysis.

2. Applications in Daily Life:

1. Personal Health Devices: Smartwatches with built-in ECG sensors for heart health
monitoring.

2. Cardiac Diagnosis: Used in hospitals to detect heart conditions like arrhythmias and heart attacks.

3. Fitness: Monitors heart rate and rhythm during exercise.

4. Telemedicine: Remote monitoring of heart health for patients.

5. Sports: Monitors athletes' heart health for performance and recovery.

Smart Contact Lenses

1. Smart contact lenses have built-in sensors and electronics that can monitor health metrics
like glucose levels or provide augmented reality (AR) content directly on the lens.

2. Applications in Daily Life:

1. Healthcare Monitoring: Tracks blood glucose levels for diabetics without the
need for blood samples.

2. Augmented Reality (AR): Displays virtual information on the lens, enhancing the
user’s view of the world.

3. Vision Correction: Provides traditional vision correction (e.g., nearsightedness) while adding extra smart features.

4. Sports: Monitors physical activity and health metrics during exercise or games.

5. Medical Diagnostics: Detects early signs of eye conditions like glaucoma or macular degeneration.

SLAM (Simultaneous Localization and Mapping)

1. SLAM algorithms help robots and autonomous systems simultaneously map an environment and track their own location within that environment. This is done using sensors like LIDAR, cameras, and accelerometers.

2. Applications in Daily Life:

1. Autonomous Vehicles: Assists self-driving cars in navigating and mapping the road.

2. Robotics: Enables robots to perform tasks in unfamiliar environments without human input.

3. Augmented Reality (AR): Helps AR devices place virtual objects within the real world accurately.

4. Drones: Assists drones in mapping terrains or capturing aerial imagery without GPS.

5. Indoor Navigation: Helps guide people through large indoor spaces like malls or airports.

Quantum Communication

1. Quantum communication relies on quantum mechanics to secure data transmission. It uses quantum states of particles like photons to encode information, with the unique property that any eavesdropping disturbs the system, making it detectable.

2. Applications in Daily Life:

1. Secure Communication: Provides high levels of security for personal or business communications.

2. Government and Military: Used for secure transmission of sensitive information.

3. Financial Transactions: Ensures privacy and security in digital banking and payments.

4. Cloud Storage: Protects data from hacking by using quantum encryption for cloud services.

5. Satellite Communication: Secure communication between ground stations and space-based satellites.
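The best-known quantum-communication protocol is BB84 quantum key distribution. The sketch below simulates only the classical sifting step, in which sender and receiver keep just the bits where their randomly chosen measurement bases happened to match; a real implementation needs actual photons, and an eavesdropper would corrupt a detectable fraction of the sifted bits.

```python
# Classical simulation of BB84 sifting (illustration only: qubits are
# modelled as plain bits, and no eavesdropper is present).

import random

def bb84_sift(n_bits: int, seed: int = 0) -> tuple[list[int], list[int]]:
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]
    # Bob's measurement: correct bit when bases match, random otherwise.
    bob_bits = [b if ab == bb else rng.randint(0, 1)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Sifting: both sides discard positions where the bases differed.
    keep = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

alice_key, bob_key = bb84_sift(64)
assert alice_key == bob_key  # with no eavesdropper, sifted keys agree
```

On average half the bases match, so a 64-pulse exchange yields roughly a 32-bit shared key; comparing a sample of those bits in public reveals any eavesdropping.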

Photonic Chips

1. Photonic chips use light (photons) instead of electricity (electrons) for data transmission
and processing, offering faster speeds and lower energy consumption.

2. Applications in Daily Life:


1. Data Centers: Helps process large amounts of data at higher speeds.

2. Quantum Computing: Powers quantum computing systems by enabling faster data manipulation.

3. Telecommunications: Improves internet and communication speeds for long-distance transmissions.

4. Healthcare: Powers advanced medical imaging systems, like MRI and CT scans.

5. Consumer Electronics: Enhances smart devices with faster processing speeds and
lower power consumption.


Radio Frequency Technologies

Wi-Fi

1. Wi-Fi uses radio waves to transmit data between devices and a router. Devices connect to
the router wirelessly and access the internet or other resources over a local area network
(LAN).

2. Applications in Daily Life:

1. Home Internet: Used for connecting smartphones, laptops, and smart devices
to the internet.

2. Office Networking: Provides wireless internet access in offices for computers and printers.

3. Streaming: Streams video and music to TVs and speakers.

4. Smart Homes: Connects smart devices like lights, thermostats, and security
cameras to a central system.

5. Public Spaces: Wi-Fi hotspots in cafes, airports, and libraries for internet
access.

Bluetooth

1. Bluetooth operates by transmitting short-range radio waves for communication between devices. It connects devices over a small radius, usually up to 100 meters.

2. Applications in Daily Life:

1. Wireless Headphones: Enables communication between smartphones and wireless earphones.

2. Smartwatches: Syncs with smartphones to display notifications and health data.

3. Keyless Entry: Used in cars and homes for remote access through Bluetooth-enabled devices.

4. Fitness Devices: Tracks physical activity and syncs data with mobile apps.

5. Gaming Controllers: Wireless controllers for gaming consoles or PC gaming.

ZigBee

1. ZigBee is a low-power, low-data rate wireless communication standard based on the IEEE
802.15.4 standard. It operates over short distances and is used for devices that require
intermittent data transmission.

2. Applications in Daily Life:

1. Home Automation: Controls smart home devices like lights, locks, and
thermostats.

2. Energy Management: Smart meters use ZigBee for reporting energy usage to
utility companies.

3. Healthcare: Used in medical devices for remote monitoring, like patient vitals.

4. Security Systems: ZigBee-enabled motion detectors and door/window sensors in home security.

5. Agriculture: Used for environmental monitoring (e.g., soil moisture sensors).

RFID (Radio Frequency Identification)

1. RFID uses electromagnetic fields to automatically identify and track tags attached to
objects. The RFID system includes a reader that emits radio waves and a tag that reflects
the signal back to the reader.

2. Applications in Daily Life:

1. Inventory Management: Used in retail to track products and stock levels.

2. Contactless Payments: Enables payment systems like credit cards and smartphones to work without physical contact.

3. Access Control: Used in office buildings for employee identification and entry.

4. Logistics: Tracks parcels and shipments in supply chain management.

5. Libraries: RFID tags are used to check out books and manage library inventories.


UWB (Ultra-Wideband)

1. UWB is a short-range radio communication technology that transmits short pulses spread across a very wide frequency band. It offers precise location tracking and high-speed data transfer.

2. Applications in Daily Life:

1. Location Tracking: Used in asset tracking systems to pinpoint the location of objects accurately.

2. Wireless Data Transfer: High-speed data transfer between devices like smartphones or computers.

3. Smart Homes: Enables precise location-based automation (e.g., turning on lights when you enter a room).

4. Personal Tracking: Track individuals or pets in real time using UWB-enabled tags.

5. Automotive: Used in parking sensors and keyless entry systems in cars.
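
The precise location tracking mentioned above typically works by trilateration: the tag's distances to several fixed anchors are measured from radio time-of-flight, and the position is solved geometrically. A minimal 2D sketch follows; the anchor positions and ranges are made-up illustration values, not any real UWB product's API:

```python
import math

# Hypothetical illustration: estimate a tag's 2D position from its
# measured distances to three fixed anchors, as a UWB indoor-positioning
# system might. All coordinates below are assumptions for the example.

def trilaterate(anchors, distances):
    """Solve for (x, y) given three anchor points and measured ranges.

    Subtracting the first circle equation from the other two yields a
    2x2 linear system, solved here with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    # Linearised system: A * [x, y]^T = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
tag = (3.0, 4.0)
distances = [math.dist(tag, a) for a in anchors]  # simulated ranges
print(trilaterate(anchors, distances))  # recovers a point close to (3.0, 4.0)
```

In practice UWB ranges are noisy, so real systems use more anchors and a least-squares or Kalman-filter solution rather than this exact three-anchor solve.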

Short-Range Technologies:

▪ Wi-Fi

▪ Bluetooth

▪ ZigBee

▪ UWB (Ultra-Wideband)

▪ Li-Fi (Light Fidelity)

▪ Infrared (IR)

▪ Near Field Communication (NFC)

▪ RFID (Radio Frequency Identification)

▪ Bluetooth LE (Low Energy)


▪ Z-Wave


Long-Range Technologies:

• Cellular (4G/5G)

• Satellite Communication

• WiMAX

• LoRa (Long Range)

• Microwave (Point-to-Point)

• Radio Waves (AM/FM)

• Terrestrial Microwave

Miscellaneous

IP Cameras

o An Internet Protocol (IP) camera is a type of digital video camera.

o It sends image data and receives control data via an IP network (usually a Local
Area Network or LAN).

o IP cameras transmit video data over the network to a computer, cloud storage, or
network video recorder (NVR).

o Unlike analog CCTV cameras, they do not require a local recording device (like
a VCR or DVR), only the network infrastructure.

o Primarily used for surveillance purposes, including security monitoring in homes, businesses, and public spaces.

o Webcams vs. IP Cameras:

▪ Most webcams are a form of IP camera.


▪ However, IP cameras specifically refer to those that are designed for network access and often feature more advanced functionality (e.g., higher resolution, PTZ (pan-tilt-zoom) capabilities).


Nanotech

Nanotechnology: the concept of nanotech originated from Richard Feynman's lecture "There's Plenty of Room at the Bottom". By this he meant that if matter is utilised at the fundamental scale, the possibilities of manipulating it are far greater and its properties become drastically different. The concept is illustrated by experiments such as gold nanoparticles showing a change in colour and reactivity, and aluminium nanoparticles exploding as soon as they come in contact with air.

The term "nanotechnology" was first used by Norio Taniguchi of Japan, and the concept was popularised by American engineer K. Eric Drexler through his book "Engines of Creation".

Nano products are those having at least one dimension between 1 nanometre and 100 nm. There are two fabrication approaches: top-down, i.e. reducing the size of a bulk object, and bottom-up, i.e. building structures up atom by atom or molecule by molecule.

Nanotech has been described as a future revolution because of its interdisciplinary nature: it provides for the convergence of diverse disciplines such as physics, chemistry, biotechnology, computer science and AI.

IR 4.0: the term "Fourth Industrial Revolution" was popularised by WEF founder Klaus Schwab. It is defined as the convergence of the physical, biological and digital spheres.


It is also known as Industry 4.0, where the internet is embedded in everything, from the supply chain and delivery of materials to the management of production. The classic example of the 4th IR is the cyber-physical system, which represents the convergence of the cyber and physical aspects, i.e. sensors and AI. In agriculture, for instance, sensors in the field provide real-time information about changes in soil characteristics, which is then analysed by an AI system to send automatic messages to farmers.
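
The sensor → analysis → automatic message loop described above can be sketched in a few lines. The field names and moisture threshold below are illustrative assumptions; a real deployment would use live sensor feeds and a trained model rather than a fixed rule:

```python
# A minimal sketch (not any real product) of the cyber-physical loop:
# field sensors report soil readings, an analysis layer checks them
# against a threshold, and alert messages are generated for the farmer.

SOIL_MOISTURE_MIN = 20.0   # percent; assumed threshold for illustration

def analyse(readings):
    """Return alert messages for any field whose soil moisture is low."""
    alerts = []
    for field, moisture in readings.items():
        if moisture < SOIL_MOISTURE_MIN:
            alerts.append(f"Field {field}: moisture {moisture}% is low, irrigate")
    return alerts

readings = {"A": 35.0, "B": 12.5, "C": 18.0}  # simulated sensor data
for msg in analyse(readings):
    print(msg)
```

The point of the cyber-physical framing is that the physical side (sensors) and cyber side (analysis and messaging) form one closed loop, with no human in between.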

Nano-Ethics: this refers to the ethical issues, social, economic and environmental in nature, that arise from the increased proliferation of nanotech. As far as environment and health are concerned, it has been found that nanoparticles can kill bacteria and thereby hamper natural recycling. In one experiment, a spray of silver nanoparticles on domestic wastewater killed the bacteria in it; the same could easily happen to biogeochemical cycles such as the nitrogen and carbon cycles. Another study found that in a food chain, nanoparticles exhibit bioaccumulation and biomagnification. Once their concentration crosses a threshold, they form free radicals; once these chemical entities enter the cell, the entire physiological cycle is disturbed, causing health issues. It has also been found that inhalation of nanoparticles damages the lungs and causes an increase in indicators of stress. An MP from Vijayawada, Mr Srinivasan, presented a report in Parliament that certain pesticides were being mixed with silver nanoparticles without any permission, and drafted a private member's bill for the regulation of nanotech.
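
The biomagnification effect mentioned above can be illustrated with a toy calculation; the base concentration and the tenfold amplification per trophic level are hypothetical numbers chosen purely for illustration:

```python
# Hypothetical biomagnification illustration: if nanoparticle
# concentration is amplified by a fixed factor at each trophic level,
# it can cross a harm threshold only a few levels up the food chain.

def biomagnify(base_conc, factor, levels):
    """Concentration at each trophic level, starting from the base."""
    return [base_conc * factor**i for i in range(levels)]

# e.g. algae -> small fish -> large fish -> bird of prey
chain = biomagnify(base_conc=1.0, factor=10.0, levels=4)
print(chain)  # [1.0, 10.0, 100.0, 1000.0]
```

This is why a concentration that is harmless at the bottom of a chain can be toxic for the top predator.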

Most nano products are items of general consumption; therefore they will come into the market easily, and the chances of damage to health and the environment will increase further. A parallel was observed in Israel, where after the introduction of GM crops a fungus emerged that wiped out endemic crops.

As far as the social dimension is concerned, nanotech will make it very easy to carry out monitoring and surveillance without any authorisation, grossly undermining the privacy of citizens. For example, a nano UAV the size of a mosquito can easily be used to carry out surveillance.

Nanotech along with AI is taking us toward autonomous weapons, which international law seeks to prohibit. The USA has developed the ATLAS nanorobot, described as a "killer robot" by Amnesty International, and the UAV Harop is also an autonomous weapon. Around 100 CEOs signed a pledge at the International Joint Conference on AI that they will not develop AI weapon systems. According to the UN Chief, such weapons are "politically unacceptable and morally repugnant". Under international conventions, only weapons controlled by human beings are permitted, since only then can accountability be fixed. Autonomous weapons could even become an existential threat if hacked by an enemy of the state.


3D Printing: 3D printing, or additive manufacturing, is a process of making three-dimensional solid objects from a digital file. A 3D printed object is created using additive processes: the object is built by laying down successive layers of material until it is complete. Each of these layers can be seen as a thinly sliced horizontal cross-section of the eventual object. 3D printing is the opposite of subtractive manufacturing, which is cutting out / hollowing out a piece of metal or plastic with, for instance, a milling machine. 3D printing enables you to produce complex shapes using less material than traditional manufacturing methods. The nine 3D printer types are:

1. Stereolithography (SLA)
2. Digital Light Processing (DLP)
3. Fused Deposition Modeling (FDM)
4. Selective Laser Sintering (SLS)
5. Selective Laser Melting (SLM)
6. Electron Beam Melting (EBM)
7. Laminated Object Manufacturing (LOM)
8. Binder Jetting (BJ)
9. Material Jetting (MJ)
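
The layer-by-layer idea common to all of these processes can be illustrated with a toy slicer. Real slicing software works on triangle meshes; this sketch only handles an analytic sphere, and the dimensions are arbitrary illustration values:

```python
import math

# Illustrative sketch of "thinly sliced horizontal cross-sections":
# slice a sphere of a given radius into layers of fixed thickness and
# compute each layer's circular cross-section radius. A real slicer
# (e.g. for an FDM printer) would do this for an arbitrary 3D mesh.

def slice_sphere(radius, layer_height):
    layers = []
    z = -radius + layer_height / 2  # height of the first layer's centre
    while z < radius:
        cross_section = math.sqrt(max(radius**2 - z**2, 0.0))
        layers.append((round(z, 3), round(cross_section, 3)))
        z += layer_height
    return layers

for z, r in slice_sphere(10.0, 2.5):
    print(f"z = {z:6.2f}  ->  cross-section radius {r:5.2f}")
```

Stacking those circular cross-sections from bottom to top reproduces the sphere, which is exactly the additive process the text describes.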

4D Printing: 3D printing is about repeating a 2D structure layer by layer along a print path, from the bottom to the top, until a 3D volume is created. 4D printing is referred to as 3D printing that transforms over time; a fourth dimension, time, is added. The big breakthrough of 4D printing over 3D printing technology is thus its ability to change shape over time. A 4D printed object is printed just like any 3D printed shape. The difference is that 4D printing uses programmable and advanced materials that perform a different functionality when hot water, light or heat is applied. That is why a non-living object can change its 3D shape and behaviour over time.

Advantages of 4D printing:

Size changing

The most obvious advantage of 4D printing is that, through computational folding, objects larger than the printer can be printed as a single part. Since 4D printed objects can change shape, shrink and unfold, objects too large to fit in a printer can be compressed into a secondary form for 3D printing.

Self-repair piping system

One potential real-world application of 4D printing would be plumbing pipes that dynamically change their diameter in response to flow rate and water demand, and that could possibly heal themselves automatically if they crack or break, thanks to their ability to respond to changes in the environment.

In space, the 3D printing of structures currently raises issues of cost, efficiency and energy consumption. So, instead of 3D printed materials, 4D printed materials could be used to take advantage of their transformable shape. They could provide the solution for building bridges, shelters or other installations, as these would build themselves up or repair themselves in case of weather damage.

Medical industry

On the other hand, imagine 4D printing applied at a very small scale, in sectors such as medicine. 4D printed proteins, such as self-reconfiguring proteins, could be a great application.

Swarm Intelligence

Swarm intelligence is the discipline that deals with natural and artificial systems composed of
many individuals that coordinate using decentralized control and self-organization. In particular,
the discipline focuses on the collective behaviours that result from the local interactions of the
individuals with each other and with their environment. Examples of systems studied by swarm
intelligence are colonies of ants and termites, schools of fish, flocks of birds, herds of land animals.
Some human artifacts also fall into the domain of swarm intelligence, notably some multi-robot
systems, and also certain computer programs that are written to tackle optimization and data
analysis problems. Research in swarm intelligence can be classified according to different criteria.

Natural vs. Artificial: It is customary to divide swarm intelligence research into two areas
according to the nature of the systems under analysis. We speak therefore of natural swarm
intelligence research, where biological systems are studied; and of artificial swarm intelligence,
where human artifacts are studied.

Scientific vs. Engineering: An alternative and somewhat more informative classification of swarm
intelligence research can be given based on the goals that are pursued: we can identify
a scientific and an engineering stream. The goal of the scientific stream is to model swarm
intelligence systems and to single out and understand the mechanisms that allow a system as a
whole to behave in a coordinated way as a result of local individual-individual and individual-
environment interactions. On the other hand, the goal of the engineering stream is to exploit the
understanding developed by the scientific stream in order to design systems that are able to solve
problems of practical relevance.
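
The engineering stream can be illustrated with particle swarm optimisation (PSO), a classic swarm-intelligence algorithm. This is a generic textbook sketch, not a method named in the text: each particle adjusts its velocity using its own best-known position and the swarm's best, so a good solution emerges purely from local interactions.

```python
import random

# Compact particle swarm optimisation sketch, minimising f(x, y) = x^2 + y^2.
# Particle count, iteration count and the weights w, c1, c2 are common
# textbook defaults, chosen here for illustration.

def pso(f, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best
    gbest = min(pbest, key=lambda p: f(*p))[:]       # swarm's best so far
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(*pos[i]) < f(*pbest[i]):
                pbest[i] = pos[i][:]
                if f(*pos[i]) < f(*gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(lambda x, y: x * x + y * y)
print(best)  # both coordinates end up near the optimum at (0, 0)
```

Note that no particle "knows" the global landscape; coordination emerges from the two local attraction terms, which is the defining property of swarm systems described above.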

