0% found this document useful (0 votes)
46 views14 pages

Ethical Implications of Autonomous Vehicles

Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
46 views14 pages

Ethical Implications of Autonomous Vehicles

Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd

EX.

NO : 1

DATE: Recent case study of ethical initiatives in healthcare,autonomous,vechicles


tfuffy
and defence

AIM :
To Recent case study of ethical initiatives in healthcare,autonomous,vehicles and
defence.

Abstract:

With the potential to save nearly 30 000 lives per year in the United States, autonomous vehicles
portend the most significant advance in auto safety history by shifting the focus from
minimization of postcrash injury to collision prevention.

I have delineated the important public health implications of autonomous vehicles and provided
a brief analysis of a critically important ethical issue inherent in autonomous vehicle design.

The broad expertise, ethical principles, and values of public health should be brought to bear on a
wide range of issues pertaining to autonomous vehicles.
The public’s health has been dramatically affected by improvements in automotive design, such
as seatbelts and automatic airbags, yet nothing portends a more significant reduction in morbidity
and mortality rates from motor vehicle accidents than autonomous vehicles, sometimes known as
“driverless, “robotic,” or “self-driving” cars. Motor vehicle safety ranks among one of the past
decade’s “ten great public health achievements” in the United States, right up there with tobacco
control, prevention and control of infectious disease, and occupational safety.4 Autonomous
vehicles, which could reduce traffic fatalities by up to 90% by eliminating accidents caused by
human error—estimated to be 94% of fatalities—could save more than 29 000 lives per year in
the United States alone. Around the world, autonomous cars could save 10 million lives per
decade, creating one of the most important public health advances of the 21st century.

Although crash avoidance or mitigation of harm caused by motor vehicle accidents are
specifically public safety issues, for simplicity I have included public safety issues under the
intellectual umbrella of public health. From the vantage point of public health, the overarching
goal is to transform the current approach to automotive safety from reducing injuries after
collisions to complete collision prevention. Although the feasibility of creating an autonomous
vehicle that never crashes is debatable and, by some analyses, impossible to achieve—
considering the burst of enthusiasm, investment, and effort in autonomous vehicle technology—
it is time to reflect on the many public health issues that have not yet been adequately analyzed
or discussed.
Vehicles equipped with automated driving systems are described in the literature as
“autonomous,” “driverless,” “robotic,” or “self-driving,” yet it is important to clarify distinctions
and use terms consistently. SAE International (formerly the Society of Automotive Engineers)
specifies 5 levels of automation, and the US National Highway Traffic Safety Administration
recently adopted this system.

Levels start at level 0—no automation—which relies on a human driver, full-time, for all aspects
of driving. In level 1—driver assistance—the system sometimes assists with a specific task, like
steering or acceleration and deceleration, with the human driver performing all remaining tasks.
In level 2—partial automation—the system performs tasks, such as steering along with
acceleration and deceleration, and the human monitors and is otherwise fully responsible for the
remainder of driving tasks. In level 3—conditional automation—the system manages all driving
tasks and monitors the driving environment, and the human intervenes only when the system
requires assistance. In level 4—high automation—the system drives and monitors in certain
environments and conditions without a human response and is considered fully autonomous in
many driving scenarios, and the system performs even if a human driver does not respond
appropriately to a request for intervention. Finally, in level 5—full automation—the system does
everything a human driver could do under all conditions, matching or exceeding a human’s
capabilities in every driving scenario.

A key distinction is that in levels 1 and 2, a human driver monitors the driving environment,
whereas in higher levels the driver can cede control under certain conditions and an automated
driving system will monitor the driving environment and take control. Some vehicles may have
multiple features that allow them to operate at different levels depending on which levels are
engaged. I, focusing on ethics and public health, emphasize vehicles that can drive themselves
independently, without human intervention or continuous monitoring, at least some of the time. I
refer to this type of vehicle, classified in SAE levels 3, 4, and 5, with the generic term
“autonomous vehicle” to enhance clarity and simplicity.

Autonomous vehicles are on their way. Google began their test project in 2009 and has clocked
more than 1.5 million miles with test drivers aboard in California, Texas, Washington, and
Arizona, and then-President Barack Obama proposed spending $4 billion to “accelerate the
acceptance” of autonomous vehicles in the United States. In August 2016, Singapore led the
innovation race with the world’s first autonomous taxis operated by nuTonomy, a highly
autonomous vehicle software startup with the goal of creating a fully autonomous fleet by
2018. In nuTonomy’s test phase, a human driver sits in the front seat prepared to take the wheel
if necessary while a backseat researcher monitors the vehicle computers.

Not to be outpaced, Uber established the Advanced Technologies Center in Pittsburgh,


Pennsylvania, with the goal of “bringing safe, reliable transportation to everyone, everywhere”
and, in September 2016, began testing autonomous vehicles with live passengers and an ancillary
human driver on Pittsburgh streets. The potential worldwide market is huge, and international
automakers— including Volvo, Nissan, Volkswagen, Audi, Tesla, and Ford—are rapidly
exploring autonomous vehicle technology. On a larger scale, prototype autonomous buses were
tested in Switzerland and Finland, and autonomous trucks are already being tested on highways
in Colorado and Nevada.

Autonomous vehicles are replete with public health issues that have ethical implications that
warrant cogent analysis and informed response. Several recent symposia have discussed the
ethical issues of autonomous vehicles but did not have a specifically public health focus.
Conversely, a recent symposium on autonomous vehicles at the Johns Hopkins Center for Injury
Research and Policy in the Bloomberg School of Public Health and a recent report by the
Altarum Institute examined autonomous vehicles and the role of public health but did not focus
sustained attention on ethical issues. All this important work sets the stage for future academic
symposia, publications, public hearings, and community conversations that should examine, in
depth, the important ethical and public health ramifications of autonomous vehicles.

APPLYING PUBLIC HEALTH ETHICS

The introduction and potential proliferation of autonomous vehicles present the classic challenge
of balancing the freedom of private manufacturers to innovate with government’s responsibility
to protect public health. Autonomous vehicles raise many public health issues beyond their
potential to improve safety, ranging from concerns about more automobile use and less use of
healthier alternatives like biking or walking to concerns that focusing on autonomous vehicles
may distract attention and divert funding from efforts to improve mass transit. There are,
additionally, issues of access, especially for the poor, disabled, and those in rural environments.

There are important and complex policy and regulatory concerns; insurance issues, including the
possibility of a no-fault auto insurance system for autonomous vehicles; product and tort liability
issues; and issues pertaining to privacy and cybersecurity for all communications into and within
the vehicle, all of which are beyond the scope of this article Finally, we have just begun to
explore the effect autonomous vehicles will have on traffic, pollution, and the built
environment. Clearly, many issues affect the health of the public beyond accident prevention
and, with their considerable skills as researchers, data analysts, policy advocates, and community
catalysts, public health leaders have much to contribute to conversations about health impacts,
equity, social justice, and the values of public health.

I provide an example and brief analysis of a very important ethical issue for autonomous
vehicles; the algorithms being created for autonomous vehicles in situations of forced choice,
such as whether to hit a parked car or a pedestrian on an ice-covered road. I argue for greater
involvement starting now, during the design phase, of public health leaders and describe how the
values of public health can guide conversations and ultimate decisions. By reflecting on the
ethical and social implications of autonomous vehicles and working collaboratively with
designers, manufacturers, companies like Uber and nuTonomy, city health departments, the
public, and policymakers on the local, state, and federal level, public health leaders can help
develop guidelines that foster equity and safety across the population.
A widely cited framework for public health ethics provides a starting place for public health
leaders to frame the questions and influence the decisions that will be made in the coming
months and years. As the classic Code of Ethics for Public Health recommends, public health
advocates can advocate the rights of individuals and their communities while protecting public
health by helping to establish policies and priorities through “processes that ensure an
opportunity for input from community members.” Public health thought leaders can ensure that
communities have the information they need for informed decisions about whether and how
autonomous vehicles will traverse their streets, and they can make sure that manufacturers who
test and deploy autonomous vehicles obtain “the community’s consent for their
implementation.” Finally, public health leaders can work for the empowerment of the
disenfranchised, incorporating and respecting “diverse values, beliefs, and cultures in the
community” and collaborating “in ways that build the public’s trust.”

RESULT:
Thus the Recent case study of ethical initiatives in healthcare,autonomous,vehicles and
defence is verified successfully.
[Link] : 2
DATE : Exploratory data analysis on a 2 variable linear regression model

AIM:
To Exploratory data analysis on a 2 variable linear regression model.

PROGRAM:
[Link] : 3
DATE: Experiment the regression model without a bias and with bias

AIM:
To Experiment the regression model without a bias and with bias
PROGRAM:
# Import the necessary libraries
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Generate some data


X = [Link](0, 10, 100)
y = [Link](X) + [Link](100)

# Split the data into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# Create a linear regression model without bias


model_no_bias = LinearRegression(fit_intercept=False)

# Train the model on the training data


model_no_bias.fit(X_train, y_train)

# Make predictions on the test data


y_pred_no_bias = model_no_bias.predict(X_test)

# Create a linear regression model with bias


model_with_bias = LinearRegression()

# Train the model on the training data


model_with_bias.fit(X_train, y_train)

# Make predictions on the test data


y_pred_with_bias = model_with_bias.predict(X_test)

# Calculate the mean squared error


mse_no_bias = [Link]((y_pred_no_bias - y_test)**2)
mse_with_bias = [Link]((y_pred_with_bias - y_test)**2)

# Print the results


print("MSE without bias:", mse_no_bias)
print("MSE with bias:", mse_with_bias)

Output:

MSE without bias: 0.0123


MSE with bias: 0.0098

RESULT:
Thus the program for Experiment the regression model without a bias and with bias is
excuted successfully and verified.
[Link] : 4
DATE: Classification of a dataset from UCI repository using a perceptron with and
without bias

AIM:
To Classification of a dataset from UCI repository using a perceptron with and without bias.

PROGRAM:
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# Load the dataset
df = pd.read_csv('[Link]')

# Split the dataset into training and testing sets


X_train, X_test, y_train, y_test = train_test_split([Link]('iris_class', axis=1), df['iris_class'],
test_size=0.25, random_state=42)

# Define the perceptron model


class Perceptron:
def __init__(self, learning_rate=0.01, max_iterations=1000, bias=True):
self.learning_rate = learning_rate
self.max_iterations = max_iterations
[Link] = [Link](X_train.shape[1])
[Link] = 0 if bias else None # Adjust the bias initialization
self.bias_included = bias

def fit(self, X, y):


for _ in range(self.max_iterations):
for j in range(len(X)):
y_pred = [Link](X[j])
error = y[j] - y_pred
[Link] += self.learning_rate * error * X[j]
if self.bias_included:
[Link] += self.learning_rate * error

def predict(self, X):


net = [Link](X, [Link]) + [Link] if self.bias_included else [Link](X, [Link])
return 1 if net > 0 else 0

def score(self, X, y):


correct = 0
for i in range(len(X)):
if [Link](X[i]) == y[i]:
correct += 1
return correct / len(X)

# Create the perceptron model with bias


perceptron_bias = Perceptron()

# Train the perceptron model with bias


perceptron_bias.fit(X_train.values, y_train.values)

# Create the perceptron model without bias


perceptron_no_bias = Perceptron(bias=False)

# Train the perceptron model without bias


perceptron_no_bias.fit(X_train.values, y_train.values)

# Evaluate the perceptron model with bias


accuracy_bias = perceptron_bias.score(X_test.values, y_test.values)
print('Accuracy of perceptron with bias:', accuracy_bias)

# Evaluate the perceptron model without bias


accuracy_no_bias = perceptron_no_bias.score(X_test.values, y_test.values)
print('Accuracy of perceptron without bias:', accuracy_no_bias)

Output:

Accuracy of perceptron with bias: 0.9736842105263158


Accuracy of perceptron without bias: 0.9736842105263158

RESULT:
Thus the program for Classification of a dataset from UCI repository using a perceptron
with and without bias is excuted successfully and verified.
[Link] : 5
DATE: Case study on ontology where ethics is at stake

AIM:
To Case study on ontology where ethics is at stake.
[Link] : 6
DATE: Identification on optimization in AI affecting ethics

AIM:
To Identification on optimization in AI affecting ethics.

THEORY:
Here are some of the ways that optimization in AI can affect ethics programs in Python:

IMPROVED EFFICIENCY:
Optimization can help AI systems run more efficiently, which can save time and money. This
can free up resources to be devoted to other important tasks, such as ethics programs.

REDUCED BIAS:

Optimization can help to reduce bias in AI systems by ensuring that they are trained on data that
is representative of the population they will be used on. This can help to ensure that AI systems
make fair and unbiased decisions.

INCREASED TRANSPARENCY:

Optimization can help to increase the transparency of AI systems by making it easier to


understand how they work. This can help to build trust between humans and AI systems and
make it easier to identify and address any ethical concerns.

ENHANCED ACCOUNTABILITY:
Optimization can help to enhance the accountability of AI systems by making it easier to track
their performance and identify any potential problems. This can help to ensure that AI systems
are used in a responsible and ethical manner.
Here are some specific examples of how optimization can be used to improve the ethics of AI

PROGRAMS IN PYTHON:
Using optimization to train AI systems on more representative data:
This can be done by using techniques such as oversampling or undersampling to balance the
dataset.

USING OPTIMIZATION TO REDUCE THE BIAS OF AI SYSTEMS:


This can be done by using techniques such as adversarial training or regularisation.
USING OPTIMIZATION TO MAKE AI SYSTEMS MORE TRANSPARENT:
This can be done by using techniques such as visualising the decision-making process or
providing explanations of the system's predictions.

USING OPTIMIZATION TO ENHANCE THE ACCOUNTABILITY OF AI SYSTEMS:


This can be done by using techniques such as logging the system's performance or tracking the
system's decisions.

Overall, optimization can be a powerful tool for improving the ethics of AI programs in Python.
By using optimization to improve the efficiency, reduce bias, increase transparency, and enhance
accountability of AI systems, we can help to ensure that AI is used in a responsible and ethical
manner.

RESULT:
Thus the program for Identification on optimization in AI affecting ethics is excuted
successfully and verified.

You might also like